Re: [AMBER] about mpirun in AMBER

From: kamlesh sahu <kamleshsemail.gmail.com>
Date: Mon, 10 May 2010 13:37:47 +0900

Thank you very much, Prof. Ross. The explanation is certainly helpful to me.

Best Regards,
kamlesh

On Mon, May 10, 2010 at 11:38 AM, Ross Walker <ross.rosswalker.co.uk> wrote:

> Kamlesh,
>
> > I have a question about AMBER. When we submit a simulation using
> > mpirun, it divides the simulation across some number of CPUs. Could
> > you please tell me how this sander job (one simulation) is divided?
>
> Your question is not overly clear here, but I will try to answer what I
> think you are asking. The number of CPUs the code uses is determined by
> the arguments to mpirun, typically '-np X', where X is usually a power
> of 2, e.g. 8, 16, etc. The mpirun command will fire up this many copies
> of sander.MPI based on the contents of the MACHINEFILE defined by the
> mpirun command. This varies with the MPI installation but would
> typically look something like:
>
> node1 cpus=2
> node2 cpus=2
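>
> With a machinefile like this, the job might be launched along these
> lines (an illustrative sketch only; the machinefile flag and the sander
> input file names here are placeholders, and the exact syntax varies by
> MPI implementation):
>
>   mpirun -np 8 -machinefile machfile sander.MPI -O -i md.in -p prmtop \
>     -c inpcrd -o md.out -r restrt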
>
> In this case, if you asked for 8 MPI threads you would get 4 on node1
> and 4 on node2. These would probably be handed out as:
>
> thread0 = node1
> thread1 = node1
> thread2 = node2
> thread3 = node2
> thread4 = node1
> ...
>
> This is, again, highly dependent on the MPI implementation. Sander.MPI,
> in the simplest explanation, will then divide up the work in terms of
> atoms, giving natom/nthread atoms to each thread in turn. Each thread
> then calculates a subset of the interactions, and the forces are summed
> on each step in order to integrate. Note this is different from the way
> PMEMD works, which divides up space rather than atoms.
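>
> For example (illustrative numbers only): a 24,000-atom system run on 8
> threads would give each thread roughly 24000/8 = 3000 atoms. Each
> thread computes the interactions involving its own atoms, and the
> partial forces are combined across all threads (e.g. via an MPI
> reduction) before each integration step.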
>
> Note that things are more complicated for REMD and TI multisander-type
> runs. Here the individual replicas are divided up amongst the CPUs, and
> the work is then divided up based on the number of threads per group.
> E.g. if you ran a 16-replica REMD simulation on 64 CPUs, you would have
> 4 CPUs doing each of the 16 MD calculations.
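>
> Such a run would typically be launched with something like the
> following (a sketch; the groupfile name and the per-replica input file
> names are placeholders):
>
>   mpirun -np 64 sander.MPI -ng 16 -groupfile remd.groupfile
>
> where remd.groupfile contains one line of sander options per replica,
> e.g.:
>
>   -O -i remd.mdin.001 -p prmtop -c inpcrd.001 -o remd.out.001 \
>     -r restrt.001 -x remd.mdcrd.001
>
> plus whatever REMD-specific flags the run requires.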
>
> I hope this helps.
>
> All the best
> Ross
>
>
> /\
> \/
> |\oss Walker
>
> | Assistant Research Professor |
> | San Diego Supercomputer Center |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>



-- 
Kamlesh Kumar Sahu (Ph.D. student)
Dept. of applied chemistry, Tohoku University graduate school of
engineering, Aoba-yama, Sendai
JAPAN
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun May 09 2010 - 22:00:05 PDT