Hi,
On Tue, Oct 10, 2017 at 10:44 AM, Albert <mailmd2011.gmail.com> wrote:
> btw, I only have 2 GPUs in my GPU workstation, and it seems that the
> following command line doesn't work:
>
> mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -ng 24 -groupfile
> infile/equilibrate.groupfile
>
> Amber asked for at least 24 GPUs for the above job. So I am just
> wondering, is there any way to make it work? If yes, it would speed up
> the whole simulation dramatically.
From the Amber 17 manual, section 17.11 (multisander and multipmemd):
mpirun -np <#proc> sander.MPI -ng <#groups> -groupfile groupfile
In this case, #proc processors will be evenly divided among #groups
individual simulations (#proc must be a multiple of #groups!).
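As a concrete sketch for a 2-GPU machine (assuming a hypothetical
two-line groupfile, infile/equilibrate2.groupfile, with one simulation
per line), a valid invocation would look like:

mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -ng 2 -groupfile infile/equilibrate2.groupfile

Here 2 MPI threads divide evenly over 2 groups, so the #proc/#groups
rule is satisfied; with -ng 24, -np would have to be at least 24.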
So at minimum <#proc> must equal <#groups>. You could do this on your
system (i.e. put 12 MPI threads on one GPU and 12 on the other), but
*don't*. The Amber GPU code is designed to use the entire GPU, so what
you'll end up with is 12 different threads fighting for control of each
GPU, which will kill your performance (and memory could also become an
issue). Do some benchmarking if you want, but don't be surprised if
things are really inefficient.
Good luck,
-Dan
--
-------------------------
Daniel R. Roe
Laboratory of Computational Biology
National Institutes of Health, NHLBI
5635 Fishers Ln, Rm T900
Rockville MD, 20852
https://www.lobos.nih.gov/lcb