Hi Dailin,
1) The number of MPI processes you request with the -np flag must be a multiple of the number of replicas you specify with the -ng flag. With 32 replicas your command should look like this:
mpirun -np 32 $AMBERHOME/bin/pmemd.cuda.MPI -ng 32 -groupfile groupfile
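Here groupfile has one line of command-line arguments per replica. A minimal sketch of what it might look like (the file names below are just placeholders for your own inputs and outputs):

-O -i neb.in -p prmtop -c neb_01.rst7 -o neb_01.out -r neb_01_out.rst7 -x neb_01.nc -inf neb_01.info
-O -i neb.in -p prmtop -c neb_02.rst7 -o neb_02.out -r neb_02_out.rst7 -x neb_02.nc -inf neb_02.info
...
-O -i neb.in -p prmtop -c neb_32.rst7 -o neb_32.out -r neb_32_out.rst7 -x neb_32.nc -inf neb_32.info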
For optimum Amber performance you need 32 GPUs available when you execute this command. The command will still work with fewer GPUs, but the computation will not be efficient: because you are overloading each GPU with calculations that would otherwise run on other cards, you should see a drastic drop in performance. So it is not a good idea.
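If you still want to try it on your two cards, the usual approach is to expose only those GPUs and let the MPI ranks share them; the exact rank-to-GPU mapping depends on your Amber version and MPI setup, so take this as a sketch only:

# make only the two physical GPUs visible to Amber (device IDs assumed here to be 0 and 1)
export CUDA_VISIBLE_DEVICES=0,1
mpirun -np 32 $AMBERHOME/bin/pmemd.cuda.MPI -ng 32 -groupfile groupfile

With 32 replicas on 2 GPUs, each card ends up carrying roughly 16 replicas, which is where the slowdown comes from.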
2) NEB is an MPI job and is set up to run only with pmemd.MPI or pmemd.cuda.MPI, so the serial pmemd.cuda cannot be used.
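If only 2 GPUs are available, one alternative (assuming you have enough CPU cores) is to run the replicas on CPUs with pmemd.MPI, for example:

mpirun -np 64 $AMBERHOME/bin/pmemd.MPI -ng 32 -groupfile groupfile

which gives each of the 32 replicas 2 MPI ranks; adjust -np to any multiple of 32 that fits your machine.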
All the best,
Delaram
________________________________________
From: Li, Dailin <d.li.northeastern.edu>
Sent: Friday, December 7, 2018 2:11 PM
To: amber.ambermd.org
Subject: [AMBER] Can a Nudged elastic band (NEB) job run on a single GPU?
Hi,
I want to do NEB computations on GPUs. There are 32 images in the NEB job and only 2 GPUs are available. When the job is submitted to the 2 GPUs, an error appears saying that the number of GPUs is not a multiple.
(1) Is it possible to do the NEB job on 2 GPUs? If yes, then how? Amber18 manual says "In case pmemd.cuda.MPI is used, it is best that the number of GPUs is equal to the number of images". Does "it is best" mean "it is required"?
(2) Is it possible to do the NEB job with pmemd.cuda, which means only 1 GPU is used?
Thanks.
Regards,
Dailin
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Dec 07 2018 - 12:00:04 PST