Hi Henry,
Please take a look at the following manuscript, which describes many of the
differences in the approach:
Kaus, J.W., Pierce, L.T., Walker, R.C., McCammon, J.A., "Improving the
Efficiency of Free Energy Calculations in the Amber Molecular Dynamics
Package", J. Chem. Theory Comput., 2013, 9 (9), pp 4131-4139, DOI:
10.1021/ct400340s <http://dx.doi.org/10.1021/ct400340s>
The setup approach for TI is different, but the underlying method is still
the same. pmemd.MPI will NOT require a power-of-two number of MPI tasks.
Similarly, the GPU code will not require a power-of-two number of GPUs
(although the new peer-to-peer communication layer we are introducing to
boost multi-GPU performance will only be active when the number of GPUs is
a multiple of 2).
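For illustration, here is a minimal launch sketch in Python (the file
names, paths, and task count are placeholders I made up, not anything from
this thread) showing that the task count passed to mpirun can be any
value, e.g. 12 rather than 8 or 16:

# Minimal sketch: launch a pmemd.MPI TI run on a non-power-of-two
# number of MPI tasks. File names and the task count are placeholders.
import subprocess

ntasks = 12  # deliberately not a power of two

subprocess.run(
    [
        "mpirun", "-np", str(ntasks),
        "pmemd.MPI", "-O",
        "-i", "ti.in",           # TI input file
        "-p", "complex.prmtop",  # topology
        "-c", "complex.rst7",    # starting coordinates
        "-o", "ti.out",
        "-r", "ti.rst7",
    ],
    check=True,
)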
I am not sure what you mean by step b, but suffice it to say that if
sander.MPI supports it, then pmemd should too.
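For what it's worth, here is a sketch of what such a softcore minimization
input might look like with the pmemd-style TI flags described in the paper
above (the mask strings and residue names are made-up placeholders, not a
tested protocol):

# Illustrative sketch: write a softcore TI minimization input (mdin)
# using pmemd-style TI flags. Masks/residue names are placeholders.
mdin = """softcore TI minimization (sketch)
&cntrl
  imin = 1, maxcyc = 2000, ncyc = 500,   ! minimization settings
  icfe = 1, clambda = 0.5,               ! TI on, at lambda = 0.5
  ifsc = 1,                              ! softcore potentials
  timask1 = ':LIG1', timask2 = ':LIG2',  ! perturbed atoms (placeholders)
  scmask1 = ':LIG1', scmask2 = ':LIG2',  ! softcore atoms (placeholders)
  ntb = 1, cut = 9.0,
 /
"""
with open("min_ti.in", "w") as f:
    f.write(mdin)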
All the best
Ross
On 2/23/14, 12:46 PM, "psu4.uic.edu" <psu4.uic.edu> wrote:
>Dear Ross and Jason and the community,
>
> Thanks for the detailed explanations. The new features do look quite
>exciting (and speedy)! A few more questions regarding TI pmemd.MPI/
>pmemd.cuda.MPI:
>
>a. Since we might have to rebuild our CPU/GPU nodes, we wonder whether the
>upcoming TI pmemd.MPI/pmemd.cuda.MPI still follows the "power of two"
>requirement that TI sander.MPI has?
>
>b. Will the softcore potential minimization step be supported by
>pmemd.MPI/pmemd.cuda.MPI?
>
>c. Also, will the 3-step/1-step TI parameters be changed significantly? Or
>will TI just be shifted from sander.MPI to pmemd.MPI/pmemd.cuda.MPI?
>Thanks.
>
> Cheers,
> Henry
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber