Re: [AMBER] in vacuo dynamics

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 26 May 2014 15:30:48 -0400

On Sun, May 25, 2014 at 9:42 AM, Massimiliano Porrini <
m.porrini.iecb.u-bordeaux.fr> wrote:

> Hi Jason,
>
> That said, I am eager to see the difference in performance of the
> PME implementation between Amber12 and OpenMM (both CPU and GPU).
>

My experience -- using only the OpenMM Python application layer, which
incurs far more overhead than an OpenMM-accelerated compiled program -- is
that PME calculations in OpenMM run at roughly half the speed of pmemd.cuda
on a single GPU (when using the 'mixed' precision model in OpenMM and the
SPFP precision model in pmemd.cuda). OpenMM-Python also suffers _much_ more
from frequent I/O than pmemd.cuda does (since, again, OpenMM incurs a lot of
overhead from the built-in dimensional analysis and the time spent in the
Python interpreter).
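
In case it helps, here is a minimal sketch of the kind of Python-layer script
I am comparing against pmemd.cuda. The file names, cutoff, and reporter
intervals are placeholders, not recommendations, and 'CudaPrecision' is the
CUDA platform property as I know it from current OpenMM versions:

    from simtk.openmm import app, Platform, LangevinIntegrator
    from simtk import unit

    # Load an Amber topology/coordinate pair.
    prmtop = app.AmberPrmtopFile('system.prmtop')
    inpcrd = app.AmberInpcrdFile('system.inpcrd')

    # PME with a cutoff and constraints on bonds to hydrogen.
    system = prmtop.createSystem(nonbondedMethod=app.PME,
                                 nonbondedCutoff=8.0*unit.angstrom,
                                 constraints=app.HBonds)
    integrator = LangevinIntegrator(300*unit.kelvin, 1.0/unit.picosecond,
                                    2.0*unit.femtosecond)

    # CUDA platform with the 'mixed' precision model mentioned above.
    platform = Platform.getPlatformByName('CUDA')
    properties = {'CudaPrecision': 'mixed'}

    sim = app.Simulation(prmtop.topology, system, integrator,
                         platform, properties)
    sim.context.setPositions(inpcrd.positions)

    # Every reporter call drops back into the Python layer, which is where
    # the I/O overhead relative to pmemd.cuda comes from.
    sim.reporters.append(app.StateDataReporter('md.log', 1000, step=True,
                                               potentialEnergy=True,
                                               temperature=True))
    sim.reporters.append(app.DCDReporter('md.dcd', 1000))
    sim.step(50000)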

Another limitation of the OpenMM engine compared to Amber is that the
OpenMM Ewald/PME implementation (as well as the imaging code) is hard-coded
for orthorhombic unit cells, so other cell shapes (common molecular
crystals or truncated octahedra, for instance) are not supported by OpenMM.
Using a truncated octahedron or a rhombic dodecahedron allows you to
significantly reduce the number of solvent molecules in your system, which
further reduces computational cost.
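
For a back-of-the-envelope sense of what that costs, here is a quick
comparison of cell volumes at the same minimum image distance (this is just
standard geometry, nothing Amber- or OpenMM-specific):

    import math

    # Cell volume as a fraction of a cubic cell with the same minimum
    # distance d between periodic images.
    d = 1.0
    cube = d**3
    trunc_oct = 4.0 * d**3 / (3.0 * math.sqrt(3.0))   # ~0.77 * cube
    rhombic_dodec = d**3 / math.sqrt(2.0)             # ~0.71 * cube

    print(cube / trunc_oct)       # ~1.30: a cube needs ~30% more solvent
    print(cube / rhombic_dodec)   # ~1.41: ~41% more than a rhombic dodecahedron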

That said, OpenMM lets you implement entirely new models that run directly
on GPUs with impressive performance, often in only half a dozen lines of
code. For me, at least, Amber and OpenMM satisfy different needs.
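
As one made-up example of what I mean -- the potential form and parameter
names below are purely illustrative -- a Buckingham-style exp-6 nonbonded
term can be added to an existing System like this, and OpenMM generates the
GPU kernel from the energy string automatically:

    from simtk.openmm import CustomNonbondedForce

    # Hypothetical exp-6 pair potential with simple combining rules; the
    # suffixes 1 and 2 refer to the two interacting particles.
    buck = CustomNonbondedForce(
        'sqrt(A1*A2)*exp(-0.5*(B1+B2)*r) - sqrt(C1*C2)/r^6')
    buck.addPerParticleParameter('A')
    buck.addPerParticleParameter('B')
    buck.addPerParticleParameter('C')
    for A, B, C in per_atom_params:   # placeholder: one (A, B, C) per atom
        buck.addParticle([A, B, C])
    system.addForce(buck)             # 'system' is an existing openmm System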

HTH,
Jason

-- 
Jason M. Swails
BioMaPS, Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon May 26 2014 - 13:00:02 PDT