Re: [AMBER] in vacuo dynamics

From: Robert McGibbon <rmcgibbo.gmail.com>
Date: Sun, 25 May 2014 12:30:49 -0700

> OpenMM (it uses all the available threads by default; I don't know yet how
> to select a user-chosen number)
> 32 processes: 18.6 ns/day

How did you actually run the CPU platform for OpenMM? Did you have 32
processes running per node?

Also, the number of threads per process can be set with the
OPENMM_CPU_THREADS env variable, or
by using the CpuThreads platform property.
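
For illustration, a minimal sketch using the OpenMM Python layer (the
property name 'CpuThreads' follows the wording above; depending on the
OpenMM version the CPU platform may expose it as 'Threads' instead):

    import os
    from simtk import unit
    from simtk.openmm import Platform, System, VerletIntegrator, Context

    # Option 1: environment variable, set before any Context is created
    os.environ['OPENMM_CPU_THREADS'] = '8'

    # Option 2: platform property passed when the Context is built
    # (a trivial one-particle System, just to demonstrate platform selection)
    system = System()
    system.addParticle(1.0)
    integrator = VerletIntegrator(1.0 * unit.femtoseconds)
    platform = Platform.getPlatformByName('CPU')
    context = Context(system, integrator, platform, {'CpuThreads': '8'})

When using the app layer, the same dictionary can be passed to Simulation
as the platformProperties argument.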

-Robert


On Sun, May 25, 2014 at 6:42 AM, Massimiliano Porrini <
m.porrini.iecb.u-bordeaux.fr> wrote:

> Hi Jason,
>
> I just wanted to share with Amber users the following performance results.
>
> I was aware of OpenMM's existence, but I had never tried it...
>
> I gave the standalone OpenMM a try, using both the CPU and GPU platforms,
> and below are the performance results for _vacuum_ simulations
> (which should be taken with extreme care, as they are preliminary;
> I still need to familiarise myself better with the OpenMM directives):
>
>
> System: 758 atoms, vacuum, Langevin(gamma=2/ps)
>
>
> 32-thread blade:
>
> AMBER12 (sander.MPI)
> 16 processes: 61.79 ns/day
>
> OpenMM (it uses all the available threads by default; I don't know yet how
> to select a user-chosen number)
> 32 processes: 18.6 ns/day
>
>
> GPU workstation (12-core CPU + Kepler K20):
>
> OPENMM
>
> 12 CPU cores: 11.9 ns/day
> *1 GPU: 448.0 ns/day*
>
>
> On the CPU platform sander.MPI is apparently far faster, but
> on the Kepler GPU the acceleration obtained with OpenMM is impressive
> (~7 times faster than sander.MPI with 16 processes).
>
> That said, I am eager to see the difference in performance between the
> PME implementations of Amber12 and OpenMM (both CPU and GPU).
>
> Thanks again Jason for your fruitful suggestion.
>
> Cheers,
>
>
>
> On 23 May 2014 16:41, Jason Swails <jason.swails.gmail.com> wrote:
>
> > On Fri, May 23, 2014 at 10:06 AM, Massimiliano Porrini <
> > m.porrini.iecb.u-bordeaux.fr> wrote:
> >
> > > On 23 May 2014 14:11, Jason Swails <jason.swails.gmail.com> wrote:
> > >
> >
> >
> > > > If performance is really critical to you, you can actually run vacuum
> > > > dynamics directly on either an NVidia or ATI/AMD GPU (if you have
> > > > access to one) using OpenMM. Their optimized CPU platform may also be
> > > > a little faster than sander for pure vacuum simulations as well.
> > > > There is a section about OpenMM capabilities in the ParmEd section of
> > > > the Amber 14 manual, and you can find examples here:
> > > > http://swails.github.io/ParmEd/examples/amber/index.html (use the GB
> > > > example, but don't pass any value for implicitSolvent).
> > > >
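
A rough sketch of the kind of vacuum setup being described, using OpenMM's
own Amber file readers rather than the ParmEd route, with hypothetical file
names 'complex.prmtop' / 'complex.inpcrd' and the Langevin settings from the
benchmark above; omitting the implicitSolvent argument leaves the system in
vacuo:

    from simtk import unit
    from simtk.openmm import LangevinIntegrator, Platform
    from simtk.openmm.app import (AmberPrmtopFile, AmberInpcrdFile,
                                  Simulation, NoCutoff, HBonds)

    prmtop = AmberPrmtopFile('complex.prmtop')   # hypothetical file names
    inpcrd = AmberInpcrdFile('complex.inpcrd')

    # No implicitSolvent argument -> plain vacuum; NoCutoff = infinite cut-off
    system = prmtop.createSystem(nonbondedMethod=NoCutoff, constraints=HBonds)

    # Langevin thermostat with gamma = 2/ps, as in the benchmarks above
    integrator = LangevinIntegrator(300.0 * unit.kelvin,
                                    2.0 / unit.picosecond,
                                    2.0 * unit.femtoseconds)

    platform = Platform.getPlatformByName('CUDA')   # or 'CPU'
    sim = Simulation(prmtop.topology, system, integrator, platform)
    sim.context.setPositions(inpcrd.positions)
    sim.step(10000)

The ParmEd examples linked above drive essentially the same kind of setup
through ParmEd's OpenMM integration.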
> > >
> > > This is extremely interesting! And I do have access to a Tesla K20
> > > board.
> > > I assume this capability is present only in Amber 14, so it might be
> > > worth upgrading from Amber 12 to 14, even though we purchased it very
> > > recently (indeed, I did not find anything related to OpenMM in the
> > > AmberTools 13 user guide).
> > > Useful discussion, as I did not know about the above web page of yours
> > > either (about ParmEd, I knew only this one:
> > > http://jswails.wikidot.com/parmed#toc7).
> > > I will have a look at it more carefully to see whether I can exploit
> > > the GPU power for gas-phase calculations, thanks.
> > >
> >
> > If you are interested in the OpenMM capabilities of ParmEd, you may want
> > to download the source from GitHub, as there is more active OpenMM-related
> > development there than what is available in the AmberTools 14 release.
> > (Indeed, OpenMM support was added well after AmberTools 13 was released,
> > so this functionality is AmberTools 14 only.)
> >
> > The main GitHub page for ParmEd is http://swails.github.io/ParmEd/ (and
> > the repository is at http://github.com/swails/ParmEd). Obviously, if you
> > wish to use the ParmEd-OpenMM integration, you will need to install
> > OpenMM as well (but the performance will be _much_ greater with OpenMM
> > in vacuum than with sander).
> >
> >
> >
> > > By no cut-off I assume you mean an "infinite" cut-off (like the
> > > cut = 1000.0 Angs I used), and in my runs it turned out that igb=6 is
> > > faster with Berendsen, slightly faster with Andersen, but slower with
> > > Langevin.
> > >
> >
> > Yes, infinite cutoff is any cutoff larger than the total system size, at
> > least as I use the term.
> >
> > HTH,
> > Jason
> >
> > --
> > Jason M. Swails
> > BioMaPS,
> > Rutgers University
> > Postdoctoral Researcher
> >
>
>
>
> --
> Dr Massimiliano Porrini
> Valérie Gabelica Team
> U869 ARNA - Inserm / Bordeaux University
> Institut Européen de Chimie et Biologie (IECB)
> 2, rue Robert Escarpit
> 33607 Pessac Cedex
> FRANCE
>
> Tel : 33 (0)5 40 00 63 31
>
> http://www.iecb.u-bordeaux.fr/teams/GABELICA
> Emails: m.porrini.iecb.u-bordeaux.fr
> mozz76.gmail.com
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun May 25 2014 - 13:00:02 PDT