Re: [AMBER] in vacuo dynamics

From: Massimiliano Porrini <m.porrini.iecb.u-bordeaux.fr>
Date: Mon, 26 May 2014 12:34:20 +0200

Hi,

On 25 May 2014 21:30, Robert McGibbon <rmcgibbo.gmail.com> wrote:

> > OPENMM (it uses all the available threads by default, dunno how to
> > select a number chosen by the user yet)
> > 32 processes: 18.6 ns/day
>
> How did you actually run the CPU platform for OpenMM? Did you have 32
> processes running per node?
>

As I wrote, my results should be taken with caution: I still need to
learn a lot about how to run OpenMM and how to exploit all its
capabilities.
I did not choose any number of processes (as one does with
mpirun/mpiexec); I simply ran my Python input file:

python name.py > fname.out

where name.py contains the following lines related to the platform selection:

********************************************************
# explicitly request the optimized CPU platform
platform = mm.Platform.getPlatformByName('CPU')
simulation = app.Simulation(prmtop.topology, system, integrator, platform)
********************************************************

Then, running the command top and pressing the key 1 (or running htop),
I can see all 32 available processors (hardware threads?) busy [*].
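
For what it is worth, one can also ask OpenMM at run time which platform
was actually picked and what its properties are currently set to; below
is a minimal sketch, assuming the Simulation object built as above
(method names as I understand them from the OpenMM Python API):

********************************************************
# query the platform the Context ended up on
plat = simulation.context.getPlatform()
print('Platform in use: %s' % plat.getName())

# list the platform-specific properties and their current values
for prop in plat.getPropertyNames():
    print('%s = %s' % (prop, plat.getPropertyValue(simulation.context, prop)))
********************************************************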



>
> Also, the number of threads per process can be set with the
> OPENMM_CPU_THREADS env variable, or by using the CpuThreads platform
> property.
>

Thanks for this suggestion; however, even after setting this environment
variable like this:

export OPENMM_CPU_THREADS=2

I still see all 32 processes working.
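
In case it helps someone else: if I understood Robert's hint correctly,
the thread count can presumably also be fixed when the Simulation object
is built, by passing CpuThreads as a platform property. A sketch I have
not tested myself yet:

********************************************************
platform = mm.Platform.getPlatformByName('CPU')
# 'CpuThreads' is the property name Robert mentioned; I still need to
# check it against the OpenMM version installed here
properties = {'CpuThreads': '2'}
simulation = app.Simulation(prmtop.topology, system, integrator,
                            platform, properties)
********************************************************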

By the way, I think I should not keep discussing OpenMM issues on this
mailing list; I will ask for help on the OpenMM public forum (through
which I am sure I can find the solution) and I will read the user's
guide more carefully.

Thanks again and all the best,


[*] As far as I understand, the blade has 2 sockets with 8 cores each
and 2 threads per core, for a total of 32 logical processors (in a
previous email of mine I had mistakenly written a different
specification, sorry!).




>
> -Robert
>
>
> On Sun, May 25, 2014 at 6:42 AM, Massimiliano Porrini <
> m.porrini.iecb.u-bordeaux.fr> wrote:
>
> > Hi Jason,
> >
> > I just wanted to share with Amber users the following performance
> results.
> >
> > I was aware of OpenMM's existence, but I had never tried it...
> >
> > I gave the standalone OpenMM a try, using both the CPU and GPU
> > platforms, and below are the performance results for _vacuum_
> > simulations (which should be considered with extreme care, as they
> > are preliminary; indeed, I still need to familiarise myself with the
> > OpenMM directives):
> >
> >
> > System: 758 atoms, vacuum, Langevin(gamma=2/ps)
> >
> >
> > 32-thread blade:
> >
> > AMBER12 (sander.MPI)
> > 16 processes: 61.79 ns/day
> >
> > OPENMM (it uses all the available threads by default, dunno how to
> > select a number chosen by the user yet)
> > 32 processes: 18.6 ns/day
> >
> >
> > GPU-workstation (12 cores CPU + Kepler K20):
> >
> > OPENMM
> >
> > 12 CPUs: 11.9 ns/day
> > *1 GPU: 448.0 ns/day*
> >
> >
> > Using the CPU 'platform', sander.MPI is apparently far faster, but
> > on the Kepler GPU the acceleration obtained with OpenMM is impressive
> > (~7 times faster than sander.MPI with 16 processes).
> >
> > That said, I am eager to see the difference in performance of the
> > PME implementation between Amber12 and OpenMM (both CPU and GPU).
> >
> > Thanks again Jason for your fruitful suggestion.
> >
> > Cheers,
> >
> >
> >
> > On 23 May 2014 16:41, Jason Swails <jason.swails.gmail.com> wrote:
> >
> > > On Fri, May 23, 2014 at 10:06 AM, Massimiliano Porrini <
> > > m.porrini.iecb.u-bordeaux.fr> wrote:
> > >
> > > > On 23 May 2014 14:11, Jason Swails <jason.swails.gmail.com> wrote:
> > > >
> > >
> > >
> > > > > If performance is really critical to you, you can actually run
> > > > > vacuum dynamics directly on either an NVidia or ATI/AMD GPU (if
> > > > > you have access to one) using OpenMM. Their optimized CPU platform
> > > > > may also be a little faster than sander for pure vacuum simulations
> > > > > as well. There is a section about OpenMM capabilities in the ParmEd
> > > > > section of the Amber 14 manual, and you can find examples here:
> > > > > http://swails.github.io/ParmEd/examples/amber/index.html (use the
> > > > > GB example, but don't pass any value for implicitSolvent).
> > > > >
> > > >
> > > > This is extremely interesting! And I do have access to a Tesla K20
> > > > board. I assume this capability is present only in Amber 14, so it
> > > > might be worth upgrading from Amber 12 to 14, even though we
> > > > purchased it very recently (indeed, in the AmberTools 13 user's
> > > > guide I did not find anything related to OpenMM).
> > > > Useful discussion, as I did not know about the above web page of
> > > > yours either (about ParmEd I knew only this one:
> > > > http://jswails.wikidot.com/parmed#toc7); I will have a look at it
> > > > more carefully to see if I can exploit the GPU power for gas-phase
> > > > calculations, thanks.
> > > >
> > >
> > > If you are interested in the OpenMM capabilities of ParmEd, you may
> > > want to download the source from Github, as there is more active
> > > OpenMM-related development there than what is available in the
> > > AmberTools 14 release. (Indeed, OpenMM support was added well after
> > > AmberTools 13 was released, so this functionality is AmberTools 14
> > > only.)
> > >
> > > The main Github page for ParmEd is http://swails.github.io/ParmEd/
> > > (and the repository is at http://github.com/swails/ParmEd). Obviously,
> > > if you wish to use the ParmEd-OpenMM integration, you will need to
> > > install OpenMM as well (but the performance will be _much_ greater
> > > with OpenMM in vacuum than with sander).
> > >
> > >
> > >
> > > > By no cut-off I assume you mean an "infinite" cut-off (like the one
> > > > I used, cut = 1000.0 Angs). In my runs it turned out that igb=6 is
> > > > faster with Berendsen, slightly faster with Andersen, but slower
> > > > with Langevin.
> > > >
> > >
> > > Yes, infinite cutoff is any cutoff larger than the total system size,
> > > at least as I use the term.
> > >
> > > HTH,
> > > Jason
> > >
> > > --
> > > Jason M. Swails
> > > BioMaPS,
> > > Rutgers University
> > > Postdoctoral Researcher



-- 
Dr Massimiliano Porrini
Valérie Gabelica Team
U869 ARNA - Inserm / Bordeaux University
Institut Européen de Chimie et Biologie (IECB)
2, rue Robert Escarpit
33607 Pessac Cedex
FRANCE
Tel   : 33 (0)5 40 00 63 31
http://www.iecb.u-bordeaux.fr/teams/GABELICA
Emails: m.porrini.iecb.u-bordeaux.fr
             mozz76.gmail.com
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber