Re: [AMBER] in vacuo dynamics

From: Massimiliano Porrini <m.porrini.iecb.u-bordeaux.fr>
Date: Tue, 27 May 2014 15:04:49 +0200

On 26 May 2014 21:21, Jason Swails <jason.swails.gmail.com> wrote:

> On Mon, May 26, 2014 at 6:34 AM, Massimiliano Porrini <
> m.porrini.iecb.u-bordeaux.fr> wrote:
>
> > Hi,
> >
> > On 25 May 2014 21:30, Robert McGibbon <rmcgibbo.gmail.com> wrote:
> >
> > > > OPENMM (it uses all the available threads by default, dunno yet how
> > > > to select a user-chosen number)
> > > > 32 processes: 18.6 ns/day
> > >
> > > How did you actually run the CPU platform for OpenMM? Did you have 32
> > > processes running per node?
> > >
> >
> > As I wrote, my results should be taken with caution; I still have a lot
> > to learn about how to run OpenMM and how to exploit all of its
> > capabilities.
> > I did not choose a number of processes (as one does with
> > mpirun/mpiexec); I simply ran my Python input script:
> >
> > python name.py > fname.out
> >
> > where name.py contains the following lines related to platform
> > selection:
> >
> > ********************************************************
> > platform = mm.Platform.getPlatformByName('CPU')
> > simulation = app.Simulation(prmtop.topology, system, integrator, platform)
> > ********************************************************
> >
> > Then, running the command top and pressing the 1 key (or running htop),
> > I can see all 32 available processors (threads?) running [*].
> >
> >
> >
> > >
> > > Also, the number of threads per process can be set with the
> > > OPENMM_CPU_THREADS env variable, or
> > > by using the CpuThreads platform property.
> > >
> >
> > Thanks for this suggestion; however, even after setting this env
> > variable like this:
> >
> > export OPENMM_CPU_THREADS=2
> >
> > I still see all 32 processes working.
> >
>
> OPENMM_CPU_THREADS and the CpuThreads platform property do not apply to any
> released version of OpenMM. You need to download, compile, and use the
> development version of OpenMM if you wish to control the number of CPU
> threads that get launched. Short of adjusting the code and recompiling,
> there is no way for you to control the number of threads used by the CPU
> platform in OpenMM 6.0.1...
>

O.K., that is why it did not work.
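
For reference, here is a minimal sketch of how the thread count would be
passed once a build exposes the CpuThreads property mentioned above. The
property name comes from this thread and applies only to the development
version; prmtop, system, and integrator are the same objects as in my
name.py, so treat this as untested on a release build:

********************************************************
from simtk.openmm import app
import simtk.openmm as mm

# Select the CPU platform explicitly.
platform = mm.Platform.getPlatformByName('CPU')

# Pass the thread count as a platform property. The 'CpuThreads' name
# is the development-version property quoted above; release 6.0.1
# ignores it and always uses all logical CPUs.
properties = {'CpuThreads': '2'}

simulation = app.Simulation(prmtop.topology, system, integrator,
                            platform, properties)
********************************************************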



>
> Furthermore, you wouldn't be the first to experience scalability issues on
> the CPU platform, so for the time being I suggest sticking to OpenMM's
> GPU-accelerated platforms for your vacuum dynamics.
>

In any case, that was also my final conclusion :)
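
For the record, switching name.py to the GPU is a one-line change plus an
optional precision property. 'CUDA' and the 'CudaPrecision' property name
are what I would try on an OpenMM 6.x build, so treat them as an
assumption:

********************************************************
platform = mm.Platform.getPlatformByName('CUDA')
# 'mixed' precision is a common speed/accuracy compromise; the property
# name 'CudaPrecision' is the OpenMM 6.x-era one -- check
# platform.getPropertyNames() on your build.
properties = {'CudaPrecision': 'mixed'}
simulation = app.Simulation(prmtop.topology, system, integrator,
                            platform, properties)
********************************************************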



>
> One other comment I'll make is that you can actually have ParmEd run OpenMM
> simulations for you directly. There is an "OpenMM" command that behaves
> exactly like sander -- it reads sander/pmemd input files, runs a
> calculation with OpenMM, and writes sander-formatted trajectories and
> restart files. You would use this command exactly the same way as you
> would run sander or pmemd on the command-line.
>
> $ parmed.py -p my.prmtop
> loadRestrt my.inpcrd
> OpenMM -O -i mdin -o mdout -x mdcrd -inf mdinfo ...etc
>
> This may make it a bit easier to use OpenMM coming from an Amber
> background.
>

I assume this is only available in Amber14, as you mentioned in a
previous email. And unfortunately we do not have Amber14 in my group...
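
In the meantime, plain OpenMM can read the Amber files directly, so
Amber14 is not needed for this route. A minimal, self-contained in vacuo
sketch (the file names, temperature, report intervals, and run length are
placeholders of mine, not anything prescribed in this thread):

********************************************************
from simtk.openmm import app
import simtk.openmm as mm
from simtk import unit

# Read the Amber topology and coordinates with OpenMM's own parsers.
prmtop = app.AmberPrmtopFile('my.prmtop')
inpcrd = app.AmberInpcrdFile('my.inpcrd')

# In vacuo: no cutoff and no periodic box.
system = prmtop.createSystem(nonbondedMethod=app.NoCutoff,
                             constraints=app.HBonds)
integrator = mm.LangevinIntegrator(300*unit.kelvin, 1.0/unit.picosecond,
                                   2.0*unit.femtoseconds)

platform = mm.Platform.getPlatformByName('CUDA')
simulation = app.Simulation(prmtop.topology, system, integrator, platform)
simulation.context.setPositions(inpcrd.positions)

simulation.minimizeEnergy()
simulation.reporters.append(app.DCDReporter('traj.dcd', 1000))
simulation.reporters.append(app.StateDataReporter('fname.out', 1000,
        step=True, potentialEnergy=True, temperature=True))
simulation.step(50000)  # 100 ps at a 2 fs time step
********************************************************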



>
> > By the way, I think I should not keep discussing OpenMM issues on this
> > mailing list; I will ask for help on the OpenMM public forum (through
> > which I am sure I can find the solution) and I will read the user
> > guide more carefully.
> >
> > Thanks again and all the best,
> >
> >
> > [*] As far as I understand, the blade has 2 sockets with 8 cores each
> > and 2 threads per core, so there is a total of 32 logical processors
> > (in a previous email of mine I had wrongly written a different
> > specification, sorry!).
> >
>
> Oh, ick. OpenMM uses the number of CPUs in the machine as the default
> number of threads (the only choice for the release versions). I believe
> each hyperthreaded core appears as 2 logical cores to the OS (and
> therefore to OpenMM), so OpenMM tries to use these virtual cores as well.
> As I understand it, applications heavy in floating-point arithmetic do
> not benefit much from hyperthreading.
>

One more reason to use OpenMM only with the GPU platform
(at least for my vacuum simulations).
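
Incidentally, this is easy to check from Python: the OS (and hence
OpenMM's default) counts logical CPUs, not physical cores.

********************************************************
import multiprocessing

# On a 2-socket, 8-cores-per-socket, 2-threads-per-core blade the OS
# reports 2 * 8 * 2 = 32 logical CPUs -- the default thread count used
# by OpenMM's CPU platform.
print(multiprocessing.cpu_count())  # -> 32 on the blade described above
********************************************************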



> HTH,
> Jason
>

That clarifies things a lot, thanks!
Max



>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
>



-- 
Dr Massimiliano Porrini
Valérie Gabelica Team
U869 ARNA - Inserm / Bordeaux University
Institut Européen de Chimie et Biologie (IECB)
2, rue Robert Escarpit
33607 Pessac Cedex
FRANCE
Tel   : 33 (0)5 40 00 63 31
http://www.iecb.u-bordeaux.fr/teams/GABELICA
Emails: m.porrini.iecb.u-bordeaux.fr
             mozz76.gmail.com
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue May 27 2014 - 06:30:02 PDT