Re: [AMBER] in vacuo dynamics

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 26 May 2014 15:21:59 -0400

On Mon, May 26, 2014 at 6:34 AM, Massimiliano Porrini <
m.porrini.iecb.u-bordeaux.fr> wrote:

> Hi,
>
> On 25 May 2014 21:30, Robert McGibbon <rmcgibbo.gmail.com> wrote:
>
> > > OPENMM (it uses all the available threads by default; I don't know
> > > how to select a user-chosen number yet)
> > > 32 processes: 18.6 ns/day
> >
> > How did you actually run the CPU platform for OpenMM? Did you have 32
> > processes running per node?
> >
>
> As I had written, my results should be taken with caution; I still
> need to learn a lot about how to run OpenMM and how to exploit all
> its capabilities.
> I did not choose a number of processes (as one does with
> mpirun/mpiexec); I simply ran my Python input file:
>
> python name.py > fname.out
>
> where name.py contains the following platform-related lines:
>
> ********************************************************
> platform = mm.Platform.getPlatformByName('CPU')
> simulation = app.Simulation(prmtop.topology, system, integrator, platform)
> ********************************************************
>
> Then, typing the command top followed by the key 1 (or running htop),
> I can see all 32 available processors (threads?) busy [*].
>
>
>
> >
> > Also, the number of threads per process can be set with the
> > OPENMM_CPU_THREADS env variable, or
> > by using the CpuThreads platform property.
> >
>
> Thanks for the suggestion; however, even after setting this
> environment variable like this:
>
> export OPENMM_CPU_THREADS=2
>
> I still see all 32 processors working.
>

OPENMM_CPU_THREADS and the CpuThreads platform property do not apply to any
released version of OpenMM. You would need to download, compile, and use
the development version of OpenMM to control the number of CPU threads that
get launched. Short of adjusting the code and recompiling, there is no way
to control the number of threads used by the CPU platform in OpenMM 6.0.1.
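For anyone on the development sources, a minimal sketch of the two routes
mentioned above might look like the following. The property name
`'CpuThreads'` and the environment-variable behavior are assumptions taken
from this thread, not the 6.0.1 release; the OpenMM-specific calls are left
as comments so the env-var part stands on its own:

```python
import os

# Assumed behavior (dev tree only, per this thread): the CPU platform
# reads OPENMM_CPU_THREADS at startup, so set it before the Simulation
# object is constructed. This has no effect on the 6.0.1 release.
os.environ['OPENMM_CPU_THREADS'] = '2'

# The equivalent platform-property route would look roughly like:
#   platform = mm.Platform.getPlatformByName('CPU')
#   simulation = app.Simulation(prmtop.topology, system, integrator,
#                               platform, {'CpuThreads': '2'})

print(os.environ['OPENMM_CPU_THREADS'])
```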

Furthermore, you wouldn't be the first to report scalability problems with
the CPU platform, so for the time being I suggest sticking to OpenMM's
GPU-accelerated platforms for your vacuum dynamics.

One other comment I'll make is that you can actually have ParmEd run OpenMM
simulations for you directly. There is an "OpenMM" command that behaves
exactly like sander -- it reads sander/pmemd input files, runs a
calculation with OpenMM, and writes sander-formatted trajectories and
restart files. You use this command exactly the same way you would run
sander or pmemd on the command line.

$ parmed.py -p my.prmtop
loadRestrt my.inpcrd
OpenMM -O -i mdin -o mdout -x mdcrd -inf mdinfo ...etc

This may make it a bit easier to use OpenMM coming from an Amber background.

> By the way, I think I should not keep discussing OpenMM issues on this
> mailing list; I will ask for help on the OpenMM public forum (where I am
> sure I can find the solution) and will read the users guide more
> carefully.
>
> Thanks again and all the best,
>
>
> [*] As far as I understand, the blade has 2 sockets with 8 cores each
> and 2 threads per core, so there are 32 logical processors in total
> (in a previous email I had mistakenly written a different
> specification, sorry!).
>

Oh, ick. OpenMM uses the number of CPUs in the machine as the default
number of threads (the only choice in the release versions). I believe
each physical core of a hyperthreaded CPU appears as 2 logical cores to
the OS (and therefore to OpenMM), so OpenMM tries to use these virtual
cores as well. As I understand it, applications heavy in floating-point
arithmetic benefit little from hyperthreading, since the two hardware
threads share the core's floating-point units.
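You can see the count OpenMM would pick up with a quick check; this is a
sketch using the standard library, on the assumption that OpenMM's default
matches the OS's logical-processor count:

```python
import os

# os.cpu_count() reports *logical* processors, so a hyperthreaded blade
# with 2 sockets x 8 cores x 2 threads shows up as 32 -- which is the
# number of threads a release-version OpenMM CPU platform would launch.
logical_cpus = os.cpu_count()
print(logical_cpus)
```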

HTH,
Jason

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon May 26 2014 - 12:30:02 PDT