Re: [AMBER] Speed of heating MD

From: Jason Swails <>
Date: Sat, 25 Jul 2015 20:51:44 -0400

On Sat, Jul 25, 2015 at 5:32 AM, Amber mail <> wrote:

> Dear AMBER community,
> I am running a heating MD,
> Description of the job:
> I performed 50 ps of MD simulation at 100 K. The system is then heated in
> increments of 25 K, with 50 ps of MD simulation at each temperature
> increment, until the desired temperature of 310 K is reached. Accordingly,
> I created separate heating input files; for example, to heat the system
> from 100 K to 125 K, the input file ** has the following modification:
> temp0=125 & tempi=100
> Then, to heat further, the next input file should contain the following
> modification:
> temp0=150 & tempi=125
> The problem is that the current simulation speed is very low (0.23
> ns/day). On the other hand, the minimization, which I just finished, was
> performed at 13 ns/day.

Where are you getting 13 ns/day for minimization? To compute ns/day, you
take the time required to complete some set number of steps (say, 1000),
convert that to steps per day, and then multiply by the time step expressed
in nanoseconds (time steps are usually 1 to 2 femtoseconds). With
minimization, there is no such thing as a time step -- it's not a dynamics
simulation. Therefore, there is no such thing as "nanoseconds per day" for
minimization. So it's not clear to me whether your minimization and
dynamics efficiencies can be compared. Really what you need to compare is
the number of steps per day. Minimization steps and dynamics steps will be
roughly the same (although not exactly, since dynamics and minimization do
different things besides computing energies and/or forces).
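
To illustrate with your own numbers: 0.23 ns/day with a 1 fs (0.000001 ns)
time step works out to 0.23 / 0.000001 = 230,000 dynamics steps per day.
That steps-per-day figure, not ns/day, is what you can meaningfully compare
with the number of minimization steps you complete per day.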

> My question is: is it possible that specific parameters may affect the
> speed of the simulation?

Certainly. A long cutoff leads to slower simulations. A shorter time step
leads to fewer ns/day. A stricter tolerance on things like the Ewald
reciprocal sum or SHAKE convergence will lead to slower simulations. But
really, each of those settings (except perhaps the SHAKE tolerance) should
be set to the same value for both minimization and dynamics, so there
should not be a significant difference between them.
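
For reference, those knobs live in the &cntrl namelist (cut, dt, and tol
for the SHAKE tolerance) and the &ewald namelist. A minimal sketch, with
values shown only for illustration:

  &cntrl
    cut=8.0,        ! nonbonded cutoff in Angstroms
    dt=0.002,       ! time step in ps (2 fs)
    ntc=2, ntf=2,   ! SHAKE constraints on bonds involving hydrogen
    tol=0.00001,    ! SHAKE convergence tolerance
  /
  &ewald
    dsum_tol=1.0e-5,  ! direct-sum tolerance for PME
  /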

The program you use, and whether you run in serial or parallel, will also
significantly affect how fast your simulation is (as will the quality of
the hardware you are running on). The fastest simulations are run with a
state-of-the-art NVIDIA GPU using pmemd.cuda. The fastest CPU calculations
are run using pmemd.MPI on anywhere between 32 and 128 CPUs connected by a
very fast interconnect. The slowest simulations are run in serial with
sander. If you only have AmberTools and not Amber, you are limited to
using sander on one or a small number of CPUs (between 16 and
64, perhaps -- sander does not scale well beyond that limit for most
systems).
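
For concreteness, the three modes look roughly like this (the file names
here are just placeholders for your own; the flags are the standard
sander/pmemd command-line options):

  # serial sander
  sander -O -i heat.in -o heat.out -p prmtop -c min.rst -r heat.rst -x heat.nc

  # parallel pmemd on 32 CPU cores
  mpirun -np 32 pmemd.MPI -O -i heat.in -o heat.out -p prmtop -c min.rst -r heat.rst -x heat.nc

  # single GPU
  pmemd.cuda -O -i heat.in -o heat.out -p prmtop -c min.rst -r heat.rst -x heat.nc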

> Here is my input file
> Heat
> &cntrl
> imin=0,
> ntx=1,
> irest=0,
> nstlim=50000,
> dt=0.001,

This is a short time step. It is common to use a 2 fs time step when using
SHAKE (as you are). This will double your ns/day right there (or at least
come close to it).

> ntf=2,
> ntc=2,
> tempi=100.0,
> temp0=125.0,
> ntpr=100,
> ntwx=100,
> cut=10.0,

You can also use an 8 Å cutoff (which is the default). That should speed
up your simulations without compromising very much accuracy (since PME
ensures the full electrostatic interactions are computed regardless of the
value of the cutoff).

> ntb=2,
> ntp=1,
> ntt=3,
> gamma_ln=1.0,
> nmropt=1,
> /
> &wt type='END' /

nmropt is not doing anything here. You can remove the &wt type='END' / and
nmropt=1 lines. It probably won't have a large impact on performance, but
it might slow the calculation down a tiny bit. (I didn't realize you
weren't applying any weight changes or geometric restraints in your last
email, or I would have just suggested setting nmropt=0.)
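
Putting those suggestions together, the input for the 100 K -> 125 K stage
might look something like the sketch below. I have also halved nstlim so
that each stage still covers 50 ps at the 2 fs time step (25,000 steps x
2 fs = 50 ps); adjust it if you want a different length:

  Heat from 100 K to 125 K
   &cntrl
    imin=0,
    ntx=1,
    irest=0,
    nstlim=25000,   ! 50 ps at dt=0.002
    dt=0.002,       ! 2 fs time step, fine with SHAKE (ntc=2, ntf=2)
    ntf=2,
    ntc=2,
    tempi=100.0,
    temp0=125.0,
    ntpr=100,
    ntwx=100,
    cut=8.0,        ! default cutoff; PME still handles long-range electrostatics
    ntb=2,
    ntp=1,
    ntt=3,
    gamma_ln=1.0,
   /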


Jason M. Swails
Rutgers University
Postdoctoral Researcher
AMBER mailing list
Received on Sat Jul 25 2015 - 18:00:02 PDT