Re: [AMBER] Speed of heating MD

From: Amber mail <amber.auc14.gmail.com>
Date: Sun, 26 Jul 2015 13:20:33 +0200

Dear Jason,

Thanks a lot for your very informative reply!

> The problem is that the current simulation speed is very low (0.23
> ns/day). On the other hand, the minimization, which I just finished,
> ran at 13 ns/day.

> Where are you getting 13 ns/day for minimization?

Yes, you are right: I meant to write heating instead of minimization. A
different heating MD that I performed ran at 13 ns/day.

I think your suggestions will help a lot. I will make the changes and
report back with any updates, so that the problem and its solution are
documented for anyone else who runs into this.

One last thing, really just an observation: I checked the sizes of my
output files and found that they are exactly the same. Is this okay?


mol_125K.crd  6446487081
mol_125K.out      430525
mol_125K.res    38732078

mol_150K.crd  6446487081
mol_150K.out      430525
mol_150K.res    38732078

mol_175K.crd  6446487081
mol_175K.out      430525
mol_175K.res    38732078

mol_200K.crd  6446487081
mol_200K.out      430525
mol_200K.res    38732078
Thanks for your time!

Best Regards


On Sun, Jul 26, 2015 at 2:51 AM, Jason Swails <jason.swails.gmail.com>
wrote:

> On Sat, Jul 25, 2015 at 5:32 AM, Amber mail <amber.auc14.gmail.com> wrote:
>
> > Dear AMBER community,
> >
> > I am running a heating MD.
> >
> > Description of the job:
> >
> > I performed 50 ps of MD simulation at 100 K. The system is then heated
> > in increments of 25 K, with 50 ps of MD at each temperature increment,
> > until the desired temperature of 310 K is reached. For each increment I
> > created another heating input file; for example, to heat the system
> > from 100 K to 125 K, the input file *heat125K.in* has the modification
> > temp0=125 & tempi=100
> > Then, to heat further, heat150K.in contains the modification
> > temp0=150 & tempi=125
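> >
> > (For reference, the whole ladder of input files can be generated with a
> > short Python script along these lines -- a sketch only; the template
> > mirrors the input file shown below, and the file names are illustrative:)
> >
> > # make_heat_inputs.py -- write one mdin per 25 K increment
> > template = """Heat
> >  &cntrl
> >   imin=0, ntx=1, irest=0,
> >   nstlim=50000, dt=0.001,
> >   ntf=2, ntc=2,
> >   tempi={tempi}.0, temp0={temp0}.0,
> >   ntpr=100, ntwx=100, cut=10.0,
> >   ntb=2, ntp=1, ntt=3, gamma_ln=1.0,
> >   nmropt=1,
> >  /
> >  &wt type='END' /
> > """
> > targets = list(range(125, 301, 25)) + [310]  # 125, 150, ..., 300, then 310 K
> > prev = 100
> > for temp0 in targets:
> >     with open("heat%dK.in" % temp0, "w") as f:
> >         f.write(template.format(tempi=prev, temp0=temp0))
> >     prev = temp0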
> >
> > The problem is that the current simulation speed is very low (0.23
> > ns/day). On the other hand, the minimization, which I just finished,
> > ran at 13 ns/day.
> >
>
> Where are you getting 13 ns/day for minimization? To compute ns/day, you
> take the amount of time taken to complete some set number of steps (say,
> 1000) to give you steps per day, then multiply by the time step expressed
> in nanoseconds (time steps are usually 1 to 2 femtoseconds). With
> minimization, there is no such thing as a time step -- it's not a dynamics
> simulation. Therefore, there is no such thing as "nanoseconds per day" for
> minimization. So it's not clear to me whether your minimization and
> dynamics efficiencies can be compared. Really what you need to compare is
> the number of steps per day. Minimization steps and dynamics steps will be
> roughly the same (although not exactly, since dynamics and minimization do
> different things besides computing energies and/or forces).
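>
> As a concrete sketch of that arithmetic (the timing numbers below are
> invented for illustration; only the formula matters):
>
> # ns/day from a timed stretch of dynamics
> steps   = 1000            # MD steps timed (hypothetical)
> seconds = 75.0            # wall-clock time for those steps (hypothetical)
> dt_ns   = 0.001 * 1.0e-3  # dt=0.001 ps in your input = 1 fs = 1e-6 ns
> ns_per_day = (steps / seconds) * 86400.0 * dt_ns
> print("%.2f ns/day" % ns_per_day)  # ~1.15 ns/day for these numbers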
>
>
> > My question is, Is it possible that specific parameters may affect the
> > speed of the simulation?
> >
>
> Certainly. A long cutoff leads to slower simulations. A shorter timestep
> leads to fewer ns/day. A stricter tolerance on things like the Ewald
> reciprocal sum or SHAKE convergence will lead to slower simulations. But
> really, each of those settings (except perhaps the SHAKE tolerance) should
> be set to the same value for both minimization and dynamics, so there
> should not be a significant difference between them.
>
> The program you use, and whether you run in serial or parallel, will also
> significantly impact how fast your simulation is (as will the quality of
> the hardware you are running on). The fastest simulations are run with a
> state-of-the-art NVIDIA GPU using pmemd.cuda. The fastest CPU calculations
> are run using pmemd.MPI on anywhere between 32 and 128 CPUs that are
> connected by a very fast interconnect. The slowest simulations are run in
> serial with sander. If you only have AmberTools and not Amber, you are
> limited to using sander on one or a small number of CPUs (between 16 and
> 64, perhaps -- sander does not scale well beyond that limit for most
> systems).
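>
> For illustration, the corresponding invocations look like this (the
> topology and coordinate file names are placeholders, not ones from your
> run):
>
> # serial sander (slowest)
> sander -O -i heat125K.in -o mol_125K.out -p mol.prmtop \
>        -c mol_100K.res -r mol_125K.res -x mol_125K.crd
> # pmemd in parallel on 32 CPU cores
> mpirun -np 32 pmemd.MPI -O -i heat125K.in -o mol_125K.out -p mol.prmtop \
>        -c mol_100K.res -r mol_125K.res -x mol_125K.crd
> # pmemd on a single GPU (fastest)
> pmemd.cuda -O -i heat125K.in -o mol_125K.out -p mol.prmtop \
>            -c mol_100K.res -r mol_125K.res -x mol_125K.crd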
>
>
> > Here is my input file
> >
> > Heat
> >  &cntrl
> >   imin=0,
> >   ntx=1,
> >   irest=0,
> >   nstlim=50000,
> >   dt=0.001,
> >
>
> This is a short timestep. It is common to use a 2 fs time step when using
> SHAKE (as you are). This will double your ns/day right there (or at least
> come close to it).
>
> >   ntf=2,
> >   ntc=2,
> >   tempi=100.0,
> >   temp0=125.0,
> >   ntpr=100,
> >   ntwx=100,
> >   cut=10.0,
> >
>
> You can also use an 8 A cutoff (which is the default). That should speed
> up your simulations without compromising very much accuracy (since PME
> ensures the full electrostatic interactions are computed regardless of the
> value of the cutoff).
>
> >   ntb=2,
> >   ntp=1,
> >   ntt=3,
> >   gamma_ln=1.0,
> >   nmropt=1,
> >  /
> >  &wt type='END' /
> >
>
> nmropt is not doing anything here. You can remove the &wt type='END' /
> and nmropt=1 lines. It probably won't have a large impact on performance,
> but it might slow the calculations down a tiny bit. (I didn't realize you
> weren't applying any weight changes or geometric restraints in your last
> email, or I would have just suggested setting nmropt=0.)
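>
> Putting these suggestions together, the heating input would look
> something like this (a sketch; everything not mentioned above is kept
> from your original, and nstlim is halved so each stage still covers
> 50 ps):
>
> Heat
>  &cntrl
>   imin=0, ntx=1, irest=0,
>   nstlim=25000,      ! 25000 x 2 fs = 50 ps per stage, as before
>   dt=0.002,          ! 2 fs time step, safe with SHAKE (ntc=2, ntf=2)
>   ntf=2, ntc=2,
>   tempi=100.0, temp0=125.0,
>   ntpr=100, ntwx=100,
>   cut=8.0,           ! the default PME cutoff
>   ntb=2, ntp=1,
>   ntt=3, gamma_ln=1.0,
>  /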
>
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Jul 26 2015 - 04:30:02 PDT