Re: [AMBER] Different production speeds at different pHs when doing explicit constant pH simulations

From: Andrew Schaub <aschaub.uci.edu>
Date: Mon, 25 Sep 2017 14:52:49 -0700

Prof Roitberg,

Can I increase ntcnstph from 100 to 1000 to increase the number of steps
in between protonation state change attempts? I am only titrating 10
residues, and plan on running the simulations for 1 microsecond, so I was
thinking I could get away with an ntcnstph value a little higher, maybe as
high as 1000. That would still attempt protonation state changes 500,000
times over the course of one microsecond. I understand that only one
residue is selected at each attempt, so each of those residues would
average 50,000 attempts over the microsecond simulation.
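The attempt counts above follow directly from the timestep and run length; a quick sketch of the arithmetic (a hypothetical helper, not an Amber tool; variable names are my own):

```python
# Back-of-the-envelope check of the protonation-attempt counts quoted above.
dt_fs = 2.0            # timestep in fs (dt=0.002 ps)
sim_ns = 1000.0        # 1 microsecond = 1000 ns
ntcnstph = 1000        # proposed steps between protonation-state attempts
n_titratable = 10      # residues being titrated

total_steps = int(sim_ns * 1e6 / dt_fs)    # 500,000,000 MD steps
attempts = total_steps // ntcnstph         # 500,000 attempts total
per_residue = attempts // n_titratable     # 50,000 attempts per residue

print(total_steps, attempts, per_residue)
```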

Thanks for the clarification on ntrelax. I reviewed the 2012 JCTC paper
again and noted that there was minimal difference between 100 fs, 200 fs,
and 2 ps for the solvent relaxation time. I had tried using a larger
ntrelax value in the hope that it might speed up the simulation, though
(if I'm understanding this right) it would have exactly the opposite
effect. I noticed that in Jason's tutorial on his home page for
explicit-solvent simulations he uses a value of 100. I'll use 200, per
your recommendation and the value in the Amber manual.
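A rough way to see why a large ntrelax hurts rather than helps: every accepted protonation move adds ntrelax solvent-relaxation steps on top of the ntcnstph production steps between attempts. This is an illustrative cost model of my own, not an Amber formula:

```python
# Sketch of the relaxation overhead: speed relative to a run in which
# no protonation moves are ever accepted (assumed cost model, not from
# the Amber code).
def slowdown(ntcnstph, ntrelax, accept_rate):
    """Expected speed as a fraction of the no-acceptance speed."""
    extra = accept_rate * ntrelax    # average relaxation steps per attempt
    return ntcnstph / (ntcnstph + extra)

# Original settings, far from any pKa (nothing accepted): full speed.
print(slowdown(100, 1000, 0.0))   # 1.0
# Original settings near a pKa (say 50% acceptance): about 6x slower.
print(slowdown(100, 1000, 0.5))   # ~0.167
# Same acceptance with ntrelax=200: only about 2x slower.
print(slowdown(100, 200, 0.5))    # 0.5
```

Under this model the pH 4 run (no titration events, 108 ns/day) versus the 10-45 ns/day seen at pHs near the pKas is consistent with the large ntrelax=1000 being the culprit.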

Explicit solvent constant pH MD
 &cntrl
   imin=0, ! Run MD, no minimization
   ntx=5, ! Read coords/vel
   irest=1, ! Restart simulation using prev. coords/vel
   ntxo=2, ! NetCDF format restrt output
   ioutfm=1, ! NetCDF format traj output
   ntpr=20000, ! Write energy every 40 ps
   ntwr=5000000, ! rewrite restart every 10 ns
   iwrap=1, ! wrap coordinates
   ntwx=20000, ! write 2500 frames over 100 ns
   nstlim=50000000, ! 100 nanoseconds
   ig=-1, ! random seed
   dt=0.002, ! 2 femtosecond timestep
   ntt=3, ! langevin dynamics for temperature scaling
   gamma_ln=5.0, ! collision frequency
   temp0=300, ! ref temp
   tempi=300, ! initial temp
   ntp=0, ! no pressure scaling
   ntc=2, ! X-H bonds constrained
   ntf=2, ! ignore X-H bond interactions
   cut=8, ! nonbonded cutoff in angstroms
   icnstph=2, ! explicit constant pH simulation

   ntrelax=200, ! run solvent relaxation dynamics (non-solvent held fixed) for 200 steps
   ntcnstph=100, ! number of steps between protonation state change attempts
   solvph=5, ! pH
   saltcon=0.1, ! 100 mM salt concentration
 /
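Since the eight runs differ only in solvph, one convenient way to generate the inputs is from a single template (a hypothetical convenience script; the filenames and template layout are my own, not from the thread):

```python
# Generate one mdin per pH from a shared template; only solvph varies.
template = """Explicit solvent constant pH MD
 &cntrl
   imin=0, irest=1, ntx=5, ntxo=2, ioutfm=1, iwrap=1,
   ntpr=20000, ntwx=20000, ntwr=5000000,
   nstlim=50000000, dt=0.002, ig=-1,
   ntt=3, gamma_ln=5.0, tempi=300, temp0=300, ntp=0,
   ntc=2, ntf=2, cut=8,
   icnstph=2, ntcnstph=100, ntrelax=200,
   solvph={ph:.1f}, saltcon=0.1,
 /
"""

# Build the input text for pHs 4 through 11; write each out as needed,
# e.g. open(f"md_pH{ph}.mdin", "w").write(text).
inputs = {ph: template.format(ph=ph) for ph in range(4, 12)}
```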

Best Regards,

Andrew Schaub


On Sun, Sep 24, 2017 at 9:15 AM, Adrian Roitberg <roitberg.ufl.edu> wrote:

> Dear Andrew,
>
> When doing constant pH simulations in explicit solvent, every time you
> accept a protonation move, there is an extra cost of doing solvent
> relaxation. This means that at pHs where you do not expect changes to
> happen, you get more throughput in terms of ns/day, while at pHs close
> to pKas, you will lose time doing the solvent relaxation.
>
>
> Btw: you are using an ntrelax value of 1000. Where did this come from?
> Our experience is that it is too large. Try 100 or 200 maybe. Look at
> the paper by Swails and myself where we try different values.
>
>
> Adrian
>
>
>
> On 9/24/17 2:47 AM, Andrew Schaub wrote:
> > Good Evening,
> >
> > I am attempting to run a simulation on a system of 400 residues, at pHs
> > 4, 5, 6, 7, 8, 9, 10, and 11. There are 51 protonatable residues. I
> > selected the 10 nearest the active site that I'm most concerned with. I
> > used the H++ server and generated models at each pH. This was done to
> > try to correct the more distant histidine protonation states; at low pH
> > I would rather see all the histidines protonated, versus at high pH.
> > All eight systems ran at roughly the same speed for the heating and
> > equilibration phases.
> >
> > Something odd is happening, though. When I run the simulation at pH 4,
> > I am getting speeds of 108 ns/day. At all other pHs (5 through 11), I
> > am getting anywhere from 10 ns/day to 45 ns/day. I thought it might be
> > a hardware issue, so I ran the pH 4 job on a different card and still
> > got normal speeds. I ran pH 4 using a fixed ig value on two different
> > nodes, and they ran at equal speeds. I ran a few of the other pHs on
> > different cards (1080, 780, Quadros...), and ran two copies with the
> > same ig values. All of my input files are identical (except for the
> > solvph parameter):
> >
> > Explicit solvent constant pH MD
> > &cntrl
> > imin=0, irest=1, ntx=5, ntxo=2,
> > ntpr=20000, ntwx=20000,
> > nstlim=50000000, dt=0.002,
> > ntt=3, tempi=300,
> > temp0=300, gamma_ln=5.0, ig=-1,
> > ntc=2, ntf=2, cut=8, iwrap=1,
> > ioutfm=1, icnstph=2, ntcnstph=100,
> > solvph=4.0, ntrelax=1000, saltcon=0.1,
> > /
> >
> >
> > I couldn't find anyone else on the listserv with a similar issue, so I
> > figured I'd post it. It appears as though in these other simulations
> > there is a pause when I do a "tail -f" on the output during the 2nd
> > section. It hangs here:
> >
> > Ewald parameters:
> >
> > I'm not sure if that's significant; the Ewald parameters are very
> > similar for all pH values...
> >
> > Ewald parameters:
> > verbose = 0, ew_type = 0, nbflag = 1, use_pme = 1
> > vdwmeth = 1, eedmeth = 1, netfrc = 1
> > Box X = 64.697 Box Y = 81.032 Box Z = 83.653
> > Alpha = 90.000 Beta = 90.000 Gamma = 90.000
> > NFFT1 = 64 NFFT2 = 84 NFFT3 = 84
> > Cutoff= 8.000 Tol =0.100E-04
> > Ewald Coefficient = 0.34864
> > Interpolation order = 4
> >
> > --------------------------------------------------------------------------------
> > 3. ATOMIC COORDINATES AND VELOCITIES
> > --------------------------------------------------------------------------------
> >
> > default_name
> > begin time read from input coords =200400.000 ps
> >
> >
> > Number of triangulated 3-point waters found: 12712
> >
> > Sum of charges from parm topology file = -0.00000016
> > Forcing neutrality...
> >
> > | Dynamic Memory, Types Used:
> > | Reals 1565040
> > | Integers 1861661
> >
> > | Nonbonded Pairs Initial Allocation: 7358787
> >
> > | GPU memory information (estimate):
> > | KB of GPU memory in use: 235748
> > | KB of CPU memory in use: 49631
> >
> >
> > If anyone else has any suggestions, I would appreciate them. I wish it
> > were possible to see or probe exactly what is causing pmemd to get
> > choked up (monitor a loop, etc.).
> >
> > Best Regards,
> >
> > Andrew
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
>
> --
> Dr. Adrian E. Roitberg
> University of Florida Research Foundation Professor
> Department of Chemistry
> University of Florida
> roitberg.ufl.edu
> 352-392-6972
>
>
>



-- 
Andrew Schaub
Graduate Program in Chemical & Structural Biology
Tsai Lab, http://www.tsailabuci.com/
Luo Lab, http://rayl0.bio.uci.edu/html/
University of California, Irvine
Irvine, CA 92697-2280
949-824-8829 (lab)
949-877-9380 (cell)
aschaub.uci.edu
Received on Mon Sep 25 2017 - 15:00:03 PDT