Dear Andrew,
When doing constant pH simulations in explicit solvent, every time a
protonation move is accepted there is an extra cost of doing solvent
relaxation. This means that at pHs where you do not expect protonation
changes to happen you will see more throughput in terms of ns/day, while
at pHs close to the pKas of your titratable residues you will lose time
doing the solvent relaxation.
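As a rough back-of-the-envelope check (assuming every attempt is accepted
and a relaxation step costs about as much as a regular MD step):

  extra relaxation steps per MD step ~ ntrelax / ntcnstph = 1000 / 100 = 10
  worst-case slowdown                ~ (ntcnstph + ntrelax) / ntcnstph = 11x

which is about the size of the gap between your 108 ns/day at pH 4 and the
10-45 ns/day you see at the other pHs.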
Btw: you are using an ntrelax value of 1000. Where did that come from?
In our experience that is too large; try 100 or 200. Look at the paper by
Swails and myself where we tried different values.
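For example, keeping everything else in your input the same, the constant
pH lines would just become (ntrelax=200 here is only a starting point to
test, not a tuned value):

  icnstph=2, ntcnstph=100,
  solvph=4.0, ntrelax=200, saltcon=0.1,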
Adrian
On 9/24/17 2:47 AM, Andrew Schaub wrote:
> Good Evening,
>
> I am attempting to run a simulation on a system of 400 residues at pHs 4,
> 5, 6, 7, 8, 9, 10, and 11. There are 51 protonatable residues; I selected the
> 10 nearest the active site that I'm most concerned with. I used the H++
> server and generated models at each pH. This was done to try to correct
> the protonation states of the more distant histidines: at low pH I would
> rather see all the histidines protonated than at high pH. All eight systems
> ran at roughly the same speed during the heating and equilibration phases.
>
> Though something odd is happening. When I run the simulation at pH 4, I am
> getting speeds of 108 ns/day. At all other pHs (5 through 11), I am getting
> anywhere from 10 ns/day to 45 ns/day. I thought it might be a hardware issue,
> so I ran the pH 4 job on a different card and still got normal speeds. I ran
> pH 4 with a fixed ig value on two different nodes, and they ran at equal
> speeds. I ran a few of the other pHs on different cards (1080, 780,
> Quadros...), and ran two copies with the same ig. All of my input files are
> identical (except for the solvph parameter):
>
> Explicit solvent constant pH MD
> &cntrl
> imin=0, irest=1, ntx=5, ntxo=2,
> ntpr=20000, ntwx=20000,
> nstlim=50000000, dt=0.002,
> ntt=3, tempi=300,
> temp0=300, gamma_ln=5.0, ig=-1,
> ntc=2, ntf=2, cut=8, iwrap=1,
> ioutfm=1, icnstph=2, ntcnstph=100,
> solvph=4.0, ntrelax=1000, saltcon=0.1,
> /
>
>
> I couldn't find anyone else on the listserv with a similar issue, so I figured
> I'd post it. It appears as though in these other simulations there is a
> pause when I do a "tail -f" on the output during the 2nd section. It hangs
> here:
>
> Ewald parameters:
>
> I'm not sure if that's significant. The Ewald parameters are very similar
> for all pH values...
>
> Ewald parameters:
> verbose = 0, ew_type = 0, nbflag = 1, use_pme = 1
> vdwmeth = 1, eedmeth = 1, netfrc = 1
> Box X = 64.697 Box Y = 81.032 Box Z = 83.653
> Alpha = 90.000 Beta = 90.000 Gamma = 90.000
> NFFT1 = 64 NFFT2 = 84 NFFT3 = 84
> Cutoff= 8.000 Tol =0.100E-04
> Ewald Coefficient = 0.34864
> Interpolation order = 4
>
> --------------------------------------------------------------------------------
> 3. ATOMIC COORDINATES AND VELOCITIES
> --------------------------------------------------------------------------------
>
> default_name
> begin time read from input coords =200400.000 ps
>
>
> Number of triangulated 3-point waters found: 12712
>
> Sum of charges from parm topology file = -0.00000016
> Forcing neutrality...
>
> | Dynamic Memory, Types Used:
> | Reals 1565040
> | Integers 1861661
>
> | Nonbonded Pairs Initial Allocation: 7358787
>
> | GPU memory information (estimate):
> | KB of GPU memory in use: 235748
> | KB of CPU memory in use: 49631
>
>
> If anyone else has any suggestions, I would appreciate it. I wish it were
> possible to see or probe what exactly is causing pmemd to get choked up,
> monitor a loop, etc.
>
> Best Regards,
>
> Andrew
--
Dr. Adrian E. Roitberg
University of Florida Research Foundation Professor
Department of Chemistry
University of Florida
roitberg.ufl.edu
352-392-6972
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Sep 24 2017 - 09:30:02 PDT