Re: [AMBER] pmemd.CUDA.mpi micro-second long simulation

From: <psu4.uic.edu>
Date: Tue, 4 Mar 2014 12:40:14 -0600

Dear Professors Walker and Simmerling,

    Thanks for your kind comments.

    Regarding the NaN issue: we mean that the system energy blows up during
the simulation. You mention in this post
(http://archive.ambermd.org/201103/0431.html) that "In my opinion this is
a better option than using langevin for the entire simulation as all of the
issues with simulation problems, NANs seen on the GPUs etc arise from
running long ntt=3 simulations. I would even typically run NPT (if I thought
my system would change shape) *using ntt=1* once the system is
equilibrated." Our experience using pmemd.cuda.MPI together with the
Langevin thermostat suggests NaNs are not uncommon in simulations a few
hundred nanoseconds long. That is why we follow your advice and use ntt = 1
with NPT in pmemd.cuda.MPI for long simulations.
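
    For concreteness, the production input we have in mind for item g below
is a minimal variant of the NPT equilibration input, switched to ntt = 1 and
taup = 10. The tautp value, segment length, and output intervals here are our
own placeholders, not settings taken from this thread:

&cntrl
  imin = 0, irest = 1, ntx = 5,
  ntb = 2, ntp = 1, pres0 = 1.0, taup = 10.0,  ! slower Berendsen pressure coupling
  temp0 = 310.0, ntt = 1, tautp = 10.0,        ! weak-coupling thermostat instead of Langevin
  ntc = 2, ntf = 2, cut = 8.0, dt = 0.002,
  nstlim = 50000000,                           ! 100 ns per segment; chain segments for microseconds
  ntpr = 10000, ntwx = 25000, ntwr = 500000,
  iwrap = 1, ioutfm = 1, ig = -1,
 &end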

    "For example you could run the protein itself for 500ns or so and
extract geometries every 20ns or so and use these as the seed structures
for the binding simulations." In this example, my understanding is to run
500ns simulation, and extract 25 seed structures. Followingly, use these
25 seed structures to start another 25 simulations?
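
    For reference, this is roughly how we would extract the seeds with
cpptraj, assuming frames are written every 50 ps (ntwx = 25000 at dt = 0.002)
so that 500 ns gives 10,000 frames; the file names are placeholders:

# extract_seeds.in: write one Amber restart file per seed, 20 ns apart
parm protein_wat.prmtop
trajin prod_apo.nc 400 10000 400    # frames 400, 800, ..., 10000 -> 25 seeds
trajout seed.rst7 restart           # writes seed.rst7.<frame#>, one file per seed
run

Since these restarts carry coordinates only, each binding simulation would
then start with irest = 0, ntx = 1 and fresh velocities (ig = -1).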

    Cheers,
    Henry


On Mon, Feb 24, 2014 at 9:26 AM, Ross Walker <ross.rosswalker.co.uk> wrote:

> Hi Henry,
>
> Please see my answers inline below.
>
>
>
> On 2/23/14, 10:58 PM, "psu4.uic.edu" <psu4.uic.edu> wrote:
>
> >Dear Amber community,
> >
> >
> >
> > It will be interesting to see whether a non-guided drug molecule can find
> >its protein target binding site(s) and/or allosteric sites if we can run
> >several microsecond-long simulations using pmemd.cuda.MPI, similar to
> >this study.
> >
> >
> >
> >http://pubs.acs.org/doi/abs/10.1021/ja202726y
> >
> >
> >
> > Unfortunately, there are not many method details reported in the
> >manuscript to follow up on. We are drawing on other long-simulation
> >examples from the Amber community and from D. E. Shaw's and V. S. Pande's
> >publications. The proposed settings are below. Would the community
> >kindly offer some comments?
> >
>
> There is no reason why this should not work. Note that the performance and
> the way it works would, I suspect, be very dependent on the concentration
> of drug molecules in the system. Also note that you'll likely want to run multiple
> simulations ideally from independent initial equilibrium geometries for
> better sampling. For example you could run the protein itself for 500ns or
> so and extract geometries every 20ns or so and use these as the seed
> structures for the binding simulations.
>
> Note I'm not entirely sure what these simulations ultimately give you -
> they are perhaps useful for identifying allosteric sites or potential
> intermediates in the binding process. They don't, however, give you free
> energy or timescale information, so be aware of that. They can be used to
> make nice movies though. :)
>
> >
> >
> >a. Force field: ff12SB. It seems to provide good protein stability in
> >Professor Case's studies.
> >
> >
> >
> >http://archive.ambermd.org/201211/0363.html
> >
> >
>
> This is probably a good choice, yes. Note you also need parameters for
> the ligand. GAFF is probably the most reasonable (only?) choice for this
> unless you parameterize each ligand manually. Note that since charge is
> likely very important here, you probably want to take the time to do a
> multi-conformational RESP fit on each ligand rather than relying on AM1-BCC.
>
> >
> >b. pmemd.cuda.MPI precision model: SPFP
>
> Should be good - and has the advantage of being deterministic. So you
> could always rerun the simulation with a lower value of NTWX if you want
> more 'resolution' in the trajectory file.
>
>
> >c. solvent: TIP3P water in a truncated octahedron box with a 10 Å buffer
>
> Probably good although some people prefer TIP4PEW.
>
> >
> >d. Minimization:
> >
> >&cntrl
> >  imin = 1,
> >  ntx = 1,
> >  maxcyc = 2000,
> >  ntmin = 2,
> >  ntpr = 100,
> >  ntf = 1,
> >  ntc = 1,
> >  ntb = 1,
> >  cut = 8.0,
> > &end
> >
>
> Seems OK, but use the CPU code for the minimization since it is more robust
> when it comes to initially strained structures (or use the CUDA SPDP or
> DPDP precision models).
>
> >
> >
> >e. Equi 1
> >
> >&cntrl
> >  imin = 0,
> >  irest = 0,
> >  ntx = 1,
> >  ntb = 1,
> >  cut = 8.0,
> >  ntr = 1,
> >  ntc = 2,
> >  ntf = 2,
> >  tempi = 0.0,
> >  temp0 = 310.0,
> >  ntt = 3,
> >  gamma_ln = 2.0,
> >  nstlim = 50000,
> >  dt = 0.002,
> >  ntpr = 1000,
> >  ntwx = 25000,
> >  ntwr = 25000,
> >  restraint_wt = 10.0,
> >  restraintmask = '${protein-ligand-mask}',
> >  iwrap = 1,
> >  ioutfm = 1,
> >  ig = -1,
> > &end
>
>
> Seems reasonable to me - note you want to switch to constant pressure as
> soon as possible to prevent vacuum bubbles, so you might want to heat to
> just 100 K or so under NVT before finishing the heating with NPT.
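
    (To make the two-stage heating concrete, here is a rough sketch of the
second stage we would run: restart from the ~100 K NVT stage and finish
heating to 310 K under NPT. The stage length and the reuse of the positional
restraints are our own guesses, not settings from this thread.)

&cntrl
  imin = 0, irest = 1, ntx = 5,               ! coordinates + velocities from the ~100 K NVT stage
  ntb = 2, ntp = 1, pres0 = 1.0, taup = 2.0,  ! constant pressure from here on
  temp0 = 310.0, ntt = 3, gamma_ln = 2.0,     ! thermostat target 310 K; system heats up from ~100 K
  ntc = 2, ntf = 2, cut = 8.0, dt = 0.002,
  nstlim = 50000,                             ! placeholder length (100 ps)
  ntr = 1, restraint_wt = 10.0,
  restraintmask = '${protein-ligand-mask}',
  ntpr = 1000, ntwx = 25000, ntwr = 25000,
  iwrap = 1, ioutfm = 1, ig = -1,
 &end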
>
>
> >f. NPT equilibration, ntt = 3
> >
> >&cntrl
> >  imin = 0,
> >  irest = 1,
> >  ntx = 5,
> >  ntb = 2,
> >  ntp = 1,
> >  pres0 = 1.0,
> >  taup = 2.0,
> >  cut = 8.0,
> >  ntr = 0,
> >  ntc = 2,
> >  ntf = 2,
> >  temp0 = 310.0,
> >  tempi = 310.0,
> >  ntt = 3,
> >  gamma_ln = 2.0,
> >  nstlim = 50000,
> >  dt = 0.002,
> >  ntpr = 1000,
> >  ntwx = 25000,
> >  ntwr = 25000,
> >  iwrap = 1,
> >  ioutfm = 1,
> >  ig = -1,
> > &end
>
> Also looks good - run this for long enough to make sure the density
> equilibrates.
>
> >g. NPT production run: the same as "equi 2" (the NPT equilibration above)
> >but change to ntt = 1, taup = 10 to avoid the NaN issue.
>
> What's the NAN issue? - I wasn't aware of a problem with ntt=3 and NTP.
>
> One thing to note though is that you are probably best using the Langevin
> thermostat for the production run for better diffusion, BUT note that the value
> of gamma_ln essentially acts as a viscosity - the higher it is the more
> viscous the system effectively is. Thus you may find there are optimum
> values of gamma_ln so you might want to play around with a few simulations
> run with different gamma_ln values and see if there is a difference.
>
> Note if you switch to AMBER 14 in a few months when it is released you can
> use the Monte Carlo barostat, which will give you NPT performance similar to
> NVT/NVE, and across two GPUs you will get significantly better scaling if
> the motherboard supports peer-to-peer over PCI-E 3.0.
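
    (For reference, our reading of the AMBER 14 option: the production &cntrl
block stays the same except that the barostat is switched to Monte Carlo via
the barostat flag. The surrounding values are placeholders of ours.)

&cntrl
  imin = 0, irest = 1, ntx = 5,
  ntb = 2, ntp = 1, pres0 = 1.0,
  barostat = 2,                         ! Monte Carlo barostat (AMBER 14); 1 = Berendsen (default)
  temp0 = 310.0, ntt = 3, gamma_ln = 2.0,
  ntc = 2, ntf = 2, cut = 8.0, dt = 0.002,
  nstlim = 50000000,
  ntpr = 10000, ntwx = 25000, ntwr = 500000,
  iwrap = 1, ioutfm = 1, ig = -1,
 &end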
>
> All the best
> Ross
>
> /\
> \/
> |\oss Walker
>
> ---------------------------------------------------------
> | Associate Research Professor |
> | San Diego Supercomputer Center |
> | Adjunct Associate Professor |
> | Dept. of Chemistry and Biochemistry |
> | University of California San Diego |
> | NVIDIA Fellow |
> | http://www.rosswalker.co.uk | http://www.wmd-lab.org |
> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> ---------------------------------------------------------
>
> Note: Electronic Mail is not secure, has no guarantee of delivery, may not
> be read every day, and should not be used for urgent or sensitive issues.
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Pin-Chih Su (Henry Su)
Ph.D. candidate
Center for Pharmaceutical Biotechnology (MC 870)
College of Pharmacy, University of Illinois at Chicago
900 South Ashland Avenue, Room 1052
Chicago, IL 60607-7173
office      312-996-5388
fax         312-413-9303
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber