On May 23, 2014, at 6:18 AM, Massimiliano Porrini <m.porrini.iecb.u-bordeaux.fr> wrote:
> Hi Jason,
>
> Very useful hints, much appreciated!
>
> Below are my few comments.
>
>
> On 22 May 2014 23:42, Jason Swails <jason.swails.gmail.com> wrote:
>
>>
>> On May 22, 2014, at 3:52 PM, Massimiliano Porrini
>> <m.porrini.iecb.u-bordeaux.fr> wrote:
>>
>>> Hi all,
>>>
>>> I am attempting to run in vacuo simulations, and I would like to know
>>> which would be the best setting with the sander module.
>>>
>>> As stated on p. 60 of the Amber14 user guide, igb=6 corresponds to an in
>>> vacuo simulation, and I quote what is written there: "This option is
>>> logically equivalent to setting igb=0 and eedmeth=4, although the
>>> implementation (and computational efficiency) is not the same".
>>>
>>> It was not clear to me in which of the two cases (igb=6, or ntb=0, igb=0,
>>> eedmeth=4) the computational efficiency should improve, so I ran some
>>> tests with the three different temperature couplings available in
>>> Amber12. Here are the results:
>>>
>>> *igb=6*
>>> Berendsen: 113.7 ns/day
>>> Andersen(vrand=1000): 70.12 ns/day
>>> Langevin(gamma_ln=2): 61.79 ns/day
>>>
>>> *ntb=0, igb=0, eedmeth=4*
>>> Berendsen: 66.48 ns/day
>>> Andersen(vrand=1000): 68.94 ns/day
>>> Langevin(gamma_ln=2): 72.70 ns/day
>>>
>>> I ran a simulation of a 758-atom molecule on a blade with 8 dual-core
>>> CPUs (hence 16 cores) and hyper-threading (hence 32 threads); however, I
>>> utilised only 16 cores of that blade, with the following sander input
>>> file (example from the simulation with Berendsen and igb=6):
>>
>> First a few words of caution about running vacuum dynamics -- the
>> parameters (including the standard charge sets and am1-bcc charge
>> derivation schemes) are designed to reproduce solvent-induced polarization
>> (particularly polar solvents). So you should be careful about how you
>> approach vacuum dynamics in general (for instance, dominant protonation
>> states are completely different -- zwitterions are unstable in vacuum but
>> very stable in solution).
>>
>
> Yes, I am aware of this risk: once the solvent (the very polar water) is
> removed, the potential energy surface is totally different, and in
> principle one should recalculate the whole set of parameters for the
> solute.
> However, I just want the solute to explore the accessible microstates in
> the absence of water, and to compare the associated collision cross
> sections with ion mobility mass spectrometry experimental values.
>
>
>
>>>
>>> dyn in vacuo (igb=6)
>>> &cntrl
>>> imin = 0, irest = 1, ntx = 5,
>>> nstlim = 100000, dt = 0.001,
>>> ntc = 2, ntf = 2,
>>> cut = 1000.0,
>>> igb = 6,
>>> ntpr = 5000, ntwr = 10000, ntwx = 2000,
>>> ntt = 1,
>>> temp0 = 300.0,
>>> ioutfm = 1
>>> /
>>>
>>> Therefore I would obviously be tempted to utilise the Berendsen
>>> thermostat with igb=6, as it is the fastest one (113.7 ns/day).
>>
>> I question the use of any thermostat at all. The idea behind a thermostat
>> is that there is some "unlimited heat bath at a constant temperature"
>> surrounding the system of interest that is able to provide or absorb heat
>> energy as needed. This makes far more sense in a condensed phase
>> simulation than it does in vacuum or gas. If I were running a gas phase
>> simulation, for whatever reason, I would probably use a short time step, a
>> stringent SHAKE tolerance (if I were using SHAKE at all), and, if I didn't
>> observe acceptable energy conservation, a Berendsen thermostat with a very
>> weak coupling (a time constant on the order of 10 to 20 ps) to smooth over
>> integration errors.
>>
>
> I totally forgot to mention two things:
> 1) a simulation in the microcanonical ensemble was on my "to-do" list (I
> just sent this email prematurely), and I agree that in vacuo (no external
> bath) one should run pure Newtonian dynamics; however (and here comes the
> second point),
> 2) the simulation is not actually in "high" vacuum, as there is a buffer
> gas at relatively low pressure (with respect to atmospheric pressure) and
> at temperature T (which, in my case, is set equal to 300 K). Hence I would
> pursue the Berendsen coupling choice, with the settings you suggested
> (short time step, very tight SHAKE tolerance and weak coupling).
For case 2, I would still use a more rigorous thermostat, like the Andersen thermostat (ntt=2) or Langevin dynamics (ntt=3). The fact that "collisions" occur on the wrong time scales is actually irrelevant since no constant temperature simulation method that I am aware of preserves temporal properties -- you need to run pure Newtonian dynamics for that. So all you are getting with a thermostat is sampling from the equilibrium distribution at a constant temperature (which is more rigorously correct with ntt=2 or ntt=3).
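To make that concrete, here is a rough sketch of what a Langevin (ntt=3) gas-phase input could look like, built from your file above; the shorter time step (dt), tighter SHAKE tolerance (tol), friction (gamma_ln), and the ig=-1 seed are only illustrative choices, not values tuned to your buffer-gas conditions:

dyn in gas phase (igb=6), Langevin thermostat
 &cntrl
  imin = 0, irest = 1, ntx = 5,
  nstlim = 100000, dt = 0.0005,
  ntc = 2, ntf = 2, tol = 0.0000001,
  cut = 1000.0,
  igb = 6,
  ntpr = 5000, ntwr = 10000, ntwx = 2000,
  ntt = 3, gamma_ln = 2.0, ig = -1,
  temp0 = 300.0,
  ioutfm = 1
 /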
If performance is really critical to you, you can actually run vacuum dynamics directly on either an NVidia or ATI/AMD GPU (if you have access to one) using OpenMM. Their optimized CPU platform may also be a little faster than sander for pure vacuum simulations. There is a section about OpenMM capabilities in the ParmEd section of the Amber 14 manual, and you can find examples here: http://swails.github.io/ParmEd/examples/amber/index.html (use the GB example, but don't pass any value for implicitSolvent).
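In case it helps, below is a rough, untested sketch of the kind of OpenMM script I mean. It uses OpenMM's own Amber file readers rather than the ParmEd classes from the linked examples, and the file names, platform choice, friction, and reporter settings are all placeholders you would replace:

from simtk.openmm import app
from simtk import openmm as mm
from simtk import unit

# Load the Amber topology and coordinates (placeholder file names)
prmtop = app.AmberPrmtopFile('mol.prmtop')
inpcrd = app.AmberInpcrdFile('mol.inpcrd')

# No implicitSolvent argument -> plain gas-phase (vacuum) energetics;
# no cutoff, with SHAKE-like constraints on bonds to hydrogen
system = prmtop.createSystem(nonbondedMethod=app.NoCutoff,
                             constraints=app.HBonds)

# Langevin thermostat at 300 K (friction and step size are illustrative)
integrator = mm.LangevinIntegrator(300.0*unit.kelvin,
                                   2.0/unit.picosecond,
                                   1.0*unit.femtoseconds)

# Pick a platform explicitly if you like: 'CUDA', 'OpenCL', or 'CPU'
platform = mm.Platform.getPlatformByName('CPU')

simulation = app.Simulation(prmtop.topology, system, integrator, platform)
simulation.context.setPositions(inpcrd.positions)

# Report energies/temperature and write a DCD trajectory
simulation.reporters.append(app.StateDataReporter('gas.log', 5000,
    step=True, potentialEnergy=True, temperature=True))
simulation.reporters.append(app.DCDReporter('gas.dcd', 2000))

simulation.step(100000)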
>
>
>
>>> But, given that Berendsen coupling is somewhat deprecated for several
>>> reasons (Amber14 user guide, p. 290), especially for simulations in
>>> implicit solvent, in which a uniform temperature is not guaranteed
>>> because collisions with water molecules are missing, do you think that it
>>> would be a sensible choice to use Berendsen coupling for in vacuo
>>> simulations?
>>>
>>> My answer would be yes for two reasons:
>>> 1) performance efficiency
>>> 2) with in vacuo simulations one does not actually want
>>> any "random kick" (like it occurs with Andersen or Langevin
>>> coupling) that would emulate the collisions with the solvent.
>>
>> The weak coupling algorithm (for either temperature or pressure) has
>> well-documented shortcomings and is fundamentally _known_ not to sample
>> from the desired distribution. For one thing, fluctuations are too small,
>> and the thermostat doesn't even keep correct partitioning between
>> translational, rotational, and vibrational motion (the "flying ice cube"
>> effect in explicit solvent). I would personally not use the Berendsen
>> thermostat here except as I described above.
>>
>
> I was aware of these Berendsen thermostat drawbacks too; this is one of the
> main reasons why I wanted suggestions about running dynamics in the gas
> phase (not actually vacuum).
>
>
>
>>
>>>
>>> Am I right/wrong?
>>> Any comments/hints on this issue would be very appreciated.
>>>
>>> Many thanks!
>>>
>>> PS Just to be meticulous, by running one step of minimisation I have also
>>> checked whether the potential functions utilised were actually the same
>>> between the igb=6 and the igb=0, ntb=0, eedmeth=4 settings, and they
>>> match perfectly :) as you can see below (therefore igb=6 with Berendsen
>>> is actually the most efficient implementation):
>>
>> The difference is, as the manual states, in the implementation. I would
>> use igb=6 when doing gas-phase energy calculations since the electrostatic
>> code used to compute the energy in that case respects the exclusion list
>> stored in the topology file whereas the PME code (eedmeth=4, igb=0)
>> computes the exclusions from the bond list. This only really matters if
>> you add custom exclusions with ParmEd.
>>
>
> I am slightly puzzled here, in that I cannot use PME where there are no
> periodic boundaries (as for molecules in the gas phase).
> In fact, if igb > 0 then ntb = 0 by default (thus the sander module does
> not use the PME algorithm).
> Therefore, if one wants to use exactly the same setting, one needs to set
> ntb = 0 when using [eedmeth=4, igb=0]; otherwise ntb defaults to 1
> (for igb=0).
> As a matter of fact, my simulation with [eedmeth=4, igb=0] terminated
> because pmemd complained about the missing box.
Sorry for the confusion. I will try to clarify. There are two different code paths for computing electrostatic interactions -- the one used by PME and Ewald calculations and the one used by GB calculations. Some of the implementation details are that the first one uses a pairlist while the second one does not; the first one computes its own exclusion list from the bond, angle, and torsion lists while the second one does not (it uses the one stored in the prmtop file).
When you set eedmeth=4 and igb=0, the electrostatic interaction is computed by going through the PME code path, but instead of using point charge + Gaussian charge distributions, it just uses the point charges and omits the reciprocal sum -- exactly equivalent to gas phase, non-periodic energies. If igb=6 and ntb=0, the gas phase energies are computed through the GB code path, but the calculation of the effective GB radii is skipped, along with the polarization energy calculation.
Some consequences: vacuum calculations with a cutoff (which are a _horrible_ idea in general) are probably faster with eedmeth=4 than with igb=6 (since it uses a pairlist, I think). Simulations without a cutoff are probably faster with igb=6, since there is no pairlist-building overhead (although atom sorting in a pairlist _could_ minimize cache misses and improve performance). You can probably improve the performance of eedmeth=4 by setting skinnb=99999 in the &ewald namelist (which effectively disables pairlist rebuilds, which are unnecessary in the absence of a cutoff). Again, as the manual states, these are all implementation details...
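To put that last suggestion in input-file form, a sketch of the eedmeth=4 route with the pairlist skin opened up might look like the following (the skinnb value just echoes the number above, the thermostat settings are incidental, and I have not benchmarked these exact values):

dyn in gas phase (ntb=0, igb=0, eedmeth=4)
 &cntrl
  imin = 0, irest = 1, ntx = 5,
  nstlim = 100000, dt = 0.001,
  ntc = 2, ntf = 2,
  ntb = 0, igb = 0,
  cut = 1000.0,
  ntpr = 5000, ntwr = 10000, ntwx = 2000,
  ntt = 3, gamma_ln = 2.0,
  temp0 = 300.0,
  ioutfm = 1
 /
 &ewald
  eedmeth = 4, skinnb = 99999.0,
 /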
All the best,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri May 23 2014 - 05:30:02 PDT