Re: [AMBER] memory issue in mmpbsa_py_nabnmode

From: Jason Swails <jason.swails.gmail.com>
Date: Thu, 12 Feb 2015 17:44:14 -0500

> On Feb 12, 2015, at 5:14 PM, Marek Maly <marek.maly.ujep.cz> wrote:
>
> Hi Jason,
>
> thank you for your comments and for the thorough explanation!
>
> Please see my remaining questions (inline below).
>
>
>
> On Thu, 12 Feb 2015 14:48:49 +0100, Jason Swails <jason.swails.gmail.com>
> wrote:
>
>>
>>> On Feb 10, 2015, at 10:49 AM, Marek Maly <marek.maly.ujep.cz> wrote:
>>>
>>> Dear David and Jason,
>>>
>>> first of all thank you for the really quick response !
>>>
>>> Since it seems there really is a memory bug inside the
>>> "mmpbsa_py_nabnmode" code, which may not be so "trivial" to identify
>>> (let alone fix), especially for an ordinary Amber user like me,
>>> I am sending you the relevant files off this mailing list so that you
>>> can (i) reproduce the issue I reported and (ii) analyze the problem
>>> (surely far more effectively than I could).
>>>
>>> Of course, both machines mentioned run 64-bit Linux, and the
>>> compilation was done with these compilers:
>>>
>>> gcc version 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) (my machine)
>>> gcc version 4.8.3 (Gentoo 4.8.3 p1.1, pie-0.5.9) (cluster)
>>>
>>> using 64-bit libraries (otherwise it would, among other things, be
>>> impossible for the first nmode phase (right after the initial
>>> minimization) to run for several hours with a 24 GB RAM footprint).
>>>
>>> Regarding drms, 0.05 was set just to avoid wasting much time on the
>>> minimization while analyzing the problem in the nmode phase. I know
>>> the recommended values you mentioned (i.e. at least 1e-6), but for
>>> bigger systems that can be a "killing" requirement ... so in such
>>> cases I use 0.01.
>>>
>>> The problem is that in the current version of mmpbsa_py, all the work
>>> (minimization + nmode) on a given frame is done using just 1 CPU core
>>> (or even less if multithreading is active).
>>>
>>> So the only way to overcome this "weakness" in the current version is
>>> probably to minimize the selected snapshots in parallel (before the
>>> mmpbsa_py analysis) using sander, so as to reach a drms of 1e-6 or
>>> less in an acceptable time (especially for bigger molecular systems).
>>> Am I right?
>>
>> Yes, but then you can’t use those frames to compute implicit solvent
>> energies, since they will no longer have a Boltzmann distribution (you
>> have taken them *off* the free energy surface and dropped them to the
>> potential energy surface).
>
> Even if, for the sander/pmemd minimization, the igb parameter is set to
> a suitable value (the same as nmode_igb in mmpbsa.py), so that the
> minimization includes the same type of implicit solvent effect that
> mmpbsa.py uses (e.g. igb=nmode_igb=1)?
>
> If the problem you described persists (even when the proper igb value is
> used during the sander/pmemd minimization), how is it possible that
> minimization of the frames directly by mmpbsa.py is OK, while
> minimization done with sander/pmemd is problematic because the frames
> are "taken *off* the free energy surface and dropped to the potential
> energy surface"?
>
> What is the difference between the two possible minimization approaches?

There might be a misunderstanding here. MMPBSA.py only minimizes the snapshots before running normal mode analysis. The problem is if you run minimization in sander, then do the MM/GBSA or MM/PBSA part (not the normal mode part). For normal modes, it’s fine, but you can’t use the same trajectory for normal modes AND GBSA/PBSA.
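For reference, the convergence of that pre-normal-mode minimization can be controlled directly in the MMPBSA.py input via the &nmode namelist (maxcyc and drms are the relevant options). The values below are purely illustrative, not a recommendation:

```
&general
   startframe=1, endframe=50, interval=5,
/
&gb
   igb=1,
/
&nmode
   nmstartframe=1, nmendframe=50, nminterval=5,
   maxcyc=50000, drms=0.000001,
   nmode_igb=1, nmode_istrng=0.1,
/
```

This way the frames used for GB/PB stay unminimized, while the normal-mode frames are minimized internally to the requested drms.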

>
>
>>
>>> Anyway, why exactly are such low drms values (at least 1e-6)
>>> recommended before the nmode analysis?
>>
>> This has to do with the very nature of the normal mode approximation
>> itself. It is based on diagonalizing the Hessian matrix and treating
>> each of the eigenvectors as an orthogonal “vibrational” mode that is
>> approximately harmonic at the minimum, and using the frequency
>> (eigenvalue) -- which is related to the force constant of the harmonic
>> well -- to calculate the vibrational entropy for that mode.
>>
>> Now there are 6 modes (5 for linear molecules) that represent purely
>> translational or purely rotational motions -- these should have
>> eigenvalues of 0, ideally, at the true minimum (since there is no
>> potential energy well to rigid body translation or rotation in the
>> absence of external fields). Other than those 6 modes, there will be
>> modes with low frequencies (small force constants) that contribute
>> greatly to the entropy, up to those with very high force constants that
>> contribute very little.
>>
>> So the modes that contribute *most* to the entropy are those that are
>> closest to 0. These modes are the *most* sensitive to deviations from
>> the minimum, and very small changes to these eigenvalues can result in
>> much *larger* changes to their contributions to the entropy. That’s one
>> overarching problem with normal modes -- the most imprecise eigenvalues
>> contribute the most to the final answer. (Another, of course, being that
>> biomolecules are “soft” and globular and therefore do not rotate
>> rigidly.)
>>
>> If you are far enough away from the minimum, then more than just those 6
>> omitted modes will have negative eigenvalues (imaginary frequencies) --
>> in those cases, nab omits those modes from the entropy calculation
>> altogether... but those modes *would* be among the largest contributors
>> if the structure was more fully minimized, so you can see where the big
>> errors begin to creep in.
>
> So, if I understood well: if I find (in the nmode output file) records
> like the one below, where evidently just 6 modes are omitted from the
> vibrational analysis, the calculation is OK; whereas if more than 6
> omitted modes are listed, the structure was not minimized enough and the
> final results are not reliable. Did I understand you correctly?

If more are omitted, that’s bad, for sure. But the low-frequency modes (like mode 7) may still change when the convergence criterion is tightened, so it’s worth comparing the two results if you’ve done both calculations.
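To see numerically why the near-zero modes dominate the entropy and why they are so sensitive, here is a short, self-contained Python sketch (not Amber code) of the standard harmonic-oscillator vibrational entropy of a single mode:

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34       # Planck constant, J*s
KB = 1.380649e-23        # Boltzmann constant, J/K
C_CM = 2.99792458e10     # speed of light, cm/s
R = 8.314462618          # gas constant, J/(mol*K)

def vib_entropy(wavenumber_cm, temperature=298.15):
    """Harmonic-oscillator vibrational entropy, in J/(mol*K), of one
    normal mode with the given wavenumber (cm^-1).  It diverges as the
    wavenumber approaches 0, which is why the lowest modes dominate."""
    x = H * C_CM * wavenumber_cm / (KB * temperature)
    return R * (x / (math.exp(x) - 1.0) - math.log1p(-math.exp(-x)))

# A 5 cm^-1 mode contributes tens of J/(mol*K); a 1000 cm^-1 mode a
# fraction of one.  Shifting the low mode by 1 cm^-1 moves the total
# entropy far more than the same shift applied to the high mode.
print(vib_entropy(5.0), vib_entropy(1000.0))
```

Modes with negative eigenvalues (imaginary frequencies) have no real wavenumber at all, so nab drops them, and exactly those would otherwise be the largest contributors.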

HTH,
Jason

--
Jason M. Swails
Postdoctoral Researcher
BioMaPS, Rutgers University
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Feb 12 2015 - 15:00:03 PST