Re: [AMBER] memory issue in mmpbsa_py_nabnmode

From: Marek Maly <marek.maly.ujep.cz>
Date: Tue, 17 Feb 2015 21:09:52 +0100

Hi Jason,

thanks for your advice regarding the mmpbsa_entropy.nab modification; I am
going to test it immediately and will report the result here once the test
is done. I have to say, though, that in the perl version the calculation
used (after the initial minimization) very similar memory (ca. 23-24 GB)
to the python version.

It is possible that at some point the memory requirements are even higher
(at the moment of the Hessian diagonalization), but could they really be
higher than 130 GB (for a 12k-atom system)?
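
To get a feel for the numbers, here is my own rough back-of-the-envelope
estimate. It assumes a dense 3N x 3N double-precision Hessian and the
workspace sizes documented for the LAPACK routines you mentioned (dsyevd
with eigenvectors: LWORK = 1 + 6N + 2N^2; dsyev: LWORK = 3N-1); the exact
accounting inside the NAB programs may of course differ, so take it only
as a sketch:

    # hessian_mem_estimate.py -- rough estimate only, not the exact NAB/LAPACK accounting
    natm = 12000                  # atoms in my complex
    n = 3 * natm                  # Hessian dimension (3N x 3N)
    b = 8                         # bytes per double
    gib = 1024.0 ** 3

    hessian     = n * n * b                      # the dense Hessian itself
    dsyev_work  = (3 * n - 1) * b                # dsyev: O(N) scratch space
    dsyevd_work = (1 + 6 * n + 2 * n * n) * b    # dsyevd: O(N^2) scratch space

    print("Hessian          : %6.1f GiB" % (hessian / gib))      # ~ 9.7 GiB
    print("dsyev  workspace : %6.3f GiB" % (dsyev_work / gib))   # ~ 0.001 GiB
    print("dsyevd workspace : %6.1f GiB" % (dsyevd_work / gib))  # ~ 19.3 GiB

If this estimate is roughly right, dsyevd needs roughly 20 GB of scratch on
top of the ~10 GB Hessian for a 12k-atom system, i.e. around 30 GB in total
- a lot, but still nowhere near 130 GB, unless additional copies of the
matrix are kept that I am not accounting for here.
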
Another thing is that, as I already wrote you, I monitored the memory usage
at 1-second intervals using "sar" from the sysstat.x86_64 package, and no
extra RAM spikes were apparent during the python run (I have not monitored
the perl run yet): the level was ca. 700 MB during the minimization, then
24 GB during the nmode analysis (for ca. 11 hours), then the crash.

Regarding the differences between the python and perl versions on my small
test molecular system: I already listed all the available "mm_options" at
the end of my previous email, together with the settings and results, so
here they are again. If you see anything suspicious here, please let me
know.


HERE ARE THE PARAMETER RECORDS FROM THE OUT FILES:

***********MMPBSA.py.MPI settings records from _MMPBSA_complex_nm.out.0

         Parameter topology includes 10-12 terms:
         These are assumed to be zero here (e.g. from TIP3P water)
        mm_options: ntpr=10000
        mm_options: diel=C
        mm_options: kappa=0.000000
        mm_options: cut=1000
        mm_options: gb=1
        mm_options: dielc=1.000000
        mm_options: temp0=298.150000

*******mm_pbsa.pl settings records from nmode_com.1.out

        mm_options: ntpr=50
        mm_options: nsnb=999999
        mm_options: cut=999.
        mm_options: diel=C
        mm_options: gb=1
        mm_options: rgbmax=999.
        mm_options: gbsa=1
        mm_options: surften=0.0072
        mm_options: epsext=78.3
        mm_options: kappa=0


BTW, you spoke about the GB influence here, but I also ran a test (which I
did not mention before) with nmode_igb = IGB = 0, i.e. using a
distance-dependent dielectric constant, and there was a really huge
difference in the results. Now that I have checked the mm_options here as
well, the source of this problem is clear:

python version: "mm_options: dielc=1.000000"
perl version:   "mm_options: dielc=4"

So for some reason the python and perl versions have quite different
defaults for the dielc parameter here.
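
(If I understand the distance-dependent dielectric correctly, the effective
dielectric is eps(r) = dielc * r_ij, so each pairwise electrostatic term
becomes roughly

    E_elec(i,j) = q_i * q_j / (dielc * r_ij^2)

i.e. dielc=1 vs. dielc=4 scales every electrostatic contribution by a
factor of 4, which alone would easily explain the huge difference I saw -
but please correct me if I am misreading the option.)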

So thanks again for your support; I will report my progress here.

    Best wishes,

        Marek





On Tue, 17 Feb 2015 19:17:35 +0100, Jason Swails <jason.swails.gmail.com>
wrote:

>
>> On Feb 17, 2015, at 10:30 AM, Marek Maly <marek.maly.ujep.cz> wrote:
>>
>> Hello,
>>
>> #1
>> I succeeded in solving my problem with the entropy analysis of my big
>> system (see below) using the "obsolete" mm_pbsa.pl (PROC = 1, i.e. the
>> NAB implementation of nmode - mm_pbsa_nabnmode); there was no chance
>> with PROC = 2, i.e. the "original" nmode implementation.
>>
>> This means that the problem with MMPBSA.py.MPI (or just MMPBSA.py) which
>> I described earlier (see below) is connected with either i) the Python
>> "skeleton" or ii) "mmpbsa_py_nabnmode" itself, if there are bigger
>> differences compared to "mm_pbsa_nabnmode". Based on my experience/tests
>> I assume that i) is true.
>
> Actually it’s almost definitely not i). MMPBSA.py and mm_pbsa.pl are
> glorified scripts that basically organize the various tasks that need to
> be done, call external programs, and parse the results.
>
> There is only one *real* functional difference between mm_pbsa.pl and
> MMPBSA.py’s nmode NAB programs. MMPBSA.py uses dsyevd to diagonalize
> the Hessian (which is faster, but takes quite a bit more memory), and
> mm_pbsa.pl uses dsyev (which is slower, but takes quite a bit less
> memory).
>
> This is almost certainly why mm_pbsa.pl works for your large system and
> MMPBSA.py does not. If you want, you can modify mmpbsa_entropy.nab
> inside $AMBERHOME/AmberTools/src/mmpbsa_py and change the line:
>
> nmode(xyz, 3*natm, mme2, 0, 1, 0.0, 0.0, 0); //calc entropy
> to
>
> nmode(xyz, 3*natm, mme2, 0, 0, 0.0, 0.0, 0); //calc entropy
>
> (note the 1-->0 change). I *suspect* that will make things work (given
> that mm_pbsa.pl works). I will look more into the scratch memory
> requirements of dsyevd vs. dsyev and see if I can make a smarter default
> that will use the faster routine only when there is enough memory.
>
>> #2
>> The problem is that mm_pbsa.pl (PROC = 1, i.e. the NAB implementation of
>> nmode - mm_pbsa_nabnmode) provides significantly different results than
>> MMPBSA.py.MPI or MMPBSA.py, as I tested on my small "testing" molecular
>> system (just one single frame), despite the fact that I set the same
>> input parameters in both cases (please see below).
>>
>> I would be grateful for any useful suggestions on what to change/add in
>> the mm_pbsa.pl
>
> Ideally you can just change the mmpbsa_entropy.nab script in MMPBSA.py
> as shown above and have that *just work*.
>
>> settings/(input file) to obtain similar results from both mmpbsa
>> routines. I hope it is just a matter of different defaults for some
>> additional parameters, and that for the same input both NAB routines
>> ("mmpbsa_py_nabnmode", "mm_pbsa_nabnmode") should return very similar
>> output.
>
> I agree that they should give similar output. In the early days when
> MMPBSA.py was first written, it was compared carefully to mm_pbsa.pl to
> make sure that it gave comparable results with very similar (if not
> identical) defaults. However, mmpbsa_py_nabnmode and mm_pbsa_nabnmode
> were both written well after that comparison was done (and in fact, I
> think mmpbsa_py_nabnmode was written shortly before the perl version
> was).
>
> Since mmpbsa_py_nabnmode was written first, it wasn’t compared against
> the GB-enabled version of the perl nab program. The key to identifying
> the source of the differences between the two scripts will be in the NAB
> output of the two scripts. You should compare those and make sure that
> the mm_options are being set the same in both cases and that the overall
> minimization is similar (in addition to the first few eigenvectors and
> eigenvalues).
>
> Hope this helps,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Feb 17 2015 - 13:00:02 PST