Here
----
> dielc should be 1, the perl version is wrong here
----
you are probably wrong. As far as I remember, from the very start of my
work with Amber (2008) there was "DIELC 4" as the default in the mm_pbsa.in
example files for non-GB calculations.
I can demonstrate that the value 4 is more reasonable than 1 directly on my
test calculation:

python version (nmode_igb=0):
DELTA S total =  -100.1086    0.0000    0.0000

perl version ("IGB 0", "DIELC 4"):
TSTOT  -32.26    0.00

The perl version is here much closer to the results obtained with GB
(approx. -40 for python, approx. -20 for perl).
Also, if dielc is a multiplicative scaling parameter, it seems to me not very
useful to have this constant equal to 1 by default.
So my opinion is the opposite here, i.e. I would say that dielc should be 4
by default, and that the value in the python version (1) is wrong and should
be changed to 4.
----
> (this should never be !=1 for GB). From the NAB user’s manual describing
> the “dielc” variable:
----
But sorry, if GB is used then the dielc parameter is not used in the
calculation (so it should be ignored/skipped in the code), so its actual
value in a calculation with nmode_igb=IGB=1 is irrelevant, or am I wrong?
Best wishes,
Marek
On Tue, 17 Feb 2015 22:21:43 +0100, Jason Swails <jason.swails.gmail.com>
wrote:
>
>> On Feb 17, 2015, at 3:09 PM, Marek Maly <marek.maly.ujep.cz> wrote:
>>
>> Hi Jason,
>>
>> thanks for your advice regarding the mmpbsa_entropy.nab modification; I am
>> going to test it immediately and will report the result here when the test
>> is done.
>> But I have to say that in the perl version the calculation used (after the
>> initial minimization) very similar memory (i.e. approx. 23-24 GB) to the
>> python version.
>
> According to the comment header in dsyevd.f, the working space required
> is 1+6N+2N^2 while for dsyev it seems to be 3N (in addition to the N^2
> for the actual matrix itself). So dsyevd seems to require over 3 times
> the memory required by dsyev...
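>
> For a concrete sense of scale, a back-of-the-envelope sketch (not code from
> either package; it only evaluates the work-space formulas quoted above,
> assuming the Hessian dimension is N = 3 * n_atoms and 8-byte doubles):
>
> # Rough memory estimate from the LAPACK work-space sizes quoted above.
> n_atoms = 12000                  # roughly the system size in question
> N = 3 * n_atoms                  # the Hessian is N x N
> GB = 1024.0 ** 3
> dbl = 8                          # bytes per double-precision element
>
> hessian     = N * N * dbl / GB                        # ~9.7 GB
> dsyev_work  = 3 * N * dbl / GB                        # ~0.0008 GB
> dsyevd_work = (1 + 6 * N + 2 * N * N) * dbl / GB      # ~19.3 GB
>
> print(f"Hessian matrix: {hessian:.1f} GB")
> print(f"dsyev  work   : {dsyev_work:.4f} GB")
> print(f"dsyevd work   : {dsyevd_work:.1f} GB")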
>
>> It is possible that at some moment the memory requirements are even higher
>> (at the moment of the Hessian diagonalization), but could it be higher than
>> 130 GB (for a 12k-atom system)?
>
> It doesn’t seem like it should be larger than 130 GB, but I haven’t
> looked closely enough at the code to be sure.
>
> But what I *think* happened is that the working vector failed to
> allocate. It was probably asking for too much memory, so the allocation
> failed and spit back an error that crashed the application. Since the
> program couldn’t allocate all of the memory, it didn’t allocate *any* of
> the working space. Which could explain why the memory usage didn’t
> shoot up to “full” before crashing.
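>
> A tiny, hypothetical sketch of that failure mode (whether a request this
> large is refused up front depends on the machine's memory and overcommit
> settings):
>
> import numpy as np
>
> N = 3 * 12000                   # Hessian dimension for a ~12k-atom system
> try:
>     # Request the dsyevd-sized work array in one shot (~19 GB of doubles).
>     work = np.empty(1 + 6 * N + 2 * N * N, dtype=np.float64)
> except MemoryError:
>     # The request is rejected before any of that memory is touched, so a
>     # once-per-second sampler like sar never sees the usage climb.
>     print("work-array allocation refused up front")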
>
>> Another thing is that, as I already wrote you, I monitored the memory
>> usage at 1-second intervals using the "sar" tool from the sysstat.x86_64
>> package, and no extra RAM requirements were apparent during the python
>> version of the calculation (the perl one I have not monitored yet).
>> Just a level of approx. 700 MB (minimization), then 24 GB (nmode analysis,
>> for approx. 11 hours), then the crash.
>>
>> Regarding the differences between the python and perl versions on my small
>> test molecular system, I already listed all the available "mm_options" at
>> the end of my previous email, together with the settings and results, so
>> here they are again; if you see anything suspicious, please let me know.
>>
>>
>> HERE ARE PARAMETERS RECORDS FROM OUT FILES:
>>
>> ***********MMPBSA.py.MPI settings records from _MMPBSA_complex_nm.out.0
>>
>> Parameter topology includes 10-12 terms:
>> These are assumed to be zero here (e.g. from TIP3P water)
>> mm_options: ntpr=10000
>> mm_options: diel=C
>> mm_options: kappa=0.000000
>> mm_options: cut=1000
>> mm_options: gb=1
>> mm_options: dielc=1.000000
>> mm_options: temp0=298.150000
>>
>> *******mm_pbsa.pl settings records from nmode_com.1.out
>>
>> mm_options: ntpr=50
>> mm_options: nsnb=999999
>> mm_options: cut=999.
>> mm_options: diel=C
>> mm_options: gb=1
>> mm_options: rgbmax=999.
>> mm_options: gbsa=1
>> mm_options: surften=0.0072
>> mm_options: epsext=78.3
>> mm_options: kappa=0
>>
>>
>> BTW, you spoke about the GB influence here, but I also did a test (which I
>> did not mention before) with nmode_igb=IGB=0, i.e. using a
>> distance-dependent dielectric constant, and there was a really huge
>> difference in the results. Now I have checked the mm_options here as well,
>> and the source of this problem is clear:
>>
>> python version : "mm_options: dielc=1.000000"
>> perl version : "mm_options: dielc=4"
>>
>> So for some reason the python and perl versions have quite different
>> defaults for the dielc parameter here.
>
> dielc should be 1, the perl version is wrong here (this should never be
> !=1 for GB). From the NAB user’s manual describing the “dielc” variable:
> This is the dielectric constant used for non-GB simulations. It is
> implemented in routine mme_init() by scaling all of the charges by
> sqrt(dielc). This means that you need to set this (if desired) in
> mm_options() before calling mme_init().
>
> So what is happening here is that all of the charges are being
> (inappropriately) scaled before the normal modes are calculated.
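>
> A minimal standalone illustration (not MMPBSA.py or NAB code; it assumes
> the scaling divides each charge by sqrt(dielc), the physically expected
> direction, so every pairwise Coulomb term q_i*q_j/r_ij, and with it the
> whole electrostatic energy, is divided by dielc):
>
> from itertools import combinations
>
> def coulomb(charges, coords, dielc=1.0):
>     # Plain pairwise sum with charges pre-scaled by 1/sqrt(dielc);
>     # units and the Coulomb constant are ignored for the illustration.
>     scaled = [q / dielc ** 0.5 for q in charges]
>     return sum(qi * qj / sum((a - b) ** 2 for a, b in zip(ri, rj)) ** 0.5
>                for (qi, ri), (qj, rj) in combinations(zip(scaled, coords), 2))
>
> q = [0.4, -0.4, 0.2]                           # toy charges
> xyz = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 2.0, 0.0)]
> print(coulomb(q, xyz, dielc=1.0))              # reference electrostatics
> print(coulomb(q, xyz, dielc=4.0))              # exactly 4 times smaller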
>
> Thanks for the report,
> Jason
>
> --
> Jason M. Swails
> BioMaPS,
> Rutgers University
> Postdoctoral Researcher
>
--
This message was created with Opera's revolutionary e-mail client:
http://www.opera.com/mail/
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber