Re: [AMBER] entropy calculations

From: Jason Swails <jason.swails.gmail.com>
Date: Fri, 6 Jul 2012 14:05:09 -0400

With no error messages, it's impossible to diagnose. I'll provide my best
guess at what is happening, but I can't be sure.

My best guess is that you are running out of memory (RAM); I give rough
estimates of the RAM requirements at the end of this email. To address
this, try running in
serial (or if you have multiple nodes available, just one calculation per
node). If you are still running out of memory, you can reduce the size of
the work arrays by changing line 103 in
$AMBERHOME/AmberTools/src/mmpbsa_py/mmpbsa_entropy.nab from

        nmode(xyz, 3*natm, mme2, 0, 1, 0.0, 0.0, 0); //calc entropy

to

        nmode(xyz, 3*natm, mme2, 0, 0, 0.0, 0.0, 0); //calc entropy

(notice the 1 -> 0 change). This will, however, make the diagonalization
slower. If you are *still* running out of memory, try to find a machine
with more ;).

HTH,
Jason

RAM estimates:

Each normal mode calculation builds a hessian matrix, which has ~3Nx3N
dimensionality. Given 8 byte reals (double precision), a 1000-atom system
will consume ~35 MB of RAM per normal mode calculation. And the memory
requirements grow quadratically rather than linearly -- e.g., a
5000-atom system will consume ~900 MB.

The above estimates assume that the matrix arrays store only the
upper-triangular part of the matrix, and they neglect the work arrays,
so treat them as lower bounds. I'm also guessing that you are running
MMPBSA.py on 4 processors on the same node, in which case you have to
multiply the memory requirements by 4.
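The arithmetic above can be sketched in a few lines. This is a
hypothetical helper (not part of MMPBSA.py), assuming 8-byte
double-precision reals, upper-triangular storage only, and MB = 10^6
bytes:

```python
def nmode_hessian_mb(natoms, bytes_per_real=8):
    """Rough RAM for one normal-mode hessian (upper triangle only)."""
    n = 3 * natoms                # hessian is ~3N x 3N
    elements = n * (n + 1) // 2   # upper triangle, including diagonal
    return elements * bytes_per_real / 1e6

print(nmode_hessian_mb(1000))  # ~36 MB for a 1000-atom system
print(nmode_hessian_mb(5000))  # ~900 MB for a 5000-atom system
```

Multiply the result by the number of normal-mode calculations running
concurrently on the node (e.g., 4 for a 4-processor MMPBSA.py run) to
estimate total RAM use.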

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jul 06 2012 - 11:30:02 PDT