Re: [AMBER] how to speed up nmode calculation using MMPBSA.MPI?

From: Josmar R. da Rocha <bije_br.yahoo.com.br>
Date: Tue, 22 Nov 2011 04:33:48 -0800 (PST)

Dear Jason,

I'll try to implement your suggestions! Thank you very much for your help!

Josmar




--- On Mon, 11/21/11, Jason Swails <jason.swails.gmail.com> wrote:

From: Jason Swails <jason.swails.gmail.com>
Subject: Re: [AMBER] how to speed up nmode calculation using MMPBSA.MPI?
To: "AMBER Mailing List" <amber.ambermd.org>
Date: Monday, November 21, 2011, 2:45

On Sun, Nov 20, 2011 at 9:51 PM, Josmar R. da Rocha <bije_br.yahoo.com.br> wrote:

>
>
> Dear Amber users,
>
> I have been doing some tests to compute deltaS for a ligand-protein
> complex using MMPBSA.MPI (AT1.5). The system (protein+ligand+waters+ions)
> has around 70k atoms. I tried to run this calculation for 8 frames on an
> 8-core (Xeon) machine with 8 GB of RAM. After some time I got the error
> “allocation failure in vector: nh = …” in _complex_nm.out. I read
> something about it in the discussion list
> (http://archive.ambermd.org/201012/0341.html), but that thread seems to
> have ended without a solution.
> Then I thought it might be related to a lack of memory, so I set up a
> calculation for 4 frames on a machine with 24 GB of RAM. That one ran
> fine; however, it took 72 hours. As I have never done this type of
> calculation, I would appreciate hearing whether this is what should be
> expected for such a system.


Yes.  Normal mode calculations are very slow, thanks to 3 time-consuming
steps: minimizing each snapshot to very near a local minimum, constructing
the Hessian, and diagonalizing the Hessian.  The length of the first part
can vary quite a bit, obviously, depending on how far away from the
minimum a particular snapshot starts.

> If so, could I get this to run faster?

Use fewer frames.  You could also look at the approach used by the Ryde
group: constructing a smaller system around the active site and running
normal modes on that (although MMPBSA.py is not set up to do this, so you
will have to set it up on your own); see the sketch below.
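
For what it's worth, with a reasonably recent cpptraj that truncation
could be sketched along these lines.  The ligand residue name (LIG), the
12 A radius, and the file names are all hypothetical, and you would still
have to deal sensibly with the dangling residues this creates before
running normal modes on the result:

    # hypothetical cpptraj input: keep the ligand plus everything
    # within 12 A of it, and write a matching stripped topology
    parm complex.prmtop
    trajin prod.mdcrd
    strip !(:LIG<:12.0) outprefix reduced   # writes reduced.complex.prmtop
    trajout reduced.mdcrd
    run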


> Over a period of time, I observed that one of the nabnmode
> processes was using 78% of the total memory while the other 3 were using
> only 1.5%.


It's likely that only one of the processes had actually finished
minimizing its snapshot (and thus was using a ton of memory), and the
other 3 were still minimizing (which is why they were only using 1.5%).

> At another time, one process was using ~50% and another ~30%. I suspect
> that this kind of memory usage may have caused the allocation failure
> error when I tried to run the calculation on the 8 GB RAM machine.
> Is this memory usage normal?


Yes, this memory usage is normal -- especially for large systems.
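
To put rough numbers on why (the atom count here is purely illustrative,
since the solvent is stripped before the nmode step): the Hessian is a
dense 3N x 3N matrix of doubles, so for a complex of N = 10,000 atoms it
alone takes

    (3 * 10000)^2 * 8 bytes ~= 7.2e9 bytes ~= 7 GB

and the divide-and-conquer eigensolver (see below) asks for roughly two
more copies of that as scratch space, so a single post-minimization
snapshot can exhaust an 8 GB machine by itself.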


> If so, is there a way to make these calculations less memory
> intensive so that I can run more frames at a time?
>

Yes, but not by too much.  If you open up mmpbsa_entropy.nab in
$AMBERHOME/AmberTools/src/mmpbsa_py, you can change the line

        nmode(xyz, 3*natm, mme2, 0, 1, 0.0, 0.0, 0); //entropy calc

to

        nmode(xyz, 3*natm, mme2, 0, 0, 0.0, 0.0, 0); //entropy calc

(note the change of the fifth argument from 1 to 0).  Note that this line
occurs in 2 places!  You'll have to change them both (once for NetCDF
trajectories, once for ASCII trajectories).  What this does is simply
change the routine used to diagonalize the Hessian from dsyevd to dsyev
(both are LAPACK routines for diagonalizing symmetric matrices).  The
first uses a divide-and-conquer algorithm (hence the trailing D) and
executes faster at the expense of more memory.  The latter does not
employ the divide-and-conquer approach, so it finishes more slowly but
doesn't use as much memory.  Both should give the same answer (and Bill
M. verified that they did when Dwight wrote mmpbsa_entropy.nab).
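
If you want to see the memory difference concretely, here is a small
standalone C sketch (not AMBER code; the matrix dimension is
hypothetical) that uses the standard LAPACK workspace query (lwork = -1)
to ask each routine how much scratch space it would want:

    /* lapack_ws.c -- compare dsyev vs. dsyevd workspace requests.
     * Illustrative only.  Compile with something like:
     *     cc lapack_ws.c -llapack
     */
    #include <stdio.h>

    /* Fortran LAPACK entry points; every argument passed by reference. */
    extern void dsyev_(const char *jobz, const char *uplo, const int *n,
                       double *a, const int *lda, double *w,
                       double *work, const int *lwork, int *info);
    extern void dsyevd_(const char *jobz, const char *uplo, const int *n,
                        double *a, const int *lda, double *w,
                        double *work, const int *lwork,
                        int *iwork, const int *liwork, int *info);

    int main(void)
    {
        int n = 30000;               /* 3*natm for a 10,000-atom system */
        int lda = n, info, iwk;
        int lwork = -1, liwork = -1; /* -1 => workspace query only      */
        double a, w, wk;             /* dummies; not touched in a query */

        /* jobz = "V": eigenvalues and eigenvectors, as nmode needs.    */
        dsyev_("V", "U", &n, &a, &lda, &w, &wk, &lwork, &info);
        printf("dsyev  workspace: %.2f GB\n", wk * 8.0 / 1e9);

        dsyevd_("V", "U", &n, &a, &lda, &w, &wk, &lwork,
                &iwk, &liwork, &info);
        printf("dsyevd workspace: %.2f GB\n", wk * 8.0 / 1e9);
        return 0;
    }

For 3N = 30,000, dsyevd reports on the order of 2*(3N)^2 doubles of
workspace (two extra copies of the matrix, roughly 14 GB), while dsyev
asks for only a few vectors' worth -- exactly the speed-for-memory trade
described above.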

You will have to recompile MMPBSA.py after you do this.  You can just type
"make install" in the $AMBERHOME/AmberTools/src/mmpbsa_py directory to
accomplish that.
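
In shell terms the whole cycle is something like the following (the sed
pattern assumes both nmode() calls match it exactly, so eyeball the file
afterwards to be sure):

    cd $AMBERHOME/AmberTools/src/mmpbsa_py
    # flip the 5th nmode() argument from 1 (dsyevd) to 0 (dsyev)
    sed -i 's/mme2, 0, 1,/mme2, 0, 0,/g' mmpbsa_entropy.nab
    make install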

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Nov 22 2011 - 05:00:02 PST