Re: [AMBER] Have any one used the Nmode to calculate the entropy?

From: Matthew Tessier <matthew.tessier.gmail.com>
Date: Thu, 9 Aug 2012 15:12:46 -0400

Ren,
We've found the best way to do nmode calculations on large systems is to
break them up into single-frame, single-core jobs which can be submitted
across multiple computing cores (and nodes) on a cluster. Jason is right
that each of these jobs takes a while, especially for a system that
large. You may want to try reducing the system size by truncating the
protein beyond a certain distance from the ligand. This does alter your
entropy numbers slightly (we noticed a systematic 2 kcal/mol shift in the
entropy term in our particular test case), but it will let you get an
approximation in considerably less calculation time. We were able to cut
our compute time from 24 hours/frame to about 8 hours/frame (these times
are machine-dependent). We ended up going with the full-system approach
because we have a lot of cores at our disposal, but your system is about
3x the size of ours.
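As a sketch of how the per-frame splitting might be scripted (the wrapper
script run_mmpbsa.sh and the qsub call are placeholders for whatever your
queueing system and MMPBSA.py command line look like, and the masks are
taken from your input below):

#!/usr/bin/env python
# Sketch: one single-frame, single-core MMPBSA.py nmode job per snapshot.
# run_mmpbsa.sh is a hypothetical wrapper around your MMPBSA.py command
# line; adapt the qsub invocation to your scheduler.
import subprocess

TEMPLATE = """&general
   startframe=%(frame)d, endframe=%(frame)d,
   keep_files=2,
   receptor_mask=':1-692', ligand_mask=':693',
/
&nmode
   nmstartframe=1, nmendframe=1, nminterval=1,
   nmode_igb=1, nmode_istrng=0.1,
/
"""

for frame in range(100, 1001, 100):   # the 10 snapshots discussed below
    name = 'nmode_frame_%04d' % frame
    with open(name + '.in', 'w') as fh:
        fh.write(TEMPLATE % {'frame': frame})
    # One core per job, so RAM (not CPU count) decides how many of
    # these can share a node.
    subprocess.call(['qsub', '-N', name, 'run_mmpbsa.sh', name + '.in'])

Each job then minimizes and diagonalizes a single frame, so the memory
footprint stays at one Hessian per job.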

Also, Jason made the point that it uses a lot of RAM. When I submit
these, I don't fill a compute node's processors, because there isn't
enough RAM on the node to do that. You'll want to gauge your compute
resources before submitting a lot of these. The disadvantage of doing
one frame per job is that you'll have to set up a script to post-process
the statistics that MMPBSA.py normally computes for you, though you can
hack at the MMPBSA.py code to do this.
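For example, once each single-frame job has written its own output file,
a small script along these lines will reproduce the mean and standard
deviation that MMPBSA.py normally reports (a sketch: the
nmode_frame_*.dat pattern and the 'Total' label are assumptions, so
inspect one of your output files and adjust the regular expression):

#!/usr/bin/env python
# Sketch: pool per-frame nmode entropies and report mean / std. dev.
# The filename pattern and the 'Total' label below are assumptions --
# check one output file from your MMPBSA.py version and adjust.
import glob
import math
import re

values = []
for path in sorted(glob.glob('nmode_frame_*.dat')):
    with open(path) as fh:
        for line in fh:
            m = re.search(r'^Total\s+(-?\d+\.\d+)', line)
            if m:
                values.append(float(m.group(1)))
                break

n = len(values)
mean = sum(values) / n
# Sample standard deviation across the collected frames.
stdev = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
print('%d frames  mean = %.4f  std. dev. = %.4f' % (n, mean, stdev))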

Good luck
-Matthew Tessier

On Thu, Aug 9, 2012 at 1:35 PM, Jason Swails <jason.swails.gmail.com> wrote:

> On Thu, Aug 9, 2012 at 12:59 PM, Kong, Ren <rkong.tmhs.org> wrote:
>
> > Dear amber users,
> >
> > This is my first time using Nmode to calculate entropy.
> > I have run into two problems:
> >
> > 1. I extracted 10 snapshots to do the calculation, and the system is a
> > protein-ligand complex with 10550 atoms. It seems extremely time
> > consuming: I submitted the MPI job with 8 threads and it has been
> > running for more than one week, and the job is still running. Is this
> > normal for the calculation? There has been no error output. How long
> > will it take for a system like this?
> >
>
> This is a *huge* system. Normal mode calculations have to do two things:
> each snapshot must be minimized to a local minimum, and then the normal
> modes at that minimum have to be calculated.
>
> I would not be surprised if the minimizations are taking a very long time.
> I have no idea how long a system of that size would take (it will largely
> depend on how long it takes to minimize to a local minimum).
>
>
> >
> > 2. I tried using 5 snapshots for the calculation. The job quit
> > abnormally.
> >
> > The output file is:
> >
> > Running MMPBSA.MPI on 4 processors...
> >
> > Reading command-line arguments and input files...
> >
> > Loading and checking parameter files for compatibility...
> >
> > ptraj found! Using /home/rkong/amber11/bin/ptraj
> >
> > nmode program found! Using /home/rkong/amber11/bin/mmpbsa_py_nabnmode
> >
> > Preparing trajectories for simulation...
> >
> > 1000 frames were read in and processed by ptraj for use in calculation.
> >
> >
> >
> > Beginning nmode calculations with mmpbsa_py_nabnmode...
> >
> > Master thread is calculating normal modes for 2 frames
> >
> >
> >
> > calculating complex contribution for frame 0
> >
> > FATAL: allocation failure in vector()
> >
> > FATAL: allocation failure in vector()
> >
> > close failed in file object destructor:
> >
> > IOError: [Errno 9] Bad file descriptor
> >
> > FATAL: allocation failure in vector()
> >
> > close failed in file object destructor:
> >
> > IOError: [Errno 9] Bad file descriptor
> >
> > FATAL: allocation failure in vector()
> >
> > close failed in file object destructor:
> >
> > IOError: [Errno 9] Bad file descriptor
> >
> > The input file for 10 snapshots is as follows:
> > &general
> > startframe=1,endframe=1000
> > keep_files=2,
> > receptor_mask=':1-692',ligand_mask=':693'
> > /
> > &nmode
> > nmstartframe=100, nmendframe=1000,
> > nminterval=100, nmode_igb=1, nmode_istrng=0.1,
> > /
> >
> > The input file for 5 snapshots is as follows:
> > &general
> > startframe=1,endframe=1000
> > keep_files=2,
> > receptor_mask=':1-692',ligand_mask=':693'
> > /
> > &nmode
> > nmstartframe=100, nmendframe=1000,
> > nminterval=200, nmode_igb=1, nmode_istrng=0.1,
> > /
> >
> > The only difference between the input files is "nminterval". I just don't
> > know why the 5-snapshot job cannot run normally when the 10-snapshot job
> > could.
> >
> > Could anyone give some comments?
> >
>
> The errors you're getting suggest a lack of memory. Nmode calculations
> require storing a 3Nx3N Hessian matrix (although only an upper-triangular
> portion is saved), as well as substantial scratch space for the work the
> diagonalizer has to do. When you run 4 threads that all happen to be
> diagonalizing at the same time, you'll need 4x the amount of RAM
> required for a single calculation.
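>
> For a sense of scale, a back-of-the-envelope sketch in Python (this
> assumes 8-byte doubles and counts only the upper triangle; the real
> allocations also include the diagonalizer's scratch arrays, so treat
> it as a lower bound):
>
>     natom = 10550                    # atoms in the complex above
>     n3 = 3 * natom                   # Hessian dimension: 3N = 31650
>     nbytes = n3 * (n3 + 1) // 2 * 8  # upper triangle only
>     print(nbytes / 1024.0**3)        # ~3.7 GB per Hessian
>
> Four threads diagonalizing at once therefore need roughly 15 GB before
> any scratch space is counted.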
>
> HTH,
> Jason
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
>



-- 
Matthew B. Tessier
Complex Carbohydrate Research Center / Chemistry Dept.
University of Georgia
mbt3911.uga.edu
matthew.tessier.gmail.com
1-706-542-3508
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber