Re: [AMBER] Have any one used the Nmode to calculate the entropy?

From: Jason Swails <jason.swails.gmail.com>
Date: Thu, 9 Aug 2012 13:35:03 -0400

On Thu, Aug 9, 2012 at 12:59 PM, Kong, Ren <rkong.tmhs.org> wrote:

> Dear amber users,
>
> This is my first time using Nmode to calculate entropy.
> I have run into two problems:
>
> 1. I extracted 10 snapshots to do the calculation; the system is a
> protein-ligand complex with 10550 atoms. The calculation seems extremely
> time-consuming. I submitted the MPI job with 8 threads, and it has been
> running for more than a week. The job is still running, and no errors
> have been reported. Is this normal? How long should a calculation on a
> system like this take?
>

This is a *huge* system. Normal mode calculations have to do two things:
each snapshot must be minimized to a local minimum, and then the normal
modes at that minimum must be computed.

I would not be surprised if the minimizations are taking a very long time.
I have no idea how long a system of that size would take (it will largely
depend on how long it takes to minimize to a local minimum).


>
> 2. I tried using 5 snapshots for the calculation. The job quit
> abnormally.
>
> The output file is:
>
> Running MMPBSA.MPI on 4 processors...
>
> Reading command-line arguments and input files...
>
> Loading and checking parameter files for compatibility...
>
> ptraj found! Using /home/rkong/amber11/bin/ptraj
>
> nmode program found! Using /home/rkong/amber11/bin/mmpbsa_py_nabnmode
>
> Preparing trajectories for simulation...
>
> 1000 frames were read in and processed by ptraj for use in calculation.
>
>
>
> Beginning nmode calculations with mmpbsa_py_nabnmode...
>
> Master thread is calculating normal modes for 2 frames
>
>
>
> calculating complex contribution for frame 0
>
> FATAL: allocation failure in vector()
>
> FATAL: allocation failure in vector()
>
> close failed in file object destructor:
>
> IOError: [Errno 9] Bad file descriptor
>
> FATAL: allocation failure in vector()
>
> close failed in file object destructor:
>
> IOError: [Errno 9] Bad file descriptor
>
> FATAL: allocation failure in vector()
>
> close failed in file object destructor:
>
> IOError: [Errno 9] Bad file descriptor
>
> The input file for 10 snapshots is as following:
> &general
> startframe=1,endframe=1000
> keep_files=2,
> receptor_mask=':1-692',ligand_mask=':693'
> /
> &nmode
> nmstartframe=100, nmendframe=1000,
> nminterval=100, nmode_igb=1, nmode_istrng=0.1,
> /
>
> The input file for 5 snapshots is as following:
> &general
> startframe=1,endframe=1000
> keep_files=2,
> receptor_mask=':1-692',ligand_mask=':693'
> /
> &nmode
> nmstartframe=100, nmendframe=1000,
> nminterval=200, nmode_igb=1, nmode_istrng=0.1,
> /
>
> The only difference between the input files is "nminterval". I just don't
> know why the 5-snapshot job cannot run normally when the 10-snapshot job
> can.
>
> Could anyone give some comments?
>
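A quick aside on the snapshot counts: assuming MMPBSA.py takes every
nminterval-th frame from nmstartframe through nmendframe inclusive (an
assumption about its frame selection, not its actual code), the two inputs
above do work out to 10 and 5 frames:

```python
# Sketch of how nminterval appears to select nmode frames, assuming
# frames run from nmstartframe to nmendframe inclusive, stepping by
# nminterval (an assumption; this is not MMPBSA.py's source code).
def nmode_frames(nmstartframe, nmendframe, nminterval):
    return list(range(nmstartframe, nmendframe + 1, nminterval))

print(len(nmode_frames(100, 1000, 100)))  # the "10 snapshot" input
print(len(nmode_frames(100, 1000, 200)))  # the "5 snapshot" input
```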

The errors you're getting suggest a lack of memory. Nmode calculations
require storing a 3Nx3N Hessian matrix (although only an upper-triangular
portion is saved), as well as substantial scratch space for the work the
diagonalizer has to do. When you run 4 threads that all happen to be
diagonalizing at the same time, you will need 4x the RAM required for a
single calculation.

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Aug 09 2012 - 11:00:03 PDT