Re: [AMBER] segmentation fault when using mmpbsa.py

From: Jason Swails <jason.swails.gmail.com>
Date: Wed, 30 Mar 2011 16:23:43 -0700

Hello,

A couple of comments.

First, the parallel version of MMPBSA.py is MMPBSA.py.MPI. From your job
script, it appears as though you're running

mpirun -np 16 MMPBSA.py ...

This launches 16 independent MMPBSA.py processes that each try to overwrite
what the others are doing. Make sure you use MMPBSA.py.MPI to run in
parallel. If you were unable to build mpi4py and have it recognized when
you launch MMPBSA.py, then you cannot run in parallel.
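
For example, using the same arguments from your job script, the parallel
invocation would look like this (assuming MMPBSA.py.MPI was installed
alongside MMPBSA.py in the same bin directory):

mpirun -np 16 /gpfs1/apps/chemistry/amber/amber11/bin/MMPBSA.py.MPI -O \
    -i mmpbsa_g1octa.in -o G1OCTA_RESULTS_MMPBSA.dat -sp g1octa.top \
    -cp g1octa_nosolv.top -rp g1hexa_4octa_nosolv.top \
    -lp g1dimer_4octa_nosolv.top -y g1octa_combined_mdx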

It also appears from your error file that your mdcrds and prmtops are
incompatible, which likely implies a faulty mask declaration.
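
One thing worth checking in that regard: in Amber mask syntax, multiple
residue ranges within a single mask are separated by commas, not by extra
colons. If the residue numbers in your input are the intended ones, the
masks would read

receptor_mask=:1-112,223-334, ligand_mask=:113-140,335-362,

rather than the colon-separated form in your input file.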

Here's my suggestion: try running a simplified version on your own computer
in serial, using only 1 or 2 frames with GB solvent, and see if that works.
I'm guessing that you will get errors there as well. Visualize each of the
systems (for instance, open _MMPBSA_complex.mdcrd with your complex prmtop
file in VMD, and do the same for the receptor and ligand). If they look
warped, your receptor_mask and ligand_mask definitions are wrong (or, if
the complex itself looks warped, you will likely have to redo all of your
prmtops or the initial coordinate file).
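
A minimal test input for that serial run might look something like this
sketch (the frame numbers are placeholders; igb=2 is carried over from your
input, and keep_files is raised so the intermediate _MMPBSA_ trajectory
files should stay around for visualization):

&general
  startframe=1, endframe=2, interval=1,
  verbose=2, keep_files=1,
/
&gb
  igb=2,
/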

Good luck,
Jason

On Wed, Mar 30, 2011 at 3:03 PM, Chris Chris <alpharecept.yahoo.com> wrote:

> I am performing mmpbsa analysis using mmpbsa.py. I combined three different
> trajectories into one using ptraj via the following:
>
> trajin g1octa_prod_mdx1
> trajin g1octa_prod_mdx2
> trajin g1octa_prod_mdx3
> trajout g1octa_combined_mdx
>
> the mmpbsa submit file is as follows:
>
> #!/bin/csh
> #
> #PBS -l walltime=48:00:00
> #PBS -l mem=100gb
> #PBS -l ncpus=16
> #PBS -q normal
> #PBS -V
> #PBS -N mmpbsa_g1octa
> #PBS -o mmpbsa_g1octa.out
> #PBS -e mmpbsa_g1octa.err
> #------------------------------
> # End of embedded QSUB options
> # echo commands before execution; use for debugging
> # remove # from the line below to use
> set echo
> set JOBID=`echo $PBS_JOBID | cut -d'.' -f1`
> cd $SCR
> saveafterjob "tar cf ${PBS_JOBNAME}.${JOBID}.tar *"
> cp /u/ac/cgaughan/mmpbsa_g1octa.in $SCR
> cp /u/ac/cgaughan/g1octa.top $SCR
> cp /u/ac/cgaughan/g1octa_nosolv.top $SCR
> cp /u/ac/cgaughan/g1hexa_4octa_nosolv.top $SCR
> cp /u/ac/cgaughan/g1dimer_4octa_nosolv.top $SCR
> cp /u/ac/cgaughan/g1octa_combined_mdx $SCR
> mpirun -np 16 /gpfs1/apps/chemistry/amber/amber11/bin/MMPBSA.py -O \
>     -i mmpbsa_g1octa.in -o G1OCTA_RESULTS_MMPBSA.dat -sp g1octa.top \
>     -cp g1octa_nosolv.top -rp g1hexa_4octa_nosolv.top \
>     -lp g1dimer_4octa_nosolv.top -y g1octa_combined_mdx
>
>
> The job is run in the scratch directory with the files compressed and sent
> directly to mass storage when the job is done.
>
> It seemed as though the job would complete; however, I received a
> segmentation fault, as can be seen in the attached .err file. I have
> attached the .out file as well.
> It appears that the main problem was:
> Could not predict number of frames for AMBER trajectory file
>
> It appears as though a final report was being printed at the end of the
> job, but there is no data in it:
> |Input file:
> |--------------------------------------------------------------
> |
> |&general
> |startframe=6000, endframe=8000, interval=1,
> |verbose=2, keep_files=0, receptor_mask=:1-112:223-334,
> |ligand_mask=:113-140:335-362,
> |/
> |&gb
> |igb=2,
> |/
> |&pb
> |fillratio=4.0,
> |/
> |
> |--------------------------------------------------------------
> |Solvated complex topology file: g1octa.top
> |Complex topology file: g1octa_nosolv.top
> |Receptor topology file: g1hexa_4octa_nosolv.top
> |Ligand topology file: g1dimer_4octa_nosolv.top
> |Initial mdcrd(s): g1octa_combined_mdx
> |Calculations performed using 2001 frames.
> |Poisson Boltzmann calculations performed using internal PBSA solver in
> |sander.
> |
> |All units are reported in kcal/mole.
>
> -------------------------------------------------------------------------------
>
> -------------------------------------------------------------------------------
>
>
> Can someone help me make sense of all this? As can be seen in the .out
> file, the job ran for ~7 hours. What would cause it to crash towards the
> end of the job?
>
> Thanks for any help,
> Chris
>


-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Mar 30 2011 - 16:30:04 PDT