Thanks, Jason. I am not running in parallel. It ran OK when I reduced endframe to 200. When I invoked the command:
/app1/common/Apps/gnu/amber12/AmberTools/bin/MMPBSA.py -O -i mmpbsa.in -cp com.top -rp rec.top -lp lig.top -y prod-clus_amb_rep1-dry.nc -eo energies -deo dec_energies
It started with the following:
--------------------------------------------------------------------------
WARNING: It appears that your OpenFabrics subsystem is configured to only
allow registering part of your physical memory. This can cause MPI jobs to
run with erratic performance, hang, and/or crash.
This may be caused by your OpenFabrics vendor limiting the amount of
physical memory that can be registered. You should investigate the
relevant Linux kernel module parameters that control how much physical
memory can be registered, and increase them to allow registering all
physical memory on your machine.
See this Open MPI FAQ item for more information on these Linux kernel module
parameters:
http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
Local host: atlas7-c01
Registerable memory: 65536 MiB
Total memory: 258528 MiB
Your MPI job will continue, but may be behave poorly and/or hang.
--------------------------------------------------------------------------
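(On the registered-memory warning above: the linked FAQ points at the kernel module parameters of the InfiniBand driver. On Mellanox mlx4 hardware, for example, a cluster admin would typically raise them with a line like

    options mlx4_core log_num_mtt=24 log_mtts_per_seg=3

in /etc/modprobe.d/mlx4_core.conf and then reload the driver. The module and parameter names depend on the HCA driver in use, and the values here are only illustrative.)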
Loading and checking parameter files for compatibility...
sander found! Using /app1/common/Apps/gnu/amber12/bin/sander
cpptraj found! Using /app1/common/Apps/gnu/amber12/bin/cpptraj
Preparing trajectories for simulation...
--------------------------------------------------------------------------
An MPI process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your MPI job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.
The process that invoked fork was:
Local host: atlas7-c01 (PID 17146)
MPI_COMM_WORLD rank: 0
If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
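(If this fork() warning needs silencing, the MCA parameter it names can also be set from the environment before launching, e.g.

    export OMPI_MCA_mpi_warn_on_fork=0

This only suppresses the message; it does not change how MMPBSA.py spawns sander and cpptraj.)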
4 frames were processed by cpptraj for use in calculation.
Running calculations on normal system...
Beginning GB calculations with /app1/common/Apps/gnu/amber12/bin/sander
calculating complex contribution...
calculating receptor contribution...
calculating ligand contribution...
Timing:
Total setup time: 0.108 min.
Creating trajectories with cpptraj: 0.209 min.
Total calculation time: 5.476 min.
Total GB calculation time: 5.476 min.
Statistics calculation & output writing: 0.005 min.
Total time taken: 5.799 min.
---------------------------------------------------------------------------------------------------------------------
So where is the problem when I need to process the entire trajectory?
Please note that all of the above was done on our cluster, where Amber12 and AmberTools 12 are installed.
Secondly, I had initially tried on my personal desktop, where I only have AmberTools12 (not Amber); it gave an error because it was asking for sander. I then upgraded AmberTools12 to AmberTools14 and, surprisingly, could successfully run the entire trajectory. So is the problem I am facing on our cluster due to the AmberTools version, or something else?
Thanks again,
Sucharita
________________________________________
From: Jason Swails [jason.swails.gmail.com]
Sent: Monday, September 1, 2014 10:51 PM
To: amber.ambermd.org
Subject: Re: [AMBER] mmpbsa amber error while running decomp
On Mon, 2014-09-01 at 05:52 +0000, Sucharita Dey wrote:
> Hello All,
> I am getting this error while running gb and decomp using Amber12 SANDER.
> gb ran successfully on its own, but when I try to run decomp with the same
> set of top files it gives this error:
> mmpbsa amber error failed with prmtop com.top!
> But I checked: it prepared some files, like _MMPBSA_gb_decomp_com.mdin,
> _MMPBSA_dummycomplex.inpcrd, _MMPBSA_complex.mdcrd.0 (and the same for ligand
> and receptor), _MMPBSA_normal_traj_cpptraj.out, _MMPBSA_restrt.0, and
> _MMPBSA_complex_gb.mdout.0.
> I checked the file _MMPBSA_complex_gb.mdout.0; it started writing and
> stopped abruptly after "minimizing coord set # 60".
> Here is my mmpbsa.in:
> MMPBSA.py input file for running GB in serial
> &general
> startframe=1, endframe=12500, interval=50,
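(The quoted input file is truncated here. For context, a GB-plus-decomposition input for MMPBSA.py has this general shape; the &gb and &decomp settings below are illustrative placeholders, not the values from the original file:

    &general
       startframe=1, endframe=12500, interval=50,
    /
    &gb
       igb=5, saltcon=0.100,
    /
    &decomp
       idecomp=1, dec_verbose=0,
    /
)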
Are you running this in parallel? If so, make sure you look at all of
the output files (i.e., not just the mdout files that end with .0).
Also, try a small number of frames. Does it work if you change endframe
to 200, for instance?
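(A quick way to scan every thread's mdout file for problems, assuming the default _MMPBSA_ prefix from the original post, is something like

    grep -il error _MMPBSA_*.mdout.*

On the frame count: with startframe=1 and interval=50, endframe=200 processes frames 1, 51, 101, and 151, i.e. 4 frames, while endframe=12500 processes 250.)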
What you have reported contains no error message, so it's impossible to
say what happened. Given the information you provided, it's just as
likely to be a problem outside Amber that killed your calculation as it
is to be a problem inside Amber itself...
HTH,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber