Dear all,

I am running MMPBSA.py.MPI on a computing cluster that uses the PBS batch system. Recently the runs started failing with an error that sander could not proceed with the complex prmtop; the error output is attached at the end, with the error message highlighted. The puzzling part is that the same prmtop file has been used for all of the trajectories. I have been working on this system for the past few weeks, and the first 50 ns of trajectory were all processed with MMPBSA.py.MPI using the same prmtop and the same script without any problem.
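For reference, this is roughly the submission script I have been reusing for every window; the module name and all file names other than the complex prmtop are placeholders rather than my exact inputs:

#!/bin/bash
#PBS -l nodes=2:ppn=12
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR
module load amber/amber14            # placeholder module name on our cluster

# 24 MPI ranks (2 nodes x 12 cores); mmpbsa.in holds the &general and &pb namelists
mpiexec -n 24 MMPBSA.py.MPI -O -i mmpbsa.in \
    -o FINAL_RESULTS_MMPBSA.dat \
    -sp Hexamer_solvated.prmtop \
    -cp Hexamer_complex_mbondi2.prmtop \
    -rp Hexamer_receptor.prmtop \
    -lp ligand.prmtop \
    -y Hexamer_window.nc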
In addition, the trajectories themselves were prepared successfully with cpptraj (I specified a different starting frame for each window, with an interval of 5), so I don't believe the prmtop file itself is the problem (I did regenerate it, but that didn't help). Has anybody encountered this problem before?
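The per-window extraction step was essentially the following (the frame numbers here are placeholders, not my exact values):

# pull out one analysis window, keeping every 5th frame
cpptraj -p Hexamer_solvated.prmtop <<EOF
trajin production.nc 15001 20000 5
trajout Hexamer_window.nc netcdf
EOF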
The hexamer I am working on is a large system of about 1300 residues. I have imaged the trajectories and centered them on the protein chains to make sure all coordinates are written out correctly, and when I inspect the trajectories frame by frame everything looks fine; nothing deviates. Could this be a memory issue of some kind? What puzzles me is that the MMPBSA calculations on the first 50 ns of trajectory finished successfully, and I have no idea why things go wrong now that I have reached the frames around 75 ns.
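The imaging/centering pass was done along these lines, where the mask :1-1300 stands in for the actual chain selection I used:

# re-image and center the hexamer so the chains stay together in the output
cpptraj -p Hexamer_solvated.prmtop <<EOF
trajin production_raw.nc
center :1-1300 mass origin
image origin center familiar
trajout production.nc netcdf
EOF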
Any thoughts or tips would be much appreciated! Please let me know if more information is needed. Thank you!
-Guqin
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Loading and checking parameter files for compatibility...
sander found! Using /usr/local/amber/amber14/bin/sander
cpptraj found! Using /usr/local/amber/amber14/bin/cpptraj
Preparing trajectories for simulation...
100 frames were processed by cpptraj for use in calculation.
Running calculations on normal system...
Beginning PB calculations with /usr/local/amber/amber14/bin/sander
calculating complex contribution...
File "/usr/local/amber/amber14/bin/MMPBSA.py.MPI", line 96, in <module>
app.run_mmpbsa()
File "/usr/local/amber/amber14/bin/MMPBSA_mods/main.py", line 218, in
run_mmpbsa
self.calc_list.run(rank, self.stdout)
File "/usr/local/amber/amber14/bin/MMPBSA_mods/calculation.py", line 79,
in run
calc.run(rank, stdout=stdout, stderr=stderr)
File "/usr/local/amber/amber14/bin/MMPBSA_mods/calculation.py", line 416,
in run
self.prmtop) + '\n\t'.join(error_list) + '\n')
*CalcError: /usr/local/amber/amber14/bin/sander failed with prmtop
../../Hexamer_complex_mbondi2.prmtop!*
Error occured on rank 17.
Exiting. All files have been retained.
[cli_17]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 17
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 6537 RUNNING AT n0148
= EXIT CODE: 1
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0.n0153.ten.osc.edu] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:885): assert (!closed) failed
[proxy:0:0.n0153.ten.osc.edu] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0.n0153.ten.osc.edu] main (pm/pmiserv/pmip.c:206): demux engine error waiting for event
-----------------------
Resources requested:
nodes=2:ppn=12
-----------------------
Resources used:
cput=00:40:04
walltime=00:02:51
mem=91.644 GB
vmem=198.941 GB
-----------------------
Resource units charged (estimate):
0.114 RUs
-----------------------
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
--
Guqin SHI
PhD Candidate in Medicinal Chemistry and Pharmacognosy
College of Pharmacy
The Ohio State University
Columbus, OH, 43210