Hey,
for the first time, I have been trying to run MMPBSA.py.MPI (from
AmberTools 1.5, together with sander 10). For this test case, I used the
following settings:
&general
startframe = 1000,
endframe = 1020,
interval = 5,
receptor_mask = :107-311,
ligand_mask = :1-106,312-355
/
&gb
igb = 5,
/
&decomp
idecomp = 1,
dec_verbose = 3
/
and ran into some problems, indicated below (my comments in between):
1) first run
============
$ mpirun -np 4 MMPBSA.MPI -i mmpbsa_decomp.in -cp
complex_unsolvated.prmtop -sp ../topology.top -rp receptor.prmtop -lp
ligand.prmtop -y ../md_equilibrate_00*
Running MMPBSA.MPI on 4 processors...
Reading command-line arguments and input files...
Loading and checking parameter files for compatibility...
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
ptraj found! Using /home/bioinfp/jang/apps/amber11/bin/ptraj
sander found! Using /apps11/bioinfp/amber10+/bin/sander for GB calculations
Warning: Decomposition is only automated if I guess the ligand and receptor
masks! I will write skeleton mdin files which you must edit. Re-run
MMPBSA.py with -use-mdins, or allow me to guess the masks.
Warning: Problem parsing L-J 6-12 parameters.
* Comment:
What does the warning regarding the Lennard-Jones parameters mean?
2) second run (with -use-mdins)
===============================
$ mpirun -np 4 MMPBSA.MPI -i mmpbsa_decomp.in -cp
complex_unsolvated.prmtop -sp ../topology.top -rp receptor.prmtop -lp
ligand.prmtop -use-mdins -y ../md_equilibrate_00*
Running MMPBSA.MPI on 4 processors...
Reading command-line arguments and input files...
Loading and checking parameter files for compatibility...
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
ptraj found! Using /home/bioinfp/jang/apps/amber11/bin/ptraj
sander found! Using /apps11/bioinfp/amber10+/bin/sander for GB calculations
Preparing trajectories for simulation...
20 frames were read in and processed by ptraj for use in calculation.
Beginning GB calculations with sander...
calculating complex contribution...
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
* Comment:
Three Python IOErrors, but unfortunately without the full Python
traceback. Why is it not printed here? Maybe due to an overly broad
try/except block? The close attempts fail three times because the file
descriptors are already invalid. It looks like one of the four MPI
processes wins, closes the file(s), and the other three fail. A minimal
sketch of what I suspect is shown below.
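To illustrate what I mean by an overly broad try/except, here is a
hypothetical sketch (not MMPBSA's actual code) that provokes the same
EBADF error and shows how a terse handler loses the traceback:

import os
import sys
import traceback

# Provoke OSError(errno 9, 'Bad file descriptor') by closing a file
# descriptor twice, as presumably happens when two MPI ranks race.
fd = os.open('dummy.txt', os.O_WRONLY | os.O_CREAT)
os.close(fd)

try:
    os.close(fd)  # second close -> Errno 9, Bad file descriptor
except OSError as e:
    # Printing only the message (what we see above) loses the stack trace:
    sys.stderr.write('close failed: %s\n' % e)
    # Printing the traceback explicitly would show where the close happened:
    traceback.print_exc()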
3) third run (directly after the second, with the same arguments)
=================================================================
$ mpirun -np 4 MMPBSA.MPI -i mmpbsa_decomp.in -cp
complex_unsolvated.prmtop -sp ../topology.top -rp receptor.prmtop -lp
ligand.prmtop -use-mdins -y ../md_equilibrate_00*
Running MMPBSA.MPI on 4 processors...
Reading command-line arguments and input files...
Loading and checking parameter files for compatibility...
Warning: Problem parsing L-J 6-12 parameters.
ptraj found! Using /home/bioinfp/jang/apps/amber11/bin/ptraj
sander found! Using /apps11/bioinfp/amber10+/bin/sander for GB calculations
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Preparing trajectories for simulation...
20 frames were read in and processed by ptraj for use in calculation.
Beginning GB calculations with sander...
calculating complex contribution...
close failed in file object destructor:
IOError: [Errno 9] Bad file descriptor
* Comment:
Only one IOError left. Hmm... this time dependence is strange and could
be related to our NFS setup. The situation may have improved because
some files had already been created during the second run. I don't know.
4) fourth run (directly after the third)
========================================
$ mpirun -np 4 MMPBSA.MPI -i mmpbsa_decomp.in -cp
complex_unsolvated.prmtop -sp ../topology.top -rp receptor.prmtop -lp
ligand.prmtop -use-mdins -y ../md_equilibrate_00*
Running MMPBSA.MPI on 4 processors...
Reading command-line arguments and input files...
Loading and checking parameter files for compatibility...
Warning: Problem parsing L-J 6-12 parameters.
ptraj found! Using /home/bioinfp/jang/apps/amber11/bin/ptraj
Warning: Problem parsing L-J 6-12 parameters.
sander found! Using /apps11/bioinfp/amber10+/bin/sander for GB calculations
Warning: Problem parsing L-J 6-12 parameters.
Warning: Problem parsing L-J 6-12 parameters.
Preparing trajectories for simulation...
20 frames were read in and processed by ptraj for use in calculation.
Beginning GB calculations with sander...
calculating complex contribution...
* Comment:
Now things looked fine, so I left the office.
5) next morning
===============
Still at
calculating complex contribution...
with python at 100 % CPU usage and no other CPU-heavy processes running.
_MMPBSA_complex_gb.mdout.0 (attached) was last changed at approximately
the time MMPBSA run 4 started. So I killed the mpirun. The last lines of
_MMPBSA_complex_gb.mdout.0:
> rfree: Error decoding variable 2 2 from:
> RES EDIT
>
> this indicates that your input contains incorrect information
> field 2 was supposed to have a (1=character, 2=integer, 3=decimal) value
In `utils.sandercalc`, I printed the sander command:
/apps11/bioinfp/amber10+/bin/sander -O -i _MMPBSA_gb_decomp_com.mdin -o
_MMPBSA_complex_gb.mdout.0 -p complex_unsolvated.prmtop -c
_MMPBSA_dummycomplex.inpcrd.1 -y _MMPBSA_complex.mdcrd.0 -r
_MMPBSA_.restrt.0
I ran it independently and it returned almost immediately, creating the
same .mdout file as attached. Hence, in case (4) above, MMPBSA.py.MPI
failed to detect this error and ended up in some endless loop, which
would explain the 100 % CPU usage. A sketch of the kind of guard I have
in mind is shown below.
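Something like the following would avoid the hang. This is only a sketch
of the idea, not MMPBSA.py's actual control flow; the error signature it
looks for is taken from the mdout excerpt above:

# Sketch: after the sander call returns, inspect the mdout for the
# rfree error signature instead of waiting for results indefinitely.
def mdout_has_input_error(path):
    with open(path) as f:
        return 'Error decoding variable' in f.read()

if mdout_has_input_error('_MMPBSA_complex_gb.mdout.0'):
    raise RuntimeError('sander reported an input error; aborting')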
Btw: In `utils.sandercalc`, `os.system` is used to run the sander
process, which is considered a deprecated approach. One could think
about using Python's subprocess module in the future; one advantage is
being able to capture the stdout/stderr of the subprocess, as sketched
below.
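For example (a sketch only; the argument list is copied from the sander
command printed above, and the error handling is my suggestion, not
existing MMPBSA.py behavior):

import subprocess

# Run sander via subprocess instead of os.system, so that the exit code
# is available and stderr can be captured and reported.
args = ['/apps11/bioinfp/amber10+/bin/sander', '-O',
        '-i', '_MMPBSA_gb_decomp_com.mdin',
        '-o', '_MMPBSA_complex_gb.mdout.0',
        '-p', 'complex_unsolvated.prmtop',
        '-c', '_MMPBSA_dummycomplex.inpcrd.1',
        '-y', '_MMPBSA_complex.mdcrd.0',
        '-r', '_MMPBSA_.restrt.0']
proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    raise RuntimeError('sander failed with code %d:\n%s'
                       % (proc.returncode, err))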
Summary
=======
- Something irritated MMPBSA.py.MPI so that it opened/accessed/closed
files in the wrong order. This is difficult to debug due to the missing
Python tracebacks and a potential dependency on our NFS setup.
- Some input makes sander fail, which is not properly handled by
MMPBSA.py.MPI (endless loop). Also difficult to debug, but you may know
what's wrong with my input.
- Another problem with my input leads to the warning about parsing the
L-J 6-12 parameters.
Btw:
Why is ./AmberTools/src/mmpbsa_py/MMPBSA_mods/utils.py the one that is
relevant at runtime, while changes in
/lib/python2.6/site-packages/MMPBSA_mods/utils.py have no effect?
Shouldn't it be the other way around? (See below for how one can check
which copy is imported.)
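One can check which copy is actually imported with plain Python (nothing
MMPBSA-specific):

import sys
import MMPBSA_mods.utils

# __file__ reveals which utils.py was picked up; whichever directory
# appears earlier on sys.path shadows the other copy.
print(MMPBSA_mods.utils.__file__)
print(sys.path)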
Hope that someone can help me out here!
Thanks,
Jan-Philip
--
Jan-Philip Gehrcke
PhD student
Structural Bioinformatics Group
Technische Universität Dresden
Biotechnology Center
Tatzberg 47/49
01307 Dresden, Germany