[AMBER] Error when using mmpbsa_py_nabnmode in parallel

From: Ming Tang <m21.tang.qut.edu.au>
Date: Sun, 13 Mar 2022 10:36:01 +0000

Dear list,

I got the error below when calculating entropy with mmpbsa_py_nabnmode for 8 frames on 8 CPUs.
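For reference, the run was set up roughly as below (the file names and the frame selection are placeholders rather than my exact input; the &nmode variables themselves are the standard MMPBSA.py ones):

Sample nmode input, mmpbsa.in
&general
   startframe=1, endframe=17001, interval=1,
/
&nmode
   nmstartframe=1, nmendframe=8, nminterval=1,
   maxcyc=10000, drms=0.001,
/

   mpirun -np 8 MMPBSA.py.MPI -O -i mmpbsa.in -o FINAL_RESULTS_MMPBSA.dat \
          -sp solvated.prmtop -cp complex.prmtop -rp receptor.prmtop \
          -lp ligand.prmtop -y prod.nc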

Loading and checking parameter files for compatibility...
cpptraj found! Using /pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/bin/cpptraj
mmpbsa_py_nabnmode found! Using /pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/bin/mmpbsa_py_nabnmode
Preparing trajectories for simulation...
--------------------------------------------------------------------------
A process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.

The process that invoked fork was:

  Local host: [[49009,1],0] (PID 53062)

If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.

17001 frames were processed by cpptraj for use in calculation.
8 frames were processed by cpptraj for nmode calculations.

Running calculations on normal system...

Beginning nmode calculations with /pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/bin/mmpbsa_py_nabnmode
  calculating complex contribution...
[cl4n094:52718] 1 more process has sent help message help-opal-runtime.txt / opal_init:warn-fork
[cl4n094:52718] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
[cl4n094:52718] 6 more processes have sent help message help-opal-runtime.txt / opal_init:warn-fork
Line minimizer aborted: step at lower bound 1e-20
Line minimizer aborted: rounding error
Line minimizer aborted: rounding error
Line minimizer aborted: rounding error
Line minimizer aborted: step at lower bound 1e-20
Line minimizer aborted: step at lower bound 1e-20
Line minimizer aborted: step at lower bound 1e-20
...
Line minimizer aborted: rounding error
Line minimizer aborted: step at lower bound 1e-20
Line minimizer aborted: rounding error
Line minimizer aborted: (brackt && (stp<=MIN(stx,sty) || stp>=MAX(stx,sty))) ||
                        (dx * (stp-stx) >= 0) || stpmax < stpmin
brackt = 0
stp = 1
stx = 0
sty = 0
stpmin = 0
stpmax = 5
dx = 24.795
Line minimizer aborted: step at lower bound 1e-20

  File "/pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/bin/MMPBSA.py.MPI", line 108, in <module>
    app.parse_output_files()
  File "/pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/lib/python2.7/site-packages/MMPBSA_mods/main.py", line 944, in parse_output_files
    self.INPUT['verbose'], self.using_chamber)
  File "/pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/lib/python2.7/site-packages/MMPBSA_mods/amber_outputs.py", line 1007, in __init__
    self.delta2()
  File "/pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/lib/python2.7/site-packages/MMPBSA_mods/amber_outputs.py", line 1239, in delta2
    self.data[key] = [self.com.data[key].avg() - self.rec.data[key].avg() -
  File "/pkg/suse12/software/Amber/18-foss-2019b-AmberTools-19-patchlevel-12-17-Python-2.7.16/lib/python2.7/site-packages/MMPBSA_mods/amber_outputs.py", line 187, in avg
    return (sum(self) / len(self))
ZeroDivisionError: integer division or modulo by zero
Error occured on rank 0.
Exiting. All files have been retained.
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
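
(I understand the fork() warning above can be silenced as the message itself suggests; a minimal sketch, using Open MPI's usual MCA conventions:

   export OMPI_MCA_mpi_warn_on_fork=0
   # or equivalently on the command line:
   mpirun --mca mpi_warn_on_fork 0 -np 8 MMPBSA.py.MPI ...

so I assume the warning by itself is not the real problem here.)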

It works fine with the serial version when I calculate only one frame on one CPU. Does this mean I did not install the parallel version correctly?
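
Is there a quick way to confirm the parallel build itself? I was thinking of a sanity check along these lines (assuming MMPBSA.py.MPI uses mpi4py, and that mpi4py and mpirun should come from the same Open MPI installation):

   which MMPBSA.py.MPI mpirun
   python -c "from mpi4py import MPI; print(MPI.Get_library_version())"

but I am not sure whether that is sufficient.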

Thanks a lot,
Tammy
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber