[AMBER] invoking MPI_ABORT causes Open MPI to kill all MPI processes.

From: Pooja Kesari <pkesari88.gmail.com>
Date: Wed, 28 Sep 2016 17:08:28 +0530

Dear All,

*I am trying to run a production run:*
mpirun -np 8 sander.MPI -O -i prod.in -o prod_complex-solv.out -p
complex-pose8.prmtop -c protein_solv-equil.rst -r protein_solv-prod.rst -x
prod.mdcrd -ref protein_solv-equil.rst

*However, my program is terminating with an error:*

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 5123 on
node localhost.localdomain exiting improperly. There are two reasons this could
occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

What has gone wrong with this process?

-- 
Thanks & Regards,
Pooja Kesari
Research Scholar
Department Of Biotechnology
Indian Institute of Technology Roorkee
INDIA
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 28 2016 - 05:00:02 PDT