[AMBER] MPI_ABORT error

From: Amber mail <amber.auc14.gmail.com>
Date: Fri, 24 Jul 2015 14:28:02 +0200

Dear AMBER community,

I was running a parallel MD simulation with pmemd.MPI from AMBER12 on a
cluster, and I got the error below:

--------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 21536 on
> node e01 exiting improperly. There are two reasons this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
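From what I have read, this mpirun message is Open MPI's generic report
whenever any rank calls MPI_Abort, so the real cause is usually printed
by pmemd.MPI itself (for example at the end of the mdout file, or in the
job's stderr) just before it aborts. As a rough illustration only (a
hypothetical abort_demo.c, not pmemd code), a minimal C program that
reproduces exactly this message would be:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* An application-level failure is reported first, then the
           process aborts; the abort is what produces the text quoted
           above. */
        fprintf(stderr, "rank 0: fatal error, aborting\n");
        MPI_Abort(MPI_COMM_WORLD, 1); /* -> "MPI_ABORT was invoked on
                                         rank 0 ... with errorcode 1" */
    }

    /* Never reached on rank 0; the other ranks are killed by mpirun. */
    MPI_Finalize();
    return 0;
}

So the quoted text only says that rank 0 aborted with errorcode 1; the
actual error message should appear earlier in the output files.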

I have checked my input files and everything seems to be correct.

I would appreciate it if you could help me.

Thanks for your time!

Best Regards,
Alaa
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber