I'm not sure how I should "run tests". I went to the directory
/home/elisa/amber14/test/cnstph_remd/Explicit_pHREM and ran the command

  mpirun -np 16 Run.pHremd

This is the result (repeated 16 times):
[agachon:05491] [[29828,1],0] ORTE_ERROR_LOG: A message is attempting to be
sent to a process whose contact information is unknown in file
rml_oob_send.c at line 104
[agachon:05491] [[29828,1],0] could not get route to [[INVALID],INVALID]
[agachon:05491] [[29828,1],0] ORTE_ERROR_LOG: A message is attempting to be
sent to a process whose contact information is unknown in file
base/plm_base_proxy.c at line 81
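(For what it's worth, from reading around, the Run.* scripts under
$AMBERHOME/test seem to be shell scripts meant to be executed directly, with
the MPI launcher supplied through the DO_PARALLEL environment variable, rather
than being handed to mpirun themselves. A minimal sketch of what I think the
intended invocation looks like, assuming a bash shell, Amber installed under
/home/elisa/amber14, and 16 MPI ranks being an acceptable count for this test:

  # assumptions: bash, Amber under /home/elisa/amber14, 16 ranks suitable here
  export AMBERHOME=/home/elisa/amber14
  export DO_PARALLEL="mpirun -np 16"   # the test scripts prepend this to sander/pmemd
  cd $AMBERHOME/test/cnstph_remd/Explicit_pHREM
  ./Run.pHremd

Or should the whole parallel suite instead be driven with "make test.parallel"
from $AMBERHOME/test?)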
On Wed, Feb 17, 2016 at 3:34 PM, Adrian Roitberg <roitberg.ufl.edu> wrote:
> Were you able to run OTHER sander or pmemd multi-core test cases?
>
> Run some of them as found in the test directory of the amber
> installation. That way we can figure out if this is a problem with
> constant pH or if it is related to amber itself.
>
> Thanks
> adrian
>
>
> On 2/17/16 9:22 AM, Elisa Pieri wrote:
> > Dear all,
> >
> > I'm experiencing problems running pH-REMD on my computer. I have this
> > machine: Intel® Xeon(R) CPU E5-2623 v3 @ 3.00GHz × 16, and I ran
> > simulations using pmemd.MPI with no problems (I have Amber14). Now I'm
> > using this command:
> >
> > mpirun -np 16 pmemd.MPI -ng 8 -groupfile prova.grpfile &
> >
> > Unfortunately, I get this error:
> >
> > Running multipmemd version of pmemd Amber12
> > Total processors = 16
> > Number of groups = 8
> >
> >
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> > --------------------------------------------------------------------------
> >
> > --------------------------------------------------------------------------
> > mpirun has exited due to process rank 12 with PID 4994 on
> > node agachon exiting improperly. There are two reasons this could occur:
> >
> > 1. this process did not call "init" before exiting, but others in
> > the job did. This can cause a job to hang indefinitely while it waits
> > for all processes to call "init". By rule, if one process calls "init",
> > then ALL processes must call "init" prior to termination.
> >
> > 2. this process called "init", but exited without calling "finalize".
> > By rule, all processes that call "init" MUST call "finalize" prior to
> > exiting or it will be considered an "abnormal termination"
> >
> > This may have caused other processes in the application to be
> > terminated by signals sent by mpirun (as reported here).
> >
> > --------------------------------------------------------------------------
> > [agachon:04981] 7 more processes have sent help message help-mpi-api.txt / mpi-abort
> > [agachon:04981] Set MCA parameter "orte_base_help_aggregate" to 0 to see
> > all help / error messages
> >
> > Can you help me?
> >
> > Elisa
>
> --
> Dr. Adrian E. Roitberg
> Professor.
> Department of Chemistry
> University of Florida
> roitberg.ufl.edu
> 352-392-6972
>
>
>
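P.S. Regarding the original command quoted above: my understanding is that the
file passed via -groupfile should contain one line of pmemd command-line
arguments per replica, so with -ng 8 it needs exactly 8 lines, and with -np 16
each replica would get 2 MPI ranks. Schematically, with illustrative file
names (not my actual prova.grpfile):

  # 8 lines, one per replica; file names below are placeholders
  -O -i mdin.001 -p prmtop -c inpcrd.001 -o mdout.001 -r restrt.001 -x mdcrd.001
  -O -i mdin.002 -p prmtop -c inpcrd.002 -o mdout.002 -r restrt.002 -x mdcrd.002
  ...
  -O -i mdin.008 -p prmtop -c inpcrd.008 -o mdout.008 -r restrt.008 -x mdcrd.008

(plus whatever constant-pH / replica-exchange options the manual specifies for
pH-REMD, which I have not shown here).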
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Feb 17 2016 - 07:00:05 PST