[AMBER] Amber MPI CPU number problem

From: Donato Pera <donato.pera.dm.univaq.it>
Date: Mon, 22 Apr 2013 11:34:43 +0200 (CEST)

Hi,

I have a problem with Amber and MPI when I use more than two processors.
If I use these instructions:

DO_PARALLEL='mpirun -np 4'
[user.caliban 4CPU]$ mpirun -np 4 /home/SWcbbc/Amber12/amber12_GPU/bin/sander.MPI \
    -O -i mdin -p halfam0.top -c halfam0.mc.x -o halfam0.md.o
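
(For reference, a minimal sketch of the intended setup, assuming a bash
shell. Amber's parallel test suite reads DO_PARALLEL from the environment,
so it needs to be exported rather than assigned as a plain shell variable;
AMBERHOME below is illustrative, not taken from the original post:

    # assuming bash; AMBERHOME is illustrative, pointing at this install
    export AMBERHOME=/home/SWcbbc/Amber12/amber12_GPU
    export DO_PARALLEL='mpirun -np 4'
    # word-splitting expands $DO_PARALLEL into "mpirun -np 4"
    $DO_PARALLEL $AMBERHOME/bin/sander.MPI -O -i mdin -p halfam0.top \
        -c halfam0.mc.x -o halfam0.md.o
)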

I obtain these error messages:


[caliban.dm.univaq.it:31996] *** An error occurred in MPI_Comm_rank
[caliban.dm.univaq.it:31996] *** on communicator MPI_COMM_WORLD
[caliban.dm.univaq.it:31996] *** MPI_ERR_COMM: invalid communicator
[caliban.dm.univaq.it:31996] *** MPI_ERRORS_ARE_FATAL (your MPI job will
now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 31996 on
node caliban.dm.univaq.it exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[caliban.dm.univaq.it:31986] 3 more processes have sent help message
help-mpi-errors.txt / mpi_errors_are_fatal
[caliban.dm.univaq.it:31986] Set MCA parameter "orte_base_help_aggregate"
to 0 to see all help / error messages
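
(Aside: this MPI_ERR_COMM / invalid communicator pattern often means
sander.MPI was linked against a different MPI implementation than the
mpirun used to launch it. A minimal sanity check, assuming a Linux
machine with ldd on the PATH:

    which mpirun       # which launcher comes first on the PATH
    mpirun --version   # its MPI implementation and version
    # which MPI libraries the binary actually links against
    ldd /home/SWcbbc/Amber12/amber12_GPU/bin/sander.MPI | grep -i mpi

If the libraries reported by ldd do not belong to the same MPI install as
that mpirun, rebuilding Amber's parallel binaries against it, or invoking
the matching mpirun by full path, is the usual fix.)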


Thanks and Regards



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber