Re: [AMBER] Amber MPI CPU number problem

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 22 Apr 2013 22:15:37 -0400

On Mon, Apr 22, 2013 at 5:34 AM, Donato Pera <donato.pera.dm.univaq.it> wrote:

> Hi,
>
> I have problems with Amber and MPI when I use more than 2 processors.
> If I use these instructions:
>
> DO_PARALLEL='mpirun -np 4'
> [user.caliban 4CPU]$
> mpirun -np 4 /home/SWcbbc/Amber12/amber12_GPU/bin/sander.MPI -O -i mdin -p
> halfam0.top -c halfam0.mc.x -o halfam0.md.o
>
> I obtain these error messages:
>
>
> [caliban.dm.univaq.it:31996] *** An error occurred in MPI_Comm_rank
> [caliban.dm.univaq.it:31996] *** on communicator MPI_COMM_WORLD
> [caliban.dm.univaq.it:31996] *** MPI_ERR_COMM: invalid communicator
> [caliban.dm.univaq.it:31996] *** MPI_ERRORS_ARE_FATAL (your MPI job will
> now abort)
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 31996 on
> node caliban.dm.univaq.it exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [caliban.dm.univaq.it:31986] 3 more processes have sent help message
> help-mpi-errors.txt / mpi_errors_are_fatal
> [caliban.dm.univaq.it:31986] Set MCA parameter "orte_base_help_aggregate"
> to 0 to see all help / error messages
>
>
What is your input file? The fact that this appears to have failed in an
MPI_Comm_rank call leads me to believe that something is wrong with your MPI
installation, or that the mpirun you are calling comes from a different MPI
than the one Amber was built with.
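
One quick way to narrow this down is to compile and run a tiny program that
calls MPI_Comm_rank outside of Amber. This is only a minimal sketch; it
assumes the mpicc and mpirun on your PATH come from the same Open MPI
installation you built sander.MPI with, and the file name is just
illustrative:

/* mpi_rank_test.c -- minimal check that MPI_Comm_rank works with this
   MPI stack.  Build and run with the same MPI used to build Amber, e.g.:
       mpicc mpi_rank_test.c -o mpi_rank_test
       mpirun -np 4 ./mpi_rank_test                                     */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank = -1, size = -1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* the call that aborts in sander.MPI */
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

If this also aborts with MPI_ERR_COMM, the MPI installation itself is the
problem; if it prints four greetings, the more likely culprit is a mismatch
between the mpirun on your PATH and the MPI Amber was compiled against
("which mpirun" and "which mpif90" should point into the same installation).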

Good luck,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber