Re: [AMBER] Hamiltonian Replica Exchange with Amber 18

From: Bruno Falcone <brunofalcone.qo.fcen.uba.ar>
Date: Mon, 26 Nov 2018 18:22:48 -0300

Thanks, Jason! It does appear to be an issue with that, since I seem to
have both MPICH and Open MPI installed. (*) However, on the same
computer and with the same setup, Amber 16 did work.

If I run pmemd.cuda.MPI with MPICH's launcher, mpirun.mpich, the program
does start, but it then fails with the following error:

#########

  Running multipmemd version of pmemd Amber18
     Total processors = 8
     Number of groups = 8


Program received signal SIGSEGV: Segmentation fault - invalid memory
reference.
#########
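
For reference, the command there was the same one as in my original
message below, only with MPICH's launcher:

mpirun.mpich -np 8 pmemd.cuda.MPI -ng 8 -groupfile remd.gf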

Is there a way to find out which MPI implementation was used to build
Amber, or to reinstall it using the right one?

I know this is perhaps not an issue directly related to Amber, but could
you give me some pointers on how to troubleshoot it?
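
The only checks I could come up with myself (I'm not sure they are the
right approach) would be to inspect the binary and the launchers
directly and, if needed, rebuild against a single implementation.
Something like this (assuming pmemd.cuda.MPI is dynamically linked and
AMBERHOME is set):

# which MPI library the binary is actually linked against
ldd $AMBERHOME/bin/pmemd.cuda.MPI | grep -i mpi

# which launchers resolve first on PATH
which mpirun mpiexec

# rebuild with exactly one implementation's wrappers first on PATH
cd $AMBERHOME
./configure -cuda -mpi gnu
make install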

Regards!

Bruno


(*)
This is what I get when trying to find out which MPI version is installed:

mpirun --version
mpirun (Open MPI) 1.6.5

mpicc -v
mpicc for MPICH version 3.0.4
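
So mpirun comes from Open MPI while mpicc comes from MPICH. If this is
the usual Debian/Ubuntu alternatives setup, the active implementation
can apparently be switched with something like the following (I haven't
verified the group names on my system):

sudo update-alternatives --config mpirun
sudo update-alternatives --config mpi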


On 11/26/2018 01:20 PM, Jason Swails wrote:
> It looks like every MPI thread thinks it's the one-and-only master thread
> -- that is, none of the threads knows about each other.
>
> The only cause I've ever seen for this behavior is using an "mpirun" or
> "mpiexec" that comes from a *different* MPI than the one that was used to
> compile the program (for instance, using mpirun from mpich to run a program
> built with OpenMPI).
>
> This issue is being caused at the MPI layer (before ever reaching the pmemd
> code itself) -- it's not a problem with Amber per se.
>
> HTH,
> Jason
>
>
> On Fri, Nov 23, 2018 at 10:31 AM Bruno Falcone <brunofalcone.qo.fcen.uba.ar>
> wrote:
>
>> Hi! I'm trying to run a Hamiltonian replica exchange simulation using
>> Amber 18. The very same files work with Amber 16, but not with Amber
>> 18. I attach the input and groupfile.
>>
>> I run it with the command:
>>
>> mpirun -np 8 pmemd.cuda.MPI -ng 8 -groupfile remd.gf
>>
>> I get the following error in the terminal and no output files are
>> generated:
>>
>> #################
>>
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> setup_groups: MPI size is not a multiple of -ng
>> setup_groups: MPI size is not a multiple of -ng
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
>> --------------------------------------------------------------------------
>> mpirun noticed that the job aborted, but has no info as to the process
>> that caused that situation.
>> --------------------------------------------------------------------------
>> ##################
>>
>> Any help would be greatly appreciated.
>>
>> Thanks!
>>
>> Bruno
>>
>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Nov 26 2018 - 13:30:02 PST