Re: [AMBER] pmemd.MPI fails to run

From: Fabian Gmail <fabian.glaser.gmail.com>
Date: Sun, 28 Dec 2014 21:20:03 +0200

OK, that's great. I think I understand; we will try both options, and if the flag does not help we will recompile.

Thanks!!

Fabian

Sent from my iPhone

> On 28 Dec 2014, at 21:07, Thomas Cheatham <tec3.utah.edu> wrote:
>
>
>> I am not sure about it we have successfully run amber 14 from PBS
>> without any PBS_NODEFILE variable, but I will try to use it.
>
> Any mpirun command needs a list of the nodes to run on; otherwise it
> defaults to the node the command was run from. There must be some way on
> your cluster to specify which nodes are assigned to the current job; the
> mpirun command itself does not have the built-in intelligence to
> figure it out automatically. Usually this list comes from the queuing
> system; if you are not running a queuing system, then you can create the
> "nodelist" by hand. Searching Google for "mpirun tutorial" shows some
> examples...
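For a cluster without a queuing system, a hand-written nodelist might look like the following. This is only a sketch: the node names, slot counts, and the pmemd.MPI input file names are placeholders, and the -hostfile flag shown is the OpenMPI spelling (MPICH uses -f).

```shell
# Hypothetical hand-made nodelist; node names and slot counts are examples.
cat > nodelist <<'EOF'
node01 slots=8
node02 slots=8
EOF

# OpenMPI reads the list via -hostfile (MPICH uses -f); the pmemd.MPI
# arguments below are the usual -O/-i/-o/-p/-c placeholders:
#   mpirun -np 16 -hostfile nodelist pmemd.MPI -O -i md.in -o md.out -p prmtop -c inpcrd
```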
>
>> What about Intel MPI?
>>
>>>> Can AMBER 14 work with Intel MPI generally?
>
> Yes, or even with the built-in MPI version that comes with AMBER 14 or
> with mpich; the AMBER reference manual discusses this clearly. For the
> Intel compile, there is an extra configure flag, -intelmpi.
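As a sketch, the build recipe might look like the lines below. This is an assumption based on the flag named above, not a verified command sequence; check the AMBER 14 reference manual for the exact configure options on your system.

```shell
# Hypothetical AMBER 14 build recipe (commands shown as comments, not run
# here): the -intelmpi flag selects the Intel MPI compiler wrappers in
# place of the default mpicc/mpif90.
# cd $AMBERHOME
# ./configure -intelmpi intel   # parallel build against Intel MPI
# make install
```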
>
> All of the compiles assume you have the matching mpicc and mpif90 in your
> path, and as mentioned previously, you want all the MPI commands to match.
> You showed this with the openmpi compile; you just neglected to specify
> the host_file that lists which nodes to run on. If running PBS, as Ross
> Walker mentioned, this list is usually provided in the file named by the
> variable $PBS_NODEFILE.
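Under PBS this usually reduces to a job script along the following lines. It is a sketch: the resource request, node names, and pmemd.MPI inputs are assumptions, and the fake-nodefile fallback exists only so the rank-count arithmetic can be tried outside the queue.

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=2
#PBS -l walltime=01:00:00

# Under PBS, $PBS_NODEFILE names a file with one line per allocated slot.
# Outside the queue we fake one so the arithmetic below still works.
PBS_NODEFILE=${PBS_NODEFILE:-$(mktemp)}
[ -s "$PBS_NODEFILE" ] || printf 'node01\nnode01\nnode02\nnode02\n' > "$PBS_NODEFILE"

# One MPI rank per slot listed in the nodefile
NP=$(wc -l < "$PBS_NODEFILE")
echo "mpirun -np $NP -hostfile $PBS_NODEFILE pmemd.MPI -O -i md.in -o md.out"
```

The final echo stands in for the real mpirun invocation, which would only succeed on the nodes PBS actually allocated.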
>
> --tec3
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber

Received on Sun Dec 28 2014 - 11:30:03 PST