Re: [AMBER] pmemd.MPI with a single-entry group file?

From: Kevin Keane <kkeane.sandiego.edu>
Date: Thu, 22 Mar 2018 20:30:02 -0700

>
>
> On Thu, Mar 22, 2018, Kevin Keane wrote:
> >
> > I am trying to get Amber to work on our MPI cluster for one of our
> > researchers, and am running into a problem. Please forgive if I get any
> > chemical terminology wrong; I'm an IT guy.
> >
> > I am calling pmemd.MPI with a group file to take advantage of our
> cluster,
> > such as this:
> >
> > /opt/amber/amber16/bin/pmemd.MPI -ng 8 -groupfile <filename>
>
> A group file allows you to run multiple independent simulations from a
> single program instance. Each "replica" gets the same number
> of MPI threads (which is why the number of threads must be a multiple
> of the number of groups.
>
> First, you need to use mpirun (or its equivalent) to start pmemd.MPI:
> e.g.:
>
> mpirun -np 16 /opt/amber/amber16/bin/pmemd.MPI -ng 8 -groupfile <filename>
>
> This will assign two MPI threads to each of the 8 groups. I'm frankly
> not sure what will happen if you run pmemd.MPI without using mpirun.
> It may actually work (giving one MPI thread per replica), but that's
> almost by accident, since the code wasn't designed that way. Even if
> you must want one thread per replica, it's better to use mpirun with the
> number of threads set equal to the number of groups.
>

That was a copy-and-paste error on my part; I forgot to include the
mpirun part.

I'm actually using

mpirun -np <corecount> /opt/amber/amber16/bin/pmemd.MPI -ng 8 -groupfile
<filename>

The core count is always twice the number of lines in the groupfile; in
fact, the reason we use groups of eight is that each compute node has 16
cores.
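
In case it helps anyone else, here is roughly what one of our eight-line
groupfiles looks like (the sim*/ directory and file names below are
made-up placeholders for our real naming scheme; each line carries the
command-line flags for one simulation):

  -O -i sim1/mdin -o sim1/mdout -p sim1/prmtop -c sim1/inpcrd -r sim1/restrt
  -O -i sim2/mdin -o sim2/mdout -p sim2/prmtop -c sim2/inpcrd -r sim2/restrt
  (...and so on, one line per simulation, eight lines in total)

My wrapper derives the core count from the groupfile rather than
hard-coding it, along these lines:

  ng=$(wc -l < groupfile)    # number of groups = lines in the groupfile
  np=$(( 2 * ng ))           # two MPI threads per group
  mpirun -np "$np" /opt/amber/amber16/bin/pmemd.MPI -ng "$ng" -groupfile groupfile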

Just for background: we are actually running around 30-40 simulations at
the same time. I created a script that breaks these simulations into groups
of eight, plus one group with whatever is left over. In one particular
scenario, we had 33 simulations, which created the single-element group.
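
The batching logic itself is nothing fancy; a stripped-down sketch of it
(again, the sim*/ pattern is a placeholder):

  #!/bin/bash
  # Break the list of simulation directories into groupfiles of at most
  # eight lines each; the last groupfile gets whatever is left over.
  batch=8
  dirs=( sim*/ )        # one directory per simulation (placeholder pattern)
  n=0
  for (( i = 0; i < ${#dirs[@]}; i += batch )); do
      n=$(( n + 1 ))
      gf="groupfile.$n"
      : > "$gf"         # create/empty this batch's groupfile
      for d in "${dirs[@]:i:batch}"; do
          echo "-O -i ${d}mdin -o ${d}mdout -p ${d}prmtop -c ${d}inpcrd -r ${d}restrt" >> "$gf"
      done
  done

With 33 simulations this produces four 8-line groupfiles plus one
single-line groupfile, which is the case that triggered my question.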

>
> > When I have a group with only one element, pmemd.MPI fails with the error
> > message that the file mdin is missing,
>
> It doesn't really make much sense to set "-ng 1", since you could just put
> the single line of the group file onto the command line.
>
>
True, of course. I was hoping that I wouldn't have to special-case this
scenario.

>
> I suppose that the "-ng 1" case should do what you expect, and
> effectively treat the group file as the rest of the command line. And
> it's arguably a bug that it doesn't do so. But I'm guessing no one
> ever tried or tested what happens with -ng 1.
>
> There are examples and further details in Section 17.11 of the Amber
> 2017 Reference Manual. But for now, you'll have to special case the
> situation where the "number of replicas" is 1: just run pmemd.MPI in
> the usual manner (not via multipmemd and group files).
>

I had actually initially done that, and switched to using group files when
I discovered that both pmemd.MPI and sander.MPI run *dramatically* faster
with a group file. Running a single simulation without a group file took
around 1 hour 20 minutes (sander.MPI) or 40 minutes (pmemd.MPI), which
seems to be consistent with running the simulations on a desktop computer.
Running eight of the same simulations in parallel with a group file (with
-np 16 and -ng 8) took as little as three minutes. That big
difference made me suspicious, but as best I can tell, both versions
yielded valid results (though, not being a chemist, I can't be sure).
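
For now I'll special-case the single-simulation runs in the wrapper,
roughly like this (a sketch; $gf holds the groupfile path):

  ng=$(wc -l < "$gf")
  if (( ng == 1 )); then
      # A single simulation: run pmemd.MPI directly, passing the
      # groupfile's one line as ordinary command-line arguments.
      # ($(cat ...) is deliberately unquoted so the flags word-split.)
      mpirun -np 2 /opt/amber/amber16/bin/pmemd.MPI $(cat "$gf")
  else
      mpirun -np $(( 2 * ng )) /opt/amber/amber16/bin/pmemd.MPI \
          -ng "$ng" -groupfile "$gf"
  fi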

Thanks!


-- 
_______________________________________________________________________
Kevin Keane | Systems Architect | University of San Diego ITS |
kkeane.sandiego.edu
Maher Hall, 192 | 5998 Alcalá Park | San Diego, CA 92110-2492 | 619.260.6859
<%28619%29%20260-2298>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Mar 22 2018 - 21:00:03 PDT