Re: [AMBER] Compiling and running NAB programs in parallel using MPI

From: case <case.biomaps.rutgers.edu>
Date: Mon, 29 Jun 2009 13:19:10 +0100

On Fri, Jun 26, 2009, Kaushik Raha wrote:
> Hi Dr. Case,
>
> > mpiinit() and mpifinalize() are not required -- this was an error in the
> > printed version of the manual (from lulu.com), but is fixed in the
> > documentation in AmberTools version 1.2.
>
> Thanks for the clarification.
>
> >
> > First, I'm not clear which version of NAB you are using, and would
> > recommend upgrading to AmberTools 1.2 if you are not already doing
> > that. (Your description makes me think you are not running the current
> > version.)
> >
> > Second, I agree that the documentation for MPI is pretty sparse, and
> > assumes you understand how MPI coding works. The mpirun program will
> > indeed spawn off multiple copies of the same job. Division of work
> > among processors is controlled by the mytaskid variable, or the
> > get_mytaskid() function. So, there is no automatic parallelization --
> > the -mpi option just assists you in writing MPI programs.
> >
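
A minimal NAB sketch of that pattern (assuming get_numtasks() as the
task-count counterpart of get_mytaskid(); the loop body is illustrative):

    // split 100 independent work items across the MPI tasks; every
    // task runs this same loop, but each handles only the items
    // whose index maps to its own task id
    // (get_numtasks() assumed here as the counterpart of get_mytaskid())
    int i;

    for ( i = 0; i < 100; i = i + 1 ) {
        if ( i % get_numtasks() == get_mytaskid() ) {
            printf( "task %d handles item %d\n", get_mytaskid(), i );
        }
    }

Compiled with the -mpi option and launched with something like
"mpirun -np 4 ./myprog", all four copies execute the same program; the
guard above is what divides the iterations among them.
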
> > However, the nab energy routines *are* written for MPI, and I am
> > surprised by the behavior you report, that messages from the nab energy
> > routines are repeated n times. The code only prints energy results when
> > get_mytaskid() == 0 (see sff.c or eff.c). The codes in amber10/test/nab
> > (such as gbrna.nab) should work without modification with MPI, and
> > should show speedups (although they are so short that you might not see
> > it; see the programs in amber10/benchmarks/nab for longer examples).
> >
> > ....hope this helps....dac
> >
>
> I think it was a version issue. I compiled the 1.2 version and it seems
> to have worked. I was able to run gbrna & gbrna_long in parallel, and
> they scale up nicely with the number of processors. However, the
> speed-up doesn't seem as obvious in other examples -- for example, in
> the energy routines that use *xmin*. So I was wondering: is xmin also
> written for MPI?

It's the energy routines that are parallelized, but they take most of the
time, so xmin calculations should benefit as well. But I haven't done
benchmarks in this area; I'm mainly relying on reports from Istvan Kolossvary.
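
In NAB terms, the guard that keeps output from being repeated once per
task (the get_mytaskid() == 0 test in sff.c/eff.c mentioned above) is
just the following; a minimal sketch, where the ene variable is
illustrative:

    // report results from the master task only, so output is not
    // duplicated once per MPI process
    float ene;    // illustrative: e.g., an energy from mme() or xmin()

    ene = 0.0;
    if ( get_mytaskid() == 0 ) {
        printf( "final energy = %8.3lf\n", ene );
    }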

...dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber