Re: [AMBER] MMPBSA.py.MPI

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 25 May 2010 08:32:37 -0400

What Bill said is correct. In the previous version there was a fairly severe
limitation in using sander.MPI: PBSA (which does not parallelize) could use
at most 3 processors (to run the complex, receptor, and ligand
simultaneously), and since the complex, receptor, and ligand are not the
same size, this gave far less than a 3x speed-up. That method started a
single MMPBSA.py thread which itself spawned a 3-thread sander.MPI process.
Now, multiple threads of MMPBSA.py.MPI are spawned across the MPI world. If
sander.MPI were still called with that many threads, each MMPBSA.py.MPI
thread would start several threads of its own, and the end result would be
thrashing of system resources as, in this case, 2 processors tried to
process 4 threads simultaneously. Thus, only serial programs are called in
our new version, yet the parallel scaling is drastically improved (the
primary beneficiaries being PB and nmode calculations, since, as Bill
stated, GB was already fast to begin with).
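
To make the frame-splitting idea concrete, here is a rough sketch of the
scheme (just an illustration using mpi4py, not the actual MMPBSA.py.MPI
source; run_serial_sander is a hypothetical stand-in for launching the
serial sander binary on one frame's input files):

    from mpi4py import MPI

    def frame_slice(total_frames, rank, size):
        # Split the frames as evenly as possible among the MPI ranks;
        # the first few ranks absorb any remainder.
        per_rank, remainder = divmod(total_frames, size)
        start = rank * per_rank + min(rank, remainder)
        stop = start + per_rank + (1 if rank < remainder else 0)
        return start, stop

    def run_serial_sander(frame, rank):
        # Hypothetical stand-in: the real workflow would launch the serial
        # sander binary (e.g. via a subprocess call) on this frame's inputs.
        print("rank %d handling frame %d with serial sander" % (rank, frame))

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    start, stop = frame_slice(100, rank, size)  # e.g. 100 frames total
    for frame in range(start, stop):
        run_serial_sander(frame, rank)

With one serial sander per rank there is nothing to over-subscribe, which is
why the scaling improves even though no individual calculation runs in
parallel.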

The only situation we have found where using sander.MPI is faster than the
current method is for very low processor counts on very low frame counts
(e.g. 1 or 2 frames using 2 processors with PBSA), and even there it's a
difference of only 1.5 to 2 minutes (not something that concerned us). Of
course all I really did was to rehash what Bill said in slightly different
words.

All the best,
Jason

On Tue, May 25, 2010 at 7:03 AM, Bill Miller III <brmilleriii.gmail.com> wrote:

> Yes, the new release of MMPBSA.py (and MMPBSA.py.MPI) no longer utilizes
> sander.MPI. MMPBSA.py.MPI now divides the total number of frames among the
> desired number of threads and runs that many separate serial sander
> calculations. Since a user can now, in theory, use one processor per frame,
> the difference between sander and sander.MPI is not substantial.
> Furthermore, sander.MPI gives no actual speedup when performing a PB
> calculation, so sander.MPI would only be useful for GB calculations, which
> are already computationally very cheap in comparison.
>
> -Bill
>
> On Tue, May 25, 2010 at 6:17 AM, Alan <alanwilter.gmail.com> wrote:
>
> > Hi there,
> >
> > So I am playing with the newly released MMPBSA.py.MPI, and the first
> > thing I noticed when running the tests with DO_PARALLEL (-np 2) is that
> > I got two 'sander' (serial version) processes running when I was
> > expecting sander.MPI with 2 threads.
> >
> > Is this behaviour correct?
> >
> > Thanks,
> >
> > Alan
> >
> > --
> > Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> > Department of Biochemistry, University of Cambridge.
> > 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> > http://www.bio.cam.ac.uk/~awd28
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
>
>
>
> --
> Bill Miller III
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-6715
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue May 25 2010 - 06:00:08 PDT