Re: [AMBER] Parallel running problem of MMPBSA

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 3 Feb 2014 07:29:24 -0500

On Mon, Feb 3, 2014 at 4:35 AM, zhongqiao hu <zhongqiao.hu.gmail.com> wrote:

> I forgot to mention that MMPBSA.py.MPI is working for the system ras-raf
> in Amber tutorial A3. My current system is rather big: the receptor has
> about 9000 atoms and the ligand has about 6500 atoms. But I don't think
> the system size is the reason.
>

I do think the system size is the reason. MMPBSA.py.MPI is parallelized by
splitting the trajectory into equal-sized chunks of frames and having each
processor work on its own subset of the total trajectory. This means that
MMPBSA.py.MPI with 8 threads requests roughly 8 times the memory that
MMPBSA.py uses for a single frame, since each thread holds its own frame
(and its own PB grid) in memory at the same time.
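
The frame-splitting scheme can be sketched in a few lines (an assumed,
simplified version for illustration; MMPBSA.py.MPI's actual bookkeeping
lives in its own source):

```python
def frame_slice(n_frames, n_ranks, rank):
    """Return the (start, stop) frame indices handled by one MPI rank
    when frames are divided as evenly as possible."""
    base, extra = divmod(n_frames, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# 100 frames over 8 ranks: each rank gets 12 or 13 frames, and each
# rank sets up its own full PB calculation in memory simultaneously.
slices = [frame_slice(100, 8, r) for r in range(8)]
```

The point is that parallelism here multiplies the per-frame memory
footprint by the number of ranks, rather than spreading one frame's
work across them.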

A 15K-atom system sets up a very large (potentially enormous if the
structure is at all extended) PB grid on which to evaluate the potential.
Since the number of grid points grows as the cube of the grid's linear
dimension, the memory requirements of the PB grid grow very rapidly as
systems get large. I suspect that a single frame uses up more than 1/8 of
your total available RAM, leading to this error (a suspicion that is
strengthened by the fact that MMPBSA.py works in serial and MMPBSA.py.MPI
works for Ras-Raf).
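
A rough back-of-envelope illustrates the cubic scaling (the box size,
0.5 Å spacing, and single-array assumption are illustrative only; real PB
solvers keep several such arrays plus other work buffers):

```python
def pb_grid_bytes(box_length_angstrom, spacing=0.5, bytes_per_point=8):
    """Memory for ONE double-precision array on a cubic PB grid.
    Order-of-magnitude estimate only; solvers hold several arrays."""
    points_per_dim = int(box_length_angstrom / spacing)
    return points_per_dim ** 3 * bytes_per_point

small = pb_grid_bytes(80)   # ~80 A box: ~33 MB per array
big = pb_grid_bytes(160)    # doubling the linear dimension
# big // small == 8: memory grows with the cube of the linear size,
# so an extended 15K-atom complex can dwarf a compact Ras-Raf grid.
```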

I would suggest running MMPBSA.py in serial, watching how much RAM the
calculation consumes, and making sure that you don't ask for more
processors than your available RAM can support.
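
One way to do that measurement on a Unix machine is to wrap the serial run
and read the child's peak resident set size afterwards (the MMPBSA.py
command line shown in the comment is hypothetical; substitute your own
input files):

```python
import resource   # Unix-only stdlib module
import subprocess

def peak_rss_kb(cmd):
    """Run a command to completion and return the peak resident set size
    of child processes (kilobytes on Linux; macOS reports bytes)."""
    subprocess.run(cmd, check=True)
    return resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss

# Hypothetical serial run; use your actual input/topology files:
#   peak = peak_rss_kb(["MMPBSA.py", "-O", "-i", "mmpbsa.in"])
# Rule of thumb: n_threads * peak should stay well below total RAM.
```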

HTH,
Jason

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Feb 03 2014 - 04:30:03 PST