On Tue, May 6, 2014 at 8:07 PM, Amir Shahmoradi <a.shahmoradi.gmail.com> wrote:
> Dear Amber community,
>
> I have been trying to use MMPBSA module in AmberTools12 package for binding
> affinity calculations during the past weeks on TACC clusters.
> However, I constantly receive the following error as soon as calculations
> reach the PBSA part:
>
> CalcError: /opt/apps/intel11_1/mvapich2_1_6/amber/12.0/bin/mmpbsa_py_energy
> failed with prmtop
> /work/01902/ashahmor/mmpbsa/setup/1JIW_PI/antemmpbsa/1JIW_PI.prmtop!
> PB bomb in pb_setgrd(): Allocation aborted 0 0 0
>
This looks like you have run out of available memory.
>
> Similarly, when I turn on the entropy option (entropy=1), or use
> mmpbsa_py_nabnmode for entropy calculations, I get the following error:
>
This is consistent with having run out of memory.
> [snip]
>
> It seems that there is some inconsistency between the topology files
> generated by tleap and the topology files that MMPBSA and ptraj modules
> expect to have as input. In contrast to PBSA, GBSA runs with no problems or
> errors for any of the structures that I have tried.
>
If GBSA runs with no problems, then there is no topology file
inconsistency. Indeed, it suggests that limited memory is the main issue
(GBSA requires far less memory on average than either PBSA or entropy
calculations).
> Any help or suggestion to resolve these errors is highly appreciated.
>
Try to get access to more memory. Since you are running on a
supercomputer, check whether you can request more memory (some schedulers
treat available memory as a resource you can request). It is also common
to run out of memory when running in parallel. The memory requirements
grow linearly with the number of processors you request, since each
processor runs the same calculation on a different subset of frames. So if
building the PB grid for one frame takes 2 GB of memory on average,
running on 8 processors will consume 16 GB (the same argument applies to
nmode calculations, although entropy=1 should consume far less memory). If
you are running in parallel, see whether the error goes away when you run
on a single processor. If you want to optimize your use of CPUs, try
adding one processor at a time until the job fails, or, if you can monitor
memory usage, measure how much memory each thread takes and use that to
predict how many threads will fit on a node.
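As a rough sketch only (I don't know which TACC queue or scheduler you are
using, and the input/topology file names and memory numbers below are just
placeholders), a Slurm-style job script that requests memory explicitly
and first runs MMPBSA.py serially might look like:

    #!/bin/bash
    #SBATCH -N 1                # one node
    #SBATCH -n 1                # one task: serial MMPBSA.py
    #SBATCH --mem=16000         # ask for ~16 GB, if your site honors --mem
    #SBATCH -t 24:00:00

    MMPBSA.py -O -i mmpbsa.in -o results.dat \
        -cp complex.prmtop -rp receptor.prmtop -lp ligand.prmtop \
        -y trajectory.nc

If the serial run finishes cleanly, you can switch to the parallel
executable and add processors gradually, keeping (number of processors) x
(per-frame memory) below the node's memory, e.g.:

    mpirun -np 4 MMPBSA.py.MPI -O -i mmpbsa.in -o results.dat \
        -cp complex.prmtop -rp receptor.prmtop -lp ligand.prmtop \
        -y trajectory.nc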
HTH,
Jason
--
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber