Re: [AMBER] RAM requirements

From: Ross Walker <ross.rosswalker.co.uk>
Date: Wed, 1 Jun 2011 10:16:49 -0700

Hi Dmitry,

> I've tried to make a solvated protein in tleap with the command
> "solvatebox M TIP3PBOX 275" to start with a rather simple test system.
> It used 8 GB of RAM, then 7.5 GB of swap, and hung, which probably means
> that the solvatebox routine's memory usage is not optimal.

Yes, I was going to email about this as I tried it over the last few days.
I got a 10-million-atom system built in Leap, but it hit a malloc failure
when attempting to save the prmtop. So if you want to start simulating
systems of 10 million atoms or more, you will need to do a fair amount of
work before you even get to running the simulation. Nobody in the AMBER
community that I know of simulates systems much beyond 1 million atoms or
so, mostly because at sizes above this you get such woefully poor sampling,
with any MD code, that you really can't report anything more than anecdotal
results.

You could try sleap - maybe it handles this better. Alternatively, I'd add
updating Leap to the list of work needed to run your 10-million-atom
system.

> Interestingly, GROMACS successfully built this system using only 6 GB of
> RAM, but I'm not familiar with that program and its force fields. The
> resulting PDB file is 1 GB. Is it possible to convert it into AMBER
> format, or do I need to solvate it from scratch?

There may be some tools available to convert GROMACS files to AMBER prmtop
and inpcrd formats, although I have not tried any. Since you already have
the PDB, why not read it into Leap, use setBox to set the box dimensions,
and see what happens?
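
Untested, but something along these lines might do it (the file names are
just placeholders, and you will probably need to fix the GROMACS water
residue and atom names, e.g. SOL -> WAT, before Leap will recognize them):

  # or whichever protein force field you are using
  source leaprc.ff99SB
  # the 1 GB PDB that GROMACS produced (placeholder name)
  M = loadpdb solvated_from_gromacs.pdb
  # set the box dimensions from the coordinates instead of re-solvating
  setBox M vdw
  saveamberparm M big_system.prmtop big_system.inpcrd
  quit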

As for memory usage in pmemd, I have a 2-million-atom test case which,
running on 8 MPI tasks, reports:

| Dynamic Memory, Types Used:
|   Reals       63553958
|   Integers    69171816

| Nonbonded Pairs Initial Allocation: 58293216

| Running AMBER/MPI version on 8 nodes

That works out to roughly 748 MB per MPI task (about 63.5 million reals at
8 bytes each plus 69.2 million integers at 4 bytes each). You'd probably
need between 5 and 10 times this for a 10-million-atom system, so you are
probably looking at around 5 to 8 GB per MPI task. On an 8-way node that is
around 64 GB of RAM, which is certainly reasonable. Note, though, that the
memory usage will actually go down somewhat as you increase the number of
MPI tasks, so this would probably end up around 2 GB per core (a guess!) on
128 cores.

To be honest, though, most of the tools in the AMBER tool chain - Leap,
pmemd and then ptraj - would all benefit from being tuned specifically for
such large simulations, so if you plan to run these regularly it would
probably be to your long-term benefit to look at the codes and make some of
these improvements yourself. One example would be implementing a way to
read the prmtop and coordinate files in parallel, to avoid the large memory
spike currently needed on the master thread.

Good luck,
Ross

/\
\/
|\oss Walker

---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Jun 01 2011 - 10:30:02 PDT