Re: [AMBER] RAM requirements

From: Robert Duke <rduke.email.unc.edu>
Date: Fri, 27 May 2011 08:05:40 -0700

I have not looked at the code recently, but in the past at least, pmemd
would make an attempt during system setup to determine the space associated
with static allocations on the master node. The master's allocation is
generally a little higher than that of the other nodes. BUT the overall
allocation pattern is dynamic, and memory requirements may grow as the run
proceeds. There is also the issue of stack-based array allocations, done
originally for performance reasons, which can create an upper-end allocation
demand - at least in the past. If you put up a 4-processor run on an SMP
machine and monitor the memory usage, you will see the memory allocated in
the processes fluctuate. Also, please note that minimization and MD have
slightly different memory requirements. It is all hard to estimate in
advance because the algorithms are so adaptive in different parts of the
code, so the memory you use will change with problem size and the number of
processors. The GPU code is a complete black hole for me, as I was not
involved in that effort, but its requirements could also be very different.

The way I like to solve the problem of optimal node count is to run a small
series of jobs at increasing node counts (say 10-100 MD steps at least; you
will get better data from longer runs, since the adaptive code takes over
1000 steps to settle down). But if you have insufficient memory and force
the system to swap, you will be running so slowly that you are essentially
hung.
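
A dry-run sketch of such a series; the binary name pmemd.MPI is standard
for parallel AMBER, but the file names (mdin, prmtop, inpcrd) and process
counts here are placeholders - this just prints the commands you would
submit, so adapt it to your own files and queue system:

```shell
# Print (not run) a scaling sweep over MPI process counts.
# pmemd.MPI plus the input/output file names are illustrative assumptions.
for np in 16 32 64 128 256; do
  echo "mpirun -np $np pmemd.MPI -O -i mdin -o mdout.np$np -p prmtop -c inpcrd -r restrt.np$np"
done
```

Comparing the timings (and watching resident memory per process) across
these short runs tells you where adding nodes stops paying off.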

You are talking about some really big system sizes here; I am guessing you
would want to start with 128 nodes or more (a non-GPU metric here; I really
don't know about the GPU code - sorry to say).

Best wishes - Bob Duke

(bottom line - the memory numbers printed at startup by the master are at
best a wild, low guess, due to the adaptive nature of the code)

-----Original Message-----
From: Jason Swails [mailto:jason.swails.gmail.com]
Sent: Thursday, May 26, 2011 9:01 PM
To: AMBER Mailing List
Subject: Re: [AMBER] RAM requirements

I think pmemd outputs the number of integers and floating-point numbers
allocated for each simulation, so run a 0-step minimization and look for
those numbers in the output.
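
A minimal input file for such a 0-step minimization might look like the
sketch below (flags from memory - imin=1 selects minimization and maxcyc
caps the cycle count - so check the AMBER manual before relying on it):

```
zero-step minimization: setup and allocation report only (sketch)
 &cntrl
   imin = 1,
   maxcyc = 0,
 /
```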

Note that each thread, I believe, allocates about the same amount of memory
as (a little more than) the single thread of a serial pmemd job. It has some
atom-ownership maps in addition to the normal data structures, but those are
only ~1/3 the size of the coordinate, velocity, force, and old-velocity
arrays combined, which leaves a relatively small extra imprint.
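
To make that concrete for Dmitry's system, here is a back-of-envelope
estimate of just those per-atom double-precision arrays. This is purely
illustrative arithmetic based on the sizes named above; as Bob notes, the
real, adaptive footprint will be larger:

```python
# Back-of-envelope memory for the core per-atom arrays of a ~10M-atom system.
# Illustrative only; the actual pmemd footprint is adaptive and larger.
atoms = 10_000_000   # ca. 10 million atoms
comps = 3            # x, y, z components per atom
dbl = 8              # bytes per double-precision number

per_array = atoms * comps * dbl   # one array (coords, vel, frc, or old vel)
core = 4 * per_array              # the four arrays together
maps = core // 3                  # ownership maps, ~1/3 of the core arrays

print(f"one array  : {per_array / 2**20:.0f} MiB")
print(f"four arrays: {core / 2**30:.2f} GiB")
print(f"+ maps     : {(core + maps) / 2**30:.2f} GiB")
```

So the named arrays alone are on the order of a gigabyte for 10 million
atoms, before any of the dynamic allocations Bob describes.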

HTH,
Jason

On Thu, May 26, 2011 at 5:08 PM, Dmitry Osolodkin
<divanych.rambler.ru>wrote:

> Dear AMBER developers,
>
> we are going to perform an MD simulation of an extremely large system
> (ca. 10 million atoms, maybe twice that). How can we calculate the memory
> requirements per processor for such a task? We'll probably use pmemd.
>
> Thanks in advance
> Dmitry
>
> --
> Dmitry Osolodkin.
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri May 27 2011 - 08:30:02 PDT