Hi Dmitry,
Someone familiar with the limitations of the various leap programs will have
to respond to you on this one. The simplest solution (I am guessing) is
to borrow a machine with huge RAM...
Regards - Bob Duke
----- Original Message -----
From: "Dmitry Osolodkin" <divanych.rambler.ru>
To: "AMBER Mailing List" <amber.ambermd.org>
Sent: Wednesday, June 01, 2011 9:36 AM
Subject: Re: [AMBER] RAM requirements
> Hi all again,
>
> I've tried to make a solvated protein in tleap with the command
> "solvatebox M TIP3PBOX 275" to start with a rather simple test system. It
> used 8 GB of RAM, then 7.5 GB of swap, and hung. This probably means that
> the solvatebox routine's memory usage is not optimal.
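>
> For reference, the tleap input looked roughly like this (the force field
> choice and file names are just placeholders):
>
>   # force field and file names below are placeholders
>   source leaprc.ff99SB
>   M = loadpdb protein.pdb
>   solvatebox M TIP3PBOX 275
>   saveamberparm M system.prmtop system.inpcrd
>   quit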
>
> Interestingly, GROMACS successfully built this system using only 6 GB of
> RAM, but I'm not familiar with that program and its force fields. The
> resulting PDB file is 1 GB. Is it possible to convert it into AMBER
> format, or do I need to solvate it from scratch?
>
> Best regards,
> Dmitry
>
> On 05/28/2011 03:09 AM, Robert Duke wrote:
>> Hi Dmitry,
>> I am not sure what changes were made in the Amber 11 code; I am fairly
>> familiar with the code base through 10, as I pretty much wrote the bulk of
>> it. As of Amber 10, the code only handled 999,999 atoms, due to some file
>> format limitations. I recollect that was changed in 11; I had even
>> advocated it be changed earlier. I really had to move on to other things
>> past Amber 10, as my funding to work on Amber died.
>>
>> Anyway, I am assuming you may well need to build a 64-bit executable for
>> the atom counts you are dealing with, but I would have to look at a few
>> things to be sure (I am on the road at the moment, without any source
>> around). One thing that happens is a total collection of all data in the
>> master, which means that the memory requirement could get out of hand at
>> really high atom counts.
>>
>> If you are really going to run 10-20 million atoms (sorry, I don't
>> remember exactly what you said), I would start with no more than 256
>> nodes and see what happens (aside from the master, that would be like
>> running less than 100,000 atoms per node, which is generally very
>> tractable for pmemd). I would then scale up, say adding 128 nodes per
>> trial, and see when other factors start giving you grief. One would
>> expect a lot of the performance to scale with atom count, but not
>> everything will, so there will be some performance issues for sure (like
>> building a structure called the CIT). There are also some very large data
>> distribution problems I fear you will hit, based on my (currently fuzzy)
>> detailed knowledge of how various things are done. I would be interested
>> to hear about your problems and may be able to make a few other
>> suggestions. These are interesting and solvable algorithm issues; fixing
>> them just did not get funded.
>>
>> If I were you, I would also look at scaling the problem up more slowly
>> rather than jumping from 1 million to 20 million atoms all at once - you
>> have a better chance of seeing the performance problems coming, rather
>> than just getting hammered by a system that is either crashing or running
>> extremely slowly.
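>>
>> For the node-count trials above, each run is just the usual pmemd.MPI
>> invocation with a different MPI task count (a sketch only - the input and
>> file names here are placeholders):
>>
>>   mpirun -np $NPROCS $AMBERHOME/bin/pmemd.MPI -O \
>>       -i md.in -p system.prmtop -c system.inpcrd \
>>       -o md_${NPROCS}.out -r md_${NPROCS}.rst
>>
>> Then compare the timing summary at the end of each mdout file as NPROCS
>> goes up from trial to trial.
>>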
>> Best wishes - Bob Duke
>>
>> -----Original Message-----
>> From: Dmitry Osolodkin [mailto:divanych.rambler.ru]
>> Sent: Friday, May 27, 2011 11:01 AM
>> To: AMBER Mailing List
>> Subject: Re: [AMBER] RAM requirements
>>
>> Dear Bob,
>>
>> Thank you for the detailed response.
>>
>> On 05/27/2011 07:05 PM, Robert Duke wrote:
>>> You are talking about some really big system sizes here; I am guessing you
>>> would want to start with 128 nodes or more (a non-GPU-code metric here; I
>>> really don't know about the GPU code - sorry to say).
>>
>> We will definitely not use the GPU code. Our first task is to write a
>> requirements specification for a supercomputer capable of performing such
>> a simulation, especially the RAM-per-CPU requirements. We'll start with a
>> large number of nodes, but not an extremely large one -- maybe 1024. Are
>> there any recommendations about a reasonable atoms-per-CPU ratio? Does it
>> depend on the system size?
>>
>> All the best,
>> Dmitry.
>>
>>>
>>> Best wishes - Bob Duke
>>>
>>> (bottom line - the memory numbers at startup from the master are at best a
>>> wild and low guess, due to the adaptive nature of the code)
>>>
>>> -----Original Message-----
>>> From: Jason Swails [mailto:jason.swails.gmail.com]
>>> Sent: Thursday, May 26, 2011 9:01 PM
>>> To: AMBER Mailing List
>>> Subject: Re: [AMBER] RAM requirements
>>>
>>> I think pmemd outputs the number of integers and floating-point numbers
>>> allocated for each simulation, so run a 0-step minimization and look for
>>> those numbers.
>>>
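>>> A minimal input for that would be something like the following sketch
>>> (if your build complains about maxcyc=0, maxcyc=1 gives essentially the
>>> same memory report):
>>>
>>>   0-step minimization, just to get the memory report
>>>    &cntrl
>>>     imin=1, maxcyc=0, ncyc=0,
>>>     ntb=1, cut=8.0,
>>>    /
>>>
>>> Run it through pmemd (or pmemd.MPI) with the usual -i/-p/-c flags and
>>> look near the top of the mdout file for the reported integer and real
>>> allocation counts.
>>>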
>>> Note that each thread, I believe, allocates about the same amount of
>>> memory as (a little bit more than) the single thread of a serial pmemd
>>> job. It has some atom-ownership maps in addition to the normal data
>>> structures, but those are only ~1/3 the size of the coordinate, velocity,
>>> force, and old-velocity arrays combined (so they leave a relatively small
>>> footprint).
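>>>
>>> As a rough back-of-the-envelope number (assuming double precision and
>>> taking the above at face value), those four 3N arrays alone come to
>>>
>>>   4 arrays x 3 components x 8 bytes x 1e7 atoms ~ 1 GB
>>>
>>> per MPI task for a 10-million-atom system, so expect a per-task baseline
>>> on the order of 1-2 GB at 10-20 million atoms before pairlist and
>>> bonded-term storage is counted.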
>>>
>>> HTH,
>>> Jason
>>>
>>> On Thu, May 26, 2011 at 5:08 PM, Dmitry Osolodkin
>>> <divanych.rambler.ru>wrote:
>>>
>>>> Dear AMBER developers,
>>>>
>>>> we are going to perform an MD simulation of an extremely large system
>>>> (ca. 10 million atoms, maybe twice as many). How can we calculate the
>>>> memory requirements per processor for such a task? We'll probably use
>>>> pmemd.
>>>>
>>>> Thanks in advance
>>>> Dmitry
>>>>
>>>> --
>>>> Dmitry Osolodkin.
>>>>
>>>>
>>>
>>>
>>>
>>
>
> --
> Dmitry Osolodkin
> Researcher
> Group of Computational Molecular Design
> Department of Chemistry
> Moscow State University
> Moscow 119991 Russia
> e-mail: dmitry_o.qsar.chem.msu.ru
> Phone: +7-495-9393557
> Fax: +7-495-9390290
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber