Dear Alexey,
dear Amber community,
I tried to simulate the system you sent me (chrom40). For this purpose, I used AmberTools 20 and the input files you kindly shared with me.
I again received an out-of-memory failure message after trying to parallelize the simulation on 32 cores:
Nodes : node3606.doduo.os
Cores : 32
State : FAILED,OUT_OF_MEMORY
Submit : 2023-06-15T16:53:19
Start : 2023-06-15T19:32:20
End : 2023-06-16T00:02:30
Reserved walltime : 05:00:00
Used walltime : 04:30:10
Used CPU time : 5-23:09:35
% User (Computation): 99.57%
% System (I/O) : 0.43%
Mem reserved : 240G
Max Mem used : 232.22G (node3606.doduo.os)
Max Disk Write : 256.00K (node3606.doduo.os)
Max Disk Read : 42.95M (node3606.doduo.os)
I am really sorry to bother you again on this topic, but I really can't figure out why this is happening! As I mentioned, this out-of-memory issue occurs only when using the HCP scheme; the simulations run fine when I try hcp = 0.
Do you have any possible ideas?
Thanks a lot for the support!
Best,
Dominik
________________________________
From: alexey <alexey.cs.vt.edu>
Sent: Tuesday, June 13, 2023 8:15 PM
To: Dominik Brandstetter <Dominik.Brandstetter.UGent.be>
Cc: David A Case <david.case.rutgers.edu>
Subject: Re: [AMBER] huge memory consumption when running nab simulations
Dear Dominik,
I would still not recommend HCP for a 40,000-atom system; the method is really intended for larger structures.
The set-up for an HCP simulation is a bit tricky: if and when you get to using it, I would recommend testing it first on one of the well-tested structures, e.g. the one I sent, or a part of it if you want something smaller. If you still run into trouble then, please ping me.
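Concretely, the difference between the runs discussed in this thread comes down to a single flag in the NAB mm_options string; a minimal sketch (the hcp values are the ones reported in the thread, the other options are illustrative placeholders, not settings from anyone's actual sim.nab):

```
// sketch: the HCP on/off switch as an mm_options flag
// (gb and gamma_ln values here are placeholders)
mm_options( "gb=1, gamma_ln=0.01, hcp=0" );  // plain GB -- runs fine, low memory
mm_options( "gb=1, gamma_ln=0.01, hcp=4" );  // HCP variant -- the runs hitting OOM
```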
best, Alexey
On 2023-06-13 09:10, Dominik Brandstetter wrote:
Dear Alexey, dear Dac,
thanks a lot for your explanations!
I understand that at the moment we don't need HCP, but later on I will simulate larger systems (at least 10 times larger). It would be great if I could then also parallelize HCP simulations successfully. At the moment, I don't understand why my simulations are not working and consume such a high amount of memory.
Thank you very much!
Kind regards,
Dominik
________________________________
From: alexey <alexey.cs.vt.edu>
Sent: Tuesday, June 13, 2023 4:06 AM
To: David A Case <david.case.rutgers.edu>
Cc: Dominik Brandstetter <Dominik.Brandstetter.UGent.be>; AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] huge memory consumption when running nab simulations
Dear Dominik,
Dave is absolutely right -- you really do not need HCP for such a
relatively small structure. I would suggest considering plain vanilla
GB with a low gamma_ln (e.g. 0.01 ps^-1) for fast conformational sampling.
See e.g.
https://www.sciencedirect.com/science/article/pii/S000634951500003X
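In a NAB script, such a plain-GB, low-friction setup might look like the following minimal sketch (file names, step count, and temperature are placeholders; option names should be checked against the NAB chapter of the Amber manual):

```
// minimal NAB implicit-solvent MD sketch
// (input.pdb, prmtop, and the run length are placeholders)
molecule m;
float xyz[ dynamic ], f[ dynamic ], v[ dynamic ];

m = getpdb( "input.pdb" );           // read coordinates
readparm( m, "prmtop" );             // read topology/parameters
allocate xyz[ 3*m.natoms ];
allocate f[ 3*m.natoms ];
allocate v[ 3*m.natoms ];
setxyz_from_mol( m, NULL, xyz );

// plain GB (hcp left at its default of 0), weak Langevin coupling
mm_options( "cut=999.0, ntpr=100, gb=1, gamma_ln=0.01, temp0=300.0, dt=0.002" );
mme_init( m, NULL, "::Z", xyz, NULL );
md( 3*m.natoms, 10000, xyz, f, v, mme );
```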
HCP was designed to handle very large complexes with 100,000+ atoms.
Here is one such example, appropriate for the model:
http://people.cs.vt.edu/~onufriev/CODES/GBHCPO-EXAMPLE-CHROMATIN-FIBER.zip
best, Alexey
On 2023-06-12 20:57, David A Case wrote:
> On Mon, Jun 12, 2023, Dominik Brandstetter via AMBER wrote:
>>
>> a quick update on this topic. When I try to run the same simulation
>> described in my previous message using hcp = 0, the simulation runs
>> fine
>> with a low memory consumption. The huge memory consumption, that
>> eventually
>> leads to failure of the run, occurs only when I use hcp = 4. This
>> makes me
>> think that my parallelization scheme works fine, and that there might
>> be a
>> sort of memory leak in the HCP implementation? What do you think?
>
> Thanks for the extra info. I'm cc-ing this to Alexey Onufriev, who may
> have
> some insight here. (I don't have any personal experience with HCP).
> But I
> still suspect (hope) that one doesn't really need HCP for a system with
> only
> 4000 atoms.
>
> ....regards...dac
>
>> ________________________________
>> From: Dominik Brandstetter <Dominik.Brandstetter.UGent.be>
>> Sent: Tuesday, June 6, 2023 2:32 PM
>> To: amber.ambermd.org <amber.ambermd.org>
>> Subject: huge memory consumption when running nab simulations
>>
>> I am new to NAB, and I am running some implicit solvent simulations
>> with it on a cluster that has 2x 64-core nodes with ~940 GiB of RAM
>> per node.
>>
>>
>>
>> I am using the attached sim.nab and submit.sh files to start my
>> simulations. They are nicely parallelized and run, but I notice a
>> huge memory consumption, which often leads to failure of the runs,
>> as you can see in the following error message:
>>
>>
>>
>> Slurm Job_id=25014708 Name=3U_1_64 Failed, Run time 00:47:53,
>> OUT_OF_MEMORY
>>
>> Name : 3U_1_64
>> Cores : 64
>> State : OUT_OF_MEMORY
>> Submit : 2023-06-05T08:42:07
>> Start : 2023-06-05T10:19:49
>> End : 2023-06-05T11:07:42
>> Reserved walltime : 03:00:00
>> Used walltime : 00:47:53
>> Used CPU time : 2-02:08:17
>> % User (Computation): 99.76%
>> % System (I/O) : 0.24%
>> Mem reserved : 900G
>>
>> Max Mem used : 897.41G (node4113.gallade.os)
>>
>> Max Disk Write : 194.56K (node4113.gallade.os)
>> Max Disk Read : 42.54M (node4113.gallade.os)
>>
>>
>>
>> The system I am trying to simulate has 3696 atoms. Do you think this
>> high memory consumption is normal for a system of this size? Or is
>> there a way to modify e.g. my sim.nab or submit.sh file to reduce
>> the RAM usage so the run completes successfully?
>>
>> Dominik
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Jun 16 2023 - 06:30:03 PDT