Re: [AMBER] huge memory consumption when running nab simulations

From: Dominik Brandstetter via AMBER <>
Date: Mon, 12 Jun 2023 06:46:16 +0000

Dear Amber community,

a quick update on this topic. When I run the same simulation described in my previous message with hcp = 0, it completes with low memory consumption. The huge memory consumption that eventually makes the run fail occurs only with hcp = 4. This makes me think that my parallelization scheme works fine, and that there might be some sort of memory leak in the HCP implementation. What do you think?
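
For reference, the relevant part of my NAB driver looks roughly like the sketch below; the filenames, step count, and GB options here are placeholders rather than my exact input, and only the hcp value changes between the two runs:

    // minimal GB-MD driver (sketch only; names and values are placeholders)
    molecule m;
    float x[ dynamic ], f[ dynamic ], v[ dynamic ];

    m = getpdb( "system.pdb" );             // placeholder structure file
    readparm( m, "system.prmtop" );         // placeholder topology file
    allocate x[ 3*m.natoms ];
    allocate f[ 3*m.natoms ];
    allocate v[ 3*m.natoms ];
    setxyz_from_mol( m, NULL, x );

    mm_options( "gb=1, cut=999.0, ntpr=100" );
    mm_options( "hcp=4" );                  // hcp=0 -> low memory; hcp=4 -> OOM
    mme_init( m, NULL, "::Z", x, NULL );    // "::Z": no frozen atoms
    md( 3*m.natoms, 10000, x, f, v, mme );  // illustrative step count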

Thanks a lot in advance for your support!

Best regards,

From: Dominik Brandstetter <>
Sent: Tuesday, June 6, 2023 2:32 PM
To: <>
Subject: huge memory consumption when running nab simulations

Dear Amber community,

I am new to NAB, and I am running some implicit-solvent simulations with it on a cluster with 2x 64-core nodes and ~940 GiB of RAM per node.

I am using the attached files to start my simulations. They are nicely parallelized and run fine, but I notice huge memory consumption, which often causes the simulations to fail, as you can see in the following error message and job summary for the run:

Slurm Job_id=25014708 Name=3U_1_64 Failed, Run time 00:47:53, OUT_OF_MEMORY

Name : 3U_1_64
Cores : 64
Submit : 2023-06-05T08:42:07
Start : 2023-06-05T10:19:49
End : 2023-06-05T11:07:42
Reserved walltime : 03:00:00
Used walltime : 00:47:53
Used CPU time : 2-02:08:17
% User (Computation): 99.76%
% System (I/O) : 0.24%
Mem reserved : 900G

Max Mem used : 897.41G (node4113.gallade.os)

Max Disk Write : 194.56K (node4113.gallade.os)
Max Disk Read : 42.54M (node4113.gallade.os)

The system I am trying to simulate has 3696 atoms. Do you think this high memory consumption is normal for a system of this size? Or is there a way I can modify my input files, e.g., to reduce the RAM usage so that the run completes successfully?
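
For scale, my own back-of-envelope estimate: the per-atom arrays for N = 3696 atoms are tiny (3 x 3696 x 8 bytes, i.e. under 100 KB each), and even a dense N x N pairwise matrix of doubles would take only about 3696^2 x 8 bytes ~ 0.1 GiB, so I do not see how the system size alone could account for nearly 900 GiB.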

Thanks in advance.

Best regards,

AMBER mailing list
Received on Mon Jun 12 2023 - 00:00:02 PDT