[AMBER] Huge memory consumption when running NAB simulations

From: Dominik Brandstetter via AMBER <amber.ambermd.org>
Date: Tue, 6 Jun 2023 12:32:47 +0000

Dear Amber community,

I am new to NAB, and I am running some implicit-solvent simulations with it on a cluster with 2x 64-core nodes and ~940 GiB of RAM per node.

I am using the attached sim.nab and submit.sh files to start my simulations (rough sketches of both are included below, in case the attachments get stripped). The runs parallelize nicely and execute fine, but their memory consumption is huge and often makes them fail, as you can see in the following Slurm report for one run:

Slurm Job_id=25014708 Name=3U_1_64 Failed, Run time 00:47:53, OUT_OF_MEMORY

Name : 3U_1_64
Cores : 64
State : OUT_OF_MEMORY
Submit : 2023-06-05T08:42:07
Start : 2023-06-05T10:19:49
End : 2023-06-05T11:07:42
Reserved walltime : 03:00:00
Used walltime : 00:47:53
Used CPU time : 2-02:08:17
% User (Computation): 99.76%
% System (I/O) : 0.24%
Mem reserved : 900G

Max Mem used : 897.41G (node4113.gallade.os)

Max Disk Write : 194.56K (node4113.gallade.os)
Max Disk Read : 42.54M (node4113.gallade.os)

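Since attachments are sometimes stripped from the archive, here is roughly what the two files look like. These are minimal sketches only: the input file names, option values, step count, and module name below are placeholders, not my exact settings.

sim.nab (Langevin MD with GB implicit solvent):

    // Minimal NAB driver: GB implicit-solvent MD (placeholder values).
    molecule m;
    float x[ dynamic ], f[ dynamic ], v[ dynamic ];

    m = getpdb( "system.pdb" );           // placeholder file names
    readparm( m, "system.prmtop" );
    allocate x[ 3*m.natoms ];             // coordinates
    allocate f[ 3*m.natoms ];             // forces
    allocate v[ 3*m.natoms ];             // velocities
    setxyz_from_mol( m, NULL, x );

    // gb=1: GB solvent; cut=999 is effectively no cutoff;
    // gamma_ln/temp0: Langevin thermostat at 300 K.
    mm_options( "cut=999.0, ntpr=100, nsnb=99999, gb=1, diel=C, gamma_ln=5.0, temp0=300.0, dt=0.002, rattle=1" );
    mme_init( m, NULL, "::Z", x, NULL );
    md( 3*m.natoms, 1000000, x, f, v, mme );   // placeholder step count

submit.sh (Slurm script requesting one full node):

    #!/bin/bash
    #SBATCH --job-name=3U_1_64
    #SBATCH --nodes=1
    #SBATCH --ntasks=64
    #SBATCH --mem=900G
    #SBATCH --time=03:00:00

    module load amber                          # placeholder module name
    # sim was compiled beforehand with the MPI-enabled nab compiler (mpinab)
    mpirun -np $SLURM_NTASKS ./sim > sim.out
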
The system I am trying to simulate has 3696 atoms. Do you think such high memory consumption is normal for a system of this size? Or is there a way to modify, e.g., my sim.nab or submit.sh file to reduce the RAM usage so that the run completes successfully?
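
For example, I have wondered (though I am not sure it is correct) whether replacing the effectively infinite cutoff with a finite one would shrink the nonbonded pair list, something like:

    mm_options( "cut=25.0, nsnb=25, rgbmax=25.0" );   // finite cutoff; values are guesses

or whether running fewer MPI ranks per node would reduce per-node memory, since I assume each rank keeps its own copies of some arrays:

    #SBATCH --ntasks=16    # fewer ranks per node, leaving RAM headroom

But I do not know whether either of these addresses the real cause.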

Thanks in advance.

Best regards,

Dominik

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber

Received on Tue Jun 06 2023 - 06:00:03 PDT