Dear AMBER community,
I have noticed that Amber24 uses more memory than Amber16. For example,
below is the memory usage reported for the same system with Amber16 and
Amber24 in conventional MD simulations:
Amber16:
| Dynamic Memory, Types Used:
| Reals 3298642
| Integers 3020621
| Nonbonded Pairs Initial Allocation: 18770556
| GPU memory information (estimate):
| KB of GPU memory in use: 557017
| KB of CPU memory in use: 118646
Amber24:
| Dynamic Memory, Types Used:
| Reals 3628426
| Integers 3731100
| Nonbonded Pairs Initial Allocation: 18770556
| GPU memory information (estimate):
| KB of GPU memory in use: 601960
| KB of CPU memory in use: 161416
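For reference, here is a quick back-of-the-envelope comparison of the two
GPU estimates above (just arithmetic on the numbers from the output files,
written out in Python):

    # Per-replica GPU memory estimates taken from the mdout excerpts above (KB).
    amber16_gpu_kb = 557017
    amber24_gpu_kb = 601960

    extra_kb = amber24_gpu_kb - amber16_gpu_kb    # 44943 KB
    extra_mb = extra_kb / 1024.0                  # ~43.9 MB per replica
    percent = 100.0 * extra_kb / amber16_gpu_kb   # ~8.1 %
    print(f"Amber24 needs ~{extra_mb:.1f} MB ({percent:.1f}%) more GPU memory per replica")

So each replica needs roughly 44 MB (about 8%) more GPU memory under Amber24.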
This difference does not matter for conventional MD. With replica
exchange molecular dynamics, however, we run out of memory. We have
REMD jobs (with 40 replicas) that run on GPUs without problems using
Amber16, but we cannot run (or restart) the same jobs with Amber24.
The Amber output file ends right after the Ewald parameters are
written, with no error message, while the Slurm output ends with a
message like "cudaMalloc Failed out of memory". The same systems do
run if we drop several replicas (for example, REMD with 30 replicas
completes without problems).
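To give a sense of why this tips us over the limit, here is a rough sketch
of the per-GPU total when several replicas share one card. The
replicas-per-GPU value below is only an assumption for illustration; it is
not taken from the output files or from our submission script:

    # Rough per-GPU total; replicas_per_gpu is an assumed value for illustration,
    # not a number from the Amber output or our Slurm script.
    amber16_mb = 557017 / 1024.0   # ~544 MB per replica
    amber24_mb = 601960 / 1024.0   # ~588 MB per replica
    replicas_per_gpu = 10          # e.g. 40 replicas spread over 4 GPUs (assumption)

    print(f"Amber16: ~{amber16_mb * replicas_per_gpu:.0f} MB per GPU")
    print(f"Amber24: ~{amber24_mb * replicas_per_gpu:.0f} MB per GPU")

With that kind of packing the difference is on the order of a few hundred
MB per GPU, which would be consistent with a job that fits under Amber16
but fails in cudaMalloc under Amber24.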
Is the difference in memory usage between Amber16 and Amber24 expected
behavior? If so, is there a way to reduce the memory usage of Amber24
to the level of Amber16?
Thanks.
Cagan.