On Thu, Jul 24, 2025, Lucas Gasparello Viviani via AMBER wrote:
>
>I am writing to ask for your help with optimizing the calculation time
>of my MD simulations.
>My system is already equilibrated and some time ago I successfully ran
>production simulations starting from it, using GPU resources.
>However, I have no access to GPUs at this moment.
>So, I am trying to continue my simulations (50 ns) with Amber22's
>pmemd.MPI, using the CPUs available on an HPC cluster.
>I have already tried configuring my script in several ways, aiming to
>make the best use of the available nodes/CPUs and speed up the calculation,
>but the estimated time to complete the simulations remains very high (>1000
>hours).
>
>#!/bin/bash
>#SBATCH --partition=SP2
>#SBATCH --ntasks=2 # number of tasks / MPI processes
>#SBATCH --cpus-per-task=16 # number of OpenMP threads per process
>#SBATCH -J equil
>#SBATCH --time=192:00:00
>
>If anyone has any tips to help me reduce the calculation time of the
>simulations, I would be very grateful.
Just to add to what others have said: pmemd.MPI is parallelized via MPI, not
OpenMP. So having 16 OpenMP threads does no good at all: what you want is
somewhere between 16 and 32 MPI processes (ranks). If the comments above are
correct, you probably want ntasks to be much bigger, and cpus-per-task to be 1.
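As a sketch, a corrected batch script along those lines might look like the
following (the partition, rank count, and input/output file names such as
prod.in and system.prmtop are placeholders to adapt to your cluster and run):

#!/bin/bash
#SBATCH --partition=SP2          # same partition as before
#SBATCH --nodes=1                # keep all ranks on one node if possible
#SBATCH --ntasks=32              # one MPI rank per CPU core
#SBATCH --cpus-per-task=1        # pmemd.MPI does not use OpenMP threads
#SBATCH -J prod
#SBATCH --time=192:00:00

# File names below are placeholders; use srun instead of mpirun
# if your site requires it.
mpirun -np $SLURM_NTASKS pmemd.MPI -O -i prod.in -o prod.out \
    -p system.prmtop -c prev.rst7 -r prod.rst7 -x prod.nc

Note that pmemd.MPI scaling usually falls off well before you reach very
large rank counts, so it is worth benchmarking a few short runs at, say, 16
and 32 ranks before committing to the full 50 ns.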
That said, a lot depends on the size of your system, which GPUs you used
before, and what the actual timings (GPU vs CPU) are, say in ns/day.
People on the list might then be able to say whether what you are seeing is
off-base or not. Generally, CPU runs will be much slower than GPU runs.
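For reference, pmemd prints average timings at the end of the mdout file, so
you can pull the ns/day figure from a finished (or partially finished) run
with something like this (prod.out is a placeholder output name):

grep "ns/day" prod.out | tail -1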
...good luck...dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Aug 02 2025 - 14:00:03 PDT