Re: [AMBER] Help with optimizing the time of calculation

From: Christina Bergonzo via AMBER <amber.ambermd.org>
Date: Fri, 1 Aug 2025 08:00:13 -0400

Hi Lucas,

The timing will be slower on CPUs than on GPUs.
There are a couple of things you can try:

First, please make sure you are using pmemd.MPI, and that you are
sending it to a substantial number of CPUs.
Your SLURM script looks like it was adapted from a GPU run, and when I
switch from GPUs to CPUs on my cluster, I have to make changes.
For my cluster configuration, I need to know how many CPUs are available
on a particular node, and then allocate the number of nodes with "-N" and
the number of tasks (CPUs) with "-n".

In the example below, I'm using 8 CPUs, but you should check your resources
to see how many CPUs you have access to across how many nodes.

#!/bin/bash
#SBATCH -J ti_rna_ps_test
#SBATCH -n 8
#SBATCH -N 1
#SBATCH -t 06:00:00
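
Then launch the run inside that script. Here is a minimal sketch, assuming
an MPI launcher is available on your cluster (the input and file names are
placeholders, not your actual files):

mpirun -np 8 pmemd.MPI -O -i prod.in -p system.parm7 -c equil.rst7 \
    -o prod.out -r prod.rst7 -x prod.nc

On some SLURM installations you launch with "srun pmemd.MPI ..." instead
of mpirun; check how MPI jobs are expected to be run on your cluster.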

Also, you can hydrogen mass repartition your system.
After the prmtop is built, you can use the parmed program with the
'HMassRepartition' command. Then 'outparm' will write the repartitioned
prmtop to a file you can name HMR.parm7 or something similarly
identifiable.
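For example, as a minimal sketch (system.parm7 stands in for whatever your
topology file is actually called):

parmed system.parm7 << EOF
HMassRepartition
outparm HMR.parm7
EOF

You would then point pmemd.MPI at HMR.parm7 with -p in place of the
original topology.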

Hydrogen mass repartitioning works by shifting some of the mass of each
heavy atom onto the hydrogens bound to it. That lowers the frequency of
the bond vibrations involving hydrogen, and allows a 4 fs timestep (dt in
your input file) instead of a 2 fs or 1 fs timestep - so you would roughly
double your throughput in this scenario.
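
The only required input change is dt; the rest of this &cntrl fragment is
illustrative, and assumes SHAKE is already on (a 4 fs timestep needs
constraints on bonds involving hydrogen):

&cntrl
  irest=1, ntx=5,        ! continue from your previous restart
  dt=0.004,              ! 4 fs timestep, requires the HMR prmtop
  ntc=2, ntf=2,          ! SHAKE constraints on bonds involving hydrogen
  nstlim=12500000,       ! 50 ns at 4 fs per step
 /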

You can read more about Hydrogen mass repartitioning in the manual or here:
https://ambermd.org/tutorials/basic/tutorial12/index.php and here:
https://pubs.acs.org/doi/abs/10.1021/ct5010406

Hope this helps,
Christina

On Thu, Jul 31, 2025 at 5:03 PM Lucas Gasparello Viviani via AMBER <
amber.ambermd.org> wrote:

> Hello,
>
> Sorry for sending a message on this topic again, but I am still struggling
> with the calculation time of my MD simulations.
> My system is already equilibrated, and some time ago I successfully ran
> production simulations starting from it, using GPU resources.
> As I have no access to GPUs at the moment, I am trying to continue my
> simulations (50 ns) with pmemd.MPI, using CPUs.
>
> As this is a continuation of previous simulations, I cannot make
> significant modifications to my simulation setup that might result in
> improved performance.
> Therefore, I have tried to configure my script in several ways, aiming to
> optimize the use of the available nodes/CPUs to accelerate the calculation,
> but the estimated time to complete the simulations remains very high.
>
> Below is the last configuration I have tried to use:
>
> #!/bin/bash
> #SBATCH --partition=SP2
> #SBATCH --ntasks=32
> #SBATCH --cpus-per-task=1
> #SBATCH -J prod
> #SBATCH --time=192:00:00
>
> I would appreciate any tips to reduce the calculation time of the
> simulations.
>
> Thank you in advance!
>
> Kind regards,
> Lucas


-- 
-----------------------------------------------------------------
Christina Bergonzo
Research Chemist
Biomolecular Measurement Division, MML, NIST
-----------------------------------------------------------------
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber