[AMBER] md simulations running very slow

From: Rosa Teijeiro Juiz via AMBER <amber.ambermd.org>
Date: Fri, 8 Nov 2024 14:54:14 +0100

Dear Amber users,

I am writing to ask for help optimizing the runtime of my MD simulations.
So far I have only worked with short simulations because I was still
learning how to use the software, but now I am trying to run a 500 ns MD
simulation of my protein, and my problem is that the estimated time for
the run (as reported in my .mdinfo file) is over 8,000 hours, which
honestly doesn't make sense to me.
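For context, here is my back-of-the-envelope arithmetic (the 1.5 ns/day
throughput below is a made-up example, not my actual number; the real
figure is on the ns/day line that sander writes to the .mdinfo file):

```python
# Rough wall-time estimate for a long MD run, given a throughput in ns/day.
# The throughput value used below is a hypothetical example.

def hours_needed(total_ns, ns_per_day):
    """Wall-clock hours to simulate total_ns at ns_per_day throughput."""
    return total_ns / ns_per_day * 24.0

# If serial sander managed about 1.5 ns/day on a solvated protein
# (assumed value), 500 ns would take on the order of 8,000 hours.
print(hours_needed(500, 1.5))
```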

I tried submitting the MD run to a cluster (instead of running it on my
local computer) to see whether this would speed things up, but at first
the estimate was the same:

#!/bin/bash

#SBATCH --job-name=...
#SBATCH --mail-user=...
#SBATCH --mail-type=end
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=8000
#SBATCH --time=50:00:00
#SBATCH --qos=standard

module add AmberTools/23.6-foss-2023a

cd /scratch/MD_simulations/cotb2/ABA107


AmberTools/23.6-foss-2023a/bin/sander -O -i 03_Prod.in -o 03_Prod.out \
    -p solvated_cotb2.parm7 -c 02_Heat.ncrst \
    -r 03_Prod.ncrst -x 03_Prod.nc -inf 03_Prod.info &

I tried changing some parameters (number of CPUs, number of tasks, ...),
but the simulation still did not run as it should. I have also read that
for parallel runs I should use sander.MPI rather than plain sander (or
pmemd), but overall I am confused about whether this is actually the
right way to speed up my simulation, and if it is, how I should write my
script: how many tasks to set, how many CPUs, and so on.
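For what it's worth, my guess at what a parallel submission script would
look like is below (only a sketch based on what I have read: the module
name and file names are copied from my script above, and the task count
of 8 is an arbitrary assumption on my part):

```shell
#!/bin/bash
#SBATCH --job-name=...
#SBATCH --ntasks=8            # number of MPI ranks (assumed value)
#SBATCH --mem-per-cpu=8000
#SBATCH --time=50:00:00
#SBATCH --qos=standard

module add AmberTools/23.6-foss-2023a

cd /scratch/MD_simulations/cotb2/ABA107

# srun launches one sander.MPI process per SLURM task; no trailing '&',
# so the batch job waits for the run to finish before exiting.
srun sander.MPI -O -i 03_Prod.in -o 03_Prod.out \
    -p solvated_cotb2.parm7 -c 02_Heat.ncrst \
    -r 03_Prod.ncrst -x 03_Prod.nc -inf 03_Prod.info
```

Is something like this the right direction?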

If anyone could help me out that would be highly appreciated.

Thank you,

Rosa Teijeiro Juiz


Note:

In my .in file:

nstlim=250000000
dt=0.002
ntpr=50000
ntwx=50000

(in case changing any of these parameters could also help decrease the
simulation time...)
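As a sanity check on those values (a quick sketch; dt is given in
picoseconds in Amber input files):

```python
# Sanity-check the production settings from the .in file above.
nstlim = 250_000_000   # number of MD steps
dt = 0.002             # timestep in ps
ntwx = 50_000          # steps between trajectory frames

total_ns = nstlim * dt / 1000.0   # ps -> ns; should come out to 500 ns
frames = nstlim // ntwx           # trajectory frames written (5000)

print(total_ns, frames)
```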
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Nov 08 2024 - 06:00:02 PST