Re: [AMBER] Temperature-Replica Exchange MD

From: Cruzeiro,Vinicius Wilian D <>
Date: Wed, 24 Oct 2018 15:00:01 +0000

Hello Gilberto,

An additional comment to what Alessandro said: pmemd.MPI requires at least 2 CPUs per replica. Therefore, the number of processors stated in your mpirun command (-np) needs to be at least twice the number of replicas given with the -ng flag.
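A quick sketch of that arithmetic (the replica count and groupfile name are taken from the script quoted below; this only echoes the command rather than launching it):

```shell
# pmemd.MPI needs at least 2 MPI ranks per replica, so the mpirun -np value
# must be at least 2 * (number of replica groups given with -ng).
NREPLICAS=38
NPROCS=$((2 * NREPLICAS))   # minimum -np for 38 replicas
echo "mpirun -np ${NPROCS} pmemd.MPI -ng ${NREPLICAS} -groupfile equilibrate.groupfile"
```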


Vinícius Wilian D Cruzeiro

PhD Candidate
Department of Chemistry, Physical Chemistry Division
University of Florida, United States

Voice: +1(352)846-1633

From: Alessandro Contini <>
Sent: Wednesday, October 24, 2018 10:53:00 AM
Subject: Re: [AMBER] Temperature-Replica Exchange MD

Hi Gilberto,
as far as I understand, your cluster has 12 CPUs per node and you
requested 4 nodes and 38 CPUs, for 38 replicas, each on a single CPU.
You might try the following header for your slurm batch file:

#SBATCH --account $project
#SBATCH --job-name hremd_amber
#SBATCH --time=24:00:00
#SBATCH --nodes=$nodes
#SBATCH --ntasks-per-node=$ppn

where $ppn = CPUs per node (12, in your case) and $nodes = number of nodes
(4, in your case).

For better efficiency, I would try to saturate the nodes, so you might
try using 3 nodes and 36 replicas, or 4 nodes and 48 replicas.
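A minimal batch-file sketch of the saturated 3-node case (the job name is a placeholder; the mpirun line mirrors Gilberto's command with the replica count adjusted, and only echoes the command):

```shell
#!/bin/bash
# Hypothetical header for 3 fully saturated 12-CPU nodes: 36 replicas, one per CPU.
#SBATCH --job-name=remd36
#SBATCH --time=24:00:00
#SBATCH --nodes=3
#SBATCH --ntasks-per-node=12
NODES=3
PPN=12
NREPLICAS=$((NODES * PPN))   # 36 replicas, one MPI rank each
# Note: with pmemd.MPI's 2-ranks-per-replica requirement (see Vinicius's
# reply), -np would instead have to be $((2 * NREPLICAS)).
CMD="mpirun -np ${NREPLICAS} pmemd.MPI -ng ${NREPLICAS} -groupfile equilibrate.groupfile"
echo "${CMD}"
```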



On 24/10/2018 16:26, Gilberto Pereira wrote:
> Hello, dear Amber community.
> I am currently trying to run replica exchange MD simulations for a complex
> in explicit solvent. However, due to the low number of processors in my
> desktop, I must resort to a computer cluster, where I want to run the
> replicas on two different nodes. However, it returns an error stating that
> the number of processes I wish to run is too big for a single node.
> I cannot understand why it is unable to use the CPUs on both nodes and
> instead allocates all processes to a single node.
> Below, please find the script I am currently using:
> #! /bin/bash
> #SBATCH -p public
> #SBATCH -N 4
> #SBATCH --cpus-per-task=1
> #SBATCH --ntasks=12
> #SBATCH -t 24:00:00
> #SBATCH --job-name=REMD
> #SBATCH -o slurm.out
> #SBATCH -e slurm.err
> # Configuration
> module purge
> module load batch/slurm
> module load compilers/intel17
> module load mpi/openmpi-2.0.2.i17
> # Replace with your current amber16 (or 18) location
> source /b/home/isis/dbarreto/software/amber16_failsafe/
> RUNDIR="/b/home/isis/pereirag/Challenge_REMD/"
> cd ${RUNDIR}/
> mpirun -np 38 pmemd.MPI -ng 38 -groupfile equilibrate.groupfile
> -machinefile machines
> Hope you can help me figure out a solution.
> Thank you so much.
> Regards,
> Gilberto Pereira
> _______________________________________________
> AMBER mailing list

Prof. Alessandro Contini, PhD
Dipartimento di Scienze Farmaceutiche
Sezione di Chimica Generale e Organica "A. Marchesini"
Via Venezian, 21 (edificio 5 ovest, III piano) 20133 Milano
tel. +390250314480
skype alessandrocontini
Received on Wed Oct 24 2018 - 08:30:03 PDT