[AMBER] Running Amber on a GPU cluster

From: Aravind R <aravindspg27.gmail.com>
Date: Wed, 19 Feb 2020 17:53:35 +0530

Dear Amber users,
 I am running REMD simulations on an SGE cluster with 62 nodes, each node
having two Tesla K40m GPUs.
I use the following submission script to run the REMD simulation with
pmemd.cuda.MPI, following the same procedure as in "*Folding Simulations for
Proteins with Diverse Topologies Are Accessible in Days with a
Physics-Based Force Field and Implicit Solvent*":

#!/bin/bash
#$ -N D1_REMD
#$ -S /bin/bash
#$ -cwd
#$ -V
#$ -q all.q
#$ -pe mpirun 12
export AMBERHOME=/home/rana/aravindr/amber18
source $AMBERHOME/amber.sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_HOME/lib:$CUDA_HOME/lib64
export LD_LIBRARY_PATH=/softwares/cuda-8.0/lib:/softwares/cuda-8.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/openmpi-1.8.2/lib   # OpenMPI runtime libraries
cd $SGE_O_WORKDIR
export CUDA_VISIBLE_DEVICES="0,1"

mpirun=~/openmpi-1.8.2/bin/mpirun
sander_mpi=/home/rana/aravindr/amber18/bin/sander.MPI
pmemd_cuda=/home/rana/aravindr/amber18/bin/pmemd.cuda
pmemd_mpi=/home/rana/aravindr/amber18/bin/pmemd.MPI
cpptraj_mpi=/home/rana/aravindr/amber18/bin/cpptraj.MPI
pmemd_cuda_mpi=/home/rana/aravindr/amber18/bin/pmemd.cuda.MPI

echo "Current working directory: `pwd`"
$mpirun -np 12 $pmemd_cuda_mpi -ng 12 -groupfile remd.groupfile
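
The remd.groupfile passed above contains one pmemd command line per replica
(12 lines in total, one per replica). A minimal sketch of what it looks like
is below; the mdin/inpcrd/output file names are placeholders, and the actual
REMD settings in each mdin follow the protocol from the paper:

-O -i mdin.rep001 -p system.prmtop -c inpcrd.rep001 -o mdout.rep001 -r restrt.rep001 -x mdcrd.rep001 -inf mdinfo.rep001
-O -i mdin.rep002 -p system.prmtop -c inpcrd.rep002 -o mdout.rep002 -r restrt.rep002 -x mdcrd.rep002 -inf mdinfo.rep002
...
-O -i mdin.rep012 -p system.prmtop -c inpcrd.rep012 -o mdout.rep012 -r restrt.rep012 -x mdcrd.rep012 -inf mdinfo.rep012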

I see that in the paper they achieve about 800 ns/day for a protein of similar
size (73 residues), whereas I get only 77 ns/day (my system has 78 residues),
even after using hydrogen mass repartitioning (HMR) with the same number of
replicas. Is there something wrong with my pmemd.cuda.MPI setup, or is this
the performance I should expect from Tesla K40 cards?
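
For reference, the HMR topology was generated with ParmEd roughly as sketched
below (system.prmtop and system_hmr.prmtop are placeholder names), and dt was
then raised to 0.004 in the replica mdin files:

# build a hydrogen-mass-repartitioned topology so a 4 fs time step can be used
parmed system.prmtop <<EOF
HMassRepartition
outparm system_hmr.prmtop
EOF
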
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Feb 19 2020 - 04:30:02 PST