AMBER: Slow Processor Loads when Using PMEMD

From: Jonathan Suever <jsuever.uab.edu>
Date: Mon, 2 Jul 2007 19:28:48 -0500

I am currently running a simulation for a total of 10 ns. I have previously
run the simulation up to 5 ns and now would like to submit another job in
order to continue running the simulation for the remaining 5 ns. To perform
these calculations, I am using PMEMD installed on a cluster and I am
utilizing 16 processors.

I made a few changes to the input file for the second portion. These
involve changing the following values:

ntx = 5   ## read the formatted coordinate and velocity information from the first job
irest = 1 ## must be set to 1 for the velocities to be read in
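For context, the restart settings sit in the &cntrl namelist of the mdin file. A minimal sketch of what the second-leg input might look like (nstlim, dt, and the ensemble settings below are assumed values for a typical production run, not taken from my actual file):

```
 &cntrl
  imin   = 0,             ! run MD, no minimization
  irest  = 1,             ! restart: continue from previous run
  ntx    = 5,             ! read coordinates AND velocities from the restart file
  nstlim = 2500000,       ! 5 ns at dt = 0.002 (assumed)
  dt     = 0.002,
  ntc    = 2, ntf = 2,    ! SHAKE on bonds involving hydrogen (assumed)
  cut    = 8.0,           ! nonbonded cutoff in Angstroms (assumed)
  ntb    = 2, ntp = 1,    ! constant pressure, periodic box (assumed)
  ntt    = 1, temp0 = 300.0,
 /
```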

When I submit this job to the cluster, it runs with no errors and shows
that all 16 processors are in use. However, when the detailed status is
viewed, the highest load placed on any of the processors is around 0.20,
resulting in very slow calculation times.

When I change the ntx and irest values in the input file back to 1 and 0,
as used in the first run, the load on the processors returns to normal.

I was wondering whether anyone has experienced this same problem when
running PMEMD on a cluster and attempting to use velocity information from a
previous run.

The only other changes I made were to the shell script used to run the
job, so that my existing files would not be overwritten during the
process. I also set the input coordinate file to the output coordinate
file from the previous simulation. Below is the shell script I use to
execute the job (almost entirely the same as for the first run):

#!/bin/bash
#$ -S /bin/bash
#$ -m e
#$ -cwd
#$ -p 20
#$ -j y
#$ -N complex_pmemd
#$ -M *******.***.***
# Resource limits: number of CPUs to use
#$ -pe mpi 16
#$ -v MPIR_HOME=/opt/mpich/intel
#$ -v P4_RSHCOMMAND=ssh
#$ -v MPICH_PROCESS_GROUP=no
#$ -v CONV_RSH=ssh
## Prepare nodelist file for mdrun ...
#
echo "#####################################################################################"
echo " STARTED AT: $(date)"
echo ""
echo "NSLOTS: $NSLOTS"
echo "TMPDIR: $TMPDIR"
echo "$TMPDIR/machines file contains"
cat $TMPDIR/machines
#$ -V
export MPI_HOME=/opt/mpich/intel
export LD_LIBRARY_PATH=$MPI_HOME/lib:$LD_LIBRARY_PATH
export AMBER=/ibrixfs/apps/amber/intel/amber-9-64-mpich
export AMBERHOME=/ibrixfs/apps/amber/intel/amber-9-64-mpich
export PATH=$MPI_HOME/bin:$AMBER/exe:$PATH

MPIRUN=${MPI_HOME}/bin/mpirun
MDRUN=${AMBER}/exe/pmemd

export MYFILE=production

$MPIRUN -np $NSLOTS -machinefile $TMPDIR/machines $MDRUN -O \
    -i $MYFILE.in -o $MYFILE.out -p topology.top -c first_run.crd \
    -r $MYFILE.crd -x $MYFILE.mdcrd -inf $MYFILE.edr
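Since the slowdown appears only with ntx = 5 / irest = 1, one thing worth ruling out is a problem with the restart file itself: a formatted Amber restart has a title line, then a line with the atom count (and time), then the coordinates followed by the velocities. A minimal sketch of such a sanity check, using a tiny made-up two-atom file in place of the real first_run.crd (the real file name and contents are assumptions here):

```shell
# For illustration only: write a tiny two-atom restart-style file.
# In practice you would inspect the real first_run.crd instead.
cat > first_run.crd <<'EOF'
test restart
     2  5000.000
   1.0000000   2.0000000   3.0000000   4.0000000   5.0000000   6.0000000
   0.0100000   0.0200000   0.0300000   0.0400000   0.0500000   0.0600000
EOF

# Atom count is the first field of line 2.
natom=$(awk 'NR==2 {print $1}' first_run.crd)

# Data lines after the two header lines: with velocities present there
# should be roughly twice as many as for coordinates alone.
lines=$(($(wc -l < first_run.crd) - 2))
echo "atoms: $natom, data lines: $lines"
```

If the velocity section is missing or truncated, reading the file with ntx = 5 can behave unexpectedly, so checking that the line count matches the atom count is a cheap first diagnostic.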


Any help with this matter would be greatly appreciated. Thank you very
much.

-Jonathan Suever
Undergraduate Researcher
University of Alabama at Birmingham

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Wed Jul 04 2007 - 06:07:25 PDT