[AMBER] Low performance in multi-pmemd.mpi

From: James Starlight <jmsstarlight.gmail.com>
Date: Mon, 23 Jun 2014 15:38:32 +0400

Dear Amber users!


Using pmemd.MPI from the latest Amber together with the newest version of
mpirun, I am getting very low performance in a replica-exchange simulation
run through multi-pmemd.MPI. Below you can find my launch bash script and
the output file from one replica.

export AMBERHOME=/opt/amber/amber14
pmemd="$AMBERHOME/bin/pmemd.
MPI"
sander="$AMBERHOME/bin/sander.MPI"


cd /globaltmp/novikov/amber/remd
mpirun -n 88 $pmemd -O -ng 22 -groupfile ./remd.groupfile2
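
For completeness, remd.groupfile2 holds one line of per-replica arguments
for each of the 22 groups, along these lines (the file names below are
illustrative placeholders, not my real ones):

-O -i mdin.rep001 -p prmtop -c inpcrd.rep001 -o remd.out.001 -r restrt.001 -x mdcrd.001 -inf mdinfo.001
-O -i mdin.rep002 -p prmtop -c inpcrd.rep002 -o remd.out.002 -r restrt.002 -x mdcrd.002 -inf mdinfo.002
... (and so on, 22 lines in total)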



| Average timings for last 100 steps:
| Elapsed(s) = 143.58 Per Step(ms) = 1435.78
| ns/day = 0.12 seconds/ns = 717888.57
|
| Average timings for all steps:
| Elapsed(s) = 143.58 Per Step(ms) = 1435.78
| ns/day = 0.12 seconds/ns = 717888.57
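
As a sanity check on those numbers (taking dt = 0.002 ps from the input
file below), the per-step time and the throughput are at least consistent
with each other; a one-liner to redo the conversion:

awk 'BEGIN { ms = 1435.78; dt = 0.002;            # ms per step, ps per step
             sns = (ms / 1000) * (1000 / dt);     # seconds per ns
             printf "seconds/ns = %.2f  ns/day = %.2f\n", sns, 86400 / sns }'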


Here I performed a simulation of a protein (a system of about 7000 atoms)
in implicit solvent (GB, igb=5, no periodic box), with positional
restraints applied to part of the protein.

Production REMD input file
 &cntrl
   irest=0, ntx=1,
   nstlim=500, dt=0.002,
   ntt=3, gamma_ln=1.0,
   temp0=300.00, ig=17461,
   ntc=2, ntf=2, nscm=1000,
   ntb=0, igb=5,
   cut=999.0, rgbmax=999.0,
   ntpr=100, ntwx=1000, ntwr=100000,
   nmropt=1, ioutfm=1,
   numexchg=1000,
   ntr=1, restraint_wt=1000.0, restraintmask=':1-285,325-460',
 /
 &wt TYPE='END'
 /
DISANG=chimera_chir.dat


Does the problem lie in the parallelization of this job, or is something
wrong with my input file?
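
In case it is relevant: with 88 ranks over 22 groups, each replica runs on
4 MPI ranks, so if the node has fewer than 88 cores the job is
oversubscribed. A minimal way I know to check the placement (assuming Open
MPI, whose mpirun accepts --report-bindings; the flag just prints each
rank's core binding):

nproc    # cores actually available on the node
mpirun -n 88 --report-bindings $pmemd -O -ng 22 -groupfile ./remd.groupfile2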

Thanks for help,

James
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Jun 23 2014 - 05:00:02 PDT