Hi Jiali,
> Thank you very much for your detailed explanation. I'm new at TACC. Your
> help made me quickly familiar with their environment. I am now about to
> run a benchmark with a 5 ns simulation of a protein/DNA complex. Sander.MPI
> would be used first. I would like to use 8 cores/node as you suggested.
> Could you please check whether my qsub script below is all right?
> Any further suggestions would be appreciated. (To shorten the message, I
> only show one line of the sander script as an example.)
You should probably use PMEMD here; you will get MUCH better performance
from it than from sander.MPI. In fact, sander probably won't scale beyond a
couple of nodes and may not show the same behavior as a function of the
number of cores per node as PMEMD does. PMEMD reads sander input and produces
sander output, so you don't need to change anything to use it other than the
name of the executable.
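For example, assuming your pmemd binary was built into the same amber9/exe
directory as your sander.MPI (adjust the path if not), the ibrun line in
your script would simply become:

  ibrun /share/home/00654/tg458141/local/amber9/exe/pmemd -O -i ./prod.in \
        -o prod1.out -p A_int_T_ctrl.top -c equil_5.restrt -x prod1.traj -r prod1.restrt

Everything else in the script stays the same.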
> #$ -pe 8way 32    # Requests 8 cores/node, 32/8 = 4 nodes total (16cpu)
This is the bit where things are a little confusing on Ranger and the syntax
here is not obvious.
Essentially the first part is correct: 8way means it will use 8 cores per
node (out of 16). The second number, however, specifies the total number of
threads you would have "if you were to use all 16 cores per node". So in
this case 8way 32 means use 8 cores per node on 2 nodes (32/16); i.e. the
second number is always 16x the number of nodes you actually want. So in
your case the calculation will run on a total of 2 nodes with 16 CPUs, which
seems to be what you expect based on the (16cpu) comment but differs from
the "4 nodes total" comment you have.
> #$ -q normal # Queue name
> #$ -l h_rt=24:00:00 # Run time (hh:mm:ss) - 24 hours
> set echo #{echo cmds, use "set echo" in csh}
> cd /share/home/00654/tg458141/sander/benchmarkA_int_T_ctrl0815
You will probably get better performance using the $WORK or $SCRATCH file
systems, and you won't be at risk of blowing your home directory quota. The
only issue is that these filesystems, especially $WORK, seem to be VERY
unreliable on Ranger right now and are forever going offline, so it may be
safer to write to the home directory - your choice...
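If you do go that route, the cd line would become something like the
following (the subdirectory name here is just an example; $WORK works the
same way):

  mkdir -p $SCRATCH/benchmarkA_int_T_ctrl0815
  cd $SCRATCH/benchmarkA_int_T_ctrl0815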
> #...
> ibrun /share/home/00654/tg458141/local/amber9/exe/sander.MPI -O -i
> ./prod.in -o prod1.out -p A_int_T_ctrl.top -c equil_5.restrt -x prod1.traj
> -r prod1.restrt
Just be careful you don't run out of quota space for the trajectory file.
Also, a note on performance in parallel (especially with pmemd):
ntt=0 > ntt=1 >> ntt=3
ntb=1 > ntb=2
Hence for maximum performance you probably want to run your production run
as an NVE simulation, that is with ntt=0 and ntb=1 (assuming you have
equilibrated your system for long enough, and equilibrated the pressure etc.).
Note that for NVE, to ensure good energy conservation, you should probably
set the SHAKE tolerance (tol) and the direct sum tolerance (dsum_tol) about
one order of magnitude tighter than the defaults.
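As a rough sketch of what the production mdin might then look like (the run
length, cutoff and output frequencies are just placeholders - keep whatever
you already use in prod.in; the tol and dsum_tol values assume the usual
1.0e-5 defaults, so check the manual for your version):

  5 ns NVE production (sketch)
   &cntrl
     imin=0, irest=1, ntx=5,
     nstlim=2500000, dt=0.002,
     ntb=1, ntt=0,
     ntc=2, ntf=2, tol=0.000001,
     cut=8.0,
     ntpr=1000, ntwx=1000, ntwr=50000,
   /
   &ewald
     dsum_tol=0.000001,
   /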
Good luck,
Ross