Dear Thomas,
Thanks for your explanation.
I have 4 GPUs. Based on your prior explanation I am setting CUDA_VISIBLE_DEVICES to 0, which means only one GPU at a time is used to run the job.
Let's say I want to use two GPUs (or three, or four) to run the same job.
How can I achieve this? Is that possible?
Thanks in advance.
Vijay
Vijay Manickam Achari
(Phd Student c/o Prof Rauzah Hashim)
Chemistry Department,
University of Malaya,
Malaysia
vjramana.gmail.com
________________________________
From: Thomas Cheatham <tec3.utah.edu>
To: Vijay Manickam Achari <vjrajamany.yahoo.com>; AMBER Mailing List <amber.ambermd.org>
Sent: Wednesday, 18 April 2012, 11:04
Subject: Re: [AMBER] error installing Amber12-gpu version
> What I want to know is how to submit a job using, let's say, 12 CPU cores
> and 2 GPUs. We don't use PBS or any other job scheduler package yet. I
> would like to know how to submit jobs without a scheduler.
Run pmemd.MPI or sander.MPI on the 12 cores and run pmemd.cuda on the
GPUs. You may have to experiment to see if the MPI job impacts GPU
performance; if it does, then reduce the number of cores used. As pointed
out already, the GPU code runs almost entirely on the GPU except for I/O
and some nmropt/restraint code.
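A quick way to check (this is just a suggestion on my part, not something from the AMBER documentation) is to watch GPU utilization while the 12-core MPI job is running and compare the ns/day that pmemd.cuda reports with and without the CPU job present, e.g.:
nvidia-smi -l 5        # report GPU utilization every 5 seconds
grep "ns/day" mdinfo   # pmemd writes its average timings (including ns/day) here as the run progresses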
Personally I haven't done a lot of scripting to use the cores in addition
to the GPUs, since a single GPU = 48-60 cores; the gain from the cores I am
not using is not huge. However, if I were in a resource-constrained
environment and didn't want to waste a single cycle, I would round-robin
jobs between the GPU and CPU, i.e. run three jobs (1 on cores, 2 on GPUs)
and then switch for the next run, so every third run (of each job) is on
the cores. The timings get tricky (unless you simply let things time out),
and you need to trust that the restrt files are written appropriately, or
recover appropriately, but it can work... Soon I'll get to it...
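Purely as an illustration (this is not a script I actually use, and the job directories, run count, and bookkeeping below are my assumptions), a minimal sh/bash sketch of that round-robin idea might look like:
#!/bin/sh
# Hypothetical round-robin sketch: three jobs in directories job1..job3.
# Each run, one job gets the 12 CPU cores and the other two get GPUs 0 and 1;
# the assignment rotates so every third run of each job lands on the cores.
# "..." stands for the usual pmemd input/output arguments, as in the example below.
for run in 1 2 3 4 5 6; do
  cpu_job=$(( (run - 1) % 3 + 1 ))   # which job runs on the cores this time
  gpu=0
  for j in 1 2 3; do
    if [ "$j" -eq "$cpu_job" ]; then
      ( cd job$j && mpirun -np 12 -machinefile hostfile pmemd.MPI -O ... ) &
    else
      ( cd job$j && env CUDA_VISIBLE_DEVICES=$gpu pmemd.cuda -O ... ) &
      gpu=$(( gpu + 1 ))
    fi
  done
  wait   # let all three finish (and write their restrt files) before rotating
done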
With AMBER12, note that the pmemd.cuda jobs have changed to rely on
CUDA_VISIBLE_DEVICES (rather than -gpu #). If you try -gpu it will fail,
and if you do not set CUDA_VISIBLE_DEVICES, all of the runs will end up on
the first GPU...
mpirun -np 12 -machinefile hostfile pmemd.MPI -O ... &
setenv CUDA_VISIBLE_DEVICES 0
pmemd.cuda -O ... &
setenv CUDA_VISIBLE_DEVICES 1
pmemd.cuda -O ... &
wait
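(If your login shell is sh/bash rather than csh, a per-command assignment does the same thing as setenv; this is just my translation, not something from the AMBER manual:)
mpirun -np 12 -machinefile hostfile pmemd.MPI -O ... &
CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O ... &
CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O ... &
wait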
-tec3
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber