Re: [AMBER] error installing Amber12-gpu version

From: Vijay Manickam Achari <vjrajamany.yahoo.com>
Date: Wed, 18 Apr 2012 08:11:39 +0100 (BST)

Dear Thomas

Thank you so much for your kind help.

Well, I could run my job using CUDA on the GPU.
But there is one thing bothering me: I get the message below when I start the job.

The message is:
**************************************************************************
[vijay.gpucc Production-maltoHL4800-RT-50ns]$ 
Cannot match namelist object name scnb
namelist read: misplaced = sign
Cannot match namelist object name .0
Cannot match namelist object name scee
namelist read: misplaced = sign
Cannot match namelist object name .2
[vijay.gpucc Production-maltoHL4800-RT-50ns]$ 

**************************************************************************


Is the message above serious? Can we ignore it?
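
(For reference, and this is only a guess at the cause rather than anything confirmed: warnings of this form typically appear when the mdin &cntrl namelist still sets the old 1-4 scaling keywords scee and scnb, which Amber 12 takes from the prmtop instead. A minimal sketch of such an old-style fragment, with the other settings as placeholders only:

 &cntrl
   imin=0, ntx=5, irest=1,
   ntb=2, ntp=1, cut=9.0,
   scee=1.2, scnb=2.0,
 /

If that is the case, removing the scee/scnb lines from &cntrl would be the usual fix.)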

Thanks
Regards 
 
Vijay Manickam Achari
(Phd Student c/o Prof Rauzah Hashim)
Chemistry Department,
University of Malaya,
Malaysia
vjramana.gmail.com


________________________________
 From: Thomas Cheatham <tec3.utah.edu>
To: Vijay Manickam Achari <vjrajamany.yahoo.com>; AMBER Mailing List <amber.ambermd.org>
Sent: Wednesday, 18 April 2012, 11:04
Subject: Re: [AMBER] error installing Amber12-gpu version
 

> What I want to know is how to submit a job using, let's say, 12 CPU cores
> and 2 GPUs. We don't use PBS or any other job scheduler package yet. I
> would like to know how to submit jobs without a scheduler.

Run pmemd.MPI or sander.MPI on the 12 cores and run pmemd.cuda on the
GPUs.  You may have to experiment to see if the MPI job impacts GPU
performance; if it does, then reduce the number of cores used.  As pointed
out already, the GPU code runs almost entirely on the GPU except for I/O
and some nmropt/restraint code.

Personally I haven't done a lot of scripting to use the cores in addition
to the GPUs, since a single GPU is worth roughly 48-60 cores and the gain
from the idle cores is not huge.  However, if I were in a
resource-constrained environment and didn't want to waste a single cycle,
I would round-robin jobs between the GPUs and CPU...  i.e. run three jobs
(1 on cores, 2 on GPUs) and then switch for the next run so every third
run (of each job) was on the cores; a rough sketch follows.  The timings
get tricky (unless you simply let things time out) and you need to trust
that restrt files are written appropriately, or recover appropriately, but
it can work...  Soon I'll get to it...
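
Something along these lines (an untested sketch; the job names, hostfile,
and file naming are placeholders, and the restart bookkeeping between
passes is left out):

#!/bin/csh
# three independent jobs; each pass puts one on the 12 cores, two on GPUs
set onCPU  = jobA
set onGPU0 = jobB
set onGPU1 = jobC

foreach pass (1 2 3)
    mpirun -np 12 -machinefile hostfile pmemd.MPI -O -i $onCPU.in \
        -p $onCPU.prmtop -c $onCPU.rst -r $onCPU.restrt \
        -o $onCPU.pass$pass.out &

    setenv CUDA_VISIBLE_DEVICES 0
    pmemd.cuda -O -i $onGPU0.in -p $onGPU0.prmtop -c $onGPU0.rst \
        -r $onGPU0.restrt -o $onGPU0.pass$pass.out &

    setenv CUDA_VISIBLE_DEVICES 1
    pmemd.cuda -O -i $onGPU1.in -p $onGPU1.prmtop -c $onGPU1.rst \
        -r $onGPU1.restrt -o $onGPU1.pass$pass.out &

    wait

    # rotate so every third pass of each job lands on the CPU cores
    set tmp    = $onCPU
    set onCPU  = $onGPU0
    set onGPU0 = $onGPU1
    set onGPU1 = $tmp
end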

With AMBER12, note that the pmemd.cuda jobs have changed to rely on
CUDA_VISIBLE_DEVICES (rather than -gpu #).  If you try -gpu it will fail
and if you do not set CUDA_VISIBLE_DEVICES the runs will all run on the
first GPU...



mpirun -np 12 -machinefile hostfile pmemd.MPI -O ... &

setenv CUDA_VISIBLE_DEVICES 0
pmemd.cuda -O ... &

setenv CUDA_VISIBLE_DEVICES 1
pmemd.cuda -O ... &

wait


-tec3

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Apr 18 2012 - 00:30:03 PDT