>Hi,
>
>http://ambermd.org/gpus/#Running
>" Ideally you would have a batch scheduling system that will set
>everything up for you correctly "
>
>In fact, PBS does just that with its PBS_GPUFILE, e.g.,
>#PBS -l nodes=2:ppn=x:gpus=2
>...
>cat $PBS_GPUFILE
>cat /var/spool/batch/torque/aux//517906.batch.edugpu
>n0659-gpu1
>n0659-gpu0
>n0658-gpu1
>n0658-gpu0
>
>And a reliable PBS source indicates that the PBS_GPUFILE and its syntax
>are stable.
>When will pmemd support PBS_GPUFILE?
>
>Please provide a workaround script that takes a $PBS_GPUFILE and spews
>all the necessary environment variables to run on the specified gpus.
Volunteers? - Should be pretty simple for some Bash whizz to figure this
out.
Although in this situation it is pretty simple, since the allocation is
homogeneous. So, in pseudocode (a rough bash sketch follows the list):
1) grep for the first node ID > foo
2) extract the last character from each line in foo > foo2
3) export CUDA_VISIBLE_DEVICES=<contents of foo2, comma separated>
4) mpirun -np <line count in $PBS_GPUFILE> <option to export environment
variables> $AMBERHOME/bin/pmemd.cuda.MPI
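A rough bash sketch of those four steps, to drop into the PBS job script
(untested; the -x flag for exporting an environment variable to all ranks is
Open MPI syntax, other MPI launchers spell this differently, and the pmemd
input/output file names below are just placeholders):

# 1) Keep only the $PBS_GPUFILE lines for the first node listed.
first_node=$(head -n 1 "$PBS_GPUFILE" | sed 's/-gpu[0-9]*$//')
# 2) Strip everything up to "gpu" so only the device index remains,
#    then 3) join the indices with commas for CUDA_VISIBLE_DEVICES.
gpu_ids=$(grep "^${first_node}-gpu" "$PBS_GPUFILE" | sed 's/.*gpu//' | sort -n | paste -sd, -)
export CUDA_VISIBLE_DEVICES="$gpu_ids"
# 4) One MPI rank per line in $PBS_GPUFILE, exporting the variable to
#    every rank.
nranks=$(wc -l < "$PBS_GPUFILE")
mpirun -np "$nranks" -x CUDA_VISIBLE_DEVICES \
    "$AMBERHOME/bin/pmemd.cuda.MPI" -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt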
That 'should' work. Alternatively, in your case, if you are using all the
GPUs in a node, i.e. you have 2 GPUs per node, then the following:
#PBS -l nodes=2:ppn=2:gpus=2
should, when run with mpirun -np 4, just 'do the right thing'(tm).
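For completeness, that whole-node case as a minimal job script (again
untested, with placeholder pmemd file names):

#!/bin/bash
#PBS -l nodes=2:ppn=2:gpus=2
cd "$PBS_O_WORKDIR"
# All GPUs on both nodes belong to this job, so no CUDA_VISIBLE_DEVICES
# handling should be needed -- one MPI rank per GPU.
mpirun -np 4 "$AMBERHOME/bin/pmemd.cuda.MPI" \
    -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt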
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Nov 05 2012 - 23:00:05 PST