Re: [AMBER] Select cuda ID device in PMEMD

From: Ross Walker <ross.rosswalker.co.uk>
Date: Thu, 17 Nov 2011 21:10:13 -0800

Hi Gonzalo,

Please see the following page for detailed information and examples:
http://ambermd.org/gpus/#Running

In summary, you use the CUDA_VISIBLE_DEVICES environment variable to control
which GPUs are visible to pmemd.cuda.MPI. This works entirely in user space,
so no root access is required. In short:

Suppose you have 4 GPUs in a node, numbered 0 to 3, and you want to run two
2-GPU jobs, one using GPUs 0 and 2 and the other using GPUs 1 and 3. You
would do the following:

cd job1
export CUDA_VISIBLE_DEVICES="0,2"
nohup mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i ... </dev/null &

cd ../job2
export CUDA_VISIBLE_DEVICES="1,3"
nohup mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O -i ... </dev/null &
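
Note that CUDA renumbers the remaining visible devices starting from zero,
so inside each job pmemd.cuda.MPI sees devices 0 and 1 even though they map
to different physical GPUs.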

If you wanted to do this on multiple nodes, say using GPUs 0 and 2 on both
node1 and node2 for a total of 4 GPUs, you would set CUDA_VISIBLE_DEVICES to
"0,2" on both nodes. This part is critical: make sure the environment
variable gets set on each node. Most queuing systems have a facility for
doing this, and most MPI implementations also let you export specific
environment variables to each of the nodes. Then run mpirun -np 4 (assuming
your nodefile is set up to run 2 processes per node) and you should be
golden.
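
If it helps, here is a minimal sketch of the multi-node case, assuming
OpenMPI (whose mpirun can export an environment variable to all ranks via
the -x flag) and a hostfile named "hosts" that lists node1 and node2 with
two slots each; the mdin/mdout/prmtop/inpcrd file names are placeholders:

# Make GPUs 0 and 2 visible; -x forwards the variable to every rank.
export CUDA_VISIBLE_DEVICES="0,2"
mpirun -np 4 -hostfile hosts -x CUDA_VISIBLE_DEVICES \
    $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd

With an MPICH-style launcher the flag is different (for example, Hydra's
mpiexec has -genvlist), so check your MPI's documentation.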

Good luck,

All the best
Ross


> -----Original Message-----
> From: Gonzalo Jimenez [mailto:gjimenez.chem.ucla.edu]
> Sent: Thursday, November 17, 2011 7:44 PM
> To: AMBER Mailing List
> Subject: Re: [AMBER] Select cuda ID device in PMEMD
>
> Dear all,
>
> Following on this, I have found this information on the NVIDIA forums,
> but unfortunately I cannot use nvidia-smi -c 1 in a script without root
> (or sufficient) permissions...
>
> Gonzalo
>
> ***********************************************************************
> # The NVIDIA forum states that nvidia-smi must be running continuously
> # in the background for a GPU compute mode to stay "set":
> nvidia-smi -l -i 30 -lsa &
>
> # Now actually set the modes to exclusive use by one host thread per GPU...
> sudo nvidia-smi -g 0 -c 1
> sudo nvidia-smi -g 1 -c 1
> sudo nvidia-smi -g 2 -c 1
> sudo nvidia-smi -g 3 -c 1
>
> # Now list the compute modes we just set...
> nvidia-smi -g 0 -s
> nvidia-smi -g 1 -s
> nvidia-smi -g 2 -s
> nvidia-smi -g 3 -s
>
> -----Original Message-----
> From: Gonzalo Jimenez
> Sent: Thursday, November 17, 2011 7:14 PM
> To: amber.ambermd.org
> Subject: [AMBER] Select cuda ID device in PMEMD
>
> Dear all,
>
> Is there any way to choose which CUDA device ID is used by
> pmemd.cuda.MPI? This would help avoid different jobs on the same node
> competing for the same GPUs.
>
> Thanks a lot,
>
> Gonzalo


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Nov 17 2011 - 21:30:03 PST