Re: [AMBER] Select cuda ID device in PMEMD

From: Gonzalo Jimenez <gjimenez.chem.ucla.edu>
Date: Thu, 17 Nov 2011 21:04:26 -0800

Hi Jason,

Thanks a lot for the info, I will read it very carefully and do some
testing. Yes, it seems that pmemd.cuda.MPI is programmed to avoid GPUs that
are already in use, but in my case, if I run, say, two mpirun jobs on the
same 6-GPU node using 3 GPUs each, both jobs take forever, and reading the
.out files it seems that the two jobs are using the same GPUs:

md.out:| CUDA Device ID in use: 0
md.out:| CUDA Device ID in use: 1
md.out:| CUDA Device ID in use: 2
md.out2:| CUDA Device ID in use: 0
md.out2:| CUDA Device ID in use: 1
md.out2:| CUDA Device ID in use: 2

Anyway, I will read your info and try the CUDA_VISIBLE_DEVICES option.

Thanks again,
Gonzalo

-----Mensaje original-----
From: Jason Swails
Sent: Thursday, November 17, 2011 8:55 PM
To: AMBER Mailing List
Subject: Re: [AMBER] Select cuda ID device in PMEMD

Hi Gonzalo,

A couple of things to note here. First, pmemd.cuda is already pretty smart
about which GPUs it will allocate. It will _not_ pick any GPU that's being
used, for instance. Furthermore, given multiple options, it will pick the
one with the most memory available.

If you still wish to exert control over which GPUs are used, use the
CUDA_VISIBLE_DEVICES environment variable.
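For example, to keep two 3-GPU jobs on a 6-GPU node off each other's
devices, you can mask which devices each job sees before launching it. The
pmemd command lines below are placeholders (adapt the mpirun flags and
input/output names to your runs); the small function at the end is just a
runnable demonstration of the masking itself:

```shell
# Placeholder launch lines -- adapt -i/-o names to your actual runs:
#   CUDA_VISIBLE_DEVICES=0,1,2 mpirun -np 3 pmemd.cuda.MPI -O -i md1.in -o md1.out &
#   CUDA_VISIBLE_DEVICES=3,4,5 mpirun -np 3 pmemd.cuda.MPI -O -i md2.in -o md2.out &
#
# Note: CUDA renumbers the visible devices starting from 0, so both jobs
# will still report "CUDA Device ID in use: 0/1/2" in their .out files
# even though they are running on different physical GPUs.

# Runnable demonstration that each job's environment differs:
show_gpus() { echo "job $1 sees: $CUDA_VISIBLE_DEVICES"; }
CUDA_VISIBLE_DEVICES=0,1,2 show_gpus 1   # job 1 sees: 0,1,2
CUDA_VISIBLE_DEVICES=3,4,5 show_gpus 2   # job 2 sees: 3,4,5
```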

More details on all of this can be found at http://ambermd.org/gpus.

HTH,
Jason

On Thu, Nov 17, 2011 at 10:43 PM, Gonzalo Jimenez
<gjimenez.chem.ucla.edu>wrote:

> Dear all,
>
> Following on this, I have found this information on the nvidia forums,
> but unfortunately, I cannot use nvidia-smi -c 1 in a script without root
> (or sufficient) permissions...
>
> Gonzalo
>
>
> ********************************************************************************************
> # Nvidia forum states nvidia-smi must be running continuously in the
> background for a GPU mode to stay "set"
> nvidia-smi -l -i 30 -lsa &
>
> # Now actually set the modes to exclusive use by one host thread per
> GPU...
> sudo nvidia-smi -g 0 -c 1
> sudo nvidia-smi -g 1 -c 1
> sudo nvidia-smi -g 2 -c 1
> sudo nvidia-smi -g 3 -c 1
>
> # Now list the compute modes we just set...
> nvidia-smi -g 0 -s
> nvidia-smi -g 1 -s
> nvidia-smi -g 2 -s
> nvidia-smi -g 3 -s
>
> -----Mensaje original-----
> From: Gonzalo Jimenez
> Sent: Thursday, November 17, 2011 7:14 PM
> To: amber.ambermd.org
> Subject: [AMBER] Select cuda ID device in PMEMD
>
> Dear all,
>
> Is there any way to choose which CUDA device ID is used by
> pmemd.cuda.MPI? This would help avoid competition between different jobs
> on the same node for the same GPUs.
>
> Thanks a lot,
>
> Gonzalo
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
Received on Thu Nov 17 2011 - 21:30:03 PST