Re: [AMBER] gpu run

From: Jason Swails <jason.swails.gmail.com>
Date: Sun, 6 Sep 2015 13:26:03 -0400

On Sat, Sep 5, 2015 at 9:12 PM, Kenneth Huang <kennethneltharion.gmail.com>
wrote:

> Hi,
>
> I think in general what you want to do is first check the corresponding
> numbers of the GPUs with something like nvidia-smi and then set
>
> export CUDA_VISIBLE_DEVICES=0
> export CUDA_VISIBLE_DEVICES=1
> export CUDA_VISIBLE_DEVICES=2
>

Just to clarify and warn here -- nvidia-smi does *not* number GPUs the
same way that the CUDA runtime does. So if you have 3 GPUs, there is no
guarantee that device 0 will be the same GPU to both nvidia-smi and
pmemd.cuda.

pmemd.cuda identifies GPUs using the CUDA API, whereas nvidia-smi does
not. The deviceQuery program that's part of the CUDA SDK has similar
functionality to nvidia-smi, but it uses the CUDA API so its numbering is
exactly what pmemd.cuda would see.

I would suggest consulting the output of deviceQuery instead of nvidia-smi
(or at least using it to assign a mapping between the nvidia-smi ordering
and the CUDA ordering).
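One way to build that mapping is to key both orderings on the PCI bus ID,
which both tools report (nvidia-smi in its Bus-Id column, deviceQuery in its
per-device details). A minimal sketch in Python -- the (index, bus ID) pairs
below are taken from the example output further down, and in practice you
would parse them out of the two tools' output yourself:

```python
# Sketch: map nvidia-smi device indices to CUDA device indices by matching
# PCI bus IDs, which both nvidia-smi and deviceQuery report.

def map_smi_to_cuda(smi_devices, cuda_devices):
    """Return {nvidia-smi index: CUDA index}, keyed on PCI bus ID.

    Each argument is a list of (index, pci_bus_id) pairs in the order
    the corresponding tool enumerates the GPUs.
    """
    cuda_by_bus = {bus: idx for idx, bus in cuda_devices}
    return {idx: cuda_by_bus[bus] for idx, bus in smi_devices}

# (index, PCI bus ID) pairs from the example output below
smi = [(0, "0000:01:00.0"),   # GTS 250
       (1, "0000:07:00.0")]   # GTX 680
cuda = [(0, "0000:07:00.0"),  # GTX 680 is CUDA device 0
        (1, "0000:01:00.0")]  # GTS 250 is CUDA device 1

print(map_smi_to_cuda(smi, cuda))  # {0: 1, 1: 0}
```

So on this machine, "GPU 1" in nvidia-smi is device 0 to the CUDA runtime,
and vice versa.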

Just as an example, here is the output on my machine:

nvidia-smi:

Sun Sep  6 13:22:59 2015
+------------------------------------------------------+
| NVIDIA-SMI 340.76     Driver Version: 340.76         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTS 250     Off  | 0000:01:00.0     N/A |                  N/A |
| 38%   61C    P0    N/A /  N/A |    239MiB /  1023MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GTX 680     Off  | 0000:07:00.0     N/A |                  N/A |
| 45%   61C    P0    N/A /  N/A |    149MiB /  2047MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|    0            Not Supported                                               |
|    1            Not Supported                                               |
+-----------------------------------------------------------------------------+

deviceQuery:

deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 2 CUDA Capable device(s)

Device 0: "GeForce GTX 680"
  CUDA Driver Version / Runtime Version 6.5 / 6.5
  CUDA Capability Major/Minor version number: 3.0
  Total amount of global memory: 2048 MBytes (2147287040 bytes)
  ( 8) Multiprocessors, (192) CUDA Cores/MP: 1536 CUDA Cores
  GPU Clock rate: 1137 MHz (1.14 GHz)
  Memory Clock rate: 3004 Mhz
  Memory Bus Width: 256-bit
  L2 Cache Size: 524288 bytes
[snip]
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Device 1: "GeForce GTS 250"
  CUDA Driver Version / Runtime Version 6.5 / 6.5
  CUDA Capability Major/Minor version number: 1.1
  Total amount of global memory: 1023 MBytes (1073020928 bytes)
  (16) Multiprocessors, ( 8) CUDA Cores/MP: 128 CUDA Cores
  GPU Clock rate: 1620 MHz (1.62 GHz)
  Memory Clock rate: 1100 Mhz
  Memory Bus Width: 256-bit
[snip]
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 2, Device0 = GeForce GTX 680, Device1 = GeForce GTS 250
Result = PASS


As you can see, nvidia-smi lists my GTS 250 as device 0 and my GTX 680 as
device 1, but deviceQuery (and therefore pmemd.cuda) sees them in the
opposite order.
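So on this machine, selecting the GTX 680 for pmemd.cuda means using the
CUDA index, not the nvidia-smi one. A sketch (the CUDA_DEVICE_ORDER note
applies only to CUDA versions newer than the 6.5 shown above):

```shell
# GTX 680 is CUDA device 0 here, even though nvidia-smi calls it device 1
export CUDA_VISIBLE_DEVICES=0
echo "Using CUDA device(s): $CUDA_VISIBLE_DEVICES"

# Newer CUDA versions (7.0+) can instead be told to enumerate devices in
# PCI bus order, which matches nvidia-smi's numbering:
#   export CUDA_DEVICE_ORDER=PCI_BUS_ID
```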

HTH,
Jason

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Sep 06 2015 - 10:30:03 PDT