[AMBER] Multi-GPU run using MPI

From: Hossein Pourreza <hpourreza.uchicago.edu>
Date: Thu, 6 Jun 2019 20:35:20 +0000

Greetings,

I am trying to run the Amber 16 benchmark on a system with 4 GPUs. I compiled Amber with Intel MPI 2018 and CUDA 9.0. When the benchmark runs mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI … with CUDA_VISIBLE_DEVICES=0,1 (for example), I can see two processes running on each GPU when I monitor with nvidia-smi. It looks like each MPI process runs the code on both GPUs. I tried setting OMP_NUM_THREADS=1, but it did not change anything. A sketch of how I launch and monitor the run is included after the output below. Looking at mdout.2GPU, things seem to be OK:
|------------------- GPU DEVICE INFO --------------------
|
| Task ID: 0
| CUDA_VISIBLE_DEVICES: 0,1
| CUDA Capable Devices Detected: 2
| CUDA Device ID in use: 0
| CUDA Device Name: Tesla V100-PCIE-16GB
| CUDA Device Global Mem Size: 16130 MB
| CUDA Device Num Multiprocessors: 80
| CUDA Device Core Freq: 1.38 GHz
|
|
| Task ID: 1
| CUDA_VISIBLE_DEVICES: 0,1
| CUDA Capable Devices Detected: 2
| CUDA Device ID in use: 1
| CUDA Device Name: Tesla V100-PCIE-16GB
| CUDA Device Global Mem Size: 16130 MB
| CUDA Device Num Multiprocessors: 80
| CUDA Device Core Freq: 1.38 GHz
|
|--------------------------------------------------------

|---------------- GPU PEER TO PEER INFO -----------------
|
| Peer to Peer support: ENABLED
|
|--------------------------------------------------------
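
For reference, here is roughly how I launch the run and watch the GPUs. The input, topology, and coordinate file names below are placeholders rather than the actual benchmark files:

  export CUDA_VISIBLE_DEVICES=0,1    # expose only the first two V100s
  export OMP_NUM_THREADS=1           # tried this; it made no difference

  mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI -O \
      -i benchmark.in -p prmtop -c inpcrd -o mdout.2GPU

  # In a second terminal, list which process IDs appear on which GPU,
  # refreshing every 5 seconds:
  nvidia-smi --query-compute-apps=gpu_uuid,pid,process_name --format=csv -l 5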

I am wondering whether this is normal behavior or whether I am missing something here.

Many thanks
Hossein

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jun 06 2019 - 14:00:03 PDT