Re: [AMBER] Amber16 on GPUs and speed differences between CUDA_VISIBLE_DEVICES=0, 1 or 0, 2

From: Andreas Tosstorff <andreas.tosstorff.cup.uni-muenchen.de>
Date: Mon, 19 Dec 2016 22:32:17 +0100

Hi Christopher,

I am not an expert, but I can tell you this:

To get a speed-up when running across multiple GPUs in parallel, the GPUs need
to sit behind the same PCIe root complex (host bridge), because that is what
allows peer-to-peer (P2P) communication. In your topology output below, GPU 0
and GPU 1 share a host bridge (PHB), while GPU 0 and GPU 2 are connected across
the CPU socket link (SOC), so only the 0,1 pair should be able to use P2P; that
would explain the speed difference between CUDA_VISIBLE_DEVICES=0,1 and 0,2.

See the pmemd.cuda.MPI documentation here:

http://ambermd.org/gpus/
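If it helps, here is a minimal sketch of my own (not part of AMBER) that uses
the CUDA runtime call cudaDeviceCanAccessPeer to confirm which device pairs
support P2P. The device indices 0/1 and 0/2 are just assumptions chosen to
match your topology output below:

// p2p_check.cu -- illustrative sketch: report whether two GPUs can use
// peer-to-peer (P2P) access. Compile with: nvcc p2p_check.cu -o p2p_check
#include <cstdio>
#include <cuda_runtime.h>

static void check_pair(int a, int b) {
    int can_ab = 0, can_ba = 0;
    cudaDeviceCanAccessPeer(&can_ab, a, b);  // can device a access b's memory?
    cudaDeviceCanAccessPeer(&can_ba, b, a);  // and the reverse direction
    printf("GPU %d <-> GPU %d : P2P %s\n", a, b,
           (can_ab && can_ba) ? "available" : "NOT available");
}

int main() {
    check_pair(0, 1);  // expected: available (same PCIe host bridge, PHB)
    check_pair(0, 2);  // expected: not available (path crosses the socket link, SOC)
    return 0;
}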

-----Original Message-----
From: Neale, Christopher Andrew [mailto:cneale.lanl.gov]
Sent: Monday, December 19, 2016 7:31 PM
To: amber.ambermd.org
Subject: [AMBER] Amber16 on GPUs and speed differences between
CUDA_VISIBLE_DEVICES=0, 1 or 0, 2

I learned how to get a bit more information, which perhaps will help:

$ nvidia-smi topo -m
        GPU0   GPU1   GPU2   GPU3   CPU Affinity
GPU0    X      PHB    SOC    SOC    0-5,12-17
GPU1    PHB    X      SOC    SOC    0-5,12-17
GPU2    SOC    SOC    X      PHB    6-11,18-23
GPU3    SOC    SOC    PHB    X      6-11,18-23

Legend:

  X = Self
  SOC = Path traverses a socket-level link (e.g. QPI)
  PHB = Path traverses a PCIe host bridge
  PXB = Path traverses multiple PCIe internal switches
  PIX = Path traverses a PCIe internal switch

Thank you,
Chris.

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber

Received on Mon Dec 19 2016 - 14:00:02 PST