[AMBER] pmemd performance on GTX Titan Z

From: Kenneth Huang <kennethneltharion.gmail.com>
Date: Tue, 12 May 2015 12:22:13 -0400

Dear all,

I was curious what the performance would be on 2-in-1 GPUs, specifically the
GTX Titan Z, in an Ubuntu 14.04 machine with CUDA 6.5. When I try to
benchmark the card, I run into a strange situation where the job does
appear to be running on both GPUs, but it's not fully using both of them.

For example, running this command-

mpirun -np 2 /home/curvelinux/bin/amber14/bin/pmemd.cuda.MPI -O \
    -i 05_prod1a.in -o test_prod3.out -p test_sol.prmtop \
    -c test_prod2.rst -r test_prod3.rst -x test_prod3.mdcrd

And then checking with nvidia-smi gives me-


| NVIDIA-SMI 346.46             Driver Version: 346.46                        |
|-------------------------------+----------------------+----------------------|
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|          Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX TIT...   Off | 0000:04:00.0      On |                  N/A |
|  57%   82C    P2  125W / 189W |   857MiB /  6143MiB  |     53%      Default |
|-------------------------------+----------------------+----------------------|
|   1  GeForce GTX TIT...   Off | 0000:05:00.0     Off |                  N/A |
|  90%   89C    P2  160W / 189W |   589MiB /  6143MiB  |     98%      Default |
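(For a quick numeric view of the same thing, nvidia-smi's query mode can be
polled while the run is active; these flags are standard nvidia-smi options,
not AMBER-specific:)

```shell
# Print index, GPU utilization, and memory used for each GPU,
# refreshing every second (-l 1). Ctrl-C to stop.
nvidia-smi --query-gpu=index,utilization.gpu,memory.used \
    --format=csv -l 1
```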

By all appearances both GPUs are being used, but the load per GPU is quite
disparate, so I was wondering whether this is expected behaviour. There also
doesn't seem to be much of a speed-up compared to running on a K20
(23 ns/day vs. 27 ns/day), although this is a small system (50,000 atoms)
and I haven't optimized it for GPU runs.

Mostly, I was just wondering if it would be better to split the two GPUs up
and run separate jobs in parallel, instead of running one job across both of
them. Searching Google turns up the suggestion of enabling double-precision
mode in the NVIDIA settings, but I was wondering if I'm missing something in
my setup for pmemd.
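(In case it helps frame the question: this is roughly what I mean by
splitting the GPUs up. It's a sketch using my file names from above as
placeholders, with hypothetical job0/job1 output names, pinning one serial
pmemd.cuda process to each GPU via CUDA_VISIBLE_DEVICES, using the GPU
indices 0 and 1 that nvidia-smi reports:)

```shell
# Two independent single-GPU runs, one per half of the Titan Z.
# job0/job1 names are placeholders; inputs are the ones from my run above.
export AMBERHOME=/home/curvelinux/bin/amber14

CUDA_VISIBLE_DEVICES=0 $AMBERHOME/bin/pmemd.cuda -O -i 05_prod1a.in \
    -o job0.out -p test_sol.prmtop -c test_prod2.rst \
    -r job0.rst -x job0.mdcrd &

CUDA_VISIBLE_DEVICES=1 $AMBERHOME/bin/pmemd.cuda -O -i 05_prod1a.in \
    -o job1.out -p test_sol.prmtop -c test_prod2.rst \
    -r job1.rst -x job1.mdcrd &

wait  # block until both background jobs finish
```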



Ask yourselves, all of you, what power would hell have if those imprisoned
here could not dream of heaven?
AMBER mailing list
Received on Tue May 12 2015 - 09:30:04 PDT