Re: [AMBER] performance of pmemd.cuda.MPI vs pmemd.cuda both running on single GPU

From: Niel Henriksen <niel.henriksen.utah.edu>
Date: Tue, 16 Oct 2012 17:06:17 +0000

As promised, here are the numbers ...

System: Small RNA, TIP3P water, 7622 atoms
Input: ntt=3, ntp=0, dt=0.002 (NVT)
GPU: Tesla M2090, Keeneland supercomputer
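
For anyone setting up a comparable test, a minimal sketch of an mdin file
consistent with the settings above (ntt=3, ntp=0, dt=0.002, NVT). The cutoff,
collision frequency, target temperature, output frequencies, and the number of
exchange attempts are illustrative placeholders, not the actual values used in
this run:

  NVT Langevin dynamics, one REMD replica (sketch, placeholder values)
  &cntrl
    imin=0, irest=1, ntx=5,            ! restart coordinates and velocities
    ntb=1, ntp=0,                      ! constant volume, no barostat (NVT)
    ntt=3, gamma_ln=1.0, temp0=300.0,  ! Langevin thermostat (gamma_ln/temp0 assumed)
    ntc=2, ntf=2, dt=0.002,            ! SHAKE on bonds to H, 2 fs time step
    cut=8.0,                           ! nonbonded cutoff (assumed)
    nstlim=500, numexchg=1000,         ! 500 steps x 2 fs = 1 ps per exchange attempt
    ntpr=500, ntwx=500,                ! output frequencies (assumed)
  /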

pmemd.cuda, conventional MD: 107 ns/day
pmemd.cuda.MPI, REMD, 24 replicas, exchange attempt every 1 ps: 85 ns/day

So yes, there is a performance hit, but in this case it is roughly 20%, not 50%.
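
For completeness, a rough sketch of how the two jobs above would typically be
launched, with one GPU per replica in the REMD case. The file names, group file
contents, and MPI launcher syntax are assumptions, not taken from this post:

  # conventional MD on a single GPU
  pmemd.cuda -O -i md.in -p rna.prmtop -c rna.rst7 -o md.out -r md.rst7 -x md.nc

  # 24-replica temperature REMD, one MPI rank (and one GPU) per replica
  mpirun -np 24 pmemd.cuda.MPI -ng 24 -groupfile remd.groupfile -rem 1

  # each line of remd.groupfile holds the per-replica file flags, e.g.
  # -O -i remd.mdin.001 -p rna.prmtop -c rna.rst7.001 -o remd.out.001 -r remd.rst7.001 -x remd.nc.001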

--Niel

________________________________________
From: Niel Henriksen [niel.henriksen.utah.edu]
Sent: Monday, October 15, 2012 1:58 PM
To: AMBER Mailing List
Subject: Re: [AMBER] performance of pmemd.cuda.MPI vs pmemd.cuda both running on single GPU

> I'm guessing if Niel compared
> pmemd.cuda to pmemd.cuda.MPI on the same GPUs (e.g., on Keeneland) in which
> pmemd.cuda.MPI ran 1 GPU per replica, he would see a performance hit as
> well.

Well, "scheduling is paused" at the moment, but when the job runs I'll
post the numbers ... :)

--Niel

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Oct 16 2012 - 10:30:05 PDT