Re: [AMBER] performance of pmemd.cuda.MPI

From: Scott Le Grand <varelse2005.gmail.com>
Date: Fri, 14 Sep 2012 11:46:28 -0700

MPI performance on the GTX 690 is abysmal because the card's two GPUs
share the same PCIe interface.

That will improve somewhat down the road.

In the meantime, I think you'll be happy with the performance of two
independent runs (one on each GPU): 98+% efficiency when I last checked...
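
For anyone setting that up, here is a minimal sketch of the
two-independent-runs layout, assuming each job sits in its own directory
with the conventional mdin/prmtop/inpcrd inputs (the run1/run2 names are
placeholders, not part of the benchmark suite). CUDA_VISIBLE_DEVICES pins
each serial pmemd.cuda process to one of the card's two GPUs:

    # Two independent serial jobs, one per GPU of the GTX 690
    # (run1/run2 and the input file names are placeholders):
    ( cd run1 && CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O -i mdin -o mdout \
          -p prmtop -c inpcrd -r restrt -x mdcrd ) &
    ( cd run2 && CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O -i mdin -o mdout \
          -p prmtop -c inpcrd -r restrt -x mdcrd ) &
    wait   # returns once both background jobs have finished

    # For comparison, the single two-GPU MPI job discussed above would be
    # launched along these lines:
    #   mpirun -np 2 pmemd.cuda.MPI -O -i mdin -o mdout -p prmtop -c inpcrd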


On Fri, Sep 14, 2012 at 11:40 AM, Jonathan Gough <jonathan.d.gough.gmail.com> wrote:

> Dear All,
>
> I'm curious if anyone is using an NVIDIA GTX 690 card. It has two GPUs
> on it, and when using pmemd.cuda.MPI I am not getting much of a speed
> bump when using both of them.
>
> Have others experienced this, is there something I am missing, or is
> this simply what to expect? Just asking...
>
> Jonathan
>
> For example, on the benchmarks, using the standard inputs and
> pmemd.cuda.MPI:
>
> GB-myoglobin
> 1 GPU - ns/day = 144.09
> 2 GPUs - ns/day = 156.14
>
> TRPCage
> 1 GPU - ns/day = 698.11
> 2 GPUs - ns/day = 527.23
>
> Cellulose_prod_NPT
> 1 GPU - ns/day = 3.60
> 2 GPUs - ns/day = 4.39
>
> JAC_production_NPT
> 1 GPU - ns/day = 57.33
> 2 GPUs - ns/day = 67.92
>
> JAC_production_NVE
> 1 GPU - ns/day = 72.42
> 2 GPUs - ns/day = 80.76
>
> FactorIX_production_NPT
> 1 GPU - ns/day = 15.57
> 2 GPUs - ns/day = 18.50
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber