Hi Sasha,
> The question is primarily for Ross and Scott.
> Have you been able to implement any parallel code for pmemd.cuda? If so,
> how does it look in multiple node situations and how much bandwidth is
> needed to keep all cards fully utilized?
I am putting together a patch and testing things. We are hoping to have a
release, at least a provisional beta, shortly. In the meantime, performance
is generally better running across IB with 1 card per node. Our best guess
is that you need a 'minimum' of QDR IB to keep the cards fully utilized. I
don't have a complete range of numbers for you, though, as I haven't had
enough hardware variations to test on. Some 'provisional' numbers for
C2050s at 1 per node over QDR IB are:
DHFR NVE:
  2 x E5462 (CPU) =  5.94 ns/day
  1 x C2050       = 20.70 ns/day
  2 x C2050       = 29.80 ns/day
  4 x C2050       = 41.14 ns/day
  8 x C2050       = 48.97 ns/day
Cf. NICS Kraken XT4 using 192 cores at 1 core per node, i.e. 192 nodes
(768 cores allocated), which gives 46.01 ns/day.
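
For a rough feel for the scaling those numbers imply, here is a small
Python sketch; the ns/day figures are just the C2050 results quoted above,
and everything else is simple arithmetic:

# Speedup and parallel efficiency for the provisional DHFR NVE
# numbers above (1 C2050 per node, QDR IB).
ns_per_day = {1: 20.70, 2: 29.80, 4: 41.14, 8: 48.97}

base = ns_per_day[1]
for gpus, rate in sorted(ns_per_day.items()):
    speedup = rate / base              # relative to a single C2050
    efficiency = speedup / gpus        # fraction of ideal linear scaling
    print(f"{gpus} x C2050: {rate:6.2f} ns/day, "
          f"speedup {speedup:4.2f}x, efficiency {efficiency:5.1%}")

As you can see, the efficiency drops off fairly quickly as you add nodes,
which is exactly why the interconnect bandwidth matters so much here.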
> I wonder whether DDR infiniband would be sufficient to maintain
> performance across several nodes, or whether that requires QDR. If you
> haven't tested it yet, what's your best guess at this point?
Unfortunately I think it will be a case of 'suck it and see' - I haven't
had enough hardware variations to know for sure. Scott may have tested on a
DDR machine. It might work okay for 2 nodes with 1 GPU each, but beyond
that my best guess is that QDR is needed.
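
If you want to try it on your own DDR hardware once the patch is out, the
launch for 1 GPU per node would be something along the lines of the
following (OpenMPI syntax assumed; substitute your own launcher and input
files):

mpirun -np 2 -npernode 1 $AMBERHOME/bin/pmemd.cuda.MPI \
    -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt

with -npernode 1 ensuring one MPI rank (and hence one GPU) per node.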
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor                          |
| San Diego Supercomputer Center                        |
| Adjunct Assistant Professor                           |
| Dept. of Chemistry and Biochemistry                   |
| University of California San Diego                    |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk  |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.