[AMBER] Cluster considerations

From: peker milas <pekermilas.gmail.com>
Date: Fri, 11 Feb 2011 19:33:39 -0500

Dear Amber users,

We are considering putting together a small GPU cluster for running
AMBER simulations of some larger biomolecules (~100k atoms).
Naturally, there are many decisions to be made and not a whole lot of
documentation describing what works. Our budget is <$10k, so our first
inclination is to buy four Intel i5 boxes, each holding two GPUs, with
the nodes connected over Gigabit Ethernet. Have people had good
experiences with this sort of setup? In particular:

1) Has anyone had experience running GPUs in an MPI configuration over
Gigabit Ethernet? Is Gigabit Ethernet capable of delivering the
bandwidth and latency needed to keep the cards busy? (A simple way to
measure this on existing hardware is sketched after these questions.)

2) In the event that Gigabit Ethernet is insufficient, we have
considered purchasing an InfiniBand interconnect. This, of course,
would require three x16 PCIe slots (two GPUs plus the InfiniBand
adapter), which no consumer motherboard I have seen provides. The most
common configuration seems to be one x16 slot with two x8 slots. This
brings us to the question: how heavily does AMBER rely on GPU-CPU data
transfers? Would running two GPUs with 8 lanes each substantially
reduce performance? Is there a way we could disable 8 lanes of our
current setup for benchmarking purposes? (A rough host-device copy
benchmark is also sketched below.)

Thanks,

- Peker

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 11 2011 - 17:00:02 PST