Re: [AMBER] Cluster considerations

From: Brent Krueger <>
Date: Fri, 11 Feb 2011 21:08:00 -0500


Hello there -- hope things are going well for you.

Perhaps a first question is to ask how many simulations you plan to run at a
time. In my group we often run several variations of each simulation we
have, so we generally have at least 8 simulations running at any one time.
 If something similar is true for you, then you need not worry about using
MPI at all. Your total performance will be best if you simply run one
simulation per GPU. If you are going to have two GPUs per case, then you
could consider running four simulations simultaneously, each on a pair of
GPUs within one box.

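If it helps, here is a minimal sketch of how one might launch independent
runs, one per GPU, from Python by setting CUDA_VISIBLE_DEVICES for each
process. The run directories and file names (run0, prod.in, system.prmtop,
etc.) are just placeholders for whatever your own setup uses:

import os
import subprocess

n_gpus = 2  # GPUs in this box
procs = []
for gpu in range(n_gpus):
    # each process only sees its own card
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    # each run lives in its own directory (run0, run1, ...) with its own inputs
    procs.append(subprocess.Popen(
        ["pmemd.cuda", "-O",
         "-i", "prod.in", "-p", "system.prmtop", "-c", "system.inpcrd",
         "-o", "prod.out", "-r", "prod.rst", "-x", "prod.nc"],
        cwd="run%d" % gpu, env=env))

for p in procs:
    p.wait()

Because each process is pinned to a different GPU, the runs never compete
for the same card and you get the full throughput of every GPU.
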
With the money you save by skipping fancy interconnects you could buy lots of
storage so that you can keep all of those microseconds of MD trajectory
that you will generate.

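As a rough illustration of what that means (the frame-writing interval here
is just an assumed example, not a recommendation):

# rough, uncompressed trajectory-size estimate; all values are assumptions
atoms = 100000                            # system size from the original post
bytes_per_frame = atoms * 3 * 4           # x, y, z in single precision
frames_per_us = 1000000 / 10              # one frame every 10 ps over 1 microsecond
gb_per_us = bytes_per_frame * frames_per_us / 1e9
print("~%.0f GB per microsecond per simulation" % gb_per_us)   # ~120 GB

With eight simulations going at once, that is on the order of a terabyte per
microsecond of aggregate sampling, so disks fill up quickly.
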
Good luck,

On Fri, Feb 11, 2011 at 7:33 PM, peker milas <> wrote:

> Dear Amber users,
> We are considering putting together a small GPU cluster for running
> AMBER simulations of some larger biomolecules (~100k atoms).
> Naturally, there are many decisions to be made and not a whole lot of
> documentation describing what works. Our budget is <$10k, so our first
> inclination is to buy four Intel i5 boxes, each with two GPUs,
> connected over Gigabit Ethernet. Have people had good experiences with
> this sort of setup? In particular,
> 1) Has anyone had experience using GPUs in an MPI configuration over
> gigabit ethernet? Is Gigabit Ethernet capable of delivering the
> bandwidth/latency to keep the cards busy?
> 2) In the event that gigabit ethernet is insufficient, we have
> considered purchasing an Infiniband interconnect. This, of course,
> would require 3x16 PCIe lanes, which no consumer motherboard I have
> seen provides. It seems like the most common configuration is one x16
> slot with two x8 slots. This brings us to the question, how much does
> AMBER rely on GPU-CPU data transfers? Would running two GPUs with 8
> lanes each substantially reduce performance? Is there a way we could
> disable 8 lanes of our current setup for benchmarking purposes?
> Thanks,
> - Peker
> _______________________________________________
> AMBER mailing list

Brent P. Krueger.....................phone: 616 395 7629
Associate Professor................fax:       616 395 7118
Hope College..........................Schaap Hall 2120
Department of Chemistry
Holland, MI     49423