Hi Peker,
we are solving a similar problem, so here, just for inspiration, is the
actual suggestion from our IT guy for the new GPU machine in our
department.
The total price is around $4k at current Czech prices :)) ($1 ≈ 18 CZK).
I would be grateful for any important comments/notes from the
experienced GPU guys.
Best wishes,
Marek
#1
MOTHERBOARD
Asus Crosshair IV Extreme - AMD 890FX
http://www.czechcomputer.cz/product.jsp?artno=81412
#2
CPU
AMD Phenom II X6 1090T Black Edition
http://www.czechcomputer.cz/product.jsp?artno=76791
#3
RAM
2 x Corsair Dominator 8GB (8GB=2x4GB) DDR3 1600 = 16 GB in total
#4
HDD
Seagate Barracuda 7200.12 - 750GB (here we could also go with a 1TB HDD)
http://www.czechcomputer.cz/product.jsp?artno=62801
#5
CASE
CoolerMaster HAF 932 Black
http://www.czechcomputer.cz/product.jsp?artno=61029
OR
Nexus EDGE
http://www.czechcomputer.cz/product.jsp?artno=67025
( Our IT guy finds the first choice a little better for the given
purpose, but it is already on sale … ).
Anyway, regarding the case, the important thing is enough clearance
below the last PCI-E slot, so there is room for the last GPU …
#6
POWER SUPPLY
SilverStone Strider Plus Series SST-ST1500 1500W
http://www.czechcomputer.cz/product.jsp?artno=85705
#7
GPUs
3 x Asus ENGTX580/2DI/1536MD5, PCI-E
http://www.czechcomputer.cz/product.jsp?artno=83653
+
1 x GAINWARD VGA GTX-580 3072MB GDDR5
http://www.asus.cd/komponenty-2/gainward-vga-gtx-580-3072mb-gddr5-783-2010mhz-384bit-pcie-2xdvi-hdmi-dport/
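To sanity-check the 1500 W power supply above against the four-GPU
configuration, here is a rough power-budget sketch; the TDP figures are
approximate public spec-sheet values (my assumptions), not measurements
from this build:

```python
# Rough power-budget check for the build above.
# TDP figures are approximate spec-sheet values (assumptions).
gpu_tdp_w = 244        # GTX 580 TDP, per card (approx.)
n_gpus = 4
cpu_tdp_w = 125        # Phenom II X6 1090T TDP (approx.)
other_w = 100          # board, RAM, HDD, fans (rough allowance)

peak_w = n_gpus * gpu_tdp_w + cpu_tdp_w + other_w
psu_w = 1500
headroom = psu_w - peak_w

print(f"estimated peak draw: {peak_w} W, PSU headroom: {headroom} W")
```

With these assumptions the estimated peak draw is about 1.2 kW, leaving
roughly 300 W of margin on the 1500 W unit.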
--
This message was created with Opera's revolutionary mail client:
http://www.opera.com/mail/
On Sat, 19 Feb 2011 04:29:18 +0100, peker milas <pekermilas.gmail.com>
wrote:
> Hi Brent and Ross,
>
> I am sorry for my late response, but I was terribly sick last week.
> This GPU cluster idea is related to the need to push the simulation to
> longer time scales. I really want to simulate my system up to a
> microsecond. Unfortunately, it looks like we will not be able to start
> building it right away. As you can probably guess, there are always
> funding issues.
>
> On the other hand, we needed some information about the hardware. So,
> thank you so much, Ross, for enlightening us about the hardware. Also,
> thank you so much, Brent, for letting us know about other possible
> ways of simulating, and for sharing your ideas.
>
> best
> peker
>
>
> On Fri, Feb 11, 2011 at 8:12 PM, Ross Walker <ross.rosswalker.co.uk>
> wrote:
>> Hi Peker,
>>
>>> We are considering putting together a small GPU cluster for running
>>> AMBER simulations of some larger biomolecules (~100k atoms).
>>> Naturally, there are many decisions to be made and not a whole lot of
>>> documentation describing what works. Our budget is <$10k, so our first
>>> inclination is to buy four Intel i5 boxes, each with two GPUs
>>> connected over Gigabit Ethernet. Have people had good experiences with
>>> this sort of setup? In particular,
>>>
>>> 1) Has anyone had experience using GPUs in an MPI configuration over
>>> gigabit ethernet? Is Gigabit Ethernet capable of delivering the
>>> bandwidth/latency to keep the cards busy?
>>
>> Gigabit Ethernet will be fine for mounting a file system, say over
>> NFS. For MPI communication to run simulations in parallel across
>> nodes, it will be completely useless; for that you need QDR
>> Infiniband as a minimum. However, you would be able to run in
>> parallel within a node over one or more GPUs.
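As a back-of-the-envelope illustration of why gigabit Ethernet falls
short for multi-node GPU MD, one can compare the per-step communication
cost of the two fabrics; the message sizes, counts, and
latency/bandwidth figures below are generic assumptions for
illustration, not AMBER benchmarks:

```python
# Back-of-the-envelope: per-MD-step data exchange cost over two fabrics.
# All numbers are generic assumptions for illustration, not benchmarks.
def step_comm_cost_us(msg_bytes, latency_us, bandwidth_gbps, n_msgs):
    """Time per MD step spent in communication: latency + payload."""
    transfer_us = msg_bytes * 8 / (bandwidth_gbps * 1e3)  # bytes -> us
    return n_msgs * (latency_us + transfer_us)

msg = 200_000          # ~100k atoms * a few bytes per atom (rough)
gige = step_comm_cost_us(msg, latency_us=50.0, bandwidth_gbps=1.0, n_msgs=4)
qdr = step_comm_cost_us(msg, latency_us=1.5, bandwidth_gbps=32.0, n_msgs=4)

print(f"GigE  : {gige:.0f} us per step")
print(f"QDR IB: {qdr:.0f} us per step")
```

Under these assumptions GigE costs ~6.6 ms of communication per step
while QDR InfiniBand stays around 0.2 ms; since a GPU MD step for ~100k
atoms takes on the order of a few ms or less, GigE communication would
dominate the runtime.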
>>
>>> 2) In the event that gigabit ethernet is insufficient, we have
>>> considered purchasing an Infiniband interconnect. This, of course,
>>
>> Only if you want to run across multiple nodes. If you have multiple
>> jobs, you could always just run them on individual nodes, which
>> should be fine.
>>
>>> would require 3x16 PCIe lanes, which no consumer motherboard I have
>>> seen provides. It seems like the most common configuration is one x16
>>
>> See: http://www.provantage.com/supermicro-x8dtg-qf~7SUPM39V.htm
>>
>> Works well with 4 GPUs in one box. All four slots are x16. If you
>> want the specs for a complete machine, here's an option:
>>
>> http://www.rosswalker.co.uk/foo.htm
>>
>>> slot with two x8 slots. This brings us to the question, how much does
>>> AMBER rely on GPU-CPU data transfers? Would running two GPUs with 8
>>> lanes each substantially reduce performance? Is there a way we could
>>> disable 8 lanes of our current setup for benchmarking purposes?
>>
>> Running in parallel across multiple GPUs will be poor if you only
>> have them in x8 slots. It should not affect single-GPU runs too much,
>> maybe 10% or so. However, to run a single calculation across multiple
>> GPUs you need them all in x16 slots.
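A rough sketch of why slot width matters for multi-GPU runs: compare the
transfer time of a per-step coordinate buffer over x16 versus x8 links
(the effective-bandwidth figures are my rough assumptions for PCIe 2.0,
not measured values):

```python
# Illustrative PCIe transfer-time comparison for a coordinate buffer.
# Effective bandwidths are rough PCIe 2.0 assumptions, not measurements.
n_atoms = 100_000
bytes_per_atom = 3 * 8          # double-precision x, y, z
buf_mb = n_atoms * bytes_per_atom / 1e6

bw_x16_gbs = 6.0                # usable PCIe 2.0 x16 bandwidth (approx.)
bw_x8_gbs = 3.0                 # x8 halves the lanes

t_x16_us = buf_mb / bw_x16_gbs * 1e3
t_x8_us = buf_mb / bw_x8_gbs * 1e3
print(f"{buf_mb:.1f} MB buffer: x16 ~{t_x16_us:.0f} us, x8 ~{t_x8_us:.0f} us")
```

Halving the lanes doubles every per-step transfer, which is consistent
with the advice that multi-GPU runs want all cards in x16 slots while a
single-GPU run barely notices.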
>>
>> All the best
>> Ross
>>
>> /\
>> \/
>> |\oss Walker
>>
>> ---------------------------------------------------------
>> | Assistant Research Professor |
>> | San Diego Supercomputer Center |
>> | Adjunct Assistant Professor |
>> | Dept. of Chemistry and Biochemistry |
>> | University of California San Diego |
>> | NVIDIA Fellow |
>> | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
>> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
>> ---------------------------------------------------------
>>
>> Note: Electronic Mail is not secure, has no guarantee of delivery, may
>> not
>> be read every day, and should not be used for urgent or sensitive
>> issues.
>>
>>
>>
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>>
>
Received on Fri Feb 18 2011 - 21:00:02 PST