Hi Parish,
I think you will be shocked at just how bad GigE is these days. You are
talking about putting 8 cores in one box, which means you have a single
1 Gbps connection serving all of them. That works out to roughly 125 Mbps
per core before you even consider contention, so each core is effectively
running on 100 meg ethernet.
So the short answer is that multi-core chips have completely killed ethernet
as a form of interconnect for MD. In fact, such approaches to putting more
power in a box (forget the memory bandwidth issues for the moment) are
rapidly killing infiniband as well; just wait until we get to 16-core chips
in a couple of years and then we will truly be screwed...
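To put numbers on that, here is a quick back-of-the-envelope sketch in Python
(purely illustrative; the 8- and 16-core counts are just the ones discussed
above):

    # Per-core share of a single gigabit ethernet link, ignoring contention.
    LINK_MBPS = 1000.0  # 1 GigE in megabits per second

    for cores in (8, 16):
        per_core_mbps = LINK_MBPS / cores
        print(f"{cores} cores sharing 1 GigE: ~{per_core_mbps:.0f} Mbps each "
              f"(~{per_core_mbps / 8:.1f} MB/s)")

    # 8 cores  -> ~125 Mbps each, i.e. roughly 100 meg ethernet per core
    # 16 cores -> ~62 Mbps each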
Firstly, I would be wary of the quad core Intel chips (Clovertown); they
really are very poorly designed. Your best option at the moment is likely
quad core Opterons, although you might have to wait a couple of months to
get them as all the first batches are going direct to the new Sun machine at
TACC. Alternatively, wait for Intel QuickPath (CSI) to come out, but I think
that is likely 9 to 12 months away. Failing that, you might want to look at
quad dual-core Opterons and see how they compare in price to dual quad-core
Intels.
So really, if you want to run regular MD runs on more than one box at a
time, your only real choice is infiniband. You might be able to get to two
boxes with a couple of gigE cards in each box and crossover cables (to avoid
switch contention), but the performance probably won't be great. One option
you might want to look at is infiniband crossover cables: that saves you the
cost of an infiniband switch and still lets you use 2 boxes at once, i.e.
16 cpus.
If, however, you plan to run a lot of coarse-grained parallel work, like
umbrella sampling where you truly run separate calculations, or replica
exchange simulations which are only loosely coupled, then you can build
yourself a cluster using just gigabit ethernet and run 8 cpus per replica.
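If you go the replica exchange route, the replicas are normally driven
through a multisander groupfile with one sander command line per replica.
Purely as an illustrative sketch (the replica count and all file names below
are made-up placeholders; check the flags for your Amber version against the
manual), something like this can generate one:

    # Hypothetical helper: write a multisander groupfile, one line per replica.
    # remd.in.NNN, prmtop, inpcrd.NNN etc. are placeholder file names.
    n_replicas = 8

    with open("remd.groupfile", "w") as f:
        for i in range(1, n_replicas + 1):
            f.write(
                f"-O -i remd.in.{i:03d} -p prmtop -c inpcrd.{i:03d} "
                f"-o remd.out.{i:03d} -r restrt.{i:03d} -x mdcrd.{i:03d}\n"
            )

The job is then launched with the parallel sander using -ng and -groupfile
(plus the REMD options described in the manual), so each replica gets its own
8 cpus and only the infrequent exchanges have to cross the gigE network.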
Beyond that, rather than spend a small fortune getting something that will
be mediocre at best, you would probably be better off just applying for time
on NSF supercomputers. See https://pops-submit.teragrid.org/ for details of
applying for time. You can get 30,000 SUs for free just by sending a CV and
an abstract. If you want more than this, you can write a short proposal;
proposals are accepted every 3 months. At the moment there is a glut of time
available, so it will be fairly easy to get plenty. These machines are all
connected with decent interconnects, as well as having parallel file systems
over SANs, which is really the only 'proper' way to do cluster-based file
systems since NFS really doesn't cut it. They are also centrally maintained,
meaning you don't need to pay anyone to look after the machines.
Anyway, just an idea. Feel free to contact me directly if you want more
details.
All the best
Ross
/\
\/
|\oss Walker
| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross@rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
_____
From: owner-amber@scripps.edu [mailto:owner-amber@scripps.edu] On Behalf Of
Parish, Carol
Sent: Saturday, October 06, 2007 10:20
To: amber@scripps.edu
Subject: RE: AMBER: request for hardware recommendations
Thanks very much. Judging by the timings on the Amber page, I think that
some of the time I'd want to run on at least 16 cores, maybe up to as many
as 40 cores, but most jobs would use 8 or so cores. I'd likely use both
pmemd and sander, but most of my longer jobs will likely be REMD/sander.
I'm starting to think that I'm sort of at the cusp, where some of my jobs
would do OK with gigE and some would benefit from infiniband, and I just
need to choose a configuration? Thanks again, Carol
_____
From: owner-amber@scripps.edu [mailto:owner-amber@scripps.edu] On Behalf Of
Carlos Simmerling
Sent: Saturday, October 06, 2007 1:00 PM
To: amber@scripps.edu
Subject: Re: AMBER: request for hardware recommendations
Carol,
are you thinking of using all 80 cores at once for runs, or will you usually
have multiple simulations going (multiple users)? For 80 cores on a small
number of simulations you probably won't get good scaling with gigE. It also
depends on whether you will use mostly pmemd, which scales well, or sander,
which doesn't scale as well but has more functionality. pmemd is great for
standard MD, but anything beyond that (most restraints, free energy
calculations, REMD, NEB, and so on) will require sander.
Let us know how you'll be using it and we might be able to help more. There
are also lots of benchmarks on the Amber page for various clusters.
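One simple way to use those benchmark numbers when weighing gigE against
infiniband is to turn them into speedup and parallel efficiency relative to
a single box. A small Python sketch (the ns/day figures below are invented
placeholders, not real measurements):

    # Convert benchmark throughput (ns/day) into speedup and efficiency.
    baseline_cores, baseline_ns_day = 8, 1.0   # one 8-core box
    timings = {16: 1.7, 32: 2.6, 64: 3.1}      # hypothetical cluster numbers

    for cores, ns_day in timings.items():
        speedup = ns_day / baseline_ns_day
        efficiency = speedup / (cores / baseline_cores)
        print(f"{cores:3d} cores: {speedup:.2f}x speedup, "
              f"{efficiency:.0%} efficiency")

If the efficiency falls off quickly past two boxes, that is the point where a
better interconnect (or staying within one box) starts to pay for itself.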
Carlos
On 10/6/07, Parish, Carol <cparish@richmond.edu> wrote:
Please forgive this hardware-recommendation related post. I looked
through the manual, the webpage and the reflectors and I didn't see any
recent information on this specific issue.
I would like to use AMBER for systems of about 200 residues in explicit
and implicit solvent. My budget will allow me to purchase either a cluster
of 10 dual quad-core Intels (80 cores) with gigE, or a much smaller number
of cores with infiniband. Should I invest in infiniband, or would gigE
scale OK in the TIP3P calcs for a cluster this size (10 boxes; 80 cores)?
I have heard rumors that it can be difficult to install AMBER in
parallel on commodity clusters. Can anyone recommend linux
hardware/software combinations that work best?
Can anybody recommend a vendor who can install and test AMBER on a
gigE/intel cluster?
Can anyone recommend a good quality gigE switch? Does anyone know of a
HOWTO for configuring a switch to work optimally?
Thanks very much, Carol Parish
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber@scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo@scripps.edu
-----------------------------------------------------------------------
Received on Sun Oct 07 2007 - 06:08:06 PDT