We are currently considering upgrading our Intel-based Beowulf cluster with
Myrinet interconnects in an attempt to improve scaling with AMBER jobs.
Right now we're using standard fast ethernet, and we get the expected
results: scaling tails off after about 8 CPUs, depending on compiler, job
size, etc. The conventional wisdom is that the bottleneck is packet latency,
which leads us to consider Myrinet, where latencies are claimed to drop from
the roughly 100 us typical of TCP over fast ethernet to ~20 us.
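Before committing we'd at least like to measure that latency ourselves on
both interconnects. Something like the minimal MPI ping-pong sketch below
(generic C + MPI, nothing AMBER-specific; the iteration count is arbitrary
and the message is a single byte) is the sort of thing we have in mind:

/* pingpong.c -- crude one-way latency estimate between two MPI ranks.
   Build: mpicc -O2 pingpong.c -o pingpong
   Run:   mpirun -np 2 ./pingpong   (one process per node)            */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 10000;
    char byte = 0;
    MPI_Status status;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);      /* start both ranks together */
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {              /* rank 0 sends, then waits for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {       /* rank 1 echoes every message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)                    /* one-way latency = half the round trip */
        printf("one-way latency: %.1f us\n",
               (t1 - t0) / iters / 2.0 * 1.0e6);

    MPI_Finalize();
    return 0;
}

Halving the measured round-trip time gives a rough one-way latency figure to
compare against the vendor's claims on each interconnect.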
We'd like to get good scaling up to at least 16 processors (32 if possible).
We're trying to get Myricom to lend us some hardware for benchmarking, but
the best they'll come up with is 4 cards and a 4-port switch, which won't
tell us much. Since this upgrade would nearly double the cost of our
cluster, we want to see some indication of real-world benefit before
committing to the hardware. Does anybody have experience running AMBER on a
cluster with this technology?
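For what it's worth, about the most we could do with a 4-node loaner is fit
a crude Amdahl-type model to a small test run and extrapolate, something
like the sketch below. The measured speedup in it is a placeholder, not a
real AMBER number, and the model ignores the growth of communication cost
with processor count, so at best it gives an optimistic upper bound:

/* amdahl.c -- extrapolate speedup from a small test run using Amdahl's law.
   The "measured" numbers below are placeholders, not real AMBER results.  */
#include <stdio.h>

int main(void)
{
    double p_meas = 4.0;   /* CPUs in the trial run (hypothetical)          */
    double s_meas = 3.2;   /* speedup measured on those CPUs (hypothetical) */
    int    targets[] = { 8, 16, 32 };
    int    i;

    /* Amdahl's law: S(p) = 1 / (f + (1 - f)/p); solve for serial fraction f */
    double f = (p_meas / s_meas - 1.0) / (p_meas - 1.0);

    printf("implied serial fraction: %.3f\n", f);
    for (i = 0; i < 3; i++) {
        double p = (double)targets[i];
        double s = 1.0 / (f + (1.0 - f) / p);
        printf("predicted speedup on %2d CPUs: %4.1f (efficiency %.0f%%)\n",
               targets[i], s, 100.0 * s / p);
    }
    return 0;
}

Since the real question is how communication latency behaves past 8 CPUs,
which this model leaves out entirely, real-world numbers from people actually
running AMBER over Myrinet would be far more convincing.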
Thanks in advance.
Jarrod A. Smith
Research Asst. Professor, Biochemistry
Assistant Director, Center for Structural Biology
Computation and Molecular Graphics
Vanderbilt University
jsmith_at_structbio.vanderbilt.edu