AMBER: Performance issues on Ethernet clusters

From: Sasha Buzko <>
Date: Thu, 17 Apr 2008 11:20:52 -0700

Hi all,
I've just completed setting up pmemd with mpich2 to test on a cluster
with gigabit Ethernet connections. As a test case, I used an example
from an Amber tutorial (suggested by Ross).
In my setup, using pmemd on up to 32 nodes gave no performance gain at
all over a single 4-processor system. The best case I had was about 5%
improvement when running 1 pmemd process per node on a 32 node subset of
the cluster. There is other traffic across this private subnet, but it's
minimal (another job running on the rest of the cluster only accesses
NFS shares to write the results of a job with no constant data
transfer). In all cases, cpu utilization ranged from 65% (1 process per
node) to 15-20% (4 per node). With 4 processes per node, a run took twice
as long on 32 nodes as it did on a single box.
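To put those timings in perspective, here is a minimal sketch (in Python, purely illustrative) of the parallel efficiency they imply relative to the single 4-core box; the timing ratio of 2x is taken from the numbers above, everything else is just arithmetic:

```python
# Parallel efficiency relative to a baseline run on fewer cores.
# Illustrative only; the 2x slowdown figure comes from the message above.

def efficiency(t_base, cores_base, t_par, cores_par):
    """Speedup vs. the baseline run, scaled by the extra cores used."""
    speedup = t_base / t_par
    return speedup * cores_base / cores_par

# 32 nodes x 4 processes took about twice as long as one 4-core box:
eff = efficiency(t_base=1.0, cores_base=4, t_par=2.0, cores_par=128)
print(f"{eff:.1%}")  # roughly 1.6% efficiency
```

Numbers that low are typical of a communication-bound run: each step spends far more time waiting on the network than computing, which matches the 15-20% cpu utilization you observed.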

Is there anything in the application/cluster configuration or build
options that can be done (other than looking for cash to get InfiniBand)? I
hope so, since it's hard to believe that all the descriptions of
Ethernet-based clusters (including this one) are meaningless.

Thank you for any suggestions.


The AMBER Mail Reflector
To post, send mail to
To unsubscribe, send "unsubscribe amber" to
Received on Fri Apr 18 2008 - 21:20:07 PDT