Re: ?s: Xeon vs. Athlon, latency

From: R. M. Fesinmeyer <>
Date: Mon 8 Oct 2001 10:15:39 -0700


Thank you for your response; I have some comments intermingled below.

> >2) Network latency usually comes up whenever a conversation about
> >clusters starts. In a previous conversation
> >(
> >ethernet was frowned upon because of its high latency when using a
> >My understanding is that standard hubs can have lower latency than the
> >average store-and-forward switch. For a 2-4 node (dual cpu) cluster,
> >would a hub provide better performance scaling?
> Most switches I've seen now are cut-through. Is that not right?
> At least, I cannot detect a latency difference when I use a switch
> versus a crossover cable.

My impression was that most consumer-grade switches (<$400) were still
store-and-forward or a mix of the two modes (depending on network load). I
think the 10Mb/s switches are pretty much all cut-through, but I thought
store-and-forward was still prevalent among the 100Mb/s switches. Of course
I'm not going to argue with an actual test. For my reference, could you
tell me what switch you performed the latency testing on?
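For anyone curious, here is a quick back-of-the-envelope sketch of where the
store-and-forward latency penalty comes from. The frame sizes and link speed
below are illustrative assumptions on my part, not measurements of any
particular switch:

```python
# A store-and-forward switch must receive the entire frame before it starts
# forwarding; a cut-through switch forwards as soon as it has read the header.
# The difference is the serialization delay of the rest of the frame, per hop.

def serialization_delay_us(frame_bytes, link_mbps):
    """Time to clock frame_bytes onto a link_mbps Ethernet link, in microseconds."""
    return frame_bytes * 8 / link_mbps  # bits / (bits per microsecond)

frame = 1500   # full-size Ethernet payload, bytes (assumed)
header = 64    # roughly what a cut-through switch waits for, bytes (assumed)

store_and_forward = serialization_delay_us(frame, 100)   # per-hop delay, 100Mb/s
cut_through = serialization_delay_us(header, 100)
print(store_and_forward, cut_through)
```

So for full-size frames on 100Mb/s links the store-and-forward penalty is on
the order of 100 microseconds per hop, which is why it gets noticed in
latency-sensitive cluster work.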

This page has some more information regarding hub vs. switch performance:

> >With a very small cluster (2-4 nodes), it might even be better to do away
> >with the hub/switch entirely and connect each pair of systems with an
> >ethernet crossover cable (obviously this scales depending on your number
> >of PCI slots). Naturally this requires delving into some relatively-fancy
> >network set-up, but for the cost of 10Mb/s cards (I doubt amber could
> >saturate that connection between two nodes), it would seem like a very
> >inexpensive way of getting very-fast/low-latency interconnects. Is such a
> >possibility even worth considering?
> Actually, many people have suggested using multiple NICs per node
> and just connecting every node to every node or nearly so. With 3 quad
> port NICs per node you can connect up to 12 other nodes. But that you
> suggest 10Mb/s as being "fast" and "low latency" is just not right. From
> observing throughput on my AMBER runs, during the communications phase
> I see AMBER saturating connections on pairs of nodes.

Thanks for sending this bit of information. I hadn't realized AMBER needed
that much bandwidth between nodes. I have seen the quad-port NICs before,
but I was always somewhat surprised that one could have 400Mb/s worth of
network interfaces on a 32-bit/33MHz PCI bus. I have noticed that a few of
them support the 64-bit PCI interface, which can run at 66MHz (depending on
the motherboard).
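Working through the arithmetic, the bus may be less of a bottleneck than it
first appears. These are the standard theoretical PCI peaks; real sustained
throughput is of course lower:

```python
# Rough comparison of PCI bus bandwidth vs. aggregate quad-NIC bandwidth.
# Figures are theoretical peaks, not achievable sustained rates.

def pci_peak_mbytes(bus_width_bits, clock_mhz):
    """Peak PCI transfer rate in MB/s: bytes per transfer * transfers per microsecond."""
    return bus_width_bits / 8 * clock_mhz

pci_32_33 = pci_peak_mbytes(32, 33)   # conventional 32-bit/33MHz PCI
pci_64_66 = pci_peak_mbytes(64, 66)   # 64-bit/66MHz PCI

quad_nic_mbytes = 4 * 100 / 8         # four 100Mb/s ports in MB/s
print(pci_32_33, pci_64_66, quad_nic_mbytes)
```

Four full-duplex 100Mb/s ports come to 50MB/s each way, against a 132MB/s
peak for plain 33MHz PCI, so a single quad-port card is at least plausible
even without the 64-bit bus.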

I'm glad to know that the node-node interconnection method has been tried; I
don't think I've seen it discussed on the mailing list before.
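For completeness, here is the counting behind the full-mesh idea (my own
sketch, just restating the "3 quad-port NICs connect up to 12 other nodes"
point from above):

```python
# A crossover-cable full mesh: each of the n nodes needs a direct link to
# the other n-1 nodes, so every node needs n-1 NIC ports and the cluster
# needs n*(n-1)/2 cables in total.

def mesh_requirements(n_nodes):
    nics_per_node = n_nodes - 1
    cables = n_nodes * (n_nodes - 1) // 2
    return nics_per_node, cables

for n in (2, 4, 8, 13):
    print(n, mesh_requirements(n))
```

A 4-node cluster only needs 3 ports per node and 6 cables, which is why this
stays cheap at small scale; at 13 nodes you already need 12 ports per node
(the three quad-port cards mentioned above) and 78 cables.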
