In my opinion, if you have more than about 8 to 16 CPUs and you are 
running large, long simulations, money spent on good interconnects is money 
well spent.  You simply cannot efficiently run 16+ processor jobs on gigabit 
ethernet, and honestly I don't much like going above about 4 processor jobs 
on gigabit ethernet.  I would recommend an InfiniBand implementation these 
days.  I don't know the costs vs. Myrinet (which is also okay, but I believe 
I am seeing better numbers and stability in some of the InfiniBand 
installations I use; honestly, I no longer have access to any Myrinet 
systems).  These comments are all predicated on the assumption that you 
primarily run PME simulations; generalized Born is less demanding on the 
interconnect, so using gigabit ethernet is less of a problem there.
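Whichever interconnect you consider, it is worth measuring parallel efficiency on your own jobs before committing to large processor counts. Efficiency is E(N) = T(1) / (N * T(N)), where T(N) is wall-clock time on N processors. A minimal sketch of that arithmetic (the timings below are hypothetical placeholders, not AMBER benchmarks):

```python
# Hypothetical wall-clock seconds per fixed number of MD steps.
# These numbers are illustrative only -- substitute your own timings.
timings = {1: 1000.0, 4: 280.0, 8: 170.0, 16: 130.0}

def efficiency(t1, n, tn):
    """Parallel efficiency: ideal speedup on n procs is n, so E = t1 / (n * tn)."""
    return t1 / (n * tn)

for n, tn in sorted(timings.items()):
    speedup = timings[1] / tn
    print(f"{n:3d} procs: speedup {speedup:5.2f}, "
          f"efficiency {efficiency(timings[1], n, tn):5.1%}")
```

With numbers like these, 16 processors would give under 50% efficiency, i.e. you are paying for more than half the machine to wait on communication.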
Regards - Bob Duke
----- Original Message ----- 
From: "Mingfeng Yang" <mfyang.gmail.com>
To: <amber.scripps.edu>
Sent: Thursday, July 20, 2006 9:35 AM
Subject: AMBER: Myrinet or Gigabit ethernet?
>
> I am planning to build a dual-core Opteron based Linux cluster. Mostly, we 
> will run Amber 9 on this cluster for MD simulation. Due to a limited budget, 
> I'd rather not spend several thousand dollars on an interconnect like 
> Myrinet or InfiniBand. But I am afraid gigabit ethernet will give poor 
> parallel efficiency. If any of you have experience with these 
> interconnects, can you tell me how much difference they make?
>
>
> Thanks,
> Mingfeng
> -----------------------------------------------------------------------
> The AMBER Mail Reflector
> To post, send mail to amber.scripps.edu
> To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
> 
Received on Sun Jul 23 2006 - 06:07:07 PDT