Well, without putting a lot of thought into a response, I would quickly say
that I would not use gigabit Ethernet with more than about 8 CPUs, but I
would use a good InfiniBand implementation on up to maybe 128 CPUs;
certainly 96 (typically these are dual-CPU nodes). For pmemd I test and
benchmark on a large Opteron/InfiniBand cluster and an Intel
EM64T/InfiniBand cluster (both around 1000 CPUs); on both I can get up to
around 6-7 ns/day for the factor IX benchmark (~90K atoms) on 128 CPUs. The
interconnect really does matter: gigabit Ethernet is not very good, and a
good InfiniBand setup is rather nice, all things considered. I say a "good
InfiniBand" because InfiniBand comes in different brands and speeds, and I
don't have the hardware details for the two clusters I use (this is the
sort of detail that seems to get neglected); I would expect, though, that
the clusters that perform well are using top-end InfiniBand
implementations.
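If you want to sanity-check numbers like that yourself, here is a rough
back-of-the-envelope sketch in Python. The timestep, seconds-per-step, and
CPU counts in it are illustrative placeholders I made up for the example,
not measurements from either cluster:

# Rough ns/day and scaling-efficiency arithmetic for an MD benchmark.
# All numbers below are illustrative placeholders, not real measurements.

def ns_per_day(timestep_fs, seconds_per_step):
    """Nanoseconds of simulated time per wall-clock day."""
    steps_per_day = 86400.0 / seconds_per_step
    return steps_per_day * timestep_fs * 1.0e-6  # fs -> ns

def parallel_efficiency(ns_day_small, cpus_small, ns_day_large, cpus_large):
    """Fraction of ideal speedup retained when scaling up the CPU count."""
    ideal = ns_day_small * (cpus_large / cpus_small)
    return ns_day_large / ideal

# A 2 fs timestep at ~0.025 s of wall clock per step works out to
# ~6.9 ns/day, in the same ballpark as the factor IX figure above.
print(ns_per_day(2.0, 0.025))                  # ~6.9
# Hypothetical scaling check: 2.0 ns/day on 32 CPUs vs 6.5 on 128.
print(parallel_efficiency(2.0, 32, 6.5, 128))  # ~0.81

A poor interconnect mostly shows up as a collapse in that efficiency number
as you add CPUs, which is why gigabit Ethernet stops paying off so early.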
Regards - Bob Duke
----- Original Message -----
From: "Ed Pate" <pate.math.wsu.edu>
To: <amber.scripps.edu>
Sent: Wednesday, April 19, 2006 7:10 PM
Subject: AMBER: timing estimates
> Dear Amber users:
>
> This is a somewhat poorly posed question. Are there any timing estimates
> available, or rules of thumb, for the computational speed of an Amber
> molecular dynamics simulation run with InfiniBand interconnects as
> opposed to Ethernet? Does InfiniBand give a significant speedup? The
> application would be a large protein in a box of explicit water molecules.
>
> Thanks for the help.
>
> Ed Pate