Re: AMBER: PMEMD configuration and scaling

From: Lars Skjærven <lars.skjarven.biomed.uib.no>
Date: Tue, 9 Oct 2007 15:56:09 +0200

Right... Sorry for the typo. It is supposed to be 61% for 24 CPUs, not
31%. Thank you :-)
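
For reference, the %scaling figure corrected above follows the usual
definition: throughput relative to the smallest run, divided by the
corresponding increase in CPU count. Below is a minimal sketch of that
arithmetic (Python, added here purely for illustration and not part of the
AMBER or PMEMD benchmark scripts; the 2-CPU baseline is an assumption, but
it reproduces the %scaling column of Bob's table quoted further down):

# Sketch: parallel scaling efficiency relative to the smallest run.
#   scaling(N) = 100 * (nsec/day at N CPUs) / (nsec/day at base) / (N / base)
# Data are Bob's JAC numbers from the quoted message below.
jac = {2: 0.491, 4: 0.947, 8: 1.82, 16: 3.22,
       32: 6.08, 64: 10.05, 96: 11.84, 128: 12.00}   # procs -> nsec/day
base_procs, base_rate = 2, jac[2]
for nprocs, rate in sorted(jac.items()):
    scaling = 100.0 * (rate / base_rate) / (nprocs / base_procs)
    print(f"{nprocs:4d} procs  {rate:6.3f} nsec/day  {scaling:5.1f} % scaling")

For example, for the 64-CPU entry in that table: (10.05 / 0.491) / 32 = 64%.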

Quite a gap between your benchmark and mine. Obviously there is room for
improvement...

Lars

On 10/9/07, Robert Duke <rduke.email.unc.edu> wrote:
>
> Lars -
> Thanks for the update. I expect the worse-than-expected InfiniBand values
> you are seeing here are due either to 1) the impact of a quad-core node on
> one InfiniBand card (i.e., with quad core you are, roughly speaking,
> sending twice the traffic through one network interface card that you
> would send with a dual-CPU-per-node configuration), 2) possibly remaining
> MPI issues - MVAPICH is what we have tested in the past - or 3) possibly
> less high-end InfiniBand hardware than we have tested. The data I have on
> the JAC benchmark, running on dual-CPU Opteron nodes with really nice,
> very well maintained InfiniBand (this is Jacquard at NERSC), are:
>
> Opteron InfiniBand Cluster - JAC - NVE ensemble, PME, 23,558 atoms
>
> #procs   nsec/day   scaling, %
>
>      2      0.491      100
>      4      0.947       96
>      8      1.82        92
>     16      3.22        82
>     32      6.08        77
>     64     10.05        64
>     96     11.84        50
>    128     12.00        38
>
> Also nice to see the gigabit Ethernet numbers. Note that your calculation
> of %scaling on 24 InfiniBand CPUs has to be wrong.
>
> Best Regards - Bob
>
>
>
