RE: AMBER: pmemd speedup and interactions

From: Ross Walker <ross@rosswalker.co.uk>
Date: Wed, 31 Mar 2004 09:42:32 -0800

Dear Lubos,

Hyper-threading typically gives you a 5 to 10% gain over running without
it. However, in going from 4 to 8 cpus you have increased your
communication overhead by more than you have gained from the
hyper-threading, hence the slowdown. Note that this is not just a pmemd
issue; you will almost certainly see it with any program that does
extensive floating point work, since hyper-threading is essentially
designed (on a simple level) to let one physical cpu do floating point
and integer arithmetic at the same time. The two logical cpus on a
hyper-threaded processor still share a single floating point unit, so 8
processes all doing floating point math on 4 physical cpus cannot be
expected to run efficiently.
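
If you want to convince yourself of this outside of pmemd, the toy
benchmark below (my own sketch, nothing to do with the AMBER code;
the file name and iteration count are just placeholders; assumes gcc
and POSIX threads, compile with "gcc -O2 fpbench.c -o fpbench
-lpthread") simply runs N threads of dependent floating point work.
On a 4-cpu hyper-threaded box you should see the wall time stay
roughly flat up to 4 threads and then roughly double at 8, with or
without hyper-threading enabled:

/* fpbench.c - time N threads of FPU-bound work (illustrative sketch) */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NITER 1000000000L            /* dependent FP operations per thread */

static void *fp_work(void *arg)
{
    volatile double x = 1.0;         /* volatile keeps the loop honest */
    long i;
    (void)arg;
    for (i = 0; i < NITER; i++)
        x = x * 1.000000001 + 1e-9;  /* multiply-add chain, FPU bound */
    return NULL;
}

int main(int argc, char **argv)
{
    int nthreads = (argc > 1) ? atoi(argv[1]) : 4;
    pthread_t *tid = malloc(nthreads * sizeof(pthread_t));
    time_t start = time(NULL);
    int i;

    for (i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, fp_work, NULL);
    for (i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);

    /* On 4 physical cpus: ~flat wall time up to 4 threads, ~2x at 8. */
    printf("%d threads: %ld s\n", nthreads, (long)(time(NULL) - start));
    free(tid);
    return 0;
}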

My advice would be to turn off hyper-threading (in the BIOS) and simply
run with 4 processes (shmem + p4).
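
For example, with your mpich 1.2.5 built with -comm=shared, something
along these lines should do it (the machine file and the pmemd input
and output file names here are just placeholders for your own job):

  mpirun -np 4 -machinefile ./machines \
      $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd -r restrt

Since you built mpich with -comm=shared, the one binary will use shared
memory between the two cpus on a node and p4 between nodes.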

All the best
Ross

/\
\/
|\oss Walker

| Department of Molecular Biology TPC15 |
| The Scripps Research Institute |
| Tel:- +1 858 784 8889 | EMail:- ross@rosswalker.co.uk |
| http://www.rosswalker.co.uk/ | PGP Key available on request |



> -----Original Message-----
> From: owner-amber@scripps.edu
> [mailto:owner-amber@scripps.edu] On Behalf Of Lubos Vrbka
> Sent: 31 March 2004 01:27
> To: amber@scripps.edu
> Subject: AMBER: pmemd speedup and interactions
>
> hi guys,
>
> can the speedup for pmemd be dependent on the type of interactions
> that are present in my system?
>
> this is on a linux xeon cluster (2 cpus per node), pmemd 3.1, mpich
> 1.2.5.2 compiled with -comm=shared.
>
> i have a box of triangulated (SPCE) waters only.
>
> the time needed to finish my test job (in hours, 1 million steps,
> constant pressure):
>
> 1 proc                              3:30
> 2 proc (shmem)                      2:35
> 2 proc (shmem + hyperthread)        2:02
> 4 proc (shmem + p4)                 1:42
> 4 proc (shmem + p4 + hyperthread)   2:15
>
> so there is a speed increase, but for the last case (8 processes on 4
> processors) it's bad. i don't think the problem is in hyperthreading
> itself, since shmem with hyperthreading works fine (note that in this
> case i have only one pmemd binary for both shmem and p4, thanks to the
> -comm=shared switch to mpich).
>
> for the 3.0.1 and 3.1 versions compiled against mpich without
> -comm=shared (i.e. separate binaries for p4 and shmem) i see similar
> numbers... this is interesting, since my colleague told me he gets the
> best results for the last case - so i thought it could depend on the
> studied system...
>
> where could the problem be? do you have any ideas?
>
> regards,
>
> --
> Lubos
> _@_"

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber@scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo@scripps.edu
Received on Wed Mar 31 2004 - 18:53:01 PST