Re: [AMBER] issue running AMBER on double sockets hexa cores

From: Ross Walker <ross.rosswalker.co.uk>
Date: Mon, 16 Sep 2013 15:54:25 -0700

As Dave says, you should try this with PMEMD - BUT, I suspect it is a
misconfiguration of your MPI in some way. I'd expect this type of behavior
if your MPI were somehow locking the affinity to a single socket, so that
all 12 threads run on the same socket. Try running again and watch 'top' -
press 1 to show the utilization of each CPU. Run on a single socket as you
have been doing and note which cores are used. Then repeat with both
sockets and see if all 12 cores are used. (I assume you have hyperthreading
turned off so as not to complicate matters.)
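
If it does turn out to be an affinity problem, OpenMPI 1.6.x can both
report and control the placement for you. Purely as a sketch (this reuses
the exact sander.MPI command line from your message, and assumes your
OpenMPI build has processor-affinity support compiled in):

mpirun -np 12 --report-bindings --bind-to-core \
    $AMBERHOME/bin/sander.MPI -O -i ../mdin -o mdout -p ../prmtop -c ../inpcrd

--report-bindings prints the core each rank is pinned to before the run
starts. If all 12 ranks land on the cores of socket 0, try round-robining
the ranks across the two sockets:

mpirun -np 12 --report-bindings --bysocket --bind-to-core \
    $AMBERHOME/bin/sander.MPI -O -i ../mdin -o mdout -p ../prmtop -c ../inpcrd

The same flags apply unchanged if you swap the binary for
$AMBERHOME/bin/pmemd.MPI.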

All the best
Ross


On 9/16/13 11:42 AM, "Pascal Bonnet" <pascal.bonnet.univ-orleans.fr> wrote:

>Dear amber users,
>
>We have Dell PowerEdge R410 machines, each with two hexa-core sockets,
>running 64-bit CentOS 6.4. The processors are Intel Xeon X5650.
>
>When running the collection of AMBER benchmarks designed by Ross
>Walker, we obtain a good speed increase from a single core to all six
>cores of one socket (almost 6X faster).
>However, when we try to use both sockets (all 12 cores), there is no
>further speed increase; we obtain the same speed as with a single
>socket.
>I wonder whether anyone has already observed this behavior or has a
>solution.
>
>Here is the command line:
>mpirun -np X $AMBERHOME/bin/sander.MPI -O -i ../mdin -o mdout -p
>../prmtop -c ../inpcrd (with X=1 to 12)
>
>We use AMBER 12 and OpenMPI 1.6.4.
>Best regards,
>Pascal



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber