Re: [AMBER] issue running AMBER on double sockets hexa cores

From: Pascal Bonnet <pascal.bonnet.univ-orleans.fr>
Date: Wed, 18 Sep 2013 17:00:23 +0200

Dear Ross and Dave,

Thank you for your swift feedback.
We have indeed deactivated hyperthreading, and pmemd.MPI is much faster
(about 2x compared to sander.MPI).
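(That is, the same command line as in my original message below, with
pmemd.MPI substituted for sander.MPI:

mpirun -np X $AMBERHOME/bin/pmemd.MPI -O -i ../mdin -o mdout \
    -p ../prmtop -c ../inpcrd

with X = 1 to 12.)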

With hyperthreading deactivated we lose the speed increase: as you said,
all 12 threads run on the same socket and only 50% of the CPUs are used.
With hyperthreading activated, both sockets are used (as seen with the
top command), but we still observe a speed decrease when going from 6 to
8 CPUs, and the speed only climbs back slowly up to 12 CPUs. So it seems
there is a lack of communication between the sockets.
Have you observed such behavior before?
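
In case it is useful, here is what we plan to try next (a sketch only;
--bysocket, --bind-to-core and --report-bindings are the OpenMPI 1.6
binding options, and taskset is the usual Linux affinity tool):

# Spread the ranks round-robin across the two sockets, pin each rank to
# its own core, and print the resulting binding map on stderr:
mpirun -np 12 --bysocket --bind-to-core --report-bindings \
    $AMBERHOME/bin/pmemd.MPI -O -i ../mdin -o mdout -p ../prmtop -c ../inpcrd

# While the job runs, print the affinity mask of each rank:
for pid in $(pgrep pmemd.MPI); do taskset -cp $pid; done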

Here are the data:
With hyperthreading:
nb_procs   time (s)   MD speed (ns/day)
       1       3617        1.2
       2       2014        2.1
       4       1083        4.0
       6        777        5.6
       8        937        4.6
      10        831        5.2
      12        690        6.3

Without hyperthreading:
nb_procs   time (s)   MD speed (ns/day)
       1       3596        1.2
       2       2016        2.1
       4       1099        3.9
       6        783        5.5
       8       1194        3.6
      10       1077        4.0
      12       1008        4.3
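
For reference, the speedups these timings imply (speedup = time with 1
proc / time with N procs) show the drop at the socket boundary clearly:

Without hyperthreading:
 6 procs: 3596 / 783  = 4.6x  (77% parallel efficiency)
 8 procs: 3596 / 1194 = 3.0x  (38% parallel efficiency)
12 procs: 3596 / 1008 = 3.6x  (30% parallel efficiency)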

Best regards,
Pascal



On 17/09/2013 00:54, Ross Walker wrote:
> As Dave says you should try this with PMEMD - BUT, I suspect it is a
> misconfiguration of your MPI in some way. I'd expect this type of behavior
> if your MPI was somehow locking the affinity to a single socket. Thus all
> 12 threads run on the same socket. Try running again and run 'top' - press
> 1 to show the utilization of each CPU. Run on a single socket as you have
> been doing and note which cores are used. Then repeat with both sockets
> and see if all 12 cores are used. (I assume you have hyperthreading turned
> off in order to not complicate matters).
>
> All the best
> Ross
>
>
> On 9/16/13 11:42 AM, "Pascal Bonnet" <pascal.bonnet.univ-orleans.fr> wrote:
>
>> Dear amber users,
>>
>> We have Dell PowerEdge R410 computers, each with two hexa-core
>> sockets, running 64-bit CentOS 6.4. The processors are Intel Xeon
>> X5650.
>>
>> When running the collection of AMBER benchmarks designed by Ross
>> Walker, we obtain a good speed increase from a single core to the six
>> cores of one socket (almost 6x faster).
>> However, when we try to use both sockets (12 cores), there is no
>> further speed increase: we obtain the same speed as with a single
>> socket. I wonder if anyone has already observed this behavior or has
>> a solution.
>>
>> Here is the command line:
>> mpirun -np X $AMBERHOME/bin/sander.MPI -O -i ../mdin -o mdout -p
>> ../prmtop -c ../inpcrd (with X=1 to 12)
>>
>> We use AMBER 12 and OpenMPI 1.6.4.
>> Best regards,
>> Pascal
>>


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 18 2013 - 08:00:03 PDT