On 18.09.2013 17:00, Pascal Bonnet wrote:
> Dear Ross and Dave,
>
> Thank you for your swift feedback.
> Using pmemd.MPI is indeed much faster (2x compared to sander.MPI). We
> have indeed deactivated hyperthreading.
>
> With hyperthreading deactivated we lose the speed increase. As you said,
> all 12 threads run on the same socket and only 50% of the CPUs are used.
With hyperthreading (HT) deactivated, you absolutely need to make sure
that all 12 cores are used equally when spawning 12 pmemd.MPI processes
via mpirun. If that is not the case, your MPI environment is improperly
set up for this type of application: it dictates a CPU affinity that you
do not want, and that is what you need to fix. You may need to pass
additional command-line parameters to mpirun in order to get the CPU
affinity straight.
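For example, with OpenMPI 1.6.x something along these lines should bind
one process per core and distribute the ranks round-robin across the two
sockets (an untested sketch; please check 'mpirun --help' on your
installation for the exact flag names):

  mpirun -np 12 --bind-to-core --bysocket --report-bindings \
      $AMBERHOME/bin/pmemd.MPI -O -i ../mdin -o mdout -p ../prmtop -c ../inpcrd

The --report-bindings option makes mpirun print the core mask each rank
was bound to at startup, so you can verify that all 12 physical cores on
both sockets are actually used before looking at any timings.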
> However, with hyperthreading activated, both sockets are used (as seen
> with the top command), but we observe a speed decrease when going from
> 6 to 8 CPUs, and the speed only slowly increases again up to 12 CPUs.
> So it seems there is a communication problem between the sockets.
> Have you already observed such behavior?
In any case, the solution to your problem should not be a compromise à
la "with hyperthreading it somehow works". Don't invest time there: it
must work properly without HT.
Hyperthreading can be good for desktop workstations, since it allows
running more processes concurrently than there are CPU cores quite
efficiently. For HPC, however, you should only ever run as many compute
processes as you have physical cores. In that scenario HT never helps,
and it should be deactivated so it does not complicate things (it can,
for instance, confuse certain batch schedulers).
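As a quick sanity check that HT is really off (assuming the standard
CentOS 6 tools, nothing AMBER-specific):

  lscpu | grep -E 'Thread|Core|Socket'
  # expect: Thread(s) per core: 1, Core(s) per socket: 6, Socket(s): 2
  grep -c ^processor /proc/cpuinfo
  # should report 12 logical CPUs, not 24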
>
> Here are the data:
> With hyperthreading:
> nb_procs   time (s)   MD speed (ns/day)
>        1       3617                 1.2
>        2       2014                 2.1
>        4       1083                 4.0
>        6        777                 5.6
>        8        937                 4.6
>       10        831                 5.2
>       12        690                 6.3
>
> Without hyperthreading:
> nb_procs   time (s)   MD speed (ns/day)
>        1       3596                 1.2
>        2       2016                 2.1
>        4       1099                 3.9
>        6        783                 5.5
>        8       1194                 3.6
>       10       1077                 4.0
>       12       1008                 4.3
>
> Best regards,
> Pascal
>
>
>
> On 17/09/2013 00:54, Ross Walker wrote:
>> As Dave says you should try this with PMEMD - BUT, I suspect it is a
>> misconfiguration of your MPI in some way. I'd expect this type of behavior
>> if your MPI was somehow locking the affinity to a single socket. Thus all
>> 12 threads run on the same socket. Try running again and run 'top' - press
>> 1 to show the utilization of each CPU. Run on a single socket as you have
>> been doing and note which cores are used. Then repeat with both sockets
>> and see if all 12 cores are used. (I assume you have hyperthreading turned
>> off in order to not complicate matters).
>>
>> All the best
>> Ross
>>
>>
>> On 9/16/13 11:42 AM, "Pascal Bonnet" <pascal.bonnet.univ-orleans.fr> wrote:
>>
>>> Dear amber users,
>>>
>>> We have Dell PowerEdge R410 servers, each with two hexa-core sockets
>>> (Intel Xeon X5650 processors), running 64-bit CentOS 6.4.
>>>
>>> When running the collection of AMBER benchmarks designed by Ross
>>> Walker, we obtain a good speed increase from a single core to all six
>>> cores of one socket (almost 6x faster).
>>> However, when we try to use the six cores of both sockets there is no
>>> further speed increase; we obtain the same speed as with a single
>>> hexa-core socket.
>>> I wonder if someone has already observed this behavior or has a
>>> solution.
>>>
>>> Here is the command line:
>>> mpirun -np X $AMBERHOME/bin/sander.MPI -O -i ../mdin -o mdout -p
>>> ../prmtop -c ../inpcrd (with X=1 to 12)
>>>
>>> We use AMBER 12 and OpenMPI 1.6.4.
>>> Best regards,
>>> Pascal
>>>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 18 2013 - 08:30:02 PDT