Re: [AMBER] Error in PMEMD run

From: Gustavo Seabra <gustavo.seabra.gmail.com>
Date: Fri, 8 May 2009 18:28:35 +0100

> the best performance I have obtained was with a combination of 4 nodes
> and 4 CPUs (out of 8) per node.

I don't know exactly what you have in your system, but I gather you
are using 8-core nodes, and that you got the best performance by
leaving 4 cores per node idle. Is that correct?

In that case, I would suggest you go a bit further and also test
using only 1 or 2 cores per node, i.e., leaving the remaining 6 or 7
cores idle. So, for 16 MPI processes, try allocating 8 or 16 nodes,
as in the sketch below. (I didn't see those cases in your tests.)
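
For example, with Open MPI, something like the line below should
spread 16 ranks over 16 nodes at 1 rank each (just a sketch:
--npernode is an Open MPI flag, other MPIs have their own equivalents,
and the pmemd arguments are placeholders, so adjust to your setup):

  # 16 MPI ranks, 1 per node; the other 7 cores on each node stay idle
  mpirun -np 16 --npernode 1 \
      $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd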

AFAIK, these 8-core nodes are arranged as two 4-core sockets, and
inter-core communication, which is already slow among the 4 cores
within one socket, gets even worse when information has to pass
between the two sockets. Depending on your system, if you send 2
processes to the same node, it may place both on the same socket or
automatically split them, one per socket. You may also be able to
tell it explicitly to place 1 process per socket. (Look into the
mpirun flags; see the sketch below.) From the tests we've run on
those kinds of machines, we get the best performance by leaving ALL
BUT ONE core idle in each socket.
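
With Open MPI, for instance, that placement could look something like
this (again only a sketch: the mapping/binding flags are version
dependent, with older Open MPI releases spelling them -bysocket and
-bind-to-socket, and other MPIs using different options entirely, so
check your mpirun man page):

  # 16 ranks, 2 per node (8 nodes), each rank pinned to its own socket
  mpirun -np 16 --npernode 2 --map-by socket --bind-to socket \
      $AMBERHOME/exe/pmemd -O -i mdin -o mdout -p prmtop -c inpcrd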

Gustavo.

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber