AMBER: amber 9 on Intel Harpertown

From: Geoff Wood <>
Date: Wed, 13 Aug 2008 11:45:36 +0200

Dear Reflector,

We are currently testing Amber 9 on a new machine. We are having
problems with the MPI communications, and I was wondering whether
there are any known compatibility issues between the machine and the
way Amber is compiled before we start looking at hardware and driver
issues. Any comments or help would be much appreciated.

  The basic specs of the machine are as follows:

128 compute nodes, each with two quad-core Intel Harpertown 3.0 GHz
processors, for a total of 1024 cores;
Voltaire 20 Gbit/s InfiniBand fabric, used both to share files through
GPFS and to run MPI jobs.


We have successfully compiled Amber 9 using openmpi/1.2.6_gcc-4.1.2
and the Intel Fortran and C++ compilers. We ran the tests without
problems; however, when scaling jobs to 128-256 CPUs we encounter
MPI problems. The error is the following:

The InfiniBand retry count between two MPI processes has been
exceeded. "Retry count" is defined in the InfiniBand spec 1.2
(section 12.7.38):

     The total number of times that the sender wishes the receiver to
     retry timeout, packet sequence, etc. errors before posting a
     completion error.

This error typically means that there is something awry within the
InfiniBand fabric itself. You should note the hosts on which this
error has occurred; it has been observed that rebooting or removing a
particular host from the job can sometimes resolve this issue.

Two MCA parameters can be used to control Open MPI's behavior with
respect to the retry count:
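The error message is truncated here, but in Open MPI's openib BTL the two
MCA parameters it normally goes on to name are `btl_openib_ib_retry_count`
and `btl_openib_ib_timeout`. As a hedged sketch (not a confirmed fix for
this cluster; the executable, input files, and process count below are
placeholders), raising them on the mpirun command line looks like:

```shell
# Sketch: increase the InfiniBand retry count and ACK timeout for the
# openib BTL in Open MPI 1.2.x. retry_count maxes out at 7 per the
# InfiniBand spec; timeout is the exponent in 4.096 us * 2^timeout.
mpirun --mca btl_openib_ib_retry_count 7 \
       --mca btl_openib_ib_timeout 20 \
       -np 256 sander.MPI -O -i mdin -o mdout -p prmtop -c inpcrd
```

If raising these only delays the failure, that tends to support the
message's suggestion that the fabric itself (a flaky HCA, cable, or
switch port) is the real problem.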

Thanks in advance.

Dr Geoffrey Wood
Ecole Polytechnique Fédérale de Lausanne
tel: +41 21 693 03 23
CH-1015 Lausanne

The AMBER Mail Reflector
To post, send mail to
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
Received on Sun Aug 17 2008 - 06:07:04 PDT