Re: [AMBER] Good cluster configuration for running PMEMD.MPI?

From: Jason Swails <>
Date: Mon, 16 Apr 2012 17:37:10 -0400

On Mon, Apr 16, 2012 at 9:19 AM, Juan Carlos Muñoz García <> wrote:

> Hello,
> I'd like to ask for your advice regarding a good/fast cluster for
> running PMEMD.MPI from AMBER. Our group has the possibility of acquiring a
> computer cluster with the following configuration:
> - 4 AMD Opteron 6276 (Interlagos) processors, 16 cores each, at 2.3 GHz,
> with 8 MB L2 cache and 16 MB L3 cache, and 4 HyperTransport 3.0 links at
> 6.4 GT/s.
> - Dual H8QGI+-F motherboard, 3 hot-swap SATA bays, Matrox G200 VGA.
> - 16 MB, 2x PCI-e 2.0 x16, 2 PCI-e x8, 2 Gigabit network cards, IPMI
> 2.0 (KVM over LAN).
> - Memory: 128 GB DDR3/1600 ECC.
> According to what I've read on the AMBER benchmarks site, it seems it's
> better to use a higher number of nodes but with a lower number of cores
> per node.

This is highly system-dependent. For instance, you will get poor scaling
across multiple nodes if your interconnect is slow (InfiniBand is best
here). Also, the more cores that share the same memory bandwidth, the
worse the scaling will be within a node that has many processing cores.

The published benchmarks are examples run on supercomputing facilities,
aimed at giving a general idea of what you can expect from a few different
hardware configurations. However, the only way to determine the optimum
scaling for your cluster is to run some benchmarks on your cluster. The
benchmark suite is available on (I think), so you can download it and
run it to see how your system performs.
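As a rough sketch of such a scaling test (the file names mdin, prmtop, and
inpcrd are placeholders for whichever benchmark inputs you download, and
the exact mpirun flags may differ with your MPI stack), a loop like the
following times pmemd.MPI at several core counts:

```shell
#!/bin/sh
# Sketch of a simple pmemd.MPI scaling test.
# Assumes AMBERHOME is set and mdin/prmtop/inpcrd are benchmark
# inputs (placeholder names); adjust paths and counts for your cluster.

for np in 1 2 4 8 16 32 64; do
    mpirun -np "$np" "$AMBERHOME/bin/pmemd.MPI" -O \
        -i mdin -p prmtop -c inpcrd \
        -o mdout.$np -r restrt.$np -inf mdinfo.$np
    # The timing section of each mdout reports throughput (ns/day);
    # pull it out to see where the scaling curve flattens.
    grep 'ns/day' mdout.$np | head -1
done
```

Comparing the ns/day numbers as you add cores (first within one node, then
across nodes) will show directly where memory bandwidth or the interconnect
becomes the bottleneck on your particular hardware.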


Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
AMBER mailing list
Received on Mon Apr 16 2012 - 15:00:02 PDT