Re: [AMBER] Good cluster configuration for running PMEMD.MPI?

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 16 Apr 2012 17:37:10 -0400

On Mon, Apr 16, 2012 at 9:19 AM, Juan Carlos Muñoz García <
juan.munioz.iiq.csic.es> wrote:

> Hello,
>
> I'd like to ask you for your advice regarding a good/fast cluster for
> running PMEMD.MPI of AMBER. Our group has the possibility of acquiring a
> computer cluster with the following configuration:
>
> - 4 Opteron 6276 (Interlagos) processors, 16 cores each, at 2.3 GHz, with
> 8 MB L2 cache and 16 MB L3 cache, and 4 HyperTransport v3 links at 6.4
> GT/s.
> - Dual H8QGI+-F motherboard, 3 hot-swap SATA, Matrox G200 VGA (16 MB),
> 2x PCI-e 2.0 x16, 2x PCI-e x8, 2 Gigabit network cards, IPMI 2.0 (KVM
> over LAN).
> - Memory: 128 GB DDR3/1600 ECC.
>
> According to what I've read on the AMBER benchmarks site, it seems it's
> better to use a larger number of nodes, each with a lower number of cores
> per node.
>

This is highly system-dependent. For instance, you will get poor scaling
across multiple nodes if your interconnect is slow (InfiniBand is best
here). Also, the more cores that share the same memory bandwidth, the worse
the scaling will be within a single node.
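
For example, once you have measured ns/day at a few core counts, a quick
way to judge the scaling is to compute the speedup and parallel efficiency
relative to a single-core run. The numbers in this little Python sketch are
just placeholders; substitute your own measurements:

# A quick way (not part of Amber) to judge scaling from your own benchmark
# runs.  The ns/day figures below are placeholders -- replace them with the
# numbers reported in the timing section of your mdout/mdinfo files.

baseline_cores = 1
baseline_nsday = 0.95            # placeholder: ns/day measured on 1 core

measurements = {                 # cores -> ns/day (placeholders)
    8: 6.8,
    16: 11.5,
    32: 17.0,
    64: 22.0,
}

for cores, nsday in sorted(measurements.items()):
    speedup = nsday / baseline_nsday
    # ideal speedup is cores / baseline_cores; efficiency is the fraction
    # of that ideal actually achieved
    efficiency = speedup / (cores / baseline_cores)
    print(f"{cores:3d} cores: {nsday:5.1f} ns/day, "
          f"speedup {speedup:5.1f}x, efficiency {efficiency:5.1%}")

Once the efficiency drops well below, say, 50%, adding more cores is mostly
wasted.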

The published benchmarks were run at supercomputing facilities and are meant
to give a general idea of what you can expect from a few different hardware
configurations. However, the only way to determine the optimal scaling for
your cluster is to run some benchmarks on your cluster itself. The benchmark
suite is available on ambermd.org (I think), so you can download it and run
it to see how your system performs.
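
If it helps, something along these lines will time pmemd.MPI at several
core counts. This is only a rough sketch, not part of the benchmark suite
itself; the input file names are just the pmemd defaults, so run it from
inside whichever benchmark directory you download:

# Rough driver (not part of the Amber benchmark suite) for timing pmemd.MPI
# at several core counts.  Run it from inside one of the benchmark
# directories; the input file names below are just the pmemd defaults.

import subprocess

core_counts = [8, 16, 32, 64]    # adjust to match your nodes

for n in core_counts:
    cmd = [
        "mpirun", "-np", str(n),
        "pmemd.MPI",
        "-O",                    # overwrite any old output files
        "-i", "mdin",
        "-p", "prmtop",
        "-c", "inpcrd",
        "-o", f"mdout.{n}cores",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
    # The ns/day figure appears in the timing section at the end of each
    # mdout.<n>cores file (and in mdinfo while the run is in progress).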

HTH,
Jason

-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Apr 16 2012 - 15:00:02 PDT