On Tue, Sep 24, 2013 at 3:32 PM, George Tzotzos <gtzotzos.me.com> wrote:
> Hi everybody,
>
> I'm trying to run Amber on a cluster of the following specs
>
> SGI Specs – SGI ICE X
> OS - SUSE Linux Enterprise Server 11 SP2
> Kernel Version: 3.0.38-0.5
> 2x6-Core Intel Xeon
>
> 16 blades, 12 cores each
>
> Environment
> export AMBERHOME=/bio/georgios/MD/amber12
> export LD_LIBRARY_PATH=/opt/rpm_share/lib/lib64:/bio/george/amber12/lib:/bio/george/MD/amber12/AmberTools/lib
>
> Command line
> mpirun -np 48 pmemd.MPI -O -i prod.in -o prod_12ns.out -p 2erb_bis_solv.prmtop -c prod_10ns.rst -r prod_12ns.rst -x prod_12ns.mdcrd
>
> Question
>
> There is no advantage in increasing the number of cores beyond -np 24; the
> performance degrades as more cores are engaged. In fact, it is similar to
> or worse than what I get on an OS X machine with 2 x 3.06 GHz 6-core Intel
> Xeons.
>
> I'd be very grateful for any suggestions on what may be wrong.
>
Your interconnect between nodes may be too slow. Also, the more cores you
use on a single node, the more inter-node bandwidth you need to avoid a
slowdown (this is why some supercomputers give faster Amber timings when
you do not utilize the whole node). Bandwidth does not paint the whole
picture (the topology of the inter-node connections also matters to some
extent), but it is probably the most important factor.
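A quick way to see where scaling falls off is to run a short benchmark at
several core counts and compare the ns/day figures pmemd prints in the
timing section at the end of each mdout file. A minimal sketch using your
file names; bench.in is a hypothetical copy of prod.in with nstlim cut down
to a few thousand steps:

  # Short scaling benchmark; bench.in is a reduced-length copy of prod.in
  for np in 12 24 36 48; do
      mpirun -np $np pmemd.MPI -O -i bench.in -o bench_${np}.out \
          -p 2erb_bis_solv.prmtop -c prod_10ns.rst \
          -r bench_${np}.rst -x bench_${np}.mdcrd
      # pmemd reports throughput (ns/day) near the end of the mdout file
      grep "ns/day" bench_${np}.out | tail -1
  done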
You really need InfiniBand (QDR is typical, I think) to see good scaling
across nodes with Amber.
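If you want to confirm that the nodes actually have an active InfiniBand
link (and at what rate), the standard OFED utilities will show it. A
sketch, assuming ibstat and ibv_devinfo are installed; exact output varies
by stack:

  # Check HCA port state and link rate; a QDR link reports "Rate: 40"
  ibstat | grep -E "State|Rate"
  # Per-port detail (state, width, speed) from the verbs layer
  ibv_devinfo | grep -E "state|active_width|active_speed"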
HTH,
Jason
--
Jason M. Swails
Postdoctoral Researcher
BioMaPS, Rutgers University
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber