RE: AMBER: PMEMD configuration and scaling

From: Ross Walker <ross.rosswalker.co.uk>
Date: Sat, 6 Oct 2007 09:27:16 -0700

Hi Lars,

I have never used Scali MPI - first question: are you certain it is set up
to use the InfiniBand interconnect and not going over gigabit Ethernet?
Those numbers look to me like it's going over Ethernet.
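A quick node-level sanity check is to confirm the InfiniBand ports are
actually up before blaming the MPI layer. A minimal sketch, assuming a
standard OFED install (the diagnostic tools named here are an assumption,
not something from this thread):

  # Ports should report ACTIVE and the expected link width/speed;
  # a port that is Down or stuck at 1x would explain Ethernet-like numbers.
  ibstat
  ibv_devinfo | grep -E "state|active_width|active_speed"

  # If the fabric looks healthy, the remaining question is whether the MPI
  # library itself was built/configured to use it - check the Scali MPI
  # documentation for how it selects its network, since a silent fallback
  # to TCP over gigabit Ethernet would give exactly this scaling curve.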

For InfiniBand I would recommend MVAPICH / MVAPICH2 or VMI2 - all
compiled with the Intel compiler (yes, I know they are Opteron chips, but
surprise surprise, the Intel compiler produces the fastest code on Opterons
in my experience) - and then compile PMEMD with the same compiler.
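As a rough sketch of that workflow (the install path is a placeholder and
the exact MVAPICH2 link flags depend on your build - this is an assumption,
not a recipe from this thread): build MVAPICH2 with icc/ifort following its
own install notes, then point PMEMD at it instead of Scali, e.g.

  # placeholder path: an MVAPICH2 tree built with the Intel compilers
  MVAPICH2=/opt/mvapich2-intel

  cd $AMBERHOME/src/pmemd
  ./configure linux64_opteron ifort mpi   # same target used in the quoted build below
  # In the generated config.h, point the MPI link line at MVAPICH2 instead of
  # Scali, e.g. replace "-L/opt/scali/lib64 -lmpi -lfmpi" in LOADFLAGS with
  # "-L$MVAPICH2/lib -lmpich" (or simply use MVAPICH2's mpif90 wrapper as
  # F90/LOAD), then rebuild:
  make clean
  make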

Make sure you run the MPI benchmarks against that MPI installation and check
that you are getting ping-pong and random-ring latencies and bandwidths that
match the specs of the InfiniBand. All-to-All tests etc. will also confirm you
don't have a flaky cable connection, which can be common with InfiniBand.
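For reference, a bare-bones run of a standard benchmark suite is usually
enough to see which network the traffic is really using. A sketch assuming
the Intel MPI Benchmarks (IMB) are installed, with the launcher syntax
depending on the MPI in use:

  # Two processes placed on two *different* nodes:
  mpirun -np 2 ./IMB-MPI1 PingPong

  # Rough expectations for hardware of this era: InfiniBand should show a
  # few microseconds of small-message latency and ~900+ MB/s of bandwidth;
  # gigabit Ethernet shows tens of microseconds and ~100 MB/s.

  # All-to-All across the full process count exercises every link/cable:
  mpirun -np 32 ./IMB-MPI1 Alltoall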

Good luck.
Ross

/\
\/
|\oss Walker

| HPC Consultant and Staff Scientist |
| San Diego Supercomputer Center |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
| http://www.rosswalker.co.uk | PGP Key available on request |

Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.

> -----Original Message-----
> From: owner-amber.scripps.edu
> [mailto:owner-amber.scripps.edu] On Behalf Of
> Lars.Skjarven.biomed.uib.no
> Sent: Saturday, October 06, 2007 04:35
> To: amber.scripps.edu
> Subject: AMBER: PMEMD configuration and scaling
>
>
> Dear Amber Users,
>
> We recently got access to a cluster consisting of Opteron
> dual-CPU, dual-core (4 cores) SUN nodes with InfiniBand interconnects.
> From what I have read about pmemd and scaling, this hardware should be
> good enough to achieve relatively good scaling up to at least 16-32 CPUs
> (correct?). However, my small benchmark test peaks at 8 CPUs (two nodes):
>
>  2 CPUs:  85 ps/day - 100%
>  4 CPUs: 140 ps/day -  81%
>  8 CPUs: 215 ps/day -  62%
> 12 CPUs: 164 ps/day -  31%
> 16 CPUs: 166 ps/day -  24%
> 32 CPUs: 111 ps/day -   8%
>
> This test was done with 400,000 atoms and a 20 ps simulation.
>
> Is it possible that our configuration of pmemd is causing this problem?
> If so, do you see any apparent flaws in the config.h file below?
>
> In the config.h below we use Scali MPI and ifort (./configure
> linux64_opteron ifort mpi). We also have the PathScale and Portland
> compilers available; however, I never managed to build pmemd with these.
>
> Any hints and tips will be highly appreciated.
>
> Best regards,
> Lars Skjærven
> University of Bergen, Norway
>
> ## config.h file ##
> MATH_DEFINES =
> MATH_LIBS =
> IFORT_RPATH = /site/intel/fce/9.1/lib:/site/intel/cce/9.1/lib:/opt/scali/lib64:/opt/scali/lib:/opt/gridengine/lib/lx26-amd64:/site/pathscale/lib/3.0/32:/site/pathscale/lib/3.0:/opt/gridengine/lib/lx26-amd64:/opt/globus/lib:/opt/lam/gnu/lib
> MATH_DEFINES = -DMKL
> MATH_LIBS = -L/site/intel/cmkl/8.1/lib/em64t -lmkl_em64t -lpthread
> FFT_DEFINES = -DPUBFFT
> FFT_INCLUDE =
> FFT_LIBS =
> NETCDF_HOME = /site/NetCDF
> NETCDF_DEFINES = -DBINTRAJ
> NETCDF_MOD = netcdf.mod
> NETCDF_LIBS = $(NETCDF_HOME)/lib/libnetcdf.a
> DIRFRC_DEFINES = -DDIRFRC_EFS -DDIRFRC_NOVEC
> CPP = /lib/cpp
> CPPFLAGS = -traditional -P
> F90_DEFINES = -DFFTLOADBAL_2PROC
>
> F90 = ifort
> MODULE_SUFFIX = mod
> F90FLAGS = -c -auto
> F90_OPT_DBG = -g -traceback
> F90_OPT_LO = -tpp7 -O0
> F90_OPT_MED = -tpp7 -O2
> F90_OPT_HI = -tpp7 -xW -ip -O3
> F90_OPT_DFLT = $(F90_OPT_HI)
>
> CC = gcc
> CFLAGS =
>
> LOAD = ifort
> LOADFLAGS = -L/opt/scali/lib64 -lmpi -lfmpi
> LOADLIBS = -limf -lsvml -Wl,-rpath=$(IFORT_RPATH)
> ## config.h ends ##


-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Sun Oct 07 2007 - 06:08:00 PDT