AMBER: PMEMD configuration and scaling

From: <Lars.Skjarven.biomed.uib.no>
Date: Sun, 07 Oct 2007 12:40:00 +0200

Bob, Ross, thank you for your helpful replies. I will definitely get back
to you with the JAC benchmark results, as Bob proposed. Yes, this is
Amber 9. Whether or not Scali MPI is set up to use the InfiniBand
interconnect, I have no idea; I will check that with our technician on
Monday.

After your reply yesterday, I spent the day trying to compile PMEMD with
ifort and MVAPICH2 as you suggested. However, the build fails with the
following error:

IPO link: can not find -lmtl_common
ifort: error: problem during multi-file optimization compilation (code 1)
make[1]: *** [pmemd] Error 1

From the config.h file, the following is defined, which may be causing the trouble:
MPI_LIBS = -L$(MPI_LIBDIR) -lmpich -L$(MPI_LIBDIR2) -lmtl_common -lvapi -lmosal -lmpga -lpthread
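
My guess is that -lmtl_common, -lvapi, -lmosal and -lmpga are Mellanox
VAPI libraries left over from the Scali / old-VAPI setup rather than
anything MVAPICH2 needs; an OFED-based MVAPICH2 build normally links
against libibverbs instead. If that is right, something roughly like the
following might be closer (the library names and paths are only a guess
for our install; I understand "mpif90 -show" prints the real link line):

# Sketch only - check "mpif90 -show" from /site/mvapich2 for the real flags.
MPI_HOME = /site/mvapich2
MPI_LIBDIR = $(MPI_HOME)/lib
MPI_LIBS = -L$(MPI_LIBDIR) -lmpich -libverbs -libumad -lpthread -lrt
# Or simply set F90 and LOAD to that install's mpif90 and let the wrapper
# supply the MPI libraries itself.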

Using
- "Intel ifort compiler found; version information: Version 9.1"
- Intel MKL (under /site/intel/cmkl/8.1)
- NetCDF
- mvapich2 (/site/mvapich2)
- InfiniBand libraries (/usr/lib64/infiniband)
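
Incidentally, is there an easy way to check whether the MVAPICH2 under
/site/mvapich2 was actually built against the OFED/ibverbs stack? I was
thinking of something along these lines (guessing at the paths, and
whether libmpich is static or shared depends on the build):

ldd /site/mvapich2/lib/libmpich.so 2>/dev/null | grep ibverbs
nm /site/mvapich2/lib/libmpich.a 2>/dev/null | grep ' ibv_' | head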

I hope you can spot something that will help me out. Thanks again.

Lars

On 10/6/07, Ross Walker <ross.rosswalker.co.uk> wrote:

     Hi Lars,

     I have never used Scali MPI, so first question: are you certain it is set
     up to use the InfiniBand interconnect and not going over gigabit Ethernet?
     Those numbers look to me like it's going over Ethernet.
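
     A quick sanity check (assuming the OFED tools are installed on the nodes;
     ibv_devinfo ships with libibverbs) is to confirm the HCA ports are up:

     ibv_devinfo | grep -i state    # ports should report PORT_ACTIVE

     Whether Scali MPI then actually routes traffic over the HCA is a separate
     question for your admins or the Scali documentation.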

     For InfiniBand I would recommend using MVAPICH / MVAPICH2 or VMI2 - both
     compiled using the Intel compiler (yes, I know they are Opteron chips, but
     surprise surprise, the Intel compiler produces the fastest code on Opterons
     in my experience) - and then compile PMEMD with the same compiler.
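
     The rough shape of an Intel-compiler MVAPICH2 build is sketched below;
     the exact steps and options depend on the MVAPICH2 version, so treat its
     install guide as authoritative (the prefix here is only an example):

     export CC=icc CXX=icpc F77=ifort F90=ifort
     ./configure --prefix=/site/mvapich2-intel
     make && make install

     Then build PMEMD against that install's mpif90 and libraries.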

     Make sure you run the MPI benchmarks with the MPI installation and check
     that you are getting ping-pong and random-ring latencies and bandwidths
     that match the specs of the InfiniBand. All-to-all tests etc. will also
     check that you don't have a flaky cable connection, which can be common
     with InfiniBand.
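
     For example, with the OSU micro-benchmarks that ship with MVAPICH2 (run
     from wherever the benchmarks were built; the launcher and hostnames below
     are just placeholders for your setup):

     mpirun_rsh -np 2 node01 node02 ./osu_latency
     mpirun_rsh -np 2 node01 node02 ./osu_bw

     Over InfiniBand you should see small-message latencies of a few
     microseconds and bandwidths of several hundred MB/s or more; tens of
     microseconds and ~100 MB/s would point to the traffic going over gigabit
     ethernet instead.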

     Good luck.
     Ross

     /\
     \/
     |\oss Walker

     | HPC Consultant and Staff Scientist |
     | San Diego Supercomputer Center |
     | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
     | http://www.rosswalker.co.uk | PGP Key available on request |

     Note: Electronic Mail is not secure, has no guarantee of delivery, may not
     be read every day, and should not be used for urgent or sensitive issues.

> -----Original Message-----
> From: owner-amber.scripps.edu
> [mailto: owner-amber.scripps.edu] On Behalf Of
> Lars.Skjarven.biomed.uib.no
> Sent: Saturday, October 06, 2007 04:35
> To: amber.scripps.edu
> Subject: AMBER: PMEMD configuration and scaling
>
>
> Dear Amber Users,
>
> We recently got access to a cluster of SUN nodes with dual-CPU,
> dual-core Opterons (4 cores per node) and an InfiniBand interconnect.
> From what I have read about PMEMD and scaling, this hardware should be
> good enough to achieve relatively good scaling up to at least 16-32
> CPUs (correct?). However, my small benchmark test peaks at 8 CPUs (two
> nodes):
>
> 2 cpus: 85 ps/day - 100%
> 4 cpus: 140 ps/day - 81%
> 8 cpus: 215 ps/day - 62%
> 12 cpus: 164 ps/day - 31%
> 16 cpus: 166 ps/day - 24%
> 32 cpus: 111 ps/day - 8%
>
> This test was run on a system of 400,000 atoms for a 20 ps simulation.
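>
> (The percentages are the parallel efficiency relative to the 2-CPU run;
> for example, at 16 CPUs the speedup is 166/85 = 1.95 on 8 times as many
> CPUs, i.e. roughly 24%.)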
>
> Is it possible that our configuration of PMEMD is causing this problem?
> If so, do you see any apparent flaws in the config.h file below?
>
> In the config.h below we use Scali MPI and ifort (./configure
> linux64_opteron ifort mpi). We also have the PathScale and Portland
> compilers available; however, I never managed to build PMEMD with
> either of those.
>
> Any hints and tips will be highly appreciated.
>
> Best regards,
> Lars Skjærven
> University of Bergen, Norway
>
> ## config.h file ##
> MATH_DEFINES =
> MATH_LIBS =
> IFORT_RPATH = /site/intel/fce/9.1/lib:/site/intel/cce/9.1/lib:/opt/scali/lib64:/opt/scali/lib:/opt/gridengine/lib/lx26-amd64:/site/pathscale/lib/3.0/32:/site/pathscale/lib/3.0:/opt/gridengine/lib/lx26-amd64:/opt/globus/lib:/opt/lam/gnu/lib
> MATH_DEFINES = -DMKL
> MATH_LIBS = -L/site/intel/cmkl/8.1/lib/em64t -lmkl_em64t -lpthread
> FFT_DEFINES = -DPUBFFT
> FFT_INCLUDE =
> FFT_LIBS =
> NETCDF_HOME = /site/NetCDF
> NETCDF_DEFINES = -DBINTRAJ
> NETCDF_MOD = netcdf.mod
> NETCDF_LIBS = $(NETCDF_HOME)/lib/libnetcdf.a
> DIRFRC_DEFINES = -DDIRFRC_EFS -DDIRFRC_NOVEC
> CPP = /lib/cpp
> CPPFLAGS = -traditional -P
> F90_DEFINES = -DFFTLOADBAL_2PROC
>
> F90 = ifort
> MODULE_SUFFIX = mod
> F90FLAGS = -c -auto
> F90_OPT_DBG = -g -traceback
> F90_OPT_LO = -tpp7 -O0
> F90_OPT_MED = -tpp7 -O2
> F90_OPT_HI = -tpp7 -xW -ip -O3
> F90_OPT_DFLT = $(F90_OPT_HI)
>
> CC = gcc
> CFLAGS =
>
> LOAD = ifort
> LOADFLAGS = -L/opt/scali/lib64 -lmpi -lfmpi
> LOADLIBS = -limf -lsvml -Wl,-rpath=$(IFORT_RPATH)
> ## config.h ends ##
>


-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu
Received on Wed Oct 10 2007 - 06:07:05 PDT