Re: AMBER: Question for Amber 10 Benchmarks

From: Robert Duke <rduke.email.unc.edu>
Date: Thu, 27 Nov 2008 09:37:20 -0500

The benchmark results are for pmemd, not sander. Sander is known not to
parallelize nearly as well as pmemd, and to use more memory. But the real
problem, it seems to me, is that you have an ethernet interconnect between quad
core nodes. That is not going to work very well; the ethernet simply can't keep
up. Try pmemd and see how bad even that is. We scream and holler all the time
about how hopeless ethernet is as an interconnect, especially between
something like quad core nodes. The way to go here is an infiniband
interconnect, or else don't use more than 8 cpu/job (I really think 4 is the more
practical limit). Ross may have further practical comments, as he has
more access to this sort of hardware (I think).
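For example, something along these lines should keep the job on a single node
and use pmemd instead of sander.MPI (just a sketch - I am assuming your parallel
pmemd was built against the same mpich2 and installed as /opt/amber10/exe/pmemd,
and the hostfile and output names here are made up; adjust to your setup):

  # hostfile listing only one node, 8 cores (hypothetical name)
  $ cat hostlist.1node
  node01:8

  # run the same JAC inputs with pmemd, 8 processes, all on node01
  /opt/mpich2/bin/mpirun -machinefile hostlist.1node -n 8 \
      /opt/amber10/exe/pmemd -O -i mdin -c inpcrd.equil -p prmtop -o jac.8cpu.pmemd.out

That at least separates the sander-vs-pmemd question from the gigabit ethernet question.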
Regards - Bob Duke
----- Original Message -----
From: "sychen" <u8613020.msg.ndhu.edu.tw>
To: <amber.scripps.edu>
Sent: Thursday, November 27, 2008 2:50 AM
Subject: AMBER: Question for Amber 10 Benchmarks


> Dear all,
> We have compiled AMBER 10 on machines and platforms identical to those
> described by Ross Walker in the Amber 10 Benchmarks (dual Xeon E5430 on a
> SuperMicro X7DWA-N board).
> We used mpich2-1.0.8 and ifort 9.1 to build sander.MPI. The original JAC
> benchmark run with sander.MPI seems fine
> (2 cpu: 161 sec, 4 cpu: 88 sec, 8 cpu: 54 sec), but the 16-cpu result (run on
> 2 nodes; all 6 nodes are the same machine and platform) was very bad
> (83 sec).
> (For the 16-cpu run, abnormally high system CPU usage (60~70%) was observed
> with top and Ganglia monitoring, while the 8-cpu run was fine, with system
> CPU < 5% and user CPU > 95%.)
>
> The command used to build MPICH2 was 'export CC=gcc && export F77=ifort &&
> export F90=ifort && ./configure -prefix=/opt/mpich2 --with-device=ch3:ssm'.
> I have tested MPICH2 with 'mpiexec -l -machinefile mpd.hosts -n 48 hostname',
> and normal output was returned by 8 processes on each of node1~node6.
> PS: The nodes communicate with each other through one GbE switch (3COM
> 2924-SFPplus).
>
>
> Can anyone give me some ideas on how to solve this problem when running
> parallel sander jobs across nodes?
> Thank you very much.
>
>
> Sincerely,
> yuann
>
> #######command for parallel sander (JAC benchmark)############
> /opt/mpich2/bin/mpirun -machinefile hostlist -n 16 \
>     /opt/amber10/exe/sander.MPI -O -i mdin -c inpcrd.equil -p prmtop -o 16cpu.out
>
> ######hostlist######
> node01:8
> node02:8
> node03:8
> node04:8
> node05:8
> node06:8
>
> #############################################################################################
> config_amber.h for parallel sander was generated by
> ./configure_amber -mpich2 -p4 -mmtsb -static -verbose -nobintraj ifort
> #############################################################################################
> AMBERBUILDFLAGS=
> LOCALFLAGS=
> USE_BLASLIB=$(SOURCE_COMPILED)
> USE_LAPACKLIB=$(SOURCE_COMPILED)
> CC= gcc
> CPLUSPLUS=g++
> CFLAGS= -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -m64 -O2 $(AMBERBUILDFLAGS)
> CPPFLAGS= -DMMTSB $(AMBERBUILDFLAGS)
> FPPFLAGS= -I/opt/mpich2/include -P -DMMTSB -DMPI -DUSE_MPI_IN_PLACE $(AMBERBUILDFLAGS)
> FPP= cpp -traditional $(FPPFLAGS)
> FC= /opt/mpich2/bin/mpif90
> FFLAGS= -w95 -sox -vec_report3 -opt_report -opt_report_level max -opt_report_phase all -V -v -Wl,-verbose,-M -mp1 -O0 $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> FOPTFLAGS= -w95 -sox -vec_report3 -opt_report -opt_report_level max -opt_report_phase all -V -v -Wl,-verbose,-M -mp1 -ip -O3 -axWP $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> FREEFORMAT_FLAG= -FR
> LOAD= /opt/mpich2/bin/mpif90 -static $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> LOADCC= gcc -static $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> LOADLIB=
> LM= -lm
> XHOME= /usr/X11R6
> XLIBS= -L/usr/X11R6/lib64 -L/usr/X11R6/lib
> .SUFFIXES: .f90
> EMPTY=
> AR=ar rv
> M4=m4
> RANLIB=ranlib
> SFX=
> NETCDF=
> NETCDFLIB=
> MODULEDIR=-I
> testsanderDIVCON=test.sander.DIVCON
> INCDIVCON=divcon
> LIBDIVCON=../dcqtp/src/qmmm/libdivcon.a
> BINDIR=/home/sychen/amber10/bin
> LIBDIR=/home/sychen/amber10/lib
> INCDIR=/home/sychen/amber10/include
> DATDIR=/home/sychen/amber10/dat
> .f.o: $<
>         $(FPP) $< > _$<
>         $(FC) -c $(FFLAGS) -o $@ _$<
> .c.o:
>         $(CC) -c $(CFLAGS) $(CPPFLAGS) -o $@ $<
>
>
> --
> sychen <u8613020.mail.ndhu.edu.tw>
>

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
      to majordomo.scripps.edu
Received on Fri Dec 05 2008 - 17:59:19 PST