RE: AMBER: Compile AMBER 9 on TACC Ranger super computer

From: Ross Walker <ross.rosswalker.co.uk>
Date: Wed, 13 Aug 2008 20:39:47 -0700

Hi Lei,

 

I think you are getting caught way down in the weeds here. You almost
certainly have clashes between 32-bit and 64-bit compilations, and I think
you are making things overcomplicated for yourself. I'm not sure why you
want to compile your own mpich2: the only options that give anything close
to decent performance on Ranger are mvapich2 and mvapich. Both versions of
mvapich currently leak memory like a sieve, so in some cases mvapich can
segfault with buffer allocation errors after several hours of running. This
tends not to be a problem in pmemd; it is more of a problem with sander.MPI,
which uses a lot of collectives. You probably shouldn't be running sander on
this machine anyway, but if you do (e.g. for REMD), just be prepared to
restart it every few hours if need be, i.e. run multiple jobs in a single
submission script. This will of course depend on the simulation being run,
so you will have to test.
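
For the "multiple jobs in a single submission script" approach, a minimal
sketch of the idea (the file names md.in, prmtop and md*.rst are
hypothetical examples, and the commands are only echoed as a dry run; drop
the echo to launch them for real):

```shell
#!/bin/bash
# Chain several short sander.MPI segments inside one submission script so
# that an mvapich crash costs at most one segment. Each segment restarts
# from the previous segment's restart file. File names are hypothetical;
# "echo" makes this a dry run -- remove it to actually launch via ibrun.
prev=md0.rst
for seg in 1 2 3 4; do
  echo ibrun sander.MPI -O -i md.in -p prmtop \
       -c "$prev" -r "md$seg.rst" -o "md$seg.out"
  prev=md$seg.rst
done
```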

 

I would suggest you just try to use mvapich2 to begin with. Here is what I
did, albeit for AMBER 10 - I haven't actually tried AMBER 9 on Ranger. The
performance should be better with PMEMD 10.

 

which mpif90

>/opt/apps/pgi7_1/mvapich2/1.0/bin/mpif90

 

tar xvjf AmberTools.tar.bz2

tar xvjf Amber10.tar.bz2

cd amber10

wget http://www.ambermd.org/bugfixes/10.0/bugfix.all

patch -p0 -N -r patch-rejects <bugfix.all

rm -f bugfix.all

wget http://www.ambermd.org/bugfixes/AmberTools/1.2/bugfix.all

patch -p0 -N -r patch-rejects <bugfix.all

cd src

./configure_at

make -f Makefile_at

setenv MPI_HOME /opt/apps/pgi7_1/mvapich2/1.0/

./configure_amber -mpich2 -nosanderidc pgf90

 

 

Note I had to edit evb_matrix.f here and change:

 

   use evb_pimd, only: nbead

 

to

 

#if defined(LES)

   use evb_pimd, only: nbead

#endif

 

to avoid a problem with an undefined reference to evb_pimd when linking
sander.MPI. This is probably an actual bug but I haven't looked into it
further yet.
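
If you script your builds, that edit can be applied with sed; a sketch,
demonstrated here on a two-line stand-in file rather than the real
evb_matrix.f:

```shell
# Wrap the "use evb_pimd" line in an LES preprocessor guard. Demonstrated
# on a small stand-in file so nothing real is modified; point the sed at
# evb_matrix.f in your own tree.
printf '   use evb_pimd, only: nbead\n   use other_mod\n' > evb_snippet.f
sed -i 's/^\( *use evb_pimd, only: nbead\)$/#if defined(LES)\n\1\n#endif/' evb_snippet.f
cat evb_snippet.f   # show the guarded result
```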

 

make parallel

 

cd pmemd

./configure linux64_opteron pgf90 mvapich pubfft bintraj

 

>Please enter name of directory where Infiniband libraries are installed:

>/opt/apps/pgi7_1/mvapich2/1.0/lib/

 

Edit config.h and change all instances of pgf90 to mpif90.

Then change the MPI_LIBS line so that it is empty: MPI_LIBS =
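
Those two config.h edits can also be scripted; a sed sketch, demonstrated
on a small stand-in file rather than the generated config.h:

```shell
# Replace pgf90 with mpif90 everywhere and blank out the MPI_LIBS line.
# Demonstrated on a stand-in file (config_demo.h); run the same sed on the
# real config.h in pmemd's build directory.
printf 'F90 = pgf90\nMPI_LIBS = -L/opt/apps/pgi7_1/mvapich2/1.0/lib -lmpich\n' > config_demo.h
sed -i -e 's/pgf90/mpif90/g' -e 's/^MPI_LIBS *=.*/MPI_LIBS =/' config_demo.h
cat config_demo.h   # show the edited result
```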

 

make

make install

 

>Run test cases for PMEMD

 

1) WORK filesystem is down AGAIN on Ranger - this seems to never work - so
for testing I do:

mkdir ~/work

setenv WORK ~/work

 

Job submission script (test_amber10_pmemd.x)...

 

#!/bin/bash

#$ -V # Inherit the submission environment

#$ -cwd # Start job in submission directory

#$ -N testPMEMD # Job Name

#$ -j y # combine stderr & stdout into stdout

#$ -o $JOB_NAME.o$JOB_ID # Name of the output file (eg. myMPI.oJobID)

#$ -pe 4way 32 # 4 MPI tasks/node; 32 slots / 16 cores per node = 2 nodes (8 tasks)

#$ -q development # Queue name

#$ -l h_rt=00:30:00 # Run time (hh:mm:ss) - 0.5 hours

 set -x #{echo cmds, use "set echo" in csh}

 export AMBERHOME=~/amber10

 cd $AMBERHOME/test

 export DO_PARALLEL=ibrun

 make test.parallel

 make test.pmemd

 

qsub test_amber10_pmemd.x

 

You can also do this on a single login node with:

 

setenv DO_PARALLEL 'mpirun -np 4'

cd $AMBERHOME/test/

make clean

make test.parallel

make test.pmemd

 

This works fine for amber10. Tweaking can help performance somewhat,
although the degrees of freedom are huge... Note it can be 'very'
beneficial to leave cores idle on Ranger. Each node is 4 CPUs x 4 cores, so
running 4 tasks per node will generally give you the best scaling, although
you get charged for all 16 cores, so there is a tradeoff to be had. You can
also try 8 cores per node; this will normally give performance far superior
to 16 cores per node, so it works out cheaper in SUs per ns even though you
get charged for all 16. Note, though, that the performance you get is VERY
problem specific.
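
For reference, the three "wayness" choices above for a two-node request
look like this in the SGE directive (a sketch of the syntax only, not a
tuned recommendation):

```shell
# Ranger SGE wayness examples (16 cores/node; you are charged for all 16
# regardless of how many tasks you run). The second number is total slots,
# which fixes the node count at 32/16 = 2 nodes in each case:
#$ -pe 16way 32   # 16 MPI tasks/node -> 32 tasks on 2 nodes
#$ -pe 8way 32    #  8 MPI tasks/node -> 16 tasks on 2 nodes
#$ -pe 4way 32    #  4 MPI tasks/node ->  8 tasks on 2 nodes
```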

 

As for AMBER 9, you should be able to compile pmemd using pgf90 by
following the above tips for PMEMD with AMBER 10. AMBER 9 sander might need
some more hacking due to problems with 64-bit pgf90 at the time amber9 was
released; you can probably just remove the -tp p7 flag from the pgf90
lines. Make sure you test things, though, since the compiler bug may still
be present.
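
A sed sketch of that flag removal, demonstrated on a stand-in line rather
than a real amber9 config.h:

```shell
# Strip the 32-bit "-tp p7" target flag from compiler flag lines.
# Demonstrated on a stand-in file (config9_demo.h); run the same sed on
# the real amber9 config.h.
echo 'FFLAGS = -tp p7 -O2' > config9_demo.h
sed -i 's/ -tp p7//g' config9_demo.h
cat config9_demo.h   # show the edited result
```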

 

Good luck,

Ross

 

From: owner-amber.scripps.edu [mailto:owner-amber.scripps.edu] On Behalf Of
jialei
Sent: Wednesday, August 13, 2008 3:26 PM
To: amber.scripps.edu
Subject: AMBER: Compile AMBER 9 on TACC Ranger super computer

 

Dear AMBER Users,

 

I am trying to compile AMBER9 on the Texas Advanced Computing Center
(TACC)'s Ranger machine (ranger.tacc.utexas.edu). I cannot complete
compiling the parallel version. Could anyone please help me? Thank you very
much. Here are the details of my problems:

 

Ranger has AMD Opteron processors, so I set the configuration to
"./configure -mpich2 -opteron pgf90".

 

Due to problems using the native MPI stacks on Ranger (mvapich2 and
openmpi), I compiled a version of mpich2 in my local directory with the PGI
compilers, and I used this mpich2 to compile AMBER9.

 

The AMBER9 parallel compilation process stopped with the following error
message:

 

“ /usr/bin/ld: skipping incompatible
/share/home/00654/tg458141/local/mpich2-1.0.7/lib/libmpichf90.a when
searching for -lmpichf90

/usr/bin/ld: cannot find -lmpichf90

make[1]: *** [sander.MPI] Error 2”

 

On the AMBER reflector, Dr. Ross Walker suggested that compiling mpich2 and
AMBER9 with 32-bit settings might solve the problem. So I tried to compile
mpich2 and AMBER9 again, passing '-tp p7' to force 32-bit compilation.
However, the same error messages appeared when compiling AMBER9.

 

When I tried to use the Intel 10.1 compiler on Ranger to compile mpich2 and
AMBER9, I got the following error message:

 

“ checking for C compiler default output file name... a.out

checking whether the C compiler works... configure: error: cannot run C
compiled programs.

If you meant to cross compile, use `--host'.

See `config.log' for more details.”

 

Any suggestions are appreciated.

 

Sincerely,

 

Lei Jia

 




-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
      to majordomo.scripps.edu
Received on Sun Aug 17 2008 - 06:07:17 PDT