Amber6 and MPI

From: Yu Chen <chen_at_hhmi.umbc.edu>
Date: Wed 23 Oct 2002 14:10:30 -0400

Hi, greetings;

I tried to run the MPI version of Amber6, but ran into a strange problem. I
would highly appreciate your advice in helping me out!

I compiled and installed Amber6 with the PGI "pgf77" compiler without
problems (after I added farg.f and made all the necessary changes). Then I
ran sander_classic with:

mpirun -nolocal -v -np 9 -machinefile mach.test chen.test

mpirun told me that all processes had started, but when I checked the nodes,
the job was actually running on only one node, not 9! No error messages were
reported, and the program finished correctly, except that it ran on only one
node.
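
This is roughly how I checked the nodes (just looking for sander_classic
processes on each node listed in my machinefile):

rsh node2 ps aux | grep sander_classic
rsh node3 ps aux | grep sander_classic
# ... and so on for the other nodes; only one of them showed sander_classic running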

Below is information on my cluster and my configuration files. I can provide
more details, such as the output from running with -echo, if requested:

Thanks for taking the time!

-------------------------------------------------------
My mach.test file:
------------------------------------------------------
node2:2
node3:2
node4:2
node5:2
node6:2
node7:2
node8:2
node9:2
node10:2
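
If it would help, I could also run a simple MPI test program across this same
machinefile to check the MPICH installation itself, for example (assuming the
cpi example program that ships with MPICH has been built somewhere under my
MPICH_HOME, /mpi; the exact path below is only a guess):

mpirun -nolocal -v -np 9 -machinefile mach.test /mpi/examples/cpi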

---------------------------------------------------------
My chen.test file:
---------------------------------------------------------
#!/bin/csh -f
echo ""
echo " gag production run----PME simulation"
#
# choose executable
#

/chem/chen/test/amber6/exe/sander_classic -O \
                   -i /chem/chen/test/amber6/gag/mdp.in \
                   -o /chem/chen/test/amber6/gag/1l6n_1a.out \
                   -c /chem/chen/test/amber6/gag/equil_md.restart \
                   -p /chem/chen/test/amber6/gag/1l6n.top \
                   -r /chem/chen/test/amber6/gag/1l6n_1a.restart \
                   -x /chem/chen/test/amber6/gag/1l6n_1a.crd || goto error
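
One thing I am not sure about: with the ch_p4 device, mpirun appends its own
arguments (-p4pg and -p4wd, as shown in the -t output below) to the command it
starts. Since that command here is the chen.test wrapper script, I wonder
whether sander_classic itself ever sees them. If that matters, a variant of
the script that forwards its arguments would look like this (just a sketch,
same paths as above, with "$argv:q" passing along whatever mpirun appends):

#!/bin/csh -f
# sketch: forward the arguments mpirun appends (e.g. -p4pg, -p4wd) to sander_classic
/chem/chen/test/amber6/exe/sander_classic -O \
                   -i /chem/chen/test/amber6/gag/mdp.in \
                   -o /chem/chen/test/amber6/gag/1l6n_1a.out \
                   -c /chem/chen/test/amber6/gag/equil_md.restart \
                   -p /chem/chen/test/amber6/gag/1l6n.top \
                   -r /chem/chen/test/amber6/gag/1l6n_1a.restart \
                   -x /chem/chen/test/amber6/gag/1l6n_1a.crd $argv:q || goto error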

-------------------------------------------------------------------
My MACHINE file:
------------------------------------------------------------------
setenv MPICH_HOME /mpi
setenv MPICH_INCLUDE $MPICH_HOME/include
setenv MPICH_LIBDIR $MPICH_HOME/lib
setenv MPICH_LIB mpich


#
setenv MACHINE "linux/FreeBSD/Windows PC"
setenv MACH Linux
setenv MACHINEFLAGS "-DISTAR4 -DREGNML -DMPI"

# CPP is the cpp for this machine
setenv CPP "/lib/cpp -traditional -I$MPICH_INCLUDE"
#setenv CC "gcc "

# SYSDIR is the name of the system-specific source directory relative to src/*/
setenv SYSDIR Machines/g77

# COMPILER ALIASES:
setenv FC "pgf77"
setenv OPT_0 "-g -tp p6 -Mnoframe"
setenv OPT_1 "-O2 -Munroll -tp p6 -Mnoframe "
setenv OPT_2 "-O2 -Munroll -tp p6 -Mnoframe "
setenv OPT_3 "-O2 -Munroll -tp p6 -Mnoframe "

# LOADER/LINKER:
setenv LOAD "pgf77 "
setenv LOADLIB "-lm -L$MPICH_LIBDIR farg.o -l$MPICH_LIB"
setenv LOADCC " gcc "

#setenv G77_COMPAT "-fno-globals -ff90 -funix-intrinsics-hide"
#
# following seems to actually slow down code, though it works well on CHARMM:
#setenv G77_OPT "-O6 -m486 -malign-double -ffast-math -fomit-frame-pointer -funroll-loops -funroll-all-loops -mcpu=pentiumpro -march=pentiumpro -ffloat-store -fforce-mem -frerun-cse-after-loop -fexpensive-optimizations -fugly-complex"

# following appears to be the best we have found so far:
setenv G77_OPT "-O3 -m486 -malign-double -ffast-math -fomit-frame-pointer"

# little or no optimization:
setenv L0 "$FC -c $OPT_0"

# modest optimization (local scalar):
setenv L1 "$FC -c $OPT_1"

# high scalar optimization (but not vectorization):
setenv L2 "$FC -c $OPT_2"

# high optimization (may be vectorization, not parallelization):
setenv L3 "$FC -c $OPT_3"

# ranlib, if it exists
setenv RANLIB ranlib
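
In case it helps to see how I read these settings, the compile and link
commands should come out roughly like this (this is only my understanding of
how the Makefiles use the variables above; "somefile" is just a placeholder):

# preprocess with the MPICH include path, then compile with pgf77
/lib/cpp -traditional -I/mpi/include somefile.f > somefile_cpp.f
pgf77 -c -O2 -Munroll -tp p6 -Mnoframe somefile_cpp.f
# link, including farg.o (the file I added) and the MPICH library
pgf77 -o sander_classic *.o -lm -L/mpi/lib farg.o -lmpich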

------------------------------------------------------------------
My cluster configuration:
-----------------------------------------------------------------
Master: two NICs, one is master.xx.xx.xx (connecting to the nodes),
        the other is name.yy.yy.yy (connecting to the outside)
Nodes: all are named node(number).xx.xx.xx, such as node2.xx.xx.xx
The master and all nodes are dual AMD Athlon 1.5 GHz machines with 1 GB of
memory, identically configured. The executables and data are NFS-mounted on
the master and every node.

-----------------------------------------------------------------
Output from mpirun -t:
------------------------------------------------------------------
mpirun -nolocal -t -v -np 9 -machinefile mach.test chen.test
running /chem/chen/test/amber6/gag/chen.test on 9 LINUX ch_p4 processors
Procgroup file:
node2 0 /chem/chen/test/amber6/gag/chen.test
node3 1 /chem/chen/test/amber6/gag/chen.test
node4 1 /chem/chen/test/amber6/gag/chen.test
node5 1 /chem/chen/test/amber6/gag/chen.test
node6 1 /chem/chen/test/amber6/gag/chen.test
node7 1 /chem/chen/test/amber6/gag/chen.test
node8 1 /chem/chen/test/amber6/gag/chen.test
node9 1 /chem/chen/test/amber6/gag/chen.test
node10 1 /chem/chen/test/amber6/gag/chen.test
/usr/bin/rsh -n node2 /chem/chen/test/amber6/gag/chen.test -p4pg
/chem/chen/test/amber6/gag/PI30376 -p4wd /chem/chen/test/amber6/gag

Thanks again for your time!

Sincerely
Yu


===========================================
Yu Chen
Howard Hughes Medical Institute
University of Maryland at Baltimore County
1000 Hilltop Circle
Baltimore, MD 21250

phone: (410)455-6347
        (410)455-2718
fax: (410)455-1174
email: chen_at_hhmi.umbc.edu
===========================================