Fwd: Re: AMBER: Fwd: Amber9 parallel compilation openmpi issues

From: Francesco Pietra <chiendarret.yahoo.com>
Date: Wed, 25 Jul 2007 15:01:40 -0700 (PDT)

Confirmed successful installation of Amber9 following Mark's easier route. All
tests (export DO_PARALLEL='mpirun -np 4'; cd $AMBERHOME/test; make
test.parallel) passed, except for the differences in the attached
TEST_FAILURES.diff (renamed TEST_FAILURES). These seem to me to be marginal
differences.
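
For reference, the complete test sequence was (the AMBERHOME path below is
illustrative; adjust it to the actual install location):

   export AMBERHOME=/usr/local/amber9    # illustrative path
   export DO_PARALLEL='mpirun -np 4'     # run each test on 4 MPI processes
   cd $AMBERHOME/test
   make test.parallel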

Thanks
francesco pietra



--- Francesco Pietra <chiendarret.yahoo.com> wrote:

> Date: Wed, 25 Jul 2007 13:43:08 -0700 (PDT)
> From: Francesco Pietra <chiendarret.yahoo.com>
> Subject: Re: AMBER: Fwd: Amber9 parallel compilation openmpi issues
> To: amber.scripps.edu
>
> Mark's easier procedure probably worked for Debian Linux amd64,
> as "make parallel" ended with
>
> Installation of Amber9 (parallel) is complete at Wed Jul 25 22:23:48 CEST 2007
> (it finished a couple of minutes before Wagner's Nuernberger .. from
> Bayreuth)
>
> Just to avoid mistakes, is this the correct procedure now (bash):
>
> export DO_PARALLEL='mpirun -np 4'
>
> before running the tests?
>
> If the tests are OK, should the compilation of PMEMD (for my system) then
> follow, as described in sect. 8.6 of the manual?
>
> Thanks
> francesco pietra
>
>
>
> --- "David A. Case" <case.scripps.edu> wrote:
>
> > On Wed, Jul 25, 2007, Mark Williamson wrote:
> >
> >
> > >
> > > I'm desperately trying not to cloud the waters here, but here's my take
> > > on it :)
> > >
> > > When AMBER's configure script is run with the openmpi flag, the
> > > following section of code within this script is visited:
> > >
> > >
> > >
> > > ...blah....
> > >
> > >     openmpi)
> > >         if [ -z "$MPI_HOME" ]; then
> > >             PAR="OPENMPI"
> > >             FILES="mpif.h and libmpi.a, liblam.a or liblamf77mpi.a"
> > >             EXAMPLE="/usr/local/openmpi-1.0"
> > >             par_error
> > >         fi
> > >         echo "MPI_HOME is set to $MPI_HOME"
> > >         loadlib=`$MPI_HOME/bin/mpif90 -showme | perl -p -e 's/(-[lLW]\S+\s)|\S+\s/$1/g'`
> > >         fppflags="-I$MPI_HOME/include $fppflags -DMPI"
> > >         ;;
> > >
> > > ...blah....
> > >
> > >
> > >
> > > Hence, I suggest trying:
> > >
> > >
> > > cd $AMBERHOME/src
> > > make clean
> > >
> > > export MPI_HOME=/usr/local
> > > ./configure -openmpi ifort_x86_64
> > >
> > > make parallel
> > >
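> > > Before that configure step, a quick sanity check may help (a sketch; these
> > > are simply the files the snippet above expects to find under MPI_HOME):
> > >
> > >    ls $MPI_HOME/bin/mpif90 $MPI_HOME/include/mpif.h
> > >    $MPI_HOME/bin/mpif90 -showme
> > >
> > > If either file is missing, or -showme errors out, the loadlib line in
> > > configure cannot work either.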
> >
> > I'll defer to Mark here, because I guess this works for him. But here's the
> > problem I have, and I don't quite understand why he doesn't have it as well:
> >
> > If config.h has FC=ifort, then the following happens when I try to compile
> > a parallel code foo.f:
> >
> > 1. The preprocessor sees the #include "mpif.h" directive and correctly finds
> > $MPI_HOME/include/mpif.h and puts it into the _foo.f file
> >
> > 2. However, $MPI_HOME/include/mpif.h has inside it a fortran90 include
> > line, "include 'mpif-common.h'". And when ifort sees this line,
> > it doesn't know where to find mpif-common.h, and fails. If FC in
> > config.h is set to mpif90, then there is no problem, since mpif90
> > is smart enough to know where to find this second include file.
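> >
> > One can reproduce this by hand (a sketch; foo.f and the exact flags are
> > illustrative, not Amber's actual build line):
> >
> >    cpp -traditional -I$MPI_HOME/include foo.f > _foo.f  # cpp inlines mpif.h
> >    ifort -c _foo.f                      # fails at include 'mpif-common.h'
> >    ifort -I$MPI_HOME/include -c _foo.f  # passing the path fixes it; this is
> >                                         # exactly what mpif90 does for you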
> >
> > It was after Amber9 was released that the openMPI folks changed their header
> > file to use the fortran90 include mechanism to get mpif-common.h.
> > Previously, $MPI_HOME/include/mpif.h had been a simple file that could just
> > be included at the cpp pre-processing step. So, the configure script above
> > worked at release time, but stopped working later on (at least for me!).
> >
> > The simple question for Mark is this: what version of openMPI are you using?
> > Can you check what is inside your $MPI_HOME/include/mpif.h file?
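> >
> > For example (illustrative commands; ompi_info ships with openMPI):
> >
> >    grep -i include $MPI_HOME/include/mpif.h  # does it pull in mpif-common.h?
> >    ompi_info | head -3                       # reports the openMPI version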
> >
> > Somewhat of an aside: we've found that relying on the "showme" command is
> > not always reliable. This has been more of a problem with lam than with
> > openmpi, if I remember correctly. But the general consensus among Amber
> > developers is that we are better off setting FC=mpif90 (or mpif77 for lam)
> > and relying on mpif90 to do the right thing, both for compiling and loading.
> > (After all, that is its purpose in life.)
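> >
> > In practice that means editing config.h after configure so that both the
> > compile and load steps go through the wrapper; a sketch, assuming Amber9's
> > config.h uses the FC and LOAD variables for those two steps:
> >
> >    FC= mpif90
> >    LOAD= mpif90
> >
> > mpif90 then supplies the MPI include path and libraries on its own.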
> >
> > Bottom line: it's a real pain that we have to rely on an outside MPI
> > installation, which might be one of several flavors. For amber10, I am
> > seriously considering including an MPI installation with Amber, along with
> > instructions that we know(!?!) will work. This will at least get people
> > going; more advanced users can then substitute a local MPI library in place
> > of the one we supply, if they need that for efficiency. Comments and
> > suggestions would be welcome here.
> >
> > ...regards...dac
> >
> >
>
>
>
>
>



       

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu

Received on Sun Jul 29 2007 - 06:07:16 PDT