Re: [AMBER] Problems with pmemd.cuda.mpi (again!)

From: Adam Jion <adamjion.yahoo.com>
Date: Mon, 26 Mar 2012 10:25:30 -0700 (PDT)

Thank you, Jason!
Everything works well now.

Very appreciative of your help,
Adam



________________________________
 From: Jason Swails <jason.swails.gmail.com>
To: AMBER Mailing List <amber.ambermd.org>
Sent: Tuesday, March 27, 2012 12:51 AM
Subject: Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
 
My suggestion is to replace the OpenMPI installation you got from aptitude
with MPICH2.  You can do this as follows:

sudo apt-get remove openmpi-bin libopenmpi-dev
sudo apt-get install libmpich2-dev mpich2

Then try rebuilding parallel Amber.
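
For reference, a minimal sketch of what the clean rebuild could look like
(assuming your Amber tree is at /home/adam/amber11 as in your logs, and that
the MPICH2 wrappers are the mpicc/mpif90 now found first on your PATH):

which mpicc mpif90      # should point at the MPICH2 wrappers
mpif90 -show            # MPICH2 wrappers print the underlying compiler they call
cd /home/adam/amber11/src
make clean
make cuda_parallel      # or "make parallel" for the CPU-only parallel build

If "which" still shows the OpenMPI wrappers, put the MPICH2 bin directory at
the front of PATH before rebuilding.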

Good luck,
Jason

On Mon, Mar 26, 2012 at 12:13 PM, Adam Jion <adamjion.yahoo.com> wrote:

> Hi Jason,
>
> I'm unable to get an F90 compiler for my Ubuntu machine, so when I built
> MPICH2, it produced only mpicc and mpif77.
>
> I did what you said and switched the compilers to MPICH2's versions of mpicc
> and mpif77 (not mpif90).
> But when I ran make cuda_parallel, I got an error about a missing
> "cuda_info.o" (see the error log below).
>
> Is this due to using mpif77 instead of mpif90?
> Or is it due to something else?
>
> Regards,
> Adam
>
> Partial Error Log:
> /usr/local/cuda/bin/nvcc -use_fast_math -O3 -gencode
> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -DCUDA
> -DMPI  -DMPICH_IGNORE_CXX_SEEK -I/usr/local/cuda/include -IB40C
> -IB40C/KernelCommon -I/usr/include  -c kPMEInterpolation.cu
> ar rvs cuda.a cuda_info.o gpu.o gputypes.o kForcesUpdate.o
> kCalculateLocalForces.o kCalculateGBBornRadii.o
> kCalculatePMENonbondEnergy.o kCalculateGBNonbondEnergy1.o kNLRadixSort.o
> kCalculateGBNonbondEnergy2.o kShake.o kNeighborList.o kPMEInterpolation.o
> ar: creating cuda.a
> ar: cuda_info.o: No such file or directory
> make[3]: *** [cuda.a] Error 1
> make[3]: Leaving directory `/home/adam/amber11/src/pmemd/src/cuda'
> make[2]: *** [-L/usr/local/cuda/lib64] Error 2
> make[2]: Leaving directory `/home/adam/amber11/src/pmemd/src'
> make[1]: *** [cuda_parallel] Error 2
> make[1]: Leaving directory `/home/adam/amber11/src/pmemd'
> make: *** [cuda_parallel] Error 2
>
>  ------------------------------
> *From:* Jason Swails <jason.swails.gmail.com>
> *To:* AMBER Mailing List <amber.ambermd.org>
> *Sent:* Monday, March 26, 2012 10:24 PM
>
> *Subject:* Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
>
> The problem is probably what I pointed out earlier -- you didn't fully
> switch MPIs to MPICH2.  You need to set mpif90 and mpicc to the MPICH2
> installation (this means changing mpif90 and mpicc to
> /path/to/mpich2/bin/mpif90 and /path/to/mpich2/bin/mpicc, respectively, not
> changing mpif90 and mpicc to mpif90.mpich2 and mpicc.mpich2).  Note that
> /path/to/mpich2 should be replaced with the path that leads to the MPICH2
> binaries.
>
> Then you have to completely rebuild pmemd.cuda.MPI (so do a "make clean"
> before you try to do "make cuda_parallel" again).
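>
> A minimal sketch, using the variable names from the config.h you posted
> (/path/to/mpich2 is a placeholder for wherever your MPICH2 is installed):
>
> CC=/path/to/mpich2/bin/mpicc
> FC=/path/to/mpich2/bin/mpif90
> PMEMD_CC=/path/to/mpich2/bin/mpicc
> PMEMD_F90=/path/to/mpich2/bin/mpif90
> PMEMD_LD=/path/to/mpich2/bin/mpif90
>
> and then, from /home/adam/amber11/src:
>
> make clean
> make cuda_parallel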
>
> HTH,
> Jason
>
> On Mon, Mar 26, 2012 at 2:07 AM, Adam Jion <adamjion.yahoo.com> wrote:
>
> > Hi Jason,
> >
> > Ignore my earlier email. The error was caused by pointing to a 32-bit
> > library instead of the 64-bit one. After fixing that, I get an error about
> > needing to run on 2 or more processors. See below.
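> >
> > (For the record, a quick way to check which flavor of a library you are
> > pointing at, assuming the stock /usr/local/cuda layout from earlier in this
> > thread:
> >
> > file /usr/local/cuda/lib/libcurand.so.4     # usually the 32-bit ELF
> > file /usr/local/cuda/lib64/libcurand.so.4   # the 64-bit ELF
> >
> > On a 64-bit machine, LD_LIBRARY_PATH should include the lib64 directory.)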
> >
> > I have an Intel i7 system with 8 threads and 2 GPUs.
> > What's the problem?
> >
> > Is it because of the export DO_PARALLEL='mpirun -np 2'?
> > Or is it because I'm running a job in the background (i.e. a Gromacs
> > simulation)?
> >
> > Regards,
> > Adam
> >
> > Error Log:
> > adam.adam-MS-7750:~/amber11/test$ ./test_amber_cuda_parallel.sh
> > Using default GPU_ID = -1
> > Using default PREC_MODEL = SPDP
> > cd cuda && make -k test.pmemd.cuda.MPI GPU_ID=-1 PREC_MODEL=SPDP
> > make[1]: Entering directory `/home/adam/amber11/test/cuda'
> > ------------------------------------
> > Running CUDA Implicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd trpcage/ && ./Run_md_trpcage -1 SPDP netcdf.mod
> >  MPI version of PMEMD must be used with 2 or more processors!
> >
> --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> --------------------------------------------------------------------------
> >  MPI version of PMEMD must be used with 2 or more processors!
> >
> --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> --------------------------------------------------------------------------
> >
> >  ./Run_md_trpcage:  Program error
> > make[1]: *** [test.pmemd.cuda.gb] Error 1
> > ------------------------------------
> > Running CUDA Explicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd 4096wat/ && ./Run.pure_wat -1 SPDP netcdf.mod
> >  MPI version of PMEMD must be used with 2 or more processors!
> >  MPI version of PMEMD must be used with 2 or more processors!
> >
> --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > with errorcode 1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> --------------------------------------------------------------------------
> >
> >  ./Run.pure_wat:  Program error
> > make[1]: *** [test.pmemd.cuda.pme] Error 1
> > make[1]: Target `test.pmemd.cuda.MPI' not remade because of errors.
> > make[1]: Leaving directory `/home/adam/amber11/test/cuda'
> > make: *** [test.pmemd.cuda.MPI] Error 2
> > make: Target `test.parallel.cuda' not remade because of errors.
> > 0 file comparisons passed
> > 0 file comparisons failed
> > 10 tests experienced errors
> > Test log file saved as
> > logs/test_amber_cuda_parallel/2012-03-26_13-58-56.log
> >
> > No test diffs to save!
> >
> >  ------------------------------
> > *From:* Adam Jion <adamjion.yahoo.com>
> > *To:* Jason Swails <jason.swails.gmail.com>
> > *Sent:* Monday, March 26, 2012 1:06 PM
> >
> > *Subject:* Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
>
> >
> > Hi Jason,
> >
> > I did what you told me (i.e. pointed to the CUDA libraries), but this
> > time I got errors about a "wrong ELF class".
> > Below is the error log.
> >
> > Regards,
> > Adam
> >
> > Error Log:
> > adam.adam-MS-7750:~/amber11/test$ ./test_amber_cuda_parallel.sh
> > Using default GPU_ID = -1
> > Using default PREC_MODEL = SPDP
> > cd cuda && make -k test.pmemd.cuda.MPI GPU_ID=-1 PREC_MODEL=SPDP
> > make[1]: Entering directory `/home/adam/amber11/test/cuda'
> > ------------------------------------
> > Running CUDA Implicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd trpcage/ && ./Run_md_trpcage -1 SPDP netcdf.mod
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: wrong ELF class: ELFCLASS32
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: wrong ELF class: ELFCLASS32
> >  ./Run_md_trpcage:  Program error
> > make[1]: *** [test.pmemd.cuda.gb] Error 1
> > ------------------------------------
> > Running CUDA Explicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd 4096wat/ && ./Run.pure_wat -1 SPDP netcdf.mod
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: wrong ELF class: ELFCLASS32
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: wrong ELF class: ELFCLASS32
> >  ./Run.pure_wat:  Program error
> > make[1]: *** [test.pmemd.cuda.pme] Error 1
> > make[1]: Target `test.pmemd.cuda.MPI' not remade because of errors.
> > make[1]: Leaving directory `/home/adam/amber11/test/cuda'
> > make: *** [test.pmemd.cuda.MPI] Error 2
> > make: Target `test.parallel.cuda' not remade because of errors.
> > 0 file comparisons passed
> > 0 file comparisons failed
> > 11 tests experienced errors
> > Test log file saved as
> > logs/test_amber_cuda_parallel/2012-03-26_13-01-13.log
> > No test diffs to save!
> >
> >  ------------------------------
> > *From:* Jason Swails <jason.swails.gmail.com>
> > *To:* Adam Jion <adamjion.yahoo.com>
> > *Sent:* Monday, March 26, 2012 3:48 AM
> > *Subject:* Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
>
> >
> > These errors result from the fact that the dynamic CUDA libs linked into
> > pmemd.cuda.MPI can't be found in the standard search paths.  You'll need
> > to add the CUDA lib directory to your LD_LIBRARY_PATH.  This should always
> > be done, so you might as well put it in your .bashrc or something:
> >
> > export LD_LIBRARY_PATH=$LD_LIBRARY_PATH\:/usr/local/cuda/lib
> >
> > (Note: the /usr/local/cuda part depends on where you installed CUDA -- it is
> > the CUDA_HOME you set when you built Amber's CUDA code in the first place.)
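> >
> > A quick sanity check, using the binary path from your test logs, is to ask
> > the runtime linker whether it can now resolve everything:
> >
> > export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib
> > ldd /home/adam/amber11/bin/pmemd.cuda_SPDP.MPI | grep "not found"
> >
> > If the grep prints nothing, all of the shared libraries (libcurand, libcufft,
> > libcudart) are being found.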
> >
> > HTH,
> > Jason
> >
> > On Sun, Mar 25, 2012 at 12:58 PM, Adam Jion <adamjion.yahoo.com> wrote:
> >
> > Yes, Jason. You're right.
> > Changing /bin/sh to /bin/bash eliminates the "[: 50: unexpected operator"
> > and "[: 57: unexpected operator" errors.
> > However, a new error has arisen. It is given below.
> >
> > Regards,
> > Adam
> >
> > ps. Also, I checked the compiler wrappers in MPICH2. They are mpicc and
> > mpif77 (not mpif90). This is because I compiled MPICH2 with gcc-4.4.6,
> > which does not come with a Fortran 90 compiler. Presumably a newer gcc
> > would include a Fortran 90 compiler that can create mpif90; however, CUDA
> > does not work with gcc-5 and above.
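> >
> > (A possible way around this, sketched on the assumption that you built
> > MPICH2 from source: the gfortran that ships with the gcc-4.4 series is a
> > Fortran 90/95 compiler, so installing it and pointing MPICH2's configure at
> > it should give you an mpif90 wrapper:
> >
> > sudo apt-get install gfortran-4.4
> > cd /path/to/mpich2-source
> > ./configure --prefix=/path/to/mpich2 CC=gcc-4.4 F77=gfortran-4.4 F90=gfortran-4.4
> > make && make install
> >
> > Depending on the MPICH2 release, the Fortran 90 compiler variable is spelled
> > F90= or FC=.)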
> >
> > pps. Error Log
> > adam.adam-MS-7750:~/amber11/test$ ./test_amber_cuda_parallel.sh
> > Using default GPU_ID = -1
> > Using default PREC_MODEL = SPDP
> > cd cuda && make -k test.pmemd.cuda.MPI GPU_ID=-1 PREC_MODEL=SPDP
> > make[1]: Entering directory `/home/adam/amber11/test/cuda'
> > ------------------------------------
> > Running CUDA Implicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd trpcage/ && ./Run_md_trpcage -1 SPDP netcdf.mod
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: cannot open shared object file: No such file or directory
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: cannot open shared object file: No such file or directory
> >
> >  ./Run_md_trpcage:  Program error
> > make[1]: *** [test.pmemd.cuda.gb] Error 1
> > ------------------------------------
> > Running CUDA Explicit solvent tests.
> >  Precision Model = SPDP
> >            GPU_ID = -1
> > ------------------------------------
> > cd 4096wat/ && ./Run.pure_wat -1 SPDP netcdf.mod
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: cannot open shared object file: No such file or directory
> > ../../../bin/pmemd.cuda_SPDP.MPI: error while loading shared libraries:
> > libcurand.so.4: cannot open shared object file: No such file or directory
> >
> >  ./Run.pure_wat:  Program error
> > make[1]: *** [test.pmemd.cuda.pme] Error 1
> > make[1]: Target `test.pmemd.cuda.MPI' not remade because of errors.
> > make[1]: Leaving directory `/home/adam/amber11/test/cuda'
> > make: *** [test.pmemd.cuda.MPI] Error 2
> > make: Target `test.parallel.cuda' not remade because of errors.
> > 0 file comparisons passed
> > 0 file comparisons failed
> > 11 tests experienced errors
> > Test log file saved as
> > logs/test_amber_cuda_parallel/2012-03-26_00-47-44.log
> >
> > No test diffs to save!
> >
> >  ------------------------------
> > *From:* Jason Swails <jason.swails.gmail.com>
> > *To:* AMBER Mailing List <amber.ambermd.org>
> > *Sent:* Sunday, March 25, 2012 11:24 PM
> >
> > *Subject:* Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
>
> >
> > OK, I think I know what's happening.  I'm guessing that you're using
> > Ubuntu, correct?  If so, the default /bin/sh on Ubuntu is actually dash,
> > not bash (most other OSes use bash).
> >
> > The problem with test_amber_cuda_parallel.sh (and test_amber_cuda.sh too,
> > I think) is that lines 50 and 57 use bash-isms that dash does not
> > recognize.  As a result, it does not set the precision model (which is why
> > it's looking for pmemd.cuda_.MPI, which doesn't exist, instead of
> > pmemd.cuda_SPDP.MPI, which does).
> >
> > The easiest thing to do is to just change the top line of
> > test_amber_cuda_parallel.sh to read:
> >
> > #!/bin/bash
> >
> > instead of
> >
> > #!/bin/sh
> >
> > and see if that works.
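> >
> > (Two quick checks, if useful: you can confirm what /bin/sh really is, and
> > you can also force bash without editing the script at all:
> >
> > ls -l /bin/sh                          # on Ubuntu this is typically a symlink to dash
> > bash ./test_amber_cuda_parallel.sh     # running it through bash ignores the #!/bin/sh line
> > )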
> >
> > However, I still expect most of the tests to fail unless you really
> > recompiled with the MPICH2 compiler wrappers.  That doesn't mean turning
> > mpif90 and mpicc into mpif90.mpich2 and mpicc.mpich2; it means changing
> > "mpif90" to "/path/to/mpich2/bin/mpif90" and mpicc to
> > "/path/to/mpich2/bin/mpicc", where /path/to/mpich2/bin is the path that
> > points to the mpif90 and mpicc compilers in MPICH2.
> >
> > HTH,
> > Jason
> >
> > On Sun, Mar 25, 2012 at 10:34 AM, Adam Jion <adamjion.yahoo.com> wrote:
> >
> > > Hi,
> > >
> > > Thanks for the reply. I did as you suggested but still got the same
> > error.
> > > I'm still unable to test pmemd.cuda.mpi.
> > >
> > > Regards,
> > > Adam
> > >
> > >
> > > Error Log:
> > > adam.adam-MS-7750:~/amber11/test$ export DO_PARALLEL='mpirun -np 2'
> > > adam.adam-MS-7750:~/amber11/test$ make test.cuda.parallel
> > > (find . -name '*.dif' -o -name 'profile_mpi' | \
> > >    while read dif ;\
> > >    do \
> > >        rm -f $dif ;\
> > >    done ;\
> > >    )
> > > rm -f TEST_FAILURES.diff
> > > ./test_amber_cuda_parallel.sh
> > > [: 50: unexpected operator
> > > [: 57: unexpected operator
> > > make[1]: Entering directory `/home/adam/amber11/test'
> > > cd cuda && make -k test.pmemd.cuda.MPI GPU_ID= PREC_MODEL=
> > > make[2]: Entering directory `/home/adam/amber11/test/cuda'
> > > ------------------------------------
> > > Running CUDA Implicit solvent tests.
> > >  Precision Model =
> > >            GPU_ID =
> > > ------------------------------------
> > > cd trpcage/ && ./Run_md_trpcage  netcdf.mod
> > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > >  ./Run_md_trpcage:  Program error
> > > make[2]: *** [test.pmemd.cuda.gb] Error 1
> > > ------------------------------------
> > > Running CUDA Explicit solvent tests.
> > >  Precision Model =
> > >            GPU_ID =
> > > ------------------------------------
> > > cd 4096wat/ && ./Run.pure_wat  netcdf.mod
> > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > >  ./Run.pure_wat:  Program error
> > > make[2]: *** [test.pmemd.cuda.pme] Error 1
> > > make[2]: Target `test.pmemd.cuda.MPI' not remade because of errors.
> > > make[2]: Leaving directory `/home/adam/amber11/test/cuda'
> > > make[1]: *** [test.pmemd.cuda.MPI] Error 2
> > > make[1]: Target `test.parallel.cuda' not remade because of errors.
> > > make[1]: Leaving directory `/home/adam/amber11/test'
> > > 0 file comparisons passed
> > > 0 file comparisons failed
> > > 11 tests experienced errors
> > > Test log file saved as
> > > logs/test_amber_cuda_parallel/2012-03-25_22-31-20.log
> > > No test diffs to save!
> > >
> > >
> > >
> > >
> > > ________________________________
> > >  From: Ross Walker <rosscwalker.gmail.com>
> > > To: Adam Jion <adamjion.yahoo.com>; AMBER Mailing List <
> > amber.ambermd.org>
> > > Sent: Sunday, March 25, 2012 10:03 PM
> > > Subject: Re: [AMBER] Problems with pmemd.cuda.mpi (again!)
> > >
> > > Hi Adam,
> > >
> > > The scripts are not designed to be used directly.
> > >
> > > Do:
> > >
> > > export DO_PARALLEL='mpirun -np 2'
> > > make test.cuda.parallel
> > >
> > > All the best
> > > Ross
> > >
> > >
> > >
> > > On Mar 25, 2012, at 4:51, Adam Jion <adamjion.yahoo.com> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'm having a nightmare making Amber11 fully functional.
> > > > Is anyone able to help?
> > > >
> > > >
> > > > I managed to compile, install and test the parallel version of Amber 11.
> > > > I'm also able to compile, install and test the serial version of
> > > > Amber11-GPU (i.e. pmemd.cuda).
> > > >
> > > > After installing MPICH2, I was able to compile and install the
> > > > multi-GPU version of Amber 11 (i.e. pmemd.cuda.MPI).
> > > > All this was done without tweaking the config.h file.
> > > >
> > > > However, for some reason, I cannot run the tests for pmemd.cuda.MPI.
> > > > Here's the error log:
> > > >
> > > > adam.adam-MS-7750:~/amber11/test$ ./test_amber_cuda_parallel.sh
> > > > [: 50: unexpected operator
> > > > [: 57: unexpected operator
> > > > cd cuda && make -k test.pmemd.cuda.MPI GPU_ID= PREC_MODEL=
> > > > make[1]: Entering directory `/home/adam/amber11/test/cuda'
> > > > ------------------------------------
> > > > Running CUDA Implicit solvent tests.
> > > >  Precision Model =
> > > >            GPU_ID =
> > > > ------------------------------------
> > > > cd trpcage/ && ./Run_md_trpcage  netcdf.mod
> > > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > >  ./Run_md_trpcage:  Program error
> > > > make[1]: *** [test.pmemd.cuda.gb] Error 1
> > > > ------------------------------------
> > > > Running CUDA Explicit solvent tests.
> > > >  Precision Model =
> > > >            GPU_ID =
> > > > ------------------------------------
> > > > cd 4096wat/ && ./Run.pure_wat  netcdf.mod
> > > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > > [proxy:0:0.adam-MS-7750] HYDU_create_process
> > > (./utils/launch/launch.c:69): execvp error on file
> > > ../../../bin/pmemd.cuda_.MPI (No such file or directory)
> > > >  ./Run.pure_wat:  Program error
> > > > make[1]: *** [test.pmemd.cuda.pme] Error 1
> > > > make[1]: Target `test.pmemd.cuda.MPI' not remade because of errors.
> > > > make[1]: Leaving directory `/home/adam/amber11/test/cuda'
> > > > make: *** [test.pmemd.cuda.MPI] Error 2
> > > > make: Target `test.parallel.cuda' not remade because of errors.
> > > > 0 file comparisons passed
> > > > 0 file comparisons failed
> > > > 11 tests experienced errors
> > > > Test log file saved as
> > > logs/test_amber_cuda_parallel/2012-03-25_19-35-37.log
> > > > No test diffs to save!
> > > >
> > > > Appreciate any help,
> > > > Adam
> > > >
> > > > ps. Using compilers gcc-4.4.6, gfortran-4.4.6, mpicc, mpif90
> > > > pps. The config.h file is given below:
> > > >
> > > > #MODIFIED FOR AMBERTOOLS 1.5
> > > > #  Amber configuration file, created with: ./configure -cuda -mpi gnu
> > > >
> > > >
> > >
> >
> ###############################################################################
> > > >
> > > > # (1)  Location of the installation
> > > >
> > > > BINDIR=/home/adam/amber11/bin
> > > > LIBDIR=/home/adam/amber11/lib
> > > > INCDIR=/home/adam/amber11/include
> > > > DATDIR=/home/adam/amber11/dat
> > > >
> > > >
> > >
> >
> ###############################################################################
> > > >
> > > >
> > > > #  (2) If you want to search additional libraries by default, add
> them
> > > > #      to the FLIBS variable here.  (External libraries can also be
> > > linked into
> > > > #      NAB programs simply by including them on the command line;
> > > libraries
> > > > #      included in FLIBS are always searched.)
> > > >
> > > > FLIBS=  -L$(LIBDIR) -lsff_mpi -lpbsa  $(LIBDIR)/arpack.a
> > > $(LIBDIR)/lapack.a $(LIBDIR)/blas.a  $(LIBDIR)/libnetcdf.a  -lgfortran
> > > > FLIBS_PTRAJ= $(LIBDIR)/arpack.a $(LIBDIR)/lapack.a $(LIBDIR)/blas.a
> > >  -lgfortran
> > > > FLIBSF= $(LIBDIR)/arpack.a $(LIBDIR)/lapack.a $(LIBDIR)/blas.a
> > > > FLIBS_FFTW2=-L$(LIBDIR)
> > > >
> > >
> >
> ###############################################################################
> > > >
> > > > #  (3)  Modify any of the following if you need to change, e.g. to
> use
> > > gcc
> > > > #        rather than cc, etc.
> > > >
> > > > SHELL=/bin/sh
> > > > INSTALLTYPE=cuda_parallel
> > > >
> > > > #  Set the C compiler, etc.
> > > >
> > > > #          For GNU:  CC-->gcc; LEX-->flex; YACC-->bison -y -t;
> > > > #          Note: If your lexer is "really" flex, you need to set
> > > > #          LEX=flex below.  For example, on many linux distributions,
> > > > #          /usr/bin/lex is really just a pointer to /usr/bin/flex,
> > > > #          so LEX=flex is necessary.  In general, gcc seems to need
> > flex.
> > > >
> > > > #  The compiler flags CFLAGS and CXXFLAGS should always be used.
> > > > #  By contrast, *OPTFLAGS and *NOOPTFLAGS will only be used with
> > > > #  certain files, and usually at compile-time but not link-time.
> > > > #  Where *OPTFLAGS and *NOOPTFLAGS are requested (in Makefiles,
> > > > #  makedepend and depend), they should come before CFLAGS or
> > > > #  CXXFLAGS; this allows the user to override *OPTFLAGS and
> > > > #  *NOOPTFLAGS using the BUILDFLAGS variable.
> > > > CC=mpicc
> > > > CFLAGS= -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI
> > > $(CUSTOMBUILDFLAGS) $(AMBERCFLAGS)
> > > > OCFLAGS= $(COPTFLAGS) $(AMBERCFLAGS)
> > > > CNOOPTFLAGS=
> > > > COPTFLAGS=-O3 -mtune=generic -DBINTRAJ -DHASGZ -DHASBZ2
> > > > AMBERCFLAGS= $(AMBERBUILDFLAGS)
> > > >
> > > > CXX=g++
> > > > CPLUSPLUS=g++
> > > > CXXFLAGS= -DMPI  $(CUSTOMBUILDFLAGS)
> > > > CXXNOOPTFLAGS=
> > > > CXXOPTFLAGS=-O3
> > > > AMBERCXXFLAGS= $(AMBERBUILDFLAGS)
> > > >
> > > > NABFLAGS=
> > > >
> > > > LDFLAGS= $(CUSTOMBUILDFLAGS) $(AMBERLDFLAGS)
> > > > AMBERLDFLAGS=$(AMBERBUILDFLAGS)
> > > >
> > > > LEX=  flex
> > > > YACC=  $(BINDIR)/yacc
> > > > AR=    ar rv
> > > > M4=    m4
> > > > RANLIB=ranlib
> > > >
> > > > #  Set the C-preprocessor.  Code for a small preprocessor is in
> > > > #    ucpp-1.3;  it gets installed as $(BINDIR)/ucpp;
> > > > #    this can generally be used (maybe not on 64-bit machines like
> > > altix).
> > > >
> > > > CPP=    $(BINDIR)/ucpp -l
> > > >
> > > > #  These variables control whether we will use compiled versions of
> > BLAS
> > > > #  and LAPACK (which are generally slower), or whether those
> libraries
> > > are
> > > > #  already available (presumably in an optimized form).
> > > >
> > > > LAPACK=install
> > > > BLAS=install
> > > > F2C=skip
> > > >
> > > > #  These variables determine whether builtin versions of certain
> > > components
> > > > #  can be used, or whether we need to compile our own versions.
> > > >
> > > > UCPP=install
> > > > C9XCOMPLEX=skip
> > > >
> > > > #  For Windows/cygwin, set SFX to ".exe"; for Unix/Linux leave it
> > empty:
> > > > #  Set OBJSFX to ".obj" instead of ".o" on Windows:
> > > >
> > > > SFX=
> > > > OSFX=.o
> > > > MV=mv
> > > > RM=rm
> > > > CP=cp
> > > >
> > > > #  Information about Fortran compilation:
> > > >
> > > > FC=mpif90
> > > > FFLAGS=  $(LOCALFLAGS) $(CUSTOMBUILDFLAGS) $(FNOOPTFLAGS)
> > > > FNOOPTFLAGS= -O0
> > > > FOPTFLAGS= -O3 -mtune=generic $(LOCALFLAGS) $(CUSTOMBUILDFLAGS)
> > > > AMBERFFLAGS=$(AMBERBUILDFLAGS)
> > > > FREEFORMAT_FLAG= -ffree-form
> > > > LM=-lm
> > > > FPP=cpp -traditional $(FPPFLAGS) $(AMBERFPPFLAGS)
> > > > FPPFLAGS=-P  -DBINTRAJ -DMPI  $(CUSTOMBUILDFLAGS)
> > > > AMBERFPPFLAGS=$(AMBERBUILDFLAGS)
> > > >
> > > >
> > > > BUILD_SLEAP=install_sleap
> > > > XHOME=
> > > > XLIBS= -L/lib64 -L/lib
> > > > MAKE_XLEAP=skip_xleap
> > > >
> > > > NETCDF=netcdf.mod
> > > > NETCDFLIB=$(LIBDIR)/libnetcdf.a
> > > > PNETCDF=yes
> > > > PNETCDFLIB=$(LIBDIR)/libpnetcdf.a
> > > >
> > > > ZLIB=-lz
> > > > BZLIB=-lbz2
> > > >
> > > > HASFC=yes
> > > > MDGX=yes
> > > > CPPTRAJ=yes
> > > > MTKPP=
> > > >
> > > > COMPILER=gnu
> > > > MKL=
> > > > MKL_PROCESSOR=
> > > >
> > > > #CUDA Specific build flags
> > > > NVCC=$(CUDA_HOME)/bin/nvcc -use_fast_math -O3 -gencode
> > > arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
> > > > PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon
> > > -I/usr/include
> > > > PMEMD_CU_LIBS=-L$(CUDA_HOME)/lib64 -L$(CUDA_HOME)/lib -lcurand
> -lcufft
> > > -lcudart ./cuda/cuda.a
> > > > PMEMD_CU_DEFINES=-DCUDA -DMPI  -DMPICH_IGNORE_CXX_SEEK
> > > >
> > > > #PMEMD Specific build flags
> > > > PMEMD_FPP=cpp -traditional -DMPI  -P  -DBINTRAJ -DDIRFRC_EFS
> > > -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DFFTLOADBAL_2PROC -DPUBFFT
> > > > PMEMD_NETCDFLIB= $(NETCDFLIB)
> > > > PMEMD_F90=mpif90
> > > > PMEMD_FOPTFLAGS=-O3 -mtune=generic
> > > > PMEMD_CC=mpicc
> > > > PMEMD_COPTFLAGS=-O3 -mtune=generic -DMPICH_IGNORE_CXX_SEEK
> > > -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI
> > > > PMEMD_FLIBSF=
> > > > PMEMD_LD= mpif90
> > > > LDOUT= -o
> > > >
> > > > #3D-RISM MPI
> > > > RISMSFF=
> > > > SANDER_RISM_MPI=sander.RISM.MPI$(SFX)
> > > > TESTRISM=
> > > >
> > > > #PUPIL
> > > > PUPILLIBS=-lrt -lm -lc -L${PUPIL_PATH}/lib -lPUPIL -lPUPILBlind
> > > >
> > > > #Python
> > > > PYINSTALL=
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > _______________________________________________
> > > > AMBER mailing list
> > > > AMBER.ambermd.org
> > > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> > > _______________________________________________
> > > AMBER mailing list
> > > AMBER.ambermd.org
> > > http://lists.ambermd.org/mailman/listinfo/amber
> > >
> >
> >
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Candidate
> > 352-392-4032
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
> >
> >
> >
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Candidate
> > 352-392-4032
> >
> >
> >
> >
> >
>
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Candidate
> 352-392-4032
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>
>


-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Mar 26 2012 - 10:30:05 PDT