Re: [AMBER] Error Compiling pmemd.cuda.MPI (i.e. Multiple GPUs)

From: Adam Jion <adamjion.yahoo.com>
Date: Sat, 24 Mar 2012 10:29:09 -0700 (PDT)

Thank you guys for your help.
I found the following mpi.h files in my system:

/usr/include/mpich2/mpi.h
/usr/lib/openmpi/include/mpi.h
/usr/src/linux-headers-3.0.0-12-generic/include/config/usb/serial/siemens/mpi.h
/usr/src/linux-headers-3.0.0-15-generic/include/config/usb/serial/siemens/mpi.h
/home/adam/amber11/include/mpi.h
/home/adam/amber11/AmberTools/src/openmpi-1.4.3/ompi/include/mp

And these are the relevant lines in my config.h file:

#CUDA Specific build flags
NVCC=$(CUDA_HOME)/bin/nvcc -use_fast_math -O3 -gencode arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon -I/usr/include
PMEMD_CU_LIBS=-L$(CUDA_HOME)/lib64 -L$(CUDA_HOME)/lib -lcurand -lcufft -lcudart ./cuda/cuda.a
PMEMD_CU_DEFINES=-DCUDA -DMPI  -DMPICH_IGNORE_CXX_SEEK

I'm new to Linux and have been learning a lot on the fly. Two questions:
1. How do I add the directory containing mpi.h to the NVCC compiler flags in config.h?
2. Is the mpi.h from mpich2 the best, or are they all equivalent?
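
For concreteness, the kind of config.h edit in question - a sketch only, assuming the OpenMPI headers at /usr/lib/openmpi/include are the ones matching the mpif90/mpicc wrappers the build uses - would append that directory to the CUDA include flags:

```
# Sketch: -I/usr/lib/openmpi/include appended to the existing line
PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon -I/usr/include -I/usr/lib/openmpi/include
```

Whichever directory is chosen, it must belong to the same MPI installation as the mpicc/mpif90 wrappers on the PATH; mpich2 and openmpi headers are not interchangeable, so mixing them will fail at compile or link time.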

Regards,
Adam



________________________________
 From: Jason Swails <jason.swails.gmail.com>
To: AMBER Mailing List <amber.ambermd.org>
Sent: Sunday, March 25, 2012 1:06 AM
Subject: Re: [AMBER] Error Compiling pmemd.cuda.MPI (i.e. Multiple GPUs)
 
Where is your mpi.h file?

You can find it via:

find / -name mpi.h 2>/dev/null

(The `2>/dev/null` part suppresses the error messages about directories you
don't have read permission for.)
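
If an MPI compiler wrapper is already on the PATH, it can also report which include directory it uses, which tells you exactly whose mpi.h the rest of the build sees. The flag differs between implementations:

```shell
# Print the underlying compile command, including the -I flags
mpicc -show      # MPICH2 wrappers
mpicc --showme   # Open MPI wrappers
```

The directory after -I in that output is the one to add to the NVCC flags.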


On Sat, Mar 24, 2012 at 12:49 PM, Scott Le Grand <varelse2005.gmail.com> wrote:

> MPI is installed in a manner the configure script doesn't recognize.  A
> long-term fix is needed for this, but in the meantime, what Jason said will
> work once you locate the correct mpi.h for whatever MPI (if any) is
> installed on your machine.
>
> Scott
>
>
> On Sat, Mar 24, 2012 at 9:38 AM, Adam Jion <adamjion.yahoo.com> wrote:
>
> > Hi,
> >
> > This was my command line:
> > adam.adam-MS-7750:~/amber11/src$ make cuda_parallel
> >
> >
> > And this is the error I got (last 30 lines):
> > make -C ./cuda
> > make[3]: Entering directory `/home/adam/amber11/src/pmemd/src/cuda'
> > cpp -traditional -DMPI  -P  -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
> > -DDIRFRC_NOVEC -DFFTLOADBAL_2PROC -DPUBFFT -DCUDA -DMPI
> > -DMPICH_IGNORE_CXX_SEEK cuda_info.fpp cuda_info.f90
> > mpif90 -O3 -mtune=generic -DCUDA -DMPI  -DMPICH_IGNORE_CXX_SEEK
> > -I/usr/local/cuda/include -IB40C -IB40C/KernelCommon -I/usr/include -c
> > cuda_info.f90
> > mpicc -O3 -mtune=generic -DMPICH_IGNORE_CXX_SEEK -D_FILE_OFFSET_BITS=64
> > -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI  -DCUDA -DMPI
>  -DMPICH_IGNORE_CXX_SEEK
> > -I/usr/local/cuda/include -IB40C -IB40C/KernelCommon -I/usr/include -c
> > gpu.cpp
> > mpicc -O3 -mtune=generic -DMPICH_IGNORE_CXX_SEEK -D_FILE_OFFSET_BITS=64
> > -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI  -DCUDA -DMPI
>  -DMPICH_IGNORE_CXX_SEEK
> > -I/usr/local/cuda/include -IB40C -IB40C/KernelCommon -I/usr/include -c
> > gputypes.cpp
> > /usr/local/cuda/bin/nvcc -use_fast_math -O3 -gencode
> > arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -DCUDA
> > -DMPI  -DMPICH_IGNORE_CXX_SEEK -I/usr/local/cuda/include -IB40C
> > -IB40C/KernelCommon -I/usr/include  -c kForcesUpdate.cu
> > In file included from gpu.h:15,
> >                  from kForcesUpdate.cu:14:
> > gputypes.h:30: fatal error: mpi.h: No such file or directory
> > compilation terminated.
> > make[3]: *** [kForcesUpdate.o] Error 1
> > make[3]: Leaving directory `/home/adam/amber11/src/pmemd/src/cuda'
> > make[2]: *** [-L/usr/local/cuda/lib64] Error 2
> > make[2]: Leaving directory `/home/adam/amber11/src/pmemd/src'
> > make[1]: *** [cuda_parallel] Error 2
> > make[1]: Leaving directory `/home/adam/amber11/src/pmemd'
> > make: *** [cuda_parallel] Error 2
> >
> > Very appreciative of your help,
> > Adam
> >
> > ps. Amber 11 (Serial + Parallel) + Single GPU works well. So I'm really
> > wondering what's going on...
> >
> >
> >
> >
> > ________________________________
> >  From: Jason Swails <jason.swails.gmail.com>
> > To: Adam Jion <adamjion.yahoo.com>; AMBER Mailing List <
> amber.ambermd.org>
> > Sent: Saturday, March 24, 2012 11:17 PM
> > Subject: Re: [AMBER] Error Compiling pmemd.cuda.MPI (i.e. Multiple GPUs)
> >
> >
> > It would help to see more of your error message (i.e. the compile line
> > that failed, so we know what directories were searched for in the include
> > path).
> >
> > Another option is to set MPI_HOME (such that mpif90 is in
> > $MPI_HOME/bin/mpif90), and re-run configure
> >
> > (./configure -mpi -cuda [gnu|intel])
> >
> > This should have been grabbed by default inside configure, but it's
> > possible you have a funky configuration.
> >
> > HTH,
> > Jason
> >
> > P.S., alternatively, if you know where this include file lives, just add
> > that directory to the NVCC compiler flags in config.h and just re-run
> "make"
> >
> >
> > On Sat, Mar 24, 2012 at 9:03 AM, Adam Jion <adamjion.yahoo.com> wrote:
> >
> > Hi!
> > >
> > >I have problems compiling pmemd.cuda.MPI. (However, the single-GPU
> > version - pmemd.cuda - works.)
> > >The error log is:
> > >
> > >In file included from gpu.h:15,
> > >                from kForcesUpdate.cu:14:
> > >gputypes.h:30: fatal error: mpi.h: No such file or directory
> > >compilation terminated.
> > >make[3]: *** [kForcesUpdate.o] Error 1
> > >make[3]: Leaving directory `/home/adam/amber11/src/pmemd/src/cuda'
> > >make[2]: *** [-L/usr/local/cuda/lib64] Error 2
> > >make[2]: Leaving directory `/home/adam/amber11/src/pmemd/src'
> > >make[1]: *** [cuda_parallel] Error 2
> > >make[1]: Leaving directory `/home/adam/amber11/src/pmemd'
> > >make: *** [cuda_parallel] Error 2
> > >
> > >
> > >Any help will be much appreciated,
> > >Adam
> > >
> > >ps. All the bugfixes have been applied. I have managed to combine both
> > serial and parallel versions of Amber11 and AmberTools 1.5 without
> > problems. My compilers are gcc-4.4, g++-4.4, gfortran-4.4.
> > >_______________________________________________
> > >AMBER mailing list
> > >AMBER.ambermd.org
> > >http://lists.ambermd.org/mailman/listinfo/amber
> > >
> >
> >
> > --
> > Jason M. Swails
> > Quantum Theory Project,
> > University of Florida
> > Ph.D. Candidate
> > 352-392-4032
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
> >
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Mar 24 2012 - 10:30:04 PDT