Re: [AMBER] building error of multi-GPU pmemd.cuda

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 6 Dec 2010 11:55:03 -0500

On Mon, Dec 6, 2010 at 7:47 AM, case <case.biomaps.rutgers.edu> wrote:

> On Mon, Dec 06, 2010, Masakazu SEKIJIMA wrote:
> >
> > I'm just building multi-GPU pmemd.cuda. I compiled single-GPU
> > pmemd.cuda successfully.
> > But I am getting the errors below:
> > Could you give me some advice on this problem?
> >
> > /opt/cuda/3.1/bin/nvcc -use_fast_math -O3 -gencode
> > arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -DCUDA
> > -DMPI -DMPICH_IGNORE_CXX_SEEK -I/opt/cuda/3.1/include -I/include -c
> ^^^^^^^^^^^^
> > kForcesUpdate.cu
> > In file included from gpu.h:15,
> > from kForcesUpdate.cu:14:
> > gputypes.h:25:17: error: mpi.h: No such file or directory
> > make[3]: *** [kForcesUpdate.o] Error 1
>
> This looks like you did not correctly set MPI_HOME before you ran the
> configure script. The "-I/include" part marked above should point to the
> include directory of your MPI installation.
>
> [Ross or Scott may have something to add here, but I don't see how else
> nvcc will know where to find mpi.h. If this analysis is correct, the
> configure script should complain if MPI_HOME is not set.]
>
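
Concretely, the fix on the user's end is to point MPI_HOME at the MPI
installation before running configure, so that the "-I" flag marked above
resolves to a real include directory. A minimal sketch, assuming a
hypothetical MPICH2 install under /opt/mpich2 (adjust to your site):

  # Hypothetical example path: use wherever your MPI is actually installed.
  export MPI_HOME=/opt/mpich2
  export PATH=$MPI_HOME/bin:$PATH
  # Then re-run the configure script and rebuild the parallel pmemd.cuda
  # from a clean tree, so -I$MPI_HOME/include reaches the nvcc command lines.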

This complaint was added to configure after the Amber11 release, which is
why none was issued here when MPI_HOME wasn't set.
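
The check amounts to something of this general shape (a hypothetical
sketch, not the actual configure code):

  # Hypothetical guard; the real script's wording and logic may differ.
  if [ -z "$MPI_HOME" ]; then
      echo "Error: MPI_HOME is not set. Point it at your MPI installation"
      echo "(the directory containing include/mpi.h) and re-run configure."
      exit 1
  fi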

All the best,
Jason


> ...good luck...dac



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Graduate Student
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber