Re: [AMBER] building error of multi-GPU pmemd.cuda

From: case <>
Date: Mon, 6 Dec 2010 07:47:08 -0500

On Mon, Dec 06, 2010, Masakazu SEKIJIMA wrote:
> I'm just building multi-GPU pmemd.cuda. I compiled single-GPU
> pmemd.cuda successfully.
> But I am getting below errors:
> Could you give me some advice on this problem?
> /opt/cuda/3.1/bin/nvcc -use_fast_math -O3 -gencode
> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -DCUDA
> -DMPI -DMPICH_IGNORE_CXX_SEEK -I/opt/cuda/3.1/include -I/include -c
> In file included from gpu.h:15,
> from
> gputypes.h:25:17: error: mpi.h: No such file or directory
> make[3]: *** [kForcesUpdate.o] Error 1

This looks like you did not correctly set MPI_HOME before you ran the
configure script. The "-I/include" part marked above should point to the
include directory of your MPI installation.
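A minimal sketch of the fix, assuming a typical setup (the MPI prefix below is
hypothetical, and the exact configure flags vary by Amber version -- adjust both
for your system):

```shell
# Hypothetical path: point MPI_HOME at your actual MPI install prefix
export MPI_HOME=/opt/openmpi

# Sanity check: mpi.h must exist under $MPI_HOME/include,
# otherwise configure cannot generate a usable -I flag
ls "$MPI_HOME/include/mpi.h"

# Re-run configure with MPI_HOME set, then rebuild from scratch
./configure -cuda -mpi gnu
make clean && make
```

With MPI_HOME set, the nvcc command line should show
"-I$MPI_HOME/include" instead of the empty "-I/include" seen above.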

[Ross or Scott may have something to add here, but I don't see how else nvcc
will know where to find mpi.h. If this analysis is correct, the configure
script should complain if MPI_HOME is not set.]

...good luck...dac

AMBER mailing list
Received on Mon Dec 06 2010 - 05:00:04 PST