[AMBER] 'undefined reference to MPI::...' error messages when compiling Amber 12 on CentOS 6 for "-mpi -cuda"

From: Frank Thommen <structures-it.embl-heidelberg.de>
Date: Fri, 05 Jul 2013 11:45:07 +0200

Hi,

when running `make install` (after `./configure -mpi -cuda gnu`) on our
cluster (LSF running on CentOS 6.2), I'm getting the following errors
(only the lines for the final compilation block are shown):

amber-12 > make install
[...]
cd AmberTools/src && make install
make[1]: Entering directory `/g/software/linux/pack/amber-12/TEST/AmberTools/src'
AmberTools12 has no CUDA-enabled components
(cd ../../src && make cuda_parallel )
make[2]: Entering directory `/g/software/linux/pack/amber-12/TEST/src'
Starting installation of Amber12 (cuda parallel) at Fri Jul 5 11:20:02 CEST 2013.
cd pmemd && make cuda_parallel
make[3]: Entering directory `/g/software/linux/pack/amber-12/TEST/src/pmemd'
make -C src/ cuda_parallel
make[4]: Entering directory `/g/software/linux/pack/amber-12/TEST/src/pmemd/src'
make -C ./cuda
make[5]: Entering directory `/g/software/linux/pack/amber-12/TEST/src/pmemd/src/cuda'
make[5]: `cuda.a' is up to date.
make[5]: Leaving directory `/g/software/linux/pack/amber-12/TEST/src/pmemd/src/cuda'
[the `make -C ./cuda' block above repeats verbatim four more times]
mpif90 -O3 -mtune=native -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
-Duse_SPFP -o pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o
state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o
mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o nbips.o
parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o
pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o
pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o
nb_exclusions.o cit.o dynamics.o bonds.o angles.o dihedrals.o
extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o runmin.o
constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o
pmemd_lib.o runfiles.o file_io.o bintraj.o binrestart.o pmemd_clib.o
pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o
master_setup.o pme_alltasks_setup.o pme_setup.o ene_frc_splines.o
gb_alltasks_setup.o nextprmtop_section.o angles_ub.o dihedrals_imp.o
cmap.o charmm.o charmm_gold.o findmask.o remd.o multipmemd.o
remd_exchg.o amd.o gbsa.o \
      ./cuda/cuda.a -L/g/software/linux/pack/cuda-5.0.35/lib64
-L/g/software/linux/pack/cuda-5.0.35/lib -lcurand -lcufft -lcudart
-L/g/software/linux/pack/amber-12/TEST/lib
-L/g/software/linux/pack/amber-12/TEST/lib -lnetcdf
./cuda/cuda.a(gpu.o): In function `MPI::Comm::Set_errhandler(MPI::Errhandler const&)':
gpu.cpp:(.text._ZN3MPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[MPI::Comm::Set_errhandler(MPI::Errhandler const&)]+0x12): undefined reference to `MPI::Comm::mpi_err_map'
gpu.cpp:(.text._ZN3MPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[MPI::Comm::Set_errhandler(MPI::Errhandler const&)]+0x1c): undefined reference to `MPI::Comm::mpi_err_map'
gpu.cpp:(.text._ZN3MPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[MPI::Comm::Set_errhandler(MPI::Errhandler const&)]+0x3c): undefined reference to `MPI::Comm::mpi_err_map'
gpu.cpp:(.text._ZN3MPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[MPI::Comm::Set_errhandler(MPI::Errhandler const&)]+0x84): undefined reference to `MPI::Comm::mpi_err_map'
./cuda/cuda.a(gpu.o): In function `MPI::Comm::Exscan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const':
gpu.cpp:(.text._ZNK3MPI4Comm6ExscanEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Exscan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x10): undefined reference to `MPI::Comm::current_op'
gpu.cpp:(.text._ZNK3MPI4Comm6ExscanEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Exscan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x2d): undefined reference to `MPI::Comm::current_op'
./cuda/cuda.a(gpu.o): In function `MPI::Comm::Scan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const':
gpu.cpp:(.text._ZNK3MPI4Comm4ScanEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Scan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x10): undefined reference to `MPI::Comm::current_op'
gpu.cpp:(.text._ZNK3MPI4Comm4ScanEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Scan(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x2d): undefined reference to `MPI::Comm::current_op'
./cuda/cuda.a(gpu.o): In function `MPI::Comm::Reduce_scatter_block(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const':
gpu.cpp:(.text._ZNK3MPI4Comm20Reduce_scatter_blockEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Reduce_scatter_block(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x10): undefined reference to `MPI::Comm::current_op'
./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI4Comm20Reduce_scatter_blockEPKvPviRKNS_8DatatypeERKNS_2OpE[MPI::Comm::Reduce_scatter_block(void const*, void*, int, MPI::Datatype const&, MPI::Op const&) const]+0x2d): more undefined references to `MPI::Comm::current_op' follow
./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)':
gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x1f): undefined reference to `op_intercept'
./cuda/cuda.a(gpu.o): In function `MPI::Comm::Set_attr(int, void const*) const':
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x14): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x1e): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x44): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x5d): undefined reference to `MPI::Comm::key_ref_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x63): undefined reference to `MPI::Comm::key_ref_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x83): undefined reference to `MPI::Comm::key_ref_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0xcf): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x116): undefined reference to `MPI::Comm::key_ref_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x1a9): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x1af): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x1d4): undefined reference to `MPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK3MPI4Comm8Set_attrEiPKv[MPI::Comm::Set_attr(int, void const*) const]+0x23c): undefined reference to `MPI::Comm::mpi_comm_map'
./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Dist_graph_create(int, int const*, int const*, int const*, int const*, MPI::Info&, bool) const':
gpu.cpp:(.text._ZNK3MPI9Intracomm17Dist_graph_createEiPKiS2_S2_S2_RNS_4InfoEb[MPI::Intracomm::Dist_graph_create(int, int const*, int const*, int const*, int const*, MPI::Info&, bool) const]+0xd4): undefined reference to `MPI::Comm::mpi_comm_map'
./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI9Intracomm17Dist_graph_createEiPKiS2_S2_S2_RNS_4InfoEb[MPI::Intracomm::Dist_graph_create(int, int const*, int const*, int const*, int const*, MPI::Info&, bool) const]+0xda): more undefined references to `MPI::Comm::mpi_comm_map' follow
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI2OpE[vtable for MPI::Op]+0x30): undefined reference to `MPI::Op::Reduce_local(void const*, void*, int, MPI::Datatype const&) const'
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI2OpE[vtable for MPI::Op]+0x38): undefined reference to `MPI::Op::Is_commutative() const'
collect2: ld returned 1 exit status
make[4]: *** [pmemd.cuda.MPI] Error 1
make[4]: Leaving directory `/g/software/linux/pack/amber-12/TEST/src/pmemd/src'
make[3]: *** [cuda_parallel] Error 2
make[3]: Leaving directory `/g/software/linux/pack/amber-12/TEST/src/pmemd'
make[2]: *** [cuda_parallel] Error 2
make[2]: Leaving directory `/g/software/linux/pack/amber-12/TEST/src'
make[1]: [cuda_parallel] Error 2 (ignored)
make[1]: Leaving directory `/g/software/linux/pack/amber-12/TEST/AmberTools/src'
make[1]: Entering directory `/g/software/linux/pack/amber-12/TEST/src'
Starting installation of Amber12 (cuda parallel) at Fri Jul 5 11:20:02 CEST 2013.
cd pmemd && make cuda_parallel
[the second cuda_parallel pass repeats the same link command and the same `undefined reference to MPI::...' errors verbatim, this time fatally]
collect2: ld returned 1 exit status
make[3]: *** [pmemd.cuda.MPI] Error 1
make[2]: *** [cuda_parallel] Error 2
make[1]: *** [cuda_parallel] Error 2
make: *** [install] Error 2

amber-12 >
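
For reference, the whole build boils down to the sequence below (a sketch
of what I run, with paths as installed on our cluster; AMBERHOME and
CUDA_HOME are the usual Amber environment variables):

# build sequence used on the cluster (CUDA 5.0 paths as installed here)
export AMBERHOME=/g/software/linux/pack/amber-12/TEST
export CUDA_HOME=/g/software/linux/pack/cuda-5.0.35
cd $AMBERHOME
./configure -mpi -cuda gnu   # the other variants are configured the same way
make install                 # fails at the pmemd.cuda.MPI link step shown above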


A more readable version of the full output is available at
http://pastebin.de/35226.


This is on CentOS 6.2 (kernel 2.6.32-220.el6.x86_64) on an LSF 7.0.6
cluster, using the MPI implementation that ships with LSF (Platform MPI,
installed under /opt/platform_mpi). I had to add
"-I/opt/platform_mpi/include" to the NVCC line in config.h to get this
far at all.
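
The NVCC line in my config.h now looks roughly like this (a sketch only;
I've elided the unchanged nvcc flags with [...], and the trailing include
path is the one edit I made):

# $AMBERHOME/config.h, NVCC line after my edit:
NVCC=$(CUDA_HOME)/bin/nvcc $(PMEMD_CU_DEFINES) [...] -I/opt/platform_mpi/include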

The serial, MPI-only, and CUDA-only builds each compile fine; it's only
the combined CUDA/MPI build that fails.
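
All of the unresolved symbols (MPI::Comm::mpi_comm_map,
MPI::Comm::current_op, op_intercept, ...) seem to belong to the MPI C++
bindings, so a minimal check outside of Amber might look like this (only
a sketch -- the mpiCC wrapper name and the Platform MPI library path are
guesses for our installation):

# tiny C++ file that touches the MPI:: C++ bindings, as gpu.cpp does
cat > mpicxx_check.cpp <<'EOF'
#include <mpi.h>
int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    MPI::COMM_WORLD.Get_rank();
    MPI::Finalize();
    return 0;
}
EOF
mpiCC  -c mpicxx_check.cpp              # compile with the C++ wrapper
mpif90 -o mpicxx_check mpicxx_check.o   # link with mpif90, as pmemd.cuda.MPI is linked

# then look for the library that actually defines one of the missing
# symbols (add nm -D if the shared libraries are stripped):
for lib in /opt/platform_mpi/lib/*.a /opt/platform_mpi/lib/*.so*; do
    nm -C --defined-only "$lib" 2>/dev/null \
        | grep -q 'MPI::Comm::mpi_comm_map' && echo "$lib"
done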

Any idea how this can be fixed?


Cheers
frank
