On Mon, Apr 15, 2013 at 6:16 AM, Donato Pera <donato.pera.dm.univaq.it> wrote:
> This is our error message:
>
> mpif90 -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o
> pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o
> mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
> inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o
> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o
> bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o
> prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o
> gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o
> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
> multipmemd.o remd_exchg.o amd.o gbsa.o \
> ./cuda/cuda.a -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcurand
> -lcufft -lcudart -L/home/SWcbbc/Amber12/amber12_GPU/lib
> -L/home/SWcbbc/Amber12/amber12_GPU/lib -lnetcdf
> ./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*,
> void*, int, MPI::Datatype const&), bool)':
>
> gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void
> (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x19):
> undefined reference to `ompi_mpi_cxx_op_intercept'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group
> const&) const':
>
> gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::Create(MPI::Group
> const&) const]+0x2a): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
> gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone()
> const]+0x25): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int, int
> const*, bool const*, bool) const':
>
> gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int,
> int const*, bool const*, bool) const]+0x8f): undefined reference to
> `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int, int
> const*, int const*, bool) const':
>
> gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int,
> int const*, int const*, bool) const]+0x2b): undefined reference to
> `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
> gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone()
> const]+0x25): undefined reference to `MPI::Comm::Comm()'
>
> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool
> const*)]+0x76): more undefined references to `MPI::Comm::Comm()' follow
> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48):
> undefined reference to `MPI::Win::Free()'
> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
> collect2: ld returned 1 exit status
> make[3]: *** [pmemd.cuda.MPI] Error 1
> make[3]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
> make[2]: *** [cuda_parallel] Error 2
> make[2]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
> make[1]: *** [cuda_parallel] Error 2
> make[1]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src'
> make: *** [install] Error 2
>
This still indicates that the C++ MPI functions cannot be found in the MPI
libraries at the linking stage of the compilation.
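
(As a quick illustrative check -- not something from your log -- you can list
the MPI symbols that gpu.o needs but does not define, straight from the
archive named in the link line above:

  cd /home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src
  # "U" entries are symbols cuda.a requires but expects the MPI libraries to provide
  nm -C ./cuda/cuda.a | grep " U MPI::"
  nm ./cuda/cuda.a | grep ompi_mpi_cxx_op_intercept

Every one of those has to be resolved by a C++ MPI library on the link line.)
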
If these errors persist even after correctly adding -lmpi_cxx to the
config.h file, as has already been suggested, then there are two
possibilities. First, your MPI was not built with C++
support. In this case, you have no option except to use a different MPI
that _does_ have C++ support. Second, your MPI does not support the
necessary MPI-2 functionality that is used by pmemd.cuda.MPI. In this
case, your MPI is not compatible with pmemd.cuda.MPI, and you must use a
different MPI that has the required functionality.
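
Before switching MPIs, a couple of quick checks can tell you which case you
are in. This is just a sketch; the library path below is a placeholder, and
config.h here means the one written by ./configure at the top of your Amber
tree (e.g. /home/SWcbbc/Amber12/amber12_GPU/config.h):

  # Did -lmpi_cxx actually end up next to the other link libraries?
  grep -n -e lmpi_cxx -e lcudart $AMBERHOME/config.h

  # What does the Fortran wrapper really compile and link with?
  mpif90 -show

  # If your MPI is Open MPI: were the C++ bindings built, and does the
  # C++ library define the missing symbol?  (substitute your MPI's lib dir)
  ompi_info | grep -i "C++"
  nm -D /path/to/your/mpi/lib/libmpi_cxx.so | grep ompi_mpi_cxx_op_intercept
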
In either case, the solution is to use a different MPI. Consider using
mpich2. I have used mpich2-1.4.1p1 extensively, and can tell you from
experience that it works with pmemd.cuda.MPI. You can use the
"configure_mpich2" script to build an MPICH2 installation that is
guaranteed to be compatible with Amber. Look for instructions either in
the AmberTools manual or here:
http://jswails.wikidot.com/installing-amber12-and-ambertools-12#toc10
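
For reference, that route looks roughly like this (a sketch based on the wiki
page above, not a verified recipe; the "gnu" compiler keyword and the tarball
placement are assumptions -- adjust them to your compilers and paths):

  # Put the mpich2-1.4.1p1 source tarball in $AMBERHOME/AmberTools/src first
  cd $AMBERHOME/AmberTools/src
  ./configure_mpich2 gnu             # builds MPICH2 and installs it under $AMBERHOME
  export PATH=$AMBERHOME/bin:$PATH   # so the new mpicc/mpif90 wrappers are found

  # Then rebuild the parallel CUDA binaries against the new MPI
  cd $AMBERHOME
  ./configure -cuda -mpi gnu
  make install
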
Good luck,
Jason
--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber