Re: [AMBER] Amber installation problems

From: Donato Pera <donato.pera.dm.univaq.it>
Date: Mon, 15 Apr 2013 12:16:31 +0200 (CEST)

This is our error message:

mpif90 -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o
pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o
mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o
gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o
bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o
prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o
gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o
binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
multipmemd.o remd_exchg.o amd.o gbsa.o \
     ./cuda/cuda.a -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcurand
-lcufft -lcudart -L/home/SWcbbc/Amber12/amber12_GPU/lib
-L/home/SWcbbc/Amber12/amber12_GPU/lib -lnetcdf
./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*,
void*, int, MPI::Datatype const&), bool)':
gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void
(*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x19):
undefined reference to `ompi_mpi_cxx_op_intercept'
./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group
const&) const':
gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::Create(MPI::Group
const&) const]+0x2a): undefined reference to `MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone()
const]+0x25): undefined reference to `MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int, int
const*, bool const*, bool) const':
gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int,
int const*, bool const*, bool) const]+0x8f): undefined reference to
`MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int, int
const*, int const*, bool) const':
gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int,
int const*, int const*, bool) const]+0x2b): undefined reference to
`MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone()
const]+0x25): undefined reference to `MPI::Comm::Comm()'
./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool
const*)]+0x76): more undefined references to `MPI::Comm::Comm()' follow
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48):
undefined reference to `MPI::Win::Free()'
./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
collect2: ld returned 1 exit status
make[3]: *** [pmemd.cuda.MPI] Error 1
make[3]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
make[2]: *** [cuda_parallel] Error 2
make[2]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
make[1]: *** [cuda_parallel] Error 2
make[1]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src'
make: *** [install] Error 2
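
For reference, the undefined symbols above (ompi_mpi_cxx_op_intercept,
MPI::Comm::Comm(), MPI::Win::Free(), MPI::Datatype::Free()) all belong to
Open MPI's C++ bindings, which live in libmpi_cxx rather than the base
libmpi. A possible workaround (untested here, and assuming your Open MPI
was configured with C++ support, i.e. --enable-mpi-cxx) is to add that
library to the pmemd.cuda.MPI link line, for example by appending
-lmpi_cxx to the CUDA libraries line in $AMBERHOME/config.h (the exact
variable name may differ between Amber versions):

    # hypothetical edit to $AMBERHOME/config.h
    PMEMD_CU_LIBS= ./cuda/cuda.a -lcurand -lcufft -lcudart -lmpi_cxx

You can check whether your Open MPI build actually includes the C++
bindings with:

    ompi_info | grep -i cxx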


Thanks and Regards

> On Fri, Apr 12, 2013, Donato Pera wrote:
>
>> amber+mpi works
>> amber+gpu works
>> amber+mpi+gpu doesn't work
>
>> >>>>> undefined reference to `ompi_mpi_cxx_op_intercept'
>
> From very limited information, this sounds like you don't have an MPI
> installation with MPI-2 support. Can you (re-)state what version of MPI
> you are using?
>
>> >> Then you will need to build your MPI with C++ support. You can
>> download
>> >> mpich2 in the $AMBERHOME/AmberTools/src folder and use the
>> >> configure_mpich2 script to build a compatible MPICH2 installation in
>> >> AMBERHOME/bin.
>> >>
>> >> Then make sure you add AMBERHOME/bin to the beginning of your PATH so
>> >> that the MPI you just built is used.
>
> Did you follow the above advice? It looks like you are using Open MPI,
> in which case I'm not sure whether or not it will work with
> pmemd.cuda.MPI. You will need at least version 1.5 or later (and, in
> fact, even that might not actually work. If anyone on the list has
> current knowledge of how Open MPI is or is not compatible with
> pmemd.cuda.MPI, please post some info.)
>
> ...dac
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
>



Received on Mon Apr 15 2013 - 03:30:02 PDT