Re: [AMBER] Problems Compiling Amber11 Parallel GPU

From: Alison Lynton <a.lynton.curtin.edu.au>
Date: Tue, 28 Jun 2011 10:47:44 +0800

Hi Ross

We're using OpenMPI v1.4.3; I've tried both the Red Hat package and compiling from source. g++ was definitely available on the system.

When compiling OpenMPI from source, config.log has no shortage of C++ references, and it does everything for mpi_cxx (makefiles etc.). libmpi_cxx compiled happily and was installed. I don't know if that means anything - I'm not sure exactly what I should be checking config.log for to tell whether it decided to skip the C++ bindings.
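
For what it's worth, the only checks I could think of were the ones below (the exact output wording is from memory, so treat it as a rough guide):

  # should report a "C++ bindings: yes" line if they were built
  ompi_info | grep -i bindings

  # and, in the build tree, what configure decided about them
  grep -i "c++ bindings" config.log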

mpif90 --show reports:
gfortran -I/usr/local/amber11/include -pthread -I/usr/local/amber11/lib -L/usr/local/amber11/lib -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic -lnsl -lutil -lm -ldl
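
In case it's relevant, I also looked at what the C++ wrapper would add - I gather that's normally where -lmpi_cxx comes from, though I could be wrong about that:

  # should list -lmpi_cxx among the link flags if the C++ bindings are installed
  mpicxx --showme:link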

In the end it compiles and appears to test OK (I'll be getting the client to do some real testing in the next week or so, as it's well out of my area of expertise!), so assuming it passes, perhaps I just need to let go and move on ;) But it would be nice to know where I went wrong!

Thanks for your help!

Ali


On 27/06/2011, at 11:08 PM, Ross Walker wrote:

> Hi Alison,
>
> What version of MPI are you using and what does mpif90 --show report?
>
> The -lmpi_cxx library, if needed by your MPI installation, should
> automatically be included by the mpif90 script. My suspicion is that your
> MPI installation was configured for C and Fortran only and whoever installed
> it skipped the C++ bindings. It is also possible that configure skipped them
> automatically if, for example, it did not find a C++ compiler at the time it
> was run.
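>
> If you do end up rebuilding OpenMPI, it may be worth pointing configure at
> the C++ compiler explicitly - something along these lines (prefix just an
> example):
>
>   ./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran --prefix=/usr/local/openmpi
>   make all install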
>
>
> You also need to make sure you are using an MPI that supports MPI v2.0. For
> example, MPICH2 will work but MPICH will not.
>
> All the best
> Ross
>
>> -----Original Message-----
>> From: Alison Lynton [mailto:a.lynton.curtin.edu.au]
>> Sent: Sunday, June 26, 2011 8:34 PM
>> To: AMBER Mailing List
>> Subject: [AMBER] Problems Compiling Amber11 Parallel GPU
>>
>> Hi All
>>
>> This might not be the right place for this question...
>>
>> I had some trouble compiling the Parallel GPU version of Amber11. In
>> the end, to get it working I had to add -lmpi_cxx to the PMEMD_CU_LIBS
>> line of config.h
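>>
>> For reference, the line I ended up with looks roughly like this (paths are
>> just what they are on our system):
>>
>>   PMEMD_CU_LIBS=-L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcufft -lcudart ./cuda/cuda.a -lmpi_cxx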
>>
>> I've had a read through the help for configure and I can't see any flags
>> that would have helped me, but I thought I would ask whether anyone knows
>> what I might have done wrong in the configure process. I also figured it
>> was worth posting this to the list, as I believe other people have had this
>> trouble in the past - for example, this post:
>> http://archive.ambermd.org/201012/0307.html
>>
>> I've included the errors I was experiencing below.
>>
>> Thanks
>>
>> Ali
>>
>> Alison Lynton
>> Senior Systems Engineer
>>
>> Curtin University of Technology | Curtin IT Services | Building 204
>> | Room 521
>> Telephone 08 9266 2986 | Facsimile 08 9266 1072
>> Email a.lynton.curtin.edu.au | Website www.curtin.edu.au
>> "CRICOS provider code 00301J"
>>
>> mpif90 -O3 -mtune=generic -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -o
>> pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o
>> file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o
>> prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o
>> parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
>> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
>> dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o
>> loadbal.o shake.o prfs.o mol_list.o runmin.o constraints.o
>> axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o
>> runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o random.o degcnt.o
>> erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o
>> pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o
>> nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o
>> charmm_gold.o -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcufft -
>> lcudart ./cuda/cuda.a /usr/local/amber11/lib/libnetcdf.a
>> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
>> gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone()
>> const]+0x24): undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int,
>> int const*, int const*, bool) const':
>> gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm
>> ::Create_graph(int, int const*, int const*, bool) const]+0x27):
>> undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Split(int, int)
>> const':
>> gpu.cpp:(.text._ZNK3MPI9Intracomm5SplitEii[MPI::Intracomm::Split(int,
>> int) const]+0x24): undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*,
>> void*, int, MPI::Datatype const&), bool)':
>> gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init
>> (void (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x1f):
>> undefined reference to `ompi_mpi_cxx_op_intercept'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
>> gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone()
>> const]+0x24): undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int, int
>> const*, bool const*, bool) const':
>> gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm:
>> :Create_cart(int, int const*, bool const*, bool) const]+0x124):
>> undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Sub(bool const*)':
>> gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool
>> const*)]+0x7b): undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group
>> const&) const':
>> gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::C
>> reate(MPI::Group const&) const]+0x27): undefined reference to
>> `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o): In function `MPI::Intercomm::Merge(bool)':
>> gpu.cpp:(.text._ZN3MPI9Intercomm5MergeEb[MPI::Intercomm::Merge(bool)]+0
>> x26): undefined reference to `MPI::Comm::Comm()'
>> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI9Intracomm5CloneEv[MPI::Int
>> racomm::Clone() const]+0x27): more undefined references to
>> `MPI::Comm::Comm()' follow
>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for
>> MPI::Win]+0x48): undefined reference to `MPI::Win::Free()'
>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
>> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
>> collect2: ld returned 1 exit status
>> make[2]: *** [pmemd.cuda.MPI] Error 1
>> make[2]: Leaving directory `/usr/local/amber11/src/pmemd/src'
>> make[1]: *** [cuda_parallel] Error 2
>> make[1]: Leaving directory `/usr/local/amber11/src/pmemd'
>> make: *** [cuda_parallel] Error 2
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Jun 27 2011 - 20:00:04 PDT