Re: [AMBER] pmemd.cuda.mpi installation

From: Cenk (Jenk) Andac
Date: Thu, 3 Apr 2014 11:47:33 -0700 (PDT)

Hi Jason,

Thank you for replying. I have tried installing pmemd.cuda.MPI with the flag you suggested.

Unfortunately, the installation failed again.

Here is the error message:

gputypes.h:1870: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h:1871: error: ‘cudaMemset’ was not declared in this scope
gputypes.h:1868: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1872: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h: In member function ‘void GpuBuffer<T>::Allocate() [with T = NTPData]’:
gputypes.h:1828:   instantiated from ‘GpuBuffer<T>::GpuBuffer(int, bool, bool) [with T = NTPData]’
gpu.cpp:5261:   instantiated from here
gputypes.h:1851: error: ‘cudaHostAlloc’ was not declared in this scope
gputypes.h:1851: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1852: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h:1855: error: ‘cudaHostGetDevicePointer’ was not declared in this scope
gputypes.h:1851: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1856: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h:1868: error: ‘cudaMalloc’ was not declared in this scope
gputypes.h:1868: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1870: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h:1871: error: ‘cudaMemset’ was not declared in this scope
gputypes.h:1868: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1872: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h: In member function ‘void GpuBuffer<T>::Deallocate() [with T = NTPData]’:
gputypes.h:1834:   instantiated from ‘GpuBuffer<T>::~GpuBuffer() [with T = NTPData]’
gpu.cpp:8983:   instantiated from here
gputypes.h:1885: error: ‘cudaFreeHost’ was not declared in this scope
gputypes.h:1896: error: ‘cudaFree’ was not declared in this scope
gputypes.h:1899: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1899: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h: In member function ‘void GpuBuffer<T>::Deallocate() [with T = NLEntry]’:
gputypes.h:1834:   instantiated from ‘GpuBuffer<T>::~GpuBuffer() [with T = NLEntry]’
gpu.cpp:8983:   instantiated from here
gputypes.h:1885: error: ‘cudaFreeHost’ was not declared in this scope
gputypes.h:1896: error: ‘cudaFree’ was not declared in this scope
gputypes.h:1899: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1899: error: ‘cudaThreadExit’ was not declared in this scope
gputypes.h: In member function ‘void GpuBuffer<T>::Deallocate() [with T = NLRecord]’:
gputypes.h:1834:   instantiated from ‘GpuBuffer<T>::~GpuBuffer() [with T = NLRecord]’
gpu.cpp:8983:   instantiated from here
gputypes.h:1885: error: ‘cudaFreeHost’ was not declared in this scope
gputypes.h:1896: error: ‘cudaFree’ was not declared in this scope
gputypes.h:1899: error: ‘cudaGetErrorString’ was not declared in this scope
gputypes.h:1899: error: ‘cudaThreadExit’ was not declared in this scope
make[4]: *** [gpu.o] Error 1
make[4]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd/src/cuda'
make[3]: *** [cuda/cuda.a] Error 2
make[3]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd/src'
make[2]: *** [cuda_parallel] Error 2
make[2]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd'
make[1]: *** [cuda_parallel] Error 2
make[1]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src'
make: *** [install] Error 2
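
All of the undeclared identifiers above (cudaMalloc, cudaMemset, cudaHostAlloc, cudaFree, cudaGetErrorString, cudaThreadExit) are CUDA runtime API functions declared in cuda_runtime.h, which makes me suspect the compiler is not seeing the CUDA headers at all. As a sanity check I put together the minimal test below (my own sketch, not part of the AMBER sources; the include path assumes the standard layout of the CUDA 5.5 install referenced in the link line):

/* cuda_check.cpp - does the compiler see the CUDA runtime headers?
 * Compile only, no link step needed:
 *   g++ -I/truba/sw/centos6.4-intel/lib/cuda/5.5/include -c cuda_check.cpp
 * If this fails with the same "not declared in this scope" errors,
 * the include path is the problem rather than the AMBER code. */
#include <cuda_runtime.h>  /* declares every symbol listed in the errors above */
#include <cstdio>

int main()
{
    void *buf = NULL;
    cudaError_t err = cudaMalloc(&buf, 1024);
    if (err != cudaSuccess) {
        printf("cudaMalloc: %s\n", cudaGetErrorString(err));
        cudaThreadExit();  /* deprecated, but still declared in CUDA 5.5 */
        return 1;
    }
    cudaMemset(buf, 0, 1024);
    cudaFree(buf);
    return 0;
}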


Do you think some other problem may be involved?

Thanks,

Jenk





________________________________
 From: Jason Swails <jason.swails.gmail.com>
To: amber.ambermd.org
Sent: Thursday, April 3, 2014 9:05 PM
Subject: Re: [AMBER] pmemd.cuda.mpi installation
 

On Thu, 2014-04-03 at 10:28 -0700, Cenk (Jenk) Andac wrote:

>
>
> Dear all,
>
> I have been trying to install the MPI version of pmemd.cuda on a GRID server.
>
> It appears that the pmemd.cuda.MPI installation fails with the error message below.
>
> I would appreciate any help troubleshooting the problem.
>
> Best regards,
>
> Jenk Andac
>
>
>
> make[4]: Entering directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd/src/cuda'
> make[4]: `cuda.a' is up to date.
> make[4]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd/src/cuda'
> mpif90  -O3 -mtune=native  -DCUDA -DMPI  -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o charmm_gold.o
> findmask.o remd.o multipmemd.o remd_exchg.o amd.o gbsa.o \
>      ./cuda/cuda.a -L/truba/sw/centos6.4-intel/lib/cuda/5.5/lib64 -L/truba/sw/centos6.4-intel/lib/cuda/5.5/lib -lcurand -lcufft -lcudart -L/usr/lib64 -lstdc++ -L/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/lib /truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/lib/libnetcdf.a 
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int, int const*, int const*, bool) const':
> gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int, int const*, int const*, bool) const]+0x27): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int, int const*, bool const*, bool) const':
> gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int, int const*, bool const*, bool) const]+0x124): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group const&) const':
> gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::Create(MPI::Group const&) const]+0x27): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)':
> gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x1f): undefined reference to `ompi_mpi_cxx_op_intercept'
> ./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
> gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone() const]+0x24): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Sub(bool const*) const':
> gpu.cpp:(.text._ZNK3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool const*) const]+0x7b): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
> gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone() const]+0x24): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intercomm::Merge(bool) const':
> gpu.cpp:(.text._ZNK3MPI9Intercomm5MergeEb[MPI::Intercomm::Merge(bool) const]+0x26): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Clone() const':
> gpu.cpp:(.text._ZNK3MPI9Intracomm5CloneEv[MPI::Intracomm::Clone() const]+0x27): undefined reference to `MPI::Comm::Comm()'
> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI9Intracomm5SplitEii[MPI::Intracomm::Split(int, int) const]+0x24): more undefined references to `MPI::Comm::Comm()' follow
> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48): undefined reference to `MPI::Win::Free()'
> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
> collect2: ld returned 1 exit status
> make[3]: *** [pmemd.cuda.MPI] Error 1
> make[3]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd/src'
> make[2]: *** [cuda_parallel] Error 2
> make[2]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src/pmemd'
> make[1]: *** [cuda_parallel] Error 2
> make[1]: Leaving directory `/truba/sw/centos6.4-intel/app/amber/amber12-gcc-mpi/amber12/src'
> make: *** [install] Error 2

Try adding -lmpi_cxx to the end of the PMEMD_FLIBSF variable in
$AMBERHOME/config.h and then recompile.
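
For what it's worth, a sketch of what that edit might look like (the existing contents of PMEMD_FLIBSF are system-specific, so they are elided here):

PMEMD_FLIBSF= ...existing flags... -lmpi_cxx

The undefined symbols in your log (MPI::Comm::Comm(), MPI::Win::Free(), MPI::Datatype::Free()) all come from Open MPI's C++ bindings, which live in libmpi_cxx rather than in the libraries mpif90 links by default. You can confirm your libmpi_cxx exports them with something like (the path is a placeholder for your Open MPI install):

nm -D -C /path/to/openmpi/lib/libmpi_cxx.so | grep 'MPI::Comm::Comm'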

-- 
Jason M. Swails
BioMaPS,
Rutgers University
Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Apr 03 2014 - 12:00:02 PDT