[AMBER] Installation of pmemd for Amber16

From: Ryoichi Utsumi <u.ryoichi1123.gmail.com>
Date: Sat, 10 Aug 2019 20:18:40 +0900

Dear all,

I am trying to upgrade CUDA from version 7.5 to 8.0 and to reinstall
pmemd (pmemdGTI).
However, the build fails with the following error message:

"make[3]: Leaving directory '/home/k0072/k007200/amber16/src/pmemd/src/cuda'
make -C ../../../AmberTools/src/emil install
make[3]: Entering directory '/home/k0072/k007200/amber16/AmberTools/src/emil'
make[3]: Nothing to be done for 'install'.
make[3]: Leaving directory '/home/k0072/k007200/amber16/AmberTools/src/emil'
mpif90 -ip -O3 -no-prec-div -xHost -DCUDA -DMPI
-DMPICH_IGNORE_CXX_SEEK -o
/home/k0072/k007200/amber16/bin/pmemd.cuda_SPFP.MPI gbl_constants.o
gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o
mdin_emil_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
inpcrd_dat.o dynamics_dat.o emil.o img.o nbips.o offload_allocation.o
parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o
pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o
pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o
gb_ene_hybrid.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o
mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o
gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o AmberNetcdf.o
bintraj.o binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o
erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o
pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o
nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o
charmm_gold.o findmask.o remd.o multipmemd.o remd_exchg.o amd.o gamd.o
ti.o gbsa.o barostats.o scaledMD.o constantph.o energy_records.o
constantph_dat.o relaxmd.o sgld.o emap.o get_efield_energy.o \
     ./cuda/cuda.a -L/home/app/cuda/cuda-8.0/lib64
-L/home/app/cuda/cuda-8.0/lib -lcurand -lcufft -lcudart -lstdc++
-L/home/k0072/k007200/amber16/lib
/home/k0072/k007200/amber16/lib/libnetcdff.a
/home/k0072/k007200/amber16/lib/libnetcdf.a -shared-intel
/home/k0072/k007200/amber16/lib/libemil.a -lstdc++ nfe_lib.o
nfe_setup.o nfe_colvar.o nfe_smd.o nfe_abmd.o nfe_pmd.o nfe_bbmd.o
nfe_stsm.o ../../../AmberTools/src/lib/sys.a
./cuda/cuda.a(gpu.o): In function `PMPI::Request::Wait()':
gpu.cpp:(.text._ZN4PMPI7Request4WaitEv[_ZN4PMPI7Request4WaitEv]+0x5):
undefined reference to `PMPI::Request::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Request::Test()':
gpu.cpp:(.text._ZN4PMPI7Request4TestEv[_ZN4PMPI7Request4TestEv]+0x6):
undefined reference to `PMPI::Request::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Recv(void*, int,
PMPI::Datatype const&, int, int) const':
gpu.cpp:(.text._ZNK4PMPI4Comm4RecvEPviRKNS_8DatatypeEii[_ZNK4PMPI4Comm4RecvEPviRKNS_8DatatypeEii]+0x1d):
undefined reference to `PMPI::Comm::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Iprobe(int, int) const':
gpu.cpp:(.text._ZNK4PMPI4Comm6IprobeEii[_ZNK4PMPI4Comm6IprobeEii]+0xe):
undefined reference to `PMPI::Comm::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Probe(int, int) const':
gpu.cpp:(.text._ZNK4PMPI4Comm5ProbeEii[_ZNK4PMPI4Comm5ProbeEii]+0x6):
undefined reference to `PMPI::Comm::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Sendrecv(void const*,
int, PMPI::Datatype const&, int, int, void*, int, PMPI::Datatype
const&, int, int) const':
gpu.cpp:(.text._ZNK4PMPI4Comm8SendrecvEPKviRKNS_8DatatypeEiiPviS5_ii[_ZNK4PMPI4Comm8SendrecvEPKviRKNS_8DatatypeEiiPviS5_ii]+0x14):
undefined reference to `PMPI::Comm::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Sendrecv_replace(void*,
int, PMPI::Datatype const&, int, int, int, int) const':
gpu.cpp:(.text._ZNK4PMPI4Comm16Sendrecv_replaceEPviRKNS_8DatatypeEiiii[_ZNK4PMPI4Comm16Sendrecv_replaceEPviRKNS_8DatatypeEiiii]+0x2d):
undefined reference to `PMPI::Comm::ignored_status'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Free()':
gpu.cpp:(.text._ZN4PMPI4Comm4FreeEv[_ZN4PMPI4Comm4FreeEv]+0xf):
undefined reference to `PMPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZN4PMPI4Comm4FreeEv[_ZN4PMPI4Comm4FreeEv]+0x1a):
undefined reference to `PMPI::Comm::mpi_comm_map'
./cuda/cuda.a(gpu.o): In function `PMPI::Comm::Set_attr(int, void
const*) const':
gpu.cpp:(.text._ZNK4PMPI4Comm8Set_attrEiPKv[_ZNK4PMPI4Comm8Set_attrEiPKv]+0x71):
undefined reference to `PMPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZNK4PMPI4Comm8Set_attrEiPKv[_ZNK4PMPI4Comm8Set_attrEiPKv]+0x7a):
undefined reference to `PMPI::Comm::mpi_comm_map'
./cuda/cuda.a(gpu.o): In function `PMPI::Intracomm::Create(PMPI::Group
const&) const':
gpu.cpp:(.text._ZNK4PMPI9Intracomm6CreateERKNS_5GroupE[_ZNK4PMPI9Intracomm6CreateERKNS_5GroupE]+0x2a):
undefined reference to `MPI::Is_initialized()'
./cuda/cuda.a(gpu.o): In function `PMPI::Intracomm::Split(int, int) const':
gpu.cpp:(.text._ZNK4PMPI9Intracomm5SplitEii[_ZNK4PMPI9Intracomm5SplitEii]+0x2b):
undefined reference to `MPI::Is_initialized()'
./cuda/cuda.a(gpu.o): In function `PMPI::Intracomm::Create_cart(int,
int const*, bool const*, bool) const':
gpu.cpp:(.text._ZNK4PMPI9Intracomm11Create_cartEiPKiPKbb[_ZNK4PMPI9Intracomm11Create_cartEiPKiPKbb]+0x1b3):
undefined reference to `MPI::Is_initialized()'
./cuda/cuda.a(gpu.o): In function `PMPI::Intracomm::Create_graph(int,
int const*, int const*, bool) const':
gpu.cpp:(.text._ZNK4PMPI9Intracomm12Create_graphEiPKiS2_b[_ZNK4PMPI9Intracomm12Create_graphEiPKiS2_b]+0x35):
undefined reference to `MPI::Is_initialized()'
./cuda/cuda.a(gpu.o): In function `PMPI::Cartcomm::Sub(bool const*)':
gpu.cpp:(.text._ZN4PMPI8Cartcomm3SubEPKb[_ZN4PMPI8Cartcomm3SubEPKb]+0x198):
undefined reference to `MPI::Is_initialized()'
./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZN4PMPI9Intercomm5MergeEb[_ZN4PMPI9Intercomm5MergeEb]+0x2c):
more undefined references to `MPI::Is_initialized()' follow
./cuda/cuda.a(gpu.o): In function
`PMPI::Comm::Set_errhandler(PMPI::Errhandler const&)':
gpu.cpp:(.text._ZN4PMPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[_ZN4PMPI4Comm14Set_errhandlerERKNS_10ErrhandlerE]+0x10):
undefined reference to `PMPI::Comm::mpi_comm_map'
gpu.cpp:(.text._ZN4PMPI4Comm14Set_errhandlerERKNS_10ErrhandlerE[_ZN4PMPI4Comm14Set_errhandlerERKNS_10ErrhandlerE]+0x1a):
undefined reference to `PMPI::Comm::mpi_comm_map'
./cuda/cuda.a(gpu.o): In function `PMPI::Op::Init(void (*)(void
const*, void*, int, PMPI::Datatype const&), bool)':
gpu.cpp:(.text._ZN4PMPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[_ZN4PMPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb]+0xb):
undefined reference to `op_intercept(void*, void*, int*, unsigned
int*)'
./cuda/cuda.a(gpu.o):(.data._ZTVN3MPI2OpE[_ZTVN3MPI2OpE]+0x20):
undefined reference to `MPI::Op::Init(void (*)(void const*, void*,
int, MPI::Datatype const&), bool)'
./cuda/cuda.a(gpu.o):(.data._ZTVN3MPI2OpE[_ZTVN3MPI2OpE]+0x28):
undefined reference to `MPI::Op::Free()'
./cuda/cuda.a(gpu.o):(.data._ZTVN3MPI2OpE[_ZTVN3MPI2OpE]+0x30):
undefined reference to `MPI::Op::Reduce_local(void const*, void*, int,
MPI::Datatype const&) const'
./cuda/cuda.a(gpu.o):(.data._ZTVN3MPI2OpE[_ZTVN3MPI2OpE]+0x38):
undefined reference to `MPI::Op::Is_commutative() const'
Makefile:127: recipe for target
'/home/k0072/k007200/amber16/bin/pmemd.cuda_SPFP.MPI' failed
make[2]: *** [/home/k0072/k007200/amber16/bin/pmemd.cuda_SPFP.MPI]
Error 1
make[2]: Leaving directory '/home/k0072/k007200/amber16/src/pmemd/src'
Makefile:85: recipe for target 'cuda_parallel_SPFP' failed
make[1]: *** [cuda_parallel_SPFP] Error 2
make[1]: Leaving directory '/home/k0072/k007200/amber16/src/pmemd/src'
Makefile:33: recipe for target 'cuda_parallel' failed
make: *** [cuda_parallel] Error 2 "


If you could give me any advice, I would appreciate it.

Ryoichi

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Aug 10 2019 - 04:30:02 PDT