Hi Thomas,
Is your MPI configured for gfortran?
Try mpif90 -show
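The wrapper should report the compiler it was built against. The exact
output varies by install, but for a gfortran-based OpenMPI it looks
something like this (paths illustrative):

  $ mpif90 -show
  gfortran -I/opt/openmpi/include -pthread -L/opt/openmpi/lib \
    -lmpi_f90 -lmpi_f77 -lmpi -ldl -lm

If it reports ifort (or some other compiler), point your PATH at an MPI
built with the same GNU toolchain you are using for Amber and rebuild.
For what it's worth, `ompi_mpi_cxx_op_intercept' and the MPI::Comm
symbols normally come from OpenMPI's C++ bindings library (-lmpi_cxx),
which the Fortran wrapper does not pull in by itself.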
All the best
Ross
/\
\/
|\oss Walker
---------------------------------------------------------
| Assistant Research Professor |
| San Diego Supercomputer Center |
| Adjunct Assistant Professor |
| Dept. of Chemistry and Biochemistry |
| University of California San Diego |
| NVIDIA Fellow |
| http://www.rosswalker.co.uk | http://www.wmd-lab.org |
| Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
---------------------------------------------------------
Note: Electronic Mail is not secure, has no guarantee of delivery, may not
be read every day, and should not be used for urgent or sensitive issues.
On 11/20/12 12:47 PM, "Thomas Evangelidis" <tevang3.gmail.com> wrote:
>Thanks Ross! The serial pmemd.cuda compiles, but pmemd.cuda.MPI fails with
>the following output. Do you have any idea what went wrong?
>
>
>mpif90 -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o
>pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o
>mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
>inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o
>gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>pme_blk_recip.o
>pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o
>pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o
>dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o
>runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
>timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o binrestart.o
>pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o
>get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
>ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
>dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
>multipmemd.o remd_exchg.o amd.o \
> ./cuda/cuda.a -L/gpfs/home/lspro220u1/Opt/cuda-4.2/cuda/lib64
>-L/gpfs/home/lspro220u1/Opt/cuda-4.2/cuda/lib -lcurand -lcufft -lcudart
>-L/gpfs/home/lspro220u1/Opt/amber12/lib
>-L/gpfs/home/lspro220u1/Opt/amber12/lib -lnetcdf
>./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void const*,
>void*, int, MPI::Datatype const&), bool)':
>gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x19):
>undefined reference to `ompi_mpi_cxx_op_intercept'
>./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Clone() const':
>gpu.cpp:(.text._ZNK3MPI9Intracomm5CloneEv[MPI::Intracomm::Clone() const]+0x2a):
>undefined reference to `MPI::Comm::Comm()'
>./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Sub(bool const*)':
>gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool const*)]+0x76):
>undefined reference to `MPI::Comm::Comm()'
>./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
>gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone() const]+0x25):
>undefined reference to `MPI::Comm::Comm()'
>./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int, int const*, bool const*, bool) const':
>gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int, int const*, bool const*, bool) const]+0x8f):
>undefined reference to `MPI::Comm::Comm()'
>./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int, int const*, int const*, bool) const':
>gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int, int const*, int const*, bool) const]+0x2b):
>undefined reference to `MPI::Comm::Comm()'
>./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone() const]+0x25):
>more undefined references to `MPI::Comm::Comm()' follow
>./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48):
>undefined reference to `MPI::Win::Free()'
>./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for MPI::Datatype]+0x78):
>undefined reference to `MPI::Datatype::Free()'
>collect2: ld returned 1 exit status
>make[4]: *** [pmemd.cuda.MPI] Error 1
>make[4]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd/src'
>make[3]: *** [cuda_parallel] Error 2
>make[3]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd'
>make[2]: *** [cuda_parallel] Error 2
>make[2]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src'
>make[1]: [cuda_parallel] Error 2 (ignored)
>make[1]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/AmberTools/src'
>make[1]: Entering directory `/gpfs/home/lspro220u1/Opt/amber12/src'
>Starting installation of Amber12 (cuda parallel) at Tue Nov 20 22:45:23 EET 2012.
>cd pmemd && make cuda_parallel
>make[2]: Entering directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd'
>make -C src/ cuda_parallel
>make[3]: Entering directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd/src'
>make -C ./cuda
>make[4]: Entering directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd/src/cuda'
>make[4]: `cuda.a' is up to date.
>make[4]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd/src/cuda'
>[the `make -C ./cuda' block above repeats four more times, each reporting `cuda.a' is up to date]
>[the same mpif90 link command and undefined-reference errors shown above are repeated verbatim]
>collect2: ld returned 1 exit status
>make[3]: *** [pmemd.cuda.MPI] Error 1
>make[3]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd/src'
>make[2]: *** [cuda_parallel] Error 2
>make[2]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src/pmemd'
>make[1]: *** [cuda_parallel] Error 2
>make[1]: Leaving directory `/gpfs/home/lspro220u1/Opt/amber12/src'
>make: *** [install] Error 2
>
>On 20 November 2012 18:28, Ross Walker <ross.rosswalker.co.uk> wrote:
>
>> Hi Thomas,
>>
>> Can you quickly try GCC instead and see if that works?
>>
>> From a completely clean tree - best to untar and start from scratch.
>>
>> ./configure -cuda gnu (say yes to applying the patches)
>> ./configure -cuda gnu (say yes again)
>> Repeat until configure doesn't ask you to patch anymore - this is an
>> unfortunate bug in the patching system that we can't retroactively fix.
>>
>> ./configure -cuda gnu
>> make install
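>>
>> (If retyping that gets tedious, something along these lines - purely
>> illustrative, it just re-runs configure a few times answering yes to
>> the patch prompts - does the same job:
>>
>> for i in 1 2 3 4 5; do yes | ./configure -cuda gnu; done
>>
>> but running it by hand a few times is just as quick.)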
>>
>> There is no difference in performance for GPU runs between the GNU and
>> Intel compilers, and the GNU ones (amazingly!) seem to be more stable
>> these days.
>>
>> All the best
>> Ross
>>
>>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber