Re: [AMBER] cuda-5.0/lib64/libcufft.so: undefined reference to `__isoc99_sscanf@GLIBC_2.7'

From: Jason Swails <jason.swails.gmail.com>
Date: Tue, 20 Nov 2012 09:38:42 -0500

By all reports, CUDA 5 does not yet work with Amber (a patch adding support
is forthcoming).

For the time being, stick with CUDA 4.2.
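For what it's worth, an undefined reference to `__isoc99_sscanf@GLIBC_2.7' usually means the shared library (here, CUDA 5.0's libcufft.so) was built against glibc 2.7 or newer, while the glibc being linked against is older. A quick sanity check, assuming a typical install layout (the libcufft.so path below is an example; adjust it to your setup):

```shell
# Show which glibc the system provides; libcufft.so from CUDA 5.0
# imports symbols versioned GLIBC_2.7, so anything older will fail to link.
ldd --version | head -n1

# If libcufft.so is present, list the versioned sscanf symbol it imports.
CUFFT="${CUDA_HOME:-/usr/local/cuda}/lib64/libcufft.so"
if [ -f "$CUFFT" ]; then
  nm -D "$CUFFT" | grep __isoc99_sscanf
fi
```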

Good luck,
Jason

On Tue, Nov 20, 2012 at 4:50 AM, Thomas Evangelidis <tevang3.gmail.com> wrote:

> Greetings,
>
> I am trying to compile Amber12 on an Intel Xeon cluster with Infiniband,
> using the Intel 12.1.0 compilers, OpenMPI 1.4.5 (compiled with Intel) and
> CUDA 5. The serial and MPI versions compile successfully, but the CUDA
> version fails. Below are the commands I issue and the linker error I get:
>
> export AMBERHOME=/gpfs/home/lspre124u1/Opt/amber12
> export
> PATH=$AMBERHOME/exe:/gpfs/home/lspre124u1/Opt/cuda-5.0/cuda/bin:$PATH
> export
>
> LD_LIBRARY_PATH=$AMBERHOME/lib:/gpfs/home/lspre124u1/Opt/cuda-5.0/lib64:/gpfs/home/lspre124u1/Opt/cuda-5.0/lib:$LD_LIBRARY_PATH
> module load intel openmpi/1.4.5-intel
> export MKL_HOME=/gpfs/apps/compilers/intel/mkl
> export MPI_HOME=/gpfs/apps/mpi/openmpi/1.4.5/intel/
> export CUDA_HOME=/gpfs/home/lspre124u1/Opt/cuda-5.0
>
> ./configure -cuda intel
>
> ifort -ip -O3 -no-prec-div -xHost -DCUDA -Duse_SPFP -o pmemd.cuda
> gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o
> mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o
> img.o nbips.o parallel_dat.o parallel.o gb_parallel.o pme_direct.o
> pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o
> pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o
> nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o dihedrals.o
> extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o runmin.o
> constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o
> pmemd_lib.o runfiles.o file_io.o bintraj.o binrestart.o pmemd_clib.o
> pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o
> master_setup.o pme_alltasks_setup.o pme_setup.o ene_frc_splines.o
> gb_alltasks_setup.o nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o
> charmm.o charmm_gold.o findmask.o remd.o multipmemd.o remd_exchg.o amd.o \
> ./cuda/cuda.a -L/gpfs/home/lspre124u1/Opt/cuda-5.0/lib64
> -L/gpfs/home/lspre124u1/Opt/cuda-5.0/lib -lcurand -lcufft -lcudart
> -L/gpfs/home/lspre124u1/Opt/amber12/lib
> -L/gpfs/home/lspre124u1/Opt/amber12/lib -lnetcdf -shared-intel
> -Wl,--start-group
> /gpfs/apps/compilers/intel/mkl/lib/intel64/libmkl_intel_lp64.a
> /gpfs/apps/compilers/intel/mkl/lib/intel64/libmkl_sequential.a
> /gpfs/apps/compilers/intel/mkl/lib/intel64/libmkl_core.a -Wl,--end-group
> -lpthread
> /gpfs/home/lspre124u1/Opt/cuda-5.0/lib64/libcufft.so: undefined reference
> to `__isoc99_sscanf@GLIBC_2.7'
> make[4]: *** [pmemd.cuda] Error 1
> make[4]: Leaving directory
> `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd/src'
> make[3]: *** [cuda] Error 2
> make[3]: Leaving directory `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd'
> make[2]: *** [cuda] Error 2
> make[2]: Leaving directory `/gpfs/home/lspre124u1/Opt/amber12/src'
> make[1]: [cuda] Error 2 (ignored)
> make[1]: Leaving directory
> `/gpfs/home/lspre124u1/Opt/amber12/AmberTools/src'
> make[1]: Entering directory `/gpfs/home/lspre124u1/Opt/amber12/src'
> Starting installation of Amber12 (cuda) at Tue Nov 20 11:40:18 EET 2012.
> cd pmemd && make cuda
> make[2]: Entering directory `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd'
> make -C src/ cuda
> make[3]: Entering directory
> `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd/src'
> make -C ./cuda
> make[4]: Entering directory
> `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd/src/cuda'
> make[4]: `cuda.a' is up to date.
> make[4]: Leaving directory
> `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd/src/cuda'
> [... four more identical "make -C ./cuda" passes reporting `cuda.a' is up
> to date, followed by the same link command and the same undefined
> reference to `__isoc99_sscanf@GLIBC_2.7', elided ...]
> make[3]: *** [pmemd.cuda] Error 1
> make[3]: Leaving directory
> `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd/src'
> make[2]: *** [cuda] Error 2
> make[2]: Leaving directory `/gpfs/home/lspre124u1/Opt/amber12/src/pmemd'
> make[1]: *** [cuda] Error 2
> make[1]: Leaving directory `/gpfs/home/lspre124u1/Opt/amber12/src'
> make: *** [install] Error 2
>
>
> Could someone please help me interpret it? Below are the hardware details of a
> single node:
>
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 6
> model : 44
> model name : Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> stepping : 2
> cpu MHz : 1596.000
> cache size : 12288 KB
> physical id : 0
> siblings : 6
> core id : 0
> cpu cores : 6
> apicid : 0
> fpu : yes
> fpu_exception : yes
> cpuid level : 11
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep
> mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx pdpe1gb
> rdtscp lm constant_tsc nonstop_tsc arat pni monitor ds_cpl vmx smx est tm2
> ssse3 cx16 xtpr sse4_1 sse4_2 popcnt lahf_lm
> bogomips : 5333.67
> clflush size : 64
> cache_alignment : 64
> address sizes : 40 bits physical, 48 bits virtual
> power management: [8]
>
>
> thanks,
> Thomas
>
>
> --
>
> ======================================================================
>
> Thomas Evangelidis
>
> PhD student
> University of Athens
> Faculty of Pharmacy
> Department of Pharmaceutical Chemistry
> Panepistimioupoli-Zografou
> 157 71 Athens
> GREECE
>
> email: tevang.pharm.uoa.gr
>
> tevang3.gmail.com
>
>
> website: https://sites.google.com/site/thomasevangelidishomepage/
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>



-- 
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
Received on Tue Nov 20 2012 - 07:00:02 PST