Hi Ross,
The output from the commands you requested is:
mettu.ubuntu10:~$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/4.5.4/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu/Linaro 4.5.3-9ubuntu1' --with-bugurl=file:///usr/share/doc/gcc-4.5/README.Bugs --enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-4.5 --enable-shared --enable-linker-build-id --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --with-gxx-include-dir=/usr/include/c++/4.5 --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --enable-plugin --enable-gold --enable-ld=default --with-plugin-ld=ld.gold --enable-objc-gc --disable-werror --with-arch-32=i686 --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.5.4 (Ubuntu/Linaro 4.5.3-9ubuntu1)
mettu.ubuntu10:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2011 NVIDIA Corporation
Built on Thu_Nov_17_17:38:12_PST_2011
Cuda compilation tools, release 4.1, V0.2.1221
mettu.ubuntu10:~$
As for the bug fixes, I decided to start from scratch and install a fresh copy of AmberTools and Amber. I was again able to install the serial and parallel versions without errors (aside from a few discrepancies in the test scripts). Unfortunately, I now get a new set of errors when trying to compile the CUDA binaries, which look like this:
gfortran -O3 -mtune=generic -DCUDA -o pmemd.cuda gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o charmm_gold.o -L/usr/local/cuda/lib64 -L/usr/local/cuda/lib -lcurand -lcufft -lcudart ./cuda/cuda.a /home/mettu/amber11//lib/libnetcdf.a
./cuda/cuda.a(gpu.o): In function `_ZN9GpuBufferI4int4E6UploadEPS0_.clone.0':
gpu.cpp:(.text+0x25): undefined reference to `cudaMemcpy'
gpu.cpp:(.text+0x30): undefined reference to `cudaGetErrorString'
gpu.cpp:(.text+0x4e): undefined reference to `cudaThreadExit'
./cuda/cuda.a(gpu.o): In function `_ZN9GpuBufferIdE6UploadEPd.clone.1':
gpu.cpp:(.text+0x85): undefined reference to `cudaMemcpy'
gpu.cpp:(.text+0x90): undefined reference to `cudaGetErrorString'
gpu.cpp:(.text+0xae): undefined reference to `cudaThreadExit'
./cuda/cuda.a(gpu.o): In function `_ZN9GpuBufferIdE8DownloadEPd.clone.2':
gpu.cpp:(.text+0xe5): undefined reference to `cudaMemcpy'
gpu.cpp:(.text+0xf0): undefined reference to `cudaGetErrorString'
gpu.cpp:(.text+0x10e): undefined reference to `cudaThreadExit'
This time the failure occurs at the final link step, which is driven by gfortran (version 4.5.4): the linker reports a large number of undefined references like the ones above, and I can provide the entire trace if you'd like. It does seem to be related to the CUDA libraries, though; it's almost as if the API has changed somehow. Sorry for the very long message, but I'd love to get this resolved soon!
Ram
On Feb 23, 2012, at 5:34 PM, Ross Walker wrote:
> Hi Ram,
>
> Can you please confirm that you are using a fully patched copy of AMBER 11,
> up to and including bugfix.20?
>
> http://ambermd.org/bugfixes11.html
>
> Also can you please run:
>
> gcc -v
>
> and
>
> nvcc -V
>
> All the best
> Ross
>
>> -----Original Message-----
>> From: Mettu, Ramgopal [mailto:rmettu.tulane.edu]
>> Sent: Thursday, February 23, 2012 2:10 PM
>> To: amber.ambermd.org
>> Subject: [AMBER] CUDA version for Amber 11?
>>
>> Hi All,
>> I've recently been trying to install Amber 11 with GPU support on my
>> Ubuntu GPU workstation. I am running Ubuntu 11.1, and security updates
>> ended up forcing me to update the installed CUDA drivers to the most
>> recent NVIDIA release (4.x). I've been able to successfully compile the
>> serial and parallel versions of AMBER and complete the execution of the
>> testing scripts (with just a few discrepancies).
>>
>> However, compiling the CUDA version (running 'make cuda') fails on
>> a gcc command (see below for the trace). I am wondering whether I have a
>> CUDA version that is not yet supported? If others have any insight as to
>> what might be the issue, I would greatly appreciate any advice. Thanks!
>>
>> Ram
>>
>>
>> ===============
>> gcc -O3 -mtune=generic -DSYSV -D_FILE_OFFSET_BITS=64 -
>> D_LARGEFILE_SOURCE -DBINTRAJ -DCUDA -I/usr/local/cuda/include -IB40C -
>> IB40C/KernelCommon -c gpu.cpp
>> In file included from gpu.cpp:27:0:
>> gputypes.h:1272:33: error: declaration of 'GpuBuffer<int>*
>> _gpuContext::pbConstraintSoluteID'
>> gputypes.h:1263:33: error: conflicts with previous declaration
>> 'GpuBuffer<int>* _gpuContext::pbConstraintSoluteID'
>> gputypes.h:1273:33: error: declaration of 'GpuBuffer<double>*
>> _gpuContext::pbConstraintSoluteAtom'
>> gputypes.h:1264:33: error: conflicts with previous declaration
>> 'GpuBuffer<double>* _gpuContext::pbConstraintSoluteAtom'
>> gputypes.h:1274:33: error: declaration of 'GpuBuffer<double>*
>> _gpuContext::pbConstraintSolute'
>> gputypes.h:1265:33: error: conflicts with previous declaration
>> 'GpuBuffer<double>* _gpuContext::pbConstraintSolute'
>> gputypes.h:1275:33: error: declaration of 'GpuBuffer<long long unsigned
>> int>* _gpuContext::pbConstraintUllSolute'
>> gputypes.h:1266:33: error: conflicts with previous declaration
>> 'GpuBuffer<long long unsigned int>* _gpuContext::pbConstraintUllSolute'
>> gputypes.h:1276:33: error: declaration of 'GpuBuffer<int>*
>> _gpuContext::pbConstraintSolventAtoms'
>> gputypes.h:1267:33: error: conflicts with previous declaration
>> 'GpuBuffer<int>* _gpuContext::pbConstraintSolventAtoms'
>> gputypes.h:1277:33: error: declaration of 'GpuBuffer<double>*
>> _gpuContext::pbConstraintSolventAtom'
>> gputypes.h:1268:33: error: conflicts with previous declaration
>> 'GpuBuffer<double>* _gpuContext::pbConstraintSolventAtom'
>> gputypes.h:1278:33: error: declaration of 'GpuBuffer<int4>*
>> _gpuContext::pbConstraintSolventConstraint'
>> gputypes.h:1269:33: error: conflicts with previous declaration
>> 'GpuBuffer<int4>* _gpuContext::pbConstraintSolventConstraint'
>> gpu.cpp: In function 'void gpu_setup_system_(int*, double*, int*, int*,
>> int*, int*, int*)':
>> gpu.cpp:445:63: warning: integer overflow in expression
>> gpu.cpp:463:18: error: 'struct cudaSimulation' has no member named
>> 'randomSeeds'
>> gpu.cpp:463:47: error: 'MAX_RANDOM_SEEDS' was not declared in this
>> scope
>> gpu.cpp:475:18: error: 'struct cudaSimulation' has no member named
>> 'randomSeeds'
>> make[3]: *** [gpu.o] Error 1
>> make[3]: Leaving directory `/home/mettu/amber11/src/pmemd/src/cuda'
>> make[2]: *** [-L/usr/local/cuda/lib64] Error 2
>> make[2]: Leaving directory `/home/mettu/amber11/src/pmemd/src'
>> make[1]: *** [cuda] Error 2
>> make[1]: Leaving directory `/home/mettu/amber11/src/pmemd'
>> make: *** [cuda] Error 2
>>
>>
>> _______________________________________________
>> AMBER mailing list
>> AMBER.ambermd.org
>> http://lists.ambermd.org/mailman/listinfo/amber
>
>
Received on Thu Feb 23 2012 - 20:30:02 PST