Re: [AMBER] Installing amber with CUDA as local user on supercomputer

From: Ross Walker <ross.rosswalker.co.uk>
Date: Thu, 13 Jun 2019 10:42:51 -0400

Hi Francesca,

It's been a long time since I tested CUDA 7.5 with AMBER 18. It may no longer be supported with the latest patches. Can you try a later version of CUDA? I'd suggest CUDA 10.0.
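For reference, the two identifiers in your build log (`concurrentManagedAccess` and `cudaMemPrefetchAsync`) first appeared in the CUDA 8.0 runtime API, so any toolkit older than 8.0 will fail on that file regardless of patches. A rough sketch of a pre-flight check (the nvcc banner line below is a hardcoded sample, not your actual output; on your machine you'd feed in `$CUDA_HOME/bin/nvcc --version`, and `sort -V` assumes GNU coreutils):

```shell
# Pull the "release X.Y" number out of nvcc's version banner.
# Sample banner shown; replace with the output of: $CUDA_HOME/bin/nvcc --version
banner="Cuda compilation tools, release 7.5, V7.5.17"
ver=$(printf '%s\n' "$banner" | sed -n 's/.*release \([0-9][0-9]*\.[0-9][0-9]*\).*/\1/p')

# concurrentManagedAccess / cudaMemPrefetchAsync require CUDA >= 8.0.
# min(ver, 8.0) == 8.0 exactly when ver >= 8.0.
lowest=$(printf '%s\n8.0\n' "$ver" | sort -V | head -n 1)
if [ "$lowest" = "8.0" ]; then
    msg="CUDA $ver is new enough for this file"
else
    msg="CUDA $ver predates 8.0; cuda_LinearSolvers.cu cannot compile"
fi
echo "$msg"
```

With the 7.5 banner above this prints the "predates 8.0" message, which is the same mismatch your build log is showing.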

If the error is occurring while you are installing your own version of CUDA, rather than while building AMBER 18 (it wasn't clear from your email which), then try skipping the installation of the CUDA samples. There should be a command-line option in CUDA 7.5 to skip this; it changed in later versions. That said, I'd still recommend using a newer CUDA.
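For what it's worth, with the CUDA 7.5 era `.run` installers you could skip the samples by requesting only the toolkit component. A sketch, assuming the stock runfile name and a home-directory install path (adjust both to your download):

```shell
# Silent, toolkit-only install into the home directory: omitting --samples
# means the samples are never unpacked or built, and no driver is touched.
sh cuda_7.5.18_linux.run --silent --toolkit --toolkitpath="$HOME/cuda-7.5"

# Flag names changed in later installers; run the installer with --help
# to list what your version accepts.
```

This also keeps the install entirely under your home directory, which matters when you don't have root on the cluster.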

All the best
Ross

> On Jun 13, 2019, at 07:55, Francesca Lønstad Bleken <francesca.l.bleken.sintef.no> wrote:
>
> Hi,
>
> I am trying to install amber18 (with ambertools19) with CUDA as a local user on a supercomputer with the Intel compiler.
> Both the serial and parallel version work.
>
> When I try to install the CUDA version, however, I run into the following problem:
>
> /global/hds/software/cpu/eb3/CUDA/7.5.18/bin/nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -use_fast_math -O3 -ccbin icpc -I../cusplibrary-cuda9 -o cuda_LinearSolvers.o -c cuda_LinearSolvers.cu
> cuda_LinearSolvers.cu(193): error: class "cudaDeviceProp" has no member "concurrentManagedAccess"
>
> I have CUDA/7.5.18 which is installed globally.
>
> I have been googling, but did not find anything that I understand. If someone could point me in the right direction I would be grateful.
>
> ------------------------End of output-----------------
> [PBSA] FC rdpqr.F90
> cd ../lib && make nxtsec.o random.o
> make[3]: Entering directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/lib'
> [LIB] FC nxtsec.F
> [LIB] FC random.F90
> make[3]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/lib'
> cd ../lapack && make install
> make[3]: Entering directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/lapack'
> make[3]: Nothing to be done for `install'.
> make[3]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/lapack'
> cd ../blas && make install
> make[3]: Entering directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/blas'
> make[3]: Nothing to be done for `install'.
> make[3]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/blas'
> cd ../arpack && make install
> make[3]: Entering directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/arpack'
> make[3]: Nothing to be done for `install'.
> make[3]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/arpack'
> /global/hds/software/cpu/eb3/CUDA/7.5.18/bin/nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -use_fast_math -O3 -ccbin icpc -o cuda_pb.o -c cuda_pb.cu
> /global/hds/software/cpu/eb3/CUDA/7.5.18/bin/nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -use_fast_math -O3 -ccbin icpc -o kLinearSolvers.o -c kLinearSolvers.cu
> /global/hds/software/cpu/eb3/CUDA/7.5.18/bin/nvcc -gencode arch=compute_30,code=sm_30 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -use_fast_math -O3 -ccbin icpc -I../cusplibrary-cuda9 -o cuda_LinearSolvers.o -c cuda_LinearSolvers.cu
> cuda_LinearSolvers.cu(193): error: class "cudaDeviceProp" has no member "concurrentManagedAccess"
>
> cuda_LinearSolvers.cu(194): error: identifier "cudaMemPrefetchAsync" is undefined
>
> cuda_LinearSolvers.cu(248): error: class "cudaDeviceProp" has no member "concurrentManagedAccess"
>
> cuda_LinearSolvers.cu(250): error: identifier "cudaMemPrefetchAsync" is undefined
>
> cuda_LinearSolvers.cu(301): error: class "cudaDeviceProp" has no member "concurrentManagedAccess"
>
> cuda_LinearSolvers.cu(302): error: identifier "cudaMemPrefetchAsync" is undefined
>
> 6 errors detected in the compilation of "/tmp/tmpxft_000082cd_00000000-22_cuda_LinearSolvers.compute_53.cpp1.ii".
> make[2]: *** [cuda_LinearSolvers.o] Error 2
> make[2]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src/pbsa'
> make[1]: *** [cuda_serial] Error 2
> make[1]: Leaving directory `/home/francesb/60-programs/amber18_gpu/amber18/AmberTools/src'
> make: *** [install] Error 2
>
>
> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
> Francesca L. Bleken, Ph.D.
> Research Scientist
> Process Chemistry and Functional Materials, SINTEF Industry
>
> Norway
> Mobile: (+47) 95 20 79 71
> www.sintef.no<http://www.sintef.no/>
> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


Received on Thu Jun 13 2019 - 08:00:05 PDT