Re: [AMBER] Amber installation problems

From: Donato Pera <donato.pera.dm.univaq.it>
Date: Fri, 12 Apr 2013 10:49:01 +0200 (CEST)

Dear Jason,

We have these results:

amber+mpi works

amber+gpu works

amber+mpi+gpu doesn't work
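
The failure is at the final pmemd.cuda.MPI link step, with the undefined
MPI C++ references shown in the log below. As a quick check of whether our
Open MPI install actually ships the C++ bindings library (a sketch,
assuming the /opt/openmpi prefix that appears in our build log):

  ls /opt/openmpi/lib | grep mpi_cxx   # libmpi_cxx.so should be listed
  mpicxx -showme                       # Open MPI wrapper; prints the real link line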

Regards, Donato.



> Dear Jason,
>
> I have C++ MPI support, but AMBER still doesn't work.
>
> Regards, Donato.
>
>
>
>
>> Then you will need to build your MPI with C++ support. You can download
>> mpich2 into the $AMBERHOME/AmberTools/src folder and use the
>> configure_mpich2 script to build a compatible MPICH2 installation in
>> $AMBERHOME/bin.
>>
>> Then make sure you add $AMBERHOME/bin to the beginning of your PATH so
>> that the MPI you just built is used.
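>>
>> Roughly, the steps would look like this (a sketch assuming the gnu
>> toolchain and a bash-style shell; check the header of the
>> configure_mpich2 script itself for its exact usage):
>>
>>   # download an mpich2 source tarball into this directory first
>>   cd $AMBERHOME/AmberTools/src
>>   ./configure_mpich2 gnu
>>
>>   # put the freshly built wrappers ahead of any system MPI
>>   export PATH=$AMBERHOME/bin:$PATH
>>   which mpif90          # should now point into $AMBERHOME/bin
>>
>>   # then reconfigure and rebuild the CUDA+MPI binaries
>>   cd $AMBERHOME
>>   ./configure -cuda -mpi gnu
>>   make install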
>>
>> HTH,
>> Jason
>>
>> --
>> Jason Swails
>> Quantum Theory Project,
>> University of Florida
>> Ph.D. Candidate
>> 352-392-4032
>>
>> On Apr 10, 2013, at 9:32 AM, "Donato Pera" <donato.pera.dm.univaq.it>
>> wrote:
>>
>>> Dear Dan,
>>>
>>> It doesn't work even after adding
>>>
>>> '-lmpi_cxx' to the PMEMD_CU_LIBS variable.
>>>
>>> Thanks, Donato
>>>
>>>
>>>> Hi,
>>>>
>>>> Was your MPI built with support for C++? If it wasn't, I think you
>>>> need to recompile your MPI with C++ support. If it was, try adding
>>>> '-lmpi_cxx' to the PMEMD_CU_LIBS variable in your config.h file.
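>>>>
>>>> For example, the edited line in config.h might end up looking roughly
>>>> like this (a sketch based on the link line in your log; the exact
>>>> contents of PMEMD_CU_LIBS will differ between installations):
>>>>
>>>>   PMEMD_CU_LIBS=-L$(CUDA_HOME)/lib64 -L$(CUDA_HOME)/lib \
>>>>                 -lcurand -lcufft -lcudart ./cuda/cuda.a -lmpi_cxx
>>>>
>>>> Note that '-lmpi_cxx' should come after ./cuda/cuda.a, since the
>>>> unresolved C++ symbols live in gpu.o inside that archive and the
>>>> linker resolves libraries left to right.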
>>>>
>>>> Hope this helps,
>>>>
>>>> -Dan
>>>>
>>>> On Wed, Apr 10, 2013 at 7:48 AM, Donato Pera
>>>> <donato.pera.dm.univaq.it>
>>>> wrote:
>>>>> Dear Developers,
>>>>>
>>>>> We have had some problems during the Amber12 installation with MPI.
>>>>> (We don't have problems on a single GPU.)
>>>>>
>>>>> This is our 'make install' output:
>>>>>
>>>>>
>>>>>
>>>>> [montagna.compute-1-6 amber12_GPU]$ ./configure -cuda -mpi gnu
>>>>> Checking for updates...
>>>>> AmberTools12 is up to date
>>>>> Amber12 is up to date
>>>>>
>>>>> Searching for python2... Found python2.4: /usr/bin/python2.4
>>>>>
>>>>> Obtaining the gnu suite version:
>>>>> gcc -v
>>>>> The version is 4.1.2
>>>>>
>>>>> Testing the gcc compiler:
>>>>> gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o testp testp.c
>>>>> OK
>>>>>
>>>>> Testing the gfortran compiler:
>>>>> gfortran -O0 -o testp testp.f
>>>>> OK
>>>>>
>>>>> Testing mixed C/Fortran compilation:
>>>>> gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -c -o testp.c.o testp.c
>>>>> gfortran -O0 -c -o testp.f.o testp.f
>>>>> gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o testp
>>>>> testp.c.o
>>>>> testp.f.o -lgfortran -w
>>>>> OK
>>>>>
>>>>> Testing pointer size:
>>>>> gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -o
>>>>> test_pointer_size
>>>>> test_pointer_size.c
>>>>> Detected 64 bit operating system.
>>>>>
>>>>> Testing flex: OK
>>>>>
>>>>> Configuring NetCDF (may be time-consuming)...
>>>>>
>>>>> NetCDF configure succeeded.
>>>>>
>>>>> Checking for zlib: OK
>>>>>
>>>>> Checking for libbz2:
>>>>> testp.c:2:19: error: bzlib.h: No such file or directory
>>>>> testp.c: In function 'main':
>>>>> testp.c:5: error: 'BZFILE' undeclared (first use in this function)
>>>>> testp.c:5: error: (Each undeclared identifier is reported only once
>>>>> testp.c:5: error: for each function it appears in.)
>>>>> testp.c:5: error: 'infile' undeclared (first use in this function)
>>>>> ./configure2: line 1897: ./testp: No such file or directory
>>>>> Not found.
>>>>> Skipping configuration of FFTW3
>>>>>
>>>>> The configuration file, config.h, was successfully created.
>>>>>
>>>>> The next step is to type 'make install'
>>>>>
>>>>> Cleaning the src directories. This may take a few moments.
>>>>> Configure complete.
>>>>> [montagna.compute-1-6 amber12_GPU]$ make install
>>>>> cd AmberTools/src && make install
>>>>> make[1]: Entering directory `/home/SWcbbc/Amber12/amber12_GPU/AmberTools/src'
>>>>> AmberTools12 has no CUDA-enabled components
>>>>> (cd ../../src && make cuda_parallel )
>>>>> make[2]: Entering directory `/home/SWcbbc/Amber12/amber12_GPU/src'
>>>>> Starting installation of Amber12 (cuda parallel) at Wed Apr 10 15:39:18 CEST 2013.
>>>>> cd pmemd && make cuda_parallel
>>>>> make[3]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
>>>>> make -C src/ cuda_parallel
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
>>>>> mpif90 -DMPI -DBINTRAJ
>>>>> -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DFFTLOADBAL_2PROC
>>>>> -DPUBFFT
>>>>> -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP
>>>>> -I/home/SWcbbc/Amber12/amber12_GPU/include -c gbl_constants.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> gbl_datatypes.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> state_info.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> file_io_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pmemd_lib.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> parallel_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c file_io.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> mdin_ctrl_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> axis_optimize.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c fft1d.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c charmm.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> nextprmtop_section.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> prmtop_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> mdin_ewald_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> mdin_debugf_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c remd.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> binrestart.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> inpcrd_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> constraints.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c mol_list.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> extra_pnts_nb14.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c prfs.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> dynamics_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c random.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c dynamics.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c pbc.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c img.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c timers.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c bspline.F90
>>>>> In file bspline.F90:89
>>>>>
>>>>> if (ibcbeg - 1) 11, 15, 16
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file bspline.F90:144
>>>>>
>>>>> if (ibcend - 1) 21, 30, 24
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file bspline.F90:176
>>>>>
>>>>> 25 if (ibcend-1) 26, 30, 24
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_recip_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_fft_dat.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_blk_fft.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_slab_fft.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c shake.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c nmr_lib.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> nmr_calls.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c angles.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> angles_ub.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c cmap.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c bonds.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c cit.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> dihedrals.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> dihedrals_imp.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> nb_exclusions.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c parallel.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> ene_frc_splines.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> nb_pairlist.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c nbips.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> gb_parallel.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c loadbal.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_blk_recip.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_slab_recip.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_direct.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c bintraj.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c runfiles.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c amd.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_force.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c degcnt.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c gbsa.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c gb_ene.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c gb_force.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> get_cmdline.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> multipmemd.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> remd_exchg.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c runmd.F90
>>>>> In file runmd.F90:2323
>>>>>
>>>>> if (abs(biga) - abs(a(ij))) 15, 20, 20
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2332
>>>>>
>>>>> if (j - k) 35, 35, 25
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2345
>>>>>
>>>>> if (i - k) 45, 45, 38
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2356
>>>>>
>>>>> 45 if (biga) 48, 46, 48
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2360
>>>>>
>>>>> if (i - k) 50, 55, 50
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2373
>>>>>
>>>>> if (i - k) 60, 65, 60
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2374
>>>>>
>>>>> 60 if (j - k) 62, 65, 62
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2384
>>>>>
>>>>> if (j - k) 70, 75, 70
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2402
>>>>>
>>>>> if (k) 150, 150, 105
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2404
>>>>>
>>>>> if (i - k) 120, 120, 108
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> In file runmd.F90:2414
>>>>>
>>>>> if (j - k) 100, 100, 125
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c runmin.F90
>>>>> In file runmin.F90:409
>>>>>
>>>>> if (fch) 100, 90, 130
>>>>> 1
>>>>> Warning: Obsolete: arithmetic IF statement at (1)
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c veclib.F90
>>>>> mpicc
>>>>> -O3 -DMPICH_IGNORE_CXX_SEEK -D_FILE_OFFSET_BITS=64
>>>>> -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI -DCUDA -DMPI
>>>>> -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c pmemd_clib.c
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> gb_alltasks_setup.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_alltasks_setup.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> pme_setup.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c findmask.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> master_setup.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c pmemd.F90
>>>>> mpif90
>>>>> -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c erfcfun.F90
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/home/SWcbbc/Amber12/amber12_GPU/include -c
>>>>> charmm_gold.F90
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> mpif90 -DMPI -DBINTRAJ -DDIRFRC_EFS -DDIRFRC_COMTRANS
>>>>> -DDIRFRC_NOVEC
>>>>> -DFFTLOADBAL_2PROC -DPUBFFT -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c cuda_info.F90
>>>>> mpicc -O3 -DMPICH_IGNORE_CXX_SEEK -D_FILE_OFFSET_BITS=64
>>>>> -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI -DCUDA -DMPI
>>>>> -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c gpu.cpp
>>>>> mpicc -O3 -DMPICH_IGNORE_CXX_SEEK -D_FILE_OFFSET_BITS=64
>>>>> -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI -DCUDA -DMPI
>>>>> -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c gputypes.cpp
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kForcesUpdate.cu
>>>>> ./kForcesUpdate.cu(87): Advisory: Loop was not unrolled, cannot
>>>>> deduce
>>>>> loop trip count
>>>>> ./kForcesUpdate.cu(127): Advisory: Loop was not unrolled, cannot
>>>>> deduce
>>>>> loop trip count
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculateLocalForces.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculateGBBornRadii.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculatePMENonbondEnergy.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculateGBNonbondEnergy1.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kNLRadixSort.cu
>>>>> B40C/radix_sort/../radix_sort/../radix_sort/upsweep/../../radix_sort/upsweep/cta.cuh(127):
>>>>> Advisory: Loop was not unrolled, unexpected control flow construct
>>>>> B40C/radix_sort/../radix_sort/../radix_sort/upsweep/../../radix_sort/upsweep/cta.cuh(127):
>>>>> Advisory: Loop was not unrolled, unexpected control flow construct
>>>>> B40C/radix_sort/../radix_sort/../util/kernel_props.cuh: In member
>>>>> function
>>>>> 'int b40c::util::KernelProps::OversubscribedGridSize(int, int, int)
>>>>> const':
>>>>> B40C/radix_sort/../radix_sort/../util/kernel_props.cuh:140: warning:
>>>>> converting to 'int' from 'double'
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculateGBNonbondEnergy2.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kShake.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kNeighborList.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kPMEInterpolation.cu
>>>>> /usr/local/cuda-5.0/bin/nvcc -use_fast_math -gencode
>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20
>>>>> -gencode
>>>>> arch=compute_30,code=sm_30 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
>>>>> -Duse_SPFP -I/usr/local/cuda-5.0/include -IB40C -IB40C/KernelCommon
>>>>> -I/opt/openmpi/include -c kCalculateAMDWeights.cu
>>>>> ar rvs cuda.a cuda_info.o gpu.o gputypes.o kForcesUpdate.o
>>>>> kCalculateLocalForces.o kCalculateGBBornRadii.o
>>>>> kCalculatePMENonbondEnergy.o kCalculateGBNonbondEnergy1.o
>>>>> kNLRadixSort.o
>>>>> kCalculateGBNonbondEnergy2.o kShake.o kNeighborList.o
>>>>> kPMEInterpolation.o
>>>>> kCalculateAMDWeights.o
>>>>> ar: creating cuda.a
>>>>> a - cuda_info.o
>>>>> a - gpu.o
>>>>> a - gputypes.o
>>>>> a - kForcesUpdate.o
>>>>> a - kCalculateLocalForces.o
>>>>> a - kCalculateGBBornRadii.o
>>>>> a - kCalculatePMENonbondEnergy.o
>>>>> a - kCalculateGBNonbondEnergy1.o
>>>>> a - kNLRadixSort.o
>>>>> a - kCalculateGBNonbondEnergy2.o
>>>>> a - kShake.o
>>>>> a - kNeighborList.o
>>>>> a - kPMEInterpolation.o
>>>>> a - kCalculateAMDWeights.o
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[5]: `cuda.a' is up to date.
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[5]: `cuda.a' is up to date.
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[5]: `cuda.a' is up to date.
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[5]: `cuda.a' is up to date.
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[5]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[5]: `cuda.a' is up to date.
>>>>> make[5]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> mpif90 -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o
>>>>> pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o
>>>>> file_io_dat.o
>>>>> mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
>>>>> inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o
>>>>> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>>>>> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
>>>>> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
>>>>> dynamics.o
>>>>> bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o
>>>>> shake.o
>>>>> prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o
>>>>> veclib.o
>>>>> gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o
>>>>> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o
>>>>> nmr_calls.o
>>>>> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o
>>>>> pme_setup.o
>>>>> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o
>>>>> angles_ub.o
>>>>> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
>>>>> multipmemd.o remd_exchg.o amd.o gbsa.o \
>>>>> ./cuda/cuda.a -L/usr/local/cuda-5.0/lib64
>>>>> -L/usr/local/cuda-5.0/lib
>>>>> -lcurand -lcufft -lcudart -L/home/SWcbbc/Amber12/amber12_GPU/lib
>>>>> -L/home/SWcbbc/Amber12/amber12_GPU/lib -lnetcdf
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void
>>>>> const*,
>>>>> void*, int, MPI::Datatype const&), bool)':
>>>>> gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void
>>>>> (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x19):
>>>>> undefined reference to `ompi_mpi_cxx_op_intercept'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group
>>>>> const&) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::Create(MPI::Group
>>>>> const&) const]+0x2a): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone()
>>>>> const]+0x25): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int,
>>>>> int
>>>>> const*, bool const*, bool) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int,
>>>>> int const*, bool const*, bool) const]+0x8f): undefined reference to
>>>>> `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int,
>>>>> int
>>>>> const*, int const*, bool) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int,
>>>>> int const*, int const*, bool) const]+0x2b): undefined reference to
>>>>> `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
>>>>> gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone()
>>>>> const]+0x25): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool
>>>>> const*)]+0x76): more undefined references to `MPI::Comm::Comm()'
>>>>> follow
>>>>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for
>>>>> MPI::Win]+0x48):
>>>>> undefined reference to `MPI::Win::Free()'
>>>>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
>>>>> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
>>>>> collect2: ld returned 1 exit status
>>>>> make[4]: *** [pmemd.cuda.MPI] Error 1
>>>>> make[4]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
>>>>> make[3]: *** [cuda_parallel] Error 2
>>>>> make[3]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
>>>>> make[2]: *** [cuda_parallel] Error 2
>>>>> make[2]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src'
>>>>> make[1]: [cuda_parallel] Error 2 (ignored)
>>>>> make[1]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/AmberTools/src'
>>>>> make[1]: Entering directory `/home/SWcbbc/Amber12/amber12_GPU/src'
>>>>> Starting installation of Amber12 (cuda parallel) at Wed Apr 10 15:44:42 CEST 2013.
>>>>> cd pmemd && make cuda_parallel
>>>>> make[2]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
>>>>> make -C src/ cuda_parallel
>>>>> make[3]: Entering directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
>>>>> make -C ./cuda
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[4]: `cuda.a' is up to date.
>>>>> make[4]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[4]: `cuda.a' is up to date.
>>>>> make[4]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[4]: `cuda.a' is up to date.
>>>>> make[4]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[4]: `cuda.a' is up to date.
>>>>> make[4]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make -C ./cuda
>>>>> make[4]: Entering directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> make[4]: `cuda.a' is up to date.
>>>>> make[4]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src/cuda'
>>>>> mpif90 -O3 -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK -Duse_SPFP -o
>>>>> pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o
>>>>> file_io_dat.o
>>>>> mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o prmtop_dat.o
>>>>> inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o parallel.o
>>>>> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>>>>> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
>>>>> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
>>>>> dynamics.o
>>>>> bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o
>>>>> shake.o
>>>>> prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o
>>>>> veclib.o
>>>>> gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o
>>>>> binrestart.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o
>>>>> nmr_calls.o
>>>>> nmr_lib.o get_cmdline.o master_setup.o pme_alltasks_setup.o
>>>>> pme_setup.o
>>>>> ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o
>>>>> angles_ub.o
>>>>> dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
>>>>> multipmemd.o remd_exchg.o amd.o gbsa.o \
>>>>> ./cuda/cuda.a -L/usr/local/cuda-5.0/lib64
>>>>> -L/usr/local/cuda-5.0/lib
>>>>> -lcurand -lcufft -lcudart -L/home/SWcbbc/Amber12/amber12_GPU/lib
>>>>> -L/home/SWcbbc/Amber12/amber12_GPU/lib -lnetcdf
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Op::Init(void (*)(void
>>>>> const*,
>>>>> void*, int, MPI::Datatype const&), bool)':
>>>>> gpu.cpp:(.text._ZN3MPI2Op4InitEPFvPKvPviRKNS_8DatatypeEEb[MPI::Op::Init(void
>>>>> (*)(void const*, void*, int, MPI::Datatype const&), bool)]+0x19):
>>>>> undefined reference to `ompi_mpi_cxx_op_intercept'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create(MPI::Group
>>>>> const&) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm6CreateERKNS_5GroupE[MPI::Intracomm::Create(MPI::Group
>>>>> const&) const]+0x2a): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Graphcomm::Clone() const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Graphcomm5CloneEv[MPI::Graphcomm::Clone()
>>>>> const]+0x25): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_cart(int,
>>>>> int
>>>>> const*, bool const*, bool) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm11Create_cartEiPKiPKbb[MPI::Intracomm::Create_cart(int,
>>>>> int const*, bool const*, bool) const]+0x8f): undefined reference to
>>>>> `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Create_graph(int,
>>>>> int
>>>>> const*, int const*, bool) const':
>>>>> gpu.cpp:(.text._ZNK3MPI9Intracomm12Create_graphEiPKiS2_b[MPI::Intracomm::Create_graph(int,
>>>>> int const*, int const*, bool) const]+0x2b): undefined reference to
>>>>> `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o): In function `MPI::Cartcomm::Clone() const':
>>>>> gpu.cpp:(.text._ZNK3MPI8Cartcomm5CloneEv[MPI::Cartcomm::Clone()
>>>>> const]+0x25): undefined reference to `MPI::Comm::Comm()'
>>>>> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZN3MPI8Cartcomm3SubEPKb[MPI::Cartcomm::Sub(bool
>>>>> const*)]+0x76): more undefined references to `MPI::Comm::Comm()'
>>>>> follow
>>>>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for
>>>>> MPI::Win]+0x48):
>>>>> undefined reference to `MPI::Win::Free()'
>>>>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
>>>>> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
>>>>> collect2: ld returned 1 exit status
>>>>> make[3]: *** [pmemd.cuda.MPI] Error 1
>>>>> make[3]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd/src'
>>>>> make[2]: *** [cuda_parallel] Error 2
>>>>> make[2]: Leaving directory
>>>>> `/home/SWcbbc/Amber12/amber12_GPU/src/pmemd'
>>>>> make[1]: *** [cuda_parallel] Error 2
>>>>> make[1]: Leaving directory `/home/SWcbbc/Amber12/amber12_GPU/src'
>>>>> make: *** [install] Error 2
>>>>>
>>>>> Regards, Donato.
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> -------------------------
>>>> Daniel R. Roe, PhD
>>>> Department of Medicinal Chemistry
>>>> University of Utah
>>>> 30 South 2000 East, Room 201
>>>> Salt Lake City, UT 84112-5820
>>>> http://home.chpc.utah.edu/~cheatham/
>>>> (801) 587-9652
>>>> (801) 585-9119 (Fax)
>>>>
>>>
>>>
>>>
>>
>
>
>



_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Apr 12 2013 - 02:00:03 PDT