Re: [AMBER] Error compiling amber cuda

From: Jason Swails <jason.swails.gmail.com>
Date: Mon, 25 Jul 2011 21:26:57 -0400

Just a comment -- the GPU patches actually create new files in a few cases. Thus, if you don't start from a clean directory, you will be patching a file that shouldn't exist but does, giving rise to this issue.
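A minimal sketch of how those rejects show up, using hypothetical files: when a leftover file's contents don't match what a patch hunk expects, `patch` saves the failed hunk to a `.rej` file instead of applying it.

```shell
# A stale file left over from a previous, already-modified build tree.
printf 'line one\nstale line\n' > example.txt

# A patch written against the pristine file, which had 'line two' here.
cat > fix.patch <<'EOF'
--- example.txt
+++ example.txt
@@ -1,2 +1,2 @@
 line one
-line two
+line TWO
EOF

# The hunk cannot apply: "1 out of 1 hunk FAILED -- saving rejects ..."
patch -p0 < fix.patch || true
ls example.txt.rej   # the telltale .rej file
```

This is why a clean tree matters: once files have drifted from what the patch expects, every mismatched hunk lands in a `.rej` file instead of the source.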

All the best,
Jason

--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
On Jul 25, 2011, at 7:18 PM, Fabrício Bracht <bracht.iq.ufrj.br> wrote:
> Hi Ross. I want to thank you for the patience and for all the help
> here. It may sound stupid but it seems that having an empty directory
> for extraction is crucial after all. Everything turned out ok. All the
> tests that were supposed to work worked and pmemd.cuda is working
> nicely now. Benchmark for the GTX460 running FactorIX NPT benchmark is
> 2.9 ns/day.
> Thank you again.
> 
> Fabrício
> 
> 2011/7/25 Ross Walker <ross.rosswalker.co.uk>:
>> Hi Fabricio,
>> 
>> See all those files ending in ".rej"? Those are 'rejects' from the patch
>> command, which means that patch did not work correctly, either because the
>> directory was not properly clean to begin with or your initial AMBER tar
>> files are not the distributed ones but have been modified in some way. I
>> would suggest doing the following...
>> 
>> Delete your current AMBER directory (make sure AMBERHOME is set properly
>> before doing this): rm -rf $AMBERHOME
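Since `rm -rf $AMBERHOME` with an unset or mistyped variable could delete the wrong thing, a guarded form is safer. A minimal sketch, using a throwaway directory as a stand-in for the real amber11 tree:

```shell
# Throwaway directory standing in for the real $AMBERHOME.
AMBERHOME=$(mktemp -d)
echo "$AMBERHOME" > target_dir.txt   # remember the path for inspection

# Refuse to delete unless AMBERHOME is set and points at a directory.
if [ -n "$AMBERHOME" ] && [ -d "$AMBERHOME" ]; then
    rm -rf "$AMBERHOME"
    echo "removed $AMBERHOME"
else
    echo "AMBERHOME is not set to an existing directory; aborting." >&2
fi
```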
>> 
>> Then get yourself copies of the original distributed tar files:
>> Amber11.tar.bz2 and AmberTools-1.5.tar.bz2
>> 
>> Place these in the directory where you want the amber11 directory to be then
>> do the following:
>> 
>> 1) tar xvjf Amber11.tar.bz2
>> 2) tar xvjf AmberTools-1.5.tar.bz2
>> 3) export AMBERHOME=/path_to/amber11
>> 4) cd $AMBERHOME
>> 5) wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
>> 6) patch -p0 < bugfix.all
>> 7) rm -f bugfix.all
>> 8) wget http://ambermd.org/bugfixes/11.0/bugfix.all
>> 9) wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
>> 10) ./apply_bugfix.x bugfix.all
>> 11) cd $AMBERHOME/AmberTools/src/
>> 12) ./configure gnu
>> 13) make
>> 14) cd ../../
>> 15) ./AT15_Amber11.py
>> 16) cd src
>> 17) make serial
>> 
>> At this point you have the serial (non-GPU) versions of AmberTools and AMBER
>> built. Best to test them at this point.
>> 
>> 18) cd $AMBERHOME/AmberTools/test/
>> 19) make test
>> 
>> Check the test logs to see if everything worked.
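One quick way to scan a log for trouble is to grep for failure markers. A minimal sketch on a hypothetical log file (real AMBER test logs live under the test directories, and their names may differ by version):

```shell
# Hypothetical test log: two passing tests and one suspect one.
printf 'PASSED\nPASSED\npossible FAILURE: check mdout.dif\n' > test.log

# Count lines that mention a failure; 0 means a clean-looking run.
grep -ci 'failure' test.log
```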
>> 
>> 20) cd $AMBERHOME/test
>> 21) make test
>> 
>> Again check the test logs to see if everything worked.
>> 
>> Now we can build the serial GPU code.
>> 
>> 22) cd $AMBERHOME/src/
>> 23) make clean
>> 24) cd $AMBERHOME/AmberTools/src/
>> 25) make clean
>> 26) ./configure -cuda gnu
>> 27) cd ../../
>> 28) ./AT15_Amber11.py
>> 29) cd src
>> 30) make cuda
>> 
>> Hopefully this time it should build correctly.
>> 
>> 31) cd ../test
>> 32) make test.cuda
>> 
>> All should be good... Fingers crossed...
>> 
>> All the best
>> Ross
>> 
>>> -----Original Message-----
>>> From: Fabrício Bracht [mailto:bracht.iq.ufrj.br]
>>> Sent: Monday, July 25, 2011 10:11 AM
>>> To: AMBER Mailing List
>>> Subject: Re: [AMBER] Error compiling amber cuda
>>> 
>>> Ok, I need a little help understanding what came out of md5sum.
>>> This is my output. Is everything ok?
>>> 
>>> zotac.zotac-desktop:~/Downloads/amber11/src/pmemd/src/cuda$ md5sum *
>>> md5sum: B40C: Is a directory
>>> 3339895f6f2599358465b9e17b5cf5c2  cuda_info.f90
>>> f4ed79de194d836246009d5c29051574  cuda_info.fpp
>>> adf389d4e9d599b76cea06c6ad062ed0  cuda_info.fpp.orig
>>> 2c646f72e82d589de51dd39a023ee239  cuda_info.fpp.rej
>>> e40d46e32e05d4f40fd13c86dd1609d8  cuda_info.o
>>> a9e4f660fcb5347b1273a8e3f76d3e74  gpu.cpp
>>> 43c2c868c7b0ad16801b3a86edcd4482  gpu.cpp.orig
>>> c690340fa3bbd86846acf6e9538369e9  gpu.cpp.rej
>>> 307e64e078aa5f1f22bd78fd224c9f4b  gpu.h
>>> fd46d1afeb0e795ab91fc2529cd5b843  gpu.h.orig
>>> 729c08b3c841bd87b1042f3d6c12374a  gpu.h.rej
>>> aa99a62e100dcf2e2bd7ad1136a24028  gpu.o
>>> 9e6a4f93e46046cda29369feb0dd32e8  gputypes.cpp
>>> feff7fe8bed8c11c1537c52729eab6ce  gputypes.cpp.orig
>>> 6d47d774d5b83ca3879391535be0165c  gputypes.cpp.rej
>>> 46f8ccf2bbee063ff35a73945b16a3a2  gputypes.h
>>> cc12bdff2667c3798295ba22109eb499  gputypes.h.orig
>>> 77b512c4d2565f61c95219f1ad08e37e  gputypes.h.rej
>>> 4c8e7585e957b3c70bceb8d71ef610b3  gputypes.o
>>> 90ba8d068522a00074707a529469f5ea  kCalculateGBBornRadii.cu
>>> e5f918f895e61717963ade0d0fe22d51  kCalculateGBBornRadii.cu.rej
>>> 97fbbcfb8a3833509d94072ecab05643  kCalculateGBNonbondEnergy1.cu
>>> 97fbbcfb8a3833509d94072ecab05643  kCalculateGBNonbondEnergy1.cu.orig
>>> dc47b131d709410fd317abd5f0216216  kCalculateGBNonbondEnergy1.cu.rej
>>> 79fb7a5bba2a19ba351a7dd5996d31fc  kCalculateGBNonbondEnergy2.cu
>>> 67a458e51a76162edbcc907e7135500c  kCalculateLocalForces.cu
>>> f4299550e2e5a5e8354530484a640c73  kCalculateLocalForces.cu.rej
>>> ce308f4fbe9468d5505beb0099d58e76  kCalculatePMENonbondEnergy.cu
>>> 1d2918a17c9e540b334509fc93dc6dd1  kCalculatePMENonbondEnergy.cu.rej
>>> 9b240d418e391a71b590e6dc3bc3b0ff  kCCF.h
>>> a38daf306f4183fbdc470dc81bb47ffb  kCCF.h.rej
>>> 5561a56bc236291cb87b4770453d67a4  kCLF.h
>>> 7edeb669099330c6c73e9cd6e8443655  kCLF.h.rej
>>> 86f220029e3a943a186ebcfd16e2dcd9  kCPNE.h
>>> 86f220029e3a943a186ebcfd16e2dcd9  kCPNE.h.orig
>>> d1e3a9216969c774fe7563e08fad0004  kCPNE.h.rej
>>> 9905ed2e705bccf1ae705279d85d0e57  kForcesUpdate.cu
>>> ccdeed1ab8011de006284ee519540f91  kForcesUpdate.cu.rej
>>> edf2d74af7a4d401ccecc7bfa6d036c3  kNeighborList.cu
>>> 44e96ce3db6e02ddfad4f76fe007dcb9  kNeighborList.cu.orig
>>> 5ebd4b78db7c46991755e9e3e9d6c84e  kNeighborList.cu.rej
>>> bbf74d0dbd475e889fb20834119760ac  kNTPKernels.h
>>> 49f952b429618228fca8e23f44223c58  kPGGW.h
>>> 0058536fbe45ceb7a79bc61df95adcea  kPGGW.h.orig
>>> fb86d5ad6feb20e5248f30a45cfc117f  kPGGW.h.rej
>>> 4aea91b87cbb3cf62b9fddafe607ab48  kPGS.h
>>> 6c962b5a27e6bc6c4c53b7f685889343  kPGS.h.orig
>>> 4db97cadd5b928a64a6fb6afb49cefd6  kPGS.h.rej
>>> 9c5951cdf94402d2c0396b74498f72f5  kPMEInterpolation.cu
>>> 56a2f1359d2662d12eacba3bfacc25f9  kPMEInterpolation.cu.orig
>>> c487ee79fdefce8f8fee548ddbe8c70f  kPMEInterpolation.cu.rej
>>> 46f01611524128ea428c069ef58bd421  kPSSE.h
>>> fd131be311aad3755f60aa3b51e89d29  kRandom.h
>>> eefe9bd32e04ba2bbe2eb5611a6464bd  kShake.cu
>>> 2e606f90bc3ca2956eadc4ffad32e885  kShake.cu.rej
>>> 4c6af869eda3380dd09c7b582c0fe0bd  kU.h
>>> 6947e1fae477c0bb9c637062a0ddbfd8  Makefile
>>> e5a6173273e6812669c21abcd1530226  Makefile.advanced
>>> e3eeff469aa56c8fea91f4911a3e0fa2  Makefile.advanced.rej
>>> 9c8878e1723ae40a877e9a98352f1b54  Makefile.orig
>>> 8c21b83dee0f294efd4f710134588a5f  Makefile.rej
>>> 
>>> Thank you
>>> Fabrício
>>> 
>>> 2011/7/25 Ross Walker <ross.rosswalker.co.uk>:
>>>> Hi Fabricio,
>>>> 
>>>> To be honest I have absolutely no idea what is causing the problem you
>>>> are seeing, and I think it is going to take a considerable amount of
>>>> debugging etc. to work out what is going wrong. I have not seen this
>>>> problem before and I do not think anybody else has either. Thus the
>>>> options are:
>>>> 
>>>> 1) The patch is not being applied properly. Either you are not applying
>>>> BOTH the AmberTools and AMBER 11 patches correctly or something is going
>>>> wrong during the patch procedure. To debug this we will need to know
>>>> exactly which command lines you used and exactly what the output was in
>>>> each case, from the point of untarring the AMBER tar files into a clean
>>>> directory to applying all the patches.
>>>> 
>>>> 2) Your compiler combination is in some way broken. Your NVCC looks good,
>>>> so maybe something is strange with your C / Fortran compiler combination,
>>>> although the error is coming from NVCC and not gcc, which is strange. You
>>>> could try the Intel compilers and see if that helps, although I still
>>>> suspect 1 is the issue here.
>>>> 
>>>> Perhaps you could run an md5sum on each of the files in the
>>>> $AMBERHOME/src/pmemd/src/cuda directory. Attached is what I get for
>>>> AMBER 11 + all the latest patches.
>>>> 
>>>> foo.linux-jh9j:~/amber11_as_of_jul_22/src/pmemd/src/cuda> md5sum *
>>>> md5sum: B40C: Is a directory
>>>> f4ed79de194d836246009d5c29051574  cuda_info.fpp
>>>> a9e4f660fcb5347b1273a8e3f76d3e74  gpu.cpp
>>>> 307e64e078aa5f1f22bd78fd224c9f4b  gpu.h
>>>> 9e6a4f93e46046cda29369feb0dd32e8  gputypes.cpp
>>>> 46f8ccf2bbee063ff35a73945b16a3a2  gputypes.h
>>>> 90ba8d068522a00074707a529469f5ea  kCalculateGBBornRadii.cu
>>>> 97fbbcfb8a3833509d94072ecab05643  kCalculateGBNonbondEnergy1.cu
>>>> 79fb7a5bba2a19ba351a7dd5996d31fc  kCalculateGBNonbondEnergy2.cu
>>>> 67a458e51a76162edbcc907e7135500c  kCalculateLocalForces.cu
>>>> ce308f4fbe9468d5505beb0099d58e76  kCalculatePMENonbondEnergy.cu
>>>> 9b240d418e391a71b590e6dc3bc3b0ff  kCCF.h
>>>> 5561a56bc236291cb87b4770453d67a4  kCLF.h
>>>> 86f220029e3a943a186ebcfd16e2dcd9  kCPNE.h
>>>> 9905ed2e705bccf1ae705279d85d0e57  kForcesUpdate.cu
>>>> edf2d74af7a4d401ccecc7bfa6d036c3  kNeighborList.cu
>>>> fd65d023597024a68565c5a0e5ffd86c  kNTPKernels.h
>>>> 49f952b429618228fca8e23f44223c58  kPGGW.h
>>>> 4aea91b87cbb3cf62b9fddafe607ab48  kPGS.h
>>>> 9c5951cdf94402d2c0396b74498f72f5  kPMEInterpolation.cu
>>>> 46f01611524128ea428c069ef58bd421  kPSSE.h
>>>> ada7d510598c88ed4adb8d32a9dbf73d  kRandom.h
>>>> eefe9bd32e04ba2bbe2eb5611a6464bd  kShake.cu
>>>> b07e184d2840ffae27d8af5415fae04a  kU.h
>>>> 6947e1fae477c0bb9c637062a0ddbfd8  Makefile
>>>> e5a6173273e6812669c21abcd1530226  Makefile.advanced
>>>> 
>>>> Check you get EXACTLY the same for all your files. This will determine
>>>> if the patch is being applied correctly.
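Comparing two long md5sum listings by eye is error-prone; `md5sum -c` automates it. A minimal sketch with hypothetical files (for the real check, save the reference listing above to a file and run `md5sum -c` against it inside the cuda directory):

```shell
# Two hypothetical source files.
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt

# Save a "known good" listing, then verify the files against it.
md5sum a.txt b.txt > reference.md5
md5sum -c reference.md5   # prints "a.txt: OK" and "b.txt: OK"
```

Any file that has drifted from the reference is reported as FAILED, which pinpoints exactly which patches did not take.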
>>>> 
>>>> All the best
>>>> Ross
>>>> 
>>>> /\
>>>> \/
>>>> |\oss Walker
>>>> 
>>>> ---------------------------------------------------------
>>>> |             Assistant Research Professor              |
>>>> |            San Diego Supercomputer Center             |
>>>> |             Adjunct Assistant Professor               |
>>>> |         Dept. of Chemistry and Biochemistry           |
>>>> |          University of California San Diego           |
>>>> |                     NVIDIA Fellow                     |
>>>> | http://www.rosswalker.co.uk | http://www.wmd-lab.org/ |
>>>> | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk  |
>>>> ---------------------------------------------------------
>>>> 
>>>> Note: Electronic Mail is not secure, has no guarantee of delivery, may
>>>> not be read every day, and should not be used for urgent or sensitive
>>>> issues.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> _______________________________________________
>>>> AMBER mailing list
>>>> AMBER.ambermd.org
>>>> http://lists.ambermd.org/mailman/listinfo/amber
>>>> 
>>> 
> 
Received on Mon Jul 25 2011 - 18:30:04 PDT