Re: [AMBER] CUDA version for AMBER22

From: Jones De Andrade via AMBER <amber.ambermd.org>
Date: Sun, 12 Nov 2023 06:14:01 -0300

Hi.

Thanks for the clarification; it made it much easier for me to compile the
CUDA version. :)

However, unlike Mingxuan Jiang, I did encounter 1 Amber test
failure:

==============================================================
make[4]: Entering directory '/usr/local/chem/amber23/test/cuda'
cd xray/480d && ./Run.480d DPFP yes
STOP PMEMD Terminated Abnormally!
diffing Energy.dat_DPFP.save with Energy.dat
possible FAILURE: check Energy.dat.dif
==============================================================

And 14 AmberTools failures + 1 error! :O

cd mdgx/Peptides && ./Test.peptides
/usr/local/chem/amber23///bin/mdgx.cuda cuda
MDGX set to /usr/local/chem/amber23///bin/mdgx.cuda
Referencing results with extension cuda
diffing energy_mts.cuda.dat.save with energy_mts.dat
possible FAILURE: check energy_mts.dat.dif
==============================================================
diffing kine_mts.cuda.dat.save with kine_mts.dat
possible FAILURE: check kine_mts.dat.dif
==============================================================
diffing bond_mts.cuda.dat.save with bond_mts.dat
possible FAILURE: check bond_mts.dat.dif
==============================================================
diffing angl_mts.cuda.dat.save with angl_mts.dat
possible FAILURE: check angl_mts.dat.dif
==============================================================
diffing dihe_mts.cuda.dat.save with dihe_mts.dat
possible FAILURE: check dihe_mts.dat.dif
==============================================================
diffing elec_mts.cuda.dat.save with elec_mts.dat
possible FAILURE: check elec_mts.dat.dif
==============================================================
diffing vdw_mts.cuda.dat.save with vdw_mts.dat
possible FAILURE: check vdw_mts.dat.dif
==============================================================
diffing solv_mts.cuda.dat.save with solv_mts.dat
possible FAILURE: check solv_mts.dat.dif
==============================================================
diffing energy_rtt.cuda.dat.save with energy_rtt.dat
possible FAILURE: check energy_rtt.dat.dif
==============================================================
diffing solv_igb1.cuda.dat.save with solv_igb1.dat
possible FAILURE: check solv_igb1.dat.dif
==============================================================
diffing solv_igb2.cuda.dat.save with solv_igb2.dat
possible FAILURE: check solv_igb2.dat.dif
==============================================================
diffing solv_igb5.cuda.dat.save with solv_igb5.dat
possible FAILURE: check solv_igb5.dat.dif
==============================================================
diffing solv_igb6.cuda.dat.save with solv_igb6.dat
possible FAILURE: check solv_igb6.dat.dif
==============================================================
diffing solv_igb7.cuda.dat.save with solv_igb7.dat
possible FAILURE: check solv_igb7.dat.dif
==============================================================

The thousands of differences in the diff file really don't seem to be
within acceptable limits:

possible FAILURE: check energy_mts.dat.dif
/usr/local/chem/amber23/AmberTools/test/mdgx/Peptides
102c102
< -400.4707
> -404.9356
(...)
201c201
< -457.8297
> -460.2415

Does anybody know what could be causing such differences? I haven't seen
these specific test failures in any of the previous (serial, parallel,
or OpenMP) builds of either Amber or AmberTools...

(It might be a silly question, but I'm really having a hard time finding
the errors reported at the end of the log: what can I grep for to find
them?)
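
For what it's worth, the sketch below is the kind of search I have in
mind (the logs directory is a guess based on my install prefix, and the
exact file layout there is hypothetical):

==============================================================
# hypothetical: search the test logs for the usual failure markers
cd /usr/local/chem/amber23/logs
grep -rnE "possible FAILURE|Terminated Abnormally|Error" .
==============================================================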

Thanks a lot in advance.

Best regards,

Jones
---
Jones de Andrade
(jdandrade.iq.ufrgs.br)
DFQ/IQ/UFRGS
Lattes: http://lattes.cnpq.br/6675936210583999
Orcid: https://orcid.org/0000-0003-3429-8119
ResearcherID: https://publons.com/researcher/AAC-5337-2019/
On 2023-11-10 16:54, Scott Brozell via AMBER wrote:
> Hi,
> 
> Thanks for the reports.
> Of course, we are also interested in testing, benchmarking, and other
> results of using Amber with the latest cuda 12 versions.
> 
> Note that the additional cuda version check is
> ===
> CMake Error at
> AmberTools/src/quick/quick-cmake/QUICKCudaConfig.cmake:94 (message):
>   Error: Unsupported CUDA version.  quick requires CUDA version >= 8.0
>   and <= 12.0.  Please upgrade your CUDA installation or disable
>   building with CUDA.
> ===
> 
> In addition to modifying that cmake source file, one can also disable
> building QUICK; add this to your cmake command either on the command line
> or in your run_cmake file:
> ===
> -DBUILD_QUICK=FALSE
> ===
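> 
> For example (illustrative only; keep whatever other options your
> run_cmake already passes, the only new piece here is the QUICK switch):
> ===
> cmake $AMBER_PREFIX/amber22_src \
>     -DCMAKE_INSTALL_PREFIX=$AMBER_PREFIX/amber22 \
>     -DCUDA=TRUE -DBUILD_QUICK=FALSE
> ===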
> 
> scott
> 
> On Fri, Nov 10, 2023 at 07:14:10PM +0000, Mingxuan Jiang via AMBER wrote:
>> Dear Todd,
>> 
>> This worked wonderfully.
>> 
>> In addition to what you suggested, I also changed the 12.X to 12.4 in
>> the additional files under the amber22_src/AmberTools/src/quick/quick-cmake/
>> folder, and everything seemed to install well, to 100%.
>> 
>> I ran some minimization scripts, and they ran to completion.
>> 
>> pmemd.cuda -O -i min.in -o min_f_CTS.out -p pypy.prmtop -c pypy.inpcrd 
>> -r pypy_out.rst -ref pypy.inpcrd -x pypy_out.nc
>> 
>> The only warning I got was:
>> 
>> Note: The following floating-point exceptions are signalling: 
>> IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
>> 
>> Is this something of concern?
>> 
>> If necessary, I can attach the input files.
>> 
>> 
>> Thank you!
>> 
>> Best wishes,
>> Mingxuan Jiang
>> 
>> 
>> From: Todd Minehardt <todd.minehardt.gmail.com>
>> Date: Friday, 10 November 2023 at 14:00
>> To: Mingxuan Jiang <Mingxuan.Jiang.cruk.cam.ac.uk>, AMBER Mailing List 
>> <amber.ambermd.org>
>> Subject: Re: [AMBER] CUDA version for AMBER22
>> Mingxuan,
>> 
>> There is no need to roll back your CUDA 12.3 library; I recently
>> recompiled AmberTools 23 with CUDA 12.3, and it works just fine.
>> 
>> You just need to edit a few files.
>> 
>> First, edit the file cmake/CudaConfig.cmake and change line 76 to read:
>> 
>> elseif((${CUDA_VERSION} VERSION_GREATER_EQUAL 12.0) AND (${CUDA_VERSION} VERSION_LESS_EQUAL 12.3))
>> 
>> and rebuild.
>> 
>> There will be a few other version checks during your build process that
>> exit as before (I cannot recall off the top of my head which files they
>> emanate from); repeat the process of editing the code in the same way
>> (i.e., change VERSION_LESS to VERSION_LESS_EQUAL and 12.1 to 12.3).
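>> 
>> To locate those remaining checks, something like this should work (an
>> illustrative sketch, assuming you run it from the directory that
>> contains amber22_src):
>> 
>> # list CMake version guards that mention a CUDA version cap
>> grep -rn "VERSION_LESS" amber22_src --include="*.cmake" | grep -i cuda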
>> 
>> Cheers,
>> 
>> Todd
>> 
>> On Fri, Nov 10, 2023 at 3:53 AM Mingxuan Jiang via AMBER
>> <amber.ambermd.org> wrote:
>> Dear Sir/Madam,
>> 
>> I am Ming, a PhD student from CRUKCI, and a user of AMBER for a few 
>> years now.
>> 
>> Recently, our HPC cluster was upgraded to CUDA 12.3, and I was hoping
>> to compile AMBER22. However, when running ./run_cmake, the following
>> message was received:
>> 
>> Basically, the CUDA version is too high on the HPC. What would be the
>> best way to resolve this?
>> 
>> 
>> -- **************************************************************************
>> -- Starting configuration of Amber version 22.0.0...
>> -- CMake Version: 3.20.2
>> -- For how to use this build system, please read this wiki:
>> --     http://ambermd.org/pmwiki/pmwiki.php/Main/CMake
>> -- For a list of important CMake variables, check here:
>> --     http://ambermd.org/pmwiki/pmwiki.php/Main/CMake-Common-Options
>> -- **************************************************************************
>> -- Amber source found, building AmberTools and Amber
>> -- Looking for pthread.h
>> -- Looking for pthread.h - found
>> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
>> -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
>> -- Looking for pthread_create in pthreads
>> -- Looking for pthread_create in pthreads - not found
>> -- Looking for pthread_create in pthread
>> -- Looking for pthread_create in pthread - found
>> -- Found Threads: TRUE
>> -- Found CUDA: /usr/local/cuda (found version "12.3")
>> -- CUDA version 12.3 detected
>> CMake Error at cmake/CudaConfig.cmake:84 (message):
>>   Error: Untested CUDA version.  AMBER currently requires CUDA version
>>   >= 7.5 and <= 12.1.
>> Call Stack (most recent call first):
>>   CMakeLists.txt:119 (include)
>> 
>> -- Configuring incomplete, errors occurred!
>> See also "/Users/jiang02/new_amber/amber22_src/build/CMakeFiles/CMakeOutput.log".
>> See also "/Users/jiang02/new_amber/amber22_src/build/CMakeFiles/CMakeError.log".
>> 
>> If errors are reported, search for 'CMake Error' in the cmake.log file.
>> 
>> If the cmake build report looks OK, you should now do the following:
>> 
>>     make install
>>     source /Users/jiang02/new_amber/amber22/amber.sh
>> 
>> Consider adding the last line to your login startup script, e.g. ~/.bashrc
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Nov 12 2023 - 01:30:02 PST