Thanks for the mail, Dr. Brozell.

Following the `mpicc -show` output you shared, I made changes to my mpicc
wrapper, which helped the configuration along, albeit not all the way.
A snippet of cmake.log:
-- Found MPI_C: /path/to/mpicc (found version "3.1")
-- Found MPI_CXX: /path/to/mpicxx (found version "3.1")
-- Found MPI_Fortran: /path/to/mpif90 (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- MPI C Compiler: /path/to/mpicc
-- MPI CXX Compiler: /path/to/mpicxx
-- MPI Fortran Compiler: /path/to/mpif90
-- If these are not the correct MPI wrappers, then set
MPI_<language>_COMPILER to the correct wrapper and reconfigure.
CMake Error at cmake/LibraryTracking.cmake:219 (message):
Incorrect usage. At least one LIBRARY should be provided.
Call Stack (most recent call first):
cmake/MPIConfig.cmake:122 (import_libraries)
CMakeLists.txt:117 (include)
The full log is attached for reference.
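
From what I can tell, the error at LibraryTracking.cmake:219 indicates that
the MPI_<language>_LIBRARIES lists reached MPIConfig.cmake empty. A quick
existence check on the libraries I pass in run_cmake (assuming $MV2_INSTALL
points at the MVAPICH2-GDR prefix, as in my script) would be:

    ls -l $MV2_INSTALL/lib64/libmpi.so \
          $MV2_INSTALL/lib64/libmpicxx.so \
          $MV2_INSTALL/lib64/libmpifort.so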
> With respect to your collection of tools, i am building now with a similar
> set:
> Currently Loaded Modules:
> 4) gnu/11.2.0
> Note that i do not specify the compilers manually.
The rpm build of MVAPICH2-GDR compatible with our HPC hardware requires
GCC-11.2.0, which is not currently available on the HPC. So I installed
GCC-11.2.0 in my user space and ran primitive tests with a few headers.
Hence the -DCOMPILER=MANUAL in my run_cmake.
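
Since MVAPICH2's wrappers are MPICH-derived, I believe they honor the
MPICH_CC, MPICH_CXX, and MPICH_FC environment variables, which might be a
cleaner way to point them at the user-space GCC than editing the wrapper
scripts by hand. A sketch, with an illustrative install prefix:

    export MPICH_CC=$HOME/gcc-11.2.0/bin/gcc
    export MPICH_CXX=$HOME/gcc-11.2.0/bin/g++
    export MPICH_FC=$HOME/gcc-11.2.0/bin/gfortran
    mpicc -show   # should now report the user-space gcc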
> Presumably your hello world program did include some header files?
Yes.
> Have you tried to manually compile the test program
> /scratch/nitin.bt.iith/A24/amber24_src/cmake/test_include_stdlib.c ?
Yes, it worked as below:

nitin.bt.iith.login01: build$ mpicc -g -o test-include.out ../cmake/test_include_stdlib.c
nitin.bt.iith.login01: build$ srun -v -p gpu ./test-include.out
...
srun: launch/slurm: _task_start: Node gpu003, 1 tasks started
srun: launch/slurm: _task_finish: Received task exit notification for 1
task of StepId=203117.0 (status=0x0000).
srun: launch/slurm: _task_finish: gpu003: task 0: Completed
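
For completeness, since the original failure was about finding stdlib.h, the
header search paths the wrapper actually uses can be inspected with standard
gcc flags passed through mpicc:

    mpicc -v -E -x c /dev/null 2>&1 | \
        sed -n '/search starts here/,/End of search list/p'

If the directory containing stdlib.h is absent from that list when the
compiler runs under cmake, that would reproduce the error from
VerifyCompilerConfig.cmake.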
I would be grateful for any help in this regard.
Thanks
Nitin Kulhar
On Sat, Oct 19, 2024 at 12:14 AM Scott Brozell <sbrozell.comcast.net> wrote:
> Hi,
>
> On Fri, Oct 18, 2024 at 09:57:02PM +0530, Nitin Kulhar via AMBER wrote:
> > I am trying to install Amber24
> > with GCC-11.2.0, CUDA-11.4,
> > and MVAPICH2-GDR-2.3.7-1.
> >
> > I am unable to configure Amber
> > with wrapper compilers
> > (mpicc, mpicxx, and mpif90)
> > from MVAPICH2-GDR.
> >
> > My approach:
> > Edit the run_cmake to direct
> > cmake-3.24.2 to use
> > MVAPICH2-GDR's wrapper
> > compilers, which were edited
> > so as to have cmake invoke
> > executables, headers, libraries
> > from both GCC-11.2.0 and
> > CUDA-11.4, via the wrappers.
> >
> > I could edit the wrappers to
> > compile and run primitive
> > programs like "Hello World".
> > But, I am unable to configure
> > the build of Amber24 with the
> > 'run_cmake' edited as below:
> >
> > # Assume this is Linux:
> > export CC="mpicc"
> > export CXX="mpicxx"
> > export FC="mpif90"
> > export MV2_USE_CUDA=0
> > export MV2_USE_GDRCOPY=0
> > cmake $AMBER_PREFIX/amber24_src \
> > -DCMAKE_INSTALL_PREFIX=$AMBER_PREFIX/amber24 \
> > -DMPI=TRUE -DCUDA=TRUE -DNCCL=TRUE \
> > -DMVAPICH2GDR_GPU_DIRECT_COMM=TRUE \
> > -DCOMPILER=MANUAL \
> > -DCMAKE_C_COMPILER=$CC \
> > -DCMAKE_CXX_COMPILER=$CXX \
> > -DCMAKE_Fortran_COMPILER=$FC \
> > -DMPI_C_INCLUDE_PATH=$MV2_INSTALL/include \
> > -DMPI_C_LIBRARIES=$MV2_INSTALL/lib64/libmpi.so \
> > -DMPI_CXX_INCLUDE_PATH=$MV2_INSTALL/include \
> > -DMPI_CXX_LIBRARIES=$MV2_INSTALL/lib64/libmpicxx.so \
> > -DMPI_Fortran_INCLUDE_PATH=$MV2_INSTALL/include \
> > -DMPI_Fortran_LIBRARIES=$MV2_INSTALL/lib64/libmpifort.so \
> > -DBUILD_GUI=FALSE -DBUILD_QUICK=FALSE \
> > -DDOWNLOAD_MINICONDA=TRUE \
> > -DINSTALL_TESTS=TRUE \
> > -DCMAKE_BUILD_TYPE=Debug \
> > -DCMAKE_VERBOSE_MAKEFILE=TRUE \
> > 2>&1 | tee cmake.log
> > fi
> >
> > PS. PFA the cmake.log.
> >
> > On Mon, Oct 7, 2024 at 6:55 PM Nitin Kulhar <bo18resch11002.iith.ac.in>
> > wrote:
> >
> > > I am keen on seeing some speedup
> > > on multi-node multi-gpu tasks, e.g. with
> > > pmemd.cuda.MPI (Amber24).
> > >
> > > Therefore, I am writing to inquire
> > > about the recommended version of
> > > cuda drivers (and GNU compilers) with
> > > which to install mvapich2-gdr.2.3.7
> > > for subsequently building Amber24
> > > executables with cuda and/or mpi
> > > support.
> > >
> > > I am also unclear on whether the
> > > following switches can be
> > > simultaneously turned on while
> > > configuring Amber24:
> > > -DCUDA
> > > -DMVAPICH2GDR_GPU_DIRECT_COMM
> > > -DMPI
>
> Your cmake log shows:
> ...
> -- Testing if stdlib.h can be included using -D_BITS_FLOATN_H...
> CMake Error at cmake/VerifyCompilerConfig.cmake:96 (message):
> Your C compiler could not compile a simple test program using C99 mode to
> compile stdlib.h. Build output was: Change Dir:
> /scratch/nitin.bt.iith/A24/amber24_src/build/CMakeFiles/CMakeTmp
>
> /scratch/nitin.bt.iith/A24/amber24_src/cmake/test_include_stdlib.c:4:10:
> fatal error: stdlib.h: No such file or directory
>
> 4 | #include <stdlib.h>
> | ^~~~~~~~~~
> ...
>
> which suggests a basic problem with your compiler installation.
> Presumably your hello world program did include some header files?
> Have you tried to manually compile the test program
> /scratch/nitin.bt.iith/A24/amber24_src/cmake/test_include_stdlib.c ?
>
> With respect to your collection of tools, i am building now with a similar
> set:
> Currently Loaded Modules:
> 4) gnu/11.2.0
> 5) cuda/11.7.1
> 6) mvapich2-gdr/2.3.7-1
>
> mpicc -show
> gcc
> -I/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/include
> -I/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/include
> -lcuda
> -L/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/lib64/stubs
> -L/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/lib64
> -lcudart -lrt
> -Wl,-rpath,/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/lib64
> -Wl,-rpath,XORIGIN/placeholder -Wl,--build-id
> -L/apps/spack/0.17/root/linux-rhel8-zen/cuda/gcc/8.4.1/11.7.1-i3l77ix/lib64/
> -lm -I/apps/mvapich2-gdr/gnu/11.2/2.3.7-1/include
> -L/apps/mvapich2-gdr/gnu/11.2/2.3.7-1/lib64 -Wl,-rpath
> -Wl,/apps/mvapich2-gdr/gnu/11.2/2.3.7-1/lib64 -Wl,--enable-new-dtags -lmpi
>
> cmake /tmp/amber -DCMAKE_INSTALL_PREFIX=/tmp/amber24 -DCOMPILER=GNU
> -DMPI=TRUE -DCUDA=TRUE -DOPENMP=FALSE -DMVAPICH2GDR_GPU_DIRECT_COMM=TRUE
> -DBUILD_GUI=TRUE -DBUILD_QUICK=TRUE -DBUILD_REAXFF_PUREMD=TRUE
> -DINSTALL_TESTS=TRUE -DBUILD_PYTHON=TRUE -DDOWNLOAD_MINICONDA=TRUE
> -DCHECK_UPDATES=TRUE -DAPPLY_UPDATES=FALSE -DCMAKE_BUILD_TYPE=Release
> -DCOLOR_CMAKE_MESSAGES=FALSE -DCMAKE_VERBOSE_MAKEFILE=TRUE
>
> Note that i do not specify the compilers manually.
> This set of tools has been put in my path, via the modules, and i just let
> cmake find them, eg:
>
> which mpicc
> /apps/mvapich2-gdr/gnu/11.2/2.3.7-1/bin/mpicc
>
> grep mpicc cmake.log
> -- MPI C Compiler: /apps/mvapich2-gdr/gnu/11.2/2.3.7-1/bin/mpicc
>
> oschelp,
> scott
>
>