Please ignore my previous message, since everything seems to work fine now. :-)
I just have a question about the parallelisation options (-DMPI=FALSE
in the cmake script).
Since my workstation is equipped with 2 GPUs, do I need to install
pmemd.cuda.MPI to be able to use both GPUs for the same simulation?
(I am working with a water-soluble enzyme in dimeric form; total
system size 80-100k atoms.)
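For context, if pmemd.cuda.MPI were installed, a two-GPU run might look roughly like the sketch below. This is a minimal example, not a tested recipe: the input, topology, and coordinate file names (md.in, prmtop, inpcrd) are placeholders, and the exact mpirun invocation depends on the local MPI installation.

```shell
# Make both A6000s visible (device IDs as reported by nvidia-smi).
export CUDA_VISIBLE_DEVICES=0,1

# Launch one MPI rank per GPU; file names here are hypothetical.
mpirun -np 2 pmemd.cuda.MPI -O -i md.in -p prmtop -c inpcrd \
       -o md.out -r md.rst -x md.nc
```

Note that multi-GPU scaling for a system of this size depends on peer-to-peer access between the two cards; a single A6000 may already be close to optimal.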
If so, would it be possible to additionally compile pmemd.cuda.MPI
against the already installed Amber22 (thus avoiding reinstallation of
all the other components from scratch)?
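For reference, re-running the build with MPI enabled might look roughly like this. This is a sketch assuming the amber22_src build tree mentioned earlier in the thread is still present; as noted below, the option has to be changed inside the run_cmake script itself, not passed via the environment.

```shell
# In amber22_src/build/run_cmake, change the MPI option
# (keeping the CUDA settings as they are):
#   -DMPI=TRUE -DCUDA=TRUE ...
cd amber22_src/build
./run_cmake        # re-run the configuration with MPI enabled
make install       # rebuild; installs into the same prefix as before
```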
Many thanks in advance
Enrico
On Thu, Aug 4, 2022 at 10:15 Enrico Martinez
<jmsstarlight.gmail.com> wrote:
>
> Thank you very much for this information!
> so I made that change in the run_cmake script itself.
>
> Now, while executing the cmake script, I get this:
> -- Found CUDA: /usr/local/cuda (found version "11.7")
> -- CUDA version 11.7 detected
> -- Configuring for SM3.5, SM5.0, SM5.2, SM5.3, SM6.0, SM6.1, SM7.0,
> SM7.5 and SM8.0
> -- Checking CUDA and GNU versions -- compatible
>
> It now shows at the end that CUDA has been detected:
> -- Features:
> -- MPI: OFF
> -- OpenMP: OFF
> -- CUDA: ON
> -- NCCL: OFF
> -- Build Shared Libraries: ON
> -- Build GUI Interfaces: ON
> -- Build Python Programs: ON
> -- Python Interpreter: Internal Miniconda (version 3.9)
> -- Build Perl Programs: ON
> -- Build configuration: RELEASE
> -- Target Processor: x86_64
> -- Build Documentation: OFF
> -- Sander Variants: normal LES API LES-API
> -- Install location: /home/gleb/amber22/
> -- Installation of Tests: ON
>
> -- Compilers:
> -- C: GNU 9.4.0 (/usr/bin/gcc)
> -- CXX: GNU 9.4.0 (/usr/bin/g++)
> -- Fortran: GNU 9.4.0 (/usr/bin/gfortran)
>
> -- Building Tools:
> -- addles amberlite ambpdb antechamber cifparse cphstats cpptraj emil
> etc few gbnsr6 gem.pmemd gpu_utils kmmd leap lib mdgx mm_pbsa
> mmpbsa_py moft nab ndiff-2.00 nfe-umbrella-slice nmode nmr_aux
> packmol_memgen paramfit parmed pbsa pdb4amber pmemd pymsmt pysander
> pytraj reduce rism sander saxs sebomd sff sqm xray xtalutil
>
> Would this be sufficient for the installation?
> Does gpu_utils contain pmemd.cuda?
> Do I need to export CUDA devices additionally?
>
> Here is the info on the system on which I am trying to install Amber22:
>
> +-----------------------------------------------------------------------------+
> | NVIDIA-SMI 515.43.04 Driver Version: 515.43.04 CUDA Version: 11.7 |
> |-------------------------------+----------------------+----------------------+
> | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
> | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
> | | | MIG M. |
> |===============================+======================+======================|
> | 0 NVIDIA RTX A6000 Off | 00000000:17:00.0 Off | Off |
> | 30% 40C P8 42W / 300W | 10MiB / 49140MiB | 0% Default |
> | | | N/A |
> +-------------------------------+----------------------+----------------------+
> | 1 NVIDIA RTX A6000 Off | 00000000:73:00.0 On | Off |
> | 30% 45C P8 34W / 300W | 589MiB / 49140MiB | 1% Default |
> | | | N/A |
> +-------------------------------+----------------------+----------------------+
>
>
> > On Wed, Aug 3, 2022 at 19:13 David A Case via AMBER
> <amber.ambermd.org> wrote:
> >
> > On Wed, Aug 03, 2022, Enrico Martinez wrote:
> > >
> > >cd amber22_src/build
> > >./run_cmake -DCUDA=TRUE -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-11.7
> >
> > This won't work: you have to edit the run_cmake script itself with those changes.
> > The script won't take things from your environment.
> >
> > ....dac
> >
> >
> > _______________________________________________
> > AMBER mailing list
> > AMBER.ambermd.org
> > http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Aug 04 2022 - 13:38:07 PDT