Right, thank you David!
So after compiling and testing the MPI version, I have not noticed a
difference between 1 and 2 GPUs for my system.
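(pmemd reports the ns/day throughput in the timing section of the output
and mdinfo files; assuming files named md.out and mdinfo, it can be
checked with something like
grep "ns/day" md.out mdinfo
)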
Here is my input file:
*************************************************************************
***************************** md.in ******************************
*************************************************************************
MD without restraints during 20ns at constant T= 330K & P= 1Atm
&cntrl
imin=0, ntx=5, ntpr=5000, ntwr=5000, ntwx=5000, ntwe=5000,
nscm=5000,
ntf=2, ntc=2,
ntb=2, ntp=1, taup=0.5,
nstlim=250000000, dt=0.002,
cut=10.0,
ntt=3, gamma_ln=2.0, ig=-1,
iwrap=1,
irest=1,
temp0=330.0
/
&ewald
netfrc = 0,
skin_permit = 0.75,
&end
In the output log I have:
------------------- GPU DEVICE INFO --------------------
|
| Task ID: 0
| CUDA_VISIBLE_DEVICES: 0,1
| CUDA Capable Devices Detected: 2
| CUDA Device ID in use: 0
| CUDA Device Name: NVIDIA RTX A6000
| CUDA Device Global Mem Size: 48685 MB
| CUDA Device Num Multiprocessors: 84
| CUDA Device Core Freq: 1.80 GHz
|
|
| Task ID: 1
| CUDA_VISIBLE_DEVICES: 0,1
| CUDA Capable Devices Detected: 2
| CUDA Device ID in use: 1
| CUDA Device Name: NVIDIA RTX A6000
| CUDA Device Global Mem Size: 48651 MB
| CUDA Device Num Multiprocessors: 84
| CUDA Device Core Freq: 1.80 GHz
|
|--------------------------------------------------------
|---------------- GPU PEER TO PEER INFO -----------------
|
| Peer to Peer support: ENABLED
|
| NCCL support: ENABLED
|
|--------------------------------------------------------
nvidia-smi shows 4 pmemd.cuda.MPI entries (the two MPI ranks each appear on both GPUs) :-)
| 0 N/A N/A 2532120 C pmemd.cuda.MPI 1061MiB |
| 0 N/A N/A 2532121 C pmemd.cuda.MPI 317MiB |
| 1 N/A N/A 2162 G /usr/lib/xorg/Xorg 110MiB |
| 1 N/A N/A 2783 G /usr/lib/xorg/Xorg 272MiB |
| 1 N/A N/A 3656 G /usr/bin/gnome-shell 90MiB |
| 1 N/A N/A 6720 G ...692524829162169360,131072 140MiB |
| 1 N/A N/A 1875033 G chimerax 64MiB |
| 1 N/A N/A 1937729 G ...RendererForSitePerProcess 76MiB |
| 1 N/A N/A 2532120 C pmemd.cuda.MPI 257MiB |
| 1 N/A N/A 2532121 C pmemd.cuda.MPI
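(To list only the compute processes, without the Xorg/desktop entries,
nvidia-smi's query mode can be used, for example:
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
)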
Given that I run the job on my 2-GPU workstation with
mpirun -np 2 pmemd.cuda.MPI -O
is everything correct?
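For reference, the full command line looks roughly like this (the input,
topology, coordinate and output file names below are just placeholders
for the actual ones):
mpirun -np 2 pmemd.cuda.MPI -O -i md.in -o md.out -p system.prmtop -c system.rst7 -r md.rst7 -x md.nc -inf mdinfo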
P.S. Do I need &end after &ewald?
Cheers
Enrico
On Thu, Aug 4, 2022 at 6:44 PM David A Case
<david.case.rutgers.edu> wrote:
>
> On Thu, Aug 04, 2022, Enrico Martinez wrote:
>
> >Dealing with the workstation equipped with 2 GPUs, do I need to
> >install pmemd.cuda.mpi that would allow me to use the both GPUs for
> >the same simulation?
>
> Yes: pmemd.cuda.MPI is automatically built if you ask for both MPI and CUDA.
>
> >If so, would it be possible to compile additionally pmemd.cuda.mpi to
> >the already installed amber22 (thus avoiding installation of all other
> >components from scratch) ?
>
> It's possible, but I don't recommend it. If you have already built the
> serial/cuda codes, turning on MPI and just doing "make install" will go pretty
> fast.
>
> ....dac
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Aug 05 2022 - 04:00:02 PDT