Got it!
I thought you ran continuous constant pH MD.
Best,
Masoud
________________________________
From: Nitin Kulhar <bo18resch11002.iith.ac.in>
Sent: Tuesday, July 30, 2024 14:57
To: Carlos Simmerling <carlos.simmerling.stonybrook.edu>
Cc: Masoud Keramati <keramati.m.northeastern.edu>; AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] Optimal GPU configuration for REMD
I used pmemd.cuda.MPI.
Logs from the run read as below:
Running multipmemd version of pmemd Amber22
Total processors = 8
Number of groups = 8
GPU-related excerpt from one of the 8 mdout files:
|--- GPU DEVICE INFO ---
| Task ID: 0
| CUDA_VISIBLE_DEVICES: 0,1
| CUDA Capable Devices Detected: 2
| CUDA Device ID in use: 0
| CUDA Device Name: Tesla V100-SXM2-16GB
| CUDA Device Global Mem Size: 16160 MB
| CUDA Device Num Multiprocessors: 80
| CUDA Device Core Freq: 1.53 GHz
|---------------------------------------------
|--- GPU PEER TO PEER INFO ---
| Peer to Peer support: ENABLED
| NCCL support: ENABLED
|---------------------------------------------
Between that and the generation of normal cpout files, I think it is working.
Regards
Nitin Kulhar
On Tue, Jul 30, 2024 at 9:37 PM Carlos Simmerling <carlos.simmerling.stonybrook.edu> wrote:
That seems odd, since the REMD portion requires MPI as far as I know.
On Tue, Jul 30, 2024 at 12:04 PM Masoud Keramati via AMBER <amber.ambermd.org> wrote:
Hi Nitin,
Have you managed to finish pH-REMD with pmemd.cuda.MPI?
I tried a couple of times but I couldn't, so I had to use pmemd.cuda.
Best,
Masoud
________________________________
From: Nitin Kulhar via AMBER <amber.ambermd.org>
Sent: Tuesday, July 30, 2024 07:08
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] Optimal GPU configuration for REMD
Dear all
FYI: I managed to distribute 8 replicas over 2 GPUs by setting the CUDA_VISIBLE_DEVICES variable and using the NVIDIA Multi-Process Service (MPS), as shown in the excerpt from the Slurm script below:
# start the NVIDIA MPS control daemon
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-log
nvidia-cuda-mps-control -d
# expose both GPUs to the multi-GPU run
export CUDA_VISIBLE_DEVICES=0,1
# launch the 8-replica job
mpirun -np 8 pmemd.cuda.MPI -ng 8 -groupfile groupfile
# shut down MPS
echo quit | nvidia-cuda-mps-control
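For anyone reproducing this: the groupfile passed to -ng 8 simply lists the command-line arguments for each replica, one line per replica. A minimal sketch with placeholder file names (not the actual ones from this run; the per-replica solvent pH is set via solvph in each replica's mdin):
# one line per replica; replicas differ only in their mdin (solvph) and coordinate/output files
-O -i mdin.ph1 -p system.parm7 -c ph1.rst7 -cpin system.cpin -o ph1.mdout -r ph1_out.rst7 -x ph1.nc -cpout ph1.cpout -cprestrt ph1.cprestrt
-O -i mdin.ph2 -p system.parm7 -c ph2.rst7 -cpin system.cpin -o ph2.mdout -r ph2_out.rst7 -x ph2.nc -cpout ph2.cpout -cprestrt ph2.cprestrt
... (one such line for each remaining replica, through mdin.ph8)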
P.S.: MPS was used following David Cerutti's suggestion on another thread:
http://archive.ambermd.org/201910/0361.html
Regards
Nitin Kulhar
On Mon, Jul 29, 2024 at 2:30 PM Nitin Kulhar <bo18resch11002.iith.ac.in> wrote:
> Dear all
>
> I am a novice to the theory and practice of REMD simulations.
> I am looking to run 6 replicas at different pH values on GPU.
>
> System:
> Protein-ligand complex in explicit solvent; 30510 atoms in total.
>
> Task:
> pH-REMD job consisting of 6 replicas, to be run with pmemd.cuda.MPI.
>
> Resources:
> Each compute node has 2 NVIDIA V100-SXM2 cards, each with 16 GB of global memory.
> Exclusive compute mode is NOT enabled on the GPUs.
>
> Is it possible to allocate 2 GPUs to the 6 replicas?
> Is it advisable to do so (accuracy concern)?
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Tue Jul 30 2024 - 12:30:01 PDT