Re: [AMBER] Compilation with MVAPICH2-GDR

From: Scott Brozell via AMBER <amber.ambermd.org>
Date: Tue, 5 Aug 2025 16:59:41 -0400

Hi,

Yes, i.e., GPU to GPU direct communication can be used for any GPUs,
whether on the same node or on different nodes.
The key to improved performance is to check that your HPC cluster
has the hardware and software support. Presumably, if the cluster staff
have installed MVAPICH2-GDR, then you are good to go.
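
If you want to double-check, a quick sanity test from a GPU node could
look like the sketch below. The exact module names and tools are
assumptions and vary by cluster and driver version:

    # confirm the MPI in your environment is MVAPICH2-GDR 2.3.7 or later
    mpiname -a
    # confirm the GPUDirect RDMA kernel module is loaded
    # (nvidia_peermem on newer drivers, nv_peer_mem on older ones)
    lsmod | grep -E 'nvidia_peermem|nv_peer_mem'
    # inspect GPU/NIC topology (NVLink vs PCIe paths)
    nvidia-smi topo -m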

For more details, see:
https://mvapich.cse.ohio-state.edu/userguide/gdr/
https://network.nvidia.com/products/GPUDirect-RDMA/
https://mug.mvapich.cse.ohio-state.edu/static/media/mug/presentations/23/MUG23WednesdaySamKhuvis.pdf
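
For reference, the run_cmake edit described in manual section 2.2.8
would look roughly like the sketch below. Only the MVAPICH2GDR option is
from the manual; the surrounding flags are illustrative assumptions, and
your local run_cmake will differ:

    # inside the Linux section of run_cmake (sketch, not a verbatim copy)
    cmake $AMBER_PREFIX/amber24_src \
        -DCMAKE_INSTALL_PREFIX=$AMBER_PREFIX/amber24 \
        -DCOMPILER=GNU \
        -DMPI=TRUE -DCUDA=TRUE \
        -DMVAPICH2GDR_GPU_DIRECT_COMM=TRUE \
        2>&1 | tee cmake.log

After rebuilding, pmemd.cuda.MPI is launched as usual; depending on how
your MVAPICH2-GDR was installed, you may also need runtime settings such
as MV2_USE_CUDA=1, so check with your cluster staff.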

scott

On Mon, Aug 04, 2025 at 11:05:49AM +0200, Yasser Almeida via AMBER wrote:
>
> I am compiling AMBER24 (pmemd24) on an HPC cluster with GPU nodes. I have
> nodes with 2xA30 and 8xH100 GPUs. I compiled AMBER with MPI and it works
> fine. In the manual, section 2.2.8 says:
>
> As of Amber 24, significantly improved performance of pmemd.cuda.MPI is
> available through the MVAPICH MPI library's MVAPICH2-GDR GPU to GPU
> direct communication facility[20]. The improvement is 84% for the
> explicit solvent subset of the Amber benchmark suite. Users must
> manually activate this feature: Edit run_cmake and add
> -DMVAPICH2GDR_GPU_DIRECT_COMM=TRUE to the Linux section. And you must
> use MVAPICH2-GDR version 2.3.7 or later as your MPI. If you employ this
> feature then, in addition to citing Amber, please also cite reference
> [20] and note whether this capability enabled larger simulations.
>
> Where it says "GPU to GPU direct communication", does this refer to GPU
> to GPU direct communication between GPUs on different nodes and/or
> within a single node?


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber