Re: [AMBER] problem about pmemd.cuda.MPI

From: PUJAN AJMERA via AMBER <amber.ambermd.org>
Date: Thu, 9 Nov 2023 16:54:24 -0800

Hi Ning,

Are you aiming to make a single long trajectory run faster? In general,
pmemd.cuda does not scale well beyond one GPU because of communication
overhead (the GPUs compute much faster than the interconnect between them
can exchange data), so in your case the extra GPUs are, unsurprisingly,
actually slowing the run down.

If you are just looking to get good sampling, I would suggest splitting
your total trajectory length into 8 independent runs and running each on
one GPU, for example with a small launch script like the one below.
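
A minimal sketch of such a script is below. The input, topology, and
coordinate file names are placeholders for your own files, and ideally
each run should start with different initial velocities (e.g. ig=-1 in
the mdin file) so the copies sample independently:

  #!/bin/bash
  # Start 8 independent pmemd.cuda jobs, one pinned to each GPU.
  # All file names below are placeholders; adjust to your own system.
  for i in $(seq 0 7); do
      CUDA_VISIBLE_DEVICES=$i pmemd.cuda -O \
          -i prod.in \
          -p system.prmtop \
          -c system.inpcrd \
          -o prod_run${i}.out \
          -r prod_run${i}.rst7 \
          -x prod_run${i}.nc &
  done
  wait

Each copy then has exclusive use of one GPU, so you get roughly 8x the
aggregate sampling of a single run without paying any inter-GPU
communication penalty.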

All the best,
Pujan Ajmera
PhD Student at UCLA

On Thu, Nov 9, 2023 at 4:46 PM ning via AMBER <amber.ambermd.org> wrote:

> Hi amber experts,
>
>
> I have a question about how to use pmemd.cuda.MPI for long simulations.
> I found that when using a command like "mpirun -np 8 pmemd.cuda.MPI ...."
> to run on 8×2080Ti GPUs, the simulation performance dropped to 12.5% of
> a single GPU. What is going wrong here? I compiled the parallel version
> following the manual. How can I fix this problem? Thanks for any
> suggestions.
> Thanks for any suggestions.
>
>
> Best,
> Ning
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Nov 09 2023 - 17:00:03 PST