Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

From: David A Case <david.case.rutgers.edu>
Date: Fri, 4 Feb 2022 08:46:34 -0500

On Fri, Feb 04, 2022, Michael Coleman wrote:

>Finally managed a path to get this compiled (details to follow). In
>testing 'pmemd.cuda.MPI' on the 'jac' benchmark, I'm seeing our multiple
>K80s lighting up as expected, but there is no benefit in wall-clock time
>for adding multiple GPUs. If anything, adding GPUs increases running time.

That is a common experience, although others might comment on what they
would expect for jac on K80s. Check out the DHFR (aka jac) benchmark
results for K80s here:

     https://ambermd.org/gpus14/benchmarks.htm#

This shows significant speedups on going to two or four GPUs, but results
like this are generally quite dependent on the interconnect hardware and
its software settings. Also look at the last note on "Maximizing GPU
performance" here:

    https://ambermd.org/GPULogistics.php
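
As a rough sketch (not taken from that page verbatim; the file names and
GPU ids below are just placeholders), the usual pre-flight checks and a
two-GPU launch look something like this:

    # How are the GPUs connected (PCIe topology, peer-to-peer paths)?
    nvidia-smi topo -m

    # Settings commonly recommended for dedicated MD nodes:
    # persistence mode on, compute mode exclusive-process.
    sudo nvidia-smi -pm 1
    sudo nvidia-smi -c 3

    # Restrict the run to one pair of GPUs, one MPI rank per GPU.
    export CUDA_VISIBLE_DEVICES=0,1
    mpirun -np 2 pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd \
        -o mdout -r restrt -x mdcrd

If the selected GPUs cannot do peer-to-peer transfers (for example, if
they sit behind different PCIe root complexes), a multi-GPU run can
easily end up slower than a single-GPU run.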

>Is there an available example that would be expected to show a benefit with
>(say) eight or sixteen K80s?

I don't know of any such example for a single MD run. Multiple GPUs can be
great for independent simulations, or for things like replica-exchange,
where communication between GPUs is minimal.
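
For illustration only (file names here are placeholders), independent
simulations can simply be pinned to separate devices, and replica
exchange uses pmemd's multi-group launch:

    # Two unrelated simulations, one per GPU, running concurrently.
    CUDA_VISIBLE_DEVICES=0 pmemd.cuda -O -i md1.in -p sys1.prmtop \
        -c sys1.inpcrd -o md1.out -r md1.rst &
    CUDA_VISIBLE_DEVICES=1 pmemd.cuda -O -i md2.in -p sys2.prmtop \
        -c sys2.inpcrd -o md2.out -r md2.rst &
    wait

    # Temperature replica exchange: 8 replicas, one MPI rank (and one
    # GPU) per replica, command lines collected in a groupfile.
    mpirun -np 8 pmemd.cuda.MPI -ng 8 -groupfile remd.groupfile -rem 1

Each replica runs on its own GPU and only exchanges a small amount of
data at the swap attempts, which is why that kind of workload scales so
much better than a single tightly coupled MD run.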


....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 04 2022 - 06:00:02 PST