Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

From: Ross Walker <ross.rosswalker.co.uk>
Date: Fri, 4 Feb 2022 08:49:05 -0500

Hi Mike,

The AMBER GPU code was never really designed to run a single simulation across multiple GPUs; the focus was on maximum throughput and performance per dollar, which is achieved by running 8 individual MD simulations across 8 GPUs. If you want something that demonstrates using all the GPUs at once, although that is arguably of limited real-world use, you can run a large implicit solvent calculation. The Nucleosome benchmark included in http://ambermd.org/Amber18_Benchmark_Suite_RCW.tar.bz2 should scale to 8, maybe even 16, GPUs.
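
For reference, here is a rough sketch of the two launch modes (the run_0 ... run_7 directory layout, input file names and GPU count are placeholders for illustration, not part of the benchmark suite):

    # Throughput mode: 8 independent simulations, one pinned to each GPU
    for i in 0 1 2 3 4 5 6 7; do
      (cd run_$i && CUDA_VISIBLE_DEVICES=$i \
         $AMBERHOME/bin/pmemd.cuda -O -i mdin -p prmtop -c inpcrd \
         -o mdout -r restrt -x mdcrd) &
    done
    wait

    # One large job spread across 8 GPUs, e.g. the Nucleosome GB benchmark
    mpirun -np 8 $AMBERHOME/bin/pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd \
        -o mdout -r restrt -x mdcrd

The first pattern is where the GPU code shines; the second only pays off for systems large enough to keep every GPU busy.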

All the best
Ross

> On Feb 4, 2022, at 02:58, Michael Coleman <mcolema5.uoregon.edu> wrote:
>
> I finally managed to find a path to get this compiled (details to follow). In testing 'pmemd.cuda.MPI' on the 'jac' benchmark, I'm seeing our multiple K80s lighting up as expected, but there is no wall-clock benefit from adding GPUs. If anything, adding GPUs increases the running time.
>
> My theory is that this example (about 20K atoms?) is simply too small to show such a benefit, and is being crushed by inter-GPU I/O overhead. Is there an available example that would be expected to show a benefit with (say) eight or sixteen K80s? Or, alternatively, is my theory wrong?
>
> Unfortunately, Google is no help on this, and I have no time to read the manual. :-(
>
> Mike
>
>
>
> -----Original Message-----
> From: David A Case <david.case.rutgers.edu>
> Sent: Thursday, February 3, 2022 3:41 AM
> To: AMBER Mailing List <amber.ambermd.org>
> Subject: Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)
>
> On Wed, Feb 02, 2022, Michael Coleman wrote:
>
>> The problem turns out to be that this file is listed in a .gitignore.
>
> Aah...good catch. I've been bitten by similar problems in the past, but
> didn't think to look for that this time.
>
>> I'd call that a bug, but it depends on exactly what your workflow is, I suppose.
>
> I agree: it's a bug. If all one does is untar the distribution file and
> proceed to run cmake, it doesn't show up. But I'll get this fixed.
> proceed to run cmake, it doesn't show up. But I'll get this fixed.
>
> Thanks for the report....dac
>
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 04 2022 - 06:00:03 PST