Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

From: Michael Coleman <mcolema5.uoregon.edu>
Date: Sat, 5 Feb 2022 04:54:12 +0000

Hi Ross,

Thanks for your response. Mostly, I was just trying to ensure that my build wasn't flawed in some way. I hadn't spotted in the docs that multi-GPU runs don't help performance much, but it does make sense. Of all people, I should know better. :-P

Regards,
Mike


-----Original Message-----
From: Ross Walker <ross.rosswalker.co.uk>
Sent: Friday, February 4, 2022 5:49 AM
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

Hi Mike,

The AMBER GPU code was never really designed to run a single simulation across multiple GPUs, since the focus was on maximum throughput and performance per dollar. That is achieved by running 8 individual MD simulations across 8 GPUs. If you want something that demonstrates using all the GPUs at once, although that is arguably of limited real-world use, you can run a large implicit solvent calculation. The Nucleosome benchmark included in http://ambermd.org/Amber18_Benchmark_Suite_RCW.tar.bz2 should scale to 8, maybe even 16 GPUs.
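
For the throughput pattern, something along these lines is enough, with one independent pmemd.cuda run pinned to each GPU via CUDA_VISIBLE_DEVICES (the run directories and the mdin/prmtop/inpcrd file names below are just placeholders for your own system, not files shipped with the benchmark suite):

  for gpu in 0 1 2 3 4 5 6 7; do
    # one self-contained simulation per GPU, launched in the background
    ( cd run$gpu && \
      CUDA_VISIBLE_DEVICES=$gpu pmemd.cuda -O -i mdin -p prmtop -c inpcrd \
          -o mdout -r restrt -x mdcrd ) &
  done
  wait   # block until all eight runs finish

The scaling test, by contrast, is a single job spread over all the GPUs, e.g.

  mpirun -np 8 pmemd.cuda.MPI -O -i mdin -p prmtop -c inpcrd \
      -o mdout -r restrt -x mdcrd

with the usual caveat that a single multi-GPU run like this only pays off for very large systems.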

All the best
Ross

> On Feb 4, 2022, at 02:58, Michael Coleman <mcolema5.uoregon.edu> wrote:
>
> I finally managed to find a path to getting this compiled (details to follow). In testing 'pmemd.cuda.MPI' on the 'jac' benchmark, I'm seeing our multiple K80s light up as expected, but there is no wall-clock benefit from adding GPUs. If anything, adding GPUs increases the running time.
>
> My theory is that this example (about 20K atoms?) is simply too small to show such a benefit, and is being crushed by inter-GPU I/O overhead. Is there an available example that would be expected to show a benefit with (say) eight or sixteen K80s? Or, alternatively, is my theory wrong?
>
> Unfortunately, Google is no help on this, and I have no time to read the manual. :-(
>
> Mike
>
>
>
> -----Original Message-----
> From: David A Case <david.case.rutgers.edu>
> Sent: Thursday, February 3, 2022 3:41 AM
> To: AMBER Mailing List <amber.ambermd.org>
> Subject: Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)
>
> On Wed, Feb 02, 2022, Michael Coleman wrote:
>
>> The problem turns out to be that this file is listed in a .gitignore.
>
> Aah...good catch. I've been bitten by similar problems in the past, but
> didn't think to look for that this time.
>
>> I'd call that a bug, but it depends on exactly what your workflow is, I suppose.
>
> I agree: it's a bug. If all one does is untar the distribution file and
> then run cmake, it doesn't show up. But I'll get this fixed.
>
> Thanks for the report....dac

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 04 2022 - 21:00:02 PST