Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

From: Michael Coleman <mcolema5.uoregon.edu>
Date: Fri, 4 Feb 2022 07:58:20 +0000

Finally managed a path to get this compiled (details to follow). In testing 'pmemd.cuda.MPI' on the 'jac' benchmark, I'm seeing our multiple K80s light up as expected, but there is no wall-clock benefit from adding GPUs. If anything, adding GPUs increases the running time.
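
For reference, the runs are launched along these lines. This is a rough Python sketch rather than my exact driver script, and the input file names (mdin, prmtop, inpcrd) are placeholders for whatever your copy of the jac benchmark uses:

    import os
    import subprocess

    n_gpus = 2  # one MPI rank per GPU

    env = os.environ.copy()
    # Expose only the first n_gpus devices so each rank lands on its own K80.
    env["CUDA_VISIBLE_DEVICES"] = ",".join(str(i) for i in range(n_gpus))

    subprocess.run(
        ["mpirun", "-np", str(n_gpus), "pmemd.cuda.MPI",
         "-O",                # overwrite any existing output files
         "-i", "mdin",        # placeholder names; substitute the actual
         "-p", "prmtop",      # jac benchmark input files
         "-c", "inpcrd",
         "-o", f"mdout.{n_gpus}gpu"],
        env=env, check=True)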

My theory is that this example (about 20K atoms?) is simply too small to show such a benefit, and is being crushed by inter-GPU I/O overhead. Is there an available example that would be expected to show a benefit with (say) eight or sixteen K80s? Or, alternatively, is my theory wrong?
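
For what it's worth, the wall-clock comparison above is just the timing summary from each run's output, pulled out along these lines (this assumes pmemd reports an "ns/day = <number>" line in its final timing section, and reuses the mdout.<N>gpu naming from the sketch above):

    import re

    # Compare the reported throughput across GPU counts.
    for n in (1, 2, 4, 8):
        text = open(f"mdout.{n}gpu").read()
        m = re.search(r"ns/day\s*=\s*([0-9.]+)", text)
        print(f"{n} GPU(s): {m.group(1) if m else 'no timing line found'} ns/day")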

Unfortunately, Google is no help on this, and I have no time to read the manual. :-(

Mike



-----Original Message-----
From: David A Case <david.case.rutgers.edu>
Sent: Thursday, February 3, 2022 3:41 AM
To: AMBER Mailing List <amber.ambermd.org>
Subject: Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

On Wed, Feb 02, 2022, Michael Coleman wrote:

>The problem turns out to be that this file is listed in a .gitignore.

Aah...good catch. I've been bitten by similar problems in the past, but
didn't think to look for that this time.

>I'd call that a bug, but it depends on exactly what your workflow is, I suppose.

I agree: it's a bug. If all one does is untar the distribution file and proceed to run cmake, it doesn't show up. But I'll get this fixed.

Thanks for the report....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Feb 04 2022 - 00:00:02 PST