Re: [AMBER] Analysis of pmemd performance for exascale

From: David A Case via AMBER <amber.ambermd.org>
Date: Thu, 5 Oct 2023 08:35:53 -0600

On Wed, Oct 04, 2023, Charles Laughton via AMBER wrote:
>
>As part of an Exascale project I’ve been asked to gather information
>on biomolecular simulation code performance in terms of numerical
>methods. Specifically, how - for typical large (c. 1M atom) problem sizes
>- codes divide their time between the “numerical dwarves”. For
>reference, these are a) Dense linear algebra, b) Sparse linear algebra,
>c) Spectral methods (e.g. FFT), d) N-body methods, e) Structured grids,
>f) Unstructured grids, g) MapReduce, h) Combinational logic, i) Graph
>traversal, j) Dynamic programming, k) Backtrack & Branch-and-bound, l)
>Graphical models, and m) Finite state machines. What percentage of a
>typical run would be spent doing each of these?
>
>Does anyone have any data on this for the latest releases of pmemd.cuda, in
>particular?

We don't use this sort of language to analyze timings in pmemd. A small
fraction of the time in a 1 million atom run would involve FFTs. I'm not
sure how to categorize all the rest (N-body methods?).

In our terminology, a big chunk of the time goes into computing the
non-bonded interactions (N-body?), and another big piece goes into
building the non-bonded pair list (graph traversal?).
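
For anyone who wants rough numbers: pmemd prints a TIMINGS table at the
end of the mdout file, and those per-routine percentages can be bucketed
into the dwarf categories above. (The FFT time is the reciprocal-space
part of PME; the direct-space sum and the pair-list build are the N-body
and graph-traversal candidates.) Below is a minimal Python sketch, not
an official AMBER tool; the routine names and table layout it assumes
vary between pmemd versions and between CPU and GPU builds, so treat the
mapping as an illustrative guess and adjust it to whatever your mdout
actually reports.

    # dwarf_tally.py -- a rough sketch, NOT an official AMBER tool.
    # Assumes TIMINGS rows look like "|   Nonbond   123.45   85.00"
    # (routine, seconds, percent); real names and layout vary by
    # pmemd version, so edit the regex and mapping to match yours.
    import re
    import sys

    # Hypothetical routine -> dwarf mapping; adjust to your mdout.
    DWARF_OF = {
        "Nonbond": "N-body (direct space)",
        "List":    "Graph traversal? (pair list)",
        "FFT":     "Spectral (PME reciprocal)",
        "Bond":    "N-body (bonded terms)",
    }

    totals = {}
    row = re.compile(r"^\|\s+(\S+)\s+([\d.]+)\s+([\d.]+)\s*$")
    with open(sys.argv[1]) as mdout:
        for line in mdout:
            m = row.match(line)
            if m:
                name, _sec, pct = m.groups()
                bucket = DWARF_OF.get(name, "Unclassified")
                totals[bucket] = totals.get(bucket, 0.0) + float(pct)

    for bucket, pct in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{bucket:28s} {pct:6.2f} %")

Run it as "python dwarf_tally.py mdout"; anything it can't classify
lands in "Unclassified", which is probably the honest answer for most
of the categories in the original list.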

I understand that this is probably not very helpful. Maybe others on the
list have some better insight here.

....dac


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Oct 05 2023 - 08:00:02 PDT