Probably very little. While caching does matter in a CPU MD simulation,
all of our codes are written to lay out memory in ways that take
advantage of cache gulps and miss as little as possible. One of the
engines does actually perform better at higher particle densities,
owing to a radically different layout of the nonbonded pairlist that
conserves cache space, but the overall difference is only about 10%,
and at lower particle densities or shorter cutoffs the situation is
reversed. All of this is a moot point, though, because the performance
code is pmemd.CUDA, which does its calculations on the GPU, where the
card's memory specs, clock rate, and CUDA core count are what matter.
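
To make the layout point concrete, here is a minimal sketch in C. It is
illustrative only, nothing to do with the actual pmemd data structures:
it just shows why a struct-of-arrays layout plays well with the cache,
compared to interleaving every particle's fields.

/* Array-of-structs: x, y, z, and charge interleaved per particle.
   A sweep over coordinates alone drags unused bytes through the
   cache with every line fetched. */
typedef struct {
    double x, y, z, charge;
} ParticleAoS;

/* Struct-of-arrays: each field contiguous, so a sequential sweep
   fills every cache line with useful data. */
typedef struct {
    double *x, *y, *z;
    int n;
} ParticlesSoA;

/* Count neighbors of particle i within a cutoff.  With the SoA
   layout the three coordinate arrays are read strictly sequentially,
   which is exactly the access pattern prefetchers reward. */
int count_neighbors(const ParticlesSoA *p, int i, double cutoff)
{
    double cut2 = cutoff * cutoff;
    int count = 0;
    for (int j = 0; j < p->n; j++) {
        if (j == i)
            continue;
        double dx = p->x[j] - p->x[i];
        double dy = p->y[j] - p->y[i];
        double dz = p->z[j] - p->z[i];
        if (dx * dx + dy * dy + dz * dz < cut2)
            count++;
    }
    return count;
}

The real pairlist layouts are more involved than this, but the
principle is the same: keep whatever the inner loop touches contiguous.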
In some applications, like big combinatorial search problems with large
tables and heavy dependence on random-access memory reads, you could
see a difference, but even in those cases my feeling is that the gain
would be marginal, or could be made marginal by writing the code
properly. A cache miss is bad no matter how you slice it, and it's a
rare algorithm that would keep a good programmer from organizing the
data to avoid a long stream of trips to RAM.
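
As an illustration of that last point, here is another small C sketch
(the names and setup are mine, not from any AMBER code). Summing table
entries through a shuffled index list is a worst case for the cache,
since nearly every access is a trip to RAM; sorting the index list
first turns the same gather into a mostly sequential sweep without
changing the result.

#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int ia = *(const int *)a;
    int ib = *(const int *)b;
    return (ia > ib) - (ia < ib);
}

double gather_sum(const double *table, int *idx, int nidx, int presort)
{
    /* Addition is order-independent here (up to rounding), so a
       cache-friendly visiting order is essentially free. */
    if (presort)
        qsort(idx, nidx, sizeof(int), cmp_int);

    double sum = 0.0;
    for (int i = 0; i < nidx; i++)
        sum += table[idx[i]];  /* sequential in memory after sorting */
    return sum;
}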
HTH,
Dave
On Thu, Nov 3, 2016 at 1:53 PM, Nikolay N. Kuzmich <nnkuzmich.gmail.com>
wrote:
> Dear Amber users,
>
> I would like to ask you how this difference in RAM frequency would
> affect the speed of MD simulations. How big would the gain in
> performance be?
>
> Kind regards,
> Nick
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber