Re: [AMBER] cpptraj readdata bad_alloc

From: Daniel Roe <>
Date: Mon, 24 Oct 2016 11:10:08 -0400

Hi Niel,

On Sun, Oct 23, 2016 at 3:01 PM, Niel Henriksen <> wrote:
> I discovered the problem on a machine with 120 GB of memory, but it still
> occurs on a machine with 512 GB too, so I don't think it's a memory issue.
> Is there a hardcoded limit to something in readdata or autocorr?

It's not a hard-coded limit; what's happening is you're overflowing an
integer. A detailed explanation (if you are interested) follows, but I
wanted to let you know I'm working on a fix for 'corr' with large data
sets in the GitHub version of cpptraj, which should be available later
today. I'll let you know when it's up.


Details: Under the hood, by default cpptraj uses FFTs to calculate the
autocorrelation (via the convolution theorem). For efficiency the FFT
code requires the input data size to be a power of 2, so cpptraj rounds
the data size up to the next power of two; for 130M values this is
134217728 (2^27). In addition, when using the convolution theorem this
way you need to pad the end of non-periodic input data with zeros to
avoid end effects, which doubles the size to 2^28. However, PubFFT
requires a workspace that is 4 times larger than the actual FFT size,
so what you really end up needing is 2^30. For 135M values the next
power of two is actually 2^28, so the final requested size ends up
being 2^31, which is exactly one more than the maximum value of a
signed 4-byte integer. The requested size therefore wraps around to a
negative number, and as a result the allocation fails. To avoid this,
I'll be switching to a wider unsigned integer type.

Daniel R. Roe
Laboratory of Computational Biology
National Institutes of Health, NHLBI
5635 Fishers Ln, Rm T900
Rockville MD, 20852
AMBER mailing list
Received on Mon Oct 24 2016 - 08:30:02 PDT