Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

From: Michael Coleman <>
Date: Sun, 23 Jan 2022 21:42:39 +0000

Hi Gerald,

Thanks very much for your quick response. I'm realizing that my email lacked a key detail. My primary goal is just to get a working build of any sort, rather than to use containers in production (which complicates debugging). The documented Amber build procedure isn't working for us, and I was hoping that seeing a working version would provide some hints.

The Amber license complicates things, but it seems like sharing just a Singularity def wouldn't offend too badly.

In any case, the good news is that looking again at the Spack recipe this weekend (last attempt a few months ago), it appears to work now! Definitely yes for CUDA. Still having some multi-node MPI startup issues, but that might be more about Spack than Amber.

As a breadcrumb for others struggling to build Amber, here's my current best try:

    spack install -j 56 amber@20 %gcc +x11 +mpi +cuda ^cuda@10.2.89 ^openmpi schedulers=slurm +cuda fabrics=auto

This is on the 'devel' Spack branch, pulled a couple of days ago.

This build passes some of the simple test cases for single-node MPI driving multiple K80s. (I haven't yet run the complete test suite, but I'm optimistic.)

The '^cuda@10.2.89' works around an exasperating undocumented behavior of the Amber build, which is that GPU code is generated, or not, for different cards depending on the CUDA version number. In particular, K80 (compute capability 3.7, sm_37) code is not generated under CUDA 11, even though CUDA 11 still supports K80s. The Amber code doesn't fail obviously for lack of this code; instead one gets an obscure runtime diagnostic.
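To make the failure mode concrete, it can be thought of as a mapping from CUDA toolkit version to the set of architectures the build emits. The table below is purely illustrative (it is NOT Amber's actual arch list, which lives in its build scripts and may differ); it just shows how an sm_37 (K80) binary can be absent from a CUDA 11 build even though the toolkit itself still accepts sm_37:

```python
# Illustrative sketch only: a hypothetical mapping from CUDA toolkit
# version to the GPU architectures a build system might emit.
# Amber's real list is in its build scripts and may differ.
HYPOTHETICAL_ARCHS = {
    "10.2": ["sm_35", "sm_37", "sm_50", "sm_60", "sm_70", "sm_75"],
    "11.0": ["sm_52", "sm_60", "sm_70", "sm_75", "sm_80"],  # no Kepler sm_3x
}

def has_code_for(toolkit: str, arch: str) -> bool:
    """True if a build under this toolkit would embed code for this arch."""
    return arch in HYPOTHETICAL_ARCHS.get(toolkit, [])

# A K80 is compute capability 3.7 (sm_37):
print(has_code_for("10.2", "sm_37"))  # True  -> runs on a K80
print(has_code_for("11.0", "sm_37"))  # False -> obscure runtime failure
```

To see what a real binary actually contains, 'cuobjdump --list-elf' on pmemd.cuda will list the embedded sm_* cubins.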

I'll update that 'spack install' command if I get a better one.


-----Original Message-----
From: Gerald Monard <>
Sent: Sunday, January 23, 2022 3:39 AM
To: AMBER Mailing List <>
Subject: Re: [AMBER] any working singularity or docker recipe? (esp for CUDA and MPI)

Also, just for the record, sharing my experience with Docker and
Singularity containers here.
What I have found so far is that these containers are not really as
portable as one might think.
Basically, for the serial version of Amber, that's OK. You can take a
container and go wherever you want (laptop to HPC center) and use it.
But for the CUDA version and the MPI version, it can get very tricky.
For the CUDA version, you can end up with conflicts between the CUDA
version inside the container and the driver on the host.
For the MPI version, it depends whether you want to use the MPI of
the HPC center or your own, and you can easily end up with
incompatibility issues.
So sometimes, to get the best performance from Amber, it is better to
build a container yourself, tailored to the platform you will run on,
than to rely on a pre-defined container made elsewhere.
My 2 cents :-)
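[To make that last point concrete, a minimal Singularity definition along these lines might look like the sketch below. It is only a sketch: the base-image tag and the package list are assumptions for Ubuntu 18.04, and because Amber is licensed you must supply the source tarball yourself and add the usual cmake build steps in %post.]

```
Bootstrap: docker
From: nvidia/cuda:10.2-devel-ubuntu18.04

%post
    # Build prerequisites (package names assumed for Ubuntu 18.04)
    apt-get update && apt-get install -y \
        build-essential cmake gfortran flex bison \
        libopenmpi-dev openmpi-bin python3
    # Amber sources are licensed: copy in your own Amber20 tarball,
    # then unpack it and run the usual cmake build here.

%environment
    export AMBERHOME=/opt/amber20
    export PATH=$AMBERHOME/bin:$PATH
```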


On Sun, Jan 23, 2022 at 8:31 PM Gerald Monard <> wrote:

> Hello,
> I have developed singularity and docker containers to build and test the
> Amber source code for different linux flavors. This is how the
> build test page is generated. But these
> containers only serve to _build_ Amber; they don't _contain_ Amber per se.
> I am working on recipes to include the binaries after the build stage into
> the containers. Hopefully, these will be included in the next Amber release.
> In the meantime, I can share this with you (off the list) if you are
> interested in beta testing and giving some feedback.
> best regards,
> Gerald
> On Sat, Jan 22, 2022 at 2:05 PM Michael Coleman <>
> wrote:
>> Hi,
>> Does anyone have a suggestion for a Singularity or Docker script to build
>> a GPU- and MPI-enabled build of Amber? (and AmberTools?)
>> It seems like there once was a Singularity script, 'amberity', which
>> probably encapsulated a Singularity recipe, but it no longer seems
>> accessible.
>> This page mocks me, as it seems to be using such containerization. No
>> source, though, so no help.
>> I'm on RHEL 7, so the solution has to work on Linux generally. Beyond
>> that, I'm all ears.
>> Thank you,
>> Mike
>> Michael Coleman
>> Computational Scientist
>> Research Advanced Computing Services (HPC)
>> University of Oregon
>> _______________________________________________
>> AMBER mailing list
AMBER mailing list

Received on Sun Jan 23 2022 - 14:00:02 PST