Re: [AMBER] Building Amber16 CPU with Intel Compiler and GPU with GNU Compiler vs. MPI

From: Scott Brozell <sbrozell.rci.rutgers.edu>
Date: Sat, 14 Apr 2018 17:36:28 -0400

Hi,

On Fri, Apr 13, 2018 at 11:05:22AM -0400, Daniel Roe wrote:
> On Thu, Apr 12, 2018 at 1:14 PM, Ryan Novosielski <novosirj.rutgers.edu> wrote:
> >
> > My question is what the recommendation is for what to do about MPI, and subsequently, the modules system (which is out of scope here, but someone may have solved the problem). What I'd done so far was to compile everything with the same Intel compiler, including the MVAPICH2 stack, and this had worked. If I use the GNU compiler for the GPU portions, I suppose I'll have to use MVAPICH2 compiled with the GNU compiler as well, which I suppose then complicates hierarchical modules/prevents one from using the MPI-enabled CPU and GPU versions together, if that's something anyone ever needs to do.
>
> In practice, modules actually make supporting the separate CPU/GPU
> builds of Amber simple. What I do for our local cluster is build the
> CPU version with Intel compilers and either Intel MPI or Mvapich built
> with Intel compilers, and the GPU version with GCC5/Mvapich (GNU)/CUDA
> 8. Then, to simplify things for users I create Amber and AmberGPU
> modules in the core modulefiles directory that take care of loading in
> the required modules, so that all users have to do is e.g.
>
> module load AmberGPU
>
> to get the GPU build of Amber (instead of 'module load gcc/5.4.0
> mvapich2 cuda' etc). This also makes it easy when you're updating the
> Amber build since the top-level module name doesn't ever change.
>
> I'm a big fan of using GCC with the GPU build of Amber. I found that
> when doing burn-in testing of GPUs with pmemd.cuda I always get
> reproducibly consistent results with GNU/CUDA, whereas with Intel/CUDA
> sometimes I get two sets of results (on the same GPUs). Of course this
> was probably with Intel 16 compilers (I don't remember exactly when I
> did these tests but it was a year or so back) so YMMV with newer
> versions.
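
As an aside, a minimal wrapper modulefile along the lines Dan describes
could look roughly like the following (the modulefile body is Tcl
environment-modules syntax written out via a heredoc; the paths, module
names, and versions are just examples, not anyone's actual layout):

cat > /apps/modulefiles/Core/AmberGPU <<'EOF'
#%Module1.0
## AmberGPU: convenience wrapper for the GPU build of Amber16
module load gcc/5.4.0
module load mvapich2/2.2
module load cuda/8.0
setenv          AMBERHOME        /apps/amber16-gpu
prepend-path    PATH             /apps/amber16-gpu/bin
prepend-path    LD_LIBRARY_PATH  /apps/amber16-gpu/lib
EOF

After that, 'module load AmberGPU' pulls in the compiler, MPI, and cuda
modules and puts the GPU binaries on PATH.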

I agree on building Amber's cuda executables with GNU compilers.
I have reported on this list about issues with cuda executables
that were built with Intel compilers (I have not finished the
investigation, but optimization options appear to be the culprit).

I also agree with Dan that modules make supporting multiple builds easy,
but I follow a different approach at the Ohio Supercomputer Center: I use
only one Amber module for all executables (serial, mpi, and cuda, and once
upon a time MIC). The serial and mpi executables are built with Intel
compilers, and the cuda executables are built with GNU; specifically, the
GNU compiler used is the one under the hood of the Intel compilers, which
is the system GNU compiler.

So for Amber16 we have serial and mpi executables built with Intel 16.0.3
and MVAPICH2 2.2, and cuda executables built with GNU 4.8.5.
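
Roughly, that build is two configure/make passes with different stacks
loaded. A sketch (the module names are examples for illustration; adjust
to your site, and add -mpi to the cuda pass if you also want
pmemd.cuda.MPI):

cd $AMBERHOME

# serial and mpi executables with the Intel stack
module load intel/16.0.3 mvapich2/2.2
./configure intel      && make install
./configure -mpi intel && make install

# cuda executables with the system GNU compiler (gcc 4.8.5) and CUDA 8
module purge
module load cuda/8.0
./configure -cuda gnu  && make install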

In our approach, using a cuda executable requires loading the cuda module
as well as the amber/16 module. For Amber18 I plan to use rpath to
eliminate the need to load the cuda module, as we have already done, for
example, for LAMMPS.
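
The rpath approach just records the cuda library directory in the
executables at link time, so the runtime loader can find the cuda shared
libraries without the cuda module in the environment. A rough sketch
(the CUDA prefix is an example, and exactly where to add the flag depends
on how you patch the link flags in Amber's generated config.h):

# assumption: CUDA 8 installed under this prefix
export CUDA_HOME=/usr/local/cuda-8.0

# add something like this to the link line of the cuda executables:
#   -Wl,-rpath,$CUDA_HOME/lib64

# then verify after the build that the rpath is recorded and the
# cuda libraries resolve without the cuda module loaded:
readelf -d $AMBERHOME/bin/pmemd.cuda | grep -E -i 'rpath|runpath'
ldd $AMBERHOME/bin/pmemd.cuda | grep libcu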

scott


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sat Apr 14 2018 - 15:00:01 PDT