Re: [AMBER] Problem with "make install" after "./configure -mpi -cuda gnu"

From: Jason Swails <>
Date: Thu, 4 Oct 2012 14:42:50 -0400

On Thu, Oct 4, 2012 at 1:21 PM, Su, Shiquan <> wrote:

> Dear Jason M. Swails:
> I am the software manager of amer12 on Keeneland. I once did succeed to
> install the MPI version. by "./configure -mpi -cuda gnu", and "make
> install". I need to do the installation again, but I just can not succeed
> this time.

Ah, of course. I suppose that's pretty obvious from some of the
compilation output. My mistake.

> After all, I just want to finish a standard installation of Amber12
> including the MPI version (configured by "./configure -mpi -cuda gnu"). If
> you can do this successfully, would you please have a try on Keeneland, and
> send me your instruction? If you could not, would you please confirm the
> error with me? Thank you.

As it turns out, I always use the Intel compilers when building Amber
(including pmemd.cuda and pmemd.cuda.MPI). Here is my configuration (which
I just used to build Amber 12):

[swails.kidlogin1 ~/amber ]$ module list
Currently Loaded Modulefiles:
  1) modules
  2) torque/2.5.11
  3) moab/6.1.5
  4) gold
  5) mkl/2011_sp1.8.273
  6) intel/2011_sp1.8.273
  7) openmpi/1.5.1-intel
  8) PE-intel
  9) cuda/4.2
 10) python/2.7
 11) numpy/1.4.1
 12) netcdf/4.1.1

[swails.kidlogin1 ~/amber ]$ which ifort
[swails.kidlogin1 ~/amber ]$ which icc
[swails.kidlogin1 ~/amber ]$ which mpif90
[swails.kidlogin1 ~/amber ]$ which mpicc

End of build log:

mpif90 -ip -O3 -no-prec-div -xHost -DCUDA -DMPI -DMPICH_IGNORE_CXX_SEEK
-Duse_SPFP -o pmemd.cuda.MPI gbl_constants.o gbl_datatypes.o state_info.o
file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o
prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o nbips.o parallel_dat.o
parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o
bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o
prfs.o mol_list.o runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o
gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o bintraj.o binrestart.o
pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o nmr_lib.o
get_cmdline.o master_setup.o pme_alltasks_setup.o pme_setup.o
ene_frc_splines.o gb_alltasks_setup.o nextprmtop_section.o angles_ub.o
dihedrals_imp.o cmap.o charmm.o charmm_gold.o findmask.o remd.o
multipmemd.o remd_exchg.o amd.o \
     ./cuda/cuda.a -L/sw/keeneland/cuda/4.2/linux_binary/lib64
-L/sw/keeneland/cuda/4.2/linux_binary/lib -lcurand -lcufft -lcudart
-L/nics/d/home/swails/amber/lib -L/nics/d/home/swails/amber/lib -lnetcdf
-shared-intel -Wl,--start-group
-Wl,--end-group -lpthread
ifort: command line remark #10010: option '-pthread' is deprecated and will
be removed in a future release. See '-help deprecated'
make[3]: Leaving directory `/nics/d/home/swails/amber/src/pmemd/src'
Installation of pmemd.cuda.MPI complete
make[2]: Leaving directory `/nics/d/home/swails/amber/src/pmemd'
make[1]: Leaving directory `/nics/d/home/swails/amber/src'
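
For reference, the sequence that produced the log above was roughly the following. This is a sketch, not a verbatim transcript: it assumes the modules listed earlier are already loaded and that AMBERHOME points at the top of the Amber 12 tree.

```shell
# Sketch of the Intel build on Keeneland; assumes the modules listed
# above are loaded and the Amber 12 source is unpacked in $HOME/amber.
export AMBERHOME=$HOME/amber
cd $AMBERHOME
./configure -mpi -cuda intel    # Intel compilers rather than gnu
make install                    # builds pmemd.cuda.MPI among other targets
```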

I can try to switch environments over to GNU, but it may take me a little
bit to do so.
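
In outline, the switch would look something like the following (the PE-gnu module name is an assumption on my part; check `module avail` for the exact programming-environment modules on Keeneland):

```shell
# Hypothetical switch to the GNU toolchain; module names are assumptions.
module swap PE-intel PE-gnu
cd $AMBERHOME
make clean                      # clear the Intel-built objects first
./configure -mpi -cuda gnu      # the configuration that is failing for you
make install
```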


Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
AMBER mailing list
Received on Thu Oct 04 2012 - 12:00:03 PDT