Re: [AMBER] compile pmemd using gcc

From: Jorgen Simonsen <jorgen589.gmail.com>
Date: Fri, 10 Dec 2010 20:05:08 +0100

Thanks, that solved it; I can now compile the program (the exact config.h change is in the P.S. at the bottom). However, when I try to run it, it simply stalls and I get the following error from MPI:

fe5.14536ipath_wait_for_device: The /dev/ipath device failed to appear
after 30.0 seconds: Connection timed out
fe5.14536PSM Could not find an InfiniPath Unit on device /dev/ipath
(30s elapsed) (err=21)
[fe5:14536] Open MPI failed to open a PSM endpoint: PSM Could not find
an InfiniPath Unit on device /dev/ipath (30s elapsed)
[fe5:14536] Error in psm_ep_open (error PSM Could not find an InfiniPath Unit)
fe5.14537ipath_wait_for_device: The /dev/ipath device failed to appear
after 30.0 seconds: Connection timed out
fe5.14537PSM Could not find an InfiniPath Unit on device /dev/ipath
(30s elapsed) (err=21)
[fe5:14537] Open MPI failed to open a PSM endpoint: PSM Could not find
an InfiniPath Unit on device /dev/ipath (30s elapsed)
[fe5:14537] Error in psm_ep_open (error PSM Could not find an InfiniPath Unit)
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  PML add procs failed
  --> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[fe5:14536] Abort before MPI_INIT completed successfully; not able to
guarantee that all other processes were killed!
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[fe5:14537] Abort before MPI_INIT completed successfully; not able to
guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 14537 on
node fe5 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[fe5:14535] 1 more process has sent help message help-mpi-runtime /
mpi_init:startup:internal-failure
[fe5:14535] Set MCA parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
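
A guess from my side: fe5 is our front-end node, and the /dev/ipath messages make me think it has no InfiniPath adapter at all, so the PSM transport cannot start there. Is the right fix simply to launch on a compute node that has the HCA, or, just for a local sanity test, to tell Open MPI to skip PSM and fall back to shared memory/TCP? A minimal sketch of what I had in mind (the MCA settings are taken from the Open MPI documentation; I have not verified them on this installation):

    # does the launch node even have the InfiniPath device?
    ls -l /dev/ipath

    # local test run without PSM: ob1 PML with the self/sm/tcp BTLs
    mpirun -np 2 --mca pml ob1 --mca btl self,sm,tcp ./pmemd ...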


During compilation I also get some warnings like these:

In file runmd.f90:1925

          if (abs(biga) - abs(a(ij))) 15, 20, 20
                                               1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1934

      if (j - k) 35, 35, 25
                          1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1947

      if (i - k) 45, 45, 38
                          1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1958

 45 if (biga) 48, 46, 48
                           1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1962

            if (i - k) 50, 55, 50
                                1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1975

          if (i - k) 60, 65, 60
                              1
Warning: Obsolete: arithmetic IF statement at (1)
 In file runmd.f90:1976

 60 if (j - k) 62, 65, 62

Furthermore, the build does not move the executable to exe or bin; it stays in the pmemd source directory. Is that expected?
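
For now I am copying the binary into place by hand. This assumes AMBERHOME points at this amber10 tree; the "make install" line is only a guess on my part, since I have not checked whether this Makefile provides an install target:

    # from the directory where the pmemd binary was built
    cp pmemd $AMBERHOME/exe/pmemd

    # or, if the Makefile has an install target that does the move:
    make install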

On Fri, Dec 10, 2010 at 6:55 PM, Jason Swails <jason.swails.gmail.com> wrote:
> Hello,
>
> Remove the -DMKL. That flag, I think, makes the build look for the MKL
> libraries, which you aren't linking against (your MATH_LIBS line is blank).
> Don't forget to do a make clean before you retry the install.
>
> Good luck,
> Jason
>
> On Fri, Dec 10, 2010 at 12:53 PM, Jorgen Simonsen <jorgen589.gmail.com>wrote:
>
>> hi all
>>
>> I am trying to compile pmemd with gcc. I was able to build it with the
>> Intel compiler, but with gcc the build prints warnings such as "Warning:
>> Obsolete: arithmetic IF statement at (1)" and then fails with the
>> following error:
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC veclib.fpp veclib.f90
>> mpif90 -c -O3 veclib.f90
>> mpicc  -c pmemd_clib.c
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC gb_alltasks_setup.fpp gb_alltasks_setup.f90
>> mpif90 -c -O3 gb_alltasks_setup.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC pme_alltasks_setup.fpp pme_alltasks_setup.f90
>> mpif90 -c -O3 pme_alltasks_setup.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC pme_setup.fpp pme_setup.f90
>> mpif90 -c -O3 pme_setup.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC get_cmdline.fpp get_cmdline.f90
>> mpif90 -c -O3 get_cmdline.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC master_setup.fpp master_setup.f90
>> mpif90 -c -O3 master_setup.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC pmemd.fpp pmemd.f90
>> mpif90 -c -O3 pmemd.f90
>> /lib/cpp -traditional -P  -I/usr/mpi/gcc/openmpi-1.3.2-qlc/include
>> -DPUBFFT  -DMPI -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC -DMKL
>> -DFFTLOADBAL_2PROC erfcfun.fpp erfcfun.f90
>> mpif90 -c -O3 erfcfun.f90
>> mpif90  -o pmemd gbl_constants.o gbl_datatypes.o state_info.o
>> file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o prmtop_dat.o
>> inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o parallel.o
>> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
>> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
>> dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o
>> loadbal.o shake.o prfs.o mol_list.o runmin.o constraints.o
>> axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o
>> runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o random.o degcnt.o
>> erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o
>> pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o
>> nextprmtop_section.o    -L/usr/mpi/gcc/openmpi-1.3.2-qlc/lib64
>> gb_ene.o: In function `__gb_ene_mod__calc_born_radii':
>> gb_ene.f90:(.text+0xe15): undefined reference to `vdinvsqrt_'
>> gb_ene.f90:(.text+0x1174): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x118b): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x1287): undefined reference to `vdln_'
>> gb_ene.f90:(.text+0x129e): undefined reference to `vdln_'
>> gb_ene.o: In function `__gb_ene_mod__gb_ene':
>> gb_ene.f90:(.text+0x29e7): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x2a72): undefined reference to `vdexp_'
>> gb_ene.f90:(.text+0x2b61): undefined reference to `vdinvsqrt_'
>> gb_ene.f90:(.text+0x2b94): undefined reference to `vdinvsqrt_'
>> gb_ene.f90:(.text+0x344a): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x34c0): undefined reference to `vdexp_'
>> gb_ene.f90:(.text+0x390c): undefined reference to `vdinvsqrt_'
>> gb_ene.f90:(.text+0x3b37): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x3b4e): undefined reference to `vdinv_'
>> gb_ene.f90:(.text+0x3bd7): undefined reference to `vdln_'
>> collect2: ld returned 1 exit status
>> make[1]: *** [pmemd] Error 1
>> make[1]: Leaving directory `/people/disk2/hbohr/amber10/src/pmemd/src'
>> make: *** [all] Error 2
>>
>> My config.h file looks like this:
>>
>>
>> MATH_DEFINES =
>> MATH_LIBS =
>> MATH_DEFINES = -DMKL
>> FFT_DEFINES = -DPUBFFT
>> FFT_INCLUDE =
>> FFT_LIBS =
>> NETCDF_HOME =
>> NETCDF_DEFINES =
>> NETCDF_MOD =
>> NETCDF_LIBS =
>> MPI_HOME =/usr/mpi/gcc/openmpi-1.3.2-qlc
>> MPI_DEFINES = -DMPI
>> MPI_INCLUDE = -I$(MPI_HOME)/include
>> MPI_LIBDIR = $(MPI_HOME)/lib64
>> MPI_LIBS = -L$(MPI_LIBDIR)
>> DIRFRC_DEFINES = -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>> CPP = /lib/cpp
>> CPPFLAGS = -traditional -P
>> F90_DEFINES = -DFFTLOADBAL_2PROC
>>
>> F90 = mpif90
>> MODULE_SUFFIX = mod
>> F90FLAGS = -c
>> F90_OPT_DBG = -g -traceback
>> F90_OPT_LO =  -O0
>> F90_OPT_MED = -O2
>> F90_OPT_HI =  -O3
>> F90_OPT_DFLT =  $(F90_OPT_HI)
>>
>> CC = mpicc
>> CFLAGS =
>>
>> LOAD = mpif90
>> LOADFLAGS =
>>
>> I have removed some of the optimization flags from the Intel config file.
>>
>> Have I removed too much?
>>
>
>
>
> --
> Jason M. Swails
> Quantum Theory Project,
> University of Florida
> Ph.D. Graduate Student
> 352-392-4032
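
P.S. For the archives: the change that fixed the undefined vdinvsqrt_/vdinv_/vdln_/vdexp_ references was exactly what Jason suggested, dropping the stray -DMKL from config.h and rebuilding from clean. Since I am not linking MKL at all, the math-related lines now amount to:

    MATH_DEFINES =
    MATH_LIBS =

followed by a full rebuild:

    make clean
    make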

_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Fri Dec 10 2010 - 11:30:06 PST