Re: AMBER: Segmentation faults trying to run sander mpi

From: Idan Gabdank <gabdank.cs.bgu.ac.il>
Date: Thu, 29 May 2008 17:57:07 +0300

Dear David,
Thank you for your help.

It looks like it is ok on all hosts:

xeonsrv1 dhfr # ./Run.dhfr.min
diffing mdout.dhfr.min.save with mdout.dhfr.min
PASSED
==============================================================
xeonsrv1 dhfr #


What should I check to solve the problem?

Thanks,

Idan
********************************************************************************
On Wed, May 28, 2008, Idan Gabdank wrote:

> I am trying to run a minimization using sander.mpi and I am receiving
> segmentation faults,

Do the test cases pass? I'm thinking especially of something like
$AMBERHOME/test/dhfr/Run.dhfr.min.

Of course, if the test cases pass, then you need to look for what is
different in your input and the test case. If the test case fails, then at
least we have narrowed things down.
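[For reference, a minimal sketch of running that same test through the MPI build, assuming AMBER 9's usual layout: the test scripts use the DO_PARALLEL environment variable to decide how to launch MPI jobs, and /opt/amber9 below is only a placeholder path.]

```shell
# Sketch: run the DHFR minimization test under the MPI build.
# /opt/amber9 is a placeholder; DO_PARALLEL is the variable AMBER's
# test scripts consult when launching programs under MPI.
export AMBERHOME="${AMBERHOME:-/opt/amber9}"
export DO_PARALLEL="mpirun -np 2"
if [ -x "$AMBERHOME/test/dhfr/Run.dhfr.min" ]; then
  ( cd "$AMBERHOME/test/dhfr" && ./Run.dhfr.min )
else
  echo "Run.dhfr.min not found under $AMBERHOME/test/dhfr"
fi
```

Comparing the serial PASSED result against the same test run with DO_PARALLEL set narrows the fault to the MPI stack rather than the input.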

...dac

-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
     to majordomo.scripps.edu



Idan Gabdank wrote:
> Dear AMBERS,
> I am trying to run a minimization using sander.MPI and I am receiving
> segmentation faults. I tried installing a newer version of Open MPI and
> recompiling Amber9, but it didn't help.
> ---------------------------------------------------------------------------------------------
> Error:
> ---------------------------------------------------------------------------------------------
> [xeonsrv2:01951] *** Process received signal ***
> [xeonsrv2:01951] Signal: Segmentation fault (11)
> [xeonsrv2:01951] Signal code: Address not mapped (1)
> [xeonsrv2:01951] Failing at address: 0x59f4980088
> [xeonsrv2:01951] [ 0] /lib/libc.so.6 [0x2b60d4446130]
> [xeonsrv2:01951] [ 1]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/openmpi/mca_btl_sm.so(mca_btl_sm_component_progress+0x533)
> [0x2b60d96c5b5c]
> [xeonsrv2:01951] [ 2]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/openmpi/mca_bml_r2.so(mca_bml_r2_progress+0x24)
> [0x2b60d92b91d9]
> [xeonsrv2:01951] [ 3]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libopen-pal.so.0(opal_progress+0x49)
> [0x2b60d347793a]
> [xeonsrv2:01951] [ 4]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/openmpi/mca_oob_tcp.so(mca_oob_tcp_msg_wait+0x1a)
> [0x2b60d57b1fb8]
> [xeonsrv2:01951] [ 5]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/openmpi/mca_oob_tcp.so(mca_oob_tcp_recv+0x371)
> [0x2b60d57b5b5b]
> [xeonsrv2:01951] [ 6]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libopen-rte.so.0(mca_oob_recv_packed+0x33)
> [0x2b60d323cfc1]
> [xeonsrv2:01951] [ 7]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/openmpi/mca_gpr_proxy.so(orte_gpr_proxy_put+0x20a)
> [0x2b60d5bc6be6]
> [xeonsrv2:01951] [ 8]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libopen-rte.so.0(orte_smr_base_set_proc_state+0x281)
> [0x2b60d32529a1]
> [xeonsrv2:01951] [ 9]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libmpi.so.0(ompi_mpi_init+0x7f2)
> [0x2b60d2fb5012]
> [xeonsrv2:01951] [10]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libmpi.so.0(MPI_Init+0x81)[0x2b60d2fd4071]
>
> [xeonsrv2:01951] [11]
> /usr/lib64/mpi/mpi-openmpi/usr/lib64/libmpi_f77.so.0(PMPI_INIT+0x25)
> [0x2b60d2d71525]
> [xeonsrv2:01951] [12] sander.MPI(MAIN__+0x46) [0x49bb06]
> [xeonsrv2:01951] [13] sander.MPI(main+0xe) [0x62bea6]
> [xeonsrv2:01951] [14]
> /lib/libc.so.6(__libc_start_main+0xf4)[0x2b60d4432b74]
> [xeonsrv2:01951] [15] sander.MPI [0x41f279]
> [xeonsrv2:01951] *** End of error message ***
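[One diagnostic the trace itself suggests, offered here as an editorial suggestion rather than something from the thread: the fault occurs inside mca_btl_sm during MPI_Init, so excluding Open MPI's shared-memory BTL with mpirun's `--mca btl ^sm` option ("every BTL except sm") tests whether the sm transport is at fault. The input/output file names below are placeholders.]

```shell
# Sketch: rerun with Open MPI's shared-memory BTL excluded.
# '^sm' means "all BTLs except sm"; the file names passed to
# sander.MPI are placeholders, not from the original report.
CMD="mpirun --mca btl ^sm -np 4 sander.MPI -O -i min.in -o min.out -p prmtop -c inpcrd"
echo "$CMD"
if command -v mpirun >/dev/null 2>&1; then
  $CMD || true   # ignore failures here; this is only a sketch
else
  echo "mpirun not on PATH; command shown above for reference"
fi
```

If the job survives MPI_Init with sm excluded, the problem is isolated to the shared-memory transport (often a /tmp or session-directory issue) rather than sander itself.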
> ---------------------------------------------------------------------------------------------
> Some info about the system I am using:
> ---------------------------------------------------------------------------------------------
>
> xeonsrv1 ~ # uname -a
> Linux xeonsrv1 2.6.23-gentoo-r3 #1 SMP Wed Jan 16 15:37:54 IST 2008
> x86_64 Intel(R) Xeon(R) CPU 5140 @ 2.33GHz GenuineIntel GNU/Linux
>
> xeonsrv1 ~ # gcc --version
> gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.2)
> Copyright (C) 2006 Free Software Foundation, Inc.
>
> xeonsrv1 ~ # gfortran --version
> GNU Fortran 95 (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.2)
> Copyright (C) 2006 Free Software Foundation, Inc.
>
> from amber9/src/config.h
>
> #################################################
> CC= gcc
> CPLUSPLUS=g++
> CFLAGS= -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -O2 -m64
> CPPFLAGS= $(AMBERBUILDFLAGS)
>
> #------------------------------------------------------------------------------
>
> # Fortran preprocessing and compiler.
> # FPPFLAGS holds the main Fortran options, such as whether MPI is used.
> #------------------------------------------------------------------------------
>
> FPPFLAGS= -I/usr/lib64/mpi/mpi-openmpi/usr/include -P -DMPI
> -xassembler-with-cpp -Dsecond=ambsecond $(AMBERBUILDFLAGS)
> FPP= cpp -traditional $(FPPFLAGS)
> FC= gfortran
> FFLAGS= -I/usr/lib64/mpi/mpi-openmpi/usr/include -O0
> -fno-second-underscore -march=nocona $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> FOPTFLAGS= -O3 -fno-second-underscore -march=nocona $(LOCALFLAGS)
> $(AMBERBUILDFLAGS)
> FREEFORMAT_FLAG= -ffree-form
>
> #------------------------------------------------------------------------------
>
> # Loader:
> #------------------------------------------------------------------------------
>
> LOAD= gfortran $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> LOADCC= gcc $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> LOADLIB= -L/usr/lib64/mpi/mpi-openmpi/usr/lib64 -Wl,--no-as-needed
> -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl
> -Wl,--export-dynamic -lnsl -lutil -lm -ldl
> LM= -lm
> LOADPTRAJ= gfortran $(LOCALFLAGS) $(AMBERBUILDFLAGS)
> XHOME= /usr/X11R6
> XLIBS= -L/usr/X11R6/lib64 -L/usr/X11R6/lib
> #################################################
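[Given that config.h links against /usr/lib64/mpi/mpi-openmpi/usr/lib64, one thing worth checking (an assumption about a common cause, not something established in this thread) is whether sander.MPI resolves those same Open MPI libraries at run time; a different libmpi.so picked up via LD_LIBRARY_PATH is a classic source of segfaults inside MPI_Init. A sketch, with the binary location assumed from AMBER 9's usual layout:]

```shell
# Sketch: check which MPI libraries sander.MPI will actually load.
# The binary path is assumed, not taken from the original report.
BIN="${AMBERHOME:-/opt/amber9}/exe/sander.MPI"
if [ -x "$BIN" ]; then
  # ldd prints each shared library the dynamic linker would resolve.
  ldd "$BIN" | grep -E 'libmpi|open-rte|open-pal'
else
  echo "sander.MPI not found at $BIN"
fi
```

Every matched line should point into /usr/lib64/mpi/mpi-openmpi/usr/lib64; a path anywhere else means the runtime Open MPI differs from the one the binary was built against.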
>
> Thank you in advance for your help.
> Idan

Received on Sun Jun 01 2008 - 06:07:33 PDT