Re: [AMBER] problem in running AMBER12 in GPU

From: Sanjib Paul <sanjib88paul.gmail.com>
Date: Wed, 11 Sep 2013 19:34:22 +0530

Hello,
         This time I have downloaded mpich-3.0.4.tar.gz and followed the
same procedure, but I am getting the same error that I got last time.
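
For reference, the procedure was roughly as follows (a sketch of my
earlier steps, assuming the tarball sits in $AMBERHOME/AmberTools/src
as before; exact file names may differ):

  cd $AMBERHOME/AmberTools/src
  tar xzf mpich-3.0.4.tar.gz      # unpack the source tarball
  ls mpich-3.0.4/configure        # the source tree should contain this script
  ./configure_mpich2 gnu          # build MPI with the GNU compilers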

Sanjib


On Wed, Sep 11, 2013 at 7:08 PM, Jan-Philip Gehrcke
<jgehrcke.googlemail.com> wrote:

> Hello!
>
> On 09/11/2013 03:28 PM, Sanjib Paul wrote:
> > Hello,
> > I have downloaded a file named 'mpich2-1.5-3.fc20.x86_64.rpm' and
> > extracted it.
>
> You have downloaded a binary package of that software. What you need is
> the source distribution, which is usually offered as a gzipped tarball
> (the file name usually ends with .tar.gz). When you extract such an
> archive, the top-level directory usually contains a `configure` file.
>
> That file is obviously missing in your case; see the error message below.
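>
> A quick way to verify what you have (a sketch; 'mpich2-1.5' is just an
> example name -- use whatever source tarball you actually downloaded):
>
>   tar tzf mpich2-1.5.tar.gz | head   # list the archive contents
>   ls mpich2-1.5/configure            # a source tree contains this file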
>
> Cheers!
>
>
> > I then kept the extracted directory 'mpich2-1.5-3.fc20.x86_64' in the
> > $AMBERHOME/AmberTools/src folder and ran the script configure_mpich2.
> >
> > ./configure_mpich2 gnu
> >
> > Setting AMBERHOME to /home/software/AMBER12/amber12
> >
> > ./configure_mpich2: line 141: ./configure: No such file or directory
> > MPICH2 configure failed, returning 127
> >
> > I am getting the above error. I have searched Google, but did not find
> > any question or answer regarding this. So, please help.
> >
> > Sanjib
> >
> >
> >
> >
> > On Wed, Sep 4, 2013 at 8:26 PM, Jason Swails <jason.swails.gmail.com>
> > wrote:
> >
> >> On Wed, Sep 4, 2013 at 10:21 AM, Sanjib Paul <sanjib88paul.gmail.com>
> >> wrote:
> >>
> >>> Hello,
> >>>          Thanks for your valuable suggestion. After updating, we are
> >>> able to run AMBER12 (sander, pmemd, sander.MPI & pmemd.MPI, and
> >>> pmemd.cuda), but unfortunately we are not able to install
> >>> pmemd.cuda.MPI successfully. The following are a few of the errors we
> >>> are getting at the end of the installation process.
> >>>
> >>> ./cuda/cuda.a(gpu.o): In function `MPI::Intracomm::Clone() const':
> >>> gpu.cpp:(.text._ZNK3MPI9Intracomm5CloneEv[MPI::Intracomm::Clone()
> >>> const]+0x27): undefined reference to `MPI::Comm::Comm()'
> >>> ./cuda/cuda.a(gpu.o):gpu.cpp:(.text._ZNK3MPI9Intracomm5SplitEii[MPI::Intracomm::Split(int,
> >>> int) const]+0x24): more undefined references to `MPI::Comm::Comm()' follow
> >>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI3WinE[vtable for MPI::Win]+0x48):
> >>> undefined reference to `MPI::Win::Free()'
> >>> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
> >>> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
> >>> collect2: ld returned 1 exit status
> >>> make[3]: *** [pmemd.cuda.MPI] Error 1
> >>> make[3]: Leaving directory `/home/test/amber12/src/pmemd/src'
> >>> make[2]: *** [cuda_parallel] Error 2
> >>> make[2]: Leaving directory `/home/test/amber12/src/pmemd'
> >>> make[1]: *** [cuda_parallel] Error 2
> >>> make[1]: Leaving directory `/home/test/amber12/src'
> >>> make: *** [install] Error 2
> >>>
> >>> We are using CUDA version 5.5 with OpenMPI 1.7.2 built against the
> >>> installed CUDA. We do not understand what the problem is. Please give
> >>> some suggestions.
> >>>
> >>
> >> It's likely that your MPI was not built with the appropriate support
> >> that pmemd.cuda.MPI needs (make sure the MPI is built with C++
> >> support, I think). I've been using mpich2 (now just 'mpich') for some
> >> time and I've never had a problem building pmemd.cuda.MPI before.
> >> Note that if you google the last little bit of your error message:
> >>
> >> ./cuda/cuda.a(gpu.o):(.rodata._ZTVN3MPI8DatatypeE[vtable for
> >> MPI::Datatype]+0x78): undefined reference to `MPI::Datatype::Free()'
> >>
> >> it brings you to this link: http://archive.ambermd.org/201106/0678.html
> >>
> >> which provides useful things to try.
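> >>
> >> Since you are on OpenMPI 1.7.2: I believe the 1.7 series no longer
> >> builds the C++ bindings by default, so when rebuilding your MPI you
> >> may need to enable them explicitly. A sketch (the install prefix here
> >> is just an example):
> >>
> >>   ./configure --enable-mpi-cxx --prefix=/opt/openmpi-1.7.2
> >>   make && make install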
> >>
> >> Try adding -lmpi_cxx to the PMEMD_CU_LIBS line in config.h. If that
> >> still doesn't work, rebuild your MPI with C++ support. If you are
> >> unsure what to do, use the configure_mpich2 script in
> >> $AMBERHOME/AmberTools/src to build your own MPI (see the manual for
> >> instructions -- you need to download mpich2 first to use it). You
> >> also need to set up PATH and LD_LIBRARY_PATH so that "which mpif90"
> >> returns the version in $AMBERHOME/bin.
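> >>
> >> Concretely, the two steps would look something like this (the
> >> PMEMD_CU_LIBS contents vary between installs, so keep whatever your
> >> config.h already has and just append the flag):
> >>
> >>   # in $AMBERHOME/config.h, append -lmpi_cxx to the existing line:
> >>   PMEMD_CU_LIBS=... -lmpi_cxx
> >>
> >>   # bash; environment setup so Amber's own MPI wrappers are found:
> >>   export PATH=$AMBERHOME/bin:$PATH
> >>   export LD_LIBRARY_PATH=$AMBERHOME/lib:$LD_LIBRARY_PATH
> >>   which mpif90   # should now print $AMBERHOME/bin/mpif90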
> >>
> >> HTH,
> >> Jason
> >>
> >> --
> >> Jason M. Swails
> >> BioMaPS,
> >> Rutgers University
> >> Postdoctoral Researcher
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Wed Sep 11 2013 - 07:30:02 PDT