If it's your own system (and you're running Ubuntu or some other Debian
derivative), you can just follow the instructions for setting up Ubuntu on
that wiki -- it also explains how to use apt-get to install MPICH2 or
OpenMPI (though that more or less confines you to the GNU compilers).
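
For example, on a Debian/Ubuntu machine the development packages can
usually be pulled straight from the repositories. The package names below
are a guess for your release -- check with "apt-cache search mpich" or
"apt-cache search openmpi" if they differ:

  # MPICH2 compiler wrappers, runtime, and headers
  sudo apt-get install mpich2 libmpich2-dev
  # ...or OpenMPI instead
  sudo apt-get install openmpi-bin libopenmpi-dev
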
For individual desktops, the choice of MPI implementation is largely
unimportant. MPI implementations typically do not differ much from one
another until you start running on large-scale systems with many MPI
processes. With a small system and a small number of processes, there is
little room for implementation-specific speedup.
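
If you are ever unsure which implementation the wrapper compilers on your
PATH actually belong to, you can ask the wrappers themselves (the -show and
--showme flags below are the ones MPICH2 and OpenMPI use, respectively --
just a quick sanity check, not something Amber requires):

  which mpif90       # where the wrapper lives
  mpif90 -show       # MPICH2: prints the underlying compile command
  mpif90 --showme    # OpenMPI: same idea
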
HTH,
Jason
On Thu, Jan 26, 2012 at 2:04 PM, Massimiliano Porrini <mozz76.gmail.com> wrote:
> It's my machine (GPU workstation under my desk).
>
> I will follow your instructions to build the "Amber MPI".
>
> By the way, I wonder why I was able to install the parallel versions of
> both AT1.5 and Amber11 successfully, if the issue is a general one with
> the MPI location.
>
> Moreover, I read on http://ambermd.org/gpus/ that MPICH2 is more
> efficient than OpenMPI, hence it is advised to use MPICH2.
> So, how about removing the OpenMPI packages?
> Something like:
> apt-get remove openmpi-bin
>
> Thanks again for your help.
> Best,
>
>
>
> Dr Massimiliano Porrini
> Institute for Condensed Matter and Complex Systems
> School of Physics & Astronomy
> The University of Edinburgh
> James Clerk Maxwell Building
> The King's Buildings
> Mayfield Road
> Edinburgh EH9 3JZ
>
> Tel +44-(0)131-650-5229
>
> E-mails : M.Porrini.ed.ac.uk
> mozz76.gmail.com
> maxp.iesl.forth.gr
>
> On 26 Jan 2012, at 18:20, Jason Swails <jason.swails.gmail.com> wrote:
>
> > Yikes. Is this your machine? If it's a cluster, is there documentation
> > that tells you how to compile and run MPI programs? With the current
> > setup, I have no way of telling if the MPI programs in your /usr/bin
> > correspond to the OpenMPI dev files or MPICH2 dev files located in
> > /usr/include/{openmpi/,mpich2/}.
> >
> > If the documentation clarifies this, I would suggest following it.
> > Otherwise, I would recommend building your own MPI: the configure_openmpi
> > script included with Amber will build an MPI that works with Amber, and
> > this is probably the route I would take.
> >
> > If you do that, you should put $AMBERHOME/bin at the beginning of your
> > PATH before running configure for Amber (but after building the MPI). So
> > your workflow should go something like this:
> >
> > 1) Download OpenMPI
> >    (http://www.open-mpi.org/software/ompi/v1.4/downloads/openmpi-1.4.3.tar.bz2)
> > 2) Extract that tarball in $AMBERHOME/AmberTools/src
> > 3) Run configure_openmpi (this will also build OpenMPI)
> > 4) Put $AMBERHOME/bin at the beginning of your PATH
> > 5) Follow the AmberTools/Amber parallel build instructions, starting with
> > the configure step.
> >
> > In Unix commands (assuming AMBERHOME is set):
> >
> > cd $AMBERHOME/AmberTools/src/
> > wget http://www.open-mpi.org/software/ompi/v1.4/downloads/openmpi-1.4.3.tar.bz2
> > tar jxvf openmpi-1.4.3.tar.bz2
> > ./configure_openmpi gnu
> > export PATH=$AMBERHOME/bin:$PATH
> > ./configure -mpi gnu
> > cd ../../src && make clean && make parallel
> >
> > HTH,
> > Jason
> >
> > On Thu, Jan 26, 2012 at 12:06 PM, Massimiliano Porrini <M.Porrini.ed.ac.uk> wrote:
> >
> >> Jason, I located them, and I do have them, but in "slightly" different
> >> paths:
> >>
> >> /usr/include/mpich2/mpi.h
> >> and
> >> /usr/lib/openmpi/include/mpi.h
> >>
> >> /usr/include/mpich2/mpif.h
> >> and
> >> /usr/lib/openmpi/include/mpif.h
> >>
> >>
> >> On 26 January 2012 at 14:46, Jason Swails <jason.swails.gmail.com> wrote:
> >>> Do you have the files /usr/include/mpi.h and /usr/include/mpif.h ?
> >>>
> >>> On Thu, Jan 26, 2012 at 8:27 AM, Massimiliano Porrini <M.Porrini.ed.ac.uk> wrote:
> >>>
> >>>> Here it is:
> >>>>
> >>>> max.nexus:~$ which mpif90
> >>>> /usr/bin/mpif90
> >>>>
> >>>>
> >>>> On 26 January 2012 at 13:21, Jason Swails <jason.swails.gmail.com> wrote:
> >>>>> You don't have to install the individual bug fixes 18, 19, and 20 like
> >>>>> that. They are included in the bugfix.all.tar.bz2 file.
> >>>>>
> >>>>> What does "which mpif90" return?
> >>>>>
> >>>>>
> >>>>> On Jan 26, 2012, at 7:03 AM, Massimiliano Porrini <M.Porrini.ed.ac.uk> wrote:
> >>>>>
> >>>>>> Hi Jason,
> >>>>>>
> >>>>>> I enclosed the config.h file (taken from $AMBERHOME/src/).
> >>>>>>
> >>>>>> And it seems that something went wrong, because this file contains
> >>>>>> only the line you cited (although at the end it has "-I/usr/include"
> >>>>>> instead of "-I/include"), as you can see:
> >>>>>>
> >>>>>> PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon
> >>>>>> -I/usr/include
> >>>>>>
> >>>>>> However, I am a bit confused now, because I did apply all the bugfixes,
> >>>>>> following the guidelines on your web page http://jswails.wikidot.com
> >>>>>> (and double-checking with the manuals).
> >>>>>>
> >>>>>> After extracting the source tarballs of AT1.5 and Amber11,
> >>>>>> this is what I did, step by step, to apply the bugfixes:
> >>>>>>
> >>>>>> cd $AMBERHOME
> >>>>>> wget http://ambermd.org/bugfixes/AmberTools/1.5/bugfix.all
> >>>>>> patch -p0 -N < bugfix.all
> >>>>>>
> >>>>>> cd $AMBERHOME
> >>>>>> cd ../
> >>>>>> wget http://ambermd.org/bugfixes/11.0/apply_bugfix.x
> >>>>>> wget http://ambermd.org/bugfixes/11.0/bugfix.all.tar.bz2
> >>>>>> chmod +x apply_bugfix.x
> >>>>>> ./apply_bugfix.x ./bugfix.all.tar.bz2
> >>>>>>
> >>>>>> cd $AMBERHOME
> >>>>>> wget http://ambermd.org/bugfixes/11.0/bugfix.18
> >>>>>> wget http://ambermd.org/bugfixes/11.0/bugfix.19
> >>>>>> wget http://ambermd.org/bugfixes/11.0/bugfix.20
> >>>>>> patch -p0 -N < bugfix.18
> >>>>>> patch -p0 -N < bugfix.19
> >>>>>> patch -p0 -N < bugfix.20
> >>>>>>
> >>>>>> And then I proceeded to successfully install AT1.5 (serial and parallel),
> >>>>>> Amber11 (serial and parallel) and pmemd.cuda -- everything except
> >>>>>> pmemd.cuda.MPI.
> >>>>>>
> >>>>>> Thanks again for your help.
> >>>>>> Best,
> >>>>>>
> >>>>>>
> >>>>>> On 25 January 2012 at 20:07, Jason Swails <jason.swails.gmail.com> wrote:
> >>>>>>> Can you post your config.h file here? The C compiler for pmemd _should_
> >>>>>>> be mpicc, which should provide the proper include paths so that "mpi.h"
> >>>>>>> should be found. Furthermore, the CUDA compilers should get the MPI
> >>>>>>> include path as well, so you should see a line that looks like
> >>>>>>>
> >>>>>>> PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon
> >>>>>>> -I/mpi/intel-12.1.0/mpich2-1.4.1p1/include
> >>>>>>>
> >>>>>>> in your config.h. The key component is the last one, which is
> >>>>>>> "-I/path/to/mpi/include", so that the CUDA source files know about the
> >>>>>>> MPI header file. If something went wrong, then it'll look like
> >>>>>>>
> >>>>>>> PMEMD_CU_INCLUDES=-I$(CUDA_HOME)/include -IB40C -IB40C/KernelCommon
> >>>>>>> -I/include
> >>>>>>>
> >>>>>>> which I think indicates that you have not applied the bug fixes (which
> >>>>>>> you should definitely do).
> >>>>>>>
> >>>>>>> HTH,
> >>>>>>> Jason
> >>>>>>>
> >>>>>>> On Wed, Jan 25, 2012 at 2:53 PM, Massimiliano Porrini <M.Porrini.ed.ac.uk> wrote:
> >>>>>>>
> >>>>>>>> Hi all,
> >>>>>>>>
> >>>>>>>> After having applied all the bugfixes for AT1.5 and Amber11, I've just
> >>>>>>>> finished successfully installing and testing AT1.5 (serial and parallel),
> >>>>>>>> Amber11 (serial and parallel) and pmemd.cuda (which worked fine on all
> >>>>>>>> three GPUs in my machine), but when I try to install the parallel
> >>>>>>>> version of it (pmemd.cuda.MPI):
> >>>>>>>>
> >>>>>>>> cd $AMBERHOME/AmberTools/src/
> >>>>>>>> make clean
> >>>>>>>> ./configure -cuda -mpi gnu
> >>>>>>>> cd ../../
> >>>>>>>> ./AT15_Amber.py
> >>>>>>>> cd src/
> >>>>>>>> make clean
> >>>>>>>> make cuda_parallel
> >>>>>>>>
> >>>>>>>> I get the following error (only the last part of the output is shown below):
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> ##############################################################################
> >>>>>>>> *****
> >>>>>>>> *****
> >>>>>>>> *****
> >>>>>>>> gputypes.h:1249: warning: ‘_gpuContext::pbImageSoluteAtomID’ will be
> >>>>>>>> initialized after
> >>>>>>>> gputypes.h:1217: warning: ‘GpuBuffer<unsigned int>*
> >>>>>>>> _gpuContext::pbNLExclusionList’
> >>>>>>>> gputypes.cpp:438: warning: when initialized here
> >>>>>>>> gputypes.h:1229: warning: ‘_gpuContext::pbNLPosition’ will be initialized
> >>>>>>>> after
> >>>>>>>> gputypes.h:1219: warning: ‘GpuBuffer<uint2>*
> >>>>>>>> _gpuContext::pbNLNonbondCellStartEnd’
> >>>>>>>> gputypes.cpp:438: warning: when initialized here
> >>>>>>>> gputypes.h:1269: warning: ‘_gpuContext::pbConstraintSolventConstraint’
> >>>>>>>> will be initialized after
> >>>>>>>> gputypes.h:1195: warning: ‘PMEDouble _gpuContext::ee_plasma’
> >>>>>>>> gputypes.cpp:438: warning: when initialized here
> >>>>>>>> /usr/local/cuda/bin/nvcc -use_fast_math -O3 -gencode
> >>>>>>>> arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -DCUDA
> >>>>>>>> -DMPI -DMPICH_IGNORE_CXX_SEEK -I/usr/local/cuda/include -IB40C
> >>>>>>>> -IB40C/KernelCommon -I/usr/include -c kForcesUpdate.cu
> >>>>>>>> In file included from gpu.h:15,
> >>>>>>>> from kForcesUpdate.cu:14:
> >>>>>>>> gputypes.h:30:17: error: mpi.h: No such file or directory
> >>>>>>>> make[3]: *** [kForcesUpdate.o] Error 1
> >>>>>>>> make[3]: Leaving directory `/home/max/amber11/src/pmemd/src/cuda'
> >>>>>>>> make[2]: *** [-L/usr/local/cuda/lib64] Error 2
> >>>>>>>> make[2]: Leaving directory `/home/max/amber11/src/pmemd/src'
> >>>>>>>> make[1]: *** [cuda_parallel] Error 2
> >>>>>>>> make[1]: Leaving directory `/home/max/amber11/src/pmemd'
> >>>>>>>> make: *** [cuda_parallel] Error 2
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> ##############################################################################
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> Thanks as always for any hints about this issue.
> >>>>>>>>
> >>>>>>>> Best,
> >>>>>>>>
> >>>>>>>> On 21 January 2012 at 15:28, Jason Swails <jason.swails.gmail.com> wrote:
> >>>>>>>>> If that is the same config.h file as before, then AT15_Amber11.py has
> >>>>>>>>> not been run (which also means that not all of the AmberTools bug fixes
> >>>>>>>>> have been applied).
> >>>>>>>>>
> >>>>>>>>> Either run that script or apply all of the bug fixes for both Amber 11
> >>>>>>>>> and AmberTools 1.5.
> >>>>>>>>>
> >>>>>>>>> You can see detailed instructions at http://jswails.wikidot.com, where
> >>>>>>>>> there is a page dedicated to installing Amber 11 and AmberTools 1.5.
> >>>>>>>>>
> >>>>>>>>> HTH,
> >>>>>>>>> Jason
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> On Jan 21, 2012, at 7:35 AM, Massimiliano Porrini <mozz76.gmail.com> wrote:
> >>>>>>>>>
> >>>>>>>>>> Sorry, here is the config.h file.
> >>>>>>>>>> Cheers,
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Begin forwarded message:
> >>>>>>>>>>
> >>>>>>>>>>> From: max.complexity1.site
> >>>>>>>>>>> Date: 21 January 2012 12:17:31 GMT
> >>>>>>>>>>> To: mozz76.gmail.com
> >>>>>>>>>>> Subject: Amber11 cuda config file
> >>>>>>>>>>>
> >>>>>>>>>>> Amber11 cuda config file created via ./config cuda gnu
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> <config.h>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>>
> >>>
> >>
> >>
> >>
> >>
> >
> >
> >
>
>
>
--
Jason M. Swails
Quantum Theory Project,
University of Florida
Ph.D. Candidate
352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Thu Jan 26 2012 - 12:30:02 PST