On Mon, Sep 13, 2010, Daniel Sindhikara wrote:
> It's possible to use a single core, but it would obviously slow down the
> calculation.
This is not "obvious" unless you have a high-quality interconnect like
InfiniBand.
> But I'm not sure how configure_openmpi works -- it requires the source code
> to be tarred from the ambertools src directory. It seems to configure it
> from there; does it automatically link the new libraries and executables?
You shouldn't need to do anything following the configure_openmpi step.
> I tried using gnu and it didn't make it through the parallel installation;
> the error messages began about here (it was the same regardless of
> OpenMPI or MPICH):
> ...
> mv libsff.a /home/sindhikara/amber11/lib
> make[2]: Leaving directory `/home/sindhikara/amber11/AmberTools/src/sff'
> cd ../pbsa; make libinstall
> make[2]: Entering directory `/home/sindhikara/amber11/AmberTools/src/pbsa'
> mpicc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -DBINTRAJ -DMPI -c -o
> interface.o interface.c
> interface.c(64): (col. 8) remark: BLOCK WAS VECTORIZED.
> cpp -traditional -P -DBINTRAJ -DMPI -DPBSA sa_driver.f > _sa_driver.f
> mpif90 -c -O3 -ffree-form -o sa_driver.o _sa_driver.f
> ifort: command line warning #10006: ignoring unknown option '-ffree-form'
You aren't using the GNU compilers here; you are using Intel (ifort). You
would need to configure and compile your MPI with gcc/gfortran.
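For reference, a minimal sketch of checking which compilers the MPI wrappers
invoke and then rebuilding Open MPI against the GNU toolchain. The Open MPI
version and install prefix below are placeholder examples, not the exact ones
on your system; `--showme` is the Open MPI wrapper option (MPICH wrappers use
`-show` instead):

```shell
# Show the underlying compiler command each MPI wrapper runs.
# If these print ifort/icc, the MPI stack was built with Intel compilers.
mpif90 --showme
mpicc --showme

# Rebuild Open MPI with the GNU compilers (version and prefix are examples):
tar xjf openmpi-1.4.2.tar.bz2
cd openmpi-1.4.2
./configure CC=gcc CXX=g++ F77=gfortran FC=gfortran \
    --prefix=$HOME/openmpi-gnu
make -j4 && make install

# Put the GNU-built wrappers first on PATH before re-running the Amber build:
export PATH=$HOME/openmpi-gnu/bin:$PATH
export LD_LIBRARY_PATH=$HOME/openmpi-gnu/lib:$LD_LIBRARY_PATH
```

Once `mpif90 --showme` reports gfortran, the `-ffree-form` flag in the pbsa
Makefile will be understood and the parallel build should proceed.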
....dac
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Sep 13 2010 - 05:00:06 PDT