Re: [AMBER] Fwd: Amber installation

From: Paul S. Nerenberg <psn.berkeley.edu>
Date: Sun, 3 Oct 2010 22:59:23 -0700

Hi Hadrian,

Quick responses to your other questions:

(1) This is not entirely true. The instructions in the manual direct
you to type "make serial" or "make parallel" -- both of these install
the relevant binaries in the directory $AMBERHOME/bin. (And as I
recall, the instructions after you configure AmberTools direct you to
type "make install".) So $AMBERHOME should point to wherever you want
the binaries (or rather, the bin/ directory) to go.
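
For instance, with the source unpacked under /usr/local/src/amber, a
serial build might look like this (just a sketch -- the paths and the
compiler choice here are illustrative, so adjust them to your setup):

  export AMBERHOME=/usr/local/src/amber/amber11
  cd $AMBERHOME/AmberTools/src
  ./configure intel
  make install
  cd $AMBERHOME/src
  make serial

Note that the binaries always land in $AMBERHOME/bin inside the source
tree. If you want them visible under /usr/local/amber as well, a
symlink to the build tree is probably the safest route, since the
programs also expect to find $AMBERHOME/dat at run time.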

(2) You generally want to be a "regular" user when installing AMBER.
It's not impossible to compile as root, but it shouldn't be
necessary. (Except perhaps for getting/changing the relevant
permissions to allow you to install in /usr/local -- provided that's
where you want to install it.)
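
If you do want everything under /usr/local, one option (a sketch,
assuming you have sudo rights on the machine) is to create the
directory as root and hand ownership to your regular account before
unpacking and building:

  sudo mkdir -p /usr/local/amber
  sudo chown $USER /usr/local/amber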

(3) As many people on this list will advise you, there is not much
need to install a parallel version of AmberTools, which really just
consists of parallel ptraj and NAB. (But if you need to, it pretty
much exactly follows the instructions in the manual.) As for parallel
AMBER, there are actually good step-by-step instructions in the AMBER
manual (see p. 13, a.k.a. the 15th page of Amber11.pdf)...

You link to MPI automatically when you do "./configure -mpi intel" (or
whatever compiler you are using). A version of FFTW is already bundled
with AMBER, so there's no need to worry about that. As for the Intel
MKL, you set the path to the MKL installation in the environment
variable MKL_HOME (e.g., "export MKL_HOME=/path/to/MKL"). If you type
"./configure --help", you will see a lot of information about this.
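
Putting that together, the parallel build sequence would look roughly
like this (a sketch only -- substitute your real MKL path, or drop the
MKL_HOME line entirely if you don't use MKL):

  export AMBERHOME=/usr/local/amber/amber11
  export MKL_HOME=/path/to/MKL
  cd $AMBERHOME/AmberTools/src
  ./configure -mpi intel
  make parallel
  cd $AMBERHOME/src
  make parallel

Also double-check that the mpif90 first on your PATH belongs to the
MPI installation you actually want to link against, since the
Makefiles simply call the wrapper.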

(4) When you run the tests, their output is saved to log files, so you
could actually write a PBS script and run the tests from within it.
(Just don't forget to set the DO_PARALLEL environment variable in the
script.) Alternatively, if your queue is configured for interactive
jobs (e.g., qsub -I), then you can just run the tests on one of your
nodes interactively...
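
For example, a bare-bones PBS script along these lines should do it (a
sketch -- adjust the resource request, the mpirun invocation, and the
AMBERHOME path to your site, and check the exact test target names in
$AMBERHOME/test/Makefile):

  #!/bin/sh
  #PBS -l nodes=1:ppn=8
  #PBS -j oe
  export AMBERHOME=/usr/local/amber/amber11
  export DO_PARALLEL="mpirun -np 8"
  cd $AMBERHOME/test
  make test.parallel

The logs end up under $AMBERHOME/test, so you can go through the
pass/fail output after the job finishes.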

Hope that helps,

Paul


On Oct 3, 2010, at 10:30 PM, Hadrian Djohari wrote:

> Hi everyone,
>
>
>> I am having some difficulty installing Amber 11 on our cluster, so
>> I would appreciate everyone's help. I would like answers to all the
>> questions below, but if someone can help me with the specific
>> openmpi/intel installation problem right at the bottom, I would be
>> really grateful.
>>
>> Amber installation questions:
>>
>> 1. The instructions in the Amber and AmberTools documents seem to
>> direct people to have both the source and the compiled binaries in
>> the same directory by using only "make" instead of "make install".
>> Is this the only way? We prefer to have the source and the compiled
>> binaries in different directories (/usr/local/src/amber and
>> /usr/local/amber). If we want separate directories, which one should
>> $AMBERHOME point to?
>>
>> 2. Should the installer be "root" or a regular user? I compiled
>> "make serial" just fine as a user, but found some errors compiling
>> as "root", probably due to some paths not being recognized.
>>
>> 3. Since I'm installing Amber, I should also install AmberTools, and
>> I'd like to install the parallel version. So, are there step-by-step
>> instructions on how to install Amber (with AmberTools) as parallel
>> code (please see no. 1 too)? That would be EXTREMELY HELPFUL. Some
>> of these questions need answers (why do I have to install the serial
>> versions first? how do I link to existing libraries (fftw, mpi,
>> mkl)? what paths does the installer need in order to run?). These
>> small things can lead to an unsuccessful compilation, so I would
>> really like some more detailed information about this.
>>
>> 4. Once the code is compiled, is there a short test job that uses
>> PBS submission, rather than the "make test" that is available in the
>> directories?
>>
>>
>> Also, I have tried this approach to install MD on a RHEL5.5 server:
> export AMBERHOME=/usr/local/amber/amber11
> cd $AMBERHOME/AmberTools/src
> ./configure -mpi intel
> make parallel
> This is successful.
>
> And then:
> cd $AMBERHOME/src
> make parallel
>
> But it produces this error:
> ...
> mpif90 -fast -c charmm_gold.f90
> mpif90 -fast -o pmemd.MPI gbl_constants.o gbl_datatypes.o state_info.o
> file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o
> prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o
> parallel.o gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
> pme_blk_recip.o pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o
> bspline.o pme_force.o pbc.o nb_pairlist.o nb_exclusions.o cit.o
> dynamics.o bonds.o angles.o dihedrals.o extra_pnts_nb14.o runmd.o
> loadbal.o shake.o prfs.o mol_list.o runmin.o constraints.o
> axis_optimize.o gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o
> runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o random.o degcnt.o
> erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o master_setup.o
> pme_alltasks_setup.o pme_setup.o ene_frc_splines.o gb_alltasks_setup.o
> nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o
> charmm_gold.o ../../netcdf/lib/libnetcdf.a
> ipo: remark #11000: performing multi-file optimizations
> ipo: remark #11005: generating object file /tmp/ipo_ifortPoYvAz.o
> /usr/bin/ld: cannot find -lmpi_f90
> make[2]: *** [pmemd.MPI] Error 1
> make[2]: Leaving directory `/usr/local/amber/amber11/src/pmemd/src'
> make[1]: *** [parallel] Error 2
> make[1]: Leaving directory `/usr/local/amber/amber11/src/pmemd'
> make: *** [parallel] Error 2
>
> I don't know why it cannot find -lmpi_f90, since libmpi_f90.* are in
> a directory that is on LD_LIBRARY_PATH:
> hxd58.login:/usr/local/amber/amber11/src$ echo $LD_LIBRARY_PATH
> /usr/local/openmpi/openmpi-intel/lib/:/usr/local/lib:/usr/local/intel/compilers/11.1.056/lib/intel64:/usr/local/intel/compilers/11.1.056/idb/lib/intel64
> hxd58.login:/usr/local/amber/amber11/src$ ll /usr/local/openmpi/openmpi-intel/lib
> ...
> -rwxr-xr-x 1 root root 1129 Mar 26 2010 libmpi_f90.la
> lrwxrwxrwx 1 root root 19 Mar 26 2010 libmpi_f90.so -> libmpi_f90.so.0.0.0
> lrwxrwxrwx 1 root root 19 Mar 26 2010 libmpi_f90.so.0 -> libmpi_f90.so.0.0.0
> -rwxr-xr-x 1 root root 17140 Mar 26 2010 libmpi_f90.so.0.0.0
> ...
>
> Thank you,
>
> --
> Hadrian Djohari
> HPCC Manager
> Case Western Reserve University
> (W): 216-368-0395
> (M): 216-798-7490
>
> _______________________________________________
> AMBER mailing list
> AMBER.ambermd.org
> http://lists.ambermd.org/mailman/listinfo/amber


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Sun Oct 03 2010 - 23:00:06 PDT