Re: [AMBER] Fwd: Amber installation

From: Paul S. Nerenberg <psn.berkeley.edu>
Date: Mon, 4 Oct 2010 23:01:44 -0700

Hi Hadrian,

The same test also "fails" for me, but take a look at the log file
generated by the test (/test/tip5p/mdout.tip5p). In my case, the log
file shows that the test actually ran just fine (so there's no need to
worry). However, there is a small difference at the end of the output
file in the averages/rms section -- specifically, the saved test case
output has rms fluctuations for volume, while my test output doesn't.
I see the same "error" in a version I compiled with PGI (on another
machine) as well, which makes me suspect that there may be something
slightly awry with the saved tip5p test case. (The saved tip5p_nve
test case doesn't have any rms fluctuations for volume, which adds to
my suspicion re: tip5p.) Maybe someone else on the list knows more?
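
If you want to see exactly what differs on your own machine, the test
script writes a diff file alongside the outputs (paths here assume the
standard layout under $AMBERHOME):

  # the diff the test script saved
  cat $AMBERHOME/test/tip5p/mdout.tip5p.dif

  # or compare the saved reference output with yours directly
  diff $AMBERHOME/test/tip5p/mdout.tip5p.save $AMBERHOME/test/tip5p/mdout.tip5p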

Best,

Paul


On Oct 4, 2010, at 9:52 PM, Hadrian Djohari wrote:

> Hi Paul and Case,
>
> Thank you for the responses.
> The changed flags below did the trick to compile Amber successfully
> with openmpi and intel.
>
> All of the AmberTools test.parallel tests passed, and all but one of
> the Amber11 tests passed. This is the one that failed:
> ==============================================================
> export TESTsander='../../exe/pmemd.MPI'; cd tip5p && ./Run.tip5p
> diffing mdout.tip5p.save with mdout.tip5p
> possible FAILURE: check mdout.tip5p.dif
> ==============================================================
>
> Let me know if I should be concerned about this single test failure.
>
> Hadrian.
>
> On Mon, Oct 4, 2010 at 12:40 PM, Paul S. Nerenberg
> <psn.berkeley.edu> wrote:
>
>> Hi Hadrian,
>>
>> The last problem (compiling pmemd) is now a known issue with the
>> intel compilers.* It comes down to the -fast optimization flag, the
>> other flags it automatically pulls in (notably -static), and the type
>> of libraries you have on your system (which look to be dynamic). Try editing
>> your $AMBERHOME/src/config.h (and $AMBERHOME/AmberTools/src/config.h)
>> file and replacing '-fast' with '-ipo -O3 -no-prec-div -xHost'. Then
>> do a "make clean" and recompile the parallel version.
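>>
>> If it helps, the same edit can be done from the command line; this is
>> only a sketch, so double-check both config.h files afterwards in case
>> the flag string differs slightly on your system:
>>
>>   # swap -fast for explicit optimization flags in both config.h files
>>   sed -i 's/-fast/-ipo -O3 -no-prec-div -xHost/g' $AMBERHOME/src/config.h
>>   sed -i 's/-fast/-ipo -O3 -no-prec-div -xHost/g' $AMBERHOME/AmberTools/src/config.h
>>
>>   # then rebuild the parallel binaries
>>   cd $AMBERHOME/src && make clean && make parallel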
>>
>> Best,
>>
>> Paul
>>
>> *It's not really a bug; you just have to be "heads up" about whether
>> you've got static or dynamic libraries.
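>>
>> For example, a quick way to check what you actually have: if only .so
>> files show up, as in the directory listing at the end of your message,
>> then there are no static MPI libraries for -static to link against.
>>
>>   # static archives end in .a, shared libraries end in .so
>>   ls /usr/local/openmpi/openmpi-intel/lib/libmpi_f90.*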
>>
>>
>> On Oct 3, 2010, at 10:30 PM, Hadrian Djohari wrote:
>>
>>> Hi everyone,
>>>
>>>
>>>> I am having some difficulty installing Amber 11 on our cluster, so I
>>>> would appreciate everyone's help. I would like answers to all of the
>>>> questions below, but if someone can help me with the specific
>>>> openmpi/intel installation problem right at the bottom, I would be
>>>> really grateful.
>>>>
>>>> Amber installation questions:
>>>>
>>>> 1. The instructions in the Amber and AmberTools documentation seem
>>>> to direct people to keep both the source and the compiled binaries
>>>> in the same directory by using only "make" instead of "make install".
>>>> Is this the only way? We prefer to have the source and the compiled
>>>> binaries in different directories (/usr/local/src/amber and
>>>> /usr/local/amber). If we want separate directories, which one should
>>>> $AMBERHOME point to?
>>>>
>>>> 2. Should the installer be "root" or a regular user? I compiled
>>>> "make serial" just fine as a user, but ran into some errors when
>>>> compiling as "root", probably because some paths were not being
>>>> recognized.
>>>>
>>>> 3. Since I'm installing Amber, I should also install AmberTools,
>>>> and I'd like to install the parallel version. So, are there
>>>> step-by-step instructions on how to install Amber (with AmberTools)
>>>> as parallel code (please see no. 1 too)? That would be EXTREMELY
>>>> HELPFUL. Several questions need answers (why do I have to install
>>>> the serial versions first? how do I link to existing libraries
>>>> (fftw, mpi, mkl)? what paths does the installer need in order to
>>>> run?). These small things can lead to an unsuccessful compilation,
>>>> so I would really like some more detailed information about this.
>>>>
>>>> 4. Once the code is compiled, is there a short test job that uses
>>>> PBS submission, rather than the "make test" targets that are
>>>> available in the test directories? (A rough sketch of what I have in
>>>> mind follows below.)
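>>>>
>>>> Just to illustrate, something along these lines is what I mean (the
>>>> job name, queue, core count, walltime, and input file names here are
>>>> only placeholders of mine, not anything from the Amber distribution):
>>>>
>>>>   #!/bin/bash
>>>>   #PBS -N amber_test
>>>>   #PBS -q batch
>>>>   #PBS -l nodes=1:ppn=4
>>>>   #PBS -l walltime=00:30:00
>>>>   # AMBERHOME as set during the install
>>>>   export AMBERHOME=/usr/local/amber/amber11
>>>>   cd $PBS_O_WORKDIR
>>>>   # run a short parallel MD job with pmemd.MPI
>>>>   mpirun -np 4 $AMBERHOME/exe/pmemd.MPI -O -i mdin -o mdout -p prmtop -c inpcrd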
>>>>
>>>>
>>>> Also, I have tried this approach to install MD on a RHEL5.5 server:
>>> export AMBERHOME=/usr/local/amber/amber11
>>> cd $AMBERHOME/AmberTools/src
>>> ./configure -mpi intel
>>> make parallel
>>> This is successful.
>>>
>>> And then:
>>> cd $AMBERHOME/src
>>> make parallel
>>>
>>> But it produces this error:
>>> ...
>>> mpif90 -fast -c charmm_gold.f90
>>> mpif90 -fast -o pmemd.MPI gbl_constants.o gbl_datatypes.o
>>> state_info.o
>>> file_io_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o mdin_debugf_dat.o
>>> prmtop_dat.o inpcrd_dat.o dynamics_dat.o img.o parallel_dat.o
>>> parallel.o
>>> gb_parallel.o pme_direct.o pme_recip_dat.o pme_slab_recip.o
>>> pme_blk_recip.o
>>> pme_slab_fft.o pme_blk_fft.o pme_fft_dat.o fft1d.o bspline.o
>>> pme_force.o
>>> pbc.o nb_pairlist.o nb_exclusions.o cit.o dynamics.o bonds.o
>>> angles.o
>>> dihedrals.o extra_pnts_nb14.o runmd.o loadbal.o shake.o prfs.o
>>> mol_list.o
>>> runmin.o constraints.o axis_optimize.o gb_ene.o veclib.o gb_force.o
>>> timers.o
>>> pmemd_lib.o runfiles.o file_io.o bintraj.o pmemd_clib.o pmemd.o
>>> random.o
>>> degcnt.o erfcfun.o nmr_calls.o nmr_lib.o get_cmdline.o
>>> master_setup.o
>>> pme_alltasks_setup.o pme_setup.o ene_frc_splines.o
>>> gb_alltasks_setup.o
>>> nextprmtop_section.o angles_ub.o dihedrals_imp.o cmap.o charmm.o
>>> charmm_gold.o ../../netcdf/lib/libnetcdf.a
>>> ipo: remark #11000: performing multi-file optimizations
>>> ipo: remark #11005: generating object file /tmp/ipo_ifortPoYvAz.o
>>> /usr/bin/ld: cannot find -lmpi_f90
>>> make[2]: *** [pmemd.MPI] Error 1
>>> make[2]: Leaving directory `/usr/local/amber/amber11/src/pmemd/src'
>>> make[1]: *** [parallel] Error 2
>>> make[1]: Leaving directory `/usr/local/amber/amber11/src/pmemd'
>>> make: *** [parallel] Error 2
>>>
>>> I don't know why it cannot find -lmpi_f90, since libmpi_f90.* are in
>>> a directory that is on LD_LIBRARY_PATH:
>>> hxd58.login:/usr/local/amber/amber11/src$ echo $LD_LIBRARY_PATH
>>> /usr/local/openmpi/openmpi-intel/lib/:/usr/local/lib:/usr/local/
>>> intel/compilers/11.1.056/lib/intel64:/usr/local/intel/compilers/
>>> 11.1.056/idb/lib/intel64
>>> hxd58.login:/usr/local/amber/amber11/src$ ll
>>> /usr/local/openmpi/openmpi-intel/lib
>>> ...
>>> -rwxr-xr-x 1 root root 1129 Mar 26 2010 libmpi_f90.la
>>> lrwxrwxrwx 1 root root 19 Mar 26 2010 libmpi_f90.so ->
>>> libmpi_f90.so.0.0.0
>>> lrwxrwxrwx 1 root root 19 Mar 26 2010 libmpi_f90.so.0 ->
>>> libmpi_f90.so.0.0.0
>>> -rwxr-xr-x 1 root root 17140 Mar 26 2010 libmpi_f90.so.0.0.0
>>> ...
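>>>
>>> One thing I can check is what the Open MPI wrapper actually passes to
>>> the linker, since LD_LIBRARY_PATH only matters at run time, not at
>>> link time (the --showme option below is specific to Open MPI's
>>> wrapper compilers):
>>>
>>>   # show the link line mpif90 generates, including its -L paths
>>>   mpif90 --showme:link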
>>>
>>> Thank you,
>>>
>>> --
>>> Hadrian Djohari
>>> HPCC Manager
>>> Case Western Reserve University
>>> (W): 216-368-0395
>>> (M): 216-798-7490
>>>
>>
>
>
>
> --
> Hadrian Djohari
> HPCC Manager
> Case Western Reserve University
> (W): 216-368-0395
> (M): 216-798-7490


_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber
Received on Mon Oct 04 2010 - 23:30:03 PDT