Hi Kito,
For pmemd, MKL ONLY helps if you want to run generalized Born (implicit
solvent) simulations. It makes no difference whatsoever for explicit solvent
simulations, so I would only bother with MKL if you want GB.
Regards - Bob Duke
----- Original Message -----
From: "Ross Walker" <ross.rosswalker.co.uk>
To: "'AMBER Mailing List'" <amber.ambermd.org>
Sent: Wednesday, November 25, 2009 3:24 PM
Subject: RE: Re: Re: [AMBER] Not able to compile pmemd with openmpi in
Amber9
> Hi Kito,
>
> MKL helps with performance somewhat, although more with implicit solvent
> than with explicit solvent. Intel continually, for reasons known only to them,
> changes the way one has to link to the MKL libraries. When AMBER 9 was
> released it used the correct linking; newer versions of MKL have since
> changed things, which broke it. AMBER 10 has been updated via bugfixes to
> work with newer MKL versions, but AMBER 9 was not.
>
> Your options are therefore to use AMBER 10 with the latest bugfixes, or to
> modify the link line so it contains the correct linking for the MKL version
> you have (or just do away with MKL entirely, as you did).
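>
> If you go the AMBER 10 route, the usual procedure is a clean rebuild after
> applying the bugfix file for your AMBER version (a rough sketch; download
> bugfix.all from the AMBER web site and follow the instructions posted there):
>
>   cd $AMBERHOME
>   patch -p0 -N < bugfix.all   # bugfix.all as downloaded from the AMBER bugfix page
>
> followed by a clean rebuild.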
>
> Which MKL version are you using?
>
> For MKL 9.0 this should be:
>
> libmkl_lapack.a libmkl_em64t.a
>
> For 10.x or later (although they may have changed it AGAIN for 11.x) it
> should be:
>
> libmkl_intel_lp64.a libmkl_sequential.a libmkl_core.a
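>
> For dynamic linking, the corresponding MATH_LIBS line in pmemd's config.h
> would then look something like one of these (illustrative only; adjust
> MKL_HOME and the lib subdirectory to match your install):
>
>   MKL 9.0:   MATH_LIBS = -L$(MKL_HOME)/lib/em64t -lmkl_lapack -lmkl_em64t -lguide -lpthread
>   MKL 10.x:  MATH_LIBS = -L$(MKL_HOME)/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread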
>
> Also make sure you run the test cases. Each new version of the Intel
> compilers introduces new compiler bugs, which can lead to incorrect answers
> being generated.
>
> 11.1.046 is pretty bleeding edge. I still use 10.1.018 since I know it works
> and I trust it to get the answers correct.
>
> Good luck,
> Ross
>
>> -----Original Message-----
>> From: amber-bounces.ambermd.org [mailto:amber-bounces.ambermd.org] On
>> Behalf Of Jason Swails
>> Sent: Wednesday, November 25, 2009 8:51 AM
>> To: AMBER Mailing List
>> Subject: Re: Re: Re: [AMBER] Not able to compile pmemd with openmpi in
>> Amber9
>>
>> Kito,
>>
>> You want to be careful here. Run the tests to make sure it worked. However,
>> that is not what I suggested. Try using this config.h file instead:
>>
>> MATH_DEFINES =
>> MATH_LIBS =
>> IFORT_RPATH = /opt/gridengine/lib/lx26-amd64:/usr/lib/perl5/5.8.8
>> MATH_DEFINES = -DMKL
>> MATH_LIBS = -L$(MKL_HOME)/lib/em64t -lguide -lpthread -lmkl_core -lmkl_sequential -lmkl_intel_lp64
>> FFT_DEFINES = -DPUBFFT
>> FFT_INCLUDE =
>> FFT_LIBS =
>> NETCDF_HOME =
>> NETCDF_DEFINES =
>> NETCDF_MOD =
>> NETCDF_LIBS =
>> MPI_HOME = /opt/mpi/openmpi/1.3.3/intel/
>> MPI_DEFINES = -DMPI -DSLOW_NONBLOCKING_MPI
>> MPI_INCLUDE = -I$(MPI_HOME)/include
>> MPI_LIBDIR = $(MPI_HOME)/lib
>> MPI_LIBS = -L$(MPI_LIBDIR)
>> DIRFRC_DEFINES = -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>> CPP = /lib/cpp
>> CPPFLAGS = -traditional -P
>> F90_DEFINES = -DFFTLOADBAL_2PROC
>> F90 = mpif90
>> MODULE_SUFFIX = mod
>> F90FLAGS = -c
>> F90_OPT_DBG = -g -traceback
>> F90_OPT_LO = -O0
>> F90_OPT_MED = -O2
>> F90_OPT_HI = -axWPT -ip -O3
>> F90_OPT_DFLT = $(F90_OPT_HI)
>> CC = mpicc
>> CFLAGS =
>> LOAD = mpif90
>> LOADFLAGS =
>> LOADLIBS = -limf -lsvml -Wl,-rpath=$(IFORT_RPATH)
>>
>> What you did was prevent the pmemd build from using the Intel MKL at all.
>> However, without also changing the MATH_LIBS line, you were still trying to
>> link against libguide (which is an MKL library). Removing both told pmemd
>> not to use the Intel MKL, but the Intel MKL performs well and will probably
>> make your pmemd installation faster. Also, once you start changing
>> preprocessor items (like removing -DMKL), you need to run make clean before
>> compiling with the new options. And since you are not compiling with MKL
>> anyway, you should not need libguide, so linking it in the MATH_LIBS line is
>> unnecessary.
>>
>> Try making clean and using the above config.h file and see if that works.
>> Also, don't forget to run the test cases to make sure you have a working
>> executable.
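>>
>> A rough sketch of that cycle, using the paths from your build log (the exact
>> test target and mpirun invocation may differ on your system, so check what
>> is available under $AMBERHOME/test):
>>
>>   cd /opt/apps/amber/9/intel-mkl/src/pmemd/src
>>   make clean                          # required after changing -D options
>>   make                                # rebuild pmemd with the new config.h
>>   export DO_PARALLEL='mpirun -np 2'   # how the test scripts launch the parallel binary
>>   cd $AMBERHOME/test && make test.pmemd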
>>
>> Good luck!
>> Jason
>>
>>
>> On Wed, Nov 25, 2009 at 3:12 AM, Kito <kitobhai.rediffmail.com> wrote:
>>
>> > I was able to solve the problem
>> >
>> > "/usr/lib64/libguide.so: no version information available (required
>> by
>> > ./pmemd)"
>> >
>> > It looked like "no version information available" means that the shared
>> > object found on the system carries an older (unversioned) library, so I
>> > created a soft link to /opt/intel/Compiler/11.1/046/lib/intel64/libguide.so
>> > in /usr/lib64 and it worked.
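>> >
>> > For anyone hitting the same warning, the fix amounted to something like this
>> > (illustrative; the Intel path depends on your compiler version, and you may
>> > want to move the old /usr/lib64/libguide.so aside first):
>> >
>> >   ln -sf /opt/intel/Compiler/11.1/046/lib/intel64/libguide.so /usr/lib64/libguide.so
>> >
>> > Putting the Intel lib/intel64 directory ahead of /usr/lib64 in
>> > LD_LIBRARY_PATH would be another way to get the newer libguide picked up at
>> > run time.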
>> >
>> > Thanks all
>> >
>> > On Wed, 25 Nov 2009 13:21:41 +0530 wrote:
>> >
>> > >Thanks Jason,
>> >
>> > When I changed MATH_DEFINES = -DMKL to MATH_DEFINES = , it worked and I was
>> > able to link the object files.
>> >
>> > But on checking library dependencies, I have a problem with libguide.so
>> > (first line of the output below):
>> >
>> > $> ldd pmemd
>> >
>> > ./pmemd: /usr/lib64/libguide.so: no version information available (required by ./pmemd)
>> > libguide.so => /usr/lib64/libguide.so (0x00002b1a6e4ca000)
>> > libpthread.so.0 => /lib64/libpthread.so.0 (0x00000032fdc00000)
>> > libimf.so => /usr/lib64/libimf.so (0x00002b1a6e607000)
>> > libsvml.so => /usr/lib64/libsvml.so (0x00002b1a6e89f000)
>> > libmpi_f90.so.0 => /opt/mpi/openmpi/1.3.3/intel/lib/libmpi_f90.so.0 (0x00002b1a6e9e1000)
>> > libmpi_f77.so.0 => /opt/mpi/openmpi/1.3.3/intel/lib/libmpi_f77.so.0 (0x00002b1a6ebe4000)
>> > libmpi.so.0 => /opt/mpi/openmpi/1.3.3/intel/lib/libmpi.so.0 (0x00002b1a6ee1c000)
>> > libopen-rte.so.0 => /opt/mpi/openmpi/1.3.3/intel/lib/libopen-rte.so.0 (0x00002b1a6f0ec000)
>> > libopen-pal.so.0 => /opt/mpi/openmpi/1.3.3/intel/lib/libopen-pal.so.0 (0x00002b1a6f34c000)
>> > libdl.so.2 => /lib64/libdl.so.2 (0x00000032fd800000)
>> > libnsl.so.1 => /lib64/libnsl.so.1 (0x0000003300400000)
>> > libutil.so.1 => /lib64/libutil.so.1 (0x000000330a600000)
>> > libm.so.6 => /lib64/libm.so.6 (0x00000032fd400000)
>> > libc.so.6 => /lib64/libc.so.6 (0x00000032fd000000)
>> > libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x000000330ee00000)
>> > /lib64/ld-linux-x86-64.so.2 (0x00000032fcc00000)
>> > libirc.so => /usr/lib64/libirc.so (0x00002b1a6f5e8000)
>> > libifport.so.5 => /opt/intel/Compiler/11.1/046/lib/intel64/libifport.so.5 (0x00002b1a6f71e000)
>> > libifcoremt.so.5 => /opt/intel/Compiler/11.1/046/lib/intel64/libifcoremt.so.5 (0x00002b1a6f857000)
>> > libintlc.so.5 => /opt/intel/Compiler/11.1/046/lib/intel64/libintlc.so.5 (0x00002b1a6fafb000)
>> >
>> > Thanks again
>> >
>> > Kito
>> >
>> > On Wed, 25 Nov 2009 12:34:22 +0530 wrote:
>> >
>> > >This appears to be an MKL issue. What version of MKL are you using? It
>> > would appear that the MKL libraries you're linking against no longer contain
>> > the vd___MATHFUNCTION__ functions/subroutines, so you have to find which
>> > library is missing and link to that one as well. You can try adding
>> > -lmkl_intel_lp64 -lmkl_core and/or -lmkl_sequential to the MATH_LIBS line.
>> > I believe those are the only three libraries that pmemd will need, but this
>> > is for Intel MKL 10 or 11. For older MKL versions, the config file you have
>> > above should work...
>> >
>> > I just now tried compiling pmemd9 with the ifort11 compiler and MKL 11. It
>> > failed with the default config.h file (slightly different errors than yours,
>> > but still MKL-related). Adding the three -lmkl... statements above to
>> > MATH_LIBS fixed the problems.
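>> >
>> > Concretely, that means changing the MATH_LIBS line in your config.h from
>> > what you posted to something along these lines (a sketch; keep the -L path
>> > pointing at your own MKL install):
>> >
>> >   MATH_LIBS = -L$(MKL_HOME)/lib/em64t -lguide -lpthread -lmkl_intel_lp64 -lmkl_sequential -lmkl_core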
>> >
>> > Again, though, this is dependent on the version of MKL that you're using.
>> >
>> > Good luck, and I'd be interested to hear any success/failure (along with
>> > which MKL you have installed :) )
>> >
>> > All the best,
>> > Jason
>> >
>> > On Wed, Nov 25, 2009 at 1:39 AM, Kito wrote:
>> >
>> > > Hi Amber Experts/Users,
>> > >
>> > > I am having difficulties with the pmemd installation.
>> > >
>> > > My hardware and software configuration is: Linux Enterprise, OpenMPI,
>> > > AMD x86_64, Intel Compilers.
>> > >
>> > > My config.h file is:
>> > >
>> > > MATH_DEFINES =
>> > > MATH_LIBS =
>> > > IFORT_RPATH = /opt/gridengine/lib/lx26-amd64:/usr/lib/perl5/5.8.8
>> > > MATH_DEFINES = -DMKL
>> > > MATH_LIBS = -L$(MKL_HOME)/lib/em64t -lguide -lpthread
>> > > FFT_DEFINES = -DPUBFFT
>> > > FFT_INCLUDE =
>> > > FFT_LIBS =
>> > > NETCDF_HOME =
>> > > NETCDF_DEFINES =
>> > > NETCDF_MOD =
>> > > NETCDF_LIBS =
>> > > MPI_HOME = /opt/mpi/openmpi/1.3.3/intel/
>> > > MPI_DEFINES = -DMPI -DSLOW_NONBLOCKING_MPI
>> > > MPI_INCLUDE = -I$(MPI_HOME)/include
>> > > MPI_LIBDIR = $(MPI_HOME)/lib
>> > > MPI_LIBS = -L$(MPI_LIBDIR)
>> > > DIRFRC_DEFINES = -DDIRFRC_EFS -DDIRFRC_COMTRANS -DDIRFRC_NOVEC
>> > > CPP = /lib/cpp
>> > > CPPFLAGS = -traditional -P
>> > > F90_DEFINES = -DFFTLOADBAL_2PROC
>> > > F90 = mpif90
>> > > MODULE_SUFFIX = mod
>> > > F90FLAGS = -c
>> > > F90_OPT_DBG = -g -traceback
>> > > F90_OPT_LO = -O0
>> > > F90_OPT_MED = -O2
>> > > F90_OPT_HI = -axWPT -ip -O3
>> > > F90_OPT_DFLT = $(F90_OPT_HI)
>> > > CC = mpicc
>> > > CFLAGS =
>> > > LOAD = mpif90
>> > > LOADFLAGS =
>> > > LOADLIBS = -limf -lsvml -Wl,-rpath=$(IFORT_RPATH)
>> > >
>> > > The error I get is:
>> > >
>> > > mpif90 -o pmemd gbl_constants.o gbl_datatypes.o state_info.o file_io_dat.o
>> > > parallel_dat.o mdin_ctrl_dat.o mdin_ewald_dat.o prmtop_dat.o inpcrd_dat.o
>> > > dynamics_dat.o img.o parallel.o pme_direct.o pme_recip.o pme_fft.o fft1d.o
>> > > bspline.o pme_force.o pbc.o nb_pairlist.o cit.o dynamics.o bonds.o angles.o
>> > > dihedrals.o runmd.o loadbal.o shake.o runmin.o constraints.o axis_optimize.o
>> > > gb_ene.o veclib.o gb_force.o timers.o pmemd_lib.o runfiles.o file_io.o
>> > > bintraj.o pmemd_clib.o pmemd.o random.o degcnt.o erfcfun.o nmr_calls.o
>> > > nmr_lib.o get_cmdline.o master_setup.o alltasks_setup.o pme_setup.o
>> > > ene_frc_splines.o nextprmtop_section.o
>> > > -L/opt/intel/Compiler/11.1/046/mkl//lib/em64t -lguide -lpthread
>> > > -L/opt/mpi/openmpi/1.3.3/intel//lib -limf -lsvml
>> > > -Wl,-rpath=/opt/gridengine/lib/lx26-amd64:/usr/lib/perl5/5.8.8
>> > >
>> > > gb_ene.o: In function `gb_ene_mod_mp_gb_ene_':
>> > > gb_ene.f90:(.text+0x19de): undefined reference to `vdinvsqrt_'
>> > > gb_ene.f90:(.text+0x1c93): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x1cac): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x1fde): undefined reference to `vdln_'
>> > > gb_ene.f90:(.text+0x1ff7): undefined reference to `vdln_'
>> > > gb_ene.f90:(.text+0x3b98): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x3dd9): undefined reference to `vdexp_'
>> > > gb_ene.f90:(.text+0x4254): undefined reference to `vdinvsqrt_'
>> > > gb_ene.f90:(.text+0x4289): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x43df): undefined reference to `vdexp_'
>> > > gb_ene.f90:(.text+0x43fc): undefined reference to `vdinvsqrt_'
>> > > gb_ene.f90:(.text+0x5403): undefined reference to `vdinvsqrt_'
>> > > gb_ene.f90:(.text+0x55b1): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x55ca): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x57b2): undefined reference to `vdln_'
>> > > gb_ene.o: In function `gb_ene_mod_mp_calc_born_radii_':
>> > > gb_ene.f90:(.text+0x678a): undefined reference to `vdinvsqrt_'
>> > > gb_ene.f90:(.text+0x6a3f): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x6a58): undefined reference to `vdinv_'
>> > > gb_ene.f90:(.text+0x6d8a): undefined reference to `vdln_'
>> > > gb_ene.f90:(.text+0x6da3): undefined reference to `vdln_'
>> > > make[1]: *** [pmemd] Error 1
>> > > make[1]: Leaving directory `/opt/apps/amber/9/intel-mkl/src/pmemd/src'
>> > > make: *** [all] Error 2
>> > >
>> > > Please help... I have gone through the archives and found similar
>> > > problems, but the suggested solutions do not seem to help.
>> > >
>> > > Thanks in advance,
>> > > Kito
>> > >
>> > --
>> > ---------------------------------------
>> > Jason M. Swails
>> > Quantum Theory Project,
>> > University of Florida
>> > Ph.D. Graduate Student
>> > 352-392-4032
>> >
>>
>> --
>> ---------------------------------------
>> Jason M. Swails
>> Quantum Theory Project,
>> University of Florida
>> Ph.D. Graduate Student
>> 352-392-4032
_______________________________________________
AMBER mailing list
AMBER.ambermd.org
http://lists.ambermd.org/mailman/listinfo/amber