However, after a perusal of the latest ACML version (4.1.0), my impression is that not only double precision but, in some cases, quadruple precision is provided. Moreover, my understanding is that BLAS, LAPACK, ScaLAPACK, etc. are optimized for the various platforms. From a brief survey of ACML users in my department, it seems that double precision has been there for at least two years (or more; two years ago refers to their first use of ACML).
I am certainly not in a position to argue these matters (nor am I aiming to). My only interest is that, should the MKL/QM-MM problem stem from the dual-Opteron hardware, or from MKL itself, I would like to try ACML if I get a hint on how to link it to Amber (incidentally, I already have the ACML libraries on my machine). More than fifty atoms is a normal situation for an organic chemist like me.
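
For what it is worth, before touching the Amber build I would
first check, outside Amber, that the ACML on my machine resolves
one of the diagonalizers listed below. A tiny Fortran test,
compiled with something like "ifort test_acml.f90
-L/opt/acml4.1.0/ifort64/lib -lacml" (the path is only my guess
for a default install), would be enough:

  ! Minimal check that the linked math library provides dsyev,
  ! one of the LAPACK drivers mentioned below for QM/MM.
  program test_acml_lapack
     implicit none
     integer, parameter :: n = 3
     double precision   :: a(n,n), w(n), work(3*n-1)
     integer            :: info

     ! Small symmetric test matrix.
     a = reshape( (/ 2.d0, 1.d0, 0.d0, &
                     1.d0, 2.d0, 1.d0, &
                     0.d0, 1.d0, 2.d0 /), (/ n, n /) )

     ! 'V' = compute eigenvectors, 'U' = upper triangle supplied.
     call dsyev('V', 'U', n, a, n, w, work, size(work), info)

     if (info == 0) then
        print *, 'dsyev ok, eigenvalues:', w
     else
        print *, 'dsyev failed, info =', info
     end if
  end program test_acml_lapack

If that runs and prints three eigenvalues, the same link line
should at least be a starting point for the Amber config.
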
Regards
francesco
--- On Fri, 5/30/08, Francesco Pietra <chiendarret.yahoo.com> wrote:
> From: Francesco Pietra <chiendarret.yahoo.com>
> Subject: RE: Fw: RE: AMBER: MKL libraries/Amber10
> To: amber.scripps.edu
> Date: Friday, May 30, 2008, 11:44 PM
> Hi Ross:
> I believe Amber users (and others, if they come across it)
> should be grateful for this enlightening answer. Thanks for
> taking the time to explain in such detail.
>
> francesco
>
>
> --- On Fri, 5/30/08, Ross Walker
> <ross.rosswalker.co.uk> wrote:
>
> > From: Ross Walker <ross.rosswalker.co.uk>
> > Subject: RE: Fw: RE: AMBER: MKL libraries/Amber10
> > To: amber.scripps.edu
> > Date: Friday, May 30, 2008, 3:41 PM
> > > If it is MKL that makes the problem (on different
> > > hardware, by Cenk and me), why not compile Amber10 with
> > > ifort + ACML or GOTO libraries? Is there any indication
> > > of how to set the config file?
> >
> > Last time I checked, GOTO was basically a cut-down BLAS
> > library designed to get very high Linpack numbers and it
> > lacked any of the really useful stuff. This may have
> > changed. Note it is actually LAPACK that we really need,
> > which obviously makes use of a lot of BLAS calls, so one
> > could perhaps use the LAPACK code included in AMBER and
> > link it to the GOTO BLAS, but I don't know how useful
> > this will be.
> >
> > As for ACML, the last time I looked at it (albeit a
> > couple of years ago now) it seemed to have largely
> > forgotten that double precision arithmetic exists. All
> > the vector functions were single precision, and I don't
> > recall it having any of the matrix diagonalization
> > routines in there, so again not very useful. However,
> > this may have changed, so perhaps we could consider
> > adding ACML support. Maybe if NSF awards an AMD-based
> > machine to SDSC I'll do it to earn some kudos ;-).
> >
> > > In a previous mail Ross said that MKL will speed up
> > > QM/MM by a large (not minor) margin.
> >
> > This will of course depend on the QM system size. For
> > fewer than 30 atoms or so there will be almost no
> > difference, for 30 to 50 atoms it will be minor, and for
> > more than 50 atoms it will be a large difference, so the
> > importance of MKL is a function of what you want to
> > run... <sigh> nothing is ever simple ;-).
> >
> > > As far as I know ACML is an equivalent of BLAS, and
> > > perhaps GOTO is even better than MKL or ACML.
> >
> > The key routines for QM/MM are vdinvsqrt, vdexp, vdsqrt,
> > vdcos, dspev, dspevd, dspevr, dsyev, dsyevd and dsyevr.
> >
> > The first four are vector math functions and not strictly
> > LAPACK routines. There is also no consensus on the
> > interface for such routines, so GOTO would be of no use
> > here. ACML might help if it does double precision
> > vectors.
> > In fact looking at the documentation:
> > http://developer.amd.com/assets/acml_userguide.pdf
> >
> > It does vectored cosine, vectored exponential and
> > vectored log, but AMD seem to have forgotten that people
> > might want to do vectored sqrt or inverse sqrt. They
> > support vectored power (to a float) but only in single
> > precision :-(. And that probably doesn't make use of
> > specialist sqrt hardware anyway, even if you do it to the
> > power 0.5, so that's probably not much good. So for
> > vector math ACML would appear to be pretty useless.
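
If I follow the interface issue correctly, what is missing is
just an element-wise routine. A plain-Fortran stand-in for the
inverse square root, assuming the (n, input, output) calling
convention of MKL's vdinvsqrt, would be roughly:

  ! Element-wise inverse square root of x(1:n) into y(1:n),
  ! an explicit loop in place of a tuned vector routine.
  subroutine vdinvsqrt_fallback(n, x, y)
     implicit none
     integer,          intent(in)  :: n
     double precision, intent(in)  :: x(n)
     double precision, intent(out) :: y(n)
     integer :: i
     do i = 1, n
        y(i) = 1.0d0 / sqrt(x(i))
     end do
  end subroutine vdinvsqrt_fallback

i.e. correct, but without the hand-tuned vectorization that
makes the library versions fast.
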
> >
> > It at least looks like the LAPACK implementation is
> > complete, so this might help a bit - the threading might
> > also help some for matrix diagonalizations, although one
> > would have to check.
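
To make the LAPACK side concrete: the divide-and-conquer driver
dsyevd named above takes the standard two-call form, a workspace
query followed by the actual diagonalization. A sketch, for an
n-by-n symmetric matrix a with eigenvalues returned in w:

  ! Diagonalize a symmetric matrix with LAPACK's dsyevd,
  ! doing the standard workspace query first.
  subroutine diag_dsyevd(n, a, w)
     implicit none
     integer,          intent(in)    :: n
     double precision, intent(inout) :: a(n,n)  ! matrix in, eigenvectors out
     double precision, intent(out)   :: w(n)    ! eigenvalues
     double precision, allocatable   :: work(:)
     integer,          allocatable   :: iwork(:)
     double precision :: qwork(1)
     integer          :: qiwork(1), lwork, liwork, info

     ! Workspace query: lwork = liwork = -1 returns optimal sizes.
     call dsyevd('V', 'U', n, a, n, w, qwork, -1, qiwork, -1, info)
     lwork  = int(qwork(1))
     liwork = qiwork(1)
     allocate(work(lwork), iwork(liwork))

     ! Actual diagonalization.
     call dsyevd('V', 'U', n, a, n, w, work, lwork, iwork, liwork, info)
     if (info /= 0) print *, 'dsyevd returned info =', info
  end subroutine diag_dsyevd

This is the kind of call a threaded LAPACK might speed up for
large QM regions.
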
> >
> > Although, as usual, we see that the marketing department
> > does not live in the real world:
> >
> > "ACML's aggressively tuned OpenMP versions
> mean
> > that you don't have to worry
> > about managing sophisticated threading models or
> complex
> > debugging. Whether
> > you are using dynamic or static linking, Windows,
> Linux or
> > Solaris 32- or
> > 64-bit, multi threading just works. "
> >
> > Great.... for single-CPU non-MPI code. If you are using
> > MPI this sort of threading is just a complete pain...
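
In practice, I gather, one just pins the library to one thread
per MPI task, e.g. by exporting OMP_NUM_THREADS=1 before mpirun
or, if the library respects the usual OpenMP controls, from the
code itself. A guess at how that would look:

  ! Force a single OpenMP thread per process so a threaded math
  ! library does not oversubscribe cores already used by MPI.
  subroutine pin_to_one_thread()
     use omp_lib, only: omp_set_num_threads
     implicit none
     call omp_set_num_threads(1)
  end subroutine pin_to_one_thread
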
> >
> > Just my 3c.... (my prices are going up due to the cost
> > of oil...)
> >
> > All the best
> > Ross
> >
> >
> > /\
> > \/
> > |\oss Walker
> >
> > | Assistant Research Professor |
> > | San Diego Supercomputer Center |
> > | Tel: +1 858 822 0854 | EMail:- ross.rosswalker.co.uk |
> > | http://www.rosswalker.co.uk | PGP Key available on request |
> >
> > Note: Electronic Mail is not secure, has no guarantee of
> > delivery, may not be read every day, and should not be
> > used for urgent or sensitive issues.
> >
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" (in the *body* of the email)
to majordomo.scripps.edu