Re: AMBER: amber 8 benchmarks dec alpha

From: Sergio E. Wong <swon9.itsa.ucsf.edu>
Date: Mon, 24 Jan 2005 15:54:41 -0800 (PST)

Thanks. I'm actually looking to run replica exchange (using Sander and
GB/SA) at PSC. I currently get about 145 ps/day on a single processor for
a system of 3500 atoms. The question is: is this a reasonable number, or
is it possible to adjust the optimization to improve the performance?

The latter is really the important question.
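
For scale, here is the back-of-the-envelope arithmetic I am using (Python,
just for the numbers):

# Wall-clock cost per nanosecond of GB/SA MD at the single-processor rate
# above (~145 ps/day for a ~3500-atom system with igb=5).
rate_ps_per_day = 145.0
days_per_ns = 1000.0 / rate_ps_per_day   # ~6.9 days of wall clock per ns

# With one processor per replica the replicas run concurrently, so the
# wall-clock time is set by the length of each replica's trajectory,
# not by the number of replicas.
print(f"{days_per_ns:.1f} days per ns per replica")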

My mdin file looks like this:

 &cntrl
  imin = 0,
  nmropt = 0,
  irest=1,
  ntx=5,

  ntpr = 1024, ntwx = 1024,
  ntwr = 1024,

  ntf = 2, ntb = 0,
  cut = 15,
  rgbmax=12,
  igb =5,

  nstlim = 1024,
  nscm = 2000,
  t = 0.0, dt = 0.001,
  nrespa=8,

  temp0 = 450.0, tempi = 450.0,
  ntt = 3, dtemp = 0.0,
  gamma_ln = 2,

  ntc = 2, tol = 0.000001,

  ntr = 1,
  restraint_wt = 0.30,
  restraintmask = ':1-98@CA | :110-211@CA | :219-230@CA',

  repcrd = 0,
  numexchg = 10,

 /
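
For reference, the exchange schedule this input implies (the numbers are
taken straight from the &cntrl block above):

# Time between exchange attempts and total simulated time per sander run.
nstlim = 1024     # MD steps between exchange attempts
dt = 0.001        # timestep, ps
numexchg = 10     # exchange attempts per run

ps_per_exchange = nstlim * dt             # 1.024 ps between attempts
ps_per_run = ps_per_exchange * numexchg   # 10.24 ps simulated per run
print(ps_per_exchange, ps_per_run)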

Thanks

-Sergio


On Mon, 24 Jan 2005, Robert Duke wrote:

> Sergio -
> PMEMD benchmarks are out on the amber web site, amber.scripps.edu. If you
> follow the pmemd link and look at the old release notes for pmemd 3.0 and
> 3.1, there are fairly complete benchmarks for pmemd 3.0/3.1, which are
> maybe a little slower than pmemd 8. The only numbers I published for pmemd
> 8 are on the benchmarks page; I have copied them below. These two sources
> give you some starting points; if you are not interested in using pmemd,
> you should probably not run sander at really high processor counts, as it
> is not very efficient. All my numbers were produced on PSC machines.
> Regards - Bob Duke
>
>
> BENCHMARKING RESULTS FOR PITTSBURGH SUPERCOMPUTER CENTER ALPHASERVER,
> LEMIEUX
>
> *******************************************************************************
> LEMIEUX PERFORMANCE, Compaq 1 GHz ES45 alphaserver, Quadrics interconnect
> *******************************************************************************
>
> With the Quadrics interconnect, it is possible to use one or two
> interconnect "rails", with one rail being the default. Using two rails may
> improve performance by on the order of 10-20%, but at the time of
> benchmarking there appeared to be system problems associated with using two
> rails. Thus, at present we only present data for one rail, and only
> recommend the use of one rail. PMEMD was built with the default
> optimization (no DIRFRC_* option specified), which produced the best
> results over the range tested.
>
>
> 90906 Atoms, Constant Pressure Molecular Dynamics (Factor IX)
>
>   #procs          PMEMD       Sander 8
>                   psec/day    psec/day
>
>   64  (4x16)        1745         500
>   128 (4x32)        2615        1172
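>
> As a rough read on the scaling, taking the 64-processor runs as the
> baseline (a couple of lines of Python, just the arithmetic from the table
> above):
>
> # Relative parallel efficiency going from 64 to 128 processors, computed
> # from the throughput numbers in the table; an ideal doubling of
> # processors would double the psec/day.
> pmemd_64, pmemd_128 = 1745.0, 2615.0      # PMEMD, psec/day
> sander_64, sander_128 = 500.0, 1172.0     # Sander 8, psec/day
> pmemd_eff = (pmemd_128 / pmemd_64) / 2.0      # ~0.75
> sander_eff = (sander_128 / sander_64) / 2.0   # ~1.17 (superlinear relative
>                                               # to its own 64-proc run)
> print(pmemd_eff, sander_eff)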
>
>
> Benchmarks were run on NCSA's Itanium 1 Linux cluster, which has a Myrinet
> interconnect. The Itanium 1 is significantly slower than the Itanium 2, but
> the benchmarks show good PMEMD scaling on the Myrinet interconnect out to
> about 32 processors, which is fairly typical. The Itanium chips have a huge
> L3 cache, so PMEMD is best optimized using DIRFRC_BIGCACHE_OPT (the
> default).
> ----- Original Message -----
> From: "Sergio E. Wong" <swon9.itsa.ucsf.edu>
> To: <amber.scripps.edu>
> Sent: Monday, January 24, 2005 5:08 PM
> Subject: AMBER: amber 8 benchmarks dec alpha
>
>
> > Dear sirs,
> >
> > I was wondering if anyone had any Amber 8 benchmarks for DEC Alphas
> > (compiled with the Compaq f90 compiler). Thanks.
> >
> > -Sergio
> >
>
>
-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu