Re: AMBER: Implicit precision in sander vs architecture

From: Teletchéa Stéphane <steletch.biomedicale.univ-paris5.fr>
Date: 20 Oct 2003 10:45:04 +0200

On Fri 17/10/2003 at 19:05, David E. Konerding wrote:
> Yong Duan wrote:
>
[SNIP]
>
> Yong is confusing the meaning of 64-bit as it is conventionally used in
> the marketing literature with the actual
> technical details of how integer data is implemented on digital hardware.
>
> 64-bit is normally used these days to describe the size of the address
> register and the amount of directly addressable memory.
> It's not as relevant when applied to the size of the arithmetical,
> integer, or floating point registers. For example, my
> 32-bit Intel PC can only address memory using a 32-bit range, but it
> natively implements a 64-bit integer and an 80-bit floating
> point type. No extra "work" or "passes" are being done when I add two
> 64-bit integers or multiply two 80-bit FP ones (on my hardware).
> Dave

This is actually exactly what I mean by precision:

From what I know, there are several different 'internal' precisions on x86:
32 bits for integers,
80 bits for floating point,
128 bits for SSE, ...

So that's why I'm surprised by the different behaviour on different
machines.
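
To make this concrete, here is a small C sketch (my own illustration, not
AMBER code) showing that the same naive accumulation ends up with slightly
different values depending on whether the intermediates are kept in 32, 64
or 80 bits:

/* Illustrative sketch (not AMBER code): the same naive sum evaluated in
 * 32-bit float, 64-bit double and, on x86, 80-bit long double (x87)
 * gives slightly different results.  Build e.g. with: gcc -O0 sum.c */
#include <stdio.h>

int main(void)
{
    float       sf = 0.0f;
    double      sd = 0.0;
    long double sl = 0.0L;   /* maps to the 80-bit x87 format on most x86 compilers */
    int i;

    /* 0.1 is not exactly representable in binary, so the rounding error
     * accumulated over the loop depends on the working precision. */
    for (i = 0; i < 1000000; i++) {
        sf += 0.1f;
        sd += 0.1;
        sl += 0.1L;
    }

    printf("float       : %.15f\n", (double) sf);
    printf("double      : %.15f\n", sd);
    printf("long double : %.15Lf\n", sl);
    return 0;
}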

I understand that:
- different architectures have different implicit precision, but which one
is used on each architecture? Does it influence the quality of the
dynamics? (see the toy sketch after this list)
- running locally or over a network introduces another variable, and
therefore another source of divergence; has this been quantified, and
which is best?
- 32-bit machines do not have only 32-bit precision (for instance 80 bits
for floating-point operations), but do 64-bit machines have 64-bit
precision for integers and floats (or a higher precision for floating
point)?
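
On the question of whether this influences the quality of the dynamics: the
integration of the equations of motion is chaotic, so a difference in the
last bit of a force or coordinate is enough to make two otherwise identical
runs drift apart exponentially. A toy C sketch of that amplification (my
own illustration; the logistic map and its parameters are arbitrary
stand-ins for an MD step):

/* Toy illustration (not an MD code): in a chaotic iteration, a
 * perturbation of about one ulp in the starting value grows until the
 * two "trajectories" have nothing in common any more. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.3;
    double b = 0.3 + 1e-16;   /* perturbation of roughly two ulps */
    const double r = 3.9;     /* chaotic regime of the logistic map */
    int i;

    for (i = 0; i <= 60; i++) {
        if (i % 10 == 0)
            printf("step %2d: a = %.15f  b = %.15f  |a-b| = %g\n",
                   i, a, b, fabs(a - b));
        a = r * a * (1.0 - a);
        b = r * b * (1.0 - b);
    }
    return 0;
}

If something like this is what happens between 32-bit and 64-bit builds,
the individual trajectories would differ even though both are integrating
the same model.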

Last: AMBER was developed on 64-bit architectures and then ported to
32-bit, if I understand correctly. Maybe AMBER has been 'tuned' to
reproduce dynamics simulations correctly, taking the roundoff error into
account explicitly in the code. I saw papers for the T3E, for example, and
many contributions from SGI, ... Has something similar been done for
32-bit processors?

Could this explain the difference between the architectures?

Again, let me restate my point: I have the 'impression' that long
dynamics runs are more 'stable' (less conformational space is explored)
on 64-bit machines than on 32-bit machines. Put the other way around,
32-bit machines seem to explore more conformational space ...

I hope I'm clearer now; there are two options:
1 - implicit precision differences induce errors that lead to divergence,
but this has been better controlled on 64-bit machines (intrinsic
robustness in precision);
2 - AMBER has been developed mostly on 64-bit machines, so the results
have been examined more carefully than on 32-bit machines.

Thanks a lot for your answers.

Stéphane Teletchéa

-- 


-----------------------------------------------------------------------
The AMBER Mail Reflector
To post, send mail to amber.scripps.edu
To unsubscribe, send "unsubscribe amber" to majordomo.scripps.edu


Received on Mon Oct 20 2003 - 09:53:00 PDT