RE: AMBER: Implicit precision in sander vs architecture

From: Yong Duan <>
Date: Fri, 17 Oct 2003 12:30:31 -0400

Well, not really.

"32-bit system" refers to the hardware's word size. One can combine shorter words in software to construct a 64-bit (or longer) word; that is how it was done on some of those old calculators. In the good old days, when Intel's early 8088 chips (the processor in the IBM PC and IBM PC compatibles) were state of the art, one still needed to use 64-bit words for accuracy.

The consequence is speed. To do a 64-bit floating-point addition (addition is essentially the only logic hard-wired into a computer; multiplication and even subtraction are just variations of addition, so in a sense computers really only know how to add), a 32-bit machine needs at least two operations, depending on the length of its registers.

I heard this talk. The speaker claimed that 64-bit machines made it possible to construct the first human gene map, because a 32-bit machine would only allow one to represent data up to about 4 billion. Gosh, what if our genes were one bit too long to be stored as a 64-bit integer? We would be doomed! I think we all breathed a big sigh of relief that our genes are not that long. We human beings are smart, but not too smart :).

So, the short answer to your question is: no, do not worry. Unless you explicitly specify REAL*4 (single precision), you will always get REAL*8, or "double precision", for free, even if your machine is a 32-bit machine. This is the beauty of AMBER --- it doubles the capacity of your machine. So stay with AMBER!! :)


> do diverge 'naturally' because of roundoff, but does it happen more
> rapidly on 32-bit systems, as I'm suspecting, from a lower
> precision?
> Any hint would be very helpful.
> Stéphane Teletchéa
> Writing his PhD ...
> --

The AMBER Mail Reflector
To post, send mail to
To unsubscribe, send "unsubscribe amber" to
Received on Fri Oct 17 2003 - 17:53:02 PDT