ajb at spamcop.net
Thu Aug 9 21:05:00 EDT 2007

G'day.

Quoting Andrew Coppin <andrewcoppin at btinternet.com>:

> First of all, currently I'm just using Double to represent coordinates.
> Can anybody tell me what the smallest value you can represent with one
> is? (Not including denormals.)

Remember that floating point numbers are stored in three parts.  There's
a sign, an exponent and a mantissa.  Assuming that floatRadix n == 2,
normalised numbers can be represented as:

s * m * 2^e

where s is 1 or -1, m is in the range [1/2,1) and stored using
floatDigits n bits of precision, and e is in the range floatRange n.
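These components can be inspected directly from GHCi or a small program; the values shown below assume an IEEE 754 double-precision `Double`, which is what GHC uses on common platforms:

```haskell
-- Inspect the floating-point representation of Double.
-- floatRadix/floatDigits/floatRange ignore their argument, so
-- passing undefined is the usual idiom.
main :: IO ()
main = do
  print (floatRadix  (undefined :: Double))  -- 2
  print (floatDigits (undefined :: Double))  -- 53 bits of precision
  print (floatRange  (undefined :: Double))  -- (-1021,1024)
  -- decodeFloat x gives (m, e) with x == fromInteger m * 2^e:
  print (decodeFloat (0.5 :: Double))        -- (4503599627370496,-53), i.e. 2^52 * 2^-53
```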

The reason why this is significant is that the precision of a floating
point number is relative, not absolute.

The relative error is measured by a number called "epsilon":

-- The recursion is just so that we don't need -fglasgow-exts; feel
-- free to use lexically-scoped type variables instead if you like.
epsilon :: (RealFloat a) => a
epsilon = let eps = encodeFloat 1 (1 - floatDigits eps) in eps

epsilon is the smallest number such that 1.0 + epsilon /= 1.0.  It
measures the minimum possible relative separation between two adjacent
normalised floating-point numbers.
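You can check this property directly (again assuming an IEEE 754 Double, where epsilon works out to 2^-52):

```haskell
-- Same definition as above, repeated here so the example is self-contained.
epsilon :: RealFloat a => a
epsilon = let eps = encodeFloat 1 (1 - floatDigits eps) in eps

main :: IO ()
main = do
  print (epsilon :: Double)                      -- 2.220446049250313e-16, i.e. 2^-52
  print (1.0 + epsilon     /= (1.0 :: Double))   -- True
  print (1.0 + epsilon / 2 == (1.0 :: Double))   -- True: half an epsilon rounds away
```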

That is, if both a and b are normalised floating-point numbers (this
also means that they're not zero), and a /= b, then
abs (a-b) / abs a >= epsilon.

Similarly, the maximum possible relative separation is epsilon * 2.
(In general, epsilon * floatRadix n, but AFAIK no architecture that GHC
supports has a floatRadix other than 2, so this is a safe assumption.)

> (I've built a function that checks for cycles in the orbit - by looking
> for previous points that are sufficiently "near" to the current one. But
> I want to know how much precision a Double gives me so I can figure out
> how near is "near".)

While epsilon is the minimum relative separation of normalised
floating-point numbers, it is usually much smaller than the accumulated
error of any nontrivial sequence of operations.

For something as simple as a Mandelbrot iteration, it's usually okay to
simply use epsilon multiplied by some factor which represents the
accumulated error of a bunch of operations.  If you were doing Serious
Numeric Analysis(tm), you'd go through each step and work out the
possible rounding error, and come up with a tight bound.  But something
like this will probably do:

tolerance :: Double
tolerance = 100 * epsilon

near :: Double -> Double -> Bool
near a b
  | a == 0 || isDenormalized a = abs (a-b) < tolerance
  | otherwise                  = abs (a-b) < tolerance * abs a
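As a sketch of how this might be used for the cycle detection you describe: the `step`, `nearC` and `hasCycle` names below are illustrative, not anything from your code, and the naive O(n^2) scan over previous orbit points is just the simplest thing that works:

```haskell
import Data.Complex

epsilon :: Double
epsilon = let eps = encodeFloat 1 (1 - floatDigits eps) in eps

tolerance :: Double
tolerance = 100 * epsilon

near :: Double -> Double -> Bool
near a b
  | a == 0 || isDenormalized a = abs (a-b) < tolerance
  | otherwise                  = abs (a-b) < tolerance * abs a

-- Two complex points are "near" if both components are.
nearC :: Complex Double -> Complex Double -> Bool
nearC (x :+ y) (u :+ v) = near x u && near y v

-- One Mandelbrot iteration for parameter c.
step :: Complex Double -> Complex Double -> Complex Double
step c z = z * z + c

-- Does the orbit of c revisit (approximately) an earlier point
-- within n iterations?  Naive quadratic scan, for illustration only.
hasCycle :: Int -> Complex Double -> Bool
hasCycle n c = go [] (take n (iterate (step c) 0))
  where
    go _    []     = False
    go seen (z:zs) = any (nearC z) seen || go (z : seen) zs

main :: IO ()
main = do
  print (hasCycle 100 0)            -- True: the orbit of 0 is fixed at 0
  print (hasCycle 100 ((-1) :+ 0))  -- True: period-2 cycle 0, -1, 0, -1, ...
```

For a real renderer you would cap the list of remembered points (or use Brent's cycle-finding algorithm) rather than scanning the whole orbit each step.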

Cheers,
Andrew Bromage