# [Haskell-cafe] instance Enum Double considered not entirely great?

Richard O'Keefe ok at cs.otago.ac.nz
Fri Sep 23 01:02:07 CEST 2011

```
On 21/09/2011, at 11:42 PM, Jake McArthur wrote:
> With fixed point numbers, it makes sense to have an Enum instance.

What is the use case?

> Enumeration is reasonable because most applications for fixed point
> arithmetic do *not* want to pretend that they are real numbers;

But that does not mean you want to pretend they are integers,
and having an Enum instance is basically about pretending to be
integers.

> you
> almost always want to be aware of the current precision and whether
> you might overflow or need more precision.

There are at least two defensible understandings of what a fixed point
number means.  One is appropriate for finance, which is that the numbers
are exact rational numbers of the form m/b^n for integer m, n and
integer b > 1.  (For example, when I was born, it made sense to think
of money as m/960, where m is the number of farthings.)  Addition,
subtraction, multiplication, integer quotient, and remainder are
exact, and every other division has to be given an explicit rounding method.
It is difficult to fit this understanding into Haskell (although, given
that type-level arithmetic _is_ possible, it is not _impossible_).
The real problem is fitting it into the class system, because
(+) :: Fixed m -> Fixed n -> Fixed (Max m n)
(*) :: Fixed m -> Fixed n -> Fixed (Plus m n), while
compare :: Fixed m -> Fixed n -> Ordering
makes sense for any (types representing naturals) m, n.
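For what it's worth, the scale-indexed signatures are expressible in GHC
with the DataKinds extension.  A minimal sketch, assuming decimal scales
and covering only (*) and cross-scale comparison (the names Fixed, mulF,
and cmpF are my own illustration, not any library's):

{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables, TypeOperators #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal, type (+))

-- Fixed n is an exact m/10^n: an Integer count of units of 10^(-n).
newtype Fixed (n :: Nat) = Fixed Integer deriving (Eq, Show)

-- Multiplication adds the scales, as in the (*) signature above.
mulF :: Fixed m -> Fixed n -> Fixed (m + n)
mulF (Fixed a) (Fixed b) = Fixed (a * b)

-- Comparison works across any two scales, which is exactly what the
-- class system resists: Ord wants both arguments at a single type.
cmpF :: forall m n. (KnownNat m, KnownNat n)
     => Fixed m -> Fixed n -> Ordering
cmpF (Fixed a) (Fixed b) =
  compare (a * 10 ^ natVal (Proxy :: Proxy n))
          (b * 10 ^ natVal (Proxy :: Proxy m))

So mulF (Fixed 125 :: Fixed 2) (Fixed 3 :: Fixed 1) is Fixed 375 :: Fixed 3,
i.e. 1.25 * 0.3 = 0.375 exactly, with no rounding decision needed until
you divide.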

The other understanding is appropriate for engineering (think of ADCs and
DACs) and is that the numbers are approximate.  That seems to be what you
have in mind.

Across the spectrum of programming languages, other understandings also exist:
I'm aware of one programming language where "fixed" point numbers are
limited to 31 digits of precision and morph into a weird sort of floating
point rather than go over the precision limit, and another where fixed point
numbers are really arbitrary precision rationals that *print* to limited
precision (OUCH).

> This situation is no
> different from Word or Int. toEnum and fromEnum are also inverses. No
> expectations are violated here unless you have already gotten used to
> the broken Float, Double, and Rational instances.

Let's face it, Enum badly needs some revision.
We have
toEnum :: Int -> a
and yet we have
instance Enum Integer

How is _that_ supposed to work?  Or instance Enum Int64 on a system
where Int is 32 bits?  And yet '..' syntax makes perfect sense for
any size of integer.
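It makes sense because enumFromTo and friends are class methods in their
own right: instance Enum Integer defines them directly on Integers, so a
range never round-trips through toEnum/fromEnum and Int.  A quick sanity
check with bounds far beyond any Int:

-- '..' on Integers past 2^64: enumFromTo never detours through Int.
bigRange :: [Integer]
bigRange = [2 ^ 70 .. 2 ^ 70 + 2]
-- [1180591620717411303424,1180591620717411303425,1180591620717411303426]

It is only toEnum and fromEnum themselves that cannot be faithful at
this size.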

I do think that '..' syntax for Float and Double could be useful,
but the actual definition is such that, well, words fail me.
[1.0..3.5] => [1.0,2.0,3.0,4.0] ????  Why did anyone ever think
_that_ was a good idea?  I would love to see a law that

(Ord t, Enum t) =>
(∀a, b :: t) x ∈ [a..b] ⇒ a ≤ x && x ≤ b   -- not valid (sigh)

This is not the same as a law that

(Ord t, Enum t) =>
(∀a, b, x :: t) a ≤ x && x ≤ b ⇒ x ∈ [a..b]  -- not valid (ho hum)

As things currently stand, neither of these laws is valid.
It is even easy to find a value for b :: Double such that
[b..b] is empty.  Not good.
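Both are quick to check in GHCi.  For the empty range, NaN is one such b
(whether it is the value meant above is my guess): every ordering
comparison involving NaN is False, so the takeWhile inside
numericEnumFromTo stops before producing anything.

-- [1.0 .. 3.5] keeps each step x while x <= 3.5 + 1/2, hence the 4.0.
steps :: [Double]
steps = [1.0 .. 3.5]        -- [1.0,2.0,3.0,4.0]

-- NaN gives an empty range: nan <= nan + 1/2 is already False.
nan :: Double
nan = 0 / 0

emptyRange :: [Double]
emptyRange = [nan .. nan]   -- []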

```