[Haskell-cafe] instance Enum Double considered not entirely great?

Casey McCann cam at uptoisomorphism.net
Tue Sep 20 23:28:41 CEST 2011


On Tue, Sep 20, 2011 at 3:48 PM, Chris Smith <cdsmith at gmail.com> wrote:
> On Tue, 2011-09-20 at 15:28 -0400, Casey McCann wrote:
>> I actually think the brokenness of Ord for floating point values is
>> worse in many ways, as demonstrated by the ability to insert a value
>> into a Data.Set.Set and have other values "disappear" from the set as
>> a result.
>
> Definitely Ord is worse.  I'd very much like to see the Ord instance for
> Float and Double abandon the IEEE semantics and just put "NaN" somewhere
> in there -- doesn't matter where -- and provide new functions for the
> IEEE semantics.

It should be first, to make floating point values consistent with
applying Maybe to a numeric type: the Ord instance for Maybe sorts
Nothing before every Just, and NaN plays the role of Nothing.
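
For concreteness, here's what the inconsistency looks like in GHC (a
small sketch; nan is just a local name for 0/0):

```haskell
main :: IO ()
main = do
  let nan = 0 / 0 :: Double
  -- IEEE semantics: NaN is unordered, so all of these are False:
  print (nan == nan, nan < 1, nan > 1)  -- (False,False,False)
  -- ...but compare, which Data.Set uses internally, must answer
  -- something, and GHC's Double instance answers GT whenever NaN
  -- is involved, on either side:
  print (compare nan 1)  -- GT
  print (compare 1 nan)  -- GT
```

Since compare claims NaN is greater than everything, including itself,
a NaN inside a Data.Set can acquire a left subtree after a rebalancing
rotation, and lookups for those elements then fail: the "disappearing"
behaviour described above.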

Personally, I contend that the most correct solution is to distinguish
between meaningful ordering relations and ones used for algorithmic
convenience. As another example, the type (Integer, Integer), regarded
as Cartesian coordinates, has no meaningful ordering at all but does
have an obvious arbitrary total order (i.e., the current Ord
instance). For purposes like implementing Data.Set.Set, we don't care
at all whether the ordering used makes any particular kind of sense;
we care only that it is consistent and total. For
semantically-meaningful comparisons, we want the semantically-correct
answer and no other.
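
For instance, the derived lexicographic instance for pairs is total and
consistent, which is all Data.Set needs, but carries no geometric
meaning whatsoever:

```haskell
import Data.List (sort)

main :: IO ()
main = do
  -- Lexicographic order: first components decide, ties fall to the
  -- second. Nothing spatial about it.
  print (compare (0, 9) (1, 0 :: Integer))          -- LT
  print (sort [(1, 0), (0, 9), (0, 1 :: Integer)])  -- [(0,1),(0,9),(1,0)]
```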

For types with no meaningful order at all, or with a meaningful total
order that we can use, there is no ambiguity, but floating point
values have both a semantic partial order and an obvious arbitrary
total order, which disagree about NaN. In the true spirit of
compromise, the current Ord instance implements neither, ensuring that
things work incorrectly all the time rather than half the time.

That said, in lieu of introducing multiple new type classes, note that
the Haskell Report specifically describes Ord as representing a total
order[0], so the current instances for floating point values seem
completely indefensible. Since removing the instances entirely is
probably not a popular idea, the least broken solution would be to
define NaN as equal to itself and less than everything else, thus
accepting the reality of Ord as the "meaningless arbitrary total
order" type class I suggested above and leaving Haskell bereft of any
generic semantic comparisons whatsoever. Ah, pragmatism.
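
As a sketch of what that would look like, here is the idea as a newtype
wrapper (TotalDouble is a made-up name, purely illustrative, not an
existing library type):

```haskell
-- Hypothetical wrapper equipping Double with a lawful total order in
-- which NaN equals itself and sorts below everything else, as
-- proposed above.
newtype TotalDouble = TotalDouble Double
  deriving Show

instance Eq TotalDouble where
  TotalDouble x == TotalDouble y = (isNaN x && isNaN y) || x == y

instance Ord TotalDouble where
  compare (TotalDouble x) (TotalDouble y)
    | isNaN x && isNaN y = EQ
    | isNaN x            = LT
    | isNaN y            = GT
    | otherwise          = compare x y

main :: IO ()
main = do
  print (TotalDouble (0 / 0) == TotalDouble (0 / 0))       -- True
  print (compare (TotalDouble (0 / 0)) (TotalDouble (-1)))  -- LT
```

Modulo the usual 0.0/-0.0 equality wrinkle, this satisfies the
reflexivity, antisymmetry, transitivity, and totality that the Report
asks of Ord.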

> As for Enum, if someone were to want a type class to represent an
> enumeration of all the values of a type, then such a thing is reasonable
> to want.  Maybe you can even reasonably wish it were called Enum.  But
> it would be the *wrong* thing to use as a desugaring for list range
> notation.  List ranges are very unlikely to be useful or even meaningful
> for most such enumerations (what is [ Red, Green .. LightPurple]?); and
> conversely, as we've seen in this thread, list ranges *are* useful in
> situations where they are not a suitable way of enumerating all values
> of a type.

It's not clear that Enum, as it stands, actually means anything coherent at all.

Consider again my example of integer (x, y) coordinates. Naively, what
would [(0, 0) .. (3, 3)] appear to mean? Why, obviously it's the
sixteen points whose coordinates range from 0 to 3, except it isn't,
because Enum isn't defined on pairs and wouldn't work that way anyhow.
Could we describe this range with an iteration function and a
comparison? No, because the Ord instance here is intrinsically
nonsensical. And yet, the intent is simple and useful, so why don't we
have a standard type class for it?[1] This would seem to be the
obvious, intuitive interpretation for range syntax with starting and
ending values.
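
The class footnote [1] winks at is, presumably, Data.Ix, whose
instance for pairs gives exactly this rectangular reading:

```haskell
import Data.Ix (inRange, range)

main :: IO ()
main = do
  let bounds = ((0, 0), (3, 3)) :: ((Int, Int), (Int, Int))
  print (length (range bounds))  -- 16: both coordinates run 0..3
  print (take 5 (range bounds))  -- [(0,0),(0,1),(0,2),(0,3),(1,0)]
  print (inRange bounds (2, 1))  -- True
```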

To the extent that Enum can be given a coherent interpretation (which
requires ignoring many existing instances), it seems to describe types
with unary successor/predecessor operations. As such, instances for
Rational, floating point values, and the like are patently nonsensical
and really should be removed. An obvious generalization would be to
define Enum based on an "increment" operation of some sort, in which
case those instances could be defined reasonably with a default
increment of 1, which is merely dubious, rather than ridiculous. The
increment interpretation would be very natural for infinite lists
defined with open ranges and an optional step size.
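
A rough sketch of that generalization (the class and method names here
are hypothetical, not part of base or any library):

```haskell
-- Hypothetical: enumeration driven by an explicit increment rather
-- than a unary successor, with a default step of 1 for numeric
-- instances. Open ranges with a step fall out naturally.
class StepEnum a where
  defaultStep :: a
  stepFrom    :: a -> a -> [a]  -- start and step; an infinite list

instance StepEnum Integer where
  defaultStep  = 1
  stepFrom x d = x : stepFrom (x + d) d

instance StepEnum Double where
  defaultStep  = 1
  stepFrom x d = x : stepFrom (x + d) d

main :: IO ()
main = do
  print (take 4 (stepFrom 0 defaultStep :: [Integer]))  -- [0,1,2,3]
  print (take 4 (stepFrom 0.5 0.25 :: [Double]))  -- [0.5,0.75,1.0,1.25]
```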

Absent from the above is any interpretation of expressions like
[0,2..11], which are ill-defined anyway, as evidenced by that
expression producing lists of different lengths depending on what type
is chosen for the numeric literals. Myself, I'm content to declare
that use of range syntax a mistake in general, and to insist that an
unbounded range and something like takeWhile be used instead.
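
The differing lengths are easy to check; the Double version overshoots
because the Report's rule for fractional types keeps values up to the
limit plus half the step:

```haskell
main :: IO ()
main = do
  print (length [0, 2 .. 11 :: Int])     -- 6: [0,2,4,6,8,10]
  print (length [0, 2 .. 11 :: Double])  -- 7: runs all the way to 12.0
  -- The unambiguous spelling suggested above:
  print (takeWhile (<= 11) ([0, 2 ..] :: [Double]))
  -- [0.0,2.0,4.0,6.0,8.0,10.0]
```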

- C.

[0]: See here: http://www.haskell.org/onlinereport/haskell2010/haskellch6.html#x13-1290006.3.2
[1]: Spoiler: We do.


