[Haskell-cafe] instance Enum Double considered not entirely great?

Daniel Fischer daniel.is.fischer at googlemail.com
Wed Sep 21 22:06:34 CEST 2011


On Wednesday 21 September 2011, 20:39:09, Casey McCann wrote:
> On Wed, Sep 21, 2011 at 12:09 AM, Daniel Fischer
> 
> <daniel.is.fischer at googlemail.com> wrote:
> > Yes. Which can be inconvenient if you are interested in whether you
> > got a -0.0: in that case, you can't simply use (== -0.0).
> > Okay, "problematic" is too strong a word, but it's another case that
> > may require special treatment.
> 
> Hmm. I was going to suggest that it's not a major concern so long as
> the distinction can't be observed without using functions specific to
> floating point values, since that preserves consistent behavior for
> polymorphic functions, but... that's not true, because the sign is
> preserved when dividing by zero! So we currently have the following
> behavior:
> 
>     0   == (-0)     = True
>     1/0 == 1/(-0)   = False
>     signum (-0)     = 0.0
>     signum (1/0)    = 1.0
>     signum (1/(-0)) = -1.0
> 
> All of which is, I believe, completely correct according to IEEE
> semantics,

Yup.
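
For the record, detecting a negative zero despite (==) equating the two 
is exactly what isNegativeZero from RealFloat is for. A minimal check 
(the main wrapper is mine, just to make it runnable):

    -- Verifies the behaviour quoted above; isNegativeZero is the
    -- reliable test, since (-0.0) == 0.0 is True.
    main :: IO ()
    main = do
      print (0 == (-0 :: Double))             -- True
      print (1/0 == 1/(-0 :: Double))         -- False: the infinities differ
      print (isNegativeZero (-0.0 :: Double)) -- True
      print (isNegativeZero (0.0 :: Double))  -- False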

> but seems to cause very awkward problems for any sensible
> semantics of Haskell's type classes.

Well, that's a risk you take whenever you have an Eq instance that regards 
non-identical values as equal: some function may still distinguish between 
them. Cf. e.g. showTree in Data.Set/Map for a non-floating-point example, 
sketched below.
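
To make that concrete, a small sketch; it assumes a containers version 
that exports showTree from Data.Set (newer versions keep it in 
Data.Set.Internal):

    import qualified Data.Set as Set

    -- Two sets built in different insertion orders: (==) says they
    -- are equal, but showTree can reveal different internal shapes.
    a, b :: Set.Set Int
    a = Set.fromList [1 .. 6]
    b = foldr Set.insert Set.empty [6, 5 .. 1]

    main :: IO ()
    main = do
      print (a == b)              -- True: Eq compares elements only
      putStrLn (Set.showTree a)   -- the trees themselves may differ
      putStrLn (Set.showTree b)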

> 
> ...sigh.
> 
> >> which is correct and shouldn't break any expected behavior.
> >> I don't think it's required that distinguishable values be unequal,
> > 
> > But desirable, IMO.
> 
> I'm ambivalent. I can see it making sense for truly equivalent values,
> where there's a reasonable expectation that anything using them should
> give the same answer, or when there's a clearly-defined normal form
> that values may be reduced to.

Yes, it's not an absolute, but if your Eq instance declares distinguishable 
values equal, you'd better have a very good reason for it.
The reason for Data.Set/Map is good enough, I think. -0.0 == 0.0 is 
borderline. If Double/Float get Eq and Ord instances avoiding the NaN 
poison, I'd prefer to distinguish -0.0 from 0.0 too, leaving the 
identification to the IEEE comparisons.
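
Something along these lines, say (totalCompare is just my name for it, 
not a proposal for the actual instances):

    -- A sketch of a non-poisoned total comparison for Double:
    -- NaNs compare equal to each other and greater than everything
    -- else, and -0.0 sorts strictly below 0.0.
    totalCompare :: Double -> Double -> Ordering
    totalCompare x y
      | isNaN x, isNaN y = EQ
      | isNaN x          = GT
      | isNaN y          = LT
      | isNegativeZero x, y == 0, not (isNegativeZero y) = LT
      | isNegativeZero y, x == 0, not (isNegativeZero x) = GT
      | otherwise        = compare x y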

> 
> But as demonstrated above, this isn't the case with signed zeros if
> Num is available as well as Eq.
> 
> >> I still don't see why it makes sense to add separate IEEE comparisons
> > 
> > Pure and simple: speed.
> > That is what the machine instructions, and hence the primops, deliver.
> 
> Oh, I assume the IEEE operations would be available no matter what,
> possibly as separate operations monomorphic to Float and Double, that

That too, but I want to keep the polymorphic variants available: it's 
easier to change a few type signatures near the top than to hunt through 
the entire project replacing eqDouble with eqFloat etc. and recompile 
everything.
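
I.e., keep something like this hypothetical class around (names mine), so 
that only the top-level signatures need touching:

    -- A sketch of keeping the IEEE comparisons polymorphic; today's
    -- (==) and (<) on Float/Double are already the IEEE ones, so the
    -- instances are trivial under the current Prelude.
    class IEEECompare a where
      ieeeEq :: a -> a -> Bool
      ieeeLt :: a -> a -> Bool

    instance IEEECompare Double where
      ieeeEq = (==)
      ieeeLt = (<)

    instance IEEECompare Float where
      ieeeEq = (==)
      ieeeLt = (<)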

> they'd be used to define the partial ordering instance, and could be
> imported directly from some appropriate module.
> 
> But as it turns out the partial ordering isn't valid anyway, so I
> retract this whole line of argument.
> 
> >> Ah, yes, wherein someone suggested that comparing to NaN should be a
> >> runtime error rather than give incorrect results. A strictly more
> >> correct approach, but not one I find satisfactory...
> > 
> > Umm, 'more correct' only in some sense. Definitely unsatisfactory.
> 
> More correct in the very narrow sense of producing fewer incorrect
> answers, according to Haskell semantics. :] That it would produce
> fewer answers in general and a great deal more bottoms is another
> matter. Certainly not useful, and in fact actively counterproductive
> given that the whole purpose of silent NaNs is to allow computations
> to proceed without handling exceptions at every step along the way.

Quite.
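
That workflow in miniature (safeMean is a made-up name):

    -- Let a NaN flow silently through the whole computation and test
    -- once at the end, instead of handling an exception at every step.
    safeMean :: [Double] -> Maybe Double
    safeMean xs
      | isNaN m   = Nothing      -- e.g. for the empty list: 0/0
      | otherwise = Just m
      where m = sum xs / fromIntegral (length xs)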

> 
> I'm becoming increasingly convinced that the only strictly coherent
> approach in the overall scheme of things would be to banish floating
> point values from most of the standard libraries except where they can

Hmm. I don't particularly like that idea. Correctly handling floating point 
numbers isn't trivial, but so what? They're extremely useful; they deserve 
their place. Put a bumper over the sharpest edges, write "Enter at your own 
risk" on the garage door, and that's enough.

> be given correct implementations according to Haskell semantics, and
> instead provide a module (not re-exported by the Prelude) that gives
> operations using precise IEEE semantics and access to all the expected
> primops and such. As you said above, the importance of floating point
> values is for speed, and the IEEE semantics are designed to support
> that. So I'm happy to consider floats as purely a performance
> optimization that should only be used when number crunching is
> actually a bottleneck.

> Let Rational be the default fractional type
> instead and save everyone a bunch of headaches.

If only things were so easy.
You can't satisfactorily define functions like sqrt, exp, log, sin, cos ...
for Rational, so for a large class of tasks you need floating point numbers 
(yes, one could also use arbitrary precision numbers of some kind), 
regardless of performance considerations.
And unfortunately even plain arithmetic quickly leads to huge numerators 
and denominators with Rational.
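
For instance, a few exact Newton steps towards sqrt 2 converge quickly, 
but the Rational representation roughly doubles in size at each step:

    import Data.Ratio (denominator)

    -- One Newton step for sqrt 2; with Rational this is exact, so
    -- nothing ever rounds the fractions back down to a sane size.
    step :: Rational -> Rational
    step x = (x + 2 / x) / 2

    main :: IO ()
    main = mapM_ (print . length . show . denominator)
                 (take 8 (iterate step 1))
      -- prints the denominators' digit counts: 1, 1, 2, 3, 6, 12, ...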

When dealing with fractional numbers, you can only choose which headache 
you prefer. Each is the lesser evil for some tasks.



