[Haskell-cafe] Re: A question about "monad laws"

Roman Leshchinskiy rl at cse.unsw.edu.au
Thu Feb 14 09:15:41 EST 2008

jerzy.karczmarczuk at info.unicaen.fr wrote:
> Jed Brown comments on the answer of Roman Leshchinskiy, who answered
> my question concerning the replacement of floating-point numbers:
>>> > First, when I see the advice "use something else", I always ask
>>> > "what", and I get an answer very, very rarely... Well? What do you
>>> > propose?
>>> For Haskell, Rational seems like a good choice. The fact that the 
>>> standard requires defaulting to Double is quite unfortunate and 
>>> inconsistent, IMO; the default should be Rational. Float and Double 
>>> shouldn't even be in scope without an explicit import. There really 
>>> is no good reason to use them unless you are
>>> writing a binding to existing libraries or really need the performance.
> Here Jed Brown says:
>> Until you need to evaluate a transcendental function. 
> ...
> It would be a killer, wouldn't it?...

Yes, it would. I was talking about average programs, though, which (I 
suspect) don't do numerics and really only need fractions. If you do 
numerics, by all means use a data type that supports numerics well. But 
even here, and especially in a functional language, IEEE floating point 
usually isn't the best choice unless you really need the performance.

You seem to be after a type that can be used to represent non-integer 
numbers in nearly all problem domains. I don't think such a type 
exists. So, as usual, one has to choose a data structure suited to the 
problem at hand. IMO, standard floating point is not a good choice for 
most problem domains, so Float and Double shouldn't be used by default. 
Whether Rational is a good default is certainly debatable.
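To make the Double/Rational contrast concrete, here is a small sketch 
(only Data.Ratio from base is assumed):

```haskell
import Data.Ratio ((%))

-- With Double, even simple decimal arithmetic is inexact:
doubleSum :: Double
doubleSum = 0.1 + 0.2          -- 0.30000000000000004, not 0.3

-- With Rational, the same computation is exact:
rationalSum :: Rational
rationalSum = 1 % 10 + 2 % 10  -- exactly 3 % 10

main :: IO ()
main = do
  print (doubleSum == 0.3)        -- False
  print (rationalSum == 3 % 10)   -- True
```

For an "average" program that just wants fractions, the Rational 
version behaves the way school arithmetic leads one to expect.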

>> For all practical purposes, the semantics of (==) is not well defined 
>> for floating point numbers. That's one of the first things I used to 
>> teach my students about floats: *never* compare them for equality. So 
>> in my view, your example doesn't fail, it's undefined. That Haskell 
>> provides (==) for floats is unfortunate. 
> I disagree, on a practical basis. Floating-point numbers are very well
> defined; we know how the mantissa is represented. If the numbers are
> normalized, as they should be, plenty of low-level iterative algorithms
> may use equality - after some operation - to check that machine-
> precision convergence has been obtained.
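The convergence idiom described above might be sketched like this; 
`newtonSqrt`, the starting guess, and the iteration cap are illustrative 
choices of mine, not code from the thread:

```haskell
-- Heron/Newton iteration for the square root (assumes a > 0),
-- stopping when two successive iterates are bit-for-bit equal --
-- the exact-equality convergence test described above. The
-- iteration cap is a safety net for the case where the last bit
-- of the mantissa flip-flops between two adjacent values forever.
newtonSqrt :: Double -> Double
newtonSqrt a = go (100 :: Int) (a / 2)
  where
    go 0 x = x                       -- give up: possible two-cycle
    go n x
      | x' == x   = x                -- machine-precision fixed point
      | otherwise = go (n - 1) x'
      where
        x' = (x + a / x) / 2
```

Without the cap, the `x' == x` test alone is exactly the bet discussed 
in the reply below: that the iteration settles on one bit pattern.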

If you are absolutely sure that, for every possible precision and for 
every sequence of operations a compiler may generate from your code, 
your algorithm will actually converge to a particular binary 
representation rather than flip-flop on the last bit of the mantissa, 
and if you do not care about the actual precision of your algorithm 
(i.e., you want as much of it as possible), then yes, you might get 
away with using exact equality. Of course, you'll have to protect that 
part of your code with a sufficient number of warnings, since you are 
using a highly unsafe operation in a very carefully controlled context. 
I'm not sure the trouble is really worth it. In any case, in my view, 
such an unsafe operation shouldn't be in scope by default and 
definitely shouldn't be called (==). It's really quite like 
unsafePerformIO.
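A concrete instance of the "sequence of operations" problem: 
floating-point addition is not associative, so the exact bit pattern a 
computation produces can depend on how the compiler orders operations:

```haskell
-- The same three summands, grouped differently, give different
-- Doubles -- so exact (==) can distinguish mathematically equal
-- expressions.
leftSum, rightSum :: Double
leftSum  = (0.1 + 0.2) + 0.3   -- 0.6000000000000001
rightSum = 0.1 + (0.2 + 0.3)   -- 0.6

main :: IO ()
main = print (leftSum == rightSum)   -- False
```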

> On the contrary, the verification that the absolute value of the
> difference between two terms is less than some threshold may be
> arbitrary or dubious.

Only if you use an inappropriate threshold. Choosing thresholds and 
precision is an important part of numeric programming and should be 
done with great care.
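For comparison, a threshold test chosen with such care might look like 
the following sketch; the name `approxEq` and the tolerance values are 
illustrative, not a standard API, and suitable tolerances depend on the 
problem at hand:

```haskell
-- Approximate equality for Doubles: a relative tolerance for
-- numbers of ordinary magnitude, with an absolute floor so that
-- values near zero can still compare equal.
approxEq :: Double -> Double -> Bool
approxEq x y = abs (x - y) <= max absTol (relTol * max (abs x) (abs y))
  where
    relTol = 1e-12   -- illustrative; choose per problem
    absTol = 1e-14   -- illustrative; choose per problem

main :: IO ()
main = do
  print (approxEq (0.1 + 0.2) 0.3)        -- True
  print ((0.1 + 0.2) == (0.3 :: Double))  -- False
```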


More information about the Haskell-Cafe mailing list