[Haskell-cafe] Re: A question about "monad laws"

Richard A. O'Keefe ok at cs.otago.ac.nz
Thu Feb 14 01:05:32 EST 2008


On 14 Feb 2008, at 6:01 pm, Roman Leshchinskiy wrote:
> I don't understand this. Why use a type which can overflow in the first
> place? Why not use Integer?

Why is this hard to understand?
Dijkstra's classic "A Discipline of Programming" distinguishes
several kinds of machine.  I'm quoting from memory here.

	A Sufficiently Large Machine is one which can run your program
	to completion giving correct answers all the way.

	An Insufficiently Large Machine is one which can't do that and
	silently goes crazy instead.

	A Hopefully Sufficiently Large Machine is one which *either*
	does what a Sufficiently Large Machine would have *or* reports
	that it couldn't.

The good thing about an SLM is that it always gives you right answers
(assuming your program is correct).  The bad thing is that you can't afford it.

The good thing about an ILM is that you can afford it.  The bad thing is
that you can't trust it.

The great thing about a HSLM is that you can both trust and afford it.

Presumably the reason for having Int in the language at all is speed.
As people have pointed out several times on this list to my knowledge,
Integer performance is not as good as Int performance, not hardly,
and it is silly to pay that price if I don't actually need it.

The thing about using SafeInt is that I should get the *same* space and speed
from SafeInt as I do from Int, or at the very least the same space and far
better speed than Integer, while at the same time EITHER the results are the
results I would have got using Integer *OR* the system promises to tell me
about it, so that I *know* there is a problem.

SafeInt is what you should use when you *THINK* your results should all fit
in a machine int but aren't perfectly sure.  (And this is nearly all the time.)
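
(There is no SafeInt in the standard libraries; the name stands for the sort
of type I am wishing for.  Purely as an illustration of the intended contract,
here is a sketch that checks every operation by widening to Integer.  A real
implementation would get the check from the hardware's overflow flag instead,
as sketched further down.)

    -- Illustration only: a hypothetical SafeInt with Int's representation
    -- and Integer's answers, or else a loud complaint.
    newtype SafeInt = SafeInt Int
      deriving (Eq, Ord, Show)

    -- Accept the widened result only if it fits in an Int.
    checked :: String -> Integer -> SafeInt
    checked op r
      | r < toInteger (minBound :: Int) || r > toInteger (maxBound :: Int)
          = error ("SafeInt overflow in " ++ op)
      | otherwise = SafeInt (fromInteger r)

    instance Num SafeInt where
      SafeInt a + SafeInt b = checked "+" (toInteger a + toInteger b)
      SafeInt a - SafeInt b = checked "-" (toInteger a - toInteger b)
      SafeInt a * SafeInt b = checked "*" (toInteger a * toInteger b)
      negate (SafeInt a)    = checked "negate" (negate (toInteger a))
      abs    (SafeInt a)    = checked "abs"    (abs    (toInteger a))
      signum (SafeInt a)    = SafeInt (signum a)
      fromInteger           = checked "fromInteger"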

Int is what you should use when you don't give a damn what the results are
as long as you get them fast.  (But in that case, why not use C or assembler?)

>
>>> You just have to check for exceptional conditions.
>> Why should it be *MY* job to check for exceptional conditions?
>
> It shouldn't unless you use a type whose contract specifies that it's your
> job to check for them. Which is the case for Int, Float and Double.

Wrong.  You're confusing two things here.  One is Float and Double, where we
get in serious trouble WITHOUT ANY EXCEPTIONAL CONDITIONS IN SIGHT.  The other
is Int overflow.  There may also be an equivocation on 'checking'.  When was
the last time you proved that a large program would not incur an integer
overflow?  When was the last time you proved that a library package would not
incur integer overflow provided it was called in accord with its contract?
When was the last time you even *found* a sufficiently informative contract
in someone else's Haskell code?

The checking I am talking about is done by the hardware at machine speeds
and provides *certainty* that overflow did not occur.
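
(GHC, for what it's worth, exposes that hardware flag through primops such as
addIntC#.  A GHC-specific sketch, offered only as an illustration:)

    {-# LANGUAGE MagicHash, UnboxedTuples #-}
    import GHC.Exts (Int (I#), addIntC#)

    -- Add two Ints using the overflow flag the hardware computes anyway.
    -- Nothing means the true sum does not fit in an Int.
    addReporting :: Int -> Int -> Maybe Int
    addReporting (I# x) (I# y) =
      case addIntC# x y of
        (# s, c #) -> if I# c == 0 then Just (I# s) else Nothing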

> It's not the case for Integer and Rational.
>
>> If you think that, you do not understand floating point.
>> x+(y+z) == (x+y)+z fails even though there is nothing exceptional about
>> any of the operands or any of the results.
>
> For all practical purposes, the semantics of (==) is not well defined for
> floating point numbers.

With respect to IEEE arithmetic, wrong.

> That's one of the first things I used to teach my students about floats:
> *never* compare them for equality.

That's one of the warning signs I watch out for.  "Never compare floats for
equality" is a sure sign of someone who thinks they know about floats but
don't.

> So in my view, your example doesn't fail, it's undefined. That Haskell
> provides (==) for floats is unfortunate.

The operation is well defined and required by the IEEE standard.
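
(A quick GHCi session, with nothing special about the values, makes both
points: the comparison is perfectly well defined, and associativity still
fails:)

    Prelude> ((0.1 + 0.2) + 0.3) == (0.1 + (0.2 + 0.3))
    False
    Prelude> (0.1 + 0.2) + 0.3 :: Double
    0.6000000000000001
    Prelude> 0.1 + (0.2 + 0.3) :: Double
    0.6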
>

> If they used (==) for floats, then they simply didn't know what they were
> doing. The fact that a program is commercial doesn't mean it's any good.

Er, we weren't talking about (==) for floats; I don't know where you got
that.  I never said it was any good; quite the opposite.  My point is that
bad software escaped into the commercial market because floating point
doesn't follow the laws people expect it to.
>
>>> I guess it trapped on creating denormals. But again, presumably the
>>> reason the student used doubles here was because he wanted his program
>>> to be fast. Had he read just a little bit about floating point, he would
>>> have known that it is *not* fast under certain conditions.
>> Well, no.  Close, but no cigar.
>> (a) It wasn't denormals, it was underflow.
>
> "Creating denormals" and underflow are equivalent.

No they are not.  Underflow in this sense occurs when the result is too
small to be even a denormal.  (The IEEE exceptions Underflow and Inexact
are not the same.  Creating denormals is likely to trigger Inexact but
should not trigger Underflow.  I am talking only about a condition that
triggered Underflow.)

> Denormals are created as a result of underflow. A denormalised number is
> smaller than any representable normal number. When the result of an
> operation is too small to be represented by a normal number, IEEE
> arithmetic will either trap or return a denormal, depending on whether
> underflow is masked or not.

No, we're talking about a situation where returning a denormal is not an
option because there is no suitable denormal.  This is underflow.
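
(GHCi will show you the boundary being drawn here; the values are those of
IEEE double precision:)

    Prelude> let tiny = 5.0e-324 :: Double   -- smallest positive denormal
    Prelude> isDenormalized tiny
    True
    Prelude> tiny / 2     -- true result is below even the denormal range
    0.0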

>
>> (b) The fact underflow was handled by trapping to the operating system,
>>     which then completed the operation by writing a 0.0 to the appropriate
>>     register, is *NOT* a universal property of floating point, and is *NOT*
>>     a universal property of IEEE floating point.  It's a fact about that
>>     particular architecture, and I happened to have the manual and he didn't.
>
> IIRC, underflow is a standard IEEE exception.

Underflow is indeed a standard IEEE exception.  Like other standard IEEE
exceptions, it is *disabled by default*.  In this case, the hardware
delivered the exception *to the operating system*, but the operating
system did not deliver it to the *user code*.  It is the *combination*
of hardware and operating system that conforms to the IEEE standard (or not).
So we are talking about a situation where the only legal IEEE outcomes are
"return 0.0" or "raise the Underflow exception" and where raising an
exception *in the user code* wasn't allowed and didn't happen.

The hardware is allowed to trap to the operating system any time it feels
like, for any reason (like 'this model doesn't support the SQRTD instruction')
or none (hey, it's a Friday, I think I'll generate traps).

The knowledge I had, and the student lacked, was *not* knowledge about an
interface (the IEEE specification) but about an implementation.  There's a
lot of Haskell code out there with no performance guides in the
documentation...

> I'm not. But programmers I consider competent for this particular task
> know how to use floating point. Your student didn't but that's ok for a
> student.

Wrong.  He *did* know "how to use floating point", and his code would have
run at the expected speed on other hardware.  It gave pretty good answers.
What he didn't know was how one particular machine struck the balance
between hardware and software.

I wonder just how many programmers these days think Double.(*) is _always_
a cheap hardware instruction?

Returning to our theme: the programmer expectation here is "a simple cost
model."  Violating that expectation led to a program with a huge unexpected
cost problem.  In the same way, violating other programmer expectations is
likely to lead to unexpected correctness problems.


More information about the Haskell-Cafe mailing list