[Haskell-cafe] Re: A question about "monad laws"
Roman Leshchinskiy
rl at cse.unsw.edu.au
Thu Feb 14 04:24:02 EST 2008
Richard A. O'Keefe wrote:
> On 14 Feb 2008, at 6:01 pm, Roman Leshchinskiy wrote:
>> I don't understand this. Why use a type which can overflow in the
>> first place? Why not use Integer?
>
> [...]
>
> Presumably the reason for having Int in the language at all is speed.
> As people have pointed out several times on this list to my knowledge,
> Integer performance is not as good as Int performance, not hardly,
> and it is silly to pay that price if I don't actually need it.
Do I understand correctly that you advocate using overflowing ints (even
if they signal overflow) even if Integers are fast enough for a
particular program? I strongly disagree with this. It's premature
optimisation of the worst kind: trading correctness for unneeded
performance.
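To make the trade-off concrete, here is a minimal GHC sketch (assuming
GHC's usual wrap-around behaviour for the fixed-width Int):

    -- Int silently wraps on overflow; Integer gives the exact answer.
    main :: IO ()
    main = do
      let big = maxBound :: Int
      print (big + 1)            -- wraps around to minBound
      print (toInteger big + 1)  -- exact: one more than maxBound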
> SafeInt is what you should use when you *THINK* your results should all fit
> in a machine int but aren't perfectly sure. (And this is nearly all the
> time.)
Again, I strongly disagree. You should use Integer unless your program
is too slow and profiling shows that Integer is the culprit. If and only
if that is the case should you think about alternatives. That said, I
doubt that your SafeInt would be significantly faster than Integer.
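For reference, a SafeInt along those lines might be sketched as a
newtype over Int whose addition checks for wrap-around; the name and
operation below are purely illustrative, not an existing library:

    newtype SafeInt = SafeInt Int deriving (Eq, Ord, Show)

    -- Two's-complement wrap-around detection: adding a positive y must
    -- not decrease the sum, adding a negative y must not increase it.
    safeAdd :: SafeInt -> SafeInt -> SafeInt
    safeAdd (SafeInt x) (SafeInt y)
      | (y > 0 && s < x) || (y < 0 && s > x) = error "SafeInt: overflow"
      | otherwise                            = SafeInt s
      where s = x + y

Every operation then pays for the extra comparison and branch, which is
part of why I doubt it would beat Integer by much.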
>>>> You just have to check for exceptional conditions.
>>> Why should it be *MY* job to check for exceptional conditions?
>>
>> It shouldn't unless you use a type whose contract specifies that it's
>> your job to check for them. Which is the case for Int, Float and Double.
>
> Wrong. You're confusing two things here. One is Float and Double,
> where we get in serious trouble WITHOUT ANY EXCEPTIONAL CONDITIONS IN
> SIGHT. The other is Int overflow.
I'm not sure what I'm confusing here; my comment referred specifically
to exceptional conditions, which floats provide plenty of. As to getting
into trouble, I don't need floats for that, I manage to do it perfectly
well with any data type, including (). Seriously, though, I think we
agree that using floating point numbers correctly isn't trivial: people
who do it should know what they are doing and are usually best served by
existing libraries. I just don't see how floats are special in this regard.
> The checking I am talking about is done by the hardware at machine speeds
> and provides *certainty* that overflow did not occur.
So you advocate using different hardware?
>>> If you think that, you do not understand floating point.
>>> x+(y+z) == (x+y)+z fails even though there is nothing exceptional about
>>> any of the operands or any of the results.
>>
>> For all practical purposes, the semantics of (==) is not well defined
>> for floating point numbers.
>
> With respect to IEEE arithmetic, wrong.
Yes, IEEE does define an operation which is (wrongly, IMO) called
"equality". It's not a particularly useful operation (and it is not
equality), but it does have a defined semantics. However, the semantics
of (==) on floats isn't really defined in Haskell or C, for that matter,
even if you know that the hardware is strictly IEEE conformant.
In general, floating point numbers do not really have a useful notion of
equality. They are approximations, after all, and independently derived
approximations can only be usefully tested for approximate equality. And
yes, x+(y+z) is approximately equal to (x+y)+z for floats. How
approximate depends on the particular values, of course.
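A quick GHCi session (assuming IEEE 754 doubles) shows the associativity
failure with entirely unexceptional operands:

    ghci> 0.1 + (0.2 + 0.3) :: Double
    0.6
    ghci> (0.1 + 0.2) + 0.3 :: Double
    0.6000000000000001
    ghci> 0.1 + (0.2 + 0.3) == ((0.1 + 0.2) + 0.3 :: Double)
    False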
>> That's one of the first things I used to teach my students about
>> floats: *never* compare them for equality.
>
> That's one of the warning signs I watch out for. "Never compare floats for
> equality" is a sure sign of someone who thinks they know about floats
> but don't.
Hmm. Personally, I've never seen an algorithm where comparing for exact
equality was algorithmically necessary. Sometimes (rarely) it is
acceptable; but necessary? Do you know of one? On the other hand, there
are a lot of cases where comparing for exact equality is algorithmically
wrong.
As an aside, you might want to try googling for "Never compare floats
for equality". I'm positive some of those people *do* know about floats.
>> "Creating denormals" and underflow are equivalent.
>
> No they are not. Underflow in this sense occurs when the result is too
> small to be even a denormal.
I'm fairly sure that IEEE underflow occurs when the result cannot be
represented by a *normal* number, but I don't have a copy of the
standard to check. Anyway, it's not important for this discussion, I guess.
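For what it's worth, GHCi (assuming IEEE doubles with gradual underflow)
makes it easy to poke at the boundary:

    ghci> let tiny = 2 ^^ (-1022) :: Double  -- smallest normal Double
    ghci> isDenormalized tiny
    False
    ghci> isDenormalized (tiny / 2)
    True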
> Underflow is indeed a standard IEEE exception. Like other standard IEEE
> exceptions, it is *disabled by default*. In this case, the hardware
> delivered the exception *to the operating system*, but the operating
> system did not deliver it to the *user code*. It is the *combination*
> of hardware and operating system that conforms to the IEEE standard (or
> not).
> So we are talking about a situation where the only legal IEEE outcomes are
> "return 0.0" or "raise the Underflow exception" and where raising an
> exception
> *in the user code* wasn't allowed and didn't happen.
Now I'm curious. I would have guessed that it was an Alpha, but that
would behave differently (it would trap on underflow, but only in strict
IEEE mode, and only because it actually implemented flush to zero instead
of gradual underflow).
>> I'm not. But programmers I consider competent for this particular task
>> know how to use floating point. Your student didn't but that's ok for
>> a student.
>
> Wrong. He *did* know "how to use floating point", and his code would have
> run at the expected speed on other hardware. It gave pretty good answers.
Wrt speed: not necessarily. For instance, x86 is really bad when it
comes to denormals. Have a look at
http://www.cygnus-software.com/papers/x86andinfinity.html
for an example. So while he knew how to get good results with floating
point, he didn't know how to get good performance, which, as you say, is
not part of the IEEE standard but is still something you have to know if
you use floats.
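If you want to see this on your own machine, here is a rough
microbenchmark sketch; timings are highly CPU- and settings-dependent
(e.g. flush-to-zero flags), so this only shows how one might measure:

    {-# LANGUAGE BangPatterns #-}
    import Data.Time.Clock (diffUTCTime, getCurrentTime)

    -- Add x to an accumulator ten million times; with a denormal x the
    -- sums stay denormal throughout, which is slow on some x86 CPUs.
    spin :: Double -> Double
    spin x = go 0 (10 ^ (7 :: Int))
      where
        go :: Double -> Int -> Double
        go !acc 0 = acc
        go !acc n = go (acc + x) (n - 1)

    main :: IO ()
    main = mapM_ time [1.0e-300, 5.0e-324]  -- normal vs. smallest denormal
      where
        time v = do
          t0 <- getCurrentTime
          print (spin v)  -- force the loop to run
          t1 <- getCurrentTime
          print (diffUTCTime t1 t0)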
Roman