[Haskell-cafe] abs minBound < (0 :: Int) && negate minBound == (minBound :: Int)

Rustom Mody rustompmody at gmail.com
Tue Aug 20 18:14:49 CEST 2013


On Tue, Aug 20, 2013 at 6:37 AM, Richard A. O'Keefe <ok at cs.otago.ac.nz> wrote:
>
> On 20/08/2013, at 3:43 AM, Kyle Miller wrote:
>
>> On Sun, Aug 18, 2013 at 8:04 PM, Richard A. O'Keefe <ok at cs.otago.ac.nz> wrote:
>> The argument for twos-complement, which always puzzled me, is that the other
>> systems have two ways to represent zero.  I never found this to be a problem,
>> not even for bitwise operations, on the B6700.  I *did* find "abs x < 0"
>> succeeding to be a pain in the posterior.  (The B6700 had two different tests:
>> 'are these two numbers equal' and 'are these two bit patterns equal'.)
>>
>> I think a better argument for twos complement is that you're just doing all of your computations modulo 2^n (where n is 32 or 64 or whatever), and addition and multiplication work as expected modulo anything.
>
> To me, that's not a better argument.

These kinds of argument usually come down to a question of framing --
whether pragmatic, philosophical or pedagogical.  Let me start with
the philosophical.

> It isn't even a _good_ argument.
> It amounts to saying "if you do things wrong, you can justify it by
> saying you're really doing something else right, and it's the programmer's
> fault for wanting the wrong thing."

This argument works if 'doing something right' is an available option.
What if it's not?

>
> One great thing about the B6700 was that you were guaranteed
> EITHER the mathematically correct answer in the numbers you were
> thinking in terms of OR an exception telling you the machine couldn't
> do what you wanted.  When it comes to *applications* programming,
> the number of times I have *wanted* arithmetic modulo 2^n (in the last
> 40 years) can be counted on the fingers of one ear.
>
> You may call it "multiplication work[ing] as expected" when the product of two
> positive numbers comes out negative; I call it a wrong answer.
>
> Prelude> let tens = 1 : map (*10) tens :: [Int]
> Prelude> take 19 tens
> [1,10,100,1000,10000,100000,1000000,10000000,100000000,1000000000,
> 10000000000,100000000000,1000000000000,10000000000000,100000000000000,
> 1000000000000000,10000000000000000,100000000000000000,1000000000000000000]
> Prelude> [x * x | x <- take 19 tens]
> [1,100,10000,1000000,100000000,10000000000,1000000000000,100000000000000,
> 10000000000000000,1000000000000000000,7766279631452241920,
> 1864712049423024128,2003764205206896640,-2537764290115403776,
> 4477988020393345024,5076944270305263616,-8814407033341083648,
> 4003012203950112768,-5527149226598858752]
>
> Yes, I know that Haskell has Integer.
> If I want to do more arithmetic than a bit of simple counting,
> I like to use it.
> The gibberish that Int multiplication spewed out above is why.
>
> Roughly speaking, there are three ways to handle integer
> arithmetic: the Lisp way, the Ada way, and the Java way.
> Lisp just gets it right (think Haskell's "Integer" type).
> Java *defines* wrong answers to be "right".
> Ada recognises that sometimes you want modular arithmetic (so it offers you
> modular types) and sometimes you don't (so it offers you bounded but
> non-modular types, where overflow is trapped).

This issue is really a specific instance of the question:
Are computers finite or infinite?

If one says finite, then the infinite-taped Turing machine has nothing
to do with computers.
If one says infinite, then the abstraction we are talking about is
unrelated to the boxes on our desks/palmtops.

One has to recognise that in juggling between these two views -- dare I
say a central project for a programmer? -- we need to stuff an
infinite conceptual object into a finite actual one.  And in doing so
some corners will be cut willy-nilly.

So to say Lisp is 'right' because arithmetic does not overflow at
machine word-size limits misses the fact that it overflows more
unpredictably when the machine's memory fills up.  Let's look at good
ol' factorial:

fact 0 = 1
fact n = n * fact (n-1)

Now I ran it as fact 1000000, with signature Int -> Int and then with
Integer -> Integer.

In the first case I got 0 in about 3 seconds.
In the second... I thought I'd see what happened, but after about 2
minutes of the CPU fans maxing out, Firefox started giving me alarms
about an 'unresponsive script'; I felt my machine had had enough
punishment and gave in with C-c!
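
Why 0, incidentally: with a 64-bit Int the whole computation happens
modulo 2^64, and the running product picks up a factor of 2 from every
even number on the way down, so long before the recursion reaches 1 the
product is divisible by 2^64 and every later result stays zero.  A
minimal sketch of the same effect at a size you can check by hand
(assuming a 64-bit GHC, where Int wraps modulo 2^64):

Prelude> let fact n = if n == 0 then 1 else n * fact (n - 1)
Prelude> fact 70 :: Int
0
Prelude> (fact 70 :: Integer) `mod` 2^64
0

70! contains 67 factors of 2, so even the exact Integer value is
divisible by 2^64; the Int answer is just that residue.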

And if that memory-filling argument sounds unreal, consider
representing and storing Graham's number.

Of course I am arguing philosophically, not pragmatically.
Philosophically: Graham's number is 'just' a finite number, though a
rather obese one.
Pragmatically: 32 bits is unwise for a bank balance; 64 should be a
bit safer.
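
To make the bank-balance point concrete, a small sketch with Data.Int,
treating balances as cents (the figures are invented for illustration):

Prelude> import Data.Int
Prelude Data.Int> 2000000000 + 2000000000 :: Int32   -- two ~$20M balances
-294967296
Prelude Data.Int> 2000000000 + 2000000000 :: Int64
4000000000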

So coming to the pragmatic and to Lisp...
I remember a story (maybe apocryphal) about a robot in the MIT(?) lab
that did a lot of clever things and then tumbled down the stairs.  When
asked, the concerned researcher/academic shrugged it off: "It was
garbage collecting."

If the robot had been programmed in C, its real-time behavior would
have been sturdier, though its integer-overflow properties would have
been flimsier.

More pragmatically, it's good to remember Whorf's finding that wrong
signboards/terminology can actually cause buildings to catch fire.

So yes, Haskell's Int should have been called FastInt or Int29 or somethin'.
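
The subject line of this very thread is the same complaint in
miniature: at a fixed two's-complement width the most negative Int has
no positive counterpart, so abs and negate simply wrap around.  Again
assuming a 64-bit Int:

Prelude> minBound :: Int
-9223372036854775808
Prelude> abs (minBound :: Int) < 0
True
Prelude> negate minBound == (minBound :: Int)
True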

Many such unfortunate nomenclatures...
Just yesterday I was struggling with teaching the fact that 'data' is
not data but a type, 'type' is not a type but a type synonym, etc. etc.

Yeah, that's the third framing -- pedagogy...

And I'd better stop now with a...

tl;dr
Abstractions leaking is a centerpiece of our story.
We can look the other way -- and be elegant theoreticians.
Or we can be good engineers and try to plug the leaks -- only to have
the leaks explode ever more noisily and messily.



