int to float problem

b.i.mills@massey.ac.nz
Thu, 6 Mar 2003 18:13:36 +1300


> I agree with what you said, 
> but I think you may have missed my point.

Sounds likely on both counts. 

The same thing annoys me, but my work is in exact or symbolic
computation:

-- I don't claim this is a practical example
-- I'm just saying that it is logically plausible

   denominator (2 % 3) == 3

   denominator 23 == 1  -- only works because 23 might mean 23 % 1

   denominator (floor (23 % 45))  -- type error in the application

So now I have to say ...

   denominator $ fromInteger (floor (23 % 45))
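
For reference, the signatures involved (the standard ones, from
the Prelude and the Ratio library):

   denominator :: Integral a => Ratio a -> a
   floor       :: (RealFrac a, Integral b) => a -> b
   fromInteger :: Num a => Integer -> a

floor hands back a bare Integral value, denominator insists on
a Ratio, and no Ratio type is Integral, hence the type error;
fromInteger embeds the result back into a Ratio.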

Is this the same malarkey that you are complaining about?

I don't like using the conversions, so I generally try to
find some way to rephrase the problem at a higher level. 
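
For instance (a toy of my own, nothing to do with your code):
instead of sprinkling fromIntegral through the body of a mean,
do the arithmetic exactly in Rational and convert once at the
boundary:

   import Data.Ratio ((%))

   -- one conversion at the edge, rather than several inside
   mean :: [Integer] -> Double
   mean xs = fromRational (sum xs % fromIntegral (length xs))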

So, you want some way to define a default manner in which 
other types can be mapped to the type that is used in the 
definition of the function? (Do I understand correctly?)

I can see merit in this: one person might use floor, another
ceiling, to shoehorn a Float into an Integer function, getting
different answers. On the other hand, it looks like ad-hoc
polymorphism. I mean, you are requesting that we can write a
function ...

myIntFn :: Integer -> Integer

and then define a default mapping for each type into integers.

So what if I define functions myFloatInt, myRatioIntInt, etc.,
to be used by default when the "wrong" type is presented as an
argument, such that the range of each of these is distinct? In
this way myIntFn can actually map Floats and Ratio Ints in
totally different ways.
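
Concretely, with my own toy definitions (ranges kept disjoint
on purpose):

   import Data.Ratio (Ratio)

   myFloatInt :: Float -> Integer
   myFloatInt x = 2 * floor x          -- Floats land on the evens

   myRatioIntInt :: Ratio Int -> Integer
   myRatioIntInt x = 2 * floor x + 1   -- Ratio Ints land on the odds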

That is, such a mechanism is very close to C++-style ad-hoc
polymorphism ...

myIntFn :: Integer -> Integer
myIntFn x = 2*x

myIntFn :: Float -> Float
myIntFn x = x*x

and so on.
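
Haskell rejects those two signatures for the one name, of
course, but a type class says the same thing (the class name
MyFn is mine, purely for illustration):

   class MyFn a where
     myIntFn :: a -> a

   instance MyFn Integer where
     myIntFn x = 2 * x

   instance MyFn Float where
     myIntFn x = x * x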

Don't get me wrong, I am not fundamentally opposed to ad-hoc
polymorphism, in fact ... (Ok, don't get me onto the subject
of polymorphism).

> In the definition of w, the fromIntegrals serve no purpose 
> other than to make everything type.  This seems to go 
> against the merits of declarative programming.

I feel where you are coming from here ... but I wonder if the
issue is more the problem that Int is not a subset of Float.
That is, the matter that (1 :: Integer) and (1 :: Float) are
not represented the same in the computer means that they really
are not in a subset hierarchy. A conversion is a non-trivial
operation (not just a projection); I just don't feel we can
miss out this point. We (as programmers) are pretending that the
Floats and Integers are a model of reals and integers. But,
they are not, so we have to write our program in such a way
that we make up for the sense in which they are not.
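
To make that concrete (assuming the usual IEEE single precision
Float, with its 24 bit significand): the embedding of Integer
into Float is not even injective:

   fromInteger (2^24)     :: Float   -- 1.6777216e7
   fromInteger (2^24 + 1) :: Float   -- 1.6777216e7 again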

Ultimately, float is what it is ... not a real number. The
fact that we can write programs to get a good approximation
to certain real results, using floats, shows that we are 
good at compensating for the distinction. 
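
The standard party trick makes the point (fractional literals
default to Double):

   0.1 + 0.2 :: Double             -- 0.30000000000000004
   (0.1 + 0.2 :: Double) == 0.3    -- False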

I don't think that it goes against the merits of declarative
programming, rather (once again) declarative programming is
causing us to have to recognise the reality of the situation.
(In this case that real arithmetic is non computable).

Mind you, maybe having to recognise the reality is a form
of demerit. 

Regards 

Bruce.

ps: Parametric polymorphism only works because the other
    operators available are ad-hoc polymorphic :p