Revamping the numeric classes

Bjorn Lisper lisper@it.kth.se
Wed, 14 Feb 2001 00:20:41 +0100 (MET)


Marcin 'Qrczak' Kowalczyk:
>Me:
>> Functions themselves are never lifted. They just appear to be lifted
>> when applied to arguments of certain types, and then the function
>> application is always statically resolved into an expression where
>> the function has its original type.

>This does not solve the problems. Instead of composed liftings I will
>split the code into separate bindings. Suppose I write:

>    let
>        g x y = x + y
>        f x y = g x y
>        in f [1, 2] [10, 20] :: [[Int]]

>What does it mean? I could mean this:

>    let
>        g :: Int -> Int -> Int
>        g x y = x + y
>        f :: Int -> [Int] -> [Int]
>        f x y = g x y
>        in f [1, 2] [10, 20] :: [[Int]]

>which results in [[11, 21], [12, 22]], or this:

>    let
>        g :: Int -> Int -> Int
>        g x y = x + y
>        f :: [Int] -> Int -> [Int]
>        f x y = g x y
>        in f [1, 2] [10, 20] :: [[Int]]

>which results in [[11, 12], [21, 22]]. Anyway, somebody loses (one
>who thought that his version would be chosen by the compiler).

Actually, both will lose :-). If we change f to the uncurried form
f (x,y) = g x y, then we will have f :: (Int,Int) -> Int, and
f [1, 2] [10, 20] -> zipWith (curry f) [1, 2] [10, 20] :: [Int],
which evaluates to [11,22].
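
For concreteness, here is that transformation written out by hand in
ordinary Haskell (a sketch only; the spelling with curry and zipWith is
just one way to write the elementwise application):

    g :: Int -> Int -> Int
    g x y = x + y

    -- the uncurried f from above
    f :: (Int, Int) -> Int
    f (x, y) = g x y

    -- what f [1, 2] [10, 20] would be resolved to: elementwise
    -- application over the zipped argument lists
    lifted :: [Int]
    lifted = zipWith (curry f) [1, 2] [10, 20]   -- [11, 22]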

If you mean that f [1, 2] [10, 20] is explicitly typed as [[Int]], then I'd
say a type error is the correct result. I think explicit typing should only
affect the HM typing of the transformed expression, possibly giving it a
less general type; it should not coerce the elemental overloading into a
more lifted result than necessary.
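
In ordinary Haskell terms this is just the usual role of annotations; a
small sketch with a Num-polymorphic transformed expression (names and
spellings mine, purely for illustration):

    -- The transformed expression has inferred type Num a => [a];
    -- an annotation may specialise it ...
    ok1 = zipWith (+) [1, 2] [10, 20] :: [Int]
    ok2 = zipWith (+) [1, 2] [10, 20] :: [Double]

    -- ... but it should not be able to force an extra level of lifting:
    -- bad = zipWith (+) [1, 2] [10, 20] :: [[Int]]   -- type error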

>If you think that the fact that bodies of let-bound variables are
>typechecked prior to their usage help, let's transform let to lambdas
>(it's not used polymorphically so it's possible):

>    (\g -> (\f -> f [1, 2] [10, 20] :: [[Int]]) (\x y -> g x y))
>    (\x y -> x + y)

>Now it is not clear in what order this should be typechecked, and
>different orders give different results.

Unlike algorithm W, a type inference algorithm that resolves elemental
overloading as part of the type inference must (I believe) maintain a set of
different possible assumptions about the type variables, and in the end make
a choice guided by the "minimal lifting" principle. In your example (with
uncurried f) there will be an assumption f :: (Int,Int) -> a stemming from
(\f -> ...) (\x y -> g x y) and an assumption f :: ([Int],[Int]) -> b from
f [1, 2] [10, 20]. These types cannot be unified, but they can be related in
the "liftedness" order if b = [a]. Minimal lifting then dictates that
(Int,Int) -> a is chosen, and f [1, 2] [10, 20] is subsequently transformed as
above into an expression of type [Int]. A type error follows, since the
annotated type [[Int]] is not an instance of the inferred type [Int].
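
To make the "liftedness" relation concrete: an (Int,Int) -> a function can
be lifted elementwise to an ([Int],[Int]) -> [a] function, which is what
relates the two assumptions above. Written out by hand (the name
liftPairwise is mine, purely for illustration, and is equivalent to the
zipWith form used earlier):

    -- relates the two candidate types in the liftedness order
    liftPairwise :: ((a, b) -> c) -> ([a], [b]) -> [c]
    liftPairwise h (xs, ys) = zipWith (curry h) xs ys

    -- choosing the minimal type (Int, Int) -> Int for f, the
    -- application is resolved to
    --     liftPairwise f ([1, 2], [10, 20]) :: [Int]
    -- which cannot be given the annotated type [[Int]].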

I should say I don't have the details of an inference algorithm sorted out,
but I think something along the lines above should work.

>It can be beta/eta-reduced to
>    [1, 2] + [10, 20] :: [[Int]]
>and I really don't know what meaning would you give to it.

If + has an uncurried type, then
[1, 2] + [10, 20] -> zipWith (curry (+)) [1, 2] [10, 20] :: [Int], but in
general one should not expect unresolved overloaded expressions to have
the subject reduction property. (For instance, the example above breaks if +
has the curried type Int -> Int -> Int.)
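
To illustrate why the curried case is different (a sketch, spellings of my
own choosing): with (+) :: Int -> Int -> Int the two applications can be
lifted independently, so the same reduced expression admits more than one
lifting, one of which coincides with a reading quoted above:

    -- lifting the first application gives partially applied functions
    partials :: [Int -> Int]
    partials = map (+) [1, 2]

    -- the second application can then be lifted in at least two ways
    elementwise :: [Int]
    elementwise = zipWith ($) partials [10, 20]      -- [11, 22]

    nested :: [[Int]]
    nested = map (\h -> map h [10, 20]) partials     -- [[11, 21], [12, 22]]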

>Anyway, examples are pointless. Functional programming, as opposed to
>imperative programming, leads to more common use of lists and functions
>as values (instead of repeated stateful calls you return all elements
>in a list to process them later), also Maybes, tuples etc. Functions
>explicitly take as arguments things they depend on, explicitly return
>things they modify, functions are curried, there are combinators like
>(.) or curry which manipulate functions without applying them...

>All this means that there are many more places where somebody can
>make a type error by mismatching levels of lifting functions or
>lifting repetition.

Probably true. What I have tried to argue is merely that elemental
overloading probably can be done in a consistent way even in a language like
Haskell that has an advanced type system. I don't deny that there can be
problems if it behaves in ways unexpected to the programmer.

>When I am making a type error, the last thing I want from a compiler
>is to guess what I could mean and silently compile incorrect code
>based on this assumption. I could have made the error in a different
>place!

Well, "silent": as for any kind of syntactical convenience, the use of
elemental overloading would require the possibility to obtain good
diagnostics and information about the resulting typings and transformed
expressions.

>> Of course, the use of the overloading I have described (and any
>> kind of overloading) is justified only if the overloading matches
>> the intuition of the programmer.  If it misleads you then it is
>> harmful. Regarding elemental overloading, my experience is that
>> data parallel programmers quickly develop a strong intuition for it.

>Perhaps because they don't use higher order functions, and the arrays
>they use always represent arrays of the problem domain, not a way to
>produce multiple results together?

These languages are invariably first order, but some have datatypes like
nested sequences. Arrays are used for different purposes, ranging from
pointer tables to problem representations, and elemental overloading is used
for both.

>It's a very specific need. And a request for convenience, not
>a functional feature. I'm sure it does not make sense to ever change
>Haskell's type system in such a radical way. When Int is unified with
>[Int], it's simply an error.

(They're not unified!) I should say something about the roots of my
interest. I have been working on a dialect of Haskell directed towards
data-parallel style specification/rapid prototyping of parallel
algorithms. Thus, the idea is to combine all the powerful abstraction
features of Haskell with the data parallel paradigm. In this context, I
think the elemental overloading would fit in just fine.

I will end this discussion unilaterally now. I don't think I have anything
more to add.

Björn Lisper