[Haskell-cafe] Re: [Haskell] Typing in haskell and mathematics
Jacques Carette
carette at mcmaster.ca
Fri Jan 28 16:40:00 EST 2005
[My apologies for not seeing the related discussion yesterday on haskell-cafe, I was not subscribed until just now]
Iavor Diatchki <iavor.diatchki at gmail.com> wrote:
> On Fri, 28 Jan 2005 10:01:33 -0500, Jacques Carette <carette at mcmaster.ca> wrote:
> > The previous post on record syntax reminded me of some 'problems' I had noticed where Haskell and mathematics have a
> > (deep) usage mismatch.
> It is interesting that my sentiment is exactly the opposite.
> Mathematicians (the applied kind, not the pure) tend to be quite
> sloppy about their notation, relying _a lot_ on the context and the
> fact that people know what they are talking about. I find that this
> makes their explanations harder to understand rather than easier.
But every program you write in Haskell depends on a lot of context too! It depends on the (complete) semantics of the
programming language, which includes quite a lot of type theory. And more to the point, it includes the context of
all the built-in operators as well as the (large) set of functions in the Prelude. That is a lot of context.
The 'only' difference with mathematics is that Haskell forces you to use 'import' statements to bring in more context.
> > First, consider a syntax for component-wise function application. For example, it would be convenient to have
> > (f,g) @ (x,y)
> > be (f x, g y). In some languages [with dynamic typing], one can even do (f,g) (x,y) :-)
> I am not sure what you mean, but you can define exactly that (except
> for the @ sign which is special syntax already):
> (f,g) `ap2` (x,y) = (f x, g y)
[I should have looked harder for better syntax]
I meant to
a) overload application
b) do it for all tuples at once
Building all of Nat into the names of my functions does not appeal to me...
> > Yes, I am aware that this is untypeable in Haskell, because polymorphism is straight-jacketed by structural rules.
> What do you mean by this:
> ap2 :: (a -> b, c -> d) -> (a,c) -> (b,d)
I mean that you cannot have ap2, ap3, ap4, etc. *and* ordinary application all share the 'same' type. But shouldn't they?
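To make the mismatch concrete, here is a minimal sketch (ap2 follows the definition quoted above; ap3 is my own name): each tuple width needs its own function with its own type, which is what "building all of Nat into the names" amounts to.

ap2 :: (a -> b, c -> d) -> (a, c) -> (b, d)
ap2 (f, g) (x, y) = (f x, g y)

ap3 :: (a -> b, c -> d, e -> f) -> (a, c, e) -> (b, d, f)
ap3 (f, g, h) (x, y, z) = (f x, g y, h z)

-- No single Haskell type covers ap2, ap3, ap4, ... and ordinary application.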
> > 1) look at the type a -> b as being a *subtype* of b (though I have never seen it phrased that way in print)
> I don't think subtyping is what you are thinking of. Perhaps you are referring
> to the correspondence between b-expressions with a free a-variable,
> and functions a->b. In Haskell we don't have expressions with free
> variables as
> first class values. There might be some interesting things to work
> out for such beasts.
Having expressions with free variables as first-class values is indeed an obvious 'next' thing to want, if one is to parallel
mathematics.
But I really mean to look at any b-producing function as being of type b, and that includes a -> b.
> > 2) to consider every 'natural transformation' available and use it (tuples of functions <-> obvious function on
> > tuples)
> Are you suggesting that natural transformations should be applied automatically?
> There are many problems with this: which operations should happen
> automatically?
At least the ones that you get 'for free' for structural reasons.
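For example (the name pairToFun is mine, a sketch of one such 'for free' map): a pair of functions determines an obvious function on pairs, purely for structural reasons.

pairToFun :: (a -> b, c -> d) -> ((a, c) -> (b, d))
pairToFun (f, g) = \(x, y) -> (f x, g y)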
> Type inference (if it works at all) will be hard to do, but perhaps
> these are technical details.
> Fundamentally the "sloppy" math notation assumes that what the author
> wrote is correct and should simply be given "the obvious" meaning.
Haskell's type inference engine does the same thing -- it assumes that the code you wrote is meaningful, and thus
gives it the *only* type that makes sense. Care has been taken so that in Haskell "the obvious" == "the only", which
is certainly a good property to have.
Mathematicians who are a bit careful with their notation do the same. They make sure that "obvious" and "only" match
when they drop sub/superscripts, some environment dependence, etc.
> This is pretty much the idea with dynamic typing. I find the
> restrictions of static typing very convenient (well most of the time
> :-) as they capture "stupid" errors.
I disagree. I wrote tens of thousands of lines of code in a (decent) dynamically typed language. Now I am writing
thousands of lines of code in Haskell. I also appreciate all my "stupid" errors being caught - this is a welcome
advantage, and I rather like it. But I really really hate having to duplicate code for really dumb reasons. So what
I want is not dynamic typing, just more powerful static typing.
> > Seeing a -> b as a subtype of b allows one to take any function f : b -> b -> c (like + ) and 'lift' it to
> > ff : (a -> b) -> (a -> b) -> (a -> c)
> > via the obvious definition
> > ff g h = \x -> f (g x) (h x)
> > This is done routinely in mathematics. The meaning of the expression (sin + cos) should be immediately obvious.
> > Such 'lifting' is used even in first-year calculus courses, but not in Haskell.
> Apart from the fact that this does not seem like a good idea,
> you can already do things like this in Haskell:
> liftOp op f g x = f x `op` g x
Of course. My point is that you need to explicitly use liftOp. I want it to be implicit. Note that this does NOT
break type inference, if type inference is done via constraint solving and a->b is a subtype of b, though it does make
it harder.
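As a rough sketch of the flavour I am after (the instance below is illustrative only, not something proposed in this thread), one can already lift the Num operations pointwise to functions, after which (sin + cos) typechecks:

{-# LANGUAGE FlexibleInstances #-}

-- Pointwise lifting of arithmetic to functions.
instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

-- With this instance in scope: (sin + cos) 0.5 == sin 0.5 + cos 0.5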
> This is all you need, especially if you are prepared to stick to the
> "usual" mathematical convention of not using curried functions, but
> instead using tuples to pass multiple arguments.
Oh, but curried functions are way too nice to give up!
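The two styles are of course interconvertible; a small illustration (plusC and plusT are hypothetical names):

plusC :: Int -> Int -> Int     -- curried, the Haskell default
plusC x y = x + y

plusT :: (Int, Int) -> Int     -- tupled, the "mathematical" convention
plusT = uncurry plusC

addThree :: Int -> Int         -- partial application is what makes the
addThree = plusC 3             -- curried form so convenient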
> > Similarly, there are "obvious" maps
> > apply_T :: T (a->b) -> a -> T b
> > for any datatype constructor T (as has been noticed by all who do 'generic' programming).
> Is this really true? Consider:
> data T a = T (a -> Int)
> What should apply_T do?
> applyT :: (essentially) ((a -> b) -> Int) -> a -> (b -> Int)
> perhaps: applyT (T f) a = T (\b -> f (const b))
> but that probably does not behave as you would like it.
> Of course you could restrict the datatypes for which apply_T can be derived.
Good point - since my intuition was built using covariant types, it does break down on contravariant types.
I meant that it would be ok for
applyT :: T (a -> (b -> Int)) -> a -> T (b -> Int)
But I did talk about 'polynomial functors' (or was that a different email?), and your example is most definitely not one.
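For covariant, polynomial shapes the map is unproblematic; a small sketch (applyMaybe and applyList are my names for the Maybe and list instances of apply_T):

applyMaybe :: Maybe (a -> b) -> a -> Maybe b
applyMaybe Nothing  _ = Nothing
applyMaybe (Just f) x = Just (f x)

applyList :: [a -> b] -> a -> [b]
applyList fs x = map ($ x) fs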
> I agree that it would be nice to automatically derive functions such
> as apply_T, etc.
> This (as you point out) is the idea of generic programming.
> This however does not suggest that we should replace application...
Not replace, augment/generalize.
Jacques