[Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)
Yitzchak Gale
gale at sefer.org
Sun Jan 6 08:32:55 EST 2008
(sorry, I hit the send button)
>> What is the lifted version you are referring to?
> Take the ordinary disjoint union, and then add a new _|_ element,
> distinct from both existing copies of _|_ (which are still distinct
> from each other).
Now why is that not the category-theoretic coproduct?
h . Left = f and h . Right = g both for _|_ and for finite
elements of the types. And it looks universal to me.
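To see the lifting concretely, here is a small sketch one can load into
GHCi (the helper name is mine, not from the thread):

    -- The lifted sum: the type's own _|_ is a third value, distinct
    -- from Left _|_ and Right _|_, and pattern matching can observe it.
    isLeftish :: Either a b -> Bool
    isLeftish (Left _)  = True
    isLeftish (Right _) = False

    -- isLeftish (Left undefined)   == True
    -- isLeftish (Right undefined)  == False
    -- isLeftish undefined          diverges: the extra _|_ of the sum type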
>>> Of course, Haskell makes things even worse by lifting the
>>> product and exponential objects,
>> OK, what goes wrong there, and what is the lifting?
> Again, in Haskell, (_|_, _|_) /= _|_. The difference is in the function
> f (x, y) = 1 which gives f undefined = undefined but
> f (undefined, undefined) = 1.
I don't get that one. Given any f and g, we define h x = (f x, g x).
Why do we not have fst . h = f and snd . h = g, both in Hask
and StrictHask? fst and snd are strict.
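For concreteness, the quoted function and the pairing above, written so
they can be poked at in GHCi (the names and monomorphic types are mine):

    -- the quoted example, at a concrete type:
    f1 :: (Int, Int) -> Int
    f1 (x, y) = 1
    -- f1 (undefined, undefined)  == 1
    -- f1 undefined               diverges (the match on (,) forces the pair)

    -- the pairing from my reply, for arbitrary f and g:
    pairWith :: (a -> b) -> (a -> c) -> a -> (b, c)
    pairWith f g x = (f x, g x)
    -- fst (pairWith f g x) = f x and snd (pairWith f g x) = g x
    -- at every x, since fst and snd just select the components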
>> Unfortunately, this means that (alpha, beta) has an extra _|_
>> element < (_|_, _|_), so it's not the categorical product (which
>> would lack such an element for the same reason as above).
The reason you can't adjoin an extra element to (A,B) in, say,
Set, is that you would have nowhere to map it under fst
and snd. But here that is not a problem: _|_ goes to _|_
under both.
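A small check of both points (the name is mine): the projections send the
extra element to _|_, while a match on the pair constructor can still tell
it apart from (_|_, _|_).

    bottomPair :: (Int, Int)
    bottomPair = undefined    -- the extra element below (undefined, undefined)

    -- fst bottomPair   diverges, as does fst (undefined, undefined)
    -- snd bottomPair   diverges, as does snd (undefined, undefined)
    -- case bottomPair of (_, _) -> ()               diverges
    -- case (undefined, undefined) of (_, _) -> ()   == ()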
>> This is partly an implementation issue --- compiling pattern matching
>> without introducing such a lifting requires a parallel implementation
That's interesting. So given a future platform where parallelizing
is much cheaper than it is today, we could conceivably have
a totally lazy version of Haskell. I wonder what it would be like
to program in that environment, what new problems would arise
and what new techniques would solve them. Sounds like a nice
research topic. Who is working on it?
>> --- and partly a semantic issue --- data introduces a single level of
>> lifting, every time it is used, and every constructor is completely
>> lazy.
Unless you use bangs. So both options are available, and that is
essentially what defines Haskell as a non-strict language.
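A minimal sketch of the two options, with and without a bang (the type
names are mine):

    data LazyBox   = LazyBox Int      -- lazy field: LazyBox undefined is a defined value
    data StrictBox = StrictBox !Int   -- bang: StrictBox undefined equals undefined

    sawLazy :: LazyBox -> Bool
    sawLazy (LazyBox _) = True
    -- sawLazy (LazyBox undefined)      == True

    sawStrict :: StrictBox -> Bool
    sawStrict (StrictBox _) = True
    -- sawStrict (StrictBox undefined)  diverges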
>> Functions have the same issue --- in the presence of seq, undefined /
>> = const undefined.
Right. I am becoming increasingly convinced that the seq issue
is a red herring.
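For reference, the quoted inequality is only visible through seq itself
(a sketch):

    probe :: (Int -> Int) -> ()
    probe g = g `seq` ()
    -- probe undefined          diverges
    -- probe (const undefined)  == ()   (a partial application is already in WHNF)
    -- applied to an argument, though, both functions return _|_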
>> Extensionality is a key part of the definition of all of these
>> constructions. The categorical rules are designed to require, in
>> concrete categories, that the range of the two injections into a
>> coproduct form a partition of the coproduct, the surjective pairing
>> law (fst x, snd x) = x holds, and the eta reduction law (\ x -> f x)
= f holds. Haskell flouts all three; while some categories have few
enough morphisms to get away with this (at least sometimes), Hask is
>> not one of them.
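For reference, my reading of those three failures, as things one can check
in GHCi (names are mine):

    u :: (Int, Int)
    u = undefined

    k :: Int -> Int
    k = undefined

    -- injections: undefined :: Either a b lies in the range of neither
    -- Left nor Right (see the isLeftish sketch above).
    pairingCheck, etaCheck :: ()
    pairingCheck = (fst u, snd u) `seq` ()   -- == (), while u `seq` () diverges
    etaCheck     = (\x -> k x)    `seq` ()   -- == (), while k `seq` () diverges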
That interpretation of the categorical rules is not essential to the
notion of a category, only to certain specific examples of categories
that you are familiar with. Product and coproduct in any given category -
whether they exist, what they are like if they exist, and what alternative
constructions exist if they do not - reflect the nature of the structure
that the category is modeling.
I am interested in understanding the category Hask that represents
what Haskell really is like as a programming language. Not
under some semantic mapping that includes non-computable
functions, and that forgets some of the interesting structure
that comes from laziness (though that is undoubtedly also
very interesting).
-Yitz