[Haskell] Typing in haskell and mathematics

Cale Gibbard cgibbard at gmail.com
Mon Jan 31 11:20:23 EST 2005


I agree. As an undergraduate student of pure mathematics, I have been
finding fairly large parts of the discussion about mathematical
notation somewhat silly and uninformed. (Really it was more the
previous thread, but it seems to have been continued here.) One thing
to keep in mind about mathematical notation is that it is usually not
intended to be read by machines; rather, it is a human language, albeit
a very precise one.

Statements like "a < b < c" are perfectly clear to everyone present,
and if anyone gets a type error in their head when reading it that
they can't get past, they are most likely just being stubborn.
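
For what it's worth, a rough Haskell sketch of the same point (the
helper name "between" is just mine, for illustration): the chained
form has to be spelled out by hand, since (<) is declared
non-associative, and even parenthesised as (a < b) < c it would ask
to compare a Bool against a number.

    -- the Haskell spelling of "lo < x < hi"
    between :: Ord a => a -> a -> a -> Bool
    between lo hi x = lo < x && x < hi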

Another common thing, done whenever convenient, is to treat the
Cartesian product as associative, naturally identifying the spaces
(A x A) x A and A x (A x A), along with perhaps A^3, which might be
the set of functions f: {0,1,2} -> A, or might be treated as one of
the preceding. In any given context, it should be easy to determine
the author's or lecturer's intent as to whether these are distinct or
the same. The bookkeeping is left as an exercise for the bored reader
or for logicians. (Given the recent comment that functional
programmers are applied logicians, perhaps it is no surprise that they
would be upset by such fuzziness in the language!)
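
Just to show how much bookkeeping gets skipped, here is one possible
Haskell rendering of those identifications (the names assoc, Three and
so on are mine, purely illustrative):

    -- witnesses for the "obvious" identification of (A x A) x A with A x (A x A)
    assoc :: ((a, a), a) -> (a, (a, a))
    assoc ((x, y), z) = (x, (y, z))

    unassoc :: (a, (a, a)) -> ((a, a), a)
    unassoc (x, (y, z)) = ((x, y), z)

    -- A^3 read as functions from a three-element index set
    data Three = I0 | I1 | I2

    toFun :: (a, a, a) -> Three -> a
    toFun (x, _, _) I0 = x
    toFun (_, y, _) I1 = y
    toFun (_, _, z) I2 = z

    fromFun :: (Three -> a) -> (a, a, a)
    fromFun f = (f I0, f I1, f I2)

A mathematician treats all of these as the same space without comment;
in Haskell every hop between them has to be written out.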

The notational confusion between f and f(x) tends not to happen
beyond the level of high school, at least not where I am: the former
refers to the function f, and the latter refers to a point in its
codomain. In fact, I am currently dealing with the opposite problem
while trying to learn differential geometry - people and books using
just f to refer to a particular point in the codomain of f, with every
function automatically evaluated at a given point of concern. This
seems to be common among physicists for some reason (most likely
because they like to think of the function as a quantity, and are used
to quantities having implicit relationships).
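
In Haskell the distinction is enforced by the types, which is roughly
what I am used to; a tiny illustrative sketch (the names mean nothing
in particular):

    f :: Double -> Double
    f x = x * x + 1

    -- 'f' alone denotes the function, a value of type Double -> Double;
    -- 'f 2' denotes a point of the codomain (here 5.0). There is no
    -- implicit "evaluate everything at the point of concern".
    y :: Double
    y = f 2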

Another thing to keep in mind is that mathematics is written in what
is usually another human language. If I write a proof in English, I
can provide enough context that any natural transformations I
implicitly apply to things will be made obvious to the reader.

The goal is not usually a machine-checkable proof; rather, it's to
convey enough information that anyone paying attention could produce
one. A professor of mine used the term "rigourisable", halfway joking
about the process. Basically, enough logical steps are provided so
that the ideas are clear, and everyone is completely convinced that
they could fill in the details -- mathematicians are usually pretty
good at making sure that the details they're providing for themselves
are consistent.

The reason that such bookkeeping details are not generally provided
is twofold: first, if someone is unable to fill some detail in, they
are expected to think about it and then ask a question; and second,
providing all of the details only serves to obscure the actual
reasoning. Getting caught up in explicitly applying natural
transformations that people would apply in their heads without much
thought anyway is a waste of time, and it creates a lot of junk to
sift through to find the key ideas.

In any case, it is important to keep the audience in mind while
writing anything, including symbols.

 - Cale

p.s. There are other good philosophical reasons not to take everything
down to the level of formal logic and set theory all the time,
stemming from Goedel's result that a sufficiently strong formal system
for mathematics cannot prove its own consistency. If we ever discover
that our system is broken, it will likely have to be replaced by a
similar one which corrects the problem. The fuzziness serves as good
protection in the case of such an earthquake: it's easier to map
slightly fuzzy notions with a given semantic interpretation onto a new
system and find the analogues of the theorems you had before than it
is to map low-level syntactic statements across directly and be able
to say that they mean anything at all similar.

On Mon, 31 Jan 2005 13:59:58 +0100, Benjamin Franksen
<benjamin.franksen at bessy.de> wrote:
> On Monday 31 January 2005 04:24, Jeremy Gibbons wrote:
> > Despite being a fan of generic programming, I have my doubts about
> > this kind of automatic lifting. It works fine in "ordinary
> > mathematics", because there is no fear of confusion - one hardly ever
> > deals with functions as entities in their own right.
>
> May I please beg to differ? When I studied math, things were quite
> different, at least. I remember whole branches of mathematics
> completely dedicated to dealing with "functions as entities in their
> own right". One notable example is Functional Analysis, of which I
> happen to know a little. And, as far as I remember, we used notation
> which reflected this, i.e. nobody wrote 'f(x)' when actually they meant
> just 'f', which is the same as '\x -> f x', which in math is usually
> written 'x |-> f(x)'.
>
> > (Witness "∫ sin(x) dx", involving a term sin(x) and a dummy
> > variable x, rather than the more logical "∫ sin", involving the
> > function.)
>
> The notations for 'integral' and 'differential quotient' stem from a
> time when dealing with functions as entities in their own right was
> indeed not yet a common concept in mathematics, i.e. earlier than 1900.
>
> BTW, '∫ sin' is not a function.
>
> Ben

