[Haskell-cafe] matrix computations based on the GSL

Jacques Carette carette at mcmaster.ca
Wed Jun 29 16:57:08 EDT 2005


Henning Thielemann wrote:

>Mathematical notation has the problem that it doesn't distinguish between
>things that are different, yet it does distinguish between things which
>are essentially the same.
>
I used to think that too.  And while that is sometimes true, it is 
actually quite rare!  When common mathematical usage makes two things the 
same, it is usually either because they really are the same, or because 
context is always sufficient to tell them apart.  When it keeps things 
separate, it is usually because something subtle is going on.

>If your design goal is to keep as close as possible
>to common notational habits, you have already lost! As I already pointed
>out in an earlier discussion, I see it the other way round: computer
>languages are the touchstones for mathematical notation, because you can't
>tell a computer about an imprecise expression: "Don't be stupid, you
>_know_ what I mean!"
>  
>
No, you misplace the problem: you seem to want all your mathematical 
expressions to be 100% meaningful in a context-free environment.  
Computers really do love that.  Human mathematicians are much more 
sophisticated than that: they can deal with a lot of context-sensitivity 
without much difficulty.  The goal here should be to allow computers to 
deal with context sensitivity with the same ease.

In fact, type classes in Haskell are a *great* way to do just that!
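For instance, here is a minimal sketch (the M2 type is made up for
illustration, not taken from the GSL binding under discussion) of how the
class system resolves one symbol from context:

    -- One symbol, (*), whose meaning is resolved by the type in context.
    newtype M2 = M2 (Double, Double, Double, Double) deriving (Eq, Show)

    instance Num M2 where
      M2 (a,b,c,d) + M2 (e,f,g,h) = M2 (a+e, b+f, c+g, d+h)
      M2 (a,b,c,d) * M2 (e,f,g,h) =         -- composition of linear maps
        M2 (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)
      negate (M2 (a,b,c,d)) = M2 (-a, -b, -c, -d)
      fromInteger n = let x = fromInteger n in M2 (x, 0, 0, x)
      abs    = error "abs: no canonical meaning for 2x2 matrices"
      signum = error "signum: no canonical meaning for 2x2 matrices"

Now 2 * 3 :: Double and m * n :: M2 use the very same (*), and the
compiler, like a human reader, disambiguates from context.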

>More specifically:
> You give two different things the same name. You write
>  A*B
> and you mean matrix multiplication. Matrix multiplication means
>finding a representation of the composition of the operators represented
>by A and B.
> But you also write
>  A*x
> and you mean matrix-vector multiplication. This corresponds to the
>application of the operator represented by A to the vector x.
> You see: two different things, but the same sign (*). Why? You like this
>ambiguity because of its conciseness. You are used to it. What else?
>  
>
This is called operator overloading.  It is completely harmless because 
you can tell the two uses of * apart from their type signatures.  It is a 
complete and total waste of time to use two different symbols for what 
is conceptually the same operation.
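
And if you want A*B and A*x to share one symbol while remaining distinct
typed operations, the class system handles that too.  A hypothetical
sketch (the Mul class and the Matrix/Vector synonyms are invented for
illustration), using GHC's multi-parameter type classes and functional
dependencies:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances #-}
    import Data.List (transpose)

    type Matrix = [[Double]]
    type Vector = [Double]

    -- the result type c is determined by the argument types a and b
    class Mul a b c | a b -> c where
      (.*.) :: a -> b -> c

    -- A*B: a representation of the composition of two operators
    instance Mul Matrix Matrix Matrix where
      a .*. b = [ [ sum (zipWith (*) r c) | c <- transpose b ] | r <- a ]

    -- A*x: application of the operator represented by A to the vector x
    instance Mul Matrix Vector Vector where
      a .*. x = [ sum (zipWith (*) r x) | r <- a ]

The type checker picks the instance - which is exactly the "context" a
human reader uses to tell A*B from A*x.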

<sarcasm>Next thing you know, you'll want a different 'application' 
symbol for every arity of function, because they are "different". 
</sarcasm>
Seriously, the unification of the concept of application, achieved 
through currying and first-class functions, is wonderful.  Operator 
application is no different!
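
A two-line reminder of what that unification buys (plain Haskell,
nothing assumed):

    add :: Int -> Int -> Int
    add x y = x + y      -- a "two-argument" function

    inc :: Int -> Int
    inc = add 1          -- partial application: one application
                         -- mechanism serves every arity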

> But you have to introduce an orientation of vectors, thus you
>discriminate two things (row and column vectors) which are essentially the
>same!
> What is the overall benefit?
>  
>
Someone else already answered this much better than I could.

> It seems to me like the effort of most Haskell XML libraries to have as
>few combinator functions as possible (namely one: o), which forces you
>not to discriminate between the types of the functions to be combined (the
>three essential types are unified to a -> [a]) and, even more, forces you
>to put conversions from the natural types (like a -> Bool, a -> a) into
>every atomic function!
>  
>
Here we agree.  Too much polymorphism can hurt too - you end up 
essentially pushing everything into dynamic typing, which is rather 
anti-Haskell.  The problem that the authors faced was that Haskell 
doesn't yet have enough facilities to eliminate *all* boilerplate code, 
and I guess they chose dynamic typing over boilerplate.
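
For readers who haven't met these libraries, the pattern being
criticized looks roughly like this (a sketch: keep and lift are invented
names, but the single composition combinator o is indeed the point of
the design):

    import Control.Monad ((<=<))

    -- the single unified filter type every combinator must fit
    type Filter a = a -> [a]

    -- the conversions complained about above: each "atomic" function
    -- must be coerced from its natural type into the Filter shape
    keep :: (a -> Bool) -> Filter a     -- natural type: a -> Bool
    keep p x = [x | p x]

    lift :: (a -> a) -> Filter a        -- natural type: a -> a
    lift f x = [f x]

    -- the one combinator, o, is Kleisli composition in the list monad
    o :: Filter a -> Filter a -> Filter a
    o = (<=<)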

>I don't see the problem. There are three very different kinds of
>multiplication, they should also have their own signs: Scalar product,
>matrix-vector multiplication, matrix-matrix multiplication.
>  
>
You see three concepts; I see one: multiplication.  Abstract algebra is 
the branch of mathematics where you abstract out the *unimportant* 
details.  Much progress was made in the late 1800s when mathematicians 
discovered this ;-).
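
And the payoff shows up directly in code: a definition written once
against the abstract structure runs at every carrier.  A sketch (it
works for Double, for Complex numbers, and for the hypothetical M2
matrices above, given their Num instance):

    -- written once against the abstract multiplicative structure
    power :: Num a => Int -> a -> a
    power 0 _ = 1                    -- fromInteger 1: the identity
    power n x = x * power (n - 1) x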

>I have worked with Maple and I finally dropped it because of its
>design. I dropped MATLAB, too, because of its distinction between row and
>column vectors: it makes no sense to distinguish between convolving
>row vectors and convolving column vectors. Many routines have to be aware
>of this difference even though it is irrelevant to them, many others work
>only with one of the two kinds, and you are often busy transposing them.
>  
>
The core design of much of Maple and MATLAB was done in the early 1980s.  
That core design hasn't changed.  That it turns out to be sub-optimal is 
to be expected!
Note that the one giant attempt at a statically typed mathematics system 
(Axiom, with the programming language Aldor) never caught on, because both 
were far too hard for mere mortals to use.  And Aldor has a type system 
that makes Haskell's look simple and pedestrian by comparison: Aldor 
has full dependent types, recursive modules, and first-class types, just 
to name a few.  There is a (recursive) dependency chain of length 50 in 
Aldor's algebra library!

>If translating all existing idioms is your goal, then this is certainly
>the only design. But adapting sloppy (not really convenient)
>mathematical notation is not a good design guideline.
>
I should know better than to get into a discussion like this with 
someone who believes they singlehandedly know better than tens of 
thousands of mathematicians...  Rather reminds me of my days as a PhD 
student...

Jacques

