[Haskell-cafe] matrix computations based on the GSL
Conal Elliott
conal at conal.net
Wed Jun 29 15:42:55 EDT 2005
On row & column vectors, do you really want to think of them as
{1,...,n} -> R? They often represent linear maps from R^n to R or R to
R^n, which are very different types. Similarly, instead of working with
matrices, how about linear maps from R^n to R^m? In this view, column
and row vectors, matrices, and often scalars are useful as
representations of linear maps.
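This view can be sketched in a few lines of Haskell (a minimal sketch; all names here are hypothetical, not from any existing library): a column vector represents a linear map R -> R^n, a row vector a linear map R^n -> R, and composing the two recovers the dot product as a plain real.

```haskell
-- A minimal sketch of the linear-map view (hypothetical names, not
-- from any existing library).  An n-vector is a function from indices
-- to reals; row and column vectors are linear maps of different types.
type R   = Double
type Vec = Int -> R          -- a vector {1,...,n} -> R

newtype Col = Col (R -> Vec) -- a column vector: a linear map R -> R^n
newtype Row = Row (Vec -> R) -- a row vector:    a linear map R^n -> R

-- Build oriented vectors from a plain list of coefficients.
col :: [R] -> Col
col xs = Col (\s i -> s * xs !! (i - 1))

row :: [R] -> Row
row xs = Row (\v -> sum [x * v i | (i, x) <- zip [1 ..] xs])

-- Feeding the column's output (at scale 1) into the row composes the
-- two linear maps R -> R^n -> R, which yields a real: the dot product.
dot :: Row -> Col -> R
dot (Row r) (Col c) = r (c 1)
```

Note that the representation never commits to a basis-dependent layout: only application of the maps is observable.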
I've played around some with this idea of working with linear maps
instead of the common representations, especially in the context of
derivatives (including higher-dimensional and higher-order), where it
is the view taken in calculus on manifolds. It's a lovely, unifying
approach that combines all of the various chain rules into a single
one. I'd love to explore it more thoroughly with one or more
collaborators.
Cheers,
- Conal
-----Original Message-----
Henning Thielemann wrote:
>On Wed, 29 Jun 2005, Jacques Carette wrote:
>
>
>>9. There are row vectors and column vectors, and these are different
>>types. You get type errors if you mix them incorrectly.
>>
>>
>
>What do you mean by "row vectors and column vectors are different
>types"? Do you mean that in a well-designed library they should be
>distinguished? I disagree with this point of view, which is also
>represented by MatLab.
>
This was hotly debated during our design sessions too. It does appear
to be a rather odd decision, I agree! It was arrived at after trying
many alternatives, and finding all of them wanting.
Mathematically, vectors are elements of R^n, which has no inherent
'orientation'. They are just as simply abstracted as functions
{1,...,n} -> R, as you point out. But matrices are then just linear
operators on R^n, and if you go there, you first have to specify a
basis for R^n before you can write down a representation for any
matrix. Not useful.
However, when you step away from the pure mathematics point of view and
instead look at actual mathematical usage, things change significantly.
Mathematics is full of abuse of notation, and making the use of matrices
and vectors both convenient and 'safe' required us to re-examine the
many de facto conventions that mathematicians routinely use, and to
adapt our design to find a middle road between safety and usefulness.
One of the hacks in Matlab, because it only has matrices, is that
vectors are also 1xN and Nx1 matrices. In Maple, these are different
types. This helps a lot, because then a dot product is a function of
type R^n x R^n -> R. 1x1 matrices are not reals in Maple, nor are Nx1
matrices Vectors.
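That separation is easy to mimic in Haskell (the type and function names below are hypothetical, chosen only for illustration): row vectors, column vectors, and matrices get three distinct types, a dot product returns a real, and a 1x1 matrix is a different thing from a real.

```haskell
-- Hypothetical types illustrating Maple's separation: row vectors,
-- column vectors, and matrices are three distinct things, so mixing
-- them incorrectly is a type error.
newtype RowVec = RowVec [Double]   deriving Show
newtype ColVec = ColVec [Double]   deriving Show
newtype Matrix = Matrix [[Double]] deriving Show  -- list of rows

-- The dot product of two vectors is a real of type Double,
-- not a 1x1 Matrix.
dotV :: ColVec -> ColVec -> Double
dotV (ColVec x) (ColVec y) = sum (zipWith (*) x y)

-- A row times a column also gives a real; a row times a row, or a
-- ColVec where a RowVec is expected, is rejected by the type checker.
rowTimesCol :: RowVec -> ColVec -> Double
rowTimesCol (RowVec r) (ColVec c) = sum (zipWith (*) r c)
```

With these types, `rowTimesCol (RowVec r) (RowVec s)` fails to compile, which is exactly the "type errors if you mix them incorrectly" behaviour described above.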
> If we instead distinguish row and column vectors because we treat
>them as matrices, then the quadratic form
> x^T * A * x
>denotes a 1x1 matrix, not a real.
>
But if you consider x to be a vector without orientation, writing down
x^T is *completely meaningless*! If x is oriented, then x^T makes
sense.
Also, if x is oriented, then
x^T * (A * x) = (x^T * A) * x.
What is the meaning of (x * A) for an unoriented 'vector' x? It gets
much worse when the middle A is of the form B.C. To ensure that
everything is as associative as can be, but no more, is very difficult.
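The point about orientation can be made concrete with a small self-contained Haskell sketch (hypothetical names, dense-list representations assumed): both groupings of x^T * A * x typecheck and give the same real, while an expression like x * A for a column vector x is simply not expressible.

```haskell
import Data.List (transpose)

newtype Rvec = Rvec [Double]   -- a row vector
newtype Cvec = Cvec [Double]   -- a column vector
newtype Mat  = Mat [[Double]]  -- a matrix, stored as a list of rows

transp :: Cvec -> Rvec         -- x^T only makes sense for oriented x
transp (Cvec xs) = Rvec xs

mulMC :: Mat -> Cvec -> Cvec   -- A * x
mulMC (Mat rows) (Cvec c) = Cvec [sum (zipWith (*) r c) | r <- rows]

mulRM :: Rvec -> Mat -> Rvec   -- x^T * A
mulRM (Rvec r) (Mat rows) =
  Rvec [sum (zipWith (*) r c) | c <- transpose rows]

mulRC :: Rvec -> Cvec -> Double -- x^T * y is a real, not a 1x1 matrix
mulRC (Rvec r) (Cvec c) = sum (zipWith (*) r c)

-- Both associations of the quadratic form have the same type and the
-- same value.  There is deliberately no Cvec-times-Mat operation, so
-- 'x * A' for a column vector x does not typecheck at all.
quadLeft, quadRight :: Cvec -> Mat -> Double
quadLeft  x a = mulRC (transp x) (mulMC a x)   -- x^T * (A * x)
quadRight x a = mulRC (mulRM (transp x) a) x   -- (x^T * A) * x
```

The design question in the thread is precisely which of these operators to provide and which to withhold, so that every meaningful association is allowed and every meaningless one is a type error.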
I don't have the time to go into all the details of why this design
'works' (and it does!), giving the 'expected' result for all meaningful
linear algebra operations, but let me just say that it was the result
of long and intense debate, and was the *only* design that actually
allowed us to translate all of the idioms of linear algebra into
convenient notation. Please believe me that this design was not
arrived at lightly!
Jacques
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe at haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe