[Haskell-cafe] matrix computations based on the GSL

Henning Thielemann lemming at henning-thielemann.de
Thu Jul 7 09:08:50 EDT 2005


On Thu, 7 Jul 2005, David Roundy wrote:

> On Tue, Jul 05, 2005 at 08:17:58PM +0200, Henning Thielemann wrote:
> > The example, again: If you write some common expression like
> >
> > transpose x * a * x
> >
> > then neither the human reader nor the compiler knows whether x is a
> > "true" matrix or whether it simulates a column or a row vector. It may
> > be that 'x' is a row vector and 'a' a 1x1 matrix; then the expression
> > denotes a square matrix of the size of the vector simulated by 'x'. It
> > may be that 'x' is a column vector and 'a' a square matrix. Certainly
> > in most cases I want the latter, and I want a scalar as the result.
> > But if everything is a matrix, then I have to check at run time whether
> > the result is a 1x1 matrix and then extract its only element. If I omit
> > the 1x1 test, mistakes can remain undiscovered. I blame the common
> > notation x^T * A * x for this trouble, since the alternative notation
> > <x, A*x> can be transcribed immediately into computer language code
> > while retaining a strong distinction between the vector and matrix
> > types.
>
> The issue is that Haskell (as far as I understand, and no one has suggested
> anything to the contrary) doesn't have a sufficiently powerful type system
> to represent matrices or vectors in a statically typed way.  It would be
> wonderful if we could represent matrix multiplication as
>
> matrix_mul :: Matrix n m -> Matrix m o -> Matrix n o

Even if we had that, I would still vote for a distinction between Vector
and Matrix! I wanted to show with my example that vectors and matrices are
different enough that it makes sense to give them different types. In my
opinion, multiplying a matrix with a so-called column vector is a more
fundamental bug than multiplying a matrix with a vector of incompatible
size. That is, I like a static check of matrix-vector combinations and a
dynamic check of their sizes. The latter also because in many applications
you don't know the vector size at compile time.
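
To make this concrete, here is a minimal sketch of what I mean. All names
(Vector, Matrix, mXv, (<.>), quadForm) are invented for illustration and
are not taken from the GSL binding or any other library: the matrix-vector
combination is checked statically by the type checker, the sizes are
checked dynamically at run time, and <x, A*x> transcribes directly into a
function that returns a scalar.

-- Separate types for vectors and matrices; all names are invented
-- for this sketch and not taken from an existing library.
newtype Vector a = Vector [a]   deriving Show
newtype Matrix a = Matrix [[a]] deriving Show   -- list of rows

-- Inner product; the result is a scalar, not a 1x1 matrix.
(<.>) :: Num a => Vector a -> Vector a -> a
Vector xs <.> Vector ys
  | length xs == length ys = sum (zipWith (*) xs ys)  -- dynamic size check
  | otherwise              = error "(<.>): size mismatch"

-- Matrix times vector; this is the only product combination that exists,
-- so "matrix times column vector pretending to be a matrix" cannot be
-- written at all.
mXv :: Num a => Matrix a -> Vector a -> Vector a
mXv (Matrix rows) v@(Vector xs)
  | all ((== length xs) . length) rows = Vector [Vector r <.> v | r <- rows]
  | otherwise                          = error "mXv: size mismatch"

-- <x, A*x> transcribed literally; the result is unambiguously a scalar.
quadForm :: Num a => Matrix a -> Vector a -> a
quadForm a x = x <.> mXv a x

With these types, 'transpose x * a * x' with a vector x cannot even be
written; the intent is expressed as quadForm a x, the result is a scalar,
and no 1x1 test is needed afterwards.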

> You think of vectors as fundamental and matrices as a sort of derived
> quantity.  I'd prefer to think the other way around, with vectors being
> special cases of matrices, since it's simpler to implement and work with.

It's less type-safe and thus cumbersome to work with. That's my experience
with MatLab.
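
For contrast, here is what the everything-is-a-matrix style looks like in
a plain list-based sketch (again, all names are invented; this is not the
GSL binding's API). A transposition mistake has the same type as the
intended expression, so the compiler accepts both:

import Data.List (transpose)

-- Everything is a matrix; a vector is just an n-by-1 matrix, and the
-- types cannot tell the two apart.
type Mat = [[Double]]   -- list of rows

mMul :: Mat -> Mat -> Mat
mMul m b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- m ]

-- Both definitions have the same type, so both compile.  With this
-- unchecked multiplication the mistaken one does not even fail at run
-- time: for an n-by-1 'x' it silently yields an n-by-n matrix instead
-- of the intended 1x1 result.
intended, mistaken :: Mat -> Mat -> Mat
intended a x = (transpose x `mMul` a) `mMul` x   -- x^T * A * x
mistaken a x = (x `mMul` a) `mMul` transpose x   -- wrong orientation, same type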

> Also note that if you have several vectors x for which you want to compute
> the dot product with metric A, and if you want to do this efficiently,
> you'll have to convert your list of vectors into a matrix anyways.

If you bundle some vectors as columns of a matrix B and want to compute
norms with respect to the matrix A, then writing
  B^T * A * B
gives you not only the norms of the vectors in B but also many mixed
scalar products. This example shows me that matrices are not simply
collections of vectors. Btw., we should not mess up the design with
decisions driven by efficiency concerns. If you want efficiency, then you
can abuse matrices for that, but I consider a 'map' over a list of vectors
the cleaner way (and more type-safe, because you can't make transposition
errors).
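
A sketch of that 'map', reusing the invented Vector/Matrix types and the
quadForm function from the sketch further above; it computes exactly the
wanted values and nothing else:

-- Squared norms of several vectors with respect to the metric a,
-- computed with a plain map over a list of vectors.  No transposition
-- is involved, and no mixed scalar products are computed.
-- (Vector, Matrix and quadForm are the invented names from the earlier
-- sketch, not from an existing library.)
normsSq :: Num a => Matrix a -> [Vector a] -> [a]
normsSq a = map (quadForm a)

Compare this with B^T * A * B, which computes a whole k x k matrix of
scalar products when only its diagonal is wanted.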


