[Haskell-cafe] matrix computations based on the GSL

Jacques Carette carette at mcmaster.ca
Wed Jun 29 12:27:07 EDT 2005

I would recommend that you look very closely at the design of the 
LinearAlgebra package, the Matrix and Vector constructors, and the 
underlying implementation data-structure rtable() for Maple's 
implementation of all these ideas.  About 250 person-days were spent on 
just the high-level design, as we knew that the underlying 
implementation would take a lot longer (it ended up taking about 6 …).

The main design points were:
1. complete separation of the user-visible types (Matrix, Vector, Array) 
from the implementation type (rtable)
2. complete separation of the usual functions for user use (matrix 
addition, multiplication, LUDecomposition, etc.) from the actual 
underlying implementations
3. for efficiency purposes (only), allow access to the low-level 
details of both data structures and implementations, but spend no effort 
on making these easy to use.  Making them efficient was the only thing 
that mattered; thus the calling sequences for the low-level routines 
are quite tedious.
4. the high-level commands should do what one expects, but they are 
expected to pay a (small) efficiency cost for this convenience
5. incorporate all of LAPACK (through ATLAS, raw LAPACK is now too 
slow), as well as *all* of NAG's LinearAlgebra routines in a completely 
seamless manner.
6. do #5 for double-precision as well as arbitrary-precision arithmetic, 
again seamlessly
7. use as much structure information to do algorithm dispatch as possible
8. Matrices and Arrays are different high-level types, with different 
defaults.  A.B for Matrices is matrix multiplication, A.B for Arrays is 
component-wise.  This serves the purposes of both those who want to do 
linear algebra and those who want to do signal processing.
9. There are row vectors and column vectors, and these are different 
types.  You get type errors if you mix them incorrectly.
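For readers thinking about how points 8 and 9 might carry over to a
general-purpose language, here is a minimal Python sketch (all names are
made up for illustration; this is not Maple's API): distinct Matrix and
Array types give `*` different meanings, and row/column vectors are
separate types whose misuse is a type error.

```python
class RowVector:
    def __init__(self, xs): self.xs = list(xs)

class ColVector:
    def __init__(self, xs): self.xs = list(xs)

def dot(r, c):
    # Only RowVector . ColVector is a valid inner product (point 9).
    if not (isinstance(r, RowVector) and isinstance(c, ColVector)):
        raise TypeError("inner product needs RowVector . ColVector")
    return sum(a * b for a, b in zip(r.xs, c.xs))

class Matrix:
    def __init__(self, rows): self.rows = [list(r) for r in rows]
    def __mul__(self, other):
        # For Matrix, '*' means true matrix multiplication (point 8).
        n = len(other.rows)
        return Matrix([[sum(self.rows[i][k] * other.rows[k][j]
                            for k in range(n))
                        for j in range(len(other.rows[0]))]
                       for i in range(len(self.rows))])

class Array:
    def __init__(self, rows): self.rows = [list(r) for r in rows]
    def __mul__(self, other):
        # For Array, '*' is component-wise, as in signal processing.
        return Array([[a * b for a, b in zip(r1, r2)]
                      for r1, r2 in zip(self.rows, other.rows)])
```

Mixing a ColVector into the left slot of `dot` raises a TypeError rather
than silently transposing, which is exactly the safety point 9 is after.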

Things that follow from that are numerous.  For example, for the Matrix 
constructor there are:
1. 15 different abstract 'shapes' a matrix can have, with the 
band[n1,n2] 'shape' having two integer parameters
2. 14 different storage formats (element *layouts*)
3. each storage format can come in C or Fortran order
4. datatype which can be complex[8], float[8], integer[1], integer[2], 
integer[4], integer[8] or Maple
5. a variety of attributes (like positive_definite) which can be used by 
various routines for algorithm selection
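To illustrate what a band[n1,n2] shape with its own storage format buys
you, here is a rough Python sketch (hypothetical class, loosely modelled
on LAPACK-style band storage rather than Maple's actual rtable layout)
that stores only the n1 + n2 + 1 diagonals of an n x n matrix:

```python
class BandMatrix:
    """Band[n1, n2] storage sketch: keeps only n1 subdiagonals,
    the main diagonal, and n2 superdiagonals."""
    def __init__(self, n, n1, n2):
        self.n, self.n1, self.n2 = n, n1, n2
        # One row of storage per stored diagonal.
        self.data = [[0.0] * n for _ in range(n1 + n2 + 1)]

    def _slot(self, i, j):
        # Map a full-matrix (i, j) to (diagonal, position),
        # or None if the entry lies outside the band.
        d = j - i
        if -self.n1 <= d <= self.n2:
            return d + self.n1, j
        return None

    def get(self, i, j):
        s = self._slot(i, j)
        return self.data[s[0]][s[1]] if s else 0.0

    def set(self, i, j, v):
        s = self._slot(i, j)
        if s is None:
            raise IndexError("entry outside the band")
        self.data[s[0]][s[1]] = v
```

Reads outside the band cheaply return zero; writes outside it are
errors, so the compact layout cannot be silently corrupted.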

This implies that, in Maple (unlike in Matlab), the eigenvalues of a 
real symmetric matrix are guaranteed to be real, i.e. not to contain spurious 
small imaginary parts.  Solving Ax=b for A positive definite will use a 
Cholesky decomposition (automatically), etc.  Computations on arbitrary 
length integers will also use different algorithms than for matrices of 
polynomials, for example.  Where appropriate, fraction-free algorithms 
are used, etc.
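The attribute-driven dispatch described above can be sketched roughly as
follows (hypothetical Python: `factor` and the attribute tuple are
made-up names, and the general fallback factorization is elided):

```python
import math

def cholesky(A):
    # Lower-triangular L with A = L L^T; only valid for
    # symmetric positive-definite input.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def factor(A, attributes=()):
    # Structure information drives algorithm selection: a
    # positive_definite tag selects Cholesky (half the work of LU);
    # anything else would fall through to a general factorization.
    if "positive_definite" in attributes:
        return "cholesky", cholesky(A)
    raise NotImplementedError("general LU not sketched here")
```

The point is that the caller never names the algorithm; attaching the
attribute at construction time is enough for every later solve to pick
the cheaper, more accurate路线 route automatically.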

Some routines are even designed to be a bit 'lazy': one can use an 
LUDecomposition of a matrix A and return essentially only the 
information necessary to then do a large number of Ax=b solves for 
different b.
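That lazy style amounts to factoring once and then reusing the stored
factors for each right-hand side. A rough Python sketch (Doolittle LU
without pivoting, for illustration only, so unsuitable for matrices
with small or zero pivots):

```python
def lu_factor(A):
    # One-time O(n^3) factorization A = L U, with L's unit diagonal
    # implicit and both factors packed into one array.
    n = len(A)
    LU = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

def lu_solve(LU, b):
    # Each solve reuses the factors: only O(n^2) per right-hand side.
    n = len(LU)
    y = b[:]
    for i in range(n):                  # forward substitution, L y = b
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    x = y[:]
    for i in reversed(range(n)):        # back substitution, U x = y
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x
```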

Matrix access is very convenient, at the element level but also at 
the sub-matrix level, rather like Matlab's notation.  If A is a 5x5 
Matrix, then
A[1..2,3..4] := A[2..3,-2..-1];
will assign the submatrix of A consisting of the elements from rows 2,3 
and columns 4,5 to the submatrix of A in rows 1,2 and columns 3,4.  
Notice the indexing from the left (positive numbers) and from the right 
(negative numbers).  One can also say
A[[2,1],3..4] := A[2..3,[-1,-2]];
As this is somewhat difficult to explain, let me show what this does:
 > A := Matrix(5,5,(i,j)->i*j);

                        [1     2     3     4     5]
                        [                         ]
                        [2     4     6     8    10]
                        [                         ]
                   A := [3     6     9    12    15]
                        [                         ]
                        [4     8    12    16    20]
                        [                         ]
                        [5    10    15    20    25]
 > A[[2,1],3..4] := A[2..3,[-1,-2]]: A;

                     [1     2    15    12     5]
                     [                         ]
                     [2     4    10     8    10]
                     [                         ]
                     [3     6     9    12    15]
                     [                         ]
                     [4     8    12    16    20]
                     [                         ]
                     [5    10    15    20    25]

(hopefully the above will come out in a fixed-width font!).  This makes 
writing block-algorithms extremely easy.
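Those indexing rules can be emulated in a few lines. This hypothetical
Python sketch (1-based indices, negatives counting from the end, and
the right-hand side copied before assignment) reproduces the example
above:

```python
def idx(k, n):
    # Maple-style 1-based index: positive counts from the start,
    # negative from the end (-1 is the last entry).
    return k - 1 if k > 0 else n + k

def submatrix(A, rows, cols):
    # Extract (and copy) the entries selected by the index lists.
    return [[A[idx(i, len(A))][idx(j, len(A[0]))] for j in cols]
            for i in rows]

def submatrix_assign(A, rows, cols, src):
    # Destination index lists receive src's entries in order,
    # as in A[[2,1],3..4] := ...
    for a, i in enumerate(rows):
        for b, j in enumerate(cols):
            A[idx(i, len(A))][idx(j, len(A[0]))] = src[a][b]

# A := Matrix(5,5,(i,j)->i*j), as in the example above.
A = [[i * j for j in range(1, 6)] for i in range(1, 6)]

# A[[2,1],3..4] := A[2..3,[-1,-2]]
rhs = submatrix(A, [2, 3], [-1, -2])
submatrix_assign(A, [2, 1], [3, 4], rhs)
```

After this, the first two rows of A are [1, 2, 15, 12, 5] and
[2, 4, 10, 8, 10], matching the Maple output shown above.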

Overall, this design lets one easily add new algorithms without having 
to worry about breaking user code.

