[Haskell-cafe] matrix computations based on the GSL

Alberto Ruiz aruiz at um.es
Thu Mar 16 12:48:37 EST 2006

Hi Frederick,

I was referring to the ATLAS library:


Some versions of Octave use it. I have not yet linked the GSL with it; you must 
compile ATLAS yourself to take advantage of the optimizations for your 
architecture, but I think it should be easy. It is on my TODO list... :)

Coincidentally, I am just now working on a new (and hopefully improved) 
version of the GSL wrappers. A first, preliminary draft of the 
documentation can be found at:


The code will be available in a darcs repository in two or three days.

I have simplified the wrapper infrastructure and the user interface. Now we 
can freely work with both real and complex vectors and matrices, still with 
static type checking. There are also some bug fixes (the eigensystem 
wrapper destroyed its argument!).
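To illustrate the idea (this is only a sketch with hypothetical types, not the actual wrapper API), a matrix type parameterized by its element type lets the same functions work on real and complex matrices while the compiler still rejects mixing them:

```haskell
import Data.Complex (Complex ((:+)))
import Data.List (transpose)

-- Hypothetical matrix type, parameterized by the element type, so
-- real and complex matrices are distinguished statically.
newtype Matrix a = Matrix [[a]] deriving (Show, Eq)

-- Element-wise sum; applying this to a Matrix Double and a
-- Matrix (Complex Double) is a compile-time type error.
addM :: Num a => Matrix a -> Matrix a -> Matrix a
addM (Matrix xs) (Matrix ys) = Matrix (zipWith (zipWith (+)) xs ys)

-- Ordinary matrix product, over any numeric element type.
mulM :: Num a => Matrix a -> Matrix a -> Matrix a
mulM (Matrix xs) (Matrix ys) =
  Matrix [ [ sum (zipWith (*) row col) | col <- transpose ys ] | row <- xs ]

-- The same functions are usable at both element types:
real2 :: Matrix Double
real2 = Matrix [[1, 2], [3, 4]]

cplx2 :: Matrix (Complex Double)
cplx2 = Matrix [[1 :+ 0, 0 :+ 1], [0 :+ (-1), 1 :+ 0]]
```

Here `addM real2 real2` and `mulM cplx2 cplx2` both type-check, while `addM real2 cplx2` is rejected by the compiler.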

I made some run-time comparisons, specifically on the PCA example with the big 
matrix. I don't remember the exact difference, but a factor of 5 sounds too 
large... I will check it as soon as possible. Of course, our goal is to have 
something like a functional "octave" with the same performance 
(and a much nicer language :)
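For reference, the core of what the PCA benchmark computes can be sketched with plain Haskell lists (this is NOT the GSL wrappers, and a power iteration stands in for the full eigensystem routine): center the data, form the covariance matrix, and extract the dominant principal direction.

```haskell
import Data.List (transpose)

type Vec = [Double]

-- Mean of each column of the data matrix (rows are observations).
colMeans :: [Vec] -> Vec
colMeans rows = map (\c -> sum c / n) (transpose rows)
  where n = fromIntegral (length rows)

-- Sample covariance matrix of the centered data.
covariance :: [Vec] -> [Vec]
covariance rows =
  [ [ sum (zipWith (*) ci cj) / (n - 1) | cj <- cols ] | ci <- cols ]
  where
    mu       = colMeans rows
    centered = map (zipWith subtract mu) rows
    cols     = transpose centered
    n        = fromIntegral (length rows)

-- Matrix-vector product and Euclidean normalization.
mulV :: [Vec] -> Vec -> Vec
mulV m v = map (sum . zipWith (*) v) m

normalize :: Vec -> Vec
normalize v = map (/ sqrt (sum (map (^ 2) v))) v

-- First principal component: the dominant eigenvector of the
-- covariance matrix, found by (fixed-count) power iteration.
firstPC :: [Vec] -> Vec
firstPC rows = go (100 :: Int) (1 : replicate (length c - 1) 0)
  where
    c = covariance rows
    go 0 v = v
    go k v = go (k - 1) (normalize (mulV c v))
```

For data lying along the diagonal, e.g. `[[1,1],[2,2],[3,3]]`, `firstPC` converges to the direction `(1/sqrt 2, 1/sqrt 2)`, as expected.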

Many thanks for your message!


On Thursday 16 March 2006 18:13, Frederik Eaton wrote:
> Hi Alberto,
> I'm sorry if this has been discussed before...
> I'm reading your paper, at one point it says (re. the PCA example):
> "Octave achieves the same result, slightly faster. (In this experiment
> we have not used optimized BLAS libraries which can improve efficiency
> of the GSL)"
> That seems to imply that there is a way to use optimized BLAS
> libraries? How can I do that?
> Also, in my experiments (with matrix inversion) it seems,
> subjectively, that Octave is about 5 or so times faster for operations
> on large matrices. Presumably you've tested this as well, do you have
> any comparison results?
> Thanks,
> Frederik
