[Haskell-cafe] Numerical methods in Haskell

Bjorn Lisper lisper at it.kth.se
Mon Feb 20 09:59:36 EST 2006


David Roundy:
>On Mon, Feb 20, 2006 at 11:47:49AM +0100, Bjorn Lisper wrote:
>> >(a) It's hard to compete with existing libraries.  The obvious thing is
>> >not to compete; instead, just call them.  But somehow that doesn't seem
>> >to be as motivating.  Perhaps some bindings exist though?
>> 
>> Hard to compete, yes. But on the other hand, the rewards can be high.
>> Numerical library code (especially matrix libraries) tends to be highly
>> optimized for the hardware architecture at hand. As a consequence, a small
>> change in, say, the cache architecture might require a thorough rewrite of
>> the library code to regain high utilisation of the hardware. This is
>> because languages like Fortran and C force you to code so close to the
>> hardware. A high-level language, with good optimizing compilation, would
>> also make library code more portable across hardware architectures. N.b.
>> these optimizations are non-trivial, to say the least.
>
>The only particularly relevant numerical libraries today (ATLAS and FFTW)
>already do far better optimization than any compiler is going to achieve,
>and in the case of FFTW can respond to changes in memory configuration at
>runtime.  In both cases they're written in ANSI C (although the C code for
>FFTW is written by an OCaml program... or at least some dialect of ML).  In
>order to take advantage of cache behavior properly, it's necessary to allow
>adjustments in the actual algorithm used, which isn't something that a
>clever compiler is likely to accomplish.

That's a valid point. You may want to change, e.g., the size of blocks in a
block-oriented matrix algorithm to match the cache. This will, in general,
require the use of algebraic laws like associativity and commutativity.
These laws are not valid for floating-point arithmetic, so the rewrite can
change the numerical behaviour, and a compiler shouldn't fiddle with them
except under strict control of the programmer. Interestingly, the language
invented by my aforementioned former PhD student contains a nondeterministic
matrix decomposition primitive, which partitions a matrix into a fixed
number of blocks but lets the block sizes vary. This is exactly to give an
optimizing compiler this degree of freedom, when the programmer deems it
safe. Alas, he never got around to any serious experiments with this
feature.
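
To make the reassociation point concrete: a one-liner in GHCi already shows
that (+) on Double is not associative,

  Prelude> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
  False

and blocking a matrix product regroups exactly such sums, so a different
block size can perturb the result in the last bits.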
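
For concreteness, here is a rough sketch in plain Haskell of the kind of
blocking I mean, with the block size left as a parameter. It uses lists of
lists with no attempt at efficiency, it assumes both matrix dimensions are
multiples of the block size b, and all the names are invented for this post
(this is not the primitive from the thesis):

  import Data.List (transpose)

  type Matrix = [[Double]]

  -- Split a list into pieces of length b.
  chunk :: Int -> [a] -> [[a]]
  chunk _ [] = []
  chunk b xs = take b xs : chunk b (drop b xs)

  -- Partition a matrix into a grid of b-by-b blocks.
  toBlocks :: Int -> Matrix -> [[Matrix]]
  toBlocks b = map (map transpose . chunk b . transpose) . chunk b

  -- Reassemble a grid of blocks into a flat matrix.
  fromBlocks :: [[Matrix]] -> Matrix
  fromBlocks = concatMap (map concat . transpose)

  -- Ordinary addition and multiplication on flat matrices.
  madd, mmul :: Matrix -> Matrix -> Matrix
  madd = zipWith (zipWith (+))
  mmul p q = [ [ sum (zipWith (*) row col) | col <- transpose q ]
             | row <- p ]

  -- The same multiplication, lifted to the block grid: result block
  -- (i,j) is the madd-sum over k of X_ik `mmul` Y_kj. Changing b
  -- reassociates these additions, which is the rewrite a compiler
  -- must not do behind the programmer's back.
  mmulBlocked :: Int -> Matrix -> Matrix -> Matrix
  mmulBlocked b x y =
    fromBlocks [ [ foldr1 madd (zipWith mmul xr yc)
                 | yc <- transpose (toBlocks b y) ]
               | xr <- toBlocks b x ]

With small integer-valued entries every sum is exact, so the two agree:

  ghci> let m = [[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]]
  ghci> mmulBlocked 2 m m == mmul m m
  True

whereas with general Double entries the results may differ in the last
bits. An autotuning compiler free to choose b could then match the blocking
to the cache, provided the programmer has sanctioned the reassociation.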

Björn Lisper

