Arrays vs Lists and Specialization
Bjorn Lisper
lisper@it.kth.se
Wed, 22 Jan 2003 12:15:34 +0100 (MET)
Matthew Donadio:
>| Many spectral estimation routines are defined in terms of special
>| matrices (ie, Toeplitz, etc). Arrays defined recursively by list
>| comprehensions make it easy to implement algorithms like Levinson-Durbin
>| recursion, and they look very similar to the mathematical definitions:
>|
>| > levinson r p = (array (1,p) [ (k, a!(p,k)) | k <- [1..p] ], realPart (rho!p))
>| >   where a   = array ((1,1),(p,p)) [ ((k,i), ak k i) | k <- [1..p], i <- [1..k] ]
>| >         rho = array (1,p) [ (k, rhok k) | k <- [1..p] ]
>| >         ak 1 1 = -r!1 / r!0
>| >         ak k i | k==i      = -(r!k + sum [ a!(k-1,l) * r!(k-l) | l <- [1..(k-1)] ]) / rho!(k-1)
>| >                | otherwise = a!(k-1,i) + a!(k,k) * (conjugate (a!(k-1,k-i)))
>| >         rhok 1 = (1 - (abs (a!(1,1)))^2) * r!0
>| >         rhok k = (1 - (abs (a!(k,k)))^2) * rho!(k-1)
>|
>| OK, my question then has to do with the efficiency of lists versus
>| arrays. Do the latest compilers handle arrays efficiently, or are
>| lists really the way to go? If there is a performance difference, is
>| it generally big enough to warrant rewriting algorithms?
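For concreteness, here is a minimal sketch of how the quoted routine could be
driven, assuming the levinson definition above is in scope; the lag values and
the model order below are made up purely for illustration.

  import Data.Array
  import Data.Complex

  -- Made-up autocorrelation lags acf!0 .. acf!3, for illustration only
  acf :: Array Int (Complex Double)
  acf = listArray (0,3) [4 :+ 0, 2 :+ 0, 1 :+ 0, 0.5 :+ 0]

  -- Prints the AR(3) coefficients and the final (real) prediction error
  main :: IO ()
  main = print (levinson acf 3)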
Simon:
>Do not rewrite it! This is a textbook application of Haskell arrays,
>and they should work beautifully. If not, we should fix it. Using
>lists will probably be much *less* efficient than arrays.
>
>People often use arrays with algorithms derived from imperative
>programming, which assume update-in-place. They use (//) for each
>element, which copies the entire array, and get terrible performance.
>
>Haskell arrays are designed to be built en bloc, as you are doing.
>Your programs are lovely, and will not do redundant array copying.
>
>It is, however, possible that GHC fails to deforest away all the
>intermediate lists in the array comprehensions, but I think it does a
>reasonable job. You can use -ddump-simpl to see.
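To make Simon's contrast concrete, here is a small sketch (the array size and
contents are made up): updating incrementally with (//) copies the whole array
at every step, while a single call to array builds the same array en bloc.

  import Data.Array

  n :: Int
  n = 10000

  -- Anti-pattern: each (//) produces a fresh copy of the whole array,
  -- so building it this way costs O(n) per element, O(n^2) overall.
  squaresSlow :: Array Int Int
  squaresSlow = foldl (\arr i -> arr // [(i, i*i)])
                      (listArray (1,n) (replicate n 0))
                      [1..n]

  -- En-bloc construction: the array is defined in one pass,
  -- with no redundant copying.
  squaresFast :: Array Int Int
  squaresFast = array (1,n) [ (i, i*i) | i <- [1..n] ]

  main :: IO ()
  main = print (squaresSlow ! n == squaresFast ! n)  -- same result, very different cost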
You could check out SAC. SAC is a functional language specialized for
arrays, with restrictions in the language to ensure efficient compilation.
The SAC compiler can apparently do a very good job of scheduling and
memory allocation so as to optimize for cache performance. This is an
important issue for performance in array computations, and one that I
believe current Haskell implementations do not bother much about.
SAC has syntactic forms for defining arrays that are somewhat reminiscent
of array comprehensions, although they are much more restricted. An
interesting route for a *real* array-optimizing Haskell compiler could be
to find instances of array comprehensions that can be mapped to SAC's
constructs, and then apply SAC's compilation technology. An alternative is
to make such constructs directly available in Haskell, so the programmer
can be explicit when performance is needed.
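As a rough illustration of what such an explicit construct could look like,
here is a hypothetical combinator (genArray is not an existing Haskell or SAC
primitive) that describes an array purely by its bounds and an index function,
in the spirit of SAC's with-loops:

  import Data.Array

  -- Hypothetical combinator: define an array by its bounds and an index
  -- function. Here it is just a thin wrapper around array, but a compiler
  -- could recognize such a restricted form and compile it like a SAC
  -- with-loop, choosing evaluation order and memory layout itself.
  genArray :: Ix i => (i, i) -> (i -> e) -> Array i e
  genArray bnds f = array bnds [ (i, f i) | i <- range bnds ]

  -- Example: a p-by-p Toeplitz matrix built from a vector of lags
  -- r indexed 0 .. p-1 (the names are illustrative only).
  toeplitz :: Int -> Array Int Double -> Array (Int, Int) Double
  toeplitz p r = genArray ((1,1),(p,p)) (\(i,j) -> r ! abs (i - j))

A compiler could then treat such a restricted form as the unit of
optimization, instead of having to recover the structure from an arbitrary
array comprehension.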
The SAC homepage is at http://www.informatik.uni-kiel.de/~sacbase/.
Björn Lisper