[Haskell] [Fwd: Re: Computer Language Shootout]

John Meacham john at repetae.net
Thu Jan 25 18:08:49 EST 2007


On Thu, Jan 25, 2007 at 08:55:37AM +0000, Simon Marlow wrote:
> Clean has also declined in these benchmarks, but not as much as Haskell.
> According to John van Groningen, Clean's binary-trees program in the previous
> shootout version used a lazy data structure, which resulted in lower memory
> usage and much faster execution. That was removed by the maintainer of the
> shootout and replaced by a much slower one using a strict data structure.

Why was this done?

I notice that a lot of people espouse the 'strictness is good, lazy is
bad' methodology of optimization. We really should be careful about
that: naive additions of seqs (or, even worse, deepSeqs!) can kill the
algorithmic performance of code and create bad space leaks. Lazy is
good: it is much better to never compute something at all than to compute
it just a bit faster. Haskell is lazy by default, and users need to start
thinking in terms of laziness by default to fully take advantage of
Haskell's goodness.
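
For instance, something like the following (an untested sketch; 'expensive',
'firstBig' and the threshold are made up purely for illustration) shows how a
blanket deepseq does a great deal of work that laziness would have skipped
entirely:

    -- illustrative example, not from the original post
    import Control.DeepSeq (force)
    import Data.Maybe (listToMaybe)

    -- stands in for some costly computation
    expensive :: Int -> Int
    expensive n = sum [1 .. n]

    -- lazy: only evaluates elements until the first match is found
    firstBig :: [Int] -> Maybe Int
    firstBig = listToMaybe . filter (> 1000) . map expensive

    -- 'shotgun' strictness: forces every element up front, even though
    -- only the first few dozen are ever needed
    firstBigStrict :: [Int] -> Maybe Int
    firstBigStrict = listToMaybe . filter (> 1000) . force . map expensive

    main :: IO ()
    main = do
      print (firstBig [1 .. 10000])        -- returns quickly
      print (firstBigStrict [1 .. 10000])  -- same answer, far more work

Both give the same result, but the strict version evaluates every element of
the mapped list before looking at any of them.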

Not that careful manipulation of strictness isn't key to good
performance, but the 'shotgun' approach of just adding 'deepSeq's and
'seq's everywhere (without benchmarking to see whether it actually helps)
should be avoided, and certainly not advocated to new users.
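
The classic example of a targeted strictness fix is the lazy accumulator in a
left fold: one change, foldl to foldl', forces just the accumulator at each
step, and no deepSeq over the whole input is needed (a sketch, assuming you
compile without enough optimization for GHC's strictness analyser to rescue
the lazy version anyway):

    -- illustrative example, not from the original post
    import Data.List (foldl')

    -- leaks: the accumulator grows as a chain of (+) thunks
    sumLeaky :: [Int] -> Int
    sumLeaky = foldl (+) 0

    -- fixed: foldl' evaluates the Int accumulator at each step
    sumOk :: [Int] -> Int
    sumOk = foldl' (+) 0

    main :: IO ()
    main = print (sumOk [1 .. 1000000])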

Although it is especially hard to generalize optimization rules for
Haskell, I think the closest one can get to a rule of thumb is "make
operations on integral or basic types as strict as possible, make
everything else as lazy as possible." At least, that is as far as one
should go without benchmarks or some good reasoning to justify manipulating
strictness further.
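
As a sketch of that rule (the 'Stats' type here is purely illustrative):
strict fields for the Int counters, a lazy field for everything else:

    -- illustrative example, not from the original post
    import Data.List (foldl')

    data Stats = Stats
      { count   :: !Int    -- basic type: keep it strict
      , total   :: !Int    -- basic type: keep it strict
      , samples :: [Int]   -- structured data: leave it lazy
      }

    addSample :: Stats -> Int -> Stats
    addSample (Stats c t xs) x = Stats (c + 1) (t + x) (x : xs)

    main :: IO ()
    main = do
      let s = foldl' addSample (Stats 0 0 []) [1 .. 10]
      print (count s, total s)   -- the samples list is never forced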

        John

-- 
John Meacham - ⑆repetae.net⑆john⑈

