[Haskell-cafe] Re: strict Haskell dialect
cgibbard at gmail.com
Mon Feb 6 02:25:54 EST 2006
On 05/02/06, Jan-Willem Maessen <jmaessen at alum.mit.edu> wrote:
> On Feb 5, 2006, at 2:02 PM, Brian Hulley wrote:
> > ...
> > I wonder if current compilation technology for lazy Haskell (or
> > Clean) has reached the theoretical limits on what is possible for
> > the compiler to optimize away, or if it is just that optimization
> > has not received so much attention as work on the type system etc?
> I would answer resoundingly that there is still a good deal to
> learn / perfect in the compilation technology, but there's been a
> lack of manpower/funding to make it happen.
> -Jan-Willem Maessen
Besides, haven't you heard of the full-employment theorem for
compiler-writers? ;) To paraphrase it: for every optimising compiler,
there is another which does better on at least one program.
In any event, as long as compilers which preserve non-strict
semantics aren't turning naively written code into programs as fast
as the corresponding hand-tuned C-plus-assembly, there's still
plenty of room for improvement. It just takes a lot of time,
resources, and effort, as mentioned, to make that (or a more
reasonable approximation to it) happen.
Perhaps some small amount of overhead will always be needed to
implement programs with non-strict semantics (short of solving the
halting problem), but I think that with a lot of hard work this is
something which could be squeezed down a lot, perhaps to the point
of being negligible. (It's already negligible for many, perhaps even
most, applications on modern hardware.)
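As a sketch (my example, not from the thread) of the kind of
overhead meant here: a lazy left fold builds a chain of thunks that
a strict accumulator avoids, and GHC's strictness analyser can often
make that transformation automatically.

```haskell
import Data.List (foldl')

-- The lazy left fold accumulates unevaluated thunks
-- (((0+1)+2)+3)... which are only forced at the very end.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- The strict variant forces the accumulator at each step and runs
-- in constant space; with optimisation on, strictness analysis can
-- often derive this version from the lazy one.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumLazy [1 .. 10000] == sumStrict [1 .. 10000])
```

Both folds compute the same result; only the evaluation order (and
hence the space behaviour) differs.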
I think that as programming languages become higher level, one has
more and more fun opportunities to optimise which would be much
harder to locate and attempt in lower-level languages. For example,
knowing that a piece of code is a 'map' or 'foldr' operation, the
compiler can apply algebraic rules at that higher level, performing
fusion. This has been done to some extent, but perhaps there are
much deeper things which could be done at that level.
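To illustrate the sort of algebraic rule I mean (my sketch, not from
the thread): the map/map law lets two list traversals fuse into one,
eliminating the intermediate list.

```haskell
-- Two separate traversals of the list, allocating an intermediate
-- list between them...
twoPass :: [Int] -> [Int]
twoPass = map (+ 1) . map (* 2)

-- ...fused into a single traversal by the law
--   map f . map g = map (f . g)
-- GHC expresses transformations like this via {-# RULES #-}
-- pragmas and its foldr/build fusion framework.
onePass :: [Int] -> [Int]
onePass = map ((+ 1) . (* 2))

main :: IO ()
main = print (twoPass [1 .. 5] == onePass [1 .. 5])
```

The point is that the compiler can only apply such a law because it
knows, at the high level, that the code really is a 'map'.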
At a lower level, special optimisers could be used in native code
generation, taking advantage of the fact that there is little or no
state to carry around and no real side effects (potentially reducing
the loads, stores, and operating-system calls one would have to do).
One might choose an appropriate scheduler which handled code
differently based on the particular higher-order function in which
it was wrapped, since different structures of computation put
different kinds of strain on any given processor.
Anyway, don't ever fool yourself into thinking that any
otherwise-reasonable language is somehow inherently slow. There's
always potential for a better implementation.