[Haskell-cafe] Re: Why can't Haskell be faster?
Neil Mitchell
ndmitchell at gmail.com
Wed Oct 31 11:37:12 EDT 2007
Hi
I've been working on optimising Haskell for a little while
(http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
on this. The Clean and Haskell languages both reduce to pretty much
the same Core language, with pretty much the same type system, once
you get down to it - so I don't think the performance difference is a
language thing; it is a compiler thing. Clean's uniqueness typing may
give it a slight benefit, but I'm not sure how much they use it in
their analyses.
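As an aside, you can see the Core GHC produces for your own code with
-ddump-simpl. The module below is just a made-up illustration, and the
exact output format varies between GHC versions:

-- Example.hs: compile with  ghc -O2 -ddump-simpl Example.hs
-- to print the simplified Core after optimisation.
module Example where

sumTo :: Int -> Int
sumTo n = sum [1 .. n]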
Both Clean and GHC do strictness analysis - I don't know which one does
it better, but both do quite well. I think Clean has a generalised
fusion framework, while GHC relies on rewrite rules and short-cut
deforestation. GHC goes through C-- to C or native assembly, while
Clean has been generating native code for a lot longer. GHC is based on
the STG machine, while Clean is based on the ABC machine - I'm not sure
which is better, but there are differences there.
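By "rules" I mean GHC's RULES pragmas; the heart of short-cut
deforestation is the foldr/build rule, which looks roughly like this (a
self-contained sketch with its own copy of build, not GHC's exact
source):

{-# LANGUAGE RankNTypes #-}
module Fusion where

-- A local copy of the 'build' combinator (the real one is exported
-- from GHC.Exts); it abstracts list construction over (:) and [].
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- The foldr/build rule: a list that is built and then immediately
-- consumed by foldr never needs to be materialised at all.
{-# RULES
"foldr/build sketch"
    forall k z (g :: forall b. (a -> b -> b) -> b -> b).
    foldr k z (build g) = g k z
  #-}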
My guess is that the native code generator in Clean beats GHC's, which
wouldn't be too surprising, as GHC's CPS conversion and register
allocator are currently being rewritten to produce better native code.
Thanks
Neil
On 10/31/07, Jeff.Harper at handheld.com <Jeff.Harper at handheld.com> wrote:
>
> Peter Hercek wrote:
> > * it is easy to mark stuff strict (even in function signatures
> > etc), so it is possible to save on unnecessary CAF creations
>
> Also, the Clean compiler has a strictness analyzer. The compiler will
> analyze code and find many (but not all) cases where a function argument can
> be made strict without changing the behavior of the program.
>
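In GHC you can write that sort of strictness annotation yourself with
seq or bang patterns - for example, making an accumulator strict by
hand, which is exactly the kind of thing the strictness analyser tries
to infer for you under -O. This is just a made-up illustration, not
code from either compiler:

{-# LANGUAGE BangPatterns #-}
module StrictSum where

-- Lazy accumulator: 'acc' builds up a chain of thunks that is only
-- forced at the very end, so this can use a lot of memory.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []       = acc
    go acc (x : xs) = go (acc + x) xs

-- Strict accumulator: the bang forces 'acc' at every step, so the
-- fold runs in constant space. With optimisation on, the strictness
-- analyser will usually make this change itself.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs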