[Haskell-cafe] Re: Why can't Haskell be faster?
Stefan Holdermans
stefan at cs.uu.nl
Thu Nov 1 07:04:51 EDT 2007
Neil wrote:
> The Clean and Haskell languages both reduce to pretty much
> the same Core language, with pretty much the same type system, once
> you get down to it - so I don't think the difference between the
> performance is a language thing, but it is a compiler thing. The
> uniqueness type stuff may give Clean a slight benefit, but I'm not
> sure how much they use that in their analyses.
From what I know of the Nijmegen team's work, having the uniqueness
information available and actually using it for code generation
allows for an impressive speed-up.
The thing is: in principle, there is, I think, no reason why we can't
do the same thing for Haskell. Of course, the Clean language
exposes uniqueness types at its surface level, but that is in no way
essential to the underlying analysis. Exposing uniqueness types is,
in that sense, just an alternative to monadic encapsulation of side
effects. So, a Haskell compiler could just implement a uniqueness
analysis under the hood and use the results for generating code that
does in-place updates that are guaranteed to maintain referential
transparency.
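To make that concrete, here is a minimal Haskell sketch (the names and
the use of Data.Array are my own illustration, not anything from an
actual compiler): the pure update a programmer writes, next to the
kind of destructive update a uniqueness-aware compiler could generate
once it has proved the array argument is used exactly once. Note that
`thaw` below copies the array, so this version is merely
semantics-preserving; a real compiler would emit a copy-free thaw,
safe precisely because uniqueness guarantees no other reference to the
array survives.

```haskell
import Data.Array
import Data.Array.ST

-- Pure update: semantically builds a new array.
setPure :: Array Int Int -> Int -> Int -> Array Int Int
setPure arr i x = arr // [(i, x)]

-- What a uniqueness-aware compiler could generate for the same call
-- site when the argument is known to be unique: mutate in place
-- (here simulated with a thawed copy), invisible to the caller.
setInPlace :: Array Int Int -> Int -> Int -> Array Int Int
setInPlace arr i x = runSTArray $ do
  marr <- thaw arr       -- a real compiler would avoid this copy
  writeArray marr i x    -- the in-place update itself
  return marr
```

Both functions are observationally equivalent, which is exactly the
referential-transparency guarantee the analysis has to establish.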
Interestingly---but now I'm shamelessly plugging a paper of Jurriaan
Hage, Arie Middelkoop, and myself, presented at this year's ICFP
[*]---such an analysis is very similar to sharing analysis, which may
be used by compilers for lazy languages to avoid unnecessary thunk
updates.
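As a tiny illustration of the distinction such a sharing analysis
tracks (my own example, not taken from the paper): a thunk that is
demanded more than once must be updated with its value so the work is
done only once, whereas a thunk demanded at most once could skip the
update entirely.

```haskell
import Debug.Trace (trace)

-- 'x' is shared: it is demanded twice, but GHC evaluates the thunk
-- once and updates it with the result, so "eval x" is traced once.
sharedUse :: Int
sharedUse = let x = trace "eval x" (1 + 2) in x + x

-- 'y' is demanded at most once; updating its thunk after evaluation
-- is wasted work, which a sharing (usage) analysis can detect.
singleUse :: Int
singleUse = let y = trace "eval y" (3 + 4) in y + 1
```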
Cheers,
Stefan
[*] Jurriaan Hage, Stefan Holdermans, and Arie Middelkoop. A generic
usage analysis with subeffect qualifiers. In Ralf Hinze and Norman
Ramsey, editors, Proceedings of the 12th ACM SIGPLAN International
Conference on Functional Programming, ICFP 2007, Freiburg, Germany,
October 1–3, pages 235–246. ACM Press, 2007.
http://doi.acm.org/10.1145/1291151.1291189