[Haskell-cafe] Can't Haskell catch up with Clean's uniqueness typing?
duncan.coutts at worc.ox.ac.uk
Thu Dec 8 15:09:47 EST 2005
On Thu, 2005-12-08 at 11:29 -0800, Jeremy Shaw wrote:
> > Why should inferring uniqueness be all that fragile? A uniqueness checker can be
> > rather robust, as is demonstrated by the Clean one.
> Fragile could refer to the fact that a relatively small-looking change
> to your code could have an enormous impact on the runtime of the code,
> because you unknowingly changed a value from being used uniquely to
> being used non-uniquely.
> In Clean, the annotations allow you to enforce the uniqueness, so this
> change would be caught by the type-checker. But, if the uniqueness is
> *only* inferred, then the user has to be very careful about ensuring
> uniqueness if they want performance gains associated with it -- and
> they have to do it without the help of the type-checker.
> Having written a bit of Clean code, I can say that it is very easy to
> accidentally un-uniquify things.
This is an example of what I call "clever compiler syndrome", where the
potential speed benefit of an optimisation is rendered much less useful.
If you need the speed gained by the optimisation kicking in, then you
need to be able to guarantee it, or at least accurately check that it is
kicking in. So if it can be lost by simple and non-obvious code changes,
then you cannot rely on the optimisation, and so it is not useful. The
only case in which it is a benefit is when it happens accidentally as a
bonus, but in that case you never needed the optimisation in the first
place.
We already have this issue in Haskell with strictness.
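As a minimal sketch of the strictness version of this problem (my own
example, not from the thread): the two functions below compute the same
sum, but the first forces its accumulator at every step, so GHC's
strictness analyser can keep it unboxed, while the second leaves the
accumulator lazy, so each step can allocate a thunk instead. A small,
innocent-looking refactoring from one to the other silently loses the
optimisation.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Strict accumulator: the bang pattern forces acc at each step, so the
-- strictness analyser sees it is always demanded and can unbox it.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

-- Same result, but acc is only demanded at the very end, so without
-- help from the analyser each step builds a thunk (acc + x).
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs
```

Both give the same answer; only the run-time behaviour differs, which
is exactly why the change is so easy to make without noticing.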
I think what we need is better performance and source code analysis
tools, which might simply be viewers for the information produced by the
compiler about what its optimisation analysis algorithms are actually
coming up with.
For example it's not currently convenient to find out the strictness
that ghc infers for functions (though it is possible). Ideally an IDE or
something would be able to present this sort of information along with
the inferred type etc.
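As a rough sketch of what is already possible (the file name and
function are mine): compiling a module with optimisation and one of
GHC's dump flags, e.g. "ghc -O -ddump-stranal StrExample.hs", prints
the demand information the strictness analyser infers, which an IDE
could in principle present inline next to the inferred types.

```haskell
-- StrExample.hs (hypothetical file name). Compiling with
--   ghc -O -ddump-stranal StrExample.hs
-- dumps GHC's strictness analysis output, from which one can read off
-- that sumTo is strict in both arguments and they can be unboxed.
sumTo :: Int -> Int -> Int
sumTo acc n
  | n <= 0    = acc
  | otherwise = sumTo (acc + n) (n - 1)
```

Today this output is aimed at compiler developers rather than users,
which is precisely the inconvenience complained about above.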
So if it were easy to find out the uniqueness that the compiler was
inferring, then such an inference might actually be useful to people,
since they would be able to check that it was actually kicking in and
modify their code if it were not. You would also want to be able to ask
the question "why is it not unique here when I expect it to be?", just
as the compiler currently answers our question of why the type is not
what we expect it to be at some place in the program.