[Haskell-cafe] Can't Haskell catch up with Clean's uniqueness typing?

haskell-cafe.mail.zooloo at xoxy.net
Thu Dec 8 13:34:47 EST 2005


On Thu, Dec 08, 2005 at 06:38:53PM +0100, haskell-cafe.mail.zooloo at xoxy.net wrote:
> ----- Original Message -----
> From: "Tomasz Zielonka - tomasz.zielonka at gmail.com"
> Sent: Wednesday, December 07, 2005 8:53 PM
> 
> > > Clean-like _explicit_ uniqueness typing is not what I'm asking for in Haskell.
> >
> > So you want implicit, automatically inferred uniqueness typing -
> > something that would be even more fragile and sensitive than the
> > space problems Haskell currently has arising from laziness? ;-)
> 
> Why should inferring uniqueness be all that fragile?

It's just that I automatically thought about how uniqueness typing is
used in Clean, especially as a way to achieve O(1) array updates.

It would be nice if Haskell could infer that an *immutable* array given
as a parameter to an update operation (like //) can be reused to build
the result in place. But if you wrote code with such an optimisation
in mind, you could be very disappointed with the performance if the
compiler didn't perform it.
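
For instance (a minimal sketch; the names and the loop are mine, purely
for illustration):

    import Data.Array

    -- Each (//) conceptually builds a fresh immutable array, so this
    -- loop is O(n^2) overall, unless the compiler could prove that the
    -- old array is dead after each update and reuse its storage.
    fill :: Int -> Array Int Int
    fill n = go (listArray (0, n - 1) (replicate n 0)) 0
      where
        go a i
          | i >= n    = a
          | otherwise = go (a // [(i, i)]) (i + 1)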

I think that in the case of arrays I would still prefer to use mutable
arrays to be sure that my program has the desired asymptotic complexity.
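
A sketch of the same loop with a mutable unboxed array in ST (again,
the names are mine), where the O(1) update is guaranteed by
construction rather than hoped for:

    import Data.Array.ST (newArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)

    -- writeArray mutates the array in place, so each update is O(1)
    -- regardless of what a uniqueness analysis would or wouldn't infer.
    fillST :: Int -> UArray Int Int
    fillST n = runSTUArray (do
        a <- newArray (0, n - 1) 0
        mapM_ (\i -> writeArray a i i) [0 .. n - 1]
        return a)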

Maybe you didn't intend to propose it for arrays? It would be fine to
use your idea for "smaller", constant-size things, like records, just
to reduce the constant factor.
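
For instance, a plain record update already allocates (Vec and bump are
hypothetical, just to illustrate):

    -- v { vx = ... } allocates a fresh Vec; if v were known to be
    -- unique, the compiler could overwrite the single changed field in
    -- place and save one small allocation per call.
    data Vec = Vec { vx, vy, vz :: Double }

    bump :: Vec -> Vec
    bump v = v { vx = vx v + 1 }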

> A uniqueness checker can be rather robust, as is demonstrated by the
> Clean one.

In Clean there is no surprise, because this optimisation is part of
the type system; it's part of the language definition.

> Whatever suggestion gets through the uniqueness checker, the resulting code can't
> consume more space, run slower, or be otherwise worse than the one without reusing
> unique nodes, can it?

It can be slower than the programmer expects, in terms of asymptotic
complexity. So this feature could be difficult to use when you want
reliable, efficient code.
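
Going back to the fill sketch above: one extra use of the old array
makes it shared rather than unique, so an inferred in-place update
would silently degrade to a copy (again an illustrative sketch):

    import Data.Array

    -- 'a' is used twice below, so it is not unique; (//) must really
    -- copy, and code relying on an inferred O(1) update silently gets
    -- O(n) behaviour instead, with no warning from the compiler.
    sumAndClear :: Array Int Int -> Int -> (Int, Array Int Int)
    sumAndClear a i = (sum (elems a), a // [(i, 0)])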

Of course, the unpredictable efficiency of Haskell programs is not a
new problem.

> > > It might be possible to get extremely fast code out of ghc, but as an
> > > overall impression, it's not easy, whilst Clean sort of gives it for
> > > granted (well, struggling with wrongly assigned uniqueness attributes
> > > aside).
> >
> > Well, C sort of gives it for granted too, because it is very difficult
> > to write inefficient, simple, specification-like code. I want to be able
> > to write simple and elegant code, even if it is inefficient!
> 
> errr..., could you give me some clue how you'd expect automatic uniqueness
> detection to deteriorate your coding style? I, for one, don't define elegance in terms of
> "not running too fast". ;)

I was referring to UT as in Clean, not to the automatic UT you propose.

You said "Clean sort of gives it for granted", with it = "extremely fast
code". I don't yet know of a language that _grants_ extremely fast code
without being more low-level.

OK, I am nitpicking ;-)

> To be honest, your reply feels like a counterpart to the likewise
> questionable monad avoidance in Clean.

My English comprehension failed me here. Could you expand this
sentence using simpler words?

Best regards
Tomasz

-- 
I am searching for a programmer who is good at least in some of
[Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland

