[Haskell-cafe] Can't Haskell catch up with Clean's uniqueness typing?

Robert Dockins robdockins at fastmail.fm
Tue Dec 6 18:19:22 EST 2005


On Tuesday 06 December 2005 04:00 pm, haskell-cafe.mail.zooloo at xoxy.net wrote:
> From: "Shae Matijs Erisson - shae at ScannedInAvian.com"
> Sent: Tuesday, December 06, 2005 6:16 PM
>
> > haskell-cafe.mail.zooloo at xoxy.net writes:
> > > Being occupied with learning both languages, I'm getting curious whether
> > > Haskell couldn't achieve most of the performance gains resulting from
> > > uniqueness typing in Clean by *automatically* determining the reference
> > > count of arguments wherever possible and subsequently allowing them to
> > > be physically replaced immediately by (the corresponding part of) the
> > > function's result. Are there any principal obstacles, or *could* this
> > > be done, or *is* this even done already, e.g. in GHC?
> >
> > Maybe you're describing speculative evaluation?
> >
> > Optimistic Evaluation: An Adaptive Evaluation Strategy for Non-Strict
> > Programs http://citeseer.ist.psu.edu/ennals03optimistic.html
> > --
>
> Thanks for the pointer - I have heard a little about optimistic evaluation
> already, but don't know much of the details (yet). Anyway, from what I
> know, I think it's a different thing.
>
> In Clean, you can (and often are required to) assign uniqueness attributes
> to some parts of a function's type signature. The extended type checker
> ensures that none of those parts is referred to more than once during a
> single run of the program. Based on this guarantee, a function does not
> have to allocate new memory at all to store a unique result but can
> overwrite the unique arguments in place.
>
> Apparently, the uniqueness assignments have to comply with very tight laws -
> getting a program through the Clean type checker can be tough once it
> reports a uniqueness coercion error. I suppose no explicit uniqueness
> attribution is going to be implemented in Haskell anyway.
>
> My question is - and this might be better suited to Haskell - can't
> uniqueness be inferred (and exploited) automatically in many cases?
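
As a point of reference before answering: the way a Haskell programmer gets 
guaranteed in-place updates today is to thread a mutable structure through the 
ST monad, where single-threaded use is enforced by the monad rather than 
declared (or inferred) as a uniqueness attribute.  A minimal sketch, purely 
for illustration (the function and its details are mine):

import Control.Monad.ST
import Data.Array.ST
import Data.Array.Unboxed (elems)

-- Illustration only; this is NOT uniqueness typing.  The ST monad
-- guarantees the mutable array is used single-threadedly, so every
-- writeArray really is an overwrite of the same memory.
bumpAll :: [Int] -> [Int]
bumpAll xs = elems (runSTUArray build)
  where
    n = length xs
    build :: ST s (STUArray s Int Int)
    build = do
      arr <- newListArray (0, n - 1) xs
      mapM_ (\i -> readArray arr i >>= writeArray arr i . (+ 1)) [0 .. n - 1]
      return arr

What's being asked, as I read it, is whether the compiler could discover such 
update-in-place opportunities in ordinary pure code, with no rewriting by the 
programmer.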

Yes, probably.  There is a technique called sharing analysis that attempts to 
determine when a data structure is referenced only once (i.e., NOT shared).  
If you can prove a data structure node is not shared, then you can reuse it 
destructively.
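
To give a feel for what such an analysis computes, here is a toy version over 
a made-up expression language: count how many times each let-bound variable 
occurs, and treat a binding whose count never exceeds one as a candidate for 
destructive reuse.  This is my own cartoon of the idea, much cruder than any 
published analysis (it ignores closures, case branches, and so on):

import qualified Data.Map as M

-- A made-up expression language, for illustration only.
data Expr
  = Var String
  | Lit Int
  | Cons Expr Expr          -- allocates a heap cell
  | Add Expr Expr
  | Let String Expr Expr    -- let x = e1 in e2

-- Count syntactic occurrences of each variable: a crude stand-in
-- for a run-time reference count.
occurrences :: Expr -> M.Map String Int
occurrences e = case e of
  Var x     -> M.singleton x 1
  Lit _     -> M.empty
  Cons a b  -> M.unionWith (+) (occurrences a) (occurrences b)
  Add a b   -> M.unionWith (+) (occurrences a) (occurrences b)
  Let x a b -> M.unionWith (+) (occurrences a) (M.delete x (occurrences b))

-- A let-bound structure mentioned at most once in its body is a
-- candidate for destructive reuse.
unsharedLets :: Expr -> [String]
unsharedLets e = case e of
  Let x a b -> [x | M.findWithDefault 0 x (occurrences b) <= 1]
                 ++ unsharedLets a ++ unsharedLets b
  Cons a b  -> unsharedLets a ++ unsharedLets b
  Add a b   -> unsharedLets a ++ unsharedLets b
  _         -> []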

Here is a paper on the technique.  It's written for Lisp cons cells, but one 
might be able to generalize the technique to algebraic data types.  I don't 
know where to find a free copy.

http://portal.acm.org/citation.cfm?id=99375
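
To spell out what "reuse it destructively" buys: think of a cons cell as a 
pair of mutable fields.  Once the analysis has proved the spine of a list is 
unshared, the compiler is free to turn something like 'map f' into code that 
overwrites those fields instead of allocating a fresh spine.  The following is 
only a hand-written model of that generated code (the names are made up), not 
something you would write in a real program:

import Data.IORef

-- A cons cell with mutable fields, purely to illustrate what
-- "reuse it destructively" means at the node level.
data Cell = Nil | Cons (IORef Int) (IORef Cell)

fromList :: [Int] -> IO Cell
fromList []     = return Nil
fromList (x:xs) = do
  hd <- newIORef x
  tl <- newIORef =<< fromList xs
  return (Cons hd tl)

toList :: Cell -> IO [Int]
toList Nil          = return []
toList (Cons hd tl) = do
  x  <- readIORef hd
  xs <- toList =<< readIORef tl
  return (x : xs)

-- In-place map: overwrites each head field and allocates no new cells,
-- which is the kind of code a compiler could emit for 'map f' once the
-- spine is known to be unshared.
mapInPlace :: (Int -> Int) -> Cell -> IO ()
mapInPlace _ Nil          = return ()
mapInPlace f (Cons hd tl) = do
  modifyIORef hd f
  mapInPlace f =<< readIORef tl

None of this would be visible at the source level, of course; the whole point 
of the analysis is that the compiler proves the overwrite is safe.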


There has also been some similar work done along these lines for Mercury (a 
logic programming language).

http://www.cs.mu.oz.au/research/mercury/information/papers.html

Search for papers with the word "reuse" in the title.  I'm not very familiar 
with this work, so I don't know how applicable this might be.

