Optimizations for mutable structures?

Seth Kurtzberg seth at cql.com
Thu Dec 8 05:15:42 EST 2005

On Thu, 2005-12-08 at 09:40 +0000, Simon Marlow wrote:
> On 07 December 2005 19:57, Claus Reinke wrote:
> > there seem to be two issues here - can we agree on that at least?
> > 
> > 1) scheduling non-sequential programs on sequential processors
> > 
> > i wasn't arguing that the scheduler should realise all possible
> > interleavings at once. the issue i was referring to is known as
> > "fairness" in concurrent systems literature. as with referential
> > transparency, various non-equivalent definitions are in use,
> > but one informal description might be something like:
> > 
> >     if, for a given experiment, some event is possible according to
> >     the semantics, it should happen eventually if the experiment is
> >     repeated often enough.
> > 
> > see, eg,
> >
> http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-fairness
> Yes, good point.  Regarding fairness, I've been working with the
> assumption that we don't need to preserve (certain kinds of) fairness
> properties when performing optimisations, whereas you and others want to
> preserve all fairness.
> Why don't I care about preserving fairness?  Well you said it - it's not
> practical to implement.  And since the implementation already doesn't
> guarantee strong fairness (or whatever it's called), it doesn't do any
> harm to weaken the fairness that we do provide, because programmers
> can't rely on strong fairness anyway.  In practical terms, you won't
> notice the difference.
> Furthermore, I'd go so far as to say that a program that relies on the
> kind of fairness we've been talking about (arbitrary interleaving) for
> correctness or termination, is broken.
> We certainly *do* care about some kind of fairness.  The property that
> we try to maintain is something like this:
>   If a thread is unblocked according to the semantics, then the
>   implementation ensures that it runs after a finite time.

This is the context in which I've always seen the term fairness applied,
to wit, with respect to thread starvation, livelock, etc.
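
The fairness property Simon states above (an unblocked thread runs after
a finite time) can be illustrated with a minimal GHC sketch; the names
and the delay value here are just illustrative, not anything prescribed
by the thread:

```haskell
import Control.Concurrent

main :: IO ()
main = do
  done <- newEmptyMVar
  -- The worker blocks on the MVar and is unrunnable until it is filled.
  _ <- forkIO $ do
    msg <- takeMVar done      -- blocked here until main does putMVar
    putStrLn msg
  -- After this putMVar the worker is unblocked according to the
  -- semantics; the quoted property says it must then actually run
  -- after a finite time.
  putMVar done "worker ran"
  threadDelay 100000          -- give the scheduler a chance (0.1 s)
```

Note that the property says nothing about *when* the worker runs
relative to other runnable threads, only that it is not starved forever.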

Fairness certainly isn't a requirement for correctness.  This is not to
say it isn't desirable or important, although I'd argue that it is less
important than correctness.  Perhaps it would be better to say that if
the program isn't correct, its performance w.r.t. fairness is moot.
The fundamental requirement is the same for all languages, I believe:
the concurrently executing threads must produce a system state that is
identical to _one_ system state which would be produced by running the
threads sequentially.  Merely enumerating all the possible interleavings
is intractable; their number grows exponentially with the number of
operations.  Beyond the requirement of serializability, there is no
practical alternative to a dose of human intelligence.  At least people
coding in Haskell have an understanding of the underlying issues.  Alas,
this is far from true for even experienced coders of imperative
languages.
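
The serializability requirement can be made concrete with a small
Haskell sketch (the thread and variable names are illustrative): two
threads bump a shared counter under an MVar, and whatever interleaving
the scheduler picks, the final state is the one some sequential order of
the same operations would have produced:

```haskell
import Control.Concurrent
import Control.Monad (replicateM_)

main :: IO ()
main = do
  counter <- newMVar (0 :: Int)
  done1   <- newEmptyMVar
  done2   <- newEmptyMVar
  -- Each increment takes and puts the MVar, so increments from the two
  -- threads cannot interleave mid-update.
  let bump = replicateM_ 1000 (modifyMVar_ counter (return . (+ 1)))
  _ <- forkIO (bump >> putMVar done1 ())
  _ <- forkIO (bump >> putMVar done2 ())
  takeMVar done1
  takeMVar done2
  -- Any serial schedule of 2000 increments yields 2000, and so does
  -- every concurrent schedule here.
  readMVar counter >>= print
```

Which of the exponentially many interleavings actually occurred is
unobservable; serializability only promises the result matches *some*
sequential execution.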


> I'm not familiar with the fairness literature, perhaps this property is
> known?
> You may well call our implementation incomplete because it doesn't
> implement full fairness.  I don't think that's useful though; I'd much
> rather characterise exactly what fairness property we can and do
> implement, and use that for reasoning with.
> [snip]
> > 2) compiler optimisation validity in sequential and non-sequential
> >     environments
> > 
> > the original motivation of this thread - are sequential
> > transformations still valid in a non-sequential setting?
> > 
> > in general, the answer is no, to the surprise/annoyance of many many
> > compiler implementors who want their valuable optimisations to work
> > even when their sequential language acquires threads.
> I don't think this applies in our setting.  The reason we have IORef and
> {M,T}Vars, and not just a single mutable reference type, is that
> {M,T}Vars provide strong consistency guarantees when used to communicate
> between threads, whereas IORefs do not.  Hence, IORefs can be
> implemented with far fewer restrictions.
> If you share IORefs between threads and run on a multiprocessor, you are
> at the mercy of both sequential optimisations and your architecture's
> memory consistency guarantees.  In other words, don't do it.  Use
> communication primitives that have strong properties in a multi-threaded
> setting.
> > i'd really, really prefer concurrent haskell not to go down a route
> > in which demands of simpler implementation leads to subtle problems
> > in reasoning about programs.
> If you think any of this impacts your ability to reason about programs,
> please elaborate - I don't think it does.  
> Cheers,
> 	Simon
> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
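
Simon's distinction between IORefs and the stronger communication
primitives can be sketched in a few lines (names illustrative): an IORef
used for mutation within a single thread, where it is unproblematic, and
an MVar used to hand a value between threads, where the strong
guarantees are needed:

```haskell
import Control.Concurrent
import Data.IORef

main :: IO ()
main = do
  -- IORef: fine for single-threaded mutation; no cross-thread
  -- consistency guarantees, per the discussion above.
  r <- newIORef (0 :: Int)
  modifyIORef r (+ 1)
  readIORef r >>= print

  -- MVar: the take/put discipline gives the strong consistency
  -- guarantees appropriate for communication between threads.
  box <- newEmptyMVar
  _ <- forkIO (putMVar box "from the other thread")
  takeMVar box >>= putStrLn
```

The takeMVar in the main thread both synchronises with and receives the
value from the forked thread, which is exactly the kind of cross-thread
hand-off an IORef does not promise to get right.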
