Optimizations for mutable structures?
Simon Marlow
simonmar at microsoft.com
Thu Dec 8 04:40:18 EST 2005
On 07 December 2005 19:57, Claus Reinke wrote:
> there seem to be two issues here - can we agree on that at least?
>
> 1) scheduling non-sequential programs on sequential processors
>
> i wasn't arguing that the scheduler should realise all possible
> interleavings at once. the issue i was referring to is known as
> "fairness" in concurrent systems literature. as with referential
> transparency, various non-equivalent definitions are in use,
> but one informal description might be something like:
>
> if, for a given experiment, some event is possible according to
> the semantics, it should happen eventually if the experiment is
> repeated often enough.
>
> see, eg,
>
> http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-fairness
Yes, good point. Regarding fairness, I've been working with the
assumption that we don't need to preserve (certain kinds of) fairness
properties when performing optimisations, whereas you and others want to
preserve all fairness.
Why don't I care about preserving fairness? Well, you said it yourself:
it's not practical to implement. And since the implementation already
doesn't guarantee strong fairness (or whatever the precise term is), it
does no harm to weaken the fairness that we do provide, because
programmers can't rely on strong fairness anyway. In practical terms,
you won't notice the difference.
Furthermore, I'd go so far as to say that a program that relies on the
kind of fairness we've been talking about (arbitrary interleaving) for
correctness or termination is broken.
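To make that concrete, here is the kind of program I have in mind (a
throwaway sketch, nothing more): the main thread busy-waits on an IORef
that a forked thread is supposed to set:

    import Control.Concurrent
    import Control.Monad (unless)
    import Data.IORef

    main :: IO ()
    main = do
      flag <- newIORef False
      _ <- forkIO (writeIORef flag True)   -- the writer thread
      let spin = do
            set <- readIORef flag
            unless set spin                -- busy-wait on the flag
      spin
      putStrLn "terminated"

Whether this terminates depends entirely on the scheduler interleaving
the two threads: a cooperative implementation is free to keep running
the spinning thread and never get around to the writer. Relying on that
interleaving for termination is exactly what I mean by broken; blocking
on an MVar expresses the same thing without the assumption.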
We certainly *do* care about some kind of fairness. The property that
we try to maintain is something like this:
    If a thread is unblocked according to the semantics, then the
    implementation ensures that it runs after a finite time.
I'm not familiar with the fairness literature, perhaps this property is
known?
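To give a concrete (if rather contrived) sketch of what that property
means in practice: a thread blocked in takeMVar becomes runnable as soon
as another thread fills the MVar, and the implementation makes sure it
actually gets to run within a finite time:

    import Control.Concurrent
    import Control.Concurrent.MVar

    main :: IO ()
    main = do
      box  <- newEmptyMVar
      done <- newEmptyMVar
      _ <- forkIO $ do
             msg <- takeMVar box     -- blocks until box is filled
             putStrLn msg            -- runs after a finite time once unblocked
             putMVar done ()
      putMVar box "unblocked"        -- wakes the forked thread
      takeMVar done                  -- wait for it to finish

That is the guarantee we aim to provide, and I believe it is the one
programs actually depend on.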
You may well call our implementation incomplete because it doesn't
implement full fairness. I don't think that's useful though; I'd much
rather characterise exactly what fairness property we can and do
implement, and use that for reasoning with.
[snip]
> 2) compiler optimisation validity in sequential and non-sequential
> environments
>
> the original motivation of this thread - are sequential
> transformations still valid in a non-sequential setting?
>
> in general, the answer is no, to the surprise/annoyance of many many
> compiler implementors who want their valuable optimisations to work
> even when their sequential language acquires threads.
I don't think this applies in our setting. The reason we have IORef and
{M,T}Vars, and not just a single mutable reference type, is that
{M,T}Vars provide strong consistency guarantees when used to communicate
between threads, whereas IORefs do not. Hence, IORefs can be
implemented with far fewer restrictions.
If you share IORefs between threads and run on a multiprocessor, you are
at the mercy of both sequential optimisations and your architecture's
memory consistency guarantees. In other words, don't do it. Use
communication primitives that have strong properties in a multi-threaded
setting.
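As a small sketch of the difference: a shared counter bumped from
several threads is straightforward with an MVar, because takeMVar and
putMVar serialise the updates:

    import Control.Concurrent
    import Control.Concurrent.MVar
    import Control.Monad (forM_, replicateM_)

    main :: IO ()
    main = do
      counter <- newMVar (0 :: Int)
      dones   <- mapM (const newEmptyMVar) [1 .. 4 :: Int]
      forM_ dones $ \d -> forkIO $ do
        replicateM_ 1000 $
          modifyMVar_ counter (return . (+ 1))   -- take, bump, put back
        putMVar d ()
      mapM_ takeMVar dones          -- wait for all the workers
      print =<< readMVar counter    -- always 4000

The same read-modify-write on a shared IORef might appear to work on a
uniprocessor, but nothing guarantees it on a multiprocessor, which is
exactly the point above.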
> i'd really, really prefer concurrent haskell not to go down a route
> in which demands of simpler implementation leads to subtle problems
> in reasoning about programs.
If you think any of this impacts your ability to reason about programs,
please elaborate - I don't think it does.
Cheers,
Simon