[Haskell-cafe] Can you do everything without shared-memory concurrency?
Sebastian Sylvan
sebastian.sylvan at gmail.com
Wed Sep 10 07:23:03 EDT 2008
2008/9/9 Bruce Eckel <bruceteckel at gmail.com>
> So this is the kind of problem I keep running into. There will seem to be
> consensus that you can do everything with isolated processes and message
> passing (and note here that I include Actors in this scenario even if their
> mechanism is more complex). And then someone will pipe up and say "well, of
> course, you have to have threads" and the argument is usually "for
> efficiency."
> I make two observations here which I'd like comments on:
>
> 1) What good is more efficiency if the majority of programmers can never
> get it right? My position: if a programmer has to explicitly synchronize
> anywhere in the program, they'll get it wrong. This of course is a point of
> contention; I've met a number of people who say "well, I know you don't
> believe it, but *I* can write successful threaded programs." I used to think
> that, too. But now I think it's just a learning phase, and you aren't a
> reliable thread programmer until you say "it's impossible to get right"
> (yes, a conundrum).
>
I don't see why this needs to be a religious either-or issue. As I said,
*when* isolated threads map well to your problem, they are more attractive
than shared-memory solutions (for correctness reasons), but preferring
isolated threads does not mean you should ignore the reality that they do
not fit every scenario well. There's no single superior
concurrency/parallelism paradigm (at least not yet), so the best we can do
for general-purpose languages is to recognize the relative
strengths/weaknesses of each and provide all of them.
>
> 2) What if you have lots of processors? Does that change the picture any?
> That is, if you use isolated processes with message passing and you have as
> many processors as you want, do you still think you need shared-memory
> threading?
>
Not really. There are still situations where you have large pools of
*potential* data with no way of figuring out ahead of time which pieces
you'll need to modify. So with explicit synchronisation, e.g. using isolated
threads to "own" the data, or with locks, you'll need to be conservative and
lock the whole world, which means you might as well run everything
sequentially. Note here that implementing this scenario using isolated
threads with message passing effectively boils down to simulating locks and
shared memory, as the sketch below illustrates - so if you're using shared
memory and locks anyway, why not have native (efficient) support for them?
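To make that concrete, here's a rough sketch (the names Request,
ownerThread and withOwned are mine, not from any library) of an isolated
thread that "owns" a piece of state, with all access funnelled through a
channel. Squint at it and it's just a mutex: clients block until the
owner serves their request, one at a time.

    import Control.Concurrent
    import Control.Concurrent.Chan

    -- A request asks the owner to apply a function to its state and
    -- to send the result back on a private reply slot.
    data Request s = Request (s -> (s, Int)) (MVar Int)

    -- The owner thread serializes all access to the state it holds.
    ownerThread :: s -> Chan (Request s) -> IO ()
    ownerThread st ch = do
      Request f reply <- readChan ch
      let (st', result) = f st
      putMVar reply result
      ownerThread st' ch

    -- "Acquiring the lock" is sending a request and blocking on the
    -- reply.
    withOwned :: Chan (Request s) -> (s -> (s, Int)) -> IO Int
    withOwned ch f = do
      reply <- newEmptyMVar
      writeChan ch (Request f reply)
      takeMVar reply

    main :: IO ()
    main = do
      ch <- newChan
      _ <- forkIO (ownerThread (0 :: Int) ch)
      n <- withOwned ch (\c -> (c + 1, c + 1))  -- bump the owned counter
      print n

All the coordination a lock would give you is here, just spelled with
messages - which is the point: you haven't escaped locking, you've
reimplemented it.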
As I said earlier, though, I believe the best way to synchronize shared
memory is currently STM, rather than manual locks (simulated with threads or
otherwise).
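For the curious, here's a minimal sketch of what that looks like with
GHC's Control.Concurrent.STM (the account setup and amounts are of
course made up for illustration):

    import Control.Concurrent.STM

    -- Move money between two shared balances. No locks are named
    -- anywhere; 'atomically' makes the whole block appear
    -- indivisible, and 'check' blocks the transaction (via retry)
    -- until there are sufficient funds.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)
      writeTVar from (balance - amount)
      toBalance <- readTVar to
      writeTVar to (toBalance + amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)
      atomically (readTVar a) >>= print  -- 70
      atomically (readTVar b) >>= print  -- 30

Composability is the other win: two such transfers can be glued
together inside a single 'atomically', which manual locks famously
can't do without inviting deadlock.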
>
> A comment on the issue of serialization -- note that any time you need to
> protect shared memory, you use some form of serialization. Even optimistic
> methods guarantee serialization, even if it happens after the memory is
> corrupted, by backing up to the uncorrupted state. The effect is the same;
> only one thread can access the shared state at a time.
>
Yes, the difference is that with isolated threads, or with manual locking,
the programmer has to somehow figure out ahead of time which pieces to lock,
or write manual transaction protocols with rollbacks etc. The ideal case is
that you have a runtime (possibly with hardware support) that lets you off
the hook and automatically does very fine-grained locking with optimistic
concurrency.
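GHC's STM already behaves much like that ideal runtime: each
transaction's read/write set is tracked dynamically, so which cells get
"locked" is decided by the data, not by the programmer. A contrived
sketch (bumpSelected is a name I've invented):

    import Control.Concurrent.STM

    -- Increment only the cells whose current value satisfies a
    -- predicate. We can't know the write set ahead of time; the
    -- runtime discovers it as the transaction runs, and only
    -- genuinely conflicting transactions get re-executed.
    bumpSelected :: (Int -> Bool) -> [TVar Int] -> STM ()
    bumpSelected wanted = mapM_ step
      where
        step cell = do
          v <- readTVar cell
          if wanted v then writeTVar cell (v + 1) else return ()

    main :: IO ()
    main = do
      cells <- mapM newTVarIO [1 .. 10 :: Int]
      atomically (bumpSelected even cells)
      mapM_ (\c -> atomically (readTVar c) >>= print) cells

With locks, the equivalent code either locks all ten cells up front or
carefully orders its lock acquisitions; here, two transactions that
happen to touch disjoint cells simply don't conflict.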
Isolated threads and locks are on the same side of this argument - they both
require the user to partition the data up ahead of time and decide how to
serialize operations on it (which is not always possible statically, leading
to very, very complicated code, or very low concurrency).
--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862