[Haskell-cafe] Can you do everything without shared-memory concurrency?

David Roundy daveroundy at gmail.com
Wed Sep 10 09:05:12 EDT 2008


2008/9/9 Jed Brown <jed at 59a2.org>:
> On Tue 2008-09-09 12:30, Bruce Eckel wrote:
>> So this is the kind of problem I keep running into. There will seem to be
>> consensus that you can do everything with isolated processes message passing
>> (and note here that I include Actors in this scenario even if their mechanism
>> is more complex). And then someone will pipe up and say "well, of course, you
>> have to have threads" and the argument is usually "for efficiency."
>
> Some pipe up and say ``you can't do global shared memory because it's
> inefficient''.  Ensuring cache coherency with many processors operating
> on shared memory is a nightmare and inevitably leads to poor
> performance.  Perhaps some optimizations could be done if the programs
> were guaranteed to have no mutable state, but that's not realistic.
> Almost all high performance machines (think top500) are distributed
> memory with very few cores per node.  Parallel programs are normally
> written using MPI for communication, and they can achieve nearly linear
> scaling to 10^5 processors on BlueGene/L for scientific problems with
> strong global coupling.

I should point out, however, that in my experience MPI programming
involves deadlocks and synchronization handling that are at least as
nasty as any I've run into doing shared-memory threading.  This isn't
an issue, of course, as long as you're letting lapack do all the
message passing, but once you have to deal with message passing
between nodes yourself, the possible bugs are strikingly similar to
the sorts of nasty bugs present in shared-memory threaded code using
locks.

David
