[Haskell-cafe] Can you do everything without shared-memory concurrency?

Jed Brown jed at 59A2.org
Wed Sep 10 09:30:50 EDT 2008


On Wed 2008-09-10 09:05, David Roundy wrote:
> 2008/9/9 Jed Brown <jed at 59a2.org>:
> > On Tue 2008-09-09 12:30, Bruce Eckel wrote:
> >> So this is the kind of problem I keep running into. There will seem to be
> >> consensus that you can do everything with isolated processes passing messages
> >> (and note here that I include Actors in this scenario even if their mechanism
> >> is more complex). And then someone will pipe up and say "well, of course, you
> >> have to have threads" and the argument is usually "for efficiency."
> >
> > Some pipe up and say ``you can't do global shared memory because it's
> > inefficient''.  Ensuring cache coherency with many processors operating
> > on shared memory is a nightmare and inevitably leads to poor
> > performance.  Perhaps some optimizations could be done if the programs
> > were guaranteed to have no mutable state, but that's not realistic.
> > Almost all high-performance machines (think Top500) are distributed
> > memory with very few cores per node.  Parallel programs are normally
> > written using MPI for communication, and they can achieve nearly linear
> > scaling to 10^5 processors on BlueGene/L for scientific problems with
> > strong global coupling.
> 
> I should point out, however, that in my experience MPI programming
> involves deadlocks and synchronization handling that are at least as
> nasty as any I've run into doing shared-memory threading.

Absolutely, avoiding deadlock is the first priority (before error
handling).  If you use the non-blocking interface, you have to be very
conscious of whether a buffer is still in use by the library or the
call has completed.
Regardless, the API requires the programmer to maintain a very clear
distinction between locally owned and remote memory.
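
To make the buffer-ownership hazard concrete, here is a minimal
sketch in C (the ring exchange, tag, and buffer sizes are purely
illustrative): between MPI_Isend and the matching MPI_Wait, the send
buffer belongs to the library, and touching it races with the
message transfer.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
    int rank, size;
    double sendbuf[1024], recvbuf[1024];
    MPI_Request sreq, rreq;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Rotate data around a ring: send to the next rank, receive
       from the previous one. */
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    for (int i = 0; i < 1024; i++) sendbuf[i] = rank;

    /* Non-blocking calls return immediately; both buffers now belong
       to MPI until the corresponding Wait completes. */
    MPI_Irecv(recvbuf, 1024, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &rreq);
    MPI_Isend(sendbuf, 1024, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &sreq);

    /* Writing sendbuf[0] = -1.0 here would race with the library:
       the message may not have been copied out yet. */

    MPI_Wait(&sreq, MPI_STATUS_IGNORE);
    MPI_Wait(&rreq, MPI_STATUS_IGNORE);

    /* Only now is it safe to reuse sendbuf or read recvbuf. */
    printf("rank %d got %g from rank %d\n", rank, recvbuf[0], prev);

    MPI_Finalize();
    return 0;
  }

Posting the receive before the send also sidesteps the classic
deadlock in which every rank blocks in MPI_Send waiting for a
matching receive that is never posted.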

> This isn't an issue, of course, as long as you're letting lapack do
> all the message passing, but once you've got to deal with message
> passing between nodes, possible bugs arise that are strikingly
> similar to the sorts of nasty bugs present in shared-memory threaded
> code using locks.

LAPACK per se does not do message passing.  I assume you mean whatever
parallel library you are working with, for instance, PETSc.  Having the
right abstractions goes a long way.
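
For instance, the PETSc vector interface hides the decomposition
entirely.  A rough sketch (against a recent PETSc API; call
signatures have shifted across releases, so take the details with a
grain of salt):

  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec       x;
    PetscReal nrm;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* A vector distributed across all processes; PETSc chooses the
       local sizes.  The code never names another process's memory. */
    VecCreate(PETSC_COMM_WORLD, &x);
    VecSetSizes(x, PETSC_DECIDE, 1000000);
    VecSetFromOptions(x);

    VecSet(x, 1.0);
    VecNorm(x, NORM_2, &nrm);  /* collective; the reduction is hidden */

    VecDestroy(&x);
    PetscFinalize();
    return 0;
  }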

I'm happy to trade the issues with shared mutable state for distributed
synchronization issues, but that is likely due to its suitability for
the problems I'm interested in.  If the data model maps cleanly onto
distributed memory, I think it is easier to work with than
coarse-grained shared-memory parallelism.  (OpenMP is fine-grained;
there is little or no shared mutable state, and it is very easy.)
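
What I mean by fine-grained, as a hypothetical sketch (the axpy
example is mine, not from any real code):

  /* Compile with e.g. gcc -fopenmp.  Each iteration writes only its
     own y[i], and a and x are read-only, so there is no shared
     mutable state to protect: no locks, no atomics. */
  void axpy(int n, double a, const double *x, double *y)
  {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
      y[i] += a * x[i];
  }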

Jed