[Haskell-cafe] Re: Can you do everything without shared-memory concurrency?

Simon Marlow simonmarhaskell at gmail.com
Wed Sep 10 17:01:22 EDT 2008


Bruce Eckel wrote:
> So this is the kind of problem I keep running into. There will seem to 
> be consensus that you can do everything with isolated processes message 
> passing (and note here that I include Actors in this scenario even if 
> their mechanism is more complex). And then someone will pipe up and say 
> "well, of course, you have to have threads" and the argument is usually 
> "for efficiency."
> 
> I make two observations here which I'd like comments on:
> 
> 1) What good is more efficiency if the majority of programmers can never 
> get it right? My position: if a programmer has to explicitly synchronize 
> anywhere in the program, they'll get it wrong. This of course is a point 
> of contention; I've met a number of people who say "well, I know you 
> don't believe it, but *I* can write successful threaded programs." I 
> used to think that, too. But now I think it's just a learning phase, and 
> you aren't a reliable thread programmer until you say "it's impossible 
> to get right" (yes, a conundrum).

(welcome Bruce!)

Let's back up a bit.  If the goal is just to make something go faster, 
then threads are definitely not the first tool the programmer should be 
looking at, and neither is message passing or STM.  The reason is that 
threads and mutable state inherently introduce non-determinism, and when 
you're just trying to make something go faster non-determinism is almost 
certainly unnecessary (there are problems where non-determinism helps, 
but usually not).

In Haskell, for example, we have par/seq and Strategies which are 
completely deterministic, don't require threads or mutable state, and are 
trivial to use correctly.  Now, getting good speedup is still far from 
trivial, but that's something we're actively working on.  Still, people 
are often able to get a speedup just by using a parMap or something. 
Soon we'll have Data Parallel Haskell too, which also targets the need 
for deterministic parallelism.
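
For concreteness, here is a tiny sketch in the Strategies style (purely 
illustrative -- 'expensive' is just a stand-in for real work; it needs the 
parallel package, a -threaded build, and +RTS -N to see any speedup):

  import Control.Parallel.Strategies (parMap, rdeepseq)

  -- Stand-in for real work.
  expensive :: Int -> Int
  expensive n = sum [1 .. n * 10000]

  -- parMap evaluates the list elements in parallel, but the result is
  -- identical to 'map expensive [1 .. 100]': deterministic by construction.
  main :: IO ()
  main = print (sum (parMap rdeepseq expensive [1 .. 100]))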

We make a clean distinction between Concurrency and Parallelism. 
Concurrency is a _programming paradigm_, wherein threads are used 
typically for dealing with multiple asynchronous events from the 
environment, or for structuring your program as a collection of 
interacting agents.  Parallelism, on the other hand, is just about 
making your programs go faster.  You shouldn't need threads to do 
parallelism, because there are no asynchronous stimuli to respond to. 
It just so happens that it's possible to run a concurrent program in 
parallel on a multiprocessor, but that's just a bonus.  I guess the main 
point I'm making is that to make your program go faster, you shouldn't 
have to make it concurrent.  Concurrent programs are hard to get right, 
parallel programs needn't be.
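
To make the distinction concrete (just an illustrative sketch, not a 
benchmark): 'par' below expresses parallelism -- parallelSum has the same 
value however the runtime schedules the work -- while 'forkIO' expresses 
concurrency, a separate thread of control whose effects interleave 
non-deterministically:

  import Control.Parallel (par, pseq)
  import Control.Concurrent (forkIO, threadDelay)

  -- Parallelism: deterministic, no threads visible to the programmer.
  parallelSum :: Int
  parallelSum = a `par` (b `pseq` (a + b))
    where
      a = sum [1 .. 1000000 :: Int]
      b = sum [1000001 .. 2000000 :: Int]

  -- Concurrency: an explicit thread, here standing in for an
  -- asynchronous event handler; its output may interleave with ours.
  main :: IO ()
  main = do
    _ <- forkIO (putStrLn "handling an asynchronous event")
    print parallelSum
    threadDelay 100000  -- crudely wait for the forked thread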

Cheers,
	Simon

> 2) What if you have lots of processors? Does that change the picture 
> any? That is, if you use isolated processes with message passing and you 
> have as many processors as you want, do you still think you need 
> shared-memory threading?
> 
> A comment on the issue of serialization -- note that any time you need 
> to protect shared memory, you use some form of serialization. Even 
> optimistic methods guarantee serialization, even if it happens after the 
> memory is corrupted, by backing up to the uncorrupted state. The effect 
> is the same; only one thread can access the shared state at a time.
> 
> On Tue, Sep 9, 2008 at 4:03 AM, Sebastian Sylvan 
> <sebastian.sylvan at gmail.com> wrote:
> 
> 
> 
>     On Mon, Sep 8, 2008 at 8:33 PM, Bruce Eckel
>     <bruceteckel at gmail.com> wrote:
> 
>         As some of you on this list may know, I have struggled to understand
>         concurrency, on and off for many years, but primarily in the C++ and
>         Java domains. As time has passed and experience has stacked up, I have
>         become more convinced that while the world runs in parallel, we think
>         sequentially and so shared-memory concurrency is impossible for
>         programmers to get right -- not only are we unable to think in such a
>         way to solve the problem, the unnatural domain-cutting that happens in
>         shared-memory concurrency always trips you up, especially when the
>         scale increases.
> 
>         I think that the inclusion of threads and locks in Java was just a
>         knee-jerk response to solving the concurrency problem. Indeed, there
>         were subtle threading bugs in the system until Java 5. I personally
>         find the Actor model to be most attractive when talking about
>         threading and objects, but I don't yet know where the limitations of
>         Actors are.
> 
>         However, I keep running across comments where people claim they "must"
>         have shared memory concurrency. It's very hard for me to tell whether
>         this is just because the person knows threads or if there is truth to
>         it.
> 
>      
>     For correctness, maybe not, for efficiency, yes definitely!
>      
>     Imagine a program where you have a huge set of data that needs to be
>     modified (in some sense) over time by thousands of agents. E.g. a
>     game simulation.
>     Now, also imagine that every agent could *potentially* modify every
>     single piece of data, but that every agent *typically* only touches
>     two or three variables here and there. I.e. the overlap between the
>     potential read/write sets is 100%, while the overlap between the
>     actual read/write sets is very low.
>      
>     How would you do this with threads and message passing? Well you
>     could have one big thread owning all of your data that takes
>     "update" messages, and then "updates" the world for you (immutably
>     if you wish, by just replacing its "world" variable with a new one
>     containing your update), but now you've effectively serialized all
>     your interactions with the "world", so you're not really concurrent
>     anymore!
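> 
>     (A rough sketch of that "one big world-owning thread", purely
>     illustrative -- the World and Update types here are hypothetical:)
> 
>       import Control.Concurrent (forkIO, threadDelay)
>       import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
>       import qualified Data.Map as M
> 
>       type World  = M.Map String Int   -- hypothetical world state
>       type Update = World -> World     -- an "update" message
> 
>       -- One thread owns the world and applies updates one at a time,
>       -- so every interaction with the world is serialized through it.
>       worldServer :: Chan Update -> World -> IO ()
>       worldServer chan w = do
>         upd <- readChan chan
>         worldServer chan (upd w)
> 
>       main :: IO ()
>       main = do
>         chan <- newChan
>         _ <- forkIO (worldServer chan M.empty)
>         -- Agents never touch the world directly; they send messages.
>         writeChan chan (M.insert "gold" 100)
>         writeChan chan (M.adjust (+ 1) "gold")
>         threadDelay 100000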
>      
>     So you could decompose the world into multiple threads using some
>     application-specific logical subdivision, but then you're effectively
>     just treating each thread as a mutable variable with an implicit
>     lock (with the risks of deadlock that come with it - remember we
>     don't know the read/write set in advance - it could be the entire
>     world - so we can't just order our updates in some global way here),
>     so you're really just doing shared mutable state again, and gain
>     little from having threads "simulate" your mutable cells...
>      
>     What you really need for this is some way for each agent to update
>     this shared state *in parallel*, without having to block all other
>     agents pessimistically, but instead only block other agents if there
>     was an *actual* conflict. STM seems to be the only real hope for
>     that sort of thing right now.
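> 
>     (Roughly, as an illustrative sketch: give each piece of state its
>     own TVar, and two agents only conflict -- one of them retries --
>     when they actually touch the same variables:)
> 
>       import Control.Concurrent.STM
>         (TVar, atomically, newTVarIO, readTVar, writeTVar)
> 
>       -- Move n units between two pieces of world state, atomically.
>       transfer :: TVar Int -> TVar Int -> Int -> IO ()
>       transfer from to n = atomically $ do
>         a <- readTVar from
>         b <- readTVar to
>         writeTVar from (a - n)
>         writeTVar to   (b + n)
> 
>       main :: IO ()
>       main = do
>         gold <- newTVarIO (100 :: Int)
>         wood <- newTVarIO (50 :: Int)
>         -- In the real simulation each agent would run its transfers in
>         -- its own thread; only overlapping TVar accesses cause retries.
>         transfer gold wood 10
> 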
>     IMO my list of preferred methods goes like this:
>     1. Purely functional data parallelism
>     2. Purely functional task parallelism (using e.g. strategies)
>     3. Message passing with no (or very minimal) shared state (simulated
>     using threads as "data servers" or otherwise)
>     (3.5. Join patterns? Don't have enough experience with this, but
>     seems sort of nice?)
>     4. Shared state concurrency using STM
>     5. Shared state concurrency using locks
>     6. Lockless programming.
>      
>     So while I wouldn't resort to any shared state concurrency unless
>     there are good reasons for why the other methods don't work well
>     (performance is a good reason!), there are still situations where
>     you need it, and a general purpose language had better supply a way
>     of accessing those kinds of facilities.
>      
>     -- 
>     Sebastian Sylvan
>     +44(0)7857-300802
>     UIN: 44640862
> 
> 
> 
> 
> -- 
> Bruce Eckel
> 
> 