[Haskell-cafe] Can you do everything without shared-memory concurrency?

Kyle Consalus consalus at gmail.com
Mon Sep 8 16:15:11 EDT 2008

Depending on definitions, and on how much we want to be concerned with
distributed systems, I believe either model can be used to emulate the
other (though it is harder to emulate the possible pitfalls of shared
memory with CSP).
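
As an illustration of the emulation in one direction, here is a minimal
sketch (my own names, e.g. CellMsg and newCell) of a shared mutable cell
built purely from message passing with Control.Concurrent.Chan: one
thread owns the state, and everyone else talks to it over a channel.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan

-- Messages the "cell server" understands (hypothetical protocol).
data CellMsg a = Get (Chan a)  -- reply channel for reads
               | Put a         -- new value for writes

-- Spawn a server thread that owns the state; return its request channel.
newCell :: a -> IO (Chan (CellMsg a))
newCell x0 = do
  req <- newChan
  _ <- forkIO (loop req x0)
  return req
  where
    loop req x = do
      msg <- readChan req
      case msg of
        Get reply -> writeChan reply x >> loop req x
        Put x'    -> loop req x'

-- Read by sending a Get with a fresh reply channel.
getCell :: Chan (CellMsg a) -> IO a
getCell req = do
  reply <- newChan
  writeChan req (Get reply)
  readChan reply

-- Write by sending a Put; the server serializes all access.
putCell :: Chan (CellMsg a) -> a -> IO ()
putCell req x = writeChan req (Put x)

main :: IO ()
main = do
  c <- newCell (0 :: Int)
  putCell c 42
  getCell c >>= print  -- prints 42

Because the server thread processes one message at a time, access to the
state is serialized for free, which is exactly the pitfall-avoidance the
CSP side buys you.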

To me, it seems somewhat similar to the choice between garbage
collection and manual memory management: you can choose the potential
to be more clever than the computer, at the risk of discovering that
the problem is more clever than you are.

Anyway, for the time being I believe there are operations that can be
done with shared memory that can't be done with message passing, if we
make "good performance" a requirement.

On Mon, Sep 8, 2008 at 12:33 PM, Bruce Eckel <bruceteckel at gmail.com> wrote:
> As some of you on this list may know, I have struggled to understand
> concurrency, on and off for many years, but primarily in the C++ and
> Java domains. As time has passed and experience has stacked up, I have
> become more convinced that while the world runs in parallel, we think
> sequentially and so shared-memory concurrency is impossible for
> programmers to get right -- not only are we unable to think in such a
> way to solve the problem, the unnatural domain-cutting that happens in
> shared-memory concurrency always trips you up, especially when the
> scale increases.
> I think that the inclusion of threads and locks in Java was just a
> knee-jerk response to solving the concurrency problem. Indeed, there
> were subtle threading bugs in the system until Java 5. I personally
> find the Actor model to be most attractive when talking about
> threading and objects, but I don't yet know where the limitations of
> Actors are.
> However, I keep running across comments where people claim they "must"
> have shared memory concurrency. It's very hard for me to tell whether
> this is just because the person knows threads or if there is truth to
> it. The only semi-specific comment I've heard refers to data
> parallelism, which I assumed was something like matrix inversion, but
> when I checked this with an expert, he replied that matrix inversion
> decomposes very nicely to separate processes without shared memory, so
> now I'm not clear on what the "data parallelism requires threads"
> issue refers to.
> I know that both Haskell and Erlang only allow separated memory spaces
> with message passing between processes, and they seem to be able to
> solve a large range of problems -- but are there problems that they
> cannot solve? I recently listened to an interview with Simon
> Peyton-Jones where he seemed to suggest that this newsgroup might be a
> helpful place to answer such questions. Thanks for any insights -- it
> would be especially useful if I can point to some kind of proof one
> way or another.
> --
> Bruce Eckel
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe