[Haskell-cafe] Monads in Scala, XSLT,
Unix shell pipes was Re: Monads in ...
cgibbard at gmail.com
Sat Nov 26 14:46:25 EST 2005
> Maybe this is a different topic, but exploring concurrency in Haskell
> is definitely on my "to do" list, but this is really a bit of a puzzle.
> One thing I've been thinking lately is that in functional programming
> the process is really the wrong abstraction (computation is reduction,
> not a sequence of steps performed in temporal order). But what is
concurrency if there are no processes to "run" concurrently? I've been
> thinking about action systems and non-determinism, but am unsure how
> the pieces really fit together.
Concurrency in Haskell is handled by forking the execution of IO
actions, which are indeed sequences of steps to be performed in a
temporal order. There are some elegant constructions in that
direction, not least of which is STM, a system for transactional
thread communication, on top of which one can implement channels and
various other concurrency abstractions. STM allows one to insist that
certain restricted kinds of actions relating to thread communication
(in the STM monad) occur atomically with respect to other threads.
These transactions can create, read and write to a special kind of
mutable variable called a TVar (transactional variable). They can also
ask to block the thread they are in and be retried later when one of
the TVars they observed changes. There is additionally an operator
`orElse`: if a and b are STM transactions, then (a `orElse` b) is a
transaction which runs a and, if a retries, attempts b instead; if b
also retries, the whole transaction retries. The first alternative not
to retry gets to return its value. The operator `orElse` is
associative, and has retry as an identity.
The paper describing STM in more detail is "Composable Memory
Transactions" (Harris, Marlow, Peyton Jones and Herlihy).
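The pieces above can be sketched in a small program. This is just an
illustration: it uses forkIO from Control.Concurrent alongside the STM
primitives (atomically, TVar, retry, orElse), and the helper name
waitAtLeast is hypothetical, not a library function.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM

-- Block (via retry) until the counter reaches at least n, then return it.
-- The transaction is rerun when a TVar it read changes.
-- 'waitAtLeast' is a made-up helper name for this sketch.
waitAtLeast :: TVar Int -> Int -> STM Int
waitAtLeast tv n = do
  v <- readTVar tv
  if v < n then retry else return v

main :: IO ()
main = do
  counter <- newTVarIO 0
  let bump = atomically (readTVar counter >>= writeTVar counter . succ)
  -- A forked thread increments the shared counter ten times.
  _ <- forkIO (mapM_ (const bump) [1 .. 10 :: Int])
  -- The main thread blocks until the counter reaches 10.
  v <- atomically (waitAtLeast counter 10)
  print v  -- 10
  -- orElse: the first alternative retries (the counter never reaches 20),
  -- so the second alternative runs and returns instead.
  w <- atomically (waitAtLeast counter 20 `orElse` waitAtLeast counter 5)
  print w  -- 10
```

Note that the blocking in waitAtLeast needs no explicit condition
variables or polling; the runtime reruns the transaction when one of
the TVars it read is written by another thread.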
Perhaps more closely related to what you were thinking about: Parallel
Haskell provides a mechanism for parallel evaluation, used in a
fashion similar to seq, called par. The expression x `par` (y `seq` (x +
y)), when evaluated, will first spark a parallel process for
evaluating x (up to the top level data constructor), while in the main
task, y `seq` (x + y) is computed, so the evaluation of y proceeds,
and the task potentially blocks until x finishes being computed, and
then the sum is returned.
Sparking x will create the potential for x to be computed in another
thread on a separate processor, if one becomes idle, but if that
doesn't happen, x will simply be computed on the same processor as y
when it is needed for the sum.
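The idiom can be tried directly. This sketch assumes the parallel
package (Control.Parallel) is available; x and y are arbitrary example
workloads chosen just to give the spark something to do.

```haskell
import Control.Parallel (par)

-- Spark x for possible parallel evaluation, while the main task
-- evaluates y first; demanding x for the sum then either finds it
-- already evaluated or computes it in the main task.
parSum :: Integer
parSum = x `par` (y `seq` (x + y))
  where
    x = sum [1 .. 1000000]
    y = sum [1 .. 2000000]

main :: IO ()
main = print parSum  -- 2500001500000
```

Compiled with GHC's -threaded flag and run with +RTS -N, the spark for
x may actually be picked up by an idle core; without that, x is simply
evaluated in the main task when the sum demands it, exactly as
described above, so the result is the same either way.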
Hope this is somewhat interesting and at least partly answers your question :)