[Haskell] Implicit parallel functional programming

Simon Peyton-Jones simonpj at microsoft.com
Thu Jan 20 04:38:09 EST 2005


| > I thought the "lazy functional languages are great for implicit
| > parallelism" thing died out some time ago - at least as far as
| > running the programs on conventional hardware is concerned.

Some quick thoughts.

1.  Like Ben L, I don't believe in totally-automated parallelism from
lazy FP.  But that doesn't mean that lazy FP is bad for parallel
machines.  On the contrary, it is *absolutely fantastic* that a program
decorated with some parallel strategies (see the paper Ben mentioned
http://research.microsoft.com/%7Esimonpj/Papers/strategies.ps.gz) is
guaranteed to give the same results on a multiprocessor as on a
uni-processor.   Adding parallel strategies can't make your program go
wrong.  
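
For concreteness, here is a minimal sketch of the idea.  A sketch
only: it is written against the Control.Parallel.Strategies interface
of today's "parallel" package, whose combinator names differ slightly
from the paper's, and "expensive" is just a made-up stand-in workload.

    import Control.Parallel.Strategies (parList, rseq, using)

    -- A stand-in for some genuinely costly computation.
    expensive :: Int -> Integer
    expensive n = sum [1 .. fromIntegral n]

    main :: IO ()
    main = print (sum results)
      where
        -- The `using` annotation asks for the list elements to be
        -- evaluated in parallel.  Deleting it can change performance,
        -- never the answer.
        results = map expensive [10000, 20000 .. 1000000]
                    `using` parList rseq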

2.  One reason that implicitly-parallel FP has never "made it" despite
lots of work over the last 20 yrs (!) is that uni-processors have kept
getting faster.  That has already changed
(http://www.gotw.ca/publications/concurrency-ddj.htm).   We're going to
see multi-processors on a chip, because Intel doesn't know what else to
do with their transistors.  Getting more performance is going to
*require* parallelism, which is a real change; until now, parallel
processors kept getting outpaced by next year's sequential ones.    

3.  Furthermore, if everyone has 10 processors on their desk (because
you can't buy a machine with fewer -- I'm speculating a little here)
then the utilisation efficiency is less important than it used to be.
If you have 10 processors anyway, then a programming system that lets
you use 3 of them with minimal effort would be splendid.  Whereas, up
to today, if someone forks out for a 10-node machine, they jolly well
want to use almost all of those 10 nodes.

4.  Most parallel-FP implementations thus far have been on
distributed-memory multiprocessors, connected by LANs or some other
switch fabric.  In particular, each processor has had a separate heap,
which entails lots of marshalling and copying of graph structure
between one heap and another.  It's incredibly hard to recover these
overheads.  The new multi-cores
will be much more closely coupled, and can share a common heap, with
rather high bandwidth between processors.
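
(For what it's worth, this shared-heap model is what GHC's threaded
runtime now exposes.  As a usage sketch for the fragment above,
assuming the "parallel" package is installed, one would compile and
run with

    $ ghc -O2 -threaded Main.hs
    $ ./Main +RTS -N4

and the four worker threads then share a single heap, so no graph is
marshalled at all.)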


In short, I think things are changing.

Simon

