[Haskell] Implicit parallel functional programming

paul at theV.net
Thu Jan 20 09:54:42 EST 2005


I'd like to add another meaning to running things in a distributed
way, i.e., scalability. Implicit parallelism should help an
application scale automatically as the number of nodes in the
cluster grows.

Yes, today we have two cores on a processor, and uni-processor
speed bumps are unlikely to overshadow the effort of parallelism
the way they did 20 years ago. But we are also beginning to see
applications that require thousands of machines to run. So-called
grid computing may be just another buzzword, but the reality is
that grand applications simply won't scale on today's dual-core
processors, and explicit parallelism usually requires prior
knowledge of how to split the program, which is hardly scalable
except in simple cases.

A recent example? World of Warcraft. Blizzard is already having
trouble managing the load across thousands of its servers, to the
point that it has been forced to stop shipping new copies to
stores. It was in the news yesterday.

Regards,
Paul

On Wed, Jan 19, 2005 at 11:16:12PM -0500, mgross at dorsai.org wrote:
> 
> Having donned my flame-resistant suit, I am going to venture that the
> differences of opinion in the posts copied below are in large part the
> result of people talking about different things at the same time. 
> 
> First, there is a claim that functional languages facilitate parallel
> execution, which is undeniably true if the implementation is something
> like that of Haskell (no assignment statements mean no memory contention,
> etc.). 
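> 
> As a minimal sketch of that point (assuming GHC's par/pseq from
> Control.Parallel and a build with -threaded): the two computations
> below share no mutable state, so evaluating them on separate
> processors needs no locks at all.
> 
>     import Control.Parallel (par, pseq)
> 
>     sumTo :: Int -> Int
>     sumTo n = sum [1 .. n]
> 
>     main :: IO ()
>     main =
>       let a = sumTo 10000000  -- pure value: nothing to lock
>           b = sumTo 20000000  -- pure value: nothing to lock
>       in  a `par` (b `pseq` print (a + b))  -- spark a, evaluate b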
> 
> Then there is the claim that it is difficult (if not impossible) to
> analyze a piece of code and parallelize it optimally automatically, which
> is a point that was conceded long ago.
> 
> Third, there is the issue of whether "reasonable" parallelization can be
> obtained, and the answer is that it has been done before and can be done
> again. The open question is whether the parallelization thus achieved is
> sufficiently "good" in some sense; that, I submit, remains unsettled at
> present, and it is a claim I'd really like to find is wrong. 
> 
> Then there is the issue of the optimality of algorithms, which is always
> a bugbear, because different quality criteria change things wildly
> (y'know, n-squared sorts are frequently used in real-world programs
> . . . <g>). And when it comes to parallel execution on a multiprocessor
> machine (as opposed to a cluster), there are significant questions about
> performance constraints arising from hardware design and implementation.
> Any answer to the problems here depends on the hardware being used and
> cannot be general.
> 
> I suggest, then, that if this thread is to be continued, contributors
> should be careful to specify the particular hardware and software
> environments they are talking about, and be specific about such things as
> quality criteria. 
> 
> Murray Gross
> 
> 
> 
> 
> 
> On Thu, 20 Jan 2005, Ben Lippmeier wrote:
> 
> > 
> > I thought the "lazy functional languages are great for implicit 
> > parallelism" thing died out some time ago - at least as far as running 
> > the programs on conventional hardware is concerned.
> > 
> > Designing an algorithm that breaks apart a "sequential" lazy program 
> > into parallel chunks of the appropriate size is **HARD** (with double 
> > asterisks). The time and space behavior of a lazy program is complex 
> > enough for the _programmer_ to reason about, let alone an automated 
> > analysis - which has no knowledge of what the program is actually trying 
> > to do.
> > 
> > I think a more likely approach lies in the direction of the so-called 
> > "parallel strategies". If you haven't already, I would strongly suggest 
> > reading: Algorithm + Strategy = Parallelism, 1998, P.W. Trinder et al.
> > You can get this paper from Simon Peyton Jones's homepage.
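> > 
> > As a minimal sketch of the paper's idea (assuming the
> > Control.Parallel.Strategies module that grew out of it; exact
> > combinator names have varied between versions): the algorithm is an
> > ordinary map, and the strategy attached with `using` says to
> > evaluate the list elements in parallel.
> > 
> >     import Control.Parallel.Strategies (parList, rseq, using)
> > 
> >     -- algorithm + strategy, kept separate
> >     pmap :: (a -> b) -> [a] -> [b]
> >     pmap f xs = map f xs `using` parList rseq
> > 
> >     main :: IO ()
> >     main = print (sum (pmap (\n -> sum [1 .. n]) [100000 .. 100100]))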
> > 
> > Also, at the end of Hans-Wolfgang Loidl's thesis he develops a 
> > granularity analysis for a Haskell subset - one of the first steps in 
> > any kind of implicit parallelism. It's a pretty good effort, but at the 
> > end of it all it still relies on a pre-existing table of information 
> > about recursive functions. I think these kinds of analyses tend to 
> > suffer from uncomputability problems more than most.
> > 
> > If you've still got your heart set on implicit parallelism, then there's 
> > a (very slow) simulator you might want to poke around with. I wrote it 
> > based around Clem Baker-Finch's "Abstract machine for parallel lazy 
> > evaluation", which supports fully speculative implicit parallelism.
> > 
> > There's a link to it on my homepage at 
> > http://cs.anu.edu.au/people/Ben.Lippmeier
> > 
> > 
> > Keean Schupke wrote:
> > > I have to say I disagree... I feel Haskell is highly suited to implicit 
> > > parallel execution... The key to "implicit" parallelisation is that it 
> > > is implicit - not explicit, so the programmer should feel like they are 
> > > programming a sequential language. If we can assume low memory-access 
> > > penalties for threads running on other CPUs (shared-cache model), it 
> > > seems to be a matter of putting locks on the right structures and 
> > > allowing any worker thread to take the next function ready to run from 
> > > the scheduler.
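> > > 
> > > As a rough sketch, that worker-thread model is close to what GHC's 
> > > spark pool does (assuming par/pseq from Control.Parallel): each 
> > > `par` offers a thunk to the pool, any idle worker thread may claim 
> > > and evaluate it, and the source still reads sequentially.
> > > 
> > >     import Control.Parallel (par, pseq)
> > > 
> > >     -- reads like sequential code; `par` exposes the parallelism
> > >     pfib :: Int -> Int
> > >     pfib n
> > >       | n < 2     = n
> > >       | otherwise = l `par` (r `pseq` l + r)  -- spark l, evaluate r
> > >       where
> > >         l = pfib (n - 1)
> > >         r = pfib (n - 2)
> > > 
> > >     main :: IO ()
> > >     main = print (pfib 30)
> > > 
> > > Note that this sparks at every level of the recursion, so the chunks 
> > > are far too fine - choosing the right granularity is the hard part.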
> > > 
> > >    Keean.
> > > 
> > 
> 

