[Haskell-cafe] Re: Parallelism and Distribution in Haskell

Mathew de Detrich deteego at gmail.com
Mon Sep 6 21:08:55 EDT 2010


*Correction: where I said "the majority of Haskell programs were pure" below,
I meant "the majority of code in Haskell programs was pure".

On Tue, Sep 7, 2010 at 11:07 AM, Mathew de Detrich <deteego at gmail.com> wrote:

> Before Haskell took off with parallelism, it was assumed that Haskell would
> be trivial to run concurrently across cores, because the majority of Haskell
> programs were pure: you could simply run different functions on different
> cores and string the results together when you're done.
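>
> (Roughly in the spirit of the sketch below, using par/pseq from the
> parallel package's Control.Parallel; the two computations are just
> placeholder work, not anything from the discussion:)
>
>     import Control.Parallel (par, pseq)
>
>     -- Spark the evaluation of 'a' on another core while this thread
>     -- evaluates 'b', then combine the two pure results.
>     combined :: Integer
>     combined = a `par` (b `pseq` (a + b))
>       where
>         a = sum     [1 .. 10000000]
>         b = product [1 .. 2000]
>
>     main :: IO ()
>     main = print combined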
>
> It turned out that such a naive method created massive overhead (to the
> point where it wasn't worth it), so different paradigms were introduced
> into Haskell to provide parallelism (nested data parallelism, parallel
> strategies, collections, STM). In almost every case, I believe, these
> approaches involve a compromise between ease of implementation and
> performance gains.
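>
> (As a minimal sketch of the strategies style - the 'expensive' function
> below is just placeholder work:)
>
>     import Control.Parallel.Strategies (parMap, rdeepseq)
>
>     -- Placeholder CPU-bound work.
>     expensive :: Int -> Int
>     expensive n = length (filter even [1 .. n * 1000])
>
>     -- One spark per list element; each result is fully evaluated
>     -- in parallel before being summed.
>     main :: IO ()
>     main = print (sum (parMap rdeepseq expensive [1 .. 200]))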
>
> Haskell is still by far one of the best languages for dealing with
> concurrency/parallelism. In most other conventional languages used today
> (which are imperative or multi-paradigm), parallelism breaks
> modularity/abstraction (which is one of the main reasons why most desktop
> applications/games are still single-core, and the few exceptions use
> parallelism only in very trivial cases). This is mainly due to having to
> deal with shared state (semaphores/mutexes). Although it is possible to
> write 'pure' code in other languages, it's often very ugly (and at that
> point you may as well use Haskell).
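>
> (STM is a good example: shared state gets updated without explicit locks.
> A minimal sketch, with made-up account balances:)
>
>     import Control.Concurrent.STM
>
>     -- Move an amount between two shared accounts; the transaction
>     -- either commits atomically or is retried, no mutexes involved.
>     transfer :: TVar Int -> TVar Int -> Int -> STM ()
>     transfer from to amount = do
>       modifyTVar' from (subtract amount)
>       modifyTVar' to   (+ amount)
>
>     main :: IO ()
>     main = do
>       a <- newTVarIO 100
>       b <- newTVarIO 0
>       atomically (transfer a b 30)
>       mapM_ (\t -> readTVarIO t >>= print) [a, b]   -- prints 70, then 30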
>
>
> On Tue, Sep 7, 2010 at 8:37 AM, Johannes Waldmann <waldmann at imn.htwk-leipzig.de> wrote:
>
>> Don Stewart <dons <at> galois.com> writes:
>>
>> > Note that DPH is a programming model, but the implementation currently
>> > targets shared memory multicores (and to some extent GPUs), not
>> > distributed systems.
>>
>> Yes. I understand that's only part of what the original poster wanted,
>> but I'd sure want to use ghc-generated code on a (non-distributed) GPU.
>>
>> I keep telling students and colleagues that functional/declarative code
>> "automatically" parallelizes, with basically "no extra effort"
>> from the programmer (because it's all in the compiler) - but I would
>> feel better with some real code and benchmarks to back that up.
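>>
>> (The kind of thing I have in mind - a toy example with a made-up 'work'
>> function, not a real benchmark - where one strategy annotation turns a
>> sequential map into a parallel one:)
>>
>>     import Control.Parallel.Strategies (parListChunk, rseq, using)
>>     import Data.List (foldl')
>>
>>     -- Placeholder CPU-bound work.
>>     work :: Int -> Int
>>     work n = foldl' (+) 0 [1 .. n]
>>
>>     sumSeq, sumPar :: [Int] -> Int
>>     sumSeq xs = sum (map work xs)
>>     sumPar xs = sum (map work xs `using` parListChunk 100 rseq)
>>
>>     main :: IO ()
>>     main = print (sumPar (replicate 2000 200000))  -- swap in sumSeq to compare
>>
>> (Compiled with ghc -O2 -threaded -rtsopts and run with +RTS -N -s, the
>> runtime statistics show how many sparks were actually converted.)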
>>
>> GPU computing via GHC could be a huge marketing opportunity - if it
>> works, shouldn't it be all over the front page of haskell.org?
>>
>> J.W.