[Haskell-cafe] parallel Haskell with limited sparks

Henning Thielemann lemming at henning-thielemann.de
Tue Jul 9 08:17:33 UTC 2019


When I read about parallel programming in Haskell, it is always about 
manual chunking of data. Why? Most of my applications that benefit from 
parallelization use a different computation scheme: start a fixed number 
of threads (at most the number of available computing cores), and 
whenever a thread finishes a task it gets assigned a new one. This is how 
make -j, cabal install -j, and ghc -j work. I wrote my own package 
pooled-io which does the same for IO in Haskell, and there seem to be 
more packages that implement the same idea. Can I have that computation 
scheme for non-IO computations, too? E.g. with Parallel Strategies, 
monad-par etc.?
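For concreteness, here is a minimal sketch of the scheme I mean, lifted to
pure values by forcing them inside a fixed pool of IO worker threads
(parPooled and its queue layout are hypothetical names for illustration,
not the pooled-io API):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Control.Monad (forM, replicateM_)

-- Evaluate pure values on a fixed pool of n worker threads.
-- Each worker repeatedly takes the next task from a shared queue,
-- forces it to normal form, stores the result, and asks for more --
-- just like make -j hands a finished worker the next target.
parPooled :: NFData a => Int -> [a] -> IO [a]
parPooled n xs = do
  queue   <- newMVar (zip [0 ..] xs)        -- shared task list
  results <- forM xs (const newEmptyMVar)   -- one result slot per task
  let worker = do
        next <- modifyMVar queue $ \ts -> case ts of
          []         -> return ([], Nothing)
          (t : rest) -> return (rest, Just t)
        case next of
          Nothing -> return ()              -- queue empty: worker retires
          Just (i, x) -> do
            y <- evaluate (force x)         -- do the actual work here
            putMVar (results !! i) y
            worker
  replicateM_ n (forkIO worker)
  mapM takeMVar results                     -- collect in input order
```

Usage would be something like parPooled numCores (map expensiveFunction
inputs): the list elements are unevaluated thunks, and each one is forced
by whichever worker becomes free first.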

