[Haskell-cafe] Why should we write "a `par` b `pseq` (f a b)" instead of "a `par` b `par` (f a b)"?

Petr P petr.mvd at gmail.com
Sun Jan 20 00:56:28 CET 2013


  Dear Haskellers,

I've been playing with par and pseq, and I wonder: is there any reason to use
  a `par` b `pseq` (a + b)
instead of
  a `par` b `par` (a + b)
other than that the second version creates two sparks instead of just one
(which probably degrades performance a bit)? It seems to me that the second
variant would work just as well: the main thread would block on one of the
sparked computations, but the other one would still be evaluated in
parallel.
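
For concreteness, here is a minimal sketch of the two variants as I have been
trying them (this assumes the parallel package, a naive fib purely to give the
sparks some work, compiling with -threaded and running with +RTS -N2):

  import Control.Parallel (par, pseq)

  -- Naive Fibonacci, only here to give the sparks some real work.
  fib :: Int -> Integer
  fib n
    | n < 2     = fromIntegral n
    | otherwise = fib (n - 1) + fib (n - 2)

  -- Variant 1: spark 'a', evaluate 'b' on the main thread, then combine.
  sumPseq :: Int -> Int -> Integer
  sumPseq x y =
    let a = fib x
        b = fib y
    in  a `par` b `pseq` (a + b)

  -- Variant 2: spark both 'a' and 'b', then combine.
  sumPar :: Int -> Int -> Integer
  sumPar x y =
    let a = fib x
        b = fib y
    in  a `par` b `par` (a + b)

  main :: IO ()
  main = do
    print (sumPseq 30 31)
    print (sumPar  30 31)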

The second variant seems to have one additional advantage: if the function
that combines 'a' and 'b' isn't strict (and perhaps we don't know that in
advance), the main thread won't be blocked evaluating a computation it
doesn't need. For example,
  a `par` b `pseq` (const a b)
will block until both 'a' and 'b' are evaluated, but
  a `par` b `par` (const a b)
will finish as soon as 'a' is evaluated.
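
What I have in mind is something like the following sketch (again assuming the
parallel package; 'expensive' is just a hypothetical stand-in for some costly
computation):

  import Control.Parallel (par, pseq)

  -- Hypothetical stand-in for a costly computation.
  expensive :: Integer -> Integer
  expensive n = sum [1 .. n * 1000000]

  -- The pseq forces 'b' on the main thread before the result is returned,
  -- even though 'const a b' only ever needs 'a'.
  withPseq :: Integer
  withPseq =
    let a = expensive 1
        b = expensive 100
    in  a `par` b `pseq` const a b

  -- Here 'b' is only sparked, not forced; the main thread can finish as
  -- soon as 'a' is evaluated, while 'b' may run (or fizzle) in the background.
  withPar :: Integer
  withPar =
    let a = expensive 1
        b = expensive 100
    in  a `par` b `par` const a b

  main :: IO ()
  main = mapM_ print [withPseq, withPar]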

I found this link on SO:
Why do we need 'seq' or 'pseq' with 'par' in Haskell?
<http://stackoverflow.com/q/4576734/1333025>
but it doesn't really address this objection.

  Thanks for help,
  Petr Pudlak
