parList implementation question

Marcus D. Gabriel marcus at gabriel.name
Sat Dec 19 04:26:20 EST 2009


Denis Bueno wrote:
> On Fri, Dec 18, 2009 at 11:31, Marcus D. Gabriel <marcus at gabriel.name> wrote:
>> than parList via parMap.  For example, in one experiment, parMap
>> with parList run at 0.81 the time of the serial solution whereas
>> forceParMap with forceParList run at 0.58 the time of the serial
>> solution.  This is to say, forceParList completed in 0.72 the
>> time of parList.  So,
>
> I don't know your application, so it's hard to interpret your numbers.
>
> Are you sure those measurements are statistically significant?

The particular example that I gave was a straightforward
searchDepthFirst(1) algorithm.  To parallelise it, I chose the
initial, simple approach of computing the first list of successor
nodes from the initial node, applying the serial algorithm
searchDepthFirst to that list with parMap, and then reducing
(concatenating) the resulting lists.
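
In sketch form it looks roughly like this (Node, successors, and the
search below are toy stand-ins for the real code, and for the purpose
of the sketch I am treating forceParList as evaluating each sublist
completely, rdeepseq-style, rather than only to WHNF):

    import Control.Parallel.Strategies

    -- Toy stand-in for the real problem: nodes are Ints and each node
    -- has at most two successors, so the search space is finite.
    type Node = Int

    successors :: Node -> [Node]
    successors n = [ c | k <- [0, 1], let c = 2 * n + k, c < 1000 ]

    -- Plain serial depth-first search from one node.
    searchDepthFirst :: Node -> [Node]
    searchDepthFirst n = n : concatMap searchDepthFirst (successors n)

    -- Parallel version: expand the initial node once, apply the serial
    -- search to each successor in parallel, then concatenate.
    searchPar :: Strategy [Node] -> Node -> [Node]
    searchPar strat n =
        n : concat (parMap strat searchDepthFirst (successors n))

    -- The two variants compared above: each spark evaluated to WHNF
    -- (as with parMap/parList) versus each spark fully evaluated
    -- (cf. forceParMap/forceParList).
    searchWithParList, searchWithForce :: Node -> [Node]
    searchWithParList = searchPar rseq
    searchWithForce   = searchPar rdeepseq

    main :: IO ()
    main = print (length (searchWithForce 1))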

With some RTS tuning on a dedicated dual-core system, I was able
to reproduce the numbers cited above for this problem consistently,
to within about plus or minus 0.01.

I have played with variations of this class of problem, but I have
not systematically compared parList with forceParList across a
broader range of problems.

I like your question above.  It helped me formulate this question:

Is what I observed above significant or random?  I hope it is not
random.

I guess the deeper question is how one determines with confidence
that

  parallelised serial algorithm + strategy1 works for problem class1

and

  parallelised serial algorithm + strategy2 works for problem class2

where both are better than the serial case for their respective
class of problems across a given range of operating environments,
e.g., -N2 to -N8 for 2 to 8 cores.
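
One concrete way I can think of to gather that evidence is to put the
serial version and both parallel versions behind the same benchmark
harness and run it once per core count.  A sketch using criterion,
reusing the toy definitions above and replacing their main (compile
with -threaded, then run once for each of +RTS -N2 -RTS up to
+RTS -N8 -RTS):

    import Criterion.Main

    main :: IO ()
    main = defaultMain
        [ bgroup "searchDepthFirst"
            [ bench "serial"       (nf searchDepthFirst  1)
            , bench "parList"      (nf searchWithParList 1)
            , bench "forceParList" (nf searchWithForce   1)
            ]
        ]

criterion reports means with confidence intervals, which at least
answers the significant-or-random part for one machine and one core
count; the class1/class2 part still needs a representative set of
inputs per class.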

- Marcus

(1) "Algorithms, A Functional Programming Approach" by Fethi Rabhi
and Guy Laplame.  ISBN 0201-59604-0.



