parList implementation question
Marcus D. Gabriel
marcus at gabriel.name
Sat Dec 19 04:26:20 EST 2009
Denis Bueno wrote:
> On Fri, Dec 18, 2009 at 11:31, Marcus D. Gabriel <marcus at gabriel.name> wrote:
>> than parList via parMap. For example, in one experiment, parMap
>> with parList run at 0.81 the time of the serial solution whereas
>> forceParMap with forceParList run at 0.58 the time of the serial
>> solution. This is to say, forceParList completed in 0.72 the
>> time of parList. So,
> I don't know your application, so it's hard to interpret your numbers.
> Are you sure those measurements are statistically significant?
The particular example that I gave was a straightforward
searchDepthFirst(1) algorithm. To parallelise it, I chose the
initial, simple approach: determine the first list of successor
nodes from an initial node, use parMap to apply the serial
algorithm searchDepthFirst to each node in this list, and then
reduce (concat) the results.
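A minimal sketch of that approach, using the current Control.Parallel.Strategies API (the `Tree`, `successors`, and `searchDepthFirst` definitions here are illustrative stand-ins, not the code from the original experiment):

```haskell
import Control.DeepSeq (NFData)
import Control.Parallel.Strategies (parMap, rdeepseq)

-- A toy search tree; `successors` and `searchDepthFirst` are assumed
-- stand-ins for the problem-specific functions mentioned in the post.
data Tree a = Node a [Tree a]

successors :: Tree a -> [Tree a]
successors (Node _ cs) = cs

-- Serial depth-first search collecting all node labels.
searchDepthFirst :: Tree a -> [a]
searchDepthFirst (Node x cs) = x : concatMap searchDepthFirst cs

-- Parallel variant: spark the serial search over the first level of
-- successors, then concatenate the sub-results.
parSearch :: NFData a => Tree a -> [a]
parSearch t@(Node x _) =
  x : concat (parMap rdeepseq searchDepthFirst (successors t))
```

The parallelism is thus only one level deep: each first-level subtree becomes one spark, so the granularity depends on how evenly the tree branches at the root.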
With some RTS tuning on a dedicated dual-core system, I was able
to reproduce the numbers cited above for this problem consistently
to within about plus or minus 0.01.
I have played with variations of this class of problem, but I have
not systematically compared parList with forceParList across a
broader class of problems.
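Since the post does not show the definitions, here is one plausible reconstruction of the distinction between the two list strategies, expressed with the current Control.Parallel.Strategies API (`lazyParList` and `forceParList` as written here are assumptions, not the poster's code):

```haskell
import Control.DeepSeq (NFData)
import Control.Parallel.Strategies (parList, rdeepseq, rseq, using)

-- `parList rseq` sparks each element only to weak head normal form,
-- so most of the evaluation work may still fall on the consumer.
lazyParList :: [a] -> [a]
lazyParList xs = xs `using` parList rseq

-- The "force" variant evaluates each element fully in its spark,
-- leaving no residual work for the consuming thread -- one plausible
-- reason it could beat plain parList on this workload.
forceParList :: NFData a => [a] -> [a]
forceParList xs = xs `using` parList rdeepseq
```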
I like your question above. It helped me formulate this question:
is what I observed above significant or random? I hope it is not
random.
I guess the deeper question is how one determines whether

    parallelised serial algorithm + strategy1 works for problem class1
    parallelised serial algorithm + strategy2 works for problem class2

where both are better than the serial case for their respective
class of problems across a given range of operating environments,
e.g., -N2 to -N8 for 2 to 8 cores.
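One way to start answering that empirically is a small timing harness that runs the same work under different strategies. The sketch below measures wall-clock time with the `time` package; it is only a rough probe, not a statistically sound benchmark (a library such as criterion would be better for that):

```haskell
import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import Control.Parallel.Strategies (Strategy, using)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Time one strategy applied to `map f xs`, returning the elapsed
-- wall-clock seconds together with the fully forced result, so
-- laziness cannot hide the cost outside the timed region.
timeStrategy :: NFData b => Strategy [b] -> (a -> b) -> [a] -> IO (Double, [b])
timeStrategy strat f xs = do
  t0 <- getCurrentTime
  ys <- evaluate (force (map f xs `using` strat))
  t1 <- getCurrentTime
  return (realToFrac (diffUTCTime t1 t0), ys)
```

Compiled with -threaded, the same binary can then be run repeatedly with +RTS -N2 through -N8 to cover the range of operating environments mentioned above.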
(1) "Algorithms: A Functional Programming Approach" by Fethi Rabhi
and Guy Lapalme. ISBN 0-201-59604-0.