[Haskell-cafe] Re: Red-Blue Stack
Josef Svenningsson
josef.svenningsson at gmail.com
Fri Sep 26 14:14:08 EDT 2008
On Fri, Sep 26, 2008 at 7:18 PM, Stephan Friedrichs
<deduktionstheorem at web.de> wrote:
> apfelmus wrote:
>> [..]
>>
>> Persistent data structures are harder to come up with than ephemeral
>> ones, [...]
>
> Yes, in some cases it's quite hard to find a persistent solution for a
> data structure that is rather trivial compared to its ephemeral
> counterpart. My question is: is there a case where finding a persistent
> solution that performs equally well is *impossible*, rather than just
> harder? I mean, might there be a case where (forced) persistence (as we
> have in pure Haskell) is a definite disadvantage in terms of big-O
> notation? Do some problems even move from P to NP in a persistent setting?
>
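For the easy direction first: some structures are persistent for free in
Haskell, with no asymptotic penalty at all. A minimal stack sketch (just
an illustration, not from the thread):

```haskell
-- A stack is persistent "for free" in Haskell: push shares the entire
-- old list as the tail of the new one, so every earlier version remains
-- valid, and both operations stay O(1).
type Stack a = [a]

empty :: Stack a
empty = []

push :: a -> Stack a -> Stack a
push = (:)

pop :: Stack a -> Maybe (a, Stack a)
pop []       = Nothing
pop (x : xs) = Just (x, xs)
```

After `push 2 s1`, the old version `s1` is still usable unchanged; both
versions share the same tail in memory. The interesting question is about
structures where the ephemeral version relies on in-place update.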
The only result I'm aware of is due to Nicholas Pippenger, who shows
that there are problems which become slower by a factor of log n if
one is not allowed to use mutation:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.670
Interestingly enough, this particular result does not carry over to
Haskell. The particular algorithm that he uses can actually be
implemented optimally using lazy evaluation, as shown in the following
paper:
http://progtools.comlab.ox.ac.uk/members/oege/publications/jfp97
So a pure strict language can be asymptotically less efficient than both
a strict language with mutation and a pure lazy language. Intuitively,
a pure lazy language should also be less efficient than a strict
language with mutation, but I'm not aware of any result establishing that.
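A classic illustration of how laziness helps persistence (Okasaki's
well-known banker's-queue idea, sketched from memory here, not the
construction from Pippenger's paper) is a purely functional FIFO queue
where the front list is rebuilt lazily:

```haskell
-- A purely functional FIFO queue in the style of Okasaki's banker's
-- queue (a sketch).  Invariant: the rear list is never longer than the
-- front list.  The expensive step, f ++ reverse r, is suspended by
-- laziness, and memoization shares its cost between versions, giving
-- O(1) amortized operations even under persistent use.
data Queue a = Queue Int [a] Int [a]

emptyQ :: Queue a
emptyQ = Queue 0 [] 0 []

-- Restore the invariant, rotating the rear list onto the front lazily.
check :: Queue a -> Queue a
check q@(Queue lenf f lenr r)
  | lenr <= lenf = q
  | otherwise    = Queue (lenf + lenr) (f ++ reverse r) 0 []

enqueue :: a -> Queue a -> Queue a
enqueue x (Queue lenf f lenr r) = check (Queue lenf f (lenr + 1) (x : r))

dequeue :: Queue a -> Maybe (a, Queue a)
dequeue (Queue _ [] _ _) = Nothing
dequeue (Queue lenf (x : f) lenr r) =
  Just (x, check (Queue (lenf - 1) f lenr r))
```

In a strict pure language the same code is correct but the rotation cost
can be repeated for every version that forces it; laziness is what makes
the amortized bound survive persistence.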
Cheers,
Josef