Fast Sequences
Josef Svenningsson
josef.svenningsson at gmail.com
Tue Apr 4 15:27:56 EDT 2006
On 3/31/06, Ross Paterson <ross at soi.city.ac.uk> wrote:
> On Thu, Mar 30, 2006 at 04:08:23PM -0500, Jim Apple wrote:
> > Ok, so, let my try once more:
> >
> > Are there any operations f such that f . reverse has a worse big-O
> > time bound than f does alone?
>
> The big-O will be the same in all cases, but the constant factor will
> be a little larger, because reverse does a constant amount of work on
> each node, if that node is demanded.
>
This seemed very mysterious to me at first. It seemed to imply that
one couldn't observe the linear behaviour of reverse; in fact, it
seemed as if, in any program we could write, any call to reverse
would contribute only a constant amount of work. After discussing this
with Nils Anders Danielsson and Ulf Norell today we came up with the
following program:

  index i . reverse . reverse . ... . reverse

where reverse is composed k times.
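Here is a small sketch of that program written against Data.Sequence
(the function name lookupAfterReversals is my own, just for
illustration):

```haskell
import qualified Data.Sequence as Seq
import Data.Sequence (Seq)

-- Compose Seq.reverse with itself k times, then look up index i.
-- Thanks to laziness, each of the k reversals only performs work
-- along the spine path to element i, so the whole lookup costs
-- O(k * log(min(i, n-i))) rather than O(k * n).
lookupAfterReversals :: Int -> Int -> Seq a -> a
lookupAfterReversals k i s = (iterate Seq.reverse s !! k) `Seq.index` i

main :: IO ()
main = print (lookupAfterReversals 5 3 (Seq.fromList [0 .. 9 :: Int]))
```

With k = 5 (an odd number of reversals) the sequence ends up reversed,
so index 3 of [0..9] yields 6.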
Now, if reverse were really constant time, this program would have
complexity O(k) + O(log(min(i,n-i))). But when we actually try to look
up one element we must perform k reversals at each level of the
finger tree. Each reversal is a constant amount of work, since we
force nothing but the thunks on the path to the index we're
interested in. Thus the actual complexity of the above program is
O(k * log(min(i,n-i))). So reverse is costlier than a constant
factor, yet it still seems difficult to write a program which
demonstrates the claimed linearity of the reverse operation(*).
Oh, the complexity of the complexities of lazy data structures.
All this raises the question: is reverse really linear, then? I guess
the floor is open for a formalism that can precisely state the
complexity of lazy algorithms.
But for the practically minded I can only say: if you don't use
reverse too often, it can be considered a constant-time operation.
Does this make sense to you Ross? I'd really like to hear your comments on this.
Cheers,
/Josef
(*) Yes, I know that O(n) means that reverse has *at most* linear
complexity. But we typically mean big-Theta when we write big-O and I
believe that was the intended meaning of the authors as well.