ross at soi.city.ac.uk
Tue Apr 4 15:56:03 EDT 2006
On Tue, Apr 04, 2006 at 09:27:56PM +0200, Josef Svenningsson wrote:
> On 3/31/06, Ross Paterson <ross at soi.city.ac.uk> wrote:
> > On Thu, Mar 30, 2006 at 04:08:23PM -0500, Jim Apple wrote:
> > > Ok, so, let me try once more:
> > >
> > > Are there any operations f such that f . reverse has a worse big-O
> > > time bound than f does alone?
> > The big-O will be the same in all cases, but the constant factor will
> > be a little larger, because reverse does a constant amount of work on
> > each node, if that node is demanded.
> This seemed very mysterious to me at first. It seemed to imply that
> one couldn't observe the linear behaviour of reverse. In fact it
> seemed like in any program that we could write, any call to reverse
> would only contribute a constant amount of work. After discussing this
> with Nils Anders Danielsson and Ulf Norell today we came up with the
> following program:
> index i . reverse . reverse .... reverse
> where you compose reverse k times.
> Now, if reverse were really constant time then it would have
> complexity O(k) + O(log(min(i,n-i))). But when we actually try to
> look up one element, we must perform k reversals at each level in the
> finger tree. Each reversal is a constant amount of work, since we
> force nothing but the thunks on the path to the index we're
> interested in. So the actual complexity of the above program is
> O(k log(min(i,n-i))). Reverse is therefore costlier than a constant
> factor, but it still seems difficult to write a program that
> demonstrates the claimed linearity of the reverse operation(*).
I think that's consistent with what I said: reverse increases the
constant factor; k reverses increase it k times. The constant amount
of work is at each node, not in the structure overall. The increase is
approximately the constant amount of work done at each node. index i
visits O(log(min(i,n-i))) nodes, forcing each of the k reverses on each
of those nodes.