[Haskell-cafe] Re: appending an element to a list

apfelmus apfelmus at quantentunnel.de
Sat Jun 7 07:00:42 EDT 2008


Ronald Guida wrote:
> Thank you, apfelmus.  That was a wonderful explanation; the debit
> method in [1] finally makes sense.

A diagram says more than a thousand words :)

My explanation is not entirely faithful to Okasaki; let me elaborate.


In his book, Okasaki calls the process of transferring the debits from
the input

>   xs = x1 : x2 : x3 : ... : xn : []
>           1    1    1 ... 1    1  0
> 
>   ys = y1 : y2 : y3 : ... : ym : []
>           1    1    1 ... 1    1  0 

to the output

>   xs ++ ys = x1 : x2 : x3 : ... : xn : y1 : y2 : y3 : ... : ym : []
>                 2    2    2 ... 2    2    1    1    1 ... 1    1  0 

"debit inheritance". In other words, the debits of xs and ys (here 1 at
each node) are carried over to xs ++ ys (in addition to the debits
created by ++ itself). In the thesis, he doesn't give it an explicit
name, but discusses this phenomenon in the very last paragraphs of
chapter 3.4 .
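
To see where these debits come from in the first place, recall the
standard definition of (++), written here as a standalone function so
as not to clash with the Prelude:

    append :: [a] -> [a] -> [a]
    append []     ys = ys
    append (x:xs) ys = x : append xs ys
        -- forcing a node of the xs-part performs one step of this
        -- recursion, i.e. each such node is one suspension = one debit

So each node in the xs-part of the result gets one new debit from (++)
itself, on top of the debits inherited from xs and ys.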


The act of relocating debits from child to parent nodes as exemplified with

>   xs ++ reverse ys =
>      x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
>         1    1    1 ... 1    1    n        0 ... 0    0  0 

>   xs ++ reverse ys =
>      x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
>         2    2    2 ... 2    2    0        0 ... 0    0  0

is called "debit passing", but Okasaki doesn't use it earlier than in 
the chapter "Implicit recursive slowdown". But the example I gave here 
is useful for understand the scheduled implementation of real time 
queues. The trick there is to not create a "big" suspension with  n 
debits but to really "physically" distribute them across the data structure

      x1 : x2 : x3 : ... : xn : yn : y{n-1} : ... : y1 : []
         2    2    2 ... 2    2    2        2 ... 2    2  2

and discharge them by forcing a node with every call to  snoc . I say
"physically" because this forcing performs actual work; it does not
simply "mentally" discharge a debit to amortize work that will be done
later. Note that the 2 debits added to each  yi  are an overestimation
here, but the real-time queue implementation pays for them nonetheless.
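
For concreteness, here is a sketch of such a scheduled queue, along the
lines of Okasaki's real-time queue (the function names are my choice):

    data Queue a = Queue [a] [a] [a]   -- front list, rear list, schedule

    -- rotate f r a  produces  f ++ reverse r ++ a  incrementally,
    -- doing O(1) work per node; invariant:  length r = length f + 1
    rotate :: [a] -> [a] -> [a] -> [a]
    rotate []     (y:_)  a = y : a
    rotate (x:xs) (y:ys) a = x : rotate xs ys (y : a)

    -- the schedule is a suffix of the front list; matching on it
    -- forces one suspension of the front list per operation
    exec :: [a] -> [a] -> [a] -> Queue a
    exec f r (_:s) = Queue f r s
    exec f r []    = let f' = rotate f r [] in Queue f' [] f'

    snoc :: Queue a -> a -> Queue a
    snoc (Queue f r s) x = exec f (x : r) s

    uncons :: Queue a -> Maybe (a, Queue a)
    uncons (Queue []      _ _) = Nothing
    uncons (Queue (x : f) r s) = Just (x, exec f r s)

Forcing the head of the schedule in  exec  is exactly the "physical"
discharging described above: every  snoc  (and every removal) pays for
one node of the incremental  rotate .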


My focus on debit passing in the original explanation might suggest that
debits can only be discharged when actually evaluating the node to which
the debit was assigned. This is not the case: an operation may discharge
any debits it likes, even in parts of the data structure that it doesn't
touch. Of course, it must discharge the debits of the nodes it does touch.

For instance, in the proof of theorem 3.1 (thesis) for queues, Okasaki
writes "We can restore the invariant by discharging the first (two) 
debit(s) in the queue" without bothering to analyze which node this 
will be. So, the front list of the queue might look like

      f1 : f2 : f3 : ... : fn : f{n+1} : f{n+2} : ... : fm : []
         0    0    1 ... 1    1        n        0 ... 0    0  0

and the node in question is one of those carrying a single debit, or it
could look like

           f2 : f3 : ... : fn : f{n+1} : f{n+2} : ... : fm : []
              0    0 ... 0    0       n-3       0 ... 0    0  0

and it's the node with the large number of debits. In fact, it's not 
even immediately clear that these two are the only possibilities.
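
For reference, the queue in question is roughly the banker's queue of
chapter 3 (the names below are my choosing, and the constructor name is
reused from the sketch further up): the rear list is reversed lazily as
soon as it becomes longer than the front list, and that  f ++ reverse r 
suspension carries the large block of debits shown in the pictures above.

    data Queue a = Queue Int [a] Int [a]   -- length of front, front,
                                           -- length of rear, rear

    -- restore the invariant  length rear <= length front
    check :: Int -> [a] -> Int -> [a] -> Queue a
    check lenf f lenr r
      | lenr <= lenf = Queue lenf f lenr r
      | otherwise    = Queue (lenf + lenr) (f ++ reverse r) 0 []

    snoc :: Queue a -> a -> Queue a
    snoc (Queue lenf f lenr r) x = check lenf f (lenr + 1) (x : r)

    uncons :: Queue a -> Maybe (a, Queue a)
    uncons (Queue _    []      _    _) = Nothing
    uncons (Queue lenf (x : f) lenr r) = Just (x, check (lenf - 1) f lenr r)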

However, with the debit passing from my previous post, it's easier to 
say which node will be discharged. But even then, only  tail  discharges 
exactly the debits of the nodes it inspects, while the  snoc  operation 
discharges debits in the untouched front list. Of course, as soon as 
identifying the nodes becomes tractable, chances are that you can turn 
it into a real-time data structure.

Another good example is the skew heap from

[2]:
   Chris Okasaki. Fun with binary heap trees.
   in  J. Gibbons, O. de Moor. The Fun of Programming.
   http://www.palgrave.com/PDFs/0333992857.Pdf

Here, the "good" nodes are annotated with one debit. Every  join 
operation discharges O(log n) of them and allocates new ones while 
walking down the tree, but the "time" to actually walk down the tree is 
not counted immediately. This is just like how (++) walks down the first 
list and allocates debits without immediately spending O(n) time to do so.
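
The join in question looks roughly as follows (a standard skew-heap
merge; Okasaki's chapter may differ in names and details):

    data Tree a = Null | Fork a (Tree a) (Tree a)

    -- walk down along the right children, swapping the children at
    -- every step; laziness makes each recursive call a suspension,
    -- so  join  returns after O(1) work and leaves debits behind
    join :: Ord a => Tree a -> Tree a -> Tree a
    join Null t = t
    join t Null = t
    join t1@(Fork x l1 r1) t2@(Fork y l2 r2)
      | x <= y    = Fork x (join t2 r1) l1
      | otherwise = Fork y (join t1 r2) l2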


Regards,
apfelmus


PS: In a sense, discharging arbitrary debits can still be explained with 
debit passing: first pass those debits to the top and then discharge them, 
because any operation has to inspect the top.


