[Haskell-cafe] complexity of functions in PriorityQ.hs
Michael Litchard
michael at schmong.org
Wed Nov 30 21:30:15 UTC 2016
Thank you Xia for your generous help. Below are links to the web page
where I found the code and to the archive containing PriorityQ.hs:
https://wiki.haskell.org/Prime_numbers#External_links
http://www.cs.hmc.edu/~oneill/code/haskell-primes.zip
On Wed, Nov 30, 2016 at 12:54 PM, Li-yao Xia <li-yao.xia at ens.fr> wrote:
> Hello Michael,
>
> Would you have a link to this library? I couldn't find it.
>
> Alternating the subtree in which we recursively call insert is indeed a
> good hint. Another detail is that the keys do not affect the shape of the
> tree (that is, what is left if you ignore the keys and the values, or
> replace them all with ()).
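>
> To make "ignore the keys and the values" concrete, here is a small
> helper (a minimal sketch, not part of PriorityQ.hs) that erases a queue
> down to its bare shape:
>
> eraseShape :: PriorityQ k v -> PriorityQ () ()
> eraseShape Lf             = Lf
> eraseShape (Br _ _ t1 t2) = Br () () (eraseShape t1) (eraseShape t2)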
>
> - The only difference between the first two branches of insert is that the
> key-value pairs are swapped. In both cases, the shape of the tree is
> transformed in the same way.
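>
> Shape-wise, the two branches collapse into a single "insert ignoring
> keys". A sketch of that shape-only insert (again not in the library,
> just an illustration):
>
> insertShape :: PriorityQ () () -> PriorityQ () ()
> insertShape (Br _ _ t1 t2) = Br () () (insertShape t2) t1
> insertShape Lf             = Br () () Lf Lf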
>
> - We can also see that "siftdown wk wv t1 t2" has the same shape as "Br wk
> wv t1 t2", and since siftdown is the entire body of deleteMinAndInsert
> (after the root is pattern-matched away), it follows that
> deleteMinAndInsert does not change the shape of the tree; it only moves
> the key-value pairs around in it.
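>
> With eraseShape from above we can spot-check this claim (fromKeys and
> checkShape are hypothetical helpers, not library code, and rely on the
> derived Eq instance):
>
> fromKeys :: [Int] -> PriorityQ Int ()
> fromKeys = foldr (\k q -> insert k () q) Lf
>
> checkShape :: [Int] -> Int -> Bool
> checkShape ks k = eraseShape (deleteMinAndInsert k () q) == eraseShape q
>   where q = fromKeys (0 : ks)  -- prepend 0 so the queue is never empty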
>
> We have insert, deleteMin, and deleteMinAndInsert. It is natural to wonder
> whether deleteMinAndInsert behaves the same as the composition of insert
> and deleteMin. While the key-value pairs may be shuffled around
> differently, the swapping of subtrees that happens in insert appears to be
> mirrored by leftrem (called by deleteMin).
>
> In other words, we have that "insert () ()" and deleteMin are inverses of
> each other. (The units make the key-value pairs indistinguishable.)
>
> All this means that the shape of a PriorityQ is entirely determined by the
> number n of elements in it, and in particular that shape can be obtained by
> iterating "insert () ()" n times starting from the empty queue Lf.
>
> shape :: Int -> PriorityQ () ()
> shape 0 = Lf
> shape n = insert () () (shape (n-1))
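>
> As a sanity check of the "inverses" claim (a hypothetical helper, relying
> on the derived Eq instance): deleting the minimum from "shape (n+1)"
> should give back exactly "shape n", since all keys are ():
>
> checkInverse :: Int -> Bool
> checkInverse n = deleteMin (insert () () (shape n)) == shape n
>
> -- e.g. "all checkInverse [0..1000]" should come out True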
>
> Since "insert () ()" simply calls itself alternatingly on each subtree,
> which both start off as Lf after the first insertion, the subtrees of a
> "shape n" are also "shape k" for some value(s) of k:
>
> shape (2 * n + 1) = Br () () (shape n) (shape n)
> shape (2 * n + 2) = Br () () (shape (n + 1)) (shape n)
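>
> Both equations are easy to spot-check (again a hypothetical helper,
> relying on the derived Eq instance):
>
> checkOdd, checkEven :: Int -> Bool
> checkOdd  n = shape (2*n + 1) == Br () () (shape n) (shape n)
> checkEven n = shape (2*n + 2) == Br () () (shape (n + 1)) (shape n)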
>
> That is a good characterization of the shape, from which we can compute
> its depth:
>
> depth :: PriorityQ k v -> Int
> depth Lf = 0
> depth (Br _ _ l r) = 1 + max (depth l) (depth r)
>
> Let us do some empirical measurements:
>
>   n              :  0 1 2 3 4 5 6 7 ...
>   depth (shape n):  0 1 2 2 3 3 3 3 ...
>   floor (log2 n) :  _ 0 1 1 2 2 2 2 ...
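>
> The middle row can be reproduced directly, assuming shape and depth as
> defined above:
>
> measured :: [Int]
> measured = map (depth . shape) [0..7]   -- [0,1,2,2,3,3,3,3]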
>
> Generalize:
>
> depth (shape n) = floor (log2 n) + 1
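>
> One way to check this closed form for small n >= 1, using an integer log
> to avoid floating-point surprises (hypothetical helpers, not library
> code):
>
> ilog2 :: Int -> Int
> ilog2 1 = 0
> ilog2 n = 1 + ilog2 (n `div` 2)
>
> checkDepth :: Bool
> checkDepth = all (\n -> depth (shape n) == ilog2 n + 1) [1..1000]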
>
> And since each of the functions presented only recurses along root-to-leaf
> paths, their cost is proportional to the depth of the tree, and thus
> logarithmic in the number of elements.
>
> Li-yao
>
>
> Le 11/30/2016 6:15 PM, Michael Litchard a écrit :
>
>> I'm trying to figure out the complexity of the functions in Melissa
>> O'Neill's priority queue library, PriorityQ.hs.
>>
>> Below find definitions, and what I think the complexity is. I'd
>> appreciate feedback, especially on the ones I have marked as guesses.
>>
>> data PriorityQ k v = Lf
>>                    | Br {-# UNPACK #-} !k v !(PriorityQ k v)
>>                                             !(PriorityQ k v)
>>                    deriving (Eq, Ord, Read, Show)
>>
>> O(log n). This is a guess, but the way insert alternates which subtree it
>> recurses into is my hint.
>>
>> insert :: Ord k => k -> v -> PriorityQ k v -> PriorityQ k v
>> insert wk wv (Br vk vv t1 t2)
>>     | wk <= vk  = Br wk wv (insert vk vv t2) t1
>>     | otherwise = Br vk vv (insert wk wv t2) t1
>> insert wk wv Lf = Br wk wv Lf Lf
>>
>> I have no idea. Reasoning help here please.
>> siftdown :: Ord k => k -> v -> PriorityQ k v -> PriorityQ k v
>>          -> PriorityQ k v
>> siftdown wk wv Lf _ = Br wk wv Lf Lf
>> siftdown wk wv (t @ (Br vk vv _ _)) Lf
>>     | wk <= vk  = Br wk wv t Lf
>>     | otherwise = Br vk vv (Br wk wv Lf Lf) Lf
>> siftdown wk wv (t1 @ (Br vk1 vv1 p1 q1)) (t2 @ (Br vk2 vv2 p2 q2))
>>     | wk <= vk1 && wk <= vk2 = Br wk wv t1 t2
>>     | vk1 <= vk2             = Br vk1 vv1 (siftdown wk wv p1 q1) t2
>>     | otherwise              = Br vk2 vv2 t1 (siftdown wk wv p2 q2)
>>
>> Whatever the cost of siftdown is.
>> deleteMinAndInsert :: Ord k => k -> v -> PriorityQ k v -> PriorityQ k v
>> deleteMinAndInsert wk wv Lf = error "Empty PriorityQ"
>> deleteMinAndInsert wk wv (Br _ _ t1 t2) = siftdown wk wv t1 t2
>>
>> This looks like O(log N) to me.
>> leftrem :: PriorityQ k v -> (k, v, PriorityQ k v)
>> leftrem (Br vk vv Lf Lf) = (vk, vv, Lf)
>> leftrem (Br vk vv t1 t2) = (wk, wv, Br vk vv t2 t) where
>>     (wk, wv, t) = leftrem t1
>> leftrem _ = error "Empty heap!"
>>
>> The cost of siftdown, which is probably more expensive than leftrem.
>>
>> deleteMin :: Ord k => PriorityQ k v -> PriorityQ k v
>> deleteMin (Br vk vv Lf _) = Lf
>> deleteMin (Br vk vv t1 t2) = siftdown wk wv t2 t where
>>     (wk,wv,t) = leftrem t1
>> deleteMin _ = error "Empty heap!"
>>
>> Thanks for the feedback. I'm new to complexity analysis and this is
>> the most complicated code I have tried to tackle thus far. I have
>> Okasaki's book, but am having trouble making best use of it.
>>