[Haskell-cafe] Re: Is lazyness make big difference?

apfelmus at quantentunnel.de
Fri Feb 16 09:03:38 EST 2007


Nick wrote:
>    main        = print primes
>    primes      = 2:filter is_prime [3,5..]
>    is_prime n  = all (\p-> n `mod` p /= 0) (takeWhile (\p-> p*p<=n) primes)
> 
> We can rewrite this in strict languages with lazy constructs. For
> example, in Scala (of course stream is not only lazily evaluated thing
> there)
> 
>    def main(args: Array[String]): Unit = {
>        val n = Integer.parseInt(args(0))
>        System.out.println(primes(ints(2)) take n toList)
>    }
> 
>    def primes(nums: Stream[Int]): Stream[Int] =
>        Stream.cons(nums.head,
>            primes ((nums tail) filter (x => x % nums.head != 0)) )
> 
>    def ints(n: Int): Stream[Int] =
>        Stream.cons(n, ints(n+1))

Aha, I finally recovered some of the examples from which the claim
"Laziness is needed to achieve true compositionality" stems.


The first is already present in your example above and also showed up
some time ago in the thread "Optimisation fun". The point is that the
function 'all' used in

   is_prime n  = all (\p-> n `mod` p /= 0)
                     (takeWhile (\p-> p*p<=n) primes)

works only because we have lazy *Bool*eans. Your Scala version
accidentally (?) circumvents it by using a different algorithm, namely

   primes'      = sieve [2..]
   sieve (x:xs) = x : sieve (filter (\y -> y `mod` x /= 0) xs)
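For example,

   take 5 primes'  ==>  [2,3,5,7,11]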

Thanks to laziness, 'all' stops as soon as one element does not fulfill
the condition. "True compositionality" allows us to define

   all p = foldr (&&) True . map p

and get the lazy behavior. You cannot reuse a strict (&&) in such a
way. Of course, given some support for lazy constructs, you could
define a lazy version of (&&) just as you define a lazy version of
lists (called "Streams"), but not having laziness as the default means
that you have to decide whether your function is intended to be
re-used (=> you have to provide a lazy interface) or not *before* you
write it.
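A minimal sketch of that reuse: the undefined tail below is never
forced, because (&&) already stops at the first False.

   -- Composed from reusable pieces, yet it still short-circuits:
   -- once the counterexample 3 is found, the rest is never touched.
   allLazy :: (a -> Bool) -> [a] -> Bool
   allLazy p = foldr (&&) True . map p

   demo :: Bool
   demo = allLazy even (2 : 3 : undefined)   -- evaluates to False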


The second folklore example is lazy mergesort:

  mergesort []  = []
  mergesort xs  = foldtree1 merge $ map return xs

  foldtree1 f [x] = x
  foldtree1 f xs  = foldtree1 f $ pairs xs
     where
     pairs []        = []
     pairs [x]       = [x]
     pairs (x:x':xs) = f x x' : pairs xs

  merge []     ys     = ys
  merge xs     []     = xs
  merge (x:xs) (y:ys) =
      if x <= y then x:merge xs (y:ys) else y:merge (x:xs) ys

The point about this 'mergesort' is that while it sorts a complete
list in O(N log N) time, it can already return the minimum element in
O(N) time. Thus, we can be bold and reuse 'mergesort' as in

  minimum = head . mergesort

and still get the desired O(N) asymptotic complexity.
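One way to watch how much work is actually forced is to instrument
the comparison (a rough sketch; 'mergeT' and 'mergesortT' are just
traced variants of the code above, reusing 'foldtree1'):

  import Debug.Trace (trace)

  -- Like 'merge', but announces every comparison it performs.
  mergeT :: Ord a => [a] -> [a] -> [a]
  mergeT []     ys     = ys
  mergeT xs     []     = xs
  mergeT (x:xs) (y:ys) = trace "comparing" $
      if x <= y then x : mergeT xs (y:ys) else y : mergeT (x:xs) ys

  mergesortT :: Ord a => [a] -> [a]
  mergesortT [] = []
  mergesortT xs = foldtree1 mergeT $ map return xs

Forcing only 'head (mergesortT [8,7..1])' triggers 7 comparisons (one
per merge node, N-1 in total), while forcing the whole sorted list
triggers on the order of N log N of them.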

Note: The function 'foldtree1' folds the elements of a list as if
they were arranged in a balanced binary tree:

  foldtree1 f [1,2,3,4,5,6,7,8]
 ==>
  ((1 `f` 2) `f` (3 `f` 4)) `f` ((5 `f` 6) `f` (7 `f` 8))

The O(N) bound holds because 'foldtree1' constructs this expression in
O(N + N/2 + N/4 + N/8 + ..) = O(N) time. I'm not entirely sure, but I
think that the more common 'splitAt (length xs `div` 2)' and 'deal
(x:x':xs) = (x:..,x':..)' approaches both take O(N log N) time for the
same task. This makes them unusable for the point here. Besides, only
'foldtree1' can easily be transformed into a proof for dependent
types, but that's another story, told by Conor McBride in 'Why
Dependent Types Matter'.
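
For contrast, here is a sketch of the 'splitAt' variant (my own
transliteration, assuming the same 'merge' as above):

  msortSplit :: Ord a => [a] -> [a]
  msortSplit []  = []
  msortSplit [x] = [x]
  msortSplit xs  = merge (msortSplit as) (msortSplit bs)
     where (as, bs) = splitAt (length xs `div` 2) xs

Every level of the recursion walks its list once just to split it, so
building the tree of merges alone already seems to cost O(N log N),
and 'head . msortSplit' cannot run in O(N).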


There has been another example circulating on #haskell. I think it was
something with

  substrings = concatMap tails . inits

but I can't remember it now. Cale, can you help?
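
For what it's worth, that composition does enumerate every contiguous
substring (including repeated empty ones):

  concatMap tails (inits "ab")
 ==>
  ["","a","","ab","b",""]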

Anyway, the point is that with domain-specific embedded languages,
re-usability without time penalties is crucial. So far, only default
laziness can achieve this.

>> I also think that the laziness in Haskell is already so implicit that
>> 90% of the Haskell code written so far will simply break irreparably
>> if you experimentally remove it.
>>   
> Yes, I understand that present Haskell code relies heavily on laziness,
> but I'm looking at the problem in general: how much do I gain
> if I switch from default strictness to default laziness in my
> hypothetical language L? Or, from the other side,
> how much do I throw away in the reverse case?

Yes, what I meant by "laziness in Haskell is already so implicit" is
that the re-use I exemplified above happens subconsciously. So indeed,
it looks like - and only looks like - one could easily turn a lazy
language into a strict one. Isn't it a virtue of laziness that nobody
notices it in the code?


Regards,
apfelmus


