apfelmus at quantentunnel.de
Mon Oct 9 10:25:55 EDT 2006

```
Cale Gibbard wrote:
> On 08/10/06, apfelmus at quantentunnel.de <apfelmus at quantentunnel.de> wrote:
>> large -> large    depends on large input like above
>>                   but roughly same otherwise
>> small -> large    roughly same
>
> Actually, it's an important point that laziness is generally
> preferable in these cases as well, since the large result might never
> be completely demanded, so if the function is lazy, then it will waste
> no time computing things that are never needed.

Yes, in the sense that laziness allows a composition
(f : large -> small) . (g : large -> large)
to take its demands on the "large" input from f rather than from g. So
to speak, laziness should be preferred for (.. -> large) whenever it is
preferred for (large -> small), so that functions behave properly under
composition.
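
For instance (a small sketch; f and g are just illustrative names):

f :: [Int] -> Int
f = sum . take 3          -- large -> small: demands only a short prefix
g :: [Int] -> [Int]
g = map (*2)              -- large -> large: lazy in its input

example :: Int
example = (f . g) [1..]   -- terminates; f's demand bounds g's work

Because f demands only three elements, (f . g) forces only three
applications of (*2), even on the infinite input [1..].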

> I might also like to point out that by "small" and "large", we're
> actually referring to the number of ways in which components of the
> datastructure can be computed separately, which tends to correspond
> nicely to how one usually pictures small and large pieces of data.

I'm not sure what you mean. I thought it appropriate to count the
number of approximations to a given value v, i.e. to count how many v'
there are with
v' < v
in the usual CPO ordering, for instance Cons _|_ [1,2] < Cons 3 [1,2].
That count is either large or small.
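
For instance, with ordinary lists (Cons written as :, and undefined
standing in for _|_):

partial :: [Int]
partial = undefined : [1,2]   -- the approximation Cons _|_ [1,2]

length partial is still 3: length forces only the spine of the list,
never the elements, so this approximation already suffices for it.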

For example, Int is small, because the only approximation to a number
n is _|_ < n. However, by representing numbers as
data Digit   = Zero | One
newtype Int' = Binary [Digit]
they become large.
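
To see how: a consumer can now make do with a partial approximation.
For instance (repeating the definitions with a deriving clause, purely
for demonstration):

data Digit   = Zero | One deriving (Eq, Show)
newtype Int' = Binary [Digit]

leadingDigit :: Int' -> Digit
leadingDigit (Binary (d:_)) = d

partialInt :: Int'
partialInt = Binary (One : undefined)  -- the approximation Binary (One : _|_)

Here leadingDigit partialInt yields One although all remaining digits
are _|_; such intermediate values are exactly what makes Int' "large".
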
Also,
data Tree' a = Leaf | Node !(Tree' a) a !(Tree' a)
(with strictness annotations) is small, although its values are usually
"big". Is that what you mean?

Admittedly, discussing strictness in terms of "large" and "small",
which are themselves defined via < and _|_, is somewhat circular.

Regards,
apfelmus

```