[Haskell-cafe] Re: Problems with strictness analysis?

wren ng thornton wren at freegeek.org
Thu Nov 6 03:28:23 EST 2008


Dominic Steinitz wrote:
> wren ng thornton <wren <at> freegeek.org> writes:
> 
> [snick]
> 
> > > isum 0 s = s
> > > isum n s = isum (n-1) (s+n)
> > This is tail recursive, and will be optimized to an iterative loop; 
> 
> [snick]
> 
> > In terms of having a compiler 'smart enough', it's not clear that 
> > functions of this sort ought to be inferred strict simply because the 
> > accumulator is ultimately returned to the caller. Consider for example:
> 
> I thought this was strict, as Luke Palmer has already pointed out. My 
> understanding is that a compiler may be able to infer it is strict and then 
> perform eager evaluation.

But how should a compiler infer that it's strict? That's the trick. It's 
easy enough to come up with examples that would foil any naive attempt 
at inference ---like the (++) and (.) examples--- and it's not entirely 
clear that a general solution is possible. GHC's strictness analyzer 
will capture many things, but it's conservative enough to ensure that it 
doesn't change the semantics of the program. Strictness annotations are 
about declaring the semantics of the program, so any optimizations done 
by the compiler must ensure that they don't change the strictness 
properties of the original program.[1]
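
To see why strictness is semantic rather than merely operational,
consider a minimal sketch (my own example, not from the thread): two
functions that differ only by a seq denote different values at _|_.

    constLazy :: a -> b -> a
    constLazy x _ = x                -- constLazy 1 undefined == 1

    constStrict :: a -> b -> a
    constStrict x y = y `seq` x      -- constStrict 1 undefined == _|_

A compiler is only free to evaluate eagerly where it can prove the two
denotations coincide.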

Even for this particular example, it's not clear that making isum strict 
in the accumulator is correct. The issue here is that isum does not have 
a type annotation and so it's polymorphic in (+). For the Int and 
Integer instances it happens that (+) is strict, but we could also have 
a Num instance for Peano integers or some other lazy representation, and 
this function should work on them as well, preserving the 
least-strictness of the (+) operator for those types. For strict (+) 
it's true that isum n _|_ == _|_, but for any (+) that's lazy in its 
first argument that's not true. S(S(S _|_)) is a perfectly valid Peano 
integer and forbidding it as a return value from this function would 
alter the semantics from what was written in the definition.
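
To make the Peano point concrete, here is a hedged sketch (my own
names, not from the original thread) of a lazy Num instance for which
isum must not be strict in its accumulator:

    data Peano = Z | S Peano deriving (Eq, Show)

    instance Num Peano where
      m   + Z   = m
      m   + S n = S (m + n)    -- lazy in the first argument
      m   - Z   = m
      S m - S n = m - n
      _   - _   = error "Peano: negative"
      fromInteger 0 = Z
      fromInteger n = S (fromInteger (n - 1))
      (*)    = error "unused in this sketch"
      abs    = error "unused in this sketch"
      signum = error "unused in this sketch"

    isum :: (Eq a, Num a) => a -> a -> a
    isum 0 s = s
    isum n s = isum (n-1) (s+n)

With this instance, isum (2 :: Peano) undefined evaluates to
S (S (S _|_)): pattern matching S _ on the result succeeds, so forcing
the accumulator at each step would turn a defined value into _|_.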

If Haskell had strictness annotations as part of the type system, then 
there might be room for progress. We could imagine constructing separate 
polymorphic bodies for isum, one for each strictness variant of (+). 
Then, when isum is instantiated at types which define a strict (+), we 
could use an optimized version of isum that forces evaluation at each 
step. Now the trick becomes how to control the explosion of generated 
code for all the combinatorially many strictness variants of types. For 
whole-program optimizing compilers, it should be relatively easy to keep 
the code bloat down, but for separate compilation it's not so 
straightforward. Chances are that any solution along this route would 
still require strictness annotations in order to do the right thing, 
only now we've lifted the annotations into the type language instead of 
leaving them in the term language.
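
In the meantime, the strict variant has to be written by hand. A hedged
sketch of what such a specialization amounts to at a type like Int,
where (+) is strict:

    isumStrictInt :: Int -> Int -> Int
    isumStrictInt 0 s = s
    isumStrictInt n s = isumStrictInt (n - 1) $! (s + n)

This is exactly the code one would hope a compiler could derive from
the polymorphic isum at instantiations whose (+) is known strict.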


> >   > f 0 xs = xs
> >   > f n xs = f (n-1) (replicate n n ++ xs)
> >
> > Since (++) can indeed return partial answers, it's fine for the 
> > accumulator to be lazy. Indeed, making it strict harms performance 
> > significantly. Another example is when the accumulator is a function, as 
> 
> Can this function be strict if (++) isn't? And if it isn't strict, why would it 
> make sense to evaluate it eagerly?

Depends what you mean by "strict". Adding ($!) before the accumulator 
will force each (++) thunk to WHNF, which will in turn force the 
replicate thunk to WHNF. Additional annotations could make the entire 
spine strict, or all the elements strict as well. Everything *can* be 
made strict; whether it ought to be is a separate matter entirely.

The point was that a simple heuristic like detecting accumulator 
patterns and making the accumulators strict is not always a good idea. 
Adding that ($!) to the function will double the evaluation time and 
make no semantic difference. It *doesn't* make sense to evaluate the 
accumulator eagerly, because you'll still have a bunch of thunks; 
they're just pushed back one element in the list. The only way for 
strictness to alter the semantics of this particular function is if the 
entire spine of the second argument is forced (which in turn makes it 
take time quadratic in the length of the return value, not to mention 
the extra space for the whole list).
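
For concreteness, here are hedged sketches (my own variants) of the
strictness levels discussed above:

    -- original: lazy accumulator
    f :: Int -> [Int] -> [Int]
    f 0 xs = xs
    f n xs = f (n-1) (replicate n n ++ xs)

    -- ($!) forces only to WHNF: the outermost (:) cell of (++),
    -- leaving the thunks one element further down the list
    fWhnf :: Int -> [Int] -> [Int]
    fWhnf 0 xs = xs
    fWhnf n xs = fWhnf (n-1) $! (replicate n n ++ xs)

    -- forcing the whole spine at each step changes the semantics
    -- (fSpine n _|_ == _|_) and makes the function quadratic
    fSpine :: Int -> [Int] -> [Int]
    fSpine 0 xs = xs
    fSpine n xs = let xs' = replicate n n ++ xs
                  in length xs' `seq` fSpine (n-1) xs'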


> PS This subject seems to come up often enough to be worth a wiki entry (maybe 
> there already is one). I think we should also be careful with terminology (as 
> Luke Palmer and David Menendez have pointed out). Maybe that could be included 
> in the wiki entry.

It would be worth a wiki entry. In addition to the strict/non-strict vs 
eager/lazy distinction, it's also important to distinguish unlifted 
types from unboxed types (both of which come up in this sort of discussion).
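
A quick hedged illustration of the latter distinction in GHC, using the
GHC.Exts interface:

    {-# LANGUAGE MagicHash #-}
    import GHC.Exts (Int(I#), Int#, (+#))

    -- Int is lifted and boxed: a value of type Int is a pointer to a
    -- thunk, _|_, or an I# box.  Int# is unlifted and unboxed: always
    -- an evaluated machine word, never a thunk.
    addViaUnboxed :: Int -> Int -> Int
    addViaUnboxed (I# x) (I# y) = I# (x +# y)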



[1] So for example, optimizations like those in 
<http://hackage.haskell.org/packages/archive/list-extras/0.2.2.1/doc/html/Data-List-Extras-LazyLength.html> 
are not the sorts of things which it would be correct for a compiler to 
perform automatically. Importing that module can dramatically change the 
strictness properties of a program. Generally this is for the best since 
it simply eliminates excessive computation, but if anyone is relying on 
the strictness properties of the length function, it breaks those 
properties and so it may break their program.
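
A hedged sketch in the same spirit (my own function, not the
list-extras API): comparing a length against a bound without forcing
the whole spine.

    -- True iff length xs > n, forcing at most n+1 cons cells
    lengthExceeds :: [a] -> Int -> Bool
    lengthExceeds _      n | n < 0 = True
    lengthExceeds []     _         = False
    lengthExceeds (_:xs) n         = lengthExceeds xs (n - 1)

    -- lengthExceeds [1..] 5 == True, whereas length [1..] > 5 diverges

Replacing a call to length with something like this changes which
programs terminate, which is precisely why a compiler can't do it
behind your back.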

-- 
Live well,
~wren

