[Haskell-cafe] Runtime strictness analysis for polymorphic HOFs?
ekmett at gmail.com
Thu Jun 18 10:30:45 EDT 2009
I have been exploring a weak form of runtime strictness analysis in a
tracing JIT project of my own, in the spirit of TraceMonkey. Basically, a
tracing JIT doesn't compile blocks of code; instead, once a tracing point
has been passed often enough, it starts a trace from that point, logging the
series of instructions executed to a bounded log. If you make it back to the
starting point before the log fills up, then you've identified a superblock
with a bunch of side exits for handling the cases where your assumptions are
violated.

What is interesting is that in a lazy setting, if you are tracing a bytecode
representation that knows about allocation and thunks, you can do some
additional optimizations here. If on every path to a side exit or the end
of the loop you find that a thunk is evaluated, you can evaluate it
strictly and move its execution earlier in the trace. This gives you a weak
form of runtime strictness analysis. If the pointer to that thunk never
escapes, then you can unbox the contents of the thunk and operate on its
members in registers. Add constant folding, polymorphic inline caching to
improve branch prediction for spineless tagless G-machine thunk evaluation,
and code migration to the side exits, and it becomes an interesting runtime
system.

But the key point for the sake of the current discussion is that it models
strictness analysis and unboxing quite naturally as an artifact of the
jitting process. So, that said, a form of conservative strictness analysis
at runtime is possible.
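To make the "evaluated on every path" criterion concrete, here is a toy
sketch of the check over a recorded trace. The Trace type and all names here
are hypothetical illustrations, not the representation my implementation
actually uses:

```haskell
-- Hypothetical model of a recorded trace: a tree of instructions where
-- each guard branches between the fall-through path and a side exit, and
-- Done marks reaching the trace anchor again (or leaving via an exit).
type ThunkId = Int

data Trace
  = Eval ThunkId Trace   -- the named thunk is forced at this point
  | Instr Trace          -- some other instruction
  | Branch Trace Trace   -- a guard: fall-through vs. side exit
  | Done                 -- end of this path through the trace

-- True iff the thunk is forced on every path through the trace, in which
-- case hoisting its evaluation to the trace entry cannot introduce
-- divergence the original program would have avoided.
demandedEverywhere :: ThunkId -> Trace -> Bool
demandedEverywhere t = go
  where
    go (Eval t' k)  = t == t' || go k
    go (Instr k)    = go k
    go (Branch a b) = go a && go b   -- must be demanded on BOTH sides
    go Done         = False          -- path ended without forcing it

main :: IO ()
main = do
  -- forced on both sides of the guard: safe to hoist
  print (demandedEverywhere 0 (Branch (Eval 0 Done) (Instr (Eval 0 Done))))
  -- forced on only one side: not safe
  print (demandedEverywhere 0 (Branch (Eval 0 Done) Done))
```

The conjunction at each Branch is what makes the analysis conservative: a
thunk that escapes down even one side exit unevaluated stays lazy.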
On Sun, Jun 14, 2009 at 7:42 PM, Paul Chiusano <paul.chiusano at gmail.com> wrote:
> I was recently trying to figure out if there was a way, at runtime, to do
> better strictness analysis for polymorphic HOFs, for which the strictness of
> some arguments might depend on the strictness of the functions that are
> passed as arguments. As an example, consider foldl. The 'seed' parameter of
> foldl can be made strict as long as the binary function used for the fold is
> strict in its arguments. Unfortunately, because strictness analysis is done
> statically, Haskell can't assume anything about the strictness of the binary
> function - assuming we only compile one instance of foldl, it must be the
> most conservative version possible, and that means making the seed parameter
> lazy. :-(
> I started thinking about ways you could do a check at runtime for this sort
> of thing, something to the effect of asking foldl, before heap-allocating a
> thunk for the seed parameter, whether that parameter could be made strict.
> foldl could then inspect the other arguments that have been supplied and,
> based on these, either evaluate the seed parameter or go ahead with creating
> a thunk for it. It's a runtime cost, sure, but would it be more than the
> cost of the additional heap allocation? In any case, a small runtime cost
> might be worth it if the analysis becomes more uniform.
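One low-tech way to sketch the runtime check described here (all names are
hypothetical; this is a thought experiment, not a proposal for how GHC
works) is to carry a strictness annotation alongside the function value and
have the fold dispatch on it once, choosing a strict or lazy loop:

```haskell
-- A function bundled with a claim about its strictness in the accumulator.
-- In a real system this flag would come from the compiler or the tracer,
-- not from the programmer.
data Fn a b = Fn
  { strictInAcc :: Bool       -- is the function strict in its first argument?
  , apply       :: a -> b -> a
  }

-- The dispatch happens once per fold, not once per element, so the
-- runtime cost of the check is a single branch.
foldlAdaptive :: Fn a b -> a -> [b] -> a
foldlAdaptive f
  | strictInAcc f = goStrict
  | otherwise     = goLazy
  where
    goStrict z []     = z
    goStrict z (x:xs) = let z' = apply f z x
                        in z' `seq` goStrict z' xs   -- force the seed eagerly
    goLazy z []       = z
    goLazy z (x:xs)   = goLazy (apply f z x) xs      -- build thunks as usual

main :: IO ()
main = do
  print (foldlAdaptive (Fn True  (+)) 0 [1..100 :: Int])
  print (foldlAdaptive (Fn False (+)) 0 [1..100 :: Int])
```

Hoisting the test out of the loop is the point: the per-call question
"may I force the seed?" is answered once, from the already-supplied function
argument, before any accumulator thunk is allocated.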
> So, my question is: does doing this sort of runtime analysis seem like a
> horrible idea? Is it even possible? Has anyone tried this in the past? (So
> far I haven't found anything, but I would love references if people have
> them.)
> Note that I'm not suggesting Haskell should do anything like this. I'm
> playing around with these ideas because I'm interested in creating a lazy
> language, and I was hoping to have strictness analysis be very predictable
> and uniform, something the programmer can count on and use to reason simply
> about space usage ... which might be a hopelessly unrealistic goal! I guess
> the more general question is: is "perfect" strictness analysis (however
> that is defined) possible, if we're willing to incur some runtime cost?
> What would that look like?
> More background on my thinking here - a bit half-baked, so bear with me!