Dataflow analysis for Cmm

Michal Terepeta michal.terepeta at
Sun Oct 16 13:03:05 UTC 2016


I was looking at cleaning up the situation with dataflow analysis for Cmm.
In particular, I was experimenting with rewriting the current
`cmm/Hoopl/Dataflow` module:
- To only include the functionality to do analysis (since GHC doesn't seem to
  use the rewriting part).
  - Code simplification (we could remove a lot of unused code).
  - Makes it clear what we're actually using from Hoopl.
- To have an interface that works with transfer functions operating on a whole
  basic block (`Block CmmNode C C`). This means that it would be up to the
  user of the algorithm to traverse the whole block.
  - Further simplifications.
  - We could remove the `analyzeFwdBlocks` hack, which AFAICS is just a
    variant of `analyzeFwd` that ignores the middle nodes (probably for
    efficiency of analyses that only look at the blocks).
  - More flexible (e.g., the clients could know which block they're
    processing, we could consider memoizing some per-block information, etc.).
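To make the proposed interface concrete, here is a minimal, self-contained sketch of a forward analysis whose transfer function takes a whole block rather than a single node. Note the hedging: `Label`, `Block`, the `Lattice` record, and `analyzeFwd` below are all simplified stand-ins invented for this sketch, not GHC's actual types (in GHC the label would be Hoopl's `Label` and a block would be `Block CmmNode C C`).

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S
import           Data.Maybe (fromMaybe)

-- Hypothetical stand-ins, not GHC's real types.
type Label = Int
type Block = [String]  -- a block is represented only by its "nodes" here

-- A lattice of facts: a bottom element plus a join that also reports
-- whether the fact grew (needed to detect the fixed point).
data Lattice f = Lattice
  { bottom :: f
  , join   :: f -> f -> (Bool, f)  -- old -> new -> (changed?, joined)
  }

-- Forward analysis where the *client* supplies the transfer function for
-- a whole block (rather than per node), as suggested above.
analyzeFwd
  :: Lattice f
  -> (Block -> f -> f)             -- block transfer function
  -> M.Map Label (Block, [Label])  -- CFG: body and successors per label
  -> Label                         -- entry label
  -> f                             -- fact at the entry
  -> M.Map Label f                 -- resulting fact at each block entry
analyzeFwd lat transfer cfg entry entryFact =
    go [entry] (M.singleton entry entryFact)
  where
    go []         facts = facts
    go (l : work) facts =
      case M.lookup l cfg of
        Nothing -> go work facts
        Just (blk, succs) ->
          let inFact          = fromMaybe (bottom lat) (M.lookup l facts)
              outFact         = transfer blk inFact
              (facts', dirty) = foldr (propagate outFact) (facts, []) succs
          in go (dirty ++ work) facts'
    -- Join the out-fact into a successor; requeue it only if it changed.
    propagate out s (fs, dirty) =
      let old          = fromMaybe (bottom lat) (M.lookup s fs)
          (changed, n) = join lat old out
      in if changed then (M.insert s n fs, s : dirty) else (fs, dirty)
```

With facts as sets of seen node names and join as set union, a CFG with a back edge converges to the same fact at every block in the loop, and a client that only cares about block boundaries can supply a transfer function that skips the middle nodes entirely.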

What do you think about this?

I have a branch that implements the above. It introduces a second parallel
implementation (`cmm/Hoopl/Dataflow2` module), so that it's possible to run
./validate while comparing the results of the old implementation with the
new one.

Second question: how could we merge this (assuming that people are ok with
the approach)? Some ideas:
- Change the cmm/Hoopl/Dataflow module itself along with the three analyses
  that use it, in one step.
- Introduce the Dataflow2 module first, then switch the analyses, then remove
  any unused code that still depends on the old Dataflow module, and finally
  remove the old Dataflow module itself.
(Personally I'd prefer the second option, but I'm also ok with the first one.)

I'm happy to export the code to Phab if you prefer - I wasn't sure what the
recommended workflow is for code that's not ready for review…

