[Haskell-cafe] Was simplified subsumption worth it for industry Haskell programmers?

pareto optimal pareto.optimal at mailfence.com
Fri May 6 19:34:06 UTC 2022


I originally posted on Reddit and the thread contains much debate and discussion.

I'm concerned that the views of the mailing list, and likely of GHC devs, might not be as well represented there or on Discourse, so I'm also copying it here.

Copy-paste of post:

Warning: Long post

tl;dr

- simplified subsumption seems to make common code I write in industry clunky for no good reason in lots of places

- Are others concerned with motivating what seem like "pointless lambdas" to new hires or students for simple tasks?

- Are there more real world advantages that make these frequent annoyances worth it?

- How is quicklook impredicativity useful in industry?

The biggest advantage seems to be that laziness is more predictable.
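As I understand the motivation (my own minimal sketch, not code from the proposal): the old deep subsumption silently eta-expanded programs during type checking, and eta-expansion is not semantics-preserving in the presence of `seq`:

```haskell
-- A bottom value at function type:
f :: Int -> Int
f = undefined

-- seq can tell f apart from its eta-expansion:
-- f `seq` ()            -- diverges: f is bottom
-- (\x -> f x) `seq` ()  -- evaluates to (): a lambda is already in WHNF
```

So a compiler that eta-expands behind your back can turn a diverging program into a terminating one (or change sharing), which is the "predictable laziness" argument.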

However, looking at [commits fixing simplified subsumption errors on GitHub](https://github.com/search?q=simplified+subsumption+language%3Ahaskell&type=commits), I see that very common patterns in industry Haskell now need an explicit lambda for "reasons", such as:

      readFreqSumFile :: (MonadSafe m) => FilePath -> m (FreqSumHeader, Producer FreqSumEntry m ())
    - readFreqSumFile file = readFreqSumProd $ withFile file ReadMode PB.fromHandle
    + readFreqSumFile file = readFreqSumProd $ withFile file ReadMode (\h -> PB.fromHandle h)

and:

    - toOrders <- asks _pdfConfToOrder
    + toOrders <- asks (\r -> _pdfConfToOrder r)

And this typical use of id is no longer valid:

    instance MonadOrvilleControl IO where
    -   liftWithConnection = id
    -   liftFinally = id
    +   liftWithConnection ioWithConn = ioWithConn
    +   liftFinally ioFinally = ioFinally
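A pared-down sketch of why that happens (hypothetical names, not the real Orville types): defining a function at a rank-2 type as plain `id` now fails, because only eta-expansion lets GHC check the polymorphic argument and result separately:

```haskell
{-# LANGUAGE RankNTypes #-}

liftConn :: (forall a. IO a -> IO a) -> forall a. IO a -> IO a
-- liftConn = id   -- rejected since GHC 9.0: deep subsumption is gone
liftConn io = io   -- the eta-expanded form typechecks
```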

On my $work codebase that means hundreds of changes that make our code worse with seemingly no benefit.

This case is addressed in the proposal, but it seems to be handwaved away with:

> The benefit, in terms of programming convenience, is small.

From my perspective while updating my codebase, it certainly doesn't feel that way.

From the perspective of onboarding new Haskell hires, it doesn't feel simpler either. I envision a future teaching session like:

> student: This code looks correct, but I get an error that means nothing to me:

     error:
         • Couldn't match type: b0 -> b0
                          with: forall q. q -> q
           Expected: p -> forall q. q -> q
             Actual: p -> b0 -> b0
         • In the first argument of ‘g’, namely ‘f’
           In the expression: g f
           In an equation for ‘h’: h = g f
       |
       | h = g f
       |     ^

> me: Ah, that's because of something called simplified subsumption which we'll cover much later.
> me: For now, just know putting it in an explicit lambda fixes it when you notice a compile error like that.
> me: Now let's try to move past that and get back to basic file reading and writing.
> student: oookkkay? (feeling unsure, disillusioned about Haskell requiring pointless ceremony and being overly complex for no seeming benefit)
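For reference, that error comes from code shaped roughly like this (`f`, `g`, `h` are hypothetical minimal names, not from a real codebase):

```haskell
{-# LANGUAGE RankNTypes #-}

-- g expects an argument whose result is polymorphic:
g :: (p -> forall q. q -> q) -> Int
g _ = 0

-- f has the "shallow" type p -> q -> q:
f :: p -> q -> q
f _ x = x

-- h = g f          -- rejected since GHC 9.0 with the error above
h :: Int
h = g (\p -> f p)   -- eta-expanding f makes it typecheck again
```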

Being a fan and proponent of Haskell, I think: if this complication is being added, surely something is made possible in return that gives more value.

This led me to [the proposal](https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0287-simplify-subsumption.rst) and I found with simplified subsumption:

- The compiler will change the laziness characteristics and semantics of programs less (no more implicit eta-expansion), which I infer will lead to more predictable performance
- I assume that simplifying a compiler step like this will speed up compile times and reduce space usage
- Quick look impredicativity seems to be the real driving reason behind simplified subsumption and somehow makes dealing with very polymorphic code easier
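From what I can tell, the headline example is that with `ImpredicativeTypes` (now backed by Quick Look in GHC 9.2) you can instantiate type variables with polymorphic types directly. A sketch based on the paper's examples:

```haskell
{-# LANGUAGE ImpredicativeTypes #-}

type Id = forall a. a -> a

-- A list whose elements keep their full polymorphic type;
-- the list type variable is instantiated with a forall-type:
ids :: [Id]
ids = [id, id]

-- Likewise for Maybe; previously this needed a newtype wrapper:
wrapped :: Maybe Id
wrapped = Just id
```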

At this point my thought is:

> Making highly polymorphic code, which isn't typical in industry Haskell, simpler to write in ways I can't determine without great effort was valued over "small inconveniences" that I'll run into daily

But, still wanting to give the benefit of the doubt, I dive face-first into [the proposal for Quick Look impredicativity](https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0274-quick-look-impredicativity.rst).

Reading the whole thing, I still cannot ground this concept in real-world terms that may affect me or that I could take advantage of.

So, I go to the paper [A quick look at impredicativity](https://www.microsoft.com/en-us/research/publication/a-quick-look-at-impredicativity/) and start reading many things I don't fully understand.

Running out of energy, I start skimming and finally find some examples in section 10 APPLICATIONS.

I see an example with gZipWithM that I still don't understand. Further down I see a reference to pieces of code in Streamly that were updated to take advantage of Quick Look impredicativity, and I wonder why that real-world example wasn't included and explained.

So, I'm left frustrated with "simplified" subsumption and am posting here for help answering:

- Are others in the same boat?
- Are there advantages I'm not seeing?
- Can we use my reflection to improve industry/academic communication?

And finally, I'd welcome any relevant commentary surrounding this that I may be oblivious to.

Thanks!