[ghc-steering-committee] Stability
Adam Gundry
adam at well-typed.com
Thu Sep 28 20:29:44 UTC 2023
I haven't quite been keeping up with all of this thread, but I'm glad to
see we are making progress and exploring the trade-offs here. Moritz,
thanks for volunteering to drive this forward and start writing a
proposal. I'm very happy for you to do so; while we may need to continue
seeking the right balance between the different concerns, it's a good
discussion to be having.
I agree with the suggestion to keep the plan for --std=experimental
(which is about an enforcement mechanism) separate from the policy
aspects (i.e. what stability guarantees do we offer for which features).
I can see that having some sort of flag may be helpful, but I'm not yet
convinced we can reasonably give a simple binary "stable or not"
classification; that will become clearer once we categorise features
more precisely and explore the library stability issue further.
On the template-haskell front, I opened
https://gitlab.haskell.org/ghc/ghc/-/issues/24021 which explores one
possible way forward.
Generalising from that ticket, it seems to me that the biggest step we
could make towards greater stability would be to move away from
non-reinstallable packages tied to the compiler (like base and
template-haskell as they stand) towards reinstallable packages. For
example, if base were a reinstallable package then it could receive a new
minor version to support a new version of ghc-internal, without breaking
its clients. The CLC-approved addition of foldl' to Prelude looks set to
cause a lot of disruption, but if we could use an old major version of
base with the new GHC, that disruption would be greatly reduced. This
seems like somewhere where more resources could really help.
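To make the kind of breakage concrete, here is a minimal, entirely
made-up sketch (the module and function names are invented for
illustration): a module that carries its own foldl' alongside the
implicit Prelude import compiles fine against an older base, but once
Prelude exports foldl' too, every use of the name becomes an
"ambiguous occurrence" error. The usual fix is to hide the name from
the Prelude import, which an older base tolerates with at most a
warning:

    {-# LANGUAGE BangPatterns #-}

    -- Hypothetical module illustrating the foldl' clash.
    module Example (sumStrict) where

    -- Once Prelude itself exports foldl', the local definition below
    -- clashes with the implicit Prelude import at every use site.
    -- Hiding the name from Prelude resolves the clash; against an
    -- older base the hiding clause is only a dodgy-import warning.
    import Prelude hiding (foldl')

    -- The locally defined version this module has always carried.
    foldl' :: (b -> a -> b) -> b -> [a] -> b
    foldl' f = go
      where
        go !acc []     = acc
        go !acc (x:xs) = go (f acc x) xs

    sumStrict :: Num a => [a] -> a
    sumStrict = foldl' (+) 0

With a reinstallable base, such modules could keep building unchanged
against the old interface while their maintainers migrate at their own
pace.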
Finally, in case this is useful, I started talking to the authors of the
extension stability proposal a little while back about classifying
extensions, and prepared a very drafty spreadsheet with some initial
thoughts. Take it with a pinch of salt, but here it is:
https://docs.google.com/spreadsheets/d/1sRIcNKdflX2ogrx2EF3vKOn0vmv-UEOdladBqtYXdaw/edit#gid=0
Cheers,
Adam
On 28/09/2023 14:15, Simon Peyton Jones wrote:
> I do not yet see how we can end up with the inability to change. We
> will just have a clearer process for _breaking_ changes. This
>
> absolutely does not mean we have to end up in a local optimum, or
> that we cannot change. We can
>
>
> Agreed that stable features are not unable to change... it's just that
> the bar is higher (that's the intent), so change is more expensive.
> E.g. suppose we want to change some fine point about the semantics of
> INCOHERENT. It might be difficult to have both the old semantics and
> the new, and somehow give a deprecation warning. And INCOHERENT is
> already saying "I know that the ice is thin here and I'm signed up to
> the consequences".
>
> You are reminding us that if a feature is widely depended on, changing
> it imposes real costs on users. I'm reminding us that making more
> things "stable" imposes real costs on GHC's design and implementation
> team -- on whom we all rely. Both concerns are legitimate. We will need
> to debate each stable/experimental choice on a case by case basis,
> informed (among other things) by how widely used they are.
>
> Simon
>
>
> On Thu, 28 Sept 2023 at 13:39, Moritz Angermann
> <moritz.angermann at gmail.com <mailto:moritz.angermann at gmail.com>> wrote:
>
> Simon,
>
> Thank you for bringing us this far. I'd be happy to step up to drive
> this further.
>
> I will say that I do see the tension you see as well. And I do
> believe that if we come to a conclusion on this stability policy, it will
> make GHC development more rigorous, which I think is a good thing
> for all of us, not just the consumers of the compiler. I think
> we need to stay mindful that we can still freely experiment on the
> --std=experimental (or however that flag ends up being named)
> side. I see this whole discussion as leading us towards the
> language-research-reactor side of GHC being primarily confined behind
> --std=experimental, and the stability-seeking (commercial
> application?) side on the other.
>
> I do not yet see how we can end up with the inability to change. We
> will just have a clearer process for _breaking_ changes. This
> absolutely does not mean we have to end up in a local optimum, or
> that we cannot change. We can!
>
> Unless someone speaks up who does _not_ want me to drive this, I'm
> happy to start driving this discussion (including writing the
> proposals, starting next week).
>
> Best,
> Moritz
>
> On Thu, 28 Sept 2023 at 19:30, Simon Peyton Jones
> <simon.peytonjones at gmail.com <mailto:simon.peytonjones at gmail.com>>
> wrote:
>
> Should we have a document (or better, a spreadsheet?) with a
> bullet point for each experimental feature to be considered?
> I believe we need to take into account that we can’t end up
> classifying most of today’s Haskell programs as unstable. As
> such I’d like to propose that we add to each feature a
> counter for how much of Hackage (as a proxy for real-world
> usage) uses the specific feature.
>
>
> I think that would be a helpful way to "ground" the discussion a
> bit more. (The spreadsheet should also give a preliminary
> classification of extensions, at least into stable/experimental.)
>
> I'm running out of capacity to drive this debate, useful though
> it is. Does anyone else feel able to do some of the legwork?
>
> So far this is all informal committee discussion. The next step
> would be a GHC Proposal inviting broader feedback from the
> community. I'm just hoping that by debugging it between
> ourselves we can side-step some unproductive discussions in that
> bigger context.
>
> I believe we need to take into account that we can’t end up
> classifying most of today’s Haskell programs as unstable
>
>
> There is a tension here. If we say something is "stable" then
> we have to be super-careful about changing it. (That's the
> whole point.) And yet if the feature (like INCOHERENT) is a
> flaky "you are on your own" unsafePerformIO-like feature, I am
> anxious about tying ourselves into an inability to change the
> behaviour of corner cases. I'm not sure how to resolve this
> tension.
>
> Simon
>
>
>
> On Thu, 28 Sept 2023 at 02:20, Moritz Angermann
> <moritz.angermann at gmail.com <mailto:moritz.angermann at gmail.com>>
> wrote:
>
> I think we are moving in the right direction! I do see
> however the tension rising on (2). And without being clear
> about (2), I don’t think we can properly agree on (1). We
> can agree on (1) in principle, but we need to clarify what
> we consider unstable/experimental, as a precondition for
> final agreement on (1). Otherwise people might agree to
> (1), only to be surprised by (2). For (3), I’d be happy to
> try to get my employer to provide resources for the
> implementation of --std=experimental.
>
> Thus I believe we should start to build a list of features
> we consider sufficiently experimental that they should
> preclude an existing Haskell program from being considered
> stable. This list for me contains so far:
>
> - Linear Types
> - Dependent Haskell
>
> Adam pointed out experimental backends and non-tier-1
> platforms. I tend to agree with this, but see it as
> distinctly separate from language stability (outside of
> backend-specific language extensions, e.g. JSFFI).
>
> Platforms/backends may be experimental but those are (save
> for specific lang exts) orthogonal to the Haskell code the
> compiler accepts.
>
> Should we have a document (or better, a spreadsheet?) with a
> bullet point for each experimental feature to be considered?
> I believe we need to take into account that we can’t end up
> classifying most of today’s Haskell programs as unstable. As
> such I’d like to propose that we add to each feature a
> counter for how much of Hackage (as a proxy for real-world
> usage) uses the specific feature.
>
> Best,
> Moritz
>
> On Wed, 27 Sep 2023 at 10:35 PM, Simon Peyton Jones
> <simon.peytonjones at gmail.com
> <mailto:simon.peytonjones at gmail.com>> wrote:
>
> it's essential that we continue to have these
> discussions to ensure we're making the best
> decisions for the project and our community.
>
>
> Yes exactly! It's tricky and nuanced; hence trying to
> articulate something in a concrete doc, so we are all on
> the same page (literally!).
>
> However, deprecation cycles don't mean we're averse
> to major changes. It means we introduce them
> responsibly. When we believe a superior design is
> possible, we can start a deprecation process to
> transition towards it.
>
>
> I have tried to make this explicit in Section 4. See
> what you think.
>
> I think there are three phases
>
> 1. Agree this document. Is it what we want?
> 2. Categorise extensions into stable/experimental, and
> identify experimental language features.
> 3. Implement --std=experimental (Section 6).
>
> (1) is what we are doing now. (2) will be some work,
> done by us. (3) is a larger task: it will require
> significant work to implement, and may impose unwelcome
> churn of its own. But that should not stop us doing (1)
> and (2).
>
> Simon
>
>
>
>
> On Wed, 27 Sept 2023 at 10:20, Moritz Angermann
> <moritz.angermann at gmail.com
> <mailto:moritz.angermann at gmail.com>> wrote:
>
> Dear Adam,
>
> Thank you for your thoughtful feedback. I understand
> your reservations, and it's essential that we
> continue to have these discussions to ensure we're
> making the best decisions for the project and our
> community. Let me address each of your points in turn:
>
> - Cognitive Overhead for Users:
> I understand the concern about cognitive overhead
> due to the inability to remove complexity. However,
> our primary intention is to ensure a gradual
> transition for our users rather than abrupt shifts.
> Introducing changes via deprecation cycles allows
> users to adjust to modifications over time, reducing
> the immediate cognitive load. It's a balance between
> stability and simplicity, and I believe this
> approach still allows us to reduce complexity.
>
> - Maintenance Burden in the Compiler:
> Maintaining backward compatibility does indeed
> introduce some overhead. Still, it also encourages a
> more disciplined and considered approach to changes.
> With our deprecation cycles in place, it's not that
> we never remove complexity; rather, we do it in a
> way that provides ample time for adjustments. This
> benefits both the development team and the community.
>
> - Risk of Local Optimum:
> This is a valid concern. However, deprecation cycles
> don't mean we're averse to major changes. It means
> we introduce them responsibly. When we believe a
> superior design is possible, we can start a
> deprecation process to transition towards it. The
> flexibility and duration of our deprecation cycles
> can be tailored depending on the severity of the
> breaking change.
>
> - Discouraging Volunteer Contributors:
> I understand that lengthy approval processes can be
> off-putting. But it's crucial to note that a
> rigorous process ensures the consistency and
> reliability of our project. We always welcome and
> value contributions. Moreover, I believe this
> stability policy will provide us with clear
> guardrails on how changes can be contributed.
>
> I will not disagree on the costs. I do believe
> though that the costs for _breaking_ changes in the
> compiler ought to be borne by the people making the
> change, instead of those who use the compiler (and
> may not even benefit from those changes that caused
> breakage). I also see the team maintaining GHC as
> the one to enforce this; they are the ones who cut
> the releases. The fact that we may have breaking
> changes due to _bugs_ is covered explicitly in the
> stability policy document.
>
> With my CLC hat on, I have been focusing on the same
> stability guidelines (if a change breaks existing
> code, I have been against it unless it comes with a
> deprecation policy). The issues with the
> template-haskell and ghc libraries are noted. For the
> ghc library the question will remain whether we intend
> to provide a stable API to the compiler or not. I
> believe many tools would like to have one, and if we
> relegate anything unstable to ghc-experimental this
> might be achieved. For template-haskell this is a
> bigger concern. Maybe we can collectively come up
> with a solution that would allow us to provide a
> Template Haskell interface that is better insulated
> from the compiler.
>
> However for template-haskell we might also need to
> look at what exactly caused those breaking changes
> in the past.
>
> What this document outlines (in my understanding) is
> that any experimental feature development can _only_
> be visible behind --std=experimental and a
> dependency on ghc-experimental. Unless those are
> given, the compiler should accept existing programs.
> This should allow us enough room to innovate
> (everyone is always free to opt in to bleeding-edge
> features with --std=experimental). I also believe
> that most of what we have today will need to be
> treated as non-experimental simply because we did
> not have that mechanism before. We want to avoid
> breaking existing programs as much as possible, so
> relegating existing features to --std=experimental
> (except for some fairly clear ones: e.g. Dependent
> Haskell, and Linear Types?) is not really possible.
> What we can however do is start deprecation phases
> over a few versions, moving features we consider
> highly experimental (or maybe even bad) behind
> `--std=experimental`. Just by having deprecation
> phases and giving the ecosystem enough time to adjust
> (and provide feedback) we might come to different
> conclusions.
>
> As I've also outlined in the document, _if_ GHC were
> trivially swappable, companies like IOG would _love_
> to try new compilers and report back bugs and
> regressions. As it is today, we can't. Making a
> large live codebase compatible with 9.8 is a
> multi-week effort. Experimenting with nightlies
> is technically impossible. _If_ I could trivially
> set up builds of our software with GHC nightlies,
> I'd be _happy_ to build out the infrastructure to
> report performance regressions (compilation,
> runtime, ...) for our codebase back to the GHC
> team; however I can't. And thus
> I'm stuck patching and fixing 8.10, and 9.2 today.
> 9.6 maybe soon, but likely at the point in time
> where 9.6 is not going to see any further releases,
> so I can spare myself even trying to forward-port my
> patches to HEAD. Not that I could test them
> with HEAD properly, as our source is not accepted by
> HEAD. Thus I end up writing patches against old,
> stale branches. This to me is a fairly big
> discouragement from contributing to GHC.
>
> Best,
> Moritz
>
> On Mon, 25 Sept 2023 at 15:17, Adam Gundry
> <adam at well-typed.com <mailto:adam at well-typed.com>>
> wrote:
>
> I'm afraid that I'm somewhat sceptical of this
> approach.
>
> A strong stability guarantee is certainly a
> valuable goal, but it also
> comes with costs, which I'd like to see more
> clearly articulated. Some
> of them include:
>
> * Cognitive overhead for users, because of
> the inability to remove
> complexity from the design.
>
> * Increasing maintenance burden in the
> compiler, because of the
> additional work needed to implement new features
> and the inability to
> remove complexity from the implementation.
>
> * A risk of getting stuck in a local optimum,
> because moving to a
> better design would entail breaking changes.
>
> * Discouraging volunteer contributors, who
> are much less likely to
> work on a potentially beneficial change if the
> process for getting it
> approved is too onerous. (I'm worried we're
> already reaching that point
> due to the increasing burden of well-intentioned
> processes.)
>
> Ultimately every proposed change has a
> cost-benefit trade-off, with risk
> of breakage being one of the costs. We need to
> consciously evaluate that
> trade-off on a case-by-case basis. Almost all
> changes might break
> something (e.g. by regressing performance, or
> for Hyrum's Law reasons),
> so there needs to be a proportionate assessment
> of how likely each
> change is to be damaging in practice, bearing in
> mind that such an
> assessment is itself costly and limited in scope.
>
> It seems to me that the GHC team have taken on
> board lessons regarding
> stability of the language, and the extension
> system already gives quite
> a lot of flexibility to evolve the language in a
> backwards-compatible
> way. In my experience, the key stability
> problems preventing upgrades to
> recent GHC releases are:
>
> * The cascading effect of breaking changes in
> one library causing the
> need to upgrade libraries which depend upon it.
> This is primarily under
> the control of the CLC and library maintainers,
> however, not the GHC
> team. It would help if base were minimal and
> reinstallable, but that
> isn't a total solution either, because you'd
> still have to worry about
> packages depending on template-haskell or the
> ghc package itself.
>
> * Performance regressions or critical bugs.
> These tend to be a
> significant obstacle to upgrading for smaller
> commercial users. But
> spending more of our limited resources on
> stability of the language
> means fewer resources for resolving these issues.
>
> There's surely more we can do here, but let's be
> careful not to pay too
> many costs to achieve stability of the
> *language* alone, when stability
> of the *libraries* and *implementation* are both
> more important and
> harder to fix.
>
> Adam
>
>
> On 22/09/2023 10:53, Simon Peyton Jones wrote:
> > Dear GHC SC
> >
> > To avoid derailing the debate about -Wsevere
> > <https://mail.haskell.org/pipermail/ghc-steering-committee/2023-September/003407.html>,
> > and the HasField redesign
> > <https://mail.haskell.org/pipermail/ghc-steering-committee/2023-September/003383.html>,
> > I'm starting a new (email for now) thread about stability.
> >
> > I have tried to articulate what I believe is
> an evolving consensus in
> > this document
> > <https://docs.google.com/document/d/1wtbAK6cUhiAmM6eHV5TLh8azEdNtsmGwm47ZulgaZds/edit?usp=sharing>.
> >
> > If we converge, we'll turn that into a proper
> PR for the GHC proposal
> > process, although it has wider implications
> than just GHC proposals and
> > we should share with a broader audience. But
> let's start with the
> > steering committee.
> >
> > Any views? You all have edit rights.
> >
> > I think that the draft covers Moritz's and
> Julian's goals, at least that
> > was my intention. I have pasted Moritz's
> last email below, for context.
> >
> > Simon
> >
> >
> > ========= Moritz's last email ============
> >
> > Now, this is derailing the original
> discussion a bit, and I'm not sure
> > how far we want to take this. But, regarding
> @Simon Marlow
> > <mailto:marlowsd at gmail.com>'s comment
> >
> > This is one cultural aspect of our
> community I'd like to shift: the
> > expectation that it's OK to make breaking
> changes as long as you
> > warn about
> > them or go through a migration cycle. It
> just isn't! (and I speak as
> > someone who used to make lots of changes,
> I'm now fully repentant!).
> > That's
> > not to say that we shouldn't ever change
> anything, but when
> > considering the
> > cost/benefit tradeoff adding a migration
> cycle doesn't reduce the
> > cost, it
> > just defers it.
> >
> >
> > I actually read this as we should stop having
> breaking changes to begin
> > with. And _if_ we
> > do have breaking changes, that deprecation
> does not change the need to
> > actually change
> > code (cost). As outlined in my reply to that,
> and @Richard Eisenberg
> > <mailto:lists at richarde.dev>'s observation, it
> > "smears" the cost. The--to me--_much_ bigger
> implication of deprecation
> > cycles is that we
> > _inform_ our _customers_ about upcoming
> changes _early_, instead of
> > _after the fact_. We
> > also give them ample time to react. Be it by changing their code, or
> changing their code, or
> > raising their concerns.
> > Would the Simplified Subsumptions / Deep
> Subsumptions change have looked
> > different?
> > As such I see deprecation cycles as
> > orthogonal to the question of whether we
> > should have breaking
> > changes to begin with.
> >
> > Thus I believe the following:
> >
> > - Do have a deprecation cycle if possible.
> > - Do not treat a deprecation cycle as an
> excuse. Costs are deferred
> > but are as large as ever.
> >
> >
> > should be upgraded to:
> > - Preferably _no_ breaking changes.
> > - If breaking changes, then with a
> deprecation cycle, unless technically
> > infeasible.
> > - An understanding that any breaking change
> incurs significant costs.
> >
> > OCaml recently added multicore support, and
> they put tremendous effort
> > into making
> > sure it keeps backwards compatibility:
> >
> > https://github.com/ocaml-multicore/docs/blob/main/ocaml_5_design.md
> >
> > PS: we should also agree that a "stable"
> extension should not
> > require dependencies on
> > ghc-experimental. To become stable, any
> > library support for an extension must
> move into `base`.
> >
> >
> > This seems like a good idea, however I still
> > maintain that _experimental_
> > features should not be on-by-default in a
> stable compiler. Yes, ideally
> > I'd not even see them in a stable compiler,
> but I know this view is
> > contentious. The use of `ghc-experimental`
> should therefore be guarded
> > by `--std=experimental` as Julian suggested.
> That is a loud opt-in to
> > experimental features.
> >
>
--
Adam Gundry, Haskell Consultant
Well-Typed LLP, https://www.well-typed.com/
Registered in England & Wales, OC335890
27 Old Gloucester Street, London WC1N 3AX, England