Fwd: Release policies

Boespflug, Mathieu m at tweag.io
Thu Dec 14 21:59:38 UTC 2017

>> * But actually if we look at their respective release notes, GHC 8.2.1 was
>> released in July 2017, even though the Cabal website claims that
>> Cabal- was released in August 2017 (see
>> https://www.haskell.org/cabal/download.html). So it looks like GHC didn't
>> just not give enough lead time about an upstream dependency it shipped
>> with, it shipped with an unreleased version of Cabal!
> Perhaps this is true and I admit I wasn't happy about releasing the compiler
> without a Cabal release. However, there was no small amount of pressure to
> push forward nevertheless as the release was already quite late and the
> expectation was a Cabal release would be coming shortly after the GHC
> release. Coordination issues like this are a major reason why I think it
> would be better if GHC were more decoupled from its dependencies'
> upstreams.

I have the same sentiment. Do you think this is feasible in the case
of Cabal? Even if, say, something like Backpack shows up all over again?
If so, are there concrete changes that could be made to support the
following workflow:

* Upstreams develop their respective libraries independently of GHC,
using their own testing.
* If they want GHC to ship a newer version, they create a Diff. As
Manuel proposed in a separate thread, this must happen before feature
freeze, unless...
* ... a critical issue is found in the upstream release, in which case
upstream cuts a new release and submits a Diff again.
* GHC always has the option to back out an offending upgrade and
revert to a known good version. In fact, it should preemptively do so
while waiting for a new release of upstream.
* In general, GHC does not track git commits of upstream dependencies
in an unknown state of quality, but vetted and tested releases.
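The last point could even be enforced mechanically. Here is a
hypothetical sketch of a pre-merge check flagging any dependency
pinned to something other than a release tag; the function name and
sample data are invented for illustration, not existing GHC tooling:

```python
# Hypothetical sketch of the "track releases, not commits" rule:
# reject any upstream pin that is not a known release tag.

def unreleased_pins(pins, released_tags):
    """Return the dependencies whose pinned ref is not a release tag.

    pins          -- mapping of dependency name -> pinned git ref
    released_tags -- mapping of dependency name -> set of release tags
    """
    return {
        dep: ref
        for dep, ref in pins.items()
        if ref not in released_tags.get(dep, set())
    }

# Invented example data:
pins = {
    "Cabal": "Cabal-v2.0.0.2",  # a vetted release tag: fine
    "haddock": "3f0ff71c",      # a bare commit: flagged
}
released = {
    "Cabal": {"Cabal-v2.0.0.0", "Cabal-v2.0.0.2"},
    "haddock": {"haddock-2.18.1"},
}

print(unreleased_pins(pins, released))  # {'haddock': '3f0ff71c'}
```

A check like this could run in CI on every Diff that bumps a
dependency, so reverting to a known good version is always a pin
change away.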

>> * GHC should never under any circumstance ship with an unreleased version
>> of any independently maintained dependency. Cabal is one such dependency.
>> This should hold true for anything else. We could just add that policy to
>> the Release Policy.
> We can adopt this as a policy, but doing so very well may mean that GHC
> will be subject to schedule slips beyond its control. We can hope that
> upstream maintainers will be responsive, but there is little we can do
> when they are not.

Why not? If GHC only ever tracks upstream releases (as I think it
should), not git commits in an unknown state, then we don't need upstream
maintainer responsiveness. Because at any point in time, all GHC
dependencies are already released. If GHC should ship with a newer
version of a dependency, the onus is on the upstream maintainer to
submit a Diff asking GHC to move to the latest version. Are there good
reasons for GHC to track patches not upstreamed and released?

>> * Stronger still, GHC should not switch to a new major release of a
>> dependency at any time during feature freeze ahead of a release. E.g. if
>> Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair
>> game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet.
> Yes, this I agree with. I think we can be more accommodating of minor
> bumps to fix bugs which may come to light during the freeze, but major
> releases should be avoided.


>> * The 3-release backwards compat rule should apply in all circumstances.
>> That means major version bumps of any library GHC ships with, including
>> base, should not imply any breaking change in the API's of any such library.
> I'm not sure I follow what you are suggesting here.

Nothing new: just that the 3-release policy doesn't just apply to
base, but also anything else that happens to ship with GHC (including
Cabal). Perhaps that's already the policy?

>> * GHC does have control over reinstallable packages (like text and
>> bytestring): GHC need not ship with the latest versions of these, if indeed
>> they introduce breaking changes that would contravene the 3-release policy.
>> * Note: today, users are effectively tied to whatever version of the
>> packages ships with GHC (i.e. the "reinstallable" bit is problematic today
>> for various technical reasons). That's why a breaking change in bytestring
>> is technically a breaking change in GHC.
> I don't follow: Only a small fraction of packages, namely those that
> explicitly link against the `ghc` library, are tied. Can you clarify
> what technical reasons you are referring to here?

Builds often fail for strange reasons when both bytestring-0.10.2 and
bytestring-0.10.1 are in scope. Some libraries in a build plan pick up
one version while others pick up another. The situation here might well
be better than it used to be, but at this point in time Stackage works
hard to ensure that in any given package set, there is *exactly one*
version of any package. That's why Stackage aligns versions of core
packages to whatever ships with the GHC version the package set is
based on.
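To illustrate the invariant Stackage maintains, here is a hypothetical
sketch (not Stackage's actual code) of a check that a resolved build
plan contains exactly one version of every package:

```python
# Hypothetical sketch of the "exactly one version per package" check
# a curated package set enforces. Sample data is invented.
from collections import defaultdict

def version_conflicts(build_plan):
    """Given (package, version) pairs from a resolved build plan,
    return the packages that appear with more than one version."""
    seen = defaultdict(set)
    for pkg, ver in build_plan:
        seen[pkg].add(ver)
    return {pkg: sorted(vers) for pkg, vers in seen.items()
            if len(vers) > 1}

plan = [
    ("text", "1.2.2.2"),
    ("bytestring", "0.10.2"),
    ("bytestring", "0.10.1"),  # two versions in one plan: trouble
]
print(version_conflicts(plan))  # {'bytestring': ['0.10.1', '0.10.2']}
```

An empty result would mean the plan satisfies the invariant; anything
else is the kind of mixed-version situation described above.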

So in this sense, AFAIK a bug in bytestring can't be worked around by
reinstalling bytestring (not in Stackage land): it requires waiting
for the next GHC version that will ship with a new version of
bytestring with that bug fixed. I'm not entirely familiar with all
Stackage details so Michael - please step in if this is incorrect.

>> * Because there are far fewer consumers of metadata than consumers of say
>> base, I think shorter lead time is reasonable. At the other extreme, it
>> could even be just the few months during feature freeze.
> Right, I wouldn't be opposed to striving for this in principle although
> I think we should be aware that breakage is at times necessary and the
> policy should accommodate this. I think the important thing is that we be
> aware of when we are breaking metadata compatibility and convey this to
> our users.

That sounds reasonable. But when have we ever needed to ship
non-backwards-compatible metadata ASAP? The integer-gmp example was a
case in point: the Cabal-2.0 feature it was using was merely syntactic
sugar, since no tool *yet* interprets the new constructs in any
special way AFAIK.

>> * The release notes bugs mentioned above and the lack of consistent upload
>> to Hackage are a symptom of lack of release automation, I suspect. That's
>> how to fix it, but we could also spell out in the Release Policy that GHC
>> libraries should all be on Hackage from the day of release.
> Yes, the hackage uploads have historically been handled manually. I have
> and AFAIK most release managers coming before me have generally deferred
> this to Herbert as is quite meticulous. However, I think it would be
> nice if we could remove the need for human intervention entirely.

Indeed. Can be part of the deploy step in the continuous integration pipeline.
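As a hypothetical sketch of what that deploy step could look like,
assuming the release tarballs are already built and a cabal-install
new enough to support `cabal upload --publish` (the directory layout
and function names here are invented):

```python
# Hypothetical sketch of an automated Hackage-upload deploy step.
# Assumes sdist tarballs for all GHC boot libraries are already in
# one directory; uses the real `cabal upload --publish` command.
import subprocess
from pathlib import Path

def upload_commands(dist_dir):
    """Build one `cabal upload --publish` command per sdist tarball."""
    return [
        ["cabal", "upload", "--publish", str(tarball)]
        for tarball in sorted(Path(dist_dir).glob("*.tar.gz"))
    ]

def deploy(dist_dir, run=subprocess.run):
    """Run the uploads, failing the pipeline on any error."""
    for cmd in upload_commands(dist_dir):
        run(cmd, check=True)
```

Credentials would come from CI secrets, and the step would run only on
release tags, so no human intervention is needed on release day.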

More information about the ghc-devs mailing list