Tentative high-level plans for 7.10.1

p.k.f.holzenspies at utwente.nl
Tue Oct 7 08:29:07 UTC 2014


Mmm... yes, you make some strong points against LTS. It also seems somewhat unlikely that people would commit to maintaining an LTS version alongside the ongoing development of HEAD...

I must say, though, that significant API changes with only minor version bumps have bitten me too. I'm not sure we should want this.

Ph.


PS. Maybe long, but not too long, let alone TL;DR. Thanks for the clarity

________________________________________
From: mad.one at gmail.com <mad.one at gmail.com> on behalf of Austin Seipp <austin at well-typed.com>
Sent: 07 October 2014 02:45
To: John Lato
Cc: Johan Tibell; Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org; Simon Marlow
Subject: Re: Tentative high-level plans for 7.10.1

The steps for making a GHC release are here:
https://ghc.haskell.org/trac/ghc/wiki/MakingReleases

So, for the record, making a release is not *that* arduous, but it
does take time. On average it takes me about a day to go from
absolutely nothing to a release announcement:

 1. Bump version, update configure.ac, tag.
 2. Build source tarball (this requires 1 build, but can be done very quickly).
 3. Make N binary builds for each platform (the most time consuming
part, as this requires heavy optimizations in the builds).
 4. Upload documentation for all libraries.
 5. Update webpage and upload binaries.
 6. Send announcement.
 7. Upload binaries from other systems later.
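For illustration, the steps above could be sketched as a dry-run script. The actual commands live on the MakingReleases wiki page, so everything below - the version number, the host names, the make targets - is an assumption, not the real procedure:

```shell
#!/bin/sh
# Hypothetical dry-run sketch of the release steps; real commands are
# on the MakingReleases wiki page. Version and hosts are made up.
VERSION=7.10.1
run() { printf '+ %s\n' "$*"; }   # print each command instead of running it

run git tag -s "ghc-$VERSION-release"                  # 1. bump version, tag
run make sdist                                         # 2. build source tarball
run make binary-dist                                   # 3. one binary build per platform
run rsync -a docs/ downloads.haskell.org:docs/         # 4. upload library docs
run rsync -a "ghc-$VERSION"-*.tar.xz downloads.haskell.org:ghc/  # 5. upload binaries
run mail -s "ANNOUNCE: GHC $VERSION" ghc-devs@haskell.org        # 6. announce
```

Step 3 dominates the wall-clock time, since each platform's binary build runs with heavy optimizations.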

Herbert has graciously begun taking care of stewarding and uploading
the libraries. There are a few things we could do to streamline this
process technically, but ultimately all of these steps have to happen,
pretty much, regardless of the automation involved.

But I don't think this is the real problem.

The real problem is that GHC's implementation moves forward
extremely, extremely quickly. It is not clear how to reconcile this
development pace with something like the dozens of point releases an
LTS version would need. At least, not without a lot of concentrated
effort from almost every single developer. Perhaps a lot of it could
be alleviated through social process, but it's not a strictly
technical problem IMO.

What do I mean by that? I mean that:

 - We may introduce a feature in GHC version X.Y.
 - That feature might have a bug, or other problems.
 - We may fix it, and in the process fix up a few other things and
refactor HEAD, which will eventually become GHC X.Y+2.
 - Repeat the previous two steps a few times.
 - Now we want to backport the fixes for that feature from HEAD back to X.Y.
 - But GHC X.Y has *significantly* diverged from HEAD in that
timeframe, because of all those repeated fixes and refactorings!

In other words: we are often so aggressive at refactoring code that
the *act* of backporting in and of itself can be complicated, and it
gets harder as time goes on - because the GHC of a year ago is often
so different from the GHC of today.
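This divergence can be reproduced in miniature with git (a toy repository, not GHC's actual history): a fix written on top of a refactoring will not cherry-pick onto the release branch by itself, but applies cleanly once the refactoring is backported too.

```shell
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb stable

echo 'feature v1' > Feature.hs                 # the X.Y release ships this
git add Feature.hs && git commit -qm 'release X.Y'

git checkout -qb head                          # development continues on HEAD
echo 'feature, refactored' > Feature.hs
git commit -qam 'refactor HEAD'
printf 'feature, refactored\nbug fix\n' > Feature.hs
git commit -qam 'fix bug'
fix=$(git rev-parse HEAD); refactor=$(git rev-parse HEAD~1)

git checkout -q stable
# The fix alone does not apply: its context assumes the refactored code.
if git cherry-pick -x "$fix" >/dev/null 2>&1; then
  echo 'fix applied alone'
else
  git cherry-pick --abort
  echo 'fix conflicts without its dependency'
fi
# Backporting the refactor first lets the fix go in cleanly.
git cherry-pick -x "$refactor" "$fix" >/dev/null
echo 'backport succeeded'
```

In GHC's case the chain is not one refactoring commit but a dozen or more, which is what makes the trade-off below so painful.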

As a concrete example of this, let's look at the changes between GHC
7.8.2 and GHC 7.8.3:

https://github.com/ghc/ghc/compare/ghc-7.8.2-release...ghc-7.8.3-release

There are roughly 110 commits between 7.8.2 and 7.8.3. But as the 7.8
branch lived on, backporting fixes became significantly more complex.
In fact, I estimate close to 30 of those commits were NOT direct 7.8
requirements - they were brought in because _actual fixes_ depended
on them, in non-trivial ways.

Take for example f895f33 by Simon PJ, which fixes #9023. The problem
with f895f33 is that by the time we fixed the bug in HEAD with that
commit, the history had diverged significantly from the branch. In
order to get f895f33 to apply cleanly, I had to backport *at least* 12
to 15 other commits that it depended upon, and the commits those
commits depended upon, and so on. I did not see any trivial way to do
this otherwise.

I believe at one point Gergo backported some of his fixes to 7.8
after they had become 'non-applicable' (and I thank him greatly for
that), but inevitably we brought along the few extra changes anyway,
since they were *still* needed for other fixes. And some of them had
API changes. So the choice was between completely rewriting 4 patches
for an old codebase (work done by two separate people) and backporting
a few extra patches.

The above is obviously an extreme case. But it stands to reason this
would _only happen again_ with 7.8.4, and probably worse, since more
months of development have gone by.

An LTS release would mandate things like no API changes at all, but
that significantly limits our ability to *actually* backport patches
in cases like the above, due to dependent changes. The alternative,
obviously, is to do what Gergo did and manually rewrite such a fix
for the older branch. But that means we would have had to do that for
*every patch* in the same boat, including 2 or 3 other fixes we
needed!

Furthermore, while I am the release manager and do think I know a bit
about GHC, it is hopeless to expect me to know it all. I would
absolutely require a coordinated effort from the active developers
involved in each feature to help develop 'retropatches' that don't
break API compatibility. And they are almost all volunteers! Simon
and I are the only ones who wouldn't qualify as such.

So - at what point does it stop being 'backporting fixes to older
versions' and instead become literally "working on the older version
of the compiler AND the new one in tandem"? Given our rate of
internal churn and change, this seems like a significant burden to
ask of developers in general. If we had an LTS release of GHC that
lasted 3 years, for example, developers would be expected to work on
the current code *and on their old code for the next three years*.
That is, absolutely and undeniably, a _huge_ investment to ask of
someone. It's not clear how many could actually sustain it (and I
don't blame them).

This email is already a bit long (which is extremely unusual for my
emails, I'm sure you all know), but I just wanted to give some insight
on the process.

I think the technical/automation aspects are the easy part. We could
probably fully automate the GHC release process in days, if one or two
people worked on it diligently. The hard part is actually balancing
the needs and time of users and developers, which is a complex
relationship.

On Mon, Oct 6, 2014 at 6:22 PM, John Lato <jwlato at gmail.com> wrote:
> On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell <johan.tibell at gmail.com> wrote:
>>
>> On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel
>> <hvriedel at gmail.com> wrote:
>>>
>>> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote:
>>> > The danger, of course, is that people aren't very enthusiastic about
>>> > bug-fixing older versions of a compiler, but for
>>> > language/compiler-uptake, this might actually be a Better Way.
>>>
>>> Maybe some of the commercial GHC users might be interested in donating
>>> the manpower to maintain older GHC versions. It's mostly a
>>> time-consuming QA & auditing process to maintain old GHCs.
>>
>>
>> What can we do to make that process cheaper? In particular, which are the
>> manual steps in making a new GHC release today?
>
>
> I would very much like to know this as well.  For ghc-7.8.3 there were a
> number of people volunteering manpower to finish up the release, but to the
> best of my knowledge those offers weren't taken up, which makes me think
> that the extra overhead for coordinating more people would outweigh any
> gains.  From the outside, it appears that the process/workflow could use
> some improvement, perhaps in ways that would make it simpler to divide up
> the workload.
>
> John L.
>
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>



--
Regards,

Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/
