From manuel.chakravarty at tweag.io Fri Dec 8 02:29:19 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Fri, 8 Dec 2017 13:29:19 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> Message-ID: [Continuing to pick up the threads left dormant while I was travelling.] Hi Ben, As per #14506, we still need to find a solution to triggering CircleCI builds from Harbormaster, right? Just to make sure that I understand the requirements correctly, from what you wrote, Harbormaster needs a separate (from the main GHC repo) Git repo where contributors push patches (I guess, via the arc tool) for review and that is what you called the staging area. Moreover, CircleCI requires the staging area repo to be on GitHub. Is that correct? Concerning the push permission (which would need to be manually enabled on GitHub), are you saying that the repo would have to be push for all? Wouldn’t it be possible to use a GitHub API key or so? I found this feature request, which supposedly has been complete. Does that help? https://discuss.circleci.com/t/compatibility-with-phabricator/183 I also found this https://github.com/signalfx/phabricator-circleci which has the disadvantage that it involves using AWS Lambda. Cheers, Manuel > Am 23.11.2017 um 02:31 schrieb Ben Gamari : > > Manuel M T Chakravarty writes: > >>> Ben Gamari : >>> >>> Manuel M T Chakravarty writes: >>> >>>> Mateusz had a first stab >>>> >>>> https://github.com/tweag/ghc/blob/tweag/circleci-macos/appveyor.yml >>>> >>>> but got stuck in the default resource limits. We emailed them with a >>>> request, but there was no answer so far. I’ll follow up on it. >>>> >>> Any update on this? For the record, I have confirmed with the Rustaceans >>> that Mozilla indeed pays for their usage. >> >> No, sorry, I have been completely taken out with travelling and >> conference for the last week. (Just arrived in the Netherlands.) > > Quite alright; just wondering. > >>> * It appears that CircleCI only builds the head commits of pushes. >>> Making this configurable has been a feature request for nearly a year >>> now, so it looks like we will need to work around this. I briefly >>> looked into setting up some automation to trigger builds on otherwise >>> untested commits, but ran into apparent API bugginess. It looks like >>> we'll just need to ensure that contributors push at most one commit >>> at a time for now to ensure all commits get testing. See GHC #14505 >>> for details. >> >> Why do we need the intermediate builds exactly? Wouldn’t they usually >> fail? (When I do PRs with multiple commits, the state of the tree >> between this commits will usually not be well-defined.) > > No, every commit should build. This is in part a difference between > Phabricator's patch-based model and GitHub's feature branch model. > However, many projects using the latter also demand that all > intermediate commits must be atomic, buildable changes. Sacrificing this > property greatly complicates bisection. 
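As an aside on the workaround mentioned above (automation that triggers CircleCI builds for the otherwise untested, non-head commits of a push), such a trigger can be issued through CircleCI's REST API. The following is only an illustrative sketch: the v1.1 "tree" endpoint and its "revision" parameter are recalled from the 2017-era API and should be checked against current documentation, and the organisation, repository, token, and commit hashes are placeholders.

#!/usr/bin/env python3
"""Trigger CircleCI builds for specific commits of a branch.

Illustrative sketch only: the v1.1 endpoint and its parameters should be
verified against CircleCI's documentation; ORG, REPO, the token, and the
commit hashes below are placeholders.
"""
import os
import requests

ORG, REPO = "ghc", "ghc"             # placeholder project coordinates
TOKEN = os.environ["CIRCLE_TOKEN"]   # per-project or personal API token

def trigger_build(branch: str, revision: str) -> None:
    # POST .../tree/<branch> asks CircleCI to build <branch>; passing a
    # 'revision' in the body pins the build to a particular commit.
    url = (f"https://circleci.com/api/v1.1/project/github/"
           f"{ORG}/{REPO}/tree/{branch}")
    resp = requests.post(url,
                         params={"circle-token": TOKEN},
                         json={"revision": revision})
    resp.raise_for_status()
    print(f"requested build of {revision} on {branch}")

if __name__ == "__main__":
    # Example: request builds for two intermediate commits a push skipped.
    for sha in ["<intermediate-commit-1>", "<intermediate-commit-2>"]:
        trigger_build("master", sha)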
> > Building all intermediates is desireable as ultimately we would like to > preserve per-commit build artifacts for the last few months of commits > to enable easy bisection. > >>> * I have tried enabling testing of Harbormaster Differentials via >>> CircleCI. Unfortunately it appears that CircleCI only supports >>> testing repositories hosted on GitHub. There are a few ways in which >>> we could proceed, >>> >>> a. Move ghc's staging area (the repository where Arcanist pushes >>> patches submitted with `arc diff`) to GitHub. This, however, would >>> require that we manually manage push privileges to this repository. >> >> What do you mean by manually manage push privileges? In what way is >> that not manually at the moment? > > As long as a user had added a key to their Phabricator account pushing > to the staging area will "just work". It requires no intervention from > me. I believe in the event that we moved to GitHub I would need to > manually grant users commit privileges to the staging area. > >>> b. Try to work around the issue by mirroring GHC's staging area to >>> GitHub and manually trigger CircleCI builds. >> >> Is the manual triggering necessary, because Harbormaster would need to >> wait until the repo is triggered (which it can’t)? > > In general this whole mirroring situation doesn't appear to be a > use-case that Phabricator's CircleCI integration supports. It demands > that the staging area of the tested repository is hosted on GitHub. > >>> * I have been honing the Hadrian test infrastructure; I'm currently >>> waiting on a build, but I expect this attempt will pass, at which point >>> I will merge it. >> >> Great! > > Sadly the build appears to reliably hang with no output. I'll need to > look into this. > > Cheers, > > - Ben From manuel.chakravarty at tweag.io Mon Dec 11 06:16:38 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Mon, 11 Dec 2017 17:16:38 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <874lpm86ra.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> Message-ID: <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> > 23.11.2017 06:15 schrieb Ben Gamari : > Simon Marlow > writes: > >> On 22 November 2017 at 15:31, Ben Gamari wrote: >> >>> Manuel M T Chakravarty writes: >>>> Why do we need the intermediate builds exactly? Wouldn’t they usually >>>> fail? (When I do PRs with multiple commits, the state of the tree >>>> between this commits will usually not be well-defined.) >>> >>> No, every commit should build. This is in part a difference between >>> Phabricator's patch-based model and GitHub's feature branch model. >>> However, many projects using the latter also demand that all >>> intermediate commits must be atomic, buildable changes. Sacrificing this >>> property greatly complicates bisection. >>> >>> Building all intermediates is desireable as ultimately we would like to >>> preserve per-commit build artifacts for the last few months of commits >>> to enable easy bisection. >>> >> >> I don't quite understand this. Yes building all commits is desirable, but >> in the case of Phabricator each revision is going to be a single commit, >> no? 
So why is this an issue? Or is it an issue only for github PRs? > > The problem is that many contributors, including Simon PJ, Richard, and > me, tend to push batches of work. For instance, when I land > contributors' differentials I first apply a batch, then validate > locally, and then push as a chunk. We can change this if necessary, but > it will need to be via social convention which hasn't worked very well > historically. I would like to understand this issue (which prompts #14505) a bit better. Do we agree that, in an ideal world, nobody should ever push to master, and instead, all contributions start as PRs or differentials, which are merged after being reviewed, which crucial includes building them in CI? If that were the case, then we wouldn’t have the issue with needing to force CircleCI to build non-head commits, right? If we agree that this is the ideal scenario, then what is preventing us from getting to that scenario? In particular, I don’t think you should need to validate contributor code manually on your boxes. The purpose of an automatic CI pipeline is exactly to avoid such manual steps. I am sorry if any of these questions appear naive. Cheers, Manuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Dec 11 08:51:32 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 11 Dec 2017 08:51:32 +0000 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> Message-ID: The problem is that many contributors, including Simon PJ, Richard, and me, tend to push batches of work I have not been following this thread (“job accounting” seemed above my pay grade) but I saw this mention of my name 😊. Without having read myself into the context there seem to be two issues * Every commit to master should be validate-clean, and this should be tested by the CI framework not by the contributor. This is essential. I would be delighted if every commit I made went through that gate. I’m careful, but occasionally not careful enough. * Most – perhaps all – commits should go through a code-review process. Here I freely admit that I tend to use (or abuse?) my status to make most of my commits without review, except perhaps informally with individuals. I’d be absolutely willing to review this if (a) in fact people think that the extra step would really improve quality (perhaps looking at past commits) or (b) the very fact that I do so makes people feel cross. In both cases, the big thing from my point of view is that, once I’m ready to press “go”, I’d like to take it off my to-do list by pushing into a queue that will result in either * a commit to master, or * an email to me saying “more work to do here” I’m sure that many contributors would value this property. The tricky bit is, I suppose, deciding when the reviewers are happy. Maybe that need central human intervention; I’m not sure. 
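The "queue" described here is essentially what merge-bot tools automate: take the next pending change, validate it against current master, and either land it or report back. Purely as an illustration of that loop, and not a proposal for any specific tool, a toy version might look like the sketch below; the queue contents, the ./validate command, and the notification step are all hypothetical placeholders.

#!/usr/bin/env python3
"""Toy merge-queue loop illustrating the workflow described above.

Everything here is hypothetical: 'pending_branches' would come from the
review tool, './validate' stands in for the real test command, and
'notify' would send the "more work to do here" email.
"""
import subprocess

def sh(*cmd) -> bool:
    # Run a command, returning True on success.
    return subprocess.run(cmd).returncode == 0

def notify(author: str, branch: str, message: str) -> None:
    # Placeholder: a real queue would email the author here.
    print(f"[mail to {author}] {branch}: {message}")

def process(branch: str, author: str) -> None:
    sh("git", "fetch", "origin")
    sh("git", "checkout", "-B", "queue-try", f"origin/{branch}")
    # Re-apply the change on top of current master so that what is tested
    # is what would actually land.
    if not sh("git", "rebase", "origin/master"):
        sh("git", "rebase", "--abort")
        notify(author, branch, "does not rebase cleanly -- more work to do here")
        return
    if not sh("./validate"):
        notify(author, branch, "validation failed -- more work to do here")
        return
    sh("git", "checkout", "master")
    sh("git", "merge", "--ff-only", "queue-try")
    sh("git", "push", "origin", "master")
    notify(author, branch, "landed on master")

if __name__ == "__main__":
    pending_branches = [("wip/example-branch", "someone@example.org")]  # hypothetical
    for branch, author in pending_branches:
        process(branch, author)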
Also important: I often accumulate a sequence of related patches: * I’d like to be able to queue them as a group, or at least with very low additional overhead per patch * Sometimes I have attempted to separate two things (e.g. a bug fix and a refactoring). In the past I have not felt it important to guarantee a validate-clean state between the two; e.g. I might have erred in separating the two. The cost benefit ratio of teasing them apart has not seemed worth the pain. But separating them at least means we have two commit messages. On this point I can see that I might have to change my behaviour, because the CI would be obliged to stop as soon as it found one that didn’t validate. I might need help to figure out how best to fix up a patch sequence by moving little bits from one to another, but I’m sure it’s possible. I’m very keen to support a better CI process in any way I can. Simon From: Ghc-devops-group [mailto:ghc-devops-group-bounces at haskell.org] On Behalf Of Manuel M T Chakravarty Sent: 11 December 2017 06:17 To: Ben Gamari Cc: ghc-devops-group at haskell.org; Mateusz Kowalczyk Subject: Re: [GHC DevOps Group] CircleCI job accounting question 23.11.2017 06:15 schrieb Ben Gamari >: Simon Marlow > writes: On 22 November 2017 at 15:31, Ben Gamari > wrote: Manuel M T Chakravarty > writes: Why do we need the intermediate builds exactly? Wouldn’t they usually fail? (When I do PRs with multiple commits, the state of the tree between this commits will usually not be well-defined.) No, every commit should build. This is in part a difference between Phabricator's patch-based model and GitHub's feature branch model. However, many projects using the latter also demand that all intermediate commits must be atomic, buildable changes. Sacrificing this property greatly complicates bisection. Building all intermediates is desireable as ultimately we would like to preserve per-commit build artifacts for the last few months of commits to enable easy bisection. I don't quite understand this. Yes building all commits is desirable, but in the case of Phabricator each revision is going to be a single commit, no? So why is this an issue? Or is it an issue only for github PRs? The problem is that many contributors, including Simon PJ, Richard, and me, tend to push batches of work. For instance, when I land contributors' differentials I first apply a batch, then validate locally, and then push as a chunk. We can change this if necessary, but it will need to be via social convention which hasn't worked very well historically. I would like to understand this issue (which prompts #14505) a bit better. Do we agree that, in an ideal world, nobody should ever push to master, and instead, all contributions start as PRs or differentials, which are merged after being reviewed, which crucial includes building them in CI? If that were the case, then we wouldn’t have the issue with needing to force CircleCI to build non-head commits, right? If we agree that this is the ideal scenario, then what is preventing us from getting to that scenario? In particular, I don’t think you should need to validate contributor code manually on your boxes. The purpose of an automatic CI pipeline is exactly to avoid such manual steps. I am sorry if any of these questions appear naive. Cheers, Manuel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Mon Dec 11 15:22:31 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 11 Dec 2017 10:22:31 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> Message-ID: <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones writes: > The problem is that many contributors, including Simon PJ, Richard, and > me, tend to push batches of work > > I have not been following this thread (“job accounting” seemed above > my pay grade) but I saw this mention of my name 😊. Without having > read myself into the context there seem to be two issues > > > * Every commit to master should be validate-clean, and this should > be tested by the CI framework not by the contributor. This is > essential. I would be delighted if every commit I made went through > that gate. I’m careful, but occasionally not careful enough. > > * Most – perhaps all – commits should go through a code-review > process. Here I freely admit that I tend to use (or abuse?) my > status to make most of my commits without review, except perhaps > informally with individuals. I’d be absolutely willing to review > this if (a) in fact people think that the extra step would really > improve quality (perhaps looking at past commits) or (b) the very > fact that I do so makes people feel cross. > I personally think that we should strive for your first point (every commit should be validate-clean) before attempting to tackle your second. I, for one, am rather skeptical that putting all of your patches through review would significantly affect quality. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From manuel.chakravarty at tweag.io Tue Dec 12 08:05:07 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Tue, 12 Dec 2017 19:05:07 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> Message-ID: <80CCD2AA-FF2F-4BCA-975C-2B5E53D462C7@tweag.io> > Simon Peyton Jones : > > The problem is that many contributors, including Simon PJ, Richard, and > me, tend to push batches of work > > I have not been following this thread (“job accounting” seemed above my pay grade) but I saw this mention of my name 😊. Without having read myself into the context there seem to be two issues > > Every commit to master should be validate-clean, and this should be tested by the CI framework not by the contributor. This is essential. I would be delighted if every commit I made went through that gate. I’m careful, but occasionally not careful enough. 
> > Most – perhaps all – commits should go through a code-review process. Here I freely admit that I tend to use (or abuse?) my status to make most of my commits without review, except perhaps informally with individuals. I’d be absolutely willing to review this if (a) in fact people think that the extra step would really improve quality (perhaps looking at past commits) or (b) the very fact that I do so makes people feel cross. I think, the important point, especially in the context we are discussing here is the first one. We should ensure that every single commit goes through CI before hitting master. > In both cases, the big thing from my point of view is that, once I’m ready to press “go”, I’d like to take it off my to-do list by pushing into a queue that will result in either > a commit to master, or > an email to me saying “more work to do here” > > I’m sure that many contributors would value this property. The tricky bit is, I suppose, deciding when the reviewers are happy. Maybe that need central human intervention; I’m not sure. The idea with pull requests on GitHub is basically what you are describing. CI automatically validates the pull request and a reviewer can just push the merge button if CI is green and they are happy. If they are unhappy, the comment, which results in a notification to you. I always thought this is what Differentials in Phabricator are for. Ben, is that not so? > Also important: I often accumulate a sequence of related patches: > > I’d like to be able to queue them as a group, or at least with very low additional overhead per patch > > Sometimes I have attempted to separate two things (e.g. a bug fix and a refactoring). In the past I have not felt it important to guarantee a validate-clean state between the two; e.g. I might have erred in separating the two. The cost benefit ratio of teasing them apart has not seemed worth the pain. But separating them at least means we have two commit messages. > > On this point I can see that I might have to change my behaviour, because the CI would be obliged to stop as soon as it found one that didn’t validate. I might need help to figure out how best to fix up a patch sequence by moving little bits from one to another, but I’m sure it’s possible. This is where things get a bit tricky. A standard solution is to ”squash” commits; i.e., these separate patches are combined into one when they are finally applied to master. This means that the separate patches are never tested separately and that ”broken” intermediate state is never observed by CI or bisection. (The patches are still separate during code review.) Manuel > I’m very keen to support a better CI process in any way I can. > > Simon > > From: Ghc-devops-group [mailto:ghc-devops-group-bounces at haskell.org] On Behalf Of Manuel M T Chakravarty > Sent: 11 December 2017 06:17 > To: Ben Gamari > Cc: ghc-devops-group at haskell.org; Mateusz Kowalczyk > Subject: Re: [GHC DevOps Group] CircleCI job accounting question > > 23.11.2017 06:15 schrieb Ben Gamari >: > Simon Marlow > writes: > > > On 22 November 2017 at 15:31, Ben Gamari > wrote: > > > Manuel M T Chakravarty > writes: > > Why do we need the intermediate builds exactly? Wouldn’t they usually > fail? (When I do PRs with multiple commits, the state of the tree > between this commits will usually not be well-defined.) > > No, every commit should build. This is in part a difference between > Phabricator's patch-based model and GitHub's feature branch model. 
> However, many projects using the latter also demand that all > intermediate commits must be atomic, buildable changes. Sacrificing this > property greatly complicates bisection. > > Building all intermediates is desireable as ultimately we would like to > preserve per-commit build artifacts for the last few months of commits > to enable easy bisection. > > > I don't quite understand this. Yes building all commits is desirable, but > in the case of Phabricator each revision is going to be a single commit, > no? So why is this an issue? Or is it an issue only for github PRs? > > The problem is that many contributors, including Simon PJ, Richard, and > me, tend to push batches of work. For instance, when I land > contributors' differentials I first apply a batch, then validate > locally, and then push as a chunk. We can change this if necessary, but > it will need to be via social convention which hasn't worked very well > historically. > > I would like to understand this issue (which prompts #14505) a bit better. Do we agree that, in an ideal world, nobody should ever push to master, and instead, all contributions start as PRs or differentials, which are merged after being reviewed, which crucial includes building them in CI? If that were the case, then we wouldn’t have the issue with needing to force CircleCI to build non-head commits, right? > > If we agree that this is the ideal scenario, then what is preventing us from getting to that scenario? In particular, I don’t think you should need to validate contributor code manually on your boxes. The purpose of an automatic CI pipeline is exactly to avoid such manual steps. > > I am sorry if any of these questions appear naive. > > Cheers, > Manuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From manuel.chakravarty at tweag.io Tue Dec 12 08:12:53 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Tue, 12 Dec 2017 19:12:53 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87vahdi91m.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, > Am 12.12.2017 um 02:22 schrieb Ben Gamari : > > Simon Peyton Jones writes: > >> The problem is that many contributors, including Simon PJ, Richard, and >> me, tend to push batches of work >> >> I have not been following this thread (“job accounting” seemed above >> my pay grade) but I saw this mention of my name 😊. Without having >> read myself into the context there seem to be two issues >> >> >> * Every commit to master should be validate-clean, and this should >> be tested by the CI framework not by the contributor. This is >> essential. I would be delighted if every commit I made went through >> that gate. I’m careful, but occasionally not careful enough. >> >> * Most – perhaps all – commits should go through a code-review >> process. Here I freely admit that I tend to use (or abuse?) my >> status to make most of my commits without review, except perhaps >> informally with individuals. 
I’d be absolutely willing to review >> this if (a) in fact people think that the extra step would really >> improve quality (perhaps looking at past commits) or (b) the very >> fact that I do so makes people feel cross. >> > I personally think that we should strive for your first point (every > commit should be validate-clean) before attempting to tackle your > second. I, for one, am rather skeptical that putting all of your patches > through review would significantly affect quality. I completely agree. So, what is preventing us from disabling direct pushes to master and requiring all contributions to go through a PR or Differential? PRs and Differentials are squashed on merging to master and the whole problem (with CircleCI building only heads of commit groups) just disappears. I believe that this is the usual approach. If the outstanding issue is that you combine multiple contributions from contributors and manually valid them to ensure they are not just individually sound, but also in combination, we might want to consider https://bors.tech which is exactly for that kind of thing (and apparently used by Rust). Cheers, Manuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Dec 12 15:54:05 2017 From: ben at well-typed.com (Ben Gamari) Date: Tue, 12 Dec 2017 10:54:05 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: <87374ghrhj.fsf@ben-laptop.smart-cactus.org> Manuel M T Chakravarty writes: > Hi Ben, > >> Am 12.12.2017 um 02:22 schrieb Ben Gamari : >> >> I personally think that we should strive for your first point (every >> commit should be validate-clean) before attempting to tackle your >> second. I, for one, am rather skeptical that putting all of your patches >> through review would significantly affect quality. > > I completely agree. > On re-reading what I wrote above, I realize that it was a bit unclear. To clarify, the sentence >> I, for one, am rather skeptical that putting all of your patches >> through review would significantly affect quality. was intended to mean "I am not convinced that quality would improve as a result of review". Is this what you are agreeing with? > So, what is preventing us from disabling direct pushes to master and > requiring all contributions to go through a PR or Differential? > > PRs and Differentials are squashed on merging to master and the whole > problem (with CircleCI building only heads of commit groups) just > disappears. I believe that this is the usual approach. > We generally don't want to squash. Those who typically commit directly generally master generally take pains to ensure that their commit history is sensible. This history is valuable, both for the future readers of the code as well as later bisection; projecting it out is quite undesirable. We could indeed require that Simon, et al. open Differentials for each of their commits. 
However, as I mentioned earlier it doesn't seem likely that putting these commits through rigorous review would be fruitful relative to the work it would require; these differential would be just be for merging. Unfortunately, the cost of creating a differential is non-trivial and I'm not certain that we want to make Simon pay it for every commit. Admittedly this is where a Git-based workflows have a bit of an advantage: it is much more convenient to merge a group of commits in a structure-preserving manner via a git branch than a stack of differentials. > If the outstanding issue is that you combine multiple contributions > from contributors and manually valid them to ensure they are not just > individually sound, but also in combination, we might want to consider > > https://bors.tech > > which is exactly for that kind of thing (and apparently used by Rust). Indeed, I'm familiar with bors. Unfortunately it's quite GitHub-centric, so at for the moment it's not a viable option. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Wed Dec 13 14:36:35 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 13 Dec 2017 14:36:35 +0000 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: On 12 December 2017 at 08:12, Manuel M T Chakravarty < manuel.chakravarty at tweag.io> wrote: > Hi Ben, > > Am 12.12.2017 um 02:22 schrieb Ben Gamari : > > Simon Peyton Jones writes: > > The problem is that many contributors, including Simon PJ, Richard, and > me, tend to push batches of work > > I have not been following this thread (“job accounting” seemed above > my pay grade) but I saw this mention of my name 😊. Without having > read myself into the context there seem to be two issues > > > * Every commit to master should be validate-clean, and this should > be tested by the CI framework not by the contributor. This is > essential. I would be delighted if every commit I made went through > that gate. I’m careful, but occasionally not careful enough. > > * Most – perhaps all – commits should go through a code-review > process. Here I freely admit that I tend to use (or abuse?) my > status to make most of my commits without review, except perhaps > informally with individuals. I’d be absolutely willing to review > this if (a) in fact people think that the extra step would really > improve quality (perhaps looking at past commits) or (b) the very > fact that I do so makes people feel cross. > > I personally think that we should strive for your first point (every > commit should be validate-clean) before attempting to tackle your > second. I, for one, am rather skeptical that putting all of your patches > through review would significantly affect quality. > > > I completely agree. > > So, what is preventing us from disabling direct pushes to master and > requiring all contributions to go through a PR or Differential? 
> Well, CI needs to be working first :) Also Phabricator doesn't have the equivalent of a merge button right now, which makes the workflow a bit awkward. I'm not sure what the current state of that is - is there an extension or something we can enable to get this, Ben? PRs and Differentials are squashed on merging to master and the whole > problem (with CircleCI building only heads of commit groups) just > disappears. I believe that this is the usual approach. > How do we enforce that PRs get squashed on merging? (not that we're actively using PRs (yet) but I'm curious how this works). Cheers Simon If the outstanding issue is that you combine multiple contributions from > contributors and manually valid them to ensure they are not just > individually sound, but also in combination, we might want to consider > > https://bors.tech > > which is exactly for that kind of thing (and apparently used by Rust). > > Cheers, > Manuel > > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Wed Dec 13 16:14:07 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 13 Dec 2017 17:14:07 +0100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: > How do we enforce that PRs get squashed on merging? (not that we're actively using PRs (yet) but I'm curious how this works). It's the gatekeeper (i.e. Ben) that presses the merge button. And it can be set to squash merge by default. -- Mathieu Boespflug Founder at http://tweag.io. On 13 December 2017 at 15:36, Simon Marlow wrote: > On 12 December 2017 at 08:12, Manuel M T Chakravarty < > manuel.chakravarty at tweag.io> wrote: > >> Hi Ben, >> >> Am 12.12.2017 um 02:22 schrieb Ben Gamari : >> >> Simon Peyton Jones writes: >> >> The problem is that many contributors, including Simon PJ, Richard, and >> me, tend to push batches of work >> >> I have not been following this thread (“job accounting” seemed above >> my pay grade) but I saw this mention of my name 😊. Without having >> read myself into the context there seem to be two issues >> >> >> * Every commit to master should be validate-clean, and this should >> be tested by the CI framework not by the contributor. This is >> essential. I would be delighted if every commit I made went through >> that gate. I’m careful, but occasionally not careful enough. >> >> * Most – perhaps all – commits should go through a code-review >> process. Here I freely admit that I tend to use (or abuse?) my >> status to make most of my commits without review, except perhaps >> informally with individuals. I’d be absolutely willing to review >> this if (a) in fact people think that the extra step would really >> improve quality (perhaps looking at past commits) or (b) the very >> fact that I do so makes people feel cross. 
>> >> I personally think that we should strive for your first point (every >> commit should be validate-clean) before attempting to tackle your >> second. I, for one, am rather skeptical that putting all of your patches >> through review would significantly affect quality. >> >> >> I completely agree. >> >> So, what is preventing us from disabling direct pushes to master and >> requiring all contributions to go through a PR or Differential? >> > > Well, CI needs to be working first :) > > Also Phabricator doesn't have the equivalent of a merge button right now, > which makes the workflow a bit awkward. I'm not sure what the current state > of that is - is there an extension or something we can enable to get this, > Ben? > > PRs and Differentials are squashed on merging to master and the whole >> problem (with CircleCI building only heads of commit groups) just >> disappears. I believe that this is the usual approach. >> > > How do we enforce that PRs get squashed on merging? (not that we're > actively using PRs (yet) but I'm curious how this works). > > Cheers > Simon > > > If the outstanding issue is that you combine multiple contributions from >> contributors and manually valid them to ensure they are not just >> individually sound, but also in combination, we might want to consider >> >> https://bors.tech >> >> which is exactly for that kind of thing (and apparently used by Rust). >> >> Cheers, >> Manuel >> >> >> _______________________________________________ >> Ghc-devops-group mailing list >> Ghc-devops-group at haskell.org >> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group >> >> > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Wed Dec 13 16:37:57 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 13 Dec 2017 17:37:57 +0100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87374ghrhj.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> Message-ID: Ben wrote: > Admittedly this is where a Git-based workflows have a bit of an > advantage: it is much more convenient to merge a group of commits in a > structure-preserving manner via a git branch than a stack of > differentials. So if both parent commits of a merge commit were guaranteed to build and test, would that be enough? Or would you still want to insist that all commits of both branch build and compile? Technically, it's easy to do (just make the CI script loop through each commit between merge base and branch tip). But would you be willing to pay for (sometimes substantially) longer validation times just to guarantee a pristine history? I think I'm once again confused about what the requirements really are here, and in what order of preference. Once we know that, I'm confident many technical solutions could satisfy them. 
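The "loop through each commit between merge base and branch tip" idea above is straightforward to sketch. The following is a minimal illustration, not an existing GHC CI script: the ./validate invocation stands in for whatever build-and-test command the job actually runs, and error handling is deliberately simple. Its cost is exactly the longer validation time raised in the same paragraph, since each commit is built in turn.

#!/usr/bin/env python3
"""Build and test every commit between the merge base and the branch tip.

Minimal sketch of the per-commit CI loop discussed above; the validate
command and branch names are placeholders, not an existing GHC script.
"""
import subprocess
import sys

def git(*args) -> str:
    # Run a git command and return its stdout, stripped.
    return subprocess.check_output(["git", *args], text=True).strip()

def main(base_branch: str = "origin/master", tip: str = "HEAD") -> None:
    merge_base = git("merge-base", base_branch, tip)
    # Oldest-first list of commits unique to the branch under test.
    commits = git("rev-list", "--reverse", f"{merge_base}..{tip}").split()
    for sha in commits:
        print(f"=== validating {sha} ===")
        git("checkout", "--quiet", sha)
        # './validate' is a stand-in for the real build/test command.
        if subprocess.run(["./validate"]).returncode != 0:
            print(f"commit {sha} does not validate", file=sys.stderr)
            sys.exit(1)
    print(f"all {len(commits)} commits validate")

if __name__ == "__main__":
    main()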
Note that we can decouple the question of whether to manage contributions via PR's or otherwise from the review tool question. One could imagine contributions *always* going through GitHub PR's, but say *always* reviewed via Phabricator (say via an import/sync tool discussed many times before but never actually implemented). -- Mathieu Boespflug Founder at http://tweag.io. On 12 December 2017 at 16:54, Ben Gamari wrote: > Manuel M T Chakravarty writes: > > > Hi Ben, > > > >> Am 12.12.2017 um 02:22 schrieb Ben Gamari : > >> > >> I personally think that we should strive for your first point (every > >> commit should be validate-clean) before attempting to tackle your > >> second. I, for one, am rather skeptical that putting all of your patches > >> through review would significantly affect quality. > > > > I completely agree. > > > On re-reading what I wrote above, I realize that it was a bit unclear. > To clarify, the sentence > > >> I, for one, am rather skeptical that putting all of your patches > >> through review would significantly affect quality. > > was intended to mean "I am not convinced that quality would improve as a > result of review". Is this what you are agreeing with? > > > > So, what is preventing us from disabling direct pushes to master and > > requiring all contributions to go through a PR or Differential? > > > > PRs and Differentials are squashed on merging to master and the whole > > problem (with CircleCI building only heads of commit groups) just > > disappears. I believe that this is the usual approach. > > > We generally don't want to squash. Those who typically commit directly > generally master generally take pains to ensure that their commit > history is sensible. This history is valuable, both for the future > readers of the code as well as later bisection; projecting it out is > quite undesirable. > > We could indeed require that Simon, et al. open Differentials for each > of their commits. However, as I mentioned earlier it doesn't seem likely > that putting these commits through rigorous review would be fruitful > relative to the work it would require; these differential would be just > be for merging. Unfortunately, the cost of creating a differential is > non-trivial and I'm not certain that we want to make Simon pay it for > every commit. > > Admittedly this is where a Git-based workflows have a bit of an > advantage: it is much more convenient to merge a group of commits in a > structure-preserving manner via a git branch than a stack of > differentials. > > > If the outstanding issue is that you combine multiple contributions > > from contributors and manually valid them to ensure they are not just > > individually sound, but also in combination, we might want to consider > > > > https://bors.tech > > > > which is exactly for that kind of thing (and apparently used by Rust). > > Indeed, I'm familiar with bors. Unfortunately it's quite GitHub-centric, > so at for the moment it's not a viable option. > > Cheers, > > - Ben > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Wed Dec 13 21:15:17 2017 From: ben at well-typed.com (Ben Gamari) Date: Wed, 13 Dec 2017 16:15:17 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: <87vahafhyk.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: > On 12 December 2017 at 08:12, Manuel M T Chakravarty < > manuel.chakravarty at tweag.io> wrote: > >> I completely agree. >> >> So, what is preventing us from disabling direct pushes to master and >> requiring all contributions to go through a PR or Differential? >> > > Well, CI needs to be working first :) > > Also Phabricator doesn't have the equivalent of a merge button right now, > which makes the workflow a bit awkward. I'm not sure what the current state > of that is - is there an extension or something we can enable to get this, > Ben? > As far as I can tell the feature is still a prototype [1]. In particular, I think that the lack of what the manual describes as "chain of custody" renders the feature as-implemented rather useless to us since it's not clear what will actually be landed. Cheers, - Ben [1] https://secure.phabricator.com/book/phabricator/article/differential_land/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From m at tweag.io Wed Dec 13 22:03:48 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 13 Dec 2017 23:03:48 +0100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: [replying to ghc-devops-group@, which I assume based on your email's content is the mailing list you intended.] Hi Simon, feedback from downstream consumers of Cabal metadata (e.g. build tool authors) will be particularly useful for the discussion here. Here are my thoughts as a bystander. It's worth trying to identify what problems came up during the integer-gmp incident in Trac #14558: * GHC 8.2.1 shipped with integer-gmp-1.0.1.0 but the release notes said otherwise. * GHC 8.2.1 shipped with Cabal-2.0.0.2, but specifically claimed in the release notes that cabal-install-1.24 (and by implication any other build tool based on Cabal-the-library version 1.24) was supported: "GHC 8.2 only works with cabal-install version 1.24 or later. Please upgrade if you have an older version of cabal-install." * GHC 8.2.2 also claimed Cabal-1.24 support. * GHC 8.2.1 was released in July 2017 with Cabal-2.0.0.2, a brand new major release with breaking changes to the metadata format, without much lead time for downstream tooling authors (like Stack) to adapt. * But actually if we look at their respective release notes, GHC 8.2.1 was relased in July 2017, even though the Cabal website claims that Cabal-2.0.0.2 was released in August 2017 (see https://www.haskell.org/cabal/download.html). So it looks like GHC didn't just not give enough lead time about an upstream dependency it shipped with, it shipped with an unreleased version of Cabal! 
* Libraries that ship with GHC are usually also uploaded to Hackage, to make the documentation easily accessible, but integer-gmp-1.0.1.0 was not uploaded to Hackage until 4 months after the release. * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage differed from the metadata that was actually in the source tarball of GHC-8.2.1 and GHC-8.2.2. * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage included Cabal-2.0 specific syntactic sugar, making the metadata unreadable using any tooling that did not link against the Cabal-2.0.0.2 library (or any later version). * It so happened that one particular version of one particular downstream build tool, Stack, had a bug, compounding the bad effects of the previous point. But a new release has now been made, and in any case that's not a problem for GHC to solve. So let's keep that out of the discussion here. So I suggest we discuss ways to eliminate or reduce the likelihood of any of the above problems from occurring again. Here are some ideas: * GHC should never under any circumstance ship with an unreleased version of any independently maintained dependency. Cabal is one such dependency. This should hold true for anything else. We could just add that policy to the Release Policy. * Stronger still, GHC should not switch to a new major release of a dependency at any time during feature freeze ahead of a release. E.g. if Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet. * The 3-release backwards compat rule should apply in all circumstances. That means major version bumps of any library GHC ships with, including base, should not imply any breaking change in the API's of any such library. * GHC does have control over reinstallable packages (like text and bytestring): GHC need not ship with the latest versions of these, if indeed they introduce breaking changes that would contravene the 3-release policy. * Note: today, users are effectively tied to whatever version of the packages ships with GHC (i.e. the "reinstallable" bit is problematic today for various technical reasons). That's why a breaking change in bytestring is technically a breaking change in GHC. * The current release policy covers API stability, but what about metadata? In the extreme, we could say a 3-release policy applies to metadata too. Meaning, all metadata shipping with GHC now and in the next 2 releases should be parseable by today's version of Cabal and downstream tooling. Is such a long lead time necessary? That's for build tool authors to say, and a point to negotiate with GHC devs. * Because there are far fewer consumers of metadata than consumers of say base, I think shorter lead time is reasonable. At the other extreme, it could even be just the few months during feature freeze. * The release notes bugs mentioned above and the lack of consistent upload to Hackage are a symptom of lack of release automation, I suspect. That's how to fix it, but we could also spell out in the Release Policy that GHC libraries should all be on Hackage from the day of release. Finally, a question for discussion: * Hackage allows revising the metadata of an uploaded package even without changing the version number. This happens routinely on Hackage today by the Hackage trustees. Should this be permitted for packages whose release is completely tied to that of GHC itself (like integer-gmp)? 
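A small piece of release automation could enforce the metadata-compatibility idea mechanically, which also speaks to the "lack of release automation" point above. The sketch below only illustrates the shape such a check could take: the maximum supported spec version and the package paths are hypothetical policy inputs, and the cabal-version field is extracted with a naive regular expression rather than a real Cabal parser.

#!/usr/bin/env python3
"""Flag bundled packages whose .cabal files need a newer Cabal than policy allows.

Illustrative only: THRESHOLD and the package paths are hypothetical policy
inputs, and the regex is a stand-in for a real Cabal metadata parser.
"""
import re
import sys
from pathlib import Path

# Hypothetical policy: metadata must be readable by the Cabal shipped with
# the previous two GHC releases, here taken to mean spec version <= 1.24.
THRESHOLD = (1, 24)

FIELD = re.compile(r"^cabal-version\s*:\s*>?=?\s*([0-9.]+)", re.I | re.M)

def spec_version(cabal_file: Path) -> tuple:
    match = FIELD.search(cabal_file.read_text())
    if not match:
        # Very old packages may omit the field; treat that as spec 1.0.
        return (1, 0)
    parts = tuple(int(p) for p in match.group(1).split("."))
    return parts[:2]  # compare on major.minor only

def main(paths) -> None:
    offenders = [p for p in map(Path, paths) if spec_version(p) > THRESHOLD]
    for p in offenders:
        print(f"{p}: requires a Cabal spec newer than {'.'.join(map(str, THRESHOLD))}")
    sys.exit(1 if offenders else 0)

if __name__ == "__main__":
    # Example invocation; the path is illustrative.
    main(sys.argv[1:] or ["libraries/integer-gmp/integer-gmp.cabal"])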
Best, Mathieu On 13 December 2017 at 17:43, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Dear GHC devops group > > The conversation on Trac #14558 > suggests that we might > want to consider reviewing GHC’s release policies > . > This email is to invite your input. > > The broad questions is this. We want GHC to serve the needs of all its > users, including downstream tooling that uses GHC. What release policies > will best support that goal? For example, we already ensure that GHC 8.4 > can be compiled with 8.2 and 8.0. This imposes a slight tax on GHC > development, but it means that users don't need to upgrade quite as often. > (If the tempo of releases increases, we might want to increase the window.) > > Trac #14558 suggests that we might want to ensure the metadata on GHC’s > built-in libraries is parsable with older Cabals. One possibility would be > this: > > - Ensure that the Cabal metadata of non-reinstallable packages (e.g. > integer-gmp) shipped with GHC be parsable by the Cabal versions shipped > with the last two major GHC releases [i.e. have a sufficiently old > cabal-version field]. That is, in general a new Cabal specification will > need to be shipped with two GHC releases before GHC will use start using > its features in non-reinstallable packages. > - Upholding this policy won't always be possible. There may be cases > (as is the case Hadrian for GHC 8.4) where the benefit of quickly > introducing incompatible syntax outweighs the need for compatibility. In > this (hopefully rare) case we would explicitly advertise the > incompatibility in the release documentation, and give as much notice as > possible to users to allow downstream tools to adapt. > - For reinstallable packages, of which GHC is simply a client (like > text or bytestring), we can’t reasonably enforce such a policy, because GHC > devs have no control over what the maintainers of external core libraries > put in their Cabal files. > > This is just a proposal. The narrow questions are these: > > - Would this be sufficient to deal with the concerns raised in #14558? > - Is it necessary, ow would anything simpler be sufficient? > - What costs would the policy impose on GHC development? > - There may be matters of detail: e.g. is two releases the right grace > period. Would one do? > > Both the broad question and the narrow ones are appropriate for the Devops > group. > > Thanks! > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Wed Dec 13 22:11:26 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 13 Dec 2017 23:11:26 +0100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: > * Note: today, users are effectively tied to whatever version of the packages ships with GHC (i.e. the "reinstallable" bit is problematic today for various technical reasons). That's why a breaking change in bytestring is technically a breaking change in GHC. By the way, this is why I think GHC should strive to *reduce* as much as possible the set of libraries it ships with and exposes in ghc-pkg by default. e.g. if GHC did not expose bytestring, then tools like Stack that insist that only one version of any library be in any given package set could freely upgrade to a new bytestring major version. 
And GHC wouldn't need to impose its 3-version policy on bytestring. I suspect this is not technically possible in the case of bytestring, but what about e.g. time or xhtml? -- Mathieu Boespflug Founder at http://tweag.io. On 13 December 2017 at 23:03, Boespflug, Mathieu wrote: > [replying to ghc-devops-group@, which I assume based on your email's > content is the mailing list you intended.] > > Hi Simon, > > feedback from downstream consumers of Cabal metadata (e.g. build tool > authors) will be particularly useful for the discussion here. Here are my > thoughts as a bystander. > > It's worth trying to identify what problems came up during the integer-gmp > incident in Trac #14558: > > * GHC 8.2.1 shipped with integer-gmp-1.0.1.0 but the release notes said > otherwise. > * GHC 8.2.1 shipped with Cabal-2.0.0.2, but specifically claimed in the > release notes that cabal-install-1.24 (and by implication any other build > tool based on Cabal-the-library version 1.24) was supported: "GHC 8.2 only > works with cabal-install version 1.24 or later. Please upgrade if you have > an older version of cabal-install." > * GHC 8.2.2 also claimed Cabal-1.24 support. > * GHC 8.2.1 was released in July 2017 with Cabal-2.0.0.2, a brand new > major release with breaking changes to the metadata format, without much > lead time for downstream tooling authors (like Stack) to adapt. > * But actually if we look at their respective release notes, GHC 8.2.1 was > relased in July 2017, even though the Cabal website claims that > Cabal-2.0.0.2 was released in August 2017 (see > https://www.haskell.org/cabal/download.html). So it looks like GHC didn't > just not give enough lead time about an upstream dependency it shipped > with, it shipped with an unreleased version of Cabal! > * Libraries that ship with GHC are usually also uploaded to Hackage, to > make the documentation easily accessible, but integer-gmp-1.0.1.0 was not > uploaded to Hackage until 4 months after the release. > * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage differed > from the metadata that was actually in the source tarball of GHC-8.2.1 and > GHC-8.2.2. > * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage included > Cabal-2.0 specific syntactic sugar, making the metadata unreadable using > any tooling that did not link against the Cabal-2.0.0.2 library (or any > later version). > * It so happened that one particular version of one particular downstream > build tool, Stack, had a bug, compounding the bad effects of the previous > point. But a new release has now been made, and in any case that's not a > problem for GHC to solve. So let's keep that out of the discussion here. > > So I suggest we discuss ways to eliminate or reduce the likelihood of any > of the above problems from occurring again. Here are some ideas: > > * GHC should never under any circumstance ship with an unreleased version > of any independently maintained dependency. Cabal is one such dependency. > This should hold true for anything else. We could just add that policy to > the Release Policy. > * Stronger still, GHC should not switch to a new major release of a > dependency at any time during feature freeze ahead of a release. E.g. if > Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair > game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet. > * The 3-release backwards compat rule should apply in all circumstances. 
> That means major version bumps of any library GHC ships with, including > base, should not imply any breaking change in the API's of any such library. > * GHC does have control over reinstallable packages (like text and > bytestring): GHC need not ship with the latest versions of these, if indeed > they introduce breaking changes that would contravene the 3-release policy. > * Note: today, users are effectively tied to whatever version of the > packages ships with GHC (i.e. the "reinstallable" bit is problematic today > for various technical reasons). That's why a breaking change in bytestring > is technically a breaking change in GHC. > * The current release policy covers API stability, but what about > metadata? In the extreme, we could say a 3-release policy applies to > metadata too. Meaning, all metadata shipping with GHC now and in the next 2 > releases should be parseable by today's version of Cabal and downstream > tooling. Is such a long lead time necessary? That's for build tool authors > to say, and a point to negotiate with GHC devs. > * Because there are far fewer consumers of metadata than consumers of say > base, I think shorter lead time is reasonable. At the other extreme, it > could even be just the few months during feature freeze. > * The release notes bugs mentioned above and the lack of consistent upload > to Hackage are a symptom of lack of release automation, I suspect. That's > how to fix it, but we could also spell out in the Release Policy that GHC > libraries should all be on Hackage from the day of release. > > Finally, a question for discussion: > > * Hackage allows revising the metadata of an uploaded package even without > changing the version number. This happens routinely on Hackage today by the > Hackage trustees. Should this be permitted for packages whose release is > completely tied to that of GHC itself (like integer-gmp)? > > Best, > > Mathieu > > > On 13 December 2017 at 17:43, Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > >> Dear GHC devops group >> >> The conversation on Trac #14558 >> suggests that we might >> want to consider reviewing GHC’s release policies >> . >> This email is to invite your input. >> >> The broad questions is this. We want GHC to serve the needs of all its >> users, including downstream tooling that uses GHC. What release policies >> will best support that goal? For example, we already ensure that GHC 8.4 >> can be compiled with 8.2 and 8.0. This imposes a slight tax on GHC >> development, but it means that users don't need to upgrade quite as often. >> (If the tempo of releases increases, we might want to increase the window.) >> >> Trac #14558 suggests that we might want to ensure the metadata on GHC’s >> built-in libraries is parsable with older Cabals. One possibility would be >> this: >> >> - Ensure that the Cabal metadata of non-reinstallable packages (e.g. >> integer-gmp) shipped with GHC be parsable by the Cabal versions shipped >> with the last two major GHC releases [i.e. have a sufficiently old >> cabal-version field]. That is, in general a new Cabal specification will >> need to be shipped with two GHC releases before GHC will use start using >> its features in non-reinstallable packages. >> - Upholding this policy won't always be possible. There may be cases >> (as is the case Hadrian for GHC 8.4) where the benefit of quickly >> introducing incompatible syntax outweighs the need for compatibility. 
In >> this (hopefully rare) case we would explicitly advertise the >> incompatibility in the release documentation, and give as much notice as >> possible to users to allow downstream tools to adapt. >> - For reinstallable packages, of which GHC is simply a client (like >> text or bytestring), we can’t reasonably enforce such a policy, because GHC >> devs have no control over what the maintainers of external core libraries >> put in their Cabal files. >> >> This is just a proposal. The narrow questions are these: >> >> - Would this be sufficient to deal with the concerns raised in #14558? >> - Is it necessary, or would anything simpler be sufficient? >> - What costs would the policy impose on GHC development? >> - There may be matters of detail: e.g. is two releases the right >> grace period. Would one do? >> >> Both the broad question and the narrow ones are appropriate for the >> Devops group. >> >> Thanks! >> >> Simon >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhail.glushenkov at gmail.com Wed Dec 13 22:38:46 2017 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Wed, 13 Dec 2017 22:38:46 +0000 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: Hi Mathieu, On 13 December 2017 at 22:03, Boespflug, Mathieu wrote: > * But actually if we look at their respective release notes, GHC 8.2.1 was > released in July 2017, even though the Cabal website claims that > Cabal-2.0.0.2 was released in August 2017 (see > https://www.haskell.org/cabal/download.html). So it looks like GHC didn't > just not give enough lead time about an upstream dependency it shipped with, > it shipped with an unreleased version of Cabal! If you look at http://hackage.haskell.org/package/Cabal-2.0.0.2 and http://hackage.haskell.org/package/cabal-install-2.0.0.0 you'll see that Cabal-2.0.0.2 was uploaded to Hackage on Jul 24 2017, at the time of GHC 8.2.1 release, while cabal-install-2.0.0.0 was released in August, which is also when the 2.0 release was announced. This explains the discrepancy. Cabal-2.0.0.2 is the same version that ships with GHC 8.2.1. From m at tweag.io Wed Dec 13 22:47:11 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 13 Dec 2017 23:47:11 +0100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: Hi Mikhail, I'm seeing "GHC 8.2.2 includes Cabal 2.0.1.0. GHC 8.2.1 includes Cabal 2.0.0.2. GHC 8.0.2 includes Cabal 1.24.2.0. GHC 8.0.1 includes Cabal 1.24.0.0. GHC 7.10.3 includes Cabal 1.22.5.0. 2.0.1.0 November 2017 2.0.0.2 August 2017 1.24.2.0 December 2016 1.24.1.0 October 2016 1.24.0.0 May 2016" At https://www.haskell.org/cabal/download.html. Clearly the timeline on that page should be fixed then. But thanks for pointers re the actual Cabal timeline. July 22nd (according to GHC download page): GHC-8.2.1... July 24th (according to Hackage): Cabal-2.0.0.2... Surely it's fair to afford downstream tool authors more than *minus* 2 days of lead time to adapt? 
Best, Mathieu On 13 December 2017 at 23:38, Mikhail Glushenkov < mikhail.glushenkov at gmail.com> wrote: > Hi Mathieu, > > On 13 December 2017 at 22:03, Boespflug, Mathieu wrote: > > * But actually if we look at their respective release notes, GHC 8.2.1 > was > > relased in July 2017, even though the Cabal website claims that > > Cabal-2.0.0.2 was released in August 2017 (see > > https://www.haskell.org/cabal/download.html). So it looks like GHC > didn't > > just not give enough lead time about an upstream dependency it shipped > with, > > it shipped with an unreleased version of Cabal! > > If you look at > > http://hackage.haskell.org/package/Cabal-2.0.0.2 and > http://hackage.haskell.org/package/cabal-install-2.0.0.0 > > you'll see that Cabal-2.0.0.2 was uploaded to Hackage on Jul 24 2017, > at the time of GHC 8.2.1 release, while cabal-install-2.0.0.0 was > released in August, which is also when the 2.0 release was announced. > This explains the discrepancy. Cabal-2.0.0.2 is the same version that > ships with GHC 8.2.1. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhail.glushenkov at gmail.com Wed Dec 13 22:58:33 2017 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Wed, 13 Dec 2017 22:58:33 +0000 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: Hi Mathieu, On 13 December 2017 at 22:47, Boespflug, Mathieu wrote: > At https://www.haskell.org/cabal/download.html. Clearly the timeline on that > page should be fixed then. But thanks for pointers re the actual Cabal > timeline. Yeah, I guess I should change the month from August to July and add cabal-install releases to the timeline. Thanks for noticing! > July 22th (according to GHC download page): GHC-8.2.1... July 24th > (according to Hackage): Cabal-2.0.0.2... Surely it's fair to afford > downstream tool authors more than *minus* 2 days of lead time to adapt? That's why we're having this discussion -- historically we've often/usually cut releases of lib:Cabal and GHC simultaneously. Note, however, that GHC HEAD snapshots in the period leading to 8.2.1 shipped with lib:Cabal snapshots from the 2.0 branch, so if a downstream tool developer did testing against GHC HEAD, they also automatically got an updated snapshot of lib:Cabal. From manuel.chakravarty at tweag.io Thu Dec 14 02:00:20 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Thu, 14 Dec 2017 13:00:20 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> Message-ID: <135A2041-43A7-45FD-AB82-51407717FA08@tweag.io> > Am 14.12.2017 um 01:36 schrieb Simon Marlow : > > On 12 December 2017 at 08:12, Manuel M T Chakravarty > wrote: > Hi Ben, > >> Am 12.12.2017 um 02:22 schrieb Ben Gamari >: >> >> Simon Peyton Jones > writes: >> >>> The problem is that many contributors, including Simon PJ, Richard, and >>> me, tend to push batches of work >>> >>> I have not been following this thread (“job accounting” seemed above >>> my pay grade) but I saw this mention of my name 😊. 
Without having >>> read myself into the context there seem to be two issues >>> >>> >>> * Every commit to master should be validate-clean, and this should >>> be tested by the CI framework not by the contributor. This is >>> essential. I would be delighted if every commit I made went through >>> that gate. I’m careful, but occasionally not careful enough. >>> >>> * Most – perhaps all – commits should go through a code-review >>> process. Here I freely admit that I tend to use (or abuse?) my >>> status to make most of my commits without review, except perhaps >>> informally with individuals. I’d be absolutely willing to review >>> this if (a) in fact people think that the extra step would really >>> improve quality (perhaps looking at past commits) or (b) the very >>> fact that I do so makes people feel cross. >>> >> I personally think that we should strive for your first point (every >> commit should be validate-clean) before attempting to tackle your >> second. I, for one, am rather skeptical that putting all of your patches >> through review would significantly affect quality. > > I completely agree. > > So, what is preventing us from disabling direct pushes to master and requiring all contributions to go through a PR or Differential? > > Well, CI needs to be working first :) CI itself is working for Linux and macOS, and the hold up with Windows is largely us trying to get it for free from AppVeyor. The outstanding problems are with getting Phabricator to integrate/cooperate and now agreeing on a workflow. If we would use GitHub and PRs (like most of the rest of the world), I think, all this would be solved already also. Custom infrastructure => extra costs, as usual. > Also Phabricator doesn't have the equivalent of a merge button right now, which makes the workflow a bit awkward. I'm not sure what the current state of that is - is there an extension or something we can enable to get this, Ben? > > PRs and Differentials are squashed on merging to master and the whole problem (with CircleCI building only heads of commit groups) just disappears. I believe that this is the usual approach. > > How do we enforce that PRs get squashed on merging? (not that we’re actively using PRs (yet) but I’m curious how this works). GitHub has a per-repository setting allowing a choice between three options as follows. When merging pull requests, you can allow any combination of merge commits, squashing, or rebasing. At least one option must be enabled. Allow merge commits Add all commits from the head branch to the base branch with a merge commit. Allow squash merging Combine all commits from the head branch into a single commit in the base branch. Allow rebase merging Add all commits from the head branch onto the base branch individually. If more than one option is allowed, you can choose which one to use at the time of pressing the merge button. Cheers, Manuel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From manuel.chakravarty at tweag.io Thu Dec 14 02:20:16 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Thu, 14 Dec 2017 13:20:16 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87374ghrhj.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> Message-ID: <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> > Am 13.12.2017 um 02:54 schrieb Ben Gamari : > > Manuel M T Chakravarty writes: >>> Am 12.12.2017 um 02:22 schrieb Ben Gamari : >>> >>> I personally think that we should strive for your first point (every >>> commit should be validate-clean) before attempting to tackle your >>> second. I, for one, am rather skeptical that putting all of your patches >>> through review would significantly affect quality. >> >> I completely agree. >> > On re-reading what I wrote above, I realize that it was a bit unclear. > To clarify, the sentence > >>> I, for one, am rather skeptical that putting all of your patches >>> through review would significantly affect quality. > > was intended to mean "I am not convinced that quality would improve as a > result of review". Is this what you are agreeing with? Yes. >> So, what is preventing us from disabling direct pushes to master and >> requiring all contributions to go through a PR or Differential? >> >> PRs and Differentials are squashed on merging to master and the whole >> problem (with CircleCI building only heads of commit groups) just >> disappears. I believe that this is the usual approach. >> > We generally don't want to squash. Those who typically commit directly > generally master generally take pains to ensure that their commit > history is sensible. This history is valuable, both for the future > readers of the code as well as later bisection; projecting it out is > quite undesirable. > > We could indeed require that Simon, et al. open Differentials for each > of their commits. However, as I mentioned earlier it doesn't seem likely > that putting these commits through rigorous review would be fruitful > relative to the work it would require; these differential would be just > be for merging. Unfortunately, the cost of creating a differential is > non-trivial and I'm not certain that we want to make Simon pay it for > every commit. > > Admittedly this is where a Git-based workflows have a bit of an > advantage: it is much more convenient to merge a group of commits in a > structure-preserving manner via a git branch than a stack of > differentials. Sorry for being unclear, but I was not suggesting to put Simon’s commits in Differentials to review them, but to make everything go through the same funnel, triggering CI on all the commits it ought to be triggered on. Generally, I think, direct commits to master are bad process (and this why this is not well supported by CircleCI either). Two key points of CI are * vet all incoming changes before they land in master and * automate all validation and artefact generation. But pushing directly to master, we compromise both. There is no vetting. 
Instead, we rely on the developer to do the right thing and validate locally. This means (a) we open ourselves to mistakes (which as SPJ wrote, do happen sometimes) and (b) we replace automation by a manual process. The right thing to do is to handle all commits in the same way. They all go through CI and nobody needs to validate locally. As a bonus we solve the problem that every commit gets validated, too (because all go through CI anyway). The hold up seems to be that Phabricator creates an overhead, which has prompted the use of the loophole (= direct push to master). How about the following solution? Everything that is directly pushed to master currently, is being pushed to GitHub and goes through a PR. After all, the main criticism of GitHub PRs seems to be about code reviews being nicer on Phabricator and we don’t want code review on the direct pushes to master anyway. So, why not use GitHub for this? Cheers, Manuel >> If the outstanding issue is that you combine multiple contributions >> from contributors and manually valid them to ensure they are not just >> individually sound, but also in combination, we might want to consider >> >> https://bors.tech >> >> which is exactly for that kind of thing (and apparently used by Rust). > > Indeed, I'm familiar with bors. Unfortunately it's quite GitHub-centric, > so at for the moment it's not a viable option. > > Cheers, > > - Ben > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group From manuel.chakravarty at tweag.io Thu Dec 14 06:02:30 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Thu, 14 Dec 2017 17:02:30 +1100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: <51BC2727-14ED-4BFF-AD86-BCAAD64E60CA@tweag.io> Simon, As Mathieu indicated, a core problem that led to #14558 is that integer-gmp was uploaded long after the release of GHC. One simple ground rule that we should have in the release policies is that GHC never gets released until all its dependencies have been released (this means uploaded and all). If this delays a GHC release, so be it. With respect to Cabal, one general issue (as highlighted in the discussion between Mathieu and Gershom) is the co-development of the two. This immediately leads to a policy problem: the Cabal devs are free to structure their development as they see fit and they don’t need to follow any policies that we put into place. Hence my question: can we simply ship GHC with the latest *stable* Cabal release at the time of GHC feature freeze? I think, this would make the process much less fragile. Also, given GHC’s intended faster release schedule, it shouldn’t slow Cabal development significantly down either. Manuel > 14.12.2017 03:43 Simon Peyton Jones via ghc-devs : > > Dear GHC devops group > > The conversation on Trac #14558 suggests that we might want to consider reviewing GHC’s release policies . This email is to invite your input. > > The broad questions is this. We want GHC to serve the needs of all its users, including downstream tooling that uses GHC. What release policies will best support that goal? For example, we already ensure that GHC 8.4 can be compiled with 8.2 and 8.0. This imposes a slight tax on GHC development, but it means that users don't need to upgrade quite as often. (If the tempo of releases increases, we might want to increase the window.) 
> > Trac #14558 suggests that we might want to ensure the metadata on GHC’s built-in libraries is parsable with older Cabals. One possibility would be this: > > Ensure that the Cabal metadata of non-reinstallable packages (e.g. integer-gmp) shipped with GHC be parsable by the Cabal versions shipped with the last two major GHC releases [i.e. have a sufficiently old cabal-version field]. That is, in general a new Cabal specification will need to be shipped with two GHC releases before GHC will use start using its features in non-reinstallable packages. > Upholding this policy won't always be possible. There may be cases (as is the case Hadrian for GHC 8.4) where the benefit of quickly introducing incompatible syntax outweighs the need for compatibility. In this (hopefully rare) case we would explicitly advertise the incompatibility in the release documentation, and give as much notice as possible to users to allow downstream tools to adapt. > For reinstallable packages, of which GHC is simply a client (like text or bytestring), we can’t reasonably enforce such a policy, because GHC devs have no control over what the maintainers of external core libraries put in their Cabal files. > This is just a proposal. The narrow questions are these: > > Would this be sufficient to deal with the concerns raised in #14558? > Is it necessary, ow would anything simpler be sufficient? > What costs would the policy impose on GHC development? > There may be matters of detail: e.g. is two releases the right grace period. Would one do? > Both the broad question and the narrow ones are appropriate for the Devops group. > > Thanks! > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Dec 14 08:34:50 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 14 Dec 2017 08:34:50 +0000 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> Message-ID: On 14 December 2017 at 02:20, Manuel M T Chakravarty < manuel.chakravarty at tweag.io> wrote: > > > The hold up seems to be that Phabricator creates an overhead, which has > prompted the use of the loophole (= direct push to master). > > How about the following solution? Everything that is directly pushed to > master currently, is being pushed to GitHub and goes through a PR. > > After all, the main criticism of GitHub PRs seems to be about code reviews > being nicer on Phabricator and we don’t want code review on the direct > pushes to master anyway. So, why not use GitHub for this? > Provided there's a way that we can bypass this for Phabricator and do 'arc land', then it's fine with me. 
Otherwise we would have this convoluted workflow: * arc diff * code review + CI validation on Phabricator * push to your github fork * create a PR on GitHub * wait for CI again * press the merge button compared with: * arc diff * code review + CI validation on Phabricator * arc land, or wait for Ben to merge Cheers Simon Cheers, > Manuel > > >> If the outstanding issue is that you combine multiple contributions > >> from contributors and manually valid them to ensure they are not just > >> individually sound, but also in combination, we might want to consider > >> > >> https://bors.tech > >> > >> which is exactly for that kind of thing (and apparently used by Rust). > > > > Indeed, I'm familiar with bors. Unfortunately it's quite GitHub-centric, > > so at for the moment it's not a viable option. > > > > Cheers, > > > > - Ben > > _______________________________________________ > > Ghc-devops-group mailing list > > Ghc-devops-group at haskell.org > > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Dec 14 09:22:00 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 14 Dec 2017 09:22:00 +0000 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: On 13 December 2017 at 22:11, Boespflug, Mathieu wrote: > > * Note: today, users are effectively tied to whatever version of the > packages ships with GHC (i.e. the "reinstallable" bit is problematic today > for various technical reasons). That's why a breaking change in bytestring > is technically a breaking change in GHC. > > By the way, this is why I think GHC should strive to *reduce* as much as > possible the set of libraries it ships with and exposes in ghc-pkg by > default. e.g. if GHC did not expose bytestring, then tools like Stack that > insist that only one version of any library be in any given package set > could freely upgrade to a new bytestring major version. And GHC wouldn't > need to impose its 3-version policy on bytestring. I suspect this is not > technically possible in the case of bytestring, but what about e.g. time or > xhtml? > The constraints that I know about: * A GHC installation includes the ghc package, and a package must include its transitive dependencies, so any dependency of the ghc package must be in the package DB that we install. (this includes bytestring and time, but not xhtml) * unix depends on time, so that's not one we can drop * GHC is dynamically linked and when using GHCi and TH, packages that GHC is linked against are shared with user code. There are ways around the problems here (use a unique package Id for private packages, or use -fexternal-interpreter) * Haddock is dynamically linked against xhtml, amongst other things. So we'd need a way to ship the shared libraries for a package but not expose the package through the package DB, and give these shared libraries names that won't conflict with packages we later install. Cheers Simon > -- > Mathieu Boespflug > Founder at http://tweag.io. > > On 13 December 2017 at 23:03, Boespflug, Mathieu wrote: > >> [replying to ghc-devops-group@, which I assume based on your email's >> content is the mailing list you intended.] >> >> Hi Simon, >> >> feedback from downstream consumers of Cabal metadata (e.g. 
build tool >> authors) will be particularly useful for the discussion here. Here are my >> thoughts as a bystander. >> >> It's worth trying to identify what problems came up during the >> integer-gmp incident in Trac #14558: >> >> * GHC 8.2.1 shipped with integer-gmp-1.0.1.0 but the release notes said >> otherwise. >> * GHC 8.2.1 shipped with Cabal-2.0.0.2, but specifically claimed in the >> release notes that cabal-install-1.24 (and by implication any other build >> tool based on Cabal-the-library version 1.24) was supported: "GHC 8.2 only >> works with cabal-install version 1.24 or later. Please upgrade if you have >> an older version of cabal-install." >> * GHC 8.2.2 also claimed Cabal-1.24 support. >> * GHC 8.2.1 was released in July 2017 with Cabal-2.0.0.2, a brand new >> major release with breaking changes to the metadata format, without much >> lead time for downstream tooling authors (like Stack) to adapt. >> * But actually if we look at their respective release notes, GHC 8.2.1 >> was relased in July 2017, even though the Cabal website claims that >> Cabal-2.0.0.2 was released in August 2017 (see >> https://www.haskell.org/cabal/download.html). So it looks like GHC >> didn't just not give enough lead time about an upstream dependency it >> shipped with, it shipped with an unreleased version of Cabal! >> * Libraries that ship with GHC are usually also uploaded to Hackage, to >> make the documentation easily accessible, but integer-gmp-1.0.1.0 was not >> uploaded to Hackage until 4 months after the release. >> * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage differed >> from the metadata that was actually in the source tarball of GHC-8.2.1 and >> GHC-8.2.2. >> * The metadata for integer-gmp-1.0.1.0 as uploaded to Hackage included >> Cabal-2.0 specific syntactic sugar, making the metadata unreadable using >> any tooling that did not link against the Cabal-2.0.0.2 library (or any >> later version). >> * It so happened that one particular version of one particular downstream >> build tool, Stack, had a bug, compounding the bad effects of the previous >> point. But a new release has now been made, and in any case that's not a >> problem for GHC to solve. So let's keep that out of the discussion here. >> >> So I suggest we discuss ways to eliminate or reduce the likelihood of any >> of the above problems from occurring again. Here are some ideas: >> >> * GHC should never under any circumstance ship with an unreleased version >> of any independently maintained dependency. Cabal is one such dependency. >> This should hold true for anything else. We could just add that policy to >> the Release Policy. >> * Stronger still, GHC should not switch to a new major release of a >> dependency at any time during feature freeze ahead of a release. E.g. if >> Cabal-3.0.0 ships before feature freeze for GHC-9.6, then maybe it's fair >> game to include in GHC. But not if Cabal-3.0.0 hasn't shipped yet. >> * The 3-release backwards compat rule should apply in all circumstances. >> That means major version bumps of any library GHC ships with, including >> base, should not imply any breaking change in the API's of any such library. >> * GHC does have control over reinstallable packages (like text and >> bytestring): GHC need not ship with the latest versions of these, if indeed >> they introduce breaking changes that would contravene the 3-release policy. >> * Note: today, users are effectively tied to whatever version of the >> packages ships with GHC (i.e. 
the "reinstallable" bit is problematic today >> for various technical reasons). That's why a breaking change in bytestring >> is technically a breaking change in GHC. >> * The current release policy covers API stability, but what about >> metadata? In the extreme, we could say a 3-release policy applies to >> metadata too. Meaning, all metadata shipping with GHC now and in the next 2 >> releases should be parseable by today's version of Cabal and downstream >> tooling. Is such a long lead time necessary? That's for build tool authors >> to say, and a point to negotiate with GHC devs. >> * Because there are far fewer consumers of metadata than consumers of say >> base, I think shorter lead time is reasonable. At the other extreme, it >> could even be just the few months during feature freeze. >> * The release notes bugs mentioned above and the lack of consistent >> upload to Hackage are a symptom of lack of release automation, I suspect. >> That's how to fix it, but we could also spell out in the Release Policy >> that GHC libraries should all be on Hackage from the day of release. >> >> Finally, a question for discussion: >> >> * Hackage allows revising the metadata of an uploaded package even >> without changing the version number. This happens routinely on Hackage >> today by the Hackage trustees. Should this be permitted for packages whose >> release is completely tied to that of GHC itself (like integer-gmp)? >> >> Best, >> >> Mathieu >> >> >> On 13 December 2017 at 17:43, Simon Peyton Jones via ghc-devs < >> ghc-devs at haskell.org> wrote: >> >>> Dear GHC devops group >>> >>> The conversation on Trac #14558 >>> suggests that we might >>> want to consider reviewing GHC’s release policies >>> . >>> This email is to invite your input. >>> >>> The broad questions is this. We want GHC to serve the needs of all its >>> users, including downstream tooling that uses GHC. What release policies >>> will best support that goal? For example, we already ensure that GHC 8.4 >>> can be compiled with 8.2 and 8.0. This imposes a slight tax on GHC >>> development, but it means that users don't need to upgrade quite as often. >>> (If the tempo of releases increases, we might want to increase the window.) >>> >>> Trac #14558 suggests that we might want to ensure the metadata on GHC’s >>> built-in libraries is parsable with older Cabals. One possibility would be >>> this: >>> >>> - Ensure that the Cabal metadata of non-reinstallable packages (e.g. >>> integer-gmp) shipped with GHC be parsable by the Cabal versions shipped >>> with the last two major GHC releases [i.e. have a sufficiently old >>> cabal-version field]. That is, in general a new Cabal specification will >>> need to be shipped with two GHC releases before GHC will use start using >>> its features in non-reinstallable packages. >>> - Upholding this policy won't always be possible. There may be cases >>> (as is the case Hadrian for GHC 8.4) where the benefit of quickly >>> introducing incompatible syntax outweighs the need for compatibility. In >>> this (hopefully rare) case we would explicitly advertise the >>> incompatibility in the release documentation, and give as much notice as >>> possible to users to allow downstream tools to adapt. >>> - For reinstallable packages, of which GHC is simply a client (like >>> text or bytestring), we can’t reasonably enforce such a policy, because GHC >>> devs have no control over what the maintainers of external core libraries >>> put in their Cabal files. >>> >>> This is just a proposal. 
The narrow questions are these: >>> >>> - Would this be sufficient to deal with the concerns raised in >>> #14558? >>> - Is it necessary, ow would anything simpler be sufficient? >>> - What costs would the policy impose on GHC development? >>> - There may be matters of detail: e.g. is two releases the right >>> grace period. Would one do? >>> >>> Both the broad question and the narrow ones are appropriate for the >>> Devops group. >>> >>> Thanks! >>> >>> Simon >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >> > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Thu Dec 14 09:27:59 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Thu, 14 Dec 2017 10:27:59 +0100 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: Hi Gershom, thanks for the extra input. So we've confirmed two facts: * GHC (intended to) ship with only Cabal-2.0 support, but there was a mistake in the release notes so this was unclear to downstream tooling authors. * Cabal-2.0 was released anywhere between slightly *after* and *exactly at the same as* GHC, despite GHC itself shipping with Cabal-2.0. I'm not too concerned by the first point: so long as Cabal-X does not introduce breaking changes, the fact that GHC-Y ultimately shipped with Cabal-X shouldn't be a problem. And this kind of bug in the release notes should go away provided more automation. The second one is more interesting. It is, as you point out, a product of GHC and Cabal being intimately linked and co-developed to a large extent. This leads to a simultaneous release that poses a concrete problem: * if new Cabal versions are used immediately in GHC, then that gives no time at all ahead of a GHC release for downstream tooling authors to adapt, because Cabal is, up until the point of the GHC release, a moving target. Three possible solutions: * Provided no API breaking changes in Cabal, if no metadata that ships with GHC uses new Cabal features for some period of time before release, then the problem goes away. * Or something close to what Manuel proposed in another thread: ship in GHC-X+1 the Cabal version that was co-developed during the development cycle of GHC-X. * Or a middle ground: make feature freeze a thing. Meaning that for a couple of months before a major GHC release, the major new Cabal isn't technically released yet, but like GHC itself within this period, it's pretty staid, so not so much a moving target, and something downstream tooling authors can possibly adapt to even without any grace period on new metadata features. This assumes that the 2 months of feature freeze are enough time for downstream tooling. Thoughts from any of those maintainers? On 14 December 2017 at 01:27, Gershom B wrote: > On Wed, Dec 13, 2017 at 7:06 PM, Boespflug, Mathieu wrote: >> >> But crucially, what *is* the policy around Cabal versions? This >> comment, https://ghc.haskell.org/trac/ghc/ticket/14558#comment:23 >> claims "if Stack doesn't support the version of Cabal that ships with >> a certain version of GHC, it shouldn't claim that it supports that >> version of GHC. The same applies to cabal-install". 
Is any build tool >> linked against Cabal-X by definition "not a supported configuration" >> by GHC-Z if it ships with Cabal-Y such that X < Y? > > My understanding is that this is the general thought, yes. In fact, > I've been told that even though cabal-install 1.24 did end up working > with the GHC 8.2.x series, the release notes, which were not updated > properly, actually _were supposed_ to say cabal-install 2.0.0.0 was > what was supported there. I believe future cabal-installs will warn > when used with a ghc with a newer Cabal-lib than they were built > against... > >> Right. But switching from Cabal-2 to Cabal-3 (a hypothetical at this >> point) sounds like a whole new set of features transitively just made >> it into the compiler. Is that something we're happy to happen during >> feature freeze? > > Right. After freeze, the compiler itself shouldn't switch from Cabal-2 > to Cabal-3. But I would imagine rather that the Cabal-3 tree and the > compiler tree would be updated in tandem, and then the "freeze" would > sort of apply to both in tandem as well. So there wouldn't be big > changes after the freeze, but nor would the compiler be coupled to a > _released_ lib. Rather, they would develop together, freeze together, > and release together. > >> I don't disagree. But then we'd need to abandon any notion that >> versions of packages on Hackage and versions of packages in the GHC >> release tarball always match up. Might even be worth calling that out >> explicitly in the policy. > > Not exactly. The tarball of the package on hackage should match the > release tarball. Revisions don't change the tarball. They just add > additional metadata to the index as well that cabal-install knows how > to use in conjunction with the tarball: > https://github.com/haskell-infra/hackage-trustees/blob/master/revisions-information.md#what-are-revisions > > --Gershom From ben at well-typed.com Thu Dec 14 17:30:31 2017 From: ben at well-typed.com (Ben Gamari) Date: Thu, 14 Dec 2017 12:30:31 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> Message-ID: <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> Manuel M T Chakravarty writes: >> Am 13.12.2017 um 02:54 schrieb Ben Gamari : >> >> Admittedly this is where a Git-based workflows have a bit of an >> advantage: it is much more convenient to merge a group of commits in a >> structure-preserving manner via a git branch than a stack of >> differentials. > > Sorry for being unclear, but I was not suggesting to put Simon’s > commits in Differentials to review them, but to make everything go > through the same funnel, triggering CI on all the commits it ought to > be triggered on. > > Generally, I think, direct commits to master are bad process (and this > why this is not well supported by CircleCI either). Two key points of > CI are Yes, I completely agree. I would also to eliminate the need to push directly to master. 
Some time ago I had a slightly different scheme in mind for which I developed [1] a bit of automation in the course of another project. Namely, you stand up a simple daemon which tests commits pushed to a magic branch. The daemon simply fires off a CI job and when it finishes, it pushes the commit to master. This is somewhat similar to the technique used by Bors but without the GitHub dependency. I had intended to set this up for GHC when the Jenkins work had concluded. [1] https://github.com/bgamari/auto-push [snip] > The hold up seems to be that Phabricator creates an overhead, which > has prompted the use of the loophole (= direct push to master). > > How about the following solution? Everything that is directly pushed > to master currently, is being pushed to GitHub and goes through a PR. > > After all, the main criticism of GitHub PRs seems to be about code > reviews being nicer on Phabricator and we don’t want code review on > the direct pushes to master anyway. So, why not use GitHub for this? > Historically GHC avoided this since GHC avoided merge commits as they complicate bisection. However, now since GitHub supports rebase-merging this is certainly a compelling option. It's certainly much simpler than the approach I outlined above yet provides the same benefits. If no one objects I think this sounds like a great path forward. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Dec 14 22:25:01 2017 From: ben at well-typed.com (Ben Gamari) Date: Thu, 14 Dec 2017 17:25:01 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <135A2041-43A7-45FD-AB82-51407717FA08@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <135A2041-43A7-45FD-AB82-51407717FA08@tweag.io> Message-ID: <87zi6ldk1z.fsf@ben-laptop.smart-cactus.org> Manuel M T Chakravarty writes: >> Am 14.12.2017 um 01:36 schrieb Simon Marlow : >> >> On 12 December 2017 at 08:12, Manuel M T Chakravarty > wrote: >> Hi Ben, >> >>> Am 12.12.2017 um 02:22 schrieb Ben Gamari >: >>> >>> I personally think that we should strive for your first point (every >>> commit should be validate-clean) before attempting to tackle your >>> second. I, for one, am rather skeptical that putting all of your patches >>> through review would significantly affect quality. >> >> I completely agree. >> >> So, what is preventing us from disabling direct pushes to master and requiring all contributions to go through a PR or Differential? >> >> Well, CI needs to be working first :) > > CI itself is working for Linux and macOS, and the hold up with Windows > is largely us trying to get it for free from AppVeyor. The outstanding > problems are with getting Phabricator to integrate/cooperate and now > agreeing on a workflow. If we would use GitHub and PRs (like most of > the rest of the world), I think, all this would be solved already > also. > Well, I think it's a bit premature to call it "working". 
It runs and reliably finishes, but there is still quite some work to be done to make it pass the testsuite. The only reason it "passes" currently is that the testsuite driver exits with exit code 0 even when tests fail (this is #14411; I just posted a patch for this as D4268). If you look at any of the recent "passing" builds you will see that the testsuite fails with around a dozen failures (around half being stat failures). I fixed a whole slew of these in the OS X build a few weeks ago but I haven't had a chance to dig into those that remain. At first glance they resemble #10037, but there is at least one segmentation fault (on Darwin) which is a bit concerning. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From manuel.chakravarty at tweag.io Fri Dec 15 04:17:19 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Fri, 15 Dec 2017 15:17:19 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> Message-ID: > Ben Gamari : > > Manuel M T Chakravarty writes: >> The hold up seems to be that Phabricator creates an overhead, which >> has prompted the use of the loophole (= direct push to master). >> >> How about the following solution? Everything that is directly pushed >> to master currently, is being pushed to GitHub and goes through a PR. >> >> After all, the main criticism of GitHub PRs seems to be about code >> reviews being nicer on Phabricator and we don’t want code review on >> the direct pushes to master anyway. So, why not use GitHub for this? >> > Historically GHC avoided this since GHC avoided merge commits as they > complicate bisection. However, now since GitHub supports rebase-merging > this is certainly a compelling option. It's certainly much simpler than > the approach I outlined above yet provides the same benefits. > > If no one objects I think this sounds like a great path forward. Great! 
Manuel From manuel.chakravarty at tweag.io Fri Dec 15 04:29:38 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Fri, 15 Dec 2017 15:29:38 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87zi6ldk1z.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <135A2041-43A7-45FD-AB82-51407717FA08@tweag.io> <87zi6ldk1z.fsf@ben-laptop.smart-cactus.org> Message-ID: <6E3F8703-530F-4A11-8A60-2C6BC384C297@tweag.io> > 15.12.2017 09:25 Ben Gamari : > > Manuel M T Chakravarty writes: > >>> Am 14.12.2017 um 01:36 schrieb Simon Marlow : >>> >>> On 12 December 2017 at 08:12, Manuel M T Chakravarty > wrote: >>> Hi Ben, >>> >>>> Am 12.12.2017 um 02:22 schrieb Ben Gamari >: >>>> >>>> I personally think that we should strive for your first point (every >>>> commit should be validate-clean) before attempting to tackle your >>>> second. I, for one, am rather skeptical that putting all of your patches >>>> through review would significantly affect quality. >>> >>> I completely agree. >>> >>> So, what is preventing us from disabling direct pushes to master and requiring all contributions to go through a PR or Differential? >>> >>> Well, CI needs to be working first :) >> >> CI itself is working for Linux and macOS, and the hold up with Windows >> is largely us trying to get it for free from AppVeyor. The outstanding >> problems are with getting Phabricator to integrate/cooperate and now >> agreeing on a workflow. If we would use GitHub and PRs (like most of >> the rest of the world), I think, all this would be solved already >> also. >> > Well, I think it's a bit premature to call it "working". It runs and > reliably finishes, but there is still quite some work to be done to make > it pass the testsuite. The only reason it "passes" currently is that the > testsuite driver exits with exit code 0 even when tests fail (this is > #14411; I just posted a patch for this as D4268). > > If you look at any of the recent "passing" builds you will see that the > testsuite fails with around a dozen failures (around half being stat > failures). I fixed a whole slew of these in the OS X build a few weeks > ago but I haven't had a chance to dig into those that remain. At first > glance they resemble #10037, but there is at least one segmentation > fault (on Darwin) which is a bit concerning. Please correct me if I am wrong, but aren’t these problems with the compiler/testsuite and not problems with CircleCI? Cheers, Manuel From michael at snoyman.com Fri Dec 15 07:42:55 2017 From: michael at snoyman.com (Michael Snoyman) Date: Fri, 15 Dec 2017 10:42:55 +0300 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: [snip] * Or a middle ground: make feature freeze a thing. 
Meaning that for a > couple of months before a major GHC release, the major new Cabal isn't > technically released yet, but like GHC itself within this period, it's > pretty staid, so not so much a moving target, and something downstream > tooling authors can possibly adapt to even without any grace period on > new metadata features. This assumes that the 2 months of feature > freeze are enough time for downstream tooling. Thoughts from any of > those maintainers? > > Short answer: if there's a clear idea in advance of when this feature freeze is going to happen, I think we can coordinate releases of downstream tooling (Stack being the most important, but stackage-curator playing in as well) so that 2 months is sufficient. I'll talk with the rest of the Stack team to see if there are any concerns. Longer answer: Stack intentionally avoids depending on the internals of Cabal wherever possible. Instead of calling library functions directly from within Haskell code to perform builds, for example, it interacts with the Setup.hs files over their command line interface.[1] This has two results: * Stack can usually start using new GHC/Cabal versions without a new Stack release, since it's just shelling out for the actual build * There's not usually very much code churn needed in Stack to upgrade to a newer Cabal release This past release was an exception because of all of the changes that landed, both the new cabal grammar to support the ^>= operator (making the old parser incapable of lossily parsing new files) and API changes (I think mostly around Backpack, though there was some code cleanup as well). In particular, the main interface we need from Cabal—the package description data types and parser—changed significantly enough that it took significant effort to upgrade. There were also new features added (like sub libraries and foreign libraries) that weren't immediately supported by the old Stack version, and had to be manually added in. Tying this up: generally upgrading to a new Cabal release should be fine, and the only concern I'd have is fitting it into a release schedule with Stack. The complications that could slow that down are: * Changes to the command line interface that Stack uses (hopefully those are exceedingly rare) * Major overhauls to the Stack-facing API Michael [1] This allows for more reproducible builds of older snapshots, insuring that the exact same Cabal library is performing the builds -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Fri Dec 15 08:41:36 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Fri, 15 Dec 2017 09:41:36 +0100 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: Thanks for the feedback, Michael. Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? Michael, you brought up another problem tangentially related to the original integer-gmp issue but that was not in my original list earlier in this thread: * Cabal-2.0.0 had breaking changes in the API. This means that by association GHC itself broke BC, because it shipped with Cabal-2.0, without the usual grace period. Now, there are far fewer users of Cabal than of base. All, Michael in his previous email seems to be okay with breaking changes in Cabal given the conditions he stated (2 months grace period, advance notice of when the 2 months start). And perhaps this points to the lack of a need for the regular grace period applying to Cabal. 
How many other users of Cabal-the-library are there? In principle, every single Hackage package out there, which all have a Setup.hs script. Most of them are trivial, but how many did break because of these API changes? I for one am pretty happy for Cabal to move fast, but I'm concerned that these breaking changes happened without any kind of advance notice. To Simon's original point - there does not to be a clear policy and a good process surrounding Cabal itself and other GHC dependencies. So far we discussed mostly metadata changes, not API changes. And to be clear, folks did get some (post facto) notice in September: http://coldwa.st/e/blog/2017-09-09-Cabal-2-0.html. That's helpful, but I submit that in the future this really should be part of the GHC release announcement (which happened over a month before that), and in fact a migration guide circulated before the feature freeze, so downstream tooling authors can adapt. If this is not possible, then perhaps it's premature for GHC to include that given Cabal release. Again, GHC should always have the option to stick to the old Cabal version until things get ironed out. On 15 December 2017 at 08:42, Michael Snoyman wrote: > > > On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: > > [snip] > >> * Or a middle ground: make feature freeze a thing. Meaning that for a >> couple of months before a major GHC release, the major new Cabal isn't >> technically released yet, but like GHC itself within this period, it's >> pretty staid, so not so much a moving target, and something downstream >> tooling authors can possibly adapt to even without any grace period on >> new metadata features. This assumes that the 2 months of feature >> freeze are enough time for downstream tooling. Thoughts from any of >> those maintainers? >> > > Short answer: if there's a clear idea in advance of when this feature freeze > is going to happen, I think we can coordinate releases of downstream tooling > (Stack being the most important, but stackage-curator playing in as well) so > that 2 months is sufficient. I'll talk with the rest of the Stack team to > see if there are any concerns. > > Longer answer: Stack intentionally avoids depending on the internals of > Cabal wherever possible. Instead of calling library functions directly from > within Haskell code to perform builds, for example, it interacts with the > Setup.hs files over their command line interface.[1] This has two results: > > * Stack can usually start using new GHC/Cabal versions without a new Stack > release, since it's just shelling out for the actual build > * There's not usually very much code churn needed in Stack to upgrade to a > newer Cabal release > > This past release was an exception because of all of the changes that > landed, both the new cabal grammar to support the ^>= operator (making the > old parser incapable of lossily parsing new files) and API changes (I think > mostly around Backpack, though there was some code cleanup as well). In > particular, the main interface we need from Cabal—the package description > data types and parser—changed significantly enough that it took significant > effort to upgrade. There were also new features added (like sub libraries > and foreign libraries) that weren't immediately supported by the old Stack > version, and had to be manually added in. > > Tying this up: generally upgrading to a new Cabal release should be fine, > and the only concern I'd have is fitting it into a release schedule with > Stack. 
The complications that could slow that down are: > > * Changes to the command line interface that Stack uses (hopefully those are > exceedingly rare) > * Major overhauls to the Stack-facing API > > Michael > > [1] This allows for more reproducible builds of older snapshots, insuring > that the exact same Cabal library is performing the builds From mikhail.glushenkov at gmail.com Fri Dec 15 09:19:37 2017 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Fri, 15 Dec 2017 09:19:37 +0000 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: Hi Mathieu, On 15 December 2017 at 08:41, Boespflug, Mathieu wrote: > How many other > users of Cabal-the-library are there? In principle, every single > Hackage package out there, which all have a Setup.hs script. This is not such a big deal now, because build-type: Custom packages can declare the dependencies of the Setup script via the custom-setup stanza. By default (when there's no custom-setup stanza), Cabal < 2 is chosen. From mikhail.glushenkov at gmail.com Fri Dec 15 09:26:28 2017 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Fri, 15 Dec 2017 09:26:28 +0000 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: Hi Mathieu, On 15 December 2017 at 08:41, Boespflug, Mathieu wrote: > In principle, every single > Hackage package out there, which all have a Setup.hs script. Also, the build-type: Simple packages (which are the vast majority on Hackage) are not affected at all, because they all use a default built-in setup script. From ben at well-typed.com Fri Dec 15 15:32:13 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 15 Dec 2017 10:32:13 -0500 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: Message-ID: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> "Boespflug, Mathieu" writes: > Thanks for the feedback, Michael. > > Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? > > Michael, you brought up another problem tangentially related to the > original integer-gmp issue but that was not in my original list > earlier in this thread: > > * Cabal-2.0.0 had breaking changes in the API. > > This means that by association GHC itself broke BC, because it shipped > with Cabal-2.0, without the usual grace period. > I'm a bit confused; by "the usual grace period" do you mean the Core Library Committee's three release policy? AFAIK this policy only applies to libraries under CLC control (e.g. those defined in the Report and perhaps template-haskell). The only other compatibility guarantee that GHC provides is the "two release policy", which stipulates that GHC should be bootstrappable with the two most recent major GHC releases. GHC has never, as far as I am aware, considered major version bumps of its dependencies to be part of its interface. We perform a major bump of most libraries with nearly every release [1]. Perhaps I've misunderstood your statement? Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Libraries/VersionHistory -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Dec 15 15:39:47 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 15 Dec 2017 10:39:47 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <6E3F8703-530F-4A11-8A60-2C6BC384C297@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <135A2041-43A7-45FD-AB82-51407717FA08@tweag.io> <87zi6ldk1z.fsf@ben-laptop.smart-cactus.org> <6E3F8703-530F-4A11-8A60-2C6BC384C297@tweag.io> Message-ID: <87r2rwdmpq.fsf@ben-laptop.smart-cactus.org> Manuel M T Chakravarty writes: >> 15.12.2017 09:25 Ben Gamari : >> >> Well, I think it's a bit premature to call it "working". It runs and >> reliably finishes, but there is still quite some work to be done to make >> it pass the testsuite. The only reason it "passes" currently is that the >> testsuite driver exits with exit code 0 even when tests fail (this is >> #14411; I just posted a patch for this as D4268). >> >> If you look at any of the recent "passing" builds you will see that the >> testsuite fails with around a dozen failures (around half being stat >> failures). I fixed a whole slew of these in the OS X build a few weeks >> ago but I haven't had a chance to dig into those that remain. At first >> glance they resemble #10037, but there is at least one segmentation >> fault (on Darwin) which is a bit concerning. > > Please correct me if I am wrong, but aren’t these problems with the compiler/testsuite and not problems with CircleCI? > Very likely, yes. That being said, they do need to be fixed before we can move to CircleCI as our primary CI scheme. Otherwise the build will go red the moment that I fix #10037. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Fri Dec 15 17:06:27 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 15 Dec 2017 17:06:27 +0000 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> Message-ID: On 14 December 2017 at 17:30, Ben Gamari wrote: > Historically GHC avoided this since GHC avoided merge commits as they > complicate bisection. However, now since GitHub supports rebase-merging > this is certainly a compelling option. It's certainly much simpler than > the approach I outlined above yet provides the same benefits. 
> > If no one objects I think this sounds like a great path forward. > We'll have to be careful to tell people that PRs will be squashed when merging, since the current workflow is to push multiple patches at once to master. How does merging PRs work when our source of truth is not on github? Cheers Simon > Cheers, > > - Ben > > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Fri Dec 15 17:37:18 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Fri, 15 Dec 2017 18:37:18 +0100 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> References: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> Message-ID: On 15 December 2017 at 16:32, Ben Gamari wrote: > "Boespflug, Mathieu" writes: > >> Thanks for the feedback, Michael. >> >> Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? >> >> Michael, you brought up another problem tangentially related to the >> original integer-gmp issue but that was not in my original list >> earlier in this thread: >> >> * Cabal-2.0.0 had breaking changes in the API. >> >> This means that by association GHC itself broke BC, because it shipped >> with Cabal-2.0, without the usual grace period. >> > I'm a bit confused; by "the usual grace period" do you mean the Core > Library Committee's three release policy? I did mean that one, yes. That was my question earlier - is Cabal along with *all* core libraries covered by the CLC's 3-release policy? The *Core Libraries* Committee (CLC) defines a "core library" as "Our definition of "core library" is a library that ships with GHC." (See https://wiki.haskell.org/Library_submissions#The_Libraries) But indeed, Cabal is not part of the CLC libraries list on that page. So I'm confused too: a) is Cabal a "core library", b) does that mean Cabal is bound by the 3-release policy? > GHC has never, as far as I am aware, considered major version bumps of > its dependencies to be part of its interface. We perform a major bump of > most libraries with nearly every release [1]. Yes, and major version bumps are not necessarily BC. From gershomb at gmail.com Fri Dec 15 18:03:18 2017 From: gershomb at gmail.com (Gershom B) Date: Fri, 15 Dec 2017 13:03:18 -0500 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> Message-ID: On Fri, Dec 15, 2017 at 12:37 PM, Boespflug, Mathieu wrote: > I did mean that one, yes. That was my question earlier - is Cabal > along with *all* core libraries covered by the CLC's 3-release policy? The 3 release policy does not apply to all libraries maintained by the CLC. It applies to "basic libraries": https://prime.haskell.org/wiki/Libraries/3-Release-Policy The general notion is that it applies to things surrounding the prelude, base, and things perhaps adjacent to that. That is to say, more or less, things that would be defined in the libraries section of the Haskell Report: https://www.haskell.org/onlinereport/haskell2010/haskellpa2.html > The *Core Libraries* Committee (CLC) defines a "core library" as > > "Our definition of "core library" is a library that ships with GHC." 
> (See https://wiki.haskell.org/Library_submissions#The_Libraries) By that definition, "Cabal" might well be listed in the core libraries that are not maintained by the CLC on that page, and it is perhaps an oversight that it is not? I would ask them. -g From mikhail.glushenkov at gmail.com Fri Dec 15 18:03:44 2017 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Fri, 15 Dec 2017 18:03:44 +0000 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Mathieu, On 15 December 2017 at 17:37, Boespflug, Mathieu wrote: > b) does that mean > Cabal is bound by the 3-release policy? Historically, it hasn't been the case, and I can say that the 3-release backwards compat policy would be an absolute nightmare for lib:Cabal to implement. From m at tweag.io Fri Dec 15 18:14:49 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Fri, 15 Dec 2017 19:14:49 +0100 Subject: [GHC DevOps Group] Fwd: Release policies In-Reply-To: References: <87tvwsdn2m.fsf@ben-laptop.smart-cactus.org> Message-ID: On 15 December 2017 at 19:03, Gershom B wrote: > On Fri, Dec 15, 2017 at 12:37 PM, Boespflug, Mathieu wrote: >> I did mean that one, yes. That was my question earlier - is Cabal >> along with *all* core libraries covered by the CLC's 3-release policy? > > The 3 release policy does not apply to all libraries maintained by the > CLC. It applies to "basic libraries": > https://prime.haskell.org/wiki/Libraries/3-Release-Policy That clarifies it, thanks! From manuel.chakravarty at tweag.io Mon Dec 18 03:50:46 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Mon, 18 Dec 2017 14:50:46 +1100 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> Message-ID: <0EE46FA3-0936-4E4E-A4A8-08506ECEB55C@tweag.io> > Am 16.12.2017 um 04:06 schrieb Simon Marlow : > > On 14 December 2017 at 17:30, Ben Gamari > wrote: > Historically GHC avoided this since GHC avoided merge commits as they > complicate bisection. However, now since GitHub supports rebase-merging > this is certainly a compelling option. It's certainly much simpler than > the approach I outlined above yet provides the same benefits. > > If no one objects I think this sounds like a great path forward. > > We’ll have to be careful to tell people that PRs will be squashed when merging, since the current workflow is to push multiple patches at once to master. Yes, good point. > How does merging PRs work when our source of truth is not on github? I am not sure, but there are merged PR’s in the GitHub history, so I assume, it works. Ben? Incidentally, once we have no more direct pushes to master, would it make sense to make GitHub the source of truth? (That would get rid of another bit of custom infrastructure.) Cheers, Manuel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From manuel.chakravarty at tweag.io Mon Dec 18 04:04:35 2017 From: manuel.chakravarty at tweag.io (Manuel M T Chakravarty) Date: Mon, 18 Dec 2017 15:04:35 +1100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: Yes, you are right Haskell for Mac also links against Cabal-the-library and API changes have regularly required me to fix my code. I guess, I have never been particularly stressed about it, because I also link against GHC API and that doesn’t even know how to spell API stability — i.e., changes required by Cabal are usually drowned out by the chaos inflicted by GHC. In any case, you are making a good point. Mikhail, I don’t understand your response to Mathieu at all. What does the build-type have to do with this? Cheers, Manuel > 15.12.2017, 19:41 Boespflug, Mathieu : > > Thanks for the feedback, Michael. > > Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? > > Michael, you brought up another problem tangentially related to the > original integer-gmp issue but that was not in my original list > earlier in this thread: > > * Cabal-2.0.0 had breaking changes in the API. > > This means that by association GHC itself broke BC, because it shipped > with Cabal-2.0, without the usual grace period. > > Now, there are far fewer users of Cabal than of base. All, Michael in > his previous email seems to be okay with breaking changes in Cabal > given the conditions he stated (2 months grace period, advance notice > of when the 2 months start). And perhaps this points to the lack of a > need for the regular grace period applying to Cabal. How many other > users of Cabal-the-library are there? In principle, every single > Hackage package out there, which all have a Setup.hs script. Most of > them are trivial, but how many did break because of these API changes? > I for one am pretty happy for Cabal to move fast, but I'm concerned > that these breaking changes happened without any kind of advance > notice. To Simon's original point - there does not to be a clear > policy and a good process surrounding Cabal itself and other GHC > dependencies. So far we discussed mostly metadata changes, not API > changes. > > And to be clear, folks did get some (post facto) notice in September: > http://coldwa.st/e/blog/2017-09-09-Cabal-2-0.html. That's helpful, but > I submit that in the future this really should be part of the GHC > release announcement (which happened over a month before that), and in > fact a migration guide circulated before the feature freeze, so > downstream tooling authors can adapt. If this is not possible, then > perhaps it's premature for GHC to include that given Cabal release. > Again, GHC should always have the option to stick to the old Cabal > version until things get ironed out. > > > On 15 December 2017 at 08:42, Michael Snoyman wrote: >> >> >> On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: >> >> [snip] >> >>> * Or a middle ground: make feature freeze a thing. Meaning that for a >>> couple of months before a major GHC release, the major new Cabal isn't >>> technically released yet, but like GHC itself within this period, it's >>> pretty staid, so not so much a moving target, and something downstream >>> tooling authors can possibly adapt to even without any grace period on >>> new metadata features. This assumes that the 2 months of feature >>> freeze are enough time for downstream tooling. Thoughts from any of >>> those maintainers? 
>>> >> >> Short answer: if there's a clear idea in advance of when this feature freeze >> is going to happen, I think we can coordinate releases of downstream tooling >> (Stack being the most important, but stackage-curator playing in as well) so >> that 2 months is sufficient. I'll talk with the rest of the Stack team to >> see if there are any concerns. >> >> Longer answer: Stack intentionally avoids depending on the internals of >> Cabal wherever possible. Instead of calling library functions directly from >> within Haskell code to perform builds, for example, it interacts with the >> Setup.hs files over their command line interface.[1] This has two results: >> >> * Stack can usually start using new GHC/Cabal versions without a new Stack >> release, since it's just shelling out for the actual build >> * There's not usually very much code churn needed in Stack to upgrade to a >> newer Cabal release >> >> This past release was an exception because of all of the changes that >> landed, both the new cabal grammar to support the ^>= operator (making the >> old parser incapable of lossily parsing new files) and API changes (I think >> mostly around Backpack, though there was some code cleanup as well). In >> particular, the main interface we need from Cabal—the package description >> data types and parser—changed significantly enough that it took significant >> effort to upgrade. There were also new features added (like sub libraries >> and foreign libraries) that weren't immediately supported by the old Stack >> version, and had to be manually added in. >> >> Tying this up: generally upgrading to a new Cabal release should be fine, >> and the only concern I'd have is fitting it into a release schedule with >> Stack. The complications that could slow that down are: >> >> * Changes to the command line interface that Stack uses (hopefully those are >> exceedingly rare) >> * Major overhauls to the Stack-facing API >> >> Michael >> >> [1] This allows for more reproducible builds of older snapshots, insuring >> that the exact same Cabal library is performing the builds > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group From m at tweag.io Mon Dec 18 11:03:22 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 18 Dec 2017 12:03:22 +0100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: I think Mikhail's point is that if a package says build-type: Simple, then we know exactly what its Setup.hs says, and therefore also which part of the Cabal API it's using. Easy enough to keep that part stable even if others change. Case in point: Cabal-2.0 brought a number of changes to the overall API, but nothing that broke calling defaultMain from Distribution.Simple (which is what a build-type: Simple script does). At the end of the day the consumers of the wider Cabal API are pretty small. A substantial number of misc packages on Hackage do it but rarely heavily. Other than that it essentially comes down to Stack, cabal-install, Haskell For Mac and... any others? My takeaway from the discussion so far is that the number of heavy consumers looks small enough that a draconian BC policy for Cabal-the-library sounds overkill, provided, crucially, that everything is in place, by GHC feature freeze at the very latest, to allow a smooth migration. 
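To make the build-type point above concrete: the Setup.hs implied by build-type: Simple is just the two-liner below, and since it only ever calls defaultMain, the wider Cabal-2.0 API changes left such packages untouched.

    import Distribution.Simple
    main :: IO ()
    main = defaultMain

A build-type: Custom package, by contrast, ships a Setup.hs that links against whichever further parts of the Cabal API its hooks touch; that is the code whose dependencies (including its Cabal version bound) the custom-setup stanza Mikhail mentions is meant to pin down. The following is only a sketch, with a made-up postBuild action; real custom setups typically do considerably more.

    import Distribution.Simple

    -- Runs the standard Simple build, plus one extra step after "build".
    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks
      { postBuild = \_args _flags _pkgDescr _localBuildInfo ->
          putStrLn "a custom post-build step would run here" }

Only this second kind of package is directly exposed to Cabal API churn, which matches Mikhail's observation that plain build-type: Simple packages were not affected at all.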
A "smooth transition" means having a migration guide available before start of feature freeze etc, but to Ben's concern stated earlier in this thread (about GHC/upstream coupling), ideally also a release. I should note that to the extent that GHC tracks upstream releases only (not git commits in unknown state), GHC can be released on a timely schedule without needing any coordination from upstream maintainers to await new releases on their part. So quite apart from the Cabal thing specifically, it's worth thinking about asking that the versions of all upstream packages only make it into GHC, at the behest of their respective maintainers, after a new release of upstream is made. This was already proposed earlier in the thread: > * [Proposal:] GHC does not track git commits of upstream dependencies > in an unknown state of quality, but tracks vetted and tested releases > instead. Potentially, this could even mean drastically cutting down on the number of git submodules carried in the GHC repo. Since these packages could as well be downloaded from Hackage. On 18 December 2017 at 05:04, Manuel M T Chakravarty wrote: > Yes, you are right Haskell for Mac also links against Cabal-the-library and API changes have regularly required me to fix my code. I guess, I have never been particularly stressed about it, because I also link against GHC API and that doesn’t even know how to spell API stability — i.e., changes required by Cabal are usually drowned out by the chaos inflicted by GHC. > > In any case, you are making a good point. > > Mikhail, I don’t understand your response to Mathieu at all. What does the build-type have to do with this? > > Cheers, > Manuel > >> 15.12.2017, 19:41 Boespflug, Mathieu : >> >> Thanks for the feedback, Michael. >> >> Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? >> >> Michael, you brought up another problem tangentially related to the >> original integer-gmp issue but that was not in my original list >> earlier in this thread: >> >> * Cabal-2.0.0 had breaking changes in the API. >> >> This means that by association GHC itself broke BC, because it shipped >> with Cabal-2.0, without the usual grace period. >> >> Now, there are far fewer users of Cabal than of base. All, Michael in >> his previous email seems to be okay with breaking changes in Cabal >> given the conditions he stated (2 months grace period, advance notice >> of when the 2 months start). And perhaps this points to the lack of a >> need for the regular grace period applying to Cabal. How many other >> users of Cabal-the-library are there? In principle, every single >> Hackage package out there, which all have a Setup.hs script. Most of >> them are trivial, but how many did break because of these API changes? >> I for one am pretty happy for Cabal to move fast, but I'm concerned >> that these breaking changes happened without any kind of advance >> notice. To Simon's original point - there does not to be a clear >> policy and a good process surrounding Cabal itself and other GHC >> dependencies. So far we discussed mostly metadata changes, not API >> changes. >> >> And to be clear, folks did get some (post facto) notice in September: >> http://coldwa.st/e/blog/2017-09-09-Cabal-2-0.html. That's helpful, but >> I submit that in the future this really should be part of the GHC >> release announcement (which happened over a month before that), and in >> fact a migration guide circulated before the feature freeze, so >> downstream tooling authors can adapt. 
If this is not possible, then >> perhaps it's premature for GHC to include that given Cabal release. >> Again, GHC should always have the option to stick to the old Cabal >> version until things get ironed out. >> >> >> On 15 December 2017 at 08:42, Michael Snoyman wrote: >>> >>> >>> On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: >>> >>> [snip] >>> >>>> * Or a middle ground: make feature freeze a thing. Meaning that for a >>>> couple of months before a major GHC release, the major new Cabal isn't >>>> technically released yet, but like GHC itself within this period, it's >>>> pretty staid, so not so much a moving target, and something downstream >>>> tooling authors can possibly adapt to even without any grace period on >>>> new metadata features. This assumes that the 2 months of feature >>>> freeze are enough time for downstream tooling. Thoughts from any of >>>> those maintainers? >>>> >>> >>> Short answer: if there's a clear idea in advance of when this feature freeze >>> is going to happen, I think we can coordinate releases of downstream tooling >>> (Stack being the most important, but stackage-curator playing in as well) so >>> that 2 months is sufficient. I'll talk with the rest of the Stack team to >>> see if there are any concerns. >>> >>> Longer answer: Stack intentionally avoids depending on the internals of >>> Cabal wherever possible. Instead of calling library functions directly from >>> within Haskell code to perform builds, for example, it interacts with the >>> Setup.hs files over their command line interface.[1] This has two results: >>> >>> * Stack can usually start using new GHC/Cabal versions without a new Stack >>> release, since it's just shelling out for the actual build >>> * There's not usually very much code churn needed in Stack to upgrade to a >>> newer Cabal release >>> >>> This past release was an exception because of all of the changes that >>> landed, both the new cabal grammar to support the ^>= operator (making the >>> old parser incapable of lossily parsing new files) and API changes (I think >>> mostly around Backpack, though there was some code cleanup as well). In >>> particular, the main interface we need from Cabal—the package description >>> data types and parser—changed significantly enough that it took significant >>> effort to upgrade. There were also new features added (like sub libraries >>> and foreign libraries) that weren't immediately supported by the old Stack >>> version, and had to be manually added in. >>> >>> Tying this up: generally upgrading to a new Cabal release should be fine, >>> and the only concern I'd have is fitting it into a release schedule with >>> Stack. The complications that could slow that down are: >>> >>> * Changes to the command line interface that Stack uses (hopefully those are >>> exceedingly rare) >>> * Major overhauls to the Stack-facing API >>> >>> Michael >>> >>> [1] This allows for more reproducible builds of older snapshots, insuring >>> that the exact same Cabal library is performing the builds >> _______________________________________________ >> Ghc-devops-group mailing list >> Ghc-devops-group at haskell.org >> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group > From simonpj at microsoft.com Mon Dec 18 11:08:56 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Dec 2017 11:08:56 +0000 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: This thread sounds as if it has been productive, though I have not followed the details. 
Does anyone feel able to draw it together into a proposed release policy? Along with a summary of the reasoning that led to it? Thanks Simon | -----Original Message----- | From: Ghc-devops-group [mailto:ghc-devops-group-bounces at haskell.org] On | Behalf Of Boespflug, Mathieu | Sent: 18 December 2017 11:03 | To: Manuel M T Chakravarty | Cc: ghc-devops-group at haskell.org; ghc-devs at haskell.org Devs | Subject: Re: [GHC DevOps Group] Release policies | | I think Mikhail's point is that if a package says build-type: Simple, then | we know exactly what its Setup.hs says, and therefore also which part of the | Cabal API it's using. Easy enough to keep that part stable even if others | change. Case in point: Cabal-2.0 brought a number of changes to the overall | API, but nothing that broke calling defaultMain from Distribution.Simple | (which is what a build-type: Simple script does). At the end of the day the | consumers of the wider Cabal API are pretty small. A substantial number of | misc packages on Hackage do it but rarely heavily. Other than that it | essentially comes down to Stack, cabal-install, Haskell For Mac and... any | others? | | My takeaway from the discussion so far is that the number of heavy consumers | looks small enough that a draconian BC policy for Cabal-the-library sounds | overkill, provided, crucially, that everything is in place, by GHC feature | freeze at the very latest, to allow a smooth migration. A "smooth | transition" means having a migration guide available before start of feature | freeze etc, but to Ben's concern stated earlier in this thread (about | GHC/upstream coupling), ideally also a release. | | I should note that to the extent that GHC tracks upstream releases only (not | git commits in unknown state), GHC can be released on a timely schedule | without needing any coordination from upstream maintainers to await new | releases on their part. So quite apart from the Cabal thing specifically, | it's worth thinking about asking that the versions of all upstream packages | only make it into GHC, at the behest of their respective maintainers, after | a new release of upstream is made. This was already proposed earlier in the | thread: | | > * [Proposal:] GHC does not track git commits of upstream dependencies | > in an unknown state of quality, but tracks vetted and tested releases | > instead. | | Potentially, this could even mean drastically cutting down on the number of | git submodules carried in the GHC repo. Since these packages could as well | be downloaded from Hackage. | | | On 18 December 2017 at 05:04, Manuel M T Chakravarty | wrote: | > Yes, you are right Haskell for Mac also links against Cabal-the-library | and API changes have regularly required me to fix my code. I guess, I have | never been particularly stressed about it, because I also link against GHC | API and that doesn’t even know how to spell API stability — i.e., changes | required by Cabal are usually drowned out by the chaos inflicted by GHC. | > | > In any case, you are making a good point. | > | > Mikhail, I don’t understand your response to Mathieu at all. What does the | build-type have to do with this? | > | > Cheers, | > Manuel | > | >> 15.12.2017, 19:41 Boespflug, Mathieu : | >> | >> Thanks for the feedback, Michael. | >> | >> Manuel, I believe you are also a Cabal-the-library consumer in Haskell | For Mac? 
| >> | >> Michael, you brought up another problem tangentially related to the | >> original integer-gmp issue but that was not in my original list | >> earlier in this thread: | >> | >> * Cabal-2.0.0 had breaking changes in the API. | >> | >> This means that by association GHC itself broke BC, because it | >> shipped with Cabal-2.0, without the usual grace period. | >> | >> Now, there are far fewer users of Cabal than of base. All, Michael in | >> his previous email seems to be okay with breaking changes in Cabal | >> given the conditions he stated (2 months grace period, advance notice | >> of when the 2 months start). And perhaps this points to the lack of a | >> need for the regular grace period applying to Cabal. How many other | >> users of Cabal-the-library are there? In principle, every single | >> Hackage package out there, which all have a Setup.hs script. Most of | >> them are trivial, but how many did break because of these API changes? | >> I for one am pretty happy for Cabal to move fast, but I'm concerned | >> that these breaking changes happened without any kind of advance | >> notice. To Simon's original point - there does not to be a clear | >> policy and a good process surrounding Cabal itself and other GHC | >> dependencies. So far we discussed mostly metadata changes, not API | >> changes. | >> | >> And to be clear, folks did get some (post facto) notice in September: | >> https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fcoldw | >> a.st%2Fe%2Fblog%2F2017-09-09-Cabal-2-0.html&data=02%7C01%7Csimonpj%40 | >> microsoft.com%7Ca4ef8935c10c4da1766f08d546070073%7C72f988bf86f141af91 | >> ab2d7cd011db47%7C1%7C0%7C636491918245390697&sdata=YNAuG7XcS9NiRMyklKe | >> 1LyL6LrxKr%2BlIsu6oxFXb%2Byg%3D&reserved=0. That's helpful, but I | >> submit that in the future this really should be part of the GHC release | announcement (which happened over a month before that), and in fact a | migration guide circulated before the feature freeze, so downstream tooling | authors can adapt. If this is not possible, then perhaps it's premature for | GHC to include that given Cabal release. | >> Again, GHC should always have the option to stick to the old Cabal | >> version until things get ironed out. | >> | >> | >> On 15 December 2017 at 08:42, Michael Snoyman | wrote: | >>> | >>> | >>> On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: | >>> | >>> [snip] | >>> | >>>> * Or a middle ground: make feature freeze a thing. Meaning that for | >>>> a couple of months before a major GHC release, the major new Cabal | >>>> isn't technically released yet, but like GHC itself within this | >>>> period, it's pretty staid, so not so much a moving target, and | >>>> something downstream tooling authors can possibly adapt to even | >>>> without any grace period on new metadata features. This assumes | >>>> that the 2 months of feature freeze are enough time for downstream | >>>> tooling. Thoughts from any of those maintainers? | >>>> | >>> | >>> Short answer: if there's a clear idea in advance of when this | >>> feature freeze is going to happen, I think we can coordinate | >>> releases of downstream tooling (Stack being the most important, but | >>> stackage-curator playing in as well) so that 2 months is sufficient. | >>> I'll talk with the rest of the Stack team to see if there are any | concerns. | >>> | >>> Longer answer: Stack intentionally avoids depending on the internals | >>> of Cabal wherever possible. 
Instead of calling library functions | >>> directly from within Haskell code to perform builds, for example, it | >>> interacts with the Setup.hs files over their command line interface.[1] | This has two results: | >>> | >>> * Stack can usually start using new GHC/Cabal versions without a new | >>> Stack release, since it's just shelling out for the actual build | >>> * There's not usually very much code churn needed in Stack to | >>> upgrade to a newer Cabal release | >>> | >>> This past release was an exception because of all of the changes | >>> that landed, both the new cabal grammar to support the ^>= operator | >>> (making the old parser incapable of lossily parsing new files) and | >>> API changes (I think mostly around Backpack, though there was some | >>> code cleanup as well). In particular, the main interface we need | >>> from Cabal—the package description data types and parser—changed | >>> significantly enough that it took significant effort to upgrade. | >>> There were also new features added (like sub libraries and foreign | >>> libraries) that weren't immediately supported by the old Stack version, | and had to be manually added in. | >>> | >>> Tying this up: generally upgrading to a new Cabal release should be | >>> fine, and the only concern I'd have is fitting it into a release | >>> schedule with Stack. The complications that could slow that down are: | >>> | >>> * Changes to the command line interface that Stack uses (hopefully | >>> those are exceedingly rare) | >>> * Major overhauls to the Stack-facing API | >>> | >>> Michael | >>> | >>> [1] This allows for more reproducible builds of older snapshots, | >>> insuring that the exact same Cabal library is performing the builds | >> _______________________________________________ | >> Ghc-devops-group mailing list | >> Ghc-devops-group at haskell.org | >> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group | > | _______________________________________________ | Ghc-devops-group mailing list | Ghc-devops-group at haskell.org | https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group From ben at well-typed.com Mon Dec 18 18:31:50 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 18 Dec 2017 13:31:50 -0500 Subject: [GHC DevOps Group] CircleCI job accounting question In-Reply-To: <0EE46FA3-0936-4E4E-A4A8-08506ECEB55C@tweag.io> References: <8760agnulr.fsf@ben-laptop.smart-cactus.org> <0B8947A7-CF4E-4803-828E-2DC1FBC6F4F0@tweag.io> <8760aen9ov.fsf@ben-laptop.smart-cactus.org> <54496A02-05EE-43EF-B717-612829628D90@tweag.io> <87zi7njmth.fsf@ben-laptop.smart-cactus.org> <87mv3f80xv.fsf@ben-laptop.smart-cactus.org> <87d14a8h4s.fsf@ben-laptop.smart-cactus.org> <874lpm86ra.fsf@ben-laptop.smart-cactus.org> <31461C3E-34EC-409D-8952-DB65B9452D3D@tweag.io> <87vahdi91m.fsf@ben-laptop.smart-cactus.org> <87374ghrhj.fsf@ben-laptop.smart-cactus.org> <2E8F04C3-63C0-4E55-A197-16D631519EE4@tweag.io> <87lgi5fc99.fsf@ben-laptop.smart-cactus.org> <0EE46FA3-0936-4E4E-A4A8-08506ECEB55C@tweag.io> Message-ID: <871sjrevlb.fsf@ben-laptop.smart-cactus.org> Manuel M T Chakravarty writes: >> Am 16.12.2017 um 04:06 schrieb Simon Marlow : >> >> On 14 December 2017 at 17:30, Ben Gamari > wrote: >> Historically GHC avoided this since GHC avoided merge commits as they >> complicate bisection. However, now since GitHub supports rebase-merging >> this is certainly a compelling option. It's certainly much simpler than >> the approach I outlined above yet provides the same benefits. 
>> >> If no one objects I think this sounds like a great path forward. >> >> We’ll have to be careful to tell people that PRs will be squashed >> when merging, since the current workflow is to push multiple patches >> at once to master. > > Yes, good point. > >> How does merging PRs work when our source of truth is not on github? > > I am not sure, but there are merged PRs in the GitHub history, so I > assume, it works. Ben? > I have generally avoided pushing directly to GitHub (including via PR merge) since the git.haskell.org mirror won't see such pushes. This is the reason why pushes to github.com/ghc/ghc are currently disabled. I instead merge PRs manually with `git merge; git rebase` and push to git.haskell.org. > Incidentally, once we have no more direct pushes to master, would it > make sense to make GitHub the source of truth? (That would get rid of > another bit of custom infrastructure.) > Perhaps, but I'm not sure I see the benefit in doing so given that this would require reworking various bits of infrastructure that depend upon git.haskell.org that currently work fine. This includes the Trac push notifications, the commit style checks, the submodule referential integrity checks, and submodule mirroring hooks. Additionally, GitHub's access control mechanism is quite lacking relative to gitolite. We rely on this to keep ghc's branch namespace (which is already quite cluttered) under control. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From m at tweag.io Mon Dec 18 18:51:27 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 18 Dec 2017 19:51:27 +0100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: Hi Herbert, On 18 December 2017 at 12:22, Herbert Valerio Riedel wrote: > Hi Mathieu, > > On Mon, Dec 18, 2017 at 12:03 PM, Boespflug, Mathieu wrote: >> it's worth thinking about asking that >> the versions of all upstream packages only make it into GHC, at the >> behest of their respective maintainers, after a new release of >> upstream is made. > > Let me try to formulate the invariant such a procedure would demand: > > That would require a guarantee that the APIs provided by GHC (mostly > ghc-prim/base, but possibly more, including other boot libs; as well > as GHC's own behaviour) my package(s) rely upon are frozen to the > point that any such release of my packages advertising compatibility > with the imminent GHC release will remain compatible with said final > GHC release. > > Is this something you're ready to guarantee? That's exactly it: a tradeoff. As you note, any package release now can't possibly anticipate all changes in future GHC. Furthermore, if a new GHC feature crucially depends on upstream but no upstream release is available yet, under this proposal merging the new GHC feature would need to be delayed. But on the other hand, if indeed GHC is to have more frequent releases, we can't have a GHC release perennially held up by upstream maintainers cutting their own releases one after the other, or worse still, releases happening *after* it already shipped in GHC as we just saw, or with breaking changes not announced to downstream tooling authors ahead of time. This is a problem that was brought up by Ben at ICFP, and again here.
It's a problem relevant to the discussion at hand, which started as the result of upstream releases showing up on Hackage long after the release, and downstream tooling authors not afforded any advance notice to adapt. When this was brought up at ICFP, several GHC DevOps Group members recommended to Ben that he avoid needing *any* cooperation from upstream maintainers on the critical path towards a release. Of the packages you mention, base can't introduce breaking changes without a grace period. Are any upstream packages so closely tied to ghc-prim etc. that a breaking change in the latter is likely between feature freeze and the release date? On the off-chance that a BC change does happen, package releases are cheap, so would that really be a problem? If a package is that closely tied to ghc-prim, base or any other boot lib, and conversely GHC is closely tied to it, then shouldn't that package just be part of the GHC codebase proper, rather than managed separately outside of GHC devs' control? From moritz.angermann at gmail.com Mon Dec 18 11:18:26 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 18 Dec 2017 19:18:26 +0800 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: <326FDA10-DC08-426F-8A36-EA1D7E847537@gmail.com> Hi, this might only be tangentially relevant. However, you might consider this a working example of GHC and Cabal bleeding edge symbiosis. Some might have seen that I built some *relocatable* GHC releases for cross compilation (however those include the full base compiler for macOS and linux (deb8) as well) at http://hackage.mobilehaskell.org/. To facilitate those builds, I not only needed to make changes to GHC to allow it to detect its library folder relative to `ghc`; I also made quite a lot of changes to hadrian and cabal(!) to actually support describing GHC packages (primarily rts and ghc-prim) as cabal packages without resorting to hacks in ghc-cabal. And getting rid of ghc-cabal altogether. Any overly restrictive policy on GHC and Cabal releases will result in changes like these becoming a lot harder to do. After all, a lot of work on GHC is performed by volunteers in their spare time, and I honestly cannot see myself forcing someone else to spend even more time on streamlining project integrations, if even higher barriers are put into place. If we start dropping submodule dependencies, we need to make it trivial to put them back into place when hacking on GHC. I would therefore rather suggest we keep submodules in the development tree of GHC, but put a policy into place that says that submodules have to be replaced with their proper hackage dependencies (and versions) prior to any release. This would allow us to keep working across ghc and multiple dependencies as source, while giving some additional non-in-flight guarantees about GHC releases and their dependencies. Another example would be my llvm-ng (bitcode producing llvm backend), which is naturally also a set of submodules in my ghc tree, as working on the code gen and ghc at the same time would either mean I need to fully integrate those libraries into GHC, or start bumping them on hackage for every minimal change while working on GHC and the code gen. At the end of the day, I believe we want to make hacking on GHC as easy for everyone as possible while at the same time improving the release quality.
Cheers, Moritz > On Dec 18, 2017, at 7:03 PM, Boespflug, Mathieu wrote: > > I think Mikhail's point is that if a package says build-type: Simple, > then we know exactly what its Setup.hs says, and therefore also which > part of the Cabal API it's using. Easy enough to keep that part stable > even if others change. Case in point: Cabal-2.0 brought a number of > changes to the overall API, but nothing that broke calling defaultMain > from Distribution.Simple (which is what a build-type: Simple script > does). At the end of the day the consumers of the wider Cabal API are > pretty small. A substantial number of misc packages on Hackage do it > but rarely heavily. Other than that it essentially comes down to > Stack, cabal-install, Haskell For Mac and... any others? > > My takeaway from the discussion so far is that the number of heavy > consumers looks small enough that a draconian BC policy for > Cabal-the-library sounds overkill, provided, crucially, that > everything is in place, by GHC feature freeze at the very latest, to > allow a smooth migration. A "smooth transition" means having a > migration guide available before start of feature freeze etc, but to > Ben's concern stated earlier in this thread (about GHC/upstream > coupling), ideally also a release. > > I should note that to the extent that GHC tracks upstream releases > only (not git commits in unknown state), GHC can be released on a > timely schedule without needing any coordination from upstream > maintainers to await new releases on their part. So quite apart from > the Cabal thing specifically, it's worth thinking about asking that > the versions of all upstream packages only make it into GHC, at the > behest of their respective maintainers, after a new release of > upstream is made. This was already proposed earlier in the thread: > >> * [Proposal:] GHC does not track git commits of upstream dependencies >> in an unknown state of quality, but tracks vetted and tested releases >> instead. > > Potentially, this could even mean drastically cutting down on the > number of git submodules carried in the GHC repo. Since these packages > could as well be downloaded from Hackage. > > > On 18 December 2017 at 05:04, Manuel M T Chakravarty > wrote: >> Yes, you are right Haskell for Mac also links against Cabal-the-library and API changes have regularly required me to fix my code. I guess, I have never been particularly stressed about it, because I also link against GHC API and that doesn’t even know how to spell API stability — i.e., changes required by Cabal are usually drowned out by the chaos inflicted by GHC. >> >> In any case, you are making a good point. >> >> Mikhail, I don’t understand your response to Mathieu at all. What does the build-type have to do with this? >> >> Cheers, >> Manuel >> >>> 15.12.2017, 19:41 Boespflug, Mathieu : >>> >>> Thanks for the feedback, Michael. >>> >>> Manuel, I believe you are also a Cabal-the-library consumer in Haskell For Mac? >>> >>> Michael, you brought up another problem tangentially related to the >>> original integer-gmp issue but that was not in my original list >>> earlier in this thread: >>> >>> * Cabal-2.0.0 had breaking changes in the API. >>> >>> This means that by association GHC itself broke BC, because it shipped >>> with Cabal-2.0, without the usual grace period. >>> >>> Now, there are far fewer users of Cabal than of base. 
All, Michael in >>> his previous email seems to be okay with breaking changes in Cabal >>> given the conditions he stated (2 months grace period, advance notice >>> of when the 2 months start). And perhaps this points to the lack of a >>> need for the regular grace period applying to Cabal. How many other >>> users of Cabal-the-library are there? In principle, every single >>> Hackage package out there, which all have a Setup.hs script. Most of >>> them are trivial, but how many did break because of these API changes? >>> I for one am pretty happy for Cabal to move fast, but I'm concerned >>> that these breaking changes happened without any kind of advance >>> notice. To Simon's original point - there does not to be a clear >>> policy and a good process surrounding Cabal itself and other GHC >>> dependencies. So far we discussed mostly metadata changes, not API >>> changes. >>> >>> And to be clear, folks did get some (post facto) notice in September: >>> http://coldwa.st/e/blog/2017-09-09-Cabal-2-0.html. That's helpful, but >>> I submit that in the future this really should be part of the GHC >>> release announcement (which happened over a month before that), and in >>> fact a migration guide circulated before the feature freeze, so >>> downstream tooling authors can adapt. If this is not possible, then >>> perhaps it's premature for GHC to include that given Cabal release. >>> Again, GHC should always have the option to stick to the old Cabal >>> version until things get ironed out. >>> >>> >>> On 15 December 2017 at 08:42, Michael Snoyman wrote: >>>> >>>> >>>> On Thu, Dec 14, 2017 at 12:27 PM, Boespflug, Mathieu wrote: >>>> >>>> [snip] >>>> >>>>> * Or a middle ground: make feature freeze a thing. Meaning that for a >>>>> couple of months before a major GHC release, the major new Cabal isn't >>>>> technically released yet, but like GHC itself within this period, it's >>>>> pretty staid, so not so much a moving target, and something downstream >>>>> tooling authors can possibly adapt to even without any grace period on >>>>> new metadata features. This assumes that the 2 months of feature >>>>> freeze are enough time for downstream tooling. Thoughts from any of >>>>> those maintainers? >>>>> >>>> >>>> Short answer: if there's a clear idea in advance of when this feature freeze >>>> is going to happen, I think we can coordinate releases of downstream tooling >>>> (Stack being the most important, but stackage-curator playing in as well) so >>>> that 2 months is sufficient. I'll talk with the rest of the Stack team to >>>> see if there are any concerns. >>>> >>>> Longer answer: Stack intentionally avoids depending on the internals of >>>> Cabal wherever possible. Instead of calling library functions directly from >>>> within Haskell code to perform builds, for example, it interacts with the >>>> Setup.hs files over their command line interface.[1] This has two results: >>>> >>>> * Stack can usually start using new GHC/Cabal versions without a new Stack >>>> release, since it's just shelling out for the actual build >>>> * There's not usually very much code churn needed in Stack to upgrade to a >>>> newer Cabal release >>>> >>>> This past release was an exception because of all of the changes that >>>> landed, both the new cabal grammar to support the ^>= operator (making the >>>> old parser incapable of lossily parsing new files) and API changes (I think >>>> mostly around Backpack, though there was some code cleanup as well). 
In >>>> particular, the main interface we need from Cabal—the package description >>>> data types and parser—changed significantly enough that it took significant >>>> effort to upgrade. There were also new features added (like sub libraries >>>> and foreign libraries) that weren't immediately supported by the old Stack >>>> version, and had to be manually added in. >>>> >>>> Tying this up: generally upgrading to a new Cabal release should be fine, >>>> and the only concern I'd have is fitting it into a release schedule with >>>> Stack. The complications that could slow that down are: >>>> >>>> * Changes to the command line interface that Stack uses (hopefully those are >>>> exceedingly rare) >>>> * Major overhauls to the Stack-facing API >>>> >>>> Michael >>>> >>>> [1] This allows for more reproducible builds of older snapshots, insuring >>>> that the exact same Cabal library is performing the builds >>> _______________________________________________ >>> Ghc-devops-group mailing list >>> Ghc-devops-group at haskell.org >>> https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chak at justtesting.org Tue Dec 19 10:15:48 2017 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Tue, 19 Dec 2017 21:15:48 +1100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: <6CCFE7BC-8180-4D73-B188-9BE9F2BC9223@justtesting.org> I believe the standard policy would be to say that even master may only dependent on released versions of dependencies. That is after all the only way to have a master that is always ready to be cut for a release (as per modern CI practices). Given the tight coupling of some of the dependencies of GHC with GHC, maybe we need to consider something weaker, but I think, the weakest reasonable (from a CI standpoint) policy is to say that, while master may depend on pre-release versions, release branches can *only* depend on released dependencies. In other words, a release branch can only be cut when master has progressed to a point where it only depends on released versions of its dependencies. Under that compromise, cutting a GHC release branch may be delayed by a delayed upstream release, but hopefully the approx 3 month release process has enough slack to tolerate that. (It is obviously not ideal, though.) Cheers, Manuel PS: Simon, I am sorry, but IMHO it is too early for a summary and policy proposal as the discussion hasn’t really converged yet. In any case, I am happy to write a summary Trac page once we are there. Is that ok? > 19.12.2017 06:41 Gershom B : > > Let me try to formulate a synthetic policy as per Simon's request: > > Policy: > Bundled library maintainers agree to the following: > 1) When GHC cuts a feature-freeze branch, they too (if anything has > changed) cut a feature-freeze branch within two weeks at the maximum > (ideally sooner), to be included in the main GHC freeze branch. If > they do not do so, the last released version will be included instead. > 2) When GHC releases the first release candidate, maintainers (if > anything has changed) release new versions of their packages, to then > be depended on directly in the GHC repo. All submodules are then > replaced with their proper released versions for GHC release. 
> > This policy can be enforced by GHC hq as part of the release process > with the exception of a case in which there's coupling so that a new > GHC _requires_ a new submodule release, and also the maintainer is not > responsive. We'll have to deal with that case directly, likely by just > appealing to the libraries committee or something to force a new > release :-) > > Motivation: > This should help address multiple issues: 1) holdup of ghc on other > releases. 2) lack of synchronization with ghc and other releases. 3) > low lead-time for people to adapt to API changes in forthcoming > library releases tied to ghc releases. In particular, because Cabal is > part of this policy, it should help circumvent the sorts of problems > that led to this thread initially. Further, because this only applies > to freeze/release branches, it should not slow down > rapid-implementation of cross-cutting changes more generally. > > Cheers, > Gershom > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Tue Dec 19 10:20:05 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Dec 2017 10:20:05 +0000 Subject: [GHC DevOps Group] Release policies In-Reply-To: <6CCFE7BC-8180-4D73-B188-9BE9F2BC9223@justtesting.org> References: <6CCFE7BC-8180-4D73-B188-9BE9F2BC9223@justtesting.org> Message-ID: | PS: Simon, I am sorry, but IMHO it is too early for a summary and policy | proposal as the discussion hasn’t really converged yet. In any case, I am | happy to write a summary Trac page once we are there. Is that ok? Yes, I'm perfectly happy with that, thank you. I just wanted to be sure that the discussion eventually converged rather than petering out. Many thanks to Gershom for putting out a concrete suggestion; I think that concrete proposals can help to focus a debate. Simon | -----Original Message----- | From: Manuel M T Chakravarty [mailto:chak at justtesting.org] | Sent: 19 December 2017 10:16 | To: Gershom Bazerman ; Simon Peyton Jones | | Cc: ghc-devs ; ghc-devops-group at haskell.org | Subject: Re: [GHC DevOps Group] Release policies | | I believe the standard policy would be to say that even master may only | dependent on released versions of dependencies. That is after all the | only way to have a master that is always ready to be cut for a release | (as per modern CI practices). | | Given the tight coupling of some of the dependencies of GHC with GHC, | maybe we need to consider something weaker, but I think, the weakest | reasonable (from a CI standpoint) policy is to say that, while master may | depend on pre-release versions, release branches can *only* depend on | released dependencies. In other words, a release branch can only be cut | when master has progressed to a point where it only depends on released | versions of its dependencies. | | Under that compromise, cutting a GHC release branch may be delayed by a | delayed upstream release, but hopefully the approx 3 month release | process has enough slack to tolerate that. (It is obviously not ideal, | though.) | | Cheers, | Manuel | | PS: Simon, I am sorry, but IMHO it is too early for a summary and policy | proposal as the discussion hasn’t really converged yet. In any case, I am | happy to write a summary Trac page once we are there. Is that ok? 
| | > 19.12.2017 06:41 Gershom B : | > | > Let me try to formulate a synthetic policy as per Simon's request: | > | > Policy: | > Bundled library maintainers agree to the following: | > 1) When GHC cuts a feature-freeze branch, they too (if anything has | > changed) cut a feature-freeze branch within two weeks at the maximum | > (ideally sooner), to be included in the main GHC freeze branch. If | > they do not do so, the last released version will be included instead. | > 2) When GHC releases the first release candidate, maintainers (if | > anything has changed) release new versions of their packages, to then | > be depended on directly in the GHC repo. All submodules are then | > replaced with their proper released versions for GHC release. | > | > This policy can be enforced by GHC hq as part of the release process | > with the exception of a case in which there's coupling so that a new | > GHC _requires_ a new submodule release, and also the maintainer is not | > responsive. We'll have to deal with that case directly, likely by just | > appealing to the libraries committee or something to force a new | > release :-) | > | > Motivation: | > This should help address multiple issues: 1) holdup of ghc on other | > releases. 2) lack of synchronization with ghc and other releases. 3) | > low lead-time for people to adapt to API changes in forthcoming | > library releases tied to ghc releases. In particular, because Cabal is | > part of this policy, it should help circumvent the sorts of problems | > that led to this thread initially. Further, because this only applies | > to freeze/release branches, it should not slow down | > rapid-implementation of cross-cutting changes more generally. | > | > Cheers, | > Gershom | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chak at justtesting.org Wed Dec 20 01:15:42 2017 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Wed, 20 Dec 2017 12:15:42 +1100 Subject: [GHC DevOps Group] Release policies In-Reply-To: References: Message-ID: <0383353F-5894-4993-96E2-71EEE64E3CB5@justtesting.org> Thanks for the terminology clarification, Gershom. As a general note, adding features or completing hitherto incomplete features on the release branch is something that should be avoided as much as possible. For starters, it is generally more work than doing it before the release branch is cut as things need to be added to master and the release branch, which slowly diverge. Moreover, the main purpose of the release branch is to stabilise the code base and to do so in a timely manner. As an example, if there is a new type system extension, but the error message are still bad, it may be allowed on the release branch with the understanding that the error messages are going to be fixed during the first month or so of the release branch lifecycle (i.e., the bulk of the work is done and tested before the release branch is cut). This in particular implies that everything merged to the release branch has to have been well tested (the wiki page states, ”[o]nly previously agreed on, stable and tested new functionality is allowed in.”) Implicitly adding new functionality by depending on a moving library dependency, which is out of the quality control of the GHC team, is not something that I would call ”stable and tested functionality". 

If we depend on the release version of a library on the release branch, we are still free to bump the library version if that is necessary to fix bugs (which may crop up due to GHC testing, as you suggest) — I would expect those to be patch-level bumps, though. If new library features are not sufficiently stable to move to that less tightly coupled process from T-3, then that feature set simply has to wait for the next GHC release (which by the new schedule will come much more quickly than before).

Let me reiterate that the tighter release process is aimed at being able to achieve more frequent and more predictable GHC releases. This again ought to help the library development, too. In other words, we slow down at the right place to move faster overall (an old Kung Fu trick ;)

Cheers,
Manuel

> 20.12.2017, 04:28 Gershom B :
>
> Good questions. In my below proposal I think I made an error in naming
> things. I checked back at the wiki page for the new release calendar
> schedule: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Releases/NewSchedule
>
> Based on that, what I was calling "freeze" is really just cutting the
> branch. But it isn't intended as a full freeze. That happens 3 months
> before release. The "feature freeze" in that calendar only comes with
> the first RC, 1 month before release.
>
> I think that this timing still works with the proposal I have, however
> -- bundled libs branch when GHC branches (T-3), and cut releases when
> GHC cuts the first RC (T-1). For bundled libs, I think we'd want to
> treat that branch (T-3) as closer to a feature freeze.
>
> However, and here I disagree with Manuel, I think there's plenty of
> reason to _not_ cut release versions of libs at the time of the T-3
> branch. In particular, due to the coupling, this may cause trouble if
> there are cross-cutting changes that need to be implemented for the
> sake of GHC working properly over the three month duration of the
> alpha. If there's a feature in a library designed to work in
> conjunction with a change in GHC, and that change in GHC needs to be
> altered in the course of the alpha (which may not be uncommon -- bug
> testing can often reveal such things), then it is likely the library
> may need to be changed too. I don't see any concrete problem solved by
> making this process significantly more difficult. I thought Moritz'
> examples in this thread were very revealing with regard to such
> possibilities. It is not clear what cost function the stronger
> proposal is trying to optimize for.
>
> If it is that we want a branch that is "always ready to be cut for
> release" (why? is such a thing even possible anytime in the
> foreseeable future?), one middle ground may be to cut _candidate_
> releases of bundled libs on the branch (T-3).
>
> --Gershom
>
>
> On Tue, Dec 19, 2017 at 10:12 AM, Michael Snoyman wrote:
>> Thanks for spelling this out, Gershom. Reading it through, here are my
>> questions:
>>
>> 1. What's the definition of "feature freeze"? Does it mean API stability?
>> Does it mean no code changes at all except to fix a bug? Are performance
>> fixes allowed in that case?
>> 2. What's the minimum time between GHC cutting a feature-freeze branch and
>> the first release candidate? And the minimum time between the first release
>> candidate and the official release? Obviously, if each of these is 1 week
>> (which I can't imagine would be the case), then these libraries could cut a
>> feature-freeze branch after the official release, which obviously isn't
>> intended.
>> I apologize if these timings are already well established; I'm not
>> familiar enough with GHC release cadence to know.
>>
>> I can't speak to GHC development itself, but from a downstream perspective,
>> this sounds like the right direction.
>>
>> On Mon, Dec 18, 2017 at 9:41 PM, Gershom B wrote:
>>>
>>> Let me try to formulate a synthetic policy as per Simon's request:
>>>
>>> Policy:
>>> Bundled library maintainers agree to the following:
>>> 1) When GHC cuts a feature-freeze branch, they too (if anything has
>>> changed) cut a feature-freeze branch within two weeks at the maximum
>>> (ideally sooner), to be included in the main GHC freeze branch. If
>>> they do not do so, the last released version will be included instead.
>>> 2) When GHC releases the first release candidate, maintainers (if
>>> anything has changed) release new versions of their packages, to then
>>> be depended on directly in the GHC repo. All submodules are then
>>> replaced with their proper released versions for GHC release.
>>>
>>> This policy can be enforced by GHC hq as part of the release process
>>> with the exception of a case in which there's coupling so that a new
>>> GHC _requires_ a new submodule release, and also the maintainer is not
>>> responsive. We'll have to deal with that case directly, likely by just
>>> appealing to the libraries committee or something to force a new
>>> release :-)
>>>
>>> Motivation:
>>> This should help address multiple issues: 1) holdup of ghc on other
>>> releases. 2) lack of synchronization with ghc and other releases. 3)
>>> low lead-time for people to adapt to API changes in forthcoming
>>> library releases tied to ghc releases. In particular, because Cabal is
>>> part of this policy, it should help circumvent the sorts of problems
>>> that led to this thread initially. Further, because this only applies
>>> to freeze/release branches, it should not slow down
>>> rapid implementation of cross-cutting changes more generally.
>>>
>>> Cheers,
>>> Gershom
>>> _______________________________________________
>>> ghc-devs mailing list
>>> ghc-devs at haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From chak at justtesting.org Wed Dec 20 01:47:00 2017
From: chak at justtesting.org (Manuel M T Chakravarty)
Date: Wed, 20 Dec 2017 12:47:00 +1100
Subject: [GHC DevOps Group] Can't push to haddock
In-Reply-To: 
References: <87k1xj14qf.fsf@gmail.com>
Message-ID: <31114271-8CCA-4A38-B5D7-4885D4F6D131@justtesting.org>

I think what Sven is getting at here —and I do have to say, I concur— is that there is a bit of NIH (Not Invented Here) syndrome in parts of the Haskell community. I think part of it is just inertia and the desire to keep things the same, because that is easier and more familiar.

One aspect that complicates this discussion significantly is that GHC dev has developed certain workarounds and ways of doing things where third-party infrastructure seems lacking in features, because it doesn’t support all these quirks. However, it turns out that if we are only prepared to change our workflow and processes to align with modern software development practices, many of these “features” aren’t actually necessary. We have seen quite a bit of that in the CI discussion.

I am not writing this to blame anything or anybody. I think it is a normal part of a healthy process of change.
However, it complicates the discussion, as people get hung up on individual technicalities, such as this or that feature being missing, without considering the big picture.

Generally, I think a worthwhile golden rule in ops is that custom infrastructure is bad. It creates extra work, technical debt, and failure points. So, IMHO the default ought to be to use 3rd party infrastructure (like GitHub) and only augment that where absolutely necessary. This will simply leave us with more time to write Haskell code in GHC instead of building, maintaining, and supporting GHC infrastructure.

Cheers,
Manuel

> 19.12.2017 20:47 Simon Peyton Jones via ghc-devs :
>
> It seems to me that there is some hostility towards GitHub in GHC HQ, but I don't really understand why. GitHub serves other similar projects quite well, e.g. Rust, and I can't see why we should be special.
>
> Speaking for myself, I have no hostility towards GitHub, and there is no GHC-HQ bias against it that I know of. If it serves the purpose better, we should use it. Indeed, that’s why I asked my original question. I agree with your point that data may actually be safer in GitHub than in our own repo. (And there is nothing to stop a belt-and-braces mirror backup system.)
>
> The issue is: does GitHub serve the purpose better? We have frequently debated this multi-dimensional question. And we should continue to do so: the answers may change over time (GitHub’s facilities are not static; and its increasing dominance is itself a cultural familiarity factor that simply was not the case five years ago).
>
> Simon
>
> From: Sven Panne [mailto:svenpanne at gmail.com]
> Sent: 19 December 2017 09:30
> To: Herbert Valerio Riedel
> Cc: Simon Peyton Jones ; ghc-devs at haskell.org Devs
> Subject: Re: Can't push to haddock
>
> 2017-12-19 9:50 GMT+01:00 Herbert Valerio Riedel >:
>
> We'd need mirroring anyway, as we want to keep control over our
> infrastructure and not have to trust a 3rd party infrastructure to
> safely handle our family jewels: GHC's source tree.
>
>
> I think this is a question of perspective: having the master repository on GitHub doesn't mean you are in immediate danger of losing your "family jewels". IMHO it's quite the contrary: I'm e.g. sure that if something goes wrong with GitHub, there is far more manpower behind it to fix that than for any self-hosted repository. And you can of course have some mirror of your GitHub repo in case of e.g. an earthquake/meteor/... in the San Francisco area... ;-)
>
>
> It seems to me that there is some hostility towards GitHub in GHC HQ, but I don't really understand why. GitHub serves other similar projects quite well, e.g. Rust, and I can't see why we should be special.
>
>
> Also, catching bad commits "a bit later" is just asking for trouble --
> by the time they're caught the git repos have already lost their
> invariant and it's a big mess to recover;
>
>
> This is by no means different than saying: "I want to run 'validate' in the commit hook, otherwise it's a big mess." We don't do this for obvious reasons, and what is the "big mess" if there is some incorrect submodule reference for a short time span? How is that different from somebody introducing e.g. a subtle compiler bug in a commit?
>
>
> the invariant I devised and
> whose validation I implemented 4 years ago has served us pretty well,
> and has ensured that we never glitched into incorrectness; I'm also not
> sure why it's being suggested to switch to a less principled and more
> fragile scheme now. [...]
>
> Because the whole repository structure is overly complicated and simply hosting everything on GitHub would simplify things. Again: I'm well aware that there are tradeoffs involved, but I would really appreciate simplifications. I have the impression that the entry barrier to GHC development has become larger and larger over the years, partly because of very non-standard tooling, partly because of the increasingly arcane repository organization. There are reasons that other projects like Rust attract far more developers... :-/
>
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs