MR does not merge

Clinton Mead clintonmead at
Mon Jan 21 05:59:14 UTC 2019

Hi All

I'm not a GHC dev, so my understanding of this process is limited to this
thread, but here are my thoughts.

My understanding is that we want to achieve the following two goals:

1. Never allow code which breaks tests to be committed to master.
2. Ensure that master is up to date with recently submitted merge requests
(MRs) as soon as possible.

The issue seems to be that the only way to ensure 1 is to run a serial
"rebase, test, advance master" process on every MR, which means that a
burst of MRs can cause the queue to blow out.

So what I propose is the following:

1. Keep a queue of pending MRs.
2. When the previous test run is complete, create a branch (let's call it
"pending") consisting of all the MRs in the queue, rebased first on master
and then on each other. Drop any MRs that fail this rebasing.
3. Run tests against "pending".
4. If the tests pass, "pending" becomes "master". However, if the CI for
"pending" fails, split "pending" into two batches (half the MRs in each,
perhaps interleaved by size as well), rebase them separately on master, and
call them "pending1" and "pending2". If there's only one MR pending, it
can't be split, so just report the test failure to that MR's owner.
5. If either "pending1" or "pending2" passes, it becomes "master". For
whichever of them fails, go back to step 4. If both pass (which suggests
the original failure came from an interaction between MRs in different
halves, or a flaky test), merge one into master arbitrarily and put the
other's MRs back in the pending queue.
6. Once we've merged all our MRs into master (and perhaps, through the
binary search above, found the broken MR), start this process again with
the current pending MRs.
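The queue-splitting loop in steps 1-6 could be sketched roughly as follows
(a minimal Python simulation, not real Marge-bot code; `run_ci` stands in
for a real CI run and rebasing is ignored, so this only illustrates the
batching and bisection logic):

```python
def process_queue(mrs, run_ci):
    """Test a batch of MRs together; on CI failure, binary-split the
    batch to isolate broken MRs. Returns (merged, rejected) lists.
    run_ci(batch) -> bool is a stand-in for a real CI run."""
    merged, rejected = [], []
    batches = [mrs] if mrs else []
    while batches:
        batch = batches.pop(0)
        if run_ci(batch):            # steps 3-4: test the whole batch
            merged.extend(batch)     # "pending" becomes "master"
        elif len(batch) == 1:
            rejected.extend(batch)   # single MR failed: report to owner
        else:
            mid = len(batch) // 2    # step 4: split into pending1/pending2
            batches.append(batch[:mid])
            batches.append(batch[mid:])
    return merged, rejected
```

On a green run the whole batch merges in one CI pass; on a red run, batches
are halved until the individual failing MRs are isolated.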

With this process we ensure master is never broken, and a broken MR among n
queued MRs can be isolated in roughly log(n) rounds of CI rather than n, so
the MR queue will not grow arbitrarily long even when MRs arrive faster
than individual CI runs complete.
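To put rough numbers on that claim, assuming exactly one broken MR and no
interaction failures: retesting one MR at a time costs up to n CI runs,
while repeated halving costs about 2*log2(n) runs, since two halves are
tested at each level. A back-of-the-envelope comparison:

```python
import math

def serial_runs(n):
    """Worst-case CI runs to find one broken MR testing one at a time."""
    return n

def bisect_runs(n):
    """Rough upper bound with binary splitting: two halves are tested
    at each of ~log2(n) levels of the search."""
    return 1 if n <= 1 else 2 * math.ceil(math.log2(n))

print(serial_runs(64), bisect_runs(64))  # 64 vs. 12
```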

The "Marge-bot" mentioned earlier almost does what I suggest, except that
on a failure it retests the MRs one by one instead of using the binary
split I suggest. Perhaps my proposal would be best implemented as a patch
to Marge-bot.

On Sat, Jan 19, 2019 at 2:42 AM Ben Gamari <ben at> wrote:

> Simon Peyton Jones via ghc-devs <ghc-devs at> writes:
> > |  Indeed this is a known issue that I have been working [1] with
> upstream
> > |  to resolve.
> >
> > Thanks. I'm not equipped to express a well-informed opinion about what
> > the best thing to do is. But in the meantime I WOULD be grateful for
> > explicit workflow advice. Specifically:
> >
> > * What steps should I take to get a patch committed to master,
> >   assuming I've done the review stuff and want to press "go"?
> >
> At the moment it's largely just a matter of when a bulk merge happens; I
> did a large merge on Wednesday and another yesterday.
> However, as Matthew suggested I think it may make sense to try using
> Marge bot to eliminate this manual process with little cost. It doesn't
> take particularly long to put together a bulk merge but it does require
> some form of human intervention which generally implies latency.
> Cheers,
> - Ben
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at
