From ben at smart-cactus.org Fri Nov 1 00:40:43 2019
From: ben at smart-cactus.org (Ben Gamari)
Date: Thu, 31 Oct 2019 20:40:43 -0400
Subject: DriverPipeline/HscMain DynFlags mystery
Message-ID: <5D6C68BC-1266-4CEF-B3B9-DD1867A3CF92@smart-cactus.org>

On October 31, 2019 2:45:09 PM EDT, "Ömer Sinan Ağacan" wrote:
>Hi,
>
>We recently did some refactoring in HscMain and DriverPipeline to generate interfaces after final Cmms are generated (previously interfaces would be generated after the tidying pass). It's mostly done but there's one thing that I couldn't figure out after two full days of debugging (I also asked a few people about it on IRC), so I wanted to ask here in case someone here knows.
>
>Previously the interface value (ModIface) would be generated and written to disk in `HscMain.finish`. The DynFlags we use to generate the ModIface and to write it to disk would be the one passed to `HscMain.hscIncrementalCompile`.
>
>In the new implementation part of the interface is still generated in `HscMain.hscIncrementalCompile` (see mkPartialIface), using the same DynFlags as before. But more stuff is added after the final Cmms are generated (see mkFullIface calls in DriverPipeline) using DynFlags in `compileOne'` or `runPhase` (called by `runPipeline`). It turns out these DynFlags are different enough from the one passed to `HscMain.hscIncrementalCompile` that some tests fail (I remember a backpack test, but there may be more).
>
>("Full" interfaces are written to disk right after generation.)
>
>See [1] for the hack I added as a workaround. Basically I keep the DynFlags passed to hscIncrementalCompile so that I can generate the final interfaces correctly.
>
>The question is what's changing in DynFlags that makes things go wrong. I tried looking at the fields used by mkFullIface and hscMaybeWriteIface, but as far as I can see none of the fields used by these functions are different from the DynFlags passed to hscIncrementalCompile.
>
>If anyone knows what's going on, any help would be appreciated.
>
>Thanks,
>
>Ömer
>
>[1]: https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/main/HscTypes.hs#L255-259

The question would be easier to answer if it included a bit more context:

- this is in reference to !1304, correct?
- specifically which tests fail and in which ways?
- what is the "more stuff" that you are adding?

In general when it comes to bugs like this I find it helps to reduce the size of the patch as much as possible. In your case, the CAF refactor is probably quite irrelevant to the issue you are seeing. I would try to extract the pipeline refactor that is triggering your bug into a separate MR which can be assessed independently from the CAF business.

This may take a few minutes but in my experience this sort of exercise is almost always worth the effort. Even if you don't find the bug while splitting up the patch it will be significantly easier for others to help with the result.
Cheers,

- Ben

From omeragacan at gmail.com Fri Nov 1 04:34:05 2019
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Fri, 1 Nov 2019 07:34:05 +0300
Subject: DriverPipeline/HscMain DynFlags mystery
In-Reply-To: <5D6C68BC-1266-4CEF-B3B9-DD1867A3CF92@smart-cactus.org>
References: <5D6C68BC-1266-4CEF-B3B9-DD1867A3CF92@smart-cactus.org>

> The question would be easier to answer if it included a bit more context:
>
> - this is in reference to !1304, correct?

This is done in two parts: !1633 and !1969. It's mainly for !1304 and #17004, but I'm talking about the code already merged here. No need to look at any MRs. (There will be one more MR on this topic, but it's not relevant for this problem.)

> - specifically which tests fail and in which ways?

I don't remember... There are two places in DriverPipeline that write interface files. Currently, because of this problem, I pass a DynFlags to those sites so that I can generate interfaces without breaking stuff, e.g.

    HscRecomp { hscs_guts = cgguts,
                hscs_summary = mod_summary,
                hscs_partial_iface = partial_iface,
                hscs_old_iface_hash = mb_old_iface_hash,
                hscs_iface_dflags = iface_dflags } -> do
        ...
        liftIO $ hscMaybeWriteIface iface_dflags final_iface
                   mb_old_iface_hash (ms_location mod_summary)
        ...

If, instead of using iface_dflags, I use the DynFlags in the current environment (in `HscEnv`), some tests fail. I'll try to give more details of which tests are failing.

> - what is the "more stuff" that you are adding?

Currently nothing, but we'll be adding CafInfos after !1304.

> In general when it comes to bugs like this I find it helps to reduce the size of the patch as much as possible. In your case, the CAF refactor is probably quite irrelevant to the issue you are seeing. I would try to extract the pipeline refactor that is triggering your bug into a separate MR which can be assessed independently from the CAF business.
>
> This may take a few minutes but in my experience this sort of exercise is almost always worth the effort. Even if you don't find the bug while splitting up the patch it will be significantly easier for others to help with the result.

This is the patch: https://gitlab.haskell.org/ghc/ghc/commit/bbdd54aab2f727bd90efe237eeb72e5e014b0cb2

It's not the smallest patch that demonstrates the problem, but hopefully it's small enough.

Ömer

Ben Gamari wrote on Fri, 1 Nov 2019 at 03:40:
>
> On October 31, 2019 2:45:09 PM EDT, "Ömer Sinan Ağacan" wrote:
> >Hi,
> >
> >We recently did some refactoring in HscMain and DriverPipeline to generate interfaces after final Cmms are generated (previously interfaces would be generated after the tidying pass). It's mostly done but there's one thing that I couldn't figure out after two full days of debugging (I also asked a few people about it on IRC), so I wanted to ask here in case someone here knows.
> >
> >Previously the interface value (ModIface) would be generated and written to disk in `HscMain.finish`. The DynFlags we use to generate the ModIface and to write it to disk would be the one passed to `HscMain.hscIncrementalCompile`.
> >
> >In the new implementation part of the interface is still generated in `HscMain.hscIncrementalCompile` (see mkPartialIface), using the same DynFlags as before. But more stuff is added after the final Cmms are generated (see mkFullIface calls in DriverPipeline) using DynFlags in `compileOne'` or `runPhase` (called by `runPipeline`).
> >It turns out these DynFlags are different enough from the one passed to `HscMain.hscIncrementalCompile` that some tests fail (I remember a backpack test, but there may be more).
> >
> >("Full" interfaces are written to disk right after generation.)
> >
> >See [1] for the hack I added as a workaround. Basically I keep the DynFlags passed to hscIncrementalCompile so that I can generate the final interfaces correctly.
> >
> >The question is what's changing in DynFlags that makes things go wrong. I tried looking at the fields used by mkFullIface and hscMaybeWriteIface, but as far as I can see none of the fields used by these functions are different from the DynFlags passed to hscIncrementalCompile.
> >
> >If anyone knows what's going on, any help would be appreciated.
> >
> >Thanks,
> >
> >Ömer
> >
> >[1]: https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/main/HscTypes.hs#L255-259
>
> The question would be easier to answer if it included a bit more context:
>
> - this is in reference to !1304, correct?
> - specifically which tests fail and in which ways?
> - what is the "more stuff" that you are adding?
>
> In general when it comes to bugs like this I find it helps to reduce the size of the patch as much as possible. In your case, the CAF refactor is probably quite irrelevant to the issue you are seeing. I would try to extract the pipeline refactor that is triggering your bug into a separate MR which can be assessed independently from the CAF business.
>
> This may take a few minutes but in my experience this sort of exercise is almost always worth the effort. Even if you don't find the bug while splitting up the patch it will be significantly easier for others to help with the result.
>
> Cheers,
>
> - Ben

From trupill at gmail.com Fri Nov 1 09:27:32 2019
From: trupill at gmail.com (Alejandro Serrano Mena)
Date: Fri, 1 Nov 2019 10:27:32 +0100
Subject: Working in my own branch with changes in submodules

Dear GHC devs,

I am currently working on my own fork of GHC (https://gitlab.haskell.org/trupill/ghc), and as part of it I need to make some changes to the Cabal and haskeline libraries. However, since they are in submodules, I am not sure about how I can commit those changes, share them with others, and rebase my changes against the current HEAD for these submodules.

Thanks in advance and kind regards,
Alejandro

From ben at smart-cactus.org Fri Nov 1 14:22:35 2019
From: ben at smart-cactus.org (Ben Gamari)
Date: Fri, 01 Nov 2019 10:22:35 -0400
Subject: Working in my own branch with changes in submodules
Message-ID: <87d0ebr80n.fsf@smart-cactus.org>

Alejandro Serrano Mena writes:

> Dear GHC devs,
> I am currently working on my own fork of GHC (https://gitlab.haskell.org/trupill/ghc), and as part of it I need to make some changes to the Cabal and haskeline libraries. However, since they are in submodules, I am not sure about how I can commit those changes, share them with others, and rebase my changes against the current HEAD for these submodules.
>
You can push wip/ branches to the GHC mirrors of these submodules where they can be picked up by CI.
Specifically,

    git at gitlab.haskell.org:ghc/packages/Cabal
    git at gitlab.haskell.org:ghc/packages/haskeline

Just make sure that your branch name begins with `wip/` and you should be able to push. Do let me know if there are further questions.

Cheers,

- Ben

From chrisdone at gmail.com Mon Nov 4 13:59:33 2019
From: chrisdone at gmail.com (Christopher Done)
Date: Mon, 4 Nov 2019 13:59:33 +0000
Subject: Compiling binaries of bytecode: ever been considered?

Hi all,

I was just wondering: has a compiler output mode ever been considered that would dump bytecode to a file, dynamically link to the GHC runtime, and then on start-up that program would just interpret the bytecode like ghci does?

The purpose would be simply faster compile-and-restart times.

Cheers,

Chris

From klebinger.andreas at gmx.at Mon Nov 4 14:02:20 2019
From: klebinger.andreas at gmx.at (Andreas Klebinger)
Date: Mon, 4 Nov 2019 15:02:20 +0100
Subject: Compiling binaries of bytecode: ever been considered?

I've heard the idea come up once or twice. But I'm not aware of any efforts going further than that.

Christopher Done wrote on 04.11.2019 at 14:59:
> Hi all,
>
> I was just wondering: has a compiler output mode ever been considered that would dump bytecode to a file, dynamically link to the GHC runtime, and then on start-up that program would just interpret the bytecode like ghci does?
>
> The purpose would be simply faster compile-and-restart times.
>
> Cheers,
>
> Chris

From ben at well-typed.com Mon Nov 4 14:55:03 2019
From: ben at well-typed.com (Ben Gamari)
Date: Mon, 04 Nov 2019 09:55:03 -0500
Subject: GitLab email
Message-ID: <874kzjpu80.fsf@smart-cactus.org>

Hi everyone,

On Friday I finished importing the prime.haskell.org Trac instance into GitLab. For this process I had to disable mail delivery to ensure that users weren't spammed by import messages. Unfortunately, it looks like I neglected to re-enable mail after the import had concluded.

I re-enabled mail about half an hour ago and consequently you may find yourself facing a few more messages than usual this morning. Sorry for the inconvenience!

Cheers,

- Ben

From allbery.b at gmail.com Mon Nov 4 15:16:36 2019
From: allbery.b at gmail.com (Brandon Allbery)
Date: Mon, 4 Nov 2019 10:16:36 -0500
Subject: Compiling binaries of bytecode: ever been considered?

Lots of people have had such ideas… until they looked at the BCO implementation. Consider yourself warned.

On Mon, Nov 4, 2019 at 9:02 AM Andreas Klebinger wrote:
> I've heard the idea come up once or twice. But I'm not aware of any efforts going further than that.
>
> Christopher Done wrote on 04.11.2019 at 14:59:
> > Hi all,
> >
> > I was just wondering: has a compiler output mode ever been considered that would dump bytecode to a file, dynamically link to the GHC runtime, and then on start-up that program would just interpret the bytecode like ghci does?
> >
> > The purpose would be simply faster compile-and-restart times.
> >
> > Cheers,
> >
> > Chris

-- 
brandon s allbery kf8nh
allbery.b at gmail.com

From csaba.hruska at gmail.com Tue Nov 5 12:23:46 2019
From: csaba.hruska at gmail.com (Csaba Hruska)
Date: Tue, 5 Nov 2019 13:23:46 +0100
Subject: Quick Q: do all FFI (non-primop) calls involve State# and RealWorld?

Hi,

I've also observed that in the final lowered STG the State# is always passed to effectful primops and FFI calls, but the returning new State# is removed from the result type. The State# has VoidRep representation in Cmm, so no register gets allocated for it, and eventually the State# function argument is compiled to nothing in the machine code. I.e. the compilation steps for the code above are:

    foreign import ccall "math.h sin" sin :: CDouble -> CDouble

1. Initial STG type is: CDouble -> State# RealWorld -> (# State# RealWorld, CDouble #)
2. Lowered STG type is: CDouble -> State# RealWorld -> (# CDouble #)
3. FFI C function should be: double sin(double x);

Regards,
Csaba

On Mon, Oct 28, 2019 at 10:59 AM Christopher Done wrote:
> Hi all,
>
> I tried compiling this file:
>
>     {-# LANGUAGE NoImplicitPrelude #-}
>     -- | Demonstrate various use of the FFI.
>     module Foreign where
>     import Foreign.C
>     foreign import ccall "math.h sin" sin :: CDouble -> CDouble
>     it :: CDouble
>     it = sin 1
>
> And I’ve noticed that the annotated type given for this foreign op in Core, is (# State# RealWorld, CDouble #), whereas I would have expected e.g. CDouble.
>
> Meanwhile, the foreign op call is passed a RealWorld argument.
>
> Additionally, code that consumes the result of this foreign call expects a (# CDouble #) as a return value.
>
> So there are some assumptions I put in my interpreter to test this FFI call out:
>
> 1. Despite claiming to return the real world in a tuple, it actually should just return an unboxed tuple of the value.
> 2. It should ignore the RealWorld argument entirely.
>
> I assume, if I were to lift this function into returning IO, that I should indeed return the RealWorld argument given. So the lesson is:
>
> All FFI functions accept a RealWorld, and may return a 2-tuple of State# RealWorld *if* it’s impure, else it’ll return a 1-tuple of the value. Correct?
>
> Can someone confirm that my observations are right? Also, if so, is there somewhere I can read more about this?
>
> Cheers
>
> Chris

From chrisdone at gmail.com Tue Nov 5 15:18:38 2019
From: chrisdone at gmail.com (Christopher Done)
Date: Tue, 5 Nov 2019 15:18:38 +0000
Subject: Quick Q: do all FFI (non-primop) calls involve State# and RealWorld?

Aha, thanks Csaba. So I’m not losing my marbles. The AST has a type signature of the “initial” but implements the “lowered”.
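Spelling those two shapes out as source Haskell may help — a minimal sketch, not from the original message, assuming the GHC.Exts exports and a hypothetical module name:

    {-# LANGUAGE MagicHash, UnboxedTuples #-}
    -- StateSketch.hs: the two worker shapes for a *pure* ccall like sin.
    module StateSketch where

    import GHC.Exts (State#, RealWorld)
    import Foreign.C.Types (CDouble)

    -- What the Core/STG AST annotates on the call (the "initial" type):
    type InitialWorker =
      CDouble -> State# RealWorld -> (# State# RealWorld, CDouble #)

    -- What the lowered STG actually implements: the result state is
    -- dropped, and the State# argument itself is VoidRep, so it erases
    -- to nothing in Cmm and in machine code.
    type LoweredWorker =
      CDouble -> State# RealWorld -> (# CDouble #)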
So with -ddump-stg we can observe it: The version claimed in the type signature (returning a tuple): Foreign.it :: Foreign.C.Types.CDouble [GblId] = [] \u [] case ds_r1HA of { GHC.Types.D# ds2_s1HW [Occ=Once] -> case __pkg_ccall_GC main [ds2_s1HW GHC.Prim.realWorld#] of { (#,#) _ [Occ=Dead] ds4_s1I0 [Occ=Once] -> GHC.Types.D# [ds4_s1I0]; }; }; The final “lowered” version: Foreign.it :: Foreign.C.Types.CDouble [GblId] = [] \u [] case ds_r1HA of { GHC.Types.D# ds2_s1HW [Occ=Once] -> case __pkg_ccall_GC main [ds2_s1HW GHC.Prim.realWorld#] of { Unit# ds4_s1I0 [Occ=Once] -> GHC.Types.D# [ds4_s1I0]; }; }; Cheers! Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Nov 8 10:25:19 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 8 Nov 2019 10:25:19 +0000 Subject: Proposed changes to merge request workflow In-Reply-To: References: <87blurje5l.fsf@smart-cactus.org> Message-ID: If the maintainers are not willing to either review or find reviewers for a new contributors patch then it doesn't seem to me that a project wants or values new contributors. A maintainer can make a value judgement about a patch that is isn't worth reviewing, but such situations are exceedingly rare. Everyone contributes patches in good faith in order to make the compiler better. Realistically it's impossible to be a good reviewer without having implemented patches on the code base. If you don't have a good handle for how things work then it's too big to get a feel for just by reading the code. You need to learn how things fit together by getting stuck writing patches. At least some of the maintainers are paid to maintain GHC and as such, should be expected to perform responsibilities that volunteers are not willing to perform. One of these tasks should be finding reviewers for all patches and making sure contributions do not languish indefinitely. Apart from this one point the suggested process sounds good but it seems to have stalled in the last month. Cheers, Matt On Wed, Oct 9, 2019 at 11:31 AM Simon Peyton Jones wrote: > > | > Make it clear that it is the contributor's responsibility to identify > | reviewers for their merge requests. > | > | Asking for reviews is one of the most frustrating parts of > | contributing patches, even if you know who to ask! So I think the > | maintainer's should be responsible for finding suitable and willing > | reviewers. > > It is true that it's hard to find reviewers. But if it's hard for the author it is also hard for the maintainers. A patch is a service that an author is offering, which is great. But every patch is owed, as a matter of right, suitable and willing reviewers, the patch is /also/ a blank cheque that any author can write, but it's up to someone else to pay. That's not good either. No author has an unlimited call on the time of other volunteers, and I don't think any author truly expects that. > > It's an informal gift economy. I review your patches (a) because I have learned that you have good judgement and write good code (b) because I want the bug that you are fixing to be fixed and (c) because you give me all sorts of helpful feedback about my patches, or otherwise contribute to the community in constructive ways. > > That may make it hard for /new/ authors to get started. Being an assiduous reviewer is an excellent plan, because it gets you into GHC's code base, guided by someone else's work; and it earns you all those good-contributor points. 
But even then it may be hard. So I think it's absolutely reasonable for authors to ask for help in finding reviewers. > > But simply saying that it's "the maintainers" responsibility to find reviewers goes much too far in the other direction, IMHO. > > Perhaps we should articulate some of this thinking. > > Simon > > | -----Original Message----- > | From: ghc-devs On Behalf Of Matthew > | Pickering > | Sent: 09 October 2019 11:18 > | To: Ben Gamari > | Cc: ghc-devs at haskell.org > | Subject: Re: Proposed changes to merge request workflow > | > | Sounds good in principal but I object to > | > | > Make it clear that it is the contributor's responsibility to identify > | reviewers for their merge requests. > | > | Asking for reviews is one of the most frustrating parts of > | contributing patches, even if you know who to ask! So I think the > | maintainer's should be responsible for finding suitable and willing > | reviewers. > | > | Cheers, > | > | Matt > | > | On Tue, Oct 8, 2019 at 7:17 PM Ben Gamari wrote: > | > > | > tl;dr. I would like feedback on a few proposed changes [1] to our merge > | > request workflow. > | > > | > > | > Hello everyone, > | > > | > Over the past six months I have been monitoring the operation of our > | > merge request workflow, which arose rather organically in the wake of > | > the initial move to GitLab. While it works reasonably well, there is > | > clearly room for improvement: > | > > | > * we have no formal way to track the status of in-flight merge > | > requests (e.g. for authors to mark an MR as ready for review or > | > reviewers to mark work as ready for merge) > | > > | > * merge requests still at times languish without review > | > > | > * the backport protocol is somewhat error prone and requires a great > | > deal of attention to ensure that patches don't slip through the > | > cracks > | > > | > * there is no technical mechanism to prevent that under-reviewed > | > patches from being merged (either intentionally or otherwise) to > | > `master` > | > > | > To address this I propose [1] a few changes to our workflow: > | > > | > 1. Define explicit phases of the merge request lifecycle, > | > systematically identified with labels. This will help to make it > | > clear who is responsible for a merge request at every stage of its > | > lifecycle. > | > > | > 2. Make it clear that it is the contributor's responsibility to > | > identify reviewers for their merge requests. > | > > | > 3. Institute a final pre-merge sanity check to ensure that > | > patches are adequately reviewed, documented, tested, and have had > | > their ticket and MR metadata updated. > | > > | > Note that this is merely a proposal; I am actively seeking input from > | > the developer community. Do let me know what you think. 
> | > > | > Cheers, > | > > | > - Ben > | > > | > > | > [1] > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.h > | askell.org%2Fghc%2Fghc%2Fwikis%2Fproposals%2Fmerge-request- > | workflow&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f > | 08d74ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6370621311033130 > | 52&sdata=SxBADAuF%2FvGzduaytetUzIxGr8lC%2BjTX2eCLNEoOCkQ%3D&reserv > | ed=0 > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 > | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103313052&a > | mp;sdata=T%2FyLoRH9BTIVPxMzF0%2BAa3c20qCBkhvQrp53FtROz40%3D&reserved=0 > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 > | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103323047&a > | mp;sdata=IwsIP3P6W5qtsLxfePbYOWTXdPLttNMLHWXkuTtVWgI%3D&reserved=0 From simonpj at microsoft.com Fri Nov 8 10:52:58 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Nov 2019 10:52:58 +0000 Subject: Proposed changes to merge request workflow In-Reply-To: References: <87blurje5l.fsf@smart-cactus.org> Message-ID: | If the maintainers are not willing to either review or find reviewers | for a new contributors patch | then it doesn't seem to me that a project wants or values new | contributors. Yes, that would be an unfortunate -- and indeed wrong -- impression to convey. Thanks for highlighting it. You'd like the maintainers to have an *obligation* to cause someone to produce a good review on every patch. Here's the worst-case scenario: a well-meaning but inexperienced person produces a stream of large, ill-thought-out, and mostly wrong patches. To give a guarantee of high quality reviews of those patches amounts to a blank cheque on the time of volunteers working mostly in their spare time. Now, of course, that's an extreme scenario. But that's why I'm keen to avoid making it an unconditional obligation that the few maintainers must discharge. I don’t think there is really a difference of opinion here. Of course we welcome patches; of course everyone will try to help find reviewers if they are lacking! So how about this - the author nominates reviewers - if he or she finds difficulty in doing so, or the reviewers s/he nominates are unresponsive, then he or she should ask for help - maintainers should make efforts to help In other words, as an author you remain in control. But help is available if you need it. What do others think? Simon | -----Original Message----- | From: Matthew Pickering | Sent: 08 November 2019 10:25 | To: Simon Peyton Jones | Cc: Ben Gamari ; ghc-devs at haskell.org | Subject: Re: Proposed changes to merge request workflow | | If the maintainers are not willing to either review or find reviewers | for a new contributors patch | then it doesn't seem to me that a project wants or values new | contributors. | | A maintainer can make a value judgement about a patch that is isn't | worth reviewing, but such | situations are exceedingly rare. 
Everyone contributes patches in good | faith in order to make the compiler better. | | Realistically it's impossible to be a good reviewer without having | implemented patches on the code base. If you don't | have a good handle for how things work then it's too big to get a feel | for just by reading the code. You need to learn how things | fit together by getting stuck writing patches. | | At least some of the maintainers are paid to maintain GHC and as such, | should be expected to perform responsibilities that | volunteers are not willing to perform. One of these tasks should be | finding reviewers for all patches and making sure contributions | do not languish indefinitely. | | Apart from this one point the suggested process sounds good but it | seems to have stalled in the last month. | | Cheers, | | Matt | | On Wed, Oct 9, 2019 at 11:31 AM Simon Peyton Jones | wrote: | > | > | > Make it clear that it is the contributor's responsibility to | identify | > | reviewers for their merge requests. | > | | > | Asking for reviews is one of the most frustrating parts of | > | contributing patches, even if you know who to ask! So I think the | > | maintainer's should be responsible for finding suitable and willing | > | reviewers. | > | > It is true that it's hard to find reviewers. But if it's hard for the | author it is also hard for the maintainers. A patch is a service that an | author is offering, which is great. But every patch is owed, as a matter | of right, suitable and willing reviewers, the patch is /also/ a blank | cheque that any author can write, but it's up to someone else to pay. | That's not good either. No author has an unlimited call on the time of | other volunteers, and I don't think any author truly expects that. | > | > It's an informal gift economy. I review your patches (a) because I have | learned that you have good judgement and write good code (b) because I | want the bug that you are fixing to be fixed and (c) because you give me | all sorts of helpful feedback about my patches, or otherwise contribute to | the community in constructive ways. | > | > That may make it hard for /new/ authors to get started. Being an | assiduous reviewer is an excellent plan, because it gets you into GHC's | code base, guided by someone else's work; and it earns you all those good- | contributor points. But even then it may be hard. So I think it's | absolutely reasonable for authors to ask for help in finding reviewers. | > | > But simply saying that it's "the maintainers" responsibility to find | reviewers goes much too far in the other direction, IMHO. | > | > Perhaps we should articulate some of this thinking. | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs On Behalf Of Matthew | > | Pickering | > | Sent: 09 October 2019 11:18 | > | To: Ben Gamari | > | Cc: ghc-devs at haskell.org | > | Subject: Re: Proposed changes to merge request workflow | > | | > | Sounds good in principal but I object to | > | | > | > Make it clear that it is the contributor's responsibility to | identify | > | reviewers for their merge requests. | > | | > | Asking for reviews is one of the most frustrating parts of | > | contributing patches, even if you know who to ask! So I think the | > | maintainer's should be responsible for finding suitable and willing | > | reviewers. | > | | > | Cheers, | > | | > | Matt | > | | > | On Tue, Oct 8, 2019 at 7:17 PM Ben Gamari wrote: | > | > | > | > tl;dr. I would like feedback on a few proposed changes [1] to our | merge | > | > request workflow. 
| > | > | > | > | > | > Hello everyone, | > | > | > | > Over the past six months I have been monitoring the operation of | our | > | > merge request workflow, which arose rather organically in the wake | of | > | > the initial move to GitLab. While it works reasonably well, there | is | > | > clearly room for improvement: | > | > | > | > * we have no formal way to track the status of in-flight merge | > | > requests (e.g. for authors to mark an MR as ready for review or | > | > reviewers to mark work as ready for merge) | > | > | > | > * merge requests still at times languish without review | > | > | > | > * the backport protocol is somewhat error prone and requires a | great | > | > deal of attention to ensure that patches don't slip through the | > | > cracks | > | > | > | > * there is no technical mechanism to prevent that under-reviewed | > | > patches from being merged (either intentionally or otherwise) | to | > | > `master` | > | > | > | > To address this I propose [1] a few changes to our workflow: | > | > | > | > 1. Define explicit phases of the merge request lifecycle, | > | > systematically identified with labels. This will help to make | it | > | > clear who is responsible for a merge request at every stage of | its | > | > lifecycle. | > | > | > | > 2. Make it clear that it is the contributor's responsibility to | > | > identify reviewers for their merge requests. | > | > | > | > 3. Institute a final pre-merge sanity check to ensure that | > | > patches are adequately reviewed, documented, tested, and have | had | > | > their ticket and MR metadata updated. | > | > | > | > Note that this is merely a proposal; I am actively seeking input | from | > | > the developer community. Do let me know what you think. | > | > | > | > Cheers, | > | > | > | > - Ben | > | > | > | > | > | > [1] | > | | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.h | > | askell.org%2Fghc%2Fghc%2Fwikis%2Fproposals%2Fmerge-request- | > | | workflow&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f | > | | 08d74ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6370621311033130 | > | | 52&sdata=SxBADAuF%2FvGzduaytetUzIxGr8lC%2BjTX2eCLNEoOCkQ%3D&reserv | > | ed=0 | > | > _______________________________________________ | > | > ghc-devs mailing list | > | > ghc-devs at haskell.org | > | > | > | | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | > | | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 | > | | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103313052&a | > | | mp;sdata=T%2FyLoRH9BTIVPxMzF0%2BAa3c20qCBkhvQrp53FtROz40%3D&reserved=0 | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | > | | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 | > | | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103323047&a | > | | mp;sdata=IwsIP3P6W5qtsLxfePbYOWTXdPLttNMLHWXkuTtVWgI%3D&reserved=0 From metaniklas at gmail.com Fri Nov 8 12:28:30 2019 From: metaniklas at gmail.com (Niklas Larsson) Date: Fri, 8 Nov 2019 13:28:30 +0100 Subject: Proposed changes to merge request workflow In-Reply-To: References: Message-ID: Hi! I have contributed a patch or two to GHC, so I guess I’m a reasonable example of an newbie. 
The step of nominating reviewers just wouldn’t work for me. I have no idea of who in this project would be willing and able to give a review. Or who the eligible reviewers are. Maybe I’d select someone who haven’t been active for years. If you do this, can you please add an alternative “I’m a clueless newbie, help me select reviewers” to that step? Regards, Niklas > 8 nov. 2019 kl. 11:53 skrev Simon Peyton Jones via ghc-devs : > > | If the maintainers are not willing to either review or find reviewers > | for a new contributors patch > | then it doesn't seem to me that a project wants or values new > | contributors. > > Yes, that would be an unfortunate -- and indeed wrong -- impression to convey. Thanks for highlighting it. > > You'd like the maintainers to have an *obligation* to cause someone to produce a good review on every patch. Here's the worst-case scenario: a well-meaning but inexperienced person produces a stream of large, ill-thought-out, and mostly wrong patches. To give a guarantee of high quality reviews of those patches amounts to a blank cheque on the time of volunteers working mostly in their spare time. > > Now, of course, that's an extreme scenario. But that's why I'm keen to avoid making it an unconditional obligation that the few maintainers must discharge. > > I don’t think there is really a difference of opinion here. Of course we welcome patches; of course everyone will try to help find reviewers if they are lacking! > > So how about this > - the author nominates reviewers > - if he or she finds difficulty in doing so, or the reviewers s/he > nominates are unresponsive, then he or she should ask for help > - maintainers should make efforts to help > > In other words, as an author you remain in control. But help is available if you need it. > > What do others think? > > Simon > > | -----Original Message----- > | From: Matthew Pickering > | Sent: 08 November 2019 10:25 > | To: Simon Peyton Jones > | Cc: Ben Gamari ; ghc-devs at haskell.org > | Subject: Re: Proposed changes to merge request workflow > | > | If the maintainers are not willing to either review or find reviewers > | for a new contributors patch > | then it doesn't seem to me that a project wants or values new > | contributors. > | > | A maintainer can make a value judgement about a patch that is isn't > | worth reviewing, but such > | situations are exceedingly rare. Everyone contributes patches in good > | faith in order to make the compiler better. > | > | Realistically it's impossible to be a good reviewer without having > | implemented patches on the code base. If you don't > | have a good handle for how things work then it's too big to get a feel > | for just by reading the code. You need to learn how things > | fit together by getting stuck writing patches. > | > | At least some of the maintainers are paid to maintain GHC and as such, > | should be expected to perform responsibilities that > | volunteers are not willing to perform. One of these tasks should be > | finding reviewers for all patches and making sure contributions > | do not languish indefinitely. > | > | Apart from this one point the suggested process sounds good but it > | seems to have stalled in the last month. > | > | Cheers, > | > | Matt > | > | On Wed, Oct 9, 2019 at 11:31 AM Simon Peyton Jones > | wrote: > | > > | > | > Make it clear that it is the contributor's responsibility to > | identify > | > | reviewers for their merge requests. 
> | > | > | > | Asking for reviews is one of the most frustrating parts of > | > | contributing patches, even if you know who to ask! So I think the > | > | maintainer's should be responsible for finding suitable and willing > | > | reviewers. > | > > | > It is true that it's hard to find reviewers. But if it's hard for the > | author it is also hard for the maintainers. A patch is a service that an > | author is offering, which is great. But every patch is owed, as a matter > | of right, suitable and willing reviewers, the patch is /also/ a blank > | cheque that any author can write, but it's up to someone else to pay. > | That's not good either. No author has an unlimited call on the time of > | other volunteers, and I don't think any author truly expects that. > | > > | > It's an informal gift economy. I review your patches (a) because I have > | learned that you have good judgement and write good code (b) because I > | want the bug that you are fixing to be fixed and (c) because you give me > | all sorts of helpful feedback about my patches, or otherwise contribute to > | the community in constructive ways. > | > > | > That may make it hard for /new/ authors to get started. Being an > | assiduous reviewer is an excellent plan, because it gets you into GHC's > | code base, guided by someone else's work; and it earns you all those good- > | contributor points. But even then it may be hard. So I think it's > | absolutely reasonable for authors to ask for help in finding reviewers. > | > > | > But simply saying that it's "the maintainers" responsibility to find > | reviewers goes much too far in the other direction, IMHO. > | > > | > Perhaps we should articulate some of this thinking. > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs On Behalf Of Matthew > | > | Pickering > | > | Sent: 09 October 2019 11:18 > | > | To: Ben Gamari > | > | Cc: ghc-devs at haskell.org > | > | Subject: Re: Proposed changes to merge request workflow > | > | > | > | Sounds good in principal but I object to > | > | > | > | > Make it clear that it is the contributor's responsibility to > | identify > | > | reviewers for their merge requests. > | > | > | > | Asking for reviews is one of the most frustrating parts of > | > | contributing patches, even if you know who to ask! So I think the > | > | maintainer's should be responsible for finding suitable and willing > | > | reviewers. > | > | > | > | Cheers, > | > | > | > | Matt > | > | > | > | On Tue, Oct 8, 2019 at 7:17 PM Ben Gamari wrote: > | > | > > | > | > tl;dr. I would like feedback on a few proposed changes [1] to our > | merge > | > | > request workflow. > | > | > > | > | > > | > | > Hello everyone, > | > | > > | > | > Over the past six months I have been monitoring the operation of > | our > | > | > merge request workflow, which arose rather organically in the wake > | of > | > | > the initial move to GitLab. While it works reasonably well, there > | is > | > | > clearly room for improvement: > | > | > > | > | > * we have no formal way to track the status of in-flight merge > | > | > requests (e.g. 
for authors to mark an MR as ready for review or > | > | > reviewers to mark work as ready for merge) > | > | > > | > | > * merge requests still at times languish without review > | > | > > | > | > * the backport protocol is somewhat error prone and requires a > | great > | > | > deal of attention to ensure that patches don't slip through the > | > | > cracks > | > | > > | > | > * there is no technical mechanism to prevent that under-reviewed > | > | > patches from being merged (either intentionally or otherwise) > | to > | > | > `master` > | > | > > | > | > To address this I propose [1] a few changes to our workflow: > | > | > > | > | > 1. Define explicit phases of the merge request lifecycle, > | > | > systematically identified with labels. This will help to make > | it > | > | > clear who is responsible for a merge request at every stage of > | its > | > | > lifecycle. > | > | > > | > | > 2. Make it clear that it is the contributor's responsibility to > | > | > identify reviewers for their merge requests. > | > | > > | > | > 3. Institute a final pre-merge sanity check to ensure that > | > | > patches are adequately reviewed, documented, tested, and have > | had > | > | > their ticket and MR metadata updated. > | > | > > | > | > Note that this is merely a proposal; I am actively seeking input > | from > | > | > the developer community. Do let me know what you think. > | > | > > | > | > Cheers, > | > | > > | > | > - Ben > | > | > > | > | > > | > | > [1] > | > | > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.h > | > | askell.org%2Fghc%2Fghc%2Fwikis%2Fproposals%2Fmerge-request- > | > | > | workflow&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f > | > | > | 08d74ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6370621311033130 > | > | > | 52&sdata=SxBADAuF%2FvGzduaytetUzIxGr8lC%2BjTX2eCLNEoOCkQ%3D&reserv > | > | ed=0 > | > | > _______________________________________________ > | > | > ghc-devs mailing list > | > | > ghc-devs at haskell.org > | > | > > | > | > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | > | > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 > | > | > | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103313052&a > | > | > | mp;sdata=T%2FyLoRH9BTIVPxMzF0%2BAa3c20qCBkhvQrp53FtROz40%3D&reserved=0 > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | > | > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd1199fd308b442cf744f08d7 > | > | > | 4ca2074b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637062131103323047&a > | > | > | mp;sdata=IwsIP3P6W5qtsLxfePbYOWTXdPLttNMLHWXkuTtVWgI%3D&reserved=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Fri Nov 8 17:05:37 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 08 Nov 2019 12:05:37 -0500 Subject: Proposed changes to merge request workflow In-Reply-To: References: <87blurje5l.fsf@smart-cactus.org> Message-ID: <878soqnvs3.fsf@smart-cactus.org> Matthew Pickering writes: > If the maintainers are not willing to either review or find reviewers > for a new contributors patch then it doesn't seem to me that a project 
> wants or values new contributors. > For what it's worth, I am happy to try to find reviewers for a newcomer's patch. However, on the whole it is better for everyone involved if the contributor does it: * the contributor is more involved in the process and, consequently, more invested * the process moves more quickly since the contributor doesn't need to wait for someone else to find reviewers for their work * me and the rest of us at Well-Typed are less of a bottleneck and therefore have more time for improving GHC Of course, even with this policy, if I see a patch languishing then I will try to handle it. In my view all we are doing here is setting the preferred default; . > A maintainer can make a value judgement about a patch that is isn't > worth reviewing, but such > situations are exceedingly rare. Everyone contributes patches in good > faith in order to make the compiler better. > > Realistically it's impossible to be a good reviewer without having > implemented patches on the code base. If you don't > have a good handle for how things work then it's too big to get a feel > for just by reading the code. You need to learn how things > fit together by getting stuck writing patches. > > At least some of the maintainers are paid to maintain GHC and as such, > should be expected to perform responsibilities that > volunteers are not willing to perform. One of these tasks should be > finding reviewers for all patches and making sure contributions > do not languish indefinitely. > > Apart from this one point the suggested process sounds good but it > seems to have stalled in the last month. > Indeed I've been stuck in an endless cycle of pre-release tasks. Hopefully this will end today. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Fri Nov 8 18:30:01 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 08 Nov 2019 13:30:01 -0500 Subject: Proposed changes to merge request workflow In-Reply-To: References: <87blurje5l.fsf@smart-cactus.org> Message-ID: <875zjunrvf.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > | If the maintainers are not willing to either review or find reviewers > | for a new contributors patch > | then it doesn't seem to me that a project wants or values new > | contributors. > > Yes, that would be an unfortunate -- and indeed wrong -- impression to convey. Thanks for highlighting it. > > You'd like the maintainers to have an *obligation* to cause someone to produce a good review on every patch. Here's the worst-case scenario: a well-meaning but inexperienced person produces a stream of large, ill-thought-out, and mostly wrong patches. To give a guarantee of high quality reviews of those patches amounts to a blank cheque on the time of volunteers working mostly in their spare time. > > Now, of course, that's an extreme scenario. But that's why I'm keen to avoid making it an unconditional obligation that the few maintainers must discharge. > > I don’t think there is really a difference of opinion here. Of course we welcome patches; of course everyone will try to help find reviewers if they are lacking! 
> > So how about this > - the author nominates reviewers > - if he or she finds difficulty in doing so, or the reviewers s/he > nominates are unresponsive, then he or she should ask for help > - maintainers should make efforts to help > In my mind there has always been a (perhaps too implicit) promise that maintainers are always present in the background and happy to help in finding reviewers if asked (and perhaps even if not, if it seems a contributor is lost). Perhaps we should make this more explicit? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Fri Nov 8 19:36:21 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 8 Nov 2019 19:36:21 +0000 Subject: Proposed changes to merge request workflow In-Reply-To: <875zjunrvf.fsf@smart-cactus.org> References: <87blurje5l.fsf@smart-cactus.org> <875zjunrvf.fsf@smart-cactus.org> Message-ID: <60FEED71-2079-4E2B-B8CF-3EC3B9F25BBC@richarde.dev> I wonder if it would alleviate the concerns to have a ghc-maintainers mailing list. This is distinct from ghc-devs, in that the maintainers have GHC as their day job. It would explicitly invite email from folks struggling to figure out how to contribute. I don't mean to create more mail for Ben et al, but having an explicit "seek help here" direction is nice. And (at least for me) mailing a list for help feels more comfortable than emailing an individual. Richard > On Nov 8, 2019, at 6:30 PM, Ben Gamari wrote: > > Simon Peyton Jones via ghc-devs > writes: > >> | If the maintainers are not willing to either review or find reviewers >> | for a new contributors patch >> | then it doesn't seem to me that a project wants or values new >> | contributors. >> >> Yes, that would be an unfortunate -- and indeed wrong -- impression to convey. Thanks for highlighting it. >> >> You'd like the maintainers to have an *obligation* to cause someone to produce a good review on every patch. Here's the worst-case scenario: a well-meaning but inexperienced person produces a stream of large, ill-thought-out, and mostly wrong patches. To give a guarantee of high quality reviews of those patches amounts to a blank cheque on the time of volunteers working mostly in their spare time. >> >> Now, of course, that's an extreme scenario. But that's why I'm keen to avoid making it an unconditional obligation that the few maintainers must discharge. >> >> I don’t think there is really a difference of opinion here. Of course we welcome patches; of course everyone will try to help find reviewers if they are lacking! >> >> So how about this >> - the author nominates reviewers >> - if he or she finds difficulty in doing so, or the reviewers s/he >> nominates are unresponsive, then he or she should ask for help >> - maintainers should make efforts to help >> > In my mind there has always been a (perhaps too implicit) promise that > maintainers are always present in the background and happy to help in > finding reviewers if asked (and perhaps even if not, if it seems a > contributor is lost). > > Perhaps we should make this more explicit? > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alan.zimm at gmail.com Fri Nov 8 20:47:47 2019 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 8 Nov 2019 20:47:47 +0000 Subject: Proposed changes to merge request workflow In-Reply-To: <60FEED71-2079-4E2B-B8CF-3EC3B9F25BBC@richarde.dev> References: <87blurje5l.fsf@smart-cactus.org> <875zjunrvf.fsf@smart-cactus.org> <60FEED71-2079-4E2B-B8CF-3EC3B9F25BBC@richarde.dev> Message-ID: What about some sort of script that detects MR older than x time without a reviewer, and asks a group of people to take a look. On Fri, 8 Nov 2019 at 19:36, Richard Eisenberg wrote: > I wonder if it would alleviate the concerns to have a ghc-maintainers > mailing list. This is distinct from ghc-devs, in that the maintainers have > GHC as their day job. It would explicitly invite email from folks > struggling to figure out how to contribute. I don't mean to create more > mail for Ben et al, but having an explicit "seek help here" direction is > nice. And (at least for me) mailing a list for help feels more comfortable > than emailing an individual. > > Richard > > On Nov 8, 2019, at 6:30 PM, Ben Gamari wrote: > > Simon Peyton Jones via ghc-devs writes: > > | If the maintainers are not willing to either review or find reviewers > | for a new contributors patch > | then it doesn't seem to me that a project wants or values new > | contributors. > > Yes, that would be an unfortunate -- and indeed wrong -- impression to > convey. Thanks for highlighting it. > > You'd like the maintainers to have an *obligation* to cause someone to > produce a good review on every patch. Here's the worst-case scenario: a > well-meaning but inexperienced person produces a stream of large, > ill-thought-out, and mostly wrong patches. To give a guarantee of high > quality reviews of those patches amounts to a blank cheque on the time of > volunteers working mostly in their spare time. > > Now, of course, that's an extreme scenario. But that's why I'm keen to > avoid making it an unconditional obligation that the few maintainers must > discharge. > > I don’t think there is really a difference of opinion here. Of course we > welcome patches; of course everyone will try to help find reviewers if they > are lacking! > > So how about this > - the author nominates reviewers > - if he or she finds difficulty in doing so, or the reviewers s/he > nominates are unresponsive, then he or she should ask for help > - maintainers should make efforts to help > > In my mind there has always been a (perhaps too implicit) promise that > maintainers are always present in the background and happy to help in > finding reviewers if asked (and perhaps even if not, if it seems a > contributor is lost). > > Perhaps we should make this more explicit? > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Sat Nov 9 08:13:01 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sat, 9 Nov 2019 11:13:01 +0300 Subject: Implementing code ownership? Message-ID: Hi, In the thread "Proposed changes to merge request workflow" I agree with Matt that expecting a contributor to find reviewers will not work. 
I'm wondering whether, if we implemented code ownership better, it would somewhat help with the problem. If we had at least one responsive owner for every file then a reviewer would be automatically assigned to every MR, solving the problem.

You may say that we simply don't have enough people to assign one responsive person to every file, but that's a problem that a new contributor won't be able to solve anyway, so this is not a disadvantage over the proposed plan.

Ömer

From chrisdone at gmail.com Sat Nov 9 16:10:10 2019
From: chrisdone at gmail.com (Christopher Done)
Date: Sat, 9 Nov 2019 16:10:10 +0000
Subject: Quick Q: do all FFI (non-primop) calls involve State# and RealWorld?

For anyone interested, here's a complete list of all foreign imports at the STG level from base and integer-simple:

https://gist.github.com/chrisdone/24b476862b678a3665fbf9b833a9905f

They all have type (# State# RealWorld #) or (# State# RealWorld, #).

On Tue, 5 Nov 2019 at 15:18, Christopher Done wrote:
> Aha, thanks Csaba. So I’m not losing my marbles. The AST has a type signature of the “initial” but implements the “lowered”. So with -ddump-stg we can observe it:
>
> The version claimed in the type signature (returning a tuple):
>
>     Foreign.it :: Foreign.C.Types.CDouble
>     [GblId] =
>         [] \u []
>             case ds_r1HA of {
>               GHC.Types.D# ds2_s1HW [Occ=Once] ->
>                   case __pkg_ccall_GC main [ds2_s1HW GHC.Prim.realWorld#] of {
>                     (#,#) _ [Occ=Dead] ds4_s1I0 [Occ=Once] -> GHC.Types.D# [ds4_s1I0];
>                   };
>             };
>
> The final “lowered” version:
>
>     Foreign.it :: Foreign.C.Types.CDouble
>     [GblId] =
>         [] \u []
>             case ds_r1HA of {
>               GHC.Types.D# ds2_s1HW [Occ=Once] ->
>                   case __pkg_ccall_GC main [ds2_s1HW GHC.Prim.realWorld#] of {
>                     Unit# ds4_s1I0 [Occ=Once] -> GHC.Types.D# [ds4_s1I0];
>                   };
>             };
>
> Cheers!
>
> Chris

From simonpj at microsoft.com Fri Nov 15 13:02:05 2019
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 15 Nov 2019 13:02:05 +0000
Subject: gc.log file

Devs,

I'm getting mysterious messages from git, below. I have no file gc.log in my tree. Also I have run 'git prune' but the message still occurs. Any ideas?

Simon

    simonpj at MSRC-3645512:~/code/HEAD$ git push --force
    setsockopt IPV6_TCLASS 8: Operation not permitted:
    Counting objects: 34, done.
    Delta compression using up to 20 threads.
    Compressing objects: 100% (14/14), done.
    Writing objects: 100% (34/34), 8.83 KiB | 2.21 MiB/s, done.
    Total 34 (delta 32), reused 21 (delta 20)
    remote:
    remote: View merge request for wip/T16296:
    remote: https://gitlab.haskell.org/ghc/ghc/merge_requests/2161
    remote:
    remote: warning: The last gc run reported the following. Please correct the root cause
    remote: and remove gc.log.
    remote: Automatic cleanup will not be performed until the file is removed.
    remote:
    remote: warning: There are too many unreachable loose objects; run 'git prune' to remove them.
    remote:

From sylvain at haskus.fr Fri Nov 15 16:04:25 2019
From: sylvain at haskus.fr (Sylvain Henry)
Date: Fri, 15 Nov 2019 17:04:25 +0100
Subject: Question about negative Integers
Message-ID: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>

Hi GHC devs,

As some of you may know, I am working on fixing several longstanding issues with GHC's big numbers implementation (Integer, Natural).
You can read more about it here:
https://gitlab.haskell.org/hsyl20/ghc/raw/hsyl20-integer/libraries/ghc-bignum/docs/ghc-bignum.rst

To summarize, we would have a single `ghc-bignum` package with different backends (GMP, pure Haskell, etc.). The backend is chosen with a Cabal flag and new backends are way easier to add. All the backends use the same representation, which allows the Integer and Natural types and datacons to be wired in, which has a lot of nice consequences (removing some dependency hacks in the base package, making GHC agnostic of the backend used, etc.).

A major roadblock in previous attempts was that integer-simple doesn't use the same representation for numbers as integer-gmp. But I have written a new pure Haskell implementation which happens to be faster than integer-simple (see perf results in the document linked above) and that uses the common representation (similar to what was used in integer-gmp).

I am very close to submitting a merge request, but there is a remaining question about the Bits instance for negative Integer numbers:

We don't store big negative Integers using two's complement encoding; instead we use a signed-magnitude representation (i.e. we use constructors to distinguish between (big) positive and negative numbers). This is already true today in integer-simple and integer-gmp. However, integer-gmp and integer-simple fake two's complement encoding for Bits operations. As a consequence, every Bits operation on negative Integers does *a lot* of stuff. E.g. testing a single bit with `testBit` is linear in the size of the number, a logical `and` between two numbers involves additions and subtractions, etc.

Question is: do we need/want to keep this behavior? There is nothing in the report that says that Integer's Bits instance has to mimic two's complement encoding. What's the point of slowly accessing a fake representation instead of the actual one? Could we deprecate this? The instance isn't even coherent: popCount returns the negated number of 1s in the absolute value, as it can't return an infinite value.

Thanks,
Sylvain

From simonpj at microsoft.com  Fri Nov 15 16:56:53 2019
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 15 Nov 2019 16:56:53 +0000
Subject: Question about negative Integers
In-Reply-To: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
References: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
Message-ID: 

I'm not *at all* close to this, but what you say sounds sensible.

What is the user-facing change you propose? Something about the behaviour of (Bits Integer)?

If so, fly it past the Core Libraries Committee, but in concrete form: I propose to change X to Y.

Simon

| -----Original Message-----
| From: ghc-devs On Behalf Of Sylvain Henry
| Sent: 15 November 2019 16:04
| To: ghc-devs
| Subject: Question about negative Integers
|
| Hi GHC devs,
|
| As some of you may know, I am working on fixing several longstanding issues with GHC's big numbers implementation (Integer, Natural). You can read more about it here:
| https://gitlab.haskell.org/hsyl20/ghc/raw/hsyl20-integer/libraries/ghc-bignum/docs/ghc-bignum.rst
|
| To summarize, we would have a single `ghc-bignum` package with different backends (GMP, pure Haskell, etc.). The backend is chosen with a Cabal flag and new backends are way easier to add.
| All the backends use the same representation, which allows the Integer and Natural types and datacons to be wired in, which has a lot of nice consequences (removing some dependency hacks in the base package, making GHC agnostic of the backend used, etc.).
|
| A major roadblock in previous attempts was that integer-simple doesn't use the same representation for numbers as integer-gmp. But I have written a new pure Haskell implementation which happens to be faster than integer-simple (see perf results in the document linked above) and that uses the common representation (similar to what was used in integer-gmp).
|
| I am very close to submitting a merge request, but there is a remaining question about the Bits instance for negative Integer numbers:
|
| We don't store big negative Integers using two's complement encoding; instead we use a signed-magnitude representation (i.e. we use constructors to distinguish between (big) positive and negative numbers). This is already true today in integer-simple and integer-gmp. However, integer-gmp and integer-simple fake two's complement encoding for Bits operations. As a consequence, every Bits operation on negative Integers does *a lot* of stuff. E.g. testing a single bit with `testBit` is linear in the size of the number, a logical `and` between two numbers involves additions and subtractions, etc.
|
| Question is: do we need/want to keep this behavior? There is nothing in the report that says that Integer's Bits instance has to mimic two's complement encoding. What's the point of slowly accessing a fake representation instead of the actual one? Could we deprecate this? The instance isn't even coherent: popCount returns the negated number of 1s in the absolute value, as it can't return an infinite value.
|
| Thanks,
| Sylvain
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From hvriedel at gmail.com  Fri Nov 15 17:19:38 2019
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Fri, 15 Nov 2019 18:19:38 +0100
Subject: Question about negative Integers
In-Reply-To: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
References: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
Message-ID: 

On Fri, Nov 15, 2019 at 5:04 PM Sylvain Henry wrote:

> Question is: do we need/want to keep this behavior?

Yes ;-)

I chose it quite intentionally after benchmarking and carefully examining various approaches, and with the intent to use a common-denominator representation which would be easy to supplement with alternative bignum implementations.

Since you didn't seem to reference it, I wonder if you even saw the page where some of my design rationale was written down, which also hints at how I intended to make the backend selectable via a link-time flag (similar to how you'd select the RTS via -threaded or -prof, you'd also be able to select the integer backend at link time w/o the need to recompile anything). See https://gitlab.haskell.org/ghc/ghc/wikis/design/integer-gmp2

However, some time ago I did discuss picking up that plan again with Ben, and he pointed out that it would make a lot more sense to leverage Backpack for this, as it seems to be a much more elegant solution to this problem than the simple but platform-specific link-time approach I was aiming for.
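To sketch the idea (a rough sketch only; the module and function names below are illustrative, not an actual proposed API): each backend would be a package that fills in a common abstract Backpack signature, along the lines of

```
-- BigNum.hsig: the abstract interface every bignum backend has to provide
signature BigNum where

-- Each backend supplies its own concrete representation.
data BigNat

bigNatAdd     :: BigNat -> BigNat -> BigNat
bigNatMul     :: BigNat -> BigNat -> BigNat
bigNatCompare :: BigNat -> BigNat -> Ordering
```

Code compiled against the signature would then be instantiated with the GMP or pure-Haskell implementation via Cabal's `signatures`/`mixins` machinery at package-instantiation time, rather than at link time.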
Ben put it quite bluntly: if Backpack can't be used for this thing it was basically designed for, we should consider ripping it out again, as it would have effectively failed its promise. And I do agree!

Back when I originally redesigned and rewrote integer-gmp from scratch, Backpack wasn't available yet. But now we have it, and a Backpack-based solution would IMO indeed be the proper solution at this point to the problem of abstracting over integer backends as well as representations -- it could even be combined with my original plan for C-FFI-based platforms (but that's mostly an optimization at that point, for the special case where the backends share said common representation at the ABI level).

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ekmett at gmail.com  Sat Nov 16 02:46:37 2019
From: ekmett at gmail.com (Edward Kmett)
Date: Sat, 16 Nov 2019 08:16:37 +0530
Subject: Question about negative Integers
In-Reply-To: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
References: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
Message-ID: 

> Question is: do we need/want to keep this behavior?

I think we really do want to keep this behavior. And not just because I, for one, have a decent cross-section of code that would just become horribly broken (and would have to find some way to jerry-rig the existing behavior anyway) if we randomly changed it. The current underlying representation, if exposed more directly, would be quite surprising to users and doesn't at all fit the mental model of what an Int-like thing is.

Other examples: Conor McBride's work on co-de-Bruijn syntax exploits the current Bits instance heavily (and could be streamlined even further by making more use of it than he currently does).

-Edward

On Fri, Nov 15, 2019 at 9:34 PM Sylvain Henry wrote:

> Hi GHC devs,
>
> As some of you may know, I am working on fixing several longstanding issues with GHC's big numbers implementation (Integer, Natural). You can read more about it here:
> https://gitlab.haskell.org/hsyl20/ghc/raw/hsyl20-integer/libraries/ghc-bignum/docs/ghc-bignum.rst
>
> To summarize, we would have a single `ghc-bignum` package with different backends (GMP, pure Haskell, etc.). The backend is chosen with a Cabal flag and new backends are way easier to add. All the backends use the same representation, which allows the Integer and Natural types and datacons to be wired in, which has a lot of nice consequences (removing some dependency hacks in the base package, making GHC agnostic of the backend used, etc.).
>
> A major roadblock in previous attempts was that integer-simple doesn't use the same representation for numbers as integer-gmp. But I have written a new pure Haskell implementation which happens to be faster than integer-simple (see perf results in the document linked above) and that uses the common representation (similar to what was used in integer-gmp).
>
> I am very close to submitting a merge request, but there is a remaining question about the Bits instance for negative Integer numbers:
>
> We don't store big negative Integers using two's complement encoding; instead we use a signed-magnitude representation (i.e. we use constructors to distinguish between (big) positive and negative numbers). This is already true today in integer-simple and integer-gmp. However, integer-gmp and integer-simple fake two's complement encoding for Bits operations. As a consequence, every Bits operation on negative Integers does *a lot* of stuff. E.g.
> testing a single bit with `testBit` is linear in the size of the number, a logical `and` between two numbers involves additions and subtractions, etc.
>
> Question is: do we need/want to keep this behavior? There is nothing in the report that says that Integer's Bits instance has to mimic two's complement encoding. What's the point of slowly accessing a fake representation instead of the actual one? Could we deprecate this? The instance isn't even coherent: popCount returns the negated number of 1s in the absolute value, as it can't return an infinite value.
>
> Thanks,
> Sylvain
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail at joachim-breitner.de  Sat Nov 16 11:04:02 2019
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Sat, 16 Nov 2019 12:04:02 +0100
Subject: Question about negative Integers
In-Reply-To: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
References: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr>
Message-ID: <58612ae1eb65b0f6616d2e266f1fa43e4bab65b5.camel@joachim-breitner.de>

Hi,

Am Freitag, den 15.11.2019, 17:04 +0100 schrieb Sylvain Henry:
> However integer-gmp and integer-simple fake two's complement encoding for Bits operations.

Just a small factoid: the Coq standard library provides the same semantics. I'd lean towards leaving it as it is. If someone needs the "other" semantics, they can easily throw in a (very efficient) `abs` in the right places.

Cheers,
Joachim

--
Joachim Breitner
  mail at joachim-breitner.de
  http://www.joachim-breitner.de/

From sylvain at haskus.fr  Sat Nov 16 12:30:44 2019
From: sylvain at haskus.fr (Sylvain Henry)
Date: Sat, 16 Nov 2019 13:30:44 +0100
Subject: Question about negative Integers
In-Reply-To: <58612ae1eb65b0f6616d2e266f1fa43e4bab65b5.camel@joachim-breitner.de>
References: <83bcbb0e-84d0-fd4a-2e36-04186b827b22@haskus.fr> <58612ae1eb65b0f6616d2e266f1fa43e4bab65b5.camel@joachim-breitner.de>
Message-ID: 

Alright. Thanks everyone for the convincing answers.

I will keep the current behavior and I will document that operations may be slower than one might expect.

Cheers,
Sylvain

On 16/11/2019 12:04, Joachim Breitner wrote:
> Hi,
>
> Am Freitag, den 15.11.2019, 17:04 +0100 schrieb Sylvain Henry:
>> However integer-gmp and integer-simple fake two's complement encoding for Bits operations.
>
> Just a small factoid: the Coq standard library provides the same semantics. I'd lean towards leaving it as it is. If someone needs the "other" semantics, they can easily throw in a (very efficient) `abs` in the right places.
>
> Cheers,
> Joachim

From omeragacan at gmail.com  Sun Nov 17 08:22:59 2019
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Sun, 17 Nov 2019 11:22:59 +0300
Subject: Problem with compiler perf tests
Message-ID: 

Hi,

Currently we have a bunch of tests in testsuite/tests/perf/compiler for keeping compile time allocations, max residency etc. in the expected ranges, to avoid introducing accidental compile time performance regressions.

This has a problem: we expect every MR to keep the compile time stats in the specified ranges, but sometimes a patch fixes an issue, or does something right (removes hacks/refactors bad code etc.) but also increases the numbers, because sometimes doing it right means doing more work or keeping more things in memory (e.g.
!1747, !2100 which is required by !1304).

We then spend hours/days trying to shave a few bytes off in those patches, because the previous hacky/buggy code set the standards. It doesn't make sense to compare bad/buggy code with good code and expect them to do the same thing.

The second problem is that it forces the developer to focus on a tiny part of the compiler to reduce the numbers back to where they were. If they looked at the big picture instead, it might be possible to see room for improvement in other places that could possibly lead to much more efficient use of developer time.

I think what we should do instead is that once it's clear that the patch did not introduce *accidental* increases in numbers (e.g. in !2100 I checked and explained the increase in average residency, and showed that the increase makes sense and is not a leak) and it's the right thing to do, we should merge it, and track the performance issues in another issue. The CI should still run perf tests, but those should be allowed to fail.

Any opinions on this?

Ömer

From ben at well-typed.com  Sun Nov 17 11:27:38 2019
From: ben at well-typed.com (Ben Gamari)
Date: Sun, 17 Nov 2019 06:27:38 -0500
Subject: Problem with compiler perf tests
In-Reply-To: 
References: 
Message-ID: <7D9335D9-5C37-4DC1-AF30-C34558DDF01E@well-typed.com>

On November 17, 2019 3:22:59 AM EST, "Ömer Sinan Ağacan" wrote:

> Hi,
>
> Currently we have a bunch of tests in testsuite/tests/perf/compiler for keeping compile time allocations, max residency etc. in the expected ranges, to avoid introducing accidental compile time performance regressions.
>
> This has a problem: we expect every MR to keep the compile time stats in the specified ranges, but sometimes a patch fixes an issue, or does something right (removes hacks/refactors bad code etc.) but also increases the numbers, because sometimes doing it right means doing more work or keeping more things in memory (e.g. !1747, !2100 which is required by !1304).
>
> We then spend hours/days trying to shave a few bytes off in those patches, because the previous hacky/buggy code set the standards. It doesn't make sense to compare bad/buggy code with good code and expect them to do the same thing.
>
> The second problem is that it forces the developer to focus on a tiny part of the compiler to reduce the numbers back to where they were. If they looked at the big picture instead, it might be possible to see room for improvement in other places that could possibly lead to much more efficient use of developer time.
>
> I think what we should do instead is that once it's clear that the patch did not introduce *accidental* increases in numbers (e.g. in !2100 I checked and explained the increase in average residency, and showed that the increase makes sense and is not a leak) and it's the right thing to do, we should merge it, and track the performance issues in another issue. The CI should still run perf tests, but those should be allowed to fail.
>
This is of course a tradeoff. At some point we must conclude that the marginal benefit of investigating any potential regressions is outweighed by that of using that effort elsewhere in the compiler. This is inevitably a judgement call. In the specific case of !2100 I think we probably have crossed this threshold. The overall ~0.1% compile time regression that you report seems reasonable and I doubt that further work on this particular patch will eliminate this. However, it also seems that in this particular case there are outstanding design questions which have yet to be addressed (specifically the exchange between you and Simon regarding PartialModIface which has more to do with implementation clarity than performance). I agree with Simon that we should avoid committing this patch in two pieces if unless there is a good reason. Perhaps you have such a reason? Cheers, - Ben -- Sent from my Android device with K-9 Mail. Please excuse my brevity. From klebinger.andreas at gmx.at Sun Nov 17 11:58:21 2019 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sun, 17 Nov 2019 12:58:21 +0100 Subject: Problem with compiler perf tests In-Reply-To: References: Message-ID: <088102bd-13d4-2a78-66ad-ff3dc14f8ece@gmx.at> Ömer Sinan Ağacan schrieb am 17.11.2019 um 09:22: > I think what we should do instead is that once it's clear that the > patch did not > introduce *accidental* increases in numbers (e.g. in !2100 I checked and > explained the increase in average residency, and showed that the increase makes > sense and is not a leak) and it's the right thing to do, we should merge it But that's what we do already isn't it? We don't expect all changes to have no performance implications if they can be argued for. However it's easy for "insignificant" changes to compound to a significant slowdown so I don't think we are too careful currently. I've never seen anyone care about "a few bytes". Assuming we get 6 MR's who regresses a metric by 1% per year that adds up quickly. Three years and we will be about 20% worse! So I think we are right to be cautions with those things. It's just that people sometimes (as in !2100 initially) disagree on what the right thing to do is. But I don't see a way around that no matter where we set the thresholds. That can only be resolved by discourse. What I don't agree with is pushing that discussion into separate tickets in general. That would just mean we get a bunch of performance regression, and a bunch of tickets documenting them. Which is better than not documenting them! And sometimes that will be the best course of action. But if there is a chance to resolve performance issues while a patch is still being worked on that will in general always be a better solution. At least that's my opinion on the general case. Cheers, Andreas > Hi, > > Currently we have a bunch of tests in testsuite/tests/perf/compiler for keeping > compile time allocations, max residency etc. in the expected ranges and avoid > introducing accidental compile time performance regressions. > > This has a problem: we expect every MR to keep the compile time stats in the > specified ranges, but sometimes a patch fixes an issue, or does something right > (removes hacks/refactors bad code etc.) but also increases the numbers because > sometimes doing it right means doing more work or keeping more things in memory > (e.g. !1747, !2100 which is required by !1304). > > We then spend hours/days trying to shave a few bytes off in those patches, > because the previous hacky/buggy code set the standards. 
It doesn't make sense > to compare bad/buggy code with good code and expect them to do the same thing. > > Second problem is that it forces the developer to focus on a tiny part of the > compiler to reduce the numbers to the where they were. If they looked at the big > picture instead it might be possible to see rooms of improvements in other > places that could be possibly lead to much more efficient use of the developer > time. > > I think what we should do instead is that once it's clear that the patch did not > introduce *accidental* increases in numbers (e.g. in !2100 I checked and > explained the increase in average residency, and showed that the increase makes > sense and is not a leak) and it's the right thing to do, we should merge it, and > track the performance issues in another issue. The CI should still run perf > tests, but those should be allowed to fail. > > Any opinions on this? > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From b at chreekat.net Mon Nov 18 15:36:42 2019 From: b at chreekat.net (Bryan Richter) Date: Mon, 18 Nov 2019 17:36:42 +0200 Subject: gc.log file In-Reply-To: References: Message-ID: This message seems to indicate that git prune needs to be run on the "remote", i.e. on the GitLab server itself. On Fri, 15 Nov 2019, 15.02 Simon Peyton Jones via ghc-devs, < ghc-devs at haskell.org> wrote: > Devs, > > I’m getting mysterious messages from git, below. I have no file gc.log > in my tree. > > Also I have run ‘git prune’ but the message still occurs. > > Any ideas? > > Simon > > > > simonpj at MSRC-3645512:~/code/HEAD$ git push --force > > setsockopt IPV6_TCLASS 8: Operation not permitted: > > Counting objects: 34, done. > > Delta compression using up to 20 threads. > > Compressing objects: 100% (14/14), done. > > Writing objects: 100% (34/34), 8.83 KiB | 2.21 MiB/s, done. > > Total 34 (delta 32), reused 21 (delta 20) > > remote: > > remote: View merge request for wip/T16296: > > remote: https://gitlab.haskell.org/ghc/ghc/merge_requests/2161 > > remote: > > remote: warning: The last gc run reported the following. Please correct > the root cause > > remote: and remove gc.log. > > remote: Automatic cleanup will not be performed until the file is removed. > > > remote: > > remote: warning: There are too many unreachable loose objects; run 'git > prune' to remove them. > > remote: > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Nov 22 20:28:25 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 22 Nov 2019 15:28:25 -0500 Subject: Service outage Message-ID: <87r21z8xlm.fsf@smart-cactus.org> Hello all, DreamHost, which hosts our GitLab artifact and log storage, is currently having some network trouble. Artifact and CI log download will be unavailable until this is resolved. I have engaged with DreamHost to try to expedite resolution of the issue but have no yet heard back. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From ben at well-typed.com  Sun Nov 24 21:08:53 2019
From: ben at well-typed.com (Ben Gamari)
Date: Sun, 24 Nov 2019 16:08:53 -0500
Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-alpha1 released
Message-ID: <87o8x17zj3.fsf@smart-cactus.org>

Hello all,

The GHC team is happy to announce the availability of the first alpha release in the GHC 8.10 series. Source and binary distributions are available at the usual place:

    https://downloads.haskell.org/ghc/8.10.1-alpha1/

GHC 8.10.1 will bring a number of new features, including:

* The new UnliftedNewtypes extension, allowing newtypes around unlifted types.
* The new StandaloneKindSignatures extension, which allows users to give top-level kind signatures to type, type family, and class declarations.
* A new warning, -Wderiving-defaults, to draw attention to ambiguous deriving clauses.
* A number of improvements in code generation.
* A new GHCi command, :instances, for listing the class instances available for a type.
* An upgraded Windows toolchain lifting the MAX_PATH limitation.
* Improved profiling support, including support for sending profiler samples to the eventlog, allowing correlation between the profile and other program events.

This release marks the beginning of the 8.10 pre-release cycle. The next alpha release will be in roughly two weeks. This next alpha will likely be the last release before the release candidate in late December. If all goes well, the final release will be cut roughly two weeks after the candidate, in mid-January.

This being an alpha release, there are a few issues that are still outstanding:

* The new Alpine Linux binary distribution is not present due to an apparent correctness issue [1]; any help Alpine users can offer here would be greatly appreciated.
* We have yet to sort out compliance with Apple's notarization requirements [2], which will likely be necessary for users of macOS Catalina.
* There is an issue with the users guide build that has yet to be sorted out.
* There are a couple of non-regression correctness issues which we plan to fix for the final 8.10.1 but which are not fixed in this release.
* Debian 10 builds are not yet available.

Please do test this release and let us know if you encounter any other issues.

Cheers,

- Ben

[1] https://gitlab.haskell.org/ghc/ghc/issues/17508
[2] https://gitlab.haskell.org/ghc/ghc/issues/17418

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From omeragacan at gmail.com  Mon Nov 25 10:24:22 2019
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Mon, 25 Nov 2019 13:24:22 +0300
Subject: -dynamic-too implementation question
Message-ID: 

Hi,

If anyone here knows how -dynamic-too is implemented, feedback in #17502 would be appreciated. As far as I can see it's currently unnecessarily inefficient, and the code is very hard to follow, but it's possible that I'm missing something, and it'd be good to know what before investing time into it.

Thanks,

Ömer

From facundo.dominguez at tweag.io  Mon Nov 25 16:26:17 2019
From: facundo.dominguez at tweag.io (Domínguez, Facundo)
Date: Mon, 25 Nov 2019 13:26:17 -0300
Subject: Multiple instance pragmas
Message-ID: 

Dear devs,

I have a program [1] which depends on the ability to specify some instances to be both overlappable and incoherent.
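For example, with a made-up class just to illustrate the shape of what I'd like to write (the class and instance below are purely illustrative, and this is exactly what GHC currently rejects):

```
{-# LANGUAGE FlexibleInstances #-}

class Pretty a where
  pretty :: a -> String

-- A catch-all fallback instance that should be both overlappable
-- and incoherent at the same time:
instance {-# OVERLAPPABLE #-} {-# INCOHERENT #-} Pretty a where
  pretty _ = "<opaque>"
```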
So far GHC allows only one of the OVERLAPPABLE or INCOHERENT pragmas to be specified per instance. One can still have an overlappable and incoherent instance by using -XIncoherentInstances, but this extension is deprecated.

Is there any chance that a patch allowing multiple instance pragmas would be accepted? And if not, what would be the reason for this constraint?

Thanks,
Facundo

[1] https://gist.github.com/facundominguez/2c0292bf6a721b450c46486ff3b71f24

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From allbery.b at gmail.com  Mon Nov 25 16:28:55 2019
From: allbery.b at gmail.com (Brandon Allbery)
Date: Mon, 25 Nov 2019 11:28:55 -0500
Subject: Multiple instance pragmas
In-Reply-To: 
References: 
Message-ID: 

TBH I'd have expected INCOHERENT to cover OVERLAPPABLE, i.e. all bets are off and you've allowed anything, including overlaps.

On Mon, Nov 25, 2019 at 11:26 AM Domínguez, Facundo <facundo.dominguez at tweag.io> wrote:

> Dear devs,
>
> I have a program [1] which depends on the ability to specify some instances to be both overlappable and incoherent.
>
> So far GHC allows only one of the OVERLAPPABLE or INCOHERENT pragmas to be specified per instance. One can still have an overlappable and incoherent instance by using -XIncoherentInstances, but this extension is deprecated.
>
> Is there any chance that a patch allowing multiple instance pragmas would be accepted? And if not, what would be the reason for this constraint?
>
> Thanks,
> Facundo
>
> [1] https://gist.github.com/facundominguez/2c0292bf6a721b450c46486ff3b71f24
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

--
brandon s allbery kf8nh
allbery.b at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rae at richarde.dev  Mon Nov 25 16:58:59 2019
From: rae at richarde.dev (Richard Eisenberg)
Date: Mon, 25 Nov 2019 16:58:59 +0000
Subject: Multiple instance pragmas
In-Reply-To: 
References: 
Message-ID: <37835659-B9E6-4481-8FB8-AD587C08BBF8@richarde.dev>

I agree -- I think INCOHERENT essentially subsumes the others. Do you have a counter-example?

Richard

> On Nov 25, 2019, at 4:28 PM, Brandon Allbery wrote:
>
> TBH I'd have expected INCOHERENT to cover OVERLAPPABLE, i.e. all bets are off and you've allowed anything, including overlaps.
>
> On Mon, Nov 25, 2019 at 11:26 AM Domínguez, Facundo wrote:
> Dear devs,
>
> I have a program [1] which depends on the ability to specify some instances to be both overlappable and incoherent.
>
> So far GHC allows only one of the OVERLAPPABLE or INCOHERENT pragmas to be specified per instance. One can still have an overlappable and incoherent instance by using -XIncoherentInstances, but this extension is deprecated.
>
> Is there any chance that a patch allowing multiple instance pragmas would be accepted? And if not, what would be the reason for this constraint?
>
> Thanks,
> Facundo
>
> [1] https://gist.github.com/facundominguez/2c0292bf6a721b450c46486ff3b71f24
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josefs at fb.com  Tue Nov 26 11:49:15 2019
From: josefs at fb.com (Josef Svenningsson)
Date: Tue, 26 Nov 2019 11:49:15 +0000
Subject: Injecting imported functions using a core plugin
Message-ID: 

Hi ghc-devs,

I'm currently writing a core plugin that I could use some help with. Consider the following two modules (definitions added so the sketch compiles):

```
module A where

foo :: Int
foo = 1

bar :: Int
bar = 2
```

```
module B where

import A

baz :: Int
baz = bar
```

When compiling module B I run my plugin. The goal of the plugin is to replace the occurrence of `bar` with `foo`. Note that we can be sure that `foo` is actually imported, but unfortunately it doesn't occur anywhere in B before the plugin performs the transformation.

The problem I have is that in order to inject `foo` in B I need to have an `Id` which represents `foo`, and I'm having some trouble constructing such an `Id`. I've looked through the various environments that are available during the core-to-core transformations, but as far as I can see none of them provides enough information to actually produce the `foo` `Id`. I hope I'm missing something. What do I need to do in order to construct the `foo` `Id` in module B?

Thanks,

Josef

PS. The way I've phrased my problem in this email, it would be possible to solve it with rewrite rules. My actual use case is unfortunately more complicated, and rewrite rules don't provide enough power to do what I want.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From facundo.dominguez at tweag.io  Tue Nov 26 11:56:05 2019
From: facundo.dominguez at tweag.io (Domínguez, Facundo)
Date: Tue, 26 Nov 2019 08:56:05 -0300
Subject: Injecting imported functions using a core plugin
In-Reply-To: 
References: 
Message-ID: 

Hello Josef,

Do you know the location of foo when building the plugin? Otherwise, how is the plugin supposed to learn where it comes from?

Facundo

On Tue, Nov 26, 2019 at 8:49 AM Josef Svenningsson wrote:

> Hi ghc-devs,
>
> I'm currently writing a core plugin that I could use some help with. Consider the following two modules (definitions added so the sketch compiles):
>
> ```
> module A where
>
> foo :: Int
> foo = 1
>
> bar :: Int
> bar = 2
> ```
>
> ```
> module B where
>
> import A
>
> baz :: Int
> baz = bar
> ```
>
> When compiling module B I run my plugin. The goal of the plugin is to replace the occurrence of `bar` with `foo`. Note that we can be sure that `foo` is actually imported, but unfortunately it doesn't occur anywhere in B before the plugin performs the transformation.
>
> The problem I have is that in order to inject `foo` in B I need to have an `Id` which represents `foo`, and I'm having some trouble constructing such an `Id`. I've looked through the various environments that are available during the core-to-core transformations, but as far as I can see none of them provides enough information to actually produce the `foo` `Id`. I hope I'm missing something. What do I need to do in order to construct the `foo` `Id` in module B?
>
> Thanks,
>
> Josef
>
> PS. The way I've phrased my problem in this email, it would be possible to solve it with rewrite rules. My actual use case is unfortunately more complicated, and rewrite rules don't provide enough power to do what I want.
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From josefs at fb.com  Tue Nov 26 12:55:29 2019
From: josefs at fb.com (Josef Svenningsson)
Date: Tue, 26 Nov 2019 12:55:29 +0000
Subject: Injecting imported functions using a core plugin
In-Reply-To: 
References: 
Message-ID: 

Yes, the plugin is fully aware of module A in my example.

Thanks,

Josef

From: "Domínguez, Facundo"
Date: Tuesday, November 26, 2019 at 11:56 AM
To: Josef Svenningsson
Cc: "ghc-devs at haskell.org"
Subject: Re: Injecting imported functions using a core plugin

Hello Josef,

Do you know the location of foo when building the plugin? Otherwise, how is the plugin supposed to learn where it comes from?

Facundo

On Tue, Nov 26, 2019 at 8:49 AM Josef Svenningsson wrote:

Hi ghc-devs,

I'm currently writing a core plugin that I could use some help with. Consider the following two modules (definitions added so the sketch compiles):

```
module A where

foo :: Int
foo = 1

bar :: Int
bar = 2
```

```
module B where

import A

baz :: Int
baz = bar
```

When compiling module B I run my plugin. The goal of the plugin is to replace the occurrence of `bar` with `foo`. Note that we can be sure that `foo` is actually imported, but unfortunately it doesn't occur anywhere in B before the plugin performs the transformation.

The problem I have is that in order to inject `foo` in B I need to have an `Id` which represents `foo`, and I'm having some trouble constructing such an `Id`. I've looked through the various environments that are available during the core-to-core transformations, but as far as I can see none of them provides enough information to actually produce the `foo` `Id`. I hope I'm missing something. What do I need to do in order to construct the `foo` `Id` in module B?

Thanks,

Josef

PS. The way I've phrased my problem in this email, it would be possible to solve it with rewrite rules. My actual use case is unfortunately more complicated, and rewrite rules don't provide enough power to do what I want.

_______________________________________________
ghc-devs mailing list
ghc-devs at haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthewtpickering at gmail.com  Tue Nov 26 13:37:13 2019
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Tue, 26 Nov 2019 13:37:13 +0000
Subject: Injecting imported functions using a core plugin
In-Reply-To: 
References: 
Message-ID: 

You can use `thNameToGhcName` to turn a quoted name ('foo) into a GHC Name, and then use `lookupId` to get the `Id` for that Name.
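For example, something along these lines in the plugin's CoreM monad (an untested sketch; the module name is hypothetical, and it assumes the plugin package can import A so that 'A.foo is in scope for the Template Haskell quote):

```
{-# LANGUAGE TemplateHaskellQuotes #-}
module MyPlugin where  -- hypothetical plugin module

import GhcPlugins
import qualified A

-- Resolve the Id of A.foo from inside a core-to-core pass.
fooId :: CoreM Id
fooId = do
  mbName <- thNameToGhcName 'A.foo
  case mbName of
    Just name -> lookupId name
    Nothing   -> error "MyPlugin: could not resolve A.foo"
```

Once you have the `Id`, replacing occurrences of `bar` is a plain traversal over the `CoreProgram`.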
Cheers,

Matt

On Tue, Nov 26, 2019 at 12:55 PM Josef Svenningsson wrote:

> Yes, the plugin is fully aware of module A in my example.
>
> Thanks,
>
> Josef
>
> From: "Domínguez, Facundo"
> Date: Tuesday, November 26, 2019 at 11:56 AM
> To: Josef Svenningsson
> Cc: "ghc-devs at haskell.org"
> Subject: Re: Injecting imported functions using a core plugin
>
> Hello Josef,
>
> Do you know the location of foo when building the plugin? Otherwise, how is the plugin supposed to learn where it comes from?
>
> Facundo
>
> On Tue, Nov 26, 2019 at 8:49 AM Josef Svenningsson wrote:
>
> > Hi ghc-devs,
> >
> > I'm currently writing a core plugin that I could use some help with. Consider the following two modules (definitions added so the sketch compiles):
> >
> > ```
> > module A where
> >
> > foo :: Int
> > foo = 1
> >
> > bar :: Int
> > bar = 2
> > ```
> >
> > ```
> > module B where
> >
> > import A
> >
> > baz :: Int
> > baz = bar
> > ```
> >
> > When compiling module B I run my plugin. The goal of the plugin is to replace the occurrence of `bar` with `foo`. Note that we can be sure that `foo` is actually imported, but unfortunately it doesn't occur anywhere in B before the plugin performs the transformation.
> >
> > The problem I have is that in order to inject `foo` in B I need to have an `Id` which represents `foo`, and I'm having some trouble constructing such an `Id`. I've looked through the various environments that are available during the core-to-core transformations, but as far as I can see none of them provides enough information to actually produce the `foo` `Id`. I hope I'm missing something. What do I need to do in order to construct the `foo` `Id` in module B?
> >
> > Thanks,
> >
> > Josef
> >
> > PS. The way I've phrased my problem in this email, it would be possible to solve it with rewrite rules. My actual use case is unfortunately more complicated, and rewrite rules don't provide enough power to do what I want.
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From josefs at fb.com  Tue Nov 26 14:57:52 2019
From: josefs at fb.com (Josef Svenningsson)
Date: Tue, 26 Nov 2019 14:57:52 +0000
Subject: Injecting imported functions using a core plugin
In-Reply-To: 
References: 
Message-ID: <327FF6F2-F1F7-402A-BD77-ED30B4DAC833@fb.com>

That works splendidly! Neat!

Thanks,

Josef

On 11/26/19, 1:37 PM, "Matthew Pickering" wrote:

    You can use `thNameToGhcName` to turn a quoted name ('foo) into a GHC Name, and then use `lookupId` to get the `Id` for that Name.

    Cheers,

    Matt

On Tue, Nov 26, 2019 at 12:55 PM Josef Svenningsson wrote:

> Yes, the plugin is fully aware of module A in my example.
>
> Thanks,
>
> Josef
>
> From: "Domínguez, Facundo"
> Date: Tuesday, November 26, 2019 at 11:56 AM
> To: Josef Svenningsson
> Cc: "ghc-devs at haskell.org"
> Subject: Re: Injecting imported functions using a core plugin
>
> Hello Josef,
>
> Do you know the location of foo when building the plugin? Otherwise, how is the plugin supposed to learn where it comes from?
>
> Facundo
>
> On Tue, Nov 26, 2019 at 8:49 AM Josef Svenningsson wrote:
>
> > Hi ghc-devs,
> >
> > I'm currently writing a core plugin that I could use some help with. Consider the following two modules (definitions added so the sketch compiles):
> >
> > ```
> > module A where
> >
> > foo :: Int
> > foo = 1
> >
> > bar :: Int
> > bar = 2
> > ```
> >
> > ```
> > module B where
> >
> > import A
> >
> > baz :: Int
> > baz = bar
> > ```
> >
> > When compiling module B I run my plugin. The goal of the plugin is to replace the occurrence of `bar` with `foo`. Note that we can be sure that `foo` is actually imported, but unfortunately it doesn't occur anywhere in B before the plugin performs the transformation.
> >
> > The problem I have is that in order to inject `foo` in B I need to have an `Id` which represents `foo`, and I'm having some trouble constructing such an `Id`. I've looked through the various environments that are available during the core-to-core transformations, but as far as I can see none of them provides enough information to actually produce the `foo` `Id`. I hope I'm missing something. What do I need to do in order to construct the `foo` `Id` in module B?
> >
> > Thanks,
> >
> > Josef
> >
> > PS. The way I've phrased my problem in this email, it would be possible to solve it with rewrite rules.
> > My actual use case is unfortunately more complicated, and rewrite rules don't provide enough power to do what I want.
> >
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From csaba.hruska at gmail.com  Wed Nov 27 16:03:43 2019
From: csaba.hruska at gmail.com (Csaba Hruska)
Date: Wed, 27 Nov 2019 17:03:43 +0100
Subject: .hie files for pre-installed GHC libraries
Message-ID: 

Hi,

Is it planned to include the .hie files of base and the other libraries in the GHC binary download package?

Regards,
Csaba Hruska

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rene_de_visser at hotmail.com  Thu Nov 28 13:00:40 2019
From: rene_de_visser at hotmail.com (Rene)
Date: Thu, 28 Nov 2019 13:00:40 +0000
Subject: Fix for ticket 8095 in ghc-8.10.1
Message-ID: 

Hello,

I'm wondering if the fix for https://gitlab.haskell.org/ghc/ghc/issues/8095 is going to make it into ghc-8.10.1?

Rene.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: