From omeragacan at gmail.com Mon Jul 1 07:42:07 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 1 Jul 2019 10:42:07 +0300 Subject: Workflow question (changing codegen) In-Reply-To: References: Message-ID: My current workaround is this: I have a branch which is just master + the new file I've added. I first build it from a clean tree (git clean -xfd; then build), then switch to my branch, and run `make 1` in `compiler/`. That way I don't have to run ./configure (because the new file is already built and tracked by the build system) so the compiler version does not change and my stage 1 compiler can use the libraries I built with master. I guess the root cause of this is that I have to run ./configure for the build system to track my new file, but doing that also updates the compiler version. Avoiding any of these (updating compiler version, or having to run configure when adding new files) would make this much easier. Ömer Ömer Sinan Ağacan , 28 Haz 2019 Cum, 12:09 tarihinde şunu yazdı: > > Hi all, > > I'm currently going through this torturous process and I'm hoping that someone > here will be able to help. > > I'm making changes in the codegen. My changes are currently buggy, and I need a > working stage 1 compiler to be able to debug. Basically I need to build > libraries using the branch my changes are based on, then build stage 1 with my > branch, so that I'll be able to build and run programs using stage 1 that uses > my codegen changes. The changes are compatible with the old codegen (i.e. no > changes in calling conventions or anything like that) so this should work. > > Normally I do this > > $ git checkout master > $ git distclean && ./boot && ./configure && make > $ git checkout my_branch > $ cd compiler; make 1 > > This gives me stage 1 compiler that uses my buggy codegen changes, plus > libraries built with the old and correct codegen. > > However the problem is I'm also adding a new file in my_branch, and the build > system just doesn't register that fact, even after adding the line I added to > compiler/ghc.cabal.in to compiler/ghc.cabal. So far the only way to fix this > that I could find was to run ./configure again, then run make for a few seconds > at the top level, then do `make 1` in compiler/. Unfortunately even that doesn't > work when the master branch and my_branch have different dates, because `make` > in master branch produces a different version than the `make` in my_branch, so > the interface files become incompatible. > > Anyone have any ideas on how to proceed here? > > Thanks, > > Ömer From jost.berthold at gmail.com Mon Jul 1 09:17:14 2019 From: jost.berthold at gmail.com (Jost Berthold) Date: Mon, 1 Jul 2019 19:17:14 +1000 Subject: Cloning (Shayne Fletcher) In-Reply-To: References: Message-ID: <9b33faa0-87aa-1eb1-811c-8b0ffabe8935@gmail.com> Just on this detail in the previous mails: On 6/25/19 10:00 PM, ghc-devs-request at haskell.org wrote: >> More generally, I'm actually wondering, why GHC's .gitsubmodules use > relative paths. Why not make them absolute? > > I continue to wonder about that and if switching to absolute paths might > remove this wrinkle. Can anyone chime in? I remember the relative paths for submodules were added to make working with several clones of the GHC repo (to lower rebuild cost for simultaneous branches or similar) easier. With relative paths, one can make a second local clone from the first one and all references to all submodules will share local data. 
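Concretely, the workflow this was meant to support looks roughly like the following (paths purely illustrative):

  # first clone is fetched from upstream as usual
  $ git clone --recursive https://gitlab.haskell.org/ghc/ghc.git ghc
  # second clone is made from the first, local one
  $ git clone --recursive ghc ghc-copy

The idea being that, with relative URLs in .gitmodules, the second clone's submodule references resolve against the local first clone rather than going back to the server, so the already-fetched object data gets reused.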
That said, this does get in the way sometimes. I changed back to absolute paths in my GHC fork quite a while back. / Jost From a.pelenitsyn at gmail.com Mon Jul 1 11:52:56 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 1 Jul 2019 14:52:56 +0300 Subject: Cloning (Shayne Fletcher) In-Reply-To: <9b33faa0-87aa-1eb1-811c-8b0ffabe8935@gmail.com> References: <9b33faa0-87aa-1eb1-811c-8b0ffabe8935@gmail.com> Message-ID: Hello Jost, Thanks for researching this! In fact, Arnaud did his own research on this topic and submitted !1309 [1] to switch to the absolute paths. The MR has been approved by Ben swiftly and now awaits merging. I believe we should default to the common case, which is to use abs paths making the life of, presumably, many people easier, and let those who understand submodules hack their way through them. [1]: https://gitlab.haskell.org/ghc/ghc/merge_requests/1309 -- Best wishes, Artem On Mon, 1 Jul 2019 at 12:17, Jost Berthold wrote: > Just on this detail in the previous mails: > > On 6/25/19 10:00 PM, ghc-devs-request at haskell.org wrote: > >> More generally, I'm actually wondering, why GHC's .gitsubmodules use > > relative paths. Why not make them absolute? > > > > I continue to wonder about that and if switching to absolute paths might > > remove this wrinkle. Can anyone chime in? > > I remember the relative paths for submodules were added to make working > with several clones of the GHC repo (to lower rebuild cost for > simultaneous branches or similar) easier. > > With relative paths, one can make a second local clone from the first > one and all references to all submodules will share local data. > > That said, this does get in the way sometimes. I changed back to > absolute paths in my GHC fork quite a while back. > > > / Jost > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 1 13:50:12 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 1 Jul 2019 13:50:12 +0000 Subject: User manual Message-ID: I use this link to see the most up to date GHC user manual https://ghc.gitlab.haskell.org/ghc/doc/users_guide/index.html But I think I'm seeing on from 13 June. It's labelled as GHC 8.9.0.20190613 User's Guide How come it's so out of date? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From shayne.fletcher at daml.com Mon Jul 1 14:06:37 2019 From: shayne.fletcher at daml.com (Shayne Fletcher) Date: Mon, 1 Jul 2019 09:06:37 -0500 Subject: Cloning (Shayne Fletcher) In-Reply-To: References: <9b33faa0-87aa-1eb1-811c-8b0ffabe8935@gmail.com> Message-ID: Well this is an unexpected and most welcome development. Way to go Arnaud! On Mon, Jul 1, 2019, 06:53 Artem Pelenitsyn wrote: > Hello Jost, > > Thanks for researching this! In fact, Arnaud did his own research on this > topic and submitted !1309 [1] to switch to the absolute paths. The MR has > been approved by Ben swiftly and now awaits merging. > > I believe we should default to the common case, which is to use abs paths > making the life of, presumably, many people easier, and let those who > understand submodules hack their way through them. 
> > [1]: https://gitlab.haskell.org/ghc/ghc/merge_requests/1309 > > -- > Best wishes, > Artem > > > On Mon, 1 Jul 2019 at 12:17, Jost Berthold > wrote: > >> Just on this detail in the previous mails: >> >> On 6/25/19 10:00 PM, ghc-devs-request at haskell.org wrote: >> >> More generally, I'm actually wondering, why GHC's .gitsubmodules use >> > relative paths. Why not make them absolute? >> > >> > I continue to wonder about that and if switching to absolute paths might >> > remove this wrinkle. Can anyone chime in? >> >> I remember the relative paths for submodules were added to make working >> with several clones of the GHC repo (to lower rebuild cost for >> simultaneous branches or similar) easier. >> >> With relative paths, one can make a second local clone from the first >> one and all references to all submodules will share local data. >> >> That said, this does get in the way sometimes. I changed back to >> absolute paths in my GHC fork quite a while back. >> >> >> / Jost >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at  http://www.digitalasset.com/emaildisclaimer.html . If you are not the intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chessai1996 at gmail.com Mon Jul 1 15:27:00 2019 From: chessai1996 at gmail.com (chessai .) Date: Mon, 1 Jul 2019 11:27:00 -0400 Subject: In-Reply-To: References: Message-ID: Copying on ghc-devs, since i think that's what you wanted. On Mon, Jul 1, 2019, 11:14 AM Andrew Martin wrote: > To get GHC to raise an exception from an inline primop, I presume that I'd > need to jump to stg_raisezh. None of the existing inline primops do > anything quite like this. I see things like emitMemcmpCall and > emitMemsetCall, but these ultimately just wrap emitForeignCall, which wraps > mkUnsafeCall, which wraps the data constructor CmmUnsafeForeignCall. I > think I want CmmCall instead, which is described by the comments as being > used for "a native call or tail call". Hmm... emitRtsCall might be what I > want. I'll try pursuing this route further. > > -- > -Andrew Thaddeus Martin > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jul 1 18:15:01 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 01 Jul 2019 14:15:01 -0400 Subject: User manual In-Reply-To: References: Message-ID: <875zolwrj3.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I use this link to see the most up to date GHC user manual > https://ghc.gitlab.haskell.org/ghc/doc/users_guide/index.html > But I think I'm seeing on from 13 June. It's labelled as GHC 8.9.0.20190613 User's Guide > How come it's so out of date? I had also noticed this; the doc-tarballs job [1] appears to be failing due to missing PDF documentation from the Linux build. I have opened [2] to track this. 
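For reference, the PDF side of the docs is governed by the BUILD_SPHINX_PDF setting, so a rough way to check the PDF build locally is something like the following (this assumes a working LaTeX installation, and mk/build.mk is just the usual place to put such overrides):

  # opt in to the PDF users guide, then build as usual
  $ echo 'BUILD_SPHINX_PDF = YES' >> mk/build.mk
  $ make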
Cheers, - Ben [1] https://gitlab.haskell.org/ghc/ghc/-/jobs/114650 [2] https://gitlab.haskell.org/ghc/ghc/issues/16890 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Mon Jul 1 18:20:23 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 1 Jul 2019 19:20:23 +0100 Subject: User manual In-Reply-To: References: Message-ID: It seems the the `doc-tarball` job stopped working because the `-deb9-debug` variant uses the `validate` flavour which doesn't build the documentation pdf. How is best to fix this Ben? The dwarf build does have the PDF but is 350mb . On Mon, Jul 1, 2019 at 2:50 PM Simon Peyton Jones via ghc-devs wrote: > > I use this link to see the most up to date GHC user manual > > https://ghc.gitlab.haskell.org/ghc/doc/users_guide/index.html > > But I think I’m seeing on from 13 June. It’s labelled as GHC 8.9.0.20190613 User's Guide > > How come it’s so out of date? > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Mon Jul 1 18:52:06 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 01 Jul 2019 14:52:06 -0400 Subject: User manual In-Reply-To: References: Message-ID: <87y31hvb8t.fsf@smart-cactus.org> Matthew Pickering writes: > It seems the the `doc-tarball` job stopped working because the > `-deb9-debug` variant uses the `validate` flavour which doesn't build > the documentation pdf. > > How is best to fix this Ben? The dwarf build does have the PDF but is 350mb . > A good question. I can see that we probably don't want to change the validate flavour to set BUILD_SPHINX_PDF=YES; afterall, we don't want to require contributors to install LaTeX just to locally test their patches. However, perhaps BUILD_SPHINX_PDF ?= NO is a reasonable trade-off and then override this in the CI configuration. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 1 19:01:50 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 01 Jul 2019 15:01:50 -0400 Subject: Workflow question (changing codegen) In-Reply-To: <4D7733A6-9909-4117-AA4C-A57595EEB778@richarde.dev> References: <4D7733A6-9909-4117-AA4C-A57595EEB778@richarde.dev> Message-ID: <87sgrpvask.fsf@smart-cactus.org> Richard Eisenberg writes: > Just to pass on something that looks cool (I haven't tried it myself > yet): git worktree. It seems git can hang several different checkouts > of a repo in different directories. This seems far superior to my > current habit of having many clones of ghc, sometimes going through > machinations to get commits from one place to another. The > documentation for git worktree seems quite approachable, so you might > find it useful. I plan on using it in the future. > Indeed I use `git new-workdir`, which is a script similar to `git worktree` (but I think predates `worktree` by a few years). Like `worktree`, this allows you to have several working directories sharing a set of refs. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 1 19:03:57 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 01 Jul 2019 15:03:57 -0400 Subject: Workflow question (changing codegen) In-Reply-To: References: <4D7733A6-9909-4117-AA4C-A57595EEB778@richarde.dev> Message-ID: <87pnmtvap1.fsf@smart-cactus.org> Sebastian Graf writes: > Re: git worktree: That's the workflow I'm currently using. It has its > problems with submodules, see > https://stackoverflow.com/questions/31871888/what-goes-wrong-when-using-git-worktree-with-git-submodules. > But you can make it work with this git alias from the first answer: > https://gitlab.com/clacke/gists/blob/0c4a0b6e10f7fbf15127339750a6ff490d9aa3c8/.config/git/config#L12. > Just go into your main checkout and do `git wtas ../T9876`. AFAIR it > interacts weirdly with MinGW's git or git for Windows, but nothing you > can't work around. > > Anyway, I was hoping that one day hadrian will be smart enough to have a > build directory for each branch or something, so that I would only need one > checkout where I can switch between branches as needed. In the meantime, > `git wtas` does what I want. > For what it's worth, Hadrian can already do this. Just pass the --build-root=$DIR flag and all build artifacts will end up in $DIR instead of the usual _build. I'm not sure how robust it is on branch switches, but it's at least a start. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 1 19:05:06 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 01 Jul 2019 15:05:06 -0400 Subject: Workflow question (changing codegen) In-Reply-To: References: Message-ID: <87muhxvan4.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > My current workaround is this: I have a branch which is just master + the new > file I've added. I first build it from a clean tree (git clean -xfd; then > build), then switch to my branch, and run `make 1` in `compiler/`. That way I > don't have to run ./configure (because the new file is already built and tracked > by the build system) so the compiler version does not change and my stage 1 > compiler can use the libraries I built with master. > > I guess the root cause of this is that I have to run ./configure for the build > system to track my new file, but doing that also updates the compiler version. > Avoiding any of these (updating compiler version, or having to run configure > when adding new files) would make this much easier. > For what it's worth, I sometimes just resort to manually editing the files generated by ./configure to avoid having to reconfigure and consequently rebuild. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Tue Jul 2 06:04:40 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 2 Jul 2019 09:04:40 +0300 Subject: Weight field in issues too fine grained? Message-ID: Hi, One of the problems I'm having when triaging is that I think the "weight" field for issues is currently too fine grained. The triage protocol[1] gives some idea but it's still up to the person who's doing triaging to decide, for example, between 7 vs. 10 for a runtime crash. 
I think a better "weight" field would be what we had in trac: highest, high, normal etc. that way we don't have to decide whether a runtime panic is 8 or 9 or 10, we'd just mark it as "highest". Now if we had a lot of issues with weight 8, 9, 10 etc. perhaps we'd use the weight field to prioritize, but in my experience we usually have very little such issues and they all get fixed before the next release, so the distinction between e.g. 8 vs. 9 is not useful or meaningful. Is it possible to do switch to trac-style priority/weight field in Gitlab? Anyone else think that this would be good? Ömer [1]: https://gitlab.haskell.org/ghc/ghc/wikis/gitlab/issues#triage-protocol From b at chreekat.net Tue Jul 2 06:44:26 2019 From: b at chreekat.net (Bryan Richter) Date: Tue, 2 Jul 2019 09:44:26 +0300 Subject: Weight field in issues too fine grained? In-Reply-To: References: Message-ID: On Tue, 2 Jul 2019, 9.05 Ömer Sinan Ağacan, wrote: > > Is it possible to do switch to trac-style priority/weight field in Gitlab? > Anyone else think that this would be good? > > Ömer > > [1]: > https://gitlab.haskell.org/ghc/ghc/wikis/gitlab/issues#triage-protocol Hi Ömer, Yes, it's possible to have precisely the same priority labels as in Trac. The feature for this is label prioritization [1]. Weight, in fact, is intended to be used as a measure of size or complexity [2]. But I suppose it could be used however one wants. :) -Bryan [1]: https://docs.gitlab.com/ee/user/project/labels.html#label-priority [2]: https://docs.gitlab.com/ee/user/project/issues/issue_data_and_actions.html#9-weight-starter -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 2 07:54:20 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 2 Jul 2019 07:54:20 +0000 Subject: Weight field in issues too fine grained? In-Reply-To: References: Message-ID: Omer's suggestion makes sense to me | -----Original Message----- | From: ghc-devs On Behalf Of Ömer Sinan | Agacan | Sent: 02 July 2019 07:05 | To: ghc-devs | Subject: Weight field in issues too fine grained? | | Hi, | | One of the problems I'm having when triaging is that I think the "weight" | field for issues is currently too fine grained. The triage protocol[1] | gives some idea but it's still up to the person who's doing triaging to | decide, for example, between 7 vs. 10 for a runtime crash. | | I think a better "weight" field would be what we had in trac: highest, | high, normal etc. that way we don't have to decide whether a runtime panic | is 8 or 9 or 10, we'd just mark it as "highest". | | Now if we had a lot of issues with weight 8, 9, 10 etc. perhaps we'd use | the weight field to prioritize, but in my experience we usually have very | little such issues and they all get fixed before the next release, so the | distinction between e.g. 8 vs. 9 is not useful or meaningful. | | Is it possible to do switch to trac-style priority/weight field in Gitlab? | Anyone else think that this would be good? 
| | Ömer | | [1]: | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.h | askell.org%2Fghc%2Fghc%2Fwikis%2Fgitlab%2Fissues%23triage- | protocol&data=02%7C01%7Csimonpj%40microsoft.com%7Cdc6955690c0d48921a75 | 08d6feb34b37%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6369764433521776 | 50&sdata=tbQhiSFrkZZIRMNt3nal3nO7im53pENC1%2F121kRWioo%3D&reserved | =0 | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cdc6955690c0d48921a7508d6 | feb34b37%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636976443352177650&a | mp;sdata=ZMY2yF%2BkMwR0%2FCVwbIBh%2B5GKXcO%2FAK4QXXrnF7MWMFY%3D&reserv | ed=0 From matthewtpickering at gmail.com Tue Jul 2 09:01:34 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 2 Jul 2019 10:01:34 +0100 Subject: What to do if your hadrian build starts failing (supportedLlvmVersion) Message-ID: If you see this message: Exit code: 1 Stderr: compiler/llvmGen/LlvmCodeGen/Base.hs:192:36: error: • Couldn't match expected type ‘Int’ with actual type ‘(Integer, Integer)’ • In the first argument of ‘LlvmVersion’, namely ‘(7, 0)’ In the expression: LlvmVersion (7, 0) In an equation for ‘supportedLlvmVersion’: supportedLlvmVersion = LlvmVersion (7, 0) | 192 | supportedLlvmVersion = LlvmVersion sUPPORTED_LLVM_VERSION | ^^^^^ ) Then you need to delete the `includes/ghcautoconf.h` file. It is a file generated by the Make build system. Cheers, Matt From matthewtpickering at gmail.com Tue Jul 2 09:12:21 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 2 Jul 2019 10:12:21 +0100 Subject: Weight field in issues too fine grained? In-Reply-To: References: Message-ID: It isn't possible to change how the weight field works but as Bryan points out we could use some of the more advanced label features. A scoped label (https://docs.gitlab.com/ee/user/project/labels.html#scoped-labels-premium) could be suitable for weight so that it is enforced that each issue only has one weight. Currently my understanding of weight is that 1. (Obviously) hIgh priority issues are marked as 10 2. (Obviously) low priority issues are marked as 3 3. Everything else is left as Cheers, Matt On Tue, Jul 2, 2019 at 8:54 AM Simon Peyton Jones via ghc-devs wrote: > > Omer's suggestion makes sense to me > > | -----Original Message----- > | From: ghc-devs On Behalf Of Ömer Sinan > | Agacan > | Sent: 02 July 2019 07:05 > | To: ghc-devs > | Subject: Weight field in issues too fine grained? > | > | Hi, > | > | One of the problems I'm having when triaging is that I think the "weight" > | field for issues is currently too fine grained. The triage protocol[1] > | gives some idea but it's still up to the person who's doing triaging to > | decide, for example, between 7 vs. 10 for a runtime crash. > | > | I think a better "weight" field would be what we had in trac: highest, > | high, normal etc. that way we don't have to decide whether a runtime panic > | is 8 or 9 or 10, we'd just mark it as "highest". > | > | Now if we had a lot of issues with weight 8, 9, 10 etc. perhaps we'd use > | the weight field to prioritize, but in my experience we usually have very > | little such issues and they all get fixed before the next release, so the > | distinction between e.g. 8 vs. 9 is not useful or meaningful. 
> | > | Is it possible to do switch to trac-style priority/weight field in Gitlab? > | Anyone else think that this would be good? > | > | Ömer > | > | [1]: > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.h > | askell.org%2Fghc%2Fghc%2Fwikis%2Fgitlab%2Fissues%23triage- > | protocol&data=02%7C01%7Csimonpj%40microsoft.com%7Cdc6955690c0d48921a75 > | 08d6feb34b37%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6369764433521776 > | 50&sdata=tbQhiSFrkZZIRMNt3nal3nO7im53pENC1%2F121kRWioo%3D&reserved > | =0 > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cdc6955690c0d48921a7508d6 > | feb34b37%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636976443352177650&a > | mp;sdata=ZMY2yF%2BkMwR0%2FCVwbIBh%2B5GKXcO%2FAK4QXXrnF7MWMFY%3D&reserv > | ed=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Jul 2 13:57:46 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Jul 2019 09:57:46 -0400 Subject: Weight field in issues too fine grained? In-Reply-To: References: Message-ID: <87a7dwv8rv.fsf@smart-cactus.org> Matthew Pickering writes: > It isn't possible to change how the weight field works but as Bryan > points out we could use some of the more advanced label features. > > A scoped label (https://docs.gitlab.com/ee/user/project/labels.html#scoped-labels-premium) > could be suitable for weight so that it is enforced that each issue > only has one weight. > > Currently my understanding of weight is that > > 1. (Obviously) hIgh priority issues are marked as 10 > 2. (Obviously) low priority issues are marked as 3 > 3. Everything else is left as > Right. I would suggest that we convert the weight field into two (mutually exclusive) labels: * P::High would be category (1) * P::Low would be category (2) * No P::* label would imply categoy (3) Does this sound reasonable to everyone? I could cobble together a script to make this change in about 10 minutes if so. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Tue Jul 2 14:20:42 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 2 Jul 2019 17:20:42 +0300 Subject: Weight field in issues too fine grained? In-Reply-To: <87a7dwv8rv.fsf@smart-cactus.org> References: <87a7dwv8rv.fsf@smart-cactus.org> Message-ID: I think we may want two different weights "high" and "highest". - Highest: regressions, incorrect results, runtime panics/crashes. These are release blockers. - High: other bugs Other than that this sounds good to me. I don't remember how many kinds of priorities we had in trac but IIRC it used to work well, I think we can just copy the old priorities as labels. Ömer Ben Gamari , 2 Tem 2019 Sal, 16:58 tarihinde şunu yazdı: > > Matthew Pickering writes: > > > It isn't possible to change how the weight field works but as Bryan > > points out we could use some of the more advanced label features. 
> > > > A scoped label (https://docs.gitlab.com/ee/user/project/labels.html#scoped-labels-premium) > > could be suitable for weight so that it is enforced that each issue > > only has one weight. > > > > Currently my understanding of weight is that > > > > 1. (Obviously) hIgh priority issues are marked as 10 > > 2. (Obviously) low priority issues are marked as 3 > > 3. Everything else is left as > > > Right. I would suggest that we convert the weight field into two > (mutually exclusive) labels: > > * P::High would be category (1) > * P::Low would be category (2) > * No P::* label would imply categoy (3) > > Does this sound reasonable to everyone? I could cobble together a script > to make this change in about 10 minutes if so. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Jul 2 15:22:01 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Jul 2019 11:22:01 -0400 Subject: Weight field in issues too fine grained? In-Reply-To: References: <87a7dwv8rv.fsf@smart-cactus.org> Message-ID: <87zhlwtqb0.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > I think we may want two different weights "high" and "highest". > > - Highest: regressions, incorrect results, runtime panics/crashes. These are > release blockers. > - High: other bugs > > Other than that this sounds good to me. > A fair point. Sounds fine to me. > I don't remember how many kinds of priorities we had in trac but IIRC it used to > work well, I think we can just copy the old priorities as labels. > Trac had lowest, low, normal, high, and highest, IIRC. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Jul 2 16:12:49 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Jul 2019 12:12:49 -0400 Subject: Weight field in issues too fine grained? In-Reply-To: <87a7dwv8rv.fsf@smart-cactus.org> References: <87a7dwv8rv.fsf@smart-cactus.org> Message-ID: <87v9wktnyb.fsf@smart-cactus.org> Ben Gamari writes: > Right. I would suggest that we convert the weight field into two > (mutually exclusive) labels: > > * P::High would be category (1) > * P::Low would be category (2) > * No P::* label would imply categoy (3) > > Does this sound reasonable to everyone? I could cobble together a script > to make this change in about 10 minutes if so. I have I have posted this script here [1]. Cheers, - Ben [1] https://gitlab.haskell.org/bgamari/gitlab-migration/snippets/1457 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Tue Jul 2 16:21:03 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 2 Jul 2019 16:21:03 +0000 Subject: Weight field in issues too fine grained? In-Reply-To: <87v9wktnyb.fsf@smart-cactus.org> References: <87a7dwv8rv.fsf@smart-cactus.org> <87v9wktnyb.fsf@smart-cactus.org> Message-ID: Hang on. | > * P::High would be category (1) | > * P::Low would be category (2) | > * No P::* label would imply categoy (3) Let's have P:High, P:Medium, P:Low, with no "P:" label meaning "no one has assigned it a priority yet". It's very important to be able to distinguish "no one has assigned a priority" from "priority has been assigned as low". 
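Creating the labels themselves should only be a couple of GitLab API calls; something like this, repeated per priority (label name and colour here are only placeholders):

  $ curl --request POST --header "PRIVATE-TOKEN: $TOKEN" \
         --data-urlencode "name=P:High" \
         --data-urlencode "color=#D9534F" \
         https://gitlab.haskell.org/api/v4/projects/ghc%2Fghc/labels

The rest of the migration would then just be walking the existing issues and mapping each weight onto one of these labels.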
Simon | -----Original Message----- | From: Ben Gamari | Sent: 02 July 2019 17:13 | To: Matthew Pickering ; Simon Peyton Jones | | Cc: ghc-devs | Subject: Re: Weight field in issues too fine grained? | | Ben Gamari writes: | | > Right. I would suggest that we convert the weight field into two | > (mutually exclusive) labels: | > | > * P::High would be category (1) | > * P::Low would be category (2) | > * No P::* label would imply categoy (3) | > | > Does this sound reasonable to everyone? I could cobble together a | > script to make this change in about 10 minutes if so. | | I have | I have posted this script here [1]. | | Cheers, | | - Ben | | | [1] https://gitlab.haskell.org/bgamari/gitlab-migration/snippets/1457 From ben at smart-cactus.org Tue Jul 2 17:03:27 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Jul 2019 13:03:27 -0400 Subject: Weight field in issues too fine grained? In-Reply-To: References: Message-ID: <87h884tllu.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Hang on. > > | > * P::High would be category (1) > | > * P::Low would be category (2) > | > * No P::* label would imply categoy (3) > > Let's have P:High, P:Medium, P:Low, with no "P:" label meaning "no one > has assigned it a priority yet". > > It's very important to be able to distinguish "no one has assigned a > priority" from "priority has been assigned as low". > The initial thought was that a ticket without the "needs triage" label would have a valid priority. Consequently a ticket without "needs triage" and no "P::*" label would have medium priority. However, while writing this it does seem that this is a non-trivial invariant that leaves a bit too much implicit. Perhaps an explicit P::normal label is best. I'll update the script. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Tue Jul 2 17:12:25 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 2 Jul 2019 17:12:25 +0000 Subject: Weight field in issues too fine grained? In-Reply-To: <87h884tllu.fsf@smart-cactus.org> References: <87h884tllu.fsf@smart-cactus.org> Message-ID: Moreover, then we don't need "Needs triage". Anything without a P: label needs triage! Triage = assign a P: label. Nice Simon | -----Original Message----- | From: Ben Gamari | Sent: 02 July 2019 18:03 | To: Simon Peyton Jones ; Matthew Pickering | | Cc: ghc-devs | Subject: RE: Weight field in issues too fine grained? | | Simon Peyton Jones via ghc-devs writes: | | > Hang on. | > | > | > * P::High would be category (1) | > | > * P::Low would be category (2) | > | > * No P::* label would imply categoy (3) | > | > Let's have P:High, P:Medium, P:Low, with no "P:" label meaning "no one | > has assigned it a priority yet". | > | > It's very important to be able to distinguish "no one has assigned a | > priority" from "priority has been assigned as low". | > | The initial thought was that a ticket without the "needs triage" label | would have a valid priority. Consequently a ticket without "needs triage" | and no "P::*" label would have medium priority. | | However, while writing this it does seem that this is a non-trivial | invariant that leaves a bit too much implicit. Perhaps an explicit | P::normal label is best. I'll update the script. 
| | Cheers, | | - Ben From arnaud.spiwack at tweag.io Thu Jul 4 07:10:01 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Thu, 4 Jul 2019 09:10:01 +0200 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Dear Alejandro and Simon, Taking into account that I'm a bit of an impredicativity nut, so I may be over enthusiastic. - I frequently want more impredicativity in GHC - Last time I did, guarded impredicativity, as in the paper, would have, I believed, done the trick. That being said, it is somewhat hard to give an answer on the spot, but I'll try to take note of why and whether guarded impredicativity would suffice. Best, Arnaud On Fri, Jun 28, 2019 at 2:15 PM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Just to amplify: we are very interested to > > > > - Get some idea of *whether anyone cares about impredicativity*. If > we added it to GHC, would you use it? Have you ever bumped up Haskell’s > inability to instantiate a polymorphic function at a polytype. > > > > - Get some idea of *whether the particular form of impredicativity > described in the paper would be expressive enough* for your > application. > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Alejandro > Serrano Mena > *Sent:* 28 June 2019 13:12 > *To:* ghc-devs at haskell.org > *Subject:* Guarded Impredicativity > > > > Dear all, > > > > We are trying to bring back `ImpredicativeTypes` into GHC by using the > ideas in the "Guarded Impredicative Polymorphism" paper [ > https://www.microsoft.com/en-us/research/publication/guarded-impredicative-polymorphism/ > > ]. > > > > For now I have produced a first attempt, which lives in > https://gitlab.haskell.org/trupill/ghc > . > It would be great if those interested in impredicative polymorphism could > give it a try and see whether it works as expected or not. > > > > The main idea behing "guarded impredicativity" is that you can infer an > impredicative instantiation for a type variable in a function call if you > have at least one given argument where that type variable appears under a > type constructor different from (->). > > For example, consider the call `(\x -> x) : ids`, where `ids :: [forall a. > a -> a]`. Since in the type of `(:)`, namely `forall a. a -> [a] -> [a]`, > the variable `a` appears under the `[]` constructor and that second > argument is given, we are allowed to instantiate `a := forall a. a -> a`. > On the other hand, if we try to do `ids <> ids`, where `(<>)` is monoid > concatenation with type `forall m. Monoid m => m -> m -> m`, we are forced > to instantiate `m` with a not-polymorphic type because at no point the > variable appears under a type constructor. > > > > Just for reference, the best to get a working clone is to follow these > steps: > > > git clone --recursive https://gitlab.haskell.org/ghc/ghc > > impredicative-ghc > > > cd impredicative-ghc > > > git remote add trupill git at gitlab.haskell.org:trupill/ghc.git > > > git fetch trupill > > > git checkout trupill master > > > > Thanks very much in advance, > > Alejandro > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From trupill at gmail.com Thu Jul 4 10:55:49 2019 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Thu, 4 Jul 2019 12:55:49 +0200 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Thanks very much, If you had some snippets of code or some libraries you could share with us, that would be extremely helpful. Regards, Alejandro El jue., 4 jul. 2019 a las 9:10, Spiwack, Arnaud () escribió: > Dear Alejandro and Simon, > > Taking into account that I'm a bit of an impredicativity nut, so I may be > over enthusiastic. > > - I frequently want more impredicativity in GHC > - Last time I did, guarded impredicativity, as in the paper, would have, I > believed, done the trick. > > That being said, it is somewhat hard to give an answer on the spot, but > I'll try to take note of why and whether guarded impredicativity would > suffice. > > Best, > Arnaud > > > On Fri, Jun 28, 2019 at 2:15 PM Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > >> Just to amplify: we are very interested to >> >> >> >> - Get some idea of *whether anyone cares about impredicativity*. If >> we added it to GHC, would you use it? Have you ever bumped up Haskell’s >> inability to instantiate a polymorphic function at a polytype. >> >> >> >> - Get some idea of *whether the particular form of impredicativity >> described in the paper would be expressive enough* for your >> application. >> >> >> >> Simon >> >> >> >> *From:* ghc-devs *On Behalf Of *Alejandro >> Serrano Mena >> *Sent:* 28 June 2019 13:12 >> *To:* ghc-devs at haskell.org >> *Subject:* Guarded Impredicativity >> >> >> >> Dear all, >> >> >> >> We are trying to bring back `ImpredicativeTypes` into GHC by using the >> ideas in the "Guarded Impredicative Polymorphism" paper [ >> https://www.microsoft.com/en-us/research/publication/guarded-impredicative-polymorphism/ >> >> ]. >> >> >> >> For now I have produced a first attempt, which lives in >> https://gitlab.haskell.org/trupill/ghc >> . >> It would be great if those interested in impredicative polymorphism could >> give it a try and see whether it works as expected or not. >> >> >> >> The main idea behing "guarded impredicativity" is that you can infer an >> impredicative instantiation for a type variable in a function call if you >> have at least one given argument where that type variable appears under a >> type constructor different from (->). >> >> For example, consider the call `(\x -> x) : ids`, where `ids :: [forall >> a. a -> a]`. Since in the type of `(:)`, namely `forall a. a -> [a] -> >> [a]`, the variable `a` appears under the `[]` constructor and that second >> argument is given, we are allowed to instantiate `a := forall a. a -> a`. >> On the other hand, if we try to do `ids <> ids`, where `(<>)` is monoid >> concatenation with type `forall m. Monoid m => m -> m -> m`, we are forced >> to instantiate `m` with a not-polymorphic type because at no point the >> variable appears under a type constructor. 
>> >> >> >> Just for reference, the best to get a working clone is to follow these >> steps: >> >> > git clone --recursive https://gitlab.haskell.org/ghc/ghc >> >> impredicative-ghc >> >> > cd impredicative-ghc >> >> > git remote add trupill git at gitlab.haskell.org:trupill/ghc.git >> >> > git fetch trupill >> >> > git checkout trupill master >> >> >> >> Thanks very much in advance, >> >> Alejandro >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jul 5 07:37:53 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Jul 2019 07:37:53 +0000 Subject: Gitlab workflow Message-ID: Ben Still trying to understand GitLab. Look at MR 1352 https://gitlab.haskell.org/ghc/ghc/merge_requests/1352 * It clearly says on the first page "The changes were not merged into master" * But lower down (at the end) it says "Merged in 80af..." What should I believe? Merged or not merged? Also * It would be really helpful if a MR status, displayed prominently at the top, had "Merged" as a status, not just "Closed". If I'm trying to check if my has landed, and I see "Closed", that could mean that someone has (doubtless for good reasons) closed it manually, and that it will never land. Would that be possible? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Jul 5 09:38:46 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 5 Jul 2019 10:38:46 +0100 Subject: Gitlab workflow In-Reply-To: References: Message-ID: Hi Simon, No it is not possible due to the use of Marge to merge patches. Gitlab automatically chooses the merged status as follows: Consider two MRs both which target HEAD. MR 1: HEAD <- A MR 2: HEAD <- B Marge creates a batch which contains both MR 1 and MR 2. Once the batch succeeds, firstly MR 1 is merged. HEAD <- A MR 1 is closed with the *merged* status because A was merged directly into HEAD and it matches the state of MR 1. Then patch B gets merged and now master looks like: HEAD <- A <- B MR 2 is closed with closed status because B was merged into master after A, not directly onto HEAD (as the original MR was). There is no option to change this status in the gitlab API. Cheers, Matt On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs wrote: > > Ben > > Still trying to understand GitLab. Look at MR 1352 https://gitlab.haskell.org/ghc/ghc/merge_requests/1352 > > It clearly says on the first page “The changes were not merged into master” > But lower down (at the end) it says “Merged in 80af...” > > What should I believe? Merged or not merged? > > Also > > It would be really helpful if a MR status, displayed prominently at the top, had “Merged” as a status, not just “Closed”. If I’m trying to check if my has landed, and I see “Closed”, that could mean that someone has (doubtless for good reasons) closed it manually, and that it will never land. > > Would that be possible? 
> > Thanks > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Fri Jul 5 09:43:16 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Jul 2019 09:43:16 +0000 Subject: Gitlab workflow In-Reply-To: References: Message-ID: | No it is not possible due to the use of Marge to merge patches. Gitlab By "it" is not possible, you mean that it's not possible to make the MR status into "Merged". Worse, I think you are saying that some MRs will say "Merged" and some will say "Closed" in some random way depending on Marge batching. Sigh. Maybe this will get better with Gitlab's new merge-train feature. Meanwhile, my original message also asked why the MR shows two contradictory messages about whether the MR has landed. Is that also un-fixable? And if so how do I figure out which one to believe? Thanks Simon | -----Original Message----- | From: Matthew Pickering | Sent: 05 July 2019 10:39 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Gitlab workflow | | Hi Simon, | | No it is not possible due to the use of Marge to merge patches. Gitlab | automatically chooses the merged status as follows: | | Consider two MRs both which target HEAD. | | MR 1: HEAD <- A | MR 2: HEAD <- B | | Marge creates a batch which contains both MR 1 and MR 2. Once the batch | succeeds, firstly MR 1 is merged. | | HEAD <- A | | MR 1 is closed with the *merged* status because A was merged directly | into HEAD and it matches the state of MR 1. | | Then patch B gets merged and now master looks like: | | HEAD <- A <- B | | MR 2 is closed with closed status because B was merged into master after | A, not directly onto HEAD (as the original MR was). | | There is no option to change this status in the gitlab API. | | Cheers, | | Matt | | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs wrote: | > | > Ben | > | > Still trying to understand GitLab. Look at MR 1352 | > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitl | > ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01%7C | > simonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988bf8 | > 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiFzO | > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 | > | > It clearly says on the first page “The changes were not merged into | master” | > But lower down (at the end) it says “Merged in 80af...” | > | > What should I believe? Merged or not merged? | > | > Also | > | > It would be really helpful if a MR status, displayed prominently at the | top, had “Merged” as a status, not just “Closed”. If I’m trying to check | if my has landed, and I see “Closed”, that could mean that someone has | (doubtless for good reasons) closed it manually, and that it will never | land. | > | > Would that be possible? | > | > Thanks | > | > Simon | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. 
| > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 | > %7Csimonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988 | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2aXm | > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 From matthewtpickering at gmail.com Fri Jul 5 09:54:52 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 5 Jul 2019 10:54:52 +0100 Subject: Gitlab workflow In-Reply-To: References: Message-ID: It's not possible to make the MR status merged and also have a reliable merge bot. We used to try to make the status merged but it caused too much instability. Merge trains might eventually work but the current iteration is not suitable as it doesn't work with forks. You believe the one which marge posts telling you that the patch is merged, the commit it links to is on master so you can clearly see the patch has been committed. Matt On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones wrote: > > | No it is not possible due to the use of Marge to merge patches. Gitlab > > By "it" is not possible, you mean that it's not possible to make the MR status into "Merged". Worse, I think you are saying that some MRs will say "Merged" and some will say "Closed" in some random way depending on Marge batching. Sigh. > > Maybe this will get better with Gitlab's new merge-train feature. > > Meanwhile, my original message also asked why the MR shows two contradictory messages about whether the MR has landed. Is that also un-fixable? And if so how do I figure out which one to believe? > > Thanks > > Simon > > > > | -----Original Message----- > | From: Matthew Pickering > | Sent: 05 July 2019 10:39 > | To: Simon Peyton Jones > | Cc: ghc-devs > | Subject: Re: Gitlab workflow > | > | Hi Simon, > | > | No it is not possible due to the use of Marge to merge patches. Gitlab > | automatically chooses the merged status as follows: > | > | Consider two MRs both which target HEAD. > | > | MR 1: HEAD <- A > | MR 2: HEAD <- B > | > | Marge creates a batch which contains both MR 1 and MR 2. Once the batch > | succeeds, firstly MR 1 is merged. > | > | HEAD <- A > | > | MR 1 is closed with the *merged* status because A was merged directly > | into HEAD and it matches the state of MR 1. > | > | Then patch B gets merged and now master looks like: > | > | HEAD <- A <- B > | > | MR 2 is closed with closed status because B was merged into master after > | A, not directly onto HEAD (as the original MR was). > | > | There is no option to change this status in the gitlab API. > | > | Cheers, > | > | Matt > | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs | devs at haskell.org> wrote: > | > > | > Ben > | > > | > Still trying to understand GitLab. Look at MR 1352 > | > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitl > | > ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01%7C > | > simonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988bf8 > | > 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiFzO > | > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 > | > > | > It clearly says on the first page “The changes were not merged into > | master” > | > But lower down (at the end) it says “Merged in 80af...” > | > > | > What should I believe? Merged or not merged? > | > > | > Also > | > > | > It would be really helpful if a MR status, displayed prominently at the > | top, had “Merged” as a status, not just “Closed”. 
If I’m trying to check > | if my has landed, and I see “Closed”, that could mean that someone has > | (doubtless for good reasons) closed it manually, and that it will never > | land. > | > > | > Would that be possible? > | > > | > Thanks > | > > | > Simon > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. > | > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 > | > %7Csimonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988 > | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2aXm > | > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 From simonpj at microsoft.com Fri Jul 5 10:18:32 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Jul 2019 10:18:32 +0000 Subject: Gitlab workflow In-Reply-To: References: Message-ID: | You believe the one which marge posts telling you that the patch is | merged, the commit it links to is on master so you can clearly see the | patch has been committed. OK. The earlier one, also from Marge, not the Discussion stream but rather in the panel at the top, says Closed by Marge Bot 8 hours ago The changes were not merged into master So that is an outright lie? Yes it is closed, but contrary to the statement it _has_ been merged. It's unfortunate that this misleading display is right at top, in the summary material, while the truth (that it has been merged) is buried in the Discussion stream. Alas. But thank you for clarifying. Is this something we can raise with the Gitlab folk? It seems so egregiously wrong. Simon | -----Original Message----- | From: Matthew Pickering | Sent: 05 July 2019 10:55 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Gitlab workflow | | It's not possible to make the MR status merged and also have a reliable | merge bot. We used to try to make the status merged but it caused too | much instability. | | Merge trains might eventually work but the current iteration is not | suitable as it doesn't work with forks. | | You believe the one which marge posts telling you that the patch is | merged, the commit it links to is on master so you can clearly see the | patch has been committed. | | Matt | | On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones | wrote: | > | > | No it is not possible due to the use of Marge to merge patches. | > | Gitlab | > | > By "it" is not possible, you mean that it's not possible to make the MR | status into "Merged". Worse, I think you are saying that some MRs will | say "Merged" and some will say "Closed" in some random way depending on | Marge batching. Sigh. | > | > Maybe this will get better with Gitlab's new merge-train feature. | > | > Meanwhile, my original message also asked why the MR shows two | contradictory messages about whether the MR has landed. Is that also un- | fixable? And if so how do I figure out which one to believe? | > | > Thanks | > | > Simon | > | > | > | > | -----Original Message----- | > | From: Matthew Pickering | > | Sent: 05 July 2019 10:39 | > | To: Simon Peyton Jones | > | Cc: ghc-devs | > | Subject: Re: Gitlab workflow | > | | > | Hi Simon, | > | | > | No it is not possible due to the use of Marge to merge patches. | > | Gitlab automatically chooses the merged status as follows: | > | | > | Consider two MRs both which target HEAD. | > | | > | MR 1: HEAD <- A | > | MR 2: HEAD <- B | > | | > | Marge creates a batch which contains both MR 1 and MR 2. 
Once the | > | batch succeeds, firstly MR 1 is merged. | > | | > | HEAD <- A | > | | > | MR 1 is closed with the *merged* status because A was merged | > | directly into HEAD and it matches the state of MR 1. | > | | > | Then patch B gets merged and now master looks like: | > | | > | HEAD <- A <- B | > | | > | MR 2 is closed with closed status because B was merged into master | > | after A, not directly onto HEAD (as the original MR was). | > | | > | There is no option to change this status in the gitlab API. | > | | > | Cheers, | > | | > | Matt | > | | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs | > | wrote: | > | > | > | > Ben | > | > | > | > Still trying to understand GitLab. Look at MR 1352 > | > | https://gitl > | > | ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01% | > | 7C > | > | simonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988b | > | f8 > | > | 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiF | > | zO > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 | > | > | > | > It clearly says on the first page “The changes were not merged | > | into master” | > | > But lower down (at the end) it says “Merged in 80af...” | > | > | > | > What should I believe? Merged or not merged? | > | > | > | > Also | > | > | > | > It would be really helpful if a MR status, displayed prominently | > | at the top, had “Merged” as a status, not just “Closed”. If I’m | > | trying to check if my has landed, and I see “Closed”, that could | > | mean that someone has (doubtless for good reasons) closed it | > | manually, and that it will never land. | > | > | > | > Would that be possible? | > | > | > | > Thanks | > | > | > | > Simon | > | > | > | > _______________________________________________ | > | > ghc-devs mailing list | > | > ghc-devs at haskell.org | > | > http://mail. | > | > | > | haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C | > | 01 > | > | %7Csimonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f9 | > | 88 > | > | bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2a | > | Xm > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 From eacameron at gmail.com Fri Jul 5 12:05:02 2019 From: eacameron at gmail.com (Elliot Cameron) Date: Fri, 5 Jul 2019 08:05:02 -0400 Subject: Gitlab workflow In-Reply-To: References: Message-ID: Could Marge change the target branch of an MR before merging it? Perhaps this would convince GitLab to show the right info. On Fri, Jul 5, 2019, 6:18 AM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > | You believe the one which marge posts telling you that the patch is > | merged, the commit it links to is on master so you can clearly see the > | patch has been committed. > > OK. The earlier one, also from Marge, not the Discussion stream but > rather in the panel at the top, says > > Closed by Marge Bot 8 hours ago > The changes were not merged into master > > So that is an outright lie? Yes it is closed, but contrary to the > statement it _has_ been merged. > > It's unfortunate that this misleading display is right at top, in the > summary material, while the truth (that it has been merged) is buried in > the Discussion stream. > > Alas. But thank you for clarifying. > > Is this something we can raise with the Gitlab folk? It seems so > egregiously wrong. 
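In the meantime, I suppose the reliable test is the one you describe: take the commit that Marge's comment links to and ask git whether it is an ancestor of master, e.g.

  $ git fetch origin
  $ git merge-base --is-ancestor <sha-from-marges-comment> origin/master && echo merged

rather than trusting the status shown in the header.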
> > Simon > > > | -----Original Message----- > | From: Matthew Pickering > | Sent: 05 July 2019 10:55 > | To: Simon Peyton Jones > | Cc: ghc-devs > | Subject: Re: Gitlab workflow > | > | It's not possible to make the MR status merged and also have a reliable > | merge bot. We used to try to make the status merged but it caused too > | much instability. > | > | Merge trains might eventually work but the current iteration is not > | suitable as it doesn't work with forks. > | > | You believe the one which marge posts telling you that the patch is > | merged, the commit it links to is on master so you can clearly see the > | patch has been committed. > | > | Matt > | > | On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones > | wrote: > | > > | > | No it is not possible due to the use of Marge to merge patches. > | > | Gitlab > | > > | > By "it" is not possible, you mean that it's not possible to make the > MR > | status into "Merged". Worse, I think you are saying that some MRs will > | say "Merged" and some will say "Closed" in some random way depending on > | Marge batching. Sigh. > | > > | > Maybe this will get better with Gitlab's new merge-train feature. > | > > | > Meanwhile, my original message also asked why the MR shows two > | contradictory messages about whether the MR has landed. Is that also > un- > | fixable? And if so how do I figure out which one to believe? > | > > | > Thanks > | > > | > Simon > | > > | > > | > > | > | -----Original Message----- > | > | From: Matthew Pickering > | > | Sent: 05 July 2019 10:39 > | > | To: Simon Peyton Jones > | > | Cc: ghc-devs > | > | Subject: Re: Gitlab workflow > | > | > | > | Hi Simon, > | > | > | > | No it is not possible due to the use of Marge to merge patches. > | > | Gitlab automatically chooses the merged status as follows: > | > | > | > | Consider two MRs both which target HEAD. > | > | > | > | MR 1: HEAD <- A > | > | MR 2: HEAD <- B > | > | > | > | Marge creates a batch which contains both MR 1 and MR 2. Once the > | > | batch succeeds, firstly MR 1 is merged. > | > | > | > | HEAD <- A > | > | > | > | MR 1 is closed with the *merged* status because A was merged > | > | directly into HEAD and it matches the state of MR 1. > | > | > | > | Then patch B gets merged and now master looks like: > | > | > | > | HEAD <- A <- B > | > | > | > | MR 2 is closed with closed status because B was merged into master > | > | after A, not directly onto HEAD (as the original MR was). > | > | > | > | There is no option to change this status in the gitlab API. > | > | > | > | Cheers, > | > | > | > | Matt > | > | > | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs > | > | wrote: > | > | > > | > | > Ben > | > | > > | > | > Still trying to understand GitLab. Look at MR 1352 > > | > | https://gitl > > | > | ab.haskell.org > %2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01% > | > | 7C > > | > | simonpj%40microsoft.com > %7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988b > | > | f8 > > | > | 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiF > | > | zO > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 > | > | > > | > | > It clearly says on the first page “The changes were not merged > | > | into master” > | > | > But lower down (at the end) it says “Merged in 80af...” > | > | > > | > | > What should I believe? Merged or not merged? > | > | > > | > | > Also > | > | > > | > | > It would be really helpful if a MR status, displayed prominently > | > | at the top, had “Merged” as a status, not just “Closed”. 
If I’m > | > | trying to check if my has landed, and I see “Closed”, that could > | > | mean that someone has (doubtless for good reasons) closed it > | > | manually, and that it will never land. > | > | > > | > | > Would that be possible? > | > | > > | > | > Thanks > | > | > > | > | > Simon > | > | > > | > | > _______________________________________________ > | > | > ghc-devs mailing list > | > | > ghc-devs at haskell.org > | > | > http://mail. > | > | > > | > | haskell.org > %2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C > | > | 01 > > | > | %7Csimonpj%40microsoft.com > %7Ce03ba07f29c447c1252e08d7012c9b59%7C72f9 > | > | 88 > > | > | bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2a > | > | Xm > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Jul 5 12:14:10 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 5 Jul 2019 13:14:10 +0100 Subject: Gitlab workflow In-Reply-To: References: Message-ID: The target branch is already correct. The way to get the merge status is to first rebase the branch before pushing the merge commit. Unfortunately the rebase API is very slow and buggy so we had to stop using it. On Fri, Jul 5, 2019 at 1:05 PM Elliot Cameron wrote: > > Could Marge change the target branch of an MR before merging it? Perhaps this would convince GitLab to show the right info. > > On Fri, Jul 5, 2019, 6:18 AM Simon Peyton Jones via ghc-devs wrote: >> >> | You believe the one which marge posts telling you that the patch is >> | merged, the commit it links to is on master so you can clearly see the >> | patch has been committed. >> >> OK. The earlier one, also from Marge, not the Discussion stream but rather in the panel at the top, says >> >> Closed by Marge Bot 8 hours ago >> The changes were not merged into master >> >> So that is an outright lie? Yes it is closed, but contrary to the statement it _has_ been merged. >> >> It's unfortunate that this misleading display is right at top, in the summary material, while the truth (that it has been merged) is buried in the Discussion stream. >> >> Alas. But thank you for clarifying. >> >> Is this something we can raise with the Gitlab folk? It seems so egregiously wrong. >> >> Simon >> >> >> | -----Original Message----- >> | From: Matthew Pickering >> | Sent: 05 July 2019 10:55 >> | To: Simon Peyton Jones >> | Cc: ghc-devs >> | Subject: Re: Gitlab workflow >> | >> | It's not possible to make the MR status merged and also have a reliable >> | merge bot. We used to try to make the status merged but it caused too >> | much instability. >> | >> | Merge trains might eventually work but the current iteration is not >> | suitable as it doesn't work with forks. >> | >> | You believe the one which marge posts telling you that the patch is >> | merged, the commit it links to is on master so you can clearly see the >> | patch has been committed. >> | >> | Matt >> | >> | On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones >> | wrote: >> | > >> | > | No it is not possible due to the use of Marge to merge patches. >> | > | Gitlab >> | > >> | > By "it" is not possible, you mean that it's not possible to make the MR >> | status into "Merged". 
Worse, I think you are saying that some MRs will >> | say "Merged" and some will say "Closed" in some random way depending on >> | Marge batching. Sigh. >> | > >> | > Maybe this will get better with Gitlab's new merge-train feature. >> | > >> | > Meanwhile, my original message also asked why the MR shows two >> | contradictory messages about whether the MR has landed. Is that also un- >> | fixable? And if so how do I figure out which one to believe? >> | > >> | > Thanks >> | > >> | > Simon >> | > >> | > >> | > >> | > | -----Original Message----- >> | > | From: Matthew Pickering >> | > | Sent: 05 July 2019 10:39 >> | > | To: Simon Peyton Jones >> | > | Cc: ghc-devs >> | > | Subject: Re: Gitlab workflow >> | > | >> | > | Hi Simon, >> | > | >> | > | No it is not possible due to the use of Marge to merge patches. >> | > | Gitlab automatically chooses the merged status as follows: >> | > | >> | > | Consider two MRs both which target HEAD. >> | > | >> | > | MR 1: HEAD <- A >> | > | MR 2: HEAD <- B >> | > | >> | > | Marge creates a batch which contains both MR 1 and MR 2. Once the >> | > | batch succeeds, firstly MR 1 is merged. >> | > | >> | > | HEAD <- A >> | > | >> | > | MR 1 is closed with the *merged* status because A was merged >> | > | directly into HEAD and it matches the state of MR 1. >> | > | >> | > | Then patch B gets merged and now master looks like: >> | > | >> | > | HEAD <- A <- B >> | > | >> | > | MR 2 is closed with closed status because B was merged into master >> | > | after A, not directly onto HEAD (as the original MR was). >> | > | >> | > | There is no option to change this status in the gitlab API. >> | > | >> | > | Cheers, >> | > | >> | > | Matt >> | > | >> | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs >> | > | wrote: >> | > | > >> | > | > Ben >> | > | > >> | > | > Still trying to understand GitLab. Look at MR 1352 > >> | > | https://gitl > >> | > | ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01% >> | > | 7C > >> | > | simonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988b >> | > | f8 > >> | > | 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiF >> | > | zO > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 >> | > | > >> | > | > It clearly says on the first page “The changes were not merged >> | > | into master” >> | > | > But lower down (at the end) it says “Merged in 80af...” >> | > | > >> | > | > What should I believe? Merged or not merged? >> | > | > >> | > | > Also >> | > | > >> | > | > It would be really helpful if a MR status, displayed prominently >> | > | at the top, had “Merged” as a status, not just “Closed”. If I’m >> | > | trying to check if my has landed, and I see “Closed”, that could >> | > | mean that someone has (doubtless for good reasons) closed it >> | > | manually, and that it will never land. >> | > | > >> | > | > Would that be possible? >> | > | > >> | > | > Thanks >> | > | > >> | > | > Simon >> | > | > >> | > | > _______________________________________________ >> | > | > ghc-devs mailing list >> | > | > ghc-devs at haskell.org >> | > | > http://mail. 
>> | > | > >> | > | haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C >> | > | 01 > >> | > | %7Csimonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f9 >> | > | 88 > >> | > | bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2a >> | > | Xm > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ryan.gl.scott at gmail.com Sat Jul 6 16:47:38 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sat, 6 Jul 2019 12:47:38 -0400 Subject: lint-submods-marge consistently failing when attempting to update Haddock Message-ID: I've noticed that Marge's most recent batch is consistently failing after repeated attempts. Each time, the failure is only in the lint-submods-marge job. Here is an excerpt from the most recent failure [1]: Submodule update(s) detected in 1cd22260c2467650dde8811cc58e89594a016f43: utils/haddock => 658ad4af237f3da196cca083ad525375260e38a7 *FAIL* commit not found in submodule repo or not reachable from persistent branches My understanding is that the lint-submods-marge job checks for any submodule updates, and if there is an update, it ensures that the new commit is actually present upstream. However, I have already pushed Haddock commit 658ad4af237f3da196cca083ad525375260e38a7 upstream on GitHub [2], and there has been enough time for this commit to also appear on the GitLab mirror [3]. Despite this, lint-submods-marge keeps failing, and I have no idea why. Does anyone know what to do from here? Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/-/jobs/119054 [2] https://github.com/haskell/haddock/commit/658ad4af237f3da196cca083ad525375260e38a7 [3] https://gitlab.haskell.org/ghc/haddock/commit/658ad4af237f3da196cca083ad525375260e38a7 From b at chreekat.net Sat Jul 6 17:05:46 2019 From: b at chreekat.net (Bryan Richter) Date: Sat, 6 Jul 2019 20:05:46 +0300 Subject: Gitlab workflow In-Reply-To: References: Message-ID: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> I can't help but notice that there are a lot of issues caused by adhering to a rebase-only workflow. I understand that lots of projects use this workflow, but I still don't understand its popularity. Git is just not designed to be used this way (although I admit that git is flexible enough to make that statement contentious). For instance, this current issue is due to how git tracks revisions. Git doesn't care about Merge Requests or Issue Numbers. It just knows the cryptographic hashes of the worktree's contents including the set of commits leading up to the current commit. If you change commits by rebasing, git just sees brand-new commits. GitHub and GitLab seem to be making things worse by relying on git's design while layering features on top that are only sort-of compatible. Brand-new commits created by a rebase are no longer tied to the original Merge Request, since it is reliant on the very hashes that got obliviated by the rebase. But it's not just GitLab that gets stymied: A bunch of handy git commands like `git branch --contains` end up being useless as well. I will resist the urge to stand up even taller on my soapbox and list all the other convenient features of git that get broken by rebasing, so suffice to say that the downsides to Plain Old Merges that do exist seem nonetheless trivial in comparison. 
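(To make that concrete, here is a minimal sketch of the breakage -- the branch name and hashes are invented, and it assumes upstream has moved on since the branch was created, so the rebase really does rewrite the commit:

    $ git checkout -b my-feature origin/master
    ... hack, hack ...
    $ git commit -am "Fix the widget"      # say this creates commit abc1234
    $ git branch --contains abc1234        # prints: my-feature
    $ git fetch origin                     # upstream has gained new commits in the meantime
    $ git rebase origin/master             # abc1234 is rewritten into a brand-new commit
    $ git branch --contains abc1234        # prints nothing any more

Any email, issue or MR comment that mentioned abc1234 now points at a commit that is only reachable from the reflog, until it is eventually garbage-collected.)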
Rather than argue against GHC's current practices, however, I would like to understand them better. What issues led to a rebase-only workflow? Which expert opinions were considered? What happy stories can people relate? We recently switched away from a rebase-only workflow at $workplace, and it's already made life so much nicer for us -- so I'm curious what unforeseen pain we might be in for. :) -Bryan On 7/5/19 3:14 PM, Matthew Pickering wrote: > The target branch is already correct. The way to get the merge status > is to first rebase the branch before pushing the merge commit. > Unfortunately the rebase API is very slow and buggy so we had to stop > using it. > > > On Fri, Jul 5, 2019 at 1:05 PM Elliot Cameron wrote: >> Could Marge change the target branch of an MR before merging it? Perhaps this would convince GitLab to show the right info. >> >> On Fri, Jul 5, 2019, 6:18 AM Simon Peyton Jones via ghc-devs wrote: >>> | You believe the one which marge posts telling you that the patch is >>> | merged, the commit it links to is on master so you can clearly see the >>> | patch has been committed. >>> >>> OK. The earlier one, also from Marge, not the Discussion stream but rather in the panel at the top, says >>> >>> Closed by Marge Bot 8 hours ago >>> The changes were not merged into master >>> >>> So that is an outright lie? Yes it is closed, but contrary to the statement it _has_ been merged. >>> >>> It's unfortunate that this misleading display is right at top, in the summary material, while the truth (that it has been merged) is buried in the Discussion stream. >>> >>> Alas. But thank you for clarifying. >>> >>> Is this something we can raise with the Gitlab folk? It seems so egregiously wrong. >>> >>> Simon >>> >>> >>> | -----Original Message----- >>> | From: Matthew Pickering >>> | Sent: 05 July 2019 10:55 >>> | To: Simon Peyton Jones >>> | Cc: ghc-devs >>> | Subject: Re: Gitlab workflow >>> | >>> | It's not possible to make the MR status merged and also have a reliable >>> | merge bot. We used to try to make the status merged but it caused too >>> | much instability. >>> | >>> | Merge trains might eventually work but the current iteration is not >>> | suitable as it doesn't work with forks. >>> | >>> | You believe the one which marge posts telling you that the patch is >>> | merged, the commit it links to is on master so you can clearly see the >>> | patch has been committed. >>> | >>> | Matt >>> | >>> | On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones >>> | wrote: >>> | > >>> | > | No it is not possible due to the use of Marge to merge patches. >>> | > | Gitlab >>> | > >>> | > By "it" is not possible, you mean that it's not possible to make the MR >>> | status into "Merged". Worse, I think you are saying that some MRs will >>> | say "Merged" and some will say "Closed" in some random way depending on >>> | Marge batching. Sigh. >>> | > >>> | > Maybe this will get better with Gitlab's new merge-train feature. >>> | > >>> | > Meanwhile, my original message also asked why the MR shows two >>> | contradictory messages about whether the MR has landed. Is that also un- >>> | fixable? And if so how do I figure out which one to believe? 
>>> | > >>> | > Thanks >>> | > >>> | > Simon >>> | > >>> | > >>> | > >>> | > | -----Original Message----- >>> | > | From: Matthew Pickering >>> | > | Sent: 05 July 2019 10:39 >>> | > | To: Simon Peyton Jones >>> | > | Cc: ghc-devs >>> | > | Subject: Re: Gitlab workflow >>> | > | >>> | > | Hi Simon, >>> | > | >>> | > | No it is not possible due to the use of Marge to merge patches. >>> | > | Gitlab automatically chooses the merged status as follows: >>> | > | >>> | > | Consider two MRs both which target HEAD. >>> | > | >>> | > | MR 1: HEAD <- A >>> | > | MR 2: HEAD <- B >>> | > | >>> | > | Marge creates a batch which contains both MR 1 and MR 2. Once the >>> | > | batch succeeds, firstly MR 1 is merged. >>> | > | >>> | > | HEAD <- A >>> | > | >>> | > | MR 1 is closed with the *merged* status because A was merged >>> | > | directly into HEAD and it matches the state of MR 1. >>> | > | >>> | > | Then patch B gets merged and now master looks like: >>> | > | >>> | > | HEAD <- A <- B >>> | > | >>> | > | MR 2 is closed with closed status because B was merged into master >>> | > | after A, not directly onto HEAD (as the original MR was). >>> | > | >>> | > | There is no option to change this status in the gitlab API. >>> | > | >>> | > | Cheers, >>> | > | >>> | > | Matt >>> | > | >>> | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs >>> | > | wrote: >>> | > | > >>> | > | > Ben >>> | > | > >>> | > | > Still trying to understand GitLab. Look at MR 1352 > >>> | > | https://gitl > >>> | > | ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01% >>> | > | 7C > >>> | > | simonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988b >>> | > | f8 > >>> | > | 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiF >>> | > | zO > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 >>> | > | > >>> | > | > It clearly says on the first page “The changes were not merged >>> | > | into master” >>> | > | > But lower down (at the end) it says “Merged in 80af...” >>> | > | > >>> | > | > What should I believe? Merged or not merged? >>> | > | > >>> | > | > Also >>> | > | > >>> | > | > It would be really helpful if a MR status, displayed prominently >>> | > | at the top, had “Merged” as a status, not just “Closed”. If I’m >>> | > | trying to check if my has landed, and I see “Closed”, that could >>> | > | mean that someone has (doubtless for good reasons) closed it >>> | > | manually, and that it will never land. >>> | > | > >>> | > | > Would that be possible? >>> | > | > >>> | > | > Thanks >>> | > | > >>> | > | > Simon >>> | > | > >>> | > | > _______________________________________________ >>> | > | > ghc-devs mailing list >>> | > | > ghc-devs at haskell.org >>> | > | > http://mail. 
>>> | > | > >>> | > | haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C >>> | > | 01 > >>> | > | %7Csimonpj%40microsoft.com%7Ce03ba07f29c447c1252e08d7012c9b59%7C72f9 >>> | > | 88 > >>> | > | bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2a >>> | > | Xm > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From allbery.b at gmail.com Sat Jul 6 17:15:05 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 6 Jul 2019 13:15:05 -0400 Subject: Gitlab workflow In-Reply-To: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> Message-ID: For one, merge commits tend to be big, annoying, and a problem for anyone who finds themself working on something that someone else just blew away or rewrote because they weren't checking back and you can't pick only part of the merge commit unless it's itself broken into multiple commits per file or sub-change (yes ideally they all would be the latter, but then you just made big changes like refactorings impossible). The more distributed the project is, the more rebase makes a lot of sense vs. merge commits; you need a lot more central planning and organization for merge commits to work well. Which itself seems kinda anti-git. On Sat, Jul 6, 2019 at 1:06 PM Bryan Richter wrote: > I can't help but notice that there are a lot of issues caused by > adhering to a rebase-only workflow. I understand that lots of projects > use this workflow, but I still don't understand its popularity. Git is > just not designed to be used this way (although I admit that git is > flexible enough to make that statement contentious). > > For instance, this current issue is due to how git tracks revisions. Git > doesn't care about Merge Requests or Issue Numbers. It just knows the > cryptographic hashes of the worktree's contents including the set of > commits leading up to the current commit. If you change commits by > rebasing, git just sees brand-new commits. > > GitHub and GitLab seem to be making things worse by relying on git's > design while layering features on top that are only sort-of compatible. > Brand-new commits created by a rebase are no longer tied to the original > Merge Request, since it is reliant on the very hashes that got > obliviated by the rebase. But it's not just GitLab that gets stymied: A > bunch of handy git commands like `git branch --contains` end up being > useless as well. I will resist the urge to stand up even taller on my > soapbox and list all the other convenient features of git that get > broken by rebasing, so suffice to say that the downsides to Plain Old > Merges that do exist seem nonetheless trivial in comparison. > > Rather than argue against GHC's current practices, however, I would like > to understand them better. What issues led to a rebase-only workflow? > Which expert opinions were considered? What happy stories can people > relate? We recently switched away from a rebase-only workflow at > $workplace, and it's already made life so much nicer for us -- so I'm > curious what unforeseen pain we might be in for. :) > > -Bryan > > On 7/5/19 3:14 PM, Matthew Pickering wrote: > > The target branch is already correct. 
The way to get the merge status > > is to first rebase the branch before pushing the merge commit. > > Unfortunately the rebase API is very slow and buggy so we had to stop > > using it. > > > > > > On Fri, Jul 5, 2019 at 1:05 PM Elliot Cameron > wrote: > >> Could Marge change the target branch of an MR before merging it? > Perhaps this would convince GitLab to show the right info. > >> > >> On Fri, Jul 5, 2019, 6:18 AM Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > >>> | You believe the one which marge posts telling you that the patch is > >>> | merged, the commit it links to is on master so you can clearly see > the > >>> | patch has been committed. > >>> > >>> OK. The earlier one, also from Marge, not the Discussion stream but > rather in the panel at the top, says > >>> > >>> Closed by Marge Bot 8 hours ago > >>> The changes were not merged into master > >>> > >>> So that is an outright lie? Yes it is closed, but contrary to the > statement it _has_ been merged. > >>> > >>> It's unfortunate that this misleading display is right at top, in the > summary material, while the truth (that it has been merged) is buried in > the Discussion stream. > >>> > >>> Alas. But thank you for clarifying. > >>> > >>> Is this something we can raise with the Gitlab folk? It seems so > egregiously wrong. > >>> > >>> Simon > >>> > >>> > >>> | -----Original Message----- > >>> | From: Matthew Pickering > >>> | Sent: 05 July 2019 10:55 > >>> | To: Simon Peyton Jones > >>> | Cc: ghc-devs > >>> | Subject: Re: Gitlab workflow > >>> | > >>> | It's not possible to make the MR status merged and also have a > reliable > >>> | merge bot. We used to try to make the status merged but it caused > too > >>> | much instability. > >>> | > >>> | Merge trains might eventually work but the current iteration is not > >>> | suitable as it doesn't work with forks. > >>> | > >>> | You believe the one which marge posts telling you that the patch is > >>> | merged, the commit it links to is on master so you can clearly see > the > >>> | patch has been committed. > >>> | > >>> | Matt > >>> | > >>> | On Fri, Jul 5, 2019 at 10:43 AM Simon Peyton Jones > >>> | wrote: > >>> | > > >>> | > | No it is not possible due to the use of Marge to merge patches. > >>> | > | Gitlab > >>> | > > >>> | > By "it" is not possible, you mean that it's not possible to make > the MR > >>> | status into "Merged". Worse, I think you are saying that some MRs > will > >>> | say "Merged" and some will say "Closed" in some random way > depending on > >>> | Marge batching. Sigh. > >>> | > > >>> | > Maybe this will get better with Gitlab's new merge-train feature. > >>> | > > >>> | > Meanwhile, my original message also asked why the MR shows two > >>> | contradictory messages about whether the MR has landed. Is that > also un- > >>> | fixable? And if so how do I figure out which one to believe? > >>> | > > >>> | > Thanks > >>> | > > >>> | > Simon > >>> | > > >>> | > > >>> | > > >>> | > | -----Original Message----- > >>> | > | From: Matthew Pickering > >>> | > | Sent: 05 July 2019 10:39 > >>> | > | To: Simon Peyton Jones > >>> | > | Cc: ghc-devs > >>> | > | Subject: Re: Gitlab workflow > >>> | > | > >>> | > | Hi Simon, > >>> | > | > >>> | > | No it is not possible due to the use of Marge to merge patches. > >>> | > | Gitlab automatically chooses the merged status as follows: > >>> | > | > >>> | > | Consider two MRs both which target HEAD. 
> >>> | > | > >>> | > | MR 1: HEAD <- A > >>> | > | MR 2: HEAD <- B > >>> | > | > >>> | > | Marge creates a batch which contains both MR 1 and MR 2. Once > the > >>> | > | batch succeeds, firstly MR 1 is merged. > >>> | > | > >>> | > | HEAD <- A > >>> | > | > >>> | > | MR 1 is closed with the *merged* status because A was merged > >>> | > | directly into HEAD and it matches the state of MR 1. > >>> | > | > >>> | > | Then patch B gets merged and now master looks like: > >>> | > | > >>> | > | HEAD <- A <- B > >>> | > | > >>> | > | MR 2 is closed with closed status because B was merged into > master > >>> | > | after A, not directly onto HEAD (as the original MR was). > >>> | > | > >>> | > | There is no option to change this status in the gitlab API. > >>> | > | > >>> | > | Cheers, > >>> | > | > >>> | > | Matt > >>> | > | > >>> | > | On Fri, Jul 5, 2019 at 8:38 AM Simon Peyton Jones via ghc-devs > >>> | > | wrote: > >>> | > | > > >>> | > | > Ben > >>> | > | > > >>> | > | > Still trying to understand GitLab. Look at MR 1352 > > >>> | > | https://gitl > > >>> | > | ab.haskell.org > %2Fghc%2Fghc%2Fmerge_requests%2F1352&data=02%7C01% > >>> | > | 7C > > >>> | > | simonpj%40microsoft.com > %7Ce03ba07f29c447c1252e08d7012c9b59%7C72f988b > >>> | > | f8 > > >>> | > | > 6f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=xZZiF > >>> | > | zO > CRNpEskjO1MVSONbDvug9dyGEQtaHHSpGeCk%3D&reserved=0 > >>> | > | > > >>> | > | > It clearly says on the first page “The changes were not > merged > >>> | > | into master” > >>> | > | > But lower down (at the end) it says “Merged in 80af...” > >>> | > | > > >>> | > | > What should I believe? Merged or not merged? > >>> | > | > > >>> | > | > Also > >>> | > | > > >>> | > | > It would be really helpful if a MR status, displayed > prominently > >>> | > | at the top, had “Merged” as a status, not just “Closed”. If > I’m > >>> | > | trying to check if my has landed, and I see “Closed”, that > could > >>> | > | mean that someone has (doubtless for good reasons) closed it > >>> | > | manually, and that it will never land. > >>> | > | > > >>> | > | > Would that be possible? > >>> | > | > > >>> | > | > Thanks > >>> | > | > > >>> | > | > Simon > >>> | > | > > >>> | > | > _______________________________________________ > >>> | > | > ghc-devs mailing list > >>> | > | > ghc-devs at haskell.org > >>> | > | > http://mail. > >>> | > | > > >>> | > | haskell.org > %2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C > >>> | > | 01 > > >>> | > | %7Csimonpj%40microsoft.com > %7Ce03ba07f29c447c1252e08d7012c9b59%7C72f9 > >>> | > | 88 > > >>> | > | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C636979163409361534&sdata=2a > >>> | > | Xm > n8ewTaA3S8y5eg0sa0lIed7L7BQRfm4jRTTvoO8%3D&reserved=0 > >>> _______________________________________________ > >>> ghc-devs mailing list > >>> ghc-devs at haskell.org > >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From svenpanne at gmail.com Sat Jul 6 20:22:19 2019 From: svenpanne at gmail.com (Sven Panne) Date: Sat, 6 Jul 2019 22:22:19 +0200 Subject: Gitlab workflow In-Reply-To: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> Message-ID: Am Sa., 6. Juli 2019 um 19:06 Uhr schrieb Bryan Richter : > [...] Rather than argue against GHC's current practices, however, I would > like > to understand them better. What issues led to a rebase-only workflow? > Which expert opinions were considered? What happy stories can people > relate? We recently switched away from a rebase-only workflow at > $workplace, and it's already made life so much nicer for us -- so I'm > curious what unforeseen pain we might be in for. :) I've worked for several companies of several sizes, and from my experience the rule is: The bigger the company, the more there is a tendency to use a rebase-only workflow, with big/huge projects exclusively relying on rebases, explicitly forbidding (non-fast-forward) merges. There are several good reasons for this IMHO: * Clarity: Even with a single release branch, merges tend to create an incomprehensible mess in the history. Things get totally unmanageable when you have to support several releases in various branches. IMHO this reason alone is enough to rule out non-fast-forward merges in bigger projects. * Bisecting: With merges you will have a very, very hard time bisecting your history to find a bug (or a fix). With a linear (single release) or tree-shaped (for several supported releases) history, this gets trivial and can easily be automated. * Hash instability: Simply relying on a hash to find out if a fix/feature is in some branch is an illusion: Sooner or later you get a merge conflict and need to modify your commit. * Tool integration via IDs: For the reason stated above, you will have some kind of bug/feature/issue/...-ID e.g. in your commit message, anyway. This ID is then used in your issue tracker/release management tool/..., not the hash of a commit in some branch. Of course your mileage may vary, depending on your team and project size, the additional tools you use, how good your CI/testing/release framework is, etc. GitLab's machinery may still be in it's infancy, but some kind of bot picking/testing/committing (even reverting, if necessary) your changes is a very common and scalable way of doing things. Or the other way round: If you don't do this, and your project gets bigger, you have an almost 100% guarantee that the code in your repository is broken in some way. :-} -------------- next part -------------- An HTML attachment was scrubbed... URL: From b at chreekat.net Sun Jul 7 15:06:31 2019 From: b at chreekat.net (Bryan Richter) Date: Sun, 7 Jul 2019 18:06:31 +0300 Subject: Gitlab workflow In-Reply-To: References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> Message-ID: An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Sun Jul 7 16:53:16 2019 From: svenpanne at gmail.com (Sven Panne) Date: Sun, 7 Jul 2019 18:53:16 +0200 Subject: Gitlab workflow In-Reply-To: References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> Message-ID: Am So., 7. Juli 2019 um 17:06 Uhr schrieb Bryan Richter : > How does the scaling argument reconcile with the massive scope of the > Linux kernel, the project for which git was created? 
I can find some middle > ground with the more specific points you made in your email, but I have yet > to understand how the scaling argument holds water when Linux trucks along > with "4000 developers, 450 different companies, and 200 new developers each > release"[1]. What makes Linux special in this regard? Is there some second > inflection point? > Well, somehow I saw that example coming... :-D I think the main reason why things work for Linux is IMHO the amount of highly specialized high-quality maintainers, i.e. the people who pick the patches into the (parts of) the releases they maintain, and who do it as their main (sole?) job. In addition they have a brutal review system plus an army of people continuously testing *and* they have Linus. Now look at your usual company: You have average people there (at best), silly deadlines for silly features, no real maintainers with the power to reject/revert stuff (regardless of any deadline), your testing is far from where it should be etc. etc. Then you do everything to keep things as simple as possible, and having a repository with no merge commits *is* much easier to handle than one with merges. If you are happy with merge commits, by all means continue to use them. The "right" way of doing things depends on so many factors (project size/complexity, number/qualification of people/maintainers, release strategy/frequency, ...) that there is probably no silver bullet. The good thing is: Git doesn't prescribe you a special kind of workflow, it is more of a toolbox to build your own. I would very much like to turn the question around: I never fully understood why some people like merge-based workflows so much. OK, you can see that e.g. commits A, B, and C together implement feature X, but to be honest: After the feature X landed, probably nobody really cares about the feature's history anymore, you normally care much more about: Which commit broke feature Y? Which commit slowed down things? Which commit introduced a space leak/race condition? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Jul 7 16:57:32 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 07 Jul 2019 12:57:32 -0400 Subject: Partial GitLab outage Message-ID: <87zhlpsryh.fsf@smart-cactus.org> Hi everyone, For the past 12 hours or so the hosting provider which hosts haskell.org's infrastructure has been having issues [1] with their block storage infrastructure. Due to this outage the search and Docker registry functionality hosted by gitlab.haskell.org are currently unavailable. We hope that service will be restored by the end of the day. Cheers, - Ben [1] https://status.packet.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sun Jul 7 17:05:19 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 07 Jul 2019 13:05:19 -0400 Subject: Gitlab workflow In-Reply-To: References: Message-ID: <87wogtsrlf.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > | You believe the one which marge posts telling you that the patch is > | merged, the commit it links to is on master so you can clearly see the > | patch has been committed. > > OK. The earlier one, also from Marge, not the Discussion stream but rather in the panel at the top, says > > Closed by Marge Bot 8 hours ago > The changes were not merged into master > > So that is an outright lie? 
Yes it is closed, but contrary to the statement it _has_ been merged. > > It's unfortunate that this misleading display is right at top, in the summary material, while the truth (that it has been merged) is buried in the Discussion stream. > > Alas. But thank you for clarifying. > > Is this something we can raise with the Gitlab folk? It seems so egregiously wrong. > Indeed this is a bit of a limitation of GitLab and is a result of the fact that our Marge fork uses a somewhat hacky workflow for its merges to work around other GitLab bugs. There is indeed GitLab issue [1] requesting the ability to manually mark merge requests as merged, but it seems unlikely that this will happen in the near future. Thankfully, our days of using Marge are numbered. GitLab 12.0, which shipped two weeks ago, introduced Merge Train support. I haven't yet upgraded our installation but will try to do so this week. There are a few annoying limitations of merge train support surrounding forks, but I believe we can work around these for the time being. Cheers, - Ben [1] https://gitlab.com/gitlab-org/gitlab-ee/issues/3033 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sun Jul 7 17:32:16 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 07 Jul 2019 13:32:16 -0400 Subject: Issue weight migration Message-ID: <87k1ctsqci.fsf@smart-cactus.org> Hi everyone, Today I will run the migration moving the information encoded in issue weights to priority labels, as discussed on this list last week [1]. Cheers, - Ben [1] https://mail.haskell.org/pipermail/ghc-devs/2019-July/017851.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Mon Jul 8 08:22:19 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Jul 2019 08:22:19 +0000 Subject: Issue weight migration In-Reply-To: <87k1ctsqci.fsf@smart-cactus.org> References: <87k1ctsqci.fsf@smart-cactus.org> Message-ID: Thanks Ben. Did we agree to have * 3 explicit labels (high, normal, low) * With absence of a label indicating "has not been assigned a priority" which you can also read as "needs triage". I would strongly prefer not to have "no label" = "low priority" as I described earlier Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 07 July 2019 18:32 | To: GHC developers | Subject: Issue weight migration | | Hi everyone, | | Today I will run the migration moving the information encoded in issue | weights to priority labels, as discussed on this list last week [1]. | | Cheers, | | - Ben | | | [1] | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fmail.ha | skell.org%2Fpipermail%2Fghc-devs%2F2019- | July%2F017851.html&data=02%7C01%7Csimonpj%40microsoft.com%7Cf79dfc3d0 | 20e46af165f08d70301164c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6369 | 81175530582443&sdata=e47yXB%2B5Ox69IjV2TNcOWnKXZVa5zpFN7L9Bcga4E%2F8% | 3D&reserved=0 From ben at well-typed.com Mon Jul 8 10:43:30 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 08 Jul 2019 06:43:30 -0400 Subject: Issue weight migration In-Reply-To: References: <87k1ctsqci.fsf@smart-cactus.org> Message-ID: <877e8sst6a.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Thanks Ben. 
Did we agree to have > > * 3 explicit labels (high, normal, low) > > * With absence of a label indicating "has not been assigned a priority" > which you can also read as "needs triage". > > I would strongly prefer not to have > "no label" = "low priority" > as I described earlier > We actually have four labels (highest, high, normal, low), mirroring Trac. On further reflection I agree with you; "no label" = "normal priority" left a bit too much implicit. Regardless, whether we want to equate the lack of a priority label with "needs triage" is another decision. I'm not opposed to this but I do wonder whether issue reporters might be tempted to set the ticket priority, thereby inadvertently circumventing the usual triage process. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Jul 8 10:59:14 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 08 Jul 2019 06:59:14 -0400 Subject: GitLab back up Message-ID: <874l3wssg2.fsf@smart-cactus.org> Hi everyone, All haskell.org services should now be restored after this weekend's storage outage. Thanks for your patience. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From allbery.b at gmail.com Mon Jul 8 11:46:23 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 8 Jul 2019 07:46:23 -0400 Subject: Issue weight migration In-Reply-To: <877e8sst6a.fsf@smart-cactus.org> References: <87k1ctsqci.fsf@smart-cactus.org> <877e8sst6a.fsf@smart-cactus.org> Message-ID: Isn't there already a "needs triage" label separate from this? Which would make that plus explicit priority a suggested priority to guide whoever's doing triage. (I expect triage goes beyond simply priority setting, e.g. making sure it has the right component(s) and maybe assigning specific people who know that component.) On Mon, Jul 8, 2019 at 6:43 AM Ben Gamari wrote: > Simon Peyton Jones via ghc-devs writes: > > > Thanks Ben. Did we agree to have > > > > * 3 explicit labels (high, normal, low) > > > > * With absence of a label indicating "has not been assigned a priority" > > which you can also read as "needs triage". > > > > I would strongly prefer not to have > > "no label" = "low priority" > > as I described earlier > > > We actually have four labels (highest, high, normal, low), mirroring > Trac. On further reflection I agree with you; "no label" = "normal > priority" left a bit too much implicit. > > Regardless, whether we want to equate the lack of a priority label with > "needs triage" is another decision. I'm not opposed to this but I do > wonder whether issue reporters might be tempted to set the ticket > priority, thereby inadvertently circumventing the usual triage process. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 8 12:29:53 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Jul 2019 12:29:53 +0000 Subject: Test failures with -DDEBUG stage2 Message-ID: Devs Should I expect failures with a stage2 compiler built with -DDEBUG? 
I'd much prefer the answer to be "no". But I see several failures:

Unexpected failures:
   deriving/should_compile/T11148.run          T11148 [bad exit code (2)] (normal)
   indexed-types/should_compile/T11361.run     T11361 [exit code non-0] (normal)
   polykinds/T11362.run                        T11362 [exit code non-0] (normal)
   stranal/should_compile/T9208.run            T9208 [exit code non-0] (optasm)
   typecheck/should_fail/TcCoercibleFail.run   TcCoercibleFail [stderr mismatch] (normal)

Fragile tests:
   libraries/base/tests/CPUTime001.run         CPUTime001 [fragile pass] (normal)
   profiling/should_run/heapprof001.run        heapprof001 [fragile fail] (ghci-ext-prof)
   profiling/should_run/heapprof001.run        heapprof001 [fragile fail] (prof)
   profiling/should_run/T15897.run             T15897 [fragile fail] (profasm)
   ghci/should_run/T3171.run                   T3171 [fragile pass] (normal)
   profiling/should_run/T5559.run              T5559 [fragile fail] (ghci-ext-prof)
   profiling/should_run/T5559.run              T5559 [fragile fail] (prof)
   concurrent/should_run/T5611.run             T5611 [fragile pass] (normal)
   concurrent/should_run/T5611a.run            T5611a [fragile pass] (normal)

Of these, the first three are

   ghc-stage2: compiler/cbits/genSym.c:14: checkUniqueRange: Assertion `u != UNIQUE_MASK' failed.

What's going on there?

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ryan.gl.scott at gmail.com Mon Jul 8 14:19:51 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 8 Jul 2019 10:19:51 -0400 Subject: lint-submods-marge consistently failing when attempting to update Haddock Message-ID:

Ben indicates (on the #ghc IRC channel) that he suspects something is amiss with the lint-submods-marge job.

Ryan S.

From ben at smart-cactus.org Mon Jul 8 19:20:54 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 08 Jul 2019 15:20:54 -0400 Subject: Test failures with -DDEBUG stage2 In-Reply-To: References: Message-ID: <871rz0s580.fsf@smart-cactus.org>

Simon Peyton Jones via ghc-devs writes:

> Devs
> Should I expect failures with a stage2 compiler built with -DDEBUG?
> I'd much prefer the answer to be "no".

The answer is indeed "no" and I have been working on fixing these in !1296 but I've been reluctant to commit until all tests are green. I'll try to get a subset of my fixes merged soon to at least resolve the issues that you are seeing.

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL:
From simonpj at microsoft.com Tue Jul 9 15:02:00 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 9 Jul 2019 15:02:00 +0000 Subject: Gitlab wiki Message-ID:

I'm looking at a GHC wiki page: https://gitlab.haskell.org/ghc/ghc/wikis/Developing-Hadrian

Whenever looking at such a wiki page, the right-hand third of my landscape screen is taken up with a mostly-blank menu. Can I hide it, so I can read the document more easily? As it is, I have large areas of uniform grey on my precious pixels.

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mail at joachim-breitner.de Wed Jul 10 07:11:29 2019 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 10 Jul 2019 09:11:29 +0200 Subject: The GHC Committee welcomes its new members Message-ID: <5449e89bc59a881d9189d38cd19a98eb1b7cdcaa.camel@joachim-breitner.de>

Dear Haskell community,

the GHC Steering committee welcomes its two new members, Sandy Maguire and Arnaud Spiwack.
We are happy to see that there is continued interest in our work, and are looking forward to the insights and energy that Sandy and Arnaud bring to the committee. They take the seats of Ben Gamari and Manuel Chakravarty. A big thanks to Ben and Manuel for their contributions to GHC and the proposal committee, in particular to Ben, who created the initial draft of the proposal process.

We had four nominations in total, and I would like to preserve their privacy. Since we only had two seats to fill, we had to make a pick; nevertheless I am grateful for all nominations, and encourage everyone, including those who did not make it this time, to try again next time.

Neither the committee nor its process is perfect, as the way two recent proposals went shows, but we are constantly trying to refine and improve. Your feedback is always welcome, either on the committee mailing list, or in private communication with the Chairs (the two Simons), me, or any other member.

On behalf of the committee,
Joachim Breitner

PS: Sandy and Arnaud, please subscribe to the mailing list at https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-steering-committee

--
Joachim Breitner
mail at joachim-breitner.de
http://www.joachim-breitner.de/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL:
From ml at stefansf.de Thu Jul 11 09:06:55 2019 From: ml at stefansf.de (Stefan Schulze Frielinghaus) Date: Thu, 11 Jul 2019 11:06:55 +0200 Subject: Cabal reports mismatched interface file ways dyn Message-ID: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com>

Hi all,

I'm trying to compile GHC 8.6.5 using the LLVM backend and dynamic linking. My build.mk file looks as follows:

   include mk/flavours/quick-llvm.mk
   DYNAMIC_BY_DEFAULT = YES

I can build a stage3 compiler. However, while executing `make install` the following error shows up:

   "inplace/bin/ghc-cabal" register libraries/ghc-prim dist-install "/devel/ghc86/lib/ghc-8.6.5/bin/ghc" "/devel/ghc86/lib/ghc-8.6.5/bin/ghc-pkg" "/devel/ghc86/lib/ghc-8.6.5" '' '/devel/ghc86' '/devel/ghc86/lib/ghc-8.6.5' '/devel/ghc86/share/doc/ghc-8.6.5/html/libraries' NO
   ghc-cabal: '/devel/ghc86/lib/ghc-8.6.5/bin/ghc' exited with an error:
   Bad interface file: dist-install/build/GHC/CString.hi
       mismatched interface file ways (wanted "dyn", got "")

I found similar reports [1,2] from 2013 but no solution. Any ideas how to fix this?

Cheers,
Stefan

[1] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003488.html
[2] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003507.html

From siddu.druid at gmail.com Thu Jul 11 09:19:25 2019 From: siddu.druid at gmail.com (Siddharth Bhat) Date: Thu, 11 Jul 2019 11:19:25 +0200 Subject: Cleanly setting C compiler options when building RTS Message-ID:

Hello all,

I was interested in building the GHC RTS with GCC's AddressSanitizer and Ubsan enabled. What I want to do very specifically is to pass "-fsanitize=address -fsanitize=undefined" when compiling the RTS.

What's the "correct" way to set this up in the build system? Is there a configure flag? Do I need to change the Shake script?

Thanks,
~Siddharth
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From alp at well-typed.com Thu Jul 11 09:48:43 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Thu, 11 Jul 2019 11:48:43 +0200 Subject: Cleanly setting C compiler options when building RTS In-Reply-To: References: Message-ID: <2e6ce580-eaad-49f2-a4aa-1b77c237d857@well-typed.com> Since you mention the Shake script, should we assume you're using Hadrian, or the Make build system? On 11/07/2019 11:19, Siddharth Bhat wrote: > Hello all, > > I was interested in building the GHC RTS with GCC's AddressSanitizer > and Ubsan enabled. > > What I want to do very specifically is to pass "-fsanitize=address > -fsanitize=undefined" when compiling the RTS. > > What's the "correct" way to set this up in the build system? Is there > a configure flag? Do I need to change the Shake script? > Thanks, > ~Siddharth > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Jul 11 09:49:49 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 11 Jul 2019 10:49:49 +0100 Subject: Cleanly setting C compiler options when building RTS In-Reply-To: References: Message-ID: Hi Siddharth, The correct way is to create a custom flavour with something like the following in. grts = quickFlavour { name = "grts", args = args quickFlavour <> (builder Cc ? package rts ? arg "-g3" <> arg "-O0") } Cheers, Matt On Thu, Jul 11, 2019 at 10:20 AM Siddharth Bhat wrote: > > Hello all, > > I was interested in building the GHC RTS with GCC's AddressSanitizer and Ubsan enabled. > > What I want to do very specifically is to pass "-fsanitize=address -fsanitize=undefined" when compiling the RTS. > > What's the "correct" way to set this up in the build system? Is there a configure flag? Do I need to change the Shake script? > Thanks, > ~Siddharth > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From alp at well-typed.com Thu Jul 11 09:55:00 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Thu, 11 Jul 2019 11:55:00 +0200 Subject: Cleanly setting C compiler options when building RTS In-Reply-To: References: Message-ID: <2f958db1-4f98-e732-6f7c-10bca40b2199@well-typed.com> That's indeed one Hadrian solution. An alternative one (requires 'master' from yesterday) is: $ hadrian/build.sh --flavour=... "stage1.rts.cc.c.opts += -fsanitize=address -fsanitize=undefined" Note that I'm not quite sure who between GHC and the C compiler gets invoked to build the RTS. If it's GHC, then you'll need to replace 'builder Cc' in Matthew's solution with 'builder (Ghc CompileCWithGhc)', or with the key-value style approach: $ hadrian/build.sh --flavour=... "stage1.rts.ghc.c.opts += -fsanitize=address -fsanitize=undefined" You can find more about how both of those settings mechanisms work here: https://gitlab.haskell.org/ghc/ghc/blob/master/hadrian/doc/user-settings.md ... once GitLab works again. 
:-) Until then: https://github.com/ghc/ghc/blob/master/hadrian/doc/user-settings.md On 11/07/2019 11:49, Matthew Pickering wrote: > Hi Siddharth, > > The correct way is to create a custom flavour with something like the > following in. > > grts = quickFlavour { name = "grts", args = args quickFlavour <> > (builder Cc ? package rts ? arg "-g3" <> arg "-O0") } > > Cheers, > > Matt > > On Thu, Jul 11, 2019 at 10:20 AM Siddharth Bhat wrote: >> Hello all, >> >> I was interested in building the GHC RTS with GCC's AddressSanitizer and Ubsan enabled. >> >> What I want to do very specifically is to pass "-fsanitize=address -fsanitize=undefined" when compiling the RTS. >> >> What's the "correct" way to set this up in the build system? Is there a configure flag? Do I need to change the Shake script? >> Thanks, >> ~Siddharth >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England From matthewtpickering at gmail.com Thu Jul 11 10:03:16 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 11 Jul 2019 11:03:16 +0100 Subject: Cabal reports mismatched interface file ways dyn In-Reply-To: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com> References: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com> Message-ID: Does the quick flavour build the dynamic libraries? On Thu, Jul 11, 2019 at 10:07 AM Stefan Schulze Frielinghaus wrote: > > Hi all, > > I'm trying to compile GHC 8.6.5 using the LLVM backend and dynamic linking. My > build.mk file looks as follows: > > include mk/flavours/quick-llvm.mk > DYNAMIC_BY_DEFAULT = YES > > I can build a stage3 compiler. However while executing `make install` the > following error shows up: > > "inplace/bin/ghc-cabal" register libraries/ghc-prim dist-install "/devel/ghc86/lib/ghc-8.6.5/bin/ghc" "/devel/ghc86/lib/ghc-8.6.5/bin/ghc-pkg" "/devel/ghc86/lib/ghc-8.6.5" '' '/devel/ghc86' '/devel/ghc86/lib/ghc-8.6.5' '/devel/ghc86/share/doc/ghc-8.6.5/html/libraries' NO > ghc-cabal: '/devel/ghc86/lib/ghc-8.6.5/bin/ghc' exited with an error: > Bad interface file: dist-install/build/GHC/CString.hi > mismatched interface file ways (wanted "dyn", got "") > > I found similar reports [1,2] from 2013 but no solution. Any ideas how to fix > this? > > Cheers, > Stefan > > [1] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003488.html > [2] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003507.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From b at chreekat.net Thu Jul 11 12:31:52 2019 From: b at chreekat.net (Bryan Richter) Date: Thu, 11 Jul 2019 15:31:52 +0300 Subject: Gitlab workflow In-Reply-To: References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> Message-ID: <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> On 7/7/19 7:53 PM, Sven Panne wrote:> Am So., 7. Juli 2019 um 17:06 Uhr schrieb Bryan Richter : > > How does the scaling argument reconcile with the massive scope > > of the Linux kernel, the project for which git was created? 
I > > can find some middle ground with the more specific points > > you made in your email, but I have yet to understand how the > > scaling argument holds water when Linux trucks along with "4000 > > developers, 450 different companies, and 200 new developers each > > release"[1]. What makes Linux special in this regard? Is there > > some second inflection point? > > > Well, somehow I saw that example coming... :-D I think the main > reason why things work for Linux is IMHO the amount of highly > specialized high-quality maintainers, i.e. the people who pick the > patches into the (parts of) the releases they maintain, and who do > it as their main (sole?) job. In addition they have a brutal review > system plus an army of people continuously testing *and* they have > Linus. :D I would add to your argument that they appear to use git primarily to *keep a record of merges*. Incoming patches have no history whatsoever; they're just individual patches. I guess that could be considered a simpler-to-use version of the fast-forward-only strategy! Perhaps Linux isn't such a great counterexample after all.... Once they have committed patches to some particular history, though, they don't rebase, since that would rewrite important audit history. > I would very much like to turn the question around: I never fully > understood why some people like merge-based workflows so much. OK, > you can see that e.g. commits A, B, and C together implement feature > X, but to be honest: After the feature X landed, probably nobody > really cares about the feature's history anymore, you normally care > much more about: Which commit broke feature Y? Which commit slowed > down things? Which commit introduced a space leak/race condition? What I *don't* like is rewriting history, for all the reasons I don't like mutable state. As you say, what you're generally interested in is commits. When references to commits (in emails etc.) get invalidated, it adds confusion and extra work. Seeing this happen is what led me to wonder why people even prefer this strategy. On top of that, many of the problems people have with merges actually seem to be problems with bad commits, as you yourself hinted. Other concerns seem to be based in unfamiliarity with git's features, or an irrational desire for "pure history". (Merges *are* history!) One final thing I like about merges is conflict resolution. Resolving conflicts via rebase is something I get wrong 40% of the time. It's hard. Even resolving a conflict during a merge is hard, but it's easier. Plus, the eventual merge commit keeps a record of the resolution! (I only learned this recently, since `git log` doesn't show it by default.) Keeping a public record of how a conflict was resolved seems like a huge benefit. In the end, my main takeaway is that good commits are just as important as good code, regardless of strategy. 
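For concreteness, since that record is easy to miss: the resolution stored in a merge commit can be surfaced with stock git options. A small sketch -- the commit id is a placeholder, and none of this is GHC-specific:

    # plain `git log -p` skips the patch of merge commits entirely;
    # --cc asks for the combined diff against both parents, which is
    # where a recorded conflict resolution shows up:
    git show --cc <merge-commit>
    git log -p --cc -- compiler/

    # and, conversely, a linear-looking view that follows only the
    # first parent of each merge:
    git log --first-parent --oneline master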
From ml at stefansf.de Thu Jul 11 14:38:30 2019 From: ml at stefansf.de (Stefan Schulze Frielinghaus) Date: Thu, 11 Jul 2019 16:38:30 +0200 Subject: Cabal reports mismatched interface file ways dyn In-Reply-To: References: <20190711090655.GA1481@dyn-9-152-222-29.boeblingen.de.ibm.com> Message-ID: <20190711143830.GA31385@dyn-9-152-222-29.boeblingen.de.ibm.com> Yes, it looks so: $ ls -1 libraries/ghc-prim/dist-install/build/GHC/CString.* libraries/ghc-prim/dist-install/build/GHC/CString.dyn_hi libraries/ghc-prim/dist-install/build/GHC/CString.dyn_o libraries/ghc-prim/dist-install/build/GHC/CString.hi libraries/ghc-prim/dist-install/build/GHC/CString.o $ inplace/bin/ghc-stage2 --show-iface libraries/ghc-prim/dist-install/build/GHC/CString.hi | grep --after-context=1 Way Way: Wanted [d, y, n], got [] On Thu, Jul 11, 2019 at 11:03:16AM +0100, Matthew Pickering wrote: > Does the quick flavour build the dynamic libraries? > > On Thu, Jul 11, 2019 at 10:07 AM Stefan Schulze Frielinghaus > wrote: > > > > Hi all, > > > > I'm trying to compile GHC 8.6.5 using the LLVM backend and dynamic linking. My > > build.mk file looks as follows: > > > > include mk/flavours/quick-llvm.mk > > DYNAMIC_BY_DEFAULT = YES > > > > I can build a stage3 compiler. However while executing `make install` the > > following error shows up: > > > > "inplace/bin/ghc-cabal" register libraries/ghc-prim dist-install "/devel/ghc86/lib/ghc-8.6.5/bin/ghc" "/devel/ghc86/lib/ghc-8.6.5/bin/ghc-pkg" "/devel/ghc86/lib/ghc-8.6.5" '' '/devel/ghc86' '/devel/ghc86/lib/ghc-8.6.5' '/devel/ghc86/share/doc/ghc-8.6.5/html/libraries' NO > > ghc-cabal: '/devel/ghc86/lib/ghc-8.6.5/bin/ghc' exited with an error: > > Bad interface file: dist-install/build/GHC/CString.hi > > mismatched interface file ways (wanted "dyn", got "") > > > > I found similar reports [1,2] from 2013 but no solution. Any ideas how to fix > > this? > > > > Cheers, > > Stefan > > > > [1] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003488.html > > [2] https://mail.haskell.org/pipermail/ghc-devs/2013-December/003507.html > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Thu Jul 11 15:36:04 2019 From: ben at well-typed.com (Ben Gamari) Date: Thu, 11 Jul 2019 11:36:04 -0400 Subject: Issue weight migration In-Reply-To: References: <87k1ctsqci.fsf@smart-cactus.org> <877e8sst6a.fsf@smart-cactus.org> Message-ID: <87a7dkr3by.fsf@smart-cactus.org> Brandon Allbery writes: > Isn't there already a "needs triage" label separate from this? Which would > make that plus explicit priority a suggested priority to guide whoever's > doing triage. (I expect triage goes beyond simply priority setting, e.g. > making sure it has the right component(s) and maybe assigning specific > people who know that component.) > Yes, this is precisely my concern. If our experience with Trac is any guide it seems likely that reporters will indeed set issues priorities and it's not clear that this is something that we want to discourage. For this reason I haven't yet removed the "needs triage" label and won't do so until we get a sense for how frequently reporters set the priority themselves. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Jul 11 15:47:48 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 11 Jul 2019 11:47:48 -0400 Subject: Gitlab workflow In-Reply-To: <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> Message-ID: <878st4r2sg.fsf@smart-cactus.org> Bryan Richter writes: > On 7/7/19 7:53 PM, Sven Panne wrote:> Am So., 7. Juli 2019 um 17:06 > Uhr schrieb Bryan Richter : > > > > How does the scaling argument reconcile with the massive scope > > > of the Linux kernel, the project for which git was created? I > > > can find some middle ground with the more specific points > > > you made in your email, but I have yet to understand how the > > > scaling argument holds water when Linux trucks along with "4000 > > > developers, 450 different companies, and 200 new developers each > > > release"[1]. What makes Linux special in this regard? Is there > > > some second inflection point? > > > > > > Well, somehow I saw that example coming... :-D I think the main > > reason why things work for Linux is IMHO the amount of highly > > specialized high-quality maintainers, i.e. the people who pick the > > patches into the (parts of) the releases they maintain, and who do > > it as their main (sole?) job. In addition they have a brutal review > > system plus an army of people continuously testing *and* they have > > Linus. > > :D > > I would add to your argument that they appear to use git primarily > to *keep a record of merges*. Incoming patches have no history > whatsoever; they're just individual patches. I guess that could be > considered a simpler-to-use version of the fast-forward-only strategy! > Perhaps Linux isn't such a great counterexample after all.... > > Once they have committed patches to some particular history, though, > they don't rebase, since that would rewrite important audit history. > > > I would very much like to turn the question around: I never fully > > understood why some people like merge-based workflows so much. OK, > > you can see that e.g. commits A, B, and C together implement feature > > X, but to be honest: After the feature X landed, probably nobody > > really cares about the feature's history anymore, you normally care > > much more about: Which commit broke feature Y? Which commit slowed > > down things? Which commit introduced a space leak/race condition? > > What I *don't* like is rewriting history, for all the reasons I don't > like mutable state. As you say, what you're generally interested in is > commits. When references to commits (in emails etc.) get invalidated, > it adds confusion and extra work. Seeing this happen is what led me to > wonder why people even prefer this strategy. > I would reiterate this. In my experience when I'm looking back at GHC's history I'm probably doing so for one of a few possible reasons: * I want to know which patch broke something * I want to know which patch made something slower * I want to know which patch added something In all of these cases I (personally) find a linear history makes reasoning about the progression of changes much easier. Bisection, blame, and performance analysis tools are all much easier when you have only one "past" to worry about. > On top of that, many of the problems people have with merges actually > seem to be problems with bad commits, as you yourself hinted. 
Other > concerns seem to be based in unfamiliarity with git's features, or an > irrational desire for "pure history". (Merges *are* history!) > > One final thing I like about merges is conflict resolution. Resolving > conflicts via rebase is something I get wrong 40% of the time. It's > hard. Even resolving a conflict during a merge is hard, but it's > easier. > I strongly disagree here. In my experience, resolving conflicts via rebase is much easier than doing so via merge (which is one of the reason why I personally use a rebase workflow even outside of GHC). The difference is that during a rebase workflow I can reason about the changes made by each commit individually. I can look at the diff of the original commit (which is generally small, if history was constructed well), refer to the relevant subset of changes from the new commits I'm rebasing on top of, and adapt my changes needing only this "local" state. By contrast during a merge I need to keep both the entirety of my branch as well as every new commit that I'm merging into in my head. Not only is this often plain infeasible (e.g. I can't imagine trying to do this with the recent concurrent GC patches), but you end up with a result that is incoherent since changes that were likely relevant to your feature branch commits end up recorded in the merge commit. > Plus, the eventual merge commit keeps a record of the > resolution! (I only learned this recently, since `git log` doesn't > show it by default.) Keeping a public record of how a conflict was > resolved seems like a huge benefit. I'm not sure I see the value in this. To me it seems like the merge resolution is just another step in the *development* of the patch. We generally don't preserve such steps in history. We only care about the fully-consistent state of the patch when it is merged. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From svenpanne at gmail.com Thu Jul 11 15:49:39 2019 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 11 Jul 2019 17:49:39 +0200 Subject: Gitlab workflow In-Reply-To: <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> Message-ID: Am Do., 11. Juli 2019 um 14:32 Uhr schrieb Bryan Richter : > [...] When references to commits (in emails etc.) get invalidated, > it adds confusion and extra work. Seeing this happen is what led me to > wonder why people even prefer this strategy. > I think there is a misunderstanding here: You never ever force-push rebased commits to a public repo, this would indeed change commit hashes and annoy all your collaborators like hell. In a rebase-only workflow you rebase locally, pushing your then fast-forward-only merge to the public repo. You can even disable force-pushed on the receiving side, an that's what is normally done (well, not on GitHub...). > [...] One final thing I like about merges is conflict resolution. Resolving > conflicts via rebase is something I get wrong 40% of the time. It's > hard. Even resolving a conflict during a merge is hard, but it's > easier. Hmmm, I don't see a difference with conflict resolution in both cases, the work involved is equivalent. > Plus, the eventual merge commit keeps a record of the > resolution! (I only learned this recently, since `git log` doesn't > show it by default.) 
Keeping a public record of how a conflict was > resolved seems like a huge benefit. [...] > To me it is quite the opposite: In a collaborative environment, I don't care even the tiniest bit about how somebody resolved the conflicts of his branch: This is a technical artifact about when the branch was made and when it is merged. -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Thu Jul 11 17:34:59 2019 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 11 Jul 2019 19:34:59 +0200 Subject: Gitlab workflow In-Reply-To: <878st4r2sg.fsf@smart-cactus.org> References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> <878st4r2sg.fsf@smart-cactus.org> Message-ID: On 11/07/2019 17.47, Ben Gamari wrote: [--snip--] I'm just going to snip all of that because that was an almost perfect summary of why rebase-always is best. I'm *not* talking about rebasing willy-nilly, but I agree that rebasing private branches and rebasing-with-agreement is superior in every way to the just-merge-things approach... and you formulated it perfectly. Thank you. Regards, From b at chreekat.net Fri Jul 12 14:55:25 2019 From: b at chreekat.net (Bryan Richter) Date: Fri, 12 Jul 2019 17:55:25 +0300 Subject: Gitlab workflow In-Reply-To: References: <1051919e-303d-dab7-a7c7-ddd0f0a1e2a7@chreekat.net> <6e7157da-4eb5-c074-2fed-99552aafca4b@chreekat.net> Message-ID: This thread has helped my shift some of my perceptions, but, On 7/11/19 6:49 PM, Sven Panne wrote: > Am Do., 11. Juli 2019 um 14:32 Uhr schrieb Bryan Richter > : > > > [...] When references to commits (in emails etc.) get invalidated, > > it adds confusion and extra work. Seeing this happen is what led > > me to wonder why people even prefer this strategy. > > > I think there is a misunderstanding here: You never ever force-push > rebased commits to a public repo, this would indeed change commit > hashes and annoy all your collaborators like hell. This current thread started precisely because history was rewritten and GitLab could not ascertain that a merge had happened. :) Even if GitLab develops a feature that synchronizes these upstream rebases with the relevant merge request, the problem will still remain in downstream repositories. The source branch of a rebased merge request always ends up orphaned on forks and local repositories. I see why people might make that trade-off, however. And I see that `git cherry` specifically considers this workflow: Determine whether there are commits in .. that are equivalent to those in the range ... The equivalence test is based on the diff, after removing whitespace and line numbers. git-cherry therefore detects when commits have been "copied" by means of git-cherry-pick(1), git-am(1) or git-rebase(1). - git-cherry(1) -Bryan From ben at well-typed.com Sun Jul 14 16:51:24 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 14 Jul 2019 12:51:24 -0400 Subject: Moving head.hackage upstream Message-ID: <875zo4r248.fsf@smart-cactus.org> Hi Herbert, Last week I did some work to clean up and document GHC's head.hackage infrastructure. At this point we have a full CI pipeline, including automatic deployment of a Hackage repository. I asked on #ghc and there was quite some appetite to use gitlab.haskell.org:ghc/head.hackage as the head.hackage upstream repository to eliminate confusion and enjoy the benefits of having merge requests checked via CI. 
Moreover, this would significantly simplify the process of testing GHC against head.hackage as it would eliminate the need to pull from a separate upstream repository. Would you be okay with moving head.hackage's upstream? Thanks again for everything you have done in the head.hackage area. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sun Jul 14 16:58:18 2019 From: ben at well-typed.com (Ben Gamari) Date: Sun, 14 Jul 2019 12:58:18 -0400 Subject: Moving head.hackage upstream In-Reply-To: <875zo4r248.fsf@smart-cactus.org> References: <875zo4r248.fsf@smart-cactus.org> Message-ID: <8736j8r1sn.fsf@smart-cactus.org> Ben Gamari writes: > Hi Herbert, > > Last week I did some work to clean up and document GHC's head.hackage > infrastructure. At this point we have a full CI pipeline, including > automatic deployment of a Hackage repository. > > I asked on #ghc and there was quite some appetite to use > gitlab.haskell.org:ghc/head.hackage as the head.hackage upstream > repository to eliminate confusion and enjoy the benefits of having merge > requests checked via CI. Moreover, this would significantly simplify the > process of testing GHC against head.hackage as it would eliminate the need > to pull from a separate upstream repository. > > Would you be okay with moving head.hackage's upstream? > > Thanks again for everything you have done in the head.hackage area. > I probably should mention that I have documented our infrastructure in two blog posts which I hope to publish this week or next [1,2]. Cheers, - Ben [1] https://gitlab.haskell.org/ghc/homepage/merge_requests/16 [2] https://gitlab.haskell.org/ghc/homepage/merge_requests/29 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From klebinger.andreas at gmx.at Sun Jul 14 22:13:52 2019 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Mon, 15 Jul 2019 00:13:52 +0200 Subject: Is HEAD broken? Message-ID: <7faf971b-d42f-4c6b-fc2e-01b22e8cf597@gmx.at> Is HEAD broken? I get this error with hadrian: I suspect it's only broken on windows and has to do with MSYS #ifdefs /-------------------------------------------------------------------------\ | Successfully built library 'ghci' (Stage0, way v).                      | | Library: _build/stage0/libraries/ghci/build/libHSghci-8.9.0.20190714.a  | | Library synopsis: The library supporting GHC's interactive interpreter. 
| \-------------------------------------------------------------------------/ | Copy package 'ghci' # cabal-copy (for _build/stage0/lib/package.conf.d/ghci-8.9.0.20190714.conf) | Register package 'ghci' # cabal-register (for _build/stage0/lib/package.conf.d/ghci-8.9.0.20190714.conf) | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Encoding/Fusion/Common.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Encoding/Fusion/Common.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Show.hs => _build/stage0/libraries/text/build/Data/Text/Show.o # cabal-configure (for _build/stage0/compiler/setup-config) | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Encoding/Fusion.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Encoding/Fusion.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Lazy/Encoding/Fusion.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Encoding/Fusion.o # cabal-autogen (for _build/stage0/compiler/build/autogen/cabal_macros.h) | Run GhcPkg Dependencies Stage0: process WARNING: cache is out of date: C:\ghc\msys64\opt\ghc\lib\package.conf.d\package.cache ghc will see an old view of this package db. Use 'ghc-pkg recache' to fix. | Run GhcPkg Unregister Stage0: process => none | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Encoding.hs => _build/stage0/libraries/text/build/Data/Text/Encoding.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text.hs => _build/stage0/libraries/text/build/Data/Text.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Foreign.hs => _build/stage0/libraries/text/build/Data/Text/Foreign.o ghc-pkg.exe: cannot find package process | Run GhcPkg Copy Stage0: process => _build/stage0/lib/package.conf.d/process-1.6.5.0.conf | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Read.hs => _build/stage0/libraries/text/build/Data/Text/Read.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Lazy.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Lazy.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/IO.hs => _build/stage0/libraries/text/build/Data/Text/Internal/IO.o WARNING: cache is out of date: C:\ghc\msys64\opt\ghc\lib\package.conf.d\package.cache ghc will see an old view of this package db. Use 'ghc-pkg recache' to fix. 
| Run Cc FindCDependencies Stage0: compiler/parser/cutils.c => _build/stage0/compiler/build/c/parser/cutils.o.d | Run Cc FindCDependencies Stage0: compiler/ghci/keepCAFsForGHCi.c => _build/stage0/compiler/build/c/ghci/keepCAFsForGHCi.o.d | Run Cc FindCDependencies Stage0: compiler/cbits/genSym.c => _build/stage0/compiler/build/c/cbits/genSym.o.d | Run DeriveConstants: none => _build/generated/DerivedConstants.h (and 1 more) | Run DeriveConstants: none => _build/generated/GHCConstantsHaskellExports.hs (and 1 more) | Run Happy: compiler/parser/Parser.y => _build/stage0/compiler/build/Parser.hs | Run DeriveConstants: none => _build/generated/GHCConstantsHaskellWrappers.hs (and 1 more) | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Lazy/Search.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Search.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Lazy/Internal.hs => _build/stage0/libraries/text/build/Data/Text/Lazy/Internal.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Lazy/Fusion.hs => _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Fusion.o | Run Ghc CompileHs Stage0: libraries/text/Data/Text/IO.hs => _build/stage0/libraries/text/build/Data/Text/IO.o | Successfully generated _build/stage0/compiler/build/Config.hs. | Run Alex: compiler/cmm/CmmLex.x => _build/stage0/compiler/build/CmmLex.hs | Run HsCpp: compiler/prelude/primops.txt.pp => _build/stage0/compiler/build/primops.txt In file included from includes/MachDeps.h:45:0,                  from compiler/prelude/primops.txt.pp:122: includes/ghcautoconf.h:1:0: error: unterminated #if  #if !defined(__GHCAUTOCONF_H__) Error when running Shake build system:   at action, called at src\Rules.hs:68:19 in main:Rules   at need, called at src\Rules.hs:90:5 in main:Rules * Depends on: _build/stage0/lib/package.conf.d/ghc-8.9.0.20190714.conf   at need, called at src\Rules\Register.hs:115:5 in main:Rules.Register * Depends on: _build/stage0/compiler/build/libHSghc-8.9.0.20190714.a   at need, called at src\Rules\Library.hs:144:5 in main:Rules.Library * Depends on: _build/stage0/compiler/build/TcTypeable.o   at &%>, called at src\Rules\Compile.hs:47:9 in main:Rules.Compile * Depends on: _build/stage0/compiler/build/TcTypeable.o _build/stage0/compiler/build/TcTypeable.hi   at apply1, called at src\Development\Shake\Internal\Rules\Oracle.hs:159:32 in shake-0.18.3-2a90fc68b337e984af1d3900d8eeed6f2bc6fa1a:Development.Shake.Internal.Rules.Oracle * Depends on: OracleQ (KeyValues ("_build/stage0/compiler/.dependencies","_build/stage0/compiler/build/TcTypeable.o"))   at need, called at src\Hadrian\Oracles\TextFile.hs:96:9 in main:Hadrian.Oracles.TextFile * Depends on: _build/stage0/compiler/.dependencies   at readFile', called at src\Rules\Dependencies.hs:34:19 in main:Rules.Dependencies   at need, called at src\Development\Shake\Internal\Derived.hs:118:15 in shake-0.18.3-2a90fc68b337e984af1d3900d8eeed6f2bc6fa1a:Development.Shake.Internal.Derived * Depends on: _build/stage0/compiler/.dependencies.mk   at need, called at src\Rules\Dependencies.hs:26:9 in main:Rules.Dependencies * Depends on: _build/stage0/compiler/build/primop-fixity.hs-incl   at need, called at src\Rules\Generate.hs:147:5 in main:Rules.Generate * Depends on: _build/stage0/compiler/build/primops.txt * Raised the exception: user error (Development.Shake.cmd, system command failed Command line: E:/ghc_head/inplace/mingw/bin/gcc.exe -E -undef -traditional -P -Iincludes -I_build/generated -I_build/stage0/compiler/build -x c 
compiler/prelude/primops.txt.pp Exit code: 1 Stderr: In file included from includes/MachDeps.h:45:0,                  from compiler/prelude/primops.txt.pp:122: includes/ghcautoconf.h:1:0: error: unterminated #if  #if !defined(__GHCAUTOCONF_H__) From klebinger.andreas at gmx.at Sun Jul 14 22:31:32 2019 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Mon, 15 Jul 2019 00:31:32 +0200 Subject: Is HEAD broken? In-Reply-To: <7faf971b-d42f-4c6b-fc2e-01b22e8cf597@gmx.at> References: <7faf971b-d42f-4c6b-fc2e-01b22e8cf597@gmx.at> Message-ID: <20e28c4a-904d-27c4-c0f8-d3729e250a93@gmx.at> Upon restarting the build it seems to proceed further. Strange but good enough for me at the moment. Andreas Klebinger schrieb am 15.07.2019 um 00:13: > Is HEAD broken? > > I get this error with hadrian: > > I suspect it's only broken on windows and has to do with MSYS #ifdefs > > /-------------------------------------------------------------------------\ > > | Successfully built library 'ghci' (Stage0, way > v).                      | > | Library: > _build/stage0/libraries/ghci/build/libHSghci-8.9.0.20190714.a  | > | Library synopsis: The library supporting GHC's interactive > interpreter. | > \-------------------------------------------------------------------------/ > > | Copy package 'ghci' > # cabal-copy (for > _build/stage0/lib/package.conf.d/ghci-8.9.0.20190714.conf) > | Register package 'ghci' > # cabal-register (for > _build/stage0/lib/package.conf.d/ghci-8.9.0.20190714.conf) > | Run Ghc CompileHs Stage0: > libraries/text/Data/Text/Internal/Encoding/Fusion/Common.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Encoding/Fusion/Common.o > > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Show.hs => > _build/stage0/libraries/text/build/Data/Text/Show.o > # cabal-configure (for _build/stage0/compiler/setup-config) > | Run Ghc CompileHs Stage0: > libraries/text/Data/Text/Internal/Encoding/Fusion.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Encoding/Fusion.o > | Run Ghc CompileHs Stage0: > libraries/text/Data/Text/Internal/Lazy/Encoding/Fusion.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Encoding/Fusion.o > > # cabal-autogen (for _build/stage0/compiler/build/autogen/cabal_macros.h) > | Run GhcPkg Dependencies Stage0: process > WARNING: cache is out of date: > C:\ghc\msys64\opt\ghc\lib\package.conf.d\package.cache > ghc will see an old view of this package db. Use 'ghc-pkg recache' to > fix. 
> | Run GhcPkg Unregister Stage0: process => none > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Encoding.hs => > _build/stage0/libraries/text/build/Data/Text/Encoding.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text.hs => > _build/stage0/libraries/text/build/Data/Text.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Foreign.hs => > _build/stage0/libraries/text/build/Data/Text/Foreign.o > ghc-pkg.exe: cannot find package process > | Run GhcPkg Copy Stage0: process => > _build/stage0/lib/package.conf.d/process-1.6.5.0.conf > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Read.hs => > _build/stage0/libraries/text/build/Data/Text/Read.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/Lazy.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Lazy.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Internal/IO.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/IO.o > WARNING: cache is out of date: > C:\ghc\msys64\opt\ghc\lib\package.conf.d\package.cache > ghc will see an old view of this package db. Use 'ghc-pkg recache' to > fix. > | Run Cc FindCDependencies Stage0: compiler/parser/cutils.c => > _build/stage0/compiler/build/c/parser/cutils.o.d > | Run Cc FindCDependencies Stage0: compiler/ghci/keepCAFsForGHCi.c => > _build/stage0/compiler/build/c/ghci/keepCAFsForGHCi.o.d > | Run Cc FindCDependencies Stage0: compiler/cbits/genSym.c => > _build/stage0/compiler/build/c/cbits/genSym.o.d > | Run DeriveConstants: none => _build/generated/DerivedConstants.h (and > 1 more) > | Run DeriveConstants: none => > _build/generated/GHCConstantsHaskellExports.hs (and 1 more) > | Run Happy: compiler/parser/Parser.y => > _build/stage0/compiler/build/Parser.hs > | Run DeriveConstants: none => > _build/generated/GHCConstantsHaskellWrappers.hs (and 1 more) > | Run Ghc CompileHs Stage0: > libraries/text/Data/Text/Internal/Lazy/Search.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Search.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/Lazy/Internal.hs => > _build/stage0/libraries/text/build/Data/Text/Lazy/Internal.o > | Run Ghc CompileHs Stage0: > libraries/text/Data/Text/Internal/Lazy/Fusion.hs => > _build/stage0/libraries/text/build/Data/Text/Internal/Lazy/Fusion.o > | Run Ghc CompileHs Stage0: libraries/text/Data/Text/IO.hs => > _build/stage0/libraries/text/build/Data/Text/IO.o > | Successfully generated _build/stage0/compiler/build/Config.hs. 
> | Run Alex: compiler/cmm/CmmLex.x => > _build/stage0/compiler/build/CmmLex.hs > | Run HsCpp: compiler/prelude/primops.txt.pp => > _build/stage0/compiler/build/primops.txt > In file included from includes/MachDeps.h:45:0, >                  from compiler/prelude/primops.txt.pp:122: > includes/ghcautoconf.h:1:0: error: unterminated #if >  #if !defined(__GHCAUTOCONF_H__) > > Error when running Shake build system: >   at action, called at src\Rules.hs:68:19 in main:Rules >   at need, called at src\Rules.hs:90:5 in main:Rules > * Depends on: _build/stage0/lib/package.conf.d/ghc-8.9.0.20190714.conf >   at need, called at src\Rules\Register.hs:115:5 in main:Rules.Register > * Depends on: _build/stage0/compiler/build/libHSghc-8.9.0.20190714.a >   at need, called at src\Rules\Library.hs:144:5 in main:Rules.Library > * Depends on: _build/stage0/compiler/build/TcTypeable.o >   at &%>, called at src\Rules\Compile.hs:47:9 in main:Rules.Compile > * Depends on: _build/stage0/compiler/build/TcTypeable.o > _build/stage0/compiler/build/TcTypeable.hi >   at apply1, called at > src\Development\Shake\Internal\Rules\Oracle.hs:159:32 in > shake-0.18.3-2a90fc68b337e984af1d3900d8eeed6f2bc6fa1a:Development.Shake.Internal.Rules.Oracle > > * Depends on: OracleQ (KeyValues > ("_build/stage0/compiler/.dependencies","_build/stage0/compiler/build/TcTypeable.o")) > >   at need, called at src\Hadrian\Oracles\TextFile.hs:96:9 in > main:Hadrian.Oracles.TextFile > * Depends on: _build/stage0/compiler/.dependencies >   at readFile', called at src\Rules\Dependencies.hs:34:19 in > main:Rules.Dependencies >   at need, called at src\Development\Shake\Internal\Derived.hs:118:15 > in > shake-0.18.3-2a90fc68b337e984af1d3900d8eeed6f2bc6fa1a:Development.Shake.Internal.Derived > > * Depends on: _build/stage0/compiler/.dependencies.mk >   at need, called at src\Rules\Dependencies.hs:26:9 in > main:Rules.Dependencies > * Depends on: _build/stage0/compiler/build/primop-fixity.hs-incl >   at need, called at src\Rules\Generate.hs:147:5 in main:Rules.Generate > * Depends on: _build/stage0/compiler/build/primops.txt > * Raised the exception: > user error (Development.Shake.cmd, system command failed > Command line: E:/ghc_head/inplace/mingw/bin/gcc.exe -E -undef > -traditional -P -Iincludes -I_build/generated > -I_build/stage0/compiler/build -x c compiler/prelude/primops.txt.pp > Exit code: 1 > Stderr: > In file included from includes/MachDeps.h:45:0, >                  from compiler/prelude/primops.txt.pp:122: > includes/ghcautoconf.h:1:0: error: unterminated #if >  #if !defined(__GHCAUTOCONF_H__) > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ryan.gl.scott at gmail.com Mon Jul 15 12:10:09 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 15 Jul 2019 08:10:09 -0400 Subject: lint-submods-marge consistently failing when attempting to update Haddock Message-ID: The submodule linter appears to have been disabled in [1]. As Matthew notes in [2], perhaps we should probably open a ticket to track how to restore it. Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/commit/a39a3cd663273c46cf4e346ddf3bf9fb39195c9d [2] https://gitlab.haskell.org/ghc/ghc/commit/a39a3cd663273c46cf4e346ddf3bf9fb39195c9d#note_213227 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ryan.gl.scott at gmail.com Mon Jul 15 12:14:17 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 15 Jul 2019 08:14:17 -0400 Subject: Moving head.hackage upstream Message-ID: Count me among the people who are eagerly awaiting this move. If I understood Ben correctly when discussing this idea with him on #ghc, then one of the benefits of having head.hackage on GitLab would be that the head.hackage index would automatically regenerate any time a commit lands. This would make things far more streamlined than the status quo, where the index has to be regenerated by hand. If head.hackage is migrated over to GitLab, would that change how people are expected to use it? That is, would it still be as simple as copying the repository stanza from [1] into one's cabal.project file? Or would that change with a move to GitLab? Ryan S. ----- [1] http://head.hackage.haskell.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jul 15 14:04:11 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 15 Jul 2019 10:04:11 -0400 Subject: lint-submods-marge consistently failing when attempting to update Haddock In-Reply-To: References: Message-ID: <87wogjpf6y.fsf@smart-cactus.org> Ryan Scott writes: > The submodule linter appears to have been disabled in [1]. As Matthew notes > in [2], perhaps we should probably open a ticket to track how to restore it. > I opened a ticket [1] in my gitlab-migration project where I track this sort of administrative task. Cheers, - Ben [1]https://gitlab.haskell.org/bgamari/gitlab-migration/issues/73 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ollie at ocharles.org.uk Mon Jul 15 14:21:09 2019 From: ollie at ocharles.org.uk (Oliver Charles) Date: Mon, 15 Jul 2019 15:21:09 +0100 Subject: Guarded Impredicativity Message-ID: Hi Alejandro and other GHC devs, I've just been pointed to this mailing list, and in particular the discussion on guarded impredicativity from the Haskell IRC channel. I wasn't following the list before, so sorry if this post comes out of threads! I have a use case for impredicative polymorphism at the moment that comes out of some work on effect systems. Essentially, what I'm trying to do is to use reflection to thread around the interpretation of an effect. One encoding of effects is: newtype Program signature carrier a = Program { ( forall x. signature x -> carrier x ) -> carrier a } But for various reasons, this sucks for writing really high performant code. My formulation is instead to change that -> to =>, and to carry around the signature interpretation with reflection. Thus we have something roughly along the lines of: newtype Program s signature carrier a = Program ( forall m. Monad m => m a ) with runProgram :: ( forall s. Reifies s ( signature :~> carrier ) => Program s signature carrier a ) -> carrier a All is well and good, but from the user's perspective, it sucks to actually compose these things together, due to the `forall s` bit. For example, I can write: foo = runError (runState s myProgram) but I can't write foo = runError . runState s $ myProgram or foo = myProgram & runState s & runError I was excited to hear there is a branch with some progress on GI, but unfortunately it doesn't seem sufficient for my application. 
I've uploaded everything (it's just two files) here: https://gist.github.com/ocharles/8008bf31c70d0190ff3440f9a5b0684d Currently this doesn't compile, but I'd like it to (I'm using the `nix run` command mpickering shared earlier). The problem is Example.hs:55 - if you change the first . to a $, it type checks. Let me know if this is unclear and I'm happy to refine it. I just wanted to show you: * Roughly what I want to do * A concrete program that still fails to type check, even though I believe it should (in some ideal type checker...) Regards, Ollie From ben at well-typed.com Mon Jul 15 16:34:02 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 15 Jul 2019 12:34:02 -0400 Subject: Moving head.hackage upstream In-Reply-To: References: Message-ID: <87sgr7p895.fsf@smart-cactus.org> Ryan Scott writes: > Count me among the people who are eagerly awaiting this move. If I > understood Ben correctly when discussing this idea with him on #ghc, then > one of the benefits of having head.hackage on GitLab would be that the > head.hackage index would automatically regenerate any time a commit lands. > This would make things far more streamlined than the status quo, where the > index has to be regenerated by hand. > > If head.hackage is migrated over to GitLab, would that change how people > are expected to use it? That is, would it still be as simple as copying the > repository stanza from [1] into one's cabal.project file? Or would that > change with a move to GitLab? > There is the question of what would happen to http://head.hackage.haskell.org/. The CI-generated repository is currently deployed via GitLab Pages to http://ghc.gitlab.haskell.org/head.hackage/; the user's experience otherwise does not change. It would be easy to redirect head.hackage.haskell.org there is that is okay with hvr. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From trupill at gmail.com Tue Jul 16 08:24:56 2019 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Tue, 16 Jul 2019 10:24:56 +0200 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Thank you very much, this kind of real world examples are very useful to us. Right now we are still researching what are the possibilities, but we'll try to cover this use case. Regards, Alejandro El lun., 15 jul. 2019 a las 16:21, Oliver Charles () escribió: > Hi Alejandro and other GHC devs, > > I've just been pointed to this mailing list, and in particular the > discussion on guarded impredicativity from the Haskell IRC channel. I > wasn't following the list before, so sorry if this post comes out of > threads! > > I have a use case for impredicative polymorphism at the moment that > comes out of some work on effect systems. Essentially, what I'm trying > to do is to use reflection to thread around the interpretation of an > effect. One encoding of effects is: > > newtype Program signature carrier a = > Program { ( forall x. signature x -> carrier x ) -> carrier a } > > But for various reasons, this sucks for writing really high performant > code. > > My formulation is instead to change that -> to =>, and to carry around > the signature interpretation with reflection. Thus we have something > roughly along the lines of: > > newtype Program s signature carrier a = > Program ( forall m. Monad m => m a ) > > with > > runProgram > :: ( forall s. 
Reifies s ( signature :~> carrier ) => Program s > signature carrier a ) > -> carrier a > > All is well and good, but from the user's perspective, it sucks to > actually compose these things together, due to the `forall s` bit. For > example, I can write: > > foo = > runError (runState s myProgram) > > but I can't write > > foo = > runError . runState s $ > myProgram > > or > > foo = > myProgram > & runState s > & runError > > I was excited to hear there is a branch with some progress on GI, but > unfortunately it doesn't seem sufficient for my application. I've > uploaded everything (it's just two files) here: > > https://gist.github.com/ocharles/8008bf31c70d0190ff3440f9a5b0684d > > Currently this doesn't compile, but I'd like it to (I'm using the `nix > run` command mpickering shared earlier). The problem is Example.hs:55 > - if you change the first . to a $, it type checks. > > Let me know if this is unclear and I'm happy to refine it. I just > wanted to show you: > > * Roughly what I want to do > * A concrete program that still fails to type check, even though I > believe it should (in some ideal type checker...) > > Regards, > Ollie > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jul 17 07:45:43 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 17 Jul 2019 10:45:43 +0300 Subject: What's preventing inlining GHC.Magic.lazy? Message-ID: Hi Simon, I'm trying to understand what's preventing inlining GHC.Magic.lazy. I can see with -ddump-simpl -ddump-simpl-iterations -ddump-prep that we only eliminate it in CorePrep, so it's preserved during simplifications and tidying, but I don't see how. It doesn't have a NOINLINE pragma, and we don't check whether the id we're inlining is lazyId (using MkId.lazyId or MkId.lazyIdKey) anywhere in the compiler as far as I can see. I also checked Note [lazyId magic] in MkId, but it doesn't explain how we avoid inlining it. Could you say a few words on this? Thanks Ömer From matthewtpickering at gmail.com Wed Jul 17 08:05:34 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 17 Jul 2019 09:05:34 +0100 Subject: What's preventing inlining GHC.Magic.lazy? In-Reply-To: References: Message-ID: I think it doesn't get inlined because we don't add an unfolding in the definition of `lazyId` in `MkId`. The definition in `GHC.Magic` is just for documentation I think. Cheers, Matt On Wed, Jul 17, 2019 at 8:46 AM Ömer Sinan Ağacan wrote: > > Hi Simon, > > I'm trying to understand what's preventing inlining GHC.Magic.lazy. I can see > with -ddump-simpl -ddump-simpl-iterations -ddump-prep that we only eliminate it > in CorePrep, so it's preserved during simplifications and tidying, but I don't > see how. It doesn't have a NOINLINE pragma, and we don't check whether the id > we're inlining is lazyId (using MkId.lazyId or MkId.lazyIdKey) anywhere in the > compiler as far as I can see. > > I also checked Note [lazyId magic] in MkId, but it doesn't explain how we avoid > inlining it. > > Could you say a few words on this? > > Thanks > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Wed Jul 17 11:39:02 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 17 Jul 2019 14:39:02 +0300 Subject: What's preventing inlining GHC.Magic.lazy? 
In-Reply-To: References: Message-ID: Thanks Matt, that makes sense. I applied the same idea to another Id and now it's also never inlined now, so I can confirm that this works. Ömer Matthew Pickering , 17 Tem 2019 Çar, 11:05 tarihinde şunu yazdı: > > I think it doesn't get inlined because we don't add an unfolding in > the definition of `lazyId` in `MkId`. > > The definition in `GHC.Magic` is just for documentation I think. > > Cheers, > > Matt > > On Wed, Jul 17, 2019 at 8:46 AM Ömer Sinan Ağacan wrote: > > > > Hi Simon, > > > > I'm trying to understand what's preventing inlining GHC.Magic.lazy. I can see > > with -ddump-simpl -ddump-simpl-iterations -ddump-prep that we only eliminate it > > in CorePrep, so it's preserved during simplifications and tidying, but I don't > > see how. It doesn't have a NOINLINE pragma, and we don't check whether the id > > we're inlining is lazyId (using MkId.lazyId or MkId.lazyIdKey) anywhere in the > > compiler as far as I can see. > > > > I also checked Note [lazyId magic] in MkId, but it doesn't explain how we avoid > > inlining it. > > > > Could you say a few words on this? > > > > Thanks > > Ömer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Jul 17 11:48:43 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 17 Jul 2019 11:48:43 +0000 Subject: What's preventing inlining GHC.Magic.lazy? In-Reply-To: References: Message-ID: Several points. 1. In GHC.Magic, the comments on lazyId say -- Implementation note: its strictness and unfolding are over-ridden -- by the definition in MkId.hs; in both cases to nothing at all. -- That way, 'lazy' does not get inlined, and the strictness analyser -- sees it as lazy. Then the worker/wrapper phase inlines it. The last line is not right: it's CorePrep that inlines it. Fix? Also point to `Note [lazyId magic]` in MkId 2. You ask | > > I'm trying to understand what's preventing inlining GHC.Magic.lazy. The Note [lazyId magic] in MkId says: * It must not have an unfolding: it gets "inlined" by a HACK in CorePrep. It's very important to do this inlining *after* unfoldings are exposed in the interface file. Otherwise, the unfolding for (say) pseq in the interface file will not mention 'lazy', so if we inline 'pseq' we'll totally miss the very thing that 'lazy' was there for in the first place. See #3259 for a real world example. Is that enough to explain? Or, in the light of what you now know, could you elaborate the explanation, so that it would have been clearer the first time round? 3. What is NOT said in Note [lazyId magic] is why we have a definition in GHC.Magic. There are two reasons I think: * It generates Haddock docs. * It generates code that may be called if CorePrep decides not to inline it. Notably, if we have map lazy xs then CorePrep won't inline it. So we need code to call. Can you add this to `Note [lazyId magic]`. 4. Much of this now applies to `unsafeCorece#` too, and perhaps to other magicIds: see Note [magicIds] in MkId. Would you like to decide what can be described once, for each magicId, and what is specific to the particular magicId, and adjust the Notes in MkId accordingly? Thanks Simon | -----Original Message----- | From: Ömer Sinan Ağacan | Sent: 17 July 2019 12:39 | To: Matthew Pickering | Cc: Simon Peyton Jones ; ghc-devs | Subject: Re: What's preventing inlining GHC.Magic.lazy? | | Thanks Matt, that makes sense. 
| | I applied the same idea to another Id and now it's also never inlined | now, so I can confirm that this works. | | Ömer | | Matthew Pickering , 17 Tem 2019 Çar, | 11:05 tarihinde şunu yazdı: | > | > I think it doesn't get inlined because we don't add an unfolding in | > the definition of `lazyId` in `MkId`. | > | > The definition in `GHC.Magic` is just for documentation I think. | > | > Cheers, | > | > Matt | > | > On Wed, Jul 17, 2019 at 8:46 AM Ömer Sinan Ağacan | wrote: | > > | > > Hi Simon, | > > | > > I'm trying to understand what's preventing inlining GHC.Magic.lazy. | > > I can see with -ddump-simpl -ddump-simpl-iterations -ddump-prep that | > > we only eliminate it in CorePrep, so it's preserved during | > > simplifications and tidying, but I don't see how. It doesn't have a | > > NOINLINE pragma, and we don't check whether the id we're inlining is | > > lazyId (using MkId.lazyId or MkId.lazyIdKey) anywhere in the compiler | as far as I can see. | > > | > > I also checked Note [lazyId magic] in MkId, but it doesn't explain | > > how we avoid inlining it. | > > | > > Could you say a few words on this? | > > | > > Thanks | > > Ömer | > > _______________________________________________ | > > ghc-devs mailing list | > > ghc-devs at haskell.org | > > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmai | > > l.haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02% | > > 7C01%7Csimonpj%40microsoft.com%7C7854c360129d4ae5120408d70aab7481%7C | > > 72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636989603820519640&sd | > > ata=FwlftEXCI2Q%2FhYOotfuHISMSlCIFjVYgQcaXl16%2B0Ig%3D&reserved= | > > 0 From rae at richarde.dev Wed Jul 17 15:20:14 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 17 Jul 2019 11:20:14 -0400 Subject: gitlab sometimes slow Message-ID: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> Hi all, GitLab is sometimes a bit slow. I understand we host this ourselves, and faster is more expensive. My question: how much more expensive? That is, if we throw $100 at the problem, will gitlab be speedy? Will it take $1,000? $10,000? If it's the first one, then let's just blast ahead. If it's not, perhaps knowing what it would take would either help me accept the status quo (I know that every time my page loads slowly, the Haskell community has saved several dollars) or we could contemplate chipping in somehow. Thanks! Richard From allbery.b at gmail.com Wed Jul 17 15:46:46 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 17 Jul 2019 11:46:46 -0400 Subject: gitlab sometimes slow In-Reply-To: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> References: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> Message-ID: I rather suspect it'd be more like "per some period" than a one-time fee, and "$100/month" is rather harder than "$100". On Wed, Jul 17, 2019 at 11:20 AM Richard Eisenberg wrote: > Hi all, > > GitLab is sometimes a bit slow. I understand we host this ourselves, and > faster is more expensive. My question: how much more expensive? That is, if > we throw $100 at the problem, will gitlab be speedy? Will it take $1,000? > $10,000? If it's the first one, then let's just blast ahead. If it's not, > perhaps knowing what it would take would either help me accept the status > quo (I know that every time my page loads slowly, the Haskell community has > saved several dollars) or we could contemplate chipping in somehow. > > Thanks! 
> Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Jul 17 15:49:56 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 17 Jul 2019 16:49:56 +0100 Subject: gitlab sometimes slow In-Reply-To: References: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> Message-ID: Are you particularly noticing this on the wiki? That is known to be slow as the implementation is quite hacky. If you want to experience some real slowness, try browsing gitlab.com! Cheers, Matt On Wed, Jul 17, 2019 at 4:47 PM Brandon Allbery wrote: > > I rather suspect it'd be more like "per some period" than a one-time fee, and "$100/month" is rather harder than "$100". > > On Wed, Jul 17, 2019 at 11:20 AM Richard Eisenberg wrote: >> >> Hi all, >> >> GitLab is sometimes a bit slow. I understand we host this ourselves, and faster is more expensive. My question: how much more expensive? That is, if we throw $100 at the problem, will gitlab be speedy? Will it take $1,000? $10,000? If it's the first one, then let's just blast ahead. If it's not, perhaps knowing what it would take would either help me accept the status quo (I know that every time my page loads slowly, the Haskell community has saved several dollars) or we could contemplate chipping in somehow. >> >> Thanks! >> Richard >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Wed Jul 17 18:19:18 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 17 Jul 2019 14:19:18 -0400 Subject: gitlab sometimes slow In-Reply-To: References: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> Message-ID: As it turns out, it was the wiki that directly inspired this post. But I had noticed the problem previously and wanted to say this for some time. Yes, I should have clarified that I expect the charge to be recurring; pretend those figures are all per-year. If the problem is the gitlab implementation, and not our server power, then I agree there is no way for us to fix things. Richard > On Jul 17, 2019, at 11:49 AM, Matthew Pickering wrote: > > Are you particularly noticing this on the wiki? That is known to be > slow as the implementation is quite hacky. > > If you want to experience some real slowness, try browsing gitlab.com! > > Cheers, > > Matt > > On Wed, Jul 17, 2019 at 4:47 PM Brandon Allbery wrote: >> >> I rather suspect it'd be more like "per some period" than a one-time fee, and "$100/month" is rather harder than "$100". >> >> On Wed, Jul 17, 2019 at 11:20 AM Richard Eisenberg wrote: >>> >>> Hi all, >>> >>> GitLab is sometimes a bit slow. I understand we host this ourselves, and faster is more expensive. My question: how much more expensive? That is, if we throw $100 at the problem, will gitlab be speedy? Will it take $1,000? $10,000? If it's the first one, then let's just blast ahead. 
If it's not, perhaps knowing what it would take would either help me accept the status quo (I know that every time my page loads slowly, the Haskell community has saved several dollars) or we could contemplate chipping in somehow. >>> >>> Thanks! >>> Richard >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> >> -- >> brandon s allbery kf8nh >> allbery.b at gmail.com >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Wed Jul 17 22:18:02 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 17 Jul 2019 18:18:02 -0400 Subject: Upgrading GitLab Message-ID: <87a7dcpap6.fsf@smart-cactus.org> Hello everyone, In a moment I'm going to perform an upgrade of GitLab. I expect this will only take around 20 minutes. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Wed Jul 17 22:51:54 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 17 Jul 2019 18:51:54 -0400 Subject: Upgrading GitLab In-Reply-To: <87a7dcpap6.fsf@smart-cactus.org> References: <87a7dcpap6.fsf@smart-cactus.org> Message-ID: <875zo0p94r.fsf@smart-cactus.org> Ben Gamari writes: > Hello everyone, > > In a moment I'm going to perform an upgrade of GitLab. I expect this > will only take around 20 minutes. > Things should now be back to normal. Thanks for your patience! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Jul 18 15:35:24 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 18 Jul 2019 11:35:24 -0400 Subject: gitlab sometimes slow In-Reply-To: References: <0275F0A4-720F-418C-916A-02AC3FB0BD3D@richarde.dev> Message-ID: <8736j3pd8o.fsf@smart-cactus.org> Richard Eisenberg writes: > As it turns out, it was the wiki that directly inspired this post. But I had noticed the problem previously and wanted to say this for some time. > > Yes, I should have clarified that I expect the charge to be recurring; > pretend those figures are all per-year. If the problem is the gitlab > implementation, and not our server power, then I agree there is no way > for us to fix things. > Unfortunately the Wiki is a known performance problem-spot; it is indeed excruciatingly slow. I originally reported this as gitlab-ce#57179 [1] which upstream thinks they fixed in GitLab 11.8. However, as you note, things are still quite slow as of 12.0. I'll raise this with upstream. Cheers, - Ben [1] https://gitlab.com/gitlab-org/gitlab-ce/issues/57179 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ryan.gl.scott at gmail.com Fri Jul 19 13:17:58 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Fri, 19 Jul 2019 09:17:58 -0400 Subject: Guarded Impredicativity Message-ID: I have another interesting application of guarded impredicativity that I want to bring up. Currently, GHC #16140 [1] makes it rather inconvenient to use quantified constraints in type synonyms. 
For instance, GHC rejects the following example by default: type F f = (Functor f, forall a. Eq (f a)) This is because F is a synonym for a constraint tuple, so mentioning a quantified constraint in one of its arguments gets flagged as impredicative. In the discussion for #16140, we have pondered doing a major rewrite of the code in TcValidity to permit F. But perhaps we don't need to! After all, the quantified constraint in the example above appears directly underneath a type constructor (namely, the type constructor for the constraint 2-tuple), which should be a textbook case of guarded impredicativity. I don't have the guarded impredicativity branch built locally, so I am unable to test if this hypothesis is true. In any case, I wanted to mention it as another motivating use case. Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/issues/16140 From a.pelenitsyn at gmail.com Fri Jul 19 15:22:19 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Fri, 19 Jul 2019 11:22:19 -0400 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Hello Ryan, Your example seems to work out of the box with the GI branch. With the oneliner Matthew posted before: nix run -f https://github.com/mpickering/ghc-artefact-nix/archive//master.tar.gz \ ghc-head-from -c ghc-head-from \ https://gitlab.haskell.org/mpickering/ghc/-/jobs/114593/artifacts/raw/ghc-x86_64-fedora27-linux.tar.xz It is really easy to check. Also, I didn't see anywhere mentioned that one need to provide -XImpredicativeTypes. The whole example, therefore, is: {-#LANGUAGE ImpredicativeTypes, ConstraintKinds #-} module M where type F f = (Functor f, forall a. Eq (f a)) -- Best, Artem On Fri, 19 Jul 2019 at 09:18, Ryan Scott wrote: > I have another interesting application of guarded impredicativity that > I want to bring up. Currently, GHC #16140 [1] makes it rather > inconvenient to use quantified constraints in type synonyms. For > instance, GHC rejects the following example by default: > > type F f = (Functor f, forall a. Eq (f a)) > > This is because F is a synonym for a constraint tuple, so mentioning a > quantified constraint in one of its arguments gets flagged as > impredicative. In the discussion for #16140, we have pondered doing a > major rewrite of the code in TcValidity to permit F. But perhaps we > don't need to! After all, the quantified constraint in the example > above appears directly underneath a type constructor (namely, the type > constructor for the constraint 2-tuple), which should be a textbook > case of guarded impredicativity. > > I don't have the guarded impredicativity branch built locally, so I am > unable to test if this hypothesis is true. In any case, I wanted to > mention it as another motivating use case. > > Ryan S. > ----- > [1] https://gitlab.haskell.org/ghc/ghc/issues/16140 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Fri Jul 19 21:39:33 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Fri, 19 Jul 2019 17:39:33 -0400 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Good to know. Thanks for checking! Ryan S. On Fri, Jul 19, 2019 at 11:22 AM Artem Pelenitsyn wrote: > > Hello Ryan, > > Your example seems to work out of the box with the GI branch. 
> > With the oneliner Matthew posted before: > nix run -f https://github.com/mpickering/ghc-artefact-nix/archive//master.tar.gz \ > ghc-head-from -c ghc-head-from \ > https://gitlab.haskell.org/mpickering/ghc/-/jobs/114593/artifacts/raw/ghc-x86_64-fedora27-linux.tar.xz > It is really easy to check. Also, I didn't see anywhere mentioned that one need to provide -XImpredicativeTypes. The whole example, therefore, is: > > {-#LANGUAGE ImpredicativeTypes, ConstraintKinds #-} > module M where > type F f = (Functor f, forall a. Eq (f a)) > > -- > Best, Artem > > On Fri, 19 Jul 2019 at 09:18, Ryan Scott wrote: >> >> I have another interesting application of guarded impredicativity that >> I want to bring up. Currently, GHC #16140 [1] makes it rather >> inconvenient to use quantified constraints in type synonyms. For >> instance, GHC rejects the following example by default: >> >> type F f = (Functor f, forall a. Eq (f a)) >> >> This is because F is a synonym for a constraint tuple, so mentioning a >> quantified constraint in one of its arguments gets flagged as >> impredicative. In the discussion for #16140, we have pondered doing a >> major rewrite of the code in TcValidity to permit F. But perhaps we >> don't need to! After all, the quantified constraint in the example >> above appears directly underneath a type constructor (namely, the type >> constructor for the constraint 2-tuple), which should be a textbook >> case of guarded impredicativity. >> >> I don't have the guarded impredicativity branch built locally, so I am >> unable to test if this hypothesis is true. In any case, I wanted to >> mention it as another motivating use case. >> >> Ryan S. >> ----- >> [1] https://gitlab.haskell.org/ghc/ghc/issues/16140 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Fri Jul 19 21:47:55 2019 From: ben at well-typed.com (Ben Gamari) Date: Fri, 19 Jul 2019 17:47:55 -0400 Subject: Moving head.hackage upstream In-Reply-To: <875zo4r248.fsf@smart-cactus.org> References: <875zo4r248.fsf@smart-cactus.org> Message-ID: <87tvbhofwb.fsf@smart-cactus.org> Ben Gamari writes: > Hi Herbert, > > Last week I did some work to clean up and document GHC's head.hackage > infrastructure. At this point we have a full CI pipeline, including > automatic deployment of a Hackage repository. > > I asked on #ghc and there was quite some appetite to use > gitlab.haskell.org:ghc/head.hackage as the head.hackage upstream > repository to eliminate confusion and enjoy the benefits of having merge > requests checked via CI. Moreover, this would significantly simplify the > process of testing GHC against head.hackage as it would eliminate the need > to pull from a separate upstream repository. > > Would you be okay with moving head.hackage's upstream? > Just a gentle ping on this, Herbert. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nir at altlinux.org Mon Jul 22 08:48:32 2019 From: nir at altlinux.org (Igor Chudov) Date: Mon, 22 Jul 2019 11:48:32 +0300 Subject: GHC native bootstrap Message-ID: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> Hello! I want to bootstrap GHC on an exotic architecture and I have GCC-compatible compiler but miss cross-compiling toolchain. 
I was able to bootstrap Hugs98 (with manual fixes) and guys on IRC channel pointed me to GHC bootstrap articles: - https://elephly.net/posts/2017-01-09-bootstrapping-haskell-part-1.html - http://www.joachim-breitner.de/blog/748-Thoughts_on_bootstrapping_GHC which I read and plan to give it a try. I also read articles about cross-compiling GHC on Haskell wiki but the problem is complex so I don't know where to start. Are there anyone who has experience with GHC compiling (cross-compiling?) and bootstrap on new architectures who could possibly help me to solve the problem or describe the steps needed? --  With best regards, Igor Chudov From jan at vanbruegge.de Mon Jul 22 10:14:11 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Mon, 22 Jul 2019 12:14:11 +0200 Subject: How do Coercions work Message-ID: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> Hi, currently I have some problems understanding how the coercions are threaded through the compiler. In particular the function `normalise_type`. With guessing and looking at the other cases, I came to this solution, but I have no idea if I am on the right track: normalise_type ty   = go ty   where     go (RowTy k v flds)       = do { (co_k, nty_k) <- go k            ; (co_v, nty_v) <- go v            ; let (a, b) = unzip flds            ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a            ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b            ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip ntys_a ntys_b) }         where             foldCo (co, tys) t = go t >>= \(c, nt) -> return (co `mkTransCo` c, nt:tys) RowTy has type Type -> Type -> [(Type, Type)] What I am not sure at all is how to treat the coecions. From looking at the go_app_tys code I guessed that I can just combine them like that, but what does that mean from a semantic standpoint? The core spec was not that helpful either in that regard. Thanks in advance Jan From trupill at gmail.com Mon Jul 22 11:05:44 2019 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Mon, 22 Jul 2019 13:05:44 +0200 Subject: Guarded Impredicativity In-Reply-To: References: Message-ID: Just to keep you posted about the current development, we are working on a new approach to impredicativity which is inspired by guarded impredicativity but requires much fewer changes to the codebase. In particular, our goal is to isolate the inference of impredicativity, instead of contaminating the whole compiler with it. The repo where we are developing it lives in https://gitlab.haskell.org/trupill/ghc (branch "quick-look"). Regards, Alejandro El vie., 19 jul. 2019 a las 23:40, Ryan Scott () escribió: > Good to know. Thanks for checking! > > Ryan S. > > On Fri, Jul 19, 2019 at 11:22 AM Artem Pelenitsyn > wrote: > > > > Hello Ryan, > > > > Your example seems to work out of the box with the GI branch. > > > > With the oneliner Matthew posted before: > > nix run -f > https://github.com/mpickering/ghc-artefact-nix/archive//master.tar.gz \ > > ghc-head-from -c ghc-head-from \ > > > https://gitlab.haskell.org/mpickering/ghc/-/jobs/114593/artifacts/raw/ghc-x86_64-fedora27-linux.tar.xz > > It is really easy to check. Also, I didn't see anywhere mentioned that > one need to provide -XImpredicativeTypes. The whole example, therefore, is: > > > > {-#LANGUAGE ImpredicativeTypes, ConstraintKinds #-} > > module M where > > type F f = (Functor f, forall a. 
Eq (f a)) > > > > -- > > Best, Artem > > > > On Fri, 19 Jul 2019 at 09:18, Ryan Scott > wrote: > >> > >> I have another interesting application of guarded impredicativity that > >> I want to bring up. Currently, GHC #16140 [1] makes it rather > >> inconvenient to use quantified constraints in type synonyms. For > >> instance, GHC rejects the following example by default: > >> > >> type F f = (Functor f, forall a. Eq (f a)) > >> > >> This is because F is a synonym for a constraint tuple, so mentioning a > >> quantified constraint in one of its arguments gets flagged as > >> impredicative. In the discussion for #16140, we have pondered doing a > >> major rewrite of the code in TcValidity to permit F. But perhaps we > >> don't need to! After all, the quantified constraint in the example > >> above appears directly underneath a type constructor (namely, the type > >> constructor for the constraint 2-tuple), which should be a textbook > >> case of guarded impredicativity. > >> > >> I don't have the guarded impredicativity branch built locally, so I am > >> unable to test if this hypothesis is true. In any case, I wanted to > >> mention it as another motivating use case. > >> > >> Ryan S. > >> ----- > >> [1] https://gitlab.haskell.org/ghc/ghc/issues/16140 > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Jul 22 12:52:38 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 22 Jul 2019 08:52:38 -0400 Subject: GHC native bootstrap In-Reply-To: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> References: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> Message-ID: <6ECACD2D-7FF1-4645-9B89-05115DC8DEF0@well-typed.com> Indeed there are people here who can help with this. This is described in the articles you linked to but in short you want an unregistered build which compiles via the C backend. Cheers, - Ben On July 22, 2019 4:48:32 AM EDT, Igor Chudov wrote: >Hello! > >I want to bootstrap GHC on an exotic architecture and I have >GCC-compatible compiler but miss cross-compiling toolchain. I was able >to bootstrap Hugs98 (with manual fixes) and guys on IRC channel pointed >me to GHC bootstrap articles: > >- >https://elephly.net/posts/2017-01-09-bootstrapping-haskell-part-1.html >- http://www.joachim-breitner.de/blog/748-Thoughts_on_bootstrapping_GHC > >which I read and plan to give it a try. > >I also read articles about cross-compiling GHC on Haskell wiki but the >problem is complex so I don't know where to start. > >Are there anyone who has experience with GHC compiling >(cross-compiling?) and bootstrap on new architectures who could >possibly help me to solve the problem or describe the steps needed? > >--  >With best regards, Igor Chudov > >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nir at altlinux.org Mon Jul 22 13:17:41 2019 From: nir at altlinux.org (Igor Chudov) Date: Mon, 22 Jul 2019 16:17:41 +0300 Subject: GHC native bootstrap In-Reply-To: <6ECACD2D-7FF1-4645-9B89-05115DC8DEF0@well-typed.com> References: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> <6ECACD2D-7FF1-4645-9B89-05115DC8DEF0@well-typed.com> Message-ID: <10209091563801461@myt6-09be74140f25.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Jul 22 13:42:07 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 22 Jul 2019 09:42:07 -0400 Subject: Moving head.hackage upstream In-Reply-To: <875zo4r248.fsf@smart-cactus.org> References: <875zo4r248.fsf@smart-cactus.org> Message-ID: <87lfwqnq39.fsf@smart-cactus.org> Ben Gamari writes: > Hi Herbert, > > Last week I did some work to clean up and document GHC's head.hackage > infrastructure. At this point we have a full CI pipeline, including > automatic deployment of a Hackage repository. > > I asked on #ghc and there was quite some appetite to use > gitlab.haskell.org:ghc/head.hackage as the head.hackage upstream > repository to eliminate confusion and enjoy the benefits of having merge > requests checked via CI. Moreover, this would significantly simplify the > process of testing GHC against head.hackage as it would eliminate the need > to pull from a separate upstream repository. > > Would you be okay with moving head.hackage's upstream? > > Thanks again for everything you have done in the head.hackage area. > Herbert and I discussed this via IRC over the weekend and he said he would be fine with moving head.hackage's upstream to GitLab. Herbert, can you change the description of your GitHub repository to reflect this? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 22 13:45:10 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 22 Jul 2019 09:45:10 -0400 Subject: head.hackage upstream Message-ID: <87h87enpy3.fsf@smart-cactus.org> Hi everyone, As I noted just a moment ago in another thread on this list, head.hackage's upstream repository will be moving to GitLab. I'll but publishing a blog post soon describing some of the infrastructure we have built around head.hackage. Otherwise, if you want to submit a patch to head.hackage please open a merge request against the GitLab repository [1]. Cheers, - Ben [1] https://gitlab.haskell.org/ghc/head.hackage -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 22 13:52:18 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 22 Jul 2019 09:52:18 -0400 Subject: GHC native bootstrap In-Reply-To: <10209091563801461@myt6-09be74140f25.qloud-c.yandex.net> References: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> <6ECACD2D-7FF1-4645-9B89-05115DC8DEF0@well-typed.com> <10209091563801461@myt6-09be74140f25.qloud-c.yandex.net> Message-ID: <87ef2inpm7.fsf@smart-cactus.org> Igor Chudov writes: > Thanks, Ben! > > I read old docs and found that it was mentioned that it's possible to > start bootstrap with GHC 4.08.2 and HC files supplied. 
I performed > "./configure && make" stage on x86_64 machine and moved sources to the > desired machine (and successfully patched some files to work with > exotic C compiler) but encountered Oh dear; I missed the fact that you lack a cross-compiling toolchain. Things are much easier if you can cross-compile. Given your situation your approach is probably the best you can do. I have never done a native bootstrap like this and consequently have no idea what challenges lay in wait. It sounds like it may be a long road, however. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jul 22 13:58:54 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 22 Jul 2019 09:58:54 -0400 Subject: How do Coercions work In-Reply-To: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> References: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> Message-ID: <87blxmnpb8.fsf@smart-cactus.org> Jan van Brügge writes: > Hi, > > currently I have some problems understanding how the coercions are > threaded through the compiler. In particular the function > `normalise_type`. With guessing and looking at the other cases, I came > to this solution, but I have no idea if I am on the right track: > > normalise_type ty >   = go ty >   where >     go (RowTy k v flds) >       = do { (co_k, nty_k) <- go k >            ; (co_v, nty_v) <- go v >            ; let (a, b) = unzip flds >            ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a >            ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b >            ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip > ntys_a ntys_b) } >         where >             foldCo (co, tys) t = go t >>= \(c, nt) -> return (co > `mkTransCo` c, nt:tys) > > > RowTy has type Type -> Type -> [(Type, Type)] > > What I am not sure at all is how to treat the coecions. From looking at > the go_app_tys code I guessed that I can just combine them like that, > but what does that mean from a semantic standpoint? The core spec was > not that helpful either in that regard. > Perhaps others are quicker than me but I'll admit I'm having a hard time following this. What is the specification for normalise_type's desired behavior? What equality is the coercion you are trying to build supposed to witness? In short, TransCo (short for "transitivity") represents a "chain" of coercions. That is, if I have, co1 :: a ~ b co2 :: b ~ c then I can construct co3 :: a ~ c co3 = TransCo co1 co2 Does this help? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From allbery.b at gmail.com Mon Jul 22 14:31:05 2019 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 22 Jul 2019 10:31:05 -0400 Subject: GHC native bootstrap In-Reply-To: <87ef2inpm7.fsf@smart-cactus.org> References: <6512231563785312@iva5-049509bcc5d6.qloud-c.yandex.net> <6ECACD2D-7FF1-4645-9B89-05115DC8DEF0@well-typed.com> <10209091563801461@myt6-09be74140f25.qloud-c.yandex.net> <87ef2inpm7.fsf@smart-cactus.org> Message-ID: IIRC another way to do this, which was and possibly still is used on ARM, is to compile on the host with -fllvm, saving the LLVM IR output, and then run opt on the target. 
This requires the target have an LLVM toolchain at the same (or at least IR compatible, but note that they make few if any guarantees about this) version as the host. On Mon, Jul 22, 2019 at 9:52 AM Ben Gamari wrote: > Igor Chudov writes: > > > Thanks, Ben! > > > > I read old docs and found that it was mentioned that it's possible to > > start bootstrap with GHC 4.08.2 and HC files supplied. I performed > > "./configure && make" stage on x86_64 machine and moved sources to the > > desired machine (and successfully patched some files to work with > > exotic C compiler) but encountered > > Oh dear; I missed the fact that you lack a cross-compiling toolchain. > Things are much easier if you can cross-compile. > > Given your situation your approach is probably the best you can do. > I have never done a native bootstrap like this and consequently have no > idea what challenges lay in wait. It sounds like it may be a long road, > however. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jul 22 17:28:33 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 22 Jul 2019 13:28:33 -0400 Subject: gitlab subject lines Message-ID: Hi Ben, Since the recent GitLab upgrade, all GitLab emails have "GHC | Glasgow Haskell Compiler | " prefixed to their subject lines. This reduces the bandwidth of information in my mail reader. Is there a way of going back to just "GHC |"? Thanks! :) Richard From jan at vanbruegge.de Mon Jul 22 18:49:16 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Mon, 22 Jul 2019 20:49:16 +0200 Subject: How do Coercions work In-Reply-To: <87blxmnpb8.fsf@smart-cactus.org> References: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> <87blxmnpb8.fsf@smart-cactus.org> Message-ID: <8f876aa7-cf2b-4b45-9f7a-f8da73a7b53a@vanbruegge.de> Hi Ben, thanks, that *does* make clear what TransCo does, I did not know how the transitivity was meant to act. This also hints that my code there is utter garbage, as I already suspected. So I guess my actual question is: What does the coercion returned by normalize_type represent? Same with flatten_one in TcFlatten.hs. I have problems visualizing what is going on there and what is expected of me to do with the returned coercion from a recursive call. Cheers, Jan Am 22.07.19 um 15:58 schrieb Ben Gamari: > Jan van Brügge writes: > >> Hi, >> >> currently I have some problems understanding how the coercions are >> threaded through the compiler. In particular the function >> `normalise_type`. With guessing and looking at the other cases, I came >> to this solution, but I have no idea if I am on the right track: >> >> normalise_type ty >>   = go ty >>   where >>     go (RowTy k v flds) >>       = do { (co_k, nty_k) <- go k >>            ; (co_v, nty_v) <- go v >>            ; let (a, b) = unzip flds >>            ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a >>            ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b >>            ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip >> ntys_a ntys_b) } >>         where >>             foldCo (co, tys) t = go t >>= \(c, nt) -> return (co >> `mkTransCo` c, nt:tys) >> >> >> RowTy has type Type -> Type -> [(Type, Type)] >> >> What I am not sure at all is how to treat the coecions. 
From looking at >> the go_app_tys code I guessed that I can just combine them like that, >> but what does that mean from a semantic standpoint? The core spec was >> not that helpful either in that regard. >> > Perhaps others are quicker than me but I'll admit I'm having a hard time > following this. What is the specification for normalise_type's desired > behavior? What equality is the coercion you are trying to build supposed > to witness? > > In short, TransCo (short for "transitivity") represents a "chain" of > coercions. That is, if I have, > > co1 :: a ~ b > co2 :: b ~ c > > then I can construct > > co3 :: a ~ c > co3 = TransCo co1 co2 > > Does this help? > > Cheers, > > - Ben From rae at richarde.dev Mon Jul 22 19:15:44 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 22 Jul 2019 15:15:44 -0400 Subject: How do Coercions work In-Reply-To: <8f876aa7-cf2b-4b45-9f7a-f8da73a7b53a@vanbruegge.de> References: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> <87blxmnpb8.fsf@smart-cactus.org> <8f876aa7-cf2b-4b45-9f7a-f8da73a7b53a@vanbruegge.de> Message-ID: <6471FF8C-BE89-4158-8639-A99C94BBE2A0@richarde.dev> Hi Jan, > On Jul 22, 2019, at 2:49 PM, Jan van Brügge wrote: > > This also hints that my code there is > utter garbage, as I already suspected. Sorry, but I'm afraid it is. At least about the coercions. > > So I guess my actual question is: What does the coercion returned by > normalize_type represent? normalise_type simplifies a type down to one where all type families are evaluated as far as possible. Suppose we have `type instance F Int = Bool`. In Haskell, we use `F Int` and `Bool` interchangeably: we say they are "definitionally equal" in Haskell. By "definitionally equal", I mean that there is no code one could write that can tell the difference between them, and they can be arbitrarily substituted for each other anywhere. In Core, however (which is where normalise_type works), F Int and Bool are *not* definitionally equal. Instead, they are "propositionally equal", which means that they are distinct, but if we have something of type F Int, we can produce something of type Bool without any runtime action. A cast does this conversion. If you look at the Expr type (in CoreSyn), you'll see `Cast :: Expr -> Coercion -> Expr` (somewhat simplified). A cast (sometimes written `e |> co`) changes the type of an expression. It has typing rule e : ty1 co : ty1 ~ ty2 ------------------- e |> co : ty2 That is, if expression e has type ty1 and a coercion co has type ty1 ~ ty2, then e |> co has type ty2. Casts are erased later on in the compilation pipeline. Propositional equality in Core has two sub-varieties: nominal equality and representational equality. Only nominal equality is in play here, and so we can basically ignore this distinction here. The definitional equality of type family reduction in Haskell is compiled into nominal propositional equality in Core. So everywhere the compiler has to replace F Int with Bool during compilation, a cast (or other construct) is introduced in Core. normalise_type performs type family reductions, and it returns a coercion that witnesses the equality between its input type and its output type. The flattener does the same. So the real question is: suppose I have a RowType ty1 that mentions F Int. Normalizing will get me a RowType ty2 that mentions Bool in place of F Int. How can I get a coercion of type ty1 ~ ty2? You have to answer that question to be able to complete this clause of normalise_type. 
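(As a minimal, self-contained illustration of that situation for an ordinary
type constructor, here is a toy model only, not GHC's real Type/Coercion types;
it also shows why chaining the sub-results with mkTransCo is not what is wanted:

    module CongruenceSketch where

    -- Toy model: a coercion is represented only by the two types it relates.
    data Ty = TyCon String [Ty]        -- e.g. TyCon "Maybe" [TyCon "Bool" []]
      deriving Show

    data Co = Co { coLeft :: Ty, coRight :: Ty }
      deriving Show

    -- Congruence: one coercion per argument gives a coercion for the whole
    -- application.  (GHC's own combinators for this are mkTyConAppCo/mkAppCo.)
    mkTyConAppCoToy :: String -> [Co] -> Co
    mkTyConAppCoToy tc args =
      Co (TyCon tc (map coLeft args)) (TyCon tc (map coRight args))

If normalising the argument of Maybe (F Int) yields co_arg relating F Int to
Bool, then mkTyConAppCoToy "Maybe" [co_arg] relates Maybe (F Int) to Maybe Bool,
which is what normalise_type must return for the whole type. Transitivity is a
different tool: mkTransCo chains co1 :: a ~ b with co2 :: b ~ c into a ~ c, so
it only applies when one coercion's right-hand type is the next one's left-hand
type, not to coercions for independent sub-parts of a type.)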
My strong hunch is that you will need a new constructor of Coercion. Note that most type forms have corresponding Coercion forms. So you will probably need a RowCo. The relationship between RowType and RowCo will be very like the one between AppTy and AppCo, so you can use that as a guide, perhaps. Also, I hate to say it, but if you're fiddling with Core, you will need to make a Strong Argument (preferably in the form of a proof) that your changes are type-safe. That is, the new Core language needs to respect the Progress and Preservation theorems. Fiddling with Core is not to be done lightly. I hope this helps! Richard > Same with flatten_one in TcFlatten.hs. I have > problems visualizing what is going on there and what is expected of me > to do with the returned coercion from a recursive call. > > Cheers, > > Jan > > Am 22.07.19 um 15:58 schrieb Ben Gamari: >> Jan van Brügge writes: >> >>> Hi, >>> >>> currently I have some problems understanding how the coercions are >>> threaded through the compiler. In particular the function >>> `normalise_type`. With guessing and looking at the other cases, I came >>> to this solution, but I have no idea if I am on the right track: >>> >>> normalise_type ty >>> = go ty >>> where >>> go (RowTy k v flds) >>> = do { (co_k, nty_k) <- go k >>> ; (co_v, nty_v) <- go v >>> ; let (a, b) = unzip flds >>> ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a >>> ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b >>> ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip >>> ntys_a ntys_b) } >>> where >>> foldCo (co, tys) t = go t >>= \(c, nt) -> return (co >>> `mkTransCo` c, nt:tys) >>> >>> >>> RowTy has type Type -> Type -> [(Type, Type)] >>> >>> What I am not sure at all is how to treat the coecions. From looking at >>> the go_app_tys code I guessed that I can just combine them like that, >>> but what does that mean from a semantic standpoint? The core spec was >>> not that helpful either in that regard. >>> >> Perhaps others are quicker than me but I'll admit I'm having a hard time >> following this. What is the specification for normalise_type's desired >> behavior? What equality is the coercion you are trying to build supposed >> to witness? >> >> In short, TransCo (short for "transitivity") represents a "chain" of >> coercions. That is, if I have, >> >> co1 :: a ~ b >> co2 :: b ~ c >> >> then I can construct >> >> co3 :: a ~ c >> co3 = TransCo co1 co2 >> >> Does this help? >> >> Cheers, >> >> - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From jan at vanbruegge.de Mon Jul 22 21:23:19 2019 From: jan at vanbruegge.de (=?UTF-8?Q?Jan_van_Br=c3=bcgge?=) Date: Mon, 22 Jul 2019 23:23:19 +0200 Subject: How do Coercions work In-Reply-To: <6471FF8C-BE89-4158-8639-A99C94BBE2A0@richarde.dev> References: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> <87blxmnpb8.fsf@smart-cactus.org> <8f876aa7-cf2b-4b45-9f7a-f8da73a7b53a@vanbruegge.de> <6471FF8C-BE89-4158-8639-A99C94BBE2A0@richarde.dev> Message-ID: <7271a73b-0f9e-b75c-b0d2-dcd17a87f68d@vanbruegge.de> Hi Richard, >  In Core, however (which is where normalise_type works), F Int and Bool are *not* definitionally equal. Instead, they are "propositionally equal", which means that they are distinct Thanks, this was the piece of information that was missing. Now it makes sense why there is always a coercion returned. The explanation was really helpful. 
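For concreteness, here is a rough, standalone sketch of the kind of row
congruence being discussed in this thread. Every name below is hypothetical
(chosen only to mirror the AppTy/AppCo analogy above) and none of it is
existing GHC API; it is a toy model, not a design:

    module RowCoSketch where

    -- Hypothetical stand-ins, for illustration only.
    data Ty = TyVar String
            | Row Ty Ty [(Ty, Ty)]  -- mirrors RowTy :: Type -> Type -> [(Type, Type)]
      deriving Show

    -- A coercion, represented here only by the two types it relates.
    data Co = Co { coLeft :: Ty, coRight :: Ty }
      deriving Show

    -- The hypothetical congruence for rows: coercions for the two kinds plus
    -- one coercion per field component give a coercion between the two
    -- corresponding row types, the analogue of AppCo for AppTy.
    mkRowCo :: Co -> Co -> [(Co, Co)] -> Co
    mkRowCo kco vco flds =
      Co (Row (coLeft  kco) (coLeft  vco) [ (coLeft  a, coLeft  b) | (a, b) <- flds ])
         (Row (coRight kco) (coRight vco) [ (coRight a, coRight b) | (a, b) <- flds ])

With something of this shape, a normalise_type clause for rows would normalise
the two kinds and each field, then zip the resulting coercions back together
with the congruence constructor rather than chaining them with mkTransCo.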
> My strong hunch is that you will need a new constructor of Coercion. Yeah, I think so too. > if you're fiddling with Core, you will need to make a Strong Argument (preferably in the form of a proof) that your changes are type-safe Yeah, I know. I wanted to try to adapt the proof once I have /something /working. No idea how that will work out, I am actually reading a lot of your papers following the proofs in there, but I have to read a lot more to be able to do this on my own. Thanks again for the help, I think I can dig deeper now :) Jan Am 22.07.19 um 21:15 schrieb Richard Eisenberg: > Hi Jan, > >> On Jul 22, 2019, at 2:49 PM, Jan van Brügge wrote: >> >> This also hints that my code there is >> utter garbage, as I already suspected. > Sorry, but I'm afraid it is. At least about the coercions. > >> So I guess my actual question is: What does the coercion returned by >> normalize_type represent? > normalise_type simplifies a type down to one where all type families are evaluated as far as possible. Suppose we have `type instance F Int = Bool`. In Haskell, we use `F Int` and `Bool` interchangeably: we say they are "definitionally equal" in Haskell. By "definitionally equal", I mean that there is no code one could write that can tell the difference between them, and they can be arbitrarily substituted for each other anywhere. > > In Core, however (which is where normalise_type works), F Int and Bool are *not* definitionally equal. Instead, they are "propositionally equal", which means that they are distinct, but if we have something of type F Int, we can produce something of type Bool without any runtime action. A cast does this conversion. If you look at the Expr type (in CoreSyn), you'll see `Cast :: Expr -> Coercion -> Expr` (somewhat simplified). A cast (sometimes written `e |> co`) changes the type of an expression. It has typing rule > > e : ty1 > co : ty1 ~ ty2 > ------------------- > e |> co : ty2 > > That is, if expression e has type ty1 and a coercion co has type ty1 ~ ty2, then e |> co has type ty2. Casts are erased later on in the compilation pipeline. > > Propositional equality in Core has two sub-varieties: nominal equality and representational equality. Only nominal equality is in play here, and so we can basically ignore this distinction here. > > The definitional equality of type family reduction in Haskell is compiled into nominal propositional equality in Core. So everywhere the compiler has to replace F Int with Bool during compilation, a cast (or other construct) is introduced in Core. > > normalise_type performs type family reductions, and it returns a coercion that witnesses the equality between its input type and its output type. The flattener does the same. > > So the real question is: suppose I have a RowType ty1 that mentions F Int. Normalizing will get me a RowType ty2 that mentions Bool in place of F Int. How can I get a coercion of type ty1 ~ ty2? You have to answer that question to be able to complete this clause of normalise_type. > > My strong hunch is that you will need a new constructor of Coercion. Note that most type forms have corresponding Coercion forms. So you will probably need a RowCo. The relationship between RowType and RowCo will be very like the one between AppTy and AppCo, so you can use that as a guide, perhaps. > > Also, I hate to say it, but if you're fiddling with Core, you will need to make a Strong Argument (preferably in the form of a proof) that your changes are type-safe. 
That is, the new Core language needs to respect the Progress and Preservation theorems. Fiddling with Core is not to be done lightly. > > I hope this helps! > Richard > >> Same with flatten_one in TcFlatten.hs. I have >> problems visualizing what is going on there and what is expected of me >> to do with the returned coercion from a recursive call. >> >> Cheers, >> >> Jan >> >> Am 22.07.19 um 15:58 schrieb Ben Gamari: >>> Jan van Brügge writes: >>> >>>> Hi, >>>> >>>> currently I have some problems understanding how the coercions are >>>> threaded through the compiler. In particular the function >>>> `normalise_type`. With guessing and looking at the other cases, I came >>>> to this solution, but I have no idea if I am on the right track: >>>> >>>> normalise_type ty >>>> = go ty >>>> where >>>> go (RowTy k v flds) >>>> = do { (co_k, nty_k) <- go k >>>> ; (co_v, nty_v) <- go v >>>> ; let (a, b) = unzip flds >>>> ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a >>>> ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b >>>> ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip >>>> ntys_a ntys_b) } >>>> where >>>> foldCo (co, tys) t = go t >>= \(c, nt) -> return (co >>>> `mkTransCo` c, nt:tys) >>>> >>>> >>>> RowTy has type Type -> Type -> [(Type, Type)] >>>> >>>> What I am not sure at all is how to treat the coecions. From looking at >>>> the go_app_tys code I guessed that I can just combine them like that, >>>> but what does that mean from a semantic standpoint? The core spec was >>>> not that helpful either in that regard. >>>> >>> Perhaps others are quicker than me but I'll admit I'm having a hard time >>> following this. What is the specification for normalise_type's desired >>> behavior? What equality is the coercion you are trying to build supposed >>> to witness? >>> >>> In short, TransCo (short for "transitivity") represents a "chain" of >>> coercions. That is, if I have, >>> >>> co1 :: a ~ b >>> co2 :: b ~ c >>> >>> then I can construct >>> >>> co3 :: a ~ c >>> co3 = TransCo co1 co2 >>> >>> Does this help? >>> >>> Cheers, >>> >>> - Ben >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jul 22 21:32:45 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 22 Jul 2019 17:32:45 -0400 Subject: How do Coercions work In-Reply-To: <7271a73b-0f9e-b75c-b0d2-dcd17a87f68d@vanbruegge.de> References: <104ace47-9c0d-381b-a08f-a0ab2fcce40c@vanbruegge.de> <87blxmnpb8.fsf@smart-cactus.org> <8f876aa7-cf2b-4b45-9f7a-f8da73a7b53a@vanbruegge.de> <6471FF8C-BE89-4158-8639-A99C94BBE2A0@richarde.dev> <7271a73b-0f9e-b75c-b0d2-dcd17a87f68d@vanbruegge.de> Message-ID: <63CFAAF0-8530-40CA-AF71-F9881BF33CF1@richarde.dev> Glad to know you're unstuck. If you're trying to follow along with a proof of type safety, I recommend the JFP version of the Coercible paper (http://repository.brynmawr.edu/cgi/viewcontent.cgi?article=1010&context=compsci_pubs ). While roles aren't important for you, it's probably the cleanest presentation of the proof. Unfortunately, it doesn't include Type :: Type, but I can't point you to a great proof there. :( Let me know if you need further assistance. This stuff is hard! Richard > On Jul 22, 2019, at 5:23 PM, Jan van Brügge wrote: > > Hi Richard, > > > In Core, however (which is where normalise_type works), F Int and Bool are *not* definitionally equal. 
Instead, they are "propositionally equal", which means that they are distinct > Thanks, this was the piece of information that was missing. Now it makes sense why there is always a coercion returned. The explanation was really helpful. > > My strong hunch is that you will need a new constructor of Coercion. > > Yeah, I think so too. > > > if you're fiddling with Core, you will need to make a Strong Argument (preferably in the form of a proof) that your changes are type-safe > > Yeah, I know. I wanted to try to adapt the proof once I have something working. No idea how that will work out, I am actually reading a lot of your papers following the proofs in there, but I have to read a lot more to be able to do this on my own. > Thanks again for the help, I think I can dig deeper now :) > Jan > Am 22.07.19 um 21:15 schrieb Richard Eisenberg: >> Hi Jan, >> >>> On Jul 22, 2019, at 2:49 PM, Jan van Brügge wrote: >>> >>> This also hints that my code there is >>> utter garbage, as I already suspected. >> Sorry, but I'm afraid it is. At least about the coercions. >> >>> So I guess my actual question is: What does the coercion returned by >>> normalize_type represent? >> normalise_type simplifies a type down to one where all type families are evaluated as far as possible. Suppose we have `type instance F Int = Bool`. In Haskell, we use `F Int` and `Bool` interchangeably: we say they are "definitionally equal" in Haskell. By "definitionally equal", I mean that there is no code one could write that can tell the difference between them, and they can be arbitrarily substituted for each other anywhere. >> >> In Core, however (which is where normalise_type works), F Int and Bool are *not* definitionally equal. Instead, they are "propositionally equal", which means that they are distinct, but if we have something of type F Int, we can produce something of type Bool without any runtime action. A cast does this conversion. If you look at the Expr type (in CoreSyn), you'll see `Cast :: Expr -> Coercion -> Expr` (somewhat simplified). A cast (sometimes written `e |> co`) changes the type of an expression. It has typing rule >> >> e : ty1 >> co : ty1 ~ ty2 >> ------------------- >> e |> co : ty2 >> >> That is, if expression e has type ty1 and a coercion co has type ty1 ~ ty2, then e |> co has type ty2. Casts are erased later on in the compilation pipeline. >> >> Propositional equality in Core has two sub-varieties: nominal equality and representational equality. Only nominal equality is in play here, and so we can basically ignore this distinction here. >> >> The definitional equality of type family reduction in Haskell is compiled into nominal propositional equality in Core. So everywhere the compiler has to replace F Int with Bool during compilation, a cast (or other construct) is introduced in Core. >> >> normalise_type performs type family reductions, and it returns a coercion that witnesses the equality between its input type and its output type. The flattener does the same. >> >> So the real question is: suppose I have a RowType ty1 that mentions F Int. Normalizing will get me a RowType ty2 that mentions Bool in place of F Int. How can I get a coercion of type ty1 ~ ty2? You have to answer that question to be able to complete this clause of normalise_type. >> >> My strong hunch is that you will need a new constructor of Coercion. Note that most type forms have corresponding Coercion forms. So you will probably need a RowCo. 
The relationship between RowType and RowCo will be very like the one between AppTy and AppCo, so you can use that as a guide, perhaps. >> >> Also, I hate to say it, but if you're fiddling with Core, you will need to make a Strong Argument (preferably in the form of a proof) that your changes are type-safe. That is, the new Core language needs to respect the Progress and Preservation theorems. Fiddling with Core is not to be done lightly. >> >> I hope this helps! >> Richard >> >>> Same with flatten_one in TcFlatten.hs. I have >>> problems visualizing what is going on there and what is expected of me >>> to do with the returned coercion from a recursive call. >>> >>> Cheers, >>> >>> Jan >>> >>> Am 22.07.19 um 15:58 schrieb Ben Gamari: >>>> Jan van Brügge writes: >>>> >>>>> Hi, >>>>> >>>>> currently I have some problems understanding how the coercions are >>>>> threaded through the compiler. In particular the function >>>>> `normalise_type`. With guessing and looking at the other cases, I came >>>>> to this solution, but I have no idea if I am on the right track: >>>>> >>>>> normalise_type ty >>>>> = go ty >>>>> where >>>>> go (RowTy k v flds) >>>>> = do { (co_k, nty_k) <- go k >>>>> ; (co_v, nty_v) <- go v >>>>> ; let (a, b) = unzip flds >>>>> ; (co_a, ntys_a) <- foldM foldCo (co_k, []) a >>>>> ; (co_b, ntys_b) <- foldM foldCo (co_v, []) b >>>>> ; return (co_a `mkTransCo` co_b, mkRowTy nty_k nty_v $ zip >>>>> ntys_a ntys_b) } >>>>> where >>>>> foldCo (co, tys) t = go t >>= \(c, nt) -> return (co >>>>> `mkTransCo` c, nt:tys) >>>>> >>>>> >>>>> RowTy has type Type -> Type -> [(Type, Type)] >>>>> >>>>> What I am not sure at all is how to treat the coecions. From looking at >>>>> the go_app_tys code I guessed that I can just combine them like that, >>>>> but what does that mean from a semantic standpoint? The core spec was >>>>> not that helpful either in that regard. >>>>> >>>> Perhaps others are quicker than me but I'll admit I'm having a hard time >>>> following this. What is the specification for normalise_type's desired >>>> behavior? What equality is the coercion you are trying to build supposed >>>> to witness? >>>> >>>> In short, TransCo (short for "transitivity") represents a "chain" of >>>> coercions. That is, if I have, >>>> >>>> co1 :: a ~ b >>>> co2 :: b ~ c >>>> >>>> then I can construct >>>> >>>> co3 :: a ~ c >>>> co3 = TransCo co1 co2 >>>> >>>> Does this help? >>>> >>>> Cheers, >>>> >>>> - Ben >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jul 22 22:09:21 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 22 Jul 2019 18:09:21 -0400 Subject: HIE files? Message-ID: <212438F6-DFBD-46FC-92FA-F392E87079B2@richarde.dev> Hi devs, I recently learned about the code for HIE files. This is quite a substantial new development in GHC, judging from the amount of code. I understand broadly why it's here, but I'd like to learn more specifics. For example, if I'm adding a new bit of syntax, how should I update HIE generation? What if I'm changing a bit of syntax? Is there a primer to all this? Thanks! 
Richard From ben at well-typed.com Mon Jul 22 23:34:17 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 22 Jul 2019 19:34:17 -0400 Subject: [ANNOUNCE] GHC 8.8.1 release candidate 1 is now available Message-ID: <878sspod8v.fsf@smart-cactus.org> Hello everyone, The GHC team is pleased to announce the release candidate for GHC 8.8.1. The source distribution, binary distributions, and documentation are available at https://downloads.haskell.org/ghc/8.8.1-rc1 This release is the culmination of over 3000 commits by over one hundred contributors and has several new features and numerous bug fixes relative to GHC 8.6: * Profiling now works correctly on 64-bit Windows (although still may be problematic on 32-bit Windows due to platform limitations; see #15934) * A new code layout algorithm for amd64's native code generator * The introduction of a late lambda-lifting pass which may reduce allocations significantly for some programs. * Further work on Trees That Grow, enabling improved code re-use of the Haskell AST in tooling * More locations where users can write `forall` (GHC Proposal #0007) * A comprehensive audit of GHC's memory ordering barriers has been performed, resulting in a number of fixes that should significantly improve the reliability of programs on architectures with weakly-ordered memory models (e.g. PowerPC, many ARM and AArch64 implementations). * A long-standing linker limitation rendering GHCi unusable with projects with cyclic symbol dependencies has been fixed (#13786) * Further work on the Hadrian build system * Numerous bug-fixes As always, if anything looks amiss do let us know. Happy compiling! Cheers, - Ben [1] https://downloads.haskell.org/ghc/8.8.1-rc1/docs/html/users_guide/8.8.1-notes.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Tue Jul 23 02:56:19 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 22 Jul 2019 22:56:19 -0400 Subject: a better workflow? Message-ID: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> Hi devs, Having gotten back to spending more time on GHC, I've found myself frequently hitting capacity limits on my machine. At one point, I could use a server at work that was a workhorse, but that's not possible any more (for boring reasons). It was great, and I miss it. So I started wondering about renting an AWS instance to help, but I quickly got overwhelmed by choice in setting that up. It's now pretty clear that their free services won't serve me, even as a trial prototype. So before diving deeper, I thought I'd ask: has anyone tried this? Or does anyone have a workflow that they like? Problems I have in want of a solution: - Someone submits an MR and I'm reviewing it. I want to interact with it. This invariably means building from scratch and waiting 45 minutes. - I work on a patch for a few weeks, on and off. It's ready, but I want to rebase. So I build from scratch and wait 45 minutes. - I make a controversial change and want to smoke out any programs that fail. So I run the testsuite and wait over an hour. This gets tiresome quickly. Most days of GHC hacking require at least one forced task-switch due to these wait times. If I had a snappy server, perhaps these times would be lessened. By the way, I'm aware of ghc-artefact-nix, but I don't know how to use it. I tried it twice. The first time, I think it worked. 
But by the second time, it had been revamped (ghc-head-from), and I think I needed to go into two subshells to get it working... and then the ghc I had didn't include the MR code. I think. It's hard to be sure when you're not sure whether or not the patch itself is working. Part of the problem is that I don't use Nix and mostly don't know what I'm doing when I follow the ghc-artefact-nix instructions, which seem to target Nix users. Thanks! Richard From simonpj at microsoft.com Tue Jul 23 11:04:22 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 23 Jul 2019 11:04:22 +0000 Subject: gitlab subject lines In-Reply-To: References: Message-ID: A big +1 from me. The "GHC" part is already uninformative, but adding "Glasgow Haskell Compiler" consumes _all_ the pixels on my laptop's message-list display, leaving no clue whatsoever about which ticket this is. Thanks Simon | -----Original Message----- | From: ghc-devs On Behalf Of Richard | Eisenberg | Sent: 22 July 2019 18:29 | To: Ben Gamari | Cc: Simon Peyton Jones via ghc-devs | Subject: gitlab subject lines | | Hi Ben, | | Since the recent GitLab upgrade, all GitLab emails have "GHC | Glasgow | Haskell Compiler | " prefixed to their subject lines. This reduces the | bandwidth of information in my mail reader. Is there a way of going back | to just "GHC |"? | | Thanks! :) | Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Tue Jul 23 13:25:43 2019 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 23 Jul 2019 14:25:43 +0100 Subject: a better workflow? In-Reply-To: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> Message-ID: <201907231425.43388.jan.stolarek@p.lodz.pl> Hi Richard, I think it's been around two years since I last built GHC, but back in the days I could get a full build time around 17 minutes on my laptop. Not sure how much the build times have increased since then but I suspect you should be able to build GHC faster than in 45 minutes. The trick I used wasn't really much of a trick, it was simply about having good hardware: an SSD drive, a good CPU (I have Xeon), and lots of RAM. And then I had: 1. several separate source trees. This means being able to work on your own stuff in one source tree and being able to review MRs in another without a need to do a full rebuild when you want to switch between the two (or more). Downside of this setup was when you wanted to bootstrap from different GHC versions in different source trees, but with enough scripting this is definitely doable. 2. build trees separated from the source trees. If I really wanted to squeeze max performance I would map the build tree onto a ramdisk - that's why you want lots of RAM. It definitely made the build faster but I can't recall how much it improved the testsuite runs. The downside of course is that you lose the build when you switch off your machine, so I simply wouldn't switch off mine, only suspend it to RAM. Janek PS. A friend of mine recently told me that his company was considering using AWS but after calculating the costs it turned out that buying and maintaining their own servers will be cheaper. --- Politechnika Łódzka Lodz University of Technology Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. 
Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. This email contains information intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient or if you have received this message in error, please notify the sender and delete it from your system. From a.pelenitsyn at gmail.com Tue Jul 23 14:30:08 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Tue, 23 Jul 2019 10:30:08 -0400 Subject: gitlab subject lines In-Reply-To: References: Message-ID: Hey all, Long uninformative prefixes are indeed a huge pain. If emails are something you GitLab masters are going to look at, I have one more suggestion. It would be nice if there was a way to tell email notifications about MRs from ones about issues. It was trivial before when we had Phab vs Trac. But now I can't find a way to make them go into different direvtories. It turns out, Gmail filters, for one, can't target individual symbols. like ! vs # (there is even an SE question about exactly telling apart Gitlab's emails https://webapps.stackexchange.com/q/52828/70750). I also used to be subscribed for notifications about Trac Wiki. Is it possible to have those from the Gitlab Wiki? I understand wiki is mirrored from https://gitlab.haskell.org/ghc/ghc-wiki-mirror but if you start watching it, it won't notify about regular commits to master, only about MRs/issues, but no one opens those on that repo. -- Kind regards, Artem On Tue, Jul 23, 2019, 7:04 AM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > A big +1 from me. The "GHC" part is already uninformative, but adding > "Glasgow Haskell Compiler" consumes _all_ the pixels on my laptop's > message-list display, leaving no clue whatsoever about which ticket this is. > > Thanks > > Simon > > | -----Original Message----- > | From: ghc-devs On Behalf Of Richard > | Eisenberg > | Sent: 22 July 2019 18:29 > | To: Ben Gamari > | Cc: Simon Peyton Jones via ghc-devs > | Subject: gitlab subject lines > | > | Hi Ben, > | > | Since the recent GitLab upgrade, all GitLab emails have "GHC | Glasgow > | Haskell Compiler | " prefixed to their subject lines. This reduces the > | bandwidth of information in my mail reader. Is there a way of going back > | to just "GHC |"? > | > | Thanks! :) > | Richard > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Jul 23 23:18:06 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 23 Jul 2019 19:18:06 -0400 Subject: gitlab subject lines In-Reply-To: References: Message-ID: <871rygnxw4.fsf@smart-cactus.org> Artem Pelenitsyn writes: > Hey all, > > Long uninformative prefixes are indeed a huge pain. If emails are something > you GitLab masters are going to look at, I have one more suggestion. It > would be nice if there was a way to tell email notifications about MRs from > ones about issues. It was trivial before when we had Phab vs Trac. But now > I can't find a way to make them go into different direvtories. It turns > out, Gmail filters, for one, can't target individual symbols. like ! 
vs # > (there is even an SE question about exactly telling apart Gitlab's emails > https://webapps.stackexchange.com/q/52828/70750). > I don't know if this is possible with GMail filters but if you can filter on arbitrary mail headers it definitely is possible to make this distinction. GitLab notifications have a number of headers (e.g. X-GitLab-Project, X-GitLab-MergeRequest-IID) which make this sort of this quite simple. > I also used to be subscribed for notifications about Trac Wiki. Is it > possible to have those from the Gitlab Wiki? I understand wiki is mirrored > from https://gitlab.haskell.org/ghc/ghc-wiki-mirror but if you start > watching it, it won't notify about regular commits to master, only about > MRs/issues, but no one opens those on that repo. > Hmm. Yes, commit notifications would be helpful. There is an "email on push" notifier integration [1], but it appears to only support sending to a fixed set of addresses. We would need to setup a mailing list to use it for Wiki notifications. Cheers, - Ben [1] https://docs.gitlab.com/ee/user/project/integrations/emails_on_push.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Jul 23 23:23:39 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 23 Jul 2019 19:23:39 -0400 Subject: gitlab subject lines In-Reply-To: References: Message-ID: <87y30omj2f.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi Ben, > > Since the recent GitLab upgrade, all GitLab emails have "GHC | Glasgow > Haskell Compiler | " prefixed to their subject lines. This reduces the > bandwidth of information in my mail reader. Is there a way of going > back to just "GHC |"? > Yes, I also noticed this. Despite the progress [1] made in convincing GitLab upstream to allow number-centric titles it seems that titles just keep getting longer. I suppose I can try to sort out which commit caused the regression and revert it locally. Cheers, - Ben [1] https://gitlab.com/gitlab-org/gitlab-ce/issues/21712 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Wed Jul 24 01:06:20 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 23 Jul 2019 21:06:20 -0400 Subject: a better workflow? In-Reply-To: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> Message-ID: <87v9vsmebc.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > Having gotten back to spending more time on GHC, I've found myself > frequently hitting capacity limits on my machine. At one point, I > could use a server at work that was a workhorse, but that's not > possible any more (for boring reasons). It was great, and I miss it. > So I started wondering about renting an AWS instance to help, but I > quickly got overwhelmed by choice in setting that up. It's now pretty > clear that their free services won't serve me, even as a trial > prototype. So before diving deeper, I thought I'd ask: has anyone > tried this? Or does anyone have a workflow that they like? > > Problems I have in want of a solution: > - Someone submits an MR and I'm reviewing it. I want to interact with > it. This invariably means building from scratch and waiting 45 > minutes. > - I work on a patch for a few weeks, on and off. 
It's ready, but I > want to rebase. So I build from scratch and wait 45 minutes. > - I make a controversial change and want to smoke out any programs > that fail. So I run the testsuite and wait over an hour. > > This gets tiresome quickly. Most days of GHC hacking require at least > one forced task-switch due to these wait times. If I had a snappy > server, perhaps these times would be lessened. > Indeed. I can't imagine working on GHC without my build server. As you likely know, having a fast machine with plenty of storage always available has a few nice consequences: * I can keep around as many GHC trees (often already built) as I have concurrent projects * I can leave a tmux session running for each of those projects with build environment, an editor session, and whatever else might be relevant * working from my laptop is no problem, even when running on battery: just SSH home and pick up where I left off Compared to human-hours, even a snappy computer is cheap. A few years ago I tried using an AWS instance for my development environment instead of self-hosting. In the end this experiment didn't last long for a few reasons: * reasonably fast cloud instances are expensive so keeping the machine up all the time simply wasn't economical (compared to the cost of running the machine myself). The performance of one AWS "vCPU" tends to be pretty anemic relative to a single modern core. Anyone who uses cloud services for long enough will eventually make a mistake which puts this cost into perspective. In my case this mistake was inadvertently leaving a moderate-size instance running for ten days a few years ago. At that point I realized that with the cost incurred by this one mistake I could have purchased around a quarter of a far more capable computer. * having to rebuild your development environment every time you need to do a build is expensive in time, even when automated. Indeed some of the steps necessary to build a branch aren't even readily automated (e.g. ensuring that you remember to set your build flavour correctly). This inevitably results in mistakes, resulting in yet more rebuilds. Admittedly self-hosting does have its costs: * You need to reasonably reliable internet connection and power * You must configure your local router to allow traffic into the box * You must configure a dynamic DNS service so you can reliably reach your box * You must live with the knowledge that you are turning >10W of perfectly good electricity into heat and carbon dioxide 24 hours per day, seven days per week. (Of course, considering how many dead dinosaurs I will vaporize getting to Berlin in a few weeks, I suspect I have bigger fish to fry [1]) > By the way, I'm aware of ghc-artefact-nix, but I don't know how to use > it. I tried it twice. The first time, I think it worked. But by the > second time, it had been revamped (ghc-head-from), and I think I > needed to go into two subshells to get it working... and then the ghc > I had didn't include the MR code. I think. It's hard to be sure when > you're not sure whether or not the patch itself is working. Part of > the problem is that I don't use Nix and mostly don't know what I'm > doing when I follow the ghc-artefact-nix instructions, which seem to > target Nix users. > We should try to fix improve this. I think ghc-artefact-nix could be a great tool to enable the consumption of CI-prepared bindists. I'll try to heave a look and document this when I finish my second head.hackage blog post. I personally use NixOS both on my laptop and my build server. 
This is quite nice since the environments are guaranteed to be reasonably consistent. Furthermore, bringing up a development environment on another machine is straightforward: $ git clone git://github.com/alpmestan/ghc.nix $ nix-shell ghc.nix $ git clone --recursive https://gitlab.haskell.org/ghc/ghc $ cd ghc $ ./validate Of course, Nix is far from perfect and it doesn't always realize its goal of guaranteed reproducibility. However, it is in my opinion a step up from the ad-hoc Debian configuration that I used up until a couple of years ago. Naturally, your mileage may vary. Cheers, - Ben [1] I was curious about the numbers here: The distance from New Hampshire to Berlin is around 3000 nautical miles. A typical commercial flight of this distance has a burn rate per seat [2] of around 3L/100km. Burning one liter of jet fuel will evolve [3] roughly 2.5 kg of CO_2. Consequently, this single trip (both ways) will cost roughly 800 kg CO_2 eq. By contrast, the carbon intensity of electricity production in my region [4] is 280 gCO_2 eq/kWh. Consequently, assuming an average power of 50W, running my server for one year would cost around 100 kg CO_2 eq. Indeed it's not as negligible as I thought, but still not awful. [2] https://en.wikipedia.org/wiki/Fuel_economy_in_aircraft#Long-haul_flights [3] https://www.eia.gov/environment/emissions/co2_vol_mass.php [4] https://www.electricitymap.org/?page=country&solar=false&remote=true&wind=false&countryCode=US-NEISO -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Wed Jul 24 01:42:37 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 23 Jul 2019 21:42:37 -0400 Subject: a better workflow? In-Reply-To: <87v9vsmebc.fsf@smart-cactus.org> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> Message-ID: <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> This is very helpful information. I've long thought about doing something like this, but never quite had the crying need until now. And given my short-term peripateticism (summer at my in-laws' in Massachusetts, followed by a year's stint in Cambridge, UK, followed by another month's visit to my in-laws', all while my main home is rented out), this is not viable for now. But it does drive home the advantages quite well. And it describes exactly the trouble I thought I might get into with AWS, once I realized how big a machine I would need to make it worthwhile -- and how manual my interactions with it would have to be. Thanks for writing this up. It convinces me to give up on AWS and either find another solution or live with what I have now. Richard > On Jul 23, 2019, at 9:06 PM, Ben Gamari wrote: > > Richard Eisenberg > writes: > >> Hi devs, >> >> Having gotten back to spending more time on GHC, I've found myself >> frequently hitting capacity limits on my machine. At one point, I >> could use a server at work that was a workhorse, but that's not >> possible any more (for boring reasons). It was great, and I miss it. >> So I started wondering about renting an AWS instance to help, but I >> quickly got overwhelmed by choice in setting that up. It's now pretty >> clear that their free services won't serve me, even as a trial >> prototype. So before diving deeper, I thought I'd ask: has anyone >> tried this? Or does anyone have a workflow that they like? 
>> >> Problems I have in want of a solution: >> - Someone submits an MR and I'm reviewing it. I want to interact with >> it. This invariably means building from scratch and waiting 45 >> minutes. >> - I work on a patch for a few weeks, on and off. It's ready, but I >> want to rebase. So I build from scratch and wait 45 minutes. >> - I make a controversial change and want to smoke out any programs >> that fail. So I run the testsuite and wait over an hour. >> >> This gets tiresome quickly. Most days of GHC hacking require at least >> one forced task-switch due to these wait times. If I had a snappy >> server, perhaps these times would be lessened. >> > Indeed. I can't imagine working on GHC without my build server. As you > likely know, having a fast machine with plenty of storage always > available has a few nice consequences: > > * I can keep around as many GHC trees (often already built) as I have > concurrent projects > > * I can leave a tmux session running for each of those projects with > build environment, an editor session, and whatever else might be > relevant > > * working from my laptop is no problem, even when running on > battery: just SSH home and pick up where I left off > > Compared to human-hours, even a snappy computer is cheap. > > A few years ago I tried using an AWS instance for my development > environment instead of self-hosting. In the end this experiment didn't > last long for a few reasons: > > * reasonably fast cloud instances are expensive so keeping the machine > up all the time simply wasn't economical (compared to the cost of > running the machine myself). The performance of one AWS "vCPU" tends > to be pretty anemic relative to a single modern core. > > Anyone who uses cloud services for long enough will eventually make a > mistake which puts this cost into perspective. In my case this > mistake was inadvertently leaving a moderate-size instance running > for ten days a few years ago. At that point I realized that with the > cost incurred by this one mistake I could have purchased around a > quarter of a far more capable computer. > > * having to rebuild your development environment every time you need to > do a build is expensive in time, even when automated. Indeed some of > the steps necessary to build a branch aren't even readily automated > (e.g. ensuring that you remember to set your build flavour > correctly). This inevitably results in mistakes, resulting in yet > more rebuilds. > > Admittedly self-hosting does have its costs: > > * You need to reasonably reliable internet connection and power > > * You must configure your local router to allow traffic into the box > > * You must configure a dynamic DNS service so you can reliably reach > your box > > * You must live with the knowledge that you are turning >10W of > perfectly good electricity into heat and carbon dioxide 24 hours per > day, seven days per week. > > (Of course, considering how many dead dinosaurs I will vaporize > getting to Berlin in a few weeks, I suspect I have bigger fish to > fry [1]) > > >> By the way, I'm aware of ghc-artefact-nix, but I don't know how to use >> it. I tried it twice. The first time, I think it worked. But by the >> second time, it had been revamped (ghc-head-from), and I think I >> needed to go into two subshells to get it working... and then the ghc >> I had didn't include the MR code. I think. It's hard to be sure when >> you're not sure whether or not the patch itself is working. 
Part of >> the problem is that I don't use Nix and mostly don't know what I'm >> doing when I follow the ghc-artefact-nix instructions, which seem to >> target Nix users. >> > We should try to fix improve this. I think ghc-artefact-nix could be a > great tool to enable the consumption of CI-prepared bindists. I'll try > to heave a look and document this when I finish my second head.hackage > blog post. > > I personally use NixOS both on my laptop and my build server. This is > quite nice since the environments are guaranteed to be reasonably > consistent. Furthermore, bringing up a development environment on > another machine is straightforward: > > $ git clone git://github.com/alpmestan/ghc.nix > $ nix-shell ghc.nix > $ git clone --recursive https://gitlab.haskell.org/ghc/ghc > $ cd ghc > $ ./validate > > Of course, Nix is far from perfect and it doesn't always realize its > goal of guaranteed reproducibility. However, it is in my opinion a step > up from the ad-hoc Debian configuration that I used up until a couple of > years ago. > > Naturally, your mileage may vary. > > Cheers, > > - Ben > > > [1] I was curious about the numbers here: > > The distance from New Hampshire to Berlin is around 3000 nautical > miles. A typical commercial flight of this distance has a burn rate > per seat [2] of around 3L/100km. > > Burning one liter of jet fuel will evolve [3] roughly 2.5 kg of > CO_2. Consequently, this single trip (both ways) will cost roughly > 800 kg CO_2 eq. > > By contrast, the carbon intensity of electricity production in my > region [4] is 280 gCO_2 eq/kWh. Consequently, assuming an average > power of 50W, running my server for one year would cost around > 100 kg CO_2 eq. > > Indeed it's not as negligible as I thought, but still not awful. > > [2] https://en.wikipedia.org/wiki/Fuel_economy_in_aircraft#Long-haul_flights > [3] https://www.eia.gov/environment/emissions/co2_vol_mass.php > [4] https://www.electricitymap.org/?page=country&solar=false&remote=true&wind=false&countryCode=US-NEISO -------------- next part -------------- An HTML attachment was scrubbed... URL: From dxld at darkboxed.org Wed Jul 24 02:48:29 2019 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Wed, 24 Jul 2019 04:48:29 +0200 Subject: a better workflow? In-Reply-To: <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> Message-ID: <20190724024829.GA32104@darkboxed.org> Hi, On Tue, Jul 23, 2019 at 09:42:37PM -0400, Richard Eisenberg wrote: > Thanks for writing this up. It convinces me to give up on AWS and > either find another solution or live with what I have now. I don't think you ever mentioned -- are you already using `git worktree` to get multiple source checkouts or are you working off a single build tree? I find using it essential to reducing context switching overhead. Also AWS is by far not the only game in town when it comes to server hosting. If you don't mind getting something on a month-to-month basis rather than hourly then bog standard server hosting providers are probably a much cheaper option. Since they don't offer any of the fancy managed cloud features you're unlikely to need. I can recomend Hetzner in terms of price, if you don't mind just getting some old(ish) 4 core, 8 threads hardware they have some really affordable options in the 30EUR/mo range (look for the "Server Auctions" stuff). 
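(To make the `git worktree` suggestion above concrete, here is a minimal sketch -- the paths and branch name are only examples, and you still have to re-initialise the submodules in the new tree yourself:)

    $ cd ~/code/ghc                              # an existing, already-configured checkout
    $ git worktree add ../ghc-wip wip/my-branch  # second checkout sharing the same object store
    $ cd ../ghc-wip
    $ git submodule update --init --recursive    # worktrees don't carry submodule checkouts over

After that each tree keeps its own (untracked) build products, so switching between tasks is just a matter of changing directories rather than rebuilding.
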
--Daniel From steven at steshaw.org Wed Jul 24 02:58:58 2019 From: steven at steshaw.org (Steven Shaw) Date: Wed, 24 Jul 2019 12:58:58 +1000 Subject: a better workflow? In-Reply-To: <20190724024829.GA32104@darkboxed.org> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> <20190724024829.GA32104@darkboxed.org> Message-ID: Hi Richard, I'd second Hetzner. They are in Europe so latency should be pretty good from England. I don't build GHC regularly but I have just purchased a machine similar to this one from Hetnzer (with only a single 500GB Samsung 500GB 970 EVO Plus) and it makes a meal of my client's application with many dependencies. We use a Hetzner machine at work as a CI server and it hasn't let us down yet. Note that I used to use GCP because my MacBook Air wasn't really up to the task. I'd use tmux and emacs so things were pretty good (on a free trial with preemptible — shut down your machine when you're not using it and it can be pretty cheap). However, SSD speeds are not like those you get with a dedicated server. IIRC 300Gbps vs 1000Gbps. Cheers, Steve. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jul 24 03:11:02 2019 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 23 Jul 2019 23:11:02 -0400 Subject: a better workflow? In-Reply-To: References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> <20190724024829.GA32104@darkboxed.org> Message-ID: also: depending on the time scales of havingt these machines, it sometimes makes sense to just have a mini itx/ micro tower/etc at home! I dont have any build recommendations but im sure folks like Ben have suggestions :) On Tue, Jul 23, 2019 at 10:59 PM Steven Shaw wrote: > Hi Richard, > > I'd second Hetzner. They are in Europe so latency should be pretty good > from England. I don't build GHC regularly but I have just purchased a > machine similar to this one from Hetnzer > (with only a > single 500GB Samsung 500GB 970 EVO Plus) and it makes a meal of my client's > application with many dependencies. We use a Hetzner machine at work as a > CI server and it hasn't let us down yet. > > Note that I used to use GCP because my MacBook Air wasn't really up to the > task. I'd use tmux and emacs so things were pretty good (on a free trial > with preemptible — shut down your machine when you're not using it and it > can be pretty cheap). However, SSD speeds are not like those you get with a > dedicated server. IIRC 300Gbps vs 1000Gbps. > > Cheers, > Steve. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zubin.duggal at gmail.com Wed Jul 24 06:38:13 2019 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Wed, 24 Jul 2019 12:08:13 +0530 Subject: HIE files? 
In-Reply-To: <212438F6-DFBD-46FC-92FA-F392E87079B2@richarde.dev> References: <212438F6-DFBD-46FC-92FA-F392E87079B2@richarde.dev> Message-ID: <20190724063813.zmifig27silwqxze@zubinpc> Hi, There is no proper write up for this yet, I will add my comments here to the wiki page on HIE files () and also as a Note in HieAst When adding new syntax or changing a bit of syntax in HIE files, you need to pay attention to the following things: 1) Symbols (Names/Vars/Modules) in the following categories: a) Symbols that appear in the source file that directly correspond to something the user typed b) Symbols that don't appear in the source, but should be in some sense "visible" to a user, particularly via IDE tooling or the like. This includes things like the names introduced by RecordWildcards (We record all the names introduced by a (..) in HIE files), and will include implicit parameters and evidence variables after one of my pending MRs lands. 2) Subtrees that may contain such symbols For 1), you need to call `toHie` for one of the following instances instance ToHie (Context (Located Name)) where ... instance ToHie (Context (Located Var)) where ... instance ToHie (IEContext (Located ModuleName)) where ... `Context` is a data type that looks like: data Context a = C ContextInfo a -- Used for names and bindings `ContextInfo` is defined in `HieTypes`, and looks like data ContextInfo = Use -- ^ regular variable | MatchBind | IEThing IEType -- ^ import/export | TyDecl -- | Value binding | ValBind BindType -- ^ whether or not the binding is in an instance Scope -- ^ scope over which the value is bound (Maybe Span) -- ^ span of entire binding ... It is used to annotate symbols in the .hie files with some extra information on the context in which they occur and should be fairly self explanatory. You need to select one that looks appropriate for the symbol usage. In very rare cases, you might need to extend this sum type if none of the cases seem appropriate. If you select one that corresponds to a binding site, you will need to provide a `Scope` and a `Span` for your binding. Both of these are basically `SrcSpans`. The `SrcSpan` in the `Scope` is supposed to span over the part of the source where the symbol can be legally allowed to occur. For more details on how to calculate this, see Note [Capturing Scopes and other non local information] in HieAst. The binding `Span` is supposed to be the span of the entire binding for the name. For a function definition `foo`: foo x = x + y where y = x^2 This is the span of the entire function definition from `foo x` to `x^2`. For a class definition, this is the span of the entire class, and so on. If this isn't well defined for your bit of syntax (like a variable bound by a lambda), then you can just supply a `Nothing` There is a test that checks that all symbols in the resulting HIE file occur inside their stated `Scope`. This can be turned on by passing the -fvalidate-ide-info flag to ghc along with -fwrite-ide-info to generate the .hie file. You may also want to provide a test in testsuite/test/hiefile that includes a file containing your new construction, and tests that the calculated scope is valid (by using -fvalidate-ide-info) For subtrees in the AST that may contain symbols, the procedure is fairly straightforward. If you are extending the GHC AST, you will need to provide a `ToHie` instance for any new types you may have introduced in the AST. 
Here are is an extract from the `ToHie` instance for (LHsExpr (GhcPass p)): toHie e@(L mspan oexpr) = concatM $ getTypeNode e : case oexpr of HsVar _ (L _ var) -> [ toHie $ C Use (L mspan var) -- Patch up var location since typechecker removes it ] HsConLikeOut _ con -> [ toHie $ C Use $ L mspan $ conLikeName con ] ... HsApp _ a b -> [ toHie a , toHie b ] If your subtree is `Located` or has a `SrcSpan` available, the output list should contain a HieAst `Node` corresponding to the subtree. You can use either `makeNode` or `getTypeNode` for this purpose, depending on whether it makes sense to assign a `Type` to the subtree. After this, you just need to concatenate the result of calling `toHie` on all subexpressions and appropriately annotated symbols contained in the subtree. If your subtree doesn't have a span available, you can omit the `makeNode` call and just recurse directly in to the subexpressions. I can clarify any remaining questions you might have. If you are satisfied with this write up, I can proceed to add it as a Note to HieAst. Hope this helps, Zubin On 19/07/22 18:09, Richard Eisenberg wrote: > Hi devs, > > I recently learned about the code for HIE files. This is quite a substantial new development in GHC, judging from the amount of code. I understand broadly why it's here, but I'd like to learn more specifics. For example, if I'm adding a new bit of syntax, how should I update HIE generation? What if I'm changing a bit of syntax? Is there a primer to all this? > > Thanks! > Richard From omeragacan at gmail.com Wed Jul 24 12:40:05 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 24 Jul 2019 15:40:05 +0300 Subject: Tracking issues for GHC proposals? Message-ID: Hi all, It's currently quite hard to see progress of proposals. For example, I'm looking at the "small primitives" proposal [1]. After some digging I can see that it's mostly (if not completely) implemented [2, 3], but finding all this takes time. I think it'd make sense to create a tracking issue when a proposal is accepted, on GHC's Gitlab, and link to it from the proposal. All related MRs and issues would then be linked from the tracking issue, making seeing progress easier. How does that sound? Ömer [1]: https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0014-small-primitives.rst [2]: https://phabricator.haskell.org/D5006 [3]: https://phabricator.haskell.org/D4475 From rae at richarde.dev Wed Jul 24 12:41:57 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 24 Jul 2019 08:41:57 -0400 Subject: Tracking issues for GHC proposals? In-Reply-To: References: Message-ID: <6874884C-2465-4134-BB43-D237F0CA591A@richarde.dev> Yes, please! > On Jul 24, 2019, at 8:40 AM, Ömer Sinan Ağacan wrote: > > Hi all, > > It's currently quite hard to see progress of proposals. For example, I'm looking > at the "small primitives" proposal [1]. After some digging I can see that it's > mostly (if not completely) implemented [2, 3], but finding all this takes time. > > I think it'd make sense to create a tracking issue when a proposal is accepted, > on GHC's Gitlab, and link to it from the proposal. All related MRs and issues > would then be linked from the tracking issue, making seeing progress easier. How > does that sound? 
> > Ömer > > [1]: https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0014-small-primitives.rst > [2]: https://phabricator.haskell.org/D5006 > [3]: https://phabricator.haskell.org/D4475 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Wed Jul 24 12:55:43 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 24 Jul 2019 08:55:43 -0400 Subject: HIE files? In-Reply-To: <20190724063813.zmifig27silwqxze@zubinpc> References: <212438F6-DFBD-46FC-92FA-F392E87079B2@richarde.dev> <20190724063813.zmifig27silwqxze@zubinpc> Message-ID: <31CFA1BD-1F58-4620-8B87-3ACD6060DF25@richarde.dev> > On Jul 24, 2019, at 2:38 AM, Zubin Duggal wrote: > > I can clarify any remaining questions you might have. If you are satisfied > with this write up, I can proceed to add it as a Note to HieAst. This has been very helpful -- thanks! There are a few questions that I have, but this email contains the general info I'm looking for, and I think going through the normal review process will be better than creating an email chain here. My one overall suggestion is that it might be most useful for the Note to begin with instructions to those who edit the GHC AST -- that will be your main audience. Then, perhaps, someone will need to extend the HIE AST as well, and there should be further instructions. I have made #16975 (https://gitlab.haskell.org/ghc/ghc/issues/16975 ) to track progress of adding this Note. Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Wed Jul 24 13:03:17 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 24 Jul 2019 09:03:17 -0400 Subject: a better workflow? In-Reply-To: <20190724024829.GA32104@darkboxed.org> References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> <20190724024829.GA32104@darkboxed.org> Message-ID: > On Jul 23, 2019, at 10:48 PM, Daniel Gröber wrote: > > I don't think you ever mentioned -- are you already using `git > worktree` to get multiple source checkouts or are you working off a > single build tree? I find using it essential to reducing context > switching overhead. This is a good point. No, I'm not currently. Some post I read (actually, I think the manpage) said that `git worktree` and submodules don't mix, so I got scared off. Regardless, I don't think worktree will solve my problem exactly. It eliminates the annoyance of shuttling commits from one checkout to another, but that's not really a pain point for me. (Yes, it's a small annoyance, but I hit it only rarely, and it's quick to sort out.) Perhaps I'm missing something though about worktree that will allow more, e.g., sharing of build products. Am I? Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Wed Jul 24 14:57:18 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Wed, 24 Jul 2019 15:57:18 +0100 Subject: a better workflow? In-Reply-To: References: <5FC105B0-3B38-4E13-B8DF-B37708945751@richarde.dev> <87v9vsmebc.fsf@smart-cactus.org> <13C500D2-7C82-4193-B42E-A7D557193A91@richarde.dev> <20190724024829.GA32104@darkboxed.org> Message-ID: I found that git worktree works rather well, even with submodules (well, mostly. 
Even if it doesn't for some reason, you can still update and init the submodules manually, losing sharing in the process). See https://stackoverflow.com/a/31872051, in particular the GitHub links to `wtas` alias. I mostly do this: $ cd ~/code/hs/ghc $ cd pristine $ git wtas ../pmcheck and mostly just hack away. From time to time I seem to have issues because of confused submodule references, but as I said above doing a `git submodule update --init --recursive` fixes that. Cloning the root GHC checkout is the most time-consuming step, after all. Also I'm currently in the rather comfortable situation of having an 8 core azure VM just for GHC dev, which is pretty amazing. Doing the same as Ben here: Having a tmux open with one (or more) tab per checkout I'm working on in parallel. VSCode is my editor of choice and seamlessly picks up any SSH connection I throw at it. Can highly recommend that when you're on a rather weak machine like a laptop or convertible. Am Mi., 24. Juli 2019 um 14:03 Uhr schrieb Richard Eisenberg < rae at richarde.dev>: > > > On Jul 23, 2019, at 10:48 PM, Daniel Gröber wrote: > > I don't think you ever mentioned -- are you already using `git > worktree` to get multiple source checkouts or are you working off a > single build tree? I find using it essential to reducing context > switching overhead. > > > This is a good point. No, I'm not currently. Some post I read (actually, I > think the manpage) said that `git worktree` and submodules don't mix, so I > got scared off. Regardless, I don't think worktree will solve my problem > exactly. It eliminates the annoyance of shuttling commits from one checkout > to another, but that's not really a pain point for me. (Yes, it's a small > annoyance, but I hit it only rarely, and it's quick to sort out.) Perhaps > I'm missing something though about worktree that will allow more, e.g., > sharing of build products. Am I? > > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Wed Jul 24 21:07:12 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 24 Jul 2019 17:07:12 -0400 Subject: Typing rules for Haskell Message-ID: Hi devs, Simon and I were wondering about a tight specification for the recent action in proposal #253 (https://github.com/ghc-proposals/ghc-proposals/pull/253 ). We needed to see the typing rules. So I made a repo (https://gitlab.haskell.org/rae/haskell ) to collect typing rules for source Haskell. I managed to convince the CI infrastructure to produce a PDF at every upload; it is linked from the README. The current version contains the proposed result signatures; it should probably be in a branch, but life is short. Contributions very welcome! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Jul 24 23:32:07 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 24 Jul 2019 19:32:07 -0400 Subject: Typing rules for Haskell In-Reply-To: References: Message-ID: <87k1c7m2kt.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > Simon and I were wondering about a tight specification for the recent > action in proposal #253 > (https://github.com/ghc-proposals/ghc-proposals/pull/253 > ). We needed > to see the typing rules. 
So I made a repo > (https://gitlab.haskell.org/rae/haskell > ) to collect typing rules for > source Haskell. I managed to convince the CI infrastructure to produce > a PDF at every upload; it is linked from the README. > > The current version contains the proposed result signatures; it should > probably be in a branch, but life is short. > Very nice! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Thu Jul 25 07:07:08 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 25 Jul 2019 10:07:08 +0300 Subject: Typing rules for Haskell In-Reply-To: <87k1c7m2kt.fsf@smart-cactus.org> References: <87k1c7m2kt.fsf@smart-cactus.org> Message-ID: Nice! Perhaps I should revive my or-patterns proposal now. The main problem (IIRC) was that I had to give a rather large subset of Haskell (that includes pattern matching) typing rules to show typing rules of or-patterns. Now that that part is done perhaps I can find the time for the rest. Ömer Ben Gamari , 25 Tem 2019 Per, 02:32 tarihinde şunu yazdı: > > Richard Eisenberg writes: > > > Hi devs, > > > > Simon and I were wondering about a tight specification for the recent > > action in proposal #253 > > (https://github.com/ghc-proposals/ghc-proposals/pull/253 > > ). We needed > > to see the typing rules. So I made a repo > > (https://gitlab.haskell.org/rae/haskell > > ) to collect typing rules for > > source Haskell. I managed to convince the CI infrastructure to produce > > a PDF at every upload; it is linked from the README. > > > > The current version contains the proposed result signatures; it should > > probably be in a branch, but life is short. > > > Very nice! > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Jul 25 11:21:21 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 25 Jul 2019 12:21:21 +0100 Subject: Try haskell-ide-engine on GHC! Message-ID: Hi all, As some of you know I have been working on getting haskell-ide-engine working on GHC for the last few months. Perhaps now the branch is in a usable state where people can try it and report issues. All the basic features such as, hover, completion, error reporting, go to definition etc should work well. I suspect this will be enough for most developers. I have compiled a list of instructions about how to try out the branch. https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793 In the last few weeks Zubin has been a great help finishing some parts of the patch that I lost steam for and given it a much better chance of getting merged into the main repo before the end of the year. Cheers, Matt From gertjan.bottu at kuleuven.be Thu Jul 25 09:23:39 2019 From: gertjan.bottu at kuleuven.be (Gert-Jan Bottu) Date: Thu, 25 Jul 2019 11:23:39 +0200 Subject: Any ways to test a GHC build against large set of packages (including test suites)? In-Reply-To: References: Message-ID: Hi, I'm trying to do something similar : I'm hacking around with GHC, and would like to build a large set of packages to verify my changes. Similarly to the steps described below, I've followed the scheduled build in .circle/config.yml, but I can't figure out how to force it to use my own (hacked upon) GHC build? 
More concretely, the steps I took (from the lastest .circle/config.yml): - Installed my local GHC to ~/ghc-head - Installed stackage-build-plan, stackage-curator and stackage-head from git repos - export BUILD_PLAN=nightly-2018-10-23 - curl https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json --output metadata.json - curl https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml --output $BUILD_PLAN.yaml - fix-build-plan $BUILD_PLAN.yaml custom-source-urls.yaml - stackage-curator make-bundle --allow-newer --jobs 9 --plan-file $BUILD_PLAN.yaml --docmap-file docmap-file.yaml --target $BUILD_PLAN --skip-haddock --skip-hoogle --skip-benches --no-rebuild-cabal -v > build.log 2>&1 This manages to build Stackage and generate a report just fine, but it doesn't use my ~/ghc-head GHC install. Any ideas how I can point stackage-curator to a specific GHC install? Thanks Gert-Jan On 10.08.18 10:39, Ömer Sinan Ağacan wrote: > Hi, > > This is working great, I just generated my first report. One problem is stm-2.4 > doesn't compile with GHC HEAD, we need stm-2.5.0.0. But that's not published on > Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if there's > anything that can be done about this. Apparently stm blocks 82 packages (I > don't know if that's counting transitively or just packages that are directly > blocked by stm). Any ideas about this? > > Ömer > > Ömer Sinan Ağacan , 9 Ağu 2018 Per, 14:45 > tarihinde şunu yazdı: >> Ah, I now realize that that command is supposed to print that output. I'll >> continue following the steps and keep you updated if I get stuck again. >> >> Ömer >> >> Ömer Sinan Ağacan , 9 Ağu 2018 Per, 13:20 >> tarihinde şunu yazdı: >>> Hi Manuel, >>> >>> I'm trying stackage-head. I'm following the steps for the scheduled build in >>> .circleci/config.yml. So far steps I took: >>> >>> - Installed ghc-head (from [1]) to ~/ghc-head >>> - Installed stackage-build-plan, stackage-curator and stackage-head (with >>> -fdev) from git repos, using stack. >>> - export BUILD_PLAN=nightly-2018-07-30 (from config.yml) >>> - curl https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json >>> --output metadata.json >>> - curl https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml >>> --output $BUILD_PLAN.yaml >>> >>> Now I'm doing >>> >>> - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN >>> --ghc-metadata metadata.json --outdir build-reports >>> >>> but it's failing with >>> >>> The combination of target and commit is new to me >>> >>> Any ideas what I'm doing wrong? >>> >>> Thanks >>> >>> [1]: https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz >>> >>> Ömer >>> >>> Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28 >>> tarihinde şunu yazdı: >>>> Thanks for both suggestions. I'll try both and see which one works better. >>>> >>>> Ömer >>>> >>>> Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15 >>>> tarihinde şunu yazdı: >>>>> Hi Ömer, >>>>> >>>>> This is exactly the motivation for the Stackage HEAD works that we have pushed at Tweag I/O in the context of the GHC DevOps group. 
Have a look at >>>>> >>>>> https://github.com/tweag/stackage-head >>>>> >>>>> and also the blog post from when the first version went live: >>>>> >>>>> https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html >>>>> >>>>> Cheers, >>>>> Manuel >>>>> >>>>>> Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan : >>>>>> >>>>>> Hi, >>>>>> >>>>>> I'd like to test some GHC builds + some compile and runtime flag combinations >>>>>> against a large set of packages by building them and running test suites. For >>>>>> this I need >>>>>> >>>>>> - A set of packages that are known to work with latest GHC >>>>>> - A way to build them and run their test suites (if I could specify compile and >>>>>> runtime flags that'd be even better) >>>>>> >>>>>> I think stackage can serve as (1) but I don't know how to do (2). Can anyone >>>>>> point me to the right direction? I vaguely remember some nix-based solution for >>>>>> this that was being discussed on the IRC channel, but can't recall any details. >>>>>> >>>>>> Thanks, >>>>>> >>>>>> Ömer >>>>>> _______________________________________________ >>>>>> ghc-devs mailing list >>>>>> ghc-devs at haskell.org >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Jul 25 14:17:47 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 25 Jul 2019 15:17:47 +0100 Subject: Any ways to test a GHC build against large set of packages (including test suites)? In-Reply-To: References: Message-ID: Hi Gert-Jan, Have you considered using the head.hackage infrastructure? There is a CI job which builds a set of package with HEAD. It is designed for this kind of testing. In order to test it on your branch you probably just need to it at a suitable bindist. Cheers, Matt On Thu, Jul 25, 2019 at 3:10 PM Gert-Jan Bottu wrote: > > Hi, > > I'm trying to do something similar : I'm hacking around with GHC, and > would like to build a large set of packages to verify my changes. > Similarly to the steps described below, I've followed the scheduled > build in .circle/config.yml, but I can't figure out how to force it to > use my own (hacked upon) GHC build? > > More concretely, the steps I took (from the lastest .circle/config.yml): > - Installed my local GHC to ~/ghc-head > - Installed stackage-build-plan, stackage-curator and stackage-head from > git repos > - export BUILD_PLAN=nightly-2018-10-23 > - curl > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json > --output metadata.json > - curl > https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml > --output $BUILD_PLAN.yaml > - fix-build-plan $BUILD_PLAN.yaml custom-source-urls.yaml > - stackage-curator make-bundle --allow-newer --jobs 9 --plan-file > $BUILD_PLAN.yaml --docmap-file docmap-file.yaml --target $BUILD_PLAN > --skip-haddock --skip-hoogle --skip-benches --no-rebuild-cabal -v > > build.log 2>&1 > > This manages to build Stackage and generate a report just fine, but it > doesn't use my ~/ghc-head GHC install. Any ideas how I can point > stackage-curator to a specific GHC install? > > Thanks > > Gert-Jan > > On 10.08.18 10:39, Ömer Sinan Ağacan wrote: > > Hi, > > > > This is working great, I just generated my first report. One problem is stm-2.4 > > doesn't compile with GHC HEAD, we need stm-2.5.0.0. 
But that's not published on > > Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if there's > > anything that can be done about this. Apparently stm blocks 82 packages (I > > don't know if that's counting transitively or just packages that are directly > > blocked by stm). Any ideas about this? > > > > Ömer > > > > Ömer Sinan Ağacan , 9 Ağu 2018 Per, 14:45 > > tarihinde şunu yazdı: > >> Ah, I now realize that that command is supposed to print that output. I'll > >> continue following the steps and keep you updated if I get stuck again. > >> > >> Ömer > >> > >> Ömer Sinan Ağacan , 9 Ağu 2018 Per, 13:20 > >> tarihinde şunu yazdı: > >>> Hi Manuel, > >>> > >>> I'm trying stackage-head. I'm following the steps for the scheduled build in > >>> .circleci/config.yml. So far steps I took: > >>> > >>> - Installed ghc-head (from [1]) to ~/ghc-head > >>> - Installed stackage-build-plan, stackage-curator and stackage-head (with > >>> -fdev) from git repos, using stack. > >>> - export BUILD_PLAN=nightly-2018-07-30 (from config.yml) > >>> - curl https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json > >>> --output metadata.json > >>> - curl https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml > >>> --output $BUILD_PLAN.yaml > >>> > >>> Now I'm doing > >>> > >>> - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN > >>> --ghc-metadata metadata.json --outdir build-reports > >>> > >>> but it's failing with > >>> > >>> The combination of target and commit is new to me > >>> > >>> Any ideas what I'm doing wrong? > >>> > >>> Thanks > >>> > >>> [1]: https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz > >>> > >>> Ömer > >>> > >>> Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28 > >>> tarihinde şunu yazdı: > >>>> Thanks for both suggestions. I'll try both and see which one works better. > >>>> > >>>> Ömer > >>>> > >>>> Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15 > >>>> tarihinde şunu yazdı: > >>>>> Hi Ömer, > >>>>> > >>>>> This is exactly the motivation for the Stackage HEAD works that we have pushed at Tweag I/O in the context of the GHC DevOps group. Have a look at > >>>>> > >>>>> https://github.com/tweag/stackage-head > >>>>> > >>>>> and also the blog post from when the first version went live: > >>>>> > >>>>> https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html > >>>>> > >>>>> Cheers, > >>>>> Manuel > >>>>> > >>>>>> Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan : > >>>>>> > >>>>>> Hi, > >>>>>> > >>>>>> I'd like to test some GHC builds + some compile and runtime flag combinations > >>>>>> against a large set of packages by building them and running test suites. For > >>>>>> this I need > >>>>>> > >>>>>> - A set of packages that are known to work with latest GHC > >>>>>> - A way to build them and run their test suites (if I could specify compile and > >>>>>> runtime flags that'd be even better) > >>>>>> > >>>>>> I think stackage can serve as (1) but I don't know how to do (2). Can anyone > >>>>>> point me to the right direction? I vaguely remember some nix-based solution for > >>>>>> this that was being discussed on the IRC channel, but can't recall any details. 
> >>>>>> > >>>>>> Thanks, > >>>>>> > >>>>>> Ömer > >>>>>> _______________________________________________ > >>>>>> ghc-devs mailing list > >>>>>> ghc-devs at haskell.org > >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Thu Jul 25 16:01:47 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 25 Jul 2019 12:01:47 -0400 Subject: Any ways to test a GHC build against large set of packages (including test suites)? In-Reply-To: References: Message-ID: <87a7d2m7bu.fsf@smart-cactus.org> Gert-Jan Bottu writes: > Hi, > > I'm trying to do something similar : I'm hacking around with GHC, and > would like to build a large set of packages to verify my changes. > Similarly to the steps described below, I've followed the scheduled > build in .circle/config.yml, but I can't figure out how to force it to > use my own (hacked upon) GHC build? > I can't comment on stackage-curator but would second mpickering's question. head.hackage is designed precisely for this sort of application. I have a pair of draft blog posts [1,2], to be published soon, which document its usage. Cheers, - Ben [1] https://gitlab.haskell.org/ghc/homepage/merge_requests/16 [2] https://gitlab.haskell.org/ghc/homepage/merge_requests/29 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From sylvain at haskus.fr Thu Jul 25 16:28:50 2019 From: sylvain at haskus.fr (Sylvain Henry) Date: Thu, 25 Jul 2019 18:28:50 +0200 Subject: Any ways to test a GHC build against large set of packages (including test suites)? In-Reply-To: References: Message-ID: <2787c542-1743-d15a-8f97-d48bd4def06e@haskus.fr> Hi, I've never used stackage-curator but "curator 2.0" [1] seems to generate a stack.yaml file that can be used by Stack to build all the packages of the selected snapshot. As Stack supports installing GHC bindists and Stack 2.0 even supports building and installing GHC from a GIT repository [2], you should just have to edit the generated stack.yaml file to use another compiler. Cheers, Sylvain [1] https://github.com/commercialhaskell/curator [2] https://docs.haskellstack.org/en/stable/yaml_configuration/#building-ghc-from-source-experimental On 25/07/2019 11:23, Gert-Jan Bottu wrote: > Hi, > > I'm trying to do something similar : I'm hacking around with GHC, and > would like to build a large set of packages to verify my changes. > Similarly to the steps described below, I've followed the scheduled > build in .circle/config.yml, but I can't figure out how to force it to > use my own (hacked upon) GHC build? 
> > More concretely, the steps I took (from the lastest .circle/config.yml): > - Installed my local GHC to ~/ghc-head > - Installed stackage-build-plan, stackage-curator and stackage-head > from git repos > - export BUILD_PLAN=nightly-2018-10-23 > - curl > https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json > --output metadata.json > - curl > https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml > --output $BUILD_PLAN.yaml > - fix-build-plan $BUILD_PLAN.yaml custom-source-urls.yaml > - stackage-curator make-bundle --allow-newer --jobs 9 --plan-file > $BUILD_PLAN.yaml --docmap-file docmap-file.yaml --target $BUILD_PLAN > --skip-haddock --skip-hoogle --skip-benches --no-rebuild-cabal -v > > build.log 2>&1 > > This manages to build Stackage and generate a report just fine, but it > doesn't use my ~/ghc-head GHC install. Any ideas how I can point > stackage-curator to a specific GHC install? > > Thanks > > Gert-Jan > > On 10.08.18 10:39, Ömer Sinan Ağacan wrote: >> Hi, >> >> This is working great, I just generated my first report. One problem >> is stm-2.4 >> doesn't compile with GHC HEAD, we need stm-2.5.0.0. But that's not >> published on >> Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if >> there's >> anything that can be done about this. Apparently stm blocks 82 >> packages (I >> don't know if that's counting transitively or just packages that are >> directly >> blocked by stm). Any ideas about this? >> >> Ömer >> >> Ömer Sinan Ağacan , 9 Ağu 2018 Per, 14:45 >> tarihinde şunu yazdı: >>> Ah, I now realize that that command is supposed to print that >>> output. I'll >>> continue following the steps and keep you updated if I get stuck again. >>> >>> Ömer >>> >>> Ömer Sinan Ağacan , 9 Ağu 2018 Per, 13:20 >>> tarihinde şunu yazdı: >>>> Hi Manuel, >>>> >>>> I'm trying stackage-head. I'm following the steps for the scheduled >>>> build in >>>> .circleci/config.yml. So far steps I took: >>>> >>>> - Installed ghc-head (from [1]) to ~/ghc-head >>>> - Installed stackage-build-plan, stackage-curator and stackage-head >>>> (with >>>>    -fdev) from git repos, using stack. >>>> - export BUILD_PLAN=nightly-2018-07-30 (from config.yml) >>>> - curl >>>> https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json >>>> --output metadata.json >>>> - curl >>>> https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml >>>> --output $BUILD_PLAN.yaml >>>> >>>> Now I'm doing >>>> >>>> - ./.local/bin/stackage-head already-seen --target $BUILD_PLAN >>>> --ghc-metadata metadata.json --outdir build-reports >>>> >>>> but it's failing with >>>> >>>>      The combination of target and commit is new to me >>>> >>>> Any ideas what I'm doing wrong? >>>> >>>> Thanks >>>> >>>> [1]: >>>> https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz >>>> >>>> Ömer >>>> >>>> Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28 >>>> tarihinde şunu yazdı: >>>>> Thanks for both suggestions. I'll try both and see which one works >>>>> better. >>>>> >>>>> Ömer >>>>> >>>>> Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15 >>>>> tarihinde şunu yazdı: >>>>>> Hi Ömer, >>>>>> >>>>>> This is exactly the motivation for the Stackage HEAD works that >>>>>> we have pushed at Tweag I/O in the context of the GHC DevOps >>>>>> group. 
Have a look at >>>>>> >>>>>>    https://github.com/tweag/stackage-head >>>>>> >>>>>> and also the blog post from when the first version went live: >>>>>> >>>>>> https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html >>>>>> >>>>>> Cheers, >>>>>> Manuel >>>>>> >>>>>>> Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan >>>>>>> : >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> I'd like to test some GHC builds + some compile and runtime flag >>>>>>> combinations >>>>>>> against a large set of packages by building them and running >>>>>>> test suites. For >>>>>>> this I need >>>>>>> >>>>>>> - A set of packages that are known to work with latest GHC >>>>>>> - A way to build them and run their test suites (if I could >>>>>>> specify compile and >>>>>>>   runtime flags that'd be even better) >>>>>>> >>>>>>> I think stackage can serve as (1) but I don't know how to do >>>>>>> (2). Can anyone >>>>>>> point me to the right direction? I vaguely remember some >>>>>>> nix-based solution for >>>>>>> this that was being discussed on the IRC channel, but can't >>>>>>> recall any details. >>>>>>> >>>>>>> Thanks, >>>>>>> >>>>>>> Ömer >>>>>>> _______________________________________________ >>>>>>> ghc-devs mailing list >>>>>>> ghc-devs at haskell.org >>>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Thu Jul 25 16:52:56 2019 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 25 Jul 2019 12:52:56 -0400 Subject: Typing rules for Haskell In-Reply-To: References: <87k1c7m2kt.fsf@smart-cactus.org> Message-ID: <376688C3-6D99-49A1-B4E8-C55D7B8E4B6F@richarde.dev> Great idea. I believe that if you fork my repo, your fork will also auto-build a PDF on pushing. Feel free to add text among the Greek, as well. :) > On Jul 25, 2019, at 3:07 AM, Ömer Sinan Ağacan wrote: > > Nice! > > Perhaps I should revive my or-patterns proposal now. The main problem (IIRC) was > that I had to give a rather large subset of Haskell (that includes pattern > matching) typing rules to show typing rules of or-patterns. Now that that part > is done perhaps I can find the time for the rest. > > Ömer > > Ben Gamari , 25 Tem 2019 Per, 02:32 tarihinde şunu yazdı: >> >> Richard Eisenberg writes: >> >>> Hi devs, >>> >>> Simon and I were wondering about a tight specification for the recent >>> action in proposal #253 >>> (https://github.com/ghc-proposals/ghc-proposals/pull/253 >>> ). We needed >>> to see the typing rules. So I made a repo >>> (https://gitlab.haskell.org/rae/haskell >>> ) to collect typing rules for >>> source Haskell. I managed to convince the CI infrastructure to produce >>> a PDF at every upload; it is linked from the README. >>> >>> The current version contains the proposed result signatures; it should >>> probably be in a branch, but life is short. >>> >> Very nice! 
>> >> Cheers, >> >> - Ben >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From juhpetersen at gmail.com Fri Jul 26 04:44:55 2019 From: juhpetersen at gmail.com (Jens Petersen) Date: Fri, 26 Jul 2019 12:44:55 +0800 Subject: [ANNOUNCE] GHC 8.8.1 release candidate 1 is now available In-Reply-To: <878sspod8v.fsf@smart-cactus.org> References: <878sspod8v.fsf@smart-cactus.org> Message-ID: On Tue, 23 Jul 2019 at 07:35, Ben Gamari wrote: > The GHC team is pleased to announce the release candidate for GHC 8.8.1. : > https://downloads.haskell.org/ghc/8.8.1-rc1 Thanks! I can build it successfully on Fedora Rawhide. However s390x fails as reported in . Also llvm7.0 is no longer detected on Fedora but I have patched around that for now (see https://gitlab.haskell.org/ghc/ghc/issues/16990). > This release is the culmination of over 3000 commits by over one hundred > contributors Wow, congratulations Thanks, Jens From sgraf1337 at gmail.com Fri Jul 26 09:36:39 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Fri, 26 Jul 2019 10:36:39 +0100 Subject: Try haskell-ide-engine on GHC! In-Reply-To: References: Message-ID: Hey all, What can I say, after few hours of on and off tinkering I got it to work! The hover information is incredibly helpful, as is jump to definition. It works even in modules with type and name errors! The error information not so much (yet), at least not compared to the shorter feedback loop of using ghcid. Haven't used completions in anger yet, but it works quite well when fooling around with it. Great work, Zubin and Matthew! :) As to my setup: I'm using VSCode Remote, so the language server will run on my build VM which VSCode communicates with via SSH. I'm using nix+home-manager to manage my configuration over there, so I had to wrap the hie executable with the following script: #! /usr/bin/env bash . /etc/profile.d/nix.sh nix-shell --pure /path/to/ghc.nix/ --run /path/to/haskell-ide-engine/dist-newstyle/build/x86_64-linux/ghc-8.6.4/haskell-ide-engine-1.0.0.0/x/hie/build/hie/hie Also the shellHook echo output from ghc.nix confuses the language server protocol, so be sure to delete those 4 lines from ghc.nix/default.nix. It takes quite a while to initialise the first time around. Be sure to look at the output of the alanz.vscode-hie-server extension to see if there's any progress being made. Can only encourage you to try this out! Best, Sebastian Am Do., 25. Juli 2019 um 12:21 Uhr schrieb Matthew Pickering < matthewtpickering at gmail.com>: > Hi all, > > As some of you know I have been working on getting haskell-ide-engine > working on GHC for the last few months. Perhaps now the branch is in a > usable state where people can try it and report issues. All the basic > features such as, hover, completion, error reporting, go to definition > etc should work well. I suspect this will be enough for most > developers. > > I have compiled a list of instructions about how to try out the branch. > > https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793 > > In the last few weeks Zubin has been a great help finishing some parts > of the patch that I lost steam for and given it a much better chance > of getting merged into the main repo before the end of the year. 
> > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alp at well-typed.com Fri Jul 26 10:23:45 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Fri, 26 Jul 2019 12:23:45 +0200 Subject: Try haskell-ide-engine on GHC! In-Reply-To: References: Message-ID: <9775541e-445f-2695-f481-41703e240bf0@well-typed.com> Maybe we can just remove those lines now. :-) On 26/07/2019 11:36, Sebastian Graf wrote: > Hey all, > > What can I say, after few hours of on and off tinkering I got it to work! > The hover information is incredibly helpful, as is jump to definition. > It works even in modules with type and name errors! > The error information not so much (yet), at least not compared to the > shorter feedback loop of using ghcid. > Haven't used completions in anger yet, but it works quite well when > fooling around with it. > > Great work, Zubin and Matthew! :) > > As to my setup: I'm using VSCode Remote, so the language server will > run on my build VM which VSCode communicates with via SSH. > I'm using nix+home-manager to manage my configuration over there, so I > had to wrap the hie executable with the following script: > > #! /usr/bin/env bash > . /etc/profile.d/nix.sh > nix-shell --pure /path/to/ghc.nix/ --run > /path/to/haskell-ide-engine/dist-newstyle/build/x86_64-linux/ghc-8.6.4/haskell-ide-engine-1.0.0.0/x/hie/build/hie/hie > > Also the shellHook echo output from ghc.nix confuses the language > server protocol, so be sure to delete those 4 lines from > ghc.nix/default.nix. > > It takes quite a while to initialise the first time around. Be sure to > look at the output of the alanz.vscode-hie-server extension to see if > there's any progress being made. > Can only encourage you to try this out! > > Best, > Sebastian > > > Am Do., 25. Juli 2019 um 12:21 Uhr schrieb Matthew Pickering > >: > > Hi all, > > As some of you know I have been working on getting haskell-ide-engine > working on GHC for the last few months. Perhaps now the branch is in a > usable state where people can try it and report issues. All the basic > features such as, hover, completion, error reporting, go to definition > etc should work well. I suspect this will be enough for most > developers. > > I have compiled a list of instructions about how to try out the > branch. > > https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793 > > In the last few weeks Zubin has been a great help finishing some parts > of the patch that I lost steam for and given it a much better chance > of getting merged into the main repo before the end of the year. > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From matthewtpickering at gmail.com Fri Jul 26 10:31:26 2019
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Fri, 26 Jul 2019 11:31:26 +0100
Subject: Try haskell-ide-engine on GHC!
In-Reply-To: <9775541e-445f-2695-f481-41703e240bf0@well-typed.com>
References: <9775541e-445f-2695-f481-41703e240bf0@well-typed.com>
Message-ID: 

I noticed myself that things start going quite badly when you start
opening files which are not part of the main ghc library. The workaround
is to place a `hie.yaml` file in a subdirectory, to indicate that a
certain subdirectory shouldn't use the same settings as the compiler.
For example, I placed this `default` cradle in one such subdirectory:

```
cradle: {default}
```

I should probably add a `cradle: {none}` option as well, just to disable
HIE on certain files; it will never be able to work for loading some
tests, as they require a HEAD version of GHC.

In the future I would like to teach hadrian to be able to load any GHC
component built in stage1 into GHCi; it should not be much work, but I
haven't got around to it yet.

On Fri, Jul 26, 2019 at 11:24 AM Alp Mestanogullari wrote:
>
> Maybe we can just remove those lines now. :-)
>
> On 26/07/2019 11:36, Sebastian Graf wrote:
>
> Hey all,
>
> What can I say, after few hours of on and off tinkering I got it to work!
> The hover information is incredibly helpful, as is jump to definition. It works even in modules with type and name errors!
> The error information not so much (yet), at least not compared to the shorter feedback loop of using ghcid.
> Haven't used completions in anger yet, but it works quite well when fooling around with it.
>
> Great work, Zubin and Matthew! :)
>
> As to my setup: I'm using VSCode Remote, so the language server will run on my build VM which VSCode communicates with via SSH.
> I'm using nix+home-manager to manage my configuration over there, so I had to wrap the hie executable with the following script:
>
> #! /usr/bin/env bash
> . /etc/profile.d/nix.sh
> nix-shell --pure /path/to/ghc.nix/ --run /path/to/haskell-ide-engine/dist-newstyle/build/x86_64-linux/ghc-8.6.4/haskell-ide-engine-1.0.0.0/x/hie/build/hie/hie
>
> Also the shellHook echo output from ghc.nix confuses the language server protocol, so be sure to delete those 4 lines from ghc.nix/default.nix.
>
> It takes quite a while to initialise the first time around. Be sure to look at the output of the alanz.vscode-hie-server extension to see if there's any progress being made.
> Can only encourage you to try this out!
>
> Best,
> Sebastian
>
>
> Am Do., 25.
>> >> Cheers, >> >> Matt >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- > Alp Mestanogullari, Haskell Consultant > Well-Typed LLP, https://www.well-typed.com/ > > Registered in England and Wales, OC335890 > 118 Wymering Mansions, Wymering Road, London, W9 2NF, England > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Fri Jul 26 10:54:35 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 26 Jul 2019 11:54:35 +0100 Subject: Try haskell-ide-engine on GHC! In-Reply-To: References: <9775541e-445f-2695-f481-41703e240bf0@well-typed.com> Message-ID: You can also try placing the following `hie.yaml` file in the `hadrian` subdirectory so that it works when developing hadrian. ``` cradle: {cabal: {component: "exe:hadrian"}} ``` On Fri, Jul 26, 2019 at 11:31 AM Matthew Pickering wrote: > > I noticed myself that things start going quite badly when you start > opening files which are not part of the main ghc library. The > workaround to this is to place a `hie.yaml` file in a subdirectory so > indicate that a certain subdirectory shouldn't use the same settings > as compiler. For example, I placed this cradle in the `default > > ``` > cradle: {default} > ``` > > I should probably add a `cradle: [none}` option as well just to > disable HIE on certain files, it will never be able to work loading > some tests as they require a HEAD version of GHC. > > In the future I would like to teach hadrian to be able to load any GHC > component built in stage1 into GHCi, it should not be much work, but I > haven't got around to it yet. > > On Fri, Jul 26, 2019 at 11:24 AM Alp Mestanogullari wrote: > > > > Maybe we can just remove those lines now. :-) > > > > On 26/07/2019 11:36, Sebastian Graf wrote: > > > > Hey all, > > > > What can I say, after few hours of on and off tinkering I got it to work! > > The hover information is incredibly helpful, as is jump to definition. It works even in modules with type and name errors! > > The error information not so much (yet), at least not compared to the shorter feedback loop of using ghcid. > > Haven't used completions in anger yet, but it works quite well when fooling around with it. > > > > Great work, Zubin and Matthew! :) > > > > As to my setup: I'm using VSCode Remote, so the language server will run on my build VM which VSCode communicates with via SSH. > > I'm using nix+home-manager to manage my configuration over there, so I had to wrap the hie executable with the following script: > > > > #! /usr/bin/env bash > > . /etc/profile.d/nix.sh > > nix-shell --pure /path/to/ghc.nix/ --run /path/to/haskell-ide-engine/dist-newstyle/build/x86_64-linux/ghc-8.6.4/haskell-ide-engine-1.0.0.0/x/hie/build/hie/hie > > > > Also the shellHook echo output from ghc.nix confuses the language server protocol, so be sure to delete those 4 lines from ghc.nix/default.nix. > > > > It takes quite a while to initialise the first time around. Be sure to look at the output of the alanz.vscode-hie-server extension to see if there's any progress being made. > > Can only encourage you to try this out! > > > > Best, > > Sebastian > > > > > > Am Do., 25. 
Juli 2019 um 12:21 Uhr schrieb Matthew Pickering : > >> > >> Hi all, > >> > >> As some of you know I have been working on getting haskell-ide-engine > >> working on GHC for the last few months. Perhaps now the branch is in a > >> usable state where people can try it and report issues. All the basic > >> features such as, hover, completion, error reporting, go to definition > >> etc should work well. I suspect this will be enough for most > >> developers. > >> > >> I have compiled a list of instructions about how to try out the branch. > >> > >> https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793 > >> > >> In the last few weeks Zubin has been a great help finishing some parts > >> of the patch that I lost steam for and given it a much better chance > >> of getting merged into the main repo before the end of the year. > >> > >> Cheers, > >> > >> Matt > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > > Alp Mestanogullari, Haskell Consultant > > Well-Typed LLP, https://www.well-typed.com/ > > > > Registered in England and Wales, OC335890 > > 118 Wymering Mansions, Wymering Road, London, W9 2NF, England > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From davide at well-typed.com Fri Jul 26 15:18:45 2019 From: davide at well-typed.com (David Eichmann) Date: Fri, 26 Jul 2019 16:18:45 +0100 Subject: Try haskell-ide-engine on GHC! In-Reply-To: References: Message-ID: <607d20b4-00e3-ea3b-517b-0b683489512d@well-typed.com> Wow, Great job, Matt and Zubin! I've managed to get this setup without issue. I'll definitely be using this in the future. - David E On 7/25/19 12:21 PM, Matthew Pickering wrote: > Hi all, > > As some of you know I have been working on getting haskell-ide-engine > working on GHC for the last few months. Perhaps now the branch is in a > usable state where people can try it and report issues. All the basic > features such as, hover, completion, error reporting, go to definition > etc should work well. I suspect this will be enough for most > developers. > > I have compiled a list of instructions about how to try out the branch. > > https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793 > > In the last few weeks Zubin has been a great help finishing some parts > of the patch that I lost steam for and given it a much better chance > of getting merged into the main repo before the end of the year. > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England From carter.schonwald at gmail.com Fri Jul 26 18:18:54 2019 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 26 Jul 2019 14:18:54 -0400 Subject: can't checkout ghc 8.6 branch correctly -- something wrong with transformers mirror repo? Message-ID: Hey everyone, whats wrong with the 8.6 branch? 
when i do a fresh clone, i wind up with this error:

$ git submodule update --init --recursive
error: Server does not allow request for unadvertised object def8c55d0c47c1c40de985d83f052f3659b40cfd
Fetched in submodule path 'libraries/transformers', but it did not contain def8c55d0c47c1c40de985d83f052f3659b40cfd. Direct fetching of that commit failed.
14:17:43 ~/D/r/ghc-8.6.5-series (ghc-8.6|✚6) $

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From carter.schonwald at gmail.com Fri Jul 26 18:23:05 2019
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Fri, 26 Jul 2019 14:23:05 -0400
Subject: can't checkout ghc 8.6 branch correctly -- something wrong with transformers mirror repo?
In-Reply-To: 
References: 
Message-ID: 

git clone --recursive git://git.haskell.org/ghc.git ghc-8.6.5-series -b ghc-8.6
--- not using gitlab to clone seems to be the culprit ..

On Fri, Jul 26, 2019 at 2:18 PM Carter Schonwald wrote:
> Hey everyone, whats wrong with the 8.6 branch?
>
> when i do a fresh clone, i wind up with this error :
> $
> git submodule update --init --recursive
> error: Server does not allow request for unadvertised object
> def8c55d0c47c1c40de985d83f052f3659b40cfd
> Fetched in submodule path 'libraries/transformers', but it did not contain
> def8c55d0c47c1c40de985d83f052f3659b40cfd. Direct fetching of that commit
> failed.
> 14:17:43 ~/D/r/ghc-8.6.5-series (ghc-8.6|✚6) $

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From carter.schonwald at gmail.com Sat Jul 27 19:57:21 2019
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Sat, 27 Jul 2019 15:57:21 -0400
Subject: Most recent happy and Alex releases can’t build ghc 8.6.x
Message-ID: 

Hey everyone: Is this a known issue / deliberate breaking change / other?
This is also the same Alex and happy that’s needed to build current ghc master.

Thx!
-Carter

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sam.halliday at gmail.com Sat Jul 27 21:14:07 2019
From: sam.halliday at gmail.com (Sam Halliday)
Date: Sat, 27 Jul 2019 22:14:07 +0100
Subject: api to access .hi files
Message-ID: <8736irnpsw.fsf@gmail.com>

Hello all,

I'd like to learn how to use the ghc api programmatically, but I am
finding the haddocks (on hackage) to be a bit overwhelming.

Could somebody please help me out by pointing me in the direction of the
parts of the haddocks that are relevant to accessing information from .hi
files that are available in the current build environment? In particular,
I'm mostly interested in gathering information about symbols and their
type signatures.

As a first exercise: given a module+import section for a haskell source
file, I want to find out which symbols (and their types) are available.
Like :browse in ghci, but programmatically.

PS: I'm aware that the .hie format is up and coming. I'm very excited by
this! But I'm going to be using ghc-8.4.x and ghc-8.6.x for the
foreseeable future, so I am mostly interested in what they have to offer.

--
Best regards,
Sam

-------------- next part --------------
A non-text attachment was scrubbed...
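A minimal sketch of the kind of program Sam describes: list a module's
exports together with their types, much like `:browse` in ghci. This is
only a starting point, assuming GHC 8.6 plus the `ghc-paths` package, with
error handling and output polish omitted; the entry points used
(`runGhc`, `findModule`, `getModuleInfo`, `modInfoExports`,
`modInfoLookupName`) all live in the top-level `GHC` module.

```haskell
module Main where

import GHC
import GHC.Paths (libdir)                -- from the ghc-paths package
import qualified Id                      -- for Id.idType
import Outputable (ppr, showSDoc, (<+>), dcolon)
import Control.Monad.IO.Class (liftIO)
import Data.Maybe (catMaybes)

-- Print the exports of Data.List, with a type signature for each Id,
-- reading the installed package's interface files through the GHC API.
main :: IO ()
main = runGhc (Just libdir) $ do
  dflags0 <- getSessionDynFlags
  _ <- setSessionDynFlags dflags0        -- initialises the package state
  dflags <- getSessionDynFlags
  mdl <- findModule (mkModuleName "Data.List") Nothing
  mb_info <- getModuleInfo mdl
  case mb_info of
    Nothing -> liftIO (putStrLn "no module info available")
    Just info -> do
      things <- catMaybes <$> mapM (modInfoLookupName info) (modInfoExports info)
      let render (AnId i) = ppr i <+> dcolon <+> ppr (Id.idType i)
          render other    = ppr other
      liftIO $ mapM_ (putStrLn . showSDoc dflags . render) things
```

Swapping `"Data.List"` for a module of your own project needs a little
more setup (setting targets and calling `load LoadAllTargets` first), but
the overall shape stays the same.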
Name: signature.asc Type: application/pgp-signature Size: 194 bytes Desc: not available URL: From joan.karadimov at gmail.com Mon Jul 29 12:35:37 2019 From: joan.karadimov at gmail.com (Joan Karadimov) Date: Mon, 29 Jul 2019 15:35:37 +0300 Subject: Invalid link in the wiki page "Building GHC on Windows" Message-ID: Inside this wiki page: https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/windows ... there is a link to the latest cabal release. The link is: https://www.haskell.org/cabal/release/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip That link is not valid. It should be something like: https://downloads.haskell.org/cabal/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip The latter link is taken from https://www.haskell.org/cabal/download.html. -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide at well-typed.com Tue Jul 30 17:58:29 2019 From: davide at well-typed.com (David Eichmann) Date: Tue, 30 Jul 2019 18:58:29 +0100 Subject: Extended Dependency Generation Proposal Message-ID: Hello GHC Developers, I've recently been working on a proposal (found here [1]) for "Extended Dependency Generation". This new feature takes the form of a new build option/mode that outputs comprehensive build dependencies for building Haskell modules. This allows external build tools, such as cabal-install, to implement correct incremental builds with recompilation avoidance using GHC's one shot mode to compile individual modules. All input is appreciated. It would be particularly helpful to hear from the Cabal, Stack, and Shake communities. Looking forward to hearing your comments, David Eichmann [1] https://github.com/ghc-proposals/ghc-proposals/pull/245 -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From dxld at darkboxed.org Tue Jul 30 18:22:42 2019 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Tue, 30 Jul 2019 20:22:42 +0200 Subject: Extended Dependency Generation Proposal In-Reply-To: References: Message-ID: <20190730182242.GB7243@darkboxed.org> Hi, from the proposal it sounds like you are planning to only extend the single-shot mode with the new options, is that right? I think `ghc --make` could also benefit from being able to communicate non-module graph dependencies such as `addDependentFile` and CPP #include to build-tools, no? I've always been annoyed by the fact that if such dependencies change cabal will not consider rebuilding. --Daniel On Tue, Jul 30, 2019 at 06:58:29PM +0100, David Eichmann wrote: > Hello GHC Developers, > > > I've recently been working on a proposal (found here > [1]) for "Extended > Dependency Generation". This new feature takes the form of a new build > option/mode that outputs comprehensive build dependencies for building > Haskell modules. This allows external build tools, such as cabal-install, to > implement correct incremental builds with recompilation avoidance using > GHC's one shot mode to compile individual modules. > > All input is appreciated. It would be particularly helpful to hear from the > Cabal, Stack, and Shake communities. 
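To make Daniel's point above concrete: a Template Haskell splice can pull
an arbitrary file into the build and register it with `addDependentFile`,
yet that dependency is invisible to cabal-install and stack today. The
following is a hedged illustration only; the `config/banner.txt` path is
made up for the example.

```haskell
{-# LANGUAGE TemplateHaskell #-}
module Banner (banner) where

import Language.Haskell.TH (runIO, stringE)
import Language.Haskell.TH.Syntax (addDependentFile)

-- The contents of config/banner.txt are baked in at compile time.
-- GHC's own recompilation checker records the dependency (via
-- addDependentFile), but the build tool driving GHC does not see it,
-- so editing the file will not by itself trigger a rebuild of this module.
banner :: String
banner = $(do
  let path = "config/banner.txt"   -- hypothetical input file
  addDependentFile path
  contents <- runIO (readFile path)
  stringE contents)
```

GHC's existing `ghc -M` output would not mention `config/banner.txt`
either; it only covers the module import graph, which is exactly the gap
that a richer dependency output, as proposed, is meant to close.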
> > > Looking forward to hearing your comments, > > David Eichmann > > > [1] https://github.com/ghc-proposals/ghc-proposals/pull/245 > > -- > David Eichmann, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com > > Registered in England & Wales, OC335890 > 118 Wymering Mansions, Wymering Road, London W9 2NF, England > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From davide at well-typed.com Tue Jul 30 18:42:08 2019 From: davide at well-typed.com (David Eichmann) Date: Tue, 30 Jul 2019 19:42:08 +0100 Subject: Extended Dependency Generation Proposal In-Reply-To: <20190730182242.GB7243@darkboxed.org> References: <20190730182242.GB7243@darkboxed.org> Message-ID: <26e7fd79-cc38-8efe-1b4e-fb56428341eb@well-typed.com> Hi Daniel, While the proposal is aimed at improving the single-shot mode use case, I don't see a reason why this wouldn't work in make mode: GHC would report dependencies of all modules being built. Is there a use case you have in mind where that would be useful? - David On 7/30/19 7:22 PM, Daniel Gröber wrote: > Hi, > > from the proposal it sounds like you are planning to only extend the > single-shot mode with the new options, is that right? > > I think `ghc --make` could also benefit from being able to communicate > non-module graph dependencies such as `addDependentFile` and CPP > #include to build-tools, no? > > I've always been annoyed by the fact that if such dependencies change > cabal will not consider rebuilding. > > --Daniel > > On Tue, Jul 30, 2019 at 06:58:29PM +0100, David Eichmann wrote: >> Hello GHC Developers, >> >> >> I've recently been working on a proposal (found here >> [1]) for "Extended >> Dependency Generation". This new feature takes the form of a new build >> option/mode that outputs comprehensive build dependencies for building >> Haskell modules. This allows external build tools, such as cabal-install, to >> implement correct incremental builds with recompilation avoidance using >> GHC's one shot mode to compile individual modules. >> >> All input is appreciated. It would be particularly helpful to hear from the >> Cabal, Stack, and Shake communities. 
>> >> >> Looking forward to hearing your comments, >> >> David Eichmann >> >> >> [1] https://github.com/ghc-proposals/ghc-proposals/pull/245 >> >> -- >> David Eichmann, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com >> >> Registered in England & Wales, OC335890 >> 118 Wymering Mansions, Wymering Road, London W9 2NF, England >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England From dxld at darkboxed.org Tue Jul 30 19:58:28 2019 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Tue, 30 Jul 2019 21:58:28 +0200 Subject: Extended Dependency Generation Proposal In-Reply-To: <26e7fd79-cc38-8efe-1b4e-fb56428341eb@well-typed.com> References: <20190730182242.GB7243@darkboxed.org> <26e7fd79-cc38-8efe-1b4e-fb56428341eb@well-typed.com> Message-ID: <20190730195827.GC7243@darkboxed.org> Hi David, On Tue, Jul 30, 2019 at 07:42:08PM +0100, David Eichmann wrote: > While the proposal is aimed at improving the single-shot mode use case, I > don't see a reason why this wouldn't work in make mode: GHC would report > dependencies of all modules being built. Ok cool, that's what I thought. > Is there a use case you have in mind where that would be useful? I'm just a bit sceptical that fully switching to single-shot mode is actually a good idea in all cases so I'd like to keep --make at feature parity. However I do have a particular use-case in mind as well: Instead of having one --make instance per-package I'd like a sort of build-plan wide GHC server process to allow more sharing of in-memory build products. I think other people have been playing with that idea too so it's not exactly new but in the context of Haskell-IDE-Engine this might actually be something that makes a lot of sense to do because we already have what is essentially a GHC server process running the whole time, namely HIE itself. --Daniel From a.pelenitsyn at gmail.com Wed Jul 31 17:05:28 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Wed, 31 Jul 2019 13:05:28 -0400 Subject: Invalid link in the wiki page "Building GHC on Windows" In-Reply-To: References: Message-ID: Hey Joan, Thanks for spotting this! Should be fixed now. Also, wiki is now back to public access for editing (it was closed for technical reasons last several days). So you can fix if anything pops up in the future. -- Best wishes, Artem On Mon, 29 Jul 2019 at 08:36, Joan Karadimov wrote: > Inside this wiki page: > https://gitlab.haskell.org/ghc/ghc/wikis/building/preparation/windows > > ... there is a link to the latest cabal release. The link is: > > https://www.haskell.org/cabal/release/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip > > > That link is not valid. It should be something like: > > https://downloads.haskell.org/cabal/cabal-install-2.4.1.0/cabal-install-2.4.1.0-${arch}-unknown-mingw32.zip > > The latter link is taken from https://www.haskell.org/cabal/download.html. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: