From rae at richarde.dev Tue Mar 2 04:30:06 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 2 Mar 2021 04:30:06 +0000 Subject: GHC 9.1? Message-ID: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Hi devs, I understand that GHC uses the same version numbering system as the Linux kernel did until 2003(*), using odd numbers for unstable "releases" and even ones for stable ones. I have seen this become a point of confusion, as in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in GHC 9.2" "Um, what about 9.1?" Is there a reason to keep this practice? Linux moved away from it 18 years ago and seems to have thrived despite. Giving this convention up on a new first-number change (the change from 8 to 9) seems like a good time. I don't feel strongly about this, at all -- just asking a question that maybe no one has asked in a long time. Richard (*) I actually didn't know that Linux stopped doing this until writing this email, wondering why we needed to tie ourselves to Linux. I coincidentally stopped using Linux full-time (and thus administering my own installation) in 2003, when I graduated from university. From carter.schonwald at gmail.com Tue Mar 2 04:44:12 2021 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 1 Mar 2021 23:44:12 -0500 Subject: GHC 9.1? In-Reply-To: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: It makes determining if a ghc build was a dev build vs a tagged release much easier. Odd == I’m using a dev build, because it reports a version like majormajor.odd.timestamp, right? We still do that with dev/master, right?
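Under that scheme the dev-build slot also orders the way you would hope: Data.Version compares version components numerically, so a date-stamped 9.1.* build sorts below the eventual 9.2 release. A small self-contained sketch (the date stamp here is made up):

```haskell
import Data.Version (Version, makeVersion, showVersion)

-- A hypothetical date-stamped dev build under the odd/even scheme.
devBuild :: Version
devBuild = makeVersion [9, 1, 20210301]

-- The next stable (even) release.
nextRelease :: Version
nextRelease = makeVersion [9, 2]

main :: IO ()
main = do
  putStrLn (showVersion devBuild)  -- prints "9.1.20210301"
  print (devBuild < nextRelease)   -- True: the dev build sorts below 9.2
```

Version comparison here is just lexicographic comparison of the component lists, so [9,1,...] is always below [9,2].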
At some level any versioning notation is a social convention, and this one does have a good advantage of making dev builds apparent while letting things like hackage head have coherent versioning for treating these releases sanely? Otoh. It’s all a social construct. So any approach that helps all relevant communities is always welcome. Though even numbers are nice ;) On Mon, Mar 1, 2021 at 11:30 PM Richard Eisenberg wrote: > Hi devs, > > I understand that GHC uses the same version numbering system as the Linux > kernel did until 2003(*), using odd numbers for unstable "releases" and > even ones for stable ones. I have seen this become a point of confusion, as > in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in > GHC 9.2" "Um, what about 9.1?" > > Is there a reason to keep this practice? Linux moved away from it 18 years > ago and seems to have thrived despite. Giving this convention up on a new > first-number change (the change from 8 to 9) seems like a good time. > > I don't feel strongly about this, at all -- just asking a question that > maybe no one has asked in a long time. > > Richard > > (*) I actually didn't know that Linux stopped doing this until writing > this email, wondering why we needed to tie ourselves to Linux. I > coincidentally stopped using Linux full-time (and thus administering my own > installation) in 2003, when I graduated from university. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Tue Mar 2 07:46:04 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Tue, 2 Mar 2021 08:46:04 +0100 Subject: GHC 9.1? 
In-Reply-To: References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: Hi, I generally would like +0.1 steps, but mostly because it causes less head-scratching for everyone new to Haskell. Basically the same argument as Richard makes. I can't comment on how far head.hackage (or any tool) relies on odd version numbers; I certainly never have. Given that it's all overlays (over which we have complete control), does it really matter anyway? When would we say <=9.1 rather than <=9.2? Shouldn't 9.1 at one point become binary compatible with 9.2, as if it really was "9.2.-1" (according to the PVP, 9.2.0 is actually > 9.2, so that won't work)? I think there are multiple ways in which we could avoid using 9.1 as the namespace for "somewhere between 9.0 and 9.2 exclusively". We have alpha releases, so why don't we name it 9.1.nightly? > majormajor.odd.time stamp TBH, I found the fact that the *configure* date (I think?) is embedded in the version rather annoying. I sometimes have two checkouts configured at different dates but branching off from the same base commit, so I'm pretty sure that interface files are compatible. Yet when I try to run one compiler on the package database of the other (because I might have copied a compiler invocation from stdout that contained an absolute path), I get an error due to the interface file version mismatch. I'd rather have a crash or undefined behavior than a check based on the configure date, especially since I'm just debugging anyway. I do get why we want to embed it for release management purposes, though. Cheers, Sebastian On Tue, 2 Mar 2021 at 05:45, Carter Schonwald < carter.schonwald at gmail.com> wrote: > It makes determining if a ghc build was a dev build vs a tagged release > much easier. Odd == I’m using a dev build because it reports a version > like majormajor.odd.time stamp right ? — we still donthat with dev /master > right?
> > At some level any versioning notation is a social convention, and this one > does have a good advantage of making dev builds apparent while letting > things like hackage head have coherent versioning for treating these > releases sanely? > > Otoh. It’s all a social construct. So any approach that helps all relevant > communities is always welcome. Though even numbers are nice ;) > > On Mon, Mar 1, 2021 at 11:30 PM Richard Eisenberg > wrote: > >> Hi devs, >> >> I understand that GHC uses the same version numbering system as the Linux >> kernel did until 2003(*), using odd numbers for unstable "releases" and >> even ones for stable ones. I have seen this become a point of confusion, as >> in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in >> GHC 9.2" "Um, what about 9.1?" >> >> Is there a reason to keep this practice? Linux moved away from it 18 >> years ago and seems to have thrived despite. Giving this convention up on a >> new first-number change (the change from 8 to 9) seems like a good time. >> >> I don't feel strongly about this, at all -- just asking a question that >> maybe no one has asked in a long time. >> >> Richard >> >> (*) I actually didn't know that Linux stopped doing this until writing >> this email, wondering why we needed to tie ourselves to Linux. I >> coincidentally stopped using Linux full-time (and thus administering my own >> installation) in 2003, when I graduated from university. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekmett at gmail.com Tue Mar 2 09:33:57 2021 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 2 Mar 2021 01:33:57 -0800 Subject: GHC 9.1? In-Reply-To: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: In the past I've gained non-zero utility from having the spacer there to allow me to push patches in to allow HEAD builds while features are still in flux. Some of those in-flux changes -- to my mild chagrin -- made it out to hackage, but were handled robustly because I wasn't claiming in the code that it worked on the next major release of GHC. Admittedly this was in the before-times, when it was much harder to vendor specific versions of packages for testing. Now with stack.yaml and cabal.project addressing that detail it is a much reduced concern. That isn't to say there is zero cost to losing every other version number, but if we want to allow GHC versions and PVP versions to mentally "fit in the same type" the current practice has the benefit that it doesn't require us either doing something like bolting tags back into Data.Version to handle the "x.y.nightly" case, or forcing everyone to move to the real next release the moment the new compiler ships with a bit of a jump, or generally forcing more string-processing nonsense into build systems. Right now version numbers go up and you can use some numerical shenanigans to approximate them with a single integer for easy ifdefs.
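The single-integer trick referred to here is (I believe) the standard __GLASGOW_HASKELL__ CPP macro, which packs the first two version components into one number: 708 for GHC 7.8, 810 for 8.10, 900 for 9.0. A minimal sketch:

```haskell
{-# LANGUAGE CPP #-}
module VersionGuard (compilerLine) where

-- __GLASGOW_HASKELL__ is 100 * major1 + major2, so "9.0 or later"
-- is a single integer comparison rather than string processing.
compilerLine :: String
#if __GLASGOW_HASKELL__ >= 900
compilerLine = "built with GHC 9.0 or later"
#else
compilerLine = "built with a pre-9.0 GHC"
#endif
```

Because dev builds report the next odd minor version, they fall between the two releases under the same numeric comparison, with no special casing needed.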
I'm ever so slightly against recoloring the bikeshed on the way we manage the GHC version number, just because I know my tooling is robust around what we have, and I don't see marked improvement in the status quo being gained, while I do foresee a bit of complication around the consumption of ghc as a tool if we change -Edward On Mon, Mar 1, 2021 at 8:30 PM Richard Eisenberg wrote: > Hi devs, > > I understand that GHC uses the same version numbering system as the Linux > kernel did until 2003(*), using odd numbers for unstable "releases" and > even ones for stable ones. I have seen this become a point of confusion, as > in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in > GHC 9.2" "Um, what about 9.1?" > > Is there a reason to keep this practice? Linux moved away from it 18 years > ago and seems to have thrived despite. Giving this convention up on a new > first-number change (the change from 8 to 9) seems like a good time. > > I don't feel strongly about this, at all -- just asking a question that > maybe no one has asked in a long time. > > Richard > > (*) I actually didn't know that Linux stopped doing this until writing > this email, wondering why we needed to tie ourselves to Linux. I > coincidentally stopped using Linux full-time (and thus administering my own > installation) in 2003, when I graduated from university. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Mar 2 12:18:06 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Mar 2021 07:18:06 -0500 Subject: GHC 9.1? 
In-Reply-To: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: <87eegxwxzf.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > I understand that GHC uses the same version numbering system as the > Linux kernel did until 2003(*), using odd numbers for unstable > "releases" and even ones for stable ones. I have seen this become a > point of confusion, as in: "Quick Look just missed the cutoff for GHC > 9.0, so it will be out in GHC 9.2" "Um, what about 9.1?" > > Is there a reason to keep this practice? Linux moved away from it 18 > years ago and seems to have thrived despite. Giving this convention up > on a new first-number change (the change from 8 to 9) seems like a > good time. > > I don't feel strongly about this, at all -- just asking a question > that maybe no one has asked in a long time. > At this point there isn't any strong reason for either design. However, it also never really occurred to me that our convention could be confusing. I do believe that there is value in having a clear versioning scheme for non-released compilers. However, I can't think of anything that would break if we, for instance, used 9.0.99 instead of 9.1 (other than being a bit ugly). The strongest argument I can put forth for the status quo is that it eases adapting GHC API users prior to a GHC release. Specifically, head.hackage maintains the policy that any patch be buildable with both the current GHC major release and GHC's `master` branch. In order to achieve this, it is often necessary to write CPP guards that condition on the `ghc` library version. The idiomatic way to accomplish this is Cabal's MIN_VERSION_ghc macro, which only allows you to predicate on the two most-significant version numbers (since the PVP dictates that breaking changes should change the second version component, at least).
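A typical guard of that shape might look like this; the module name and strings below are illustrative only, not taken from an actual head.hackage patch:

```haskell
{-# LANGUAGE CPP #-}
-- Sketch of the head.hackage idiom: condition on the version of the
-- `ghc` library via Cabal's generated MIN_VERSION_ghc macro.
module GhcFlavour (ghcFlavour) where

ghcFlavour :: String
#if MIN_VERSION_ghc(9,1,0)
-- 9.1.* is GHC master under the odd/even scheme
ghcFlavour = "building against GHC master (post-9.0)"
#else
ghcFlavour = "building against the released GHC 9.0"
#endif
```

The macro takes exactly three components, which is why the odd middle number matters: it gives master a version that MIN_VERSION_ghc can distinguish from both the previous and the next release.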
Using MIN_VERSION_ghc would become impossible in a three-component versioning scheme. To work around this, we would either have to use a hack like __GLASGOW_HASKELL_FULL_VERSION__ or drop the two-version buildability requirement on head.hackage patches. I would really rather not do the latter, as it would severely hamper the usability of the patch-set for differential performance testing. The former seems unfortunate since it means more work to turn a head.hackage patch into something upstreamable. Now that I've written this down, I would place my vote for retaining even-odd numbering. Not only does this have historical precedent in its favor, but it also has at least one clear technical advantage. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Mar 2 12:26:34 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 02 Mar 2021 07:26:34 -0500 Subject: GHC 9.1? In-Reply-To: References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: <87blc1wxl1.fsf@smart-cactus.org> Sebastian Graf writes: > Hi, > > I generally would like +0.1 steps, but mostly because it causes less > head-scratching to everyone new to Haskell. Basically the same argument as > Richard says. > > I can't comment on how far head.hackage (or any tool relies) on odd version > numbers, I certainly never have. Given that it's all overlays (over which > we have complete control), does it really matter anyway? When would we say > <=9.1 rather than <=9.2? Shouldn't 9.1 at one point become binary > compatible with 9.2, as if it really was "9.2.-1" (according to the PVP, > 9.2.0 is actually > 9.2, so that won't work)? I think there are multiple > ways in which we could avoid using 9.1 as the namespace for "somewhere > between 9.0 and 9.2 exclusively".
We have alpha releases, so why don't we > name it 9.1.nightly? > One reason is that our versioning data model (as captured by Data.Version) now only admits numeric version components. Textual tags were previously admitted but deprecated in #2496 as there is no clear ordering for such versions. >> majormajor.odd.time stamp > > TBH, I found the fact that the *configure* date (I think?) is embedded in > the version rather annoying. I sometimes have two checkouts configured at > different dates but branching off from the same base commit, so I'm pretty > sure that interface files are compatible. Yet when I try to run one > compiler on the package database of the other (because I might have copied > a compiler invocation from stdout that contained an absolute path), I get > an error due to the interface file version mismatch. I'd rather have a > crash or undefined behavior than a check based on the configure date, > especially since I'm just debugging anyway. I disagree here. Personally, if I do something non-sensical I would much rather get predictable version error than be sent off on a wild-goose chase debugging ghosts. Fixing an incorrect command-line takes a few seconds; finding a bizarre runtime crash due to subtly wrong ABI may take days. This is why I generally plop any test command-line of non-trivial length into a shell script; it makes safely switching between compilers much easier. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From carter.schonwald at gmail.com Tue Mar 2 13:04:02 2021 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 2 Mar 2021 08:04:02 -0500 Subject: GHC 9.1? 
In-Reply-To: <87blc1wxl1.fsf@smart-cactus.org> References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> <87blc1wxl1.fsf@smart-cactus.org> Message-ID: As Ben says, a lot of our tools have made choices over time that make the odd vs even convention super easy to support for compiler and library devs doing experiments with ghc builds that are based off master. No matter what, any naming scheme is a social construct for communicating with tools and humans. And I definitely agree that “how to interpret a ghc version number” is a useful thing to explain more readily; still, any convention that changes the way cabal/library authors can experiment to support and evaluate how features in an unreleased ghc work for them has a nontrivial engineering cost footprint. On Tue, Mar 2, 2021 at 7:26 AM Ben Gamari wrote: > Sebastian Graf writes: > > > Hi, > > > > I generally would like +0.1 steps, but mostly because it causes less > > head-scratching to everyone new to Haskell. Basically the same argument > as > > Richard says. > > > > I can't comment on how far head.hackage (or any tool relies) on odd > version > > numbers, I certainly never have. Given that it's all overlays (over which > > we have complete control), does it really matter anyway? When would we > say > > <=9.1 rather than <=9.2? Shouldn't 9.1 at one point become binary > > compatible with 9.2, as if it really was "9.2.-1" (according to the PVP, > > 9.2.0 is actually > 9.2, so that won't work)? I think there are multiple > > ways in which we could avoid using 9.1 as the namespace for "somewhere > > between 9.0 and 9.2 exclusively". We have alpha releases, so why don't we > > name it 9.1.nightly? > > > One reason is that our versioning data model (as captured by Data.Version) > now only admits numeric version components. Textual tags were previously > admitted but deprecated in #2496 as there is no clear ordering for such > versions.
> > > >> majormajor.odd.time stamp > > > > TBH, I found the fact that the *configure* date (I think?) is embedded in > > the version rather annoying. I sometimes have two checkouts configured at > > different dates but branching off from the same base commit, so I'm > pretty > > sure that interface files are compatible. Yet when I try to run one > > compiler on the package database of the other (because I might have > copied > > a compiler invocation from stdout that contained an absolute path), I get > > an error due to the interface file version mismatch. I'd rather have a > > crash or undefined behavior than a check based on the configure date, > > especially since I'm just debugging anyway. > > I disagree here. Personally, if I do something non-sensical I would much > rather get predictable version error than be sent off on a wild-goose chase > debugging ghosts. Fixing an incorrect command-line takes a few seconds; > finding a bizarre runtime crash due to subtly wrong ABI may take days. > This is why I generally plop any test command-line of non-trivial length > into a shell script; it makes safely switching between compilers much > easier. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Mar 2 15:36:27 2021 From: lonetiger at gmail.com (Phyx) Date: Tue, 2 Mar 2021 15:36:27 +0000 Subject: GHC 9.1? In-Reply-To: References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: I am also against dropping the odd/even versioning scheme. My objections are similar to what Edward mentioned, in that adding "junk" at the end of the build number is problematic for packagers of the toolchain, where the packaging has its own way to mark something pre-release. If GHC were to invent its own thing, especially if it's alphanumeric, this would be a huge pain for no real benefit.
A beginner can quickly see on Wikipedia or other places that the compiler only does even-numbered releases, but the change has a lot of wide-reaching implications. Kind regards, Tamar Sent from my Mobile On Tue, Mar 2, 2021, 09:34 Edward Kmett wrote: > In the past I've gained non-zero utility from having the spacer there to > allow me to push patches in to allow HEAD builds while features are still > in flux. Some of those in flux changes -- to my mild chagrin -- made it out > to hackage, but were handled robustly because I wasn't claiming in the code > that it worked on the next major release of GHC. Admittedly this was in the > before-times, when it was much harder to vendor specific versions of > packages for testing. Now with stack.yaml and cabal.project addressing that > detail it is much reduced concern.
> > I'm ever so slightly against recoloring the bikeshed on the way we manage > the GHC version number, just because I know my tooling is robust around > what we have, and I don't see marked improvement in the status quo being > gained, while I do foresee a bit of complication around the consumption of > ghc as a tool if we change > > -Edward > > On Mon, Mar 1, 2021 at 8:30 PM Richard Eisenberg wrote: > >> Hi devs, >> >> I understand that GHC uses the same version numbering system as the Linux >> kernel did until 2003(*), using odd numbers for unstable "releases" and >> even ones for stable ones. I have seen this become a point of confusion, as >> in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in >> GHC 9.2" "Um, what about 9.1?" >> >> Is there a reason to keep this practice? Linux moved away from it 18 >> years ago and seems to have thrived despite. Giving this convention up on a >> new first-number change (the change from 8 to 9) seems like a good time. >> >> I don't feel strongly about this, at all -- just asking a question that >> maybe no one has asked in a long time. >> >> Richard >> >> (*) I actually didn't know that Linux stopped doing this until writing >> this email, wondering why we needed to tie ourselves to Linux. I >> coincidentally stopped using Linux full-time (and thus administering my own >> installation) in 2003, when I graduated from university. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Tue Mar 2 16:51:14 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 2 Mar 2021 16:51:14 +0000 Subject: GHC 9.1? 
In-Reply-To: References: <010f0177f1334c3d-7b781909-8479-4977-a9a7-a03549e534d9-000000@us-east-2.amazonses.com> Message-ID: <010f0177f3d9d112-28a0db36-2e4f-437c-a5fd-4edccbb005f5-000000@us-east-2.amazonses.com> Thanks for the input, all. I'm now convinced that retaining the current odd/even scheme has concrete benefits and am happy to continue doing it. Richard > On Mar 2, 2021, at 10:36 AM, Phyx wrote: > > I am also against not using the odd/even versioning scheme. > > My objections are similar to what Edward mentioned in that adding "junks" at the end of the build number is problematic for packagers of the toolchain where the packaging has its own way to mark something pre-release. > > If GHC were to invent its own thing, especially if it's alpha numeric this would be a huge pain for no real benefit. > > A beginner can quickly see on Wikipedia or other places that the compiler only does even numbered releases, but the changes has a lot of wide spreading implications. > > Kind regards, > Tamar > > Sent from my Mobile > > On Tue, Mar 2, 2021, 09:34 Edward Kmett > wrote: > In the past I've gained non-zero utility from having the spacer there to allow me to push patches in to allow HEAD builds while features are still in flux. Some of those in flux changes -- to my mild chagrin -- made it out to hackage, but were handled robustly because I wasn't claiming in the code that it worked on the next major release of GHC. Admittedly this was in the before-times, when it was much harder to vendor specific versions of packages for testing. Now with stack.yaml and cabal.project addressing that detail it is much reduced concern. 
> > That isn't to say there is zero cost to losing every other version number, but if we want to allow GHC versions and PVP versions to mentally "fit in the same type" the current practice has the benefit that it doesn't require us either doing something like bolting tags back into Data.Version to handle the "x.y.nightly" or forcing everyone to move to the real next release the moment the new compiler ships with a bunch of a jump, or generally forcing more string-processing nonsense into build systems. Right now version numbers go up and you can use some numerical shenanigans to approximate them with a single integer for easy ifdefs. > > I'm ever so slightly against recoloring the bikeshed on the way we manage the GHC version number, just because I know my tooling is robust around what we have, and I don't see marked improvement in the status quo being gained, while I do foresee a bit of complication around the consumption of ghc as a tool if we change > > -Edward > > On Mon, Mar 1, 2021 at 8:30 PM Richard Eisenberg > wrote: > Hi devs, > > I understand that GHC uses the same version numbering system as the Linux kernel did until 2003(*), using odd numbers for unstable "releases" and even ones for stable ones. I have seen this become a point of confusion, as in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in GHC 9.2" "Um, what about 9.1?" > > Is there a reason to keep this practice? Linux moved away from it 18 years ago and seems to have thrived despite. Giving this convention up on a new first-number change (the change from 8 to 9) seems like a good time. > > I don't feel strongly about this, at all -- just asking a question that maybe no one has asked in a long time. > > Richard > > (*) I actually didn't know that Linux stopped doing this until writing this email, wondering why we needed to tie ourselves to Linux. 
I coincidentally stopped using Linux full-time (and thus administering my own installation) in 2003, when I graduated from university. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From javran.c at gmail.com Tue Mar 2 21:53:09 2021 From: javran.c at gmail.com (Javran Cheng) Date: Tue, 2 Mar 2021 13:53:09 -0800 Subject: Newcomer looking for mentorship on a HPC / Coverage related issue Message-ID: Hi all, I'm a newcomer (not entirely new, as I did have some contributions a few years back, but the workflow seems to have changed a lot) and am working on https://gitlab.haskell.org/ghc/ghc/-/issues/15932. I'm looking for some (informal) mentorship on that issue - I think I know what the problem is, but I need someone with more experience to help develop a concrete plan before working on a fix, as this might involve two parts of the GHC codebase: HPC source code and GHC/HsToCore/Coverage. I tried to ask for comments and other related bits on IRC a few months back but that didn't attract much attention. Now that I'm in the mood of hacking GHC again, I figure finding a mentor might work better. Thanks! Javran -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed Mar 3 00:13:12 2021 From: ben at well-typed.com (Ben Gamari) Date: Tue, 02 Mar 2021 19:13:12 -0500 Subject: Plan for GHC 9.2 In-Reply-To: <87v9b7slvf.fsf@smart-cactus.org> References: <87v9b7slvf.fsf@smart-cactus.org> Message-ID: <874khtw0vd.fsf@smart-cactus.org> Ben Gamari writes: > tl;dr. Provisional release schedule for 9.2 enclosed.
Please discuss, > especially if you have something you would like merged for 9.2.1. > > Hello all, > Hi all, With the planned fork deadline looming, I thought now would be a good time for a bit of a status update. As you likely realized, various CI breakages have resulted in quite a bit of lost merge time over the past two weeks. As a result, we currently have many, but far from all, of the patches slated for 9.2 in the tree. To avoid having a repeat of the very backport-heavy 9.0 series, I am going to bump back the fork date at least another week to allow the remaining large bits of work to make it into the tree. In particular what remains is: * Finishing the rework of sized integer primops (#19026, John Ericson) * Bumping bytestring to 0.11 (#19091, Ben) * Merge of ghc-exactprint into GHC? (Alan Zimmerman, Henry) * -XGHC2021 (Joachim) * Bytecode-from-STG (Luite) * Record dot syntax (Shayne) * template-haskell putDoc/getDoc (!3330, Luke Lau) * UnliftedDataTypes (!2218, Sebastian Graf) * ARM NCG backend and further stabilize Apple ARM support? (Moritz) If you see a project of yours above then do let me know soon if you have doubts whether you can get it into a mergeable state this week. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Wed Mar 3 05:16:54 2021 From: ben at well-typed.com (Ben Gamari) Date: Wed, 03 Mar 2021 00:16:54 -0500 Subject: Newcomer looking for mentorship on a HPC / Coverage related issue In-Reply-To: References: Message-ID: <871rcwx1do.fsf@smart-cactus.org> Javran Cheng writes: > Hi all, > Hi Javran, > I'm a newcomer (not entirely new as I did have some contribution few years > back but the workflow seems to have changed a lot) and is working on > https://gitlab.haskell.org/ghc/ghc/-/issues/15932. 
> I'm looking for some (informal) mentorship on that issue - I think I know what > the problem is , but I need > someone with more experience to help developing a concrete plan before > working on a fix, as this might involve two parts in GHC codebase: HPC > source code and GHC/HsToCore/Coverage. Thanks for the email! I would be happy to offer what help I can. I'll pick up on the ticket and, if you would like, we could schedule a meeting to discuss details. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gergo at erdi.hu Thu Mar 4 11:55:10 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Thu, 4 Mar 2021 19:55:10 +0800 (+08) Subject: What changed between GHC 8.8 and 8.10 that could cause this? Message-ID: Hi, I'm trying to figure out a Clash problem and managed to track it down to a GHC upgrade; specifically, a given Clash version, when based on GHC 8.8, has no problem synthesizing one module after another from one process; but the same Clash version with GHC 8.10 fails with link-time errors on the second compilation. The details are at https://github.com/clash-lang/clash-compiler/issues/1686 but for now I'm just hoping that some lightbulb will go off for someone if some handling of internal state has changed in GHC that could mean that the symbol tables of loaded modules could persist between GHC invocations from the same process. So, does this ring a bell for anyone? Thanks, Gergo From julian at leviston.net Thu Mar 4 12:42:20 2021 From: julian at leviston.net (Julian Leviston) Date: Thu, 4 Mar 2021 23:42:20 +1100 Subject: What changed between GHC 8.8 and 8.10 that could cause this? 
In-Reply-To: References: Message-ID: 

Hi,

I don’t know enough about what Clash does to comment really, but it sounds like it’s to do with my work on enabling multiple linker instances in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe reading through that or the plan I outlined at https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might help, though I’m not sure.

Strange, though, as this work was to isolate state in GHC — to change it from using a global IORef to a per-process MVar. But it definitely did change the way state is handled, so it might be related to these issues somehow?

I realise this isn’t much help, but maybe it points you in a direction where you can begin to understand some more.

Julian

> On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote:
> 
> Hi,
> 
> I'm trying to figure out a Clash problem and managed to track it down to a GHC upgrade; specifically, a given Clash version, when based on GHC 8.8, has no problem synthesizing one module after another from one process; but the same Clash version with GHC 8.10 fails with link-time errors on the second compilation.
> 
> The details are at https://github.com/clash-lang/clash-compiler/issues/1686
> but for now I'm just hoping that some lightbulb will go off for someone if some handling of internal state has changed in GHC that could mean that the symbol tables of loaded modules could persist between GHC invocations from the same process.
> 
> So, does this ring a bell for anyone?
> 
> Thanks,
> Gergo
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthewtpickering at gmail.com Thu Mar 4 12:45:00 2021
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Thu, 4 Mar 2021 12:45:00 +0000
Subject: What changed between GHC 8.8 and 8.10 that could cause this?
In-Reply-To: References: Message-ID: Perhaps related to 0dc7985663efa1739aafb480759e2e2e7fca2a36 ? commit 0dc7985663efa1739aafb480759e2e2e7fca2a36 Author: Julian Leviston Date: Sat Feb 2 20:10:51 2019 +1100 Allow for multiple linker instances. Fixes Haskell portion of #3372. On Thu, Mar 4, 2021 at 11:55 AM ÉRDI Gergő wrote: > > Hi, > > I'm trying to figure out a Clash problem and managed to track it down to > a GHC upgrade; specifically, a given Clash version, when based on GHC > 8.8, has no problem synthesizing one module after another from one > process; but the same Clash version with GHC 8.10 fails with link-time > errors on the second compilation. > > The details are at https://github.com/clash-lang/clash-compiler/issues/1686 > but for now I'm just hoping that some lightbulb will go off for someone if > some handling of internal state has changed in GHC that could mean that > the symbol tables of loaded modules could persist between GHC invocations > from the same process. > > So, does this ring a bell for anyone? > > Thanks, > Gergo > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From alan.zimm at gmail.com Sat Mar 6 17:39:43 2021 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 6 Mar 2021 17:39:43 +0000 Subject: GHC Exactprint merge process Message-ID: I have been running a branch in !2418[1] for just over a year to migrate the ghc-exactprint functionality directly into the GHC AST[2], and I am now satisfied that it is able to provide all the same functionality as the original. This is one of the features intended for the impending 9.2.1 release, and it needs to be reviewed to be able to land. But the change is huge, as it mechanically affects most files that interact with the GHC AST. 
So I have split out a precursor !5158 [3] with just the new types that are used to represent the annotations, so it can be a focal point for discussion. It is ready for review, please comment if you have time and interest. Regards Alan [1] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2418 [2] https://gitlab.haskell.org/ghc/ghc/-/issues/17638 [3] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5158 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sat Mar 6 18:59:41 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 06 Mar 2021 13:59:41 -0500 Subject: GHC Exactprint merge process In-Reply-To: References: Message-ID: <87mtvgumzp.fsf@smart-cactus.org> "Alan & Kim Zimmerman" writes: > I have been running a branch in !2418[1] for just over a year to migrate > the ghc-exactprint functionality directly into the GHC AST[2], and I am now > satisfied that it is able to provide all the same functionality as the > original. > > This is one of the features intended for the impending 9.2.1 release, and > it needs to be reviewed to be able to land. But the change is huge, as it > mechanically affects most files that interact with the GHC AST. > > So I have split out a precursor !5158 [3] with just the new types that are > used to represent the annotations, so it can be a focal point for > discussion. > > It is ready for review, please comment if you have time and interest. > Thanks Alan! I'll have a look. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gergo at erdi.hu Sun Mar 7 02:58:50 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Sun, 7 Mar 2021 10:58:50 +0800 (+08) Subject: Inlining of `any @[]` vs `elem @[]` Message-ID: Hi, The inlining behaviour of `any @[]` and `elem @[]` differs in a way that I am not sure is intentional, and it is affecting Clash (see https://github.com/clash-lang/clash-compiler/issues/1691). I would think that if it is a good idea to inline `any` then inlining `elem` would be just as good an idea, or vice versa. However, `any` is defined polymorphically over `Foldable`, via `foldMap` using `foldr`, with all steps between (and `foldr @[]`!) marked as `INLINE`. The result is that if you use `any (x ==) [1, 5, 7]` you get the following beautiful Core: ``` topEntity = \ (x_agAF :: Int) -> case x_agAF of { GHC.Types.I# y_ahao -> case y_ahao of { __DEFAULT -> GHC.Types.False; 1# -> GHC.Types.True; 5# -> GHC.Types.True; 7# -> GHC.Types.True } } ``` As the kids these days would say: *chef's kiss*. `elem`, on the other hand, is a typeclass method of `Foldable`, with a default implementation in terms of `any`, but overridden for lists with the following implementation: ``` GHC.List.elem :: (Eq a) => a -> [a] -> Bool GHC.List.elem _ [] = False GHC.List.elem x (y:ys) = x==y || GHC.List.elem x ys {-# NOINLINE [1] elem #-} {-# RULES "elem/build" forall x (g :: forall b . Eq a => (a -> b -> b) -> b -> b) . 
elem x (build g) = g (\ y r -> (x == y) || r) False #-} ``` This is marked as non-inlineable until phase 1 (so that `elem/build` has a chance of firing), but it seems that when build fusion doesn't apply (since `[1, 5, 7]` is, of course, not built via `build`), no inlining happens AT ALL, even in later phases, so we end up with this: ``` topEntity = \ (x_agAF :: Int) -> GHC.List.elem @ Int GHC.Classes.$fEqInt x_agAF (GHC.Types.: @ Int (GHC.Types.I# 1#) (GHC.Types.: @ Int (GHC.Types.I# 5#) (GHC.Types.: @ Int (GHC.Types.I# 7#) (GHC.Types.[] @ Int)))) ``` So not only does it trip up Clash, it would also result in less efficient code in software when using "normal" GHC. Is this all intentional? Wouldn't it make more sense to mark `GHC.List.elem` as `INLINE [1]` instead of `NOINLINE [1]`, so that any calls remaining after build fusion would be inlined? Thanks, Gergo From gergo at erdi.hu Sun Mar 7 03:02:07 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Sun, 7 Mar 2021 11:02:07 +0800 (+08) Subject: What changed between GHC 8.8 and 8.10 that could cause this? In-Reply-To: References: Message-ID: Thanks Matthew and Julian! Unfortunately, trying out GHC before/after this change didn't turn out to be as easy as I hoped: to do my testing, I need to build a given GHC commit, and then use that via Stack to install ~140 dependencies so that I can then test the problem I have initially seen. And it turns out doing that with a random GHC commit is quite painful because in any given Stackage snapshot there will be packages with which the GHC-bundled libraries are incompatible... 
:/ On Thu, 4 Mar 2021, Julian Leviston wrote: > Hi,I don’t know enough about what Clash does to comment really, but it sounds like > it’s to do with my work on enabling multiple linker instances > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe reading through > that or the plan I outlined at https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might > help, though I’m not sure. > > Strange, though, as this work was to isolate state in GHC — to change it from using a > global IORef to use a per-process MVar . But it definitely did change the way state is > handled, so it might be the related to these issues somehow? > > I realise this isn’t much help, but maybe it points you in a direction where you can > begin to understand some more. > > Julian > > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: > > Hi, > > I'm trying to figure out a Clash  problem and managed to track it down to a GHC > upgrade; specifically, a given Clash version, when based on GHC 8.8, has no > problem synthesizing one module after another from one process; but the same > Clash version with GHC 8.10 fails with link-time errors on the second > compilation. > > The details are at https://github.com/clash-lang/clash-compiler/issues/1686 > but for now I'm just hoping that some lightbulb will go off for someone if some > handling of internal state has changed in GHC that could mean that the symbol > tables of loaded modules could persist between GHC invocations from the same > process. > > So, does this ring a bell for anyone? > > Thanks, > Gergo > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' I tried to commit suicide once by taking over 1,000 aspirin. But after I took 2, I felt better! 
From ghc-devs at chrisdone.com Mon Mar 8 17:13:45 2021
From: ghc-devs at chrisdone.com (Chris Done)
Date: Mon, 08 Mar 2021 17:13:45 +0000
Subject: Pointer-or-Int 63-bit representations for Integer
Message-ID: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com>

Hi all,

In OCaml's implementation, they use a well-known 63-bit representation of ints to distinguish whether a given machine word is a pointer or should be interpreted as an integer.

I was wondering whether anyone had considered the performance benefits of doing this for the venerable Integer type in Haskell? I.e. if the Int fits in 63 bits, just shift it and do regular arithmetic. If the result ever exceeds 63 bits, allocate a GMP/integer-simple integer and return a pointer to it. This way, for most applications--in my experience--integers don't really ever exceed 64-bit, so you would (possibly) pay a smaller cost than the pointer chasing involved in bignum arithmetic. Assumption: it's cheaper to do more CPU instructions than to allocate or wait for mainline memory.

This would need assistance from the GC to be able to recognize said bit flag.

As I understand the current implementation of integer-gmp, they also try to use an Int64 where possible using a constructor (https://hackage.haskell.org/package/integer-gmp-1.0.3.0/docs/src/GHC.Integer.Type.html#Integer), but I believe that the compiled code will still pointer chase through the constructor. Simple addition or subtraction, for example, is 24 times slower in Integer than in Int for 1000000 iterations:

https://github.com/haskell-perf/numbers#addition

An unboxed sum might be an improvement? e.g. (# Int# | ByteArray# #) -- would this "kind of" approximate the approach described? I don't have a good intuition of what the memory layout would be like.

Just pondering.

Cheers,

Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From arnaud.spiwack at tweag.io Mon Mar 8 17:41:43 2021 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Mon, 8 Mar 2021 18:41:43 +0100 Subject: Pointer-or-Int 63-bit representations for Integer In-Reply-To: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com> References: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com> Message-ID: For what it's worth, Ocaml uses the fact that pointers are word-aligned (hence even numbers) to let the gc distinguish between unboxed values and pointers: 63-bit integers are made odd by representing n as (2n+1). But GHC also makes use of the word-alignment of pointers: it is used for pointer tagging [ https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/haskell-execution/pointer-tagging ]. The tag is, for a closure that has been forced, a representation of its constructor (only the 7 first constructors can be so tagged, if I understand correctly). This is an optimisation for pattern matching: you don't have to run the closure's entry code every time you pattern-match. The bottom-line is that it can't be true, in GHC, that odd values are unboxed and even values are pointers, since odd pointers already exist. Not sure whether the optimisation can be recovered. Best, Arnaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghc-devs at chrisdone.com Mon Mar 8 17:46:07 2021 From: ghc-devs at chrisdone.com (Chris Done) Date: Mon, 08 Mar 2021 17:46:07 +0000 Subject: Pointer-or-Int 63-bit representations for Integer In-Reply-To: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com> References: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com> Message-ID: <8a47b852-4fe4-47b2-a222-64be15547ff5@www.fastmail.com> Hi all, I did a trivial test, in case anybody's interested, in the unboxed sum idea approach, only considering the Int# branch. 
https://gist.github.com/chrisdone/6aef640a49fc30b45ad210eac287dce9

It seems to be on par with Int, which is pretty cool because I wasn't sure what to expect. Assuming I didn't make a horrible mistake:

`B = 272.2 μs`
`Int = 270.9 μs`
`Integer = 7.860 ms`

Cheers,

Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ghc-devs at chrisdone.com Mon Mar 8 17:51:33 2021
From: ghc-devs at chrisdone.com (chris done)
Date: Mon, 08 Mar 2021 17:51:33 +0000
Subject: Pointer-or-Int 63-bit representations for Integer
In-Reply-To: References: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com>
Message-ID: 

On Mon, Mar 8, 2021, at 5:41 PM, Spiwack, Arnaud wrote:
> For what it's worth, Ocaml uses the fact that pointers are word-aligned (hence even numbers) to let the gc distinguish between unboxed values and pointers: 63-bit integers are made odd by representing n as (2n+1).
> 
> But GHC also makes use of the word-alignment of pointers: it is used for pointer tagging [ https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/rts/haskell-execution/pointer-tagging ]. The tag is, for a closure that has been forced, a representation of its constructor (only the 7 first constructors can be so tagged, if I understand correctly). This is an optimisation for pattern matching: you don't have to run the closure's entry code every time you pattern-match.
> 
> The bottom-line is that it can't be true, in GHC, that odd values are unboxed and even values are pointers, since odd pointers already exist. Not sure whether the optimisation can be recovered.
> 
> Best,
> Arnaud

I see, thanks for the pointer (tee hee!). Seems like that real estate is already used up in the runtime's representation.

I replied to the thread just now with a mildly interesting unboxed-sums result, prior to reading this. Seems like a potentially fun avenue for someone with more time.

Cheers,

Chris

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sylvain at haskus.fr Mon Mar 8 18:32:41 2021
From: sylvain at haskus.fr (Sylvain Henry)
Date: Mon, 8 Mar 2021 19:32:41 +0100
Subject: Pointer-or-Int 63-bit representations for Integer
In-Reply-To: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com>
References: <942965df-67a7-4101-ab18-567550af34a1@www.fastmail.com>
Message-ID: <3a7bd2e0-87a2-bd4f-14b8-821f41373cc4@haskus.fr>

Hi Chris,

It has been considered in the past. There are some traces in the wiki: https://gitlab.haskell.org/ghc/ghc/-/wikis/replacing-gmp-notes

>> The suggestion discussed by John Meacham, Lennart Augustsson, Simon Marlow and Bulat Ziganshin was to change the representation of Integer so the Int# does the work of S# and J#: the Int# could be either a pointer to the Bignum library array of limbs or, if the number of significant digits could fit into, say, 31 bits, to use the extra bit as an indicator of that fact and hold the entire value in the Int#, thereby saving the memory from S# and J#.

It's not trivial because it requires a new runtime representation that is dynamically boxed or not.

> An unboxed sum might be an improvement? e.g. (# Int# | ByteArray# #) -- would this "kind of" approximate the approach described? I don't have a good intuition of what the memory layout would be like.

After the unariser pass, the unboxed sum becomes an unboxed tuple: (# Int# {-tag-}, Int#, ByteArray# #). The two fields don't overlap because they don't have the same slot type. In my early experiments before implementing ghc-bignum, performance got worse in some cases with this encoding, iirc. It may be worth checking again if someone has time to do it :). Nowadays it should be easier, as we can define pattern synonyms with INLINE pragmas to replace Integer's constructors.

Another issue we have with Integer/Natural is that we have to mark most operations NOINLINE to support constant-folding. To be fair, benchmarks should take this into account.
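[Editorial note: for intuition, the small-versus-big dispatch discussed in this thread can be approximated with an ordinary boxed sum: no unboxed sums, pointer tagging, or pattern synonyms, just the fast-path/slow-path logic. `SmallOrBig`, `plus` and `toBig` are illustrative names, not anything from integer-gmp or ghc-bignum.]

```haskell
-- Boxed approximation of the Int-or-bignum idea, for intuition only.
-- The proposals in the thread replace this constructor-based dispatch
-- with an unboxed sum or a tag bit, but the overflow logic is the same.
data SmallOrBig
  = Small !Int    -- fast path: value fits in a machine word
  | Big Integer   -- slow path: result overflowed into a bignum
  deriving Show

toBig :: SmallOrBig -> Integer
toBig (Small a) = toInteger a
toBig (Big n)   = n

-- Addition that stays on the Small fast path while the result fits in
-- Int, and falls back to Integer arithmetic on overflow.
plus :: SmallOrBig -> SmallOrBig -> SmallOrBig
plus (Small a) (Small b)
  | r >= toInteger (minBound :: Int) && r <= toInteger (maxBound :: Int)
              = Small (fromInteger r)
  | otherwise = Big r
  where r = toInteger a + toInteger b
plus x y = Big (toBig x + toBig y)

main :: IO ()
main = do
  print (plus (Small 2) (Small 3))         -- Small 5
  print (plus (Small maxBound) (Small 1))  -- overflows onto the Big path
```

A real implementation would do the overflow check with a primop such as addIntC# rather than round-tripping through Integer; the boxed version just keeps the sketch portable.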
Cheers, Sylvain On 08/03/2021 18:13, Chris Done wrote: > Hi all, > > In OCaml's implementation, they use a well known 63-bit representation > of ints to distinguish whether a given machine word is either a > pointer or to be interpreted as an integer. > > I was wondering whether anyone had considered the performance benefits > of doing this for the venerable Integer type in Haskell? I.e. if the > Int fits in 63-bits, just shift it and do regular arithmetic. If the > result ever exceeds 63-bits, allocate a GMP/integer-simple integer and > return a pointer to it. This way, for most applications--in my > experience--integers don't really ever exceed 64-bit, so you would > (possibly) pay a smaller cost than the pointer chasing involved in > bignum arithmetic. Assumption: it's cheaper to do more CPU > instructions than to allocate or wait for mainline memory. > > This would need assistance from the GC to be able to recognize said > bit flag. > > As I understand the current implementation of integer-gimp, they also > try to use an Int64 where possible using a constructor > (https://hackage.haskell.org/package/integer-gmp-1.0.3.0/docs/src/GHC.Integer.Type.html#Integer > ), > but I believe that the compiled code will still pointer chase through > the constructor. Simple addition or subtraction, for example, is 24 > times slower in Integer than in Int for 1000000 iterations: > > https://github.com/haskell-perf/numbers#addition > > > An unboxed sum might be an improvement? e.g. (# Int# | ByteArray# #) > -- would this "kind of" approximate the approach described? I don't > have a good intuition of what the memory layout would be like. > > Just pondering. > > Cheers, > > Chris > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From john.ericson at obsidian.systems Mon Mar 8 23:55:46 2021
From: john.ericson at obsidian.systems (John Ericson)
Date: Mon, 8 Mar 2021 18:55:46 -0500
Subject: Requesting help with unregisterized backend failure of !4717
Message-ID: <8dc8a721-7db6-33ce-cc73-e6994aa6eeed@obsidian.systems>

Hi everyone,

https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717 fails some numerical boundary checking tests with the unregisterized backend. In particular, `minBound / (-1)` and `pred minBound` are *not* diverging as expected. This stumped me a few days ago, and no new ideas have struck me since. I would very much appreciate some help.

This does seem like something that is very likely to be backend-dependent, as different instructions/arches handle such edge cases in different ways. What makes this so peculiar, however, is that the boundary condition checking/branching is done *in regular Haskell*. I'm thus quite baffled as to what could be going wrong.

If anyone wants to dive in, see my last comment https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717#note_329840, which has a minimization that declutters the code without doing so much constant folding that the problem is avoided entirely. (I went with a compilation-unit trick, but in hindsight NOINLINE should also work and be simpler.)

Finally, let me provide some context for why I am hoping to get this merged soon. To be clear, !4492 was the main MR relating to the numerics primops critical to get in for 9.2, and it thankfully already landed. But landing this MR too would be nice: it shrinks the intermediate representation of numerical code, probably to smaller than it was originally, whereas right now it is perhaps larger than it was before due to more size<->native conversions.
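[Editorial note: for readers unfamiliar with where those Haskell-side checks live, the boundary dispatch has roughly the following shape. This is a pure sketch, with Maybe standing in for the divZeroError/overflowError throws that base performs; it is not the actual base source.]

```haskell
-- Sketch of the boundary dispatch done in ordinary Haskell before the
-- quot primop runs. base throws exceptions where this returns Nothing;
-- the point is that the branching is plain Haskell, which is why its
-- misbehaviour under one particular backend is so surprising.
safeQuot :: Int -> Int -> Maybe Int
safeQuot _ 0 = Nothing                  -- division by zero
safeQuot x (-1)
  | x == minBound = Nothing             -- minBound `quot` (-1) overflows
safeQuot x y = Just (x `quot` y)

main :: IO ()
main = print [safeQuot minBound (-1), safeQuot 7 0, safeQuot 7 2]
-- prints [Nothing,Nothing,Just 3]
```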
Also, I think this MR will help get !3658 over the finish line, which, while again not as critical for 9.2 as !4492 was, would be awfully nice to do to fully realize the new simplicity of the plan set forth in https://gitlab.haskell.org/ghc/ghc/-/wikis/Unboxed-Numerics.

Thanks,

John

N.B. I just rebased and repushed the MR, so CI might still be running or failing due to something else, but based on local testing this is still an issue. https://gitlab.haskell.org/ghc/ghc/-/pipelines/31002 is an earlier pipeline run that one can look at until CI finishes again.

From carter.schonwald at gmail.com Tue Mar 9 00:25:14 2021
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Mon, 8 Mar 2021 19:25:14 -0500
Subject: Requesting help with unregisterized backend failure of !4717
In-Reply-To: <8dc8a721-7db6-33ce-cc73-e6994aa6eeed@obsidian.systems>
References: <8dc8a721-7db6-33ce-cc73-e6994aa6eeed@obsidian.systems>
Message-ID: 

Isn’t the unregisterized backend a C compiler? You should check how gcc and clang, or whichever compiler we use, handle those issues.

On Mon, Mar 8, 2021 at 6:56 PM John Ericson wrote:
> Hi everyone,
> 
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717 fails some
> numerical boundary checking tests with the unregisterized backend. In
> particular, `minBound / (-1)` and `pred minBound` are *not* diverging as
> expected. This stumped me a few days ago, and no new ideas have struct
> me since. I would very much appreciate some help.
> 
> This does seem like something that is very likely to be
> backend-dependent, as different instructions/arches handle such edge
> cases in different ways. What makes this so peculiar, however, is that
> the boundary condition checking/branching is done *in regular Haskell*.
> I'm thus quite baffled as to what could be going wrong.
> > If anyone wants to dive in, see my last comment > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717#note_329840, > which has a minimization to declutter the code without doing enough > constant folding that the problem is avoided entirely. (I went with a > compilation unit trick, but in hindsight NOINLINE should also work and > be simpler.) > > Finally, let my provide some context for why I am hoping to get this > merged soon. To be clear, !4492 was the main MR relating to the numerics > primops critical to get in for 9.2 and it thankfully already landed. But > landing this MR too would be nice: It shrinks the intermediate > representations of numerical code probably smaller than it was > originally, whereas now it is perhaps larger than it was before due to > more size<->native conversions. Also, I think this MR will help get > !3658 over the finish line, which, while again not as critical for 9.2 > as !4492 was, would be awfully nice to do to fully realize the new > simplicity of the plan set forth in > https://gitlab.haskell.org/ghc/ghc/-/wikis/Unboxed-Numerics. > > Thanks, > > John > > N.B. I just rebased and repushed the MR, so CI might be still running or > failing due to something else, but based on local testing this is still > the an issue. https://gitlab.haskell.org/ghc/ghc/-/pipelines/31002 is an > earlier pipeline run that one can look at until CI finishes again. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From john.ericson at obsidian.systems Tue Mar 9 05:15:43 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Tue, 9 Mar 2021 00:15:43 -0500 Subject: Requesting help with unregisterized backend failure of !4717 In-Reply-To: References: <8dc8a721-7db6-33ce-cc73-e6994aa6eeed@obsidian.systems> Message-ID: <98488369-5442-db9b-edba-fad268a717f6@obsidian.systems> The problem occurs earlier in the pipeline than that. The generated C doesn't have the proper branching present in the original Haskell. On 3/8/21 7:25 PM, Carter Schonwald wrote: > Isn’t the unregisterized backend a c compiler? You should check what > gcc and clang or whoever we use handles those issues > > On Mon, Mar 8, 2021 at 6:56 PM John Ericson > wrote: > > Hi everyone, > > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717 > fails some > numerical boundary checking tests with the unregisterized backend. In > particular, `minBound / (-1)` and `pred minBound` are *not* > diverging as > expected. This stumped me a few days ago, and no new ideas have > struct > me since. I would very much appreciate some help. > > This does seem like something that is very likely to be > backend-dependent, as different instructions/arches handle such edge > cases in different ways. What makes this so peculiar, however, is > that > the boundary condition checking/branching is done *in regular > Haskell*. > I'm thus quite baffled as to what could be going wrong. > > If anyone wants to dive in, see my last comment > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4717#note_329840 > , > > which has a minimization to declutter the code without doing enough > constant folding that the problem is avoided entirely. (I went with a > compilation unit trick, but in hindsight NOINLINE should also work > and > be simpler.) > > Finally, let my provide some context for why I am hoping to get this > merged soon. 
To be clear, !4492 was the main MR relating to the > numerics > primops critical to get in for 9.2 and it thankfully already > landed. But > landing this MR too would be nice: It shrinks the intermediate > representations of numerical code probably smaller than it was > originally, whereas now it is perhaps larger than it was before > due to > more size<->native conversions. Also, I think this MR will help get > !3658 over the finish line, which, while again not as critical for > 9.2 > as !4492 was, would be awfully nice to do to fully realize the new > simplicity of the plan set forth in > https://gitlab.haskell.org/ghc/ghc/-/wikis/Unboxed-Numerics > . > > Thanks, > > John > > N.B. I just rebased and repushed the MR, so CI might be still > running or > failing due to something else, but based on local testing this is > still > the an issue. https://gitlab.haskell.org/ghc/ghc/-/pipelines/31002 > is an > earlier pipeline run that one can look at until CI finishes again. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Tue Mar 9 10:26:11 2021 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Tue, 9 Mar 2021 11:26:11 +0100 Subject: What changed between GHC 8.8 and 8.10 that could cause this? In-Reply-To: References: Message-ID: Even if MR388 ( https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 ) is the cause of the issue we're seeing with the API exposed by Clash, I still think MR388 is wrong. My reasoning is the following: In 8.8 and earlier we had: - RTS C-code contains the ground truth of what is linked. The API it provides are set-membership, insert, lookup, and delete. Notably it does not allow you to get the set of linked objects. 
- There is a globally shared MVar (using NOINLINE, sharedCaf, unsafePerformIO newIORef "tricks") to what is basically a log/view of the linked-objects state kept by the RTS C-code.

With MR388, in 8.10 and later we get:
- RTS C-code contains the ground truth of what is linked. The API it provides is set-membership, insert, lookup, and delete. Notably it does not allow you to get the set of linked objects.
- A _new_ MVar for every call to `runGhc`, which is a log/view of the linked-object state kept by the RTS C-code. But that means these MVars get out of sync with the ground truth that is the RTS C-code! And since the RTS C-code does not expose an API to get the set of linked objects, there's no way to sync these MVars either!

I'm building a ghc-8.10.2 with MR388 reverted to see whether it is indeed what is causing the issue we're seeing in Clash. Given my analysis above of what I think is wrong with MR388, I'm not saying we should completely revert MR388, but simply that we should ensure that every HscEnv created through `runGhc` gets the globally shared MVar, as opposed to the current call to `newMVar`.

On Sun, 7 Mar 2021 at 04:02, ÉRDI Gergő wrote:
> Thanks Matthew and Julian! Unfortunately, trying out GHC before/after this
> change didn't turn out to be as easy as I hoped: to do my testing, I
> need to build a given GHC commit, and then use that via Stack to install
> ~140 dependencies so that I can then test the problem I have initially
> seen. And it turns out doing that with a random GHC commit is quite
> painful because in any given Stackage snapshot there will be packages with
> which the GHC-bundled libraries are incompatible...
:/ > > > > On Thu, 4 Mar 2021, Julian Leviston wrote: > > > Hi,I don’t know enough about what Clash does to comment really, but it > sounds like > > it’s to do with my work on enabling multiple linker instances > > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe > reading through > > that or the plan I outlined at > https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might > > help, though I’m not sure. > > > > Strange, though, as this work was to isolate state in GHC — to change it > from using a > > global IORef to use a per-process MVar . But it definitely did change > the way state is > > handled, so it might be the related to these issues somehow? > > > > I realise this isn’t much help, but maybe it points you in a direction > where you can > > begin to understand some more. > > > > Julian > > > > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: > > > > Hi, > > > > I'm trying to figure out a Clash problem and managed to track it down > to a GHC > > upgrade; specifically, a given Clash version, when based on GHC 8.8, has > no > > problem synthesizing one module after another from one process; but the > same > > Clash version with GHC 8.10 fails with link-time errors on the second > > compilation. > > > > The details are at > https://github.com/clash-lang/clash-compiler/issues/1686 > > but for now I'm just hoping that some lightbulb will go off for someone > if some > > handling of internal state has changed in GHC that could mean that the > symbol > > tables of loaded modules could persist between GHC invocations from the > same > > process. > > > > So, does this ring a bell for anyone? > > > > Thanks, > > Gergo > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > > > -- > > .--= ULLA! =-----------------. 
> \ http://gergo.erdi.hu \
> `---= gergo at erdi.hu =-------'
> I tried to commit suicide once by taking over 1,000 aspirin. But after I
> took 2, I felt better!
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lonetiger at gmail.com Tue Mar 9 11:22:22 2021
From: lonetiger at gmail.com (Phyx)
Date: Tue, 9 Mar 2021 11:22:22 +0000
Subject: What changed between GHC 8.8 and 8.10 that could cause this?
In-Reply-To: References: Message-ID:

Hi,

Hmm... I don't agree.

This isn't about ground truth or anything like that, and in fact, an object being in the linker map doesn't mean it's usable at all or meant to be used at all. It can be temporary state (symbol redirection or support for deprecated symbols are two cases that come to mind). So this is also a case of: be careful.

The change introduced in the MR simply decoupled the top-level user interface from the C linker. The reason for this is simply that the majority of projects do not require shared state here, but in fact benefit from unshared state, e.g. interpreters, IDEs, etc., where you want to be able to process multiple separate files at the same time without needing to create new processes for each.

Now back to your point about runGhc needing to use a shared state. In my opinion that would be wrong. Here's the documentation for GHC 8.6.5, https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html, specifically:

----
runGhc
  :: Maybe FilePath - See argument to initGhcMonad.
  -> Ghc a - The action to perform.
  -> IO a - Run function for the Ghc monad.

It initialises the GHC session and warnings via initGhcMonad. Each call to this function will create a new session which should not be shared among several threads.

Any errors not handled inside the Ghc action are propagated as IO exceptions.
---

And if the session isn't guaranteed, there's no guarantee about the underlying state. This explicit declaration that runGhc will not share state has been in the API for decades (going as far back as I stopped looking, at 7.2). That Clash is relying on behavior we explicitly stated is not the case is a bug in Clash.

If you require shared state you should not be using the top-level runGhc wrapper, but instead call unGhc yourself (or call setSession yourself). There is perhaps a case to be made for a runGhcShared which does this, but runGhc itself never guaranteed one session or one state.

Kind regards,
Tamar

On Tue, Mar 9, 2021, 10:27 Christiaan Baaij wrote:

> Even if MR388 ( https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 )
> is the cause of the issue we're seeing with the API exposed by Clash, I
> still think MR388 is wrong.
> My reasoning is the following:
>
> In 8.8 and earlier we had:
> - RTS C-code contains the ground truth of what is linked. The API it
> provides are set-membership, insert, lookup, and delete. Notably it does
> not allow you to get the set of linked objects.
> - There is a globally shared MVar (using NOINLINE, sharedCaf,
> unsafePerformIO newIORef "tricks") to what is basically a log/view of the
> linked-objects state kept by the RTS C-code.
>
> With MR388, in 8.10 and later we get:
> - RTS C-code contains the ground truth of what is linked. The API it
> provides are set-membership, insert, lookup, and delete. Notably it does
> not allow you to get the set of linked objects.
> - A _new_ MVar for every call to `runGhc` which is a log/view of the
> linked-object state kept by the RTS C-code. But that means these MVar get
> out-of-sync with the ground truth that is the RTS C-code! And since the RTS
> C-code does not expose an API to get the set of linked objects, there's no
> way to sync these MVars either!
>
> I'm building a ghc-8.10.2 with MR388 reverted to see whether it is indeed
> what is causing the issue we're seeing in Clash.
> Given my analysis above of what I think is wrong with MR388, I'm not > saying we should completely revert MR388, but simply ensure that every > HscEnv created through `runGhc` gets the globally shared MVar; as opposed > to the current call to `newMVar`. > > On Sun, 7 Mar 2021 at 04:02, ÉRDI Gergő wrote: > >> Thanks Matthew and Julian! Unfortunately, trying out GHC before/after >> this >> change didn't turn out to be as easy as I hoped: to do my testing, I >> need to build a given GHC commit, and then use that via Stack to install >> ~140 dependencies so that I can then test the problem I have initially >> seen. And it turns out doing that with a random GHC commit is quite >> painful because in any given Stackage snapshot there will be packages >> with >> which the GHC-bundled libraries are incompatible... :/ >> >> >> >> On Thu, 4 Mar 2021, Julian Leviston wrote: >> >> > Hi,I don’t know enough about what Clash does to comment really, but it >> sounds like >> > it’s to do with my work on enabling multiple linker instances >> > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe >> reading through >> > that or the plan I outlined at >> https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might >> > help, though I’m not sure. >> > >> > Strange, though, as this work was to isolate state in GHC — to change >> it from using a >> > global IORef to use a per-process MVar . But it definitely did change >> the way state is >> > handled, so it might be the related to these issues somehow? >> > >> > I realise this isn’t much help, but maybe it points you in a direction >> where you can >> > begin to understand some more. 
>> > >> > Julian >> > >> > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: >> > >> > Hi, >> > >> > I'm trying to figure out a Clash problem and managed to track it down >> to a GHC >> > upgrade; specifically, a given Clash version, when based on GHC 8.8, >> has no >> > problem synthesizing one module after another from one process; but the >> same >> > Clash version with GHC 8.10 fails with link-time errors on the second >> > compilation. >> > >> > The details are at >> https://github.com/clash-lang/clash-compiler/issues/1686 >> > but for now I'm just hoping that some lightbulb will go off for someone >> if some >> > handling of internal state has changed in GHC that could mean that the >> symbol >> > tables of loaded modules could persist between GHC invocations from the >> same >> > process. >> > >> > So, does this ring a bell for anyone? >> > >> > Thanks, >> > Gergo >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > >> > >> > >> >> -- >> >> .--= ULLA! =-----------------. >> \ http://gergo.erdi.hu \ >> `---= gergo at erdi.hu =-------' >> I tried to commit suicide once by taking over 1,000 aspirin. But after I >> took 2, I felt better!_______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Tue Mar 9 11:46:14 2021 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Tue, 9 Mar 2021 12:46:14 +0100 Subject: What changed between GHC 8.8 and 8.10 that could cause this? 
In-Reply-To: References: Message-ID: But we don't _want_ the shared state, it's simply there. This whole issue arises from the fact that we were oblivious to the shared RTS state, resulting in Clash doing GHC API calls where the RTS loads/links an object file twice. And we're not even explicitly linking/loading object files twice, something to do with the GHC type-checker seems to do that. I don't see how I can avoid this issue without being forced to run within a single `runGhc` session. On Tue, 9 Mar 2021 at 12:22, Phyx wrote: > Hi, > > Hmm... I don't agree.. > > This isn't about grounds of truth or anything like that.. and in fact, an > object being in the linker map, doesn't mean its usable at all or meant to > be used at all. > It can be temporary state (symbol redirection or supporting of deprecated > symbols are two that come to mind). So this is also a case of.. be careful. > > The change introduced in the MR simply decoupled the top level user > interface and the C linker. > The reason for this is simply because the majority of projects do not > require shared state here, but infact benefit from unshared state. > > i.e. interpreters, IDEs etc. Where you want to be able to process > multiple separate files at the same time without needing to create new > processes for each. > > Now back to your point about runGhc needing to use a shared state.. In my > opinion that would be wrong. > > Here's the documentation for GHC 8.6.5 > https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html > > specifically: > > ---- > > runGhc > :: Maybe FilePath - See argument to initGhcMonad. > -> Ghc a - The action to perform. > -> IO a - Run function for the Ghc monad. > > It initialises the GHC session and warnings via initGhcMonad. > Each call to this function will create a new session which should not be > shared among several threads. > > Any errors not handled inside the Ghc action are propagated as IO > exceptions. 
> > --- > > And if the session isn't guaranteed there's no guarantee about the > underlying state. > This explicit declaration that runGhc will not share state has been in the > API for for decades (going as far back as I stopped looking at 7.2). > > That Clash is relying on behavior we explicitly stated is not the case is > a bug in Clash. > > If you require shared state you should not be using the top level runGhc > wrapper but instead call unGhc yourself (or call setSession yourself). > > There is perhaps a case to be made for a runGhcShared which does this, but > runGhc itself never guaranteed one session or one state. > > Kind regards, > Tamar > > On Tue, Mar 9, 2021, 10:27 Christiaan Baaij > wrote: > >> Even if MR388 ( https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 >> ) is the cause of the issue we're seeing with the API exposed by Clash, I >> still think MR388 is wrong. >> My reasoning is the following: >> >> In 8.8 and earlier we had: >> - RTS C-code contains the ground truth of what is linked. The API it >> provides are set-membership, insert, lookup, and delete. Notably it does >> not allow you to get the set of linked objects. >> - There is a globally shared MVar (using NOINLINE, sharedCaf, >> unsafePerformIO newIORef "tricks") to what is basically a log/view of the >> linked-objects state kept by the RTS C-code. >> >> With MR388, in 8.10 and later we get: >> - RTS C-code contains the ground truth of what is linked. The API it >> provides are set-membership, insert, lookup, and delete. Notably it does >> not allow you to get the set of linked objects. >> - A _new_ MVar for every call to `runGhc` which is a log/view of the >> linked-object state kept by the RTS C-code. But that means these MVar get >> out-of-sync with the ground truth that is the RTS C-code! And since the RTS >> C-code does not expose an API to get the set of linked objects, there's no >> way to sync these MVars either! 
>> >> I'm building a ghc-8.10.2 with MR388 reverted to see whether it is indeed >> what is causing the issue we're seeing in Clash. >> Given my analysis above of what I think is wrong with MR388, I'm not >> saying we should completely revert MR388, but simply ensure that every >> HscEnv created through `runGhc` gets the globally shared MVar; as opposed >> to the current call to `newMVar`. >> >> On Sun, 7 Mar 2021 at 04:02, ÉRDI Gergő wrote: >> >>> Thanks Matthew and Julian! Unfortunately, trying out GHC before/after >>> this >>> change didn't turn out to be as easy as I hoped: to do my testing, I >>> need to build a given GHC commit, and then use that via Stack to install >>> ~140 dependencies so that I can then test the problem I have initially >>> seen. And it turns out doing that with a random GHC commit is quite >>> painful because in any given Stackage snapshot there will be packages >>> with >>> which the GHC-bundled libraries are incompatible... :/ >>> >>> >>> >>> On Thu, 4 Mar 2021, Julian Leviston wrote: >>> >>> > Hi,I don’t know enough about what Clash does to comment really, but it >>> sounds like >>> > it’s to do with my work on enabling multiple linker instances >>> > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe >>> reading through >>> > that or the plan I outlined at >>> https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might >>> > help, though I’m not sure. >>> > >>> > Strange, though, as this work was to isolate state in GHC — to change >>> it from using a >>> > global IORef to use a per-process MVar . But it definitely did change >>> the way state is >>> > handled, so it might be the related to these issues somehow? >>> > >>> > I realise this isn’t much help, but maybe it points you in a direction >>> where you can >>> > begin to understand some more. 
>>> > >>> > Julian >>> > >>> > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: >>> > >>> > Hi, >>> > >>> > I'm trying to figure out a Clash problem and managed to track it down >>> to a GHC >>> > upgrade; specifically, a given Clash version, when based on GHC 8.8, >>> has no >>> > problem synthesizing one module after another from one process; but >>> the same >>> > Clash version with GHC 8.10 fails with link-time errors on the second >>> > compilation. >>> > >>> > The details are at >>> https://github.com/clash-lang/clash-compiler/issues/1686 >>> > but for now I'm just hoping that some lightbulb will go off for >>> someone if some >>> > handling of internal state has changed in GHC that could mean that the >>> symbol >>> > tables of loaded modules could persist between GHC invocations from >>> the same >>> > process. >>> > >>> > So, does this ring a bell for anyone? >>> > >>> > Thanks, >>> > Gergo >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > >>> > >>> > >>> > >>> >>> -- >>> >>> .--= ULLA! =-----------------. >>> \ http://gergo.erdi.hu \ >>> `---= gergo at erdi.hu =-------' >>> I tried to commit suicide once by taking over 1,000 aspirin. But after I >>> took 2, I felt better!_______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Mar 9 12:43:11 2021 From: lonetiger at gmail.com (Phyx) Date: Tue, 9 Mar 2021 12:43:11 +0000 Subject: What changed between GHC 8.8 and 8.10 that could cause this? 
In-Reply-To: References: Message-ID:

Hi,

> But we don't _want_ the shared state, it's simply there.
> This whole issue arises from the fact that we were oblivious to the shared
> RTS state, resulting in Clash doing GHC API calls where the RTS loads/links
> an object file twice.

The RTS should under no circumstances actually be loading an object file twice, as there's only one linker map, and doing so should result in a symbol collision.

The error you posted at https://github.com/clash-lang/clash-compiler/issues/1686 actually shows the linker doing the right thing:

GHC runtime linker: fatal error: I found a duplicate definition for symbol Lib2_plots2_closure
whilst processing object file .stack-work/dist/x86_64-linux-tinfo6/Cabal-3.2.1.0/build/exe2/_clashilator/clash-syn/Lib2.o
The symbol was previously defined in .stack-work/dist/x86_64-linux-tinfo6/Cabal-3.2.1.0/build/exe1/_clashilator/clash-syn/Lib2.o

You're loading the same object file twice from different build folders, and the linker has no guarantee that these two are the same symbol at all. This, however, is indeed a shortcoming of MR388: we can't split the C linker map easily.

> And we're not even explicitly linking/loading object files twice,
> something to do with the GHC type-checker seems to do that.

Yes, but you have a new object file, in a different path. This can't be resolved by the linker cache. This looks like it accidentally worked before, as the shared Haskell linker state resolves based on the closure name itself. So it never asked the C linker. I say accidental because there's no guarantee that the closures in exe1 and exe2 are the same, despite them having the same name.

> I don't see how I can avoid this issue without being forced to run
> within a single `runGhc` session.

As I mentioned below, you can override the hsc_dynLinker in a wrapper around runGhc, i.e.

runClashGhc :: -> ..
do shared_linker <- ...
   runGhc .. $ do
     setSession $ hsc_env { hsc_dynLinker = shared_linker }

Should restore the behavior.
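[Editor's illustration] To make the shape of that wrapper concrete, here is a minimal self-contained sketch of the shared-linker idea. DynLinker, runSessionWith and loadObject below are simplified stand-ins invented for illustration, not the real GHC API: the only point is that the state is allocated once and injected into every session, so a second session sees what the first one linked.

```haskell
import Control.Concurrent.MVar (MVar, newMVar, modifyMVar)
import qualified Data.Set as Set

-- Stand-in for GHC's linker state: the set of objects linked so far.
newtype DynLinker = DynLinker (MVar (Set.Set FilePath))

newDynLinker :: IO DynLinker
newDynLinker = DynLinker <$> newMVar Set.empty

-- Stand-in for a GHC session. Post-MR388, each run would allocate a
-- fresh linker; here the caller injects one (the runClashGhc idea).
runSessionWith :: DynLinker -> (DynLinker -> IO a) -> IO a
runSessionWith linker act = act linker

-- Returns False if the object was already linked (the duplicate case).
loadObject :: DynLinker -> FilePath -> IO Bool
loadObject (DynLinker ref) p = modifyMVar ref $ \s ->
  pure (Set.insert p s, not (Set.member p s))

main :: IO ()
main = do
  shared <- newDynLinker
  ok1 <- runSessionWith shared $ \l -> loadObject l "exe1/Lib2.o"
  ok2 <- runSessionWith shared $ \l -> loadObject l "exe1/Lib2.o"
  print (ok1, ok2)  -- prints (True,False): second load is rejected
```

With a fresh MVar per session instead, both loads would report True, and the duplicate would only surface later as the C linker's fatal duplicate-symbol error quoted above.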
You don't need to run inside a single runGhc, you just need to provide a single hsc_dynLinker. That should work. Kind Regards, Tamar On Tue, Mar 9, 2021, 11:46 Christiaan Baaij wrote: > But we don't _want_ the shared state, it's simply there. > This whole issue arises from the fact that we were oblivious to the shared > RTS state, resulting in Clash doing GHC API calls where the RTS loads/links > an object file twice. > And we're not even explicitly linking/loading object files twice, > something to do with the GHC type-checker seems to do that. > I don't see how I can avoid this issue without being forced to run within > a single `runGhc` session. > > On Tue, 9 Mar 2021 at 12:22, Phyx wrote: > >> Hi, >> >> Hmm... I don't agree.. >> >> This isn't about grounds of truth or anything like that.. and in fact, an >> object being in the linker map, doesn't mean its usable at all or meant to >> be used at all. >> It can be temporary state (symbol redirection or supporting of deprecated >> symbols are two that come to mind). So this is also a case of.. be careful. >> >> The change introduced in the MR simply decoupled the top level user >> interface and the C linker. >> The reason for this is simply because the majority of projects do not >> require shared state here, but infact benefit from unshared state. >> >> i.e. interpreters, IDEs etc. Where you want to be able to process >> multiple separate files at the same time without needing to create new >> processes for each. >> >> Now back to your point about runGhc needing to use a shared state.. In my >> opinion that would be wrong. >> >> Here's the documentation for GHC 8.6.5 >> https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html >> >> specifically: >> >> ---- >> >> runGhc >> :: Maybe FilePath - See argument to initGhcMonad. >> -> Ghc a - The action to perform. >> -> IO a - Run function for the Ghc monad. >> >> It initialises the GHC session and warnings via initGhcMonad. 
>> Each call to this function will create a new session which should not be >> shared among several threads. >> >> Any errors not handled inside the Ghc action are propagated as IO >> exceptions. >> >> --- >> >> And if the session isn't guaranteed there's no guarantee about the >> underlying state. >> This explicit declaration that runGhc will not share state has been in >> the API for for decades (going as far back as I stopped looking at 7.2). >> >> That Clash is relying on behavior we explicitly stated is not the case is >> a bug in Clash. >> >> If you require shared state you should not be using the top level runGhc >> wrapper but instead call unGhc yourself (or call setSession yourself). >> >> There is perhaps a case to be made for a runGhcShared which does this, >> but runGhc itself never guaranteed one session or one state. >> >> Kind regards, >> Tamar >> >> On Tue, Mar 9, 2021, 10:27 Christiaan Baaij >> wrote: >> >>> Even if MR388 ( https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 >>> ) is the cause of the issue we're seeing with the API exposed by Clash, I >>> still think MR388 is wrong. >>> My reasoning is the following: >>> >>> In 8.8 and earlier we had: >>> - RTS C-code contains the ground truth of what is linked. The API it >>> provides are set-membership, insert, lookup, and delete. Notably it does >>> not allow you to get the set of linked objects. >>> - There is a globally shared MVar (using NOINLINE, sharedCaf, >>> unsafePerformIO newIORef "tricks") to what is basically a log/view of the >>> linked-objects state kept by the RTS C-code. >>> >>> With MR388, in 8.10 and later we get: >>> - RTS C-code contains the ground truth of what is linked. The API it >>> provides are set-membership, insert, lookup, and delete. Notably it does >>> not allow you to get the set of linked objects. >>> - A _new_ MVar for every call to `runGhc` which is a log/view of the >>> linked-object state kept by the RTS C-code. 
But that means these MVar get >>> out-of-sync with the ground truth that is the RTS C-code! And since the RTS >>> C-code does not expose an API to get the set of linked objects, there's no >>> way to sync these MVars either! >>> >>> I'm building a ghc-8.10.2 with MR388 reverted to see whether it is >>> indeed what is causing the issue we're seeing in Clash. >>> Given my analysis above of what I think is wrong with MR388, I'm not >>> saying we should completely revert MR388, but simply ensure that every >>> HscEnv created through `runGhc` gets the globally shared MVar; as opposed >>> to the current call to `newMVar`. >>> >>> On Sun, 7 Mar 2021 at 04:02, ÉRDI Gergő wrote: >>> >>>> Thanks Matthew and Julian! Unfortunately, trying out GHC before/after >>>> this >>>> change didn't turn out to be as easy as I hoped: to do my testing, I >>>> need to build a given GHC commit, and then use that via Stack to >>>> install >>>> ~140 dependencies so that I can then test the problem I have initially >>>> seen. And it turns out doing that with a random GHC commit is quite >>>> painful because in any given Stackage snapshot there will be packages >>>> with >>>> which the GHC-bundled libraries are incompatible... :/ >>>> >>>> >>>> >>>> On Thu, 4 Mar 2021, Julian Leviston wrote: >>>> >>>> > Hi,I don’t know enough about what Clash does to comment really, but >>>> it sounds like >>>> > it’s to do with my work on enabling multiple linker instances >>>> > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe >>>> reading through >>>> > that or the plan I outlined at >>>> https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might >>>> > help, though I’m not sure. >>>> > >>>> > Strange, though, as this work was to isolate state in GHC — to change >>>> it from using a >>>> > global IORef to use a per-process MVar . But it definitely did change >>>> the way state is >>>> > handled, so it might be the related to these issues somehow? 
>>>> > >>>> > I realise this isn’t much help, but maybe it points you in a >>>> direction where you can >>>> > begin to understand some more. >>>> > >>>> > Julian >>>> > >>>> > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: >>>> > >>>> > Hi, >>>> > >>>> > I'm trying to figure out a Clash problem and managed to track it >>>> down to a GHC >>>> > upgrade; specifically, a given Clash version, when based on GHC 8.8, >>>> has no >>>> > problem synthesizing one module after another from one process; but >>>> the same >>>> > Clash version with GHC 8.10 fails with link-time errors on the second >>>> > compilation. >>>> > >>>> > The details are at >>>> https://github.com/clash-lang/clash-compiler/issues/1686 >>>> > but for now I'm just hoping that some lightbulb will go off for >>>> someone if some >>>> > handling of internal state has changed in GHC that could mean that >>>> the symbol >>>> > tables of loaded modules could persist between GHC invocations from >>>> the same >>>> > process. >>>> > >>>> > So, does this ring a bell for anyone? >>>> > >>>> > Thanks, >>>> > Gergo >>>> > _______________________________________________ >>>> > ghc-devs mailing list >>>> > ghc-devs at haskell.org >>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> > >>>> > >>>> > >>>> > >>>> >>>> -- >>>> >>>> .--= ULLA! =-----------------. >>>> \ http://gergo.erdi.hu \ >>>> `---= gergo at erdi.hu =-------' >>>> I tried to commit suicide once by taking over 1,000 aspirin. But after >>>> I took 2, I felt better!_______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From christiaan.baaij at gmail.com Tue Mar 9 13:29:40 2021 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Tue, 9 Mar 2021 14:29:40 +0100 Subject: What changed between GHC 8.8 and 8.10 that could cause this? In-Reply-To: References: Message-ID: Thanks for showing the proper way to do this and taking the time to show why my analysis of MR388 was wrong! We'll update the way Clash interacts with the GHC API. On Tue, 9 Mar 2021 at 13:43, Phyx wrote: > Hi, > > > But we don't _want_ the shared state, it's simply there. > > This whole issue arises from the fact that we were oblivious to the > shared RTS state, resulting in Clash doing GHC API calls where the RTS > loads/links an object file twice. > > The RTS should under no circumstances be actually loading an object file > twice as there's only one linker map and should result in a symbol > collision. > > Looking at the error you posted at > https://github.com/clash-lang/clash-compiler/issues/1686 is actually the > linker doing the right thing. > > GHC runtime linker: fatal error: I found a duplicate definition for symbol > Lib2_plots2_closure > whilst processing object file > .stack-work/dist/x86_64-linux-tinfo6/Cabal-3.2.1.0/build/exe2/_clashilator/clash-syn/Lib2.o > The symbol was previously defined in > .stack-work/dist/x86_64-linux-tinfo6/Cabal-3.2.1.0/build/exe1/_clashilator/clash-syn/Lib2.o > > You're loading the same object file twice from different build folders and > the linker has no guarantee that these two are the same symbol at all. > This however is indeed a shortcoming of M388 that we can't split the C > linker map easily. > > > And we're not even explicitly linking/loading object files twice, > something to do with the GHC type-checker seems to do that. > > Yes but you have a new object file, in a different path. This can't be > resolved by the linker cache. This looks like it accidentally worked > before as the shared Haskell Linker state resolves based on the Close name > itself. 
> So it never asked the C linker. I say accidental because there's no > guarantee that the closure in exe1 and exe2 are the same, despite them > having the same name.. > > > I don't see how I can avoid this issue without being forced to run > within a single `runGhc` session. > > As I mentioned below, you can override the hsc_dynLinker in a wrapper > around runGhc. > > i.e. > > runClashGhc :: -> .. > do shared_linker <- ... > runGhc .. $ do > setSession $ hsc_env { hsc_dynLinker = shared_linker } > > > Should restore the behavior. You don't need to run inside a single runGhc, > you just need to provide a single hsc_dynLinker. > > That should work. > > Kind Regards, > Tamar > > On Tue, Mar 9, 2021, 11:46 Christiaan Baaij > wrote: > >> But we don't _want_ the shared state, it's simply there. >> This whole issue arises from the fact that we were oblivious to the >> shared RTS state, resulting in Clash doing GHC API calls where the RTS >> loads/links an object file twice. >> And we're not even explicitly linking/loading object files twice, >> something to do with the GHC type-checker seems to do that. >> I don't see how I can avoid this issue without being forced to run within >> a single `runGhc` session. >> >> On Tue, 9 Mar 2021 at 12:22, Phyx wrote: >> >>> Hi, >>> >>> Hmm... I don't agree.. >>> >>> This isn't about grounds of truth or anything like that.. and in fact, >>> an object being in the linker map, doesn't mean its usable at all or meant >>> to be used at all. >>> It can be temporary state (symbol redirection or supporting of >>> deprecated symbols are two that come to mind). So this is also a case of.. >>> be careful. >>> >>> The change introduced in the MR simply decoupled the top level user >>> interface and the C linker. >>> The reason for this is simply because the majority of projects do not >>> require shared state here, but infact benefit from unshared state. >>> >>> i.e. interpreters, IDEs etc. 
Where you want to be able to process >>> multiple separate files at the same time without needing to create new >>> processes for each. >>> >>> Now back to your point about runGhc needing to use a shared state.. In >>> my opinion that would be wrong. >>> >>> Here's the documentation for GHC 8.6.5 >>> https://hackage.haskell.org/package/ghc-8.6.5/docs/GHC.html >>> >>> specifically: >>> >>> ---- >>> >>> runGhc >>> :: Maybe FilePath - See argument to initGhcMonad. >>> -> Ghc a - The action to perform. >>> -> IO a - Run function for the Ghc monad. >>> >>> It initialises the GHC session and warnings via initGhcMonad. >>> Each call to this function will create a new session which should not be >>> shared among several threads. >>> >>> Any errors not handled inside the Ghc action are propagated as IO >>> exceptions. >>> >>> --- >>> >>> And if the session isn't guaranteed there's no guarantee about the >>> underlying state. >>> This explicit declaration that runGhc will not share state has been in >>> the API for for decades (going as far back as I stopped looking at 7.2). >>> >>> That Clash is relying on behavior we explicitly stated is not the case >>> is a bug in Clash. >>> >>> If you require shared state you should not be using the top level runGhc >>> wrapper but instead call unGhc yourself (or call setSession yourself). >>> >>> There is perhaps a case to be made for a runGhcShared which does this, >>> but runGhc itself never guaranteed one session or one state. >>> >>> Kind regards, >>> Tamar >>> >>> On Tue, Mar 9, 2021, 10:27 Christiaan Baaij >>> wrote: >>> >>>> Even if MR388 ( https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 >>>> ) is the cause of the issue we're seeing with the API exposed by Clash, I >>>> still think MR388 is wrong. >>>> My reasoning is the following: >>>> >>>> In 8.8 and earlier we had: >>>> - RTS C-code contains the ground truth of what is linked. The API it >>>> provides are set-membership, insert, lookup, and delete. 
Notably it does >>>> not allow you to get the set of linked objects. >>>> - There is a globally shared MVar (using NOINLINE, sharedCaf, >>>> unsafePerformIO newIORef "tricks") to what is basically a log/view of the >>>> linked-objects state kept by the RTS C-code. >>>> >>>> With MR388, in 8.10 and later we get: >>>> - RTS C-code contains the ground truth of what is linked. The API it >>>> provides are set-membership, insert, lookup, and delete. Notably it does >>>> not allow you to get the set of linked objects. >>>> - A _new_ MVar for every call to `runGhc` which is a log/view of the >>>> linked-object state kept by the RTS C-code. But that means these MVar get >>>> out-of-sync with the ground truth that is the RTS C-code! And since the RTS >>>> C-code does not expose an API to get the set of linked objects, there's no >>>> way to sync these MVars either! >>>> >>>> I'm building a ghc-8.10.2 with MR388 reverted to see whether it is >>>> indeed what is causing the issue we're seeing in Clash. >>>> Given my analysis above of what I think is wrong with MR388, I'm not >>>> saying we should completely revert MR388, but simply ensure that every >>>> HscEnv created through `runGhc` gets the globally shared MVar; as opposed >>>> to the current call to `newMVar`. >>>> >>>> On Sun, 7 Mar 2021 at 04:02, ÉRDI Gergő wrote: >>>> >>>>> Thanks Matthew and Julian! Unfortunately, trying out GHC before/after >>>>> this >>>>> change didn't turn out to be as easy as I hoped: to do my testing, I >>>>> need to build a given GHC commit, and then use that via Stack to >>>>> install >>>>> ~140 dependencies so that I can then test the problem I have initially >>>>> seen. And it turns out doing that with a random GHC commit is quite >>>>> painful because in any given Stackage snapshot there will be packages >>>>> with >>>>> which the GHC-bundled libraries are incompatible... 
:/ >>>>> >>>>> >>>>> >>>>> On Thu, 4 Mar 2021, Julian Leviston wrote: >>>>> >>>>> > Hi,I don’t know enough about what Clash does to comment really, but >>>>> it sounds like >>>>> > it’s to do with my work on enabling multiple linker instances >>>>> > in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/388 — maybe >>>>> reading through >>>>> > that or the plan I outlined at >>>>> https://gitlab.haskell.org/ghc/ghc/-/issues/3372 might >>>>> > help, though I’m not sure. >>>>> > >>>>> > Strange, though, as this work was to isolate state in GHC — to >>>>> change it from using a >>>>> > global IORef to use a per-process MVar . But it definitely did >>>>> change the way state is >>>>> > handled, so it might be the related to these issues somehow? >>>>> > >>>>> > I realise this isn’t much help, but maybe it points you in a >>>>> direction where you can >>>>> > begin to understand some more. >>>>> > >>>>> > Julian >>>>> > >>>>> > On 4 Mar 2021, at 10:55 pm, ÉRDI Gergő wrote: >>>>> > >>>>> > Hi, >>>>> > >>>>> > I'm trying to figure out a Clash problem and managed to track it >>>>> down to a GHC >>>>> > upgrade; specifically, a given Clash version, when based on GHC 8.8, >>>>> has no >>>>> > problem synthesizing one module after another from one process; but >>>>> the same >>>>> > Clash version with GHC 8.10 fails with link-time errors on the second >>>>> > compilation. >>>>> > >>>>> > The details are at >>>>> https://github.com/clash-lang/clash-compiler/issues/1686 >>>>> > but for now I'm just hoping that some lightbulb will go off for >>>>> someone if some >>>>> > handling of internal state has changed in GHC that could mean that >>>>> the symbol >>>>> > tables of loaded modules could persist between GHC invocations from >>>>> the same >>>>> > process. >>>>> > >>>>> > So, does this ring a bell for anyone? 
>>>>> > >>>>> > Thanks, >>>>> > Gergo >>>>> > _______________________________________________ >>>>> > ghc-devs mailing list >>>>> > ghc-devs at haskell.org >>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> > >>>>> > >>>>> > >>>>> > >>>>> >>>>> -- >>>>> >>>>> .--= ULLA! =-----------------. >>>>> \ http://gergo.erdi.hu \ >>>>> `---= gergo at erdi.hu =-------' >>>>> I tried to commit suicide once by taking over 1,000 aspirin. But after >>>>> I took 2, I felt better!_______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Mar 9 15:22:41 2021 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 9 Mar 2021 15:22:41 +0000 Subject: Running ghc-debug on ghc Message-ID: Hi, I now have some simple instructions for running ghc-debug on GHC. 1. Cherry-pick 4be70967f1f1ab70cbe31aad8ae69aea87c6f4c4 commit 4be70967f1f1ab70cbe31aad8ae69aea87c6f4c4 (HEAD -> wip/ghc-with-debug) Author: Matthew Pickering Date: Fri Jan 8 11:26:17 2021 +0000 Add support for ghc-debug to ghc executable 2. Build GHC * Add the following to _build/hadrian.settings ``` stage1.*.ghc.hs.opts += -finfo-table-map -fdistinct-constructor-tables ``` * Build GHC as normal ``` ./hadrian/build -j8 ``` * The result is a ghc-debug enabled compiler # Building a debugger * Use the compiler you just built to build ghc-debug ``` cd ghc-debug cabal update cabal new-build debugger -w ../_build/stage1/bin/ghc ``` # Running the debugger Modify `test/Test.hs` to implement the debugging thing you want to do. Perhaps start with `p30`, which is a program to generate a profile. 
* Start the process you want to debug ``` GHC_DEBUG_SOCKET=/tmp/ghc-debug build-cabal ``` * Start the debugger ``` cabal new-run debugger -w ... ``` * Open a ticket about the memory issue you find. There is the start of some more documentation here - http://ghc.gitlab.haskell.org/ghc-debug/docs.html These instructions are also in the instructions.md file in the commit. Cheers, Matt From carter.schonwald at gmail.com Tue Mar 9 15:27:27 2021 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 9 Mar 2021 10:27:27 -0500 Subject: Running ghc-debug on ghc In-Reply-To: References: Message-ID: This looks really cool! So it’s like a debugger that can also do heap introspection profiling as a consequence of being able to inspect the state as part of debugging? On Tue, Mar 9, 2021 at 10:23 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Hi, > > I now have some simple instructions for running ghc-debug on GHC. > > 1. Cherry-pick 4be70967f1f1ab70cbe31aad8ae69aea87c6f4c4 > > commit 4be70967f1f1ab70cbe31aad8ae69aea87c6f4c4 (HEAD -> > wip/ghc-with-debug) > Author: Matthew Pickering > Date: Fri Jan 8 11:26:17 2021 +0000 > > Add support for ghc-debug to ghc executable > > 2. Build GHC > * Add the following to _build/hadrian.settings > > ``` > stage1.*.ghc.hs.opts += -finfo-table-map -fdistinct-constructor-tables > ``` > > * Build GHC as normal > > ``` > ./hadrian/build -j8 > ``` > > * The result is a ghc-debug enabled compiler > > # Building a debugger > > * Use the compiler you just built to build ghc-debug > > ``` > cd ghc-debug > cabal update > cabal new-build debugger -w ../_build/stage1/bin/ghc > ``` > > # Running the debugger > > Modify `test/Test.hs` to implement the debugging thing you want to do. > Perhaps > start with `p30`, which is a program to generate a profile. > > > * Start the process you want to debug > ``` > GHC_DEBUG_SOCKET=/tmp/ghc-debug build-cabal > ``` > > * Start the debugger > ``` > cabal new-run debugger -w ... 
> ``` > > * Open a ticket about the memory issue you find. > > There is the start of some more documentation here - > http://ghc.gitlab.haskell.org/ghc-debug/docs.html > > These instructions are also in the instructions.md file in the commit. > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 9 22:37:45 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 9 Mar 2021 22:37:45 +0000 Subject: WSL2 Message-ID: Friends I've just installed WSL2 and built GHC. I get this (single) validation failure in libraries/unix/tests/getGroupEntryForName. It seems to be just an error message wibble, but I can't push a change to master because that'll affect everyone else. Any ideas? Simon =====> 1 of 1 [0, 0, 0] ]0;getGroupEntryForName(normal) 1 of 1 [0, 0, 0]Actual stderr output differs from expected: --- getGroupEntryForName.run/getGroupEntryForName.stderr.normalised 2021-03-09 22:36:01.300421100 +0000 +++ getGroupEntryForName.run/getGroupEntryForName.run.stderr.normalised 2021-03-09 22:36:01.300421100 +0000 @@ -1 +1 @@ -getGroupEntryForName: getGroupEntryForName: does not exist (no such group) +getGroupEntryForName: getGroupEntryForName: does not exist (No such process) *** unexpected failure for getGroupEntryForName(normal) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Mar 10 17:21:30 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 10 Mar 2021 12:21:30 -0500 Subject: WSL2 In-Reply-To: References: Message-ID: <8735x2vs9x.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Friends > I've just installed WSL2 and built GHC. > I get this (single) validation failure in > libraries/unix/tests/getGroupEntryForName. 
It seems to be just an > error message wibble, but I can't push a change to master because > that'll affect everyone else. Hmm, this is quite unfortunate. My recollection is that WSL2 by default runs an Ubuntu image, so I'm somewhat surprised that this is failing. Can you paste the output of `uname -a` and `cat /etc/os-release`? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Wed Mar 10 22:22:33 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 10 Mar 2021 22:22:33 +0000 Subject: WSL2 In-Reply-To: <8735x2vs9x.fsf@smart-cactus.org> References: <8735x2vs9x.fsf@smart-cactus.org> Message-ID: | Hmm, this is quite unfortunate. My recollection is that WSL2 by | default runs an Ubuntu image, so I'm somewhat surprised that this is | failing. bash$ uname -a Linux MSRC-3645512 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux bash$ cat /etc/os-release NAME="Ubuntu" VERSION="20.04.2 LTS (Focal Fossa)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 20.04.2 LTS" VERSION_ID="20.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=focal UBUNTU_CODENAME=focal | -----Original Message----- | From: Ben Gamari | Sent: 10 March 2021 17:22 | To: Simon Peyton Jones ; ghc-devs | Subject: Re: WSL2 | | Simon Peyton Jones via ghc-devs writes: | | > Friends | > I've just installed WSL2 and built GHC. | > I get this (single) validation failure in | > libraries/unix/tests/getGroupEntryForName. It seems to be just an | > error message wibble, but I can't push a change to master because | > that'll affect everyone else. | | Hmm, this is quite unfortunate. 
My recollection is that WSL2 by | default runs an Ubuntu image, so I'm somewhat surprised that this is | failing. | | Can you paste the output of `uname -a` and `cat /etc/os-release`? | | Cheers, | | - Ben From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Thu Mar 11 10:19:52 2021 From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 11 Mar 2021 10:19:52 +0000 Subject: WSL2 Message-ID: <20210311101952.GA15063@cloudinit-builder> SPJ Wrote: > I've just installed WSL2 and built GHC. I get this (single) > validation failure in libraries/unix/tests/getGroupEntryForName. It > seems to be just an error message wibble, but I can't push a change > to master because that'll affect everyone else. Interesting, I've only ever built GHC on WSL and WSL2. I've seen this error message on WSL2 during every test run, I think. I didn't realise that it never occurred on other platforms, let alone that it was WSL2 specific! Tom From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Thu Mar 11 10:32:30 2021 From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 11 Mar 2021 10:32:30 +0000 Subject: What type of performance regression testing does GHC go through? Message-ID: <20210311103230.GB15063@cloudinit-builder> A user posted the following to the ghc-proposals repository. Both JB and RAE suggested ghc-devs as a more appropriate forum. Since I have no idea whether the user has even ever used a mailing list before I thought I would lower the activation energy by posting their message for them. https://github.com/ghc-proposals/ghc-proposals/issues/410 > Hi, > > Does the GHC release or development process include regression > testing for performance? > > Is this the place to discuss ideas for implementing such a thing and > to eventually craft a proposal? > > I believe the performance impact of changes to GHC needs to be > verified/validated before release. 
I also believe this would be > feasible if we tracked metrics on building a wide variety of > real-world packages. Using real-world packages is one of the best > ways to see the actual impact users will experience. It's also a > great way to broaden the scope of tests, particularly with the > combination of language pragmas and enabled features within the > compiler. From ietf-dane at dukhovni.org Thu Mar 11 11:05:04 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Thu, 11 Mar 2021 06:05:04 -0500 Subject: WSL2 In-Reply-To: <20210311101952.GA15063@cloudinit-builder> References: <20210311101952.GA15063@cloudinit-builder> Message-ID: On Thu, Mar 11, 2021 at 10:19:52AM +0000, Tom Ellis wrote: > SPJ Wrote: > > I've just installed WSL2 and built GHC. I get this (single) > > validation failure in libraries/unix/tests/getGroupEntryForName. It > > seems to be just an error message wibble, but I can't push a change > > to master because that'll affect everyone else. > > Interesting, I've only ever built GHC on WSL and WSL2. I've seen this > error message on WSL2 during every test run, I think. I didn't > realise that it never occurred on other platforms, let alone that it > was WSL2 specific! I am curious what specific version/branch of GHC (and associated submodule commit of "unix") is being tested. I've recently cleaned a bunch of the upstream "unix" handling of the group/passwd database handling, but I don't believe that GHC has yet switched to the newer code. A subtle facet of the delta points in the right direction: -getGroupEntryForName: getGroupEntryForName: does not exist (no such group) +getGroupEntryForName: getGroupEntryForName: does not exist (No such process) not only is it complaining about "process" rather than "group", but crucially the case of the word "No" is different. 
The variance is due to the fact that there are two possible error
paths in the group lookup code:

    doubleAllocWhileERANGE loc enttype initlen unpack action =
      alloca $ go initlen
      where
        go len res = do
            r <- allocaBytes len $ \buf -> do
                     rc <- action buf (fromIntegral len) res
                     if rc /= 0
   --hard-error->      then return (Left rc)
                       else do p <- peek res
   --not-found-->              when (p == nullPtr) $ notFoundErr
                               fmap Right (unpack p)
            case r of
              Right x -> return x
              Left rc | Errno rc == eRANGE ->
                  -- ERANGE means this is not an error
                  -- we just have to try again with a larger buffer
                  go (2 * len) res
              Left rc ->
   --1-->         ioError (errnoToIOError loc (Errno rc) Nothing Nothing)

        notFoundErr =
   --2-->   ioError $ flip ioeSetErrorString ("no such " ++ enttype)
                    $ mkIOError doesNotExistErrorType loc Nothing Nothing

The expected error path is "not-found" -> (2), where the group lookup
works, but no result is found (rc == 0).  This reports the lower-case
"no such group".

The unexpected error path is a non-zero return from "getgrnam_r"
(action) -> (1), which uses `errno` to build the error string, which
ends up being "No such process".  On Linux systems that's:

    ESRCH 3 /* No such process */

So the call to "getgrnam_r" failed by returning ESRCH, rather than 0.
The Linux manpage does not suggest to me that one might expect a
non-zero return from getgrnam_r(3) just from a missing entry in the
group file:

    RETURN VALUE
        The getgrnam() and getgrgid() functions return a pointer to a
        group structure, or NULL if the matching entry is not found or
        an error occurs.  If an error occurs, errno is set
        appropriately.  If one wants to check errno after the call, it
        should be set to zero before the call.

        The return value may point to a static area, and may be
        overwritten by subsequent calls to getgrent(3), getgrgid(), or
        getgrnam().  (Do not pass the returned pointer to free(3).)

        On success, getgrnam_r() and getgrgid_r() return zero, and
   ---> set *result to grp.
        If no matching group record was found,
   ---> these functions return 0 and store NULL in *result.  In case
   ---> of error, an error number is returned, and NULL is stored in
   ---> *result.

    ERRORS
        0 or ENOENT or ESRCH or EBADF or EPERM or ...
               The given name or gid was not found.

        EINTR  A signal was caught; see signal(7).

        EIO    I/O error.

        EMFILE The per-process limit on the number of open file
               descriptors has been reached.

        ENFILE The system-wide limit on the total number of open
               files has been reached.

        ENOMEM Insufficient memory to allocate group structure.

        ERANGE Insufficient buffer space supplied.

The "0 or ENOENT or ESRCH ..." text then plausibly applies to
getgrnam(3), and its legacy behaviour.

So the question is why the lookup is failing.  To that end, compiling
and tracing with "strace" the below C program should tell the story:

    #include <sys/types.h>
    #include <grp.h>
    #include <stdio.h>
    #include <errno.h>

    int main(int argc, char **argv)
    {
        struct group g, *p;
        char buf[1024];
        int rc;

        errno = 0;
        rc = getgrnam_r("nosuchgrouphere", &g, buf, sizeof(buf), &p);
        printf("%p: %m(%d)\n", p, errno);
        return (rc == 0 && p == NULL);
    }

On a Fedora 31 system I get:

    $ make g
    cc g.c -o g
    $ ./g
    (nil): Success(0)

If something else happens on WSL2, running

    $ strace -o g.trace ./g

may reveal something not going right during the lookup if the problem
is with some system call.  On the other hand, if the problem is
entirely in "user-land", then it may take more work to see what's
going on.  Is the group database on these systems backed just by local
files or by AD LDAP?  A look at the "group" entry in
/etc/nsswitch.conf may shed some light on how groups are found.

--
    Viktor.

From ietf-dane at dukhovni.org  Thu Mar 11 11:19:46 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Thu, 11 Mar 2021 06:19:46 -0500
Subject: WSL2
In-Reply-To: 
References: <20210311101952.GA15063@cloudinit-builder>
Message-ID: 

On Thu, Mar 11, 2021 at 06:05:04AM -0500, Viktor Dukhovni wrote:
> So the question is why the lookup is failing.
To that end, compiling
> and tracing with "strace" the below C program should tell the story:
>
>     #include <sys/types.h>
>     #include <grp.h>
>     #include <stdio.h>
>     #include <errno.h>
>
>     int main(int argc, char **argv)
>     {
>         struct group g, *p;
>         char buf[1024];
>         int rc;
>
>         errno = 0;
>         rc = getgrnam_r("nosuchgrouphere", &g, buf, sizeof(buf), &p);
>         printf("%p: %m(%d)\n", p, errno);
>         return (rc == 0 && p == NULL);
>     }

To experiment with other group names and make sure that at least
group "root" or similar works, a slightly extended version is:

    #include <sys/types.h>
    #include <grp.h>
    #include <stdio.h>
    #include <errno.h>

    int main(int argc, char **argv)
    {
        char buf[1024];
        struct group g, *p;
        int rc;

        errno = 0;
        rc = getgrnam_r(argc > 1 ? argv[1] : "nosuchgrouphere",
                        &g, buf, sizeof(buf), &p);
        printf("%s(%p) %m(%d)\n", p ? g.gr_name : NULL, p, errno);
        return (rc == 0 && p == NULL);
    }

This gives (again Fedora 31) the expected results:

    $ make g
    cc g.c -o g
    $ ./g
    (null)((nil)) Success(0)
    $ ./g root
    root(0x7ffe6a6225d0) Success(0)

--
    Viktor.

From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk  Thu Mar 11 11:41:10 2021
From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis)
Date: Thu, 11 Mar 2021 11:41:10 +0000
Subject: WSL2
In-Reply-To: 
References: <20210311101952.GA15063@cloudinit-builder>
Message-ID: <20210311114110.GC15063@cloudinit-builder>

On Thu, Mar 11, 2021 at 06:19:46AM -0500, Viktor Dukhovni wrote:
> On Thu, Mar 11, 2021 at 06:05:04AM -0500, Viktor Dukhovni wrote:
> > So the question is why the lookup is failing.  To that end,
> > compiling and tracing with "strace" the below C program should
> > tell the story:
[...]
> To experiment with other group names and make sure that at least
> group "root" or similar works, a slightly extended version is:
[...]

I'm not really following the details, but is this useful to you?

    % cat g.c && cc g.c -o g && ./g
    #include <sys/types.h>
    #include <grp.h>
    #include <stdio.h>
    #include <errno.h>

    int main(int argc, char **argv)
    {
        char buf[1024];
        struct group g, *p;
        int rc;

        errno = 0;
        rc = getgrnam_r(argc > 1 ? argv[1] : "nosuchgrouphere",
                        &g, buf, sizeof(buf), &p);
        printf("%s(%p) %m(%d)\n", p ? g.gr_name : NULL, p, errno);
        return (rc == 0 && p == NULL);
    }
    (null)((nil)) No such process(3)

From ietf-dane at dukhovni.org  Thu Mar 11 12:04:11 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Thu, 11 Mar 2021 10:04:11 -0200
Subject: WSL2
In-Reply-To: <20210311114110.GC15063@cloudinit-builder>
References: <20210311101952.GA15063@cloudinit-builder>
 <20210311114110.GC15063@cloudinit-builder>
Message-ID: <6535C1D3-CF04-415F-8E48-68373B34D229@dukhovni.org>

> On Mar 11, 2021, at 9:41 AM, Tom Ellis wrote:
>
> I'm not really following the details, but is this useful to you?
>
>     % cat g.c && cc g.c -o g && ./g
>     #include <sys/types.h>
>     #include <grp.h>
>     #include <stdio.h>
>     #include <errno.h>
>
>     int main(int argc, char **argv)
>     {
>         char buf[1024];
>         struct group g, *p;
>         int rc;
>
>         errno = 0;
>         rc = getgrnam_r(argc > 1 ? argv[1] : "nosuchgrouphere",
>                         &g, buf, sizeof(buf), &p);
>         printf("%s(%p) %m(%d)\n", p ? g.gr_name : NULL, p, errno);
>         return (rc == 0 && p == NULL);
>     }
>     (null)((nil)) No such process(3)

Yes, it means that the reported error is not an artefact of the Haskell
"unix" package, but rather originates directly from normal use of the
getgrnam_r(3) glibc API on these systems.  It would now be useful to
also post:

- The output of "./g root" or some other group known to exist.
- The output of "./g xyzzy" or some other short group name known to
  not exist.
- The output of "grep group /etc/nsswitch.conf"
- Attach an strace output file (g.trace.txt) from:

    strace -o g.trace.txt ./g

--
    Viktor.
From simonpj at microsoft.com Thu Mar 11 12:21:15 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 11 Mar 2021 12:21:15 +0000 Subject: WSL2 In-Reply-To: <20210311114110.GC15063@cloudinit-builder> References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder> Message-ID: Like Tom, I'm not following the details, but if you want me to run some commands and send you the output I can do that. Just send the script! | -----Original Message----- | From: ghc-devs On Behalf Of Tom Ellis | Sent: 11 March 2021 11:41 | To: ghc-devs at haskell.org | Subject: Re: WSL2 | | On Thu, Mar 11, 2021 at 06:19:46AM -0500, Viktor Dukhovni wrote: | > On Thu, Mar 11, 2021 at 06:05:04AM -0500, Viktor Dukhovni wrote: | > > So the question is why the lookup is failing. To that end | compiling | > > a tracing with "strace" the below C program should tell the story: | [...] | > To experiment with other group names and make sure that at least | group | > "root" or similar works, a slightly extended version is: | [...] | | I'm not really following the details, but is this useful to you? | | % cat g.c && cc g.c -o g && ./g | #include | #include | #include | #include | | int main(int argc, char **argv) | { | char buf[1024]; | struct group g, *p; | int rc; | | errno = 0; | rc = getgrnam_r(argc > 1 ? argv[1] : "nosuchgrouphere", | &g, buf, sizeof(buf), &p); | printf("%s(%p) %m(%d)\n", p ? g.gr_name : NULL, p, errno); | return (rc == 0 && p == NULL); | } | (null)((nil)) No such process(3) | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. 
| haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=04%7C01%7Csimonpj%40microsoft.com%7C48a10ad0766c4dd6caf4 | 08d8e4829c7d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637510597246 | 441070%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJ | BTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=nQdF9H7BpTqQL%2Bm0URWQmXh | 1KEQAV1KgfPvG75mOR%2B0%3D&reserved=0 From simonpj at microsoft.com Thu Mar 11 12:22:11 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 11 Mar 2021 12:22:11 +0000 Subject: WSL2 In-Reply-To: <20210311114110.GC15063@cloudinit-builder> References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder> Message-ID: PS: since this is not, apparently, just my stupidity, it would be good to open a ticket and transfer this thread to it. Would someone like to do that? | -----Original Message----- | From: ghc-devs On Behalf Of Tom Ellis | Sent: 11 March 2021 11:41 | To: ghc-devs at haskell.org | Subject: Re: WSL2 | | On Thu, Mar 11, 2021 at 06:19:46AM -0500, Viktor Dukhovni wrote: | > On Thu, Mar 11, 2021 at 06:05:04AM -0500, Viktor Dukhovni wrote: | > > So the question is why the lookup is failing. To that end | compiling | > > a tracing with "strace" the below C program should tell the story: | [...] | > To experiment with other group names and make sure that at least | group | > "root" or similar works, a slightly extended version is: | [...] | | I'm not really following the details, but is this useful to you? | | % cat g.c && cc g.c -o g && ./g | #include | #include | #include | #include | | int main(int argc, char **argv) | { | char buf[1024]; | struct group g, *p; | int rc; | | errno = 0; | rc = getgrnam_r(argc > 1 ? argv[1] : "nosuchgrouphere", | &g, buf, sizeof(buf), &p); | printf("%s(%p) %m(%d)\n", p ? 
g.gr_name : NULL, p, errno); | return (rc == 0 && p == NULL); | } | (null)((nil)) No such process(3) | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. | haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=04%7C01%7Csimonpj%40microsoft.com%7C48a10ad0766c4dd6caf4 | 08d8e4829c7d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637510597246 | 441070%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJ | BTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&sdata=nQdF9H7BpTqQL%2Bm0URWQmXh | 1KEQAV1KgfPvG75mOR%2B0%3D&reserved=0 From simonpj at microsoft.com Thu Mar 11 17:54:44 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 11 Mar 2021 17:54:44 +0000 Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: Gergo With HEAD, and -O, I get the exact same (good code) for these two functions: f x = any (x ==) [1, 5, 7::Int] g x = elem x [2, 6, 9 :: Int] namely f = \ (x_aga :: Int) -> case x_aga of { GHC.Types.I# x1_a13b -> case x1_a13b of { __DEFAULT -> GHC.Types.False; 1# -> GHC.Types.True; 5# -> GHC.Types.True; 7# -> GHC.Types.True } } g = \ (x_aQu :: Int) -> case x_aQu of { GHC.Types.I# x1_a13b -> case x1_a13b of { __DEFAULT -> GHC.Types.False; 2# -> GHC.Types.True; 6# -> GHC.Types.True; 9# -> GHC.Types.True } } Maybe this is fixed? If you think not, maybe open a ticket? 
Simon | -----Original Message----- | From: ghc-devs On Behalf Of ÉRDI Gergo | Sent: 07 March 2021 02:59 | To: GHC Devs | Subject: Inlining of `any @[]` vs `elem @[]` | | Hi, | | The inlining behaviour of `any @[]` and `elem @[]` differs in a way | that I am not sure is intentional, and it is affecting Clash (see | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgith | ub.com%2Fclash-lang%2Fclash- | compiler%2Fissues%2F1691&data=04%7C01%7Csimonpj%40microsoft.com%7C | e37a9761e8814eada5f208d8e115026d%7C72f988bf86f141af91ab2d7cd011db47%7C | 1%7C0%7C637506827802688772%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDA | iLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=Kik8v | KuwNobr9kiQOcIHuKTn%2BbEmQ7oY8tqP9tFjs6M%3D&reserved=0). I would | think that if it is a good idea to inline `any` then inlining `elem` | would be just as good an idea, or vice versa. | | However, `any` is defined polymorphically over `Foldable`, via | `foldMap` using `foldr`, with all steps between (and `foldr @[]`!) | marked as `INLINE`. The result is that if you use `any (x ==) [1, 5, | 7]` you get the following beautiful Core: | | ``` | topEntity | = \ (x_agAF :: Int) -> | case x_agAF of { GHC.Types.I# y_ahao -> | case y_ahao of { | __DEFAULT -> GHC.Types.False; | 1# -> GHC.Types.True; | 5# -> GHC.Types.True; | 7# -> GHC.Types.True | } | } | ``` | | As the kids these days would say: *chef's kiss*. | | | `elem`, on the other hand, is a typeclass method of `Foldable`, with a | default implementation in terms of `any`, but overridden for lists | with the following implementation: | | ``` | GHC.List.elem :: (Eq a) => a -> [a] -> Bool | GHC.List.elem _ [] = False | GHC.List.elem x (y:ys) = x==y || GHC.List.elem x ys | {-# NOINLINE [1] elem #-} | {-# RULES | "elem/build" forall x (g :: forall b . Eq a => (a -> b -> b) -> b - | > b) | . 
elem x (build g) = g (\ y r -> (x == y) || r) False | #-} | ``` | | This is marked as non-inlineable until phase 1 (so that `elem/build` | has a chance of firing), but it seems that when build fusion doesn't | apply (since `[1, 5, 7]` is, of course, not built via `build`), no | inlining happens AT ALL, even in later phases, so we end up with this: | | ``` | topEntity | = \ (x_agAF :: Int) -> | GHC.List.elem | @ Int | GHC.Classes.$fEqInt | x_agAF | (GHC.Types.: | @ Int | (GHC.Types.I# 1#) | (GHC.Types.: | @ Int | (GHC.Types.I# 5#) | (GHC.Types.: @ Int (GHC.Types.I# 7#) (GHC.Types.[] @ | Int)))) ``` | | So not only does it trip up Clash, it would also result in less | efficient code in software when using "normal" GHC. | | Is this all intentional? Wouldn't it make more sense to mark | `GHC.List.elem` as `INLINE [1]` instead of `NOINLINE [1]`, so that any | calls remaining after build fusion would be inlined? | | Thanks, | Gergo | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. | haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=04%7C01%7Csimonpj%40microsoft.com%7Ce37a9761e8814eada5f2 | 08d8e115026d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637506827802 | 688772%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJ | BTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=yMXu0XJQU2GmlDTH9ZaHXhl33 | ZRBjHMe41rr8lKVxkk%3D&reserved=0 From ietf-dane at dukhovni.org Thu Mar 11 19:04:44 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Thu, 11 Mar 2021 14:04:44 -0500 Subject: WSL2 In-Reply-To: References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder> Message-ID: On Thu, Mar 11, 2021 at 12:21:15PM +0000, Simon Peyton Jones via ghc-devs wrote: > Like Tom, I'm not following the details, but if you want me to run > some commands and send you the output I can do that. Just send the > script! 
See attached.  If any of the prerequisite shell utilities are not
installed, the script will exit asking that they be installed.

Please email me the output, or post to the list.  (Should be just a
couple of hundred lines of mostly hex output).

--
    Viktor.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: getgrnam.sh
Type: application/x-sh
Size: 1666 bytes
Desc: not available
URL: 

From simonpj at microsoft.com  Thu Mar 11 19:53:20 2021
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Thu, 11 Mar 2021 19:53:20 +0000
Subject: WSL2
In-Reply-To: 
References: <20210311101952.GA15063@cloudinit-builder>
 <20210311114110.GC15063@cloudinit-builder>
Message-ID: 

Voila

simonpj at MSRC-3645512:~$ sh /mnt/c/tmp/getgrnam.sh

/etc/nsswitch.conf group entry
group: files systemd

== Tracing getent group xyzzy0
[hex-encoded gzip-compressed strace output omitted]

== Tracing getgrnam xyzzy0
(null)((nil)) No such process(3)
[hex-encoded gzip-compressed strace output omitted]
43521b9abb4ec2593cd65554a2ee0cc35118266f166ff09b51e0da6c1a06f022c0cbd8e32302a3 783b0e03d0110272ea7fa3f6df15b5e3bb386153f757c56d8271f853e236a19aca5d6ba7568928 0a547e72e416c7c5702309cb6f3a28808af1e6d8d02de7239cb423a62a4a164234e1810fda039f 8e4e9e91b22b2b2dbf4764aa8acf8fde34ff3997e1ff2aa39d2537dc18b9c1e25f61b0841bad70 d0668f687850513ab34754564644960659b34311a19c11de57af269f811d1d51aa6c7439887864 9b4a90945c97eaef916fc3ff04118b786f587bd0ff48b1e7e0902c47ca9a4c9f5f39152c3445cf 7114b32e0ec1a28a9fda4f51f38cc462e7c611a59f583b09058f4fe1f32c95653345dca1cde715 4f29f4338b27f1500fa3407db09d12b3c44a3cd7b25d37e2ffd3b1d1a151858defdc06ec57442d a58fc2f11cbcc9f7e2648b9ea44dafea5af0887bdcb5ed245e1854fac69939b830ba595be0c60e 5c9f458d0d0b1b4f603b2c4dedf8b6f1e9733a9ef8f675dce83741e0fea067eae63d8c01c9e89e f65202903909a34d20ea52077fafa165d7b9ba47124bd82b88a33d52103e6e9a8367c904b7e570 465c2a6bb8eaa57c4a5585b8989f73db6691ef4dbd04c2154869768c0b636081187c5746f71727 b09c79d4d014557d29601eecd2b9a9bd68f01db26881b046d7185c7d3fbee2de21f74aeaf3e125 1af64d81ff43d0a571a9a366c7bc40adf34e1b0dfa035835cfd0fb413a0986788a3e18ddd63932 7a704aaee9e519af590236597984772c8d7257cbaa7d66e63a5647f7a87e2fcf6316b9e3d7c461 8e0b8e3ba60cd2b7643b92cc404532241647d26c8df7ad44b26a5bb57addaea99f0efb7a1b555a 192f303c14b109e0b9c775ad76a47ade345bd6a1f54cc0fa76665f35e57e247918b50d536f81ed 5e3dbebbf6bfe4bdc3ccfafae0a2d97f9799ca965364b95fc8e5fed6b90955318f4a301bb9ec4b 636adf327882a1f35088176275b30ef81a7909e328e54a30f7fd6aa512787eb58a56527261788c a4d52cbf539a9a132db1859758e92767654f6b70759f6f4baf5ebd429c9eb9e8ab97dc208c60a6 f45f7443942f431f0000 == Tracing getent group root root:x:0: 1f8b080818754a60000374726163652e6f757400ed5a6d73da3810feceafd0d0b919d2129025bf d199b4c781d330259031d02673dc7884110913b039dba4f42efdefb7b281808d29f44ce73e5c08 b1ac97dd7d56ab5d6915bee0f6132fe4cb73df2b0fc64ef99e07dc09f245f47b7e5dccdf7bee7c 260a9eeb06f93f8a082fb4d188332c8d46f208a3f26b4454f4c43c1fbd2e9fa10b847303efb1d0 
ea359b6768ff0ff45d282a1d60a922318c718e79f68335f3ec6052c00b8ab124a857cdda95f5fe fd7b20bf646e6bba32a2ba8a05bb7309198dd6a76a13151ace139b8c878879f7f329887f9663b6 cd7d1f20f2c02e4f8625df2dcd3c3e71d910109956fbe3d94a1241a6d5365a5d5468b9c89fdb0f 68349e70e47a6838f6b81db8ded7b39c3be30e0b0ad5ae7559af7dae83563628dbcc7ee040b76d 99f576ab79f7dcb66acdb6716bd4849c3437f203184b8be86f3fb0a6ee905f74acc6a5697c78c6 aa2c1711d4fae3bff80555745d2fa252a9f42dd2e774ca66a1428b68d97663b6bb9669544182eb ea8d7563363e55bb06348386c231a026aeea1ab635a1577be2fabc40bf371f21b31d1027e34179 a1ab962a9f4fc6ce7c717eefcc45a52d50ab7b107b9c0d05e07c5fd234a379d9277d093eb48f63 1fa879070f69f5aae23fa1efba3d0fba28229d1241161eb999a0acca116d75d94d8e9ebf6ed3de ff1a51d67450bf2a0bea50dca6bea4da27ab314af4fcd0ea89da485ebce61e0707daa1046497f5 50252485786c74443ce8134afb84905ffa4493fb44857610838206c597a87d49c8052d9a7afb0e da690447050bd1f5d010547d9fd9698af2627604930a21728ae1e952856cd8dd7358fa6c3684d5 6d98e0b328575bedd6dd75bbd729c2b24a58a4222cf2ff093c7c0237668160aa561492e60042ed d78dd6dd726262ee40e5432a943f9d796e002eadb0ae1fe9505f44922e6ba4a22ec9c32c1a1b76 90e8acc818af3bbf188558fe49a92e1bb7467d977c0b220c624bca90c116530d0b3f064c29a612 96f7e24fe7245586dbac966463ac0683901591152d892ec5e4f730e55a9c69c820c6d49622a552 85e8c733ddbfe2a46363c0461c0ea36fc78068d0296e905495c83c93c6b4d69f44c856a84a740f 433faed061d85ddeb6a534e2a37dbde7ceb64eb5c86644cc3c8b41fce13d8a18b8ae1200f06e12 1b03c35ebb02abd87889e03a716d36e1cbc7b9d0fef829939d04a64ae82e76ef2556ad07ee26d4 011f66b09b10a0fd07e6f16dd825d8ba313f03d0a4224ce305f17a13f20a35434ec861538e4276 68c8028606cce7a5bef32a72bec2b4c44841e765707ed5908e3503bd6c1843add4eb5e9eebe566 cd6a804fe9362e1bb56ab7d16e65a1a2549b2047d8032c463b237bd8b9c3bcb75de729fa7b0e60 e613ee27f6d847e20687908a3c6a8b61ef5c554da3be6b635dc9107a72caaf8d6aa7671ad730f1 59cc374d034d8f394be82785dc359ac6cd15ec3932002c6b298045c3cf3d3ca503aed6eba6d1e9 640057a2520adeb0e570ff9ee57a4e026e55afb3985c356d09ab47c5b2c149b1de546f0c338b10 
9e7620a3a9bbe05d60d989bd55a753fd601c6fcaf586193b85461bbb17c42797b9dcb9eb58ff06 c30eef9396bd910f4fddc09c9d38c280a7ed56cdbb2cc28b86d383ea31884f1b606aed66131867 e16f154997a434d0ebd64381334d396d646d64e2772955d320474d87e3954f1b677ad786d9a865 00594903ac1c05979ed6aebb773759cc2fc112d152b7c7cbc68341cb47671d7cd77ee400fad2ea b51ab745d469d73e5a9d2e30bb7e0ecb4b54d14babddfaad0985255f9a8373822352052150668d d8743cf97ab1a6e6cf1d6bc682878b7cf98979656fee941ddf1e9623aef96fb06ea5f585c2f76f 02fe07b6db4cc57588e3fb5fc681fd5002c94759acc26d57bb71a04fb283633cfc2263c1a6b309 df3ed1039d9f79a0ff2fdf0ca98c66116cd36e86604a2c615bbe404f8ebf218adf0f256f883056 92d7001b374487db964ed37c9e5651d35ddef772ed8cc8f12c2fd4451949a26f13feb1c4398de5 cd23ea718e3ce418bbbe39860d4bb0e1493614ef62f38329f3419c234d64e9a18e44aa2434e384 f992f4514b2371a3b252c80179eae5524cc953a7f895d5ad7cdaba9af89c3f0af307ba1dc38010 d3337721112c0e5d2a3ad177ba61f18f016f176ff1dbbe33647cea3af022c1cb602c4a044afeb6 1f064247693712508a0958bb82b3a34a7028a037e44f1753f6c8e1098a159e102fe4b34d79bf78 e3800b2a9b02830aa55802ff026a727c310eac50c98594fcfeaaf3fbdc9b376f90e8cf8708e2d0 03c2086a72ff00018c8ab25e210000 == Tracing getgrnam root root(0x7ffc07bd1490) Success(0) 1f8b080818754a60000374726163652e6f757400ed586d73da3810feceafd0d0b9196808c892df c80ccd71c4699912c840d2a673bef1185b4e9880cdd926a577e97fbf956c88b18184b49fee2e84 b12c4bcfee3e5a691fcc96cc79609572bd71cbe2dbd0b767e51afa7df3b61c06415cfea386f052 f33ca6784a932a4d1d35de22a2a2073b8cd0db4615b5102e8dc3fb4affbad7aba2fd7f3076a928 ae3a1e2b2ac31897ecd0b9b3e6a1134f2b78493196387a7bd8f9609d9e9e027c6adcc1dad89574 86b9b9630919ddfea7760f55bafe833d9db8c80e6f1733e6c7d592ed382c8a2ae5068b9dc6d4ad 47417d1eb26960bb10d1d01a7cacae3ce130fd81d1bf42957e80a2857387bcc994a12044ee2464 4e1c84dfaaa560ce7c3baeb4afacf3b3cee733602583ecd8ce1d03dc81353c1bf47b5f1e0756a7 37306e8c0ef79396bc2886b9b486fe8e626b16b8ac35b2bae743e3fd235665b986a0379afcc55a 54d175bd86eaf5faf784cfd9cc9e0b426b287d76391c5c5943a30d1e5cb42fadcb61f753fbca80 
c7c0909803344944222aa69c57671a44ac429f5b0f616c4b88d3c9b8b1d4554b958fa7137fb13c bef517bcd3e151ab7b220e99edf280cba6a46946efdc24a6041f6ae2dc077adec1455addaaf84f 18bb7e5e062e6a48a784c3c2a534e7c8aa9c60abe93039b9feba89bdff3641d674a05f95393a34 37d1535493ace628c9f57dff9af726fee2b5f57c70c00e25e0bbac0b4ac80ef0dcec043c3609a5 2621e4179368b209ab695270830283fc4b5453e27ec1134dbd7907cf69128e0a19a2eb2211547d 5fda698af2947604932621f28ec4d3a526c9e4dda3687d1e7679d66552f091b7dbfd41ffcbc5e0 7a54836d55c8488967e4ff0bf8f205ccac02c1546d2a64d70120d83f33fa5fd285c91f0732f638 f9b37918c470a455d6fd5486fe1a927459234d358587553432795018acc0753df82929f8f62f7a 75debd31ceb6f9b7240ac06d78290c6c1a556c4718a5984a58de1bff6e4b52d3cd994a6073a63c 4d9822b2a215a3db91f27b8c322d6f5418c81b751352a942f4c38deedb7109f4413520538745f5 1d19500d46b5cc2626b0f8696ee49369c59f44c846a92a0c17a5bf39d6b1182e6fe6d20e709536 f78c5ef8594e93e297d6cc6a2ec42870ee1994b973ebbadfbda9a1d1a0f3d11a5d01d8c5a368a7 852cb9017a7feb4123a596969cc0f7b95fe258b52dcf9e4da6df5a6bb468e15b733bbe6b951ba0 8c1ae1c26ff891e33612abe5efc08eb4562fcfcb8e43d6ee5f1bd8ab55259fb8eed23db11b9f99 2846ed527b7e147d9dc4ce5d1db8f27e82de53249c2dba6bd1f40615cd99fe1bf847c6d29ecda7 2ca9167c2ff0a980f334b9bceadf4de80faac2ffb0f08525b17836473c7a72b800cecbdfa200c6 5829aa9c8c007e796ee95cbd6ca5546baadaaba504f5c68522463d96544e7d13f875ba80e6ca66 829ed32258111673eaf41033765e7d08c8bc196d9b99572a8271c162510fc0de4da824f467eb81 04faa0ad51148c29213f5c86779c2bb761b098efd957d388b17b9efe803b320c286ad7c36d9170 132fdd2a3ad1b71ec3fcbdc7c9f2049f98be6bb359e0c38d0437e3096f1168459be730001dc46e e2a09473b0f361080e122c1c0c5df6d09ad9f70cae402c3f09f152ae66fdfd1a4e62c65184c395 cc8b12b909eb3f5a24af4270d5f4c54f9af4d70c5b4e624bd05dd95116578e9e968e8e8e101fcf 5c0415e90e61043da57f00fa305c353d120000 simonpj at MSRC-3645512:~$ | -----Original Message----- | From: ghc-devs On Behalf Of Viktor | Dukhovni | Sent: 11 March 2021 19:05 | To: ghc-devs at haskell.org | Subject: Re: WSL2 | | On Thu, Mar 11, 2021 at 12:21:15PM 
+0000, Simon Peyton Jones via ghc-devs wrote:
|
| > Like Tom, I'm not following the details, but if you want me to run
| > some commands and send you the output I can do that. Just send the
| > script!
|
| See attached. If any of the prerequisite shell utilities are not
| installed, the script will exit asking that they be installed.
|
| Please email me the output, or post to the list. (Should be just a
| couple of hundred lines of mostly hex output).
|
| --
| Viktor.

From ietf-dane at dukhovni.org Thu Mar 11 20:36:07 2021
From: ietf-dane at dukhovni.org (Viktor Dukhovni)
Date: Thu, 11 Mar 2021 15:36:07 -0500
Subject: WSL2
In-Reply-To:
References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder>
Message-ID:

On Thu, Mar 11, 2021 at 07:53:20PM +0000, Simon Peyton Jones via ghc-devs wrote:

> Voila

Thanks!

> /etc/nsswitch.conf group entry
> group: files systemd

The main "suspicious" thing here (decoded traces below my signature) is
that the nsswitch.conf file is configured to try "systemd" as a source
of group data, but attempts to contact "systemd" or read the underlying
systemd store directly are failing. This is different from "not found",
where systemd might have furnished a negative reply (as is the case on
my Fedora 31 system, see below).

So a failure return code is not surprising, because the answer is not
authoritative: systemd might have answered differently if it had been
possible to query it. It appears the WSL2 systems have a systemically
misconfigured "nsswitch.conf" that wants to query "group" (and likely
other) data from an unavailable source.

[ Bottom line, the "unix" test case in question may need to be prepared
to encounter such misconfiguration of the test platform and accept
either type of error. Perhaps catch the expected IO exception and
output a fixed "not found" message regardless of the exception details,
or specifically check for either of the two expected forms.
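Concretely, a tolerant version of the check might look something like the
following sketch. This is hypothetical code, not the actual test: it assumes
the test exercises `getGroupEntryForName` from the unix package's
`System.Posix.User`, and it deliberately discards the exception details so
the output is the same whether nss fails with ENOENT or (as on the WSL2 box)
ESRCH.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (IOException, try)
import System.Posix.User (GroupEntry (..), getGroupEntryForName)

-- Hypothetical sketch: normalise any IO exception from the group lookup
-- to a fixed "not found" message, so the test output does not depend on
-- which errno the nss stack happened to report.
main :: IO ()
main = do
  r <- try (getGroupEntryForName "xyzzy0")
  putStrLn $ case r of
    Left (_ :: IOException) -> "xyzzy0: not found"
    Right ge                -> "xyzzy0: gid " ++ show (groupID ge)
```

Alternatively, the test could inspect the caught exception and accept either
of the two expected error forms explicitly, at the cost of enumerating them.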
] By way of contrast, on my Fedora system, systemd can actually be reached and appears to respond to the "nss" library's satisfaction: execve("/usr/bin/getent", ["getent", "group", "xyzzy0"], 0x7fff3afbcca0 /* 31 vars */) = 0 ... openat(AT_FDCWD, "/lib64/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3 openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 3 read(3, "root:x:0:\nbin:x:1:\ndaemon:x:2:\ns"..., 4096) = 1161 read(3, "", 4096) = 0 ... openat(AT_FDCWD, "/lib64/libnss_systemd.so.2", O_RDONLY|O_CLOEXEC) = 3 access("/etc/systemd/dont-synthesize-nobody", F_OK) = -1 ENOENT (No such file or directory) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path="/run/dbus/system_bus_socket"}, 30) = 0 getsockopt(3, SOL_SOCKET, SO_PEERCRED, {pid=1, uid=0, gid=0}, [12]) = 0 getsockopt(3, SOL_SOCKET, SO_PEERSEC, 0x5568c64660e0, [64]) = -1 ENOPROTOOPT (Protocol not available) getsockopt(3, SOL_SOCKET, SO_PEERGROUPS, 0x5568c6466130, [256->0]) = 0 sendmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\0AUTH EXTERNAL\r\nDATA\r\n", iov_len=22}, {iov_base="NEGOTIATE_UNIX_FD\r\n", iov_len=19}, {iov_base="BEGIN\r\n", iov_len=7}], msg_iovlen=3, msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 48 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="DATA\r\nOK 7bc788e33c85b875f6b74a6"..., iov_len=256}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 58 sendmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\1\0\1\0\0\0\0\1\0\0\0m\0\0\0\1\1o\0\25\0\0\0/org/fre"..., iov_len=128}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 128 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\2\1\1\16\0\0\0\377\377\377\377G\0\0\0\5\1u\0\1\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 24 recvmsg(3, {msg_name=NULL, msg_namelen=0, 
msg_iov=[{iov_base="\7\1s\0\24\0\0\0org.freedesktop.DBus\0\0\0\0"..., iov_len=78}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 78 sendmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\1\0\1\v\0\0\0\2\0\0\0\247\0\0\0\1\1o\0\31\0\0\0/org/fre"..., iov_len=184}, {iov_base="\6\0\0\0xyzzy0\0", iov_len=11}], msg_iovlen=2, msg_controllen=0, msg_flags=0}, MSG_DONTWAIT|MSG_NOSIGNAL) = 195 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\4\1\1\16\0\0\0\377\377\377\377\227\0\0\0\7\1s\0\24\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 24 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="org.freedesktop.DBus\0\0\0\0\6\1s\0\t\0\0\0"..., iov_len=158}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 158 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="l\3\1\1(\0\0\0\257\30\r\0m\0\0\0\5\1u\0\2\0\0\0", iov_len=24}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 24 recvmsg(3, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="\6\1s\0\t\0\0\0:1.303526\0\0\0\0\0\0\0\4\1s\0*\0\0\0"..., iov_len=144}], msg_iovlen=1, msg_controllen=0, msg_flags=MSG_CMSG_CLOEXEC}, MSG_DONTWAIT|MSG_CMSG_CLOEXEC) = 144 close(3) = 0 -- Viktor. So group lookups are configured to try /etc/group first, and then some systemd-based machinery (possibly creating groups on the fly, ...). > == Tracing getent group xyzzy0 execve("/usr/bin/getent", ["getent", "group", "xyzzy0"], 0x7ffeb59f7a30 /* 26 vars */) = 0 brk(NULL) = 0x55cb17d10000 ... [ initialisation ] ... openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3 close(3) = 0 ... [ loading code for "files" ] ... openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 3 read(3, "root:x:0:\ndaemon:x:1:\nbin:x:2:\ns"..., 4096) = 828 read(3, "", 4096) = 0 ... 
[ no match in "/etc/group" ] ... openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_systemd.so.2", O_RDONLY|O_CLOEXEC) = 3 ... [ loading code for "systemd" ] ... socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path=@"userdb-16b836ad920fd3bea17e1fa40e9f2f3c"}, 42) = -1 ECONNREFUSED (Connection refused) close(3) = 0 ... [ failing to connect to systemd socket ] ... openat(AT_FDCWD, "/run/systemd/userdb/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = -1 ENOENT (No such file or directory) ... [ failing to directly access the data ] ... exit_group(2) = ? ... [ "not found" exit status ] ... > == Tracing getgrnam xyzzy0 > (null)((nil)) No such process(3) execve("./getgrnam", ["./getgrnam", "xyzzy0"], 0x7fff81dc1c38 /* 26 vars */) = 0 ... openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3 ... openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 3 read(3, "root:x:0:\ndaemon:x:1:\nbin:x:2:\ns"..., 4096) = 828 read(3, "", 4096) = 0 ... openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_systemd.so.2", O_RDONLY|O_CLOEXEC) = 3 socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3 connect(3, {sa_family=AF_UNIX, sun_path=@"userdb-2cecd700b3e3705ac56ef006755c59a9"}, 42) = -1 ECONNREFUSED (Connection refused) openat(AT_FDCWD, "/run/systemd/userdb/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = -1 ENOENT (No such file or directory) write(1, "(null)((nil)) No such process(3)"..., 33) = 33 ... [ same as getent(1), errno is from user-land, otherwise would have been ENOENT, not ESRCH ] > == Tracing getent group root > root:x:0: execve("/usr/bin/getent", ["getent", "group", "root"], 0x7ffea01ff4f0 /* 26 vars */) = 0 ... openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3 ... openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 3 read(3, "root:x:0:\ndaemon:x:1:\nbin:x:2:\ns"..., 4096) = 828 write(1, "root:x:0:\n", 10) = 10 ... [ found a match in /etc/group ] ... exit_group(0) = ? ... [ success exit ] ... 
> == Tracing getgrnam root > root(0x7ffc07bd1490) Success(0) execve("./getgrnam", ["./getgrnam", "root"], 0x7ffe5f593598 /* 26 vars */) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnss_files.so.2", O_RDONLY|O_CLOEXEC) = 3 openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 3 read(3, "root:x:0:\ndaemon:x:1:\nbin:x:2:\ns"..., 4096) = 828 write(1, "root(0x7ffc07bd1490) Success(0)\n", 32) = 32 exit_group(0) = ? ... [ ditto ] ... From simonpj at microsoft.com Thu Mar 11 20:45:57 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 11 Mar 2021 20:45:57 +0000 Subject: WSL2 In-Reply-To: References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder> Message-ID: OK thanks. Let's pursue this further on this ticket: https://gitlab.haskell.org/ghc/ghc/-/issues/19525 Simon | -----Original Message----- | From: ghc-devs On Behalf Of Viktor | Dukhovni | Sent: 11 March 2021 20:36 | To: ghc-devs at haskell.org | Subject: Re: WSL2 | | On Thu, Mar 11, 2021 at 07:53:20PM +0000, Simon Peyton Jones via ghc- | devs wrote: | | > Voila | | Thanks! | | > /etc/nsswitch.conf group entry | > group: files systemd | | The main "suspicious" thing here (decoded traces below my signature) | is that the nsswitch.conf file is configured to try "systemd" as a | source of group data, but attempts to contact "systemd" or read the | underlying systemd store directly are failing. This is different from | "not found", where systemd might have furnished a negative reply (as | is the case on my Fedora 31 system, see below). | | So a failure return code is not surprising, because the answer is not | authoritative, systemd might have answered differently if it had been | possible to query it. It appears the WSL2 systems have a systemically | misconfigured "nsswitch.conf" that wants to query "group" (and likely | other) data from an unavailable source. 
From rae at richarde.dev Thu Mar 11 23:48:34 2021
From: rae at richarde.dev (Richard Eisenberg)
Date: Thu, 11 Mar 2021 23:48:34 +0000
Subject: GHC Exactprint merge process
In-Reply-To:
References:
Message-ID: <010f017823b123fc-3203f222-2578-43cb-8b6a-d495a38e063e-000000@us-east-2.amazonses.com>

I've started a review, but sent along what I had when dinner was ready. Hopefully more later, but don't wait up for me!

Incidentally: this is a monstrous patch, and so there is a strong incentive just to get on with it without resolving all these quibbles. I won't stand in your way on that front -- it might be better to improve this after it lands. However, I also see quite a few TODO:AZ notes. Are you intending to fix these before landing? Or do you think it's OK to merge first and then return?

High level piece: I'm in support of this direction of movement -- I just want to make sure that the new code is understandable and maintainable.

Thanks,
Richard

> On Mar 6, 2021, at 12:39 PM, Alan & Kim Zimmerman wrote:
>
> I have been running a branch in !2418[1] for just over a year to migrate the ghc-exactprint functionality directly into the GHC AST[2], and I am now satisfied that it is able to provide all the same functionality as the original.
>
> This is one of the features intended for the impending 9.2.1 release, and it needs to be reviewed to be able to land. But the change is huge, as it mechanically affects most files that interact with the GHC AST.
>
> So I have split out a precursor !5158 [3] with just the new types that are used to represent the annotations, so it can be a focal point for discussion.
>
> It is ready for review, please comment if you have time and interest.
>
> Regards
> Alan
>
> [1] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2418
> [2] https://gitlab.haskell.org/ghc/ghc/-/issues/17638
> [3] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5158

From ben at smart-cactus.org Fri Mar 12 00:21:53 2021
From: ben at smart-cactus.org (Ben Gamari)
Date: Thu, 11 Mar 2021 19:21:53 -0500
Subject: What type of performance regression testing does GHC go through?
In-Reply-To: <20210311103230.GB15063@cloudinit-builder>
References: <20210311103230.GB15063@cloudinit-builder>
Message-ID: <87mtv98bmp.fsf@smart-cactus.org>

Tom Ellis writes:

> A user posted the following to the ghc-proposals repository. Both JB
> and RAE suggested ghc-devs as a more appropriate forum. Since I have
> no idea whether the user has even ever used a mailing list before I
> thought I would lower the activation energy by posting their message
> for them.
>
> https://github.com/ghc-proposals/ghc-proposals/issues/410
>
>> Hi,
>>
>> Does the GHC release or development process include regression
>> testing for performance?
>>
>> Is this the place to discuss ideas for implementing such a thing and
>> to eventually craft a proposal?
>>
>> I believe the performance impact of changes to GHC needs to be
>> verified/validated before release. I also believe this would be
>> feasible if we tracked metrics on building a wide variety of
>> real-world packages. Using real-world packages is one of the best
>> ways to see the actual impact users will experience. It's also a
>> great way to broaden the scope of tests, particularly with the
>> combination of language pragmas and enabled features within the
>> compiler.
We already do this, but help is definitely wanted! In short, every commit to GHC goes through a variety of performance testing including: * the performance testsuite in `base` (which I'm sure all GHC developers are all-too-familiar with at this point) * a run of the nofib benchmark suite * compile-time benchmarking using the head.hackage patchset (when it is buildable) In addition to being preserved as CI artifacts, all of this information also gets thrown into a PostgreSQL database (see [1]) which is exposed via Postgrest. The problem is that we currently don't *do* anything with it. I have occasionally found it useful to do quick queries against it, but it would be great if someone would step up to help improve this infrastructure. Cheers, - Ben [1] https://github.com/bgamari/ghc-perf-import -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Fri Mar 12 00:25:19 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 11 Mar 2021 19:25:19 -0500 Subject: WSL2 In-Reply-To: References: <20210311101952.GA15063@cloudinit-builder> <20210311114110.GC15063@cloudinit-builder> Message-ID: <87k0qd8bgw.fsf@smart-cactus.org> Viktor Dukhovni writes: > On Thu, Mar 11, 2021 at 07:53:20PM +0000, Simon Peyton Jones via ghc-devs wrote: > >> Voila > > Thanks! > >> /etc/nsswitch.conf group entry >> group: files systemd > > The main "suspicious" thing here (decoded traces below my signature) is > that the nsswitch.conf file is configured to try "systemd" as a source > of group data, but attempts to contact "systemd" or read the underlying > systemd store directly are failing. This is different from "not found", > where systemd might have furnished a negative reply (as is the case on > my Fedora 31 system, see below). > This rings a bell. See #15230. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gergo at erdi.hu Fri Mar 12 10:02:00 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Fri, 12 Mar 2021 18:02:00 +0800 (+08) Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: On Thu, 11 Mar 2021, Simon Peyton Jones wrote: > With HEAD, and -O, I get the exact same (good code) for these two functions: > > f x = any (x ==) [1, 5, 7::Int] > > g x = elem x [2, 6, 9 :: Int] > > Maybe this is fixed? If you think not, maybe open a ticket? OK, so initially I tried it on GHC 8.10.3, which is where `elem @[]` is not optimized. I have now tried on GHC 9.0.1, where, just like you see on HEAD, indeed it gets it right. I wonder why that is? What changed between GHC 8.10.3 and 9.0.1? Was the definition of `elem` changed in `base`? Thanks, Gergo From gergo at erdi.hu Fri Mar 12 10:34:39 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Fri, 12 Mar 2021 18:34:39 +0800 (+08) Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: On Fri, 12 Mar 2021, ÉRDI Gergő wrote: > I wonder why that is? What changed between GHC 8.10.3 and 9.0.1? Was the > definition of `elem` changed in `base`? Oh, I've found this commit: ``` commit f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495 Author: Andreas Klebinger Date: Wed Jan 29 15:25:07 2020 +0100 Fix "build/elem" RULE. An redundant constraint prevented the rule from matching. Fixing this allows a call to elem on a known list to be translated into a series of equality checks, and eventually a simple case expression. [...] ``` From simonpj at microsoft.com Fri Mar 12 10:40:55 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 12 Mar 2021 10:40:55 +0000 Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: | I wonder why that is? 
What changed between GHC 8.10.3 and 9.0.1? Was | the definition of `elem` changed in `base`? I'm not sure... you could investigate, but I'm inclined just to declare victory! S | -----Original Message----- | From: ÉRDI Gergő | Sent: 12 March 2021 10:02 | To: Simon Peyton Jones | Cc: GHC Devs | Subject: RE: Inlining of `any @[]` vs `elem @[]` | | On Thu, 11 Mar 2021, Simon Peyton Jones wrote: | | > With HEAD, and -O, I get the exact same (good code) for these two | functions: | > | > f x = any (x ==) [1, 5, 7::Int] | > | > g x = elem x [2, 6, 9 :: Int] | > | > Maybe this is fixed? If you think not, maybe open a ticket? | | OK, so initially I tried it on GHC 8.10.3, which is where `elem @[]` | is not optimized. I have now tried on GHC 9.0.1, where, just like you | see on HEAD, indeed it gets it right. | | I wonder why that is? What changed between GHC 8.10.3 and 9.0.1? Was | the definition of `elem` changed in `base`? | | Thanks, | Gergo From gergo at erdi.hu Fri Mar 12 11:23:27 2021 From: gergo at erdi.hu (=?ISO-8859-2?Q?=C9RDI_Gerg=F5?=) Date: Fri, 12 Mar 2021 19:23:27 +0800 (+08) Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: On Fri, 12 Mar 2021, Simon Peyton Jones wrote: > I'm not sure... you could investigate, but I'm inclined just to declare victory! That's easy for you to say, but here I am stuck with Stack not supporting GHC 9.0... https://github.com/commercialhaskell/stack/issues/5486 From simonpj at microsoft.com Fri Mar 12 13:39:22 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 12 Mar 2021 13:39:22 +0000 Subject: Inlining of `any @[]` vs `elem @[]` In-Reply-To: References: Message-ID: Ah, sorry, I thought it was just curiosity about what has changed. I am not sure whether there will be future 8.10 releases; but you can open a ticket asking Ben to backport the fix (which you have found) to 8.10, if there is to be such a release. 
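[For anyone who wants to reproduce the comparison from this thread: the two definitions can be put side by side in a small module (the module name below is illustrative). On GHC 9.0 and later, where the "build/elem" RULE matches, compiling with `ghc -O -ddump-simpl` should show both optimising to the same chain of equality checks; this is a sketch, not the exact output of any particular GHC version.]

```haskell
module ElemVsAny where

-- On GHC >= 9.0 both definitions should compile to the same Core:
-- a case expression performing successive equality checks, with no
-- list built at runtime.
-- Inspect the optimised Core with: ghc -O -ddump-simpl ElemVsAny.hs

f :: Int -> Bool
f x = any (x ==) [1, 5, 7]

g :: Int -> Bool
g x = elem x [2, 6, 9]
```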
Simon | -----Original Message----- | From: ÉRDI Gergő | Sent: 12 March 2021 11:23 | To: Simon Peyton Jones | Cc: GHC Devs | Subject: RE: Inlining of `any @[]` vs `elem @[]` | | On Fri, 12 Mar 2021, Simon Peyton Jones wrote: | | > I'm not sure... you could investigate, but I'm inclined just to | declare victory! | | That's easy for you to say, but here I am stuck with Stack not | supporting GHC 9.0... | https://github.com/commercialhaskell/stack/issues/5486 From zubin.duggal at gmail.com Fri Mar 12 21:27:34 2021 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Sat, 13 Mar 2021 02:57:34 +0530 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files Message-ID: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Hi all, This is following up on this recent discussion on the list concerning fat interface files: https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html Now that we have been accepted as a GSOC organisation, I think it would be a good project idea for a sufficiently motivated and advanced student. This is a call for mentors (and students as well!) who would be interested in this project. The problem is the following: Haskell Language Server (and ghci with `-fno-code`) have very fast startup times for codebases which don't make use of Template Haskell, and thus don't require any code-gen to typecheck. This is because they can simply read the cached iface files generated by a previous compile and don't need to re-invoke the typechecker. 
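[To make the Template Haskell case concrete: a module like the following (a toy example, not from the thread) contains a splice that must be run at compile time, so GHC needs actual runnable (byte)code for whatever the splice depends on; cached interface files alone are no longer enough.]

```haskell
{-# LANGUAGE TemplateHaskell #-}
module UsesTH where

import Language.Haskell.TH (integerL, litE)

-- The splice below is *executed* while this module is compiled, so GHC
-- must have runnable code for everything the splice uses; merely
-- typechecking the dependencies against cached .hi files cannot
-- provide that, which is why -fno-code stops being sufficient here.
answer :: Int
answer = $(litE (integerL 42))
```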
However, as soon as TH is involved, we are forced to retypecheck and compile files, since it is not possible to restart the code-gen process starting with only an iface file. I can think of two ways to address this problem: 1. Allow bytecode to be serialized 2. Serialize desugared Core into iface files (fat interfaces), so that (byte)code-gen can be restarted from this point and doesn't need (1) might be challenging, but offers a few more advantages over (2), in that we can reduce the work done to load TH-heavy codebases to just a load of the cached bytecode objects from disk, and could make the load process (and times) for these codebases directly comparable to their TH-free cousins. It would also make ghci startup a lot faster with a warm cache of bytecode objects, bringing ghci startup times in line with those of -fno-code However (2) might be much easier to achieve and offers many of the same advantages, in that we would not need to re-run the compiler frontend or core-to-core optimisation phases. There is also already a (slightly bitrotted) implementation of (2) thanks to the work of Edward Yang. If any of this sounds exciting to you as a student or a mentor, please get in touch. In particular, I think (2) is a feasible project that can be completed with minimal mentoring effort. However, I'm only vaguely familiar with the details of the byte code generator, so if (1) is a direction we want to pursue, we would need a mentor familiar with the details of this part of GHC. Cheers, Zubin -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cheng.shao at tweag.io Fri Mar 12 22:20:22 2021 From: cheng.shao at tweag.io (Cheng Shao) Date: Fri, 12 Mar 2021 23:20:22 +0100 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files In-Reply-To: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> References: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Message-ID: I believe Josh has already been working on 2 some time ago? cc'ing him to this thread. I'm personally in favor of 2 since it's also super useful for prototyping whole-program ghc backends, where one can just read all the CgGuts from the .hi files, and get all codegen-related Core for free. Cheers, Cheng On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal wrote: > > Hi all, > > This is following up on this recent discussion on the list concerning fat > interface files: https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html > > Now that we have been accepted as a GSOC organisation, I think > it would be a good project idea for a sufficiently motivated and > advanced student. This is a call for mentors (and students as > well!) who would be interested in this project > > The problem is the following: > > Haskell Language Server (and ghci with `-fno-code`) have very > fast startup times for codebases which don't make use of Template > Haskell, and thus don't require any code-gen to typecheck. This > is because they can simply read the cached iface files generated by a > previous compile and don't need to re-invoke the typechecker. > > However, as soon as TH is involved, we are forced to retypecheck and > compile files, since it is not possible to restart the code-gen process > starting with only a iface file. I can think of two ways to address this > problem: > > 1. Allow bytecode to be serialized > > 2. 
Serialize desugared Core into iface files (fat interfaces), so that > (byte)code-gen can be restarted from this point and doesn't need > > (1) might be challenging, but offers a few more advantages over (2), > in that we can reduce the work done to load TH-heavy codebases to just > a load of the cached bytecode objects from disk, and could make the > load process (and times) for these codebases directly comparable to > their TH-free cousins. > > It would also make ghci startup a lot faster with a warm cache of > bytecode objects, bringing ghci startup times in line with those of > -fno-code > > However (2) might be much easier to achieve and offers many > of the same advantages, in that we would not need to re-run > the compiler frontend or core-to-core optimisation phases. > There is also already a (slightly bitrotted) implementation > of (2) thanks to the work of Edward Yang. > > If any of this sounds exciting to you as a student or a mentor, please > get in touch. > > In particular, I think (2) is a feasible project that can be completed > with minimal mentoring effort. However, I'm only vaguely familiar with > the details of the byte code generator, so if (1) is a direction we want > to pursue, we would need a mentor familiar with the details of this part > of GHC. 
> > Cheers, > Zubin > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Fri Mar 12 22:21:14 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 12 Mar 2021 22:21:14 +0000 Subject: GHC Exactprint merge process In-Reply-To: <010f017823b123fc-3203f222-2578-43cb-8b6a-d495a38e063e-000000@us-east-2.amazonses.com> References: <010f017823b123fc-3203f222-2578-43cb-8b6a-d495a38e063e-000000@us-east-2.amazonses.com> Message-ID: <010f017828878aa5-9b1cf947-8f0d-4d63-871f-62eb4e9f940c-000000@us-east-2.amazonses.com> After a consult with Simon, I've updated the relevant wiki page at https://gitlab.haskell.org/ghc/ghc/-/wikis/api-annotations with a sketch of a design description for this new feature, along with lots of questions. Both Simon and I agree that it may be more sensible to merge first and ask questions later, but we do think the design could be tightened in a few places. There are no notifications etc on wiki page updates, so it might be good to also correspond via email when updates take place. Richard > On Mar 11, 2021, at 6:48 PM, Richard Eisenberg wrote: > > I've started a review, but sent along what I had when dinner was ready. Hopefully more later, but don't wait up for me! > > Incidentally: this is a monstrous patch, and so there is a strong incentive just to get on with it without resolving all these quibbles. I won't stand in your way on that front -- it might be better to improve this after it lands. However, I also see quite a few TODO:AZ notes. Are you intending to fix these before landing? Or do you think it's OK to merge first and then return? > > High level piece: I'm in support of this direction of movement -- I just want to make sure that the new code is understandable and maintainable. 
> > Thanks, > Richard > >> On Mar 6, 2021, at 12:39 PM, Alan & Kim Zimmerman > wrote: >> >> I have been running a branch in !2418[1] for just over a year to migrate the ghc-exactprint functionality directly into the GHC AST[2], and I am now satisfied that it is able to provide all the same functionality as the original. >> >> This is one of the features intended for the impending 9.2.1 release, and it needs to be reviewed to be able to land. But the change is huge, as it mechanically affects most files that interact with the GHC AST. >> >> So I have split out a precursor !5158 [3] with just the new types that are used to represent the annotations, so it can be a focal point for discussion. >> >> It is ready for review, please comment if you have time and interest. >> >> Regards >> Alan >> >> [1] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2418 >> [2] https://gitlab.haskell.org/ghc/ghc/-/issues/17638 >> [3] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5158 _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Fri Mar 12 23:06:48 2021 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 12 Mar 2021 23:06:48 +0000 Subject: GHC Exactprint merge process In-Reply-To: <010f017828878aa5-9b1cf947-8f0d-4d63-871f-62eb4e9f940c-000000@us-east-2.amazonses.com> References: <010f017823b123fc-3203f222-2578-43cb-8b6a-d495a38e063e-000000@us-east-2.amazonses.com> <010f017828878aa5-9b1cf947-8f0d-4d63-871f-62eb4e9f940c-000000@us-east-2.amazonses.com> Message-ID: Thanks Richard This MR is a huge change, and hard to digest. 
But it is also a step function, in that we cannot have the old way of using API Annotations for exact printing and the new way at the same time. So I have focused on making sure that it can actually do what I believe is required, which I am satisfied it now does. Admittedly this needs to be more clearly defined. I am happy to do that, and to tweak the implementation based on feedback. There are plenty of rough spots, as I have been working on the big picture at the expense of polished details. And I do appreciate the willingness to merge now and then clean up, this will make my life a lot simpler, I have rebased 50 odd times already. Regards Alan On Fri, 12 Mar 2021, 22:21 Richard Eisenberg, wrote: > After a consult with Simon, I've updated the relevant wiki page at > https://gitlab.haskell.org/ghc/ghc/-/wikis/api-annotations with a sketch > of a design description for this new feature, along with lots of questions. > Both Simon and I agree that it may be more sensible to merge first and ask > questions later, but we do think the design could be tightened in a few > places. > > There are no notifications etc on wiki page updates, so it might be good > to also correspond via email when updates take place. > > Richard > > On Mar 11, 2021, at 6:48 PM, Richard Eisenberg wrote: > > I've started a review, but sent along what I had when dinner was ready. > Hopefully more later, but don't wait up for me! > > Incidentally: this is a monstrous patch, and so there is a strong > incentive just to get on with it without resolving all these quibbles. I > won't stand in your way on that front -- it might be better to improve this > after it lands. However, I also see quite a few TODO:AZ notes. Are you > intending to fix these before landing? Or do you think it's OK to merge > first and then return? > > High level piece: I'm in support of this direction of movement -- I just > want to make sure that the new code is understandable and maintainable. 
> > Thanks, > Richard > > On Mar 6, 2021, at 12:39 PM, Alan & Kim Zimmerman > wrote: > > I have been running a branch in !2418[1] for just over a year to migrate > the ghc-exactprint functionality directly into the GHC AST[2], and I am now > satisfied that it is able to provide all the same functionality as the > original. > > This is one of the features intended for the impending 9.2.1 release, and > it needs to be reviewed to be able to land. But the change is huge, as it > mechanically affects most files that interact with the GHC AST. > > So I have split out a precursor !5158 [3] with just the new types that are > used to represent the annotations, so it can be a focal point for > discussion. > > It is ready for review, please comment if you have time and interest. > > Regards > Alan > > [1] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/2418 > [2] https://gitlab.haskell.org/ghc/ghc/-/issues/17638 > [3] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5158 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Sat Mar 13 00:11:53 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sat, 13 Mar 2021 08:11:53 +0800 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files In-Reply-To: References: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Message-ID: Yes, there are also John's resumable compilation ideas, and the current performance work Obsidian Systems does. On Sat, 13 Mar 2021 at 6:21 AM, Cheng Shao wrote: > I believe Josh has already been working on 2 some time ago? cc'ing him > to this thread. 
> > I'm personally in favor of 2 since it's also super useful for > prototyping whole-program ghc backends, where one can just read all > the CgGuts from the .hi files, and get all codegen-related Core for > free. > > Cheers, > Cheng > > On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal > wrote: > > > > Hi all, > > > > This is following up on this recent discussion on the list concerning fat > > interface files: > https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html > > > > Now that we have been accepted as a GSOC organisation, I think > > it would be a good project idea for a sufficiently motivated and > > advanced student. This is a call for mentors (and students as > > well!) who would be interested in this project > > > > The problem is the following: > > > > Haskell Language Server (and ghci with `-fno-code`) have very > > fast startup times for codebases which don't make use of Template > > Haskell, and thus don't require any code-gen to typecheck. This > > is because they can simply read the cached iface files generated by a > > previous compile and don't need to re-invoke the typechecker. > > > > However, as soon as TH is involved, we are forced to retypecheck and > > compile files, since it is not possible to restart the code-gen process > > starting with only a iface file. I can think of two ways to address this > > problem: > > > > 1. Allow bytecode to be serialized > > > > 2. Serialize desugared Core into iface files (fat interfaces), so that > > (byte)code-gen can be restarted from this point and doesn't need > > > > (1) might be challenging, but offers a few more advantages over (2), > > in that we can reduce the work done to load TH-heavy codebases to just > > a load of the cached bytecode objects from disk, and could make the > > load process (and times) for these codebases directly comparable to > > their TH-free cousins. 
> > > > It would also make ghci startup a lot faster with a warm cache of > > bytecode objects, bringing ghci startup times in line with those of > > -fno-code > > > > However (2) might be much easier to achieve and offers many > > of the same advantages, in that we would not need to re-run > > the compiler frontend or core-to-core optimisation phases. > > There is also already a (slightly bitrotted) implementation > > of (2) thanks to the work of Edward Yang. > > > > If any of this sounds exciting to you as a student or a mentor, please > > get in touch. > > > > In particular, I think (2) is a feasible project that can be completed > > with minimal mentoring effort. However, I'm only vaguely familiar with > > the details of the byte code generator, so if (1) is a direction we want > > to pursue, we would need a mentor familiar with the details of this part > > of GHC. > > > > Cheers, > > Zubin > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.ericson at obsidian.systems Sat Mar 13 01:33:24 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Fri, 12 Mar 2021 20:33:24 -0500 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files In-Reply-To: References: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Message-ID: Yes, see https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output where we (Obsidian) and IOHK have been planning together. I must say, I am a bit skeptical about a GSOC being able to take this on successfully. 
I thought Fendor did a great job with multiple home units, for example, but we have still to finish merging all his work! The driver is perhaps the biggest cesspool of technical debt in GHC, and it will take a while to untangle, let alone implement new features. I forget what the rules are for more incremental or multifaceted projects, but I would prefer an approach of trying to untangle things with no singular large goal. Or maybe we can involve a student with efforts to improve CI, attacking the root cause for why it's so hard to land things in the first place. John On 3/12/21 7:11 PM, Moritz Angermann wrote: > Yes there is also John resumable compilation ideas. And the current > performance work obsidian systems does. > > On Sat, 13 Mar 2021 at 6:21 AM, Cheng Shao > wrote: > > I believe Josh has already been working on 2 some time ago? cc'ing him > to this thread. > > I'm personally in favor of 2 since it's also super useful for > prototyping whole-program ghc backends, where one can just read all > the CgGuts from the .hi files, and get all codegen-related Core for > free. > > Cheers, > Cheng > > On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal > > wrote: > > > > Hi all, > > > > This is following up on this recent discussion on the list > concerning fat > > interface files: > https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html > > > > > Now that we have been accepted as a GSOC organisation, I think > > it would be a good project idea for a sufficiently motivated and > > advanced student. This is a call for mentors (and students as > > well!) who would be interested in this project > > > > The problem is the following: > > > > Haskell Language Server (and ghci with `-fno-code`) have very > > fast startup times for codebases which don't make use of Template > > Haskell, and thus don't require any code-gen to typecheck. 
This > > is because they can simply read the cached iface files generated > by a > > previous compile and don't need to re-invoke the typechecker. > > > > However, as soon as TH is involved, we are forced to retypecheck and > > compile files, since it is not possible to restart the code-gen > process > > starting with only a iface file. I can think of two ways to > address this > > problem: > > > > 1. Allow bytecode to be serialized > > > > 2. Serialize desugared Core into iface files (fat interfaces), > so that > > (byte)code-gen can be restarted from this point and doesn't need > > > > (1) might be challenging, but offers a few more advantages over (2), > > in that we can reduce the work done to load TH-heavy codebases > to just > > a load of the cached bytecode objects from disk, and could make the > > load process (and times) for these codebases directly comparable to > > their TH-free cousins. > > > > It would also make ghci startup a lot faster with a warm cache of > > bytecode objects, bringing ghci startup times in line with those of > > -fno-code > > > > However (2) might be much easier to achieve and offers many > > of the same advantages, in that we would not need to re-run > > the compiler frontend or core-to-core optimisation phases. > > There is also already a (slightly bitrotted) implementation > > of (2) thanks to the work of Edward Yang. > > > > If any of this sounds exciting to you as a student or a mentor, > please > > get in touch. > > > > In particular, I think (2) is a feasible project that can be > completed > > with minimal mentoring effort. However, I'm only vaguely > familiar with > > the details of the byte code generator, so if (1) is a direction > we want > > to pursue, we would need a mentor familiar with the details of > this part > > of GHC. 
> > > > Cheers, > > Zubin > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Sat Mar 13 02:50:20 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sat, 13 Mar 2021 10:50:20 +0800 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files In-Reply-To: References: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Message-ID: I'd be happy to mentor anyone on either of these. The CI part is going to be grueling demotivational work with very long pauses in between, which is why I didn't propose it yet. I agree with John that I'm a bit skeptical about a student being able to help or pull anything off, given the current state of things and the multiple parties already actively involved, without being relegated to a spectator's position. On Sat, Mar 13, 2021 at 9:34 AM John Ericson wrote: > Yes, see > https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output > where we (Obsidian) and IOHK have been planning together. > > I must say, I am a bit skeptical about a GSOC being able to take this on > successfully. I thought Fendor did a great job with multiple home units, > for example, but we have still to finish merging all his work! The driver > is perhaps the biggest cesspool of technical debt in GHC, and it will take > a while to untangle let alone implement new features. > > I forget what the rules are for more incremental or multifaceted projects, > but I would prefer an approach of trying to untangle things with no > singular large goal. 
Or maybe we can involve a student with efforts to > improve CI, attacking the root cause for why it's so hard to land things in > the first place . > > John > On 3/12/21 7:11 PM, Moritz Angermann wrote: > > Yes there is also John resumable compilation ideas. And the current > performance work obsidian systems does. > > On Sat, 13 Mar 2021 at 6:21 AM, Cheng Shao wrote: > >> I believe Josh has already been working on 2 some time ago? cc'ing him >> to this thread. >> >> I'm personally in favor of 2 since it's also super useful for >> prototyping whole-program ghc backends, where one can just read all >> the CgGuts from the .hi files, and get all codegen-related Core for >> free. >> >> Cheers, >> Cheng >> >> On Fri, Mar 12, 2021 at 10:32 PM Zubin Duggal >> wrote: >> > >> > Hi all, >> > >> > This is following up on this recent discussion on the list concerning >> fat >> > interface files: >> https://mail.haskell.org/pipermail/ghc-devs/2020-October/019324.html >> > >> > Now that we have been accepted as a GSOC organisation, I think >> > it would be a good project idea for a sufficiently motivated and >> > advanced student. This is a call for mentors (and students as >> > well!) who would be interested in this project >> > >> > The problem is the following: >> > >> > Haskell Language Server (and ghci with `-fno-code`) have very >> > fast startup times for codebases which don't make use of Template >> > Haskell, and thus don't require any code-gen to typecheck. This >> > is because they can simply read the cached iface files generated by a >> > previous compile and don't need to re-invoke the typechecker. >> > >> > However, as soon as TH is involved, we are forced to retypecheck and >> > compile files, since it is not possible to restart the code-gen process >> > starting with only a iface file. I can think of two ways to address this >> > problem: >> > >> > 1. Allow bytecode to be serialized >> > >> > 2. 
Serialize desugared Core into iface files (fat interfaces), so that >> > (byte)code-gen can be restarted from this point and doesn't need >> > >> > (1) might be challenging, but offers a few more advantages over (2), >> > in that we can reduce the work done to load TH-heavy codebases to just >> > a load of the cached bytecode objects from disk, and could make the >> > load process (and times) for these codebases directly comparable to >> > their TH-free cousins. >> > >> > It would also make ghci startup a lot faster with a warm cache of >> > bytecode objects, bringing ghci startup times in line with those of >> > -fno-code >> > >> > However (2) might be much easier to achieve and offers many >> > of the same advantages, in that we would not need to re-run >> > the compiler frontend or core-to-core optimisation phases. >> > There is also already a (slightly bitrotted) implementation >> > of (2) thanks to the work of Edward Yang. >> > >> > If any of this sounds exciting to you as a student or a mentor, please >> > get in touch. >> > >> > In particular, I think (2) is a feasible project that can be completed >> > with minimal mentoring effort. However, I'm only vaguely familiar with >> > the details of the byte code generator, so if (1) is a direction we want >> > to pursue, we would need a mentor familiar with the details of this part >> > of GHC. >> > >> > Cheers, >> > Zubin >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Sun Mar 14 01:00:48 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 13 Mar 2021 20:00:48 -0500 Subject: GSOC Idea: Bytecode serialization and/or Fat Interface files In-Reply-To: References: <20210312212734.daeiuwrtwn6j7t4b@zubin-msi> Message-ID: <87blbm8s76.fsf@smart-cactus.org> John Ericson writes: > Yes, see > https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output > where we (Obsidian) and IOHK have been planning together. > > I must say, I am a bit skeptical about a GSOC being able to take this on > successfully. I thought Fendor did a great job with multiple home units, > for example, but we have still to finish merging all his work! The > driver is perhaps the biggest cesspool of technical debt in GHC, and it > will take a while to untangle let alone implement new features. > > I forget what the rules are for more incremental or multifaceted > projects, but I would prefer an approach of trying to untangle things > with no singular large goal. Or maybe we can involve a student with > efforts to improve CI, attacking the root cause for why it's so hard to > land things in the first place. > I think this would be ill-suited to a GSoC project. GSoC projects are strongly encouraged to be measurable projects with a clear development trajectory from the outset and multiple concrete checkpoints. If we want the project to be successful I think it would be a mistake to wander from this guidance. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Sun Mar 14 20:53:26 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 14 Mar 2021 20:53:26 +0000 Subject: Build failure -- missing dependency? Help! Message-ID: I'm getting this (with 'sh validate --legacy').
Oddly * It does not happen on HEAD * It does happen on wip/T19495, a tiny patch with one innocuous change to GHC.Tc.Gen.HsType I can't see how my patch could possibly cause "missing files" in ghc-bignum! I'm guessing that there is a missing dependency that somehow doesn't show up in master, but does in my branch, randomly. There's something funny about ghc-bignum; it doesn't seem to be a regular library. Can anyone help? Thanks Simon "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -O -H64m -Wall -fllvm-fill-undef-with-garbage -Werror -this-unit-id base-4.16.0.0 -hide-all-packages -package-env - -i -ilibraries/base/. -ilibraries/base/dist-install/build -Ilibraries/base/dist-install/build -ilibraries/base/dist-install/build/./autogen -Ilibraries/base/dist-install/build/./autogen -Ilibraries/base/include -Ilibraries/base/dist-install/build/include -optP-include -optPlibraries/base/dist-install/build/./autogen/cabal_macros.h -package-id ghc-bignum-1.0 -package-id ghc-prim-0.8.0 -package-id rts -this-unit-id base -Wcompat -Wnoncanonical-monad-instances -XHaskell2010 -O -dcore-lint -ticky -Wwarn -no-user-package-db -rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags -Wnoncanonical-monad-instances -outputdir libraries/base/dist-install/build -dynamic-too -c libraries/base/./GHC/Exception/Type.hs-boot -o libraries/base/dist-install/build/GHC/Exception/Type.o-boot -dyno libraries/base/dist-install/build/GHC/Exception/Type.dyn_o-boot Failed to load interface for 'GHC.Num.Integer' There are files missing in the 'ghc-bignum' package, try running 'ghc-pkg check'. Use -v (or `:set -v` in ghci) to see a list of the files searched for. make[1]: *** [libraries/base/ghc.mk:4: libraries/base/dist-install/build/GHC/Exception/Type.o-boot] Error 1 make[1]: *** Waiting for unfinished jobs....
URL: From ietf-dane at dukhovni.org Mon Mar 15 08:17:47 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 15 Mar 2021 06:17:47 -0200 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: Message-ID: <8FF786A4-83EA-4C37-A9D1-64E2FF67C947@dukhovni.org> > On Mar 14, 2021, at 6:53 PM, Simon Peyton Jones via ghc-devs wrote: > > I’m getting this (with ‘sh validate –legacy’). Oddly > > • It does not happen on HEAD > • It does happen on wip/T19495, a tiny patch with one innocuous change to GHC.Tc.Gen.HsType > I can’t see how my patch could possible cause “missing files” in ghc-bignum! > > I’m guessing that there is a missing dependency that someone doesn’t show up in master, but does in my branch, randomly. > > There’s something funny about ghc-bignum; it doesn’t seem to be a regular library > > Can anyone help? I managed to reproduce the issue on my machine, and noticed that after: $ cd libraries/ghc-bignum/ $ gmake $ cd ../.. $ ./validate --legacy --no-clean the build continues OK. So it looks like the legacy parallel build has a missing dependency on the completion of the build of libraries/ghc-bignum at the point when it is trying to run: $ "inplace/bin/ghc-stage1" -v1 \ -hisuf hi \ -osuf o \ -hcsuf hc \ -static -O0 -H64m -Wall -fllvm-fill-undef-with-garbage -Werror \ -this-unit-id base-4.16.0.0 \ -hide-all-packages -package-env - -i \ -ilibraries/base/. 
\ -ilibraries/base/dist-install/build \ -Ilibraries/base/dist-install/build \ -ilibraries/base/dist-install/build/./autogen \ -Ilibraries/base/dist-install/build/./autogen \ -Ilibraries/base/include \ -Ilibraries/base/dist-install/build/include \ -optP-include \ -optPlibraries/base/dist-install/build/./autogen/cabal_macros.h \ -package-id ghc-bignum-1.0 \ -package-id ghc-prim-0.8.0 \ -package-id rts \ -this-unit-id base \ -Wcompat -Wnoncanonical-monad-instances \ -XHaskell2010 -O \ -dcore-lint -dno-debug-output \ -no-user-package-db \ -rtsopts \ -Wno-trustworthy-safe -Wno-deprecated-flags -Wnoncanonical-monad-instances \ -outputdirlibraries/base/dist-install/build \ -dynamic-too \ -c libraries/base/./GHC/Exception/Type.hs-boot \ -o libraries/base/dist-install/build/GHC/Exception/Type.o-boot \ -dyno libraries/base/dist-install/build/GHC/Exception/Type.dyn_o-boot My best guess is that the problem command fires via libraries/base/dist-install/package-data.mk which is created by cabal, and things get rather complicated from there... -- Viktor. From sylvain at haskus.fr Mon Mar 15 08:29:36 2021 From: sylvain at haskus.fr (Sylvain Henry) Date: Mon, 15 Mar 2021 09:29:36 +0100 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: Message-ID: Hi Simon, The issue is that: 1. Make build system doesn't respect package dependencies, only module dependencies (afaik) 2. The build system isn't aware that most modules implicitly depend on GHC.Num.Integer/Natural (to desugar Integer/Natural literals) That's why we have several fake imports in `base` that look like: > import GHC.Num.Integer () -- See Note [Depend on GHC.Num.Integer] in GHC.Base Note [Depend on GHC.Num.Integer] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Integer type is special because GHC.Iface.Tidy uses constructors in GHC.Num.Integer to construct Integer literal values. 
Currently it reads the interface file whether or not the current module *has* any Integer literals, so it's important that GHC.Num.Integer is compiled before any other module. (There's a hack in GHC to disable this for packages ghc-prim and ghc-bignum which aren't allowed to contain any Integer literals.) Likewise we implicitly need Integer when deriving things like Eq instances. The danger is that if the build system doesn't know about the dependency on Integer, it'll compile some base module before GHC.Num.Integer, resulting in:   Failed to load interface for ‘GHC.Num.Integer’     There are files missing in the ‘ghc-bignum’ package, Bottom line: we make GHC.Base depend on GHC.Num.Integer; and everything else either depends on GHC.Base, or does not have NoImplicitPrelude (and hence depends on Prelude). Note: this is only a problem with the make-based build system. Hadrian doesn't seem to interleave compilation of modules from separate packages and respects the dependency between `base` and `ghc-bignum`. So we should add a similar fake import into libraries/base/GHC/Exception/Type.hs-boot. I will open a MR. Sylvain On 14/03/2021 21:53, Simon Peyton Jones via ghc-devs wrote: > > I’m getting this (with ‘sh validate –legacy’).  Oddly > > * It does not happen on HEAD > * It does happen on wip/T19495, a tiny patch with one innocuous > change to GHC.Tc.Gen.HsType > > I can’t see how my patch could possible cause “missing files” in > ghc-bignum! > > I’m guessing that there is a missing dependency that someone doesn’t > show up in master, but does in my branch, randomly. > > There’s something funny about ghc-bignum; it doesn’t seem to be a > regular library > > Can anyone help? > > Thanks > > Simon > > "inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -O > -H64m -Wall -fllvm-fill-undef-with-garbage    -Werror    -this-unit-id > base-4.16.0.0 -hide-all-packages -package-env - -i -ilibraries/base/. 
> -ilibraries/base/dist-install/build > -Ilibraries/base/dist-install/build > -ilibraries/base/dist-install/build/./autogen > -Ilibraries/base/dist-install/build/./autogen -Ilibraries/base/include > -Ilibraries/base/dist-install/build/include    -optP-include > -optPlibraries/base/dist-install/build/./autogen/cabal_macros.h > -package-id ghc-bignum-1.0 -package-id ghc-prim-0.8.0 -package-id rts > -this-unit-id base -Wcompat -Wnoncanonical-monad-instances > -XHaskell2010 -O -dcore-lint -ticky -Wwarn  -no-user-package-db > -rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags > -Wnoncanonical-monad-instances  -outputdir > libraries/base/dist-install/build  -dynamic-too -c > libraries/base/./GHC/Exception/Type.hs-boot -o > libraries/base/dist-install/build/GHC/Exception/Type.o-boot -dyno > libraries/base/dist-install/build/GHC/Exception/Type.dyn_o-boot > > Failed to load interface for ‘GHC.Num.Integer’ > > There are files missing in the ‘ghc-bignum’ package, > > try running 'ghc-pkg check'. > > Use -v (or `:set -v` in ghci) to see a list of the files searched for. > > make[1]: *** [libraries/base/ghc.mk:4: > libraries/base/dist-install/build/GHC/Exception/Type.o-boot] Error 1 > > make[1]: *** Waiting for unfinished jobs.... > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Mar 15 08:33:23 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 15 Mar 2021 08:33:23 +0000 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: Message-ID: Thanks Sylvain So we should add a similar fake import into libraries/base/GHC/Exception/Type.hs-boot. I will open a MR. Thank you! Don't forget to comment it - especially because it is fake. 
Make build system doesn't respect package dependencies, only module dependencies (afaik) Does Hadrian suffer from this malady too? Are the fake imports needed? Or can we sweep them away when we sweep away make? Simon From: ghc-devs On Behalf Of Sylvain Henry Sent: 15 March 2021 08:30 To: ghc-devs at haskell.org Subject: Re: Build failure -- missing dependency? Help! Hi Simon, The issue is that: 1. Make build system doesn't respect package dependencies, only module dependencies (afaik) 2. The build system isn't aware that most modules implicitly depend on GHC.Num.Integer/Natural (to desugar Integer/Natural literals) That's why we have several fake imports in `base` that look like: > import GHC.Num.Integer () -- See Note [Depend on GHC.Num.Integer] in GHC.Base Note [Depend on GHC.Num.Integer] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The Integer type is special because GHC.Iface.Tidy uses constructors in GHC.Num.Integer to construct Integer literal values. Currently it reads the interface file whether or not the current module *has* any Integer literals, so it's important that GHC.Num.Integer is compiled before any other module. (There's a hack in GHC to disable this for packages ghc-prim and ghc-bignum which aren't allowed to contain any Integer literals.) Likewise we implicitly need Integer when deriving things like Eq instances. The danger is that if the build system doesn't know about the dependency on Integer, it'll compile some base module before GHC.Num.Integer, resulting in: Failed to load interface for 'GHC.Num.Integer' There are files missing in the 'ghc-bignum' package, Bottom line: we make GHC.Base depend on GHC.Num.Integer; and everything else either depends on GHC.Base, or does not have NoImplicitPrelude (and hence depends on Prelude). Note: this is only a problem with the make-based build system. Hadrian doesn't seem to interleave compilation of modules from separate packages and respects the dependency between `base` and `ghc-bignum`. 
So we should add a similar fake import into libraries/base/GHC/Exception/Type.hs-boot. I will open a MR. Sylvain On 14/03/2021 21:53, Simon Peyton Jones via ghc-devs wrote: I'm getting this (with 'sh validate -legacy'). Oddly 1. It does not happen on HEAD 2. It does happen on wip/T19495, a tiny patch with one innocuous change to GHC.Tc.Gen.HsType I can't see how my patch could possible cause "missing files" in ghc-bignum! I'm guessing that there is a missing dependency that someone doesn't show up in master, but does in my branch, randomly. There's something funny about ghc-bignum; it doesn't seem to be a regular library Can anyone help? Thanks Simon "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -O -H64m -Wall -fllvm-fill-undef-with-garbage -Werror -this-unit-id base-4.16.0.0 -hide-all-packages -package-env - -i -ilibraries/base/. -ilibraries/base/dist-install/build -Ilibraries/base/dist-install/build -ilibraries/base/dist-install/build/./autogen -Ilibraries/base/dist-install/build/./autogen -Ilibraries/base/include -Ilibraries/base/dist-install/build/include -optP-include -optPlibraries/base/dist-install/build/./autogen/cabal_macros.h -package-id ghc-bignum-1.0 -package-id ghc-prim-0.8.0 -package-id rts -this-unit-id base -Wcompat -Wnoncanonical-monad-instances -XHaskell2010 -O -dcore-lint -ticky -Wwarn -no-user-package-db -rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags -Wnoncanonical-monad-instances -outputdir libraries/base/dist-install/build -dynamic-too -c libraries/base/./GHC/Exception/Type.hs-boot -o libraries/base/dist-install/build/GHC/Exception/Type.o-boot -dyno libraries/base/dist-install/build/GHC/Exception/Type.dyn_o-boot Failed to load interface for 'GHC.Num.Integer' There are files missing in the 'ghc-bignum' package, try running 'ghc-pkg check'. Use -v (or `:set -v` in ghci) to see a list of the files searched for. 
make[1]: *** [libraries/base/ghc.mk:4: libraries/base/dist-install/build/GHC/Exception/Type.o-boot] Error 1 make[1]: *** Waiting for unfinished jobs.... _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Mon Mar 15 08:46:35 2021 From: sylvain at haskus.fr (Sylvain Henry) Date: Mon, 15 Mar 2021 09:46:35 +0100 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: Message-ID: <30d0a6e4-be48-a577-b9e9-763448f7afb5@haskus.fr> > > Thank you! Don’t forget to comment it – especially because it is fake. > Done in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5265 > Make build system doesn't respect package dependencies, only module > dependencies (afaik) > > Does Hadrian suffer from this malady too? Are the fake imports needed? > Or can we sweep them away when we sweep away make? > No, Hadrian has other issues but not this one :) Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Mon Mar 15 10:44:20 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 15 Mar 2021 06:44:20 -0400 Subject: Build failure -- missing dependency? Help! In-Reply-To: <30d0a6e4-be48-a577-b9e9-763448f7afb5@haskus.fr> References: <30d0a6e4-be48-a577-b9e9-763448f7afb5@haskus.fr> Message-ID: On Mon, Mar 15, 2021 at 09:46:35AM +0100, Sylvain Henry wrote: > > > > Thank you! Don’t forget to comment it – especially because it is fake. > > Done in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5265 Speaking of build failures with the legacy make system, I see a build failure on FreeBSD 12.2 with "validate --legacy" that I don't see with hadrian. It looks like the C compiler flags aren't quite the same and warnings are more tolerated in the hadrian build. 
The issue is that once "PosixSource.h" is included, FreeBSD (rightly I believe) hides header prototypes of various non-POSIX extensions. In particular, pthread_setname_np(3) is not exposed from <pthread.h>. The hadrian build works fine, but the legacy build stops with a fatal missing prototype. The fix appears to be to include <pthread.h> and <pthread_np.h> before "PosixSource.h" as below. Since we have no CI for FreeBSD, and this change only affects FreeBSD, I'm not sure whether it makes sense to burn build CI cycles for an MR with this change. What's the right way to proceed? FWIW, with your MR and the below patch, the FreeBSD "validate --legacy" successfully builds GHC. [ The tests seem to all be failing, perhaps the test driver scripts are not portable to FreeBSD, but previously the compiler was not building. ] --- a/rts/posix/Itimer.c +++ b/rts/posix/Itimer.c @@ -17,6 +17,12 @@ * seems to support. So much for standards. */ +#include "ghcconfig.h" +#if defined(freebsd_HOST_OS) +#include <pthread.h> +#include <pthread_np.h> +#endif + #include "PosixSource.h" #include "Rts.h" -- Viktor. From ietf-dane at dukhovni.org Mon Mar 15 16:28:42 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 15 Mar 2021 12:28:42 -0400 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: <30d0a6e4-be48-a577-b9e9-763448f7afb5@haskus.fr> Message-ID: On Mon, Mar 15, 2021 at 06:44:20AM -0400, Viktor Dukhovni wrote: > ..., the FreeBSD "validate --legacy" > successfully builds GHC. [ The tests seem to all be failing, perhaps > the test driver scripts are not portable to FreeBSD, but previously > the compiler was not building. ] FWIW, the tests seem to fail for two reasons: 1. The "install dir" and "test space" directories don't appear to be handled correctly. I had to drop the spaces. 2. On FreeBSD many tests run into the dreaded: unhandled ELF relocation(RelA) type 19 Can anyone versed in ELF internals help with: https://gitlab.haskell.org/ghc/ghc/-/issues/19086 -- Viktor.
From moritz.angermann at gmail.com Tue Mar 16 01:25:26 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Tue, 16 Mar 2021 09:25:26 +0800 Subject: Build failure -- missing dependency? Help! In-Reply-To: References: <30d0a6e4-be48-a577-b9e9-763448f7afb5@haskus.fr> Message-ID: Hi Viktor, - I believe the "test spaces" part is important and would need to be fixed, if spaces break this is not desirable. - For the Relocations part, I'm happy to offer guidance and help for anyone who wants to take a stab at it, right now I'm not in a position where I could take this on myself, I'm afraid. Cheers, Moritz On Tue, Mar 16, 2021 at 12:29 AM Viktor Dukhovni wrote: > On Mon, Mar 15, 2021 at 06:44:20AM -0400, Viktor Dukhovni wrote: > > > ..., the FreeBSD "validate --legacy" > > successfully builds GHC. [ The tests seem to all be failing, perhaps > > the test driver scripts are not portable to FreeBSD, but previously > > the compiler was not building. ] > > FWIW, the tests seem to fail for two reasons: > > 1. The "install dir" and "test space" directories don't > appear to be handled correctly. I had to drop the spaces. > > 2. On FreeBSD many tests run into the dreaded: > > unhandled ELF relocation(RelA) type 19 > > Can anyone versed in Elf internals help with: > > https://gitlab.haskell.org/ghc/ghc/-/issues/19086 > > -- > Viktor. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Tue Mar 16 18:41:17 2021 From: sylvain at haskus.fr (Sylvain Henry) Date: Tue, 16 Mar 2021 19:41:17 +0100 Subject: Generalising KnowNat/Char/Symbol? Message-ID: <3da9cc7b-b8cf-a53c-f65f-90559c77c0f9@haskus.fr> Hi, I would like to have a KnownWord constraint to implement a type-safe efficient sum type. 
For now [1] I have: data V (vs :: [Type]) = Variant !Word Any where Word is a tag used as an index in the vs list and Any a value (unsafeCoerced to the appropriate type). Instead I would like to have something like: data V (vs :: [Type]) = Variant (forall w. KnownWord w => Proxy w -> Index w vs) Currently if I use KnownNat (instead of the proposed KnownWord), the code isn't very good because Natural equality is implemented using `naturalEq` which isn't inlined and we end up with sequences of comparisons instead of single case-expressions with unboxed literal alternatives. I could probably implement KnownWord and the required stuff (axioms and whatnot), but then someone will want KnownInt and so on. So would it instead make sense to generalise the different "Known*" we currently have with: class KnownValue t (v :: t) where valueSing :: SValue t v newtype SValue t (v :: t) = SValue t litVal :: KnownValue t v => proxy v -> t type KnownNat = KnownValue Natural type KnownChar = KnownValue Char type KnownSymbol = KnownValue String type KnownWord = KnownValue Word Thoughts? Sylvain [1] https://hackage.haskell.org/package/haskus-utils-variant-3.1/docs/Haskus-Utils-Variant.html From iavor.diatchki at gmail.com Tue Mar 16 21:45:16 2021 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 16 Mar 2021 14:45:16 -0700 Subject: Generalising KnowNat/Char/Symbol? In-Reply-To: <3da9cc7b-b8cf-a53c-f65f-90559c77c0f9@haskus.fr> References: <3da9cc7b-b8cf-a53c-f65f-90559c77c0f9@haskus.fr> Message-ID: It's been a while since I've looked at that stuff, but your suggestion seems reasonable to me. On Tue, Mar 16, 2021 at 11:42 AM Sylvain Henry wrote: > Hi, > > I would like to have a KnownWord constraint to implement a type-safe > efficient sum type. For now [1] I have: > > data V (vs :: [Type]) = Variant !Word Any > > where Word is a tag used as an index in the vs list and Any a value > (unsafeCoerced to the appropriate type). 
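Sylvain's proposed class above can be made concrete with a small, self-contained toy. This is illustrative only, not GHC's implementation: it borrows the `KnownValue`/`litVal` names from the proposal, but hand-writes instances for promoted Bool values, whereas real compiler support would have the constraint solver conjure the evidence per literal, as it does for KnownNat today.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

import Data.Kind (Type)
import Data.Proxy (Proxy (..))

-- One class indexed by a kind t *and* a promoted value v of that kind,
-- generalising KnownNat/KnownChar/KnownSymbol to any promotable kind.
class KnownValue (t :: Type) (v :: t) where
  litVal :: Proxy v -> t

-- Hand-written instances for promoted Bool, purely for illustration;
-- with built-in support these would be solved magically rather than
-- declared one per literal.
instance KnownValue Bool 'True  where litVal _ = True
instance KnownValue Bool 'False where litVal _ = False

main :: IO ()
main = do
  print (litVal (Proxy :: Proxy 'True))   -- True
  print (litVal (Proxy :: Proxy 'False))  -- False
```

Under this scheme the existing constraints become synonyms (`type KnownNat = KnownValue Natural`, and so on) and `KnownWord` comes for free; the open question raised later in the thread is whether such a class belongs in GHC.TypeLits at all or is already covered by singletons' `SingI`.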
> > Instead I would like to have something like: > > data V (vs :: [Type]) = Variant (forall w. KnownWord w => Proxy w -> > Index w vs) > > Currently if I use KnownNat (instead of the proposed KnownWord), the > code isn't very good because Natural equality is implemented using > `naturalEq` which isn't inlined and we end up with sequences of > comparisons instead of single case-expressions with unboxed literal > alternatives. > > I could probably implement KnownWord and the required stuff (axioms and > whatnot), but then someone will want KnownInt and so on. So would it > instead make sense to generalise the different "Known*" we currently > have with: > > class KnownValue t (v :: t) where valueSing :: SValue t v > > newtype SValue t (v :: t) = SValue t > > litVal :: KnownValue t v => proxy v -> t > > type KnownNat = KnownValue Natural > type KnownChar = KnownValue Char > type KnownSymbol = KnownValue String > type KnownWord = KnownValue Word > > Thoughts? > Sylvain > > [1] > > https://hackage.haskell.org/package/haskus-utils-variant-3.1/docs/Haskus-Utils-Variant.html > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.grenrus at iki.fi Tue Mar 16 22:58:37 2021 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Wed, 17 Mar 2021 00:58:37 +0200 Subject: Generalising KnowNat/Char/Symbol? In-Reply-To: References: <3da9cc7b-b8cf-a53c-f65f-90559c77c0f9@haskus.fr> Message-ID: I think this is libraries at haskell.org issue. Most non-beginners have used KnownSymbol or KnownNat, GHC.TypeLits was meant to be "internal" module but now it's the module everyone uses. 
To me abstracting KnownValue like that seems to be reinventing Sing(I) from singletons: - https://hackage.haskell.org/package/singletons-3.0/docs/Data-Singletons.html#t:SingI - https://hackage.haskell.org/package/singletons-base-3.0/docs/src/GHC.TypeLits.Singletons.Internal.html#SNat   ((re)defines separate GADTs for SNat and SSymbol) - Oleg On 16.3.2021 23.45, Iavor Diatchki wrote: > It's been a while since I've looked at that stuff, but your suggestion > seems reasonable to me. > > On Tue, Mar 16, 2021 at 11:42 AM Sylvain Henry > wrote: > > Hi, > > I would like to have a KnownWord constraint to implement a type-safe > efficient sum type. For now [1] I have: > > data V (vs :: [Type]) = Variant !Word Any > > where Word is a tag used as an index in the vs list and Any a value > (unsafeCoerced to the appropriate type). > > Instead I would like to have something like: > > data V (vs :: [Type]) = Variant (forall w. KnownWord w => Proxy w -> > Index w vs) > > Currently if I use KnownNat (instead of the proposed KnownWord), the > code isn't very good because Natural equality is implemented using > `naturalEq` which isn't inlined and we end up with sequences of > comparisons instead of single case-expressions with unboxed literal > alternatives. > > I could probably implement KnownWord and the required stuff > (axioms and > whatnot), but then someone will want KnownInt and so on. So would it > instead make sense to generalise the different "Known*" we currently > have with: > > class KnownValue t (v :: t) where valueSing :: SValue t v > > newtype SValue t (v :: t) = SValue t > > litVal :: KnownValue t v => proxy v -> t > > type KnownNat = KnownValue Natural > type KnownChar = KnownValue Char > type KnownSymbol = KnownValue String > type KnownWord = KnownValue Word > > Thoughts? 
> Sylvain > > [1] > https://hackage.haskell.org/package/haskus-utils-variant-3.1/docs/Haskus-Utils-Variant.html > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Mar 17 03:00:14 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 17 Mar 2021 11:00:14 +0800 Subject: On CI Message-ID: Hi there! Just a quick update on our CI situation. Ben, John, Davean and I have been discussion on CI yesterday, and what we can do about it, as well as some minor notes on why we are frustrated with it. This is an open invitation to anyone who in earnest wants to work on CI. Please come forward and help! We'd be glad to have more people involved! First the good news, over the last few weeks we've seen we *can* improve CI performance quite substantially. And the goal is now to have MR go through CI within at most 3hs. There are some ideas on how to make this even faster, especially on wide (high core count) machines; however that will take a bit more time. Now to the more thorny issue: Stat failures. We do not want GHC to regress, and I believe everyone is on board with that mission. Yet we have just witnessed a train of marge trials all fail due to a -2% regression in a few tests. Thus we've been blocking getting stuff into master for at least another day. This is (in my opinion) not acceptable! We just had five days of nothing working because master was broken and subsequently all CI pipelines kept failing. We have thus effectively wasted a week. 
While we can mitigate the latter part by enforcing marge for all merges to master (and with faster pipeline turnaround times this might be more palatable than with 9-12h turnaround times -- when you need to get something done! ha!), that won't help us with issues where marge can't find a set of buildable MRs, because she just keeps hitting a combination of MRs that somehow together increase or decrease metrics. We have three knobs to adjust: - Make GHC build faster / make the testsuite run faster. There is some rather interesting work going on about parallelizing (earlier) during builds. We've also seen that we've wasted enormous amounts of time during darwin builds in the kernel, because of a bug in the testdriver. - Use faster hardware. We've seen that just this can cut windows build times from 220min to 80min. - Reduce the amount of builds. We used to build two pipelines for each marge merge, and if either of both (see below) failed, marge's merge would fail as well. So not only did we build twice as much as we needed, we also increased our chances to hit bogus build failures by 2. We need to do something about this, and I'd advocate for just not making stats fail with marge. Build errors of course, but stat failures, no. And then have a separate dashboard (and Ben has some old code lying around for this, which someone would need to pick up and polish, ...), that tracks GHC's performance for each commit to master, with easy access from the dashboard to the offending commit. We will also need to consider the implications of synthetic micro benchmarks, as opposed to, say, building Cabal or other packages, which reflect more real-world experience of users using GHC. I will try to provide a data driven report on GHC's CI on a bi-weekly or monthly basis (we will have to see what the costs for writing it up, and the usefulness is) going forward.
And my sincere hope is that it will help us better understand our CI situation; instead of just having some vague complaints about it. Cheers, Moritz -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Wed Mar 17 08:14:14 2021 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Wed, 17 Mar 2021 09:14:14 +0100 Subject: On CI In-Reply-To: References: Message-ID: > and if either of both (see below) failed, marge's merge would fail as well. > Re: “see below” is this referring to a missing part of your email? -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Mar 17 08:22:16 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 17 Mar 2021 16:22:16 +0800 Subject: On CI In-Reply-To: References: Message-ID: No it wasn't. It was about the stat failures described in the next paragraph. I could have been more clear about that. My apologies! On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud wrote: > > and if either of both (see below) failed, marge's merge would fail as well. >> > > Re: “see below” is this referring to a missing part of your email? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Wed Mar 17 08:26:19 2021 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Wed, 17 Mar 2021 09:26:19 +0100 Subject: On CI In-Reply-To: References: Message-ID: Then I have a question: why are there two pipelines running on each merge batch? On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann wrote: > No it wasn't. It was about the stat failures described in the next > paragraph. I could have been more clear about that. My apologies! > > On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud > wrote: > >> >> and if either of both (see below) failed, marge's merge would fail as >>> well. >>> >> >> Re: “see below” is this referring to a missing part of your email? 
>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Mar 17 08:34:39 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 17 Mar 2021 16:34:39 +0800 Subject: On CI In-Reply-To: References: Message-ID: *why* is a very good question. The MR fixing it is here: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275 On Wed, Mar 17, 2021 at 4:26 PM Spiwack, Arnaud wrote: > Then I have a question: why are there two pipelines running on each merge > batch? > > On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann < > moritz.angermann at gmail.com> wrote: > >> No it wasn't. It was about the stat failures described in the next >> paragraph. I could have been more clear about that. My apologies! >> >> On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud >> wrote: >> >>> >>> and if either of both (see below) failed, marge's merge would fail as >>>> well. >>>> >>> >>> Re: “see below” is this referring to a missing part of your email? >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Mar 17 09:26:16 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 17 Mar 2021 09:26:16 +0000 Subject: On CI In-Reply-To: References: Message-ID: We need to do something about this, and I'd advocate for just not making stats fail with marge. Generally I agree. One point you don’t mention is that our perf tests (which CI forces us to look at assiduously) are often pretty weird cases. So there is at least a danger that these more exotic cases will stand in the way of (say) a perf improvement in the typical case. But “not making stats fail” is a bit crude. Instead how about * Always accept stat improvements * We already have per-benchmark windows. If the stat falls outside the window, we fail. You are effectively saying “widen all windows to infinity”. If something makes a stat 10 times worse, I think we *should* fail. But 10% worse? 
Maybe we should accept and look later as you suggest. So I’d argue for widening the windows rather than disabling them completely. * If we did that we’d need good instrumentation to spot steps and drift in perf, as you say. An advantage is that since the perf instrumentation runs only on committed master patches, not on every CI, it can cost more. In particular, it could run a bunch of “typical” tests, including nofib and compiling Cabal or other libraries. The big danger is that by relieving patch authors from worrying about perf drift, it’ll end up in the lap of the GHC HQ team. If it’s hard for the author of a single patch (with which she is intimately familiar) to work out why it’s making some test 2% worse, imagine how hard, and demotivating, it’d be for Ben to wonder why 50 patches (with which he is unfamiliar) are making some test 5% worse. I’m not sure how to address this problem. At least we should make it clear that patch authors are expected to engage *actively* in a conversation about why their patch is making something worse, even after it lands. Simon From: ghc-devs On Behalf Of Moritz Angermann Sent: 17 March 2021 03:00 To: ghc-devs Subject: On CI Hi there! Just a quick update on our CI situation. Ben, John, Davean and I had a discussion on CI yesterday, about what we can do about it, as well as some minor notes on why we are frustrated with it. This is an open invitation to anyone who in earnest wants to work on CI. Please come forward and help! We'd be glad to have more people involved! First the good news: over the last few weeks we've seen we *can* improve CI performance quite substantially. And the goal is now to have MRs go through CI within at most 3 hours. There are some ideas on how to make this even faster, especially on wide (high core count) machines; however that will take a bit more time. Now to the more thorny issue: Stat failures. We do not want GHC to regress, and I believe everyone is on board with that mission.
Yet we have just witnessed a train of marge trials all fail due to a -2% regression in a few tests. Thus we've been blocking getting stuff into master for at least another day. This is (in my opinion) not acceptable! We just had five days of nothing working because master was broken and subsequently all CI pipelines kept failing. We have thus effectively wasted a week. We can mitigate the latter part by enforcing marge for all merges to master (and with faster pipeline turnaround times this might be more palatable than with 9-12h turnaround times -- when you need to get something done! ha!), but that won't help us with issues where marge can't find a set of buildable MRs, because she just keeps hitting a combination of MRs that somehow together increase or decrease metrics. We have three knobs to adjust: - Make GHC build faster / make the testsuite run faster. There is some rather interesting work going on about parallelizing (earlier) during builds. We've also seen that we've wasted enormous amounts of time during darwin builds in the kernel, because of a bug in the testdriver. - Use faster hardware. We've seen that just this can cut windows build times from 220min to 80min. - Reduce the number of builds. We used to build two pipelines for each marge merge, and if either of both (see below) failed, marge's merge would fail as well. So not only did we build twice as much as we needed, we also doubled our chances of hitting bogus build failures. We need to do something about this, and I'd advocate for just not making stats fail with marge. Build errors of course, but stat failures, no. And then have a separate dashboard (Ben has some old code lying around for this, which someone would need to pick up and polish, ...) that tracks GHC's performance for each commit to master, with easy access from the dashboard to the offending commit.
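[Editor's sketch: the per-benchmark acceptance windows discussed in this thread boil down to a simple classification. The code below is hypothetical illustration in Python (like GHC's testsuite driver), not GHC's actual implementation; the name `check_stat` is invented.]

```python
def check_stat(baseline, measured, window=0.10):
    """Classify a measured metric against its baseline.

    `window` is the allowed relative increase (0.10 = up to +10%).
    Improvements always pass; regressions beyond the window fail.
    """
    change = (measured - baseline) / baseline
    if change <= 0:
        return "improvement"
    if change <= window:
        return "within-window"
    return "regression"
```

[Widening the windows, as proposed, amounts to raising `window`; "not making stats fail at all" is the degenerate case `window = infinity`, which is why a 10x regression would then slip through unless the window stays finite.]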
We will also need to consider the implications of synthetic micro benchmarks, as opposed to, say, building Cabal or other packages, which reflect more of the real-world experience of users using GHC. I will try to provide a data-driven report on GHC's CI on a bi-weekly or monthly basis (we will have to see what the cost of writing it up is, and how useful it turns out to be) going forward. And my sincere hope is that it will help us better understand our CI situation, instead of just having some vague complaints about it. Cheers, Moritz -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Wed Mar 17 09:53:28 2021 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Wed, 17 Mar 2021 10:53:28 +0100 Subject: On CI In-Reply-To: References: Message-ID: Ah, so it was really two identical pipelines (one for the branch where Margebot batches commits, and one for the MR that Margebot creates before merging). That's indeed a non-trivial amount of purely wasted computer-hours. Taking a step back, I am inclined to agree with the proposal of not checking stat regressions in Margebot. My high-level opinion on this is that perf tests don't actually test the right thing. Namely, they don't prevent performance drift over time (if a given test is allowed to degrade by 2% every commit, it can take a 100% performance hit in just 35 commits). While it is important to measure performance, and to avoid too egregious performance degradation in a given commit, it's usually performance over time which matters. I don't really know how to apply it to collaborative development, and help maintain healthy performance. But flagging performance regressions in MRs, while not making them block batched merges, sounds like a reasonable compromise. On Wed, Mar 17, 2021 at 9:34 AM Moritz Angermann wrote: > *why* is a very good question.
The MR fixing it is here: > https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275 > > On Wed, Mar 17, 2021 at 4:26 PM Spiwack, Arnaud > wrote: > >> Then I have a question: why are there two pipelines running on each merge >> batch? >> >> On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann < >> moritz.angermann at gmail.com> wrote: >> >>> No it wasn't. It was about the stat failures described in the next >>> paragraph. I could have been more clear about that. My apologies! >>> >>> On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud >>> wrote: >>> >>>> >>>> and if either of both (see below) failed, marge's merge would fail as >>>>> well. >>>>> >>>> >>>> Re: “see below” is this referring to a missing part of your email? >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Mar 17 10:18:49 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 17 Mar 2021 18:18:49 +0800 Subject: On CI In-Reply-To: References: Message-ID: I am not advocating to drop perf tests during merge requests, I just want them to not be fatal for marge batches. Yes, this means that a bunch of unrelated merge requests could all be fine wrt the perf checks per merge request, but the aggregate might fail perf. And then subsequently the next MR against the merged aggregate will start failing. Even that is a pretty bad situation imo. I honestly don't have a good answer; I just see marge work on batches, over and over and over again, just to fail. Eventually marge should figure out a subset of the merges that fit into the perf window, but that might be after 10 tries, i.e. after up to ~30+ hours, which means there won't be any merge request landing in GHC for 30 hours. I find that rather unacceptable. I think we need better visualisation of perf regressions that happen on master. Ben has some wip for this, and I think John said there might be some way to add a nice (maybe reflex) ui to it.
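[Editor's sketch: a minimal version of such a regression scan over master's per-commit metrics, to go from "GHC got worse around here" to "this is the commit". Hypothetical Python, not Ben's actual WIP code; the name `find_step` and the data shape are invented. Note also that drift matters as much as steps: +2% per commit compounds to roughly 2x in 35 commits, since 1.02**35 ~= 2.0.]

```python
def find_step(history, threshold=0.05):
    """Given [(commit, value)] ordered oldest-to-newest, return the
    commit introducing the largest relative jump above `threshold`,
    or None if no commit-to-commit change exceeds it.

    A dashboard would plot `history` and highlight this commit.
    """
    worst = None  # (commit, relative change)
    for (_, v0), (c1, v1) in zip(history, history[1:]):
        change = (v1 - v0) / v0
        if change > threshold and (worst is None or change > worst[1]):
            worst = (c1, change)
    return worst[0] if worst else None
```

[A companion check for slow drift would compare each point against a fixed release baseline rather than its predecessor, so that many sub-threshold regressions in a row still get flagged.]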
If we can see regressions on master easily, and go from "ohh, at this point in time GHC got worse" to "this is the commit", we might be able to figure it out. But what do we expect of patch authors? Right now if five people write patches to GHC, and each of them eventually manages to get their MRs green, after a long review, they finally see it assigned to marge, and then it starts failing? Their patch on its own was fine, but their aggregate with other people's code leads to regressions? So we now expect all patch authors together to try to figure out what happened? Figuring out why something regressed is hard enough, and we only have a very few people who are actually capable of debugging this. Thus I believe it would end up with Ben, Andreas, Matthiew, Simon, ... or someone else from GHC HQ anyway to figure out why it regressed, be it in the Review Stage, or dissecting a marge aggregate, or on master. Thus I believe in most cases we'd have to look at the regressions anyway, and right now we just convolutedly make working on GHC a rather depressing job. Increasing the barrier to entry by also requiring everyone to have absolutely stellar perf regression skills is quite a challenge. There is also the question of whether our synthetic benchmarks actually measure real-world performance. Do the micro benchmarks translate to the same regressions in, say, building aeson, vector or Cabal? The latter being what most practitioners care about more than the micro benchmarks. Again, I'm absolutely not in favour of GHC regressing, it's slow enough as it is. I just think CI should be assisting us and not holding development back. Cheers, Moritz On Wed, Mar 17, 2021 at 5:54 PM Spiwack, Arnaud wrote: > Ah, so it was really two identical pipelines (one for the branch where > Margebot batches commits, and one for the MR that Margebot creates before > merging). That's indeed a non-trivial amount of purely wasted > computer-hours.
> > Taking a step back, I am inclined to agree with the proposal of not > checking stat regressions in Margebot. My high-level opinion on this is > that perf tests don't actually test the right thing. Namely, they don't > prevent performance drift over time (if a given test is allowed to degrade > by 2% every commit, it can take a 100% performance hit in just 35 commits). > While it is important to measure performance, and to avoid too egregious > performance degradation in a given commit, it's usually performance over > time which matters. I don't really know how to apply it to collaborative > development, and help maintain healthy performance. But flagging > performance regressions in MRs, while not making them block batched merges > sounds like a reasonable compromise. > > > On Wed, Mar 17, 2021 at 9:34 AM Moritz Angermann < > moritz.angermann at gmail.com> wrote: > >> *why* is a very good question. The MR fixing it is here: >> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5275 >> >> On Wed, Mar 17, 2021 at 4:26 PM Spiwack, Arnaud >> wrote: >> >>> Then I have a question: why are there two pipelines running on each >>> merge batch? >>> >>> On Wed, Mar 17, 2021 at 9:22 AM Moritz Angermann < >>> moritz.angermann at gmail.com> wrote: >>> >>>> No it wasn't. It was about the stat failures described in the next >>>> paragraph. I could have been more clear about that. My apologies! >>>> >>>> On Wed, Mar 17, 2021 at 4:14 PM Spiwack, Arnaud < >>>> arnaud.spiwack at tweag.io> wrote: >>>> >>>>> >>>>> and if either of both (see below) failed, marge's merge would fail as >>>>>> well. >>>>>> >>>>> >>>>> Re: “see below” is this referring to a missing part of your email? >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at richarde.dev Wed Mar 17 13:39:08 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 17 Mar 2021 13:39:08 +0000 Subject: On CI In-Reply-To: References: Message-ID: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> > On Mar 17, 2021, at 6:18 AM, Moritz Angermann wrote: > > But what do we expect of patch authors? Right now if five people write patches to GHC, and each of them eventually manage to get their MRs green, after a long review, they finally see it assigned to marge, and then it starts failing? Their patch on its own was fine, but their aggregate with other people's code leads to regressions? So we now expect all patch authors together to try to figure out what happened? Figuring out why something regressed is hard enough, and we only have a very few people who are actually capable of debugging this. Thus I believe it would end up with Ben, Andreas, Matthiew, Simon, ... or someone else from GHC HQ anyway to figure out why it regressed, be it in the Review Stage, or dissecting a marge aggregate, or on master. I have previously posted against the idea of allowing Marge to accept regressions... but the paragraph above is sadly convincing. Maybe Simon is right about opening up the windows to, say, be 100% (which would catch a 10x regression) instead of infinite, but I'm now convinced that Marge should be very generous in allowing regressions -- provided we also have some way of monitoring drift over time. Separately, I've been concerned for some time about the peculiarity of our perf tests. For example, I'd be quite happy to accept a 25% regression on T9872c if it yielded a 1% improvement on compiling Cabal. T9872 is very very very strange! (Maybe if *all* the T9872 tests regressed, I'd be more worried.) I would be very happy to learn that some more general, representative tests are included in our examinations. Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgraf1337 at gmail.com Wed Mar 17 13:47:08 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Wed, 17 Mar 2021 14:47:08 +0100 Subject: On CI In-Reply-To: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> Message-ID: Re: Performance drift: I opened https://gitlab.haskell.org/ghc/ghc/-/issues/17658 a while ago with an idea of how to measure drift a bit better. It's basically an automatically checked version of "Ben stares at performance reports every two weeks and sees that T9872 has regressed by 10% since 9.0" Maybe we can have Marge check for drift and each individual MR for incremental perf regressions? Sebastian Am Mi., 17. März 2021 um 14:40 Uhr schrieb Richard Eisenberg < rae at richarde.dev>: > > > On Mar 17, 2021, at 6:18 AM, Moritz Angermann > wrote: > > But what do we expect of patch authors? Right now if five people write > patches to GHC, and each of them eventually manage to get their MRs green, > after a long review, they finally see it assigned to marge, and then it > starts failing? Their patch on its own was fine, but their aggregate with > other people's code leads to regressions? So we now expect all patch > authors together to try to figure out what happened? Figuring out why > something regressed is hard enough, and we only have a very few people who > are actually capable of debugging this. Thus I believe it would end up with > Ben, Andreas, Matthiew, Simon, ... or someone else from GHC HQ anyway to > figure out why it regressed, be it in the Review Stage, or dissecting a > marge aggregate, or on master. > > > I have previously posted against the idea of allowing Marge to accept > regressions... but the paragraph above is sadly convincing. 
Maybe Simon is > right about opening up the windows to, say, be 100% (which would catch a > 10x regression) instead of infinite, but I'm now convinced that Marge > should be very generous in allowing regressions -- provided we also have > some way of monitoring drift over time. > > Separately, I've been concerned for some time about the peculiarity of our > perf tests. For example, I'd be quite happy to accept a 25% regression on > T9872c if it yielded a 1% improvement on compiling Cabal. T9872 is very > very very strange! (Maybe if *all* the T9872 tests regressed, I'd be more > worried.) I would be very happy to learn that some more general, > representative tests are included in our examinations. > > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.ericson at obsidian.systems Wed Mar 17 15:06:28 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Wed, 17 Mar 2021 11:06:28 -0400 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> Message-ID: <7a45b959-ea2a-6b2c-3158-08049d234ab0@obsidian.systems> Yes, I think the counter point of "automating what Ben does" so people besides Ben can do it is very important. In this case, I think a good thing we could do is asynchronously build more of master post-merge, such as use the perf stats to automatically bisect anything that is fishy, including within marge bot roll-ups which wouldn't be built by the regular workflow anyways. I also agree with Sebastian that the overfit/overly-synthetic nature of our current tests + the sketchy way we ignored drift makes the current approach worth abandoning in any event. 
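[Editor's sketch: the "automatically bisect anything that is fishy" idea above could look roughly like this. Hypothetical Python; `build_and_measure` is an assumed callback that checks out a commit, builds it, and returns the metric, and nothing like this exists in GHC's CI today.]

```python
def bisect_regression(commits, baseline, window, build_and_measure):
    """Find the first commit in `commits` (ordered oldest-to-newest)
    whose metric exceeds baseline * (1 + window), assuming the
    regression persists once introduced. Returns that commit, or
    None if no commit regresses.

    Only O(log n) builds are needed, so this is affordable as an
    asynchronous post-merge job even for large marge batches.
    """
    limit = baseline * (1 + window)
    lo, hi, culprit = 0, len(commits) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if build_and_measure(commits[mid]) > limit:
            culprit = commits[mid]
            hi = mid - 1  # regression already present: look earlier
        else:
            lo = mid + 1  # still within the window: look later
    return culprit
```

[In practice `git bisect run` with a metric-checking script gives the same effect; the point here is only that the culprit inside a batch can be found mechanically, without any human staring at the roll-up.]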
The fact that the gold standard must include tests of larger, "real world" code, which unfortunately takes longer to build, I also think is a point towards this asynchronous approach: We trade MR latency for stat latency, but better utilize our build machines and get better stats, and when a human is to fix something a few days later, they have a much better foundation to start their investigation. Finally I agree with SPJ that for fairness and sustainability's sake, the person investigating issues after the fact should ideally be the MR authors, and definitely definitely not Ben. But I hope that better stats, nice looking graphs, and maybe a system to automatically ping MR authors, will make the perf debugging much more accessible enabling that goal. John On 3/17/21 9:47 AM, Sebastian Graf wrote: > Re: Performance drift: I opened > https://gitlab.haskell.org/ghc/ghc/-/issues/17658 > a while ago with > an idea of how to measure drift a bit better. > It's basically an automatically checked version of "Ben stares at > performance reports every two weeks and sees that T9872 has regressed > by 10% since 9.0" > > Maybe we can have Marge check for drift and each individual MR for > incremental perf regressions? > > Sebastian > > Am Mi., 17. März 2021 um 14:40 Uhr schrieb Richard Eisenberg > >: > > > >> On Mar 17, 2021, at 6:18 AM, Moritz Angermann >> > >> wrote: >> >> But what do we expect of patch authors? Right now if five people >> write patches to GHC, and each of them eventually manage to get >> their MRs green, after a long review, they finally see it >> assigned to marge, and then it starts failing? Their patch on its >> own was fine, but their aggregate with other people's code leads >> to regressions? So we now expect all patch authors together to >> try to figure out what happened? Figuring out why something >> regressed is hard enough, and we only have a very few people who >> are actually capable of debugging this. 
Thus I believe it would >> end up with Ben, Andreas, Matthiew, Simon, ... or someone else >> from GHC HQ anyway to figure out why it regressed, be it in the >> Review Stage, or dissecting a marge aggregate, or on master. > > I have previously posted against the idea of allowing Marge to > accept regressions... but the paragraph above is sadly convincing. > Maybe Simon is right about opening up the windows to, say, be 100% > (which would catch a 10x regression) instead of infinite, but I'm > now convinced that Marge should be very generous in allowing > regressions -- provided we also have some way of monitoring drift > over time. > > Separately, I've been concerned for some time about the > peculiarity of our perf tests. For example, I'd be quite happy to > accept a 25% regression on T9872c if it yielded a 1% improvement > on compiling Cabal. T9872 is very very very strange! (Maybe if > *all* the T9872 tests regressed, I'd be more worried.) I would be > very happy to learn that some more general, representative tests > are included in our examinations. > > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From klebinger.andreas at gmx.at Wed Mar 17 15:16:10 2021 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 17 Mar 2021 16:16:10 +0100 Subject: On CI In-Reply-To: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> Message-ID: <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> > I'd be quite happy to accept a 25% regression on T9872c if it yielded a 1% improvement on compiling Cabal. T9872 is very very very strange! (Maybe if *all* the T9872 tests regressed, I'd be more worried.) While I fully agree with this, we should *always* want to know if a small synthetic benchmark regresses by a lot. Or in other words: we don't want CI to ever accept such a regression for us; the developer of a patch should need to explicitly ok it. Otherwise we just slow down a lot of seldom-used code paths by a lot. Now that isn't really an issue anyway, I think. The question is rather: is 2% a large enough regression to worry about? 5%? 10%? Cheers, Andreas Am 17/03/2021 um 14:39 schrieb Richard Eisenberg: > > >> On Mar 17, 2021, at 6:18 AM, Moritz Angermann >> > wrote: >> >> But what do we expect of patch authors? Right now if five people >> write patches to GHC, and each of them eventually manage to get their >> MRs green, after a long review, they finally see it assigned to >> marge, and then it starts failing? Their patch on its own was fine, >> but their aggregate with other people's code leads to regressions? So >> we now expect all patch authors together to try to figure out what >> happened? Figuring out why something regressed is hard enough, and we >> only have a very few people who are actually capable of debugging >> this. Thus I believe it would end up with Ben, Andreas, Matthiew, >> Simon, ...
or someone else from GHC HQ anyway to figure out why it >> regressed, be it in the Review Stage, or dissecting a marge >> aggregate, or on master. > > I have previously posted against the idea of allowing Marge to accept > regressions... but the paragraph above is sadly convincing. Maybe > Simon is right about opening up the windows to, say, be 100% (which > would catch a 10x regression) instead of infinite, but I'm now > convinced that Marge should be very generous in allowing regressions > -- provided we also have some way of monitoring drift over time. > > Separately, I've been concerned for some time about the peculiarity of > our perf tests. For example, I'd be quite happy to accept a 25% > regression on T9872c if it yielded a 1% improvement on compiling > Cabal. T9872 is very very very strange! (Maybe if *all* the T9872 > tests regressed, I'd be more worried.) I would be very happy to learn > that some more general, representative tests are included in our > examinations. > > Richard > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From merijn at inconsistent.nl Wed Mar 17 16:01:53 2021 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Wed, 17 Mar 2021 17:01:53 +0100 Subject: On CI In-Reply-To: <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> Message-ID: <22A5DA78-973C-48EF-8B3F-49CB6D8C2DF1@inconsistent.nl> On 17 Mar 2021, at 16:16, Andreas Klebinger wrote: > > While I fully agree with this. We should *always* want to know if a small syntetic benchmark regresses by a lot. > Or in other words we don't want CI to accept such a regression for us ever, but the developer of a patch should need to explicitly ok it. 
> > Otherwise we just slow down a lot of seldom-used code paths by a lot. > > Now that isn't really an issue anyway I think. The question is rather is 2% a large enough regression to worry about? 5%? 10%? You probably want a sliding window anyway. Having N 1.8% regressions in a row can still slow things down a lot, while a 3% regression after a 5% improvement is probably fine. - Merijn -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From oleg.grenrus at iki.fi Wed Mar 17 17:15:50 2021 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Wed, 17 Mar 2021 19:15:50 +0200 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? Message-ID: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> I have the following question: My lexer-rules-related proposal was recently accepted. The biggest part of getting it in is writing documentation for it. While looking at the Divergence from Haskell 98 and Haskell 2010 section of the user manual, in particular Lexical syntax, it already says "See GHC Proposal #229 for the precise rules." Can I do just the same? (I think there was an implicit acceptance of that practice in e.g. https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) However, I think that referring to proposal text for "essential" bits of information is a bad practice. Because GHC proposals are sometimes amended, one has to look into GitHub history to find out what was there at the particular time point of a GHC release. Very laborious. --- Currently there are 23 references to about a dozen proposals. Examples are passages like     In 9.0, the behavior of this extension changed, and now we require that a negative literal must not be preceded by a closing token (see     `GHC Proposal #229 `__     for the definition of a closing token).
or      a future release will be      turned off by default and then possibly removed. The reasons for this and      the deprecation schedule are described in `GHC proposal #30      `__. And there are better examples, which are references for more information, not essential ones, like      See the proposal `DuplicateRecordFields without ambiguous field access      `_      and the documentation on :extension:`DuplicateRecordFields` for further details. (I'd put the internal user manual link first), or     But these automatic eta-expansions may silently change the semantics of the user's program,     and deep skolemisation was removed from the language by     `GHC Proposal #287 `__.     This proposal has many more examples. --- So to boil down my question, can I write     Lexical syntax of identifiers and decimal numbers differs slightly from the Haskell report.     See GHC Proposal #403 for the precise rules and differences.
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) > > However, I think that referring to proposals text for "essential" bits > of information is a bad practice. > Because GHC proposals are sometimes amended, one have to look into > GitHub history to find out what were there for a particular time point > of a GHC release. Very laborous. > > --- > > Currently there is 23 references to about a dozen of proposals. An > example are passages like > >     In 9.0, the behavior of this extension changed, and now we require > that a negative literal must not be preceded by a closing token (see >     `GHC Proposal #229 > `__ >     for the definition of a closing token). > > or > >      a future release will be >      turned off by default and then possibly removed. The reasons for > this and >      the deprecation schedule are described in `GHC proposal #30 >      > `__. > > And there are better examples, which are references for more information, > not essential one, like > >      See the proposal `DuplicateRecordFields without ambiguous field access >      > `_ >      and the documentation on :extension:`DuplicateRecordFields` for > further details. > > (I'd put the internal user manual link first), or > >     But these automatic eta-expansions may silently change the semantics > of the user's program, >     and deep skolemisation was removed from the language by >     `GHC Proposal #287 > `__. >     This proposal has many more examples. > > --- > > So to boil down my question, can I write > >     Lexical syntax of identifiers and decimal numbers differs slightly > from the Haskell report. >     See GHC Proposal #403 for the precise rules and differences. 
> > - Oleg > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Wed Mar 17 18:35:54 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 17 Mar 2021 18:35:54 +0000 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? In-Reply-To: <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> Message-ID: <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> My vote is that the manual should be self-standing. References to proposals are good, but as supplementary/background reading only. My gold standard always is: if we lost all the source code to GHC and all its compiled versions, but just had the manual and Haskell Reports (but without external references), we could re-create an interface-equivalent implementation. (I say "interface-equivalent" because we do not specify all the details of e.g. optimizations and interface files.) We are very, very far from that gold standard. Yet I still think it's a good standard to aim for when drafting new sections of the manual. Of course, authors are quite free to copy-and-paste from proposal text to form a new manual chapter. If we agree about this, it would be good to lay this out somewhere, perhaps in the "care and feeding" chapter. Richard > On Mar 17, 2021, at 1:21 PM, Oleg Grenrus wrote: > > I forgot to link a bit of relevant discussion from > https://github.com/ghc-proposals/ghc-proposals/pull/406, > is there a (silent) consensus on the issue? > > - Oleg > > On 17.3.2021 19.15, Oleg Grenrus wrote: >> I have a following question: >> My lexer rules related proposal was recently accepted. The biggest part >> of getting it in is writing documentation for it. 
While looking at >> Divergence from Haskell 98 and Haskell 2010 section of the user manual, >> in particular Lexical syntax, it already has See "GHC Proposal #229 for >> the precise rules.". >> >> Can I just the same? (I think there was an implicit acceptance of that >> practice in e.g. >> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) >> >> However, I think that referring to proposals text for "essential" bits >> of information is a bad practice. >> Because GHC proposals are sometimes amended, one have to look into >> GitHub history to find out what were there for a particular time point >> of a GHC release. Very laborous. >> >> --- >> >> Currently there is 23 references to about a dozen of proposals. An >> example are passages like >> >> In 9.0, the behavior of this extension changed, and now we require >> that a negative literal must not be preceded by a closing token (see >> `GHC Proposal #229 >> `__ >> for the definition of a closing token). >> >> or >> >> a future release will be >> turned off by default and then possibly removed. The reasons for >> this and >> the deprecation schedule are described in `GHC proposal #30 >> >> `__. >> >> And there are better examples, which are references for more information, >> not essential one, like >> >> See the proposal `DuplicateRecordFields without ambiguous field access >> >> `_ >> and the documentation on :extension:`DuplicateRecordFields` for >> further details. >> >> (I'd put the internal user manual link first), or >> >> But these automatic eta-expansions may silently change the semantics >> of the user's program, >> and deep skolemisation was removed from the language by >> `GHC Proposal #287 >> `__. >> This proposal has many more examples. >> >> --- >> >> So to boil down my question, can I write >> >> Lexical syntax of identifiers and decimal numbers differs slightly >> from the Haskell report. >> See GHC Proposal #403 for the precise rules and differences. 
>> >> - Oleg >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From allbery.b at gmail.com Wed Mar 17 18:37:41 2021 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 17 Mar 2021 14:37:41 -0400 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? In-Reply-To: <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> Message-ID: I'm inclined to agree with this, especially given the argument that it'll depend on the state of a proposal at a given time. On Wed, Mar 17, 2021 at 2:36 PM Richard Eisenberg wrote: > My vote is that the manual should be self-standing. References to > proposals are good, but as supplementary/background reading only. My gold > standard always is: if we lost all the source code to GHC and all its > compiled versions, but just had the manual and Haskell Reports (but without > external references), we could re-create an interface-equivalent > implementation. (I say "interface-equivalent" because we do not specify all > the details of e.g. optimizations and interface files.) We are very, very > far from that gold standard. Yet I still think it's a good standard to aim > for when drafting new sections of the manual. > > Of course, authors are quite free to copy-and-paste from proposal text to > form a new manual chapter. > > If we agree about this, it would be good to lay this out somewhere, > perhaps in the "care and feeding" chapter. 
> > Richard > > > On Mar 17, 2021, at 1:21 PM, Oleg Grenrus wrote: > > > > I forgot to link a bit of relevant discussion from > > https://github.com/ghc-proposals/ghc-proposals/pull/406, > > is there a (silent) consensus on the issue? > > > > - Oleg > > > > On 17.3.2021 19.15, Oleg Grenrus wrote: > >> I have a following question: > >> My lexer rules related proposal was recently accepted. The biggest part > >> of getting it in is writing documentation for it. While looking at > >> Divergence from Haskell 98 and Haskell 2010 section of the user manual, > >> in particular Lexical syntax, it already has See "GHC Proposal #229 for > >> the precise rules.". > >> > >> Can I just the same? (I think there was an implicit acceptance of that > >> practice in e.g. > >> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) > >> > >> However, I think that referring to proposals text for "essential" bits > >> of information is a bad practice. > >> Because GHC proposals are sometimes amended, one have to look into > >> GitHub history to find out what were there for a particular time point > >> of a GHC release. Very laborous. > >> > >> --- > >> > >> Currently there is 23 references to about a dozen of proposals. An > >> example are passages like > >> > >> In 9.0, the behavior of this extension changed, and now we require > >> that a negative literal must not be preceded by a closing token (see > >> `GHC Proposal #229 > >> < > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0229-whitespace-bang-patterns.rst > >`__ > >> for the definition of a closing token). > >> > >> or > >> > >> a future release will be > >> turned off by default and then possibly removed. The reasons for > >> this and > >> the deprecation schedule are described in `GHC proposal #30 > >> > >> < > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0030-remove-star-kind.rst > >`__. 
> >> > >> And there are better examples, which are references for more > information, > >> not essential one, like > >> > >> See the proposal `DuplicateRecordFields without ambiguous field > access > >> > >> < > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0366-no-ambiguous-field-access.rst > >`_ > >> and the documentation on :extension:`DuplicateRecordFields` for > >> further details. > >> > >> (I'd put the internal user manual link first), or > >> > >> But these automatic eta-expansions may silently change the semantics > >> of the user's program, > >> and deep skolemisation was removed from the language by > >> `GHC Proposal #287 > >> < > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0287-simplify-subsumption.rst > >`__. > >> This proposal has many more examples. > >> > >> --- > >> > >> So to boil down my question, can I write > >> > >> Lexical syntax of identifiers and decimal numbers differs slightly > >> from the Haskell report. > >> See GHC Proposal #403 for the precise rules and differences. > >> > >> - Oleg > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Wed Mar 17 18:42:38 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Wed, 17 Mar 2021 14:42:38 -0400 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? 
In-Reply-To: <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> Message-ID: <51BC9434-E024-42B1-A573-C8DEF6D36F96@dukhovni.org> > On Mar 17, 2021, at 2:35 PM, Richard Eisenberg wrote: > > My vote is that the manual should be self-standing. References to proposals are good, but as supplementary/background reading only. My gold standard always is: if we lost all the source code to GHC and all its compiled versions, but just had the manual and Haskell Reports (but without external references), we could re-create an interface-equivalent implementation. (I say "interface-equivalent" because we do not specify all the details of e.g. optimizations and interface files.) We are very, very far from that gold standard. Yet I still think it's a good standard to aim for when drafting new sections of the manual. I strongly agree. Tracking down the evolving proposals is rather a chore... -- Viktor. From oleg.grenrus at iki.fi Wed Mar 17 18:52:42 2021 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Wed, 17 Mar 2021 20:52:42 +0200 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? In-Reply-To: <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> Message-ID: <1f8641f8-8d39-5381-85e3-0e5aca61907b@iki.fi> To check that I understand this: - Bad: "See the proposal for the definition of a closing token" (important definition) - Acceptable: "The reasons for this ..." (not essential information for replicating the functionality, though maybe a one-sentence summary would be good?)
- Fine: "...the deprecation schedule are described ..." (if code is lost, the schedule will probably change as well) - Fine: "This proposal has many more examples." (manual has some examples already) - Bad: "Lexical syntax of identifiers and decimal numbers differs slightly from the Haskell report. See GHC Proposal #403 for the precise rules and differences." (doesn't specify the changes, vague, needs external reference). Do we agree on this interpretation? - Oleg On 17.3.2021 20.35, Richard Eisenberg wrote: > My vote is that the manual should be self-standing. References to proposals are good, but as supplementary/background reading only. My gold standard always is: if we lost all the source code to GHC and all its compiled versions, but just had the manual and Haskell Reports (but without external references), we could re-create an interface-equivalent implementation. (I say "interface-equivalent" because we do not specify all the details of e.g. optimizations and interface files.) We are very, very far from that gold standard. Yet I still think it's a good standard to aim for when drafting new sections of the manual. > > Of course, authors are quite free to copy-and-paste from proposal text to form a new manual chapter. > > If we agree about this, it would be good to lay this out somewhere, perhaps in the "care and feeding" chapter. > > Richard > >> On Mar 17, 2021, at 1:21 PM, Oleg Grenrus wrote: >> >> I forgot to link a bit of relevant discussion from >> https://github.com/ghc-proposals/ghc-proposals/pull/406, >> is there a (silent) consensus on the issue? >> >> - Oleg >> >> On 17.3.2021 19.15, Oleg Grenrus wrote: >>> I have a following question: >>> My lexer rules related proposal was recently accepted. The biggest part >>> of getting it in is writing documentation for it. 
While looking at >>> Divergence from Haskell 98 and Haskell 2010 section of the user manual, >>> in particular Lexical syntax, it already has See "GHC Proposal #229 for >>> the precise rules.". >>> >>> Can I just the same? (I think there was an implicit acceptance of that >>> practice in e.g. >>> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) >>> >>> However, I think that referring to proposals text for "essential" bits >>> of information is a bad practice. >>> Because GHC proposals are sometimes amended, one have to look into >>> GitHub history to find out what were there for a particular time point >>> of a GHC release. Very laborous. >>> >>> --- >>> >>> Currently there is 23 references to about a dozen of proposals. An >>> example are passages like >>> >>> In 9.0, the behavior of this extension changed, and now we require >>> that a negative literal must not be preceded by a closing token (see >>> `GHC Proposal #229 >>> `__ >>> for the definition of a closing token). >>> >>> or >>> >>> a future release will be >>> turned off by default and then possibly removed. The reasons for >>> this and >>> the deprecation schedule are described in `GHC proposal #30 >>> >>> `__. >>> >>> And there are better examples, which are references for more information, >>> not essential one, like >>> >>> See the proposal `DuplicateRecordFields without ambiguous field access >>> >>> `_ >>> and the documentation on :extension:`DuplicateRecordFields` for >>> further details. >>> >>> (I'd put the internal user manual link first), or >>> >>> But these automatic eta-expansions may silently change the semantics >>> of the user's program, >>> and deep skolemisation was removed from the language by >>> `GHC Proposal #287 >>> `__. >>> This proposal has many more examples. >>> >>> --- >>> >>> So to boil down my question, can I write >>> >>> Lexical syntax of identifiers and decimal numbers differs slightly >>> from the Haskell report. 
>>> See GHC Proposal #403 for the precise rules and differences. >>> >>> - Oleg >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at well-typed.com Wed Mar 17 18:53:27 2021 From: ben at well-typed.com (Ben Gamari) Date: Wed, 17 Mar 2021 14:53:27 -0400 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? In-Reply-To: <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> Message-ID: <2B99A68D-3F51-4421-AB85-B0ED752E4C08@well-typed.com> I, too, agree with Richard here. In fact, one (small) reason why we originally chose RestructuredText as the proposal syntax is to make it easy to turn the proposal into users guide documentation. Cheers, - Ben On March 17, 2021 2:35:54 PM EDT, Richard Eisenberg wrote: >My vote is that the manual should be self-standing. References to >proposals are good, but as supplementary/background reading only. My >gold standard always is: if we lost all the source code to GHC and all >its compiled versions, but just had the manual and Haskell Reports (but >without external references), we could re-create an >interface-equivalent implementation. (I say "interface-equivalent" >because we do not specify all the details of e.g. optimizations and >interface files.) We are very, very far from that gold standard. Yet I >still think it's a good standard to aim for when drafting new sections >of the manual. > >Of course, authors are quite free to copy-and-paste from proposal text >to form a new manual chapter. 
> >If we agree about this, it would be good to lay this out somewhere, >perhaps in the "care and feeding" chapter. > >Richard > >> On Mar 17, 2021, at 1:21 PM, Oleg Grenrus >wrote: >> >> I forgot to link a bit of relevant discussion from >> https://github.com/ghc-proposals/ghc-proposals/pull/406, >> is there a (silent) consensus on the issue? >> >> - Oleg >> >> On 17.3.2021 19.15, Oleg Grenrus wrote: >>> I have a following question: >>> My lexer rules related proposal was recently accepted. The biggest >part >>> of getting it in is writing documentation for it. While looking at >>> Divergence from Haskell 98 and Haskell 2010 section of the user >manual, >>> in particular Lexical syntax, it already has See "GHC Proposal #229 >for >>> the precise rules.". >>> >>> Can I just the same? (I think there was an implicit acceptance of >that >>> practice in e.g. >>> >https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) >>> >>> However, I think that referring to proposals text for "essential" >bits >>> of information is a bad practice. >>> Because GHC proposals are sometimes amended, one have to look into >>> GitHub history to find out what were there for a particular time >point >>> of a GHC release. Very laborous. >>> >>> --- >>> >>> Currently there is 23 references to about a dozen of proposals. An >>> example are passages like >>> >>> In 9.0, the behavior of this extension changed, and now we >require >>> that a negative literal must not be preceded by a closing token (see >>> `GHC Proposal #229 >>> >`__ >>> for the definition of a closing token). >>> >>> or >>> >>> a future release will be >>> turned off by default and then possibly removed. The reasons >for >>> this and >>> the deprecation schedule are described in `GHC proposal #30 >>> >>> >`__. 
>>> >>> And there are better examples, which are references for more >information, >>> not essential one, like >>> >>> See the proposal `DuplicateRecordFields without ambiguous field >access >>> >>> >`_ >>> and the documentation on :extension:`DuplicateRecordFields` for >>> further details. >>> >>> (I'd put the internal user manual link first), or >>> >>> But these automatic eta-expansions may silently change the >semantics >>> of the user's program, >>> and deep skolemisation was removed from the language by >>> `GHC Proposal #287 >>> >`__. >>> This proposal has many more examples. >>> >>> --- >>> >>> So to boil down my question, can I write >>> >>> Lexical syntax of identifiers and decimal numbers differs >slightly >>> from the Haskell report. >>> See GHC Proposal #403 for the precise rules and differences. >>> >>> - Oleg >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Wed Mar 17 19:05:20 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 17 Mar 2021 19:05:20 +0000 Subject: Is referring to GHC-proposals in GHC user manual bad practice or not? 
In-Reply-To: <1f8641f8-8d39-5381-85e3-0e5aca61907b@iki.fi> References: <6a9d0fbb-4cf1-78a5-1d56-6dac3c3dff7e@iki.fi> <6752d566-2eb5-93e4-6328-b5f7f09be109@iki.fi> <010f0178417908cf-eedefe38-32a3-4234-bb56-bab58dbbaae1-000000@us-east-2.amazonses.com> <1f8641f8-8d39-5381-85e3-0e5aca61907b@iki.fi> Message-ID: <010f01784193fccb-7de0b4f1-1420-41ac-9ddb-621af1652a13-000000@us-east-2.amazonses.com> > On Mar 17, 2021, at 2:52 PM, Oleg Grenrus wrote: > > > Do we agree on this interpretation? Yes, fully. Thanks for illustrating with examples. Richard > > - Oleg > > On 17.3.2021 20.35, Richard Eisenberg wrote: >> My vote is that the manual should be self-standing. References to proposals are good, but as supplementary/background reading only. My gold standard always is: if we lost all the source code to GHC and all its compiled versions, but just had the manual and Haskell Reports (but without external references), we could re-create an interface-equivalent implementation. (I say "interface-equivalent" because we do not specify all the details of e.g. optimizations and interface files.) We are very, very far from that gold standard. Yet I still think it's a good standard to aim for when drafting new sections of the manual. >> >> Of course, authors are quite free to copy-and-paste from proposal text to form a new manual chapter. >> >> If we agree about this, it would be good to lay this out somewhere, perhaps in the "care and feeding" chapter. >> >> Richard >> >>> On Mar 17, 2021, at 1:21 PM, Oleg Grenrus wrote: >>> >>> I forgot to link a bit of relevant discussion from >>> https://github.com/ghc-proposals/ghc-proposals/pull/406, >>> is there a (silent) consensus on the issue? >>> >>> - Oleg >>> >>> On 17.3.2021 19.15, Oleg Grenrus wrote: >>>> I have a following question: >>>> My lexer rules related proposal was recently accepted. The biggest part >>>> of getting it in is writing documentation for it. 
While looking at >>>> Divergence from Haskell 98 and Haskell 2010 section of the user manual, >>>> in particular Lexical syntax, it already has See "GHC Proposal #229 for >>>> the precise rules.". >>>> >>>> Can I just the same? (I think there was an implicit acceptance of that >>>> practice in e.g. >>>> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664#note_238759) >>>> >>>> However, I think that referring to proposals text for "essential" bits >>>> of information is a bad practice. >>>> Because GHC proposals are sometimes amended, one have to look into >>>> GitHub history to find out what were there for a particular time point >>>> of a GHC release. Very laborous. >>>> >>>> --- >>>> >>>> Currently there is 23 references to about a dozen of proposals. An >>>> example are passages like >>>> >>>> In 9.0, the behavior of this extension changed, and now we require >>>> that a negative literal must not be preceded by a closing token (see >>>> `GHC Proposal #229 >>>> `__ >>>> for the definition of a closing token). >>>> >>>> or >>>> >>>> a future release will be >>>> turned off by default and then possibly removed. The reasons for >>>> this and >>>> the deprecation schedule are described in `GHC proposal #30 >>>> >>>> `__. >>>> >>>> And there are better examples, which are references for more information, >>>> not essential one, like >>>> >>>> See the proposal `DuplicateRecordFields without ambiguous field access >>>> >>>> `_ >>>> and the documentation on :extension:`DuplicateRecordFields` for >>>> further details. >>>> >>>> (I'd put the internal user manual link first), or >>>> >>>> But these automatic eta-expansions may silently change the semantics >>>> of the user's program, >>>> and deep skolemisation was removed from the language by >>>> `GHC Proposal #287 >>>> `__. >>>> This proposal has many more examples. 
>>>> >>>> --- >>>> >>>> So to boil down my question, can I write >>>> >>>> Lexical syntax of identifiers and decimal numbers differs slightly >>>> from the Haskell report. >>>> See GHC Proposal #403 for the precise rules and differences. >>>> >>>> - Oleg >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From karel.gardas at centrum.cz Wed Mar 17 22:21:41 2021 From: karel.gardas at centrum.cz (Karel Gardas) Date: Wed, 17 Mar 2021 23:21:41 +0100 Subject: On CI In-Reply-To: <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> Message-ID: <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> On 3/17/21 4:16 PM, Andreas Klebinger wrote: > Now that isn't really an issue anyway I think. The question is rather is > 2% a large enough regression to worry about? 5%? 10%? 5-10% is still around system noise, even on a lightly loaded workstation. I'm not sure whether CI runs on shared cloud resources, where the noise may be even higher. I've done a simple experiment of pinning ghc while compiling ghc-cabal, and I've been able to "speed" it up by 5-10% on a W-2265. Also, following this CI/performance-regressions discussion, I'm not entirely sure whether this isn't just a witch-hunt that mostly hurts the most active GHC developers. Another idea may be to give up on performance-regression testing in CI altogether and invest the saved resources into a proper investigation of the performance of GHC and Haskell programs. I'm not sure whether this wouldn't be more beneficial in the longer term. Just one random number thrown into the ring: Linux's perf claims that nearly every second L3 cache access in the example above ends with a cache miss.
Is it a good number or bad number? See stats below (perf stat -d on ghc with +RTS -T -s -RTS'). Good luck to anybody working on that! Karel Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ... 61,020,836,136 bytes allocated in the heap 5,229,185,608 bytes copied during GC 301,742,768 bytes maximum residency (19 sample(s)) 3,533,000 bytes maximum slop 840 MiB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 2012 colls, 0 par 5.725s 5.731s 0.0028s 0.1267s Gen 1 19 colls, 0 par 1.695s 1.696s 0.0893s 0.2636s TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) INIT time 0.000s ( 0.000s elapsed) MUT time 27.849s ( 32.163s elapsed) GC time 7.419s ( 7.427s elapsed) EXIT time 0.000s ( 0.010s elapsed) Total time 35.269s ( 39.601s elapsed) Alloc rate 2,191,122,004 bytes per MUT second Productivity 79.0% of total user, 81.2% of total elapsed Performance counter stats for '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall -optc-Wall -O0 -hide-all-packages -package ghc-prim -package base -package binary -package array -package transformers -package time -package containers -package bytestring -package deepseq -package process -package pretty -package directory -package filepath -package template-haskell -package unix --make utils/ghc-cabal/Main.hs -o utils/ghc-cabal/dist/build/tmp/ghc-cabal -no-user-package-db -Wall -fno-warn-unused-imports -fno-warn-warnings-deprecations -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir bootstrapping -hidir bootstrapping libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs -ilibraries/Cabal/Cabal -ilibraries/binary/src -ilibraries/filepath -ilibraries/hpc -ilibraries/mtl -ilibraries/text/src libraries/text/cbits/cbits.c -Ilibraries/text/include -ilibraries/parsec/src +RTS -T -s -RTS': 39,632.99 msec task-clock # 0.999 CPUs utilized 17,191 context-switches # 0.434 K/sec 0 cpu-migrations # 0.000 K/sec 899,930 page-faults # 0.023 
M/sec 177,636,979,975 cycles # 4.482 GHz (87.54%) 181,945,795,221 instructions # 1.02 insn per cycle (87.59%) 34,033,574,511 branches # 858.718 M/sec (87.42%) 1,664,969,299 branch-misses # 4.89% of all branches (87.48%) 41,522,737,426 L1-dcache-loads # 1047.681 M/sec (87.53%) 2,675,319,939 L1-dcache-load-misses # 6.44% of all L1-dcache hits (87.48%) 372,370,395 LLC-loads # 9.395 M/sec (87.49%) 173,614,140 LLC-load-misses # 46.62% of all LL-cache hits (87.46%) 39.663103602 seconds time elapsed 38.288158000 seconds user 1.358263000 seconds sys From oleg.grenrus at iki.fi Thu Mar 18 15:14:17 2021 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Thu, 18 Mar 2021 17:14:17 +0200 Subject: Pull request to editline Message-ID: <2a14c0b7-8a11-7d1c-669a-fe8e87c74288@iki.fi> Hi Judah, I'm sending you an email in case you haven't noticed the GitHub notifications.
I have a PR https://github.com/judah/haskeline/pull/153 (now open for two months). It's blocking work on Data.List specialization. Also, Ben has pinged you on https://github.com/judah/haskeline/issues/154 to make releases for 9.0.1 and 9.0.2. It would be great if my patch could be merged and released; that should make editline ready for 9.2 and maybe even 9.4 (I think we are too late with the refactor of Data.List for 9.2). - Oleg From judah.jacobson at gmail.com Thu Mar 18 16:32:37 2021 From: judah.jacobson at gmail.com (Judah Jacobson) Date: Thu, 18 Mar 2021 09:32:37 -0700 Subject: Pull request to editline In-Reply-To: <2a14c0b7-8a11-7d1c-669a-fe8e87c74288@iki.fi> References: <2a14c0b7-8a11-7d1c-669a-fe8e87c74288@iki.fi> Message-ID: Hi Oleg, I apologize for the delay in response. I have merged your Data.List PR and released haskeline-0.8.1.2 containing that change. I will also look into making releases corresponding to ghc-9.0.*. On Thu, Mar 18, 2021 at 8:14 AM Oleg Grenrus wrote: > Hi Judah, > > I'm sending you an email in case you haven't noticed the GitHub > notifications.
URL: From davean at xkcd.com Thu Mar 18 17:07:42 2021 From: davean at xkcd.com (davean) Date: Thu, 18 Mar 2021 13:07:42 -0400 Subject: On CI In-Reply-To: <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: That really shouldn't be near system noise for a well constructed performance test. You might be seeing things like thermal issues, etc though - good benchmarking is a serious subject. Also we're not talking wall clock tests, we're talking specific metrics. The machines do tend to be bare metal, but many of these are entirely CPU performance independent, memory timing independent, etc. Well not quite but that's a longer discussion. The investigation of Haskell code performance is a very good thing to do BTW, but you'd still want to avoid regressions in the improvements you made. How well we can do that and the cost of it is the primary issue here. -davean On Wed, Mar 17, 2021 at 6:22 PM Karel Gardas wrote: > On 3/17/21 4:16 PM, Andreas Klebinger wrote: > > Now that isn't really an issue anyway I think. The question is rather is > > 2% a large enough regression to worry about? 5%? 10%? > > 5-10% is still around system noise even on lightly loaded workstation. > Not sure if CI is not run on some shared cloud resources where it may be > even higher. > > I've done simple experiment of pining ghc compiling ghc-cabal and I've > been able to "speed" it up by 5-10% on W-2265. > > Also following this CI/performance regs discussion I'm not entirely sure > if this is not just a witch-hunt hurting/beating mostly most active GHC > developers. Another idea may be to give up on CI doing perf reg testing > at all and invest saved resources into proper investigation of > GHC/Haskell programs performance. Not sure, if this would not be more > beneficial longer term. 
> > Just one random number thrown to the ring. Linux's perf claims that > nearly every second L3 cache access on the example above ends with cache > miss. Is it a good number or bad number? See stats below (perf stat -d > on ghc with +RTS -T -s -RTS'). > > Good luck to anybody working on that! > > Karel > > > Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ... > 61,020,836,136 bytes allocated in the heap > 5,229,185,608 bytes copied during GC > 301,742,768 bytes maximum residency (19 sample(s)) > 3,533,000 bytes maximum slop > 840 MiB total memory in use (0 MB lost due to fragmentation) > > Tot time (elapsed) Avg pause Max > pause > Gen 0 2012 colls, 0 par 5.725s 5.731s 0.0028s > 0.1267s > Gen 1 19 colls, 0 par 1.695s 1.696s 0.0893s > 0.2636s > > TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) > > SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) > > INIT time 0.000s ( 0.000s elapsed) > MUT time 27.849s ( 32.163s elapsed) > GC time 7.419s ( 7.427s elapsed) > EXIT time 0.000s ( 0.010s elapsed) > Total time 35.269s ( 39.601s elapsed) > > Alloc rate 2,191,122,004 bytes per MUT second > > Productivity 79.0% of total user, 81.2% of total elapsed > > > Performance counter stats for > '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall -optc-Wall -O0 > -hide-all-packages -package ghc-prim -package base -package binary > -package array -package transformers -package time -package containers > -package bytestring -package deepseq -package process -package pretty > -package directory -package filepath -package template-haskell -package > unix --make utils/ghc-cabal/Main.hs -o > utils/ghc-cabal/dist/build/tmp/ghc-cabal -no-user-package-db -Wall > -fno-warn-unused-imports -fno-warn-warnings-deprecations > -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir bootstrapping -hidir > bootstrapping libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs > -ilibraries/Cabal/Cabal -ilibraries/binary/src -ilibraries/filepath > -ilibraries/hpc -ilibraries/mtl 
-ilibraries/text/src > libraries/text/cbits/cbits.c -Ilibraries/text/include > -ilibraries/parsec/src +RTS -T -s -RTS': > > 39,632.99 msec task-clock # 0.999 CPUs > utilized > 17,191 context-switches # 0.434 K/sec > > 0 cpu-migrations # 0.000 K/sec > > 899,930 page-faults # 0.023 M/sec > > 177,636,979,975 cycles # 4.482 GHz > (87.54%) > 181,945,795,221 instructions # 1.02 insn per > cycle (87.59%) > 34,033,574,511 branches # 858.718 M/sec > (87.42%) > 1,664,969,299 branch-misses # 4.89% of all > branches (87.48%) > 41,522,737,426 L1-dcache-loads # 1047.681 M/sec > (87.53%) > 2,675,319,939 L1-dcache-load-misses # 6.44% of all > L1-dcache hits (87.48%) > 372,370,395 LLC-loads # 9.395 M/sec > (87.49%) > 173,614,140 LLC-load-misses # 46.62% of all > LL-cache hits (87.46%) > > 39.663103602 seconds time elapsed > > 38.288158000 seconds user > 1.358263000 seconds sys > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Thu Mar 18 17:37:28 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Thu, 18 Mar 2021 18:37:28 +0100 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: To be clear: All performance tests that run as part of CI measure allocations only. No wall clock time. Those measurements are (mostly) deterministic and reproducible between compiles of the same worktree and not impacted by thermal issues/hardware at all. Am Do., 18. März 2021 um 18:09 Uhr schrieb davean : > That really shouldn't be near system noise for a well constructed > performance test. You might be seeing things like thermal issues, etc > though - good benchmarking is a serious subject. 
> Also we're not talking wall clock tests, we're talking specific metrics. > The machines do tend to be bare metal, but many of these are entirely CPU > performance independent, memory timing independent, etc. Well not quite but > that's a longer discussion. > > The investigation of Haskell code performance is a very good thing to do > BTW, but you'd still want to avoid regressions in the improvements you > made. How well we can do that and the cost of it is the primary issue here. > > -davean > > > On Wed, Mar 17, 2021 at 6:22 PM Karel Gardas > wrote: > >> On 3/17/21 4:16 PM, Andreas Klebinger wrote: >> > Now that isn't really an issue anyway I think. The question is rather is >> > 2% a large enough regression to worry about? 5%? 10%? >> >> 5-10% is still around system noise even on lightly loaded workstation. >> Not sure if CI is not run on some shared cloud resources where it may be >> even higher. >> >> I've done simple experiment of pining ghc compiling ghc-cabal and I've >> been able to "speed" it up by 5-10% on W-2265. >> >> Also following this CI/performance regs discussion I'm not entirely sure >> if this is not just a witch-hunt hurting/beating mostly most active GHC >> developers. Another idea may be to give up on CI doing perf reg testing >> at all and invest saved resources into proper investigation of >> GHC/Haskell programs performance. Not sure, if this would not be more >> beneficial longer term. >> >> Just one random number thrown to the ring. Linux's perf claims that >> nearly every second L3 cache access on the example above ends with cache >> miss. Is it a good number or bad number? See stats below (perf stat -d >> on ghc with +RTS -T -s -RTS'). >> >> Good luck to anybody working on that! >> >> Karel >> >> >> Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ... 
>> 61,020,836,136 bytes allocated in the heap >> 5,229,185,608 bytes copied during GC >> 301,742,768 bytes maximum residency (19 sample(s)) >> 3,533,000 bytes maximum slop >> 840 MiB total memory in use (0 MB lost due to fragmentation) >> >> Tot time (elapsed) Avg pause Max >> pause >> Gen 0 2012 colls, 0 par 5.725s 5.731s 0.0028s >> 0.1267s >> Gen 1 19 colls, 0 par 1.695s 1.696s 0.0893s >> 0.2636s >> >> TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) >> >> SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) >> >> INIT time 0.000s ( 0.000s elapsed) >> MUT time 27.849s ( 32.163s elapsed) >> GC time 7.419s ( 7.427s elapsed) >> EXIT time 0.000s ( 0.010s elapsed) >> Total time 35.269s ( 39.601s elapsed) >> >> Alloc rate 2,191,122,004 bytes per MUT second >> >> Productivity 79.0% of total user, 81.2% of total elapsed >> >> >> Performance counter stats for >> '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall -optc-Wall -O0 >> -hide-all-packages -package ghc-prim -package base -package binary >> -package array -package transformers -package time -package containers >> -package bytestring -package deepseq -package process -package pretty >> -package directory -package filepath -package template-haskell -package >> unix --make utils/ghc-cabal/Main.hs -o >> utils/ghc-cabal/dist/build/tmp/ghc-cabal -no-user-package-db -Wall >> -fno-warn-unused-imports -fno-warn-warnings-deprecations >> -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir bootstrapping -hidir >> bootstrapping libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs >> -ilibraries/Cabal/Cabal -ilibraries/binary/src -ilibraries/filepath >> -ilibraries/hpc -ilibraries/mtl -ilibraries/text/src >> libraries/text/cbits/cbits.c -Ilibraries/text/include >> -ilibraries/parsec/src +RTS -T -s -RTS': >> >> 39,632.99 msec task-clock # 0.999 CPUs >> utilized >> 17,191 context-switches # 0.434 K/sec >> >> 0 cpu-migrations # 0.000 K/sec >> >> 899,930 page-faults # 0.023 M/sec >> >> 177,636,979,975 cycles # 
4.482 GHz >> (87.54%) >> 181,945,795,221 instructions # 1.02 insn per >> cycle (87.59%) >> 34,033,574,511 branches # 858.718 M/sec >> (87.42%) >> 1,664,969,299 branch-misses # 4.89% of all >> branches (87.48%) >> 41,522,737,426 L1-dcache-loads # 1047.681 M/sec >> (87.53%) >> 2,675,319,939 L1-dcache-load-misses # 6.44% of all >> L1-dcache hits (87.48%) >> 372,370,395 LLC-loads # 9.395 M/sec >> (87.49%) >> 173,614,140 LLC-load-misses # 46.62% of all >> LL-cache hits (87.46%) >> >> 39.663103602 seconds time elapsed >> >> 38.288158000 seconds user >> 1.358263000 seconds sys >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davean at xkcd.com Thu Mar 18 17:39:51 2021 From: davean at xkcd.com (davean) Date: Thu, 18 Mar 2021 13:39:51 -0400 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: I left the wiggle room for things like longer wall time causing more time events in the IO Manager/RTS which can be a thermal/HW issue. They're small and indirect though -davean On Thu, Mar 18, 2021 at 1:37 PM Sebastian Graf wrote: > To be clear: All performance tests that run as part of CI measure > allocations only. No wall clock time. > Those measurements are (mostly) deterministic and reproducible between > compiles of the same worktree and not impacted by thermal issues/hardware > at all. > > Am Do., 18. März 2021 um 18:09 Uhr schrieb davean : > >> That really shouldn't be near system noise for a well constructed >> performance test. 
You might be seeing things like thermal issues, etc >> though - good benchmarking is a serious subject. >> Also we're not talking wall clock tests, we're talking specific metrics. >> The machines do tend to be bare metal, but many of these are entirely CPU >> performance independent, memory timing independent, etc. Well not quite but >> that's a longer discussion. >> >> The investigation of Haskell code performance is a very good thing to do >> BTW, but you'd still want to avoid regressions in the improvements you >> made. How well we can do that and the cost of it is the primary issue here. >> >> -davean >> >> >> On Wed, Mar 17, 2021 at 6:22 PM Karel Gardas >> wrote: >> >>> On 3/17/21 4:16 PM, Andreas Klebinger wrote: >>> > Now that isn't really an issue anyway I think. The question is rather >>> is >>> > 2% a large enough regression to worry about? 5%? 10%? >>> >>> 5-10% is still around system noise even on lightly loaded workstation. >>> Not sure if CI is not run on some shared cloud resources where it may be >>> even higher. >>> >>> I've done simple experiment of pining ghc compiling ghc-cabal and I've >>> been able to "speed" it up by 5-10% on W-2265. >>> >>> Also following this CI/performance regs discussion I'm not entirely sure >>> if this is not just a witch-hunt hurting/beating mostly most active GHC >>> developers. Another idea may be to give up on CI doing perf reg testing >>> at all and invest saved resources into proper investigation of >>> GHC/Haskell programs performance. Not sure, if this would not be more >>> beneficial longer term. >>> >>> Just one random number thrown to the ring. Linux's perf claims that >>> nearly every second L3 cache access on the example above ends with cache >>> miss. Is it a good number or bad number? See stats below (perf stat -d >>> on ghc with +RTS -T -s -RTS'). >>> >>> Good luck to anybody working on that! >>> >>> Karel >>> >>> >>> Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ... 
>>> 61,020,836,136 bytes allocated in the heap >>> 5,229,185,608 bytes copied during GC >>> 301,742,768 bytes maximum residency (19 sample(s)) >>> 3,533,000 bytes maximum slop >>> 840 MiB total memory in use (0 MB lost due to fragmentation) >>> >>> Tot time (elapsed) Avg pause Max >>> pause >>> Gen 0 2012 colls, 0 par 5.725s 5.731s 0.0028s >>> 0.1267s >>> Gen 1 19 colls, 0 par 1.695s 1.696s 0.0893s >>> 0.2636s >>> >>> TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) >>> >>> SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) >>> >>> INIT time 0.000s ( 0.000s elapsed) >>> MUT time 27.849s ( 32.163s elapsed) >>> GC time 7.419s ( 7.427s elapsed) >>> EXIT time 0.000s ( 0.010s elapsed) >>> Total time 35.269s ( 39.601s elapsed) >>> >>> Alloc rate 2,191,122,004 bytes per MUT second >>> >>> Productivity 79.0% of total user, 81.2% of total elapsed >>> >>> >>> Performance counter stats for >>> '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall -optc-Wall -O0 >>> -hide-all-packages -package ghc-prim -package base -package binary >>> -package array -package transformers -package time -package containers >>> -package bytestring -package deepseq -package process -package pretty >>> -package directory -package filepath -package template-haskell -package >>> unix --make utils/ghc-cabal/Main.hs -o >>> utils/ghc-cabal/dist/build/tmp/ghc-cabal -no-user-package-db -Wall >>> -fno-warn-unused-imports -fno-warn-warnings-deprecations >>> -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir bootstrapping -hidir >>> bootstrapping libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs >>> -ilibraries/Cabal/Cabal -ilibraries/binary/src -ilibraries/filepath >>> -ilibraries/hpc -ilibraries/mtl -ilibraries/text/src >>> libraries/text/cbits/cbits.c -Ilibraries/text/include >>> -ilibraries/parsec/src +RTS -T -s -RTS': >>> >>> 39,632.99 msec task-clock # 0.999 CPUs >>> utilized >>> 17,191 context-switches # 0.434 K/sec >>> >>> 0 cpu-migrations # 0.000 K/sec >>> >>> 899,930 
page-faults # 0.023 M/sec >>> >>> 177,636,979,975 cycles # 4.482 GHz >>> (87.54%) >>> 181,945,795,221 instructions # 1.02 insn per >>> cycle (87.59%) >>> 34,033,574,511 branches # 858.718 M/sec >>> (87.42%) >>> 1,664,969,299 branch-misses # 4.89% of all >>> branches (87.48%) >>> 41,522,737,426 L1-dcache-loads # 1047.681 M/sec >>> (87.53%) >>> 2,675,319,939 L1-dcache-load-misses # 6.44% of all >>> L1-dcache hits (87.48%) >>> 372,370,395 LLC-loads # 9.395 M/sec >>> (87.49%) >>> 173,614,140 LLC-load-misses # 46.62% of all >>> LL-cache hits (87.46%) >>> >>> 39.663103602 seconds time elapsed >>> >>> 38.288158000 seconds user >>> 1.358263000 seconds sys >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john.ericson at obsidian.systems Thu Mar 18 18:03:55 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Thu, 18 Mar 2021 14:03:55 -0400 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: My guess is most of the "noise" is not run time, but the compiled code changing in hard to predict ways. https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1776/diffs for example was a very small PR that took *months* of on-off work to get passing metrics tests. In the end, binding `is_boot` twice helped a bit, and dumb luck helped a little bit more. 
No matter how you analyze that, that's a lot of pain for what's manifestly a performance-irrelevant MR --- no one is writing 10,000 default methods or whatever else could possibly make this micro-optimization worth it! Perhaps this is an extreme example, but my rough sense is that it's not an isolated outlier. John On 3/18/21 1:39 PM, davean wrote: > I left the wiggle room for things like longer wall time causing more > time events in the IO Manager/RTS which can be a thermal/HW issue. > They're small and indirect though > > -davean > > On Thu, Mar 18, 2021 at 1:37 PM Sebastian Graf > wrote: > > To be clear: All performance tests that run as part of CI measure > allocations only. No wall clock time. > Those measurements are (mostly) deterministic and reproducible > between compiles of the same worktree and not impacted by thermal > issues/hardware at all. > > Am Do., 18. März 2021 um 18:09 Uhr schrieb davean >: > > That really shouldn't be near system noise for a well > constructed performance test. You might be seeing things like > thermal issues, etc though - good benchmarking is a serious > subject. > Also we're not talking wall clock tests, we're talking > specific metrics. The machines do tend to be bare metal, but > many of these are entirely CPU performance independent, memory > timing independent, etc. Well not quite but that's a longer > discussion. > > The investigation of Haskell code performance is a very good > thing to do BTW, but you'd still want to avoid regressions in > the improvements you made. How well we can do that and the > cost of it is the primary issue here. > > -davean > > > On Wed, Mar 17, 2021 at 6:22 PM Karel Gardas > > wrote: > > On 3/17/21 4:16 PM, Andreas Klebinger wrote: > > Now that isn't really an issue anyway I think. The > question is rather is > > 2% a large enough regression to worry about? 5%? 10%? > > 5-10% is still around system noise even on lightly loaded > workstation.
> Not sure if CI is not run on some shared cloud resources > where it may be > even higher. > > I've done simple experiment of pining ghc compiling > ghc-cabal and I've > been able to "speed" it up by 5-10% on W-2265. > > Also following this CI/performance regs discussion I'm not > entirely sure > if  this is not just a witch-hunt hurting/beating mostly > most active GHC > developers. Another idea may be to give up on CI doing > perf reg testing > at all and invest saved resources into proper investigation of > GHC/Haskell programs performance. Not sure, if this would > not be more > beneficial longer term. > > Just one random number thrown to the ring. Linux's perf > claims that > nearly every second L3 cache access on the example above > ends with cache > miss. Is it a good number or bad number? See stats below > (perf stat -d > on ghc with +RTS -T -s -RTS'). > > Good luck to anybody working on that! > > Karel > > > Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ... >   61,020,836,136 bytes allocated in the heap >    5,229,185,608 bytes copied during GC >      301,742,768 bytes maximum residency (19 sample(s)) >        3,533,000 bytes maximum slop >              840 MiB total memory in use (0 MB lost due to > fragmentation) > >                                      Tot time (elapsed)  > Avg pause  Max > pause >   Gen  0      2012 colls,     0 par    5.725s  5.731s    >  0.0028s > 0.1267s >   Gen  1        19 colls,     0 par    1.695s  1.696s    >  0.0893s > 0.2636s > >   TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) > >   SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 > fizzled) > >   INIT    time    0.000s  (  0.000s elapsed) >   MUT     time   27.849s  ( 32.163s elapsed) >   GC      time    7.419s  (  7.427s elapsed) >   EXIT    time    0.000s  (  0.010s elapsed) >   Total   time   35.269s  ( 39.601s elapsed) > >   Alloc rate    2,191,122,004 bytes per MUT second > >   Productivity  79.0% of total user, 81.2% of total elapsed > > >  
Performance counter stats for > '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall > -optc-Wall -O0 > -hide-all-packages -package ghc-prim -package base > -package binary > -package array -package transformers -package time > -package containers > -package bytestring -package deepseq -package process > -package pretty > -package directory -package filepath -package > template-haskell -package > unix --make utils/ghc-cabal/Main.hs -o > utils/ghc-cabal/dist/build/tmp/ghc-cabal > -no-user-package-db -Wall > -fno-warn-unused-imports -fno-warn-warnings-deprecations > -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir > bootstrapping -hidir > bootstrapping > libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs > -ilibraries/Cabal/Cabal -ilibraries/binary/src > -ilibraries/filepath > -ilibraries/hpc -ilibraries/mtl -ilibraries/text/src > libraries/text/cbits/cbits.c -Ilibraries/text/include > -ilibraries/parsec/src +RTS -T -s -RTS': > >          39,632.99 msec task-clock                # 0.999 CPUs > utilized >             17,191      context-switches          # 0.434 > K/sec > >                  0      cpu-migrations            # 0.000 > K/sec > >            899,930      page-faults               # 0.023 > M/sec > >    177,636,979,975      cycles                    # 4.482 GHz >               (87.54%) >    181,945,795,221      instructions              # 1.02  > insn per > cycle           (87.59%) >     34,033,574,511      branches                  # > 858.718 M/sec >               (87.42%) >      1,664,969,299      branch-misses             # 4.89% > of all > branches          (87.48%) >     41,522,737,426      L1-dcache-loads           # > 1047.681 M/sec >               (87.53%) >      2,675,319,939      L1-dcache-load-misses     # 6.44% > of all > L1-dcache hits    (87.48%) >        372,370,395      LLC-loads                 # 9.395 > M/sec >               (87.49%) >        173,614,140      LLC-load-misses           # >  46.62% of all > LL-cache hits     (87.46%) > >  
     39.663103602 seconds time elapsed > >       38.288158000 seconds user >        1.358263000 seconds sys > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Mar 18 19:15:35 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 18 Mar 2021 15:15:35 -0400 Subject: On CI In-Reply-To: <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: <87blbgxoh8.fsf@smart-cactus.org> Karel Gardas writes: > On 3/17/21 4:16 PM, Andreas Klebinger wrote: >> Now that isn't really an issue anyway I think. The question is rather is >> 2% a large enough regression to worry about? 5%? 10%? > > 5-10% is still around system noise even on lightly loaded workstation. > Not sure if CI is not run on some shared cloud resources where it may be > even higher. > I think when we say "performance" we should be clear about what we are referring to. Currently, GHC does not measure instructions/cycles/time. We only measure allocations and residency. These are significantly more deterministic than time measurements, even on cloud hardware. I do think that eventually we should start to measure a broader spectrum of metrics, but this is something that can be done on dedicated hardware as a separate CI job. 
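[The windowed acceptance check being discussed can be made concrete with a small sketch. This is illustrative only; the names, types, and default tolerance below are assumptions for exposition, not GHC's actual testsuite driver code.]

```haskell
-- Illustrative sketch only: GHC's real acceptance check lives in the
-- testsuite driver. This shows the shape of a windowed comparison of a
-- deterministic metric (bytes allocated) against a stored baseline.
data Verdict = Improvement | WithinWindow | Regression
  deriving (Show, Eq)

-- | Accept any measurement within +/- the given fractional tolerance
-- (e.g. 0.05 for a 5% window) of the baseline.
checkMetric :: Double -> Integer -> Integer -> Verdict
checkMetric tol baseline measured
  | delta >  tol       = Regression
  | delta < negate tol = Improvement
  | otherwise          = WithinWindow
  where
    delta :: Double
    delta = (fromIntegral measured - fromIntegral baseline)
          / fromIntegral baseline
```

[With a 5% window, a measurement of 1,100,000,000 allocated bytes against a 1,000,000,000-byte baseline is flagged as a regression, while 1,030,000,000 falls inside the window. Widening the window, as proposed above, just means a larger tolerance here.]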
> I've done simple experiment of pining ghc compiling ghc-cabal and I've > been able to "speed" it up by 5-10% on W-2265. > Do note that once we switch to Hadrian ghc-cabal will vanish entirely (since Hadrian implements its functionality directly). > Also following this CI/performance regs discussion I'm not entirely sure > if this is not just a witch-hunt hurting/beating mostly most active GHC > developers. Another idea may be to give up on CI doing perf reg testing > at all and invest saved resources into proper investigation of > GHC/Haskell programs performance. Not sure, if this would not be more > beneficial longer term. > I don't think this would be beneficial. It's much easier to prevent a regression from getting into the tree than it is to find and characterise it after it has been merged. > Just one random number thrown to the ring. Linux's perf claims that > nearly every second L3 cache access on the example above ends with cache > miss. Is it a good number or bad number? See stats below (perf stat -d > on ghc with +RTS -T -s -RTS'). > It is very hard to tell; it sounds bad but it is not easy to know why or whether it is possible to improve. This is one of the reasons why I have been trying to improve sharing within GHC recently; reducing residency should improve cache locality. Nevertheless, the difficulty interpreting architectural events is why I generally only use `perf` for differential measurements. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Mar 18 19:50:48 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 18 Mar 2021 15:50:48 -0400 Subject: On CI In-Reply-To: References: Message-ID: <878s6kxmuj.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > > We need to do something about this, and I'd advocate for just not making stats fail with marge. 
> > Generally I agree. One point you don’t mention is that our perf tests > (which CI forces us to look at assiduously) are often pretty weird > cases. So there is at least a danger that these more exotic cases will > stand in the way of (say) a perf improvement in the typical case. > > But “not making stats fail” is a bit crude. Instead how about > To be clear, the proposal isn't to accept stats failures for merge request validation jobs. I believe Moritz was merely suggesting that we accept such failures in marge-bot validations (that is, the pre-merge validation done on batches of merge requests). In my opinion this is reasonable since we know that all of the MRs in the batch do not individually regress. While it's possible that interactions between two or more MRs result in a qualitative change in performance, it seems quite unlikely. What is far *more* likely (and what we see regularly) is that the cumulative effect of a batch of improving patches pushes the batches' overall stat change out of the acceptance threshold. This is quite annoying as it dooms the entire batch. For this reason, I think we should at very least accept stat improvements during Marge validations (as you suggest). I agree that we probably want a batch to fail if two patches accumulate to form a regression, even if the two passed CI individually. > * We already have per-benchmark windows. If the stat falls outside > the window, we fail. You are effectively saying “widen all windows > to infinity”. If something makes a stat 10 times worse, I think we > *should* fail. But 10% worse? Maybe we should accept and look later > as you suggest. So I’d argue for widening the windows rather than > disabling them completely. > Yes, I agree. > > * If we did that we’d need good instrumentation to spot steps and > drift in perf, as you say. An advantage is that since the perf > instrumentation runs only on committed master patches, not on every > CI, it can cost more. 
In particular , it could run a bunch of > “typical” tests, including nofib and compiling Cabal or other > libraries. > We already have the beginnings of such instrumentation. > The big danger is that by relieving patch authors from worrying about > perf drift, it’ll end up in the lap of the GHC HQ team. If it’s hard > for the author of a single patch (with which she is intimately > familiar) to work out why it’s making some test 2% worse, imagine how > hard, and demotivating, it’d be for Ben to wonder why 50 patches (with > which he is unfamiliar) are making some test 5% worse. > Yes, I absolutely agree with this. I would very much like to avoid having to do this sort of post-hoc investigation any more than necessary. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Mar 18 19:54:57 2021 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Mar 2021 15:54:57 -0400 Subject: GitLab upgrade soon Message-ID: <875z1oxmnj.fsf@smart-cactus.org> Hi all, I will be performing a GitLab upgrade starting in approximately one hour. While I generally try to do this with more notice and out of working hours, in this case there is an exploitable GitLab vulnerability that deserves swift action. Thank you for you patience. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Mar 18 23:04:13 2021 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Mar 2021 19:04:13 -0400 Subject: GitLab upgrade soon In-Reply-To: <875z1oxmnj.fsf@smart-cactus.org> References: <875z1oxmnj.fsf@smart-cactus.org> Message-ID: <8735wsxdw5.fsf@smart-cactus.org> Ben Gamari writes: > Hi all, > > I will be performing a GitLab upgrade starting in approximately one > hour. 
While I generally try to do this with more notice and out of working hours, in this case there is an exploitable GitLab vulnerability that deserves swift action. Thank you for your patience. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Mar 18 23:04:13 2021 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Mar 2021 19:04:13 -0400 Subject: GitLab upgrade soon In-Reply-To: <875z1oxmnj.fsf@smart-cactus.org> References: <875z1oxmnj.fsf@smart-cactus.org> Message-ID: <8735wsxdw5.fsf@smart-cactus.org> Ben Gamari writes: > Hi all, > > I will be performing a GitLab upgrade starting in approximately one > hour.
URL: From arnaud.spiwack at tweag.io Fri Mar 19 09:11:22 2021 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Fri, 19 Mar 2021 10:11:22 +0100 Subject: GitLab is down: urgent In-Reply-To: References: Message-ID: >From IRC: it appears that some disks on the gitlab machine are full. I don't know who can fix this issue when Ben is away. On Fri, Mar 19, 2021 at 9:35 AM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > GHC’s GitLab seems to be down. Ben? > > (I just get 502’s) > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Mar 19 13:10:25 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 19 Mar 2021 09:10:25 -0400 Subject: GitLab is down: urgent In-Reply-To: References: Message-ID: <87y2ejwapt.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > GHC's GitLab seems to be down. Ben? > (I just get 502's) I am currently working on this. Unfortunately it looks like we yet again ran out of disk space (in part as a result of the upgrade). Will be up within the hour. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Mar 19 13:45:41 2021 From: ben at well-typed.com (Ben Gamari) Date: Fri, 19 Mar 2021 09:45:41 -0400 Subject: Post-mortem for last-night's GitLab outage Message-ID: <87v99nw932.fsf@smart-cactus.org> Hi everyone, It appears that gitlab.haskell.org's GitLab services went down around nine hours ago (around midnight EST). Surprisingly, the outage appears to be entirely unrelated to yesterday's upgrade. Rather, the problem was merely that the docker repository had grown to fill the entirety of the server's data volume. 
I have fixed this (and prevented future occurrences of the same issue) by moving our Docker images to a new volume. Services should now once again be fully operational. Disk usage is something that we have struggled with in the past, in part due to the relatively small local disk capacity of our servers and the previous unreliability of our hosting provider's iSCSI block storage infrastructure. The latter has previously prompted us to avoid using iSCSI volumes for operation-critical data, while the former has meant that we had to keep the size of bulk data like Docker images in careful check, lest we run out of local storage. At this point, it has been over half a year since we have experienced any trouble with iSCSI. For this reason, I have moved the Docker images back to iSCSI. This should eliminate this failure mode in the future. Meanwhile the GitLab database remains on local storage, also minimizing the potential for downtime due to future iSCSI failures. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Fri Mar 19 13:59:26 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 19 Mar 2021 13:59:26 +0000 Subject: GitLab is down: urgent In-Reply-To: <87sg4rw90k.fsf@smart-cactus.org> References: <87sg4rw90k.fsf@smart-cactus.org> Message-ID: <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> Hi Ben, Thanks for getting the fix in so quickly. However: suppose you were unavailable to do this, due to a well-deserved holiday perhaps. Who else has the credentials and know-how to step in? I feel as if we should have a resource somewhere with a definitive list of critical services and who has access to what. Perhaps this list should be kept private, in case the knowledge itself is a (small) security risk -- but it should be written down and shared widely enough that it is unlikely for all people with access to be unavailable at the same time. Do we have such a resource already? Richard > On Mar 19, 2021, at 9:47 AM, Ben Gamari wrote: > > Simon Peyton Jones via ghc-devs writes: > >> GHC's GitLab seems to be down. Ben? >> (I just get 502's) > > Everything should now be back to normal. I have sent a post-mortem > describing the failure mode and fix. Apologies for the inconvenience! 
> > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From giorgio at marinel.li Fri Mar 19 14:47:15 2021 From: giorgio at marinel.li (Giorgio Marinelli) Date: Fri, 19 Mar 2021 15:47:15 +0100 Subject: GitLab is down: urgent In-Reply-To: <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> Message-ID: Hi Everyone, Didn't there exist the ghc-devops group to talk about and manage those problems/topics? Best, Giorgio On Fri, 19 Mar 2021 at 15:00, Richard Eisenberg wrote: > > Hi Ben, > > Thanks for getting the fix in so quickly. > > However: suppose you were unavailable to do this, due to a well-deserved holiday perhaps. Who else has the credentials and know-how to step in? > > I feel as if we should have a resource somewhere with a definitive list of critical services and who has access to what. Perhaps this list should be kept private, in case the knowledge itself is a (small) security risk -- but it should be written down and shared widely enough that it is unlikely for all people with access to be unavailable at the same time. > > Do we have such a resource already? > > Richard > > > On Mar 19, 2021, at 9:47 AM, Ben Gamari wrote: > > > > Simon Peyton Jones via ghc-devs writes: > > > >> GHC's GitLab seems to be down. Ben? > >> (I just get 502's) > > > > Everything should now be back to normal. I have sent a post-mortem > > describing the failure mode and fix. Apologies for the inconvenience! 
> > > > Cheers, > > > > - Ben > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From davean at xkcd.com Fri Mar 19 16:14:10 2021 From: davean at xkcd.com (davean) Date: Fri, 19 Mar 2021 12:14:10 -0400 Subject: GitLab is down: urgent In-Reply-To: <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> Message-ID: There is the haskell-infrastructure group, who has access. Its knowledge of the peculiarities of each of the very different services that's harder to keep up to date between people. If Ben was hit by a bus it is prepared to be handled. With this diversity and level of manpower though stepping in tends to be a high cost operation so it is done when necessary. -davean On Fri, Mar 19, 2021 at 10:00 AM Richard Eisenberg wrote: > Hi Ben, > > Thanks for getting the fix in so quickly. > > However: suppose you were unavailable to do this, due to a well-deserved > holiday perhaps. Who else has the credentials and know-how to step in? > > I feel as if we should have a resource somewhere with a definitive list of > critical services and who has access to what. Perhaps this list should be > kept private, in case the knowledge itself is a (small) security risk -- > but it should be written down and shared widely enough that it is unlikely > for all people with access to be unavailable at the same time. > > Do we have such a resource already? > > Richard > > > On Mar 19, 2021, at 9:47 AM, Ben Gamari wrote: > > > > Simon Peyton Jones via ghc-devs writes: > > > >> GHC's GitLab seems to be down. Ben? 
> >> (I just get 502's) > > > > Everything should now be back to normal. I have sent a post-mortem > > describing the failure mode and fix. Apologies for the inconvenience! > > > > Cheers, > > > > - Ben > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard.b.golden at gmail.com Fri Mar 19 16:44:10 2021 From: howard.b.golden at gmail.com (howard.b.golden at gmail.com) Date: Fri, 19 Mar 2021 09:44:10 -0700 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> Message-ID: Hi Ben, Richard and Davean, I would like to help however I can. I already maintain the Haskell wiki, and I would like to improve and document its configuration using devops techniques, preferably consistent with gitlab.haskell.org. Regards, Howard On Fri, 2021-03-19 at 12:14 -0400, davean wrote: > There is the haskell-infrastructure group, who has access. Its > knowledge of the peculiarities of each of the very different services > that's harder to keep up to date between people. > > If Ben was hit by a bus it is prepared to be handled. With this > diversity and level of manpower though stepping in tends to be a high > cost operation so it is done when necessary. > > -davean > > On Fri, Mar 19, 2021 at 10:00 AM Richard Eisenberg > wrote: > > Hi Ben, > > > > Thanks for getting the fix in so quickly. > > > > However: suppose you were unavailable to do this, due to a well- > > deserved holiday perhaps. Who else has the credentials and know-how > > to step in? 
> > > > I feel as if we should have a resource somewhere with a definitive > > list of critical services and who has access to what. Perhaps this > > list should be kept private, in case the knowledge itself is a > > (small) security risk -- but it should be written down and shared > > widely enough that it is unlikely for all people with access to be > > unavailable at the same time. > > > > Do we have such a resource already? > > > > Richard > > > > > On Mar 19, 2021, at 9:47 AM, Ben Gamari > > wrote: > > > > > > Simon Peyton Jones via ghc-devs writes: > > > > > >> GHC's GitLab seems to be down. Ben? > > >> (I just get 502's) > > > > > > Everything should now be back to normal. I have sent a post- > > mortem > > > describing the failure mode and fix. Apologies for the > > inconvenience! > > > > > > Cheers, > > > > > > - Ben > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Fri Mar 19 17:32:03 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 19 Mar 2021 17:32:03 +0000 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> Message-ID: <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> > On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > > I would like to help however I can. 
I already maintain the Haskell > wiki, and I would like to improve and document its configuration using > devops techniques, preferably consistent with gitlab.haskell.org . Thanks, Howard! I will try to take you up on your offer to help: do you think you could start this documentation process more broadly? That is, not just covering the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to document the wiki's configuration consistently with gitlab.haskell.org, but I don't know that the latter is documented!) Ideally, I would love to know what services haskell.org hosts, who runs them, and what happens if those people become unavailable. There's a zoo of services out there, and knowing who does what would be invaluable. Of course, anyone can start this process, but it takes someone willing to stick with it and see it through for a few weeks. Since Howard boldly stepped forward, I nominate him. :) Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard.b.golden at gmail.com Fri Mar 19 18:13:25 2021 From: howard.b.golden at gmail.com (howard.b.golden at gmail.com) Date: Fri, 19 Mar 2021 11:13:25 -0700 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> Message-ID: <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> Hi Richard, Gershom and Ben, I have access to the server that runs the Haskell wiki. There are other websites on that server as well. I can document them as well. I know that Gershom B. has done most (all?) of the work on that server. I ask him and the haskell.org committee to send me or point me at whatever documentation they have already. 
I ask Ben and others with any documentation of gitlab.haskell.org to send me or point me at whatever they have already. I don't want to do this alone. Other volunteers are welcome! I also will abide by the Haskell Foundation and the haskell.org committee in arranging this documentation according to their needs and preferences. Howard On Fri, 2021-03-19 at 17:32 +0000, Richard Eisenberg wrote: > > > > On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > > > > I would like to help however I can. I already maintain the Haskell > > wiki, and I would like to improve and document its configuration > > using > > devops techniques, preferably consistent with gitlab.haskell.org. > > Thanks, Howard! > > I will try to take you up on your offer to help: do you think you > could start this documentation process more broadly? That is, not > just covering the Haskell Wiki, but also, say, gitlab.haskell.org. > (You say you wish to document the wiki's configuration consistently > with gitlab.haskell.org, but I don't know that the latter is > documented!) > > Ideally, I would love to know what services haskell.org hosts, who > runs them, and what happens if those people become unavailable. > There's a zoo of services out there, and knowing who does what would > be invaluable. > > Of course, anyone can start this process, but it takes someone > willing to stick with it and see it through for a few weeks. Since > Howard boldly stepped forward, I nominate him. 
:) > > Thanks, > Richard From gershomb at gmail.com Fri Mar 19 18:21:47 2021 From: gershomb at gmail.com (Gershom B) Date: Fri, 19 Mar 2021 14:21:47 -0400 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> Message-ID: Cc: admin at haskell.org which remains (since it was set up over five? years ago) the contact address for haskell infra admin stuff. There's also, as there has always been, the #haskell-infrastructure irc channel. This all _used_ to be on phabricator (and was on the haskell wiki before that) but it wasn't really suited to port over nicely to gitlab. At the request of the still-forming working group for HF, a repo was created on github with key information and policy, and sent to the HF, which should have this documented somewhere already: https://github.com/haskell-infra/haskell-admins I'll go back and try to read earlier in the thread, if I was cc'd, because I'm just now looped in? I.e. I see from the message "gitlab is down" but other than that I'm not sure exactly what is at issue? -g --- Here's the info I sent the committee last time it asked: here's the full list of servers. benley (Benjamin Staffin) has been using the two boxes as his tests for mailman migration. the various servers tagged bgamari are all the ghc specific ones. the three hackage servers are matrix, docbuilder, and hackage proper. the two consolidated servers are misc-services-origin and www-combo-origin. here's info on packet's elastic block storage: https://www.packet.com/developers/docs/storage/ebs/ its a bit fiddly, but we have it working at this point. 
Davean, Herbert, Ben and me are the ones who have keys on almost every box, I think. Alp and Austin are both officially team members but I haven't interacted with them much (austin is legacy, and alp may only work with Ben on ghc stuff, idk). Davean manages backups. some significant subdomains hosted: archives.haskell.org, hoogle.haskell.org, downloads.haskell.org, wiki.haskell.org there are some other smaller subdomains such as summer.haskell.org, pvp.haskell.org, etc. here are the main static subsites of haskell.org, most of which have existed at those urls well prior to the existence of a haskell.org website beyond the wiki alex communities ghc-perf happy hoogle onlinereport arrows definition ghcup haskell-symposium hugs platform cabal ghc haddock haskell-workshop nhc98 tutorial Sadly, joachim didn't keep up ghc-perf which was nice while it was working. On Fri, Mar 19, 2021 at 2:13 PM wrote: > > Hi Richard, Gershom and Ben, > > I have access to the server that runs the Haskell wiki. There are other > websites on that server as well. I can document them as well. I know > that Gershom B. has done most (all?) of the work on that server. I ask > him and the haskell.org committee to send me or point me at whatever > documentation they have already. > > I ask Ben and others with any documentation of gitlab.haskell.org to > send me or point me at whatever they have already. > > I don't want to do this alone. Other volunteers are welcome! I also > will abide by the Haskell Foundation and the haskell.org committee in > arranging this documentation according to their needs and preferences. > > Howard > > On Fri, 2021-03-19 at 17:32 +0000, Richard Eisenberg wrote: > > > > > > > On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > > > > > > I would like to help however I can. I already maintain the Haskell > > > wiki, and I would like to improve and document its configuration > > > using > > > devops techniques, preferably consistent with gitlab.haskell.org. 
> > > > Thanks, Howard! > > > > I will try to take you up on your offer to help: do you think you > > could start this documentation process more broadly? That is, not > > just covering the Haskell Wiki, but also, say, gitlab.haskell.org. > > (You say you wish to document the wiki's configuration consistently > > with gitlab.haskell.org, but I don't know that the latter is > > documented!) > > > > Ideally, I would love to know what services haskell.org hosts, who > > runs them, and what happens if those people become unavailable. > > There's a zoo of services out there, and knowing who does what would > > be invaluable. > > > > Of course, anyone can start this process, but it takes someone > > willing to stick with it and see it through for a few weeks. Since > > Howard boldly stepped forward, I nominate him. :) > > > > Thanks, > > Richard > -------------- next part -------------- A non-text attachment was scrubbed... Name: services.png Type: image/png Size: 148276 bytes Desc: not available URL: From rae at richarde.dev Fri Mar 19 18:56:00 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 19 Mar 2021 18:56:00 +0000 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> Message-ID: <010f01784bd8280c-ad229b6a-7581-428e-ae94-1b4e7141f2ac-000000@us-east-2.amazonses.com> Thanks, Howard! > On Mar 19, 2021, at 2:13 PM, howard.b.golden at gmail.com wrote: > > I also > will abide by the Haskell Foundation and the haskell.org committee in > arranging this documentation according to their needs and preferences. 
Thanks for this explicit comment, but I don't think there are specific needs / preferences -- just to have the information out there. And it seems there is much more information than I thought, as we see in Gershom's email, to which I will respond shortly. Richard > > Howard > > On Fri, 2021-03-19 at 17:32 +0000, Richard Eisenberg wrote: >> >> >>> On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: >>> >>> I would like to help however I can. I already maintain the Haskell >>> wiki, and I would like to improve and document its configuration >>> using >>> devops techniques, preferably consistent with gitlab.haskell.org. >> >> Thanks, Howard! >> >> I will try to take you up on your offer to help: do you think you >> could start this documentation process more broadly? That is, not >> just covering the Haskell Wiki, but also, say, gitlab.haskell.org. >> (You say you wish to document the wiki's configuration consistently >> with gitlab.haskell.org, but I don't know that the latter is >> documented!) >> >> Ideally, I would love to know what services haskell.org hosts, who >> runs them, and what happens if those people become unavailable. >> There's a zoo of services out there, and knowing who does what would >> be invaluable. >> >> Of course, anyone can start this process, but it takes someone >> willing to stick with it and see it through for a few weeks. Since >> Howard boldly stepped forward, I nominate him. 
:) >> >> Thanks, >> Richard > From howard.b.golden at gmail.com Fri Mar 19 19:04:03 2021 From: howard.b.golden at gmail.com (howard.b.golden at gmail.com) Date: Fri, 19 Mar 2021 12:04:03 -0700 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> Message-ID: I suggest continuing this discussion/effort on the Haskell Discourse at https://discourse.haskell.org/t/documentation-of-haskell-websites-servers-and-their-configurations/2153 . Howard On Fri, 2021-03-19 at 14:21 -0400, Gershom B wrote: > Cc: admin at haskell.org which remains (since it was set up over five? > years ago) the contact address for haskell infra admin stuff. There's > also, as there has always been, the #haskell-infrastructure irc > channel. > > This all _used_ to be on phabricator (and was on the haskell wiki > before that) but it wasn't really suited to port over nicely to > gitlab. At the request of the still-forming working group for HF, a > repo was created on github with key information and policy, and sent > to the HF, which should have this documented somewhere already: > https://github.com/haskell-infra/haskell-admins > > I'll go back and try to read earlier in the thread, if I was cc'd, > because I'm just now looped in? > > I.e. I see from the message "gitlab is down" but other than that I'm > not sure exactly what is at issue? > > -g > > --- > > Here's the info I sent the committee last time it asked: > > here's the full list of servers. benley (Benjamin Staffin) has been > using the two boxes as his tests for mailman migration. the various > servers tagged bgamari are all the ghc specific ones. the three > hackage servers are matrix, docbuilder, and hackage proper. 
the two > consolidated servers are misc-services-origin and www-combo-origin. > > here's info on packet's elastic block storage: > https://www.packet.com/developers/docs/storage/ebs/ > > its a bit fiddly, but we have it working at this point. > > Davean, Herbert, Ben and me are the ones who have keys on almost > every > box, I think. Alp and Austin are both officially team members but I > haven't interacted with them much (austin is legacy, and alp may only > work with Ben on ghc stuff, idk). > > Davean manages backups. > > some significant subdomains hosted: archives.haskell.org, > hoogle.haskell.org, downloads.haskell.org, wiki.haskell.org > > there are some other smaller subdomains such as summer.haskell.org, > pvp.haskell.org, etc. > > here are the main static subsites of haskell.org, most of which have > existed at those urls well prior to the existence of a haskell.org > website beyond the wiki > > alex communities ghc- > perf happy hoogle onlinereport > > arrows definition ghcup haskell-symposium hugs platform > > cabal ghc haddock haskell-workshop nhc98 tutorial > > Sadly, joachim didn't keep up ghc-perf which was nice while it was > working. > > On Fri, Mar 19, 2021 at 2:13 PM wrote: > > Hi Richard, Gershom and Ben, > > > > I have access to the server that runs the Haskell wiki. There are > > other > > websites on that server as well. I can document them as well. I > > know > > that Gershom B. has done most (all?) of the work on that server. I > > ask > > him and the haskell.org committee to send me or point me at > > whatever > > documentation they have already. > > > > I ask Ben and others with any documentation of gitlab.haskell.org > > to > > send me or point me at whatever they have already. > > > > I don't want to do this alone. Other volunteers are welcome! I also > > will abide by the Haskell Foundation and the haskell.org committee > > in > > arranging this documentation according to their needs and > > preferences. 
> > > > Howard > > > > On Fri, 2021-03-19 at 17:32 +0000, Richard Eisenberg wrote: > > > > > > > On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > > > > > > > > I would like to help however I can. I already maintain the > > > > Haskell > > > > wiki, and I would like to improve and document its > > > > configuration > > > > using > > > > devops techniques, preferably consistent with > > > > gitlab.haskell.org. > > > > > > Thanks, Howard! > > > > > > I will try to take you up on your offer to help: do you think you > > > could start this documentation process more broadly? That is, not > > > just covering the Haskell Wiki, but also, say, > > > gitlab.haskell.org. > > > (You say you wish to document the wiki's configuration > > > consistently > > > with gitlab.haskell.org, but I don't know that the latter is > > > documented!) > > > > > > Ideally, I would love to know what services haskell.org hosts, > > > who > > > runs them, and what happens if those people become unavailable. > > > There's a zoo of services out there, and knowing who does what > > > would > > > be invaluable. > > > > > > Of course, anyone can start this process, but it takes someone > > > willing to stick with it and see it through for a few weeks. > > > Since > > > Howard boldly stepped forward, I nominate him. 
:) > > > > > > Thanks, > > > Richard From rae at richarde.dev Fri Mar 19 19:08:35 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 19 Mar 2021 19:08:35 +0000 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> Message-ID: <010f01784be3ae0e-92c44cd0-1c27-4335-b3df-c9be59a6a9cf-000000@us-east-2.amazonses.com> Thanks, Gershom, for the quick and very illuminating response. > On Mar 19, 2021, at 2:21 PM, Gershom B wrote: > > Cc: admin at haskell.org which remains (since it was set up over five? > years ago) the contact address for haskell infra admin stuff. How would I learn of that address? Who is admin at haskell.org? > There's > also, as there has always been, the #haskell-infrastructure irc > channel. How would I learn of this? > > This all _used_ to be on phabricator (and was on the haskell wiki > before that) but it wasn't really suited to port over nicely to > gitlab. At the request of the still-forming working group for HF, a > repo was created on github with key information and policy, and sent > to the HF, which should have this documented somewhere already: > https://github.com/haskell-infra/haskell-admins This is exactly the kind of stuff I was looking for. (Apologies for your need to repeat yourself -- we are still working hard to organize ourselves out of the void.) How would I find this page? Perhaps just a link from the Committees section of https://www.haskell.org/community/ would be sufficient. Might also be good to put admin at haskell.org in the footer. > > I'll go back and try to read earlier in the thread, if I was cc'd, > because I'm just now looped in? > > I.e. 
I see from the message "gitlab is down" but other than that I'm > not sure exactly what is at issue? The thread morphed into a new topic, thanks to me. I was just wondering who I would reach out to if both GitLab and Ben were down. The haskell-admins repo answers that question nicely. > > -g > > --- > > Davean, Herbert, Ben and me are the ones who have keys on almost every > box, I think. Alp and Austin are both officially team members but I > haven't interacted with them much (austin is legacy, and alp may only > work with Ben on ghc stuff, idk). This seems out of sync with https://github.com/haskell-infra/haskell-admins, which does not list Ben (or Austin or Alp) but does list Rick Elrod. I would think that dormant members (e.g. Austin) should be removed, but that's your call. > > Davean manages backups. Is this documented somewhere? Backups of what? > > some significant subdomains hosted: archives.haskell.org, > hoogle.haskell.org, downloads.haskell.org, wiki.haskell.org > > there are some other smaller subdomains such as summer.haskell.org, > pvp.haskell.org, etc. > > here are the main static subsites of haskell.org, most of which have > existed at those urls well prior to the existence of a haskell.org > website beyond the wiki > > alex communities ghc-perf happy hoogle onlinereport > > arrows definition ghcup haskell-symposium hugs platform > > cabal ghc haddock haskell-workshop nhc98 tutorial This stuff appears to be on https://github.com/haskell-infra/haskell-admins/blob/master/servers.md. Good. I was originally looking for each of these to have names associated with them, but perhaps if one needed to contact the owner of a particular subdomain, we could reach out to admin at haskell.org and then get the subdomain owners by dereference. That's fine by me. > > Sadly, joachim didn't keep up ghc-perf which was nice while it was working. I think this is perf.haskell.org. To be fair, some of the failure belongs on my shoulders. 
I offered up an unused server at Bryn Mawr (my previous employer) to host this, but that situation proved to be flaky. Actually, my best guess is that the server is still actually alive (and in the basement of my old building) at perf.haskell.org, but not being updated. I don't think we should rely on it. So: Howard, there may not be much work to do here, beyond emailing Gershom, as you've already done! :) Richard > > On Fri, Mar 19, 2021 at 2:13 PM wrote: >> >> Hi Richard, Gershom and Ben, >> >> I have access to the server that runs the Haskell wiki. There are other >> websites on that server as well. I can document them as well. I know >> that Gershom B. has done most (all?) of the work on that server. I ask >> him and the haskell.org committee to send me or point me at whatever >> documentation they have already. >> >> I ask Ben and others with any documentation of gitlab.haskell.org to >> send me or point me at whatever they have already. >> >> I don't want to do this alone. Other volunteers are welcome! I also >> will abide by the Haskell Foundation and the haskell.org committee in >> arranging this documentation according to their needs and preferences. >> >> Howard >> >> On Fri, 2021-03-19 at 17:32 +0000, Richard Eisenberg wrote: >>> >>> >>>> On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: >>>> >>>> I would like to help however I can. I already maintain the Haskell >>>> wiki, and I would like to improve and document its configuration >>>> using >>>> devops techniques, preferably consistent with gitlab.haskell.org. >>> >>> Thanks, Howard! >>> >>> I will try to take you up on your offer to help: do you think you >>> could start this documentation process more broadly? That is, not >>> just covering the Haskell Wiki, but also, say, gitlab.haskell.org. >>> (You say you wish to document the wiki's configuration consistently >>> with gitlab.haskell.org, but I don't know that the latter is >>> documented!) 
>>> >>> Ideally, I would love to know what services haskell.org hosts, who >>> runs them, and what happens if those people become unavailable. >>> There's a zoo of services out there, and knowing who does what would >>> be invaluable. >>> >>> Of course, anyone can start this process, but it takes someone >>> willing to stick with it and see it through for a few weeks. Since >>> Howard boldly stepped forward, I nominate him. :) >>> >>> Thanks, >>> Richard >> > From gershomb at gmail.com Fri Mar 19 19:18:13 2021 From: gershomb at gmail.com (Gershom B) Date: Fri, 19 Mar 2021 15:18:13 -0400 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: <010f01784be3ae0e-92c44cd0-1c27-4335-b3df-c9be59a6a9cf-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> <010f01784be3ae0e-92c44cd0-1c27-4335-b3df-c9be59a6a9cf-000000@us-east-2.amazonses.com> Message-ID: > How would I learn of that address? Who is admin at haskell.org? In fact, I know you knew this address at one point, because you emailed it asking for help with the haskell symposium website in 2019 :-) But yes, we all learn things and forget them, so documenting them accessibly is important. The who is answered in the github repo, as the current host of information that used to live on phab, and before that on the wiki (eg at https://wiki.haskell.org/Haskell.org_infrastructure) That address has been publicized many times over the years, and is linked to from hackage, among many other places. You're quite right that it could be on the website too, and I'm sure a PR doing that would be welcomed. And in fact, improving discoverability has been a priority of the HF, which you've been involved with. 
We assembled the repo specifically to help the foundation have a single place to link to that holds this information. So, I really look forward to the HF working to improve discoverability still further. -g From moritz.angermann at gmail.com Fri Mar 19 23:30:43 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sat, 20 Mar 2021 07:30:43 +0800 Subject: GitLab is down: urgent In-Reply-To: <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> Message-ID: I can try to step up and be backup on the other side of the planet. Ben and I are almost 12hs apart exactly. On Sat, 20 Mar 2021 at 1:32 AM, Richard Eisenberg wrote: > > > On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > > I would like to help however I can. I already maintain the Haskell > wiki, and I would like to improve and document its configuration using > devops techniques, preferably consistent with gitlab.haskell.org. > > > Thanks, Howard! > > I will try to take you up on your offer to help: do you think you could > start this documentation process more broadly? That is, not just covering > the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to > document the wiki's configuration consistently with gitlab.haskell.org, > but I don't know that the latter is documented!) > > Ideally, I would love to know what services haskell.org hosts, who runs > them, and what happens if those people become unavailable. There's a zoo of > services out there, and knowing who does what would be invaluable. > > Of course, anyone can start this process, but it takes someone willing to > stick with it and see it through for a few weeks. Since Howard boldly > stepped forward, I nominate him. 
:) > > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Sat Mar 20 09:40:59 2021 From: lexi.lambda at gmail.com (Alexis King) Date: Sat, 20 Mar 2021 04:40:59 -0500 Subject: Type inference of singular matches on GADTs Message-ID: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Hi all, Today I was writing some code that uses a GADT to represent heterogeneous lists: data HList as where   HNil  :: HList '[]   HCons :: a -> HList as -> HList (a ': as) This type is used to provide a generic way to manipulate n-ary functions. Naturally, I have some functions that accept these n-ary functions as arguments, which have types like this: foo :: Blah as => (HList as -> Widget) -> Whatsit The idea is that Blah does some type-level induction on as and supplies the function with some appropriate values. Correspondingly, my use sites look something like this: bar = foo (\HNil -> ...) Much to my dismay, I quickly discovered that GHC finds these expressions quite unfashionable, and it invariably insults them: • Ambiguous type variable ‘as0’ arising from a use of ‘foo’   prevents the constraint ‘(Blah as0)’ from being solved. The miscommunication is simple enough. I expected that when given an expression like \HNil -> ... GHC would see a single pattern of type HList '[] and consequently infer a type like HList '[] -> ... Alas, it was not to be. It seems GHC is reluctant to commit to the choice of '[] for as, lest perhaps I add another case to my function in the future. Indeed, if I were to do that, the choice of '[] would be premature, as as ~ '[] would only be available within one branch. However, I do not in fact have any such intention, which makes me quietly wish GHC would get over its anxiety and learn to be a bit more of a risk-taker. 
I ended up taking a look at the OutsideIn(X) paper, hoping to find some commentary on this situation, but in spite of the nice examples toward the start about the trickiness of GADTs, I found no discussion of this specific scenario: a function with exactly one branch and an utterly unambiguous pattern. Most examples come at the problem from precisely the opposite direction, trying to tease out a principal type from a collection of branches. The case of a function (or perhaps more accurately, a case expression) with only a single branch does not seem to be given any special attention. Of course, fewer special cases is always nice. I have great sympathy for generality. Still, I can’t help but feel a little unsatisfied here. Theoretically, there is no reason GHC cannot treat \(a `HCons` b `HCons` c `HCons` HNil) -> ... and \a b c -> ... almost identically, with a well-defined principal type and pleasant type inference properties, but there is no way for me to communicate this to the typechecker! So, my questions: 1. Have people considered this problem before? Is it discussed anywhere already? 2. Is my desire here reasonable, or is there some deep philosophical argument for why my program should be rejected? 3. If it /is/ reasonable, are there any obvious situations where a change targeted at what I’m describing (vague as that is) would affect programs negatively, not positively? I realize this gets rather at the heart of the typechecker, so I don’t intend to imply a change of this sort should be made frivolously. Indeed, I’m not even particularly attached to the idea that a change must be made! But I do want to understand the tradeoffs better, so any insight would be much appreciated. Thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From giorgio at marinel.li Sat Mar 20 10:05:30 2021 From: giorgio at marinel.li (Giorgio Marinelli) Date: Sat, 20 Mar 2021 11:05:30 +0100 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> Message-ID: I can also help (~UTC+1), I've a long history and experience in systems management and engineering. Best, Giorgio Marinelli https://marinelli.dev/cv On Sat, 20 Mar 2021 at 00:31, Moritz Angermann wrote: > > I can try to step up and be backup on the other side of the planet. Ben and I are almost 12hs apart exactly. > > On Sat, 20 Mar 2021 at 1:32 AM, Richard Eisenberg wrote: >> >> >> >> On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: >> >> I would like to help however I can. I already maintain the Haskell >> wiki, and I would like to improve and document its configuration using >> devops techniques, preferably consistent with gitlab.haskell.org. >> >> >> Thanks, Howard! >> >> I will try to take you up on your offer to help: do you think you could start this documentation process more broadly? That is, not just covering the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to document the wiki's configuration consistently with gitlab.haskell.org, but I don't know that the latter is documented!) >> >> Ideally, I would love to know what services haskell.org hosts, who runs them, and what happens if those people become unavailable. There's a zoo of services out there, and knowing who does what would be invaluable. >> >> Of course, anyone can start this process, but it takes someone willing to stick with it and see it through for a few weeks. Since Howard boldly stepped forward, I nominate him. 
:) >> >> Thanks, >> Richard >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ietf-dane at dukhovni.org Sat Mar 20 12:13:18 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Sat, 20 Mar 2021 08:13:18 -0400 Subject: Type inference of singular matches on GADTs In-Reply-To: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: On Sat, Mar 20, 2021 at 04:40:59AM -0500, Alexis King wrote: > Today I was writing some code that uses a GADT to represent > heterogeneous lists: > > data HList as where >   HNil  :: HList '[] >   HCons :: a -> HList as -> HList (a ': as) > > This type is used to provide a generic way to manipulate n-ary > functions. Naturally, I have some functions that accept these n-ary > functions as arguments, which have types like this: > > foo :: Blah as => (HList as -> Widget) -> Whatsit > > The idea is that Blah does some type-level induction on as and supplies > the function with some appropriate values. Correspondingly, my use sites > look something like this: > > bar = foo (\HNil -> ...) > > Much to my dismay, I quickly discovered that GHC finds these expressions > quite unfashionable, and it invariably insults them: > > • Ambiguous type variable ‘as0’ arising from a use of ‘foo’ >   prevents the constraint ‘(Blah as0)’ from being solved. FWIW, the simplest possible example: {-# LANGUAGE DataKinds, TypeOperators, GADTs #-} data HList as where   HNil  :: HList '[] HCons :: a -> HList as -> HList (a ': as) foo :: (as ~ '[]) => (HList as -> Int) -> Int foo f = f HNil bar :: Int bar = foo (\HNil -> 1) compiles without error. 
As soon as I try to add more complex constraints, I appear to need an explicit type signature for HNil, and then the code again compiles: {-# LANGUAGE DataKinds , GADTs , PolyKinds , ScopedTypeVariables , TypeFamilies , TypeOperators #-} import GHC.Types data HList as where   HNil  :: HList '[] HCons :: a -> HList as -> HList (a ': as) class Nogo a where type family Blah (as :: [Type]) :: Constraint type instance Blah '[] = () type instance Blah (_ ': '[]) = () type instance Blah (_ ': _ ': _) = (Nogo ()) foo :: (Blah as) => (HList as -> Int) -> Int foo _ = 42 bar :: Int bar = foo (\ (HNil :: HNilT) -> 1) type HNilT = HList '[] baz :: Int baz = foo (\ (True `HCons` HNil :: HOneT Bool) -> 2) type HOneT a = HList (a ': '[]) Is this at all useful? -- Viktor. From ietf-dane at dukhovni.org Sat Mar 20 12:56:17 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Sat, 20 Mar 2021 08:56:17 -0400 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote: > As soon as I try to add more complex constraints, I appear to need an > explicit type signature for HNil, and then the code again compiles: But aliasing the promoted constructors via pattern synonyms, and using those instead, appears to resolve the ambiguity. -- Viktor. 
{-# LANGUAGE DataKinds , GADTs , PatternSynonyms , PolyKinds , ScopedTypeVariables , TypeFamilies , TypeOperators #-} import GHC.Types infixr 1 `HC` data HList as where   HNil  :: HList '[] HCons :: a -> HList as -> HList (a ': as) pattern HN :: HList '[]; pattern HN = HNil pattern HC :: a -> HList as -> HList (a ': as) pattern HC a as = HCons a as class Nogo a where type family Blah (as :: [Type]) :: Constraint type instance Blah '[] = () type instance Blah (_ ': '[]) = () type instance Blah (_ ': _ ': '[]) = () type instance Blah (_ ': _ ': _ ': _) = (Nogo ()) foo :: (Blah as) => (HList as -> Int) -> Int foo _ = 42 bar :: Int bar = foo (\ HN -> 1) baz :: Int baz = foo (\ (True `HC` HN) -> 2) pattern One :: Int pattern One = 1 bam :: Int bam = foo (\ (True `HC` One `HC` HN) -> 2) From sgraf1337 at gmail.com Sat Mar 20 14:45:13 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sat, 20 Mar 2021 15:45:13 +0100 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: Hi Alexis, The following works and will have inferred type `Int`: > bar = foo (\(HNil :: HList '[]) -> 42) I'd really like it if we could write > bar2 = foo (\(HNil @'[]) -> 42) though, even if you write out the constructor type with explicit constraints and forall's. E.g. by using a -XTypeApplications here, I specify the universal type var of the type constructor `HList`. I think that is a semantics that is in line with Type Variables in Patterns, Section 4 : The only way to satisfy the `as ~ '[]` constraint in the HNil pattern is to refine the type of the pattern match to `HList '[]`. Consequently, the local `Blah '[]` can be discharged and bar2 will have inferred `Int`. But that's simply not implemented at the moment, I think. I recall there's some work that has to happen before. 
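In the meantime, the two workarounds from this thread (a pattern type signature, and Viktor's non-GADT pattern synonyms) can be condensed into one small runnable sketch; the Blah class is again a stand-in, assumed only for illustration:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE PatternSynonyms #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}

data HList as where
  HNil  :: HList '[]
  HCons :: a -> HList as -> HList (a ': as)

-- A synonym whose signature fixes the index from the outside, so a
-- match on it is an ordinary pattern, not a GADT-style refinement.
pattern HN :: HList '[]
pattern HN = HNil

-- Stand-in class with only the base-case instance.
class Blah as where
  blah :: HList as

instance Blah '[] where
  blah = HNil

foo :: Blah as => (HList as -> Int) -> Int
foo f = f blah

viaSignature :: Int
viaSignature = foo (\(HNil :: HList '[]) -> 1)  -- pattern type signature

viaSynonym :: Int
viaSynonym = foo (\HN -> 2)                     -- pattern synonym
```

Matching on HN succeeds because the synonym's signature already forces the scrutinee's type to HList '[], so no ambiguity arises in the first place.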
The corresponding proposal seems to be https://ghc-proposals.readthedocs.io/en/latest/proposals/0126-type-applications-in-patterns.html (or https://github.com/ghc-proposals/ghc-proposals/pull/238? I'm confused) and your example should probably be added there as motivation. If `'[]` is never mentioned anywhere in the pattern like in the original example, I wouldn't expect it to type-check (or at least emit a pattern-match warning): First off, the type is ambiguous. It's a similar situation as in https://stackoverflow.com/questions/50159349/type-abstraction-in-ghc-haskell. If it was accepted and got type `Blah as => Int`, then you'd get a pattern-match warning, because depending on how `as` is instantiated, your pattern-match is incomplete. E.g., `bar3 @[Int]` would crash. Complete example code: {-# LANGUAGE DataKinds #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE LambdaCase #-} {-# LANGUAGE TypeApplications #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE RankNTypes #-} module Lib where data HList as where HNil :: forall as. (as ~ '[]) => HList as HCons :: forall as a as'. (as ~ (a ': as')) => a -> HList as' -> HList as class Blah as where blah :: HList as instance Blah '[] where blah = HNil foo :: Blah as => (HList as -> Int) -> Int foo f = f blah bar = foo (\(HNil :: HList '[]) -> 42) -- compiles bar2 = foo (\(HNil @'[]) -> 42) -- errors Cheers, Sebastian Am Sa., 20. März 2021 um 13:57 Uhr schrieb Viktor Dukhovni < ietf-dane at dukhovni.org>: > On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote: > > > As soon as I try add more complex contraints, I appear to need an > > explicit type signature for HNil, and then the code again compiles: > > But aliasing the promoted constructors via pattern synonyms, and using > those instead, appears to resolve the ambiguity. > > -- > Viktor. 
> > {-# LANGUAGE > DataKinds > , GADTs > , PatternSynonyms > , PolyKinds > , ScopedTypeVariables > , TypeFamilies > , TypeOperators > #-} > > import GHC.Types > > infixr 1 `HC` > > data HList as where > HNil :: HList '[] > HCons :: a -> HList as -> HList (a ': as) > > pattern HN :: HList '[]; > pattern HN = HNil > pattern HC :: a -> HList as -> HList (a ': as) > pattern HC a as = HCons a as > > class Nogo a where > > type family Blah (as :: [Type]) :: Constraint > type instance Blah '[] = () > type instance Blah (_ ': '[]) = () > type instance Blah (_ ': _ ': '[]) = () > type instance Blah (_ ': _ ': _ ': _) = (Nogo ()) > > foo :: (Blah as) => (HList as -> Int) -> Int > foo _ = 42 > > bar :: Int > bar = foo (\ HN -> 1) > > baz :: Int > baz = foo (\ (True `HC` HN) -> 2) > > pattern One :: Int > pattern One = 1 > bam :: Int > bam = foo (\ (True `HC` One `HC` HN) -> 2) > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk Sat Mar 20 17:21:16 2021 From: tom-lists-haskell-cafe-2017 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 20 Mar 2021 17:21:16 +0000 Subject: Configuration documentation (Was Re: GitLab is down: urgent) In-Reply-To: <010f01784be3ae0e-92c44cd0-1c27-4335-b3df-c9be59a6a9cf-000000@us-east-2.amazonses.com> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <66d44a1d4e4e16385ad1c98a92e6a1d8735d07a5.camel@gmail.com> <010f01784be3ae0e-92c44cd0-1c27-4335-b3df-c9be59a6a9cf-000000@us-east-2.amazonses.com> Message-ID: <20210320172116.GE15119@cloudinit-builder> On Fri, Mar 19, 2021 at 07:08:35PM +0000, Richard Eisenberg wrote: > > On Mar 19, 2021, at 2:21 PM, Gershom B wrote: > > Cc: admin at haskell.org which remains (since it was set up over five? > > years ago) the contact address for haskell infra admin stuff. > > How would I learn of that address? Who is admin at haskell.org? > > > There's also, as there has always been, the > > #haskell-infrastructure irc channel. > > How would I learn of this? By way of clarification, both of these contact details are mentioned in the document that Gershom linked: https://github.com/haskell-infra/haskell-admins#the-team-and-how-to-contact-them Thanks for that document, Gershom, it's very helpful! Tom From moritz.angermann at gmail.com Sun Mar 21 04:57:40 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sun, 21 Mar 2021 12:57:40 +0800 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> Message-ID: Just a heads up everyone. Gitlab appears down again. 
This seemed to have happened around Sunday, 4AM UTC. Everyone have a blissful Sunday! On Sat, 20 Mar 2021 at 6:05 PM, Giorgio Marinelli wrote: > I can also help (~UTC+1), I've a long history and experience in > systems management and engineering. > > Best, > > Giorgio Marinelli > https://marinelli.dev/cv > > On Sat, 20 Mar 2021 at 00:31, Moritz Angermann > wrote: > > > > I can try to step up and be backup on the other side of the planet. Ben > and I are almost 12hs apart exactly. > > > > On Sat, 20 Mar 2021 at 1:32 AM, Richard Eisenberg > wrote: > >> > >> > >> > >> On Mar 19, 2021, at 12:44 PM, howard.b.golden at gmail.com wrote: > >> > >> I would like to help however I can. I already maintain the Haskell > >> wiki, and I would like to improve and document its configuration using > >> devops techniques, preferably consistent with gitlab.haskell.org. > >> > >> > >> Thanks, Howard! > >> > >> I will try to take you up on your offer to help: do you think you could > start this documentation process more broadly? That is, not just covering > the Haskell Wiki, but also, say, gitlab.haskell.org. (You say you wish to > document the wiki's configuration consistently with gitlab.haskell.org, > but I don't know that the latter is documented!) > >> > >> Ideally, I would love to know what services haskell.org hosts, who > runs them, and what happens if those people become unavailable. There's a > zoo of services out there, and knowing who does what would be invaluable. > >> > >> Of course, anyone can start this process, but it takes someone willing > to stick with it and see it through for a few weeks. Since Howard boldly > stepped forward, I nominate him. 
:) > >> > >> Thanks, > >> Richard > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Mar 21 13:49:31 2021 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 21 Mar 2021 09:49:31 -0400 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> Message-ID: <87h7l4wr9z.fsf@smart-cactus.org> Moritz Angermann writes: > Just a heads up everyone. Gitlab appears down again. This seemed to have > happened around Sunday, 4AM UTC. > > Everyone have a blissful Sunday! > It is back up, again. It appears that GitLab's backup retention logic is now either broken or has changed since it is now failing to delete old backups, resulting in a full disk. I'll be keeping an eye on this until we sort out the root cause. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From moritz.angermann at gmail.com Mon Mar 22 04:31:28 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 22 Mar 2021 12:31:28 +0800 Subject: GitLab is down: urgent In-Reply-To: <87h7l4wr9z.fsf@smart-cactus.org> References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <87h7l4wr9z.fsf@smart-cactus.org> Message-ID: It appears as if gitlab went down again, this time since around 4AM UTC on Monday. On Sun, Mar 21, 2021 at 9:49 PM Ben Gamari wrote: > Moritz Angermann writes: > > > Just a heads up everyone. Gitlab appears down again. This seemed to have > > happened around Sunday, 4AM UTC. > > > > Everyone have a blissful Sunday! > > > It is back up, again. It appears that GitLab's backup retention logic is > now either broken or has changed since it is now failing to delete old > backups, resulting a full disk. I'll be keeping an eye on this until we > sort out the root cause. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Mon Mar 22 04:32:51 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 22 Mar 2021 12:32:51 +0800 Subject: GHC 8.10 backports? Message-ID: Hi there! Does anyone have any backports they'd like to see for consideration for 8.10.5? Cheers, Moritz -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Mon Mar 22 04:39:28 2021 From: gergo at erdi.hu (=?UTF-8?B?R2VyZ8WRIMOJcmRp?=) Date: Mon, 22 Mar 2021 12:39:28 +0800 Subject: GHC 8.10 backports? 
In-Reply-To: References: Message-ID: I'd love to have this in a GHC 8.10 release: https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html On Mon, Mar 22, 2021, 12:34 Moritz Angermann wrote: > Hi there! > > Does anyone have any backports they'd like to see for consideration for > 8.10.5? > > Cheers, > Moritz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Mon Mar 22 05:23:47 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 22 Mar 2021 13:23:47 +0800 Subject: GitLab is down: urgent In-Reply-To: References: <87sg4rw90k.fsf@smart-cactus.org> <010f01784ac8a554-6345913d-fea6-4314-b9ae-de353d5ff450-000000@us-east-2.amazonses.com> <010f01784b8b4c5a-ac0ddc5d-9b52-4dd3-8510-f62447408051-000000@us-east-2.amazonses.com> <87h7l4wr9z.fsf@smart-cactus.org> Message-ID: Davean has resurrected gitlab for now. On Mon, Mar 22, 2021 at 12:31 PM Moritz Angermann < moritz.angermann at gmail.com> wrote: > It appears as if gitlab went down again, this time since around 4AM UTC on > Monday. > > On Sun, Mar 21, 2021 at 9:49 PM Ben Gamari wrote: > >> Moritz Angermann writes: >> >> > Just a heads up everyone. Gitlab appears down again. This seemed to >> have >> > happened around Sunday, 4AM UTC. >> > >> > Everyone have a blissful Sunday! >> > >> It is back up, again. It appears that GitLab's backup retention logic is >> now either broken or has changed since it is now failing to delete old >> backups, resulting a full disk. I'll be keeping an eye on this until we >> sort out the root cause. >> >> Cheers, >> >> - Ben >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ietf-dane at dukhovni.org Mon Mar 22 05:50:45 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 22 Mar 2021 01:50:45 -0400 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote: > I'd love to have this in a GHC 8.10 release: > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html This is already in 9.0, 9.2 and master, but it is a rather non-trivial change, given all the new work that went into the String case. So I am not sure it is small/simple enough to make for a compelling backport. There's a lot of recent activity in this space. See also , which is not yet merged into master, and might still be eta-reduced one more step). I don't know whether such optimisation tweaks (not a bugfix) are in scope for backporting, we certainly need to be confident they'll not cause any new problems. FWIW, 5259 is dramatically simpler... Of course we also have in much the same territory, but there we're still blocked on someone figuring out what's going on with the 20% compile-time hit with T13056, and whether that's acceptable or not... -- Viktor. From gergo at erdi.hu Mon Mar 22 05:57:47 2021 From: gergo at erdi.hu (=?UTF-8?B?R2VyZ8WRIMOJcmRp?=) Date: Mon, 22 Mar 2021 13:57:47 +0800 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: Thanks, that makes it less appealing. In the original thread, I got no further replies after my email announcing my "discovery" of that commit, so I thought that was the whole story. On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni wrote: > On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote: > > > I'd love to have this in a GHC 8.10 release: > > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html > > This is already in 9.0, 9.2 and master, but it is a rather non-trivial > change, given all the new work that went into the String case. 
So I am > not sure it is small/simple enough to make for a compelling backport. > > There's a lot of recent activity in this space. See also > , which is not > yet merged into master, and might still be eta-reduced one more step). > > I don't know whether such optimisation tweaks (not a bugfix) are in > scope for backporting, we certainly need to be confident they'll not > cause any new problems. FWIW, 5259 is dramatically simpler... > > Of course we also have > in much the > same territory, but there we're still blocked on someone figuring out > what's going on with the 20% compile-time hit with T13056, and whether > that's acceptable or not... > > -- > Viktor. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Mon Mar 22 06:02:56 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 22 Mar 2021 14:02:56 +0800 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: The commit message from https://gitlab.haskell.org/ghc/ghc/-/commit/f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495 makes the changes to String seem required. Applying the commit on its own doesn't apply cleanly and pulls in quite a bit of extra dependent commits. Just applying the elem rules appears rather risky. Thus, while I agree that this would be a nice fix to have, the amount of necessary code changes makes me rather uncomfortable for a minor release :-/ On Mon, Mar 22, 2021 at 1:58 PM Gergő Érdi wrote: > Thanks, that makes it less appealing. In the original thread, I got no > further replies after my email announcing my "discovery" of that commit, so > I thought that was the whole story. 
> > On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni > wrote: >> On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote: >> >> > I'd love to have this in a GHC 8.10 release: >> > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html >> >> This is already in 9.0, 9.2 and master, but it is a rather non-trivial >> change, given all the new work that went into the String case. So I am >> not sure it is small/simple enough to make for a compelling backport. >> >> There's a lot of recent activity in this space. See also >> , which is not >> yet merged into master, and might still be eta-reduced one more step). >> >> I don't know whether such optimisation tweaks (not a bugfix) are in >> scope for backporting, we certainly need to be confident they'll not >> cause any new problems. FWIW, 5259 is dramatically simpler... >> >> Of course we also have >> in much the >> same territory, but there we're still blocked on someone figuring out >> what's going on with the 20% compile-time hit with T13056, and whether >> that's acceptable or not... >> >> -- >> Viktor. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Mar 22 10:31:03 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 22 Mar 2021 10:31:03 +0000 Subject: Type inference of singular matches on GADTs In-Reply-To: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: What would you expect of 1. \x -> case x of HNil -> blah Here the lambda and the case are separated. 2. 
\x -> (x, case x of HNil -> blah) Here the lambda and the case are separated more, and x is used twice. What if there are more data constructors that share a common return type? 3. data HL2 a where HNil1 :: HL2 [] HNil2 :: HL2 [] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil2 -> blah } Here HNil1 and HNil2 both return HL2 []. Is that “singular”? What if one was a bit more general than the other? Do we seek the least common generalisation of the alternatives given? The water gets deep quickly here. I don’t (yet) see an obviously-satisfying design point that isn’t massively ad-hoc. Simon From: ghc-devs On Behalf Of Alexis King Sent: 20 March 2021 09:41 To: ghc-devs at haskell.org Subject: Type inference of singular matches on GADTs Hi all, Today I was writing some code that uses a GADT to represent heterogeneous lists: data HList as where HNil :: HList '[] HCons :: a -> HList as -> HList (a ': as) This type is used to provide a generic way to manipulate n-ary functions. Naturally, I have some functions that accept these n-ary functions as arguments, which have types like this: foo :: Blah as => (HList as -> Widget) -> Whatsit The idea is that Blah does some type-level induction on as and supplies the function with some appropriate values. Correspondingly, my use sites look something like this: bar = foo (\HNil -> ...) Much to my dismay, I quickly discovered that GHC finds these expressions quite unfashionable, and it invariably insults them: • Ambiguous type variable ‘as0’ arising from a use of ‘foo’ prevents the constraint ‘(Blah as0)’ from being solved. The miscommunication is simple enough. I expected that when given an expression like \HNil -> ... GHC would see a single pattern of type HList '[] and consequently infer a type like HList '[] -> ... Alas, it was not to be. It seems GHC is reluctant to commit to the choice of '[] for as, lest perhaps I add another case to my function in the future. 
Indeed, if I were to do that, the choice of '[] would be premature, as as ~ '[] would only be available within one branch. However, I do not in fact have any such intention, which makes me quietly wish GHC would get over its anxiety and learn to be a bit more of a risk-taker. I ended up taking a look at the OutsideIn(X) paper, hoping to find some commentary on this situation, but in spite of the nice examples toward the start about the trickiness of GADTs, I found no discussion of this specific scenario: a function with exactly one branch and an utterly unambiguous pattern. Most examples come at the problem from precisely the opposite direction, trying to tease out a principal type from a collection of branches. The case of a function (or perhaps more accurately, a case expression) with only a single branch does not seem to be given any special attention. Of course, fewer special cases is always nice. I have great sympathy for generality. Still, I can’t help but feel a little unsatisfied here. Theoretically, there is no reason GHC cannot treat \(a `HCons` b `HCons` c `HCons` HNil) -> ... and \a b c -> ... almost identically, with a well-defined principal type and pleasant type inference properties, but there is no way for me to communicate this to the typechecker! So, my questions: 1. Have people considered this problem before? Is it discussed anywhere already? 2. Is my desire here reasonable, or is there some deep philosophical argument for why my program should be rejected? 3. If it is reasonable, are there any obvious situations where a change targeted at what I’m describing (vague as that is) would affect programs negatively, not positively? I realize this gets rather at the heart of the typechecker, so I don’t intend to imply a change of this sort should be made frivolously. Indeed, I’m not even particularly attached to the idea that a change must be made! But I do want to understand the tradeoffs better, so any insight would be much appreciated.
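For concreteness, the situation boils down to this minimal, self-contained file (the Blah class here is just a stand-in for the real type-level induction; the commented-out bar is the definition GHC rejects, and the pattern signature in the accepted version is the only workaround I have found so far):

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}

module SingularMatch where

data HList as where
  HNil  :: HList '[]
  HCons :: a -> HList as -> HList (a ': as)

-- Stand-in for the real type-level induction.
class Blah as where
  blah :: HList as

instance Blah '[] where
  blah = HNil

foo :: Blah as => (HList as -> Int) -> Int
foo f = f blah

-- Rejected: “Ambiguous type variable ‘as0’ …” — GHC will not commit
-- to as ~ '[] on the strength of the single GADT pattern alone.
-- bar = foo (\HNil -> 42)

-- Accepted: a pattern signature pins down ‘as’, so Blah '[] is solved.
bar :: Int
bar = foo (\(HNil :: HList '[]) -> 42)
```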
Thanks, Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From xnningxie at gmail.com Mon Mar 22 11:48:37 2021 From: xnningxie at gmail.com (Ningning Xie) Date: Mon, 22 Mar 2021 11:48:37 +0000 Subject: Call for Talks: Haskell Implementors' Workshop Message-ID: Call for Talks ACM SIGPLAN Haskell Implementors' Workshop https://icfp21.sigplan.org/home/hiw-2021 Virtual, 22 Aug, 2021 Co-located with ICFP 2021 https://icfp21.sigplan.org/ Important dates --------------- Deadline: Wednesday, 30 June, 2021 (AoE) Notification: Wednesday, 14 July, 2021 Workshop: Sunday, 22 August, 2021 The 13th Haskell Implementors' Workshop is to be held alongside ICFP 2021 this year virtually. It is a forum for people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure, to share their work and discuss future directions and collaborations with others. Talks and/or demos are proposed by submitting an abstract, and selected by a small program committee. There will be no published proceedings. The workshop will be informal and interactive, with open spaces in the timetable and room for ad-hoc discussion, demos, and lightning short talks. Scope and target audience ------------------------- It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium which is also co-located with ICFP 2021. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers. The Implementors' Workshop is an ideal place to describe a Haskell extension, describe works-in-progress, demo a new Haskell-related tool, or even propose future lines of Haskell development. 
Members of the wider Haskell community are encouraged to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving. Students working with Haskell are especially encouraged to share their work. The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets: * Compilation techniques * Language features and extensions * Type system implementation * Concurrency and parallelism: language design and implementation * Performance, optimisation and benchmarking * Virtual machines and run-time systems * Libraries and tools for development or deployment Talks ----- We invite proposals from potential speakers for talks and demonstrations. We are aiming for 20-minute talks with 5 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 300 words. Submissions should be made via HotCRP. The website is: https://icfp-hiw21.hotcrp.com/ We will also have a lightning talks session. These have been very well received in recent years, and we aim to increase the time available to them. Lightning talks will be ~7 mins and are scheduled on the day of the workshop. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators. Logistics --------- Due to the on-going COVID-19 situation, ICFP (and, consequently, HIW) will be held remotely this year. However, the organizers are still working hard to provide for a great workshop experience.
While we are sad that this year will lack the robust hallway track that is often the highlight of HIW, we believe that this remote workshop presents a unique opportunity to include more of the Haskell community in our discussion and explore new modes of communicating with our colleagues. We hope that you will join us in making this HIW as vibrant as any other. Program Committee ----------------- * Dominique Devriese (Vrije Universiteit Brussel) * Daan Leijen (Microsoft Research) * Andres Löh (Well-Typed LLP) * Julie Moronuki (Typeclass Consulting) * John Wiegley (DFINITY) * Ningning Xie (the University of Hong Kong) * Edward Z. Yang (Facebook AI Research) Contact ------- * Ningning Xie -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Mon Mar 22 18:28:48 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Mon, 22 Mar 2021 19:28:48 +0100 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: Cale made me aware of the fact that the "Type applications in patterns" proposal had already been implemented. See https://gitlab.haskell.org/ghc/ghc/-/issues/19577 where I adapt Alexis' use case into a test case that I'd like to see compiling. Am Sa., 20. März 2021 um 15:45 Uhr schrieb Sebastian Graf < sgraf1337 at gmail.com>: > Hi Alexis, > > The following works and will have inferred type `Int`: > > > bar = foo (\(HNil :: HList '[]) -> 42) > > I'd really like it if we could write > > > bar2 = foo (\(HNil @'[]) -> 42) > > though, even if you write out the constructor type with explicit > constraints and forall's. > E.g. by using a -XTypeApplications here, I specify the universal type var > of the type constructor `HList`. 
I think that is a semantics that is in > line with Type Variables in Patterns, Section 4 > : The only way to satisfy > the `as ~ '[]` constraint in the HNil pattern is to refine the type of the > pattern match to `HList '[]`. Consequently, the local `Blah '[]` can be > discharged and bar2 will have inferred `Int`. > > But that's simply not implemented at the moment, I think. I recall there's > some work that has to happen before. The corresponding proposal seems to be > https://ghc-proposals.readthedocs.io/en/latest/proposals/0126-type-applications-in-patterns.html > (or https://github.com/ghc-proposals/ghc-proposals/pull/238? I'm > confused) and your example should probably be added there as motivation. > > If `'[]` is never mentioned anywhere in the pattern like in the original > example, I wouldn't expect it to type-check (or at least emit a > pattern-match warning): First off, the type is ambiguous. It's a similar > situation as in > https://stackoverflow.com/questions/50159349/type-abstraction-in-ghc-haskell. > If it was accepted and got type `Blah as => Int`, then you'd get a > pattern-match warning, because depending on how `as` is instantiated, your > pattern-match is incomplete. E.g., `bar3 @[Int]` would crash. > > Complete example code: > > {-# LANGUAGE DataKinds #-} > {-# LANGUAGE TypeOperators #-} > {-# LANGUAGE GADTs #-} > {-# LANGUAGE LambdaCase #-} > {-# LANGUAGE TypeApplications #-} > {-# LANGUAGE ScopedTypeVariables #-} > {-# LANGUAGE RankNTypes #-} > > module Lib where > > data HList as where > HNil :: forall as. (as ~ '[]) => HList as > HCons :: forall as a as'. (as ~ (a ': as')) => a -> HList as' -> HList as > > class Blah as where > blah :: HList as > > instance Blah '[] where > blah = HNil > > foo :: Blah as => (HList as -> Int) -> Int > foo f = f blah > > bar = foo (\(HNil :: HList '[]) -> 42) -- compiles > bar2 = foo (\(HNil @'[]) -> 42) -- errors > > Cheers, > Sebastian > > Am Sa., 20. 
März 2021 um 13:57 Uhr schrieb Viktor Dukhovni < > ietf-dane at dukhovni.org>: > >> On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote: >> >> > As soon as I try add more complex contraints, I appear to need an >> > explicit type signature for HNil, and then the code again compiles: >> >> But aliasing the promoted constructors via pattern synonyms, and using >> those instead, appears to resolve the ambiguity. >> >> -- >> Viktor. >> >> {-# LANGUAGE >> DataKinds >> , GADTs >> , PatternSynonyms >> , PolyKinds >> , ScopedTypeVariables >> , TypeFamilies >> , TypeOperators >> #-} >> >> import GHC.Types >> >> infixr 1 `HC` >> >> data HList as where >> HNil :: HList '[] >> HCons :: a -> HList as -> HList (a ': as) >> >> pattern HN :: HList '[]; >> pattern HN = HNil >> pattern HC :: a -> HList as -> HList (a ': as) >> pattern HC a as = HCons a as >> >> class Nogo a where >> >> type family Blah (as :: [Type]) :: Constraint >> type instance Blah '[] = () >> type instance Blah (_ ': '[]) = () >> type instance Blah (_ ': _ ': '[]) = () >> type instance Blah (_ ': _ ': _ ': _) = (Nogo ()) >> >> foo :: (Blah as) => (HList as -> Int) -> Int >> foo _ = 42 >> >> bar :: Int >> bar = foo (\ HN -> 1) >> >> baz :: Int >> baz = foo (\ (True `HC` HN) -> 2) >> >> pattern One :: Int >> pattern One = 1 >> bam :: Int >> bam = foo (\ (True `HC` One `HC` HN) -> 2) >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Mar 23 06:18:33 2021 From: lonetiger at gmail.com (Phyx) Date: Tue, 23 Mar 2021 06:18:33 +0000 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: Hi, I currently have https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5055 marked for backports but don't know if it was done or not. 
Thanks, Tamar Sent from my Mobile On Mon, Mar 22, 2021, 04:33 Moritz Angermann wrote: > Hi there! > > Does anyone have any backports they'd like to see for consideration for > 8.10.5? > > Cheers, > Moritz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Tue Mar 23 07:30:54 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Tue, 23 Mar 2021 15:30:54 +0800 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: Thanks! I’ll make sure not to forget that one. I’m afraid 8.10 will be delayed yet again a bit as we find ourselves in docker purgatory. On Tue, 23 Mar 2021 at 2:18 PM, Phyx wrote: > Hi, > > I currently have https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5055 > marked for backports but don't know if it was done or not. > > Thanks, > Tamar > > Sent from my Mobile > > On Mon, Mar 22, 2021, 04:33 Moritz Angermann > wrote: > >> Hi there! >> >> Does anyone have any backports they'd like to see for consideration for >> 8.10.5? >> >> Cheers, >> Moritz >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From klebinger.andreas at gmx.at Wed Mar 24 11:28:11 2021 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 24 Mar 2021 12:28:11 +0100 Subject: GHC 8.10 backports? In-Reply-To: References: Message-ID: <7f3e5161-b4d9-55bb-0014-ee9148db1644@gmx.at> Yes, only changing the rule did indeed cause regressions when not including the string changes. I don't think it's worth having one without the other. But it seems you already backported this?
See https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5263 Cheers Andreas Am 22/03/2021 um 07:02 schrieb Moritz Angermann: > The commit message from > https://gitlab.haskell.org/ghc/ghc/-/commit/f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495 > , > > makes the changes to string seem required. Applying the commit on its > own doesn't apply cleanly and pulls in quite a > bit of extra dependent commits. Just applying the elem rules appears > rather risky. Thus will I agree that having that > would be a nice fix to have, the amount of necessary code changes > makes me rather uncomfortable for a minor release :-/ > > On Mon, Mar 22, 2021 at 1:58 PM Gergő Érdi > wrote: > > Thanks, that makes it less appealing. In the original thread, I > got no further replies after my email announcing my "discovery" of > that commit, so I thought that was the whole story. > > On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni > > wrote: > > On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote: > > > I'd love to have this in a GHC 8.10 release: > > > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html > > > This is already in 9.0, 9.2 and master, but it is a rather > non-trivial > change, given all the new work that went into the String > case.  So I am > not sure it is small/simple enough to make for a compelling > backport. > > There's a lot of recent activity in this space.  See also > >, > which is not > yet merged into master, and might still be eta-reduced one > more step). > > I don't know whether such optimisation tweaks (not a bugfix) > are in > scope for backporting, we certainly need to be confident > they'll not > cause any new problems.  FWIW, 5259 is dramatically simpler... > > Of course we also have > > in > much the > same territory, but there we're still blocked on someone > figuring out > what's going on with the 20% compile-time hit with T13056, and > whether > that's acceptable or not... > > -- >     Viktor. 
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From klebinger.andreas at gmx.at Wed Mar 24 11:44:45 2021 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 24 Mar 2021 12:44:45 +0100 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: After the idea of letting marge accept unexpected perf improvements, and looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759 (which failed because a single test, for a single build flavour, crossed the improvement threshold after rebasing), I wondered: When would accepting an unexpected perf improvement ever backfire? In practice I either have a patch that I expect to improve performance for some things, so I want to accept whatever gains I get. Or I don't expect improvements, so it's *maybe* worth failing CI for in case I optimized away some code I shouldn't or something of that sort. How could this be actionable? Perhaps having a set of indicators for CI of "Accept allocation decreases" "Accept residency decreases" would be saner. I have personally *never* gotten value out of the requirement to list the individual tests that improve. Usually a whole lot of them do. Some cross the threshold so I add them. If I'm unlucky I have to rebase and a new one might make it across the threshold.
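As a sketch of that policy (names here are hypothetical, not the actual testsuite perf machinery): accept any metric decrease outright, and only enforce the tolerance on increases.

```haskell
-- Hypothetical model of the proposed CI policy, not GHC's real
-- perf-test harness: a metric passes if it improved at all, or if
-- any regression stays within the test's tolerance.
data Metric = Metric
  { metricTest :: String
  , baseline   :: Double
  , measured   :: Double
  , tolerance  :: Double  -- allowed relative increase, e.g. 0.02 for 2%
  }

relChange :: Metric -> Double
relChange m = (measured m - baseline m) / baseline m

passes :: Metric -> Bool
passes m
  | delta <= 0 = True                 -- improvement: accept wholesale
  | otherwise  = delta <= tolerance m -- regression: enforce threshold
  where delta = relChange m
```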
Being able to accept improvements (but not regressions) wholesale might be a reasonable alternative. Opinions? From rae at richarde.dev Wed Mar 24 12:08:45 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 24 Mar 2021 12:08:45 +0000 Subject: On CI In-Reply-To: References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> Message-ID: <010f017864231a40-251b9f59-0c33-4066-b7d0-9c1d0a44c17f-000000@us-east-2.amazonses.com> What about the case where the rebase *lessens* the improvement? That is, you're expecting these 10 cases to improve, but after a rebase, only 1 improves. That's news! But a blanket "accept improvements" won't tell you. I'm not hard against this proposal, because I know precise tracking has its own costs. Just wanted to bring up another scenario that might be factored in. Richard > On Mar 24, 2021, at 7:44 AM, Andreas Klebinger wrote: > > After the idea of letting marge accept unexpected perf improvements and > looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759 > which failed because of a single test, for a single build flavour > crossing the > improvement threshold where CI fails after rebasing I wondered. > > When would accepting a unexpected perf improvement ever backfire? > > In practice I either have a patch that I expect to improve performance > for some things > so I want to accept whatever gains I get. Or I don't expect improvements > so it's *maybe* > worth failing CI for in case I optimized away some code I shouldn't or > something of that > sort. > > How could this be actionable? Perhaps having a set of indicator for CI of > "Accept allocation decreases" > "Accept residency decreases" > > Would be saner. I have personally *never* gotten value out of the > requirement > to list the indivial tests that improve. Usually a whole lot of them do. 
> Some cross > the threshold so I add them. If I'm unlucky I have to rebase and a new > one might > make it across the threshold. > > Being able to accept improvements (but not regressions) wholesale might be a > reasonable alternative. > > Opinions? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From moritz.angermann at gmail.com Wed Mar 24 12:40:13 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 24 Mar 2021 20:40:13 +0800 Subject: GHC 8.10 backports? In-Reply-To: <7f3e5161-b4d9-55bb-0014-ee9148db1644@gmx.at> References: <7f3e5161-b4d9-55bb-0014-ee9148db1644@gmx.at> Message-ID: More like abandoned backport attempt :D On Wed, Mar 24, 2021 at 7:29 PM Andreas Klebinger wrote: > Yes, only changing the rule did indeed cause regressions. > Whichwhen not including the string changes. I don't think it's worth > having one without the other. > > But it seems you already backported this? > See https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5263 > > Cheers > Andreas > Am 22/03/2021 um 07:02 schrieb Moritz Angermann: > > The commit message from > https://gitlab.haskell.org/ghc/ghc/-/commit/f10d11fa49fa9a7a506c4fdbdf86521c2a8d3495, > > makes the changes to string seem required. Applying the commit on its own > doesn't apply cleanly and pulls in quite a > bit of extra dependent commits. Just applying the elem rules appears > rather risky. Thus will I agree that having that > would be a nice fix to have, the amount of necessary code changes makes me > rather uncomfortable for a minor release :-/ > > On Mon, Mar 22, 2021 at 1:58 PM Gergő Érdi wrote: > >> Thanks, that makes it less appealing. In the original thread, I got no >> further replies after my email announcing my "discovery" of that commit, so >> I thought that was the whole story. 
>> >> On Mon, Mar 22, 2021, 13:53 Viktor Dukhovni >> wrote: >> >>> On Mon, Mar 22, 2021 at 12:39:28PM +0800, Gergő Érdi wrote: >>> >>> > I'd love to have this in a GHC 8.10 release: >>> > https://mail.haskell.org/pipermail/ghc-devs/2021-March/019629.html >>> >>> This is already in 9.0, 9.2 and master, but it is a rather non-trivial >>> change, given all the new work that went into the String case. So I am >>> not sure it is small/simple enough to make for a compelling backport. >>> >>> There's a lot of recent activity in this space. See also >>> , which is not >>> yet merged into master, and might still be eta-reduced one more step). >>> >>> I don't know whether such optimisation tweaks (not a bugfix) are in >>> scope for backporting, we certainly need to be confident they'll not >>> cause any new problems. FWIW, 5259 is dramatically simpler... >>> >>> Of course we also have >>> in much the >>> same territory, but there we're still blocked on someone figuring out >>> what's going on with the 20% compile-time hit with T13056, and whether >>> that's acceptable or not... >>> >>> -- >>> Viktor. >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing listghc-devs at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From moritz.angermann at gmail.com Wed Mar 24 12:44:19 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 24 Mar 2021 20:44:19 +0800 Subject: On CI In-Reply-To: <010f017864231a40-251b9f59-0c33-4066-b7d0-9c1d0a44c17f-000000@us-east-2.amazonses.com> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> <010f017864231a40-251b9f59-0c33-4066-b7d0-9c1d0a44c17f-000000@us-east-2.amazonses.com> Message-ID: Yes, this is exactly one of the issues that marge might run into as well: the aggregate ends up performing differently from the individual ones. Now we have marge to ensure that at least the aggregate builds together, which is the whole point of these merge trains. Not to end up in a situation where two patches that are fine on their own end up producing a broken merged state that doesn't build anymore. Now we have marge to ensure every commit is buildable. Next we should run regression tests on all commits on master (and that includes each and every one that marge brings into master). Then we have visualisation that tells us how performance metrics go up/down over time, and we can drill down into commits if they yield interesting results in either way. Now let's say you had a commit that should have made GHC 50% faster across the board, but somehow after the aggregate with other patches this didn't happen anymore? We'd still expect this to somehow show in each of the individual commits on master, right? On Wed, Mar 24, 2021 at 8:09 PM Richard Eisenberg wrote: > What about the case where the rebase *lessens* the improvement? That is, > you're expecting these 10 cases to improve, but after a rebase, only 1 > improves. That's news! But a blanket "accept improvements" won't tell you. > > I'm not hard against this proposal, because I know precise tracking has > its own costs.
Just wanted to bring up another scenario that might be > factored in. > > Richard > > > On Mar 24, 2021, at 7:44 AM, Andreas Klebinger > wrote: > > > > After the idea of letting marge accept unexpected perf improvements and > > looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759 > > which failed because of a single test, for a single build flavour > > crossing the > > improvement threshold where CI fails after rebasing I wondered. > > > > When would accepting a unexpected perf improvement ever backfire? > > > > In practice I either have a patch that I expect to improve performance > > for some things > > so I want to accept whatever gains I get. Or I don't expect improvements > > so it's *maybe* > > worth failing CI for in case I optimized away some code I shouldn't or > > something of that > > sort. > > > > How could this be actionable? Perhaps having a set of indicator for CI of > > "Accept allocation decreases" > > "Accept residency decreases" > > > > Would be saner. I have personally *never* gotten value out of the > > requirement > > to list the indivial tests that improve. Usually a whole lot of them do. > > Some cross > > the threshold so I add them. If I'm unlucky I have to rebase and a new > > one might > > make it across the threshold. > > > > Being able to accept improvements (but not regressions) wholesale might > be a > > reasonable alternative. > > > > Opinions? > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From klebinger.andreas at gmx.at Wed Mar 24 13:50:09 2021 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 24 Mar 2021 14:50:09 +0100 Subject: On CI In-Reply-To: <010f017864231a40-251b9f59-0c33-4066-b7d0-9c1d0a44c17f-000000@us-east-2.amazonses.com> References: <010f01784069544b-55202ccf-7239-4333-80c6-3f3cd8543527-000000@us-east-2.amazonses.com> <132ab2d4-9f1f-4320-fee6-d5f48abe00f3@gmx.at> <3d7253a2-b1ee-4b7e-15e3-7d75f7b1c15f@centrum.cz> <010f017864231a40-251b9f59-0c33-4066-b7d0-9c1d0a44c17f-000000@us-east-2.amazonses.com> Message-ID: <923774a0-9066-9a0b-f85a-c63f4717f72a@gmx.at> > What about the case where the rebase *lessens* the improvement? That is, you're expecting these 10 cases to improve, but after a rebase, only 1 improves. That's news! But a blanket "accept improvements" won't tell you. I don't think that scenario currently triggers a CI failure. So this wouldn't really change. As I understand it the current logic is: * Run tests * Check if any cross the metric thresholds set in the test. * If so check if that test is allowed to cross the threshold. I believe we don't check that all benchmarks listed with an expected in/decrease actually do so. It would also be hard to do so reasonably without making it even harder to push MRs through CI. Andreas Am 24/03/2021 um 13:08 schrieb Richard Eisenberg: > What about the case where the rebase *lessens* the improvement? That is, you're expecting these 10 cases to improve, but after a rebase, only 1 improves. That's news! But a blanket "accept improvements" won't tell you. > > I'm not hard against this proposal, because I know precise tracking has its own costs. Just wanted to bring up another scenario that might be factored in. 
> > Richard > >> On Mar 24, 2021, at 7:44 AM, Andreas Klebinger wrote: >> >> After the idea of letting marge accept unexpected perf improvements and >> looking at https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4759 >> which failed because of a single test, for a single build flavour >> crossing the >> improvement threshold where CI fails after rebasing I wondered. >> >> When would accepting a unexpected perf improvement ever backfire? >> >> In practice I either have a patch that I expect to improve performance >> for some things >> so I want to accept whatever gains I get. Or I don't expect improvements >> so it's *maybe* >> worth failing CI for in case I optimized away some code I shouldn't or >> something of that >> sort. >> >> How could this be actionable? Perhaps having a set of indicator for CI of >> "Accept allocation decreases" >> "Accept residency decreases" >> >> Would be saner. I have personally *never* gotten value out of the >> requirement >> to list the indivial tests that improve. Usually a whole lot of them do. >> Some cross >> the threshold so I add them. If I'm unlucky I have to rebase and a new >> one might >> make it across the threshold. >> >> Being able to accept improvements (but not regressions) wholesale might be a >> reasonable alternative. >> >> Opinions? >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From clintonmead at gmail.com Wed Mar 24 14:08:38 2021 From: clintonmead at gmail.com (Clinton Mead) Date: Thu, 25 Mar 2021 01:08:38 +1100 Subject: Options for targeting Windows XP? Message-ID: I'm currently trying to bring my company around to using a bit of Haskell. One issue is that a number of our clients are based in South East Asia and need software that runs on Windows XP. Unfortunately it seems the last version of GHC that produces executables that run on Windows XP is GHC 7.10. 
Whilst this table suggests the issue may only be with running GHC 8.0+ on Windows XP, I've confirmed that GHC 8.0 executables (even "Hello World") will not run on Windows XP, presumably because of a non-XP WinAPI call in the runtime. My first thought would be to restrict myself to GHC 7.10 features (i.e. 2015). This would be a slight annoyance but GHC 7.10 still presents a reasonable language. But my concern would be that increasingly I'll run into issues with libraries that use extensions post GHC 7.10, particularly libraries with large dependency lists. So there's a few options I've considered at this point: 1. Use GHCJS to compile to Javascript, and then dig out a version of NodeJS that runs on Windows XP. GHCJS seems to at least have a compiler based on GHC 8.6. 2. Patch GHC with an additional command line argument to produce XP/Vista compatible executables, perhaps by looking at the changes between 7.10 -> 8.0, and re-introducing the XP approach as an option. The issue with 1 is that, as well as being limited by how up to date GHCJS is, this will increase install size, memory usage and decrease performance on Windows XP machines, which are often in our environments quite old and resource and memory constrained. Approach 2 is something I'd be willing to put some work into if it was practical, but my thought is that XP support was removed for a reason, presumably because using newer WinAPI functions simplified things significantly. By re-adding XP support I'd be complicating GHC once again, and GHC will effectively have to maintain two approaches. In addition, in the long term, whenever a new WinAPI call is added one would now have to check whether it's available in Windows XP, and if it's not produce a Windows XP equivalent. That might seem like just an extra burden of support for already busy GHC developers. But on the other hand, if the GHC devs would be happy to merge a patch and keep up XP support this would be the cleanest option. But then I had a thought.
GHC Core isn't supposed to change much between versions, is it? That made me come up with these approaches: 3. Hack up a script to compile programs using GHC 9 to Core, then feed that Core output into GHC 7.10. OR 4. Produce a chimera style GHC by importing the GHC 9.0 API and the GHC 7.10 API, and making a version of GHC that does Haskell -> Core in GHC 9.0 and the rest of the code generation in GHC 7.10. One issue with 4 will be that, presumably, because I'm importing the GHC 9.0 API and the 7.10 API separately, all their data types will technically be separate, so I'll need to basically deep copy the GHC 9.0 core datatype (and perhaps others) to GHC 7.10 datatypes. But presuming they're largely similar this should be fairly mechanical. So are any of these approaches (well, particularly 2 and 4) reasonable? Or am I going to run into big problems with either of them? Is there another approach I haven't thought of? Thanks, Clinton -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Mar 24 17:49:47 2021 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 24 Mar 2021 13:49:47 -0400 Subject: Options for targeting Windows XP? In-Reply-To: References: Message-ID: In terms of net amount of work: I suspect ghcjs targeting either node or some sort of browser plug-in may be the most humane, assuming associated browser / node support on xp is turn key. I think there were some genuine changes to the io manager (the Haskell code in base for doing efficient file system api stuff) on windows plus a few other things. There may have also been changes elsewhere that andreask and Tamar and ben gamari can speak to better. More broadly, there's so many bug fixes and improvements that you'd miss out on if you don't try to keep yourself current within the 3 most recent ghc major version releases wrt associated libraries.
On Wed, Mar 24, 2021 at 10:09 AM Clinton Mead wrote: > I'm currently trying to bring my company around to using a bit of Haskell. > One issue is that a number of our clients are based in South East Asia and > need software that runs on Windows XP. > > Unfortunately it seems the last version of GHC that produces executables > that run on Windows XP is GHC 7.10. Whilst this table > suggests > the issue may only running GHC 8.0+ on Windows XP, I've confirmed that GHC > 8.0 executables (even "Hello World") will not run on Windows XP, presumably > because a non-XP WinAPI call in the runtime. > > My first thought would be to restrict myself to GHC 7.10 features (i.e. > 2015). This would be a slight annoyance but GHC 7.10 still presents a > reasonable language. But my concern would be that increasingly I'll run > into issues with libraries that use extensions post GHC 7.10, particularly > libraries with large dependency lists. > > So there's a few options I've considered at this point: > > 1. Use GHCJS to compile to Javascript, and then dig out a version of > NodeJS that runs on Windows XP. GHCJS seems to at least have a compiler > based on GHC 8.6. > 2. Patch GHC with an additional command line argument to produce XP/Vista > compatible executables, perhaps by looking at the changes between 7.10 -> > 8.0, and re-introducing the XP approach as an option. > > The issue with 1 is that is that as well as being limited by how up to > date GHCJS is, this will increase install size, memory usage and decrease > performance on Windows XP machines, which are often in our environments > quite old and resource and memory constrained. > > Approach 2 is something I'd be willing to put some work into if it was > practical, but my thought is that XP support was removed for a reason, > presumably by using newer WinAPI functions simplified things significantly. > By re-adding in XP support I'd be complicating GHC once again, and GHC will > effectively have to maintain two approaches. 
In addition, in the long term, > whenever a new WinAPI call is added one would now have to check whether > it's available in Windows XP, and if it's not produce a Windows XP > equivalent. That might seem like just an extra burden of support for > already busy GHC developers. But on the other hand, if the GHC devs would > be happy to merge a patch and keep up XP support this would be the cleanest > option. > > But then I had a thought. If GHC Core isn't supposed to change much > between versions is it? Which made me come up with these approaches: > > 3. Hack up a script to compile programs using GHC 9 to Core, then feed > that Core output into GHC 7.10. OR > 4. Produce a chimera style GHC by importing the GHC 9.0 API and the GHC > 7.10 API, and making a version of GHC that does Haskell -> Core in GHC 9.0 > and the rest of the code generation in GHC 7.10. > > One issue with 4 will be that presumably that because I'm importing GHC > 9.0 API and the 7.10 API separately, all their data types will technically > be separate, so I'll need to basically deep copy the GHC 9.0 core datatype > (and perhaps others) to GHC 7.10 datatypes. But presuming their largely > similar this should be fairly mechanical. > > So are any of these approaches (well, particularly 2 and 4) reasonable? Or > am I going to run into big problems with either of them? Is there another > approach I haven't thought of? > > Thanks, > Clinton > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Wed Mar 24 18:29:27 2021 From: lonetiger at gmail.com (Phyx) Date: Wed, 24 Mar 2021 18:29:27 +0000 Subject: Options for targeting Windows XP? In-Reply-To: References: Message-ID: Hi, > XP. GHCJS seems to at least have a compiler based on GHC 8.6. > 2. 
Patch GHC with an additional command line argument to produce XP/Vista compatible executables, perhaps by looking at the changes between 7.10 -> 8.0, and re-introducing the XP approach as an option. This would be somewhat hard but not impossible for 8.0, which, if I recall, dropped XP for some linker functionality. The higher you go the more difficult it would become though. When you get to 9.0 you don't have much hope as there it's not just the linker, but the RTS itself heavily relies on functionality not available in XP, including how we manage memory and do synchronization. It's however not just GHC that would need patching but libraries such as process as well. That is not to say it's impossible, just you'd have to find ways to work around the bugs that caused us to change APIs to begin with... I can't speak for the community, but I wouldn't want to re-introduce XP as a supported option in mainline. Parts of e.g. 9.0 (like winio) just won't work on XP. The design itself is centered around new APIs. So supporting XP means essentially a new design. Kind regards, Tamar Sent from my Mobile On Wed, Mar 24, 2021, 14:09 Clinton Mead wrote: > I'm currently trying to bring my company around to using a bit of Haskell. > One issue is that a number of our clients are based in South East Asia and > need software that runs on Windows XP. > > Unfortunately it seems the last version of GHC that produces executables > that run on Windows XP is GHC 7.10. Whilst this table > suggests > the issue may only running GHC 8.0+ on Windows XP, I've confirmed that GHC > 8.0 executables (even "Hello World") will not run on Windows XP, presumably > because a non-XP WinAPI call in the runtime. > > My first thought would be to restrict myself to GHC 7.10 features (i.e. > 2015). This would be a slight annoyance but GHC 7.10 still presents a > reasonable language.
But my concern would be that increasingly I'll run > into issues with libraries that use extensions post GHC 7.10, particularly > libraries with large dependency lists. > > So there's a few options I've considered at this point: > > 1. Use GHCJS to compile to Javascript, and then dig out a version of > NodeJS that runs on Windows XP. GHCJS seems to at least have a compiler > based on GHC 8.6. > 2. Patch GHC with an additional command line argument to produce XP/Vista > compatible executables, perhaps by looking at the changes between 7.10 -> > 8.0, and re-introducing the XP approach as an option. > > The issue with 1 is that is that as well as being limited by how up to > date GHCJS is, this will increase install size, memory usage and decrease > performance on Windows XP machines, which are often in our environments > quite old and resource and memory constrained. > > Approach 2 is something I'd be willing to put some work into if it was > practical, but my thought is that XP support was removed for a reason, > presumably by using newer WinAPI functions simplified things significantly. > By re-adding in XP support I'd be complicating GHC once again, and GHC will > effectively have to maintain two approaches. In addition, in the long term, > whenever a new WinAPI call is added one would now have to check whether > it's available in Windows XP, and if it's not produce a Windows XP > equivalent. That might seem like just an extra burden of support for > already busy GHC developers. But on the other hand, if the GHC devs would > be happy to merge a patch and keep up XP support this would be the cleanest > option. > > But then I had a thought. If GHC Core isn't supposed to change much > between versions is it? Which made me come up with these approaches: > > 3. Hack up a script to compile programs using GHC 9 to Core, then feed > that Core output into GHC 7.10. OR > 4. 
Produce a chimera style GHC by importing the GHC 9.0 API and the GHC > 7.10 API, and making a version of GHC that does Haskell -> Core in GHC 9.0 > and the rest of the code generation in GHC 7.10. > > One issue with 4 will be that presumably that because I'm importing GHC > 9.0 API and the 7.10 API separately, all their data types will technically > be separate, so I'll need to basically deep copy the GHC 9.0 core datatype > (and perhaps others) to GHC 7.10 datatypes. But presuming their largely > similar this should be fairly mechanical. > > So are any of these approaches (well, particularly 2 and 4) reasonable? Or > am I going to run into big problems with either of them? Is there another > approach I haven't thought of? > > Thanks, > Clinton > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed Mar 24 19:47:40 2021 From: ben at well-typed.com (Ben Gamari) Date: Wed, 24 Mar 2021 15:47:40 -0400 Subject: Options for targeting Windows XP? In-Reply-To: References: Message-ID: <87r1k4uyee.fsf@smart-cactus.org> Clinton Mead writes: > I'm currently trying to bring my company around to using a bit of Haskell. > One issue is that a number of our clients are based in South East Asia and > need software that runs on Windows XP. > Ooph, that is quite tricky. Indeed we dropped XP support in GHC 8.0, at which point XP had already been EoL'd for seven years. > Unfortunately it seems the last version of GHC that produces executables > that run on Windows XP is GHC 7.10. Whilst this table > suggests the > issue may only running GHC 8.0+ on Windows XP, I've confirmed that GHC 8.0 > executables (even "Hello World") will not run on Windows XP, presumably > because a non-XP WinAPI call in the runtime. >
The dropping of XP support was prompted by the need to use a newer Win32 interface (I can't recall which in particular). > My first thought would be to restrict myself to GHC 7.10 features (i.e. > 2015). This would be a slight annoyance but GHC 7.10 still presents a > reasonable language. But my concern would be that increasingly I'll run > into issues with libraries that use extensions post GHC 7.10, particularly > libraries with large dependency lists. > I would also be concerned about this. I wouldn't expect to be able to get very far with GHC 7.10 in 2021. > So there's a few options I've considered at this point: > > 1. Use GHCJS to compile to Javascript, and then dig out a version of NodeJS > that runs on Windows XP. GHCJS seems to at least have a compiler based on > GHC 8.6. > This is an option, although only you know whether this would fit your application given your memory and CPU constraints. I also have no idea how easy it would be to find a functional version of NodeJS. > But then I had a thought. If GHC Core isn't supposed to change much between > versions is it? Which made me come up with these approaches: > > 3. Hack up a script to compile programs using GHC 9 to Core, then feed that > Core output into GHC 7.10. OR > > 4. Produce a chimera style GHC by importing the GHC 9.0 API and the GHC > 7.10 API, and making a version of GHC that does Haskell -> Core in GHC 9.0 > and the rest of the code generation in GHC 7.10. > Sadly, I suspect this isn't going to work. While Core itself doesn't change (that much), the primops do. Even getting Core produced by GHC 9.0 to build under GHC 8.10 would require a considerable amount of work. > One issue with 4 will be that presumably that because I'm importing GHC 9.0 > API and the 7.10 API separately, all their data types will technically be > separate, so I'll need to basically deep copy the GHC 9.0 core datatype > (and perhaps others) to GHC 7.10 datatypes. 
But presuming their largely > similar this should be fairly mechanical. > I'm not sure how mechanical this would be, to be honest. > So are any of these approaches (well, particularly 2 and 4) reasonable? Or > am I going to run into big problems with either of them? Is there another > approach I haven't thought of? > My sense is that if you don't need the threaded runtime system it would probably be easiest to just try to make a modern GHC run on Windows XP. As Tamar suggested, it's likely not easy, but also not impossible. WinIO is indeed problematic, but thankfully the old MIO IO manager is still around (and will be in 9.2). The possible reasons for Windows XP incompatibility that I can think of off the top of my head are: * Timers (we now use QueryPerformanceCounter) * Big-PE support, which is very much necessary for profiled builds * Long file path support (mostly a build-time consideration as Haskell build systems tend to produce very long paths) There may be others, but I would start looking there. I am happy to answer any questions that might arise. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From clintonmead at gmail.com Thu Mar 25 03:37:43 2021 From: clintonmead at gmail.com (Clinton Mead) Date: Thu, 25 Mar 2021 14:37:43 +1100 Subject: Options for targeting Windows XP? In-Reply-To: <87r1k4uyee.fsf@smart-cactus.org> References: <87r1k4uyee.fsf@smart-cactus.org> Message-ID: Thanks all for your replies. Just going through what Ben has said step by step: My sense is that if you don't need the threaded runtime system it would > probably be easiest to just try to make a modern GHC run on Windows XP. > Happy to run non-threaded runtime. A good chunk of these machines will be single or dual core anyway. > As Tamar suggested, it likely not easy, but also not impossible. WinIO > is indeed problematic, but thankfully the old MIO IO manager is still > around (and will be in 9.2). >
WinIO > is indeed problematic, but thankfully the old MIO IO manager is still > around (and will be in 9.2). > "Is still around"? As in it's in the code base and just dead code, or can I trigger GHC to use the old IO manager with a GHC option? The possible reasons for Windows XP incompatibility that I can think of > off the top of my head are: > > * Timers (we now use QueryPerformanceCounter) > This page suggests that QueryPerformanceCounter should run on XP. Is this incorrect? > * Big-PE support, which is very much necessary for profiled builds > I don't really need profiled builds > * Long file path support (mostly a build-time consideration as Haskell > build systems tend to produce very long paths) > > I don't need to build on Windows XP either. I just need to run on Windows XP so hopefully this won't be an issue. Although if GHC was modified for long file path support so it could build itself with long file path support presumably it will affect everything else it builds also. > There may be others, but I would start looking there. I am happy to > answer any questions that might arise. > > I'm guessing the way forward here might be a patch with two options: 1. -no-long-path-support/-long-path-support (default -long-path-support) 2. -winxp The winxp option shall: - Require -no-long-path-support - Conflicts with -threaded - Conflicts with profiled builds - Uses the old IO manager (I'm not sure if this is an option or how this is done). What do you think (roughly speaking)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hecate at glitchbra.in Thu Mar 25 15:19:02 2021 From: hecate at glitchbra.in (=?UTF-8?Q?H=c3=a9cate?=) Date: Thu, 25 Mar 2021 16:19:02 +0100 Subject: HLint in the GHC CI, an eight-months retrospective Message-ID: Hello fellow devs, this email is an activity report on the integration of the HLint[0] tool in the Continuous Integration (CI) pipelines. On Jul. 
5, 2020 I opened a discussion ticket[1] on the topic of code linting in several components of the GHC code-base. It has served as a reference anchor for the Merge Requests (MR) that stemmed from it, and allowed us to refine our expectations and processes. If you are not acquainted with its content, I invite you to read the whole conversation. Subsequently, several Hadrian lint rules have been integrated in the following months, in order to run HLint on targeted components of the GHC repository (the base library, the compiler code-base, etc). Being satisfied with the state of the rules we applied to the code-base, such as removing extraneous pragmata and keywords, it was decided to integrate the base library linting rule in the CI. This was five months ago, in September[2], and I am happy to report that developer friction has been so far minimal. In parallel to this work on the base library, I took care of cleaning-up the compiler, and harmonised the various micro coding styles that have emerged quite organically during the decades of development that are behind us (I never realised how many variations of the same ten lines of pragmata could coexist in the same folders). Upon feedback from stakeholders of this sub-code base, the rules file was altered to better suit their development needs, such as not removing extraneous `do` keywords, as they are useful to introduce a block in which debug statements can be easily inserted. As of today, the linting of the compiler code-base has been integrated in our CI pipelines, without further burdening our CI times. Things seem to run smoothly, and I welcome comments and requests of any kind related to this area of our code quality process. Regarding our future plans, there has been a discussion about integrating such a linting mechanism for our C code-base, in the RTS.
Nothing is formally established yet, so I would be grateful if people who have experience and wisdom about it can chime in to contribute to the discussion: https://gitlab.haskell.org/ghc/ghc/-/issues/19437. And I would like to say that I am overall very thankful for the involvement of the people who have been giving us feedback and have been reviewing the resulting MRs. Have a very nice day, Hécate --- [0]: https://github.com/ndmitchell/hlint [1]: https://gitlab.haskell.org/ghc/ghc/-/issues/18424 [2]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4147 -- Hécate ✨ 🐦: @TechnoEmpress IRC: Uniaika WWW: https://glitchbra.in RUN: BSD From clintonmead at gmail.com Thu Mar 25 18:25:44 2021 From: clintonmead at gmail.com (Clinton Mead) Date: Fri, 26 Mar 2021 05:25:44 +1100 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> Message-ID: Another gotcha that I didn't think of. The machines I'm targeting often have 32 bit versions of Windows, which it looks like isn't supported after GHC 8.6. Does this move it into the too hard basket? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Thu Mar 25 20:39:13 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 25 Mar 2021 20:39:13 +0000 Subject: HLint in the GHC CI, an eight-months retrospective In-Reply-To: References: Message-ID: <010f01786b1cd03c-2d3134dc-a0eb-4081-babf-cf04a92e888c-000000@us-east-2.amazonses.com> Thanks for this update! Glad to know this effort is going well. One quick question: suppose I am editing something in `base`. My understanding is that my edit will be linted. How can I run hlint locally so that I can easily respond to trouble before CI takes a crack? And where would I learn this information (that is, how to run hlint locally)? Thanks! 
Richard > On Mar 25, 2021, at 11:19 AM, Hécate wrote: > > Hello fellow devs, > > this email is an activity report on the integration of the HLint[0] tool in the Continuous Integration (CI) pipelines. > > On Jul. 5, 2020 I opened a discussion ticket[1] on the topic of code linting in the several components of the GHC code-base. It has served as a reference anchor for the Merge Requests (MR) that stemmed from it, and allowed us to refine our expectations and processes. If you are not acquainted with its content, I invite you to read the whole conversation. > > Subsequently, several Hadrian lint rules have been integrated in the following months, in order to run HLint on targeted components of the GHC repository (the base library, the compiler code-base, etc). > Being satisfied with the state of the rules we applied to the code-base, such as removing extraneous pragmata and keywords, it was decided to integrate the base library linting rule in the CI. This was five months ago, in September[2], and I am happy to report that developer friction has been so far minimal. > In parallel to this work on the base library, I took care of cleaning-up the compiler, and harmonised the various micro coding styles that have emerged quite organically during the decades of development that are behind us (I never realised how many variations of the same ten lines of pragmata could coexist in the same folders). > Upon feedback from stakeholders of this sub-code base, the rules file was altered to better suit their development needs, such as not removing extraneous `do` keywords, as they are useful to introduce a block in which debug statements can be easily inserted. > > Since today, the linting of the compiler code-base has been integrated in our CI pipelines, without further burdening our CI times. > Things seem to run smoothly, and I welcome comments and requests of any kind related to this area of our code quality process. 
> > Regarding our future plans, there has been a discussion about integrating such a linting mechanism for our C code-base, in the RTS. Nothing is formally established yet, so I would be grateful if people who have experience and wisdom about it can chime in to contribute to the discussion: https://gitlab.haskell.org/ghc/ghc/-/issues/19437. > > And I would like to say that I am overall very thankful for the involvement of the people who have been giving us feedback and have been reviewing the resulting MRs. > > Have a very nice day, > Hécate > > --- > [0]: https://github.com/ndmitchell/hlint > [1]: https://gitlab.haskell.org/ghc/ghc/-/issues/18424 > [2]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4147 > > -- > Hécate ✨ > 🐦: @TechnoEmpress > IRC: Uniaika > WWW: https://glitchbra.in > RUN: BSD > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Thu Mar 25 21:05:08 2021 From: ben at well-typed.com (Ben Gamari) Date: Thu, 25 Mar 2021 17:05:08 -0400 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> Message-ID: <87mturuepq.fsf@smart-cactus.org> Clinton Mead writes: > Another gotcha that I didn't think of. The machines I'm targeting often > have 32 bit versions of Windows, which it looks like isn't supported after > GHC 8.6. > > Does this move it into the too hard basket? Ooph, yeah, this makes matters a bit worse. The reason we ultimately dropped 32-bit Windows support wasn't even a GHC bug; rather, a (rather long-standing at this point) bug in binutils (#17961) which made it impossible to reliably produce binary distributions. My recollection is that this bug only affected builds of the object files used by GHCi. If you don't use GHCi then you can likely disable the production of these objects.
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Mar 26 02:34:05 2021 From: ben at well-typed.com (Ben Gamari) Date: Thu, 25 Mar 2021 22:34:05 -0400 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> Message-ID: <87lfaave1x.fsf@smart-cactus.org> Clinton Mead writes: > Thanks all for your replies. Just going through what Ben has said step by > step: > > My sense is that if you don't need the threaded runtime system it would >> probably be easiest to just try to make a modern GHC run on Windows XP. >> > > Happy to run non-threaded runtime. A good chunk of these machines will be > single or dual core anyway. > That indeed somewhat simplifies things. >> As Tamar suggested, it likely not easy, but also not impossible. WinIO >> is indeed problematic, but thankfully the old MIO IO manager is still >> around (and will be in 9.2). >> > > "Is still around"? As in it's in the code base and just dead code, or can I > trigger GHC to use the old IO manager with a GHC option? > > The possible reasons for Windows XP incompatibility that I can think of >> off the top of my head are: >> >> * Timers (we now use QueryPerformanceCounter) >> > > This page suggests that QueryPerformanceCounter > > should > run on XP. Is this incorrect? > It's supported, but there are caveats [1] that make it unreliable as a timesource. [1] https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000 > >> * Big-PE support, which is very much necessary for profiled builds >> > > I don't really need profiled builds > Alright, then you *probably* won't be affected by PE's symbol limit. 
>> * Long file path support (mostly a build-time consideration as Haskell >> build systems tend to produce very long paths) >> >> > I don't need to build on Windows XP either. I just need to run on Windows > XP so hopefully this won't be an issue. Although if GHC was modified for > long file path support so it could build itself with long file path support > presumably it will affect everything else it builds also. > If you don't need to build on XP then I suspect this won't affect you. > >> There may be others, but I would start looking there. I am happy to >> answer any questions that might arise. >> > I'm guessing the way forward here might be a patch with two options: > > 1. -no-long-path-support/-long-path-support (default -long-path-support) > 2. -winxp > > The winxp option shall: > > - Require -no-long-path-support > - Conflicts with -threaded > - Conflicts with profiled builds > - Uses the old IO manager (I'm not sure if this is an option or how this is > done). > The old IO manager is still the default, although this will likely change in 9.2. > What do you think (roughly speaking)? Yes, that is essentially correct. I would probably start by trying to run a 32-bit GHC build on Windows XP under gdb and see where things fall over. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From clintonmead at gmail.com Fri Mar 26 05:27:58 2021 From: clintonmead at gmail.com (Clinton Mead) Date: Fri, 26 Mar 2021 16:27:58 +1100 Subject: Options for targeting Windows XP? In-Reply-To: <87lfaave1x.fsf@smart-cactus.org> References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: Thanks again for the detailed reply Ben. I guess the other dream of mine is to give GHC a .NET backend. For my problem it would be the ideal solution, but it looks like other attempts in this regard (e.g. 
Eta, GHCJS etc) seem to have difficulty keeping up with updates to GHC. So I'm sure it's not trivial. It would be quite lovely though if I could generate .NET + Java + even Python bytecode from GHC. Whilst not solving my immediate problem, perhaps my efforts are best spent in giving GHC a plugin architecture for backends (or if one already exists?) trying to make a .NET backend. I believe "Csaba Hruska" is working in this space with GRIN, yes? I read SPJ's paper on Implementing Lazy Functional Languages on Stock Hardware: The Spineless Tagless G-machine which implemented STG in C and whilst it wasn't trivial, it didn't seem stupendously complex (even I managed to roughly follow it). I thought to myself also, implementing this in .NET would be even easier because I can hand off garbage collection to the .NET runtime so there's one less thing to worry about. I also, initially, don't care _too_ much about performance. Of course, there's probably a whole bunch of nuance. One actually needs to, for example, represent all the complexities of GADTs in object-oriented classes, maybe converting sum types to inheritance hierarchies with Visitor Patterns. And also you'd actually have to make sure to do one's best to ensure exposed Haskell functions look like something sensible. So I guess, given I have a bit of an interest here, what would be the best approach if I wanted to help GHC develop more backends and move into an architecture where people can add backends without forking GHC? Where could I start helping that effort? Should I contact "Csaba Hruska" and get involved in GRIN? Or is there something that I can start working on in GHC proper?
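The sum-types-to-Visitor-Patterns mapping mentioned above has a compact functional reading: each constructor of a sum type corresponds to one method of a visitor interface, and pattern matching corresponds to visitor dispatch. A minimal Haskell sketch (the `Shape` type and all names here are invented for illustration; this is not code from any GHC backend):

```haskell
-- Illustration only: 'Shape' and 'shapeVisitor' are invented names.
-- A sum type with two constructors:
data Shape
  = Circle Double      -- radius
  | Rect Double Double -- width, height

-- The functional counterpart of a visitor interface: one function
-- argument per constructor. In an OO target this becomes an abstract
-- Visitor class with visitCircle/visitRect methods, and each
-- constructor becomes a subclass whose 'accept' calls the matching
-- method.
shapeVisitor :: (Double -> r) -> (Double -> Double -> r) -> Shape -> r
shapeVisitor onCircle _      (Circle r) = onCircle r
shapeVisitor _        onRect (Rect w h) = onRect w h

-- An ordinary "visitor": compute the area of a shape.
area :: Shape -> Double
area = shapeVisitor (\r -> pi * r * r) (\w h -> w * h)

main :: IO ()
main = print (area (Rect 3 4)) -- prints 12.0
```

In an OO target, `Shape` would become the abstract base class, `Circle` and `Rect` its subclasses, and the two function arguments of `shapeVisitor` the methods of the visitor interface; GADTs complicate the picture because the result type `r` can vary per constructor.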
Considering that I've been playing around with Haskell since 2002, and I'd like to actually get paid to write it at some point in my career, and I have an interest in this area, perhaps this is a good place to start, and actually helping to develop a pluggable backend architecture for GHC may be more useful for more people over the long term than trying to hack up an existing GHC to support 32 bit Windows XP, a battle I suspect will have to be refought every time a new GHC version is released given the current structure of GHC. On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari wrote: > Clinton Mead writes: > > > Thanks all for your replies. Just going through what Ben has said step by > > step: > > > > My sense is that if you don't need the threaded runtime system it would > >> probably be easiest to just try to make a modern GHC run on Windows XP. > >> > > > > Happy to run non-threaded runtime. A good chunk of these machines will be > > single or dual core anyway. > > > That indeed somewhat simplifies things. > > >> As Tamar suggested, it likely not easy, but also not impossible. WinIO > >> is indeed problematic, but thankfully the old MIO IO manager is still > >> around (and will be in 9.2). > >> > > > > "Is still around"? As in it's in the code base and just dead code, or > can I > > trigger GHC to use the old IO manager with a GHC option? > > > > The possible reasons for Windows XP incompatibility that I can think of > >> off the top of my head are: > >> > >> * Timers (we now use QueryPerformanceCounter) > >> > > > > This page suggests that QueryPerformanceCounter > > < > https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter > > > > should > > run on XP. Is this incorrect? > > > It's supported, but there are caveats [1] that make it unreliable as a > timesource. 
> > [1] > https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000 > > > >> * Big-PE support, which is very much necessary for profiled builds > >> > > > > I don't really need profiled builds > > > > Alright, then you *probably* won't be affected by PE's symbol limit. > > >> * Long file path support (mostly a build-time consideration as Haskell > >> build systems tend to produce very long paths) > >> > >> > > I don't need to build on Windows XP either. I just need to run on Windows > > XP so hopefully this won't be an issue. Although if GHC was modified for > > long file path support so it could build itself with long file path > support > > presumably it will affect everything else it builds also. > > > If you don't need to build on XP then I suspect this won't affect you. > > > > >> There may be others, but I would start looking there. I am happy to > >> answer any questions that might arise. > >> > > I'm guessing the way forward here might be a patch with two options: > > > > 1. -no-long-path-support/-long-path-support (default -long-path-support) > > 2. -winxp > > > > The winxp option shall: > > > > - Require -no-long-path-support > > - Conflicts with -threaded > > - Conflicts with profiled builds > > - Uses the old IO manager (I'm not sure if this is an option or how this > is > > done). > > > The old IO manager is still the default, although this will likely > change in 9.2. > > > What do you think (roughly speaking)? > > Yes, that is essentially correct. I would probably start by trying to > run a 32-bit GHC build on Windows XP under gdb and see where > things fall over. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Fri Mar 26 07:59:43 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Fri, 26 Mar 2021 15:59:43 +0800 Subject: Options for targeting Windows XP? 
In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: I believe there is a bit of misconception about what requires a new backend or not. GHC contains a number of different intermediate representations from which one can take off to build backends. The STG and Cmm ones are the most popular. All our Native Code Generators and the LLVM code gen take off from the Cmm one. Whether or not that is the correct input representation for your target largely depends on the target and the design of the code generator. GHCJS takes off from STG, and so does Csaba's GRIN work via the external STG, I believe. IIRC Asterius takes off from Cmm. I don't remember the details about Eta. Why fork? Do you want to deal with GHC, and GHC's development? If not, fork. Do you want to have to keep up with GHC's development? Maybe not fork. Do you think your compiler can stand on its own and doesn't follow GHC much, except for being a Haskell compiler? By all means fork. Eta is a bit special here: Eta forked off and basically started customising their Haskell compiler specifically to the JVM, and this also allowed them to make radical changes to GHC, which would not have been permissible in mainline GHC. (Mainline GHC tries to support multiple platforms and architectures at all times; breaking any of them isn't really an option that can be taken lightly.) Eta also started having Etlas, a custom Cabal, ... I'd still like to see a lot from Eta and its ecosystem be re-integrated into GHC. There have to be good ideas there that can be brought back. It just needs someone to go look and do the work. GHCJS is being aligned more with GHC right now precisely so that it can eventually be re-integrated with GHC. Asterius went down the same path, likely inspired by GHCJS, but I think I was able to convince the author that eventual upstreaming should be the goal and the project should try to stay as close as possible to GHC for that reason.
Now if you consider adding a codegen backend, this can be done, but again it depends on your exact target. I'd love to see a CLR target, yet I don't know enough about the CLR to give informed suggestions here. If you have a toolchain that functions sufficiently similarly to a stock C toolchain (or you can easily make your toolchain look sufficiently similar to one), most of it will just work. If you can separate your build into compilation of source to some form of object code, aggregation of object code (archives), and some form of linking (objects and archives into shared objects or executables), you can likely plug your toolchain into GHC (and Cabal) and have it work, once you have taught GHC how to produce your target language's object code. If your toolchain does stuff differently, a bit more work is involved in teaching GHC (and Cabal) about that. This all only gives you *Haskell*, though. You still need the Runtime System. If you have a C -> Target compiler, you can try to re-use GHC's RTS. This is what the WebGHC project did. They re-used GHC's RTS and implemented a shim for Linux syscalls, so that they can emulate enough to have the RTS think it's running on some musl-like Linux. You most likely want something proper here eventually, but this might be a first stab at getting something working. Next you'll have to deal with c-bits: Haskell packages that link against C parts. This is going to be challenging, not impossible but challenging, as much of the Haskell ecosystem expects the ability to compile C files and use those for low-level system interaction. You can use hackage overlays to build a set of patched packages once you have your codegen working. At that point you could start patching ecosystem packages to work on your target, until your changes are upstreamed, and provide your users with a hackage overlay (essentially hackage + patches for specific packages). Hope this helps.
You'll find most of us on irc.freenode.net#ghc On Fri, Mar 26, 2021 at 1:29 PM Clinton Mead wrote: > Thanks again for the detailed reply Ben. > > I guess the other dream of mine is to give GHC a .NET backend. For my > problem it would be the ideal solution, but it looks like other attempts in > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with > updates to GHC. So I'm sure it's not trivial. > > It would be quite lovely though if I could generate .NET + Java + even > Python bytecode from GHC. > > Whilst not solving my immediate problem, perhaps my efforts are best spent > in giving GHC a plugin architecture for backends (or if one already > exists?) trying to make a .NET backend. > > I believe "Csaba Hruska" is working in this space with GRIN, yes? > > I read SPJs paper on Implementing Lazy Functional Languages on Stock > Hardware: The Spineless Tagless G-machine > which > implemented STG in C and whilst it wasn't trivial, it didn't seem > stupendously complex (even I managed to roughly follow it). I thought to > myself also, implementing this in .NET would be even easier because I can > hand off garbage collection to the .NET runtime so there's one less thing > to worry about. I also, initially, don't care _too_ much about performance. > > Of course, there's probably a whole bunch of nuance. One actually needs > to, for example, represent all the complexities of GADTs into object > orientated classes, maybe converting sum types to inheritance hierarchies > with Visitor Patterns. And also you'd actually have to make sure to do > one's best to ensure exposed Haskell functions look like something > sensible. > > So I guess, given I have a bit of an interest here, what would be the best > approach if I wanted to help GHC develop more backends and into an > architecture where people can add backends without forking GHC? Where could > I start helping that effort? Should I contact "Csaba Hruska" and get > involved in GRIN? 
Or is there something that I can start working on in GHC > proper? > > Considering that I've been playing around with Haskell since 2002, and I'd > like to actually get paid to write it at some point in my career, and I > have an interest in this area, perhaps this is a good place to start, and > actually helping to develop a pluggable backend architecture for GHC may be > more useful for more people over the long term than trying to hack up an > existing GHC to support 32 bit Windows XP, a battle I suspect will have to > be refought every time a new GHC version is released given the current > structure of GHC. > > On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari wrote: > >> Clinton Mead writes: >> >> > Thanks all for your replies. Just going through what Ben has said step >> by >> > step: >> > >> > My sense is that if you don't need the threaded runtime system it would >> >> probably be easiest to just try to make a modern GHC run on Windows XP. >> >> >> > >> > Happy to run non-threaded runtime. A good chunk of these machines will >> be >> > single or dual core anyway. >> > >> That indeed somewhat simplifies things. >> >> >> As Tamar suggested, it likely not easy, but also not impossible. WinIO >> >> is indeed problematic, but thankfully the old MIO IO manager is still >> >> around (and will be in 9.2). >> >> >> > >> > "Is still around"? As in it's in the code base and just dead code, or >> can I >> > trigger GHC to use the old IO manager with a GHC option? >> > >> > The possible reasons for Windows XP incompatibility that I can think of >> >> off the top of my head are: >> >> >> >> * Timers (we now use QueryPerformanceCounter) >> >> >> > >> > This page suggests that QueryPerformanceCounter >> > < >> https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter >> > >> > should >> > run on XP. Is this incorrect? >> > >> It's supported, but there are caveats [1] that make it unreliable as a >> timesource. 
>> >> [1] >> https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000 >> > >> >> * Big-PE support, which is very much necessary for profiled builds >> >> >> > >> > I don't really need profiled builds >> > >> >> Alright, then you *probably* won't be affected by PE's symbol limit. >> >> >> * Long file path support (mostly a build-time consideration as Haskell >> >> build systems tend to produce very long paths) >> >> >> >> >> > I don't need to build on Windows XP either. I just need to run on >> Windows >> > XP so hopefully this won't be an issue. Although if GHC was modified for >> > long file path support so it could build itself with long file path >> support >> > presumably it will affect everything else it builds also. >> > >> If you don't need to build on XP then I suspect this won't affect you. >> >> > >> >> There may be others, but I would start looking there. I am happy to >> >> answer any questions that might arise. >> >> >> > I'm guessing the way forward here might be a patch with two options: >> > >> > 1. -no-long-path-support/-long-path-support (default -long-path-support) >> > 2. -winxp >> > >> > The winxp option shall: >> > >> > - Require -no-long-path-support >> > - Conflicts with -threaded >> > - Conflicts with profiled builds >> > - Uses the old IO manager (I'm not sure if this is an option or how >> this is >> > done). >> > >> The old IO manager is still the default, although this will likely >> change in 9.2. >> >> > What do you think (roughly speaking)? >> >> Yes, that is essentially correct. I would probably start by trying to >> run a 32-bit GHC build on Windows XP under gdb and see where >> things fall over. >> >> Cheers, >> >> - Ben >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Fri Mar 26 09:41:35 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 26 Mar 2021 09:41:35 +0000 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: This link gives some (old) background https://wiki.haskell.org/GHC/FAQ#Why_isn.27t_GHC_available_for_.NET_or_on_the_JVM.3F Simon From: ghc-devs On Behalf Of Moritz Angermann Sent: 26 March 2021 08:00 To: Clinton Mead Cc: ghc-devs Subject: Re: Options for targeting Windows XP? I believe there is a bit of misconception about what requires a new backend or not. GHC is a bunch of different intermediate representations from which one can take off to build backends. The STG, or Cmm ones are the most popular. All our Native Code Generators and the LLVM code gen take off from the Cmm one. Whether or not that is the correct input representation for your target largely depends on the target and design of the codegenerator. GHCJS takes off from STG, and so does Csaba's GRIN work via the external STG I believe. IIRC Asterius takes off from Cmm. I don't remember the details about Eta. Why fork? Do you want to deal with GHC, and GHC's development? If not, fork. Do you want to have to keep up with GHC's development? Maybe not fork. Do you think your compiler can stand on it's own and doesn't follow GHC much, except for being a haskell compiler? By all means fork. Eta is a bit special here, Eta forked off, and basically started customising their Haskell compiler specifically to the JVM, and this also allowed them to make radical changes to GHC, which would not have been permissible in the mainline GHC. (Mainline GHC tries to support multiple platforms and architectures at all times, breaking any of them isn't really an option that can be taken lightheartedly.) Eta also started having Etlas, a custom Cabal, ... 
I'd still like to see a lot from Eta and the ecosystem be re-integrated into GHC. There have to be good ideas there that can be brought back. It just needs someone to go look and do the work. GHCJS is being aligned more with GHC right now precisely to eventually re-integrate it with GHC. Asterius went down the same path, likely inspired by GHCJS, but I think I was able to convince the author that eventual upstreaming should be the goal and the project should try to stay as close as possible to GHC for that reason. Now if you consider adding a codegen backend, this can be done, but again depends on your exact target. I'd love to see a CLR target, yet I don't know enough about CLR to give informed suggestions here. If you have a toolchain that functions sufficiently similar to a stock c toolchain, (or you can make your toolchain look sufficiently similar to one, easily), most of it will just work. If you can separate your building into compilation of source to some form of object code, and some form of object code aggregates (archives), and some form of linking (objects and archives into shared objects, or executables), you can likely plug in your toolchain into GHC (and Cabal), and have it work, once you taught GHC how to produce your target languages object code. If your toolchain does stuff differently, a bit more work is involved in teaching GHC (and Cabal) about that. This all only gives you *haskell* though. You still need the Runtime System. If you have a C -> Target compiler, you can try to re-use GHC's RTS. This is what the WebGHC project did. They re-used GHC's RTS, and implemented a shim for linux syscalls, so that they can emulate enough to have the RTS think it's running on some musl like linux. You most likely want something proper here eventually; but this might be a first stab at it to get something working. Next you'll have to deal with c-bits. Haskell Packages that link against C parts. 
This is going to be challenging, not impossible but challenging as much of the haskell ecosystem expects the ability to compile C files and use those for low level system interaction. You can use hackage overlays to build a set of patched packages, once you have your codegen working. At that point you could start patching ecosystem packages to work on your target, until your changes are upstreamed, and provide your user with a hackage overlay (essentially hackage + patches for specific packages). Hope this helps. You'll find most of us on irc.freenode.net#ghc On Fri, Mar 26, 2021 at 1:29 PM Clinton Mead > wrote: Thanks again for the detailed reply Ben. I guess the other dream of mine is to give GHC a .NET backend. For my problem it would be the ideal solution, but it looks like other attempts in this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with updates to GHC. So I'm sure it's not trivial. It would be quite lovely though if I could generate .NET + Java + even Python bytecode from GHC. Whilst not solving my immediate problem, perhaps my efforts are best spent in giving GHC a plugin architecture for backends (or if one already exists?) trying to make a .NET backend. I believe "Csaba Hruska" is working in this space with GRIN, yes? I read SPJs paper on Implementing Lazy Functional Languages on Stock Hardware: The Spineless Tagless G-machine which implemented STG in C and whilst it wasn't trivial, it didn't seem stupendously complex (even I managed to roughly follow it). I thought to myself also, implementing this in .NET would be even easier because I can hand off garbage collection to the .NET runtime so there's one less thing to worry about. I also, initially, don't care _too_ much about performance. Of course, there's probably a whole bunch of nuance. One actually needs to, for example, represent all the complexities of GADTs into object orientated classes, maybe converting sum types to inheritance hierarchies with Visitor Patterns. 
And also you'd actually have to make sure to do one's best to ensure exposed Haskell functions look like something sensible. So I guess, given I have a bit of an interest here, what would be the best approach if I wanted to help GHC develop more backends and into an architecture where people can add backends without forking GHC? Where could I start helping that effort? Should I contact "Csaba Hruska" and get involved in GRIN? Or is there something that I can start working on in GHC proper? Considering that I've been playing around with Haskell since 2002, and I'd like to actually get paid to write it at some point in my career, and I have an interest in this area, perhaps this is a good place to start, and actually helping to develop a pluggable backend architecture for GHC may be more useful for more people over the long term than trying to hack up an existing GHC to support 32 bit Windows XP, a battle I suspect will have to be refought every time a new GHC version is released given the current structure of GHC. On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari > wrote: Clinton Mead > writes: > Thanks all for your replies. Just going through what Ben has said step by > step: > > My sense is that if you don't need the threaded runtime system it would >> probably be easiest to just try to make a modern GHC run on Windows XP. >> > > Happy to run non-threaded runtime. A good chunk of these machines will be > single or dual core anyway. > That indeed somewhat simplifies things. >> As Tamar suggested, it likely not easy, but also not impossible. WinIO >> is indeed problematic, but thankfully the old MIO IO manager is still >> around (and will be in 9.2). >> > > "Is still around"? As in it's in the code base and just dead code, or can I > trigger GHC to use the old IO manager with a GHC option? 
> > The possible reasons for Windows XP incompatibility that I can think of >> off the top of my head are: >> >> * Timers (we now use QueryPerformanceCounter) >> > > This page suggests that QueryPerformanceCounter > > > should > run on XP. Is this incorrect? > It's supported, but there are caveats [1] that make it unreliable as a timesource. [1] https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000 > >> * Big-PE support, which is very much necessary for profiled builds >> > > I don't really need profiled builds > Alright, then you *probably* won't be affected by PE's symbol limit. >> * Long file path support (mostly a build-time consideration as Haskell >> build systems tend to produce very long paths) >> >> > I don't need to build on Windows XP either. I just need to run on Windows > XP so hopefully this won't be an issue. Although if GHC was modified for > long file path support so it could build itself with long file path support > presumably it will affect everything else it builds also. > If you don't need to build on XP then I suspect this won't affect you. > >> There may be others, but I would start looking there. I am happy to >> answer any questions that might arise. >> > I'm guessing the way forward here might be a patch with two options: > > 1. -no-long-path-support/-long-path-support (default -long-path-support) > 2. -winxp > > The winxp option shall: > > - Require -no-long-path-support > - Conflicts with -threaded > - Conflicts with profiled builds > - Uses the old IO manager (I'm not sure if this is an option or how this is > done). > The old IO manager is still the default, although this will likely change in 9.2. > What do you think (roughly speaking)? Yes, that is essentially correct. I would probably start by trying to run a 32-bit GHC build on Windows XP under gdb and see where things fall over. 
Cheers, - Ben _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From hecate at glitchbra.in Fri Mar 26 15:56:14 2021 From: hecate at glitchbra.in (Hécate) Date: Fri, 26 Mar 2021 16:56:14 +0100 Subject: HLint in the GHC CI, an eight-months retrospective In-Reply-To: <010f01786b1cd03c-2d3134dc-a0eb-4081-babf-cf04a92e888c-000000@us-east-2.amazonses.com> References: <010f01786b1cd03c-2d3134dc-a0eb-4081-babf-cf04a92e888c-000000@us-east-2.amazonses.com> Message-ID: <1786f3416e0.278a.00f17733d2c9eabc5e044ce8e37597ac@glitchbra.in> Hi Richard, I am sorry, I have indeed forgotten one of the most important parts of my email. :) The Hadrian rules are:

    lint:base
    lint:compiler

You can invoke them as simply as:

    ./hadrian/build lint:base

You need to have a recent version of HLint in your PATH. If you use ghc.nix, this should be taken care of for you. Hope that clarifies things! Cheers, Hécate On 25 March 2021 at 21:39:15, Richard Eisenberg wrote: > Thanks for this update! Glad to know this effort is going well. > > One quick question: suppose I am editing something in `base`. My > understanding is that my edit will be linted. How can I run hlint locally > so that I can easily respond to trouble before CI takes a crack? And where > would I learn this information (that is, how to run hlint locally)? > > Thanks! > Richard > >> On Mar 25, 2021, at 11:19 AM, Hécate wrote: >> >> Hello fellow devs, >> >> this email is an activity report on the integration of the HLint[0] tool in >> the Continuous Integration (CI) pipelines. >> >> On Jul. 5, 2020 I opened a discussion ticket[1] on the topic of code >> linting in the several components of the GHC code-base. It has served as a
It has served as a >> reference anchor for the Merge Requests (MR) that stemmed from it, and >> allowed us to refine our expectations and processes. If you are not >> acquainted with its content, I invite you to read the whole conversation. >> >> Subsequently, several Hadrian lint rules have been integrated in the >> following months, in order to run HLint on targeted components of the GHC >> repository (the base library, the compiler code-base, etc). >> Being satisfied with the state of the rules we applied to the code-base, >> such as removing extraneous pragmata and keywords, it was decided to >> integrate the base library linting rule in the CI. This was five months >> ago, in September[2], and I am happy to report that developer friction has >> been so far minimal. >> In parallel to this work on the base library, I took care of cleaning-up >> the compiler, and harmonised the various micro coding styles that have >> emerged quite organically during the decades of development that are behind >> us (I never realised how many variations of the same ten lines of pragmata >> could coexist in the same folders). >> Upon feedback from stakeholders of this sub-code base, the rules file was >> altered to better suit their development needs, such as not removing >> extraneous `do` keywords, as they are useful to introduce a block in which >> debug statements can be easily inserted. >> >> Since today, the linting of the compiler code-base has been integrated in >> our CI pipelines, without further burdening our CI times. >> Things seem to run smoothly, and I welcome comments and requests of any >> kind related to this area of our code quality process. >> >> Regarding our future plans, there has been a discussion about integrating >> such a linting mechanism for our C code-base, in the RTS. 
Nothing is >> formally established yet, so I would be grateful if people who have >> experience and wisdom about it can chime in to contribute to the >> discussion: https://gitlab.haskell.org/ghc/ghc/-/issues/19437. >> And I would like to say that I am overall very thankful for the involvement >> of the people who have been giving us feedback and have been reviewing the >> resulting MRs. >> Have a very nice day, >> Hécate >> --- >> [0]: https://github.com/ndmitchell/hlint >> [1]: https://gitlab.haskell.org/ghc/ghc/-/issues/18424 >> [2]: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4147 >> -- >> Hécate ✨ >> 🐦: @TechnoEmpress >> IRC: Uniaika >> WWW: https://glitchbra.in >> RUN: BSD >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Mar 26 17:37:51 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 26 Mar 2021 17:37:51 +0000 Subject: config.sub Message-ID: Folks I'm getting a lot of this simonpj at MSRC-3645512:~/code/HEAD-3$ git status On branch wip/T19569 Your branch is up to date with 'origin/wip/T19569'. Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git restore <file>..." to discard changes in working directory) (commit or discard the untracked or modified content in submodules) modified: libraries/unix (modified content) Untracked files: (use "git add <file>..." to include in what will be committed) libraries/base/config.sub libraries/ghc-bignum/config.sub What has changed in unix? Answer: simonpj at MSRC-3645512:~/code/HEAD-3$ cd libraries/unix simonpj at MSRC-3645512:~/code/HEAD-3/libraries/unix$ git status HEAD detached at 21437f2 Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git restore <file>..."
to discard changes in working directory) modified: config.sub Ugh. Why is config.sub modified if it's a repo file? And should I ignore the untracked base/config.sub and ghc-bignum? What am I doing wrong? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.jakobi at googlemail.com Fri Mar 26 17:42:26 2021 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Fri, 26 Mar 2021 18:42:26 +0100 Subject: config.sub In-Reply-To: References: Message-ID: Hi Simon, I haven't experienced this issue myself, but I've seen a merge request intended to address it: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5372 Cheers, Simon On Fri, 26 Mar 2021 at 18:38, Simon Peyton Jones via ghc-devs wrote: > > Folks > > I'm getting a lot of this > > simonpj at MSRC-3645512:~/code/HEAD-3$ git status > > On branch wip/T19569 > > Your branch is up to date with 'origin/wip/T19569'. > > > > Changes not staged for commit: > > (use "git add <file>..." to update what will be committed) > > (use "git restore <file>..." to discard changes in working directory) > > (commit or discard the untracked or modified content in submodules) > > modified: libraries/unix (modified content) > > > > Untracked files: > > (use "git add <file>..." to include in what will be committed) > > libraries/base/config.sub > > libraries/ghc-bignum/config.sub > > > > What has changed in unix? Answer: > > simonpj at MSRC-3645512:~/code/HEAD-3$ cd libraries/unix > > simonpj at MSRC-3645512:~/code/HEAD-3/libraries/unix$ git status > > HEAD detached at 21437f2 > > Changes not staged for commit: > > (use "git add <file>..." to update what will be committed) > > (use "git restore <file>..." to discard changes in working directory) > > modified: config.sub > > Ugh. Why is config.sub modified if it's a repo file? And should I ignore the untracked base/config.sub and ghc-bignum? > > What am I doing wrong?
> > Thanks > > Simon > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lexi.lambda at gmail.com Sat Mar 27 00:41:09 2021 From: lexi.lambda at gmail.com (Alexis King) Date: Fri, 26 Mar 2021 19:41:09 -0500 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> Message-ID: <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> I appreciate your point Sebastian, but I think in this particular case, type applications in patterns are still not enough to satisfy me. I provided the empty argument list example because it was simple, but I’d also like this to typecheck: baz :: Int -> String -> Widget baz = .... bar = foo (\(a `HCons` b `HCons` HNil) -> baz a b) I don’t want to have to annotate the types of a and b because they’re both eminently inferrable. I’d like to get type inference properties comparable to an ordinary two-argument lambda expression, since that’s how I’m trying to use this, after all. Really what I’m complaining about here is type inference of GADT patterns. For comparison, suppose I had these definitions: class Blah a where foo :: (a -> Widget) -> Whatsit instance Blah (Int, String) where foo = .... baz :: Int -> String -> Widget baz = .... bar = foo (\(a, b) -> baz a b) This compiles without any issues. The pattern (a, b) is inferred to have type (t1, t2), where both t1 and t2 are metavariables. These are unified to particular types in the body, and everything is fine. But type inference for GADT patterns works differently. In fact, even this simple definition fails to compile: bar = \(a `HCons` HNil) -> not a GHC rejects it with the following error: error: • Could not deduce: a ~ Bool from the context: as ~ (a : as1) This seems to arise from GHC’s strong reluctance to pick any particular type for a match on a GADT constructor. 
One way to explain this is as a sort of “open world assumption” as it applies to case expressions: we always /could/ add more cases, and if we did, specializing the type based on the existing cases might be premature. Furthermore, non-exhaustive pattern-matching is not a type error in Haskell, only a warning, so perhaps we /wanted/ to write a non-exhaustive function on an arbitrary HList. Of course, I think that’s somewhat silly. If there’s a single principal type that makes my function well-typed /and exhaustive/, I’d really like GHC to pick it. Alexis On 3/22/21 1:28 PM, Sebastian Graf wrote: > Cale made me aware of the fact that the "Type applications in > patterns" proposal had already been implemented. > See https://gitlab.haskell.org/ghc/ghc/-/issues/19577 where I adapt > Alexis' use case into a test case that I'd like to see compiling. > > Am Sa., 20. März 2021 um 15:45 Uhr schrieb Sebastian Graf > >: > > Hi Alexis, > > The following works and will have inferred type `Int`: > > > bar = foo (\(HNil :: HList '[]) -> 42) > > I'd really like it if we could write > > > bar2 = foo (\(HNil @'[]) -> 42) > > though, even if you write out the constructor type with explicit > constraints and forall's. > E.g. by using a -XTypeApplications here, I specify the universal > type var of the type constructor `HList`. I think that is a > semantics that is in line with Type Variables in Patterns, Section > 4 : The only way > to satisfy the `as ~ '[]` constraint in the HNil pattern is to > refine the type of the pattern match to `HList '[]`. Consequently, > the local `Blah '[]` can be discharged and bar2 will have inferred > `Int`. > > But that's simply not implemented at the moment, I think. I recall > there's some work that has to happen before. The corresponding > proposal seems to be > https://ghc-proposals.readthedocs.io/en/latest/proposals/0126-type-applications-in-patterns.html > (or https://github.com/ghc-proposals/ghc-proposals/pull/238? 
I'm > confused) and your example should probably be added there as > motivation. > > If `'[]` is never mentioned anywhere in the pattern like in the > original example, I wouldn't expect it to type-check (or at least > emit a pattern-match warning): First off, the type is ambiguous. > It's a similar situation as in > https://stackoverflow.com/questions/50159349/type-abstraction-in-ghc-haskell. > If it was accepted and got type `Blah as => Int`, then you'd get a > pattern-match warning, because depending on how `as` is > instantiated, your pattern-match is incomplete. E.g., `bar3 > @[Int]` would crash. > > Complete example code: > > {-# LANGUAGE DataKinds #-} > {-# LANGUAGE TypeOperators #-} > {-# LANGUAGE GADTs #-} > {-# LANGUAGE LambdaCase #-} > {-# LANGUAGE TypeApplications #-} > {-# LANGUAGE ScopedTypeVariables #-} > {-# LANGUAGE RankNTypes #-} > > module Lib where > > data HList as where >   HNil  :: forall as. (as ~ '[]) => HList as >   HCons :: forall as a as'. (as ~ (a ': as')) => a -> HList as' -> > HList as > > class Blah as where >   blah :: HList as > > instance Blah '[] where >   blah = HNil > > foo :: Blah as => (HList as -> Int) -> Int > foo f = f blah > > bar = foo (\(HNil :: HList '[]) -> 42) -- compiles > bar2 = foo (\(HNil @'[]) -> 42) -- errors > > Cheers, > Sebastian > > Am Sa., 20. März 2021 um 13:57 Uhr schrieb Viktor Dukhovni > >: > > On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote: > > > As soon as I try add more complex contraints, I appear to > need an > > explicit type signature for HNil, and then the code again > compiles: > > But aliasing the promoted constructors via pattern synonyms, > and using > those instead, appears to resolve the ambiguity. > > -- >     Viktor. 
> > {-# LANGUAGE >     DataKinds >   , GADTs >   , PatternSynonyms >   , PolyKinds >   , ScopedTypeVariables >   , TypeFamilies >   , TypeOperators >   #-} > > import GHC.Types > > infixr 1 `HC` > > data HList as where >   HNil  :: HList '[] >   HCons :: a -> HList as -> HList (a ': as) > > pattern HN :: HList '[]; > pattern HN = HNil > pattern HC :: a -> HList as -> HList (a ': as) > pattern HC a as = HCons a as > > class Nogo a where > > type family   Blah (as :: [Type]) :: Constraint > type instance Blah '[]        = () > type instance Blah (_ ': '[]) = () > type instance Blah (_ ': _ ': '[]) = () > type instance Blah (_ ': _ ': _ ': _) = (Nogo ()) > > foo :: (Blah as) => (HList as -> Int) -> Int > foo _ = 42 > > bar :: Int > bar = foo (\ HN -> 1) > > baz :: Int > baz = foo (\ (True `HC` HN) -> 2) > > pattern One :: Int > pattern One = 1 > bam :: Int > bam = foo (\ (True `HC` One `HC` HN) -> 2) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Sat Mar 27 05:24:45 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Sat, 27 Mar 2021 01:24:45 -0400 Subject: Type inference of singular matches on GADTs In-Reply-To: <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> Message-ID: On Fri, Mar 26, 2021 at 07:41:09PM -0500, Alexis King wrote: > type applications in patterns are still not enough to satisfy me. I > provided the empty argument list example because it was simple, but I’d > also like this to typecheck: > > baz :: Int -> String -> Widget > baz = .... > > bar = foo (\(a `HCons` b `HCons` HNil) -> baz a b) > Can you be a bit more specific on how the constraint `Blah` is presently defined, and how `foo` uses the HList type to execute a function of the appropriate arity and signature? 
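For background on the question: a common way for an `HList` of arguments to drive a function of matching arity is a fold over the list guided by a type family that computes the function's shape. The names `Fun` and `hApply` below are invented for this generic sketch; they are not necessarily how Alexis's `foo` is written:

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

data HList as where
  HNil  :: HList '[]
  HCons :: a -> HList as -> HList (a ': as)
infixr 5 `HCons`

-- Fun '[Int, String] r reduces to Int -> String -> r.
type family Fun as r where
  Fun '[]       r = r
  Fun (a ': as) r = a -> Fun as r

-- Feed the arguments collected in the HList to an n-ary function.
hApply :: Fun as r -> HList as -> r
hApply f HNil         = f
hApply f (HCons a as) = hApply (f a) as
```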
The example below my signature typechecks, provided I use pattern synonyms for the GADT constructors, rather than use the constructors directly. -- Viktor. {-# language DataKinds , FlexibleInstances , GADTs , PatternSynonyms , ScopedTypeVariables , TypeApplications , TypeFamilies , TypeOperators #-} import GHC.Types import Data.Proxy import Type.Reflection import Data.Type.Equality data HList as where HNil_ :: HList '[] HCons_ :: a -> HList as -> HList (a ': as) infixr 5 `HCons_` pattern HNil :: HList '[]; pattern HNil = HNil_ pattern (:^) :: a -> HList as -> HList (a ': as) pattern (:^) a as = HCons_ a as pattern (:$) a b = a :^ b :^ HNil infixr 5 :^ infixr 5 :$ class Typeable as => Blah as where params :: HList as instance Blah '[Int,String] where params = 39 :$ "abc" baz :: Int -> String -> Int baz i s = i + length s bar = foo (\(a :$ b) -> baz a b) foo :: Blah as => (HList as -> Int) -> Int foo f = f params From amindfv at mailbox.org Sat Mar 27 20:11:31 2021 From: amindfv at mailbox.org (amindfv at mailbox.org) Date: Sat, 27 Mar 2021 14:11:31 -0600 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: <20210327201131.GB21057@painter.painter> On Fri, Mar 26, 2021 at 04:27:58PM +1100, Clinton Mead wrote: > I guess the other dream of mine is to give GHC a .NET backend. For my > problem it would be the ideal solution, but it looks like other attempts in > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with > updates to GHC. So I'm sure it's not trivial. Worth mentioning if you haven't come across it: F# is sorta-kinda a bit like Haskell and .NET support is first-class. Tom From athas at sigkill.dk Sun Mar 28 13:43:59 2021 From: athas at sigkill.dk (Troels Henriksen) Date: Sun, 28 Mar 2021 15:43:59 +0200 Subject: Pattern matching desugaring regression? Re: [Haskell-cafe] Why does my module take so long to compile? 
In-Reply-To: <87mtw5406p.fsf@sigkill.dk> (Troels Henriksen's message of "Mon, 15 Feb 2021 20:10:06 +0100") References: <87y2fp5pn7.fsf@sigkill.dk> <57fb4de7-d4c6-f08c-d226-18d1572d26b@henning-thielemann.de> <87pn115kmo.fsf@sigkill.dk> <87zh05458o.fsf@sigkill.dk> <87tuqd439x.fsf@sigkill.dk> <87mtw5406p.fsf@sigkill.dk> Message-ID: <87wntr9yw0.fsf@sigkill.dk> Troels Henriksen writes: > It is very likely that issue 17386 is the issue. With > > {-# OPTIONS_GHC -Wno-overlapping-patterns -Wno-incomplete-patterns > -Wno-incomplete-uni-patterns -Wno-incomplete-record-updates #-} > > my module(s) compile very quickly. I'll wait and see if GHC 9 does > better before I try to create a smaller case (and now I at least have a > workaround). I have now tried it with GHC 9, and unfortunately it is still very slow. As time permits, I will see if I can come up with a self-contained module that illustrates the slowdown. I do have an idea for an optimisation: In all of the cases where coverage tracking takes a long time, I have a catch-all case at the bottom. I think that is a fairly common pattern, where a program tries to detect various special cases, before falling back to a general case. Perhaps coverage checking should have a short-circuiting check for whether there is an obvious catch-all case, and if so, not bother looking any closer? -- \ Troels /\ Henriksen From sgraf1337 at gmail.com Sun Mar 28 16:35:07 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sun, 28 Mar 2021 18:35:07 +0200 Subject: Pattern matching desugaring regression? Re: [Haskell-cafe] Why does my module take so long to compile? In-Reply-To: <87wntr9yw0.fsf@sigkill.dk> References: <87y2fp5pn7.fsf@sigkill.dk> <57fb4de7-d4c6-f08c-d226-18d1572d26b@henning-thielemann.de> <87pn115kmo.fsf@sigkill.dk> <87zh05458o.fsf@sigkill.dk> <87tuqd439x.fsf@sigkill.dk> <87mtw5406p.fsf@sigkill.dk> Message-ID: Hi Troels, Sorry to hear GHC 9 didn't fix your problems! Yes, please open an issue.
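For concreteness, the shape being described (a run of special cases followed by a final catch-all) can be reproduced in miniature; this is an invented toy, and the real modules that trigger the slowdown are far larger:

```haskell
-- Special cases first, a general fallback last.  The coverage checker
-- must show that each earlier equation is non-redundant and that the
-- final catch-all is reachable; at scale, that is where the time goes.
simplify :: (Int, Int) -> Int
simplify (0, y) = y        -- special case: 0 + y
simplify (x, 0) = x        -- special case: x + 0
simplify (x, y) = x + y    -- general fallback
```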
Optimising for specific usage patterns might be feasible, although note that most often it's not the exhaustivity check that is causing problems, but the check for overlapping patterns. At the moment the checker doesn't take shortcuts if we have -Wincomplete-patterns, but -Wno-overlapping-patterns. Maybe it could? Let's see what is causing you problems and decide then. Cheers, Sebastian On Sun, 28 Mar 2021 at 15:44, Troels Henriksen < athas at sigkill.dk> wrote: > Troels Henriksen writes: > > > It is very likely that issue 17386 is the issue. With > > > > {-# OPTIONS_GHC -Wno-overlapping-patterns -Wno-incomplete-patterns > > -Wno-incomplete-uni-patterns -Wno-incomplete-record-updates #-} > > > > my module(s) compile very quickly. I'll wait and see if GHC 9 does > > better before I try to create a smaller case (and now I at least have a > > workaround). > > I have now tried it with GHC 9, and unfortunately it is still very slow. > > As time permits, I will see if I can come up with a self-contained > module that illustrates the slowdown. > > I do have an idea for an optimisation: In all of the cases where coverage > tracking takes a long time, I have a catch-all case at the bottom. I > think that is a fairly common pattern, where a program tries to detect > various special cases, before falling back to a general case. Perhaps > coverage checking should have a short-circuiting check for whether there > is an obvious catch-all case, and if so, not bother looking any closer? > > -- > \ Troels > /\ Henriksen > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sgraf1337 at gmail.com Sun Mar 28 16:50:49 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sun, 28 Mar 2021 18:50:49 +0200 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> Message-ID: Hi Alexis, If you really want to get by without type annotations, then Viktor's pattern synonym suggestion really is your best option! Note that

    pattern HNil :: HList '[]
    pattern HNil = HNil_

does not actually declare an HNil that is completely synonymous with HNil_, but it changes the *provided* GADT constraint (as ~ '[]) into a *required* constraint (as ~ '[]). "Provided" as in "a pattern match on the synonym provides this constraint as a new Given", "required" as in "... requires this constraint as a new Wanted". (I hope I used the terminology correctly here.) Thus, a pattern ((a :: Int) `HCons` HNil) really has type (HList '[Int]) and is exhaustive. See also https://gitlab.haskell.org/ghc/ghc/-/wikis/pattern-synonyms#static-semantics . At the moment, I don't think it's possible to declare a GADT constructor with required constraints, so a pattern synonym seems like your best bet and fits your use case exactly. You can put each of these pattern synonyms into a singleton COMPLETE pragma. Hope that helps, Sebastian On Sat, 27 Mar 2021 at 06:27, Viktor Dukhovni < ietf-dane at dukhovni.org> wrote: > On Fri, Mar 26, 2021 at 07:41:09PM -0500, Alexis King wrote: > > > type applications in patterns are still not enough to satisfy me. I > > provided the empty argument list example because it was simple, but I’d
> > > > bar = foo (\(a `HCons` b `HCons` HNil) -> baz a b) > > > > Can you be a bit more specific on how the constraint `Blah` is presently > defined, and how `foo` uses the HList type to execute a function of the > appropriate arity and signature? > > The example below my signature typechecks, provided I use pattern > synonyms for the GADT constructors, rather than use the constructors > directly. > > -- > Viktor. > > {-# language DataKinds > , FlexibleInstances > , GADTs > , PatternSynonyms > , ScopedTypeVariables > , TypeApplications > , TypeFamilies > , TypeOperators > #-} > > import GHC.Types > import Data.Proxy > import Type.Reflection > import Data.Type.Equality > > data HList as where > HNil_ :: HList '[] > HCons_ :: a -> HList as -> HList (a ': as) > infixr 5 `HCons_` > > pattern HNil :: HList '[]; > pattern HNil = HNil_ > pattern (:^) :: a -> HList as -> HList (a ': as) > pattern (:^) a as = HCons_ a as > pattern (:$) a b = a :^ b :^ HNil > infixr 5 :^ > infixr 5 :$ > > class Typeable as => Blah as where > params :: HList as > instance Blah '[Int,String] where > params = 39 :$ "abc" > > baz :: Int -> String -> Int > baz i s = i + length s > > bar = foo (\(a :$ b) -> baz a b) > > foo :: Blah as => (HList as -> Int) -> Int > foo f = f params > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From atreyu.bbb at gmail.com Sun Mar 28 20:19:47 2021 From: atreyu.bbb at gmail.com (Javier Neira Sanchez) Date: Sun, 28 Mar 2021 22:19:47 +0200 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: Hi all, only want to add a small note: there was (is?) 
a GHC fork targeting the JVM, where I had the luck to be able to contribute my two cents: eta-lang (https://github.com/typelead/eta) Unfortunately the startup behind it lost funds and it is stalled, but I still have the hope that some day it can be resurrected. Javi Neira On Fri, 26 Mar 2021, 10:42, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > This link gives some (old) background > > > https://wiki.haskell.org/GHC/FAQ#Why_isn.27t_GHC_available_for_.NET_or_on_the_JVM.3F > > Simon > > > > *From:* ghc-devs *On Behalf Of *Moritz > Angermann > *Sent:* 26 March 2021 08:00 > *To:* Clinton Mead > *Cc:* ghc-devs > *Subject:* Re: Options for targeting Windows XP? > > > > I believe there is a bit of misconception about what requires a new > backend or not. GHC is a bunch of different intermediate representations > from which one can take off to build backends. The STG or Cmm ones are the > most popular. All our Native Code Generators and the LLVM code gen take off > from the Cmm one. Whether or not that is the correct input representation > for your target largely depends on the target and design of the > code generator. GHCJS takes off from STG, and so does Csaba's GRIN work via > the external STG I believe. IIRC Asterius takes off from Cmm. I don't > remember the details about Eta. > > > > Why fork? Do you want to deal with GHC, and GHC's development? If not, > fork. Do you want to have to keep up with GHC's development? Maybe not > fork. Do you think your compiler can stand on its own and doesn't follow > GHC much, except for being a Haskell compiler? By all means fork. > > > > Eta is a bit special here: Eta forked off, and basically started > customising their Haskell compiler specifically to the JVM, and this also > allowed them to make radical changes to GHC, which would not have been > permissible in mainline GHC.
(Mainline GHC tries to support multiple > platforms and architectures at all times; breaking any of them isn't really > an option that can be taken lightheartedly.) Eta also started having Etlas, > a custom Cabal, ... I'd still like to see a lot from Eta and the ecosystem > be re-integrated into GHC. There have to be good ideas there that can be > brought back. It just needs someone to go look and do the work. > > > > GHCJS is being aligned more with GHC right now precisely to eventually > re-integrate it with GHC. > > > > Asterius went down the same path, likely inspired by GHCJS, but I think I > was able to convince the author that eventual upstreaming should be the > goal and the project should try to stay as close as possible to GHC for > that reason. > > > > Now if you consider adding a codegen backend, this can be done, but again > depends on your exact target. I'd love to see a CLR target, yet I don't > know enough about the CLR to give informed suggestions here. > > > > If you have a toolchain that functions sufficiently similarly to a stock C > toolchain (or you can make your toolchain look sufficiently similar to > one, easily), most of it will just work. If you can separate your building > into compilation of source to some form of object code, and some form of > object code aggregates (archives), and some form of linking (objects and > archives into shared objects, or executables), you can likely plug your > toolchain into GHC (and Cabal), and have it work, once you've taught GHC how > to produce your target language's object code.
They re-used GHC's RTS, and > implemented a shim for linux syscalls, so that they can emulate enough to > have the RTS think it's running on some musl like linux. You most likely > want something proper here eventually; but this might be a first stab at it > to get something working. > > > > Next you'll have to deal with c-bits. Haskell Packages that link against C > parts. This is going to be challenging, not impossible but challenging as > much of the haskell ecosystem expects the ability to compile C files and > use those for low level system interaction. > > > > You can use hackage overlays to build a set of patched packages, once you > have your codegen working. At that point you could start patching ecosystem > packages to work on your target, until your changes are upstreamed, and > provide your user with a hackage overlay (essentially hackage + patches for > specific packages). > > > > Hope this helps. > > > > You'll find most of us on irc.freenode.net#ghc > > > > > On Fri, Mar 26, 2021 at 1:29 PM Clinton Mead > wrote: > > Thanks again for the detailed reply Ben. > > > > I guess the other dream of mine is to give GHC a .NET backend. For my > problem it would be the ideal solution, but it looks like other attempts in > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with > updates to GHC. So I'm sure it's not trivial. > > > > It would be quite lovely though if I could generate .NET + Java + even > Python bytecode from GHC. > > > > Whilst not solving my immediate problem, perhaps my efforts are best spent > in giving GHC a plugin architecture for backends (or if one already > exists?) trying to make a .NET backend. > > > > I believe "Csaba Hruska" is working in this space with GRIN, yes? 
> > > > I read SPJs paper on Implementing Lazy Functional Languages on Stock > Hardware: The Spineless Tagless G-machine > which > implemented STG in C and whilst it wasn't trivial, it didn't seem > stupendously complex (even I managed to roughly follow it). I thought to > myself also, implementing this in .NET would be even easier because I can > hand off garbage collection to the .NET runtime so there's one less thing > to worry about. I also, initially, don't care _too_ much about performance. > > > > Of course, there's probably a whole bunch of nuance. One actually needs > to, for example, represent all the complexities of GADTs into object > orientated classes, maybe converting sum types to inheritance hierarchies > with Visitor Patterns. And also you'd actually have to make sure to do > one's best to ensure exposed Haskell functions look like something > sensible. > > So I guess, given I have a bit of an interest here, what would be the best > approach if I wanted to help GHC develop more backends and into an > architecture where people can add backends without forking GHC? Where could > I start helping that effort? Should I contact "Csaba Hruska" and get > involved in GRIN? Or is there something that I can start working on in GHC > proper? > > Considering that I've been playing around with Haskell since 2002, and I'd > like to actually get paid to write it at some point in my career, and I > have an interest in this area, perhaps this is a good place to start, and > actually helping to develop a pluggable backend architecture for GHC may be > more useful for more people over the long term than trying to hack up an > existing GHC to support 32 bit Windows XP, a battle I suspect will have to > be refought every time a new GHC version is released given the current > structure of GHC. > > > > On Fri, Mar 26, 2021 at 1:34 PM Ben Gamari wrote: > > Clinton Mead writes: > > > Thanks all for your replies. 
Just going through what Ben has said step by > > step: > > > > My sense is that if you don't need the threaded runtime system it would > >> probably be easiest to just try to make a modern GHC run on Windows XP. > >> > > > > Happy to run non-threaded runtime. A good chunk of these machines will be > > single or dual core anyway. > > > That indeed somewhat simplifies things. > > >> As Tamar suggested, it likely not easy, but also not impossible. WinIO > >> is indeed problematic, but thankfully the old MIO IO manager is still > >> around (and will be in 9.2). > >> > > > > "Is still around"? As in it's in the code base and just dead code, or > can I > > trigger GHC to use the old IO manager with a GHC option? > > > > The possible reasons for Windows XP incompatibility that I can think of > >> off the top of my head are: > >> > >> * Timers (we now use QueryPerformanceCounter) > >> > > > > This page suggests that QueryPerformanceCounter > > < > https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter > > > > > should > > run on XP. Is this incorrect? > > > It's supported, but there are caveats [1] that make it unreliable as a > timesource. > > [1] > https://docs.microsoft.com/en-us/windows/win32/sysinfo/acquiring-high-resolution-time-stamps#windowsxp-and-windows2000 > > > > >> * Big-PE support, which is very much necessary for profiled builds > >> > > > > I don't really need profiled builds > > > > Alright, then you *probably* won't be affected by PE's symbol limit. > > >> * Long file path support (mostly a build-time consideration as Haskell > >> build systems tend to produce very long paths) > >> > >> > > I don't need to build on Windows XP either. I just need to run on Windows > > XP so hopefully this won't be an issue. Although if GHC was modified for > > long file path support so it could build itself with long file path > support > > presumably it will affect everything else it builds also. 
> > > If you don't need to build on XP then I suspect this won't affect you. > > > > >> There may be others, but I would start looking there. I am happy to > >> answer any questions that might arise. > >> > > I'm guessing the way forward here might be a patch with two options: > > > > 1. -no-long-path-support/-long-path-support (default -long-path-support) > > 2. -winxp > > > > The winxp option shall: > > > > - Require -no-long-path-support > > - Conflicts with -threaded > > - Conflicts with profiled builds > > - Uses the old IO manager (I'm not sure if this is an option or how this > is > > done). > > > The old IO manager is still the default, although this will likely > change in 9.2. > > > What do you think (roughly speaking)? > > Yes, that is essentially correct. I would probably start by trying to > run a 32-bit GHC build on Windows XP under gdb and see where > things fall over. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Mar 29 02:17:55 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 29 Mar 2021 02:17:55 +0000 Subject: Type inference of singular matches on GADTs In-Reply-To: <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> Message-ID: <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> > On Mar 26, 2021, at 8:41 PM, Alexis King wrote: > > If there’s a single principal type that makes my function well-typed and exhaustive, I’d really like GHC to pick it. 
I think this is the key part of Alexis's plea: that the type checker take into account exhaustivity in choosing how to proceed. Another way to think about this: > f1 :: HList '[] -> () > f1 HNil = () > > f2 :: HList as -> () > f2 HNil = () Both f1 and f2 are well typed definitions. In any usage site where both are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is not. This isn't really about an open-world assumption or the possibility of extra cases -- it has to do with what the runtime behaviors of the two functions are. f1 never fails, while f2 must check a constructor tag and perhaps throw an exception. If we just see \HNil -> (), Alexis seems to be suggesting we prefer the f1 interpretation over the f2 interpretation. Why? Because f1 is exhaustive, and when we can choose an exhaustive interpretation, that's probably a good idea to pursue. I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. But first: does this match your understanding? Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Mon Mar 29 03:00:56 2021 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 28 Mar 2021 23:00:56 -0400 Subject: Type inference of singular matches on GADTs In-Reply-To: <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> Message-ID: i like how you've boiled down this discussion, it makes it much clearer to me at least :) On Sun, Mar 28, 2021 at 10:19 PM Richard Eisenberg wrote: > > > On Mar 26, 2021, at 8:41 PM, Alexis King wrote: > > If there’s a single principal type that makes my function well-typed *and > exhaustive*, I’d really like GHC to pick it. > > > I think this is the key part of Alexis's plea: that the type checker take > into account exhaustivity in choosing how to proceed. > > Another way to think about this: > > f1 :: HList '[] -> () > f1 HNil = () > > f2 :: HList as -> () > f2 HNil = () > > > Both f1 and f2 are well typed definitions. In any usage site where both > are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is > not. This isn't really about an open-world assumption or the possibility of > extra cases -- it has to do with what the runtime behaviors of the two > functions are. f1 never fails, while f2 must check a constructor tag and > perhaps throw an exception. > > If we just see \HNil -> (), Alexis seems to be suggesting we prefer the f1 > interpretation over the f2 interpretation. Why? Because f1 is exhaustive, > and when we can choose an exhaustive interpretation, that's probably a good > idea to pursue. > > I haven't thought about how to implement such a thing. At the least, it > would probably require some annotation saying that we expect `\HNil -> ()` > to be exhaustive (as GHC won't, in general, make that assumption). 
Even > with that, could we get type inference to behave? Possibly. > > But first: does this match your understanding? > > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietf-dane at dukhovni.org Mon Mar 29 06:31:50 2021 From: ietf-dane at dukhovni.org (Viktor Dukhovni) Date: Mon, 29 Mar 2021 02:31:50 -0400 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> Message-ID: On Sun, Mar 28, 2021 at 11:00:56PM -0400, Carter Schonwald wrote: > On Sun, Mar 28, 2021 at 10:19 PM Richard Eisenberg wrote: > > > I think this is the key part of Alexis's plea: that the type checker take > > into account exhaustivity in choosing how to proceed. > > > > Another way to think about this: > > > > f1 :: HList '[] -> () > > f1 HNil = () > > > > f2 :: HList as -> () > > f2 HNil = () > > > > Both f1 and f2 are well typed definitions. In any usage site where both > > are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is > > not. ... > > I like how you've boiled down this discussion, it makes it much clearer to > me at least :) +1. Very much distills it for me too. Thanks! FWIW, I've since boiled down the pattern-synonym example to the below, where I find the choices of ":^" and ":$" to be pleasantly mnemonic, though "HSolo" is perhaps a bit too distracting... 
{-# language DataKinds, FlexibleInstances, FlexibleContexts, GADTs , PatternSynonyms, TypeOperators #-} {-# OPTIONS_GHC -Wno-type-defaults #-} import Data.Reflection import Data.Proxy default (Int) data HList as where HNil_ :: HList '[] HCons_ :: a -> HList as -> HList (a ': as) infixr 5 `HCons_` pattern HNil :: HList '[]; pattern HNil = HNil_ pattern HSolo :: a -> HList '[a] pattern HSolo a = a :^ HNil pattern (:^) :: a -> HList as -> HList (a ': as) pattern (:^) a as = HCons_ a as infixr 5 :^ pattern (:$) :: a -> b -> HList '[a,b] pattern (:$) a b = a :^ HSolo b infixr 5 :$ hApp :: Reifies s (HList as) => (HList as -> r) -> Proxy s -> r hApp f = f . reflect main :: IO () main = do print $ reify HNil $ hApp (\ HNil -> 42) print $ reify (HSolo 42) $ hApp (\ (HSolo a) -> a) print $ reify (28 :$ "0xe") $ hApp (\ (a :$ b) -> a + read b) -- Viktor. From simonpj at microsoft.com Mon Mar 29 09:00:27 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 29 Mar 2021 09:00:27 +0000 Subject: Type inference of singular matches on GADTs In-Reply-To: <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> Message-ID: I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. As I wrote in another post on this thread, it’s a bit tricky. What would you expect of (EX1) \x -> case x of HNil -> blah Here the lambda and the case are separated Now (EX2) \x -> (x, case x of HNil -> blah) Here the lambda and the case are separated more, and x is used twice. What if there are more data constructors that share a common return type? 
(EX3) data HL2 a where HNil1 :: HL2 [] HNil2 :: HL2 [] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } Here HNil1 and HNil2 both return HL2 []. Is that “singular”? What if one was a bit more general than the other? Do we seek the least common generalisation of the alternatives given? (EX4) data HL3 a where HNil1 :: HL2 [Int] HNil2 :: HL2 [a] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } What if the cases were incompatible? (EX5) data HL4 a where HNil1 :: HL2 [Int] HNil2 :: HL2 [Bool] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } Would you expect that to somehow generalise to `HL4 [a] -> blah`? What if x matched multiple times, perhaps on different constructors (EX6) \x -> (case s of HNil1 -> blah1; case x of HNil2 -> blah) The water gets deep quickly here. I don’t (yet) see an obviously-satisfying design point that isn’t massively ad-hoc. Simon From: ghc-devs On Behalf Of Richard Eisenberg Sent: 29 March 2021 03:18 To: Alexis King Cc: ghc-devs at haskell.org Subject: Re: Type inference of singular matches on GADTs On Mar 26, 2021, at 8:41 PM, Alexis King > wrote: If there’s a single principal type that makes my function well-typed and exhaustive, I’d really like GHC to pick it. I think this is the key part of Alexis's plea: that the type checker take into account exhaustivity in choosing how to proceed. Another way to think about this: f1 :: HList '[] -> () f1 HNil = () f2 :: HList as -> () f2 HNil = () Both f1 and f2 are well typed definitions. In any usage site where both are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is not. This isn't really about an open-world assumption or the possibility of extra cases -- it has to do with what the runtime behaviors of the two functions are. f1 never fails, while f2 must check a constructor tag and perhaps throw an exception. If we just see \HNil -> (), Alexis seems to be suggesting we prefer the f1 interpretation over the f2 interpretation. 
Why? Because f1 is exhaustive, and when we can choose an exhaustive interpretation, that's probably a good idea to pursue. I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. But first: does this match your understanding? Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Mar 29 14:42:09 2021 From: ben at well-typed.com (Ben Gamari) Date: Mon, 29 Mar 2021 10:42:09 -0400 Subject: GHC 9.2 has branched Message-ID: <87h7kuuim6.fsf@smart-cactus.org> Hello everyone, At this point the ghc-9.2 branch has officially branched off from master. If there was anything you were holding back from `master`, feel free to now send it off to Marge. I'll be working on doing some release prep and pushing out an alpha in the next few days. This alpha will very likely lack the new NCG (which is still being worked on by Moritz) but otherwise should be functionally complete. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Mar 30 03:06:28 2021 From: ben at well-typed.com (Ben Gamari) Date: Mon, 29 Mar 2021 23:06:28 -0400 Subject: Options for targeting Windows XP? In-Reply-To: References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> Message-ID: <875z19uyqh.fsf@smart-cactus.org> Clinton Mead writes: > Thanks again for the detailed reply Ben. > > I guess the other dream of mine is to give GHC a .NET backend. For my > problem it would be the ideal solution, but it looks like other attempts in > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with > updates to GHC. So I'm sure it's not trivial. 
> > It would be quite lovely though if I could generate .NET + Java + even > Python bytecode from GHC. > > Whilst not solving my immediate problem, perhaps my efforts are best spent > in giving GHC a plugin architecture for backends (or if one already > exists?) trying to make a .NET backend. > This is an interesting (albeit ambitious, for the reasons others have mentioned) idea. In particular, I think the CLR has a slight advantage over the JVM as a Haskell target in that it has native tail-call support [1]. This avoids a fair amount of complexity (and performance overhead) that Eta had to employ to work around this lack in the JVM. I suspect that writing an STG -> CLR IR wouldn't itself be difficult. The hard part is dealing with the primops, RTS, and core libraries. [1] https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/56c08k0k(v=vs.95)?redirectedfrom=MSDN > I believe "Csaba Hruska" is working in this space with GRIN, yes? Csaba is indeed using GHC's front-end and Core pipeline to feed his own compilation pipeline. However, I believe his approach is currently quite decoupled from GHC. This may or may not complicate the ability to integrate with the rest of the ecosystem (e.g. Cabal; Csaba, perhaps you could comment here?) > > I read SPJ's paper on Implementing Lazy Functional Languages on Stock > Hardware: The Spineless Tagless G-machine > > which > implemented STG in C and whilst it wasn't trivial, it didn't seem > stupendously complex (even I managed to roughly follow it). I thought to > myself also, implementing this in .NET would be even easier because I can > hand off garbage collection to the .NET runtime so there's one less thing > to worry about. I also, initially, don't care _too_ much about performance. > Indeed, STG itself is reasonably straightforward. Implementing tagged unions in the CLR doesn't even look that hard (F# does it, after all).
However, there are plenty of tricky bits: * You still need to implement a fair amount of RTS support for a full implementation (e.g. light-weight threads and STM) * You need to shim-out or reimplement the event manager in `base` * What do you do about the many `foreign import`s used by, e.g., `text`? * How do you deal with `foreign import`s elsewhere? > Of course, there's probably a whole bunch of nuance. One actually needs to, > for example, represent all the complexities of GADTs into object orientated > classes, maybe converting sum types to inheritance hierarchies with Visitor > Patterns. And also you'd actually have to make sure to do one's best to > ensure exposed Haskell functions look like something sensible. > > So I guess, given I have a bit of an interest here, what would be the best > approach if I wanted to help GHC develop more backends and into an > architecture where people can add backends without forking GHC? Where could > I start helping that effort? Should I contact "Csaba Hruska" and get > involved in GRIN? Or is there something that I can start working on in GHC > proper? > At the moment we rather lack a good model for how new backends should work. There are quite a few axes to consider here: * How do core libraries (e.g. `text`) work? Various choices are: * Disregard the core libraries (along with most of Hackage) and just take the Haskell language * Reimplement many of the core libraries in the target language (e.g. as done by GHCJS) * How does the compiler interact with the Haskell build toolchain (e.g. Cabal)? Choices are: * Disregard the Haskell build toolchain. My (possibly incorrect) understanding is this is what GRIN does. * Implement something that looks enough like GHC to fool Cabal. * Upstream changes into Cabal to make your new compiler a first-class citizen. This is what GHCJS did. * How does the backend interact with GHC? 
Choices: * The GRIN model: Run the GHC pipeline and serialise the resulting IR (in the case of GRIN, STG) to a file to be slurped up by another process * The Clash/GHCJS model: Implement the new compiler as an executable linking against the GHC API. * The frontend plugin model: For many years now GHC has had support for "front-end plugins". This mechanism allows a plugin to fundamentally redefine what the GHC executable does (e.g. essentially adding a new "mode" to GHC, a la --interactive or --make). It's not impossible that one could use this to implement a new backend. * A hypothetical "backend plugin" mechanism: Currently GHC has no means of introducing plugins after the Core pipeline. Perhaps this should change? This would need a fair amount of design as aspects of the backend currently tend to leak into GHC's frontend. John Ericson has been keen on cleaning this up. Anyways, lots to think about. > Considering that I've been playing around with Haskell since 2002, and I'd > like to actually get paid to write it at some point in my career, and I > have an interest in this area, perhaps this is a good place to start, and > actually helping to develop a pluggable backend architecture for GHC may be > more useful for more people over the long term than trying to hack up an > existing GHC to support 32 bit Windows XP, a battle I suspect will have to > be refought every time a new GHC version is released given the current > structure of GHC. > Yes, 32-bit Windows support sounds like something of a futile exercise. If the problem were *merely* GHC, then perhaps it would be possible. However, the entirety of the open-source compiler community struggles with Windows. For instance, binutils struggles to support even 64-bit Windows without regressions (e.g. see https://gitlab.haskell.org/ghc/ghc/-/issues/16780#note_342715). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From moritz.angermann at gmail.com Tue Mar 30 03:24:13 2021 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Tue, 30 Mar 2021 11:24:13 +0800 Subject: Options for targeting Windows XP? In-Reply-To: <875z19uyqh.fsf@smart-cactus.org> References: <87r1k4uyee.fsf@smart-cactus.org> <87lfaave1x.fsf@smart-cactus.org> <875z19uyqh.fsf@smart-cactus.org> Message-ID: > > * Upstream changes into Cabal to make your new compiler a first-class > citizen. This is what GHCJS did. Just a word of caution: please don't do this. It leads to a non-negligible maintenance burden on your side and on the Cabal side. Rather, try as hard as you can to make your compiler behave like GHC wrt Cabal. Or add generic support for more powerful compilers to Cabal. Adding special handling for one additional compiler will just result in bitrot, odd quirks that only happen with that one compiler, and just a maintenance nightmare for everyone involved. We will be ripping out the GHCJS custom logic from Cabal. And I've also advised the Asterius author not to go down that route. My suggestion--if I may--is to try and build a C-like toolchain around your compiler. That has some notion of compiler, archiver, linker, and those could be empty shell wrappers, or no-ops, depending on your target. Cheers, Moritz On Tue, Mar 30, 2021 at 11:08 AM Ben Gamari wrote: > Clinton Mead writes: > > > Thanks again for the detailed reply Ben. > > > > I guess the other dream of mine is to give GHC a .NET backend. For my > > problem it would be the ideal solution, but it looks like other attempts > in > > this regard (e.g. Eta, GHCJS etc) seem to have difficulty keeping up with > > updates to GHC. So I'm sure it's not trivial.
> > > > Whilst not solving my immediate problem, perhaps my efforts are best > spent > > in giving GHC a plugin architecture for backends (or if one already > > exists?) trying to make a .NET backend. > > > This is an interesting (albeit ambitious, for the reasons others have > mentioned) idea. In particular, I think the CLR has a slightly advantage > over the JVM as a Haskell target in that it has native tail-call > support [1]. This avoids a fair amount of complexity (and performance > overhead) that Eta had to employ to work around this lack in the JVM. > > I suspect that writing an STG -> CLR IR wouldn't itself be difficult. > The hard part is dealing with the primops, RTS, and core libraries. > > [1] > https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/56c08k0k(v=vs.95)?redirectedfrom=MSDN > > > I believe "Csaba Hruska" is working in this space with GRIN, yes? > > Csaba is indeed using GHC's front-end and Core pipeline to feed his own > compilation pipeline. However, I believe his approach is currently quite > decoupled from GHC. This may or may not complicate the ability to > integrate with the rest of the ecosystem (e.g. Cabal; Csaba, perhaps you > could > comment here?) > > > > > > I read SPJs paper on Implementing Lazy Functional Languages on Stock > > Hardware: The Spineless Tagless G-machine > > < > https://www.microsoft.com/en-us/research/publication/implementing-lazy-functional-languages-on-stock-hardware-the-spineless-tagless-g-machine/ > > > > which > > implemented STG in C and whilst it wasn't trivial, it didn't seem > > stupendously complex (even I managed to roughly follow it). I thought to > > myself also, implementing this in .NET would be even easier because I can > > hand off garbage collection to the .NET runtime so there's one less thing > > to worry about. I also, initially, don't care _too_ much about > performance. > > > Indeed, STG itself is reasonably straightforward. 
Implementing tagged > unions in the CLR doesn't even look that hard (F# does it, afterall). > However, there are plenty of tricky bits: > > * You still need to implement a fair amount of RTS support for a full > implementation (e.g. light-weight threads and STM) > > * You need to shim-out or reimplement the event manager in `base` > > * What do you do about the many `foreign import`s used by, e.g., > `text`? > > * How do you deal with `foreign import`s elsewhere? > > > Of course, there's probably a whole bunch of nuance. One actually needs > to, > > for example, represent all the complexities of GADTs into object > orientated > > classes, maybe converting sum types to inheritance hierarchies with > Visitor > > Patterns. And also you'd actually have to make sure to do one's best to > > ensure exposed Haskell functions look like something sensible. > > > > So I guess, given I have a bit of an interest here, what would be the > best > > approach if I wanted to help GHC develop more backends and into an > > architecture where people can add backends without forking GHC? Where > could > > I start helping that effort? Should I contact "Csaba Hruska" and get > > involved in GRIN? Or is there something that I can start working on in > GHC > > proper? > > > At the moment we rather lack a good model for how new backends should > work. There are quite a few axes to consider here: > > * How do core libraries (e.g. `text`) work? Various choices are: > > * Disregard the core libraries (along with most of Hackage) and just > take the Haskell language > > * Reimplement many of the core libraries in the target language (e.g. > as done by GHCJS) > > * How does the compiler interact with the Haskell build toolchain (e.g. > Cabal)? Choices are: > > * Disregard the Haskell build toolchain. My (possibly incorrect) > understanding is this is what GRIN does. > > * Implement something that looks enough like GHC to fool Cabal. 
> > * Upstream changes into Cabal to make your new compiler a first-class > citizen. This is what GHCJS did. > > * How does the backend interact with GHC? Choices: > > * The GRIN model: Run the GHC pipeline and serialise the resulting IR > (in the case of GRIN, STG) to a file to be slurped up by another > process > > * The Clash/GHCJS model: Implement the new compiler as an executable > linking against the GHC API. > > * The frontend plugin model: For many years now GHC has had support > for "front-end plugins". This mechanism allows a plugin to > fundamentally redefine what the GHC executable does (e.g. > essentially adding a new "mode" to GHC, a la --interactive or > --make). It's not impossible that one could use this to implement a > new backend. > > * A hypothetical "backend plugin" mechanism: Currently GHC has no > means of introducing plugins after the Core pipeline. Perhaps this > should change? This would need a fair amount of design as aspects > of the backend currently tend to leak into GHC's frontend. John > Ericson has been keen on cleaning this up. > > Anyways, lots to think about. > > > Considering that I've been playing around with Haskell since 2002, and > I'd > > like to actually get paid to write it at some point in my career, and I > > have an interest in this area, perhaps this is a good place to start, and > > actually helping to develop a pluggable backend architecture for GHC may > be > > more useful for more people over the long term than trying to hack up an > > existing GHC to support 32 bit Windows XP, a battle I suspect will have > to > > be refought every time a new GHC version is released given the current > > structure of GHC. > > > Yes, 32-bit Windows support sounds like something of a futile exercise. > If the problem were *merely* GHC, then perhaps it would be possible. > However, the entirety of the open-source compiler community struggles > with Windows. 
For instance, binutils struggles to support even 64-bit > Windows without regressions (e.g. see > https://gitlab.haskell.org/ghc/ghc/-/issues/16780#note_342715). > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Tue Mar 30 03:57:45 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 30 Mar 2021 03:57:45 +0000 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> Message-ID: <010f01788147ba8e-e89bcd4f-e22c-42cd-a585-4217c6715b29-000000@us-east-2.amazonses.com> As usual, I want to separate out the specification of a feature from the implementation. So let's just focus on specification for now -- with the caveat that there might be no possible implementation of these ideas. The key innovation I see lurking here is the idea of an *exhaustive* function, where we know that any pattern-match on an argument is always exhaustive. I will write such a thing with @->, in both the type and in the arrow that appears after the lambda. The @-> type is a subtype of -> (and perhaps does not need to be written differently from ->). EX1: \x @-> case x of HNil -> blah This is easy: we can infer HList '[] @-> blah's type, because the pattern match is declared to be exhaustive, and no other type grants that property. EX2: \x @-> (x, case x of HNil -> blah) Same as EX1. EX3: \x @-> case x of { HNil1 -> blah; HNil2 -> blah } Same as EX1. There is still a unique type for which the pattern-match is exhaustive. EX4: Reject. There are multiple valid types, and we don't know which one to pick.
This is like classic untouchable-variables territory. EX5: This is hard. A declarative spec would probably choose HL2 [a] -> ... as you suggest, but there may be no implementation of such an idea. EX6: Reject. No type leads to exhaustive matches. I'm not saying this is a good idea for GHC or that it's implementable. But the idea of having type inference account for exhaustivity in this way does not seem, a priori, unspecified. Richard > On Mar 29, 2021, at 5:00 AM, Simon Peyton Jones wrote: > > I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. > > As I wrote in another post on this thread, it’s a bit tricky. > > What would you expect of (EX1) > > \x -> case x of HNil -> blah > > Here the lambda and the case are separated > > Now (EX2) > > \x -> (x, case x of HNil -> blah) > > Here the lambda and the case are separated more, and x is used twice. > What if there are more data constructors that share a common return type? (EX3) > > data HL2 a where > HNil1 :: HL2 [] > HNil2 :: HL2 [] > HCons :: …blah… > > \x -> case x of { HNil1 -> blah; HNil 2 -> blah } > > Here HNil1 and HNil2 both return HL2 []. Is that “singular”? > > What if one was a bit more general than the other? Do we seek the least common generalisation of the alternatives given? (EX4) > > data HL3 a where > HNil1 :: HL2 [Int] > HNil2 :: HL2 [a] > HCons :: …blah… > > \x -> case x of { HNil1 -> blah; HNil 2 -> blah } > > What if the cases were incompatible? (EX5) > > data HL4 a where > HNil1 :: HL2 [Int] > HNil2 :: HL2 [Bool] > HCons :: …blah… > > \x -> case x of { HNil1 -> blah; HNil 2 -> blah } > > Would you expect that to somehow generalise to `HL4 [a] -> blah`? 
> > What if x matched multiple times, perhaps on different constructors (EX6) > > \x -> (case s of HNil1 -> blah1; case x of HNil2 -> blah) > > > The water gets deep quickly here. I don’t (yet) see an obviously-satisfying design point that isn’t massively ad-hoc. > > Simon > > > From: ghc-devs On Behalf Of Richard Eisenberg > Sent: 29 March 2021 03:18 > To: Alexis King > Cc: ghc-devs at haskell.org > Subject: Re: Type inference of singular matches on GADTs > > > > > On Mar 26, 2021, at 8:41 PM, Alexis King > wrote: > > If there’s a single principal type that makes my function well-typed and exhaustive, I’d really like GHC to pick it. > > I think this is the key part of Alexis's plea: that the type checker take into account exhaustivity in choosing how to proceed. > > Another way to think about this: > > f1 :: HList '[] -> () > f1 HNil = () > > f2 :: HList as -> () > f2 HNil = () > > Both f1 and f2 are well typed definitions. In any usage site where both are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is not. This isn't really about an open-world assumption or the possibility of extra cases -- it has to do with what the runtime behaviors of the two functions are. f1 never fails, while f2 must check a constructor tag and perhaps throw an exception. > > If we just see \HNil -> (), Alexis seems to be suggesting we prefer the f1 interpretation over the f2 interpretation. Why? Because f1 is exhaustive, and when we can choose an exhaustive interpretation, that's probably a good idea to pursue. > > I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. > > But first: does this match your understanding? > > Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alfredo.dinapoli at gmail.com Tue Mar 30 07:42:17 2021 From: alfredo.dinapoli at gmail.com (Alfredo Di Napoli) Date: Tue, 30 Mar 2021 09:42:17 +0200 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? Message-ID: Hello folks, as some of you might know, Richard and I are reworking how GHC constructs, emits and deals with errors and warnings (see https://gitlab.haskell.org/ghc/ghc/-/wikis/Errors-as-(structured)-values and #18516). To summarise the spirit very briefly, we will have (eventually) proper domain-specific types instead of SDocs. The idea is to have very precise and "focused" types for the different phases of the compilation pipeline, and a "catch-all" monomorphic `GhcMessage` type used for the final pretty-printing and exception-throwing: data GhcMessage where GhcPsMessage :: PsMessage -> GhcMessage GhcTcRnMessage :: TcRnMessage -> GhcMessage GhcDsMessage :: DsMessage -> GhcMessage GhcDriverMessage :: DriverMessage -> GhcMessage GhcUnknownMessage :: forall a. (Diagnostic a, Typeable a) => a -> GhcMessage While starting to refactor GHC to use these types, I have stumbled upon something bizarre: the `DsGblEnv` and `TcLclEnv` envs both share the same `IORef` to store the diagnostics (i.e. warnings and errors) accumulated during compilation. More specifically, a function like `GHC.HsToCore.Monad.mkDsEnvsFromTcGbl` simply receives as input the `IORef` coming straight from the `TcLclEnv`, and stores it into the `DsGblEnv`. This is unfortunate, because it would force me to change the type of this `IORef` to be `IORef (Messages GhcMessage)` to accommodate both diagnostic types, but this would bubble up into top-level functions like `initTc`, which would now return a `Messages GhcMessage`. This is once again unfortunate, because it is "premature": ideally it might still be nice to return `Messages TcRnMessage`, so that GHC API users could get a very precise diagnostic type rather than the grab-bag that `GhcMessage` is.
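The precise-versus-catch-all tension can be illustrated outside GHC with a toy version of the same pattern. All names below are simplified stand-ins for the real GHC types, with the `Diagnostic`/`Typeable` machinery omitted:

```haskell
{-# LANGUAGE GADTs #-}

-- Simplified stand-ins for the phase-specific message types.
newtype PsMessage   = PsMessage   String deriving Show
newtype TcRnMessage = TcRnMessage String deriving Show

-- The catch-all type, mirroring the GhcMessage sketch above.
data GhcMessage where
  GhcPsMessage   :: PsMessage   -> GhcMessage
  GhcTcRnMessage :: TcRnMessage -> GhcMessage

-- Final pretty-printing only ever sees the monomorphic wrapper.
render :: GhcMessage -> String
render (GhcPsMessage   (PsMessage s))   = "parser: " ++ s
render (GhcTcRnMessage (TcRnMessage s)) = "tc/rn: "  ++ s

-- A precise API: callers can pattern-match on TcRnMessage directly ...
typecheck :: [TcRnMessage]
typecheck = [TcRnMessage "out of scope: x"]

-- ... while only the driver wraps into the catch-all, just for printing.
driver :: [String]
driver = map (render . GhcTcRnMessage) typecheck
```

An API function like the toy `typecheck` keeps the precise per-phase type; only the driver commits to the monomorphic wrapper. That late wrapping is exactly the property the shared `IORef` makes hard to preserve.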
It also violates an implicit contract: we are saying that `initTc` might return (potentially) *any* GHC diagnostic message (including, for example, driver errors/warnings), which I think is misleading. Having said all of that, it's also possible that returning `Messages GhcMessage` is totally fine here and we don't need to be able to do this fine-grained distinction for the GHC API functions. Regardless, I would like to ask the audience: * Why `TcLclEnv` and `DsGblEnv` need to share the same IORef? * Is this for efficiency reasons? * Is this because we need the two monads to independently accumulate errors into the same IORef? Thanks! Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 30 08:33:21 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 30 Mar 2021 08:33:21 +0000 Subject: Type inference of singular matches on GADTs In-Reply-To: <010f01788147ba8e-e89bcd4f-e22c-42cd-a585-4217c6715b29-000000@us-east-2.amazonses.com> References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> <010f01788147ba8e-e89bcd4f-e22c-42cd-a585-4217c6715b29-000000@us-east-2.amazonses.com> Message-ID: I'm not saying this is a good idea for GHC or that it's implementable. But the idea of having type inference account for exhaustivity in this way does not seem, a priori, unspecified. No, but I’m pointing out that specifying it might be tricky, involving some highly non-local reasoning. I can’t yet see how to write a formal specification. Note “yet” -- growth mindset! Simon From: Richard Eisenberg Sent: 30 March 2021 04:58 To: Simon Peyton Jones Cc: Alexis King ; ghc-devs at haskell.org Subject: Re: Type inference of singular matches on GADTs As usual, I want to separate out the specification of a feature from the implementation. 
So let's just focus on specification for now -- with the caveat that there might be no possible implementation of these ideas. The key innovation I see lurking here is the idea of an *exhaustive* function, where we know that any pattern-match on an argument is always exhaustive. I will write such a thing with @->, in both the type and in the arrow that appears after the lambda. The @-> type is a subtype of -> (and perhaps does not need to be written differently from ->). EX1: \x @-> case x of HNil -> blah This is easy: we can infer HList '[] @-> blah's type, because the pattern match is declared to be exhaustive, and no other type grants that property. EX2: \x @-> (x, case x of HNit -> blah) Same as EX1. EX3: \x @-> case x of { HNil1 -> blah; HNil2 -> blah } Same as EX1. There is still a unique type for which the patten-match is exhaustive. EX4: Reject. There are multiple valid types, and we don't know which one to pick. This is like classic untouchable-variables territory. EX5: This is hard. A declarative spec would probably choose HL2 [a] -> ... as you suggest, but there may be no implementation of such an idea. EX6: Reject. No type leads to exhaustive matches. I'm not saying this is a good idea for GHC or that it's implementable. But the idea of having type inference account for exhaustivity in this way does not seem, a priori, unspecified. Richard On Mar 29, 2021, at 5:00 AM, Simon Peyton Jones > wrote: I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. As I wrote in another post on this thread, it’s a bit tricky. What would you expect of (EX1) \x -> case x of HNil -> blah Here the lambda and the case are separated Now (EX2) \x -> (x, case x of HNil -> blah) Here the lambda and the case are separated more, and x is used twice. 
What if there are more data constructors that share a common return type? (EX3) data HL2 a where HNil1 :: HL2 [] HNil2 :: HL2 [] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } Here HNil1 and HNil2 both return HL2 []. Is that “singular”? What if one was a bit more general than the other? Do we seek the least common generalisation of the alternatives given? (EX4) data HL3 a where HNil1 :: HL2 [Int] HNil2 :: HL2 [a] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } What if the cases were incompatible? (EX5) data HL4 a where HNil1 :: HL2 [Int] HNil2 :: HL2 [Bool] HCons :: …blah… \x -> case x of { HNil1 -> blah; HNil 2 -> blah } Would you expect that to somehow generalise to `HL4 [a] -> blah`? What if x matched multiple times, perhaps on different constructors (EX6) \x -> (case s of HNil1 -> blah1; case x of HNil2 -> blah) The water gets deep quickly here. I don’t (yet) see an obviously-satisfying design point that isn’t massively ad-hoc. Simon From: ghc-devs > On Behalf Of Richard Eisenberg Sent: 29 March 2021 03:18 To: Alexis King > Cc: ghc-devs at haskell.org Subject: Re: Type inference of singular matches on GADTs On Mar 26, 2021, at 8:41 PM, Alexis King > wrote: If there’s a single principal type that makes my function well-typed and exhaustive, I’d really like GHC to pick it. I think this is the key part of Alexis's plea: that the type checker take into account exhaustivity in choosing how to proceed. Another way to think about this: f1 :: HList '[] -> () f1 HNil = () f2 :: HList as -> () f2 HNil = () Both f1 and f2 are well typed definitions. In any usage site where both are well-typed, they will behave the same. Yet f1 is exhaustive while f2 is not. This isn't really about an open-world assumption or the possibility of extra cases -- it has to do with what the runtime behaviors of the two functions are. f1 never fails, while f2 must check a constructor tag and perhaps throw an exception. 
If we just see \HNil -> (), Alexis seems to be suggesting we prefer the f1 interpretation over the f2 interpretation. Why? Because f1 is exhaustive, and when we can choose an exhaustive interpretation, that's probably a good idea to pursue. I haven't thought about how to implement such a thing. At the least, it would probably require some annotation saying that we expect `\HNil -> ()` to be exhaustive (as GHC won't, in general, make that assumption). Even with that, could we get type inference to behave? Possibly. But first: does this match your understanding? Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 30 08:51:27 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 30 Mar 2021 08:51:27 +0000 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: References: Message-ID: I think the main reason is that for Template Haskell the renamer/type-checker need to run the desugarer. See the call to initDsTc in GHC.Tc.Gen.Splice. I suppose an alternative is that the TcGblEnv could have a second IORef to use for error messages that come from desugaring during TH splices. Nothing deeper than that I think. Simon From: ghc-devs On Behalf Of Alfredo Di Napoli Sent: 30 March 2021 08:42 To: Simon Peyton Jones via ghc-devs Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? Hello folks, as some of you might know me and Richard are reworking how GHC constructs, emits and deals with errors and warnings (See https://gitlab.haskell.org/ghc/ghc/-/wikis/Errors-as-(structured)-values and #18516). To summarise very briefly the spirit, we will have (eventually) proper domain-specific types instead of SDocs. 
The idea is to have very precise and "focused" types for the different phases of the compilation pipeline, and a "catch-all" monomorphic `GhcMessage` type used for the final pretty-printing and exception-throwing: data GhcMessage where GhcPsMessage :: PsMessage -> GhcMessage GhcTcRnMessage :: TcRnMessage -> GhcMessage GhcDsMessage :: DsMessage -> GhcMessage GhcDriverMessage :: DriverMessage -> GhcMessage GhcUnknownMessage :: forall a. (Diagnostic a, Typeable a) => a -> GhcMessage While starting to refactor GHC to use these types, I have stepped into something bizarre: the `DsGblEnv` and `TcLclEnv` envs both share the same `IORef` to store the diagnostics (i.e. warnings and errors) accumulated during compilation. More specifically, a function like `GHC.HsToCore.Monad.mkDsEnvsFromTcGbl` simply receives as input the `IORef` coming straight from the `TcLclEnv`, and stores it into the `DsGblEnv`. This is unfortunate, because it would force me to change the type of this `IORef` to be `IORef (Messages GhcMessage)` to accommodate both diagnostic types, but this would bubble up into top-level functions like `initTc`, which would now return a `Messages GhcMessage`. This is once again unfortunate, because is "premature": ideally it might still be nice to return `Messages TcRnMessage`, so that GHC API users could get a very precise diagnostic type rather than the bag `GhcMessage` is. It also violates an implicit contract: we are saying that `initTc` might return (potentially) *any* GHC diagnostic message (including, for example, driver errors/warnings), which I think is misleading. Having said all of that, it's also possible that returning `Messages GhcMessage` is totally fine here and we don't need to be able to do this fine-grained distinction for the GHC API functions. Regardless, I would like to ask the audience: * Why `TcLclEnv` and `DsGblEnv` need to share the same IORef? * Is this for efficiency reasons? 
* Is this because we need the two monads to independently accumulate errors into the same IORef? Thanks! Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.dinapoli at gmail.com Tue Mar 30 08:57:27 2021 From: alfredo.dinapoli at gmail.com (Alfredo Di Napoli) Date: Tue, 30 Mar 2021 10:57:27 +0200 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: References: Message-ID: Right, I see, thanks. This is what I was attempting so far: data DsMessage = DsUnknownMessage !DiagnosticMessage | DsLiftedTcRnMessage !TcRnMessage -- ^ A diagnostic coming straight from the Typecheck-renamer. and later: liftTcRnMessages :: MonadIO m => IORef (Messages TcRnMessage) -> m (IORef (Messages DsMessage)) liftTcRnMessages ref = liftIO $ do oldContent <- readIORef ref newIORef (DsLiftedTcRnMessage <$> oldContent) ... mkDsEnvsFromTcGbl :: MonadIO m => HscEnv -> IORef (Messages TcRnMessage) -> TcGblEnv -> m (DsGblEnv, DsLclEnv) mkDsEnvsFromTcGbl hsc_env msg_var tcg_env = do { cc_st_var <- liftIO $ newIORef newCostCentreState ... ; msg_var' <- liftTcRnMessages msg_var ; return $ mkDsEnvs unit_env this_mod rdr_env type_env fam_inst_env msg_var' cc_st_var complete_matches } While this typechecks, I wonder if that's the right way to think about it -- from your reply, it seems like the dependency is in the opposite direction -- we need to store desugaring diagnostics in the TcM due to TH splicing, not the other way around. I'll explore the idea of adding a second IORef. Thanks! On Tue, 30 Mar 2021 at 10:51, Simon Peyton Jones wrote: > I think the main reason is that for Template Haskell the > renamer/type-checker need to run the desugarer. See the call to initDsTc > in GHC.Tc.Gen.Splice. > > > > I suppose an alternative is that the TcGblEnv could have a second IORef to > use for error messages that come from desugaring during TH splices. > > > > Nothing deeper than that I think. 
> > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Alfredo Di > Napoli > *Sent:* 30 March 2021 08:42 > *To:* Simon Peyton Jones via ghc-devs > *Subject:* Why TcLclEnv and DsGblEnv need to store the same IORef for > errors? > > > > Hello folks, > > > > as some of you might know me and Richard are reworking how GHC constructs, > emits and deals with errors and warnings (See > https://gitlab.haskell.org/ghc/ghc/-/wikis/Errors-as-(structured)-values > > and #18516). > > > > To summarise very briefly the spirit, we will have (eventually) proper > domain-specific types instead of SDocs. The idea is to have very precise > and "focused" types for the different phases of the compilation pipeline, > and a "catch-all" monomorphic `GhcMessage` type used for the final > pretty-printing and exception-throwing: > > > > data GhcMessage where > > GhcPsMessage :: PsMessage -> GhcMessage > > GhcTcRnMessage :: TcRnMessage -> GhcMessage > > GhcDsMessage :: DsMessage -> GhcMessage > > GhcDriverMessage :: DriverMessage -> GhcMessage > > GhcUnknownMessage :: forall a. (Diagnostic a, Typeable a) => a -> > GhcMessage > > > > While starting to refactor GHC to use these types, I have stepped into > something bizarre: the `DsGblEnv` and `TcLclEnv` envs both share the same > `IORef` to store the diagnostics (i.e. warnings and errors) accumulated > during compilation. More specifically, a function like > `GHC.HsToCore.Monad.mkDsEnvsFromTcGbl` simply receives as input the `IORef` > coming straight from the `TcLclEnv`, and stores it into the `DsGblEnv`. > > > > This is unfortunate, because it would force me to change the type of this > `IORef` to be > > `IORef (Messages GhcMessage)` to accommodate both diagnostic types, but > this would bubble up into top-level functions like `initTc`, which would > now return a `Messages GhcMessage`. 
This is once again unfortunate, because > is "premature": ideally it might still be nice to return `Messages > TcRnMessage`, so that GHC API users could get a very precise diagnostic > type rather than the bag `GhcMessage` is. It also violates an implicit > contract: we are saying that `initTc` might return (potentially) *any* GHC > diagnostic message (including, for example, driver errors/warnings), which > I think is misleading. > > > > Having said all of that, it's also possible that returning `Messages > GhcMessage` is totally fine here and we don't need to be able to do this > fine-grained distinction for the GHC API functions. Regardless, I would > like to ask the audience: > > > > * Why `TcLclEnv` and `DsGblEnv` need to share the same IORef? > > * Is this for efficiency reasons? > > * Is this because we need the two monads to independently accumulate > errors into the > > same IORef? > > > > Thanks! > > > > Alfredo > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 30 13:16:59 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 30 Mar 2021 13:16:59 +0000 Subject: Multiple versions of happy Message-ID: What's the approved mechanism to install multiple versions of happy/alex etc? Eg I tried to build ghc-9.0 and got this: checking for makeinfo... no checking for python3... /usr/bin/python3 checking for ghc-pkg matching /opt/ghc/bin/ghc... /opt/ghc/bin/ghc-pkg checking for happy... /home/simonpj/.cabal/bin/happy checking for version of happy... 1.20.0 configure: error: Happy version 1.19 is required to compile GHC. So I have to 1. Install happy-1.19 without overwriting the installed happy-1.20 2. Tell configure to use happy-1.19 What's the best way to do those two things? Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sgraf1337 at gmail.com Tue Mar 30 13:22:01 2021 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Tue, 30 Mar 2021 15:22:01 +0200 Subject: Multiple versions of happy In-Reply-To: References: Message-ID: Hi Simon, According to the configure script, you can use the HAPPY env variable. e.g. $ HAPPY=/full/path/to/happy ./configure Hope that helps. Cheers, Sebastian Am Di., 30. März 2021 um 15:19 Uhr schrieb Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org>: > What’s the approved mechanism to install multiple versions of happy/alex > etc? Eg I tried to build ghc-9.0 and got this: > > checking for makeinfo... no > > checking for python3... /usr/bin/python3 > > checking for ghc-pkg matching /opt/ghc/bin/ghc... /opt/ghc/bin/ghc-pkg > > checking for happy... /home/simonpj/.cabal/bin/happy > > checking for version of happy... 1.20.0 > > configure: error: Happy version 1.19 is required to compile GHC. > > > > I so I have to > > 1. Install happy-1.19 without overwriting the installed happy-1.20 > 2. Tell configure to use happy-1.19 > > What’s the best way to do those two things? > > Thanks > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 30 13:22:04 2021 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 30 Mar 2021 13:22:04 +0000 Subject: Multiple versions of happy In-Reply-To: References: Message-ID: That's (2), thanks. How about (1)? From: Sebastian Graf Sent: 30 March 2021 14:22 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Multiple versions of happy Hi Simon, According to the configure script, you can use the HAPPY env variable. e.g. $ HAPPY=/full/path/to/happy ./configure Hope that helps. Cheers, Sebastian Am Di., 30. 
März 2021 um 15:19 Uhr schrieb Simon Peyton Jones via ghc-devs >: What's the approved mechanism to install multiple versions of happy/alex etc? Eg I tried to build ghc-9.0 and got this: checking for makeinfo... no checking for python3... /usr/bin/python3 checking for ghc-pkg matching /opt/ghc/bin/ghc... /opt/ghc/bin/ghc-pkg checking for happy... /home/simonpj/.cabal/bin/happy checking for version of happy... 1.20.0 configure: error: Happy version 1.19 is required to compile GHC. I so I have to 1. Install happy-1.19 without overwriting the installed happy-1.20 2. Tell configure to use happy-1.19 What's the best way to do those two things? Thanks Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.grenrus at iki.fi Tue Mar 30 13:34:25 2021 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Tue, 30 Mar 2021 16:34:25 +0300 Subject: Multiple versions of happy In-Reply-To: References: Message-ID: One way is cabal install happy --program-suffix=-1.19 --constraint='happy ^>=1.19' cabal install happy --program-suffix=-1.20 --constraint='happy ^>=1.20' happy-1.19 --version Happy Version 1.19.12 Copyright (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2005 Simon Marlow happy-1.20 --version Happy Version 1.20.0 Copyright (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2005 Simon Marlow - Oleg On 30.3.2021 16.22, Simon Peyton Jones via ghc-devs wrote: > > That’s (2), thanks.  How about (1)? > >   > > *From:*Sebastian Graf > *Sent:* 30 March 2021 14:22 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Multiple versions of happy > >   > > Hi Simon, > >   > > According to the configure script, you can use the HAPPY env variable. > e.g. > >   > > $ HAPPY=/full/path/to/happy ./configure > >   > > Hope that helps. Cheers, > > Sebastian > >   > > Am Di., 30. 
März 2021 um 15:19 Uhr schrieb Simon Peyton Jones via > ghc-devs >: > > What’s the approved mechanism to install multiple versions of > happy/alex etc?  Eg I tried to build ghc-9.0 and got this: > > checking for makeinfo... no > > checking for python3... /usr/bin/python3 > > checking for ghc-pkg matching /opt/ghc/bin/ghc... /opt/ghc/bin/ghc-pkg > > checking for happy... /home/simonpj/.cabal/bin/happy > > checking for version of happy... 1.20.0 > > configure: error: Happy version 1.19 is required to compile GHC. > >   > > I so I have to > > 1. Install happy-1.19 without overwriting the installed happy-1.20 > 2. Tell configure to use happy-1.19 > > What’s the best way to do those two things? > > Thanks > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Tue Mar 30 14:14:13 2021 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 30 Mar 2021 14:14:13 +0000 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: References: Message-ID: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> > On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli wrote: > > I'll explore the idea of adding a second IORef. Renaming/type-checking is already mutually recursive. (The renamer must call the type-checker in order to rename -- that is, evaluate -- untyped splices. I actually can't recall why the type-checker needs to call the renamer.) So we will have a TcRnError. Now we see that the desugarer ends up mixed in, too. We could proceed how Alfredo suggests, by adding a second IORef. 
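[The second-IORef arrangement Simon suggested can be sketched in a self-contained form. The message types and field names below are stubs for illustration only; GHC's real `TcGblEnv` and `Messages` types are far richer.]

```haskell
import Data.IORef

-- Stub diagnostic types standing in for GHC's real, richer ones.
newtype TcRnMessage = TcRnMessage String deriving Show
newtype DsMessage   = DsMessage   String deriving Show

-- A cut-down TcGblEnv: one IORef per diagnostic type, so diagnostics
-- produced while desugaring a TH splice land in their own precisely
-- typed ref instead of being forced into a shared catch-all type.
data TcGblEnv = TcGblEnv
  { tcg_errs    :: IORef [TcRnMessage]  -- hypothetical: renamer/type-checker messages
  , tcg_ds_errs :: IORef [DsMessage]    -- hypothetical: the proposed second IORef
  }

reportTcRn :: TcGblEnv -> TcRnMessage -> IO ()
reportTcRn env msg = modifyIORef (tcg_errs env) (msg :)

reportDs :: TcGblEnv -> DsMessage -> IO ()
reportDs env msg = modifyIORef (tcg_ds_errs env) (msg :)
```

[With this shape, `initTc` could keep returning only the TcRn messages, while desugaring-during-splices accumulates into the second ref.]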
Or we could just make TcRnDsError (maybe renaming that). What's the disadvantage? Clients will have to potentially know about all the different error forms with either approach (that is, using my combined type or using multiple IORefs). The big advantage to separating is maybe module dependencies? But my guess is that the dependencies won't be an issue here, due to the fact that these components are already leaning on each other. Maybe the advantage is just in having smaller types? Maybe. I don't have a great sense as to what to do here, but I would want a clear reason that e.g. the TcRn monad would have two IORefs, while other monads will work with GhcMessage (instead of a whole bunch of IORefs). Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.dinapoli at gmail.com Tue Mar 30 14:35:51 2021 From: alfredo.dinapoli at gmail.com (Alfredo Di Napoli) Date: Tue, 30 Mar 2021 16:35:51 +0200 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> References: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> Message-ID: Hello folks, Richard: as I was in the middle of some other refactoring by the time Simon replied, you can see a potential refactoring that *doesn't* use the double IORef, but rather this idea of having a `DsMessage` embed `TcRnMessage`(s) via a new costructor: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4798/diffs#6eaba7424490cb26d74e0dab0f6fd7bc3537dca7 (Just grep for "DsMessage", "DsUnknownMessage", and `DsLiftedTcRnMessage` to see the call sites). The end result is not bad, I have to say. Or, at least, it doesn't strike me as totally horrid :) A. On Tue, 30 Mar 2021 at 16:14, Richard Eisenberg wrote: > > > On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli > wrote: > > I'll explore the idea of adding a second IORef. 
> > > Renaming/type-checking is already mutually recursive. (The renamer must > call the type-checker in order to rename -- that is, evaluate -- untyped > splices. I actually can't recall why the type-checker needs to call the > renamer.) So we will have a TcRnError. Now we see that the desugarer ends > up mixed in, too. We could proceed how Alfredo suggests, by adding a second > IORef. Or we could just make TcRnDsError (maybe renaming that). > > What's the disadvantage? Clients will have to potentially know about all > the different error forms with either approach (that is, using my combined > type or using multiple IORefs). The big advantage to separating is maybe > module dependencies? But my guess is that the dependencies won't be an > issue here, due to the fact that these components are already leaning on > each other. Maybe the advantage is just in having smaller types? Maybe. > > I don't have a great sense as to what to do here, but I would want a clear > reason that e.g. the TcRn monad would have two IORefs, while other monads > will work with GhcMessage (instead of a whole bunch of IORefs). > > Richard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Wed Mar 31 02:53:04 2021 From: lexi.lambda at gmail.com (Alexis King) Date: Tue, 30 Mar 2021 21:53:04 -0500 Subject: Type inference of singular matches on GADTs In-Reply-To: References: <914d28f5-d2d5-9cf0-2ec0-88e98c10e92d@gmail.com> <12fb72cf-abfb-b2be-6560-a6a73e327319@gmail.com> <010f01787bc5fa60-9e5e6304-ebc5-442d-9a64-18eaf88dbdc0-000000@us-east-2.amazonses.com> <010f01788147ba8e-e89bcd4f-e22c-42cd-a585-4217c6715b29-000000@us-east-2.amazonses.com> Message-ID: On 3/28/21 9:17 PM, Richard Eisenberg wrote: > > I think this is the key part of Alexis's plea: that the type checker > take into account exhaustivity in choosing how to proceed. > > […] > > Does this match your understanding? Yes, precisely. 
:) Without GADTs, exhaustivity doesn’t yield any useful information to the typechecker, but with them, it can. I agree with Simon that it seems tricky—his examples are good ones—and I agree with Richard that I don’t know if this is actually a good or fruitful idea. I’m certainly not demanding anyone else produce a solution! But I was curious if anyone had explored this before, and it sounds like perhaps the answer is “no.” Fair enough! I still appreciate the discussion. Alexis From john.ericson at obsidian.systems Wed Mar 31 03:44:25 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Tue, 30 Mar 2021 23:44:25 -0400 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> References: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> Message-ID: Alfredo also replied to this pointing out his embedding plan. I also prefer that, because I really wish TH didn't smear together the phases so much. Moreover, I hope with  - GHC proposals https://github.com/ghc-proposals/ghc-proposals/pull/412 / https://github.com/ghc-proposals/ghc-proposals/pull/243  - The parallelism work currently being planned in https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output we might actually have an opportunity/extra motivation to do that. Splices and quotes will still induce intricate inter-phase dependencies, but I hope that could be mediated by the driver rather than just baked into each phase. (One final step would be the "stuck macros" technique of https://www.youtube.com/watch?v=nUvKoG_V_U0 / https://github.com/gelisam/klister, where TH splices would be able to make "blocking queries" of the compiler in ways that induce more of these fine-grained dependencies.)
Anyways, while we could also do a "TcRnDsError" and split later, I hope Alfredo's alternative of embedding won't be too much harder, and will prepare us for these exciting areas of exploration. John On 3/30/21 10:14 AM, Richard Eisenberg wrote: > > >> On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli >> > wrote: >> >> I'll explore the idea of adding a second IORef. > > Renaming/type-checking is already mutually recursive. (The renamer must > call the type-checker in order to rename -- that is, evaluate -- untyped > splices. I actually can't recall why the type-checker needs to call the > renamer.) So we will have a TcRnError. Now we see that the desugarer ends > up mixed in, too. We could proceed how Alfredo suggests, by adding a second > IORef. Or we could just make TcRnDsError (maybe renaming that). > > What's the disadvantage? Clients will have to potentially know about all > the different error forms with either approach (that is, using my combined > type or using multiple IORefs). The big advantage to separating is maybe > module dependencies? But my guess is that the dependencies won't be an > issue here, due to the fact that these components are already leaning on > each other. Maybe the advantage is just in having smaller types? Maybe. > > I don't have a great sense as to what to do here, but I would want a clear > reason that e.g. the TcRn monad would have two IORefs, while other monads > will work with GhcMessage (instead of a whole bunch of IORefs). > > Richard > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.dinapoli at gmail.com Wed Mar 31 06:05:23 2021 From: alfredo.dinapoli at gmail.com (Alfredo Di Napoli) Date: Wed, 31 Mar 2021 08:05:23 +0200 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors?
In-Reply-To: References: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> Message-ID: Morning all, *Richard*: sorry! Unfortunately MR !4798 is the cornerstone of this refactoring work but it's also gargantuan. Let's discuss a plan to attack it, but fundamentally there is a critical mass of changes that needs to happen atomically or it wouldn't make much sense, and alas this doesn't play in our favour when it comes to MR size and ease of review. However, to quickly reply to your remark: currently (for the sake of the "minimum-viable-product") I am trying to stabilise the external interfaces, by which I mean giving functions their final type signature while I do what's easiest to make things typecheck. In this phase what I think is the easiest is to wrap the majority of diagnostics into the `xxUnknownxx` constructor, and change them gradually later. A fair warning, though: you say "I would think that a DsMessage would later be wrapped in an envelope." This might be true for Ds messages (didn't actually invest any brain cycles to check that) but in general we have to turn a message into an envelope as soon as we have a chance to do so, because we need to grab the `SrcSpan` and the `DynFlags` *at the point of creation* of the diagnostics. Carrying around a message and making it bubble up at some random point won't be a good plan (even for Ds messages). Having said that, I clearly have very little knowledge about this area of GHC, so feel free to disagree :) *John*: Although it's a bit hard to predict how well this is going to evolve, my current embedding, to refresh everyone's memory, is the following: data DsMessage = DsUnknownMessage !DiagnosticMessage -- ^ Stop-gap constructor to ease the migration. | DsLiftedTcRnMessage !TcRnMessage -- ^ A diagnostic coming straight from the Typecheck-renamer.
-- More messages added in the future, of course At first I thought this was the wrong way around, due to Simon's comment, but this actually creates pleasant external interfaces. To give you a bunch of examples from MR !4798: deSugar :: HscEnv -> ModLocation -> TcGblEnv -> IO (Messages DsMessage, Maybe ModGuts) deSugarExpr :: HscEnv -> LHsExpr GhcTc -> IO (Messages DsMessage, Maybe CoreExpr) Note something interesting: the second function actually calls `runTcInteractive` inside the body, but thanks to the `DsLiftedTcRnMessage` we can still expose to the consumer an opaque `DsMessage` , which is what I would expect to see from a function called "deSugarExpr". Conversely, I would be puzzled to find those functions returning a `TcRnDsMessage`. Having said all of that, I am not advocating this design is "the best". I am sure we will iterate on it. I am just reporting that even this baseline seems to be decent from an API perspective :) On Wed, 31 Mar 2021 at 05:45, John Ericson wrote: > Alfredo also replied to this pointing his embedding plan. I also prefer > that, because I really wish TH didn't smear together the phases so much. > Moreover, I hope with > > - GHC proposals https://github.com/ghc-proposals/ghc-proposals/pull/412 > / https://github.com/ghc-proposals/ghc-proposals/pull/243 > > - The parallelism work currently be planned in > https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output > > we might actually have an opportunity/extra motivation to do that. Splices > and quotes will still induce intricate inter-phase dependencies, but I hope > that could be mediated by the driver rather than just baked into each phase. > > (One final step would be the "stuck macros" technique of > https://www.youtube.com/watch?v=nUvKoG_V_U0 / > https://github.com/gelisam/klister, where TH splices would be able to > making "blocking queries" of the the compiler in ways that induce more of > these fine-grained dependencies.) 
> > Anyways, while we could also do a "RnTsDsError" and split later, I hope > Alfredo's alternative of embedding won't be too much harder and prepare us > for these exciting areas of exploration. > > John > On 3/30/21 10:14 AM, Richard Eisenberg wrote: > > > > On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli > wrote: > > I'll explore the idea of adding a second IORef. > > > Renaming/type-checking is already mutually recursive. (The renamer must > call the type-checker in order to rename -- that is, evaluate -- untyped > splices. I actually can't recall why the type-checker needs to call the > renamer.) So we will have a TcRnError. Now we see that the desugarer ends > up mixed in, too. We could proceed how Alfredo suggests, by adding a second > IORef. Or we could just make TcRnDsError (maybe renaming that). > > What's the disadvantage? Clients will have to potentially know about all > the different error forms with either approach (that is, using my combined > type or using multiple IORefs). The big advantage to separating is maybe > module dependencies? But my guess is that the dependencies won't be an > issue here, due to the fact that these components are already leaning on > each other. Maybe the advantage is just in having smaller types? Maybe. > > I don't have a great sense as to what to do here, but I would want a clear > reason that e.g. the TcRn monad would have two IORefs, while other monads > will work with GhcMessage (instead of a whole bunch of IORefs). > > Richard > > _______________________________________________ > ghc-devs mailing listghc-devs at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
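[The embedding Alfredo describes can be sketched in a self-contained form. The types below are stubs, and `liftTcRnMessages` is a simplified analogue of the helper shown earlier in the thread, using plain lists in place of GHC's `Messages` bags.]

```haskell
import Data.IORef

newtype TcRnMessage = TcRnMessage String deriving Show

-- Stub DsMessage mirroring the embedding under discussion: a desugarer
-- diagnostic either stands alone or wraps a lifted TcRn diagnostic.
data DsMessage
  = DsUnknownMessage String          -- stand-in for !DiagnosticMessage
  | DsLiftedTcRnMessage TcRnMessage  -- straight from the typecheck-renamer
  deriving Show

-- Re-wrap the accumulated TcRn diagnostics into a fresh ref of Ds
-- diagnostics, as mkDsEnvsFromTcGbl would when building the Ds envs.
liftTcRnMessages :: IORef [TcRnMessage] -> IO (IORef [DsMessage])
liftTcRnMessages ref = do
  msgs <- readIORef ref
  newIORef (map DsLiftedTcRnMessage msgs)
```

[This is why `deSugar` and `deSugarExpr` can expose an opaque `DsMessage` even though they run type-checker code internally: the lift happens at the environment boundary.]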
URL: From alfredo.dinapoli at gmail.com Wed Mar 31 07:45:32 2021 From: alfredo.dinapoli at gmail.com (Alfredo Di Napoli) Date: Wed, 31 Mar 2021 09:45:32 +0200 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: References: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> Message-ID: Follow up: Argh! I have just seen that I have a bunch of test failures related to my MR (which, needless to say, is still WIP). For example: run/T9140.run.stdout.normalised 2021-03-31 09:35:48.000000000 +0200 @@ -1,12 +1,4 @@ -:2:5: - You can't mix polymorphic and unlifted bindings: a = (# 1 #) - Probable fix: add a type signature - -:3:5: - You can't mix polymorphic and unlifted bindings: a = (# 1, 3 #) - Probable fix: add a type signature - So it looks like some diagnostic is now not being reported and, surprise surprise, this was emitted from the DsM monad. I have a suspicion that Richard was indeed right (like he always is :) ) -- when we go from a DsM to a TcM monad (See `initDsTc`) for example, I think we also need to carry into the new monad all the diagnostics we collected so far. This implies indeed a mutual dependency (as Simon pointed out, heh). So I think my cunning plan of embedding is crumbling -- I suspect we would end up with a type `TcRnDsMessage` which captures the dependency. Sorry for not seeing it sooner! On Wed, 31 Mar 2021 at 08:05, Alfredo Di Napoli wrote: > Morning all, > > *Richard*: sorry! Unfortunately MR !4798 is the cornerstone of this > refactoring work but it's also gargantuan. Let's discuss a plan to attack > it, but fundamentally there is a critical mass of changes that needs to > happen atomically or it wouldn't make much sense, and alas this doesn't > play in our favour when it comes to MR size and ease of review.
However, to > quickly reply to your remak: currently (for the sake of the > "minimum-viable-product") I am trying to stabilise the external interfaces, > by which I mean giving functions their final type signature while I do > what's easiest to make things typecheck. In this phase what I think is the > easiest is to wrap the majority of diagnostics into the `xxUnknownxx` > constructor, and change them gradually later. A fair warning, though: you > say "I would think that a DsMessage would later be wrapped in an > envelope." This might be true for Ds messages (didn't actually invest any > brain cycles to check that) but in general we have to turn a message into > an envelope as soon as we have a chance to do so, because we need to grab > the `SrcSpan` and the `DynFlags` *at the point of creation* of the > diagnostics. Carrying around a message and make it bubble up at some random > point won't be a good plan (even for Ds messages). Having said that, I > clearly have very little knowledge about this area of GHC, so feel free to > disagree :) > > *John*: Although it's a bit hard to predict how well this is going to > evolve, my current embedding, to refresh everyone's memory, is the > following: > > data DsMessage = > > DsUnknownMessage !DiagnosticMessage > > -- ^ Stop-gap constructor to ease the migration. > > | DsLiftedTcRnMessage !TcRnMessage > > -- ^ A diagnostic coming straight from the Typecheck-renamer. > > -- More messages added in the future, of course > > > At first I thought this was the wrong way around, due to Simon's comment, > but this actually creates pleasant external interfaces. 
To give you a bunch > of examples from MR !4798: > > > deSugar :: HscEnv -> ModLocation -> TcGblEnv -> IO (Messages DsMessage, > Maybe ModGuts) > deSugarExpr :: HscEnv -> LHsExpr GhcTc -> IO (Messages DsMessage, Maybe > CoreExpr) > > Note something interesting: the second function actually calls > `runTcInteractive` inside the body, but thanks to the `DsLiftedTcRnMessage` > we can still expose to the consumer an opaque `DsMessage` , which is what I > would expect to see from a function called "deSugarExpr". Conversely, I > would be puzzled to find those functions returning a `TcRnDsMessage`. > > > Having said all of that, I am not advocating this design is "the best". I > am sure we will iterate on it. I am just reporting that even this baseline > seems to be decent from an API perspective :) > > > On Wed, 31 Mar 2021 at 05:45, John Ericson > wrote: > >> Alfredo also replied to this pointing his embedding plan. I also prefer >> that, because I really wish TH didn't smear together the phases so much. >> Moreover, I hope with >> >> - GHC proposals https://github.com/ghc-proposals/ghc-proposals/pull/412 >> / https://github.com/ghc-proposals/ghc-proposals/pull/243 >> >> - The parallelism work currently be planned in >> https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output >> >> we might actually have an opportunity/extra motivation to do that. >> Splices and quotes will still induce intricate inter-phase dependencies, >> but I hope that could be mediated by the driver rather than just baked into >> each phase. >> >> (One final step would be the "stuck macros" technique of >> https://www.youtube.com/watch?v=nUvKoG_V_U0 / >> https://github.com/gelisam/klister, where TH splices would be able to >> making "blocking queries" of the the compiler in ways that induce more of >> these fine-grained dependencies.) 
>> >> Anyways, while we could also do a "RnTsDsError" and split later, I hope >> Alfredo's alternative of embedding won't be too much harder and prepare us >> for these exciting areas of exploration. >> >> John >> On 3/30/21 10:14 AM, Richard Eisenberg wrote: >> >> >> >> On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli < >> alfredo.dinapoli at gmail.com> wrote: >> >> I'll explore the idea of adding a second IORef. >> >> >> Renaming/type-checking is already mutually recursive. (The renamer must >> call the type-checker in order to rename -- that is, evaluate -- untyped >> splices. I actually can't recall why the type-checker needs to call the >> renamer.) So we will have a TcRnError. Now we see that the desugarer ends >> up mixed in, too. We could proceed how Alfredo suggests, by adding a second >> IORef. Or we could just make TcRnDsError (maybe renaming that). >> >> What's the disadvantage? Clients will have to potentially know about all >> the different error forms with either approach (that is, using my combined >> type or using multiple IORefs). The big advantage to separating is maybe >> module dependencies? But my guess is that the dependencies won't be an >> issue here, due to the fact that these components are already leaning on >> each other. Maybe the advantage is just in having smaller types? Maybe. >> >> I don't have a great sense as to what to do here, but I would want a >> clear reason that e.g. the TcRn monad would have two IORefs, while other >> monads will work with GhcMessage (instead of a whole bunch of IORefs). >> >> Richard >> >> _______________________________________________ >> ghc-devs mailing listghc-devs at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From john.ericson at obsidian.systems Wed Mar 31 14:36:43 2021 From: john.ericson at obsidian.systems (John Ericson) Date: Wed, 31 Mar 2021 10:36:43 -0400 Subject: Why TcLclEnv and DsGblEnv need to store the same IORef for errors? In-Reply-To: References: <010f0178837c2155-3d83b309-0bf4-4321-90a1-694a1936f811-000000@us-east-2.amazonses.com> Message-ID: <55053410-6363-607a-10fa-3bd170a7f006@obsidian.systems> I might still be tempted to do: data DsMessage =     ...   | DsLiftedTcRnMessage !TcRnMessage   -- ^ A diagnostic coming straight from the Typecheck-renamer. data TcRnMessage =     ...   | TcRnLiftedDsMessage !DsMessage   -- ^ A diagnostic coming straight from the Desugarer. tying them together with hs-boot. Yes, that means one can do some silly `TcRnLiftedDsMessage . DsLiftedTcRnMessage . TcRnLiftedDsMessage ...`, but that could even show up in a render as "while desugaring a splice during type checking, while typechecking during desugaring, ..." so arguably the information in the wrapping isn't purely superfluous. I think this would pose no practical problem today, while still "soft enforcing" the abstraction boundaries we want. On 3/31/21 3:45 AM, Alfredo Di Napoli wrote: > Follow up: > > Argh! I have just seen that I have a bunch of test failures related to > my MR (which, needless to say, is still WIP). > > For example: > > run/T9140.run.stdout.normalised 2021-03-31 09:35:48.000000000 +0200 > @@ -1,12 +1,4 @@ > -:2:5: > -    You can't mix polymorphic and unlifted bindings: a = (# 1 #) > -    Probable fix: add a type signature > - > -:3:5: > -    You can't mix polymorphic and unlifted bindings: a = (# 1, 3 #) > -    Probable fix: add a type signature > - > > So it looks like some diagnostic is now not being reported and, > surprise surprise, this was emitted from the DsM monad.
> > I suspect that indeed Richard was right (like he always is :) > ) -- when we go from a DsM to a TcM monad (see `initDsTc`), for > example, I think we also need to carry into the new monad all the > diagnostics we collected so far. > > This indeed implies a mutual dependency (as Simon pointed out, heh). > > > So I think my cunning plan of embedding is crumbling -- I suspect we > would end up with a type `TcRnDsMessage` which captures the dependency. > > Sorry for not seeing it sooner! > > > > > > > > > On Wed, 31 Mar 2021 at 08:05, Alfredo Di Napoli > > wrote: > > Morning all, > > *Richard*: sorry! Unfortunately MR !4798 is the cornerstone of > this refactoring work but it's also gargantuan. Let's discuss a > plan to attack it, but fundamentally there is a critical mass of > changes that needs to happen atomically or it wouldn't make much > sense, and alas this doesn't play in our favour when it comes to > MR size and ease of review. However, to quickly reply to your > remark: currently (for the sake of the "minimum-viable-product") I > am trying to stabilise the external interfaces, by which I mean > giving functions their final type signature while I do what's > easiest to make things typecheck. In this phase what I think is > the easiest is to wrap the majority of diagnostics into the > `xxUnknownxx` constructor, and change them gradually later. A fair > warning, though: you say "I would think that a DsMessage would > later be wrapped in an envelope." This might be true for Ds > messages (didn't actually invest any brain cycles to check that) > but in general we have to turn a message into an envelope as soon > as we have a chance to do so, because we need to grab the > `SrcSpan` and the `DynFlags` *at the point of creation* of the > diagnostics. Carrying around a message and making it bubble up at > some random point won't be a good plan (even for Ds messages).
> Having said that, I clearly have very little knowledge about this > area of GHC, so feel free to disagree :) > > *John*: Although it's a bit hard to predict how well this is going > to evolve, my current embedding, to refresh everyone's memory, is > the following: > > data DsMessage = > >   DsUnknownMessage !DiagnosticMessage > > -- ^ Stop-gap constructor to ease the migration. > > | DsLiftedTcRnMessage !TcRnMessage > > -- ^ A diagnostic coming straight from the Typecheck-renamer. > > -- More messages added in the future, of course > > > At first I thought this was the wrong way around, due to Simon's > comment, but this actually creates pleasant external interfaces. > To give you a bunch of examples from MR !4798: > > > deSugar :: HscEnv -> ModLocation -> TcGblEnv -> IO (Messages > DsMessage, Maybe ModGuts) > > deSugarExpr :: HscEnv -> LHsExpr GhcTc -> IO (Messages DsMessage, > Maybe CoreExpr) > > Note something interesting: the second function actually calls > `runTcInteractive` inside the body, but thanks to the > `DsLiftedTcRnMessage` we can still expose to the consumer an > opaque `DsMessage`, which is what I would expect to see from a > function called "deSugarExpr". Conversely, I would be puzzled to > find those functions returning a `TcRnDsMessage`. > > > Having said all of that, I am not advocating that this design is "the > best". I am sure we will iterate on it. I am just reporting that > even this baseline seems to be decent from an API perspective :) > > > On Wed, 31 Mar 2021 at 05:45, John Ericson > wrote: > > Alfredo also replied to this pointing to his embedding plan. I > also prefer that, because I really wish TH didn't smear > together the phases so much.
Moreover, I hope with > >  - GHC proposals > https://github.com/ghc-proposals/ghc-proposals/pull/412 > / > https://github.com/ghc-proposals/ghc-proposals/pull/243 > > >  - The parallelism work currently being planned in > https://gitlab.haskell.org/ghc/ghc/-/wikis/Plan-for-increased-parallelism-and-more-detailed-intermediate-output > > > > we might actually have an opportunity/extra motivation to do > that. Splices and quotes will still induce intricate > inter-phase dependencies, but I hope that could be mediated by > the driver rather than just baked into each phase. > > (One final step would be the "stuck macros" technique of > https://www.youtube.com/watch?v=nUvKoG_V_U0 > / > https://github.com/gelisam/klister > , where TH splices would > be able to make "blocking queries" of the compiler in > ways that induce more of these fine-grained dependencies.) > > Anyways, while we could also do a "TcRnDsError" and split > later, I hope Alfredo's alternative of embedding won't be too > much harder and will prepare us for these exciting areas of > exploration. > > John > > On 3/30/21 10:14 AM, Richard Eisenberg wrote: >> >> >>> On Mar 30, 2021, at 4:57 AM, Alfredo Di Napoli >>> >> > wrote: >>> >>> I'll explore the idea of adding a second IORef. >> >> Renaming/type-checking is already mutually recursive. (The >> renamer must call the type-checker in order to rename -- that >> is, evaluate -- untyped splices. I actually can't recall why >> the type-checker needs to call the renamer.) So we will have >> a TcRnError. Now we see that the desugarer ends up mixed in, >> too. We could proceed how Alfredo suggests, by adding a >> second IORef. Or we could just make TcRnDsError (maybe >> renaming that). >> >> What's the disadvantage? Clients will have to potentially >> know about all the different error forms with either approach >> (that is, using my combined type or using multiple IORefs). >> The big advantage to separating is maybe module dependencies?
>> But my guess is that the dependencies won't be an issue here, >> due to the fact that these components are already leaning on >> each other. Maybe the advantage is just in having smaller >> types? Maybe. >> >> I don't have a great sense as to what to do here, but I would >> want a clear reason that e.g. the TcRn monad would have two >> IORefs, while other monads will work with GhcMessage (instead >> of a whole bunch of IORefs). >> >> Richard >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
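The mutually recursive design discussed in this thread can be sketched concretely. Below is a minimal, self-contained sketch: the `String` payloads are hypothetical stand-ins for GHC's real diagnostic constructors, and the two types are collapsed into one module so the sketch compiles standalone (in GHC they would live in separate modules tied together with an hs-boot file, as John suggests).

```haskell
{-# LANGUAGE LambdaCase #-}

-- Each message type can embed a diagnostic from the other phase.
-- In GHC the import cycle between the two defining modules would be
-- broken with an hs-boot file; one module is used here for brevity.
data TcRnMessage
  = TcRnUnknownMessage String            -- hypothetical stand-in payload
  | TcRnLiftedDsMessage DsMessage        -- a diagnostic coming straight from the desugarer

data DsMessage
  = DsUnknownMessage String              -- hypothetical stand-in payload
  | DsLiftedTcRnMessage TcRnMessage      -- a diagnostic coming straight from the typechecker/renamer

-- Rendering shows why the wrapping is not purely superfluous: each
-- Lifted constructor records a phase transition in the final message.
renderTcRn :: TcRnMessage -> String
renderTcRn = \case
  TcRnUnknownMessage s  -> s
  TcRnLiftedDsMessage m -> "while desugaring during type checking: " ++ renderDs m

renderDs :: DsMessage -> String
renderDs = \case
  DsUnknownMessage s    -> s
  DsLiftedTcRnMessage m -> "while type checking during desugaring: " ++ renderTcRn m

main :: IO ()
main = putStrLn (renderDs (DsLiftedTcRnMessage (TcRnUnknownMessage "example diagnostic")))
```

Nothing in this sketch rules out the silly `TcRnLiftedDsMessage . DsLiftedTcRnMessage . ...` chains mentioned above, but as noted in the thread, each layer still renders as meaningful phase-transition context.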