From matthewtpickering at gmail.com Fri Feb 1 08:43:14 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 1 Feb 2019 08:43:14 +0000 Subject: No email from gitlab Message-ID: Hi all, I stopped getting emails from gitlab about 12 hours ago. Anyone know what is going on? Cheers, Matt From rsx at bluewin.ch Fri Feb 1 10:27:43 2019 From: rsx at bluewin.ch (Roland Senn) Date: Fri, 01 Feb 2019 11:27:43 +0100 Subject: How to teach Hadrian not to build ghctags and haddock Message-ID: <1549016863.4402.4.camel@bluewin.ch> I switched to Hadrian. Normally I hack somewhere in the compiler. So I use "hadrian/build.sh --flavour=devel2 --freeze1" to quickly rebuild GHC after some little changes to the code. It normally builds - library 'ghc' (Stage1, way v) - program 'ghctags' (Stage1) - program 'ghc-bin' (Stage1) - program 'haddock' (Stage1) When hacking GHC, I never use ghctags or haddock. Building ghctags is very fast, 1-2 seconds. However building haddock takes 15 to 20 seconds. Is there a way to teach hadrian not to build haddock and ghctags? Many thanks! Roland From alp at well-typed.com Fri Feb 1 11:05:56 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Fri, 1 Feb 2019 12:05:56 +0100 Subject: How to teach Hadrian not to build ghctags and haddock In-Reply-To: <1549016863.4402.4.camel@bluewin.ch> References: <1549016863.4402.4.camel@bluewin.ch> Message-ID: <7545ab0a-278d-e6a7-c70c-d80f63f1e063@well-typed.com> One way would be to simply ask to build just what you need, e.g: $ hadrian/build.sh --flavour=devel2 --freeze1 stage2:exe:ghc-bin See https://gitlab.haskell.org/ghc/ghc/blob/master/hadrian/doc/make.md for more examples of that 'simple target' syntax. When no target is specified, hadrian will go ahead and build "everything". On 01/02/2019 11:27, Roland Senn wrote: > I switched to Hadrian. Normally I hack somewhere in the compiler. So I > use "hadrian/build.sh --flavour=devel2 --freeze1" to quickly rebuild > GHC after some little changes to the code. > > It normally builds > - library 'ghc' (Stage1, way v) > - program 'ghctags' (Stage1) > - program 'ghc-bin' (Stage1) > - program 'haddock' (Stage1) > > When hacking GHC, I never use ghctags or haddock. Building ghctags is > very fast, 1-2 seconds. However building haddock takes 15 to 20 > seconds. > > Is there a way to teach hadrian not to build haddock and ghctags? > > Many thanks! > Roland > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England From ben at well-typed.com Fri Feb 1 14:52:20 2019 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Feb 2019 09:52:20 -0500 Subject: No email from gitlab In-Reply-To: References: Message-ID: <26C09727-F984-4671-928B-45EF805E26FD@well-typed.com> Looks like the upgrade broke the nix hack used to fix email. The hack has been fixed but i need to find a better solution in the long run. Cheers, - Ben On February 1, 2019 3:43:14 AM EST, Matthew Pickering wrote: >Hi all, > >I stopped getting emails from gitlab about 12 hours ago. Anyone know >what is going on? > >Cheers, > >Matt >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. 
Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Fri Feb 1 19:22:52 2019 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Feb 2019 14:22:52 -0500 Subject: How to teach Hadrian not to build ghctags and haddock In-Reply-To: <1549016863.4402.4.camel@bluewin.ch> References: <1549016863.4402.4.camel@bluewin.ch> Message-ID: <87imy3b9fd.fsf@smart-cactus.org> Roland Senn writes: > I switched to Hadrian. Normally I hack somewhere in the compiler. So I > use "hadrian/build.sh --flavour=devel2 --freeze1" to quickly rebuild > GHC after some little changes to the code. > > It normally builds > - library 'ghc' (Stage1, way v) > - program 'ghctags' (Stage1) > - program 'ghc-bin' (Stage1) > - program 'haddock' (Stage1) > In the case of ghctags I wonder if we shouldn't just remove it. I don't know anyone who actually still uses it and there are better options at this point (e.g. I personally use hasktags when working in GHC). > When hacking GHC, I never use ghctags or haddock. Building ghctags is > very fast, 1-2 seconds. However building haddock takes 15 to 20 > seconds. > Regarding haddock, I think Alp hit the nail on the head. The right thing to do is just to be more specific about what you want built. Tobias, can you make Alp's advice makes it into the new developer documentation? Also, we should likely describe a workflow for generating tags for a GHC tree. I have a script using hasktags [1] which makes this relatively painless. Cheers, - Ben [1] https://github.com/bgamari/ghc-utils/blob/master/make-ghc-tags.sh -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From chrisdone at gmail.com Sat Feb 2 14:43:37 2019 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 2 Feb 2019 14:43:37 +0000 Subject: Constructor wrappers vs workers in generated Core Message-ID: Hi all, I'm compiling Haskell modules with this simple function. I'd like to interpret the Core for practical use and also educational use. compile :: GHC.GhcMonad m => GHC.ModSummary -> m GHC.ModGuts compile modSummary = do parsedModule <- GHC.parseModule modSummary typecheckedModule <- GHC.typecheckModule parsedModule desugared <- GHC.desugarModule typecheckedModule pure (GHC.dm_core_module desugared) And then I'm taking the mg_binds from ModGuts. I want to work with the simplest, least-transformed Core as possible. One thing that's a problem for me is that e.g. the constructor GHC.Integer.Type.S# in this expression \ (ds_dnHN :: Integer) -> case ds_dnHN of _ [Occ=Dead] { S# i# -> case isTrue# (># i# 1#) of _ [Occ=Dead] { False -> (\ _ [Occ=Dead, OS=OneShot] -> $WS# 2#) void#; True -> case nextPrimeWord# (int2Word# i#) of wild_Xp { __DEFAULT -> wordToInteger wild_Xp } }; Jp# bn -> $WJp# (nextPrimeBigNat bn); Jn# _ [Occ=Dead] -> $WS# 2# } is referred to by its wrapper, "$WS#". In general, I'd prefer if it Core always constructed the worker S# directly. It would reduce the number of cases I have to handle. Additionally, what if a worker gets transformed by GHC from e.g. "Wibble !(Int,Int)" to "Wibble !Int !Int", are then case alt patterns going to scrutinize this transformed two-arg version? (As documented here https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/DataTypes#Thelifecycleofadatatype) So my question is: is it possible to disable this wrapper transformation of data constructors? 
If not it seems like I'll have no option but to handle this extra wrapper stuff, due to the case analyses. That wouldn't be the end of the world, it'd just delay me by another week or so. For strict fields in constructors I was planning on simply forcing the fields in my interpreter when a constructor becomes saturated (and thereby enabling some nice inspection capabilities), rather than generating extra wrapper code that would force the arguments. Cheers From matthewtpickering at gmail.com Sat Feb 2 14:50:17 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 2 Feb 2019 14:50:17 +0000 Subject: Constructor wrappers vs workers in generated Core In-Reply-To: References: Message-ID: There is no way to turn off wrappers and I don't think it would be possible to implement easily if at all. However, they will all probably be inlined after the optimiser runs but it seems that you don't want to run the optimiser at all on the generated core? Perhaps it would be possible to set the inliner parameters so that only wrappers ended up being inlined and nothing else and then call the relevant function from the simplifier on your bindings to get rid of them again. Cheers, Matt On Sat, Feb 2, 2019 at 2:43 PM Christopher Done wrote: > > Hi all, > > I'm compiling Haskell modules with this simple function. I'd like to > interpret the Core for practical use and also educational use. > > compile :: > GHC.GhcMonad m > => GHC.ModSummary > -> m GHC.ModGuts > compile modSummary = do > parsedModule <- GHC.parseModule modSummary > typecheckedModule <- GHC.typecheckModule parsedModule > desugared <- GHC.desugarModule typecheckedModule > pure (GHC.dm_core_module desugared) > > And then I'm taking the mg_binds from ModGuts. I want to work with the > simplest, least-transformed Core as possible. One thing that's a > problem for me is that e.g. the constructor > > GHC.Integer.Type.S# > > in this expression > > \ (ds_dnHN :: Integer) -> > case ds_dnHN of _ [Occ=Dead] { > S# i# -> > case isTrue# (># i# 1#) of _ [Occ=Dead] { > False -> (\ _ [Occ=Dead, OS=OneShot] -> $WS# 2#) void#; > True -> > case nextPrimeWord# (int2Word# i#) of wild_Xp { __DEFAULT -> > wordToInteger wild_Xp > } > }; > Jp# bn -> $WJp# (nextPrimeBigNat bn); > Jn# _ [Occ=Dead] -> $WS# 2# > } > > is referred to by its wrapper, "$WS#". In general, I'd prefer if it > Core always constructed the worker S# directly. It would reduce the > number of cases I have to handle. > > Additionally, what if a worker gets transformed by GHC from e.g. > "Wibble !(Int,Int)" to "Wibble !Int !Int", are then case alt patterns > going to scrutinize this transformed two-arg version? (As documented > here https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/DataTypes#Thelifecycleofadatatype) > > So my question is: is it possible to disable this wrapper > transformation of data constructors? > > If not it seems like I'll have no option but to handle this extra > wrapper stuff, due to the case analyses. That wouldn't be the end of > the world, it'd just delay me by another week or so. > > For strict fields in constructors I was planning on simply forcing the > fields in my interpreter when a constructor becomes saturated (and > thereby enabling some nice inspection capabilities), rather than > generating extra wrapper code that would force the arguments. 
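(To make the pipeline in Chris's message concrete: once `compile` returns a ModGuts, the top-level Core bindings in mg_binds can be pretty-printed directly. The sketch below is not from the thread; the helper name is made up and GHC-8.6-era modules (HscTypes, Outputable) are assumed.)

    -- Assumed imports:
    --   import qualified GHC
    --   import Control.Monad.IO.Class (liftIO)
    --   import HscTypes   (ModGuts (..))
    --   import Outputable (ppr, showSDoc)
    printCoreBinds :: GHC.ModGuts -> GHC.Ghc ()
    printCoreBinds guts = do
      dflags <- GHC.getSessionDynFlags
      -- mg_binds :: [CoreBind]; the bindings still contain the $W wrappers
      -- because no simplifier pass has run yet.
      liftIO (mapM_ (putStrLn . showSDoc dflags . ppr) (mg_binds guts))

Dumping the bindings this way, before any optimisation, shows exactly the `$WS#`-style wrapper occurrences discussed above.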
> > Cheers > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chrisdone at gmail.com Sat Feb 2 15:07:07 2019 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 2 Feb 2019 15:07:07 +0000 Subject: Constructor wrappers vs workers in generated Core In-Reply-To: References: Message-ID: On Sat, 2 Feb 2019 at 14:50, Matthew Pickering wrote: > There is no way to turn off wrappers and I don't think it would be > possible to implement easily if at all. Fair enough. > However, they will all probably be inlined after the optimiser runs > but it seems that you don't want to run the optimiser at all on the > generated core? Yeah, I'm trying to avoid as much instability in the output shape as possible, and for educational purposes, optimizations make fairly readable code unreadable. Wait. Can I rely on case alt patterns having the same arity as the original user-defined data type before optimization passes are run? If the answer to that is yes, then I could just replace all wrapper calls with worker calls, which is an easy enough transformation. As a precaution, I could add a check on all case alt patterns that the arity matches the worker arity and barf if not. Thanks for your help! Chris From matthewtpickering at gmail.com Sat Feb 2 15:09:33 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 2 Feb 2019 15:09:33 +0000 Subject: Scaling back CI (for now)? Message-ID: Hi all, Everyone has probably noticed that getting anything merged is a real effort at the moment. The main problem is that CI takes in the region of 5-7 hours and then spuriously fails at the end. After 5-7 hours you have to rebase and run CI again and so on. Therefore I propose to run just these four jobs on every MR: validate-x86_64-linux-deb9 validate-x86_64-linux-deb8-hadrian validate-x86_64-windows validate-x86_64-darwin The reasoning is as follows: validate-x86_64-linux-deb9 validate-x86_64-linux-deb8-hadrian These run first and are reliable and finish within an hour. Then we have lots of less reliable, lower priority jobs. Two windows jobs which take forever to run. validate-x86_64-windows validate-x86_64-windows-hadrian One darwin job validate-x86_64-darwin Many more linux jobs validate-x86_64-linux-deb9-unreg validate-x86_64-linux-deb9-integer-simple validate-x86_64-linux-fedora27 validate-x86_64-linux-deb9-llvm validate-x86_64-linux-deb8 validate-i386-linux-deb9 validate-aarch64-linux-deb9 So I don't argue that these are important to test but at the moment they produce too much friction on every commit through a combination of lack of resources and taking too long. Further to this, we really don't need to test fedora27, deb9 and deb8 for every build. When was the last time we broke one of these platforms but not the other, it's rare! So the concrete proposal is to slim back the per commit validation to four jobs. validate-x86_64-linux-deb9 validate-x86_64-linux-deb8-hadrian validate-x86_64-windows validate-x86_64-darwin which will test on the three major platforms. All the other flavours should be run once the commit reaches master. Thoughts? Cheers, Matt From sgraf1337 at gmail.com Sat Feb 2 19:57:51 2019 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sat, 2 Feb 2019 20:57:51 +0100 Subject: Scaling back CI (for now)? In-Reply-To: References: Message-ID: Hi, Am Sa., 2. Feb. 
2019 um 16:09 Uhr schrieb Matthew Pickering < matthewtpickering at gmail.com>: > > All the other flavours should be run once the commit reaches master. > > Thoughts? > That's even better than my idea of only running them as nightlies. In favor! > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sun Feb 3 13:56:40 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 3 Feb 2019 13:56:40 +0000 Subject: Scaling back CI (for now)? In-Reply-To: References: Message-ID: It has been established today that Marge is failing to run in batch mode for some reason which means it takes at least as long as CI takes to complete for each commit to be merged. The rate is about 4 commits/day with the current configuration. On Sat, Feb 2, 2019 at 7:57 PM Sebastian Graf wrote: > > Hi, > > Am Sa., 2. Feb. 2019 um 16:09 Uhr schrieb Matthew Pickering : >> >> >> All the other flavours should be run once the commit reaches master. >> >> Thoughts? > > > That's even better than my idea of only running them as nightlies. In favor! > >> >> Cheers, >> >> Matt >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From tdammers at gmail.com Mon Feb 4 09:24:18 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Mon, 4 Feb 2019 10:24:18 +0100 Subject: How to teach Hadrian not to build ghctags and haddock In-Reply-To: <87imy3b9fd.fsf@smart-cactus.org> References: <1549016863.4402.4.camel@bluewin.ch> <87imy3b9fd.fsf@smart-cactus.org> Message-ID: <20190204092417.bj4bwxenkhinwo3l@nibbler> On Fri, Feb 01, 2019 at 02:22:52PM -0500, Ben Gamari wrote: > In the case of ghctags I wonder if we shouldn't just remove it. I don't > know anyone who actually still uses it and there are better options at this > point (e.g. I personally use hasktags when working in GHC). Agree; even if there are people still using ghctags, installing and configuring it separately is the way to go, and I don't see a good reason to keep it in the GHC distribution - we don't actually need it to build anything else, do we? > Regarding haddock, I think Alp hit the nail on the head. The right thing > to do is just to be more specific about what you want built. > > Tobias, can you make Alp's advice makes it into the new developer > documentation? The documentation already advises newcomers to only rebuild GHC itself; I've updated it to use the symbolic stage2:exe:ghc-bin notation though instead of the actual filename (_build/stage1/bin/ghc), assuming that the former will work even when building in a different directory. I've also added a link to the Hadrian README, which should provide enough hints to the rest of the Hadrian documentation. > Also, we should likely describe a workflow for generating tags for a GHC > tree. I have a script using hasktags [1] which makes this relatively > painless. Maybe, but is it really that different from other source trees? The newcomers document is quite long already, so maybe we should just have a separate page for a bunch of editor support and other tooling configuration, and link to that from the newcomers' guide? 
-- Tobias Dammers - tdammers at gmail.com From alp at well-typed.com Mon Feb 4 09:54:28 2019 From: alp at well-typed.com (Alp Mestanogullari) Date: Mon, 4 Feb 2019 10:54:28 +0100 Subject: How to teach Hadrian not to build ghctags and haddock In-Reply-To: <20190204092417.bj4bwxenkhinwo3l@nibbler> References: <1549016863.4402.4.camel@bluewin.ch> <87imy3b9fd.fsf@smart-cactus.org> <20190204092417.bj4bwxenkhinwo3l@nibbler> Message-ID: <9d3f528f-1f31-4ee3-117e-9606f9a5c6bd@well-typed.com> Hello, On 04/02/2019 10:24, Tobias Dammers wrote: > > The documentation already advises newcomers to only rebuild GHC itself; > I've updated it to use the symbolic stage2:exe:ghc-bin notation though > instead of the actual filename (_build/stage1/bin/ghc), assuming that > the former will work even when building in a different directory. I've > also added a link to the Hadrian README, which should provide enough > hints to the rest of the Hadrian documentation. The simple notation is just a shorthand for the right path under whatever build root is used (_build by default, or the value of --build-root when it is passed), so it will indeed be mapped to the right concrete target regardless of the build root. -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England From simonpj at microsoft.com Mon Feb 4 11:26:17 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Feb 2019 11:26:17 +0000 Subject: Constructor wrappers vs workers in generated Core In-Reply-To: References: Message-ID: | is referred to by its wrapper, "$WS#". In general, I'd prefer if it Core | always constructed the worker S# directly. It would reduce the number of | cases I have to handle. What do you mean by "constructed the worker directly"? How does that differ from "call the wrapper, and then (in a simplifier pass) inline the wrapper? In general, the wrapper of a data constructor can do quite a bit of work: evaluating arguments, unboxing them, casting newtypes, reducing type families. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Christopher | Done | Sent: 02 February 2019 14:44 | To: ghc-devs at haskell.org | Subject: Constructor wrappers vs workers in generated Core | | Hi all, | | I'm compiling Haskell modules with this simple function. I'd like to | interpret the Core for practical use and also educational use. | | compile :: | GHC.GhcMonad m | => GHC.ModSummary | -> m GHC.ModGuts | compile modSummary = do | parsedModule <- GHC.parseModule modSummary | typecheckedModule <- GHC.typecheckModule parsedModule | desugared <- GHC.desugarModule typecheckedModule | pure (GHC.dm_core_module desugared) | | And then I'm taking the mg_binds from ModGuts. I want to work with the | simplest, least-transformed Core as possible. One thing that's a problem | for me is that e.g. the constructor | | GHC.Integer.Type.S# | | in this expression | | \ (ds_dnHN :: Integer) -> | case ds_dnHN of _ [Occ=Dead] { | S# i# -> | case isTrue# (># i# 1#) of _ [Occ=Dead] { | False -> (\ _ [Occ=Dead, OS=OneShot] -> $WS# 2#) void#; | True -> | case nextPrimeWord# (int2Word# i#) of wild_Xp { __DEFAULT -> | wordToInteger wild_Xp | } | }; | Jp# bn -> $WJp# (nextPrimeBigNat bn); | Jn# _ [Occ=Dead] -> $WS# 2# | } | | is referred to by its wrapper, "$WS#". In general, I'd prefer if it Core | always constructed the worker S# directly. It would reduce the number of | cases I have to handle. 
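(A small illustration of Simon's point above about how much work a constructor wrapper can do. The worker/wrapper split in the comments is an approximation for the sake of the example, not GHC's verbatim output.)

    -- A type whose constructor wrapper has real work to do:
    data T = MkT {-# UNPACK #-} !Int Bool

    -- Roughly, GHC splits MkT into:
    --   worker   MkT   :: Int# -> Bool -> T        -- what actually allocates
    --   wrapper  $WMkT :: Int  -> Bool -> T
    --   $WMkT x y = case x of I# x# -> MkT x# y    -- forces and unboxes x
    --
    -- so a source occurrence  MkT (f 3) b  goes through the wrapper, which
    -- evaluates (f 3) and unboxes it before the worker builds the heap object.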
|
| Additionally, what if a worker gets transformed by GHC from e.g.
| "Wibble !(Int,Int)" to "Wibble !Int !Int", are then case alt patterns
| going to scrutinize this transformed two-arg version? (As documented
| here https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/DataTypes#Thelifecycleofadatatype)
|
| So my question is: is it possible to disable this wrapper transformation
| of data constructors?
|
| If not it seems like I'll have no option but to handle this extra wrapper
| stuff, due to the case analyses. That wouldn't be the end of the world,
| it'd just delay me by another week or so.
|
| For strict fields in constructors I was planning on simply forcing the
| fields in my interpreter when a constructor becomes saturated (and
| thereby enabling some nice inspection capabilities), rather than
| generating extra wrapper code that would force the arguments.
|
| Cheers
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From omeragacan at gmail.com Mon Feb 4 13:23:19 2019
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Mon, 4 Feb 2019 16:23:19 +0300
Subject: Min closure payload size?
Message-ID:

Hi,

I was trying to understand why some info tables that have no ptrs and nptrs like
GCD_CAF end up with 1 nptrs in the generated info table and found this code in
Constants.h:

/* -----------------------------------------------------------------------------
   Minimum closure sizes

   This is the minimum number of words in the payload of a
   heap-allocated closure, so that the closure has enough room to be
   overwritten with a forwarding pointer during garbage collection.
   -------------------------------------------------------------------------- */

#define MIN_PAYLOAD_SIZE 1

We use this in a few places in the compiler and add at least one word space in
the payload. However the comment is actually wrong, forwarding pointers are made
by tagging the info ptr field so we don't need a word in the payload for
forwarding pointers. I tried updating this as 0 but that caused a lot of test
failures (mostly in GHCi). I'm wondering if I'm missing anything or is it just
some code assuming min payload size 1 without using this macro.

Any ideas?

Ömer

From ben at well-typed.com Mon Feb 4 16:06:34 2019
From: ben at well-typed.com (Ben Gamari)
Date: Mon, 04 Feb 2019 11:06:34 -0500
Subject: Request for comments on dry-run Trac -> GitLab migration
Message-ID: <87a7jbbksb.fsf@smart-cactus.org>

TL;DR. Have a look at this [2] test import of GHC's Trac tickets. Tell us
what issues you find.


Hello everyone,

As you likely know, we are currently in the process of consolidating
GHC's infrastructure on GitLab [1]. The last step of this process is to
migrate our tickets and wiki from Trac.

Towards this end I am happy to announce the availability of a test
GitLab instance [2] for the community's review.
This is a clone of gitlab.haskell.org but with the addition of tickets [3] and GHC wiki [4] content imported from a Trac dump from earlier this month. There are a few known issues: * There are currently around 50 tickets missing from the import; we are working on identifying where they escaped to * The revision history of the Wiki is currently squashed due to performance issues [6] with GitLab's wiki implementation We intend to resolve both of these problems by the time of the final migration. If you find a ticket on the staging instance with problems that are not in the above list please do add the `import problems` label to it so we can have a look. There are a few aesthetic questions that could use community input: * Should the "Trac Metadata" boxes be always visible? * How do we make the Wiki index [7] more usable? Imposing more hierarchy to the page names would likely help but I'm still a bit worried that it will be hard to browse. Please do take a few minutes to peruse the test import and check for mistakes or potential usability issues. GHC's Trac tickets are one of the crown-jewels of the project; we want to make sure we get this migration right as will be living with the result for a long time to come. Cheers, - Ben [1] https://gitlab.haskell.org/ [2] https://gitlab.staging.haskell.org/ghc/ghc/ [3] https://gitlab.staging.haskell.org/ghc/ghc/issues [4] https://gitlab.staging.haskell.org/ghc/ghc/wiki [6] https://gitlab.com/gitlab-org/gitlab-ce/issues/57179 [7] https://ghc.haskell.org/trac/ghc/ticket/16212 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Mon Feb 4 16:59:36 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Feb 2019 16:59:36 +0000 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: <87a7jbbksb.fsf@smart-cactus.org> References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: Could we arrange that searching for the ticket number succeeds? Eg searching for 12088 fails, and 16013. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 04 February 2019 16:07 | To: GHC developers ; glasgow-haskell-users- | request at haskell.org | Subject: Request for comments on dry-run Trac -> GitLab migration | | TL;DR. Have a look at this [2] test import of GHC's Trac tickets. Tell us | what issues you find. | | | Hello everyone, | | As you likely know, we are currently in the process of consolidating | GHC's infrastructure on GitLab [1]. The last step of this process is to | migrate our tickets and wiki from Trac. | | Towards this end I am happy to announce the availability of a test GitLab | instance [2] for the community's review. This is a clone of | gitlab.haskell.org but with the addition of tickets [3] and GHC wiki [4] | content imported from a Trac dump from earlier this month. | | There are a few known issues: | | * There are currently around 50 tickets missing from the import; we are | working on identifying where they escaped to | | * The revision history of the Wiki is currently squashed due to | performance issues [6] with GitLab's wiki implementation | | We intend to resolve both of these problems by the time of the final | migration. If you find a ticket on the staging instance with problems | that are not in the above list please do add the `import problems` label | to it so we can have a look. 
|
| There are a few aesthetic questions that could use community input:
|
| * Should the "Trac Metadata" boxes be always visible?
|
| * How do we make the Wiki index [7] more usable? Imposing more
|   hierarchy to the page names would likely help but I'm still a bit
|   worried that it will be hard to browse.
|
| Please do take a few minutes to peruse the test import and check for
| mistakes or potential usability issues. GHC's Trac tickets are one of the
| crown-jewels of the project; we want to make sure we get this migration
| right as will be living with the result for a long time to come.
|
| Cheers,
|
| - Ben
|
|
| [1] https://gitlab.haskell.org/
| [2] https://gitlab.staging.haskell.org/ghc/ghc/
| [3] https://gitlab.staging.haskell.org/ghc/ghc/issues
| [4] https://gitlab.staging.haskell.org/ghc/ghc/wiki
| [6] https://gitlab.com/gitlab-org/gitlab-ce/issues/57179
| [7] https://ghc.haskell.org/trac/ghc/ticket/16212

From matthewtpickering at gmail.com Mon Feb 4 17:03:20 2019
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Mon, 4 Feb 2019 17:03:20 +0000
Subject: Request for comments on dry-run Trac -> GitLab migration
In-Reply-To: <87a7jbbksb.fsf@smart-cactus.org>
References: <87a7jbbksb.fsf@smart-cactus.org>
Message-ID:

Thanks Ben, looks amazing.

I don't think the trac metadata boxes should always be visible. They
are unobtrusive now and tbh, I don't think I will be opening them up
much when looking at tickets.

Simon, I think if you start the query with a # then it "works". For
example, search for #12088 instead of 12088.

https://gitlab.com/gitlab-org/gitlab-ce/issues/13306

Cheers,

Matt

On Mon, Feb 4, 2019 at 4:06 PM Ben Gamari wrote:
>
> TL;DR. Have a look at this [2] test import of GHC's Trac tickets. Tell us
> what issues you find.
> > > Hello everyone, > > As you likely know, we are currently in the process of consolidating > GHC's infrastructure on GitLab [1]. The last step of this process is to > migrate our tickets and wiki from Trac. > > Towards this end I am happy to announce the availability of a test > GitLab instance [2] for the community's review. This is a clone of > gitlab.haskell.org but with the addition of tickets [3] and GHC wiki [4] > content imported from a Trac dump from earlier this month. > > There are a few known issues: > > * There are currently around 50 tickets missing from the import; we are > working on identifying where they escaped to > > * The revision history of the Wiki is currently squashed due to > performance issues [6] with GitLab's wiki implementation > > We intend to resolve both of these problems by the time of the final > migration. If you find a ticket on the staging instance with problems > that are not in the above list please do add the `import problems` label > to it so we can have a look. > > There are a few aesthetic questions that could use community input: > > * Should the "Trac Metadata" boxes be always visible? > > * How do we make the Wiki index [7] more usable? Imposing more > hierarchy to the page names would likely help but I'm still a bit > worried that it will be hard to browse. > > Please do take a few minutes to peruse the test import and check for > mistakes or potential usability issues. GHC's Trac tickets are one of > the crown-jewels of the project; we want to make sure we get this > migration right as will be living with the result for a long time to > come. > > Cheers, > > - Ben > > > [1] https://gitlab.haskell.org/ > [2] https://gitlab.staging.haskell.org/ghc/ghc/ > [3] https://gitlab.staging.haskell.org/ghc/ghc/issues > [4] https://gitlab.staging.haskell.org/ghc/ghc/wiki > [6] https://gitlab.com/gitlab-org/gitlab-ce/issues/57179 > [7] https://ghc.haskell.org/trac/ghc/ticket/16212 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Mon Feb 4 17:09:15 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 4 Feb 2019 17:09:15 +0000 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: There is this other issue though which explains that searching in the issues view for a ticket number does fail. https://gitlab.com/gitlab-org/gitlab-ce/issues/30974 https://gitlab.staging.haskell.org/ghc/ghc/issues?scope=all&utf8=%E2%9C%93&state=opened&search=%231234 So if you want to search for a ticket by number you have to use the global search. On Mon, Feb 4, 2019 at 5:03 PM Matthew Pickering wrote: > > Thanks Ben, looks amazing. > > I don't think the trac metadata boxes should always be visible. They > are unobtrusive now and tbh, I don't think I will be opening them up > much when looking at tickets. > > Simon, I think if you start the query with a # then it "works". For > example, search for #12088 instead of 12088. > > https://gitlab.com/gitlab-org/gitlab-ce/issues/13306 > > Cheers, > > Matt > > On Mon, Feb 4, 2019 at 4:06 PM Ben Gamari wrote: > > > > TL;DR. Have a look at this [2] test import of GHC's Trac tickets. Tell us > > what issues you find. > > > > > > Hello everyone, > > > > As you likely know, we are currently in the process of consolidating > > GHC's infrastructure on GitLab [1]. 
The last step of this process is to > > migrate our tickets and wiki from Trac. > > > > Towards this end I am happy to announce the availability of a test > > GitLab instance [2] for the community's review. This is a clone of > > gitlab.haskell.org but with the addition of tickets [3] and GHC wiki [4] > > content imported from a Trac dump from earlier this month. > > > > There are a few known issues: > > > > * There are currently around 50 tickets missing from the import; we are > > working on identifying where they escaped to > > > > * The revision history of the Wiki is currently squashed due to > > performance issues [6] with GitLab's wiki implementation > > > > We intend to resolve both of these problems by the time of the final > > migration. If you find a ticket on the staging instance with problems > > that are not in the above list please do add the `import problems` label > > to it so we can have a look. > > > > There are a few aesthetic questions that could use community input: > > > > * Should the "Trac Metadata" boxes be always visible? > > > > * How do we make the Wiki index [7] more usable? Imposing more > > hierarchy to the page names would likely help but I'm still a bit > > worried that it will be hard to browse. > > > > Please do take a few minutes to peruse the test import and check for > > mistakes or potential usability issues. GHC's Trac tickets are one of > > the crown-jewels of the project; we want to make sure we get this > > migration right as will be living with the result for a long time to > > come. > > > > Cheers, > > > > - Ben > > > > > > [1] https://gitlab.haskell.org/ > > [2] https://gitlab.staging.haskell.org/ghc/ghc/ > > [3] https://gitlab.staging.haskell.org/ghc/ghc/issues > > [4] https://gitlab.staging.haskell.org/ghc/ghc/wiki > > [6] https://gitlab.com/gitlab-org/gitlab-ce/issues/57179 > > [7] https://ghc.haskell.org/trac/ghc/ticket/16212 > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Mon Feb 4 17:56:49 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 4 Feb 2019 17:56:49 +0000 Subject: Constructor wrappers vs workers in generated Core In-Reply-To: References: Message-ID: If you want your core to look at much like the source program as possible then you could print `$WFoo` as just `Foo`? The existence of wrappers is a crucial part of desugaring so perhaps it's useful for users to see them in the output of your program if it's intended to be educational? Matt On Sat, Feb 2, 2019 at 3:06 PM Christopher Done wrote: > > On Sat, 2 Feb 2019 at 14:50, Matthew Pickering > wrote: > > There is no way to turn off wrappers and I don't think it would be > > possible to implement easily if at all. > > Fair enough. > > > However, they will all probably be inlined after the optimiser runs > > but it seems that you don't want to run the optimiser at all on the > > generated core? > > Yeah, I'm trying to avoid as much instability in the output shape as > possible, and for educational purposes, optimizations make fairly > readable code unreadable. > > Wait. Can I rely on case alt patterns having the same arity as the > original user-defined data type before optimization passes are run? > > If the answer to that is yes, then I could just replace all wrapper > calls with worker calls, which is an easy enough transformation. 
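(Both ideas in this thread, printing `$WFoo` as plain `Foo` and substituting workers for wrappers, come down to inspecting an Id's details. A rough sketch with a made-up helper name, assuming GHC-8.6-era module names; note that bypassing the wrapper also bypasses the forcing and unboxing it performs, which is why Chris plans to force strict fields in his interpreter instead.)

    -- Rewrite a data constructor wrapper Id to its worker Id, leaving any
    -- other Id untouched.  Assumed imports:
    --   import Id      (Id, idDetails)
    --   import IdInfo  (IdDetails (..))
    --   import DataCon (dataConWorkId)
    unwrapDataCon :: Id -> Id
    unwrapDataCon v =
      case idDetails v of
        DataConWrapId dc -> dataConWorkId dc   -- e.g. $WS# becomes S#
        _                -> v

Mapping this over every Id occurrence in mg_binds removes the wrapper calls; the strictness caveat above still applies.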
As a > precaution, I could add a check on all case alt patterns that the > arity matches the worker arity and barf if not. > > Thanks for your help! > > Chris From ryan.gl.scott at gmail.com Mon Feb 4 19:05:07 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Mon, 4 Feb 2019 14:05:07 -0500 Subject: Request for comments on dry-run Trac -> GitLab migration Message-ID: There appears to be some impedance mismatches between GitLab's formatting and Trac's formatting in certain places. For example, see the bottom of this issue [1], which has a long, hyperlinked line with the phrase: Icanproducethe`missinginstance`issuewithouthavingtorecompileGHC,whichiswhyIthinkitmightbeindependentofthisbug. Ryan S. ----- [1] https://gitlab.staging.haskell.org/ghc/ghc/issues/16211 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Feb 4 19:13:31 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 04 Feb 2019 14:13:31 -0500 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: <877eefbc4p.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Could we arrange that searching for the ticket number succeeds? > Eg searching for 12088 fails, and 16013. > Yes, this is a bug [1]. I will bring it up with David. Cheers, - Ben [1] https://gitlab.com/gitlab-org/gitlab-ce/issues/30974 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at nh2.me Tue Feb 5 00:34:03 2019 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 5 Feb 2019 01:34:03 +0100 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: <87a7jbbksb.fsf@smart-cactus.org> References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: I find that commits aren't mentioned on the corresponding issues, for example there's no equivalent of https://ghc.haskell.org/trac/ghc/ticket/13497#comment:27 on https://gitlab.staging.haskell.org/ghc/ghc/issues/13497 I vaguely remember these "commit posts" being discussed before somewhere. But that's not even what I'm after. The commit itself mentions the ticket ("This fixes #13497"). Usually when such a commit is pushed to Gitlab, it automatically creates an entry like: Niklas Hambüchen @nh2 mentioned in commit abc123456 3 months ago But this isn't the case here. Is it because the issues were imported *after* the repo commits already exist? Can it be fixed? E.g. can Gitlab be told to re-index the repo accordingly? Or could it be done by deleting `master` and re-pushing the entire history? From ben at well-typed.com Tue Feb 5 03:49:38 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 04 Feb 2019 22:49:38 -0500 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: <87munaao8l.fsf@smart-cactus.org> Niklas Hambüchen writes: > I find that commits aren't mentioned on the corresponding issues, for example there's no equivalent of > > https://ghc.haskell.org/trac/ghc/ticket/13497#comment:27 > > on > > https://gitlab.staging.haskell.org/ghc/ghc/issues/13497 > > I vaguely remember these "commit posts" being discussed before somewhere. > But that's not even what I'm after. > > The commit itself mentions the ticket ("This fixes #13497"). 
> Usually when such a commit is pushed to Gitlab, it automatically creates an entry like: > > Niklas Hambüchen @nh2 mentioned in commit abc123456 3 months ago > > But this isn't the case here. > Is it because the issues were imported *after* the repo commits already exist? > Yes, this is the cause and the import does handle this; I just (yet again) forgot to rerun this stage of the import. This should be fixed now. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at nh2.me Tue Feb 5 07:09:34 2019 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 5 Feb 2019 08:09:34 +0100 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: <87munaao8l.fsf@smart-cactus.org> References: <87a7jbbksb.fsf@smart-cactus.org> <87munaao8l.fsf@smart-cactus.org> Message-ID: On 05/02/2019 4:49 AM, Ben Gamari wrote:> Yes, this is the cause and the import does handle this; I just (yet > again) forgot to rerun this stage of the import. This should be fixed now. For me, nothing seems to have changed on https://gitlab.staging.haskell.org/ghc/ghc/issues/13497 From julian at leviston.net Tue Feb 5 11:16:57 2019 From: julian at leviston.net (Julian Leviston) Date: Tue, 5 Feb 2019 22:16:57 +1100 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: <87a7jbbksb.fsf@smart-cactus.org> References: <87a7jbbksb.fsf@smart-cactus.org> Message-ID: <26F46C4C-7C18-4CBF-81F1-87322F8803FF@leviston.net> > On 5 Feb 2019, at 3:06 am, Ben Gamari wrote: > > TL;DR. Have a look at this [2] test import of GHC's Trac tickets. Tell us > what issues you find. > First up, it’s utterly amazing to me that this is importable and with all links transferred and syntax highlighting and whatnot! so great! However, attachments don’t seem to be present (At least, where I’m looking: at https://gitlab.staging.haskell.org/ghc/ghc/issues/3372 ) Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdammers at gmail.com Tue Feb 5 11:22:29 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Tue, 5 Feb 2019 12:22:29 +0100 Subject: Request for comments on dry-run Trac -> GitLab migration In-Reply-To: References: Message-ID: <20190205112228.dn24654lticomjfj@nibbler> Noted, thanks for reporting. On Mon, Feb 04, 2019 at 02:05:07PM -0500, Ryan Scott wrote: > There appears to be some impedance mismatches between GitLab's formatting > and Trac's formatting in certain places. For example, see the bottom of > this issue [1], which has a long, hyperlinked line with the phrase: > > > Icanproducethe`missinginstance`issuewithouthavingtorecompileGHC,whichiswhyIthinkitmightbeindependentofthisbug. > > Ryan S. > ----- > [1] https://gitlab.staging.haskell.org/ghc/ghc/issues/16211 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Tobias Dammers - tdammers at gmail.com From omeragacan at gmail.com Tue Feb 5 13:37:48 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 5 Feb 2019 16:37:48 +0300 Subject: Min closure payload size? 
In-Reply-To: References: Message-ID: I just came across a closure that is according to this code is not valid: >>> print *get_itbl(0x7b2870) $8 = { layout = { payload = { ptrs = 0, nptrs = 0 }, bitmap = 0, large_bitmap_offset = 0, __pad_large_bitmap_offset = 0, selector_offset = 0 }, type = 21, srt = 3856568, code = 0x404ef0 "H\215E\360L9\370rDH\203\354\bL\211\350H\211\336H\211\307\061\300\350|\034\062" } This is a THUNK_STATIC with 0 ptrs and nptrs in the payload. Ömer Ömer Sinan Ağacan , 4 Şub 2019 Pzt, 16:23 tarihinde şunu yazdı: > > Hi, > > I was trying to understand why some info tables that have no ptrs and nptrs like > GCD_CAF end up with 1 nptrs in the generated info table and found this code in > Constants.h: > > /* ----------------------------------------------------------------------------- > Minimum closure sizes > > This is the minimum number of words in the payload of a > heap-allocated closure, so that the closure has enough room to be > overwritten with a forwarding pointer during garbage collection. > -------------------------------------------------------------------------- > */ > > #define MIN_PAYLOAD_SIZE 1 > > We use this in a few places in the compiler and add at least one word space in > the payload. However the comment is actually wrong, forwarding pointers are made > by tagging the info ptr field so we don't need a word in the payload for > forwarding pointers. I tried updating this as 0 but that caused a lot of test > failures (mostly in GHCi). I'm wondering if I'm missing anything or is it just > some code assuming min payload size 1 without using this macro. > > Any ideas? > > Ömer From simonpj at microsoft.com Tue Feb 5 14:33:56 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 5 Feb 2019 14:33:56 +0000 Subject: Min closure payload size? In-Reply-To: References: Message-ID: I'm relying on Simon M here. I'm out of my depth! Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ömer Sinan | Agacan | Sent: 05 February 2019 13:38 | To: ghc-devs | Subject: Re: Min closure payload size? | | I just came across a closure that is according to this code is not valid: | | >>> print *get_itbl(0x7b2870) | $8 = { | layout = { | payload = { | ptrs = 0, | nptrs = 0 | }, | bitmap = 0, | large_bitmap_offset = 0, | __pad_large_bitmap_offset = 0, | selector_offset = 0 | }, | type = 21, | srt = 3856568, | code = 0x404ef0 | "H\215E\360L9\370rDH\203\354\bL\211\350H\211\336H\211\307\061\300\350|\034\ | 062" | } | | This is a THUNK_STATIC with 0 ptrs and nptrs in the payload. | | Ömer | | Ömer Sinan Ağacan , 4 Şub 2019 Pzt, 16:23 | tarihinde şunu yazdı: | > | > Hi, | > | > I was trying to understand why some info tables that have no ptrs and | nptrs like | > GCD_CAF end up with 1 nptrs in the generated info table and found this | code in | > Constants.h: | > | > /* ------------------------------------------------------------------ | ----------- | > Minimum closure sizes | > | > This is the minimum number of words in the payload of a | > heap-allocated closure, so that the closure has enough room to be | > overwritten with a forwarding pointer during garbage collection. | > ------------------------------------------------------------------ | -------- | > */ | > | > #define MIN_PAYLOAD_SIZE 1 | > | > We use this in a few places in the compiler and add at least one word | space in | > the payload. 
However the comment is actually wrong, forwarding pointers | are made | > by tagging the info ptr field so we don't need a word in the payload for | > forwarding pointers. I tried updating this as 0 but that caused a lot of | test | > failures (mostly in GHCi). I'm wondering if I'm missing anything or is it | just | > some code assuming min payload size 1 without using this macro. | > | > Any ideas? | > | > Ömer | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haske | ll.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C4297c3983d594168ad0b08d68 | b6f3d32%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636849707211774471& | ;sdata=H4sLNvWnJHxdHo1hjdC0fU3pUL3K1AUjV4nC3tHlBEU%3D&reserved=0 From sylvain at haskus.fr Tue Feb 5 16:36:25 2019 From: sylvain at haskus.fr (Sylvain Henry) Date: Tue, 5 Feb 2019 17:36:25 +0100 Subject: WIP branches Message-ID: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> Hi, Every time we fetch the main GHC repository, we get *a lot* of "wip/*" branches. That's a lot of noise, making the bash completion of "git checkout" pretty useless for instance: > git checkout zsh: do you wish to see all 945 possibilities (329 lines)? Unless I'm missing something, they seem to be used to: 1) get the CI run on personal branches (e.g. wip/USER/whatever) 2) share code between different people (SVN like) 3) archival of not worth merging but still worth keeping code (cf https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches) Now that we have switched to Gitlab, can we keep the main repository clean of those branches? 1) The CI is run on user forks and on merge requests in Gitlab so we don't need this anymore 2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that isn't protected and dedicated to this? The main project could be protected globally instead of per-branch so that only Ben and Marge could create release branches, merge, etc. Devs using wip branches would only have to add "ghc-wip" as an additional remote repo. Any opinion on this? Thanks, Sylvain From sylvain at haskus.fr Tue Feb 5 19:15:25 2019 From: sylvain at haskus.fr (Sylvain Henry) Date: Tue, 5 Feb 2019 20:15:25 +0100 Subject: WIP branches In-Reply-To: <8239D109-ABE4-4819-AF7F-DD59697BE0CA@cs.brynmawr.edu> References: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> <8239D109-ABE4-4819-AF7F-DD59697BE0CA@cs.brynmawr.edu> Message-ID: > What is the advantage of having ghc-wip instead of having all devs just have their own forks? I am all for each dev having its own fork. The ghc-wip repo would be just for devs having an SVN workflow (i.e. several people working with commit rights on the same branch/fork). If no-one uses this workflow or if Gitlab allows fine tuning of permissions on user forks, we may omit the ghc-wip repo altogether. Regards, Sylvain PS: you didn't send your answer to the list, only to me On 05/02/2019 19:44, Richard Eisenberg wrote: > I agree that movement in this direction would be good (though I don't feel the pain from the current mode -- it just seems suboptimal). What is the advantage of having ghc-wip instead of having all devs just have their own forks? > > Thanks, > Richard > >> On Feb 5, 2019, at 11:36 AM, Sylvain Henry wrote: >> >> Hi, >> >> Every time we fetch the main GHC repository, we get *a lot* of "wip/*" branches. 
That's a lot of noise, making the bash completion of "git checkout" pretty useless for instance: >> >>> git checkout >> zsh: do you wish to see all 945 possibilities (329 lines)? >> >> Unless I'm missing something, they seem to be used to: >> 1) get the CI run on personal branches (e.g. wip/USER/whatever) >> 2) share code between different people (SVN like) >> 3) archival of not worth merging but still worth keeping code (cf https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches) >> >> Now that we have switched to Gitlab, can we keep the main repository clean of those branches? >> 1) The CI is run on user forks and on merge requests in Gitlab so we don't need this anymore >> 2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that isn't protected and dedicated to this? The main project could be protected globally instead of per-branch so that only Ben and Marge could create release branches, merge, etc. Devs using wip branches would only have to add "ghc-wip" as an additional remote repo. >> >> Any opinion on this? >> >> Thanks, >> Sylvain >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lonetiger at gmail.com Tue Feb 5 19:32:44 2019 From: lonetiger at gmail.com (Phyx) Date: Tue, 5 Feb 2019 19:32:44 +0000 Subject: WIP branches In-Reply-To: References: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> <8239D109-ABE4-4819-AF7F-DD59697BE0CA@cs.brynmawr.edu> Message-ID: The solution I use to this branch overload is changing my fetch refspecs to list explicitly the branches I want. https://git-scm.com/book/en/v2/Git-Internals-The-Refspec It's not ideal but it gets the job done. I wish git allowed you to exclude branches instead, as I could just exclude /wip/* then. Tamar On Tue, Feb 5, 2019, 19:15 Sylvain Henry wrote: > > What is the advantage of having ghc-wip instead of having all devs just > have their own forks? > > I am all for each dev having its own fork. The ghc-wip repo would be just > for devs having an SVN workflow (i.e. several people working with commit > rights on the same branch/fork). If no-one uses this workflow or if Gitlab > allows fine tuning of permissions on user forks, we may omit the ghc-wip > repo altogether. > > Regards, > Sylvain > > PS: you didn't send your answer to the list, only to me > > On 05/02/2019 19:44, Richard Eisenberg wrote: > > I agree that movement in this direction would be good (though I don't > feel the pain from the current mode -- it just seems suboptimal). What is > the advantage of having ghc-wip instead of having all devs just have their > own forks? > > > > Thanks, > > Richard > > > >> On Feb 5, 2019, at 11:36 AM, Sylvain Henry wrote: > >> > >> Hi, > >> > >> Every time we fetch the main GHC repository, we get *a lot* of "wip/*" > branches. That's a lot of noise, making the bash completion of "git > checkout" pretty useless for instance: > >> > >>> git checkout > >> zsh: do you wish to see all 945 possibilities (329 lines)? > >> > >> Unless I'm missing something, they seem to be used to: > >> 1) get the CI run on personal branches (e.g. wip/USER/whatever) > >> 2) share code between different people (SVN like) > >> 3) archival of not worth merging but still worth keeping code (cf > https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches) > >> > >> Now that we have switched to Gitlab, can we keep the main repository > clean of those branches? 
> >> 1) The CI is run on user forks and on merge requests in Gitlab so we > don't need this anymore > >> 2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that > isn't protected and dedicated to this? The main project could be protected > globally instead of per-branch so that only Ben and Marge could create > release branches, merge, etc. Devs using wip branches would only have to > add "ghc-wip" as an additional remote repo. > >> > >> Any opinion on this? > >> > >> Thanks, > >> Sylvain > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Feb 5 19:36:28 2019 From: lonetiger at gmail.com (Phyx) Date: Tue, 5 Feb 2019 19:36:28 +0000 Subject: Scaling back CI (for now)? In-Reply-To: References: Message-ID: That aside, the CIs don't seem stable at all. Frequent timeouts even before they start. I have been trying to merge 3 changes for a while now and everytime one of them times out and I have to restart the timed out ones. Then there are merge conflicts and I have to start over. This is "bot wackamole" :) On Sun, Feb 3, 2019, 13:56 Matthew Pickering wrote: > It has been established today that Marge is failing to run in batch > mode for some reason which means it takes at least as long as CI takes > to complete for each commit to be merged. The rate is about 4 > commits/day with the current configuration. > > On Sat, Feb 2, 2019 at 7:57 PM Sebastian Graf wrote: > > > > Hi, > > > > Am Sa., 2. Feb. 2019 um 16:09 Uhr schrieb Matthew Pickering < > matthewtpickering at gmail.com>: > >> > >> > >> All the other flavours should be run once the commit reaches master. > >> > >> Thoughts? > > > > > > That's even better than my idea of only running them as nightlies. In > favor! > > > >> > >> Cheers, > >> > >> Matt > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Wed Feb 6 00:47:35 2019 From: ggreif at gmail.com (Gabor Greif) Date: Tue, 5 Feb 2019 16:47:35 -0800 Subject: Min closure payload size? In-Reply-To: References: Message-ID: Just guessing here, maybe this thunk type lives in (read-only?) static sections, and as such it will never be overwritten with forwarding pointers? Gabor On 2/5/19, Ömer Sinan Ağacan wrote: > I just came across a closure that is according to this code is not valid: > > >>> print *get_itbl(0x7b2870) > $8 = { > layout = { > payload = { > ptrs = 0, > nptrs = 0 > }, > bitmap = 0, > large_bitmap_offset = 0, > __pad_large_bitmap_offset = 0, > selector_offset = 0 > }, > type = 21, > srt = 3856568, > code = 0x404ef0 > "H\215E\360L9\370rDH\203\354\bL\211\350H\211\336H\211\307\061\300\350|\034\062" > } > > This is a THUNK_STATIC with 0 ptrs and nptrs in the payload. 
> > Ömer > > Ömer Sinan Ağacan , 4 Şub 2019 Pzt, 16:23 > tarihinde şunu yazdı: >> >> Hi, >> >> I was trying to understand why some info tables that have no ptrs and >> nptrs like >> GCD_CAF end up with 1 nptrs in the generated info table and found this >> code in >> Constants.h: >> >> /* >> ----------------------------------------------------------------------------- >> Minimum closure sizes >> >> This is the minimum number of words in the payload of a >> heap-allocated closure, so that the closure has enough room to be >> overwritten with a forwarding pointer during garbage collection. >> >> -------------------------------------------------------------------------- >> */ >> >> #define MIN_PAYLOAD_SIZE 1 >> >> We use this in a few places in the compiler and add at least one word >> space in >> the payload. However the comment is actually wrong, forwarding pointers >> are made >> by tagging the info ptr field so we don't need a word in the payload for >> forwarding pointers. I tried updating this as 0 but that caused a lot of >> test >> failures (mostly in GHCi). I'm wondering if I'm missing anything or is it >> just >> some code assuming min payload size 1 without using this macro. >> >> Any ideas? >> >> Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From cheng.shao at tweag.io Wed Feb 6 05:34:46 2019 From: cheng.shao at tweag.io (Shao, Cheng) Date: Wed, 6 Feb 2019 13:34:46 +0800 Subject: Cmm code of `id` function referring to `breakpoint`? Message-ID: Hi devs, I just found that the Cmm code of `GHC.Base.id` refers to `breakpoint` in the same module, however, in the Haskell source of `GHC.Base`, the definition of `id` and `breakpoint` are totally unrelated: ``` id :: a -> a id x = x breakpoint :: a -> a breakpoint r = r ``` And here's the pretty-printed Cmm code: ``` base_GHCziBase_id_entry() // [R2] { [] } {offset chwa: // global R2 = R2; call base_GHCziBase_breakpoint_entry(R2) args: 8, res: 0, upd: 8; } base_GHCziBase_breakpoint_entry() // [R2] { [] } {offset chvW: // global R1 = R2; call stg_ap_0_fast(R1) args: 8, res: 0, upd: 8; } ``` This looks suspicious. I'm curious if this is intended behavior of ghc. Regards, Shao Cheng From omeragacan at gmail.com Wed Feb 6 05:40:08 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 6 Feb 2019 08:40:08 +0300 Subject: Min closure payload size? In-Reply-To: References: Message-ID: I don't think so, for two reasons: - We update static thunks just fine so I don't think they're in a read-only section. - Forwarding pointers are needed when moving objects, and we don't move static objects, so we don't need to make them forwarding pointers (I think you confused forwarding pointers with indirections generated by thunk updates?). Ömer Gabor Greif , 6 Şub 2019 Çar, 03:47 tarihinde şunu yazdı: > > Just guessing here, maybe this thunk type lives in (read-only?) static > sections, and as such it will never be overwritten with forwarding > pointers? 
> > Gabor > > On 2/5/19, Ömer Sinan Ağacan wrote: > > I just came across a closure that is according to this code is not valid: > > > > >>> print *get_itbl(0x7b2870) > > $8 = { > > layout = { > > payload = { > > ptrs = 0, > > nptrs = 0 > > }, > > bitmap = 0, > > large_bitmap_offset = 0, > > __pad_large_bitmap_offset = 0, > > selector_offset = 0 > > }, > > type = 21, > > srt = 3856568, > > code = 0x404ef0 > > "H\215E\360L9\370rDH\203\354\bL\211\350H\211\336H\211\307\061\300\350|\034\062" > > } > > > > This is a THUNK_STATIC with 0 ptrs and nptrs in the payload. > > > > Ömer > > > > Ömer Sinan Ağacan , 4 Şub 2019 Pzt, 16:23 > > tarihinde şunu yazdı: > >> > >> Hi, > >> > >> I was trying to understand why some info tables that have no ptrs and > >> nptrs like > >> GCD_CAF end up with 1 nptrs in the generated info table and found this > >> code in > >> Constants.h: > >> > >> /* > >> ----------------------------------------------------------------------------- > >> Minimum closure sizes > >> > >> This is the minimum number of words in the payload of a > >> heap-allocated closure, so that the closure has enough room to be > >> overwritten with a forwarding pointer during garbage collection. > >> > >> -------------------------------------------------------------------------- > >> */ > >> > >> #define MIN_PAYLOAD_SIZE 1 > >> > >> We use this in a few places in the compiler and add at least one word > >> space in > >> the payload. However the comment is actually wrong, forwarding pointers > >> are made > >> by tagging the info ptr field so we don't need a word in the payload for > >> forwarding pointers. I tried updating this as 0 but that caused a lot of > >> test > >> failures (mostly in GHCi). I'm wondering if I'm missing anything or is it > >> just > >> some code assuming min payload size 1 without using this macro. > >> > >> Any ideas? > >> > >> Ömer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From omeragacan at gmail.com Wed Feb 6 05:56:26 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 6 Feb 2019 08:56:26 +0300 Subject: Cmm code of `id` function referring to `breakpoint`? In-Reply-To: References: Message-ID: That's because of the CSE (common subexpression elimination) pass. Here's an example: module Lib where foo :: a -> a foo x = x bar :: a -> a bar x = x Build with -O -ddump-stg and you'll see something like: Lib.foo :: forall a. a -> a [GblId, Arity=1, Caf=NoCafRefs, Str=, Unf=OtherCon []] = [] \r [x_s1bB] x_s1bB; Lib.bar :: forall a. a -> a [GblId, Arity=1, Caf=NoCafRefs, Str=, Unf=OtherCon []] = [] \r [eta_B1] Lib.foo eta_B1; Without -O or with -fno-cse this does not happen. This is quite unexpected, but maybe not harmful. 
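If you only want to rule CSE out while inspecting a single module's output, the flag can also be pinned to that file with an OPTIONS_GHC pragma. A small sketch, reusing the toy Lib module from above:

    {-# OPTIONS_GHC -fno-cse #-}   -- disable CSE for this module only
    module Lib where

    foo :: a -> a
    foo x = x

    bar :: a -> a   -- with CSE off, bar keeps its own definition in the
    bar x = x       -- STG output instead of becoming a call to foo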
Ömer Shao, Cheng , 6 Şub 2019 Çar, 08:35 tarihinde şunu yazdı: > > Hi devs, > > I just found that the Cmm code of `GHC.Base.id` refers to `breakpoint` > in the same module, however, in the Haskell source of `GHC.Base`, the > definition of `id` and `breakpoint` are totally unrelated: > > ``` > id :: a -> a > id x = x > > breakpoint :: a -> a > breakpoint r = r > ``` > > And here's the pretty-printed Cmm code: > > ``` > base_GHCziBase_id_entry() // [R2] > { [] > } > {offset > chwa: // global > R2 = R2; > call base_GHCziBase_breakpoint_entry(R2) args: 8, res: 0, upd: 8; > } > base_GHCziBase_breakpoint_entry() // [R2] > { [] > } > {offset > chvW: // global > R1 = R2; > call stg_ap_0_fast(R1) args: 8, res: 0, upd: 8; > } > ``` > > This looks suspicious. I'm curious if this is intended behavior of ghc. > > Regards, > Shao Cheng > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Wed Feb 6 21:46:04 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 6 Feb 2019 21:46:04 +0000 Subject: Scaling back CI (for now)? In-Reply-To: References: Message-ID: This evening I have fixed the batch mode. For example: https://gitlab.haskell.org/ghc/ghc/merge_requests/302 Hopefully it should be smoother sailing now. Matt On Tue, Feb 5, 2019 at 7:36 PM Phyx wrote: > > That aside, the CIs don't seem stable at all. Frequent timeouts even before they start. I have been trying to merge 3 changes for a while now and everytime one of them times out and I have to restart the timed out ones. Then there are merge conflicts and I have to start over. > > This is "bot wackamole" :) > > On Sun, Feb 3, 2019, 13:56 Matthew Pickering wrote: >> >> It has been established today that Marge is failing to run in batch >> mode for some reason which means it takes at least as long as CI takes >> to complete for each commit to be merged. The rate is about 4 >> commits/day with the current configuration. >> >> On Sat, Feb 2, 2019 at 7:57 PM Sebastian Graf wrote: >> > >> > Hi, >> > >> > Am Sa., 2. Feb. 2019 um 16:09 Uhr schrieb Matthew Pickering : >> >> >> >> >> >> All the other flavours should be run once the commit reaches master. >> >> >> >> Thoughts? >> > >> > >> > That's even better than my idea of only running them as nightlies. In favor! >> > >> >> >> >> Cheers, >> >> >> >> Matt >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Wed Feb 6 22:09:23 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 6 Feb 2019 22:09:23 +0000 Subject: WIP branches In-Reply-To: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> References: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> Message-ID: Making `ghc-wip` sounds like a reasonable idea to me. I have found that people pushing to the `wip/` branches makes things much smoother so far as it means that I can rebase/finish/amend other people's patches and just push to the same branch without having to ask people to do annoying rebases etc. Matt On Tue, Feb 5, 2019 at 4:36 PM Sylvain Henry wrote: > > Hi, > > Every time we fetch the main GHC repository, we get *a lot* of "wip/*" > branches. 
That's a lot of noise, making the bash completion of "git > checkout" pretty useless for instance: > > > git checkout > zsh: do you wish to see all 945 possibilities (329 lines)? > > Unless I'm missing something, they seem to be used to: > 1) get the CI run on personal branches (e.g. wip/USER/whatever) > 2) share code between different people (SVN like) > 3) archival of not worth merging but still worth keeping code (cf > https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches) > > Now that we have switched to Gitlab, can we keep the main repository > clean of those branches? > 1) The CI is run on user forks and on merge requests in Gitlab so we > don't need this anymore > 2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that > isn't protected and dedicated to this? The main project could be > protected globally instead of per-branch so that only Ben and Marge > could create release branches, merge, etc. Devs using wip branches would > only have to add "ghc-wip" as an additional remote repo. > > Any opinion on this? > > Thanks, > Sylvain > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Wed Feb 6 22:23:07 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 06 Feb 2019 17:23:07 -0500 Subject: Scaling back CI (for now)? In-Reply-To: References: Message-ID: <87zhr88sl3.fsf@smart-cactus.org> Phyx writes: > That aside, the CIs don't seem stable at all. Frequent timeouts even before > they start. I have been trying to merge 3 changes for a while now and > everytime one of them times out and I have to restart the timed out ones. > Then there are merge conflicts and I have to start over. > Indeed Marge was causing a remarkable amount of CI traffic, leading to long queues, and eventually build timeouts. Thankfully Matthew investigated why Marge's batch mode wasn't batching and consequently things should now be much better. Sorry for the previous inconvenience! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Wed Feb 6 22:33:35 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 06 Feb 2019 17:33:35 -0500 Subject: WIP branches In-Reply-To: References: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> Message-ID: <87womc8s3q.fsf@smart-cactus.org> Matthew Pickering writes: > Making `ghc-wip` sounds like a reasonable idea to me. > > I have found that people pushing to the `wip/` branches makes things > much smoother so far as it means that I can rebase/finish/amend other > people's patches and just push to the same branch without having to > ask people to do annoying rebases etc. > Right, this is a significant advantage of keeping WIP branches in the ghc repo. I agree that we should clear out some of the older, non-archival wip/ branches. One unfortunate side-effect of keeping WIP work in forks is that GitLab will not show the user that the branch has a corresponding MR when viewing its commit list. For instance if you look at [1] (a branch in the primary GHC repository associated with !298) GitLab will note the fact that the branch has an MR open with the "View open merge request" button on the top right of the page. However if we look at [2] (in osa1's fork, associated with !299) we see no such indication. 
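In the meantime, the "which MR belongs to this branch" question can also be answered from the API rather than the UI, since the merge-request list endpoint accepts a source_branch filter. A sketch, using the branch from [1]:

    curl "https://gitlab.haskell.org/api/v4/projects/ghc%2Fghc/merge_requests?source_branch=wip/nonmoving-gc&state=opened"

This should find the MR whether the branch lives in the main repository or in a fork, because the MR itself is always listed under the target project.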
Cheers, - Ben [1] https://gitlab.haskell.org/ghc/ghc/commits/wip/nonmoving-gc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Wed Feb 6 22:44:41 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 6 Feb 2019 22:44:41 +0000 Subject: WIP branches In-Reply-To: <87womc8s3q.fsf@smart-cactus.org> References: <4c68698f-f0f2-91aa-0108-3b90254950f8@haskus.fr> <87womc8s3q.fsf@smart-cactus.org> Message-ID: | One unfortunate side-effect of keeping WIP work in forks is that GitLab | will not show the user that the branch has a corresponding MR when | viewing its commit list. For instance if you look at [1] (a branch in | the primary GHC repository associated with !298) GitLab will note the | fact that the branch has an MR open with the "View open merge request" | button on the top right of the page. However if we look at [2] (in | osa1's fork, associated with !299) we see no such indication. This is quite important (to me). On several occasions already I have asked myself "have I opened a MR for this wip/ branch, and if so which MR?". There really should be a way to answer that question. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 06 February 2019 22:34 | To: Matthew Pickering ; Sylvain Henry | | Cc: ghc-devs | Subject: Re: WIP branches | | Matthew Pickering writes: | | > Making `ghc-wip` sounds like a reasonable idea to me. | > | > I have found that people pushing to the `wip/` branches makes things | > much smoother so far as it means that I can rebase/finish/amend other | > people's patches and just push to the same branch without having to | > ask people to do annoying rebases etc. | > | Right, this is a significant advantage of keeping WIP branches in the ghc | repo. I agree that we should clear out some of the older, non-archival | wip/ branches. | | One unfortunate side-effect of keeping WIP work in forks is that GitLab | will not show the user that the branch has a corresponding MR when | viewing its commit list. For instance if you look at [1] (a branch in | the primary GHC repository associated with !298) GitLab will note the | fact that the branch has an MR open with the "View open merge request" | button on the top right of the page. However if we look at [2] (in | osa1's fork, associated with !299) we see no such indication. | | Cheers, | | - Ben | | | [1] | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitlab.ha | skell.org%2Fghc%2Fghc%2Fcommits%2Fwip%2Fnonmoving- | gc&data=02%7C01%7Csimonpj%40microsoft.com%7Caf97c2f84a6f49e53dac08d68c8 | 32cb2%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636850892378883131&s | data=%2Fdke1u5PQ0U%2F9PMzZBHGkJdR2OZplq2JUGDj8u1YXi8%3D&reserved=0 From rae at cs.brynmawr.edu Thu Feb 7 03:10:22 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 6 Feb 2019 22:10:22 -0500 Subject: Scaling back CI (for now)? In-Reply-To: <87zhr88sl3.fsf@smart-cactus.org> References: <87zhr88sl3.fsf@smart-cactus.org> Message-ID: So, just checking: is the recommended route to merging now to use the Marge Bot instructions posted previously? (That is, get 1+ approvals and then assign to Marge.) Thanks, Richard > On Feb 6, 2019, at 5:23 PM, Ben Gamari wrote: > > Phyx writes: > >> That aside, the CIs don't seem stable at all. Frequent timeouts even before >> they start. 
I have been trying to merge 3 changes for a while now and >> everytime one of them times out and I have to restart the timed out ones. >> Then there are merge conflicts and I have to start over. >> > Indeed Marge was causing a remarkable amount of CI traffic, leading to > long queues, and eventually build timeouts. Thankfully Matthew > investigated why Marge's batch mode wasn't batching and consequently > things should now be much better. > > Sorry for the previous inconvenience! > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Thu Feb 7 03:57:54 2019 From: ben at well-typed.com (Ben Gamari) Date: Wed, 06 Feb 2019 22:57:54 -0500 Subject: How to merge your patch Message-ID: <87tvhg8d36.fsf@smart-cactus.org> tl;dr. Our beloved @marge-bot is behaving much more reliably now thanks to improvements in CI reliability and build batching. To merge an (accepted) merge request simply designate @marge-bot as its assignee. Marge will handle the rest. If things go awry let me know. Hi everyone, As you may have noticed, over the last weeks we have been feeling out how best to leverage our new CI infrastructure, particularly when it comes to merging patches. As mentioned a few weeks ago, we have introduced a bot, Marge, to help us work around some temporary limitations of GitLab's merge workflow. Unfortunately, Marge had her own set of quirks which have taken a while to sort out. In particular, fragile tests tended to result in repeated merge attempts which tended to clog up CI, leading to an avalanche of waiting, build failures, and general despair. However, we have been working to improve this situation in three ways: * provisioning more builder capacity to reduce wait times * fixing or disabling fragile tests to reduce the need for retries * enable Marge's batched merge functionality, reducing the number of builds necessary per merged patch [1] In light of this I just wanted to reiterate the previous guidance on merging patches. If you have a merge request you would like to merge simply do the following: 1. make sure that it has at least one approval. This should happen in the course of code review but do ping if this was forgotten. 2. assign the merge request to @marge-bot using the assignee field in the right-hand sidebar of the merge request page. 3. next time Marge does a batch of merges she will fold in your MR (leaving a helpful comment to let you know) and, if the batch passes CI, merge it. If not she will leave a comment letting you know there was an issue. I will try to step in to sort out the mess when this happens. Do let me know if you have any questions. Cheers, - Ben [1] thanks to Matthew Pickering for picking this up -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Feb 7 03:58:15 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 06 Feb 2019 22:58:15 -0500 Subject: Scaling back CI (for now)? In-Reply-To: References: <87zhr88sl3.fsf@smart-cactus.org> Message-ID: <87sgx08d2i.fsf@smart-cactus.org> Richard Eisenberg writes: > So, just checking: is the recommended route to merging now to use the > Marge Bot instructions posted previously? (That is, get 1+ approvals > and then assign to Marge.) > Indeed. I was just sent an email reiterating the previous guidance to the list. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Thu Feb 7 06:57:01 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 06:57:01 +0000 Subject: How to merge your patch In-Reply-To: <87tvhg8d36.fsf@smart-cactus.org> References: <87tvhg8d36.fsf@smart-cactus.org> Message-ID: I am continuing to monitor the situation. There still seems to be some teething issues. Matt On Thu, Feb 7, 2019 at 3:58 AM Ben Gamari wrote: > > tl;dr. Our beloved @marge-bot is behaving much more reliably now thanks > to improvements in CI reliability and build batching. To merge > an (accepted) merge request simply designate @marge-bot as its > assignee. Marge will handle the rest. If things go awry let me > know. > > > Hi everyone, > > As you may have noticed, over the last weeks we have been feeling out > how best to leverage our new CI infrastructure, particularly when it > comes to merging patches. > > As mentioned a few weeks ago, we have introduced a bot, Marge, to help > us work around some temporary limitations of GitLab's merge workflow. > Unfortunately, Marge had her own set of quirks which have taken a while > to sort out. In particular, fragile tests tended to result in repeated > merge attempts which tended to clog up CI, leading to an avalanche of > waiting, build failures, and general despair. > > However, we have been working to improve this situation in three ways: > > * provisioning more builder capacity to reduce wait times > > * fixing or disabling fragile tests to reduce the need for retries > > * enable Marge's batched merge functionality, reducing the number of > builds necessary per merged patch [1] > > In light of this I just wanted to reiterate the previous guidance on > merging patches. If you have a merge request you would like to merge > simply do the following: > > 1. make sure that it has at least one approval. This should > happen in the course of code review but do ping if this was > forgotten. > > 2. assign the merge request to @marge-bot using the assignee field in > the right-hand sidebar of the merge request page. > > 3. next time Marge does a batch of merges she will fold in your MR > (leaving a helpful comment to let you know) and, if the batch passes > CI, merge it. If not she will leave a comment letting you know there > was an issue. I will try to step in to sort out the mess when this > happens. > > Do let me know if you have any questions. > > Cheers, > > - Ben > > > [1] thanks to Matthew Pickering for picking this up > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Feb 7 07:05:04 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 07:05:04 +0000 Subject: How to merge your patch In-Reply-To: References: <87tvhg8d36.fsf@smart-cactus.org> Message-ID: I have observed the problem in the logs and have stopped the Marge service for now. On Thu, Feb 7, 2019 at 6:57 AM Matthew Pickering wrote: > > I am continuing to monitor the situation. There still seems to be some > teething issues. > > Matt > > On Thu, Feb 7, 2019 at 3:58 AM Ben Gamari wrote: > > > > tl;dr. Our beloved @marge-bot is behaving much more reliably now thanks > > to improvements in CI reliability and build batching. 
To merge > > an (accepted) merge request simply designate @marge-bot as its > > assignee. Marge will handle the rest. If things go awry let me > > know. > > > > > > Hi everyone, > > > > As you may have noticed, over the last weeks we have been feeling out > > how best to leverage our new CI infrastructure, particularly when it > > comes to merging patches. > > > > As mentioned a few weeks ago, we have introduced a bot, Marge, to help > > us work around some temporary limitations of GitLab's merge workflow. > > Unfortunately, Marge had her own set of quirks which have taken a while > > to sort out. In particular, fragile tests tended to result in repeated > > merge attempts which tended to clog up CI, leading to an avalanche of > > waiting, build failures, and general despair. > > > > However, we have been working to improve this situation in three ways: > > > > * provisioning more builder capacity to reduce wait times > > > > * fixing or disabling fragile tests to reduce the need for retries > > > > * enable Marge's batched merge functionality, reducing the number of > > builds necessary per merged patch [1] > > > > In light of this I just wanted to reiterate the previous guidance on > > merging patches. If you have a merge request you would like to merge > > simply do the following: > > > > 1. make sure that it has at least one approval. This should > > happen in the course of code review but do ping if this was > > forgotten. > > > > 2. assign the merge request to @marge-bot using the assignee field in > > the right-hand sidebar of the merge request page. > > > > 3. next time Marge does a batch of merges she will fold in your MR > > (leaving a helpful comment to let you know) and, if the batch passes > > CI, merge it. If not she will leave a comment letting you know there > > was an issue. I will try to step in to sort out the mess when this > > happens. > > > > Do let me know if you have any questions. > > > > Cheers, > > > > - Ben > > > > > > [1] thanks to Matthew Pickering for picking this up > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
From tdammers at gmail.com Thu Feb 7 08:47:54 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Thu, 7 Feb 2019 09:47:54 +0100 Subject: Commit comments - call for opinions Message-ID: <20190207084753.agqkvovelqckiewn@nibbler> Dear all, So far, we had a feature in Trac where git commit messages mentioning a ticket would automatically be copied into a comment on that commit. See, for example, this comment: https://ghc.haskell.org/trac/ghc/ticket/13615#comment:47 GitLab does things slightly differently. It has "live" git repository information available, and rather than the relatively static comment list we see in Trac, it synthesizes a "timeline" style list of events relevant to an issue, which includes comments ("notes"), but also issue status updates, git commits, and other things. So when a commit mentions a GitLab issue, then GitLab will insert a link to that commit into the issue's event timeline. This means that in principle, copying commit messages into notes would be redundant, and we initially decided to ignore commit comments during the Trac/GitLab migration. However, it also means that the issue timeline will no longer display the actual commit message anymore, just a link to the commit ("@username mentioned this issue in commit 1234beef" or similar). So what used to be a narrative that you could read top-down to retrace the issue's history is now a bit more scattered. Now, the ideal solution would be for GitLab to instead display the full commit message, but I don't see this happening anytime soon, because it would require a patch to GitLab itself. So we're left with two options: a) Import commit comments as notes, duplicating the commit message into the note, and having both the full commit note and a hyperlinked commit reference in the issue timeline. b) Don't import commit comments as notes, just rely on GitLab to insert the hyperlinked commit reference.
If any of you have any preference either way, please do tell. Thanks! -- Tobias Dammers - tdammers at gmail.com From simonpj at microsoft.com Thu Feb 7 09:10:31 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Feb 2019 09:10:31 +0000 Subject: How to merge your patch In-Reply-To: <87tvhg8d36.fsf@smart-cactus.org> References: <87tvhg8d36.fsf@smart-cactus.org> Message-ID: | 2. assign the merge request to @marge-bot using the assignee field in | the right-hand sidebar of the merge request page. This sounds great. What email will we receive in these three cases? a) the merge is successful b) the merge fails because I made a mistake and it just doesn't validate c) the merge failed because the CI infrastructure failed somehow (should no longer happen, I know). Examples would be helpful. Thanks Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 07 February 2019 03:58 | To: GHC developers | Subject: How to merge your patch | | tl;dr. Our beloved @marge-bot is behaving much more reliably now thanks | to improvements in CI reliability and build batching. To merge | an (accepted) merge request simply designate @marge-bot as its | assignee. Marge will handle the rest. If things go awry let me | know. | | | Hi everyone, | | As you may have noticed, over the last weeks we have been feeling out how | best to leverage our new CI infrastructure, particularly when it comes to | merging patches. | | As mentioned a few weeks ago, we have introduced a bot, Marge, to help us | work around some temporary limitations of GitLab's merge workflow. | Unfortunately, Marge had her own set of quirks which have taken a while | to sort out. In particular, fragile tests tended to result in repeated | merge attempts which tended to clog up CI, leading to an avalanche of | waiting, build failures, and general despair. | | However, we have been working to improve this situation in three ways: | | * provisioning more builder capacity to reduce wait times | | * fixing or disabling fragile tests to reduce the need for retries | | * enable Marge's batched merge functionality, reducing the number of | builds necessary per merged patch [1] | | In light of this I just wanted to reiterate the previous guidance on | merging patches. If you have a merge request you would like to merge | simply do the following: | | 1. make sure that it has at least one approval. This should | happen in the course of code review but do ping if this was | forgotten. | | 2. assign the merge request to @marge-bot using the assignee field in | the right-hand sidebar of the merge request page. | | 3. next time Marge does a batch of merges she will fold in your MR | (leaving a helpful comment to let you know) and, if the batch passes | CI, merge it. If not she will leave a comment letting you know there | was an issue. I will try to step in to sort out the mess when this | happens. | | Do let me know if you have any questions. | | Cheers, | | - Ben | | | [1] thanks to Matthew Pickering for picking this up From simonpj at microsoft.com Thu Feb 7 09:22:28 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Feb 2019 09:22:28 +0000 Subject: Commit comments - call for opinions In-Reply-To: <20190207084753.agqkvovelqckiewn@nibbler> References: <20190207084753.agqkvovelqckiewn@nibbler> Message-ID: | GitLab does things slightly differently. 
It has "live" git repository | information available, and rather than the relatively static comment list | we see in Trac, it synthesizes a "timeline" style list of events relevant | to an issue, which includes comments ("notes"), but also issue status | updates, git commits, and other things. Can you show an example of such a time-line? Where is it displayed? The "Discussion" tab perhaps? Personally I like what we are doing now, with the commit message inline. Clicking links to see useful information gets in the way. (I'm not sure if even the title/author/date of the commit are displayed.) | a) Import commit comments as notes, duplicating the commit message into | the note, and having both the full commit note and a hyperlinked commit | reference in the issue timeline. I'm in favour of this. It just preserves history and doesn't commit us for the future. One more question: currently the commit appears as a comment in every Trac ticket that is mentioned in the text of the commit message. That's _really_ useful, because it signals (in those related tickets) that there's a relevant commit to watch out for. Will it continue to be the case? Simon | -----Original Message----- | From: ghc-devs On Behalf Of Tobias Dammers | Sent: 07 February 2019 08:48 | To: GHC Devs | Subject: Commit comments - call for opinions | | Dear all, | | So far, we had a feature in Trac where git commit messages mentioning a | ticket would automatically be copied into a comment on that commit. See, | for example, this comment: | | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fghc.has | kell.org%2Ftrac%2Fghc%2Fticket%2F13615%23comment%3A47&data=02%7C01%7C | simonpj%40microsoft.com%7C4e4e4de2a28141c38bfd08d68cd9015b%7C72f988bf86f1 | 41af91ab2d7cd011db47%7C1%7C0%7C636851260994463920&sdata=JpLtBith2iW2N | fbYdZOt7dsjhKm1Ram7QCgR7aeiVUs%3D&reserved=0 | | GitLab does things slightly differently. It has "live" git repository | information available, and rather than the relatively static comment list | we see in Trac, it synthesizes a "timeline" style list of events relevant | to an issue, which includes comments ("notes"), but also issue status | updates, git commits, and other things. So when a commit mentions a | GitLab issue, then GitLab will insert a link to that commit into the | issue's event timeline. | | This means that in principle, copying commit messages into notes would be | redundant, and we initially decided to ignore commit comments during the | Trac/GitLab migration. However, it also means that the issue timeline | will no longer display the actual commit message anymore, just a link to | the commit ("@username mentioned this issue in commit 1234beef" or | similar). So what used to be a narrative that you could read top-down to | retrace the issue's history is now a bit more scattered. | | Now, the ideal solution would be for GitLab to instead display the full | commit message, but I don't see this happening anytime soon, because it | would require a patch to GitLab itself. So we're left with two options: | | a) Import commit comments as notes, duplicating the commit message into | the note, and having both the full commit note and a hyperlinked commit | reference in the issue timeline. | | b) Don't import commit comments as notes, just rely on GitLab to insert | the hyperlinked commit reference. | | If any of you have any preference either way, please do tell. | | Thanks! 
| | -- | Tobias Dammers - tdammers at gmail.com | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.has | kell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C4e4e4de2a28141c38bfd08d | 68cd9015b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636851260994473914 | &sdata=sybIa7Fec76lod2d65Wc2IN187WGG3LjI2ydjpsFJF0%3D&reserved=0 From matthewtpickering at gmail.com Thu Feb 7 09:28:50 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 09:28:50 +0000 Subject: Commit comments - call for opinions In-Reply-To: <20190207084753.agqkvovelqckiewn@nibbler> References: <20190207084753.agqkvovelqckiewn@nibbler> Message-ID: I am in favor of option b) as it fits in better with the "gitlab way of things". If we are to use gitlab then we should use it as it's most intended rather than trying to retrofit trac practices which have accrued over many years. Adding commits as comments is just a hack in trac to work around missing native support for the fundamental operation of linking a commit to. I don't really see that it is much more inconvenient to click on a link to see the commit, the hash can be hovered over to see the commit title. Clicking on the link will then also give the full commit message but also the full set of changes as well which is probably more useful. Cheers, Matt On Thu, Feb 7, 2019 at 8:48 AM Tobias Dammers wrote: > > Dear all, > > So far, we had a feature in Trac where git commit messages mentioning a > ticket would automatically be copied into a comment on that commit. See, > for example, this comment: > > https://ghc.haskell.org/trac/ghc/ticket/13615#comment:47 > > GitLab does things slightly differently. It has "live" git repository > information available, and rather than the relatively static comment > list we see in Trac, it synthesizes a "timeline" style list of events > relevant to an issue, which includes comments ("notes"), but also issue > status updates, git commits, and other things. So when a commit mentions > a GitLab issue, then GitLab will insert a link to that commit into the > issue's event timeline. > > This means that in principle, copying commit messages into notes would > be redundant, and we initially decided to ignore commit comments during > the Trac/GitLab migration. However, it also means that the issue > timeline will no longer display the actual commit message anymore, just > a link to the commit ("@username mentioned this issue in commit > 1234beef" or similar). So what used to be a narrative that you could > read top-down to retrace the issue's history is now a bit more > scattered. > > Now, the ideal solution would be for GitLab to instead display the full > commit message, but I don't see this happening anytime soon, because it > would require a patch to GitLab itself. So we're left with two options: > > a) Import commit comments as notes, duplicating the commit message into > the note, and having both the full commit note and a hyperlinked commit > reference in the issue timeline. > > b) Don't import commit comments as notes, just rely on GitLab to insert > the hyperlinked commit reference. > > If any of you have any preference either way, please do tell. > > Thanks! 
> > -- > Tobias Dammers - tdammers at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From tdammers at gmail.com Thu Feb 7 10:34:32 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Thu, 7 Feb 2019 11:34:32 +0100 Subject: Commit comments - call for opinions In-Reply-To: References: <20190207084753.agqkvovelqckiewn@nibbler> Message-ID: <20190207103430.sgnae243jdsffolg@nibbler> On Thu, Feb 07, 2019 at 09:22:28AM +0000, Simon Peyton Jones wrote: > | GitLab does things slightly differently. It has "live" git repository > | information available, and rather than the relatively static comment list > | we see in Trac, it synthesizes a "timeline" style list of events relevant > | to an issue, which includes comments ("notes"), but also issue status > | updates, git commits, and other things. > > Can you show an example of such a time-line? Where is it displayed? > The "Discussion" tab perhaps? The timeline I'm talking about is simply the "activity list" right below the issue body. I did just realize however that our gitlab instance doesn't currently display any commits mentioning the issue. I haven't dug into it very deeply, but the research I've done so far suggests that this should in fact happen; the GitLab documentation explicitly says that mentioning issues in commits will link the two together, and this should generate an entry in the activity list. I can imagine, however, that this only works if the commit gets pushed after the issue has been created, which means we would have to somehow cater for this as part of the import. I will investigate further. If anyone happens to know whether this is indeed the cause, please do tell. > Personally I like what we are doing now, with the commit message > inline. Clicking links to see useful information gets in the way. > (I'm not sure if even the title/author/date of the commit are > displayed.) If and when that feature works, it will display: - Commit author - Commit hash - Commit date (though possibly mangled into a "humane" format, e.g. "6 hours ago"). E.g.: "Simon Peyton-Jones @simonpj mentioned in commit abc123 2 months ago" The commit message is not included however. > | a) Import commit comments as notes, duplicating the commit message into > | the note, and having both the full commit note and a hyperlinked commit > | reference in the issue timeline. > > I'm in favour of this. It just preserves history and doesn't commit > us for the future. > One more question: currently the commit appears as a comment in every > Trac ticket that is mentioned in the text of the commit message. > That's _really_ useful, because it signals (in those related tickets) > that there's a relevant commit to watch out for. Will it continue to > be the case? According to the GitLab documentation, mentioning any issue in any commit (or merge request, even) will link the two, so yes, you can absolutely mention multiple issues in one commit, and they will all be linked. GitLab does not support duplicating the text into the issue timeline verbatim however; if we want that even for future commits, then we will have to come up with some sort of hack (either patch GitLab to do it on the fly, or run some external bot that injects them as plain notes). 
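For what it's worth, the bot variant would mostly amount to one call per mention against the notes API (POST /projects/:id/issues/:iid/notes). A rough sketch, where the token, issue number and note text are placeholders:

    curl --request POST \
         --header "PRIVATE-TOKEN: $TOKEN" \
         --data-urlencode "body=commit abc123: full commit message here" \
         "https://gitlab.haskell.org/api/v4/projects/ghc%2Fghc/issues/$IID/notes"

Such a bot would watch for pushes (or poll the repository), scan new commit messages for issue references, and post one note like this per mention.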
> | -----Original Message----- > | From: ghc-devs On Behalf Of Tobias Dammers > | Sent: 07 February 2019 08:48 > | To: GHC Devs > | Subject: Commit comments - call for opinions > | > | Dear all, > | > | So far, we had a feature in Trac where git commit messages mentioning a > | ticket would automatically be copied into a comment on that commit. See, > | for example, this comment: > | > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fghc.has > | kell.org%2Ftrac%2Fghc%2Fticket%2F13615%23comment%3A47&data=02%7C01%7C > | simonpj%40microsoft.com%7C4e4e4de2a28141c38bfd08d68cd9015b%7C72f988bf86f1 > | 41af91ab2d7cd011db47%7C1%7C0%7C636851260994463920&sdata=JpLtBith2iW2N > | fbYdZOt7dsjhKm1Ram7QCgR7aeiVUs%3D&reserved=0 > | > | GitLab does things slightly differently. It has "live" git repository > | information available, and rather than the relatively static comment list > | we see in Trac, it synthesizes a "timeline" style list of events relevant > | to an issue, which includes comments ("notes"), but also issue status > | updates, git commits, and other things. So when a commit mentions a > | GitLab issue, then GitLab will insert a link to that commit into the > | issue's event timeline. > | > | This means that in principle, copying commit messages into notes would be > | redundant, and we initially decided to ignore commit comments during the > | Trac/GitLab migration. However, it also means that the issue timeline > | will no longer display the actual commit message anymore, just a link to > | the commit ("@username mentioned this issue in commit 1234beef" or > | similar). So what used to be a narrative that you could read top-down to > | retrace the issue's history is now a bit more scattered. > | > | Now, the ideal solution would be for GitLab to instead display the full > | commit message, but I don't see this happening anytime soon, because it > | would require a patch to GitLab itself. So we're left with two options: > | > | a) Import commit comments as notes, duplicating the commit message into > | the note, and having both the full commit note and a hyperlinked commit > | reference in the issue timeline. > | > | b) Don't import commit comments as notes, just rely on GitLab to insert > | the hyperlinked commit reference. > | > | If any of you have any preference either way, please do tell. > | > | Thanks! > | > | -- > | Tobias Dammers - tdammers at gmail.com > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.has > | kell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C4e4e4de2a28141c38bfd08d > | 68cd9015b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636851260994473914 > | &sdata=sybIa7Fec76lod2d65Wc2IN187WGG3LjI2ydjpsFJF0%3D&reserved=0 -- Tobias Dammers - tdammers at gmail.com From matthewtpickering at gmail.com Thu Feb 7 11:28:17 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 11:28:17 +0000 Subject: How to merge your patch In-Reply-To: <87tvhg8d36.fsf@smart-cactus.org> References: <87tvhg8d36.fsf@smart-cactus.org> Message-ID: I have attempted to fix the problem (which happened right at the end of her work) and have now redeployed her. I will continue to monitor the situation. Her progress can be tracked https://gitlab.haskell.org/ghc/ghc/merge_requests/307 Matt On Thu, Feb 7, 2019 at 3:58 AM Ben Gamari wrote: > > tl;dr. 
Our beloved @marge-bot is behaving much more reliably now thanks > to improvements in CI reliability and build batching. To merge > an (accepted) merge request simply designate @marge-bot as its > assignee. Marge will handle the rest. If things go awry let me > know. > > > Hi everyone, > > As you may have noticed, over the last weeks we have been feeling out > how best to leverage our new CI infrastructure, particularly when it > comes to merging patches. > > As mentioned a few weeks ago, we have introduced a bot, Marge, to help > us work around some temporary limitations of GitLab's merge workflow. > Unfortunately, Marge had her own set of quirks which have taken a while > to sort out. In particular, fragile tests tended to result in repeated > merge attempts which tended to clog up CI, leading to an avalanche of > waiting, build failures, and general despair. > > However, we have been working to improve this situation in three ways: > > * provisioning more builder capacity to reduce wait times > > * fixing or disabling fragile tests to reduce the need for retries > > * enable Marge's batched merge functionality, reducing the number of > builds necessary per merged patch [1] > > In light of this I just wanted to reiterate the previous guidance on > merging patches. If you have a merge request you would like to merge > simply do the following: > > 1. make sure that it has at least one approval. This should > happen in the course of code review but do ping if this was > forgotten. > > 2. assign the merge request to @marge-bot using the assignee field in > the right-hand sidebar of the merge request page. > > 3. next time Marge does a batch of merges she will fold in your MR > (leaving a helpful comment to let you know) and, if the batch passes > CI, merge it. If not she will leave a comment letting you know there > was an issue. I will try to step in to sort out the mess when this > happens. > > Do let me know if you have any questions. > > Cheers, > > - Ben > > > [1] thanks to Matthew Pickering for picking this up > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Thu Feb 7 16:10:04 2019 From: ben at well-typed.com (Ben Gamari) Date: Thu, 07 Feb 2019 11:10:04 -0500 Subject: How to merge your patch In-Reply-To: References: <87tvhg8d36.fsf@smart-cactus.org> Message-ID: <87tvhf7f6w.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > | 2. assign the merge request to @marge-bot using the assignee field in > | the right-hand sidebar of the merge request page. > > This sounds great. What email will we receive in these three cases? > When Marge picks up an MR and incorporates it into a batch merge she leaves a message [1] of the form >>> I will attempt to batch this MR (!303 (closed))... [1] https://gitlab.haskell.org/ghc/ghc/merge_requests/58#note_5354 > a) the merge is successful At this point Marge will close the MR and leave a note [2] of the form >>> @marge-bot merged 9 hours ago [2] https://gitlab.haskell.org/ghc/ghc/merge_requests/58#note_5385 > b) the merge fails because I made a mistake and it just doesn't > validate Unfortunately I've not seen a case where this has happened since we enabled batching so I can't comment yet. However, from previous experience I suspect the message will look something like this [3] >>> I couldn't merge this branch: CI failed! 
[3] https://gitlab.haskell.org/ghc/ghc/merge_requests/257#note_4788 > c) the merge failed because the CI infrastructure failed somehow > (should no longer happen, I know). From Marge's perspective this typically looks no different from (b). In the past the infrastructure issues fell into a few buckets: * the Windows or Darwin builders ran out of disk space due to [4], resulting in spurious failures * some fragile test failed Cheers, - Ben [4] https://gitlab.com/gitlab-org/gitlab-runner/issues/3856#note_127887227 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Thu Feb 7 21:52:35 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 21:52:35 +0000 Subject: How to merge your patch In-Reply-To: <87tvhf7f6w.fsf@smart-cactus.org> References: <87tvhg8d36.fsf@smart-cactus.org> <87tvhf7f6w.fsf@smart-cactus.org> Message-ID: Latest update. I had to deploy her again because the CI timeout was set too low (10h). The timeout is now 24 hours. Sorry for the noisy emails this is generating. Fingers crossed for this time. Cheers, Matt On Thu, Feb 7, 2019 at 4:10 PM Ben Gamari wrote: > > Simon Peyton Jones via ghc-devs writes: > > > | 2. assign the merge request to @marge-bot using the assignee field in > > | the right-hand sidebar of the merge request page. > > > > This sounds great. What email will we receive in these three cases? > > > When Marge picks up an MR and incorporates it into a batch merge she > leaves a message [1] of the form > > >>> I will attempt to batch this MR (!303 (closed))... > > [1] https://gitlab.haskell.org/ghc/ghc/merge_requests/58#note_5354 > > > > > a) the merge is successful > > At this point Marge will close the MR and leave a note [2] of the form > > >>> @marge-bot merged 9 hours ago > > [2] https://gitlab.haskell.org/ghc/ghc/merge_requests/58#note_5385 > > > > > b) the merge fails because I made a mistake and it just doesn't > > validate > > Unfortunately I've not seen a case where this has happened since we > enabled batching so I can't comment yet. > > However, from previous experience I suspect the message will look > something like this [3] > > >>> I couldn't merge this branch: CI failed! > > [3] https://gitlab.haskell.org/ghc/ghc/merge_requests/257#note_4788 > > > > > c) the merge failed because the CI infrastructure failed somehow > > (should no longer happen, I know). > > From Marge's perspective this typically looks no different from (b). In > the past the infrastructure issues fell into a few buckets: > > * the Windows or Darwin builders ran out of disk space due to [4], > resulting in spurious failures > > * some fragile test failed > > > Cheers, > > - Ben > > > [4] https://gitlab.com/gitlab-org/gitlab-runner/issues/3856#note_127887227 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Feb 7 23:40:08 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 7 Feb 2019 23:40:08 +0000 Subject: Fwd: [Haskell] PhD studentships in FP at Bristol In-Reply-To: References: Message-ID: Hi all, If there are any students considering applying for PhD programs who are also GHC hackers then Bristol could be a great option for you. Myself and Csongor are currently students here and both actively working on GHC. 
The programming languages group in Bristol has grown from 1 to 6 people in the last three years and is continuing to expand. Cheers, Matt ---------- Forwarded message --------- From: Meng Wang Date: Thu, Feb 7, 2019 at 11:20 PM Subject: [Haskell] PhD studentships in FP at Bristol To: Haskell at haskell.org , Haskell-Cafe at haskell.org Hi list, The programming language group at the University of Bristol is looking for up to three more PhD students in the broad area of functional programming, verification, and testing to join a very dynamic group of FP researchers. http://www.bristol.ac.uk/engineering/departments/computerscience/people/meng-wang/overview.html http://www.bristol.ac.uk/engineering/departments/computerscience/people/steven-j-ramsay/overview.html http://www.bristol.ac.uk/engineering/departments/computerscience/people/nicolas-wu/overview.html The positions are fully funded, and with additional bursaries for attending conferences. The ideal candidates will have a strong background in functional programming, especially Haskell, and an appetite for cutting-edge research. For enquiries, please email meng.wang at bristol.ac.uk before February 22nd. Best regards, Meng _______________________________________________ Haskell mailing list Haskell at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell From chrisdone at gmail.com Fri Feb 8 13:23:08 2019 From: chrisdone at gmail.com (Christopher Done) Date: Fri, 8 Feb 2019 13:23:08 +0000 Subject: Constructor wrappers vs workers in generated Core In-Reply-To: References: Message-ID: Sorry, here's my explanation. In summary, I was trying to go for the cleanest, tidiest, simplest AST possible for interpretation. My long-term goal is to have a slow but working interpreter of GHC Haskell written in Haskell that is capable of hot-swapping without segfaulting, by gracefully handling changing types and functions (just report a type error like the Lisps do), and the short-term goal is to export that interpretable AST as a web service so that a simple JS interpreter and/or substitution stepper could be written for the purposes of education, like https://chrisdone.com/toys/duet-delta/. I think this mailing list thread turns out to be an x/y problem. I need STG. I ended up doing lots of cleaning steps to generate a more well-formed Core. I learned that the Core that comes from the desugarer is incomplete and realized today that I was duplicating work done by the transformation that converts Core to STG. I'd initially avoided STG, thinking that Core would suffice, but it seems that STG is the AST that I was converging on anyway. It seems that STG will have all class methods generated, wrappers generated (and inlined as appropriate), primops and ffi calls are saturated, etc. no Types or coercions, etc. It even has info about whether closures can be updated and how. I wish I'd chosen STG instead of Core from the beginning, but it doesn't set me back by much so I'll consider this a learning exercise. If anything, it makes me appreciate almost everything going on in STG for why it's there and is useful. I'm aware of two projects that interpret their own form of STG: http://hackage.haskell.org/package/stgi http://hackage.haskell.org/package/ministg But I intend on consuming directly the STG AST from GHC. The bulk of the work I did that will stay useful is managing a global mapping of Ids, in terms of global, local ids, and conids in separate namespaces. 
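Concretely it is just an interning table keyed by namespace and name; a rough sketch of the idea (names made up, not the actual code):

    module StableIds where

    import qualified Data.Map.Strict as M
    import Data.Int (Int64)

    -- Separate namespaces so a global Id, a local Id and a data
    -- constructor with the same textual name never collide.
    data Namespace = GlobalIdNS | LocalIdNS | ConIdNS
      deriving (Eq, Ord, Show)

    type Key = (Namespace, String)  -- e.g. (GlobalIdNS, "base:GHC.Base.id")

    data Index = Index
      { nextId :: !Int64
      , table  :: !(M.Map Key Int64)
      }

    -- Look a name up, allocating a fresh Int64 the first time it is seen.
    -- Persisting the Index between runs is what keeps the numbers stable.
    intern :: Key -> Index -> (Int64, Index)
    intern k ix =
      case M.lookup k (table ix) of
        Just n  -> (n, ix)
        Nothing -> let n = nextId ix
                   in (n, Index (n + 1) (M.insert k n (table ix)))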
In other words, I replace all Ids in the AST with a globally unique Int, like Unique, but one that is preserved across separate runs, rather than being per GHC run. So I can plug that work into the STG AST instead. I have a Docker file that compiles a patched GHC and outputs my AST for ghc-prim, integer-gmp and base, which produces all these files: https://gist.github.com/chrisdone/5ed9adf9dba5fd82d582e9f2bbc30c9f which have Ids that reference each other across modules and packages without any need for more fiddling/munging. The AST looks like this: https://github.com/chrisdone/prana/blob/eaa5b2111631c13eb6b41c9a47400a4ba6a09ffa/test/Main.hs#L164..L198 and then I have a mapping from Int64->Name for debugging. I think having an STG representation that tools like ministg/stgi can consume, and that includes everything (ghc-prim, integer-gmp and base), is handy aside from my own use-cases (giving the AST to a web app and "real" interpreting). Cheers! On Mon, 4 Feb 2019 at 17:56, Matthew Pickering wrote: > > If you want your core to look as much like the source program as > possible then you could print `$WFoo` as just `Foo`? > > The existence of wrappers is a crucial part of desugaring so perhaps > it's useful for users to see them in the output of your program if > it's intended to be educational? > > Matt > > > On Sat, Feb 2, 2019 at 3:06 PM Christopher Done wrote: > > > > On Sat, 2 Feb 2019 at 14:50, Matthew Pickering > > wrote: > > > There is no way to turn off wrappers and I don't think it would be > > > possible to implement easily if at all. > > > > Fair enough. > > > > > However, they will all probably be inlined after the optimiser runs > > > but it seems that you don't want to run the optimiser at all on the > > > generated core? > > > > Yeah, I'm trying to avoid as much instability in the output shape as > > possible, and for educational purposes, optimizations make fairly > > readable code unreadable. > > > > Wait. Can I rely on case alt patterns having the same arity as the > > original user-defined data type before optimization passes are run? > > > > If the answer to that is yes, then I could just replace all wrapper > > calls with worker calls, which is an easy enough transformation. As a > > precaution, I could add a check on all case alt patterns that the > > arity matches the worker arity and barf if not. > > > > Thanks for your help! > > > > Chris From ben at smart-cactus.org Sat Feb 9 00:04:27 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 08 Feb 2019 19:04:27 -0500 Subject: Commit comments - call for opinions In-Reply-To: References: <20190207084753.agqkvovelqckiewn@nibbler> Message-ID: <87imxt7rpa.fsf@smart-cactus.org> Matthew Pickering writes: > I am in favor of option b) as it fits in better with the "gitlab way > of things". If we are to use gitlab then we should use it as it's most > intended rather than trying to retrofit trac practices which have > accrued over many years. > > Adding commits as comments is just a hack in trac to work around > missing native support for the fundamental operation of linking a > commit to. > Well, I'm not sure that's *entirely* true. > I don't really see that it is much more inconvenient to click on a > link to see the commit, the hash can be hovered over to see the commit > title. > I can see Simon's point here; Trac tickets generally tell a story, consisting of both comments as well as commit messages. It's not clear to me why the content of the former should be more visible than that of the latter. They both tell equally-important parts of the story.
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ecrockett0 at gmail.com Sat Feb 9 05:57:03 2019 From: ecrockett0 at gmail.com (Eric Crockett) Date: Fri, 8 Feb 2019 21:57:03 -0800 Subject: Cannot build with Hadrian Message-ID: GHC newcomer here -- attempting to work on my first patch. I decided to try Hadrian, but ran into a problem. I think I obtained the source using > git clone --recursive https://gitlab.haskell.org/ghc/ghc Then: > ./boot && ./configure > hadrian/build.sh -j --flavour=devel2 This ran for maybe 15 minutes, then showed the error below. Apparently I ended up with too many tarballs? Any suggestions? Thanks, Eric ... ... | Run Ghc CompileCWithGhc Stage1: rts/Inlines.c => > _build/stage1/rts/build/c/Inlines.thr_o > | Run Cc FindCDependencies Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o.d > | Run Cc FindCDependencies Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o.d > | Run Ghc CompileHs Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o > | Run Ghc CompileCWithGhc Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o > | Remove file _build/stage1/rts/build/libHSrts-1.0.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.o (and 113 more) > => _build/stage1/rts/build/libHSrts-1.0.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0.a > /---------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way v). | > | Library: _build/stage1/rts/build/libHSrts-1.0.a | > \---------------------------------------------------/ > | Remove file _build/stage1/rts/build/libHSrts-1.0_thr.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.thr_o (and 115 > more) => _build/stage1/rts/build/libHSrts-1.0_thr.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0_thr.a > /-----------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way thr). 
| > | Library: _build/stage1/rts/build/libHSrts-1.0_thr.a | > \-----------------------------------------------------/ > | Copy file: _build/generated/ghcplatform.h => > _build/stage1/rts/build/ghcplatform.h > | Copy file: _build/generated/ghcversion.h => > _build/stage1/rts/build/ghcversion.h > | Copy file: _build/generated/DerivedConstants.h => > _build/stage1/rts/build/DerivedConstants.h > | Copy file: _build/generated/ghcautoconf.h => > _build/stage1/rts/build/ghcautoconf.h > | Remove directory _build/stage1/libffi/build > shakeArgsWith 0.000s 0% > Function shake 0.005s 0% > Database read 0.000s 0% > With database 0.000s 0% > Running rules 548.377s 99% ========================= > Total 548.383s 100% > Error when running Shake build system: > at src/Rules.hs:(35,19)-(52,17): > at src/Rules.hs:52:5-17: > * Depends on: _build/stage1/lib/package.conf.d/rts-1.0.conf > at src/Rules/Register.hs:(94,9)-(98,34): > * Depends on: _build/stage1/rts/build/ffi.h > at src/Rules/Libffi.hs:(49,7)-(52,48): > * Depends on: _build/stage1/rts/build/ffi.h > _build/stage1/rts/build/ffitarget.h > at src/Rules/Libffi.hs:52:13-48: > * Depends on: _build/stage1/libffi/build/inst/lib/libffi.a > at src/Hadrian/Builder.hs:70:5-23: > * Depends on: _build/stage1/libffi/build/Makefile > at src/Rules/Libffi.hs:107:9-27: > * Depends on: _build/stage1/libffi/build/Makefile.in > * Raised the exception: > Exactly one LibFFI tarball is expected > CallStack (from HasCallStack): > error, called at src/Hadrian/Utilities.hs:60:27 in main:Hadrian.Utilities -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrey.mokhov at newcastle.ac.uk Sat Feb 9 13:03:31 2019 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Sat, 9 Feb 2019 13:03:31 +0000 Subject: Cannot build with Hadrian References: Message-ID: Hi Eric, Can you show the contents of the "libffi-tarballs" directory in your GHC tree? This error says it expects exactly one libffi tarball to be there, but it got either 0 or more than 1. I guess there is just no tarball there in your case. Perhaps git clone failed to complete properly? Cheers, Andrey ------------------------------ Message: 3 Date: Fri, 8 Feb 2019 21:57:03 -0800 From: Eric Crockett To: ghc-devs Subject: Cannot build with Hadrian Message-ID: Content-Type: text/plain; charset="utf-8" GHC newcomer here -- attempting to work on my first patch. I decided to try Hadrian, but ran into a problem. I think I obtained the source using > git clone --recursive https://gitlab.haskell.org/ghc/ghc Then: > ./boot && ./configure > hadrian/build.sh -j --flavour=devel2 This ran for maybe 15 minutes, then showed the error below. Apparently I ended up with too many tarballs? Any suggestions? Thanks, Eric ... ... 
| Run Ghc CompileCWithGhc Stage1: rts/Inlines.c => > _build/stage1/rts/build/c/Inlines.thr_o > | Run Cc FindCDependencies Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o.d > | Run Cc FindCDependencies Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o.d > | Run Ghc CompileHs Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o > | Run Ghc CompileCWithGhc Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o > | Remove file _build/stage1/rts/build/libHSrts-1.0.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.o (and 113 more) > => _build/stage1/rts/build/libHSrts-1.0.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0.a > /---------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way v). | > | Library: _build/stage1/rts/build/libHSrts-1.0.a | > \---------------------------------------------------/ > | Remove file _build/stage1/rts/build/libHSrts-1.0_thr.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.thr_o (and 115 > more) => _build/stage1/rts/build/libHSrts-1.0_thr.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0_thr.a > /-----------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way thr). | > | Library: _build/stage1/rts/build/libHSrts-1.0_thr.a | > \-----------------------------------------------------/ > | Copy file: _build/generated/ghcplatform.h => > _build/stage1/rts/build/ghcplatform.h > | Copy file: _build/generated/ghcversion.h => > _build/stage1/rts/build/ghcversion.h > | Copy file: _build/generated/DerivedConstants.h => > _build/stage1/rts/build/DerivedConstants.h > | Copy file: _build/generated/ghcautoconf.h => > _build/stage1/rts/build/ghcautoconf.h > | Remove directory _build/stage1/libffi/build > shakeArgsWith 0.000s 0% > Function shake 0.005s 0% > Database read 0.000s 0% > With database 0.000s 0% > Running rules 548.377s 99% ========================= > Total 548.383s 100% > Error when running Shake build system: > at src/Rules.hs:(35,19)-(52,17): > at src/Rules.hs:52:5-17: > * Depends on: _build/stage1/lib/package.conf.d/rts-1.0.conf > at src/Rules/Register.hs:(94,9)-(98,34): > * Depends on: _build/stage1/rts/build/ffi.h > at src/Rules/Libffi.hs:(49,7)-(52,48): > * Depends on: _build/stage1/rts/build/ffi.h > _build/stage1/rts/build/ffitarget.h > at src/Rules/Libffi.hs:52:13-48: > * Depends on: _build/stage1/libffi/build/inst/lib/libffi.a > at src/Hadrian/Builder.hs:70:5-23: > * Depends on: _build/stage1/libffi/build/Makefile > at src/Rules/Libffi.hs:107:9-27: > * Depends on: _build/stage1/libffi/build/Makefile.in > * Raised the exception: > Exactly one LibFFI tarball is expected > CallStack (from HasCallStack): > error, called at src/Hadrian/Utilities.hs:60:27 in main:Hadrian.Utilities -------------- next part -------------- An HTML attachment was scrubbed... 
URL: ------------------------------ Subject: Digest Footer _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ------------------------------ End of ghc-devs Digest, Vol 186, Issue 9 **************************************** From rae at cs.brynmawr.edu Sat Feb 9 16:19:11 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sat, 9 Feb 2019 11:19:11 -0500 Subject: TTG: Handling Source Locations Message-ID: Hi devs, I just came across [TTG: Handling Source Locations], as I was poking around in RdrHsSyn and found wondrous things like (dL->L wiz waz) all over the place. General outline: https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations Phab diff: https://phabricator.haskell.org/D5036 Trac ticket: https://ghc.haskell.org/trac/ghc/ticket/15495 Commit: https://gitlab.haskell.org/ghc/ghc/commit/509d5be69c7507ba5d0a5f39ffd1613a59e73eea I see why this change is wanted and how the new version works. It seems to me, though, that this move makes us *less typed*. That is, it would be very easy (and disastrous) to forget to match on a location node. For example, I can now do this: > foo :: LPat p -> ... > foo (VarPat ...) = ... Note that I have declared that foo takes a located pat, but then I forgot to extract the location with dL. This would type-check, but it would fail. Previously, the type checker would ensure that I didn't forget to match on the L constructor. This error would get caught after some poking about, because foo just wouldn't work. However, worse, we might forget to *add* a location when downstream functions expect one. This would be harder to detect, for two reasons: 1. The problem is caught at deconstruction, and figuring out where an object was constructed can be quite hard. 2. The problem might silently cause trouble, because dL won't actually fail on a node missing a location -- it just gives noSrcSpan. So the problem would manifest as a subtle degradation in the quality of an error message, perhaps not caught until several patches (or years!) later. So I'm uncomfortable with this direction of travel. Has this aspect of this design been brought up before? I have to say I don't have a great solution to suggest. Perhaps the best I can think of is to make Located a type family. It would branch on the type index to HsSyn types, introducing a Located node for GhcPass but not for other types. This Isn't really all that extensible (I think) and it gives special status to GHC's usage of the AST. But it seems to solve the immediate problems without the downside above. Sorry for reopening something that has already been debated, but (unless I'm missing something) the current state of affairs seems like a potential wellspring of subtle bugs. Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladislav at serokell.io Sat Feb 9 17:40:06 2019 From: vladislav at serokell.io (Vladislav Zavialov) Date: Sat, 9 Feb 2019 20:40:06 +0300 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: I wholly share this concern, which is why I commented on the Phab diff: > Does this rely on the caller to call dL on the pattern? Very fragile, let's not do that. In addition, I'm worried about illegal states where we end up with multiple nested levels of `NewPat`, and calling `dL` once is not sufficient. As to the better solution, I think we should just go with Solution B from the Wiki page. 
Yes, it's somewhat more boilerplate, but it guarantees to have locations in the right places for all nodes. The main argument against it was that we'd have to define `type instance XThing (GhcPass p) = SrcSpan` for many a `Thing`, but I don't see it as a downside at all. We should do so anyway, to get rid of parsing API annotations and put them in the AST proper. All the best, Vladislav On Sat, Feb 9, 2019 at 7:19 PM Richard Eisenberg wrote: > > Hi devs, > > I just came across [TTG: Handling Source Locations], as I was poking around in RdrHsSyn and found wondrous things like (dL->L wiz waz) all over the place. > > General outline: https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations > Phab diff: https://phabricator.haskell.org/D5036 > Trac ticket: https://ghc.haskell.org/trac/ghc/ticket/15495 > Commit: https://gitlab.haskell.org/ghc/ghc/commit/509d5be69c7507ba5d0a5f39ffd1613a59e73eea > > I see why this change is wanted and how the new version works. > > It seems to me, though, that this move makes us *less typed*. That is, it would be very easy (and disastrous) to forget to match on a location node. For example, I can now do this: > > > foo :: LPat p -> ... > > foo (VarPat ...) = ... > > Note that I have declared that foo takes a located pat, but then I forgot to extract the location with dL. This would type-check, but it would fail. Previously, the type checker would ensure that I didn't forget to match on the L constructor. This error would get caught after some poking about, because foo just wouldn't work. > > However, worse, we might forget to *add* a location when downstream functions expect one. This would be harder to detect, for two reasons: > 1. The problem is caught at deconstruction, and figuring out where an object was constructed can be quite hard. > 2. The problem might silently cause trouble, because dL won't actually fail on a node missing a location -- it just gives noSrcSpan. So the problem would manifest as a subtle degradation in the quality of an error message, perhaps not caught until several patches (or years!) later. > > So I'm uncomfortable with this direction of travel. > > Has this aspect of this design been brought up before? I have to say I don't have a great solution to suggest. Perhaps the best I can think of is to make Located a type family. It would branch on the type index to HsSyn types, introducing a Located node for GhcPass but not for other types. This Isn't really all that extensible (I think) and it gives special status to GHC's usage of the AST. But it seems to solve the immediate problems without the downside above. > > Sorry for reopening something that has already been debated, but (unless I'm missing something) the current state of affairs seems like a potential wellspring of subtle bugs. > > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From andrey.mokhov at newcastle.ac.uk Sat Feb 9 19:39:07 2019 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Sat, 9 Feb 2019 19:39:07 +0000 Subject: Cannot build with Hadrian In-Reply-To: References: Message-ID: Hi Eric, Good to hear you managed to build GHC both with Make and Hadrian. Best wishes with your first GHC patch! 
Cheers, Andrey From: Eric Crockett [mailto:ecrockett0 at gmail.com] Sent: 09 February 2019 18:21 To: Andrey Mokhov Subject: Re: Cannot build with Hadrian Andrey, I had already deleted the folder and tried again with make, which was successful. But since I sent the email, I figured I'd try again with hadrian as well. I followed exactly what I did above (which is what I did the first time, too) and it worked fine. *shrug* Thanks anyway! Eric On Sat, Feb 9, 2019 at 5:03 AM Andrey Mokhov > wrote: Hi Eric, Can you show the contents of the "libffi-tarballs" directory in your GHC tree? This error says it expects exactly one libffi tarball to be there, but it got either 0 or more than 1. I guess there is just no tarball there in your case. Perhaps git clone failed to complete properly? Cheers, Andrey ------------------------------ Message: 3 Date: Fri, 8 Feb 2019 21:57:03 -0800 From: Eric Crockett > To: ghc-devs > Subject: Cannot build with Hadrian Message-ID: > Content-Type: text/plain; charset="utf-8" GHC newcomer here -- attempting to work on my first patch. I decided to try Hadrian, but ran into a problem. I think I obtained the source using > git clone --recursive https://gitlab.haskell.org/ghc/ghc Then: > ./boot && ./configure > hadrian/build.sh -j --flavour=devel2 This ran for maybe 15 minutes, then showed the error below. Apparently I ended up with too many tarballs? Any suggestions? Thanks, Eric ... ... | Run Ghc CompileCWithGhc Stage1: rts/Inlines.c => > _build/stage1/rts/build/c/Inlines.thr_o > | Run Cc FindCDependencies Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o.d > | Run Cc FindCDependencies Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o.d > | Run Ghc CompileHs Stage1: rts/Compact.cmm => > _build/stage1/rts/build/cmm/Compact.o > | Run Ghc CompileCWithGhc Stage1: rts/PathUtils.c => > _build/stage1/rts/build/c/PathUtils.o > | Remove file _build/stage1/rts/build/libHSrts-1.0.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.o (and 113 more) > => _build/stage1/rts/build/libHSrts-1.0.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0.a > /---------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way v). | > | Library: _build/stage1/rts/build/libHSrts-1.0.a | > \---------------------------------------------------/ > | Remove file _build/stage1/rts/build/libHSrts-1.0_thr.a > | Run Ar Pack Stage1: _build/stage1/rts/build/c/Adjustor.thr_o (and 115 > more) => _build/stage1/rts/build/libHSrts-1.0_thr.a > /usr/bin/ar: creating _build/stage1/rts/build/libHSrts-1.0_thr.a > /-----------------------------------------------------\ > | Successfully built library 'rts' (Stage1, way thr). 
| > | Library: _build/stage1/rts/build/libHSrts-1.0_thr.a | > \-----------------------------------------------------/ > | Copy file: _build/generated/ghcplatform.h => > _build/stage1/rts/build/ghcplatform.h > | Copy file: _build/generated/ghcversion.h => > _build/stage1/rts/build/ghcversion.h > | Copy file: _build/generated/DerivedConstants.h => > _build/stage1/rts/build/DerivedConstants.h > | Copy file: _build/generated/ghcautoconf.h => > _build/stage1/rts/build/ghcautoconf.h > | Remove directory _build/stage1/libffi/build > shakeArgsWith 0.000s 0% > Function shake 0.005s 0% > Database read 0.000s 0% > With database 0.000s 0% > Running rules 548.377s 99% ========================= > Total 548.383s 100% > Error when running Shake build system: > at src/Rules.hs:(35,19)-(52,17): > at src/Rules.hs:52:5-17: > * Depends on: _build/stage1/lib/package.conf.d/rts-1.0.conf > at src/Rules/Register.hs:(94,9)-(98,34): > * Depends on: _build/stage1/rts/build/ffi.h > at src/Rules/Libffi.hs:(49,7)-(52,48): > * Depends on: _build/stage1/rts/build/ffi.h > _build/stage1/rts/build/ffitarget.h > at src/Rules/Libffi.hs:52:13-48: > * Depends on: _build/stage1/libffi/build/inst/lib/libffi.a > at src/Hadrian/Builder.hs:70:5-23: > * Depends on: _build/stage1/libffi/build/Makefile > at src/Rules/Libffi.hs:107:9-27: > * Depends on: _build/stage1/libffi/build/Makefile.in > * Raised the exception: > Exactly one LibFFI tarball is expected > CallStack (from HasCallStack): > error, called at src/Hadrian/Utilities.hs:60:27 in main:Hadrian.Utilities -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ------------------------------ End of ghc-devs Digest, Vol 186, Issue 9 **************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Feb 10 00:01:51 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 09 Feb 2019 19:01:51 -0500 Subject: Memory ordering and thunk update Message-ID: <87d0o07bpz.fsf@smart-cactus.org> Hi Simon and Peter, I just wanted to draw your attention to !337 [1], where I try to fix #15994. While it's possible I have missed something, it looks to me like there has been a bug lurking here for quite some time. In particular on a weak memory model machine we do not place a write barrier between the construction of the result of a thunk evaluation and making the results visible to other cores via the indirection's indirectee field. This essentially means that a thread looking at the indirectee may essentially see uninitialized memory. Specifically, updateWithIndirection is currently: # Evaluation result p2 initialized by caller ... ((StgInd *)p1)->indirectee = p2; write_barrier(); SET_INFO(p1, &stg_BLACKHOLE_info); ... whereas I think this should rather be # Evaluation result p2 initialized by caller ... write_barrier(); ((StgInd *)p1)->indirectee = p2; SET_INFO(p1, &stg_BLACKHOLE_info); ... I describe the reasoning for this in more detail in the Note added in !337. All-in-all, I'm rather surprised to find this bug. While we won't see this issue manifest on x86_64 due to this architecture's memory model, we have long supported ARM, PowerPC, and SPARC, all of which have weakly-ordered execution modes. I can't find any relevant open tickets from these platforms. 
The fact that this wasn't noticed on these platforms makes me wonder whether I am just missing something. Let me know what you think. Cheers, - Ben [1] https://gitlab.haskell.org/ghc/ghc/merge_requests/337 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Sun Feb 10 07:49:06 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 10 Feb 2019 10:49:06 +0300 Subject: How do I find out which info table a continuation belongs to? Message-ID: I'm currently working on a bug and one of the things I often want to know is what's on the stack. The problem is I can't see labels of continuations so the information is really useless. Example: >>> call printStack(((StgTSO*)0x42000e0198)->stackobj) 0x42000c8788: RET_SMALL (0x512d70) 0x42000c8790: RET_SMALL (0x40edf0) stk[5] (0x42000c8798) = 0x7b3938 0x42000c87a0: CATCH_FRAME(0x735a98,0x7d3ff2) 0x42000c87b8: STOP_FRAME(0x7311b8) (I modified the printer to print stack locations when printing stacks) Here I need to know which info table the RET_SMALLs return to. Normally I do this for other kinds of closures: >>> print ((StgClosure*)...)->header.info $15 = (const StgInfoTable *) 0x404dc0 But for continuations that doesn't work: >>> print ((StgClosure*)0x42000c8788)->header.info $11 = (const StgInfoTable *) 0x512d80 >>> info symbol 0x512d80 No symbol matches 0x512d80. Anyone know how to make this work? Can I maybe mark the continuations label in the generated assembly somehow to make those labels available in gdb? Thanks Ömer From marlowsd at gmail.com Sun Feb 10 15:59:54 2019 From: marlowsd at gmail.com (Simon Marlow) Date: Sun, 10 Feb 2019 15:59:54 +0000 Subject: How do I find out which info table a continuation belongs to? In-Reply-To: References: Message-ID: I believe this is due to https://phabricator.haskell.org/D4722 (cc Sergei Azovskov) I'm a bit surprised that gdb isn't showing anything though, it should know that the address corresponds to a temporary symbol like `.L1234`. Perhaps you need to compile with -g to make this work, I'm not sure. On Sun, 10 Feb 2019 at 07:50, Ömer Sinan Ağacan wrote: > I'm currently working on a bug and one of the things I often want to know > is > what's on the stack. The problem is I can't see labels of continuations so > the > information is really useless. Example: > > >>> call printStack(((StgTSO*)0x42000e0198)->stackobj) > 0x42000c8788: RET_SMALL (0x512d70) > 0x42000c8790: RET_SMALL (0x40edf0) > stk[5] (0x42000c8798) = 0x7b3938 > 0x42000c87a0: CATCH_FRAME(0x735a98,0x7d3ff2) > 0x42000c87b8: STOP_FRAME(0x7311b8) > > (I modified the printer to print stack locations when printing stacks) > > Here I need to know which info table the RET_SMALLs return to. Normally I > do > this for other kinds of closures: > > >>> print ((StgClosure*)...)->header.info > $15 = (const StgInfoTable *) 0x404dc0 > > But for continuations that doesn't work: > > >>> print ((StgClosure*)0x42000c8788)->header.info > $11 = (const StgInfoTable *) 0x512d80 > >>> info symbol 0x512d80 > No symbol matches 0x512d80. > > Anyone know how to make this work? Can I maybe mark the continuations > label in > the generated assembly somehow to make those labels available in gdb? 
> > Thanks > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Sun Feb 10 18:45:14 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 10 Feb 2019 21:45:14 +0300 Subject: How do I find out which info table a continuation belongs to? In-Reply-To: References: Message-ID: I'm already using -g3. Here's my build.mk: BuildFlavour = quick ifneq "$(BuildFlavour)" "" include mk/flavours/$(BuildFlavour).mk endif GhcRtsHcOpts += -O0 -g3 SRC_HC_OPTS += -g3 GhcStage1HcOpts += -g3 GhcStage2HcOpts += -g3 GhcLibHcOpts += -g3 STRIP_CMD = : Ömer Simon Marlow , 10 Şub 2019 Paz, 19:00 tarihinde şunu yazdı: > > I believe this is due to https://phabricator.haskell.org/D4722 > > (cc Sergei Azovskov) > > I'm a bit surprised that gdb isn't showing anything though, it should know that the address corresponds to a temporary symbol like `.L1234`. Perhaps you need to compile with -g to make this work, I'm not sure. > > On Sun, 10 Feb 2019 at 07:50, Ömer Sinan Ağacan wrote: >> >> I'm currently working on a bug and one of the things I often want to know is >> what's on the stack. The problem is I can't see labels of continuations so the >> information is really useless. Example: >> >> >>> call printStack(((StgTSO*)0x42000e0198)->stackobj) >> 0x42000c8788: RET_SMALL (0x512d70) >> 0x42000c8790: RET_SMALL (0x40edf0) >> stk[5] (0x42000c8798) = 0x7b3938 >> 0x42000c87a0: CATCH_FRAME(0x735a98,0x7d3ff2) >> 0x42000c87b8: STOP_FRAME(0x7311b8) >> >> (I modified the printer to print stack locations when printing stacks) >> >> Here I need to know which info table the RET_SMALLs return to. Normally I do >> this for other kinds of closures: >> >> >>> print ((StgClosure*)...)->header.info >> $15 = (const StgInfoTable *) 0x404dc0 >> >> But for continuations that doesn't work: >> >> >>> print ((StgClosure*)0x42000c8788)->header.info >> $11 = (const StgInfoTable *) 0x512d80 >> >>> info symbol 0x512d80 >> No symbol matches 0x512d80. >> >> Anyone know how to make this work? Can I maybe mark the continuations label in >> the generated assembly somehow to make those labels available in gdb? >> >> Thanks >> >> Ömer >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at cs.brynmawr.edu Mon Feb 11 03:49:40 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 10 Feb 2019 22:49:40 -0500 Subject: Commit comments - call for opinions In-Reply-To: <87imxt7rpa.fsf@smart-cactus.org> References: <20190207084753.agqkvovelqckiewn@nibbler> <87imxt7rpa.fsf@smart-cactus.org> Message-ID: <7F3C8BC0-3603-4742-B4CF-D9A15E71D092@cs.brynmawr.edu> I personally prefer seeing the whole commit message, if only because including it is more prominent than just a mention. Commits are really important, and should be made to stand out beyond just a mention. Not sure if this is worth yet more custom tooling, though. Richard > On Feb 8, 2019, at 7:04 PM, Ben Gamari wrote: > > Matthew Pickering writes: > >> I am in favor of option b) as it fits in better with the "gitlab way >> of things". If we are to use gitlab then we should use it as it's most >> intended rather than trying to retrofit trac practices which have >> accrued over many years. 
>> >> Adding commits as comments is just a hack in trac to work around >> missing native support for the fundamental operation of linking a >> commit to. >> > Well, I'm not sure that's *entirely* true. > >> I don't really see that it is much more inconvenient to click on a >> link to see the commit, the hash can be hovered over to see the commit >> title. >> > I can see Simon's point here; Trac tickets generally tell a story, > consisting of both comments as well as commit messages. It's not clear > to me why the content of the former should be more visible than that of > the latter. They both tell equally-important parts of the story. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Feb 11 11:30:00 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 11 Feb 2019 11:30:00 +0000 Subject: [commit: ghc] wip/T16188: testsuite: Report stdout and stderr in JUnit output (224fec6) In-Reply-To: <20190210213147.2E9DC3A8E4@ghc.haskell.org> References: <20190210213147.2E9DC3A8E4@ghc.haskell.org> Message-ID: Are these commits really for #16188? For which we have a MR in progress https://gitlab.haskell.org/ghc/ghc/merge_requests/207 I was just surprised to see wip/T16188 going by. Simon | -----Original Message----- | From: ghc-commits On Behalf Of | git at git.haskell.org | Sent: 10 February 2019 21:32 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/T16188: testsuite: Report stdout and stderr in | JUnit output (224fec6) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/T16188 | Link : | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.hask | ell.org%2Ftrac%2Fghc%2Fchangeset%2F224fec6983e16ecfc44a80d47e591a2425468e | af%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7Cec8ee4f808cd416cf0d | d08d68f9f2e48%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C63685431128635 | 6725&sdata=ZS4Bp%2FMsebLALA27waYiWgtyx%2FyrKSkiCLcFXvSfXBE%3D&res | erved=0 | | >--------------------------------------------------------------- | | commit 224fec6983e16ecfc44a80d47e591a2425468eaf | Author: Ben Gamari | Date: Thu Jan 24 14:20:11 2019 -0500 | | testsuite: Report stdout and stderr in JUnit output | | This patch makes the JUnit output more useful as now we also report | the | stdout/stderr in the message which can be used to quickly identify | why a | test is failing without downloading the log. | | This also introduces TestResult, | previously we were simply passing around tuples, making things the | implementation rather difficult to follow and harder to extend. | | | >--------------------------------------------------------------- | | 224fec6983e16ecfc44a80d47e591a2425468eaf | testsuite/driver/junit.py | 23 ++++++++-------- | testsuite/driver/testglobals.py | 15 +++++++++++ | testsuite/driver/testlib.py | 58 ++++++++++++++++++++++++++--------- | ------ | testsuite/driver/testutil.py | 7 +++-- | 4 files changed, 69 insertions(+), 34 deletions(-) | | Diff suppressed because of size. 
To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 224fec6983e16ecfc44a80d47e591a2425468eaf | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.has | kell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | commits&data=02%7C01%7Csimonpj%40microsoft.com%7Cec8ee4f808cd416cf0dd | 08d68f9f2e48%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C636854311286366 | 720&sdata=RH6Jnvs7cNsnGyLpwONSZ0Cldighb85HZtuWfn4Gc9Q%3D&reserved | =0 From matthewtpickering at gmail.com Mon Feb 11 11:37:15 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 11 Feb 2019 11:37:15 +0000 Subject: [commit: ghc] wip/T16188: testsuite: Report stdout and stderr in JUnit output (224fec6) In-Reply-To: References: <20190210213147.2E9DC3A8E4@ghc.haskell.org> Message-ID: Ryan rebased Richard's patch to try and get it merged but was disallowed from pushing to Richard's branch. That's why he made wip/T16188 and the corresponding MR (https://gitlab.haskell.org/ghc/ghc/merge_requests/342). Cheers, Matt On Mon, Feb 11, 2019 at 11:30 AM Simon Peyton Jones via ghc-devs wrote: > > Are these commits really for #16188? For which we have a MR in progress > https://gitlab.haskell.org/ghc/ghc/merge_requests/207 > > I was just surprised to see wip/T16188 going by. > > Simon > > | -----Original Message----- > | From: ghc-commits On Behalf Of > | git at git.haskell.org > | Sent: 10 February 2019 21:32 > | To: ghc-commits at haskell.org > | Subject: [commit: ghc] wip/T16188: testsuite: Report stdout and stderr in > | JUnit output (224fec6) > | > | Repository : ssh://git at git.haskell.org/ghc > | > | On branch : wip/T16188 > | Link : > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.hask > | ell.org%2Ftrac%2Fghc%2Fchangeset%2F224fec6983e16ecfc44a80d47e591a2425468e > | af%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7Cec8ee4f808cd416cf0d > | d08d68f9f2e48%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C63685431128635 > | 6725&sdata=ZS4Bp%2FMsebLALA27waYiWgtyx%2FyrKSkiCLcFXvSfXBE%3D&res > | erved=0 > | > | >--------------------------------------------------------------- > | > | commit 224fec6983e16ecfc44a80d47e591a2425468eaf > | Author: Ben Gamari > | Date: Thu Jan 24 14:20:11 2019 -0500 > | > | testsuite: Report stdout and stderr in JUnit output > | > | This patch makes the JUnit output more useful as now we also report > | the > | stdout/stderr in the message which can be used to quickly identify > | why a > | test is failing without downloading the log. > | > | This also introduces TestResult, > | previously we were simply passing around tuples, making things the > | implementation rather difficult to follow and harder to extend. > | > | > | >--------------------------------------------------------------- > | > | 224fec6983e16ecfc44a80d47e591a2425468eaf > | testsuite/driver/junit.py | 23 ++++++++-------- > | testsuite/driver/testglobals.py | 15 +++++++++++ > | testsuite/driver/testlib.py | 58 ++++++++++++++++++++++++++--------- > | ------ > | testsuite/driver/testutil.py | 7 +++-- > | 4 files changed, 69 insertions(+), 34 deletions(-) > | > | Diff suppressed because of size. 
To see it, use: > | > | git diff-tree --root --patch-with-stat --no-color --find-copies- > | harder --ignore-space-at-eol --cc > | 224fec6983e16ecfc44a80d47e591a2425468eaf > | _______________________________________________ > | ghc-commits mailing list > | ghc-commits at haskell.org > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.has > | kell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | commits&data=02%7C01%7Csimonpj%40microsoft.com%7Cec8ee4f808cd416cf0dd > | 08d68f9f2e48%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C1%7C636854311286366 > | 720&sdata=RH6Jnvs7cNsnGyLpwONSZ0Cldighb85HZtuWfn4Gc9Q%3D&reserved > | =0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Feb 11 13:55:31 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 11 Feb 2019 13:55:31 +0000 Subject: Performance of pattern synonyms In-Reply-To: References: Message-ID: Are you as convinced as you were of the semantics of pattern synonyms? I can see they are the right semantics given a run-time implementation but the alternative compile-time implementation has a lot to recommend it. What exactly is "the alternative compile-time implementation"? Remember, although pattern synonyms are set up to call a matching function, that matching function is often inlined, which reduces the overhead to zero. I say "often" inlined. An INLINE pragma for pattern synonyms would be a good feature. Widening to ghc-devs. Simon From: Matthew Roberts Sent: 11 February 2019 00:10 To: matthew.pickering at cs.ox.ac.uk; gergo at erdi.hu; Simon Peyton Jones ; rae at cs.brynmawr.edu Subject: Performance of pattern synonyms Hi all, I am working with someone on a compile-time pattern matching extension and the most important prior work is pattern synonyms in Haskell. I hope you might indulge a couple of questions I have not been able to answer myself from the literature: * [In my testing](http://pattern-benchmarks.herokuapp.com/posts/2019-02-09-peano.html), pattern synonyms have remarkable performance, has anyone ever benchmarked them before? * Do you have - on hand - the hackage data you used in investigating uses of pattern synonyms? I am looking for real-world usage of pattern matching extensions. * Are you as convinced as you were of the semantics of pattern synonyms? I can see they are the right semantics given a run-time implementation but the alternative compile-time implementation has a lot to recommend it. Thanks for your time, Matt Roberts Department of Computing, Macquarie University -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon Feb 11 14:22:18 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 11 Feb 2019 09:22:18 -0500 Subject: Performance of pattern synonyms In-Reply-To: References: Message-ID: <7C952812-6E94-452E-B7A6-8F151CC31FF6@cs.brynmawr.edu> > On Feb 11, 2019, at 8:55 AM, Simon Peyton Jones wrote: > > What exactly is “the alternative compile-time implementation”? In my response, I interpreted this to be macro-expansion, the alternative we discuss in the paper. The paper includes a nice discussion of how the semantics differs between what we currently have and macro-expansion. Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Mon Feb 11 18:14:11 2019 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 11 Feb 2019 13:14:11 -0500 Subject: Performance of pattern synonyms In-Reply-To: <7C952812-6E94-452E-B7A6-8F151CC31FF6@cs.brynmawr.edu> References: <7C952812-6E94-452E-B7A6-8F151CC31FF6@cs.brynmawr.edu> Message-ID: I'm looking at these links, but I'm actually having a hard time finding the actual different definitions of this microbenchmark... On Mon, Feb 11, 2019 at 9:22 AM Richard Eisenberg wrote: > > > On Feb 11, 2019, at 8:55 AM, Simon Peyton Jones > wrote: > > What exactly is “the alternative compile-time implementation”? > > > In my response, I interpreted this to be macro-expansion, the alternative > we discuss in the paper. The paper includes a nice discussion of how the > semantics differs between what we currently have and macro-expansion. > > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Feb 12 08:37:34 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 12 Feb 2019 08:37:34 +0000 Subject: Marge has been stabilised Message-ID: Hi all, I think I have finally managed to stabilise the merge bot (Marge). If you have a patch ready to merge then:
1. Make sure that CI shows as passing
2. Make sure it has been approved by at least one person
3. Make sure it is not marked as WIP
Once these three conditions are met, assign the PR to "Marge Bot" and then she will pick it up and merge it, hopefully within 24 hours depending on her current state. If you have any more questions, feel free to ask. I included some more specific detail below. Cheers, Matt ------ Here is her current mode of operation. Every 30 minutes whilst idle she will try to find new MRs to batch together. If she finds at least two MRs to batch together then she creates a batch as a new merge request. A batch is the series of MRs rebased on top of each other from oldest to newest. She then waits for this merge request to pass CI, checking every 10 minutes to see if it has done so. Once the MR has passed CI, she rebases and merges each MR in the batch one by one to the target branch. Between merging each MR she waits 5 minutes. Because each MR is merged one by one:
1. The original MR will be closed automatically.
2. A CI job will trigger for each MR as it lands on master.
From matthewtpickering at gmail.com Tue Feb 12 09:08:04 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 12 Feb 2019 09:08:04 +0000 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: I just did this now, it was quite disconcerting that my code continued to compile after applying `cL loc` to the return value of one of my functions. On Sat, Feb 9, 2019 at 5:40 PM Vladislav Zavialov wrote: > > I wholly share this concern, which is why I commented on the Phab diff: > > > Does this rely on the caller to call dL on the pattern? Very fragile, let's not do that. > > In addition, I'm worried about illegal states where we end up with > multiple nested levels of `NewPat`, and calling `dL` once is not > sufficient. > > As to the better solution, I think we should just go with Solution B > from the Wiki page. Yes, it's somewhat more boilerplate, but it > guarantees to have locations in the right places for all nodes.
The > main argument against it was that we'd have to define `type instance > XThing (GhcPass p) = SrcSpan` for many a `Thing`, but I don't see it > as a downside at all. We should do so anyway, to get rid of parsing > API annotations and put them in the AST proper. > > All the best, > Vladislav > > On Sat, Feb 9, 2019 at 7:19 PM Richard Eisenberg wrote: > > > > Hi devs, > > > > I just came across [TTG: Handling Source Locations], as I was poking around in RdrHsSyn and found wondrous things like (dL->L wiz waz) all over the place. > > > > General outline: https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations > > Phab diff: https://phabricator.haskell.org/D5036 > > Trac ticket: https://ghc.haskell.org/trac/ghc/ticket/15495 > > Commit: https://gitlab.haskell.org/ghc/ghc/commit/509d5be69c7507ba5d0a5f39ffd1613a59e73eea > > > > I see why this change is wanted and how the new version works. > > > > It seems to me, though, that this move makes us *less typed*. That is, it would be very easy (and disastrous) to forget to match on a location node. For example, I can now do this: > > > > > foo :: LPat p -> ... > > > foo (VarPat ...) = ... > > > > Note that I have declared that foo takes a located pat, but then I forgot to extract the location with dL. This would type-check, but it would fail. Previously, the type checker would ensure that I didn't forget to match on the L constructor. This error would get caught after some poking about, because foo just wouldn't work. > > > > However, worse, we might forget to *add* a location when downstream functions expect one. This would be harder to detect, for two reasons: > > 1. The problem is caught at deconstruction, and figuring out where an object was constructed can be quite hard. > > 2. The problem might silently cause trouble, because dL won't actually fail on a node missing a location -- it just gives noSrcSpan. So the problem would manifest as a subtle degradation in the quality of an error message, perhaps not caught until several patches (or years!) later. > > > > So I'm uncomfortable with this direction of travel. > > > > Has this aspect of this design been brought up before? I have to say I don't have a great solution to suggest. Perhaps the best I can think of is to make Located a type family. It would branch on the type index to HsSyn types, introducing a Located node for GhcPass but not for other types. This Isn't really all that extensible (I think) and it gives special status to GHC's usage of the AST. But it seems to solve the immediate problems without the downside above. > > > > Sorry for reopening something that has already been debated, but (unless I'm missing something) the current state of affairs seems like a potential wellspring of subtle bugs. > > > > Thanks, > > Richard > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From sh.najd at gmail.com Tue Feb 12 10:19:33 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Tue, 12 Feb 2019 11:19:33 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: Hi Richard, > [Richard:] > It seems to me, though, that this move makes us *less typed*. > [and] > However, worse, we might forget to *add* a location when downstream functions expect one. 
We had a more sophisticated version of TTG that could support the ping-pong style (and hence typed tagging of locations), but it came at the price of a more complicated encoding [0]. We decided to abandon the more typed variant since tracking whether a node is located or not is inherently a dynamic/run-time process, not a static/compile-time one: some nodes are generated along the way by the compiler, following arbitrary logic that is hard to encode in types, and hence have no location in the source code. The types `LHsExpr`, `LPat`, and the like will be deleted! It will be all `HsExpr`, `Pat` and the like. Baking, e.g., `LHsExpr` into `HsExpr` was a mistake in the first place: we were already cheating with a `Maybe`-style workaround whenever an `LHsExpr` was required but we only had an `HsExpr` and used `noLoc`. > Sorry for reopening something that has already been debated, but (unless I'm missing something) the current state of affairs seems like a potential wellspring of subtle bugs. We were really careful about the refactoring. New code aside, I don't see how the refactoring of the old code explained in the wiki can introduce any bugs. About the new code, the convention is straightforward: anytime you destruct an AST node, assume a wrapper node inside (add an extra case), or use the smart constructors/pattern synonyms. I'd be happy to rediscuss the design space here. It would be great to have everyone fully on board as it is not a trivial change. /Shayan [0] https://github.com/shayan-najd/HsAST/blob/master/Paper.pdf On Sat, 9 Feb 2019 at 17:19, Richard Eisenberg wrote: > > Hi devs, > > I just came across [TTG: Handling Source Locations], as I was poking around in RdrHsSyn and found wondrous things like (dL->L wiz waz) all over the place. > > General outline: https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations > Phab diff: https://phabricator.haskell.org/D5036 > Trac ticket: https://ghc.haskell.org/trac/ghc/ticket/15495 > Commit: https://gitlab.haskell.org/ghc/ghc/commit/509d5be69c7507ba5d0a5f39ffd1613a59e73eea > > I see why this change is wanted and how the new version works. > > It seems to me, though, that this move makes us *less typed*. That is, it would be very easy (and disastrous) to forget to match on a location node. For example, I can now do this: > > > foo :: LPat p -> ... > > foo (VarPat ...) = ... > > Note that I have declared that foo takes a located pat, but then I forgot to extract the location with dL. This would type-check, but it would fail. Previously, the type checker would ensure that I didn't forget to match on the L constructor. This error would get caught after some poking about, because foo just wouldn't work. > > However, worse, we might forget to *add* a location when downstream functions expect one. This would be harder to detect, for two reasons: > 1. The problem is caught at deconstruction, and figuring out where an object was constructed can be quite hard. > 2. The problem might silently cause trouble, because dL won't actually fail on a node missing a location -- it just gives noSrcSpan. So the problem would manifest as a subtle degradation in the quality of an error message, perhaps not caught until several patches (or years!) later. > > So I'm uncomfortable with this direction of travel. > > Has this aspect of this design been brought up before? I have to say I don't have a great solution to suggest. Perhaps the best I can think of is to make Located a type family.
It would branch on the type index to HsSyn types, introducing a Located node for GhcPass but not for other types. This Isn't really all that extensible (I think) and it gives special status to GHC's usage of the AST. But it seems to solve the immediate problems without the downside above. > > Sorry for reopening something that has already been debated, but (unless I'm missing something) the current state of affairs seems like a potential wellspring of subtle bugs. > > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Tue Feb 12 11:00:27 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 12 Feb 2019 11:00:27 +0000 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. We can still say (Located t) in places where we want to guarantee a SrcSpan. Yes, this lets us add more than one; that's redundant but not harmful. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Matthew | Pickering | Sent: 12 February 2019 09:08 | To: Vladislav Zavialov | Cc: GHC | Subject: Re: TTG: Handling Source Locations | | I just did this now, it was quite disconcerting that my code continued to | compile after applying `cL loc` to the return value of one of my | functions. | | On Sat, Feb 9, 2019 at 5:40 PM Vladislav Zavialov | wrote: | > | > I wholly share this concern, which is why I commented on the Phab diff: | > | > > Does this rely on the caller to call dL on the pattern? Very fragile, | let's not do that. | > | > In addition, I'm worried about illegal states where we end up with | > multiple nested levels of `NewPat`, and calling `dL` once is not | > sufficient. | > | > As to the better solution, I think we should just go with Solution B | > from the Wiki page. Yes, it's somewhat more boilerplate, but it | > guarantees to have locations in the right places for all nodes. The | > main argument against it was that we'd have to define `type instance | > XThing (GhcPass p) = SrcSpan` for many a `Thing`, but I don't see it | > as a downside at all. We should do so anyway, to get rid of parsing | > API annotations and put them in the AST proper. | > | > All the best, | > Vladislav | > | > On Sat, Feb 9, 2019 at 7:19 PM Richard Eisenberg | wrote: | > > | > > Hi devs, | > > | > > I just came across [TTG: Handling Source Locations], as I was poking | around in RdrHsSyn and found wondrous things like (dL->L wiz waz) all | over the place. 
| > > | > > General outline: | > > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgh | > > c.haskell.org%2Ftrac%2Fghc%2Fwiki%2FImplementingTreesThatGrow%2FHand | > > lingSourceLocations&data=02%7C01%7Csimonpj%40microsoft.com%7C915 | > > 2cd5c5b624a9fac5c08d690c9a908%7C72f988bf86f141af91ab2d7cd011db47%7C1 | > > %7C0%7C636855593134767677&sdata=I6kltUVNtcMItCao1dPvnM86%2FlE8ky | > > CwshV81dD6mbY%3D&reserved=0 Phab diff: | > > https://phabricator.haskell.org/D5036 | > > Trac ticket: | > > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgh | > > c.haskell.org%2Ftrac%2Fghc%2Fticket%2F15495&data=02%7C01%7Csimon | > > pj%40microsoft.com%7C9152cd5c5b624a9fac5c08d690c9a908%7C72f988bf86f1 | > > 41af91ab2d7cd011db47%7C1%7C0%7C636855593134767677&sdata=VeRbLhJD | > > ZQv%2FCZ39lMpwo2SRhmcyIsHRgwXNYDN28cA%3D&reserved=0 | > > Commit: | > > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgi | > > tlab.haskell.org%2Fghc%2Fghc%2Fcommit%2F509d5be69c7507ba5d0a5f39ffd1 | > > 613a59e73eea&data=02%7C01%7Csimonpj%40microsoft.com%7C9152cd5c5b | > > 624a9fac5c08d690c9a908%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C | > > 636855593134767677&sdata=nv9GjvSvGweBPmsHEVD1jBB7yz0Br0hDHtZ5Exv | > > uDqU%3D&reserved=0 | > > | > > I see why this change is wanted and how the new version works. | > > | > > It seems to me, though, that this move makes us *less typed*. That | is, it would be very easy (and disastrous) to forget to match on a | location node. For example, I can now do this: | > > | > > > foo :: LPat p -> ... | > > > foo (VarPat ...) = ... | > > | > > Note that I have declared that foo takes a located pat, but then I | forgot to extract the location with dL. This would type-check, but it | would fail. Previously, the type checker would ensure that I didn't | forget to match on the L constructor. This error would get caught after | some poking about, because foo just wouldn't work. | > > | > > However, worse, we might forget to *add* a location when downstream | functions expect one. This would be harder to detect, for two reasons: | > > 1. The problem is caught at deconstruction, and figuring out where an | object was constructed can be quite hard. | > > 2. The problem might silently cause trouble, because dL won't | actually fail on a node missing a location -- it just gives noSrcSpan. So | the problem would manifest as a subtle degradation in the quality of an | error message, perhaps not caught until several patches (or years!) | later. | > > | > > So I'm uncomfortable with this direction of travel. | > > | > > Has this aspect of this design been brought up before? I have to say | I don't have a great solution to suggest. Perhaps the best I can think of | is to make Located a type family. It would branch on the type index to | HsSyn types, introducing a Located node for GhcPass but not for other | types. This Isn't really all that extensible (I think) and it gives | special status to GHC's usage of the AST. But it seems to solve the | immediate problems without the downside above. | > > | > > Sorry for reopening something that has already been debated, but | (unless I'm missing something) the current state of affairs seems like a | potential wellspring of subtle bugs. 
| > >
| > > Thanks,
| > > Richard
| > > _______________________________________________
| > > ghc-devs mailing list
| > > ghc-devs at haskell.org
| > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
| > _______________________________________________
| > ghc-devs mailing list
| > ghc-devs at haskell.org
| > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From vladislav at serokell.io Tue Feb 12 12:32:11 2019
From: vladislav at serokell.io (Vladislav Zavialov)
Date: Tue, 12 Feb 2019 15:32:11 +0300
Subject: TTG: Handling Source Locations
In-Reply-To: References: Message-ID:

> One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere.

I claim an SrcSpan makes sense everywhere, so this is not a useful distinction. Think about it as code provenance, an AST node always comes from somewhere: a user-written .hs file, a GHCi command, or compiler-generated code (via TH or deriving). We should never omit this information from a node. And when we are writing code that consumes an AST, it always makes sense to ask what the provenance of a node is, for example to use it in an error message.

> this lets us add more than one; that's redundant but not harmful

It goes against the philosophy of making illegal states irrepresentable. Now all code must be careful not to end up in an illegal state of nested SrcSpan, without any help from the typechecker. The code that pattern matches on an AST, at the same time, must be prepared to handle this case anyway (or else we risk to crash), which it currently does with stripSrcSpanPat in the implementation of dL. And having to remember to apply dL when matching on the AST is more trivia to learn and remember. Not even a warning if one forgets to do that, no appropriate place to explain this to new contributors (reading another Note just to start doing anything at all with the AST? unnecessary friction), and only a test failure at best in case of a mistake.

My concrete proposal: let's just put SrcSpan in the extension fields of each node. In other words, take these lines

type instance XVarPat (GhcPass _) = NoExt
type instance XLazyPat (GhcPass _) = NoExt
type instance XAsPat (GhcPass _) = NoExt
type instance XParPat (GhcPass _) = NoExt
type instance XBangPat (GhcPass _) = NoExt
...
and replace them with type instance XVarPat (GhcPass _) = SrcSpan type instance XLazyPat (GhcPass _) = SrcSpan type instance XAsPat (GhcPass _) = SrcSpan type instance XParPat (GhcPass _) = SrcSpan type instance XBangPat (GhcPass _) = SrcSpan ... And don't bother with the HasSrcSpan class, don't define composeSrcSpan and decomposeSrcSpan. Very straightforward and beneficial for both producers and consumers of an AST. All the best, Vladislav From rae at cs.brynmawr.edu Tue Feb 12 14:19:20 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 12 Feb 2019 09:19:20 -0500 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: > On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: > > About the new code, the convention is straightforward: anytime you > destruct an AST node, assume a wrapper node inside (add an extra > case), or use the smart constructors/pattern synonyms. Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. > On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: > > One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. > We can still say (Located t) in places where we want to guarantee a SrcSpan. This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. > > Yes, this lets us add more than one; that's redundant but not harmful. I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. > On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: > > I claim an SrcSpan makes sense everywhere, so this is not a useful > distinction. Think about it as code provenance, an AST node always > comes from somewhere I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. 
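To make the shape of that proposal (Solution B, SrcSpan in every extension field) concrete, here is a small self-contained sketch. The names and the pass index are simplified stand-ins rather than GHC's actual definitions; the point is only that each node carries its own location, so a consumer reads the span from the extension field and cannot forget it without a type error:

{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies #-}

-- Illustrative stand-ins, not GHC's real types.
data SrcSpan = RealSrcSpan String | NoSrcSpan deriving Show

data Pass = Parsed | Renamed

-- TTG-style pattern type: one extension field per constructor.
data Pat (p :: Pass)
  = VarPat  (XVarPat p) String
  | WildPat (XWildPat p)

type family XVarPat  (p :: Pass)
type family XWildPat (p :: Pass)

-- Solution B: instantiate every extension field to SrcSpan, so each
-- node carries its own location and there is no wrapper to strip.
type instance XVarPat  p = SrcSpan
type instance XWildPat p = SrcSpan

-- A consumer matches constructors directly and reads the span from the
-- extension field; omitting it is a type error, not a silent bug.
patSpan :: Pat p -> SrcSpan
patSpan (VarPat sp _) = sp
patSpan (WildPat sp)  = sp

example :: Pat 'Parsed
example = VarPat (RealSrcSpan "Foo.hs:1:1") "x"

main :: IO ()
main = print (patSpan example)

The cost is exactly the one mentioned above: one "type instance ... = SrcSpan" per constructor, in exchange for locations that are guaranteed to be present.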
> My concrete proposal: let's just put SrcSpan in the extension fields > of each node I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) A quick mock-up shows that record-updates can make this easier: > data Phase = Parsed | Renamed > > data Exp p = Node (XNode p) Int > > type family XNode (p :: Phase) > type instance XNode p = NodeExt p > > data NodeExt p where > NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p > > type family RenamedOnly p t where > RenamedOnly Parsed _ = () > RenamedOnly Renamed t = t > > example :: Exp Parsed > example = Node (NodeExt { flag = True, fvs = () }) 5 > > rename :: Exp Parsed -> Exp Renamed > rename (Node ext n) = Node (ext { fvs = "xyz" }) n Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. Thanks, Richard From sh.najd at gmail.com Tue Feb 12 14:30:43 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Tue, 12 Feb 2019 15:30:43 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: > My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. I am not sure if I understand: shouldn't the totality checker warn if there is no pattern for the wrapper constructor (hence enforce the convention)? On Tue, 12 Feb 2019 at 15:19, Richard Eisenberg wrote: > > > > > On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: > > > > About the new code, the convention is straightforward: anytime you > > destruct an AST node, assume a wrapper node inside (add an extra > > case), or use the smart constructors/pattern synonyms. > > Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. 
> > > On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: > > > > One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. > > This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. > > > We can still say (Located t) in places where we want to guarantee a SrcSpan. > > This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. > > > > > Yes, this lets us add more than one; that's redundant but not harmful. > > I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. > > > > On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: > > > > I claim an SrcSpan makes sense everywhere, so this is not a useful > > distinction. Think about it as code provenance, an AST node always > > comes from somewhere > > I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. > > > My concrete proposal: let's just put SrcSpan in the extension fields > > of each node > > I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. > > One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) 
A quick mock-up shows that record-updates can make this easier: > > > data Phase = Parsed | Renamed > > > > data Exp p = Node (XNode p) Int > > > > type family XNode (p :: Phase) > > type instance XNode p = NodeExt p > > > > data NodeExt p where > > NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p > > > > type family RenamedOnly p t where > > RenamedOnly Parsed _ = () > > RenamedOnly Renamed t = t > > > > example :: Exp Parsed > > example = Node (NodeExt { flag = True, fvs = () }) 5 > > > > rename :: Exp Parsed -> Exp Renamed > > rename (Node ext n) = Node (ext { fvs = "xyz" }) n > > Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. > > Thanks, > Richard From sh.najd at gmail.com Tue Feb 12 15:24:17 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Tue, 12 Feb 2019 16:24:17 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: > [Richard:] I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. Depends on the semantics of `dL`: currently (for `Pat`) it returns the top-level `SrcSpan` and then the underlying node with all the inner wrappers stripped away. So one use of `dL` is enough in this semantic. (see https://github.com/ghc/ghc/blob/master/compiler/hsSyn/HsPat.hs#L341) > [Vlad:] As to the better solution, I think we should just go with Solution B from the Wiki page. > [Richard:] I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favour. It may help to identify at least three sorts of functions commonly used *currently* in GHC when interacting with AST nodes (please add, if I am missing some): (a) those that ignore source locations; (b) those that generically handle source locations regardless of the constructor of the underlying node; and (c) those that handle source locations case-by-case (often by nested pattern matching). The key issue with Solution B, as listed in the wiki, is that it ruins the separation of two concerns in functions working on AST nodes: handling source locations, and the actual logic of the function. With the ping-pong style, handling of source locations is sometimes refactored in a separate function, and with Solution A refactored in a separate case/function clause. With Solution B, however, every time we construct a node we should have a source location ready to put into it. That is, with Solution B, (a) and (b) are not cleanly implemented. (I can explain more if not clear.) /Shayan On Tue, 12 Feb 2019 at 15:30, Shayan Najd wrote: > > > My problem, though, is that this is just a convention -- no one checks it. 
It would be easy to forget. > > I am not sure if I understand: shouldn't the totality checker warn if > there is no pattern for the wrapper constructor (hence enforce the > convention)? > > > On Tue, 12 Feb 2019 at 15:19, Richard Eisenberg wrote: > > > > > > > > > On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: > > > > > > About the new code, the convention is straightforward: anytime you > > > destruct an AST node, assume a wrapper node inside (add an extra > > > case), or use the smart constructors/pattern synonyms. > > > > Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. > > > > > On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: > > > > > > One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. > > > > This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. > > > > > We can still say (Located t) in places where we want to guarantee a SrcSpan. > > > > This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. > > > > > > > > Yes, this lets us add more than one; that's redundant but not harmful. > > > > I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. > > > > > > > On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: > > > > > > I claim an SrcSpan makes sense everywhere, so this is not a useful > > > distinction. Think about it as code provenance, an AST node always > > > comes from somewhere > > > > I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. > > > > > My concrete proposal: let's just put SrcSpan in the extension fields > > > of each node > > > > I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. 
And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. > > > > One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) A quick mock-up shows that record-updates can make this easier: > > > > > data Phase = Parsed | Renamed > > > > > > data Exp p = Node (XNode p) Int > > > > > > type family XNode (p :: Phase) > > > type instance XNode p = NodeExt p > > > > > > data NodeExt p where > > > NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p > > > > > > type family RenamedOnly p t where > > > RenamedOnly Parsed _ = () > > > RenamedOnly Renamed t = t > > > > > > example :: Exp Parsed > > > example = Node (NodeExt { flag = True, fvs = () }) 5 > > > > > > rename :: Exp Parsed -> Exp Renamed > > > rename (Node ext n) = Node (ext { fvs = "xyz" }) n > > > > Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. > > > > Thanks, > > Richard From rae at cs.brynmawr.edu Tue Feb 12 15:24:38 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 12 Feb 2019 10:24:38 -0500 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: That's true, but how would it play out in practice? For example, take a look at RnPat. There is a rnLPatAndThen which uses wrapSrcSpanCps to extract the location and then call rnPatAndThen. rnPatAndThen, in turn, just panics if it sees the extension point, because that's an unexpected constructor. Someone could easily call rnPatAndThen when they should call rnLPatAndThen. This would cause a panic. There's also the problem that the pattern-match checker can't usefully look through view patterns. If there is a nested pattern-match (that is, we see dL->L _ (SomeOtherConstructor), then there is no way to guarantee a complete pattern-match short of a catch-all. So it doesn't seem to me that the pattern-match checker is really helping us achieve what we want here. Richard > On Feb 12, 2019, at 9:30 AM, Shayan Najd wrote: > >> My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. > > I am not sure if I understand: shouldn't the totality checker warn if > there is no pattern for the wrapper constructor (hence enforce the > convention)? 
> > > On Tue, 12 Feb 2019 at 15:19, Richard Eisenberg wrote: >> >> >> >>> On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: >>> >>> About the new code, the convention is straightforward: anytime you >>> destruct an AST node, assume a wrapper node inside (add an extra >>> case), or use the smart constructors/pattern synonyms. >> >> Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. >> >>> On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: >>> >>> One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. >> >> This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. >> >>> We can still say (Located t) in places where we want to guarantee a SrcSpan. >> >> This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. >> >>> >>> Yes, this lets us add more than one; that's redundant but not harmful. >> >> I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. >> >> >>> On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: >>> >>> I claim an SrcSpan makes sense everywhere, so this is not a useful >>> distinction. Think about it as code provenance, an AST node always >>> comes from somewhere >> >> I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. >> >>> My concrete proposal: let's just put SrcSpan in the extension fields >>> of each node >> >> I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. 
>> >> One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) A quick mock-up shows that record-updates can make this easier: >> >>> data Phase = Parsed | Renamed >>> >>> data Exp p = Node (XNode p) Int >>> >>> type family XNode (p :: Phase) >>> type instance XNode p = NodeExt p >>> >>> data NodeExt p where >>> NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p >>> >>> type family RenamedOnly p t where >>> RenamedOnly Parsed _ = () >>> RenamedOnly Renamed t = t >>> >>> example :: Exp Parsed >>> example = Node (NodeExt { flag = True, fvs = () }) 5 >>> >>> rename :: Exp Parsed -> Exp Renamed >>> rename (Node ext n) = Node (ext { fvs = "xyz" }) n >> >> Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. >> >> Thanks, >> Richard From sh.najd at gmail.com Tue Feb 12 15:40:13 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Tue, 12 Feb 2019 16:40:13 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: > Someone could easily call rnPatAndThen when they should call rnLPatAndThen. This would cause a panic. With Solution A, there shouldn't be two functions `rnLPatAndThen` and `rnPatAndThen` anyways. There should be only `rnPatAndThen` with an extra case for the wrapper node. > There's also the problem that the pattern-match checker can't usefully look through view patterns. Yes, I have reported it while back. I don't know of the progress in fixing this. On Tue, 12 Feb 2019 at 16:24, Richard Eisenberg wrote: > > That's true, but how would it play out in practice? For example, take a look at RnPat. There is a rnLPatAndThen which uses wrapSrcSpanCps to extract the location and then call rnPatAndThen. rnPatAndThen, in turn, just panics if it sees the extension point, because that's an unexpected constructor. Someone could easily call rnPatAndThen when they should call rnLPatAndThen. This would cause a panic. > > There's also the problem that the pattern-match checker can't usefully look through view patterns. If there is a nested pattern-match (that is, we see dL->L _ (SomeOtherConstructor), then there is no way to guarantee a complete pattern-match short of a catch-all. So it doesn't seem to me that the pattern-match checker is really helping us achieve what we want here. > > Richard > > > On Feb 12, 2019, at 9:30 AM, Shayan Najd wrote: > > > >> My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. > > > > I am not sure if I understand: shouldn't the totality checker warn if > > there is no pattern for the wrapper constructor (hence enforce the > > convention)? 
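Shayan's point that Solution A needs only one function with one extra clause can also be pictured with toy types. The following sketch uses the same kind of simplifying assumptions as before (a mock Pat and a stand-in for what rnPatAndThen does, not GHC's real renamer): the wrapper is an ordinary constructor, so it is the totality checker, rather than a convention, that reminds the author to handle locations.

-- Simplified stand-ins, not GHC's real definitions.
data SrcSpan = RealSrcSpan String | NoSrcSpan deriving Show

data Pat
  = VarPat String
  | WildPat
  | LocPat SrcSpan Pat   -- Solution A: the wrapper is an ordinary constructor

-- One function, one extra clause; forgetting the LocPat case is an
-- incomplete-pattern warning, not a silent bug or a runtime panic.
rnPat :: Pat -> Pat
rnPat (VarPat x)    = VarPat (x ++ "_r")     -- stand-in for real renaming
rnPat WildPat       = WildPat
rnPat (LocPat sp p) = LocPat sp (rnPat p)    -- generic location handling

main :: IO ()
main = print (case rnPat (LocPat (RealSrcSpan "Foo.hs:2:3") (VarPat "x")) of
                LocPat _ (VarPat x) -> x
                _                   -> "?")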
> > > > > > On Tue, 12 Feb 2019 at 15:19, Richard Eisenberg wrote: > >> > >> > >> > >>> On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: > >>> > >>> About the new code, the convention is straightforward: anytime you > >>> destruct an AST node, assume a wrapper node inside (add an extra > >>> case), or use the smart constructors/pattern synonyms. > >> > >> Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. > >> > >>> On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: > >>> > >>> One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. > >> > >> This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. > >> > >>> We can still say (Located t) in places where we want to guarantee a SrcSpan. > >> > >> This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. > >> > >>> > >>> Yes, this lets us add more than one; that's redundant but not harmful. > >> > >> I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. > >> > >> > >>> On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: > >>> > >>> I claim an SrcSpan makes sense everywhere, so this is not a useful > >>> distinction. Think about it as code provenance, an AST node always > >>> comes from somewhere > >> > >> I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. > >> > >>> My concrete proposal: let's just put SrcSpan in the extension fields > >>> of each node > >> > >> I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. 
And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. > >> > >> One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) A quick mock-up shows that record-updates can make this easier: > >> > >>> data Phase = Parsed | Renamed > >>> > >>> data Exp p = Node (XNode p) Int > >>> > >>> type family XNode (p :: Phase) > >>> type instance XNode p = NodeExt p > >>> > >>> data NodeExt p where > >>> NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p > >>> > >>> type family RenamedOnly p t where > >>> RenamedOnly Parsed _ = () > >>> RenamedOnly Renamed t = t > >>> > >>> example :: Exp Parsed > >>> example = Node (NodeExt { flag = True, fvs = () }) 5 > >>> > >>> rename :: Exp Parsed -> Exp Renamed > >>> rename (Node ext n) = Node (ext { fvs = "xyz" }) n > >> > >> Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. > >> > >> Thanks, > >> Richard > From rae at cs.brynmawr.edu Tue Feb 12 15:44:44 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 12 Feb 2019 10:44:44 -0500 Subject: Marge has been stabilised In-Reply-To: References: Message-ID: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> Thanks for these instructions! > On Feb 12, 2019, at 3:37 AM, Matthew Pickering wrote: > > 3. Make sure it is not marked as WIP What does this mean, precisely? Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Feb 12 15:51:25 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 12 Feb 2019 15:51:25 +0000 Subject: Marge has been stabilised In-Reply-To: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> Message-ID: WIP merge requests have "WIP:" at the front of the title. I have been marking MRs as WIP is they are not ready to merge to try to keep track of things that need to be added to the merge queue or not. This can be quickly toggled on/off by typing the /wip quick command in a comment. Cheers, Matt On Tue, Feb 12, 2019 at 3:44 PM Richard Eisenberg wrote: > > Thanks for these instructions! > > On Feb 12, 2019, at 3:37 AM, Matthew Pickering wrote: > > 3. Make sure it is not marked as WIP > > > What does this mean, precisely? 
> > Thanks, > Richard From rae at cs.brynmawr.edu Tue Feb 12 16:23:55 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 12 Feb 2019 11:23:55 -0500 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> Message-ID: <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> > On Feb 12, 2019, at 10:51 AM, Matthew Pickering wrote: > > This can be quickly toggled on/off by typing the /wip quick command in > a comment. This is an interesting aside. I understand this to mean: If I make a comment (the same place that I would write a comment for humans) that consists solely of "/wip", then instead of posting anything to humans, the title of my MR changes, either adding "WIP: " or deleting that from the beginning. Are there other such pieces of magic? Is there a place they are listed? Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Feb 12 16:28:59 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 12 Feb 2019 16:28:59 +0000 Subject: Marge has been stabilised In-Reply-To: <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: Yes there are two others I use frequently. /approve - Approves a Merge request /assign - Assign a user and I imagine I will use /label and /relabel frequently when we have labels. It's also useful to know that the ! autocomplete for merge requests can be filtered by name of MR. The user autocomplete @ can also be filtered by a user's real name. There is a full list here: https://docs.gitlab.com/ee/user/project/quick_actions.html Cheers, Matt On Tue, Feb 12, 2019 at 4:23 PM Richard Eisenberg wrote: > > > > On Feb 12, 2019, at 10:51 AM, Matthew Pickering wrote: > > This can be quickly toggled on/off by typing the /wip quick command in > a comment. > > > This is an interesting aside. I understand this to mean: If I make a comment (the same place that I would write a comment for humans) that consists solely of "/wip", then instead of posting anything to humans, the title of my MR changes, either adding "WIP: " or deleting that from the beginning. > > Are there other such pieces of magic? Is there a place they are listed? > > Thanks! > Richard From alan.zimm at gmail.com Tue Feb 12 18:36:01 2019 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 12 Feb 2019 20:36:01 +0200 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: > Every 30 minutes whilst idle she will try to find new MRs to batch > together. If she finds at least two MRs to batch together then she > creates a batch as a new merge request. A batch is the series of MRs > rebased on top of each other from oldest to newest. Does this mean that on a quiet day (if there are ever any for GHC dev), a lone merge will languish until someone else puts forward a valid merge too? Or will marge just merge a non-batched lone merge request? Alan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Tue Feb 12 20:21:40 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 12 Feb 2019 20:21:40 +0000 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: Yes, on a quiet day she will idly wait for the opportunity to merge two MRs rather than try and merge just one. I don't think this is a bad default as her CI cycle has proved to be quite long so only 2-3 batches happen per day at most. Matt On Tue, Feb 12, 2019 at 6:36 PM Alan & Kim Zimmerman wrote: > > > Every 30 minutes whilst idle she will try to find new MRs to batch > > together. If she finds at least two MRs to batch together then she > > creates a batch as a new merge request. A batch is the series of MRs > > rebased on top of each other from oldest to newest. > > Does this mean that on a quiet day (if there are ever any for GHC dev), a lone merge > will languish until someone else puts forward a valid merge too? > > Or will marge just merge a non-batched lone merge request? > > Alan > From adam at well-typed.com Tue Feb 12 21:38:12 2019 From: adam at well-typed.com (Adam Gundry) Date: Tue, 12 Feb 2019 21:38:12 +0000 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: <1e185a0b-4076-707c-4315-ec7a1eb3b9b1@well-typed.com> While we're on the topic, is there any plan to get rid of all those panics? AFAICS they are entirely unnecessary: we should just use an empty datatype for unused constructor extension points, then we can eliminate it to get whatever we like. See #15247. Adam On 12/02/2019 15:40, Shayan Najd wrote: >> Someone could easily call rnPatAndThen when they should call rnLPatAndThen. This would cause a panic. > > With Solution A, there shouldn't be two functions `rnLPatAndThen` and > `rnPatAndThen` anyways. There should be only `rnPatAndThen` with an > extra case for the wrapper node. > >> There's also the problem that the pattern-match checker can't usefully look through view patterns. > > Yes, I have reported it while back. I don't know of the progress in fixing this. > > On Tue, 12 Feb 2019 at 16:24, Richard Eisenberg wrote: >> >> That's true, but how would it play out in practice? For example, take a look at RnPat. There is a rnLPatAndThen which uses wrapSrcSpanCps to extract the location and then call rnPatAndThen. rnPatAndThen, in turn, just panics if it sees the extension point, because that's an unexpected constructor. Someone could easily call rnPatAndThen when they should call rnLPatAndThen. This would cause a panic. >> >> There's also the problem that the pattern-match checker can't usefully look through view patterns. If there is a nested pattern-match (that is, we see dL->L _ (SomeOtherConstructor), then there is no way to guarantee a complete pattern-match short of a catch-all. So it doesn't seem to me that the pattern-match checker is really helping us achieve what we want here. >> >> Richard >> >>> On Feb 12, 2019, at 9:30 AM, Shayan Najd wrote: >>> >>>> My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. >>> >>> I am not sure if I understand: shouldn't the totality checker warn if >>> there is no pattern for the wrapper constructor (hence enforce the >>> convention)? 
>>> >>> >>> On Tue, 12 Feb 2019 at 15:19, Richard Eisenberg wrote: >>>> >>>> >>>> >>>>> On Feb 12, 2019, at 5:19 AM, Shayan Najd wrote: >>>>> >>>>> About the new code, the convention is straightforward: anytime you >>>>> destruct an AST node, assume a wrapper node inside (add an extra >>>>> case), or use the smart constructors/pattern synonyms. >>>> >>>> Aha! This, I did not know. So, you're saying that all the consumers of the GHC AST need to remember to use dL every time they pattern-match. With the new design, using dL when it's unnecessary doesn't hurt, but forgetting it is problematic. So: just use it every time. My problem, though, is that this is just a convention -- no one checks it. It would be easy to forget. >>>> >>>>> On Feb 12, 2019, at 6:00 AM, Simon Peyton Jones via ghc-devs wrote: >>>>> >>>>> One way to think of it is this: we can now put SrcSpans where they make sense, rather than everywhere. >>>> >>>> This has some logic to it, but I'm not quite sold. Another way of saying this is that the new design favors flexibility for the producer, at the cost of requiring consumers to be aware of and consistently apply the convention Shayan describes above. The problem is, though, that if the producer is stingy in adding source locations, the consumer won't know which locations are expected to be informative. Is the consumer expected to collect locations from a variety of places and try to combine them somehow? I doubt it. So this means that the flexibility for the producer isn't really there -- the type system will accept arbitrary choices of where to put locations, but consumers won't get the locations where they expect them. >>>> >>>>> We can still say (Located t) in places where we want to guarantee a SrcSpan. >>>> >>>> This seems to go against the TTG philosophy. We can do this in, say, the return type of a function, but we can't in the AST proper, because that's shared among a number of clients, some of whom don't want the source locations. >>>> >>>>> >>>>> Yes, this lets us add more than one; that's redundant but not harmful. >>>> >>>> I disagree here. If we add locations to a node twice, then we'll have to use dL twice to find the underlying constructor. This is another case there the type system offers the producer flexibility but hamstrings the consumer. >>>> >>>> >>>>> On Feb 12, 2019, at 7:32 AM, Vladislav Zavialov wrote: >>>>> >>>>> I claim an SrcSpan makes sense everywhere, so this is not a useful >>>>> distinction. Think about it as code provenance, an AST node always >>>>> comes from somewhere >>>> >>>> I agree with this observation. Perhaps SrcSpan is a bad name, and SrcProvenance is better. We could even imagine using the new HasCallStack feature to track where generated code comes from (perhaps only in DEBUG compilers). Do we need to do this today? I'm not sure there's a crying need. But philosophically, we are able to attach a provenance to every slice of AST, so there's really no reason for uninformative locations. >>>> >>>>> My concrete proposal: let's just put SrcSpan in the extension fields >>>>> of each node >>>> >>>> I support this counter-proposal. Perhaps if it required writing loads of extra type instances, I wouldn't be as much in favor. But we already have to write those instances -- they just change from NoExt to SrcSpan. This seems to solve all the problems nicely, at relatively low cost. 
And, I'm sure it's more efficient at runtime than either the previous ping-pong style or the current scenario, as we can pattern-match on constructors directly, requiring one less pointer-chase or function call. >>>> >>>> One downside of this proposal is that it means that more care will have to be taken when setting the extension field of AST nodes after a pass, making sure to preserve the location. (This isn't really all that different from location-shuffling today.) A quick mock-up shows that record-updates can make this easier: >>>> >>>>> data Phase = Parsed | Renamed >>>>> >>>>> data Exp p = Node (XNode p) Int >>>>> >>>>> type family XNode (p :: Phase) >>>>> type instance XNode p = NodeExt p >>>>> >>>>> data NodeExt p where >>>>> NodeExt :: { flag :: Bool, fvs :: RenamedOnly p String } -> NodeExt p >>>>> >>>>> type family RenamedOnly p t where >>>>> RenamedOnly Parsed _ = () >>>>> RenamedOnly Renamed t = t >>>>> >>>>> example :: Exp Parsed >>>>> example = Node (NodeExt { flag = True, fvs = () }) 5 >>>>> >>>>> rename :: Exp Parsed -> Exp Renamed >>>>> rename (Node ext n) = Node (ext { fvs = "xyz" }) n >>>> >>>> Note that the extension point is a record type that has a field available only after renaming. We can then do a type-changing record update when producing the renamed node, preserving the flag in the code above. What's sad is that, if there were no renamer-only field, we couldn't do a type-changing empty record update as the default case. (Haskell doesn't have empty record updates. Perhaps it should. They would be useful in doing a type-change on a datatype with a phantom index. A clever compiler could even somehow ensure that such a record update is completely compiled away.) In any case, this little example is essentially orthogonal to my points above, and the choice of whether to use records or other structures are completely local to the extension point. I just thought it might make for a nice style. >>>> >>>> Thanks, >>>> Richard -- Adam Gundry, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ From simonpj at microsoft.com Wed Feb 13 09:38:42 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 13 Feb 2019 09:38:42 +0000 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: Interesting. That page says (my emphasis) Quick actions are textual shortcuts for common actions on issues, epics, merge requests, and commits that are usually done by clicking buttons or dropdowns in GitLab’s UI. You can enter these commands while creating a new issue or merge request, or in comments of issues, epics, merge requests, and commits. Each command should be on a separate line in order to be properly detected and executed. Once executed, the commands are removed from the text body and not visible to anyone else. So this may be a useful shortcut, but it’s optional. So that means that this “WIP” thing is part of GitLab’s semantic model. The table says /wip: Toggle the Work In Progress status So it seems that * Each MR has a WIP status * There is some way in the UI to toggle it Do you know what are the semantics of “WIP status”? It’s not just the title! I assume it is /not/ “WIP MRs aren’t merged” because no MR is merged until there are enough approvals /and/ the author clicks “please merge”. Correct? Sorry to be dim. I’m a bit slow to catch up with GitLab. 
I have a “Simon’s GitLab page” here https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/GitLabSPJ I use it to add notes on things I’ve learned. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Matthew | Pickering | Sent: 12 February 2019 16:29 | To: Richard Eisenberg | Cc: GHC developers | Subject: Re: Marge has been stabilised | | Yes there are two others I use frequently. | | /approve - Approves a Merge request | /assign - Assign a user | | and I imagine I will use /label and /relabel frequently when we have | labels. | | It's also useful to know that the ! autocomplete for merge requests | can be filtered by name of MR. The user autocomplete @ can also be | filtered by a user's real name. | | There is a full list here: | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.gitl | ab.com%2Fee%2Fuser%2Fproject%2Fquick_actions.html&data=02%7C01%7Csimonp | j%40microsoft.com%7Cb91996c8302240d8c75c08d691074201%7C72f988bf86f141af91ab | 2d7cd011db47%7C1%7C0%7C636855857693042801&sdata=ngK1tGiNd6ZLGMS34emZYv% | 2B31dHS1nfUQ9StRSLXwcQ%3D&reserved=0 | | Cheers, | | Matt | | On Tue, Feb 12, 2019 at 4:23 PM Richard Eisenberg > | wrote: | > | > | > | > On Feb 12, 2019, at 10:51 AM, Matthew Pickering | > wrote: | > | > This can be quickly toggled on/off by typing the /wip quick command in | > a comment. | > | > | > This is an interesting aside. I understand this to mean: If I make a | comment (the same place that I would write a comment for humans) that | consists solely of "/wip", then instead of posting anything to humans, the | title of my MR changes, either adding "WIP: " or deleting that from the | beginning. | > | > Are there other such pieces of magic? Is there a place they are listed? | > | > Thanks! | > Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haske | ll.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cb91996c8302240d8c75c08d69 | 1074201%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636855857693042801& | ;sdata=ciA051iOt74WyZMrr9HulHik%2BG4la8YyF47sYnbqvzs%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Feb 13 10:09:32 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 13 Feb 2019 10:09:32 +0000 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: As far as I can work out the WIP field is a part of the semantic model but the only way it displays on the UI is in the title. Two other facets of the WIP state are: 1. WIP MRs can't be merged via the UI. (Marge also honours this) 2. The list of MRs can be filtered by the WIP status (which is that I use it for). https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html Your page is very helpful Simon. If you have any more questions then I'll try to answer. Cheers, Matt On Wed, Feb 13, 2019 at 9:38 AM Simon Peyton Jones wrote: > > Interesting. That page says (my emphasis) > > > > Quick actions are textual shortcuts for common actions on issues, epics, merge requests, and commits that are usually done by clicking buttons or dropdowns in GitLab’s UI. 
You can enter these commands while creating a new issue or merge request, or in comments of issues, epics, merge requests, and commits. Each command should be on a separate line in order to be properly detected and executed. Once executed, the commands are removed from the text body and not visible to anyone else. > > > > So this may be a useful shortcut, but it’s optional. > > > > So that means that this “WIP” thing is part of GitLab’s semantic model. The table says > > > > /wip: Toggle the Work In Progress status > > > > So it seems that > > Each MR has a WIP status > There is some way in the UI to toggle it > > > > Do you know what are the semantics of “WIP status”? It’s not just the title! > > > > I assume it is /not/ “WIP MRs aren’t merged” because no MR is merged until there are enough approvals /and/ the author clicks “please merge”. Correct? > > > > Sorry to be dim. I’m a bit slow to catch up with GitLab. > > > > I have a “Simon’s GitLab page” here https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/GitLabSPJ > > > > I use it to add notes on things I’ve learned. > > > > Simon > > > > > > | -----Original Message----- > > | From: ghc-devs On Behalf Of Matthew > > | Pickering > > | Sent: 12 February 2019 16:29 > > | To: Richard Eisenberg > > | Cc: GHC developers > > | Subject: Re: Marge has been stabilised > > | > > | Yes there are two others I use frequently. > > | > > | /approve - Approves a Merge request > > | /assign - Assign a user > > | > > | and I imagine I will use /label and /relabel frequently when we have > > | labels. > > | > > | It's also useful to know that the ! autocomplete for merge requests > > | can be filtered by name of MR. The user autocomplete @ can also be > > | filtered by a user's real name. > > | > > | There is a full list here: > > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.gitl > > | ab.com%2Fee%2Fuser%2Fproject%2Fquick_actions.html&data=02%7C01%7Csimonp > > | j%40microsoft.com%7Cb91996c8302240d8c75c08d691074201%7C72f988bf86f141af91ab > > | 2d7cd011db47%7C1%7C0%7C636855857693042801&sdata=ngK1tGiNd6ZLGMS34emZYv% > > | 2B31dHS1nfUQ9StRSLXwcQ%3D&reserved=0 > > | > > | Cheers, > > | > > | Matt > > | > > | On Tue, Feb 12, 2019 at 4:23 PM Richard Eisenberg > > | wrote: > > | > > > | > > > | > > > | > On Feb 12, 2019, at 10:51 AM, Matthew Pickering > > | wrote: > > | > > > | > This can be quickly toggled on/off by typing the /wip quick command in > > | > a comment. > > | > > > | > > > | > This is an interesting aside. I understand this to mean: If I make a > > | comment (the same place that I would write a comment for humans) that > > | consists solely of "/wip", then instead of posting anything to humans, the > > | title of my MR changes, either adding "WIP: " or deleting that from the > > | beginning. > > | > > > | > Are there other such pieces of magic? Is there a place they are listed? > > | > > > | > Thanks! 
> > | > Richard > > | _______________________________________________ > > | ghc-devs mailing list > > | ghc-devs at haskell.org > > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haske > > | ll.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cb91996c8302240d8c75c08d69 > > | 1074201%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636855857693042801& > > | ;sdata=ciA051iOt74WyZMrr9HulHik%2BG4la8YyF47sYnbqvzs%3D&reserved=0 From ryan.gl.scott at gmail.com Wed Feb 13 10:59:13 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Wed, 13 Feb 2019 05:59:13 -0500 Subject: TTG: Handling Source Locations Message-ID: > Yes, I have reported it while back. I don't know of the progress in fixing this. Reported what? #15884? [1] You do realize that there is a very simple workaround for that issue, right? Instead of writing this, which is subject to the pattern-guard completeness issues observed in #15753: f :: Maybe a -> Bool f (id->Nothing) = False f (id->(Just _)) = True You can instead write this: f :: Maybe a -> Bool f (id -> x) = case x of Nothing -> False Just _ -> True This will get proper coverage checking, which means that this technique could be used to remove all of the panicking catch-all cases brought about by dL view patterns. Ryan S. ----- [1] https://ghc.haskell.org/trac/ghc/ticket/15884 [2] https://ghc.haskell.org/trac/ghc/ticket/15753 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Wed Feb 13 11:06:42 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Wed, 13 Feb 2019 06:06:42 -0500 Subject: TTG: Handling Source Locations Message-ID: Yes, I agree. This will require sprinkling the codebase with EmptyCase due to [1], but that's still a sight better than calling `panic`. After GHC 8.10 is released (and the minimum version of GHC that HEAD supports is 8.8), we can even remove these empty cases by making the empty data type fields strict (see [2]). Ryan S. ----- [1] https://ghc.haskell.org/trac/ghc/ticket/15247#comment:4 [2] https://ghc.haskell.org/trac/ghc/ticket/15305 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sh.najd at gmail.com Wed Feb 13 13:00:58 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Wed, 13 Feb 2019 14:00:58 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: >is there any plan to get rid of all those panics? There are two sorts of panics related to TTG: the ones due to #15247 (i.e. unused extension constructors), and the ones due to #15884 (i.e. issues with view patterns). About the former, I believe we all agree. Moreover, using Solution A discussed above, there will be way less unused extension constructors anyway: HsSyn types will use their extension constructors for the location wrapper constructor. About the latter, until #15247 is fixed, we can do rewrites as Ryan suggests. Hopefully, there will also be less of such panics around after making the code idiomatic to match Solution A discussed above. /Shayan On Wed, 13 Feb 2019 at 12:07, Ryan Scott wrote: > > Yes, I agree. This will require sprinkling the codebase with EmptyCase due to [1], but that's still a sight better than calling `panic`. After GHC 8.10 is released (and the minimum version of GHC that HEAD supports is 8.8), we can even remove these empty cases by making the empty data type fields strict (see [2]). > > Ryan S. 
> ----- > [1] https://ghc.haskell.org/trac/ghc/ticket/15247#comment:4 > [2] https://ghc.haskell.org/trac/ghc/ticket/15305 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From sh.najd at gmail.com Wed Feb 13 13:10:19 2019 From: sh.najd at gmail.com (Shayan Najd) Date: Wed, 13 Feb 2019 14:10:19 +0100 Subject: TTG: Handling Source Locations In-Reply-To: References: Message-ID: * "About the latter, until #15247 is fixed" ---> "About the latter, until #15884 is fixed" On Wed, 13 Feb 2019 at 14:00, Shayan Najd wrote: > > >is there any plan to get rid of all those panics? > > There are two sorts of panics related to TTG: the ones due to #15247 > (i.e. unused extension constructors), and the ones due to #15884 (i.e. > issues with view patterns). > > About the former, I believe we all agree. Moreover, using Solution A > discussed above, there will be way less unused extension constructors > anyway: HsSyn types will use their extension constructors for the > location wrapper constructor. > About the latter, until #15247 is fixed, we can do rewrites as Ryan > suggests. Hopefully, there will also be less of such panics around > after making the code idiomatic to match Solution A discussed above. > > /Shayan > > On Wed, 13 Feb 2019 at 12:07, Ryan Scott wrote: > > > > Yes, I agree. This will require sprinkling the codebase with EmptyCase due to [1], but that's still a sight better than calling `panic`. After GHC 8.10 is released (and the minimum version of GHC that HEAD supports is 8.8), we can even remove these empty cases by making the empty data type fields strict (see [2]). > > > > Ryan S. > > ----- > > [1] https://ghc.haskell.org/trac/ghc/ticket/15247#comment:4 > > [2] https://ghc.haskell.org/trac/ghc/ticket/15305 > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Feb 13 16:03:21 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 13 Feb 2019 16:03:21 +0000 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: | As far as I can work out the WIP field is a part of the semantic model | but the only way it displays on the UI is in the title. If you want to change the status using the UI (not via a "quick action") do you literally edit the title? That is, the semantics is driven from the title string itself? Or do you do something else in the UI? S | -----Original Message----- | From: Matthew Pickering | Sent: 13 February 2019 10:10 | To: Simon Peyton Jones | Cc: Richard Eisenberg ; GHC developers | Subject: Re: Marge has been stabilised | | As far as I can work out the WIP field is a part of the semantic model | but the only way it displays on the UI is in the title. | | Two other facets of the WIP state are: | | 1. WIP MRs can't be merged via the UI. (Marge also honours this) 2. The | list of MRs can be filtered by the WIP status (which is that I use it | for). 
| | https://docs.gitlab.com/ee/user/project/merge_requests/work_in_progress_merge_requests.html | | Your page is very helpful Simon. If you have any more questions then I'll | try to answer. | | Cheers, | | Matt
From matthewtpickering at gmail.com  Wed Feb 13 16:06:29 2019
From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 13 Feb 2019 16:06:29 +0000 Subject: Marge has been stabilised In-Reply-To: References: <8C942526-3776-4BB6-A220-97E8104538C2@cs.brynmawr.edu> <152A95C2-A681-4E71-A645-68EF721C0972@cs.brynmawr.edu> Message-ID: Yes, you have to literally edit the string. I know this is very confusing, but many things about GitLab can't be explained with normal logic. Cheers, Matt On Wed, Feb 13, 2019 at 4:03 PM Simon Peyton Jones wrote: > > | As far as I can work out the WIP field is a part of the semantic model > | but the only way it displays on the UI is in the title. > > If you want to change the status using the UI (not via a "quick action") do you literally edit the title? That is, the semantics is driven from the title string itself? Or do you do something else in the UI? > > S
From simonpj at microsoft.com  Thu Feb 14 09:38:08 2019
From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Feb 2019 09:38:08 +0000 Subject: GitLab commenting Message-ID: Friends In reviewing MR !361, I wanted to point out that a Note on line 1481 of a file needed rewriting. But the patch only modified lines around 236. How can I get it to display line 1481? (Or, more simply, just display the whole file?) I can get it to show 10 more lines at a time by clicking the little grey “…” symbols. But getting 1000 lines down would take 100 clicks – hardly sensible. Moreover, what if I want to comment on a file that isn't in the patch at all? Thanks Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From carter.schonwald at gmail.com  Thu Feb 14 15:00:15 2019
From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 14 Feb 2019 10:00:15 -0500 Subject: Performance of pattern synonyms In-Reply-To: <3D54BE35-2C06-498B-82B7-0A124CFAF56D@mq.edu.au> References: <7C952812-6E94-452E-B7A6-8F151CC31FF6@cs.brynmawr.edu> <3D54BE35-2C06-498B-82B7-0A124CFAF56D@mq.edu.au> Message-ID: Hey Matthew, One dimension of analysis that would be instructive would be to characterize the differences in Core / STG for these different versions. Also: am I correct in believing that these are all the exact same algorithm in terms of representation, or am I overlooking some differences between the 3 different codes? On Mon, Feb 11, 2019 at 4:07 PM Matthew Roberts wrote: > My apologies, > > The link to the source was broken by some repo work - I have fixed it and > it should be stable now. This page was intended just to be a way of > showing the results to my collaborators, not a full explanation that anyone > can follow, but I thought the graphs at least show off what I am seeing. > > Regardless, it is all there in the code and hopefully not too obtuse. I > can improve the discussion on the page if enough people are interested :) > > Matt > > On 12 Feb 2019, at 5:14 AM, Carter Schonwald > wrote: > > I'm looking at these links, but i'm actually having a hard time finding > the actual different definitions of this microbenchmark... > > On Mon, Feb 11, 2019 at 9:22 AM Richard Eisenberg > wrote: > >> >> >> On Feb 11, 2019, at 8:55 AM, Simon Peyton Jones >> wrote: >> >> What exactly is “the alternative compile-time implementation”? >> >> >> In my response, I interpreted this to be macro-expansion, the alternative >> we discuss in the paper. The paper includes a nice discussion of how the >> semantics differs between what we currently have and macro-expansion. >> >> Richard >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From ryan.gl.scott at gmail.com  Thu Feb 14 15:34:52 2019
From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 14 Feb 2019 10:34:52 -0500 Subject: scopedSort and kind variable left-biasing Message-ID: Consider this function: f :: Proxy (a :: j) -> Proxy (b :: k) If you just collect the free type variables of `f`'s type in left-to-right order, you'd be left with [a,j,b,k]. But the type of `f` is not `forall (a :: j) j (b :: k) k. Proxy a -> Proxy b`, as that would be ill scoped. `j` must come before `a`, since `j` appears in `a`'s kind, and similarly, `k` must come before `b`. Fortunately, GHC is quite smart about sorting free variables such that they respect dependency order. If you ask GHCi what the type of `f` is (with -fprint-explicit-foralls enabled), it will tell you this: λ> :type +v f f :: forall j k (a :: j) (b :: k). Proxy a -> Proxy b As expected, `j` appears before `a`, and `k` appears before `b`. In a different context, I've been trying to implement a type variable sorting algorithm similar to the one that GHC is using. My previous understanding was that the entirety of this sorting algorithm was implemented in `Type.scopedSort`.
To test my understanding, I decided to write a program using the GHC API which directly uses `scopedSort` on the example above:

    main :: IO ()
    main = do
      let tv :: String -> Int -> Type -> TyVar
          tv n uniq ty = mkTyVar (mkSystemName (mkUniqueGrimily uniq) (mkTyVarOcc n)) ty
          j = tv "j" 0 liftedTypeKind
          a = tv "a" 1 (TyVarTy j)
          k = tv "k" 2 liftedTypeKind
          b = tv "b" 3 (TyVarTy k)
          sorted = scopedSort [a, j, b, k]
      putStrLn $ showSDocUnsafe $ ppr sorted

To my surprise, however, running this program does /not/ give the answer [j,k,a,b], like what :type reported:

    λ> main
    [j_0, a_1, k_2, b_3]

Instead, it gives the answer [j,a,k,b]! Strictly speaking, this answer meets the specification of ScopedSort, since it respects dependency order and preserves the left-to-right ordering of variables that don't depend on each other (i.e., `j` appears to the left of `k`, and `a` appears to the left of `b`). But it's noticeably different than what :type reports. The order that :type reports, [j,k,a,b], appears to bias kind variables to the left such that all kind variables (`j` and `k`) appear before any type variables (`a` and `b`). From what I can tell, scopedSort isn't the full story here. That is, something else appears to be left-biasing the kind variables. My question is: which part of GHC is doing this left-biasing? Ryan S.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From simonpj at microsoft.com  Thu Feb 14 17:46:53 2019
From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Feb 2019 17:46:53 +0000 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: See Note [Kind and type-variable binders] in RnTypes, and Note [Ordering of implicit variables]. And the data type FreeKiTyVars. But NB: that in https://gitlab.haskell.org/ghc/ghc/merge_requests/361, I argue that with this patch we can sweep all this away. If we did, we’d probably end up with [j,a,k,b]. Perhaps that’s an ergonomic reason for retaining the current rather cumbersome code. (Maybe it could be simplified.) Simon From: ghc-devs On Behalf Of Ryan Scott Sent: 14 February 2019 15:35 To: ghc-devs at haskell.org Subject: scopedSort and kind variable left-biasing Consider this function: f :: Proxy (a :: j) -> Proxy (b :: k) If you just collect the free type variables of `f`'s type in left-to-right order, you'd be left with [a,j,b,k]. But the type of `f` is not `forall (a :: j) j (b :: k) k. Proxy a -> Proxy b`, as that would be ill scoped. `j` must come before `a`, since `j` appears in `a`'s kind, and similarly, `k` must come before `b`. Fortunately, GHC is quite smart about sorting free variables such that they respect dependency order. If you ask GHCi what the type of `f` is (with -fprint-explicit-foralls enabled), it will tell you this: λ> :type +v f f :: forall j k (a :: j) (b :: k). Proxy a -> Proxy b As expected, `j` appears before `a`, and `k` appears before `b`. In a different context, I've been trying to implement a type variable sorting algorithm similar to the one that GHC is using. My previous understanding was that the entirely of this sorting algorithm was implemented in `Type.scopedSort`.
To test my understanding, I decided to write a program using the GHC API which directly uses `scopedSort` on the example above: main :: IO () main = do let tv :: String -> Int -> Type -> TyVar tv n uniq ty = mkTyVar (mkSystemName (mkUniqueGrimily uniq) (mkTyVarOcc n)) ty j = tv "j" 0 liftedTypeKind a = tv "a" 1 (TyVarTy j) k = tv "k" 2 liftedTypeKind b = tv "b" 3 (TyVarTy k) sorted = scopedSort [a, j, b, k] putStrLn $ showSDocUnsafe $ ppr sorted To my surprise, however, running this program does /not/ give the answer [j,k,a,b], like what :type reported: λ> main [j_0, a_1, k_2, b_3] Instead, it gives the answer [j,a,k,b]! Strictly speaking, this answer meets the specification of ScopedSort, since it respects dependency order and preserves the left-to-right ordering of variables that don't depend on each other (i.e., `j` appears to the left of `k`, and `a` appears to the left of `b`). But it's noticeably different that what :type reports. The order that :type reports, [j,k,a,b], appears to bias kind variables to the left such that all kind variables (`j` and `k`) appear before any type variables (`a` and `b`). From what I can tell, scopedSort isn't the full story here. That is, something else appears to be left-biasing the kind variables. My question is: which part of GHC is doing this left-biasing? Ryan S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Thu Feb 14 18:31:28 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 14 Feb 2019 13:31:28 -0500 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: Ah, I somehow forgot all about FreeKiTyVars. It turns out that the `freeKiTyVarsAllVars` function [1] is exactly what drives this behavior: freeKiTyVarsAllVars :: FreeKiTyVars -> [Located RdrName] freeKiTyVarsAllVars (FKTV { fktv_kis = kvs, fktv_tys = tvs }) = kvs ++ tvs That's about as straightforward as it gets. Thanks! Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/blob/5c1f268e2744fab2d36e64c163858995451d7095/compiler/rename/RnTypes.hs#L1604-1605 On Thu, Feb 14, 2019 at 12:46 PM Simon Peyton Jones wrote: > See Note [Kind and type-variable binders] in RnTypes, and Note [Ordering > of implicit variables]. > > And the data type FreeKiTyVars. > > > > But NB: that in https://gitlab.haskell.org/ghc/ghc/merge_requests/361, I > argue that with this patch we can sweep all this away. > > > > If we did, we’d probably end up with [j,a,k,b]. > > > > Perhaps that’s an ergonomic reason for retaining the current rather > cumbersome code. (Maybe it could be simplified.) > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Ryan Scott > *Sent:* 14 February 2019 15:35 > *To:* ghc-devs at haskell.org > *Subject:* scopedSort and kind variable left-biasing > > > > Consider this function: > > f :: Proxy (a :: j) -> Proxy (b :: k) > > If you just collect the free type variables of `f`'s type in left-to-right > order, you'd be left with [a,j,b,k]. But the type of `f` is not `forall (a > :: j) j (b :: k) k. Proxy a -> Proxy b`, as that would be ill scoped. `j` > must come before `a`, since `j` appears in `a`'s kind, and similarly, `k` > must come before `b`. > > Fortunately, GHC is quite smart about sorting free variables such that > they respect dependency order. If you ask GHCi what the type of `f` is > (with -fprint-explicit-foralls enabled), it will tell you this: > > λ> :type +v f > f :: forall j k (a :: j) (b :: k). 
Proxy a -> Proxy b > > As expected, `j` appears before `a`, and `k` appears before `b`. > > In a different context, I've been trying to implement a type variable > sorting algorithm similar to the one that GHC is using. My previous > understanding was that the entirely of this sorting algorithm was > implemented in `Type.scopedSort`. To test my understanding, I decided to > write a program using the GHC API which directly uses `scopedSort` on the > example above: > > main :: IO () > main = do > let tv :: String -> Int -> Type -> TyVar > tv n uniq ty = mkTyVar (mkSystemName (mkUniqueGrimily uniq) > (mkTyVarOcc n)) ty > j = tv "j" 0 liftedTypeKind > a = tv "a" 1 (TyVarTy j) > k = tv "k" 2 liftedTypeKind > b = tv "b" 3 (TyVarTy k) > sorted = scopedSort [a, j, b, k] > putStrLn $ showSDocUnsafe $ ppr sorted > > To my surprise, however, running this program does /not/ give the answer > [j,k,a,b], like what :type reported: > > λ> main > [j_0, a_1, k_2, b_3] > > Instead, it gives the answer [j,a,k,b]! Strictly speaking, this answer > meets the specification of ScopedSort, since it respects dependency order > and preserves the left-to-right ordering of variables that don't depend on > each other (i.e., `j` appears to the left of `k`, and `a` appears to the > left of `b`). But it's noticeably different that what :type reports. The > order that :type reports, [j,k,a,b], appears to bias kind variables to the > left such that all kind variables (`j` and `k`) appear before any type > variables (`a` and `b`). > > From what I can tell, scopedSort isn't the full story here. That is, > something else appears to be left-biasing the kind variables. My question > is: which part of GHC is doing this left-biasing? > > > > Ryan S. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Thu Feb 14 19:04:11 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 14 Feb 2019 14:04:11 -0500 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: > On Feb 14, 2019, at 10:34 AM, Ryan Scott wrote: > > the answer [j,a,k,b]! Strictly speaking, this answer meets the specification of ScopedSort, I wish to point out that the specification of ScopedSort is very tight: it says exactly what we should get, given an input. This is important, because the behavior of ScopedSort is user-visible and must be stable. Another way of saying this: if we got [j,k,a,b], that would be wrong. Arguably, GHC's behavior w.r.t. type and kind variables is wrong because of its habit of putting kind variables first. Once we treat kind and type variables identically, I want to just rely on ScopedSort. That is, GHC should infer the order [j,a,k,b]. While I understand the aesthetic appeal of moving k before a, I think it's simpler and more uniform just to use ScopedSort. No exceptions! Richard From ryan.gl.scott at gmail.com Thu Feb 14 19:32:15 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 14 Feb 2019 14:32:15 -0500 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: Ah, I read over this part [1] of the ScopedSort specification too quickly: * If variable v at the cursor is depended on by any earlier variable w, move v immediately before the leftmost such w. I glossed over the "immediately" part, so I was under the impression that as long as v appeared *somewhere* to the left of the leftmost w, then everything was up to spec. But no, v can appear in one (and only one) place. 
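(Spelling out my earlier example under that reading, with a :: j and b :: k; this is just the cursor pass over [a, j, b, k] as I now understand the spec, nothing new:

    start:       [a, j, b, k]
    cursor at a: no earlier variables, so nothing moves
    cursor at j: a (earlier) depends on j, so j moves immediately before a, giving [j, a, b, k]
    cursor at b: nothing earlier depends on b, so nothing moves
    cursor at k: b (earlier) depends on k, so k moves immediately before b, giving [j, a, k, b]

which is exactly the [j, a, k, b] my program printed, not the [j, k, a, b] I had expected.)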
I was also looking in Note [ScopedSort], not the users' guide. Now that I look at the users' guide, it specifically carves out an exception for variables that appear in kind annotations [2]: - If the type signature includes any kind annotations (either on variable binders or as annotations on types), any variables used in kind annotations come before any variables never used in kind annotations. This rule is not recursive: if there is an annotation within an annotation, then the variables used therein are on equal footing. Examples:: f :: Proxy (a :: k) -> Proxy (b :: j) -> () -- as if f :: forall k j a b. ... g :: Proxy (b :: j) -> Proxy (a :: (Proxy :: (k -> Type) -> Type) Proxy) -> () -- as if g :: forall j k b a. ... -- NB: k is in a kind annotation within a kind annotation I can see the appeal behind dropping this exception, both from a specification and an implementation point of view. It'll require a massive breaking change, alas, but it just might be worth it. Unfortunately, the proposal for merging type and kind variables in `forall`s [3] makes no mention of this detail—I wonder if it is worth its own proposal. Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/blob/5c1f268e2744fab2d36e64c163858995451d7095/compiler/types/Type.hs#L2110-2111 [2] https://gitlab.haskell.org/ghc/ghc/blob/5c1f268e2744fab2d36e64c163858995451d7095/docs/users_guide/glasgow_exts.rst#L10815-10826 [3] https://github.com/ghc-proposals/ghc-proposals/blob/c3142c4a6f6abb90e53c2cac22b285991d0d0b3f/proposals/0024-no-kind-vars.rst On Thu, Feb 14, 2019 at 2:04 PM Richard Eisenberg wrote: > > > > On Feb 14, 2019, at 10:34 AM, Ryan Scott > wrote: > > > > the answer [j,a,k,b]! Strictly speaking, this answer meets the > specification of ScopedSort, > > I wish to point out that the specification of ScopedSort is very tight: it > says exactly what we should get, given an input. This is important, because > the behavior of ScopedSort is user-visible and must be stable. Another way > of saying this: if we got [j,k,a,b], that would be wrong. Arguably, GHC's > behavior w.r.t. type and kind variables is wrong because of its habit of > putting kind variables first. > > Once we treat kind and type variables identically, I want to just rely on > ScopedSort. That is, GHC should infer the order [j,a,k,b]. While I > understand the aesthetic appeal of moving k before a, I think it's simpler > and more uniform just to use ScopedSort. No exceptions! > > Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Thu Feb 14 20:14:44 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 14 Feb 2019 15:14:44 -0500 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: <574CBCBF-5514-4C17-ACA4-B6763323DC72@cs.brynmawr.edu> > On Feb 14, 2019, at 2:32 PM, Ryan Scott wrote: > > I can see the appeal behind dropping this exception, both from a specification and an implementation point of view. It'll require a massive breaking change, alas, but it just might be worth it. The "breaking change" is just for people using visible type application with kind-polymorphic functions where there is more than one possible well-scoped ordering for type variables and the new scheme differs from the old scheme, right? Perhaps you were just being emphatic, but I don't think this would be "massive". :) I say: go for it (without a proposal). 
Richard From simonpj at microsoft.com Thu Feb 14 22:30:17 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Feb 2019 22:30:17 +0000 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: What do you (or anyone else) think about sweeping all that stuff away? See my comments on https://gitlab.haskell.org/ghc/ghc/merge_requests/361 Simon From: Ryan Scott Sent: 14 February 2019 18:31 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: scopedSort and kind variable left-biasing Ah, I somehow forgot all about FreeKiTyVars. It turns out that the `freeKiTyVarsAllVars` function [1] is exactly what drives this behavior: freeKiTyVarsAllVars :: FreeKiTyVars -> [Located RdrName] freeKiTyVarsAllVars (FKTV { fktv_kis = kvs, fktv_tys = tvs }) = kvs ++ tvs That's about as straightforward as it gets. Thanks! Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/blob/5c1f268e2744fab2d36e64c163858995451d7095/compiler/rename/RnTypes.hs#L1604-1605 On Thu, Feb 14, 2019 at 12:46 PM Simon Peyton Jones > wrote: See Note [Kind and type-variable binders] in RnTypes, and Note [Ordering of implicit variables]. And the data type FreeKiTyVars. But NB: that in https://gitlab.haskell.org/ghc/ghc/merge_requests/361, I argue that with this patch we can sweep all this away. If we did, we’d probably end up with [j,a,k,b]. Perhaps that’s an ergonomic reason for retaining the current rather cumbersome code. (Maybe it could be simplified.) Simon From: ghc-devs > On Behalf Of Ryan Scott Sent: 14 February 2019 15:35 To: ghc-devs at haskell.org Subject: scopedSort and kind variable left-biasing Consider this function: f :: Proxy (a :: j) -> Proxy (b :: k) If you just collect the free type variables of `f`'s type in left-to-right order, you'd be left with [a,j,b,k]. But the type of `f` is not `forall (a :: j) j (b :: k) k. Proxy a -> Proxy b`, as that would be ill scoped. `j` must come before `a`, since `j` appears in `a`'s kind, and similarly, `k` must come before `b`. Fortunately, GHC is quite smart about sorting free variables such that they respect dependency order. If you ask GHCi what the type of `f` is (with -fprint-explicit-foralls enabled), it will tell you this: λ> :type +v f f :: forall j k (a :: j) (b :: k). Proxy a -> Proxy b As expected, `j` appears before `a`, and `k` appears before `b`. In a different context, I've been trying to implement a type variable sorting algorithm similar to the one that GHC is using. My previous understanding was that the entirely of this sorting algorithm was implemented in `Type.scopedSort`. To test my understanding, I decided to write a program using the GHC API which directly uses `scopedSort` on the example above: main :: IO () main = do let tv :: String -> Int -> Type -> TyVar tv n uniq ty = mkTyVar (mkSystemName (mkUniqueGrimily uniq) (mkTyVarOcc n)) ty j = tv "j" 0 liftedTypeKind a = tv "a" 1 (TyVarTy j) k = tv "k" 2 liftedTypeKind b = tv "b" 3 (TyVarTy k) sorted = scopedSort [a, j, b, k] putStrLn $ showSDocUnsafe $ ppr sorted To my surprise, however, running this program does /not/ give the answer [j,k,a,b], like what :type reported: λ> main [j_0, a_1, k_2, b_3] Instead, it gives the answer [j,a,k,b]! Strictly speaking, this answer meets the specification of ScopedSort, since it respects dependency order and preserves the left-to-right ordering of variables that don't depend on each other (i.e., `j` appears to the left of `k`, and `a` appears to the left of `b`). But it's noticeably different that what :type reports. 
The order that :type reports, [j,k,a,b], appears to bias kind variables to the left such that all kind variables (`j` and `k`) appear before any type variables (`a` and `b`). From what I can tell, scopedSort isn't the full story here. That is, something else appears to be left-biasing the kind variables. My question is: which part of GHC is doing this left-biasing? Ryan S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Feb 14 22:33:47 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Feb 2019 22:33:47 +0000 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: | Once we treat kind and type variables identically, I want to just rely on | ScopedSort. That is, GHC should infer the order [j,a,k,b]. While I | understand the aesthetic appeal of moving k before a, I think it's simpler | and more uniform just to use ScopedSort. No exceptions! I agree. And then we can dispose entirely of the FreeKiTyVars stuff in RnTypes. Vlad: doing so would fit very neatly into https://gitlab.haskell.org/ghc/ghc/merge_requests/361 Simon | -----Original Message----- | From: ghc-devs On Behalf Of Richard | Eisenberg | Sent: 14 February 2019 19:04 | To: Ryan Scott | Cc: ghc-devs at haskell.org | Subject: Re: scopedSort and kind variable left-biasing | | | | > On Feb 14, 2019, at 10:34 AM, Ryan Scott wrote: | > | > the answer [j,a,k,b]! Strictly speaking, this answer meets the | specification of ScopedSort, | | I wish to point out that the specification of ScopedSort is very tight: it | says exactly what we should get, given an input. This is important, because | the behavior of ScopedSort is user-visible and must be stable. Another way | of saying this: if we got [j,k,a,b], that would be wrong. Arguably, GHC's | behavior w.r.t. type and kind variables is wrong because of its habit of | putting kind variables first. | | Once we treat kind and type variables identically, I want to just rely on | ScopedSort. That is, GHC should infer the order [j,a,k,b]. While I | understand the aesthetic appeal of moving k before a, I think it's simpler | and more uniform just to use ScopedSort. No exceptions! | | Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haske | ll.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cfe4658679a9d474f232e08d69 | 2af3ab0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636857678634719246& | ;sdata=XhEw8LYBtqVxn%2BjpeHxus5%2FRTrUVlCU%2FRJrIYUsLpDY%3D&reserved=0 From rae at cs.brynmawr.edu Thu Feb 14 23:08:58 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 14 Feb 2019 18:08:58 -0500 Subject: scopedSort and kind variable left-biasing In-Reply-To: References: Message-ID: <7F5A9362-9C6A-4C3F-A52A-E8AAEE65E2F7@cs.brynmawr.edu> Yes -- sweep it away! > On Feb 14, 2019, at 5:30 PM, Simon Peyton Jones via ghc-devs wrote: > > What do you (or anyone else) think about sweeping all that stuff away? See my comments on > https://gitlab.haskell.org/ghc/ghc/merge_requests/361 > > Simon > > From: Ryan Scott > Sent: 14 February 2019 18:31 > To: Simon Peyton Jones > Cc: ghc-devs at haskell.org > Subject: Re: scopedSort and kind variable left-biasing > > Ah, I somehow forgot all about FreeKiTyVars. 
It turns out that the `freeKiTyVarsAllVars` function [1] is exactly what drives this behavior: > > freeKiTyVarsAllVars :: FreeKiTyVars -> [Located RdrName] > freeKiTyVarsAllVars (FKTV { fktv_kis = kvs, fktv_tys = tvs }) = kvs ++ tvs > > That's about as straightforward as it gets. Thanks! > > Ryan S. > ----- > [1] https://gitlab.haskell.org/ghc/ghc/blob/5c1f268e2744fab2d36e64c163858995451d7095/compiler/rename/RnTypes.hs#L1604-1605 > > On Thu, Feb 14, 2019 at 12:46 PM Simon Peyton Jones > wrote: > See Note [Kind and type-variable binders] in RnTypes, and Note [Ordering of implicit variables]. > And the data type FreeKiTyVars. > > But NB: that in https://gitlab.haskell.org/ghc/ghc/merge_requests/361 , I argue that with this patch we can sweep all this away. > > If we did, we’d probably end up with [j,a,k,b]. > > Perhaps that’s an ergonomic reason for retaining the current rather cumbersome code. (Maybe it could be simplified.) > > Simon > > From: ghc-devs > On Behalf Of Ryan Scott > Sent: 14 February 2019 15:35 > To: ghc-devs at haskell.org > Subject: scopedSort and kind variable left-biasing > > Consider this function: > > f :: Proxy (a :: j) -> Proxy (b :: k) > > If you just collect the free type variables of `f`'s type in left-to-right order, you'd be left with [a,j,b,k]. But the type of `f` is not `forall (a :: j) j (b :: k) k. Proxy a -> Proxy b`, as that would be ill scoped. `j` must come before `a`, since `j` appears in `a`'s kind, and similarly, `k` must come before `b`. > > Fortunately, GHC is quite smart about sorting free variables such that they respect dependency order. If you ask GHCi what the type of `f` is (with -fprint-explicit-foralls enabled), it will tell you this: > > λ> :type +v f > f :: forall j k (a :: j) (b :: k). Proxy a -> Proxy b > > As expected, `j` appears before `a`, and `k` appears before `b`. > > In a different context, I've been trying to implement a type variable sorting algorithm similar to the one that GHC is using. My previous understanding was that the entirely of this sorting algorithm was implemented in `Type.scopedSort`. To test my understanding, I decided to write a program using the GHC API which directly uses `scopedSort` on the example above: > > main :: IO () > main = do > let tv :: String -> Int -> Type -> TyVar > tv n uniq ty = mkTyVar (mkSystemName (mkUniqueGrimily uniq) (mkTyVarOcc n)) ty > j = tv "j" 0 liftedTypeKind > a = tv "a" 1 (TyVarTy j) > k = tv "k" 2 liftedTypeKind > b = tv "b" 3 (TyVarTy k) > sorted = scopedSort [a, j, b, k] > putStrLn $ showSDocUnsafe $ ppr sorted > > To my surprise, however, running this program does /not/ give the answer [j,k,a,b], like what :type reported: > > λ> main > [j_0, a_1, k_2, b_3] > > Instead, it gives the answer [j,a,k,b]! Strictly speaking, this answer meets the specification of ScopedSort, since it respects dependency order and preserves the left-to-right ordering of variables that don't depend on each other (i.e., `j` appears to the left of `k`, and `a` appears to the left of `b`). But it's noticeably different that what :type reports. The order that :type reports, [j,k,a,b], appears to bias kind variables to the left such that all kind variables (`j` and `k`) appear before any type variables (`a` and `b`). > > From what I can tell, scopedSort isn't the full story here. That is, something else appears to be left-biasing the kind variables. My question is: which part of GHC is doing this left-biasing? > > > > Ryan S. 
> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From arnaud.spiwack at tweag.io  Fri Feb 15 08:30:17 2019
From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Fri, 15 Feb 2019 09:30:17 +0100 Subject: Name of units in plugins Message-ID: Dear all, (first, I don't know if this is the best place for questions/discussions about the GHC API; if not, let me know where to redirect the conversation). I've been writing a plugin that substitutes calls to a function with calls to another (it's a plugin reimplementation of the assert feature of GHC). And to be able to point at the names of these two functions, I need to construct a name (well, and OccName) made of three parts: unit id, module name, definition name. This question is about the unit name. Currently I simply use stringToUnitId. But the real name of my unit has a magic string in it (see https://github.com/aspiwack/assert-plugin/blob/a538d72581bae43ebf44c332e19c5ffdd28911df/src/With/Assertions.hs#L53 ). It's rather unpleasant; it seems to change every time the cabal file changes (at least). The assert-explainer plugin uses another approach, only using the module name, then calling findImportedModule ( https://github.com/ocharles/assert-explainer/blob/dc6ea213d4d0576954ec883eeabeafc80c5ca18f/plugin/AssertExplainer.hs#L71-L81 ). This is much more robust to changes, but is also less precise (technically, there can be several imported modules with the same name, with package-qualified imports). So, the question is: is there a better, recommended way to recover the OccName (or Name!) of a function I defined in the same unit my plugin is defined in? Best, Arnaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From matthewtpickering at gmail.com  Fri Feb 15 08:40:55 2019
From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 15 Feb 2019 08:40:55 +0000 Subject: Name of units in plugins In-Reply-To: References: Message-ID: Did you have a look at the implementation of `findImportedModule`? I think you can use it and set the final argument to `Just "assert-plugin"` so that it only looks for the module in the `assert-plugin` package. Another way people do this is to use a Template Haskell quote and then use `GhcPlugins.thNameToGhcName`, which is probably the most robust way of persisting a name between the two stages. Cheers, Matt On Fri, Feb 15, 2019 at 8:31 AM Spiwack, Arnaud wrote: > > Dear all, > > (first, I don't know if this is the best place for questions/discussions about the GHC API, if not, let me know where to redirect the conversation). > > I've been writing a plugin that substitutes call to a function by calls to another (it's a plugin reimplementation of the assert feature of GHC). And to be able to point at the names of these two functions, I need to construct a name (well, and OccName) made of three parts: unit id, module name, definition name. > > This question is about the unit name. Currently I simply use stringToUnitId. But the real name of my unit has a magic string in it (see https://github.com/aspiwack/assert-plugin/blob/a538d72581bae43ebf44c332e19c5ffdd28911df/src/With/Assertions.hs#L53 ). It's rather unpleasant, it seems to change every time the cabal file change (at least).
> > The assert-explainer plugin uses another approach, only using the module name, then calling findImportedModule ( https://github.com/ocharles/assert-explainer/blob/dc6ea213d4d0576954ec883eeabeafc80c5ca18f/plugin/AssertExplainer.hs#L71-L81 ). > > This is much more robust to changes, but is also less precise (technically, there can be several imported modules with the same name, with package-qualified imports). > > So, the question is: is there a better, recommended way to recover the OccName (or Name!) of a function I defined in the same unit my plugin is defined in. > > Best, > Arnaud > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Fri Feb 15 09:19:02 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Feb 2019 09:19:02 +0000 Subject: typecheck/should_fail/all.T Message-ID: There's a mysterious line 4607 at the top of testsuite/tests/typecheck/should_fail/all.T Should it be there? What does it do? Could it be bad? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Feb 15 10:29:57 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 15 Feb 2019 10:29:57 +0000 Subject: typecheck/should_fail/all.T In-Reply-To: References: Message-ID: You added this mysterious line yourself in commit 2bbdd00c6d70bdc31ff78e2a42b26159c8717856 Author: Simon Peyton Jones Date: Fri May 18 08:43:11 2018 +0100 Orient TyVar/TyVar equalities with deepest on the left I'll make a patch to remove it. Matt On Fri, Feb 15, 2019 at 9:19 AM Simon Peyton Jones via ghc-devs wrote: > > There’s a mysterious line > > 4607 > > at the top of testsuite/tests/typecheck/should_fail/all.T > > Should it be there? What does it do? Could it be bad? > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Fri Feb 15 10:37:35 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Feb 2019 10:37:35 +0000 Subject: typecheck/should_fail/all.T In-Reply-To: References: Message-ID: Ha ha. I'm sure it was entirely accidental. Thanks for fixing S | -----Original Message----- | From: Matthew Pickering | Sent: 15 February 2019 10:30 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: typecheck/should_fail/all.T | | You added this mysterious line yourself in | | commit 2bbdd00c6d70bdc31ff78e2a42b26159c8717856 | Author: Simon Peyton Jones | Date: Fri May 18 08:43:11 2018 +0100 | | Orient TyVar/TyVar equalities with deepest on the left | | I'll make a patch to remove it. | | Matt | | | | On Fri, Feb 15, 2019 at 9:19 AM Simon Peyton Jones via ghc-devs | wrote: | > | > There’s a mysterious line | > | > 4607 | > | > at the top of testsuite/tests/typecheck/should_fail/all.T | > | > Should it be there? What does it do? Could it be bad? 
| > | > Simon | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haske | ll.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C64685ec3b4f54830825d08d69 | 3308fbb%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636858234237293909& | ;sdata=xdftRahah0Aja9oSefxFG7APKopUVUb21fgGOP%2Frluc%3D&reserved=0 From chrisdone at gmail.com Fri Feb 15 12:30:48 2019 From: chrisdone at gmail.com (Christopher Done) Date: Fri, 15 Feb 2019 12:30:48 +0000 Subject: Is this the correct way to get Stg for a module? Message-ID: Hi all, I'm attempting to get the Stg for a module with the intention of interpreting it myself. Here is a self-contained GHC-8.4.3-specific test-case to get STG: https://gist.github.com/chrisdone/08ef9e8447b71c9dadc5bba949cda638#file-printstgghc8_4_3-hs-L47-L76 I'm using the handy helpers GHC.parseModule, GHC.typecheckModule, GHC.desugarModule to get the Core. For getting the STG I copied what is done in HscMain.hs in GHC, right before it generates Cmm: https://github.com/ghc/ghc/blob/ghc-8.4/compiler/main/HscMain.hs#L1312-L1318 Can anyone confirm that the part I've highlighted in my Gist is the correct way to get Stg? I can't really tell whether I missed anything or did something wrong. Cheers! From ryan.gl.scott at gmail.com Fri Feb 15 15:50:57 2019 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Fri, 15 Feb 2019 10:50:57 -0500 Subject: scopedSort and kind variable left-biasing Message-ID: As Vlad notes in [1], getting rid of FreeKiTyVars isn't as simple as it would appear, as we still treat kinds and types differently in other places, such as data type declarations. For instance: data Proxy (a :: k) = Proxy -- k is brought into scope implicitly by `extractDataDefnKindVars` I don't know how to overcome this awkwardness, so if you have suggestions, please comment at [1]. Ryan S. ----- [1] https://gitlab.haskell.org/ghc/ghc/merge_requests/361#note_6709 -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian at leviston.net Fri Feb 15 23:04:28 2019 From: julian at leviston.net (Julian Leviston) Date: Sat, 16 Feb 2019 10:04:28 +1100 Subject: GItLab commenting In-Reply-To: References: Message-ID: > On 14 Feb 2019, at 8:38 pm, Simon Peyton Jones via ghc-devs wrote: > > Friends > > In reviewing MR!361, I wanted to point out that a Note on line 1481 of a file needed rewriting. But the patch only modified lines 236 or so. How can I get it to display line 1481? (Or, more simply, just display the whole file?) > > I can get it to show 10 more lines at a time by clicking the little grey “…” symbols. But getting 1000 lines down would take 100 clicks – hardly sensible. > > Moreover, what if I want to comment on a file that isn’t in the patch at all? > > Thanks > > Simon > Hi Simon, Go to the source code that you want to refer to, (Repository… Files for example, or even within an MR itself)… and you’ll notice if you hover on the LHS of any source code, next to the line numbers, that it has a little link icon. Clicking that will change the URL in the URL bar to refer to that place (you can also shift-click between two lines to create a range if that’s your particular proclivity). You can copy the link from the URL bar, and then paste it into the comment where you want to refer to the line of code in question. 
Warmest regards, Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladislav at serokell.io Sat Feb 16 12:44:17 2019 From: vladislav at serokell.io (Vladislav Zavialov) Date: Sat, 16 Feb 2019 15:44:17 +0300 Subject: Put 'haddock' in the 'ghc' repo Message-ID: Hello devs, There appears to be no good workflow for contributing patches that change both GHC and Haddock. For contributors who have push access to both repositories, it is at least tolerable: 1. create a Haddock branch with the required changes 2. create a GHC branch with the required changes Then wait for the GHC change to get merged to `master`, and 3a. fast-forward the Haddock change to the `ghc-head` branch 3b. in case a fast-forward is impossible, cherry-pick the commit to `ghc-head` and push another commit to GHC `master` to update the Haddock submodule Roundabout, but possible. For contributors who do not have push access to both repositories, each step is much harder, as working with forks implies messing with .gitmodules, which arguably should stay constant. To avoid all this friction, I propose the following principle: * all SCC (strongly connected components) of dependencies must go to the same repo. For example, since GHC depends on Haddock to build documentation, and Haddock depends on GHC, they must go to the same repo. This way, a single commit can update both of them in sync. All the best, Vladislav From julian at leviston.net Sun Feb 17 02:36:40 2019 From: julian at leviston.net (Julian Leviston) Date: Sun, 17 Feb 2019 13:36:40 +1100 Subject: Distributed local dev builds Message-ID: <86D5D29A-8839-4D0F-907F-029B626B5E3A@leviston.net> Hi all, I have several fairly high-spec machines on my network where I usually build GHC, and I was wondering if it was easy/trivial/possible to set up a distributed build where it farmed out some of the compute work to the other machines? I seem to need to build from scratch somewhat often when I switch branches I’m working on. Is this going to be the aim of distributed shake, which hadrian is based on? Thanks! Julian From omeragacan at gmail.com Sun Feb 17 05:35:02 2019 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 17 Feb 2019 08:35:02 +0300 Subject: GitLab forks and submodules In-Reply-To: <87zhs8jukl.fsf@smart-cactus.org> References: <87lg3wmqk0.fsf@smart-cactus.org> <87ftu4ma9p.fsf@smart-cactus.org> <010A181D-D3FC-47D3-9DFD-74C3979F0774@gmail.com> <87zhsblxue.fsf@smart-cactus.org> <1C1BC502-7FE0-4900-9BD1-B4513B186161@gmail.com> <87zhs8jukl.fsf@smart-cactus.org> Message-ID: Sorry for reviving this thread, but this is causing so much trouble for me. I want a fresh clone of a GHC fork. If I clone the fork it doesn't work for reasons mentioned in this thread, however I just realized that it doesn't work even if I fork gitlab/ghc/ghc and then add the fork as a new remote. Here's what I do: - Clone gitlab/ghc/ghc ("origin") - Add gitlab/fork/ghc ("fork", the fork I want to build) - git fetch --all - git checkout fork/branch - git submodule update --init For whatever reason git tries to fetch submodules from "fork" instead of "origin", and I couldn't find any way to tell git to use "origin" for submodule instead. `git submodule sync` does not fix it. I also tried pulling submodules before switching to the fork's branch, thinking that maybe if I initialize submodules with the correct remote when I switch branches it'd fetch them from there. 
The only way I could make this work is by replacing all relative URLs with absolute URLs with this :%s/\.\./https:\/\/gitlab.haskell.org\/ghc/g The argument for relative submodules doesn't make sense to me. Is updating a submodule remote so hard that we want to make it easy at the cost of making lots of other tasks so much harder? To me it makes sense that if you want to work on a submodule you need to update its remote to your fork. Ömer Ben Gamari , 10 Oca 2019 Per, 20:23 tarihinde şunu yazdı: > > Moritz Angermann writes: > > > Alright let me add some example that is really painful with submodules. > > > > Say I have a custom ghc fork angerman/ghc, because I really don't want > > to overload CI with all my stupidity and I *know* I'd forget to mark > > every commit with [skip ci] or something. > > > > Now I need to modify a bunch of submodules as well, say > > - libraries/bytestring > > - libraires/unix > > > > And next I want to have someone else collaborate on this with me, either > > for testing or contributing or what not. > > > > So I'm going to give them the following commands to run: > > > > git clone --recursive https://gitlab.haskell.org/ghc/ghc > > (cd ghc && git remote add angerman https://gitlab.haskell.org/angerman/ghc) > > (cd ghc && git fetch --all) > > (cd ghc/libraries/bytestring && git remote add angerman https://github.com/angerman/bytestring && git fetch --all) > > (cd ghc/libraries/unix && git remote add angerman https://github.com/angerman/unix && git fetch --all) > > (cd ghc && git checkout angerman/awesome/sauce) > > (cd ghc && git submodule update --init --recursive) > > > If you pushed your bytestring and unix changes to your gitlab account > then this wouldn't be necessary. The fact that we use relative paths > would actually work to your advantage. > > My current thinking is that the fix-submodules script run by CI should > do the following for each submodule: > > * If the branch has changed the submodule then do nothing (leaving the > submodule URL as relative; this ensures that a user can push their > submodule changes to their fork of the submodule on GitLab and things > will "just work" > > * If the branch has not changed then rewrite the submodule URL to point > to gitlab.haskell.org/ghc/packages/.... This ensures that CI will work > for contributors making non-submodule changes in their GHC forks. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From andrey.mokhov at newcastle.ac.uk Sun Feb 17 17:52:32 2019 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Sun, 17 Feb 2019 17:52:32 +0000 Subject: Distributed local dev builds References: Message-ID: Hi Julian, Have a look at this MR: https://gitlab.haskell.org/ghc/ghc/merge_requests/317 As soon as it lands, you'll be able to run Hadrian builds with a local cache as follows: hadrian/build --shared=path/to/cache In this mode, build rules are cached, and if you happen to rerun a build rule with unchanged dependencies the resulting files will be copied from the cache instead of executing actual build commands. This should significantly speed up switching between branches. Note however that this does not give you a way to run distributed builds: at the moment it is not possible to offload any build rules to other machines. 
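As a sketch of the intended workflow (the exact flag spelling is whatever the MR finally lands with, and the cache path and branch name below are just placeholders):

$ hadrian/build --flavour=devel2 --shared=../_hadrian-cache
$ git checkout some-other-branch
$ hadrian/build --flavour=devel2 --shared=../_hadrian-cache

The second build should then copy any rule results that are unchanged between the two branches out of the cache instead of rebuilding them.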
Cheers, Andrey ---------------------------------------------------------------------- Message: 1 Date: Sun, 17 Feb 2019 13:36:40 +1100 From: Julian Leviston To: GHC developers Subject: Distributed local dev builds Message-ID: <86D5D29A-8839-4D0F-907F-029B626B5E3A at leviston.net> Content-Type: text/plain; charset=utf-8 Hi all, I have several fairly high-spec machines on my network where I usually build GHC, and I was wondering if it was easy/trivial/possible to set up a distributed build where it farmed out some of the compute work to the other machines? I seem to need to build from scratch somewhat often when I switch branches I'm working on. Is this going to be the aim of distributed shake, which hadrian is based on? Thanks! Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Mon Feb 18 08:09:26 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Mon, 18 Feb 2019 09:09:26 +0100 Subject: Name of units in plugins In-Reply-To: References: Message-ID: Indeed, I missed the package-qualifer in findImportedModule. It does look plausible. If there is no recommended/better way to do this sort of thing, I think I'll go for it. If other plugin authors want to share their experience on what worked and didn't for them. I'd love to hear it, too. Seems like a common sort of problems in plugins. /Arnaud On Fri, Feb 15, 2019 at 9:41 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Did you have a look at the implementation of `findImportedModule`? I > think you can use it and set the final argument to `Just > "assert-plugin"` so that it only looks for the module in the > `assert-plugin` package. > > Another way people do this is to use a Template Haskell quote and then > use `GhcPlugins.thNameToGhcName`. Which is probably the most robust > way of persisting a name between the two stages. > > Cheers, > > Matt > > On Fri, Feb 15, 2019 at 8:31 AM Spiwack, Arnaud > wrote: > > > > Dear all, > > > > (first, I don't know if this is the best place for questions/discussions > about the GHC API, if not, let me know where to redirect the conversation). > > > > I've been writing a plugin that substitutes call to a function by calls > to another (it's a plugin reimplementation of the assert feature of GHC). > And to be able to point at the names of these two functions, I need to > construct a name (well, and OccName) made of three parts: unit id, module > name, definition name. > > > > This question is about the unit name. Currently I simply use > stringToUnitId. But the real name of my unit has a magic string in it (see > https://github.com/aspiwack/assert-plugin/blob/a538d72581bae43ebf44c332e19c5ffdd28911df/src/With/Assertions.hs#L53 > ). It's rather unpleasant, it seems to change every time the cabal file > change (at least). > > > > The assert-explainer plugin uses another approach, only using the module > name, then calling findImportedModule ( > https://github.com/ocharles/assert-explainer/blob/dc6ea213d4d0576954ec883eeabeafc80c5ca18f/plugin/AssertExplainer.hs#L71-L81 > ). > > > > This is much more robust to changes, but is also less precise > (technically, there can be several imported modules with the same name, > with package-qualified imports). > > > > So, the question is: is there a better, recommended way to recover the > OccName (or Name!) of a function I defined in the same unit my plugin is > defined in. 
> > > > Best, > > Arnaud > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Mon Feb 18 08:11:33 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Feb 2019 08:11:33 +0000 Subject: Name of units in plugins In-Reply-To: References: Message-ID: I described another way to do it in the second paragraph of the email. Does that not work? Matt On Mon, Feb 18, 2019 at 8:10 AM Spiwack, Arnaud wrote: > > Indeed, I missed the package-qualifer in findImportedModule. It does look plausible. If there is no recommended/better way to do this sort of thing, I think I'll go for it. > > If other plugin authors want to share their experience on what worked and didn't for them. I'd love to hear it, too. Seems like a common sort of problems in plugins. > > /Arnaud > > On Fri, Feb 15, 2019 at 9:41 AM Matthew Pickering wrote: >> >> Did you have a look at the implementation of `findImportedModule`? I >> think you can use it and set the final argument to `Just >> "assert-plugin"` so that it only looks for the module in the >> `assert-plugin` package. >> >> Another way people do this is to use a Template Haskell quote and then >> use `GhcPlugins.thNameToGhcName`. Which is probably the most robust >> way of persisting a name between the two stages. >> >> Cheers, >> >> Matt >> >> On Fri, Feb 15, 2019 at 8:31 AM Spiwack, Arnaud wrote: >> > >> > Dear all, >> > >> > (first, I don't know if this is the best place for questions/discussions about the GHC API, if not, let me know where to redirect the conversation). >> > >> > I've been writing a plugin that substitutes call to a function by calls to another (it's a plugin reimplementation of the assert feature of GHC). And to be able to point at the names of these two functions, I need to construct a name (well, and OccName) made of three parts: unit id, module name, definition name. >> > >> > This question is about the unit name. Currently I simply use stringToUnitId. But the real name of my unit has a magic string in it (see https://github.com/aspiwack/assert-plugin/blob/a538d72581bae43ebf44c332e19c5ffdd28911df/src/With/Assertions.hs#L53 ). It's rather unpleasant, it seems to change every time the cabal file change (at least). >> > >> > The assert-explainer plugin uses another approach, only using the module name, then calling findImportedModule ( https://github.com/ocharles/assert-explainer/blob/dc6ea213d4d0576954ec883eeabeafc80c5ca18f/plugin/AssertExplainer.hs#L71-L81 ). >> > >> > This is much more robust to changes, but is also less precise (technically, there can be several imported modules with the same name, with package-qualified imports). >> > >> > So, the question is: is there a better, recommended way to recover the OccName (or Name!) of a function I defined in the same unit my plugin is defined in. >> > >> > Best, >> > Arnaud >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From arnaud.spiwack at tweag.io Mon Feb 18 08:16:53 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Mon, 18 Feb 2019 09:16:53 +0100 Subject: Name of units in plugins In-Reply-To: References: Message-ID: > I described another way to do it in the second paragraph of the email. > Does that not work? 
> It probably does. But it doesn't seem better since I don't have template haskell to begin with. So I'd intuitively go for the other findImportedModule method. Do you think I should rather use the Template Haskell method? If so why? I'm approaching this very naively: it's my first plugin, and I'm discovering all these things as I go. So I'm very curious as to what your opinion, and, more generally, the community's opinion is. /Arnaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Mon Feb 18 08:21:35 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Feb 2019 08:21:35 +0000 Subject: Name of units in plugins In-Reply-To: References: Message-ID: You only have to enable `TemplateHaskellQuotes` to use this. What's your reason for not wanting to use Template Haskell? Your general question is, how do I get a representation of this Haskell identifier which I have in scope which I want to persist into the future stage. The answer to this question of representation is to use Template Haskell in general as that is precisely the role of quotation. Cheers, Matt On Mon, Feb 18, 2019 at 8:17 AM Spiwack, Arnaud wrote: > > >> I described another way to do it in the second paragraph of the email. >> Does that not work? > > > It probably does. But it doesn't seem better since I don't have template haskell to begin with. So I'd intuitively go for the other findImportedModule method. Do you think I should rather use the Template Haskell method? If so why? > > I'm approaching this very naively: it's my first plugin, and I'm discovering all these things as I go. So I'm very curious as to what your opinion, and, more generally, the community's opinion is. > > /Arnaud From arnaud.spiwack at tweag.io Mon Feb 18 08:24:45 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Mon, 18 Feb 2019 09:24:45 +0100 Subject: Name of units in plugins In-Reply-To: References: Message-ID: > You only have to enable `TemplateHaskellQuotes` to use this. What's > your reason for not wanting to use Template Haskell? > None, I just misunderstood your previous comment as meaning that it was useful for plugins taking some Template Haskell slice as an argument or something to that effect. > > Your general question is, how do I get a representation of this > Haskell identifier which I have in scope which I want to persist into > the future stage. The answer to this question of representation is to > use Template Haskell in general as that is precisely the role > of quotation. > Got it. It does make sense. Thanks: I'll try this out. /Arnaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Feb 18 09:22:17 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Feb 2019 09:22:17 +0000 Subject: GItLab commenting In-Reply-To: References: Message-ID: Go to the source code that you want to refer to, (Repository… Files for example, or even within an MR itself)… and you’ll notice if you hover on the LHS of any source code, next to the line numbers, that it has a little link icon. Clicking that will change the URL in the URL bar to refer to that place (you can also shift-click between two lines to create a range if that’s your particular proclivity). You can copy the link from the URL bar, and then paste it into the comment where you want to refer to the line of code in question. 
My goal is to add a comment to a line in the MR that is not displayed by the MR If the line is displayed, then yes I can do what you say. But it isn’t! Zooming off to the repository is not good because while I can get a link to the line, I can’t add a comment; I can only do that in the MR. Does that make sense? Simon From: Julian Leviston Sent: 15 February 2019 23:04 To: Simon Peyton Jones Cc: GHC developers Subject: Re: GItLab commenting On 14 Feb 2019, at 8:38 pm, Simon Peyton Jones via ghc-devs > wrote: Friends In reviewing MR!361, I wanted to point out that a Note on line 1481 of a file needed rewriting. But the patch only modified lines 236 or so. How can I get it to display line 1481? (Or, more simply, just display the whole file?) I can get it to show 10 more lines at a time by clicking the little grey “…” symbols. But getting 1000 lines down would take 100 clicks – hardly sensible. Moreover, what if I want to comment on a file that isn’t in the patch at all? Thanks Simon Hi Simon, Go to the source code that you want to refer to, (Repository… Files for example, or even within an MR itself)… and you’ll notice if you hover on the LHS of any source code, next to the line numbers, that it has a little link icon. Clicking that will change the URL in the URL bar to refer to that place (you can also shift-click between two lines to create a range if that’s your particular proclivity). You can copy the link from the URL bar, and then paste it into the comment where you want to refer to the line of code in question. Warmest regards, Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From julian at leviston.net Mon Feb 18 10:41:07 2019 From: julian at leviston.net (Julian Leviston) Date: Mon, 18 Feb 2019 21:41:07 +1100 Subject: GItLab commenting In-Reply-To: References: Message-ID: <751182C4-CB44-46C7-B6D8-8D452CCB9F2F@leviston.net> > On 18 Feb 2019, at 8:22 pm, Simon Peyton Jones wrote: > > Go to the source code that you want to refer to, (Repository… Files for example, or even within an MR itself)… and you’ll notice if you hover on the LHS of any source code, next to the line numbers, that it has a little link icon. Clicking that will change the URL in the URL bar to refer to that place (you can also shift-click between two lines to create a range if that’s your particular proclivity). You can copy the link from the URL bar, and then paste it into the comment where you want to refer to the line of code in question. > > My goal is to add a comment to a line in the MR that is not displayed by the MR > > If the line is displayed, then yes I can do what you say. But it isn’t! > > Zooming off to the repository is not good because while I can get a link to the line, I can’t add a comment; I can only do that in the MR. > > Does that make sense? > > Simon > My apologies. I knew I couldn’t surely *actually* be able to help SPJ ;-) Sorry. Just a big fan. I spent 30 minutes searching around and discovered the “special gitlab references” section, which I was only vaguely aware of: https://docs.gitlab.com/ee/user/markdown.html#special-gitlab-references but I couldn’t for the life of me find a way to do what we can do in github (quote sections of code into a general comment) So, I often have this issue in github, too… sometimes I just want to refer to some file that’s not in the PR at all. My solution is usually to write a general comment, and in the comment, I just paste the link of whatever I want to refer to. It’s a bit naff. 
As you’re implying, we should be able to add a new comment on any file at all in a repository, *at the point in the version history that the repo was at when you wrote the comment*. Tho, I’m not sure what they’d do to comments that got swept away with rebases and the like (which is probably why it’s not a feature). Might be worth suggesting as a feature. Sorry I wasn’t more helpful, Julian -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Feb 18 13:27:14 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Feb 2019 13:27:14 +0000 Subject: Validate has died Message-ID: >From master I'm getting 2052 "framework failures" in the testsuite, which all look like the one below. I also get 4468 expected passes, so not every test fails. This is with "sh validate". Yes I know I should get onto Hadrian, and I have a branch for that. But Hadrian isn't really working for me because it's putting build artefacts in my source tree rather than in my build tree (talking to Andrey about that). But I think that old 'validate' is still supposed to work, isn't it? I'm a bit stuck Thanks Simon Unexpected results from: TEST="AmapCoerce ArithInt16 ArithInt8 ArithWord16 ArithWord8 AtomicPrimops BinaryArray BinaryLiterals0 BinaryLiterals1 BinaryLiterals2 CPRRepeat CPUTime001 CallArity1 Capi_Ctype_001 Capi_Ctype_002 CarryOverflow CgStaticPointers CgStaticPointersNoFullLazyness Chan002 Chan003 CmmSwitchTest64 CmpInt16 CmpInt8 CmpWord16 CmpWord8 CopySmallArray Defer01 Defer02 DeriveNullTermination DocsInHiFile0 DocsInHiFile1 DsLambdaCase DsMultiWayIf DsStaticPointers DsStrict DsStrictData DsStrictFail DsStrictLet EvalTest EventlogOutput1 EventlogOutput2 FloatFnInverses Freeman GEq1 GEq2 GFullyStrict GFunctor1 GHCiWildcardKind GMap1 GMapAssoc GMapTop GShow1 GUniplate1 GcStaticPointers GenNewtype GhciCurDir GhciKinds HexFloatLiterals HooplPostorder IOError001 IOError002 IndT... etc... snip of a typical error subprocess.CalledProcessError: Command '['git', 'rev-parse', 'HEAD']' returned non-zero exit status 128 Traceback (most recent call last): File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 840, in test_common_work do_test(name, way, func, args, files) File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 928, in do_test result = func(*[name,way] + args) File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1019, in makefile_test return run_command(name, way, cmd) File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1012, in run_command return simple_run( name, '', override_options(cmd), '' ) File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1396, in simple_run return check_stats(name, way, stats_file, opts.stats_range_fields) File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1185, in check_stats head_commit = Perf.commit_hash('HEAD') File "/home/simonpj/code/HEAD-4/testsuite/driver/perf_notes.py", line 92, in commit_hash stderr=subprocess.STDOUT) \ File "/usr/lib/python3.5/subprocess.py", line 626, in check_output **kwargs).stdout File "/usr/lib/python3.5/subprocess.py", line 708, in run output=stdout, stderr=stderr) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Mon Feb 18 16:35:50 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Feb 2019 16:35:50 +0000 Subject: CI Docker Images Message-ID: Hi Mark, Ben Where are the dockerfiles (or nix expressions) which are used to build the docker images? I want to update them and push some new images to ghcci but I can't find any trace about how the images have been made. Cheers, Matt From matthewtpickering at gmail.com Mon Feb 18 16:42:45 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Feb 2019 16:42:45 +0000 Subject: CI Docker Images In-Reply-To: References: Message-ID: Alp has pointed me to the right place. https://gitlab.haskell.org/ghc/ghc/tree/master/.circleci/images Can someone give me access to push to the ghcci docker hub please so I can update them? Matt On Mon, Feb 18, 2019 at 4:35 PM Matthew Pickering wrote: > > Hi Mark, Ben > > Where are the dockerfiles (or nix expressions) which are used to build > the docker images? I want to update them and push some new images to > ghcci but I can't find any trace about how the images have been made. > > Cheers, > > Matt From ben at well-typed.com Mon Feb 18 20:22:28 2019 From: ben at well-typed.com (Ben Gamari) Date: Mon, 18 Feb 2019 15:22:28 -0500 Subject: Validate has died In-Reply-To: References: Message-ID: <87bm38yhhc.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > From master I'm getting 2052 "framework failures" in the testsuite, > which all look like the one below. I also get 4468 expected passes, so > not every test fails. This is with "sh validate". Yes I know I should > get onto Hadrian, and I have a branch for that. But Hadrian isn't > really working for me because it's putting build artefacts in my > source tree rather than in my build tree (talking to Andrey about > that). But I think that old 'validate' is still supposed to work, > isn't it? I'm a bit stuck Oh dear, it looks like this is due to the recent performance tracking patch. It looks like it breaks the symlink tree workflow. I feel like we discussed this a few weeks ago. David, do you recall what we concluded? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Tue Feb 19 03:37:16 2019 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 18 Feb 2019 22:37:16 -0500 Subject: GitHub Mirror is broken Message-ID: Hello devs, This is just to let you know that the latest commit on the GitHub ghc/ghc repo dates back to the 22nd of January. Personally, I find the GitHub mirror quite useful for occasional searches over the code base. Therefore, I'd appreciate it if the mirror could be repaired. -- Best of luck, Artem Pelenitsyn -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Feb 19 06:30:19 2019 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Feb 2019 01:30:19 -0500 Subject: GitHub Mirror is broken In-Reply-To: References: Message-ID: <87sgwkwart.fsf@smart-cactus.org> Artem Pelenitsyn writes: > Hello devs, > > This is just to let you know that the latest commit on the GitHub ghc/ghc > repo dates back to the 22nd of January. Personally, I find the GitHub mirror quite > useful for occasional searches over the code base. Therefore, I'd > appreciate it if the mirror could be repaired. > Fixed. It seems like the mirroring service got stuck.
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From arnaud.spiwack at tweag.io Tue Feb 19 08:26:11 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Tue, 19 Feb 2019 09:26:11 +0100 Subject: Put 'haddock' in the 'ghc' repo In-Reply-To: References: Message-ID: I want to echo this sentiment. I've lost a lot of time to Haddock. And there is no reasonable way to merge changes which affect Haddock (do I merge the Haddock change first? In that case, Haddock's master works only with a non-existent version of GHC. Or do I merge in GHC first? In which case, GHC, albeit temporarily, points to my own Haddock repo. What if one of these passes code review, and not the other?) Generally speaking, I am of the opinion that submodules will work only if they all point to released version of the dependency. If we ever feel the need to point to a non-release version, then it should not be a submodule, and should simply be part of the main tree. Haddock is not the only submodule which should be considered for inclusion (Cabal, for instance, seems to be pretty tightly integrated with GHC, and has had, if memory serves, similar issues in the past). But Haddock has been, by far, the worst offender, in my personal experience. On Sat, Feb 16, 2019 at 1:44 PM Vladislav Zavialov wrote: > Hello devs, > > There appears to be no good workflow for contributing patches that > change both GHC and Haddock. > > For contributors who have push access to both repositories, it is at > least tolerable: > > 1. create a Haddock branch with the required changes > 2. create a GHC branch with the required changes > > Then wait for the GHC change to get merged to `master`, and > > 3a. fast-forward the Haddock change to the `ghc-head` branch > 3b. in case a fast-forward is impossible, cherry-pick the commit to > `ghc-head` and push another commit to GHC `master` to update the > Haddock submodule > > Roundabout, but possible. > > For contributors who do not have push access to both repositories, > each step is much harder, as working with forks implies messing with > .gitmodules, which arguably should stay constant. > > To avoid all this friction, I propose the following principle: > > * all SCC (strongly connected components) of dependencies must go to > the same repo. > > For example, since GHC depends on Haddock to build documentation, and > Haddock depends on GHC, they must go to the same repo. This way, a > single commit can update both of them in sync. > > All the best, > Vladislav > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Feb 19 09:32:02 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Feb 2019 09:32:02 +0000 Subject: Validate has died In-Reply-To: <87bm38yhhc.fsf@smart-cactus.org> References: <87bm38yhhc.fsf@smart-cactus.org> Message-ID: OK. Meanwhile, it really is a problem for me. Could someone roll back that change temporarily? Thanks! 
Simon | -----Original Message----- | From: Ben Gamari | Sent: 18 February 2019 20:22 | To: Simon Peyton Jones ; David Eichmann | | Cc: GHC developers | Subject: Re: Validate has died | | Simon Peyton Jones via ghc-devs writes: | | > From master I'm getting 2052 "framework failures" in the testsuite, | > which all look like the one below. I also get 4468 expected passes, so | > not every test fails. This is with "sh validate". Yes I know I should | > get onto Hadrian, and I have a branch for that. But Hadrian isn't | > really working for me because it's putting build artefacts in my | > source tree rather than in my build tree (talking to Andrey about | > that). But I think that old 'validate' is still supposed to work, | > isn't it? I'm a bit stuck | | Oh dear, it looks like this is due to the recent performance tracking | patch. It looks like it breaks the symlink tree workflow. I feel like we | discussed this a few weeks ago. David, do you recall what we concluded? | | Cheers, | | - Ben From davide at well-typed.com Tue Feb 19 10:56:39 2019 From: davide at well-typed.com (David Eichmann) Date: Tue, 19 Feb 2019 10:56:39 +0000 Subject: Validate has died In-Reply-To: References: <87bm38yhhc.fsf@smart-cactus.org> Message-ID: <1bc18ce4-5a8b-caaa-f5a2-fb86011aef9a@well-typed.com> Hello Simon, I'm very sorry about that. I believe last time we ran into a similar issue and resolved it in the test runner by first checking if we're in a git repo. The latest update doesn't respect that check. I'm having a look now and aim to have this either fixed or reverted asap. David E On 2/19/19 9:32 AM, Simon Peyton Jones wrote: > OK. Meanwhile, it really is a problem for me. Could someone roll back that change temporarily? Thanks! > > Simon > > | -----Original Message----- > | From: Ben Gamari > | Sent: 18 February 2019 20:22 > | To: Simon Peyton Jones ; David Eichmann > | > | Cc: GHC developers > | Subject: Re: Validate has died > | > | Simon Peyton Jones via ghc-devs writes: > | > | > From master I'm getting 2052 "framework failures" in the testsuite, > | > which all look like the one below. I also get 4468 expected passes, so > | > not every test fails. This is with "sh validate". Yes I know I should > | > get onto Hadrian, and I have a branch for that. But Hadrian isn't > | > really working for me because it's putting build artefacts in my > | > source tree rather than in my build tree (talking to Andrey about > | > that). But I think that old 'validate' is still supposed to work, > | > isn't it? I'm a bit stuck > | > | Oh dear, it looks like this is due to the recent performance tracking > | patch. It looks like it breaks the symlink tree workflow. I feel like we > | discussed this a few weeks ago. David, do you recall what we concluded? > | > | Cheers, > | > | - Ben > -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England From davide at well-typed.com Tue Feb 19 14:19:31 2019 From: davide at well-typed.com (David Eichmann) Date: Tue, 19 Feb 2019 14:19:31 +0000 Subject: Validate has died In-Reply-To: References: <87bm38yhhc.fsf@smart-cactus.org> Message-ID: I've created an MR to hopefully resolve this issue (https://gitlab.haskell.org/ghc/ghc/merge_requests/400). If you are eager to make use of this, and/or confirm that it solves the issue, you can e.g. 
cherry pick the commit: $ git remote add DavidEichmann https://gitlab.haskell.org/DavidEichmann/ghc.git $ git fetch DavidEichmann $ git cherry-pick davide/TestRunnerCrash-NonGitRepo David E On 2/19/19 9:32 AM, Simon Peyton Jones wrote: > OK. Meanwhile, it really is a problem for me. Could someone roll back that change temporarily? Thanks! > > Simon > > | -----Original Message----- > | From: Ben Gamari > | Sent: 18 February 2019 20:22 > | To: Simon Peyton Jones ; David Eichmann > | > | Cc: GHC developers > | Subject: Re: Validate has died > | > | Simon Peyton Jones via ghc-devs writes: > | > | > From master I'm getting 2052 "framework failures" in the testsuite, > | > which all look like the one below. I also get 4468 expected passes, so > | > not every test fails. This is with "sh validate". Yes I know I should > | > get onto Hadrian, and I have a branch for that. But Hadrian isn't > | > really working for me because it's putting build artefacts in my > | > source tree rather than in my build tree (talking to Andrey about > | > that). But I think that old 'validate' is still supposed to work, > | > isn't it? I'm a bit stuck > | > | Oh dear, it looks like this is due to the recent performance tracking > | patch. It looks like it breaks the symlink tree workflow. I feel like we > | discussed this a few weeks ago. David, do you recall what we concluded? > | > | Cheers, > | > | - Ben > -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Feb 21 07:49:56 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Feb 2019 07:49:56 +0000 Subject: ZipN fusion Message-ID: Dear devs Can someone help Alexandre by merging Phab:5249? See https://phabricator.haskell.org/D5249#151827 Many thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Feb 21 08:24:17 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 21 Feb 2019 08:24:17 +0000 Subject: ZipN fusion In-Reply-To: References: Message-ID: Patch is now here and in the merge queue. https://gitlab.haskell.org/ghc/ghc/merge_requests/421 Matt On Thu, Feb 21, 2019 at 7:50 AM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Dear devs > > Can someone help Alexandre by merging Phab:5249? > > See https://phabricator.haskell.org/D5249#151827 > > Many thanks > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Feb 21 08:40:23 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Feb 2019 08:40:23 +0000 Subject: CI failing Message-ID: Ben, Matthew I believe that 'master' fails validate. See Trac #16346. The question is: how did it get past CI? The bug would only show up if -dcore-lint was on for libraries. Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Thu Feb 21 08:49:05 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 21 Feb 2019 08:49:05 +0000 Subject: CI failing In-Reply-To: References: Message-ID: Indeed, this was a mistake and I put up a MR last night to hopefully fix it. https://gitlab.haskell.org/ghc/ghc/merge_requests/416 Cheers, Matt On Thu, Feb 21, 2019 at 8:40 AM Simon Peyton Jones wrote: > Ben, Matthew > > I believe that ‘master’ fails validate. See Trac #16346. > > The question is: how did it get past CI? > > The bug would only show up if -dcore-lint was on for libraries. > > Simon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Feb 21 17:51:22 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Feb 2019 17:51:22 +0000 Subject: spj-wibbles Message-ID: Matthew, Ben, David I've just submitted MR 426. It's a batch of four patches, one of which makes the GHC tree validate again. Might you keep an eye on it. David/anyone: one of them "Stop inferring polymorphic kinds" makes a small change to HsSyn (removing a field). I don't know whether that'll require changing Haddock - or how to do so. Might you check? Gotta run Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Feb 22 08:53:19 2019 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 22 Feb 2019 08:53:19 +0000 Subject: --share option for hadrian doesn't work with hs-boot files Message-ID: I have been trying the new `--share` option implemented in hadrian but I haven't actually managed to complete any builds yet with it enabled after the initial one. The current error is ``` : error: ‘Var.AnonArgFlag’ is exported by the hs-boot file, but not exported by the module Error when running Shake build system: at action, called at src/Rules.hs:35:19 in main:Rules at need, called at src/Rules.hs:52:5 in main:Rules * Depends on: _build/stage0/bin/ghc at need, called at src/Utilities.hs:71:18 in main:Utilities * Depends on: _build/stage0/compiler/build/libHSghc-8.9.a at need, called at src/Rules/Library.hs:118:5 in main:Rules.Library * Depends on: _build/stage0/compiler/build/Var.o * Raised the exception: user error (Development.Shake.cmd, system command failed ``` I get this after building Simon's `FunTy` patch which does add this flag and the definition to `Var.hs-boot` and then switching back to master with the cache enabled. Could you please write down some advice Andrey about how to solve issues like this? It seems very fragile making sure that every case is covered. Cheers, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrey.mokhov at newcastle.ac.uk Fri Feb 22 11:49:12 2019 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Fri, 22 Feb 2019 11:49:12 +0000 Subject: --share option for hadrian doesn't work with hs-boot files In-Reply-To: References: Message-ID: Hi Matt, Thanks! Switching branches that add/remove hs-boot files is exactly the kind of scenario which is hard to predict :) This looks like a bug, so please create a ticket. If you manage to reproduce this somehow without switching branches, e.g. by just adding or removing an hs-boot file, that would make it easier to debug. > Could you please write down some advice Andrey about > how to solve issues like this? It seems very fragile making > sure that every case is covered. 
Shake 0.17.6 has the following two commands that may help to partially clean up the cache in presence of such bugs: --share-list List the shared cache files. --share-remove[=SUBSTRING] Remove the shared cache keys. By running Hadrian with --share-remove=_build/stage0/compiler/build/Var* you should be able to evict the corresponding build rules from the cache and hopefully the build will go through. If this does help, please also mention this in the ticket. Cheers, Andrey From: Matthew Pickering [mailto:matthewtpickering at gmail.com] Sent: 22 February 2019 08:53 To: GHC developers ; Andrey Mokhov Subject: --share option for hadrian doesn't work with hs-boot files I have been trying the new `--share` option implemented in hadrian but I haven't actually managed to complete any builds yet with it enabled after the initial one. The current error is ``` : error: ‘Var.AnonArgFlag’ is exported by the hs-boot file, but not exported by the module Error when running Shake build system: at action, called at src/Rules.hs:35:19 in main:Rules at need, called at src/Rules.hs:52:5 in main:Rules * Depends on: _build/stage0/bin/ghc at need, called at src/Utilities.hs:71:18 in main:Utilities * Depends on: _build/stage0/compiler/build/libHSghc-8.9.a at need, called at src/Rules/Library.hs:118:5 in main:Rules.Library * Depends on: _build/stage0/compiler/build/Var.o * Raised the exception: user error (Development.Shake.cmd, system command failed ``` I get this after building Simon's `FunTy` patch which does add this flag and the definition to `Var.hs-boot` and then switching back to master with the cache enabled. Could you please write down some advice Andrey about how to solve issues like this? It seems very fragile making sure that every case is covered. Cheers, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Feb 26 10:18:02 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 26 Feb 2019 10:18:02 +0000 Subject: Gitlab bug? Message-ID: Look here: https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#350c4076427c611b8f14e875a4ca553041c2b847_1930_1925 Look at the comment "Is this zipping necessary? We haven't zonked the scoped_kvs or the tc_tvs, I think" from Richard Eisenberg, just below line 1925 in TcHsType. Now click on the grey "..." above his comment, to expand more code lines above. Uh oh! * The same comment now occurs twice, just below line 1904, and again just below 1925. * The first time it's on the wrong line of code * But in the second position (the correct one) the "Reply" box is inactive. I can't type into it. Is this is a bug? How to report it? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdammers at gmail.com Tue Feb 26 10:58:00 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Tue, 26 Feb 2019 11:58:00 +0100 Subject: Gitlab bug? In-Reply-To: References: Message-ID: <20190226105759.dxiytwtpgvpln2ju@nibbler> It seems to behave just fine on my computer. Maybe some sort of cache-related race condition is going on in the client-side JavaScript here? A hard page reload might "fix" that. On Tue, Feb 26, 2019 at 10:18:02AM +0000, Simon Peyton Jones via ghc-devs wrote: > Look here: > https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#350c4076427c611b8f14e875a4ca553041c2b847_1930_1925 > Look at the comment "Is this zipping necessary? We haven't zonked the scoped_kvs or the tc_tvs, I think" from Richard Eisenberg, just below line 1925 in TcHsType. 
> > Now click on the grey "..." above his comment, to expand more code lines above. > > Uh oh! > > * The same comment now occurs twice, just below line 1904, and again just below 1925. > * The first time it's on the wrong line of code > * But in the second position (the correct one) the "Reply" box is inactive. I can't type into it. > > Is this is a bug? How to report it? > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Tobias Dammers - tdammers at gmail.com From simonpj at microsoft.com Tue Feb 26 11:24:31 2019 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 26 Feb 2019 11:24:31 +0000 Subject: Gitlab bug? In-Reply-To: <20190226105759.dxiytwtpgvpln2ju@nibbler> References: <20190226105759.dxiytwtpgvpln2ju@nibbler> Message-ID: oddly reloading made no difference. Curious | -----Original Message----- | From: ghc-devs On Behalf Of Tobias Dammers | Sent: 26 February 2019 10:58 | To: ghc-devs at haskell.org | Subject: Re: Gitlab bug? | | It seems to behave just fine on my computer. | | Maybe some sort of cache-related race condition is going on in the | client-side JavaScript here? A hard page reload might "fix" that. | | On Tue, Feb 26, 2019 at 10:18:02AM +0000, Simon Peyton Jones via ghc-devs | wrote: | > Look here: | > https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitl | > ab.haskell.org%2Fghc%2Fghc%2Fmerge_requests%2F444%2Fdiffs%23350c407642 | > 7c611b8f14e875a4ca553041c2b847_1930_1925&data=02%7C01%7Csimonpj%40 | > microsoft.com%7C73ba769b877042937f8408d69bd95350%7C72f988bf86f141af91a | > b2d7cd011db47%7C1%7C0%7C636867755046893023&sdata=%2BMP4GdPPkz3XD%2 | > Fsk4VGSMTI27QXKMg45rHo5KLbWmaQ%3D&reserved=0 | > Look at the comment "Is this zipping necessary? We haven't zonked the | scoped_kvs or the tc_tvs, I think" from Richard Eisenberg, just below | line 1925 in TcHsType. | > | > Now click on the grey "..." above his comment, to expand more code | lines above. | > | > Uh oh! | > | > * The same comment now occurs twice, just below line 1904, and | again just below 1925. | > * The first time it's on the wrong line of code | > * But in the second position (the correct one) the "Reply" box is | inactive. I can't type into it. | > | > Is this is a bug? How to report it? | > | > Simon | | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. | > haskell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01 | > %7Csimonpj%40microsoft.com%7C73ba769b877042937f8408d69bd95350%7C72f988 | > bf86f141af91ab2d7cd011db47%7C1%7C0%7C636867755046898017&sdata=dyXl | > kGvSzPUHdcdib6La7MqYhcNqeI%2Falmet4zXC6LU%3D&reserved=0 | | | -- | Tobias Dammers - tdammers at gmail.com | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.has | kell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C73ba769b877042937f8408d | 69bd95350%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636867755046898017 | &sdata=dyXlkGvSzPUHdcdib6La7MqYhcNqeI%2Falmet4zXC6LU%3D&reserved= | 0 From rae at cs.brynmawr.edu Tue Feb 26 12:59:48 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 26 Feb 2019 07:59:48 -0500 Subject: Gitlab bug? 
In-Reply-To: References: Message-ID: I can reproduce: Is there a good place to report this? Richard > On Feb 26, 2019, at 5:18 AM, Simon Peyton Jones via ghc-devs wrote: > > Look here: > > https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#350c4076427c611b8f14e875a4ca553041c2b847_1930_1925 > Look at the comment “Is this zipping necessary? We haven't zonked the scoped_kvs or the tc_tvs, I think” from Richard Eisenberg, just below line 1925 in TcHsType. > > > > Now click on the grey “…” above his comment, to expand more code lines above. > > > > Uh oh! > > The same comment now occurs twice, just below line 1904, and again just below 1925. > The first time it’s on the wrong line of code > But in the second position (the correct one) the “Reply” box is inactive. I can’t type into it. > > > Is this is a bug? How to report it? > > > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2019-02-26 at 7.58.52 AM.png Type: image/png Size: 147455 bytes Desc: not available URL: From rae at cs.brynmawr.edu Tue Feb 26 13:37:45 2019 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 26 Feb 2019 08:37:45 -0500 Subject: Gitlab bug? In-Reply-To: References: Message-ID: <95421009-FCE8-4DD0-97ED-F45C27F6AD04@cs.brynmawr.edu> And a new bug: When I visit https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#note_8328 , I see 6/13 discussions resolved, but the button to go to first unresolved discussion goes nowhere. And when I scroll through the page, expanding every file I see, I still can't find these unresolved discussions. :( Richard > On Feb 26, 2019, at 7:59 AM, Richard Eisenberg wrote: > > I can reproduce: > > > > Is there a good place to report this? > > Richard > >> On Feb 26, 2019, at 5:18 AM, Simon Peyton Jones via ghc-devs > wrote: >> >> Look here: >> >> https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#350c4076427c611b8f14e875a4ca553041c2b847_1930_1925 >> Look at the comment “Is this zipping necessary? We haven't zonked the scoped_kvs or the tc_tvs, I think” from Richard Eisenberg, just below line 1925 in TcHsType. >> >> >> >> Now click on the grey “…” above his comment, to expand more code lines above. >> >> >> >> Uh oh! >> >> The same comment now occurs twice, just below line 1904, and again just below 1925. >> The first time it’s on the wrong line of code >> But in the second position (the correct one) the “Reply” box is inactive. I can’t type into it. >> >> >> Is this is a bug? How to report it? >> >> >> >> Simon >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdammers at gmail.com Tue Feb 26 14:23:53 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Tue, 26 Feb 2019 15:23:53 +0100 Subject: Gitlab bug? In-Reply-To: References: Message-ID: <20190226142352.z3n7k7nweht4muwp@nibbler> I'd guesstimate that gitlab's issue tracker would be a good place: https://gitlab.com/gitlab-org/gitlab-ce/issues/ Though it's probably going to be helpful for Ben to somehow get involved, seeing how he's been working with the Gitlab team on other issues already. 
On Tue, Feb 26, 2019 at 07:59:48AM -0500, Richard Eisenberg wrote: > I can reproduce: > > > > Is there a good place to report this? > > Richard > > > On Feb 26, 2019, at 5:18 AM, Simon Peyton Jones via ghc-devs wrote: > > > > Look here: > > > > https://gitlab.haskell.org/ghc/ghc/merge_requests/444/diffs#350c4076427c611b8f14e875a4ca553041c2b847_1930_1925 > > Look at the comment “Is this zipping necessary? We haven't zonked the scoped_kvs or the tc_tvs, I think” from Richard Eisenberg, just below line 1925 in TcHsType. > > > > > > > > Now click on the grey “…” above his comment, to expand more code lines above. > > > > > > > > Uh oh! > > > > The same comment now occurs twice, just below line 1904, and again just below 1925. > > The first time it’s on the wrong line of code > > But in the second position (the correct one) the “Reply” box is inactive. I can’t type into it. > > > > > > Is this is a bug? How to report it? > > > > > > > > Simon > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Tobias Dammers - tdammers at gmail.com From tdammers at gmail.com Wed Feb 27 10:06:59 2019 From: tdammers at gmail.com (Tobias Dammers) Date: Wed, 27 Feb 2019 11:06:59 +0100 Subject: Newcomers' Guide to GHC Development Message-ID: <20190227100657.b3qodmep4ymvrade@nibbler> Dear all, With the migration of our affairs from Trac to GitLab nearing completion, I would like to ask for a final round of feedback on the new Newcomers' Guide to GHC development. The draft can be found here: https://github.com/tdammers/ghc-wiki/blob/wip/newcomers/newcomers-tutorial.md TL;DR: If you have any kind of input / critique / praise regarding this document, feel free to reply, or, even better, issue a PR on github. Some background: The purpose of this document is to provide potential contributors with a practical, no-nonsense tutorial, guiding them from "I know nothing about GHC development" to their first successful merge request. The document has been compiled using existing wiki content, revised and edited to match the current state of affairs (particularly using Hadrian as the recommended build system), and to tune it to the target audience of first-time contributors. As such, we avoid going off on tangents (e.g., we do not explain how to use the make-based alternative build system), and we only explain what you need to understand in order to get going (e.g., we do not provide a complete run-down of all hadrian options). A few nonlinearities were deemed necessary in order to make the tutorial suitable across target platforms; Windows in particular requires some special attention. Other than that, however, we try to provide as linear an experience as we reasonably can. So with that said; all feedback and suggestions on this are welcome. We have gotten some great responses already, but I'd like to gather one more round of feedback before merging it into the freshly-migrated Haskell Wiki on GitLab. Thank you for your attention! 
-- Tobias Dammers - tdammers at gmail.com From arnaud.spiwack at tweag.io Thu Feb 28 11:43:12 2019 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Thu, 28 Feb 2019 12:43:12 +0100 Subject: Newcomers' Guide to GHC Development In-Reply-To: <20190227100657.b3qodmep4ymvrade@nibbler> References: <20190227100657.b3qodmep4ymvrade@nibbler> Message-ID: Hi, I've just gone through the document, I have few comments: - The dependencies instruction are too complex and long-winded. I'd start by: if you are a nix user, use , otherwise you need installed [in particular, no sectioning of the dependencies!], see to find instruction for your particular system. - The considerations about gitlab are mostly trivial and distract from the point (at this point we're trying to build GHC): simply give a git clone instruction from the main repository. (also highlight the fact that there is a `--recursive`, it's easily missed, and give the `git submodule update --init` back up in case it was forgotten, maybe?) - Scrap the section called A note on Hadrian. It will just come up as scary. The further reading section is sufficient to point to Hadrian issues - devel2 is a good default flavour, but, when it comes up, you should also include a link to a documentation that says: want to do X -> use flavour Y - The idiom/stages seems to be a dead link, but maybe it'll work when this document is transferred to the wiki? - `git clean` is not sufficient to get to a pristine state, you need `git clean -xdf && git submodule foreach 'git clean -xdf'`. It's probably even better to just give the following one-liner: `git clean -xdf && git submodule foreach 'git clean -xdf' && git submodule update --init`. Maybe even even better, build.hs could have an option to call this one-liner? /Arnaud On Wed, Feb 27, 2019 at 11:07 AM Tobias Dammers wrote: > Dear all, > > With the migration of our affairs from Trac to GitLab nearing > completion, I would like to ask for a final round of feedback on the new > Newcomers' Guide to GHC development. > > The draft can be found here: > > > https://github.com/tdammers/ghc-wiki/blob/wip/newcomers/newcomers-tutorial.md > > TL;DR: If you have any kind of input / critique / praise regarding this > document, feel free to reply, or, even better, issue a PR on github. > > > Some background: > > The purpose of this document is to provide potential contributors with a > practical, no-nonsense tutorial, guiding them from "I know nothing about > GHC development" to their first successful merge request. > > The document has been compiled using existing wiki content, revised and > edited to match the current state of affairs (particularly using Hadrian > as the recommended build system), and to tune it to the target audience > of first-time contributors. As such, we avoid going off on tangents > (e.g., we do not explain how to use the make-based alternative build > system), and we only explain what you need to understand in order to get > going (e.g., we do not provide a complete run-down of all hadrian > options). > > A few nonlinearities were deemed necessary in order to make the tutorial > suitable across target platforms; Windows in particular requires some > special attention. Other than that, however, we try to provide as linear > an experience as we reasonably can. > > > So with that said; all feedback and suggestions on this are welcome. We > have gotten some great responses already, but I'd like to gather one > more round of feedback before merging it into the freshly-migrated > Haskell Wiki on GitLab. 
> > Thank you for your attention! > > -- > Tobias Dammers - tdammers at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: