From mail at joachim-breitner.de  Mon Oct  1 00:32:40 2018
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Sun, 30 Sep 2018 17:32:40 -0700
Subject: nofib oldest GHC to support?
In-Reply-To: 
References: 
Message-ID: <83c79d2552b379721ac8c625ac5a9141537e9562.camel@joachim-breitner.de>

Hi,

there is no policy that I am aware of, but being able to run nofib on
old (or even ancient) versions of GHC is likely to make someone happy
in the future, so I’d say that the (valid!) desire to clean up is not a
good reason to drop support – only if it would require unreasonable
effort should we drop old versions there.

Cheers,
Joachim

On Sunday, 30.09.2018, 14:18 +0300, Ömer Sinan Ağacan wrote:
> Do we have a policy on the oldest GHC to support in nofib? I'm currently doing
> some hacking on nofib to parse some new info printed by a modified GHC, and I
> think we can do a lot of cleaning (at the very least remove some regexes and
> parsers) if we decide on which GHCs to support.
>
> I checked the README and RunningNoFib wiki page but couldn't see anything
> relevant.
>
> Thanks
>
> Ömer
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-- 
Joachim Breitner
mail at joachim-breitner.de
http://www.joachim-breitner.de/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From omeragacan at gmail.com  Mon Oct  1 08:04:57 2018
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Mon, 1 Oct 2018 11:04:57 +0300
Subject: nofib oldest GHC to support?
In-Reply-To: <83c79d2552b379721ac8c625ac5a9141537e9562.camel@joachim-breitner.de>
References: <83c79d2552b379721ac8c625ac5a9141537e9562.camel@joachim-breitner.de>
Message-ID: 

We currently claim to support GOFER and GHC 4.02!
Surely we can drop some of that support.

I just tried booting nofib with GHC 7.10.3 and it failed. We don't even
support 7.10.3, but we still have code to support ... 4.02!

The desire to clean up is not because removing code is fun, it's because it
makes the code easier to maintain. Currently I need to parse another variant
of `+RTS -t` output, and I have to deal with this mess for it:

https://github.com/ghc/nofib/blob/a80baacfc29cc2e7ed50e94f3cd2648d11b1d7d5/nofib-analyse/Slurp.hs#L153-L207

(note that all of these need to be updated to add one more field)

If we decide on what versions to support we could remove most of those (I
doubt `+RTS -t` output changes too much, so maybe we can even remove all but
one).

There is also other code with CPP macros for GOFER etc.

I suggest supporting HEAD + 3 major releases. In this plan we should
currently be able to run nofib with GHC HEAD, 8.6, 8.4, and 8.2. Then setting
up a CI to test nofib with these configurations should be trivial (except for
GHC HEAD maybe, I don't know if we're publishing GHC HEAD bindists for CI
servers to use).

Ömer

Joachim Breitner wrote on Mon, 1 Oct 2018 at 03:33:
>
> Hi,
>
> there is no policy that I am aware of, but being able to run nofib on
> old (or even ancient) versions of GHC is likely to make someone happy
> in the future, so I’d say that the (valid!) desire to cleanup is not a
> good reason to drop support – only if it would require unreasonable
> efforts should we drop old versions there.
>
> Cheers,
> Joachim
>
> On Sunday, 30.09.2018, 14:18 +0300, Ömer Sinan Ağacan wrote:
> > Do we have a policy on the oldest GHC to support in nofib? I'm currently doing
> > some hacking on nofib to parse some new info printed by a modified GHC, and I
> > think we can do a lot of cleaning (at the very least remove some regexes and
> > parsers) if we decide on which GHCs to support.
> >
> > I checked the README and RunningNoFib wiki page but couldn't see anything
> > relevant.
> >
> > Thanks
> >
> > Ömer
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> -- 
> Joachim Breitner
> mail at joachim-breitner.de
> http://www.joachim-breitner.de/
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From takenobu.hs at gmail.com  Mon Oct  1 12:07:41 2018
From: takenobu.hs at gmail.com (Takenobu Tani)
Date: Mon, 1 Oct 2018 21:07:41 +0900
Subject: Please update ghc user's guide for GHC 8.6
Message-ID: 

Dear devs,

Would you please update the latest document [1] to GHC 8.6?

[1]: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide

Regards,
Takenobu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nair.sreenidhi at gmail.com  Mon Oct  1 18:25:00 2018
From: nair.sreenidhi at gmail.com (Sreenidhi Nair)
Date: Mon, 1 Oct 2018 23:55:00 +0530
Subject: typed holes inferring very polymorphic types
Message-ID: 

Hello,

We tried the following code with ghc-8.6.1:

testFailure :: Char
testFailure =
  let x = Prelude.id _
  in x

which gave the following suggestion:

/home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:83:22: error:
    • Found hole: _ :: a
      Where: ‘a’ is a rigid type variable bound by
               the inferred type of x :: a
               at /home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:83:7-22

And then this:

testSuccess :: Char
testSuccess = _

which gave a much better suggestion:

/home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:87:15: error:
    • Found hole: _ :: Char
    • In the expression: _
      In an equation for ‘testSuccess’: testSuccess = _

Is there any way to get better suggestions with the 'let' version?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From a.pelenitsyn at gmail.com Mon Oct 1 21:03:26 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 1 Oct 2018 17:03:26 -0400 Subject: typed holes inferring very polymorphic types In-Reply-To: References: Message-ID: Hello Sreenidhi, This looks like a valid Trac ticket to me. Maybe, open one? -- Best, Artem On Mon, 1 Oct 2018 at 13:25 Sreenidhi Nair wrote: > Hello, > > We tried the following code with ghc-8.6.1 > > testFailure :: Char > testFailure = > let x = Prelude.id _ > in x > > which gave the following suggestion > > /home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:83:22: > error: > • Found hole: _ :: a > Where: ‘a’ is a rigid type variable bound by > the inferred type of x :: a > at > /home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:83:7-22 > > And then this > > testSuccess :: Char > testSuccess = _ > > which gave a much better suggestion > > /home/sreenidhi/Work/typeql/typeql-dbrecord/test/Test/Database/Postgres/Read/Combinator.hs:87:15: > error: > • Found hole: _ :: Char > • In the expression: _ > In an equation for ‘testSuccess’: testSuccess = _ > > Is there any way to get better suggestions with 'let' version? > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Wed Oct 3 06:16:03 2018 From: lonetiger at gmail.com (Phyx) Date: Wed, 3 Oct 2018 07:16:03 +0100 Subject: [ANNOUNCE] GHC 8.6.1 released In-Reply-To: <4E0D6D6B-0F2F-4070-A22E-8336A7905667@well-typed.com> References: <87wore1h9i.fsf@smart-cactus.org> <4E0D6D6B-0F2F-4070-A22E-8336A7905667@well-typed.com> Message-ID: Hi All, I've made a ticket for this but it seems it hasn't gotten any attention at all. As it stands now the 8.6.1 tarballs for Windows are a bit broken. 
Because of a mistake I made during the mapping of the ACLs from fopen to
CreateFile, it's accidentally asking for WRITE-attribute rights when opening
a read-only file. Note that this is slightly different from WRITE permissions
on the file itself. So reading a read-only file works fine, as long as you're
the owner or have sufficient rights to modify the metadata.

This is why the CI did not catch it. The CI cannot create a file of which
it's not an owner, or for which it doesn't have permission to remove the file
(it would get stuck). The only way to catch this is to run GHC from a
privileged location, such as how chocolatey installs it or how Haskell
Platform would.

Essentially this means no GHC on chocolatey or HP can run without you being
an admin or the owner of the location it was installed to, and the same
applies to any binaries produced by this GHC. This will probably prevent HP
builds for it.

The ticket is here https://ghc.haskell.org/trac/ghc/ticket/15667 and the
patch has been sitting at https://phabricator.haskell.org/D5177

I'll modify my chocolatey packages to actually run the GHC after installing
it as a post-install step. This should catch such errors during betas in the
future.

Thanks,
Tamar

On Mon, Sep 24, 2018 at 2:37 PM Ben Gamari wrote:
>
>
> On September 24, 2018 2:09:13 AM CDT, Jens Petersen wrote:
> >I have built 8.6.1 for Fedora 27, 28, 29, Rawhide, and EPEL7 in:
> >
> >https://copr.fedorainfracloud.org/coprs/petersen/ghc-8.6.1/
> >
> >The repo also includes latest cabal-install.
>
> Thanks Jens! This is a very helpful service.
>
> Cheers,
>
> - Ben
>
>
> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marlowsd at gmail.com  Wed Oct  3 18:32:40 2018
From: marlowsd at gmail.com (Simon Marlow)
Date: Wed, 3 Oct 2018 19:32:40 +0100
Subject: Phabricator workflow vs. GitHub
Message-ID: 

Here's an interesting blog post relevant to previous discussions about
Phabricator / GitHub:
https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/?fbclid=IwAR3JyQP5uCn6ENiHOTWd41y5D-U0_CCJ55_23nzKeUYTjgLASHu2dq5QCc0

Yes it's a decidedly pro-Phabricator rant, but it does go into a lot of
details about why the Phabricator workflow is productive, and might be
useful to those who struggle to get to grips with it coming from GitHub.

Cheers
Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From marlowsd at gmail.com  Thu Oct  4 07:35:54 2018
From: marlowsd at gmail.com (Simon Marlow)
Date: Thu, 4 Oct 2018 08:35:54 +0100
Subject: nofib oldest GHC to support?
In-Reply-To: 
References: <83c79d2552b379721ac8c625ac5a9141537e9562.camel@joachim-breitner.de>
Message-ID: 

Typically I never remove any support for older compilers in nofib and the
nofib-analyse tool, I just add support for new things. I realise we don't
continuously test any of those old versions and thus they can bitrot, but
in my experience so far it doesn't happen very often, and it's sometimes
really useful to be able to use older versions. Why did it break with
7.10.3? Can that be fixed?

I guess we could remove the Gofer support though :)

In the list of regexes you pointed to, don't you just need to add one more
to support the new format? It would be nice if those regexes had comments
to explain which version they were added for; I guess we could start doing
that from now on.

Cheers
Simon

On Mon, 1 Oct 2018 at 09:05, Ömer Sinan Ağacan wrote:

> We currently claim to support GOFER and GHC 4.02! Surely we can drop some
> of those support.
>
> I just tried booting nofib with GHC 7.10.3 and it failed. We don't even
> support 7.10.3, but we still have code to support ... 4.02!
> > The desire to cleanup is not because removing code is fun, it's because it > makes it easier to maintain. Currently I need to parse another variant of > `+RTS > -t` output, and I have to deal with this mess for it: > > > https://github.com/ghc/nofib/blob/a80baacfc29cc2e7ed50e94f3cd2648d11b1d7d5/nofib-analyse/Slurp.hs#L153-L207 > > (note that all of these need to be updated to add one more field) > > If we decide on what versions to support we could remove most of those (I > doubt > `+RTS -t` output changes too much, so maybe we can even remove all but > one). > > There are also other code with CPP macros for GOFER etc. > > I suggest supporting HEAD + 3 major releases. In this plan currently we > should > be able to run nofib with GHC HEAD, 8.6, 8.4, and 8.2. Then setting up a > CI to > test nofib with these configurations should be trivial (except for GHC HEAD > maybe, I don't know if we're publishing GHC HEAD bindists for CI servers to > use). > > Ömer > Joachim Breitner , 1 Eki 2018 Pzt, 03:33 > tarihinde şunu yazdı: > > > > Hi, > > > > there is no policy that I am aware of, but being able to run nofib on > > old (or even ancient) versions of GHC is likely to make someone happy > > in the future, so I’d say that the (valid!) desire to cleanup is not a > > good reason to drop support – only if it would require unreasonable > > efforts should we drop old versions there. > > > > Cheers, > > Joachim > > > > Am Sonntag, den 30.09.2018, 14:18 +0300 schrieb Ömer Sinan Ağacan: > > > Do we have a policy on the oldest GHC to support in nofib? I'm > currently doing > > > some hacking on nofib to parse some new info printed by a modified > GHC, and I > > > think we can do a lot of cleaning (at the very least remove some > regexes and > > > parsers) if we decide on which GHCs to support. > > > > > > I checked the README and RunningNoFib wiki page but couldn't see > anything > > > relevant. 
> > > > > > Thanks > > > > > > Ömer > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- > > Joachim Breitner > > mail at joachim-breitner.de > > http://www.joachim-breitner.de/ > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng.shao at tweag.io Thu Oct 4 13:55:30 2018 From: cheng.shao at tweag.io (Shao, Cheng) Date: Thu, 4 Oct 2018 21:55:30 +0800 Subject: Does it sound a good idea to implement "backend plugins"? Message-ID: Hi all, I'm thinking of adding "backend plugins" in the current Plugins mechanism which allows one to inspect/modify the IRs post simplifier pass (STG/Cmm), similar to the recently added source plugins for HsSyn IRs. This can be useful for anyone creating a custom GHC backend to target an experimental platform (e.g. the Asterius compiler which targets WebAssembly), and previously in order to retrieve those IRs from the regular pipeline, we need to use Hooks which is somewhat hacky. Does this sound a good idea to you? If so, I can open a trac ticket and a Phab diff for this feature. Best, Shao Cheng From matthewtpickering at gmail.com Thu Oct 4 14:01:59 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 4 Oct 2018 15:01:59 +0100 Subject: Does it sound a good idea to implement "backend plugins"? In-Reply-To: References: Message-ID: Sounds like a reasonable idea to me. However, you should take some time to propose a concrete interface as this part was not obvious in the design of source plugins. 
Matt On Thu, Oct 4, 2018 at 2:56 PM Shao, Cheng wrote: > > Hi all, > > I'm thinking of adding "backend plugins" in the current Plugins > mechanism which allows one to inspect/modify the IRs post simplifier > pass (STG/Cmm), similar to the recently added source plugins for HsSyn > IRs. This can be useful for anyone creating a custom GHC backend to > target an experimental platform (e.g. the Asterius compiler which > targets WebAssembly), and previously in order to retrieve those IRs > from the regular pipeline, we need to use Hooks which is somewhat > hacky. > > Does this sound a good idea to you? If so, I can open a trac ticket > and a Phab diff for this feature. > > Best, > Shao Cheng > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Thu Oct 4 15:22:09 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 04 Oct 2018 11:22:09 -0400 Subject: Does it sound a good idea to implement "backend plugins"? In-Reply-To: References: Message-ID: <87d0spg2jm.fsf@smart-cactus.org> "Shao, Cheng" writes: > Hi all, > > I'm thinking of adding "backend plugins" in the current Plugins > mechanism which allows one to inspect/modify the IRs post simplifier > pass (STG/Cmm), similar to the recently added source plugins for HsSyn > IRs. This can be useful for anyone creating a custom GHC backend to > target an experimental platform (e.g. the Asterius compiler which > targets WebAssembly), and previously in order to retrieve those IRs > from the regular pipeline, we need to use Hooks which is somewhat > hacky. > > Does this sound a good idea to you? If so, I can open a trac ticket > and a Phab diff for this feature. > Yes, during the Implementors' Workshop this year it seemed like there was considerable interest in such a mechanism. 
However, as Matthew said, the devil is in the details; before starting an implementation I would recommend that you open a ticket describing the specifics of the proposed interface. It also wouldn't hurt to motivate the proposal with a discussion of the concrete use-cases that the interface is meant to address. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From stegeman at gmail.com Thu Oct 4 16:33:43 2018 From: stegeman at gmail.com (Luite Stegeman) Date: Thu, 4 Oct 2018 18:33:43 +0200 Subject: Does it sound a good idea to implement "backend plugins"? In-Reply-To: References: Message-ID: I think it sounds like a potentially good idea in general, but I agree with Ben and Matthew here that a more concrete plan of intended use is needed. Adding "pluggable backends" to spin up new targets seems to require quite a bit of additional infrastructure for initialising a library directory and package database. But there are probably more specific use cases that need inspecting/modifying STG or Cmm where plugins would already be useful in practice. Hooks (or rather their locations in the pipeline) are rather ad hoc by nature, but for Asterius a hook that takes Cmm and takes over from there seems like a reasonable approach given the current state of things. I think the Cmm hook you implemented (or something similar) would be perfectly acceptable to use for now. I don't think it's a problem if a hook exists for some time, and at some point it's superseded by a more general plugin mechanism. Especially with the GHC 6 month release cycle there's not much need for future proofing. 
Luite On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng wrote: > Hi all, > > I'm thinking of adding "backend plugins" in the current Plugins > mechanism which allows one to inspect/modify the IRs post simplifier > pass (STG/Cmm), similar to the recently added source plugins for HsSyn > IRs. This can be useful for anyone creating a custom GHC backend to > target an experimental platform (e.g. the Asterius compiler which > targets WebAssembly), and previously in order to retrieve those IRs > from the regular pipeline, we need to use Hooks which is somewhat > hacky. > > Does this sound a good idea to you? If so, I can open a trac ticket > and a Phab diff for this feature. > > Best, > Shao Cheng > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng.shao at tweag.io Thu Oct 4 17:52:11 2018 From: cheng.shao at tweag.io (Shao, Cheng) Date: Fri, 5 Oct 2018 01:52:11 +0800 Subject: Does it sound a good idea to implement "backend plugins"? In-Reply-To: References: Message-ID: > Adding "pluggable backends" to spin up new targets seems to require quite a bit of additional infrastructure for initialising a library directory and package database. But there are probably more specific use cases that need inspecting/modifying STG or Cmm where plugins would already be useful in practice. I think setting up a new global libdir/pkgdb is beyond the scope of backend plugins. The user shall implement his/her own boot script to configure for the new architecture, generate relevant headers, run Cabal's Setup program to launch GHC with the plugin loaded. > Hooks (or rather their locations in the pipeline) are rather ad hoc by nature, but for Asterius a hook that takes Cmm and takes over from there seems like a reasonable approach given the current state of things. 
I think the Cmm hook you implemented (or something similar) would be perfectly acceptable to use for now. For the use case of asterius itself, indeed Hooks already fit the use case for now. But since we seek to upstream our newly added features in our ghc fork back to ghc hq, we should upstream those changes early and make them more principled. Compared to Hooks, I prefer to move to Plugins entirely since: * Plugins are more composable, you can load multiple plugins in one ghc invocation. Hooks are not. * If I implement the same mechanisms in Plugins, this can be beneficial to other projects. Currently, in asterius, everything works via a pile of hacks upon hacks in ghc-toolkit, and it's not good for reuse. * The newly added backend plugins shouldn't have visible correctness/performance impact if they're not used, and it's just a few local modifications in the ghc codebase. > On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng wrote: >> >> Hi all, >> >> I'm thinking of adding "backend plugins" in the current Plugins >> mechanism which allows one to inspect/modify the IRs post simplifier >> pass (STG/Cmm), similar to the recently added source plugins for HsSyn >> IRs. This can be useful for anyone creating a custom GHC backend to >> target an experimental platform (e.g. the Asterius compiler which >> targets WebAssembly), and previously in order to retrieve those IRs >> from the regular pipeline, we need to use Hooks which is somewhat >> hacky. >> >> Does this sound a good idea to you? If so, I can open a trac ticket >> and a Phab diff for this feature. >> >> Best, >> Shao Cheng >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From moritz.angermann at gmail.com Thu Oct 4 22:44:00 2018 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Fri, 5 Oct 2018 06:44:00 +0800 Subject: Does it sound a good idea to implement "backend plugins"? 
In-Reply-To: 
References: 
Message-ID: <13A02A99-E5F7-4A07-8D92-87F00C8463FD@gmail.com>

A long time ago, I tried to inject plugin logic to allow some control over
the driver pipeline (phase ordering) and to hook various code-gen-related
functions.

See https://phabricator.haskell.org/D535

At that time I ran into issues that might simply not exist with plugins
anymore today, but I haven't looked. The whole design wasn't quite right and
injected everything into the dynflags. Also ghc wanted to be able to compile
the plugin on the fly, but I needed the plugin to be loaded very early
during the startup phase to exert enough control over the rest of the
pipeline through the plugin.

Cheers,
Moritz

Sent from my iPhone

On 5 Oct 2018, at 1:52 AM, Shao, Cheng wrote:

>> Adding "pluggable backends" to spin up new targets seems to require quite a bit of additional infrastructure for initialising a library directory and package database. But there are probably more specific use cases that need inspecting/modifying STG or Cmm where plugins would already be useful in practice.
>
> I think setting up a new global libdir/pkgdb is beyond the scope of
> backend plugins. The user shall implement his/her own boot script to
> configure for the new architecture, generate relevant headers, run
> Cabal's Setup program to launch GHC with the plugin loaded.
>
>> Hooks (or rather their locations in the pipeline) are rather ad hoc by nature, but for Asterius a hook that takes Cmm and takes over from there seems like a reasonable approach given the current state of things. I think the Cmm hook you implemented (or something similar) would be perfectly acceptable to use for now.
>
> For the use case of asterius itself, indeed Hooks already fit the use
> case for now. But since we seek to upstream our newly added features
> in our ghc fork back to ghc hq, we should upstream those changes early
> and make them more principled.
Compared to Hooks, I prefer to move to > Plugins entirely since: > > * Plugins are more composable, you can load multiple plugins in one > ghc invocation. Hooks are not. > * If I implement the same mechanisms in Plugins, this can be > beneficial to other projects. Currently, in asterius, everything works > via a pile of hacks upon hacks in ghc-toolkit, and it's not good for > reuse. > * The newly added backend plugins shouldn't have visible > correctness/performance impact if they're not used, and it's just a > few local modifications in the ghc codebase. > >>> On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng wrote: >>> >>> Hi all, >>> >>> I'm thinking of adding "backend plugins" in the current Plugins >>> mechanism which allows one to inspect/modify the IRs post simplifier >>> pass (STG/Cmm), similar to the recently added source plugins for HsSyn >>> IRs. This can be useful for anyone creating a custom GHC backend to >>> target an experimental platform (e.g. the Asterius compiler which >>> targets WebAssembly), and previously in order to retrieve those IRs >>> from the regular pipeline, we need to use Hooks which is somewhat >>> hacky. >>> >>> Does this sound a good idea to you? If so, I can open a trac ticket >>> and a Phab diff for this feature. >>> >>> Best, >>> Shao Cheng >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng.shao at tweag.io Fri Oct 5 00:02:23 2018 From: cheng.shao at tweag.io (Shao, Cheng) Date: Fri, 5 Oct 2018 08:02:23 +0800 Subject: Does it sound a good idea to implement "backend plugins"? 
In-Reply-To: <13A02A99-E5F7-4A07-8D92-87F00C8463FD@gmail.com> References: <13A02A99-E5F7-4A07-8D92-87F00C8463FD@gmail.com> Message-ID: > A long time ago, I’ve tried to inject plugin logic to allows some control over the driver pipeline (phase ordering) and hooking various code gen related functions. > > See https://phabricator.haskell.org/D535 Cool! I haven't thoroughly read the history of that diff, but allowing manipulation of a Hooks via a Plugin seems overkill in this case, and even if one can do so, it still doesn't lead to the backend IR types; one would need to use runPhaseHook and modify the behavior after a CgGuts is generated, which unfortunately leads to quite some boilerplate code. > At that time I ran into issues that might simply not exist with plugins anymore today, but I haven’t looked. Interesting. I'll make sure to consult you in case I'm bitten by some hidden issues when I actually implement it :) > The whole design wasn’t quite right and injects everything into the dynflags. Also ghc wanted to be able to compile the plugin on the fly, but I needed the plugin to be loaded very early during the startup phase to exert enough control of the rest of the pipeline through the plugin. Well, in the case of backend plugins, it isn't supposed to be a home plugin to be compiled and used on the fly. A typical use case would be compiling/installing the plugin to a standalone pkgdb, then used to compile other packages. > > On 5 Oct 2018, at 1:52 AM, Shao, Cheng wrote: > > Adding "pluggable backends" to spin up new targets seems to require quite a bit of additional infrastructure for initialising a library directory and package database. But there are probably more specific use cases that need inspecting/modifying STG or Cmm where plugins would already be useful in practice. > > > I think setting up a new global libdir/pkgdb is beyond the scope of > backend plugins. 
The user shall implement his/her own boot script to > configure for the new architecture, generate relevant headers, run > Cabal's Setup program to launch GHC with the plugin loaded. > > Hooks (or rather their locations in the pipeline) are rather ad hoc by nature, but for Asterius a hook that takes Cmm and takes over from there seems like a reasonable approach given the current state of things. I think the Cmm hook you implemented (or something similar) would be perfectly acceptable to use for now. > > > For the use case of asterius itself, indeed Hooks already fit the use > case for now. But since we seek to upstream our newly added features > in our ghc fork back to ghc hq, we should upstream those changes early > and make them more principled. Compared to Hooks, I prefer to move to > Plugins entirely since: > > * Plugins are more composable, you can load multiple plugins in one > ghc invocation. Hooks are not. > * If I implement the same mechanisms in Plugins, this can be > beneficial to other projects. Currently, in asterius, everything works > via a pile of hacks upon hacks in ghc-toolkit, and it's not good for > reuse. > * The newly added backend plugins shouldn't have visible > correctness/performance impact if they're not used, and it's just a > few local modifications in the ghc codebase. > > On Thu, Oct 4, 2018 at 3:56 PM Shao, Cheng wrote: > > > Hi all, > > > I'm thinking of adding "backend plugins" in the current Plugins > > mechanism which allows one to inspect/modify the IRs post simplifier > > pass (STG/Cmm), similar to the recently added source plugins for HsSyn > > IRs. This can be useful for anyone creating a custom GHC backend to > > target an experimental platform (e.g. the Asterius compiler which > > targets WebAssembly), and previously in order to retrieve those IRs > > from the regular pipeline, we need to use Hooks which is somewhat > > hacky. > > > Does this sound a good idea to you? 
If so, I can open a trac ticket > > and a Phab diff for this feature. > > > Best, > > Shao Cheng > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ezyang at mit.edu Fri Oct 5 00:12:53 2018 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 04 Oct 2018 20:12:53 -0400 Subject: Phabricator workflow vs. GitHub In-Reply-To: References: Message-ID: <1538698331-sup-4138@sabre> Stacked diffs are so useful that I have literally spent several days building tooling so that I can write stacked diffs and then ship them to GitHub (where the project lives). It's just that good. Edward Excerpts from Simon Marlow's message of 2018-10-03 19:32:40 +0100: > Here's an interesting blog post relevant to previous discussions about > Phabricator / GitHub: > https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/?fbclid=IwAR3JyQP5uCn6ENiHOTWd41y5D-U0_CCJ55_23nzKeUYTjgLASHu2dq5QCc0 > > Yes it's a decidedly pro-Phabricator rant, but it does go into a lot of > details about why the Phabricator workflow is productive, and might be > useful to those who struggle to get to grips with it coming from GitHub. > > Cheers > Simon From kazu at iij.ad.jp Fri Oct 5 03:01:06 2018 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Fri, 05 Oct 2018 12:01:06 +0900 (JST) Subject: [ANNOUNCE] GHC 8.6.1 released In-Reply-To: References: <87wore1h9i.fsf@smart-cactus.org> Message-ID: <20181005.120106.1048227468150704327.kazu@iij.ad.jp> Hi Evan, > Has anyone installed the OS X binary distribution? 
I get: > > "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy > libraries/ghc-prim dist-install "strip" '' '/usr/local' > '/usr/local/lib/ghc-8.6.1' > '/usr/local/share/doc/ghc-8.6.1/html/libraries' 'v p dyn' > dyld: Library not loaded: /usr/local/opt/gmp/lib/libgmp.10.dylib > Referenced from: > /usr/local/src/hs/ghc-8.6.1/libraries/base/dist-install/build/libHSbase-4.12.0.0-ghc8.6.1.dylib > Reason: image not found I met the same problem. --Kazu From mail at nh2.me Fri Oct 5 03:30:24 2018 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Fri, 5 Oct 2018 05:30:24 +0200 Subject: Phabricator workflow vs. GitHub In-Reply-To: <1538698331-sup-4138@sabre> References: <1538698331-sup-4138@sabre> Message-ID: <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> There are some things in these argumentations that I don't get. When you have a stack of commits on top of master, like: * C | * B | * A | * master What do you use as base for `arc diff` for each of them? If B depends on A (the patch expressed by B doesn't apply if A was applied first), do you still use master as a base for B, or do you use Phabricator's feature to have diffs depend on other diffs? From marlowsd at gmail.com Fri Oct 5 09:55:51 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 5 Oct 2018 10:55:51 +0100 Subject: Phabricator workflow vs. GitHub In-Reply-To: <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> References: <1538698331-sup-4138@sabre> <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> Message-ID: I think the article is assuming the base for `arc diff` is always the parent revision, i.e. `arc diff HEAD^`, which is how the workflow works best. Strangely I don't think the open source Phabricator is set up to do this by default so you have to actually type `arc diff HEAD^` (there's probably some setting somewhere so that you can make this the default). On the diff in Phabricator you can enter the dependencies manually. 
Really the tooling ought to do this for you (and at Facebook our internal tooling does do this) but for now manually specifying the dependencies is not terrible. Then Phabricator shows you the nice dependency tree in the UI, so you can see the state of all of your diffs in the stack. Cheers Simon On Fri, 5 Oct 2018 at 04:30, Niklas Hambüchen wrote: > There are some things in these argumentations that I don't get. > > When you have a stack of commits on top of master, like: > > * C > | > * B > | > * A > | > * master > > What do you use as base for `arc diff` for each of them? > > If B depends on A (the patch expressed by B doesn't apply if A was applied > first), > do you still use master as a base for B, or do you use Phabricator's > feature to have diffs depend on other diffs? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Fri Oct 5 14:21:44 2018 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Fri, 5 Oct 2018 16:21:44 +0200 Subject: Phabricator workflow vs. GitHub In-Reply-To: References: <1538698331-sup-4138@sabre> <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> Message-ID: <53059bea-11e4-4b7c-17a8-8b6bfe384d41@nh2.me> > I think the article is assuming the base for `arc diff` is always the parent revision, i.e. `arc diff HEAD^`, which is how the workflow works best. Strangely I don't think the open source Phabricator is set up to do this by default so you have to actually type `arc diff HEAD^` Perhaps that is exactly to address the problem in my example: If you submit a patch B that depends on A, by default this patch will fail to apply against master on the Phabricator side unless you manually set up dependencies? I suppose this is why it defaults to submitting the whole master-A-B history instead? > for now manually specifying the dependencies is not terrible. 
I have found it pretty terrible: Setting up dependencies between commits by hand is time consuming, and you can do it wrong, which easily leads to confusion. If I do 4 refactor commits and on top a new feature that needs them, why should I have to manually click together the dependencies between those commits? The whole point of git is that it tracks that already for me in its DAG. It gets worse if I have to react to review feedback: Say Ben tells me in review that I should really squash commits 2 and 3 because they don't work independent of each other. Easily done with `git rebase -i` as suggested, but now I have to go and reflect what I just did in version control by manual clicking in an external tool again (and I better kick out the right Diff). Similarly, if want to rename all occurrences of my_var to myVar across my 5 commits using rebase -i, I have to manually invoke the right arc invocation after each commit. So I've found it a big pain to maintain a series of dependent commits with this workflow. I can imagine this to be only painless if you have access to the tooling you said you have at facebook, that automates these things for you. In my ideal world, it should work like this: * Locally, a series of dependent patches goes into a git branch. * Branches that are dependent on each other are based on each other. * You have a tool that, if you amend a commit in a branch, can rebase all the dependent branches accordingly. * You can tell `arc` to submit a whole branch, and it will automatically upload all dependent branches and set up the Phabricator dependency relationships for you. * When you react to review feedback, you change your history locally, and run an `arc upload-changes`, that automatically updates all Diffs accordingly. Niklas From omeragacan at gmail.com Fri Oct 5 18:48:08 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 5 Oct 2018 21:48:08 +0300 Subject: Shall we make -dsuppress-uniques default? 
Message-ID: I asked this on IRC and didn't hear a lot of opposition, so as the next step I'd like to ask ghc-devs. I literally never need the details on uniques that we currently print by default. I either don't care about variables too much (when not comparing the output with some other output), or I need -dsuppress-uniques (when comparing outputs). The problem is I have to remember to add -dsuppress-uniques if I'm going to compare the outputs, and if I decide to compare outputs after the fact I need to re-generate them with -dsuppress-uniques. This takes time and effort. If you're also of the same opinion I suggest making -dsuppress-uniques default, and providing a -dno-suppress-uniques (if it doesn't already exist). Ömer From rae at cs.brynmawr.edu Fri Oct 5 18:54:55 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 5 Oct 2018 14:54:55 -0400 Subject: Shall we make -dsuppress-uniques default? In-Reply-To: References: Message-ID: I'm in the opposite camp. More often than not, the biggest advantage of dumps during GHC development is to see the uniques. Indeed, I often ignore the actual names of variables and just work in my head with the uniques. Perhaps the more complete answer is to fine-tune what settings cause the uniques to be printed. -ddump-xx-trace should almost certainly. Perhaps other modes needn't. What do you say to GHC to get it to print the uniques that you don't like? Richard > On Oct 5, 2018, at 2:48 PM, Ömer Sinan Ağacan wrote: > > I asked this on IRC and didn't hear a lot of opposition, so as the next step > I'd like to ask ghc-devs. > > I literally never need the details on uniques that we currently print by > default. I either don't care about variables too much (when not comparing the > output with some other output), or I need -dsuppress-uniques (when comparing > outputs). 
The problem is I have to remember to add -dsuppress-uniques if I'm > going to compare the outputs, and if I decide to compare outputs after the fact > I need to re-generate them with -dsuppress-uniques. This takes time and effort. > > If you're also of the same opinion I suggest making -dsuppress-uniques default, > and providing a -dno-suppress-uniques (if it doesn't already exist). > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Fri Oct 5 19:02:29 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 5 Oct 2018 22:02:29 +0300 Subject: Shall we make -dsuppress-uniques default? In-Reply-To: References: Message-ID: > What do you say to GHC to get it to print the uniques that you don't like? I usually use one of these: -ddump-simpl, -dverbose-core2core, -ddump-simpl-iterations, -ddump-stg. All of these print variables with unique details and I literally never need those details. Rarely I use -ddump-cmm too. Agreed that having different defaults in different dumps/traces might work .. Ömer Richard Eisenberg , 5 Eki 2018 Cum, 21:54 tarihinde şunu yazdı: > > I'm in the opposite camp. More often than not, the biggest advantage of dumps during GHC development is to see the uniques. Indeed, I often ignore the actual names of variables and just work in my head with the uniques. > > Perhaps the more complete answer is to fine-tune what settings cause the uniques to be printed. -ddump-xx-trace should almost certainly. Perhaps other modes needn't. What do you say to GHC to get it to print the uniques that you don't like? > > Richard > > > On Oct 5, 2018, at 2:48 PM, Ömer Sinan Ağacan wrote: > > > > I asked this on IRC and didn't hear a lot of opposition, so as the next step > > I'd like to ask ghc-devs. > > > > I literally never need the details on uniques that we currently print by > > default. 
I either don't care about variables too much (when not comparing the > > output with some other output), or I need -dsuppress-uniques (when comparing > > outputs). The problem is I have to remember to add -dsuppress-uniques if I'm > > going to compare the outputs, and if I decide to compare outputs after the fact > > I need to re-generate them with -dsuppress-uniques. This takes time and effort. > > > > If you're also of the same opinion I suggest making -dsuppress-uniques default, > > and providing a -dno-suppress-uniques (if it doesn't already exist). > > > > Ömer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Fri Oct 5 23:11:34 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Oct 2018 23:11:34 +0000 Subject: Shall we make -dsuppress-uniques default? In-Reply-To: References: Message-ID: Like Richard I use the uniques all the time. I'd prefer to leave it as-is, unless there is widespread support for a change S | -----Original Message----- | From: ghc-devs On Behalf Of Ömer Sinan | Agacan | Sent: 05 October 2018 20:02 | To: rae at cs.brynmawr.edu | Cc: ghc-devs | Subject: Re: Shall we make -dsuppress-uniques default? | | > What do you say to GHC to get it to print the uniques that you don't | like? | | I usually use one of these: -ddump-simpl, -dverbose-core2core, | -ddump-simpl-iterations, -ddump-stg. All of these print variables with | unique | details and I literally never need those details. Rarely I use -ddump-cmm | too. | | Agreed that having different defaults in different dumps/traces might | work .. | | Ömer | | Richard Eisenberg , 5 Eki 2018 Cum, 21:54 | tarihinde şunu yazdı: | > | > I'm in the opposite camp. More often than not, the biggest advantage of | dumps during GHC development is to see the uniques. 
Indeed, I often | ignore the actual names of variables and just work in my head with the | uniques. | > | > Perhaps the more complete answer is to fine-tune what settings cause | the uniques to be printed. -ddump-xx-trace should almost certainly. | Perhaps other modes needn't. What do you say to GHC to get it to print | the uniques that you don't like? | > | > Richard | > | > > On Oct 5, 2018, at 2:48 PM, Ömer Sinan Ağacan | wrote: | > > | > > I asked this on IRC and didn't hear a lot of opposition, so as the | next step | > > I'd like to ask ghc-devs. | > > | > > I literally never need the details on uniques that we currently print | by | > > default. I either don't care about variables too much (when not | comparing the | > > output with some other output), or I need -dsuppress-uniques (when | comparing | > > outputs). The problem is I have to remember to add -dsuppress-uniques | if I'm | > > going to compare the outputs, and if I decide to compare outputs | after the fact | > > I need to re-generate them with -dsuppress-uniques. This takes time | and effort. | > > | > > If you're also of the same opinion I suggest making -dsuppress- | uniques default, | > > and providing a -dno-suppress-uniques (if it doesn't already exist). 
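For the diffing use case, the effect of -dsuppress-uniques can also be approximated after the fact by post-processing a dump. The sketch below is deliberately naive and hypothetical: the `x_a1b2`-style suffix shape it assumes is illustrative only, and unlike the real flag it would also mangle user-written snake_case names, since it works on text rather than on GHC's internal representation.

```haskell
-- Naive external approximation of -dsuppress-uniques: strip trailing
-- unique-looking suffixes such as "_d2jk" or "_00" from whitespace-
-- separated words, so two dumps can be diffed.  Caveat: a word like
-- "my_var" would be stripped too; this is a rough sketch, not a
-- faithful model of GHC's unique encoding.
import Data.Char (isAlphaNum)

stripUniques :: String -> String
stripUniques = unwords . map stripWord . words
  where
    stripWord w = case break (== '_') w of
      (base, '_' : suffix)
        | not (null base)
        , not (null suffix)
        , all isAlphaNum suffix -> base
      _ -> w

main :: IO ()
main = putStrLn (stripUniques "case ds_d2jk of wild_00 { I# x_a1b2 -> x_a1b2 }")
-- prints: case ds of wild { I# x -> x }
```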
| > > | > > Ömer | > > _______________________________________________ | > > ghc-devs mailing list | > > ghc-devs at haskell.org | > > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs | > | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Fri Oct 5 23:47:09 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 05 Oct 2018 19:47:09 -0400 Subject: Please update ghc user's guide for GHC 8.6 In-Reply-To: References: Message-ID: <87va6gc5xi.fsf@smart-cactus.org> Takenobu Tani writes: > Dear devs, > > Would you please update the latest document [1] to GHC 8.6? > I did this earlier this week. Thanks for the ping, Takenobu! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From takenobu.hs at gmail.com Sat Oct 6 12:36:25 2018 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 6 Oct 2018 21:36:25 +0900 Subject: Please update ghc user's guide for GHC 8.6 In-Reply-To: <87va6gc5xi.fsf@smart-cactus.org> References: <87va6gc5xi.fsf@smart-cactus.org> Message-ID: Thank you so much for taking care of this despite being busy. P.S. The current version includes "3. Release notes for version 8.4.2" in the top page.
Regards, Takenobu On Sat, Oct 6, 2018 at 8:47 AM Ben Gamari wrote: > Takenobu Tani writes: > > > Dear devs, > > > > Would you please update latest document [1] to GHC 8.6 ? > > > I did this earlier this week. Thanks for the ping Takenobu1 > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Sat Oct 6 20:50:26 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 6 Oct 2018 21:50:26 +0100 Subject: Phabricator workflow vs. GitHub In-Reply-To: <53059bea-11e4-4b7c-17a8-8b6bfe384d41@nh2.me> References: <1538698331-sup-4138@sabre> <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> <53059bea-11e4-4b7c-17a8-8b6bfe384d41@nh2.me> Message-ID: On Fri, 5 Oct 2018 at 15:22, Niklas Hambüchen wrote: > > I think the article is assuming the base for `arc diff` is always the > parent revision, i.e. `arc diff HEAD^`, which is how the workflow works > best. Strangely I don't think the open source Phabricator is set up to do > this by default so you have to actually type `arc diff HEAD^` > > Perhaps that is exactly to address the problem in my example: > If you submit a patch B that depends on A, by default this patch will fail > to apply against master on the Phabricator side unless you manually set up > dependencies? I suppose this is why it defaults to submitting the whole > master-A-B history instead? > > > for now manually specifying the dependencies is not terrible. > > I have found it pretty terrible: > Setting up dependencies between commits by hand is time consuming, and you > can do it wrong, which easily leads to confusion. > > If I do 4 refactor commits and on top a new feature that needs them, why > should I have to manually click together the dependencies between those > commits? The whole point of git is that it tracks that already for me in > its DAG. 
> > It gets worse if I have to react to review feedback: > > Say Ben tells me in review that I should really squash commits 2 and 3 > because they don't work independent of each other. Easily done with `git > rebase -i` as suggested, but now I have to go and reflect what I just did > in version control by manual clicking in an external tool again (and I > better kick out the right Diff). > > Similarly, if want to rename all occurrences of my_var to myVar across my > 5 commits using rebase -i, I have to manually invoke the right arc > invocation after each commit. > > So I've found it a big pain to maintain a series of dependent commits with > this workflow. > > I can imagine this to be only painless if you have access to the tooling > you said you have at facebook, that automates these things for you. > In fact we did it manually for a long time, the tool support is a recent development. Tool support can always improve things, but I'll take the inconvenience of having to specify dependencies manually in exchange for the other benefits of stacked diffs. You can put the dependencies in the commit log using "Depends on: D1234", as an alternative to the UI. 'git rebase -i' with 'x arc diff HEAD^ -m rebase' is a nice trick for rebasing your stack. Cheers Simon > In my ideal world, it should work like this: > > * Locally, a series of dependent patches goes into a git branch. > * Branches that are dependent on each other are based on each other. > * You have a tool that, if you amend a commit in a branch, can rebase all > the dependent branches accordingly. > * You can tell `arc` to submit a whole branch, and it will automatically > upload all dependent branches and set up the Phabricator dependency > relationships for you. > * When you react to review feedback, you change your history locally, and > run an `arc upload-changes`, that automatically updates all Diffs > accordingly. > > Niklas > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Sun Oct 7 15:50:36 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 07 Oct 2018 11:50:36 -0400 Subject: Phabricator workflow vs. GitHub In-Reply-To: <53059bea-11e4-4b7c-17a8-8b6bfe384d41@nh2.me> References: <1538698331-sup-4138@sabre> <3baf88f2-f3a4-5349-eb2d-f863e985189c@nh2.me> <53059bea-11e4-4b7c-17a8-8b6bfe384d41@nh2.me> Message-ID: <87murpdad4.fsf@smart-cactus.org> Niklas Hambüchen writes: ..[snip]. > > So I've found it a big pain to maintain a series of dependent commits with this workflow. > > I can imagine this to be only painless if you have access to the tooling you said you have at facebook, that automates these things for you. > > In my ideal world, it should work like this: > > * Locally, a series of dependent patches goes into a git branch. > * Branches that are dependent on each other are based on each other. > * You have a tool that, if you amend a commit in a branch, can rebase all the dependent branches accordingly. > * You can tell `arc` to submit a whole branch, and it will automatically upload all dependent branches and set up the Phabricator dependency relationships for you. > * When you react to review feedback, you change your history locally, and run an `arc upload-changes`, that automatically updates all Diffs accordingly. > Yes, I agree that this would be ideal. I have spent quite some time manually updating related differentials in this way. On the other hand, I still think this manual process is in many ways better than the typical GitHub model, where lack of any sort of PR dependency structure to make reviewing larger changes extremely painful. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From vlad.z.4096 at gmail.com Mon Oct 8 20:44:06 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Mon, 8 Oct 2018 23:44:06 +0300 Subject: Parser.y rewrite with parser combinators Message-ID: Hello devs, Recently I've been working on a couple of parsing-related issues in GHC. I implemented support for the -XStarIsType extension, fixed parsing of the (!) type operator (Trac #15457), allowed using type operators in existential contexts (Trac #15675). Doing these tasks required way more engineering effort than I expected from my prior experience working with parsers due to complexities of GHC's grammar. In the last couple of days, I've been working on Trac #1087 - a 12-year-old parsing bug. After trying out a couple of approaches, to my dismay I realised that fixing it properly (including support for bang patterns inside infix constructors, etc) would require a complete rewrite of expression and pattern parsing logic. Worse yet, most of the work would be done outside Parser.y in Haskell code instead, in RdrHsSyn helpers. When I try to keep the logic inside Parser.y, in every design direction I face reduce/reduce conflicts. The reduce/reduce conflicts are the worst. Perhaps it is finally time to admit that Haskell syntax with all of the GHC extensions cannot fit into a LALR grammar? The extent of hacks that we have right now just to make parsing possible is astonishing. For instance, we have dedicated constructors in HsExpr to make parsing patterns possible (EWildPat, EAsPat, EViewPat, ELazyPat). That is, one of the fundamental types (that the type checker operates on) has four additional constructors that exist due to a reduce/reduce conflict between patterns and expressions. I propose a complete rewrite of GHC's parser to use recursive descent parsing with monadic parser combinators. 1. We could significantly simplify parsing logic by doing things in a more direct manner.
For instance, instead of parsing patterns as expressions and then post-processing them, we could have separate parsing logic for patterns and expressions. 2. We could fix long-standing parsing bugs like Trac #1087 because recursive descent offers more expressive power than LALR (at the cost of support for left recursion, which is not much of a loss in practice). 3. New extensions to the grammar would require less engineering effort. Of course, this rewrite is a huge chunk of work, so before I start, I would like to know that this work would be accepted if done well. Here's what I want to achieve: * Comparable performance. The new parser could turn out to be faster because it would do less post-processing, but it could be slower because 'happy' does all the sorts of low-level optimisations. I will consider this project a success only if comparable performance is achieved. * Correctness. The new parser should handle 100% of the syntactic constructs that the current parser can handle. * Error messages. The new error messages should be of equal or better quality than existing ones. * Elegance. The new parser should bring simplification to other parts of the compiler (e.g. removal of pattern constructors from HsExpr). And one of the design principles is to represent things by dedicated data structures, in contrast to the current state of affairs where we represent patterns as expressions, data constructor declarations as types (before D5180), etc. Let me know if this is a good/acceptable direction of travel. That's definitely something that I personally would like to see happen. All the best, - Vladislav From simonpj at microsoft.com Mon Oct 8 21:26:31 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Oct 2018 21:26:31 +0000 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: I'm no parser expert, but a parser that was easier to understand and modify, and was as fast as the current one, sounds good to me. 
It's a tricky area though; e.g. the layout rule. Worth talking to Simon Marlow. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Vladislav | Zavialov | Sent: 08 October 2018 21:44 | To: ghc-devs | Subject: Parser.y rewrite with parser combinators | | Hello devs, | | Recently I've been working on a couple of parsing-related issues in | GHC. I implemented support for the -XStarIsType extension, fixed | parsing of the (!) type operator (Trac #15457), allowed using type | operators in existential contexts (Trac #15675). | | Doing these tasks required way more engineering effort than I expected | from my prior experience working with parsers due to complexities of | GHC's grammar. | | In the last couple of days, I've been working on Trac #1087 - a | 12-year old parsing bug. After trying out a couple of approaches, to | my dismay I realised that fixing it properly (including support for | bang patterns inside infix constructors, etc) would require a complete | rewrite of expression and pattern parsing logic. | | Worse yet, most of the work would be done outside Parser.y in Haskell | code instead, in RdrHsSyn helpers. When I try to keep the logic inside | Parser.y, in every design direction I face reduce/reduce conflicts. | | The reduce/reduce conflicts are the worst. | | Perhaps it is finally time to admit that Haskell syntax with all of | the GHC cannot fit into a LALR grammar? | | The extent of hacks that we have right now just to make parsing | possible is astonishing. For instance, we have dedicated constructors | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, | EViewPat, ELazyPat). That is, one of the fundamental types (that the | type checker operates on) has four additional constructors that exist | due to a reduce/reduce conflict between patterns and expressions. | | I propose a complete rewrite of GHC's parser to use recursive descent | parsing with monadic parser combinators. | | 1. 
We could significantly simplify parsing logic by doing things in a | more direct manner. For instance, instead of parsing patterns as | expressions and then post-processing them, we could have separate | parsing logic for patterns and expressions. | | 2. We could fix long-standing parsing bugs like Trac #1087 because | recursive descent offers more expressive power than LALR (at the cost | of support for left recursion, which is not much of a loss in | practice). | | 3. New extensions to the grammar would require less engineering effort. | | Of course, this rewrite is a huge chunk of work, so before I start, I | would like to know that this work would be accepted if done well. | Here's what I want to achieve: | | * Comparable performance. The new parser could turn out to be faster | because it would do less post-processing, but it could be slower | because 'happy' does all the sorts of low-level optimisations. I will | consider this project a success only if comparable performance is | achieved. | | * Correctness. The new parser should handle 100% of the syntactic | constructs that the current parser can handle. | | * Error messages. The new error messages should be of equal or better | quality than existing ones. | | * Elegance. The new parser should bring simplification to other parts | of the compiler (e.g. removal of pattern constructors from HsExpr). | And one of the design principles is to represent things by dedicated | data structures, in contrast to the current state of affairs where we | represent patterns as expressions, data constructor declarations as | types (before D5180), etc. | | Let me know if this is a good/acceptable direction of travel. That's | definitely something that I personally would like to see happen. 
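To make the combinator proposal concrete, here is a minimal self-contained recursive-descent sketch. It is a toy, not GHC's grammar or parser: it shows only the shape of the idea, namely that expressions and patterns can get separate parsers and separate ASTs, so the expression type needs no EWildPat/EAsPat-style constructors.

```haskell
-- Toy recursive-descent parser combinators: separate entry points and
-- separate ASTs for expressions and patterns, instead of parsing
-- patterns as expressions and post-processing them.
import Control.Applicative (Alternative (..))
import Data.Char (isAlpha, isSpace)

newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f p = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (runParser p s)

instance Applicative Parser where
  pure a    = Parser $ \s -> Just (a, s)
  pf <*> pa = Parser $ \s -> do
    (f, s1) <- runParser pf s
    (a, s2) <- runParser pa s1
    Just (f a, s2)

instance Alternative Parser where
  empty   = Parser (const Nothing)
  p <|> q = Parser $ \s -> runParser p s <|> runParser q s  -- backtracking choice

ident :: Parser String
ident = Parser $ \s -> case span isAlpha (dropWhile isSpace s) of
  ("", _)     -> Nothing
  (w, rest)   -> Just (w, rest)

sym :: Char -> Parser ()
sym c = Parser $ \s -> case dropWhile isSpace s of
  (c' : rest) | c' == c -> Just ((), rest)
  _                     -> Nothing

-- Separate ASTs: Expr carries no pattern-only constructors.
data Expr = Var String | App Expr Expr           deriving Show
data Pat  = PVar String | PWild | PAs String Pat deriving Show

pExpr :: Parser Expr
pExpr = foldl1 App <$> some (Var <$> ident)

pPat :: Parser Pat
pPat =  (PWild <$ sym '_')
    <|> (PAs <$> ident <* sym '@' <*> pPat)  -- as-pattern: pattern-only syntax
    <|> (PVar <$> ident)

main :: IO ()
main = do
  print (runParser pExpr "f x y")  -- Just (App (App (Var "f") (Var "x")) (Var "y"),"")
  print (runParser pPat  "xs@_")   -- Just (PAs "xs" PWild,"")
```

Note the trade-off this sketch makes visible: the as-pattern production lives only in `pPat`, so no conflict with expressions ever arises, but the backtracking `<|>` is exactly where a real implementation would have to work to keep performance and error messages comparable to happy's.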
| | All the best, | - Vladislav | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From iavor.diatchki at gmail.com Mon Oct 8 22:00:14 2018 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 8 Oct 2018 15:00:14 -0700 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: Hello, my experience with complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else. While reduce-reduce errors are indeed annoying, you at least know that there is some sort of issue you need to address. With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. I think the current approach is actually quite reasonable: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. While this has the draw-back of some constructors being in the "wrong place", there are also benefits---namely we can report better parse errors. Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. -Iavor On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I'm no parser expert, but a parser that was easier to understand and > modify, and was as fast as the current one, sounds good to me.
> > It's a tricky area though; e.g. the layout rule. > > Worth talking to Simon Marlow. > > Simon > > > > | -----Original Message----- > | From: ghc-devs On Behalf Of Vladislav > | Zavialov > | Sent: 08 October 2018 21:44 > | To: ghc-devs > | Subject: Parser.y rewrite with parser combinators > | > | Hello devs, > | > | Recently I've been working on a couple of parsing-related issues in > | GHC. I implemented support for the -XStarIsType extension, fixed > | parsing of the (!) type operator (Trac #15457), allowed using type > | operators in existential contexts (Trac #15675). > | > | Doing these tasks required way more engineering effort than I expected > | from my prior experience working with parsers due to complexities of > | GHC's grammar. > | > | In the last couple of days, I've been working on Trac #1087 - a > | 12-year old parsing bug. After trying out a couple of approaches, to > | my dismay I realised that fixing it properly (including support for > | bang patterns inside infix constructors, etc) would require a complete > | rewrite of expression and pattern parsing logic. > | > | Worse yet, most of the work would be done outside Parser.y in Haskell > | code instead, in RdrHsSyn helpers. When I try to keep the logic inside > | Parser.y, in every design direction I face reduce/reduce conflicts. > | > | The reduce/reduce conflicts are the worst. > | > | Perhaps it is finally time to admit that Haskell syntax with all of > | the GHC cannot fit into a LALR grammar? > | > | The extent of hacks that we have right now just to make parsing > | possible is astonishing. For instance, we have dedicated constructors > | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, > | EViewPat, ELazyPat). That is, one of the fundamental types (that the > | type checker operates on) has four additional constructors that exist > | due to a reduce/reduce conflict between patterns and expressions. 
> | > | I propose a complete rewrite of GHC's parser to use recursive descent > | parsing with monadic parser combinators. > | > | 1. We could significantly simplify parsing logic by doing things in a > | more direct manner. For instance, instead of parsing patterns as > | expressions and then post-processing them, we could have separate > | parsing logic for patterns and expressions. > | > | 2. We could fix long-standing parsing bugs like Trac #1087 because > | recursive descent offers more expressive power than LALR (at the cost > | of support for left recursion, which is not much of a loss in > | practice). > | > | 3. New extensions to the grammar would require less engineering effort. > | > | Of course, this rewrite is a huge chunk of work, so before I start, I > | would like to know that this work would be accepted if done well. > | Here's what I want to achieve: > | > | * Comparable performance. The new parser could turn out to be faster > | because it would do less post-processing, but it could be slower > | because 'happy' does all the sorts of low-level optimisations. I will > | consider this project a success only if comparable performance is > | achieved. > | > | * Correctness. The new parser should handle 100% of the syntactic > | constructs that the current parser can handle. > | > | * Error messages. The new error messages should be of equal or better > | quality than existing ones. > | > | * Elegance. The new parser should bring simplification to other parts > | of the compiler (e.g. removal of pattern constructors from HsExpr). > | And one of the design principles is to represent things by dedicated > | data structures, in contrast to the current state of affairs where we > | represent patterns as expressions, data constructor declarations as > | types (before D5180), etc. > | > | Let me know if this is a good/acceptable direction of travel. That's > | definitely something that I personally would like to see happen. 
> | > | All the best, > | - Vladislav > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com > %7C19181de5c6bd493ab07a08d > | 62d5edbe0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636746282778542095 > | &sdata=lFRt1t4k3BuuRdyOqwOYTZcLPRB%2BtFJwfFtgMpNLxW0%3D&reserved= > | 0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 8 22:04:31 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Oct 2018 22:04:31 +0000 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. Incidentally, we use this for operator fixity and precedence, where the fixup is done in the renamer, and for that purpose it works really well. From: Iavor Diatchki Sent: 08 October 2018 23:00 To: Simon Peyton Jones Cc: vlad.z.4096 at gmail.com; ghc-devs Subject: Re: Parser.y rewrite with parser combinators Hello, my experience with complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else. While reduce-reduce errors are indeed annoying, you at least know that there is some sort of issue you need to address. With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. 
I think the current approach is actually quite reasonable: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. While this has the draw-back of some constructors being in the "wrong place", there are also benefits---namely we can report better parse errors. Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. -Iavor On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs > wrote: I'm no parser expert, but a parser that was easier to understand and modify, and was as fast as the current one, sounds good to me. It's a tricky area though; e.g. the layout rule. Worth talking to Simon Marlow. Simon | -----Original Message----- | From: ghc-devs > On Behalf Of Vladislav | Zavialov | Sent: 08 October 2018 21:44 | To: ghc-devs > | Subject: Parser.y rewrite with parser combinators | | Hello devs, | | Recently I've been working on a couple of parsing-related issues in | GHC. I implemented support for the -XStarIsType extension, fixed | parsing of the (!) type operator (Trac #15457), allowed using type | operators in existential contexts (Trac #15675). | | Doing these tasks required way more engineering effort than I expected | from my prior experience working with parsers due to complexities of | GHC's grammar. | | In the last couple of days, I've been working on Trac #1087 - a | 12-year old parsing bug. After trying out a couple of approaches, to | my dismay I realised that fixing it properly (including support for | bang patterns inside infix constructors, etc) would require a complete | rewrite of expression and pattern parsing logic. | | Worse yet, most of the work would be done outside Parser.y in Haskell | code instead, in RdrHsSyn helpers. 
When I try to keep the logic inside | Parser.y, in every design direction I face reduce/reduce conflicts. | | The reduce/reduce conflicts are the worst. | | Perhaps it is finally time to admit that Haskell syntax with all of | the GHC cannot fit into a LALR grammar? | | The extent of hacks that we have right now just to make parsing | possible is astonishing. For instance, we have dedicated constructors | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, | EViewPat, ELazyPat). That is, one of the fundamental types (that the | type checker operates on) has four additional constructors that exist | due to a reduce/reduce conflict between patterns and expressions. | | I propose a complete rewrite of GHC's parser to use recursive descent | parsing with monadic parser combinators. | | 1. We could significantly simplify parsing logic by doing things in a | more direct manner. For instance, instead of parsing patterns as | expressions and then post-processing them, we could have separate | parsing logic for patterns and expressions. | | 2. We could fix long-standing parsing bugs like Trac #1087 because | recursive descent offers more expressive power than LALR (at the cost | of support for left recursion, which is not much of a loss in | practice). | | 3. New extensions to the grammar would require less engineering effort. | | Of course, this rewrite is a huge chunk of work, so before I start, I | would like to know that this work would be accepted if done well. | Here's what I want to achieve: | | * Comparable performance. The new parser could turn out to be faster | because it would do less post-processing, but it could be slower | because 'happy' does all the sorts of low-level optimisations. I will | consider this project a success only if comparable performance is | achieved. | | * Correctness. The new parser should handle 100% of the syntactic | constructs that the current parser can handle. | | * Error messages. 
The new error messages should be of equal or better | quality than existing ones. | | * Elegance. The new parser should bring simplification to other parts | of the compiler (e.g. removal of pattern constructors from HsExpr). | And one of the design principles is to represent things by dedicated | data structures, in contrast to the current state of affairs where we | represent patterns as expressions, data constructor declarations as | types (before D5180), etc. | | Let me know if this is a good/acceptable direction of travel. That's | definitely something that I personally would like to see happen. | | All the best, | - Vladislav | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C19181de5c6bd493ab07a08d | 62d5edbe0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636746282778542095 | &sdata=lFRt1t4k3BuuRdyOqwOYTZcLPRB%2BtFJwfFtgMpNLxW0%3D&reserved= | 0 _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Mon Oct 8 22:06:36 2018 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 9 Oct 2018 00:06:36 +0200 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: I am not against this proposal, but want to raise a possible future concern. As part of improving the haskell tooling environment I am keen on making GHC incremental, and have started a proof of concept based in the same techniques as used in the tree-sitter library. This is achieved by modifying happy, and requires minimal changes to the existing Parser.y. It would be unfortunate if this possibility was prevented by this rewrite. 
Alan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vlad.z.4096 at gmail.com  Mon Oct  8 22:24:45 2018
From: vlad.z.4096 at gmail.com (Vladislav Zavialov)
Date: Tue, 9 Oct 2018 01:24:45 +0300
Subject: Parser.y rewrite with parser combinators
In-Reply-To: 
References: 
Message-ID: 

> complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else

That's true regardless of implementation technique, parsers are rather delicate. A LALR-based parser generator does provide more information when it detects shift/reduce and reduce/reduce conflicts, but I never found this information useful. It was always quite the opposite of being helpful - an indication that a LALR parser could not handle my change and I had to look for workarounds.

> With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything.

Even when doing modifications to Parser.y, I relied mainly on the test suite to determine whether my change was right (and the test suite always caught many issues). A large test suite is the best approach both for 'happy'-based parsers and for combinator-based parsers.

> and then have a separate pass that validates and fixes up the results

That's where my concern lies. This separate pass is confusing (at least for me - it's not the most straightforward thing to parse something incorrectly and then restructure it), it is hard to modify, it does not handle corner cases (e.g. #1087).

Since we have all this Haskell code that does a significant portion of processing, why even bother with having a LALR pass before it?

> namely we can report better parse errors

I don't think that's true, we can achieve better error messages with recursive descent.
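For what it's worth, the error-message point can be shown with a tiny sketch. This is a hypothetical toy (the names Item and parseItem are invented, and the grammar is nothing like GHC's): once a hand-written recursive-descent parser has committed to a construct, its failure messages can name that construct instead of emitting a generic "parse error".

```haskell
-- Toy illustration (hypothetical, not GHC's real parser) of targeted
-- errors in recursive descent: after seeing '(' the parser has committed
-- to a group, so both failure cases below can mention the group.
import Data.Char (isDigit)

data Item = Num Int | Group Item deriving (Eq, Show)

-- Either an error message or (parsed value, remaining input).
parseItem :: String -> Either String (Item, String)
parseItem ('(' : s) =
  case parseItem s of
    Right (x, ')' : r) -> Right (Group x, r)
    Right (_, r)       -> Left ("expected ')' to close group, got " ++ show (take 5 r))
    Left e             -> Left ("inside group: " ++ e)
parseItem s =
  case span isDigit s of
    ("", r) -> Left ("expected a number or '(', got " ++ show (take 5 r))
    (ds, r) -> Right (Num (read ds), r)

main :: IO ()
main = do
  print (parseItem "(42)")
  print (parseItem "(42")  -- the error names the unclosed group
```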
> Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. Not completely true, GhcPs-parametrized structures are the final output of parsing, so at least the renamer will face these constructors. On Tue, Oct 9, 2018 at 1:00 AM Iavor Diatchki wrote: > > Hello, > > my experience with complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else. While reduce-reduce errors are indeed annoying, you at least know that there is some sort of issue you need to address. With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. > > I think the current approach is actually quite reasonable: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. While this has the draw-back of some constructors being in the "wrong place", there are also benefits---namely we can report better parse errors. Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. > > -Iavor > > > > > > > > > > > > On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs wrote: >> >> I'm no parser expert, but a parser that was easier to understand and modify, and was as fast as the current one, sounds good to me. >> >> It's a tricky area though; e.g. the layout rule. >> >> Worth talking to Simon Marlow. 
>> >> Simon >> >> >> >> | -----Original Message----- >> | From: ghc-devs On Behalf Of Vladislav >> | Zavialov >> | Sent: 08 October 2018 21:44 >> | To: ghc-devs >> | Subject: Parser.y rewrite with parser combinators >> | >> | Hello devs, >> | >> | Recently I've been working on a couple of parsing-related issues in >> | GHC. I implemented support for the -XStarIsType extension, fixed >> | parsing of the (!) type operator (Trac #15457), allowed using type >> | operators in existential contexts (Trac #15675). >> | >> | Doing these tasks required way more engineering effort than I expected >> | from my prior experience working with parsers due to complexities of >> | GHC's grammar. >> | >> | In the last couple of days, I've been working on Trac #1087 - a >> | 12-year old parsing bug. After trying out a couple of approaches, to >> | my dismay I realised that fixing it properly (including support for >> | bang patterns inside infix constructors, etc) would require a complete >> | rewrite of expression and pattern parsing logic. >> | >> | Worse yet, most of the work would be done outside Parser.y in Haskell >> | code instead, in RdrHsSyn helpers. When I try to keep the logic inside >> | Parser.y, in every design direction I face reduce/reduce conflicts. >> | >> | The reduce/reduce conflicts are the worst. >> | >> | Perhaps it is finally time to admit that Haskell syntax with all of >> | the GHC cannot fit into a LALR grammar? >> | >> | The extent of hacks that we have right now just to make parsing >> | possible is astonishing. For instance, we have dedicated constructors >> | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, >> | EViewPat, ELazyPat). That is, one of the fundamental types (that the >> | type checker operates on) has four additional constructors that exist >> | due to a reduce/reduce conflict between patterns and expressions. 
>> | >> | I propose a complete rewrite of GHC's parser to use recursive descent >> | parsing with monadic parser combinators. >> | >> | 1. We could significantly simplify parsing logic by doing things in a >> | more direct manner. For instance, instead of parsing patterns as >> | expressions and then post-processing them, we could have separate >> | parsing logic for patterns and expressions. >> | >> | 2. We could fix long-standing parsing bugs like Trac #1087 because >> | recursive descent offers more expressive power than LALR (at the cost >> | of support for left recursion, which is not much of a loss in >> | practice). >> | >> | 3. New extensions to the grammar would require less engineering effort. >> | >> | Of course, this rewrite is a huge chunk of work, so before I start, I >> | would like to know that this work would be accepted if done well. >> | Here's what I want to achieve: >> | >> | * Comparable performance. The new parser could turn out to be faster >> | because it would do less post-processing, but it could be slower >> | because 'happy' does all the sorts of low-level optimisations. I will >> | consider this project a success only if comparable performance is >> | achieved. >> | >> | * Correctness. The new parser should handle 100% of the syntactic >> | constructs that the current parser can handle. >> | >> | * Error messages. The new error messages should be of equal or better >> | quality than existing ones. >> | >> | * Elegance. The new parser should bring simplification to other parts >> | of the compiler (e.g. removal of pattern constructors from HsExpr). >> | And one of the design principles is to represent things by dedicated >> | data structures, in contrast to the current state of affairs where we >> | represent patterns as expressions, data constructor declarations as >> | types (before D5180), etc. >> | >> | Let me know if this is a good/acceptable direction of travel. 
That's >> | definitely something that I personally would like to see happen. >> | >> | All the best, >> | - Vladislav >> | _______________________________________________ >> | ghc-devs mailing list >> | ghc-devs at haskell.org >> | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask >> | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- >> | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C19181de5c6bd493ab07a08d >> | 62d5edbe0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636746282778542095 >> | &sdata=lFRt1t4k3BuuRdyOqwOYTZcLPRB%2BtFJwfFtgMpNLxW0%3D&reserved= >> | 0 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From vlad.z.4096 at gmail.com Mon Oct 8 22:38:15 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 01:38:15 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: That is a very good point, thank you! I have not thought about incremental parsing. That's something I need to research before I start the rewrite. On Tue, Oct 9, 2018 at 1:06 AM Alan & Kim Zimmerman wrote: > > I am not against this proposal, but want to raise a possible future concern. > > As part of improving the haskell tooling environment I am keen on making GHC incremental, and have started a proof of concept based in the same techniques as used in the tree-sitter library. > > This is achieved by modifying happy, and requires minimal changes to the existing Parser.y. > > It would be unfortunate if this possibility was prevented by this rewrite. > > Alan From rae at cs.brynmawr.edu Tue Oct 9 00:31:02 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 8 Oct 2018 20:31:02 -0400 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: <4A757517-AC32-45B5-9BBE-810B0498E50A@cs.brynmawr.edu> I, too, have wondered about this. 
A pair of students this summer were working on merging the type-level and term-level parsers, in preparation for, e.g., visible dependent quantification in terms (not to mention dependent types). If successful, this would have been an entirely internal refactor. In any case, it seemed impossible to do in an LALR parser, so the students instead parsed into a new datatype Term, which then got converted either to an HsExpr, an HsPat, or an HsType. The students never finished. But the experience suggests that moving away from LALR might be a good move.

All that said, I'm not sure how going to parser combinators stops us from needing an intermediate datatype to parse expressions/patterns into before we can tell whether they are expressions or patterns. For example, if we see `do K x y z ...`, we don't know whether we're parsing an expression or a pattern before we can see what's in the ..., which is arbitrarily later than the ambiguity starts. Of course, while we can write a backtracking parser with combinators, doing so doesn't seem like a particularly swell idea. This isn't an argument against using parser combinators, but fixing the pattern/expression ambiguity was a "pro" listed for them -- except I don't think this is correct.

Come to think of it, the problem with parsing expressions vs. types would persist just as much in the combinator style as it does in the LALR style, so perhaps I've talked myself into a corner. Nevertheless, it seems awkward to do half the parsing in one language (happy) and half in another.

Richard

> On Oct 8, 2018, at 6:38 PM, Vladislav Zavialov wrote:
> 
> That is a very good point, thank you! I have not thought about
> incremental parsing. That's something I need to research before I
> start the rewrite.
> On Tue, Oct 9, 2018 at 1:06 AM Alan & Kim Zimmerman wrote:
>> 
>> I am not against this proposal, but want to raise a possible future concern.
>> 
>> As part of improving the haskell tooling environment I am keen on making GHC incremental, and have started a proof of concept based in the same techniques as used in the tree-sitter library.
>> 
>> This is achieved by modifying happy, and requires minimal changes to the existing Parser.y.
>> 
>> It would be unfortunate if this possibility was prevented by this rewrite.
>> 
>> Alan
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From mail at nh2.me  Tue Oct  9 00:53:49 2018
From: mail at nh2.me (Niklas Hambüchen)
Date: Tue, 9 Oct 2018 02:53:49 +0200
Subject: Parser.y rewrite with parser combinators
In-Reply-To: 
References: 
Message-ID: 

Another thing that may be of interest is that parser generators can guarantee you complexity bounds of parsing time (as usual, the goal is linear). Some of the conflicts that annoy us about parser generators are often hints on this topic; if the parser generator succeeds, you are guaranteed to have a linear parser. If backtracking is allowed in parser combinators, it is comparatively easy to get that wrong.

Niklas

From vanessa.mchale at iohk.io  Tue Oct  9 03:46:54 2018
From: vanessa.mchale at iohk.io (Vanessa McHale)
Date: Mon, 8 Oct 2018 22:46:54 -0500
Subject: Parser.y rewrite with parser combinators
In-Reply-To: 
References: 
Message-ID: <03c9f989-ddfd-56a7-f2d8-29c3d3c58903@iohk.io>

I actually have some experience in this department, having authored both madlang and language-ats. Parsers using combinators alone are more brittle than parsers using Happy, at least for human-facing languages.

I'm also not sure what exactly parser combinators provide over Happy. It has macros that can emulate e.g. between, many. Drawing up a minimal example might be a good idea.
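A minimal example along those lines might look as follows. It is a deliberately toy sketch: P, runP, toy, Bind and Var are all invented names, the grammar is nothing like GHC's, and real code would likely build on an existing combinator library. The point of interest is (<|>): it backtracks, so two forms that share a prefix can get separate parsers instead of a shared parse plus a fixup pass.

```haskell
-- Minimal hand-rolled parser combinators (hypothetical sketch).
import Control.Applicative (Alternative (..))
import Data.Char (isAlpha)

newtype P a = P { runP :: String -> Maybe (a, String) }

instance Functor P where
  fmap f (P p) = P $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative P where
  pure a = P $ \s -> Just (a, s)
  P pf <*> P pa = P $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

instance Alternative P where
  empty = P (const Nothing)
  -- Backtracking: if p fails, q restarts from the ORIGINAL input.
  -- This is the extra expressive power over LALR, and also the easy
  -- way to lose the linear-time guarantee if used carelessly.
  P p <|> P q = P $ \s -> maybe (q s) Just (p s)

ident :: P String
ident = P $ \s -> case span isAlpha s of
  ("", _) -> Nothing
  (w, r)  -> Just (w, r)

char :: Char -> P Char
char c = P $ \s -> case s of
  (x : xs) | x == c -> Just (c, xs)
  _                 -> Nothing

-- Toy ambiguity: "x=y" is a binding, a bare identifier is a variable.
-- Both begin with an identifier; we simply try the binding first.
data Toy = Bind String String | Var String deriving (Eq, Show)

toy :: P Toy
toy = (Bind <$> ident <* char '=' <*> ident) <|> (Var <$> ident)

main :: IO ()
main = do
  print (runP toy "x=y")  -- Just (Bind "x" "y","")
  print (runP toy "x")    -- Just (Var "x","")
```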
On 10/08/2018 05:24 PM, Vladislav Zavialov wrote: >> complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else > That's true regardless of implementation technique, parsers are rather > delicate. A LALR-based parser generator does provide more information > when it detects shift/reduce and reduce/reduce conflicts, but I never > found this information useful. It was always quite the opposite of > being helpful - an indication that a LALR parser could not handle my > change and I had to look for workarounds. > >> With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. > Even when doing modifications to Parser.y, I relied mainly on the test > suite to determine whether my change was right (and the test suite > always caught many issues). A large test suite is the best approach > both for 'happy'-based parsers and for combinator-based parsers. > >> and then have a separate pass that validates and fixes up the results > That's where my concern lies. This separate pass is confusing (at > least for me - it's not the most straightforward thing to parse > something incorrectly and then restructure it), it is hard to modify, > it does not handle corner cases (e.g. #1087). > > Since we have all this Haskell code that does a significant portion of > processing, why even bother with having a LALR pass before it? > >> namely we can report better parse errors > I don't think that's true, we can achieve better error messages with > recursive descent. > >> Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. > Not completely true, GhcPs-parametrized structures are the final > output of parsing, so at least the renamer will face these > constructors. 
> > On Tue, Oct 9, 2018 at 1:00 AM Iavor Diatchki wrote: >> Hello, >> >> my experience with complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else. While reduce-reduce errors are indeed annoying, you at least know that there is some sort of issue you need to address. With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. >> >> I think the current approach is actually quite reasonable: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. While this has the draw-back of some constructors being in the "wrong place", there are also benefits---namely we can report better parse errors. Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. >> >> -Iavor >> >> >> >> >> >> >> >> >> >> >> >> On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs wrote: >>> I'm no parser expert, but a parser that was easier to understand and modify, and was as fast as the current one, sounds good to me. >>> >>> It's a tricky area though; e.g. the layout rule. >>> >>> Worth talking to Simon Marlow. >>> >>> Simon >>> >>> >>> >>> | -----Original Message----- >>> | From: ghc-devs On Behalf Of Vladislav >>> | Zavialov >>> | Sent: 08 October 2018 21:44 >>> | To: ghc-devs >>> | Subject: Parser.y rewrite with parser combinators >>> | >>> | Hello devs, >>> | >>> | Recently I've been working on a couple of parsing-related issues in >>> | GHC. I implemented support for the -XStarIsType extension, fixed >>> | parsing of the (!) type operator (Trac #15457), allowed using type >>> | operators in existential contexts (Trac #15675). 
>>> | >>> | Doing these tasks required way more engineering effort than I expected >>> | from my prior experience working with parsers due to complexities of >>> | GHC's grammar. >>> | >>> | In the last couple of days, I've been working on Trac #1087 - a >>> | 12-year old parsing bug. After trying out a couple of approaches, to >>> | my dismay I realised that fixing it properly (including support for >>> | bang patterns inside infix constructors, etc) would require a complete >>> | rewrite of expression and pattern parsing logic. >>> | >>> | Worse yet, most of the work would be done outside Parser.y in Haskell >>> | code instead, in RdrHsSyn helpers. When I try to keep the logic inside >>> | Parser.y, in every design direction I face reduce/reduce conflicts. >>> | >>> | The reduce/reduce conflicts are the worst. >>> | >>> | Perhaps it is finally time to admit that Haskell syntax with all of >>> | the GHC cannot fit into a LALR grammar? >>> | >>> | The extent of hacks that we have right now just to make parsing >>> | possible is astonishing. For instance, we have dedicated constructors >>> | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, >>> | EViewPat, ELazyPat). That is, one of the fundamental types (that the >>> | type checker operates on) has four additional constructors that exist >>> | due to a reduce/reduce conflict between patterns and expressions. >>> | >>> | I propose a complete rewrite of GHC's parser to use recursive descent >>> | parsing with monadic parser combinators. >>> | >>> | 1. We could significantly simplify parsing logic by doing things in a >>> | more direct manner. For instance, instead of parsing patterns as >>> | expressions and then post-processing them, we could have separate >>> | parsing logic for patterns and expressions. >>> | >>> | 2. 
We could fix long-standing parsing bugs like Trac #1087 because >>> | recursive descent offers more expressive power than LALR (at the cost >>> | of support for left recursion, which is not much of a loss in >>> | practice). >>> | >>> | 3. New extensions to the grammar would require less engineering effort. >>> | >>> | Of course, this rewrite is a huge chunk of work, so before I start, I >>> | would like to know that this work would be accepted if done well. >>> | Here's what I want to achieve: >>> | >>> | * Comparable performance. The new parser could turn out to be faster >>> | because it would do less post-processing, but it could be slower >>> | because 'happy' does all the sorts of low-level optimisations. I will >>> | consider this project a success only if comparable performance is >>> | achieved. >>> | >>> | * Correctness. The new parser should handle 100% of the syntactic >>> | constructs that the current parser can handle. >>> | >>> | * Error messages. The new error messages should be of equal or better >>> | quality than existing ones. >>> | >>> | * Elegance. The new parser should bring simplification to other parts >>> | of the compiler (e.g. removal of pattern constructors from HsExpr). >>> | And one of the design principles is to represent things by dedicated >>> | data structures, in contrast to the current state of affairs where we >>> | represent patterns as expressions, data constructor declarations as >>> | types (before D5180), etc. >>> | >>> | Let me know if this is a good/acceptable direction of travel. That's >>> | definitely something that I personally would like to see happen. 
>>> | >>> | All the best, >>> | - Vladislav >>> | _______________________________________________ >>> | ghc-devs mailing list >>> | ghc-devs at haskell.org >>> | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask >>> | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- >>> | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C19181de5c6bd493ab07a08d >>> | 62d5edbe0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636746282778542095 >>> | &sdata=lFRt1t4k3BuuRdyOqwOYTZcLPRB%2BtFJwfFtgMpNLxW0%3D&reserved= >>> | 0 >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From vlad.z.4096 at gmail.com Tue Oct 9 06:51:16 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 09:51:16 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: <03c9f989-ddfd-56a7-f2d8-29c3d3c58903@iohk.io> References: <03c9f989-ddfd-56a7-f2d8-29c3d3c58903@iohk.io> Message-ID: > I'm also not sure what exactly parser combinators provide over Happy. Parser combinators offer backtracking. With 'happy' we get the guarantee that we parse in linear time, but we lose it because of post-processing that is not guaranteed to be linear. I think it'd be easier to backtrack in the parser itself rather than in a later pass. On Tue, Oct 9, 2018 at 6:47 AM Vanessa McHale wrote: > > I actually have some experience in this department, having authored both madlang and language-ats. 
Parsers using combinators alone are more brittle than parsers using Happy, at least for human-facing languages. > > I'm also not sure what exactly parser combinators provide over Happy. It has macros that can emulate e.g. between, many. Drawing up a minimal example might be a good idea. > > > On 10/08/2018 05:24 PM, Vladislav Zavialov wrote: > > complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else > > That's true regardless of implementation technique, parsers are rather > delicate. A LALR-based parser generator does provide more information > when it detects shift/reduce and reduce/reduce conflicts, but I never > found this information useful. It was always quite the opposite of > being helpful - an indication that a LALR parser could not handle my > change and I had to look for workarounds. > > With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. > > Even when doing modifications to Parser.y, I relied mainly on the test > suite to determine whether my change was right (and the test suite > always caught many issues). A large test suite is the best approach > both for 'happy'-based parsers and for combinator-based parsers. > > and then have a separate pass that validates and fixes up the results > > That's where my concern lies. This separate pass is confusing (at > least for me - it's not the most straightforward thing to parse > something incorrectly and then restructure it), it is hard to modify, > it does not handle corner cases (e.g. #1087). > > Since we have all this Haskell code that does a significant portion of > processing, why even bother with having a LALR pass before it? > > namely we can report better parse errors > > I don't think that's true, we can achieve better error messages with > recursive descent. 
> > Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. > > Not completely true, GhcPs-parametrized structures are the final > output of parsing, so at least the renamer will face these > constructors. > > On Tue, Oct 9, 2018 at 1:00 AM Iavor Diatchki wrote: > > Hello, > > my experience with complex parsers written using parsing combinators is that they tend to be quite difficult to modify and have any kind of assurance that now you haven't broken something else. While reduce-reduce errors are indeed annoying, you at least know that there is some sort of issue you need to address. With a combinator based parser, you basically have to do program verification, or more pragmatically, have a large test suite and hope that you tested everything. > > I think the current approach is actually quite reasonable: use the Happy grammar to parse out the basic structure of the program, without trying to be completely precise, and then have a separate pass that validates and fixes up the results. While this has the draw-back of some constructors being in the "wrong place", there are also benefits---namely we can report better parse errors. Also, with the new rewrite of HsSyn, we should be able to mark such constructors as only usable in the parsing pass, so later passes wouldn't need to worry about them. > > -Iavor > > > > > > > > > > > > On Mon, Oct 8, 2018 at 2:26 PM Simon Peyton Jones via ghc-devs wrote: > > I'm no parser expert, but a parser that was easier to understand and modify, and was as fast as the current one, sounds good to me. > > It's a tricky area though; e.g. the layout rule. > > Worth talking to Simon Marlow. 
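[Iavor's "parse loosely, then validate and fix up" scheme can be sketched with toy types. All types and names below are invented for illustration; GHC's real fix-up code lives in RdrHsSyn and is far more involved. The parser produces only expressions, and a separate pass either rebuilds an expression as a pattern or rejects it.]

```haskell
data Expr = Var String | Lit Int | App Expr Expr | Lam String Expr
  deriving Show

data Pat = PVar String | PLit Int | PCon String [Pat]
  deriving (Eq, Show)

-- The fix-up pass: accept exactly the expression shapes that are also
-- legal patterns (variables, literals, constructor applications), and
-- report an error for everything else.
checkPattern :: Expr -> Either String Pat
checkPattern e0 = go e0 []
  where
    go (Var n@(c:_)) args
      | c `elem` ['A' .. 'Z'] = Right (PCon n args)  -- constructor pattern
      | null args             = Right (PVar n)       -- variable pattern
    go (Lit i) []             = Right (PLit i)
    go (App f x) args         = do p <- checkPattern x
                                   go f (p : args)
    go e _ = Left ("not a valid pattern: " ++ show e)

main :: IO ()
main = do
  print (checkPattern (App (App (Var "K") (Var "x")) (Lit 3)))  -- accepted
  print (checkPattern (Lam "s" (Var "s")))                      -- rejected
```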
> > Simon > > > > | -----Original Message----- > | From: ghc-devs On Behalf Of Vladislav > | Zavialov > | Sent: 08 October 2018 21:44 > | To: ghc-devs > | Subject: Parser.y rewrite with parser combinators > | > | Hello devs, > | > | Recently I've been working on a couple of parsing-related issues in > | GHC. I implemented support for the -XStarIsType extension, fixed > | parsing of the (!) type operator (Trac #15457), allowed using type > | operators in existential contexts (Trac #15675). > | > | Doing these tasks required way more engineering effort than I expected > | from my prior experience working with parsers due to complexities of > | GHC's grammar. > | > | In the last couple of days, I've been working on Trac #1087 - a > | 12-year-old parsing bug. After trying out a couple of approaches, to > | my dismay I realised that fixing it properly (including support for > | bang patterns inside infix constructors, etc) would require a complete > | rewrite of expression and pattern parsing logic. > | > | Worse yet, most of the work would be done outside Parser.y in Haskell > | code instead, in RdrHsSyn helpers. When I try to keep the logic inside > | Parser.y, in every design direction I face reduce/reduce conflicts. > | > | The reduce/reduce conflicts are the worst. > | > | Perhaps it is finally time to admit that Haskell syntax with all of > | the GHC extensions cannot fit into a LALR grammar? > | > | The extent of hacks that we have right now just to make parsing > | possible is astonishing. For instance, we have dedicated constructors > | in HsExpr to make parsing patterns possible (EWildPat, EAsPat, > | EViewPat, ELazyPat). That is, one of the fundamental types (that the > | type checker operates on) has four additional constructors that exist > | due to a reduce/reduce conflict between patterns and expressions. > | > | I propose a complete rewrite of GHC's parser to use recursive descent > | parsing with monadic parser combinators. > | > | 1.
We could significantly simplify parsing logic by doing things in a > | more direct manner. For instance, instead of parsing patterns as > | expressions and then post-processing them, we could have separate > | parsing logic for patterns and expressions. > | > | 2. We could fix long-standing parsing bugs like Trac #1087 because > | recursive descent offers more expressive power than LALR (at the cost > | of support for left recursion, which is not much of a loss in > | practice). > | > | 3. New extensions to the grammar would require less engineering effort. > | > | Of course, this rewrite is a huge chunk of work, so before I start, I > | would like to know that this work would be accepted if done well. > | Here's what I want to achieve: > | > | * Comparable performance. The new parser could turn out to be faster > | because it would do less post-processing, but it could be slower > | because 'happy' does all the sorts of low-level optimisations. I will > | consider this project a success only if comparable performance is > | achieved. > | > | * Correctness. The new parser should handle 100% of the syntactic > | constructs that the current parser can handle. > | > | * Error messages. The new error messages should be of equal or better > | quality than existing ones. > | > | * Elegance. The new parser should bring simplification to other parts > | of the compiler (e.g. removal of pattern constructors from HsExpr). > | And one of the design principles is to represent things by dedicated > | data structures, in contrast to the current state of affairs where we > | represent patterns as expressions, data constructor declarations as > | types (before D5180), etc. > | > | Let me know if this is a good/acceptable direction of travel. That's > | definitely something that I personally would like to see happen. 
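[For concreteness, here is a toy version of point 1 of the proposal. The combinator type and every name below are invented for illustration, not a proposed GHC API: patterns and expressions get separate, direct parsers that share low-level primitives, so no pattern ever needs to be represented as an expression and fixed up later.]

```haskell
import Control.Applicative
import Data.Char (isAlpha, isLower, isSpace, isUpper)

-- A deliberately tiny combinator type; a real rewrite would need a
-- production-grade library with error messages, source locations, etc.
newtype P a = P { runP :: String -> Maybe (a, String) }

instance Functor P where
  fmap f (P p) = P $ \s -> do (a, r) <- p s; Just (f a, r)

instance Applicative P where
  pure a = P $ \s -> Just (a, s)
  P pf <*> P pa = P $ \s -> do (f, r) <- pf s; (a, r') <- pa r; Just (f a, r')

instance Alternative P where
  empty = P (const Nothing)
  P p <|> P q = P $ \s -> maybe (q s) Just (p s)

-- Shared low-level primitive: an alphabetic word whose first character
-- satisfies the given predicate, skipping leading whitespace.
word :: (Char -> Bool) -> P String
word keep = P $ \s -> case span isAlpha (dropWhile isSpace s) of
  (w@(c:_), r) | keep c -> Just (w, r)
  _                     -> Nothing

varid, conid :: P String
varid = word isLower
conid = word isUpper

data Expr = EVar String | ECon String | EApp Expr Expr deriving Show
data Pat  = PVar String | PCon String [Pat]            deriving Show

-- Separate, direct parsers: no parse-as-expression-then-rejig step.
pExpr :: P Expr
pExpr = foldl1 EApp <$> some ((EVar <$> varid) <|> (ECon <$> conid))

pPat :: P Pat
pPat = (PCon <$> conid <*> many (PVar <$> varid)) <|> (PVar <$> varid)

main :: IO ()
main = do
  print (runP pExpr "f K x")
  print (runP pPat "K x y")
```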
> | > | All the best, > | - Vladislav > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From vlad.z.4096 at gmail.com Tue Oct 9 07:18:21 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 10:18:21 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: <4A757517-AC32-45B5-9BBE-810B0498E50A@cs.brynmawr.edu> References: <4A757517-AC32-45B5-9BBE-810B0498E50A@cs.brynmawr.edu> Message-ID: > For example, if we see `do K x y z ...`, we don't know whether we're parsing an expression or a pattern before we can see what's in the ..., which is arbitrarily later than the ambiguity starts. Of course, while we can write a backtracking parser with combinators, doing so doesn't seem like a particularly swell idea. Backtracking is exactly what I wanted to do here. Perhaps it is lack of theoretical background on my behalf showing, but I do not see downsides to it. It supposedly robs us of linear time guarantee, but consider this. With 'happy' and post-processing we 1. Parse into an expression (linear in the amount of tokens) 2.
If it turns out we needed a pattern, rejig (linear in the size of expression) With parser combinators 1. Parse into an expression (linear in the amount of tokens) 2. If it turns out we needed a pattern, backtrack and parse into a pattern (linear in the amount of tokens) Doesn't post-processing that we do today mean that we don't actually take advantage of the linearity guarantee? On Tue, Oct 9, 2018 at 3:31 AM Richard Eisenberg wrote: > > I, too, have wondered about this. > > A pair of students this summer were working on merging the type-level and term-level parsers, in preparation for, e.g., visible dependent quantification in terms (not to mention dependent types). If successful, this would have been an entirely internal refactor. In any case, it seemed impossible to do in an LALR parser, so the students instead parsed into a new datatype Term, which then got converted either to an HsExpr, an HsPat, or an HsType. The students never finished. But the experience suggests that moving away from LALR might be a good move. > > All that said, I'm not sure how going to parser combinators stops us from needing an intermediate datatype to parse expressions/patterns into before we can tell whether they are expressions or patterns. For example, if we see `do K x y z ...`, we don't know whether we're parsing an expression or a pattern before we can see what's in the ..., which is arbitrarily later than the ambiguity starts. Of course, while we can write a backtracking parser with combinators, doing so doesn't seem like a particularly swell idea. This isn't an argument against using parser combinators, but fixing the pattern/expression ambiguity was a "pro" listed for them -- except I don't think this is correct. > > Come to think of it, the problem with parsing expressions vs. types would persist just as much in the combinator style as it does in the LALR style, so perhaps I've talked myself into a corner. 
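[The two-pass cost argument above can be made concrete with a toy token-level parser. All names below are invented; a production parser would use something like parsec's `try`. The point is the control flow: when the first alternative fails, the second restarts on the untouched input, so a failed "bind" parse costs at most one extra linear pass over the statement's tokens.]

```haskell
import Control.Applicative

-- A toy parser over a list of string tokens.
newtype P a = P { runP :: [String] -> Maybe (a, [String]) }

instance Functor P where
  fmap f (P p) = P $ \ts -> do (a, r) <- p ts; Just (f a, r)

instance Applicative P where
  pure a = P $ \ts -> Just (a, ts)
  P pf <*> P pa = P $ \ts -> do (f, r) <- pf ts; (a, r') <- pa r; Just (f a, r')

instance Alternative P where
  empty = P (const Nothing)
  P p <|> P q = P $ \ts -> maybe (q ts) Just (p ts)

-- Consume one token satisfying a predicate.
tok :: (String -> Bool) -> P String
tok ok = P go
  where go (t:ts) | ok t = Just (t, ts)
        go _             = Nothing

data Stmt = Bind [String] [String] | ExprStmt [String]
  deriving Show

-- Bind statement: tokens before "<-", then "<-", then the rest.
pBind :: P Stmt
pBind = Bind <$> some (tok (/= "<-")) <* tok (== "<-") <*> some (tok (const True))

pExprStmt :: P Stmt
pExprStmt = ExprStmt <$> some (tok (const True))

-- <|> retries pExprStmt on the *original* input when pBind fails, so
-- the worst case here is two linear passes, not exponential blowup --
-- as long as the alternation does not itself nest (see the discussion).
pStmt :: P Stmt
pStmt = pBind <|> pExprStmt

main :: IO ()
main = do
  print (runP pStmt (words "K x y z <- foo"))  -- parses as a bind
  print (runP pStmt (words "K x y z"))         -- backtracks to an expression
```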
Nevertheless, it seems awkward to do half the parsing in one language (happy) and half in another. > > Richard > > > On Oct 8, 2018, at 6:38 PM, Vladislav Zavialov wrote: > > > > That is a very good point, thank you! I have not thought about > > incremental parsing. That's something I need to research before I > > start the rewrite. > > On Tue, Oct 9, 2018 at 1:06 AM Alan & Kim Zimmerman wrote: > >> > >> I am not against this proposal, but want to raise a possible future concern. > >> > >> As part of improving the haskell tooling environment I am keen on making GHC incremental, and have started a proof of concept based in the same techniques as used in the tree-sitter library. > >> > >> This is achieved by modifying happy, and requires minimal changes to the existing Parser.y. > >> > >> It would be unfortunate if this possibility was prevented by this rewrite. > >> > >> Alan > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From svenpanne at gmail.com Tue Oct 9 07:23:05 2018 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 9 Oct 2018 09:23:05 +0200 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov < vlad.z.4096 at gmail.com>: > [...] That's true regardless of implementation technique, parsers are > rather > delicate. I think it's not the parsers themselves which are delicate, it is the language that they should recognize. > A LALR-based parser generator does provide more information > when it detects shift/reduce and reduce/reduce conflicts, but I never > found this information useful. It was always quite the opposite of > being helpful - an indication that a LALR parser could not handle my > change and I had to look for workarounds. [...] 
> Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usability problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Tue Oct 9 07:27:40 2018 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 9 Oct 2018 09:27:40 +0200 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: <4A757517-AC32-45B5-9BBE-810B0498E50A@cs.brynmawr.edu> Message-ID: Am Di., 9. Okt. 2018 um 09:18 Uhr schrieb Vladislav Zavialov < vlad.z.4096 at gmail.com>: > [...] With parser combinators > > 1. Parse into an expression (linear in the amount of tokens) > 2. If it turns out we needed a pattern, backtrack and parse into a > pattern (linear in the amount of tokens) [...]
> In a larger grammar implemented by parser combinators it is quite hard to guarantee that you don't backtrack while backtracking, which would easily result in exponential runtime. And given the size of the language GHC recognizes, I can almost guarantee that this will happen unless you use formal methods. :-) Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vlad.z.4096 at gmail.com Tue Oct 9 07:28:50 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 10:28:50 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: > which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension I do agree here! Having a language that has a context-free grammar would be superb. The issue is that Haskell with GHC extensions is already far from this point and it isn't helping to first pretend that it is, and then do half of the parsing in post-processing because it has no such constraints. On Tue, Oct 9, 2018 at 10:23 AM Sven Panne wrote: > > Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov : >> >> [...] That's true regardless of implementation technique, parsers are rather >> delicate. > > > I think it's not the parsers themselves which are delicate, it is the language that they should recognize. > >> >> A LALR-based parser generator does provide more information >> when it detects shift/reduce and reduce/reduce conflicts, but I never >> found this information useful. It was always quite the opposite of >> being helpful - an indication that a LALR parser could not handle my >> change and I had to look for workarounds. [...] 
> > > Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usablity problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) > > The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... > > Cheers, > S. From vlad.z.4096 at gmail.com Tue Oct 9 07:32:28 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 10:32:28 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: <4A757517-AC32-45B5-9BBE-810B0498E50A@cs.brynmawr.edu> Message-ID: > backtrack while backtracking <...> I can almost guarantee that this will happen unless you use formal methods That is a great idea, I can track backtracking depth in a type-level natural number and make sure it doesn't go over 1 (or add justification with performance analysis when it does). Formal methods for the win :-) On Tue, Oct 9, 2018 at 10:27 AM Sven Panne wrote: > > Am Di., 9. Okt. 2018 um 09:18 Uhr schrieb Vladislav Zavialov : >> >> [...] With parser combinators >> >> 1. 
Parse into an expression (linear in the amount of tokens) >> 2. If it turns out we needed a pattern, backtrack and parse into a >> pattern (linear in the amount of tokens) [...] > > > In a larger grammar implemented by parser combinators it is quite hard to guarantee that you don't backtrack while backtracking, which would easily result in exponential runtime. And given the size of the language GHC recognizes, I can almost guarantee that this will happen unless you use formal methods. :-) > > Cheers, > S. From simonpj at microsoft.com Tue Oct 9 10:52:49 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 9 Oct 2018 10:52:49 +0000 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... I’d never thought of it that way before – interesting. Simon From: ghc-devs On Behalf Of Sven Panne Sent: 09 October 2018 08:23 To: vlad.z.4096 at gmail.com Cc: GHC developers Subject: Re: Parser.y rewrite with parser combinators Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov >: [...] That's true regardless of implementation technique, parsers are rather delicate. I think it's not the parsers themselves which are delicate, it is the language that they should recognize. A LALR-based parser generator does provide more information when it detects shift/reduce and reduce/reduce conflicts, but I never found this information useful. It was always quite the opposite of being helpful - an indication that a LALR parser could not handle my change and I had to look for workarounds. [...] 
Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usablity problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vlad.z.4096 at gmail.com Tue Oct 9 11:08:55 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 14:08:55 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: It's a nice way to look at the problem, and we're facing the same issues as with insufficiently powerful type systems. LALR is the Go of parsing in this case :) I'd rather write Python and have a larger test suite than deal with lack of generics in Go, if you allow me to take the analogy that far. In fact, we do have a fair share of boilerplate in our current grammar due to lack of parametrisation. 
That's another issue that would be solved by parser combinators (or by a fancier parser generator, but I'm not aware of such one). On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones wrote: > > We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... > > I’d never thought of it that way before – interesting. > > > > Simon > > > > From: ghc-devs On Behalf Of Sven Panne > Sent: 09 October 2018 08:23 > To: vlad.z.4096 at gmail.com > Cc: GHC developers > Subject: Re: Parser.y rewrite with parser combinators > > > > Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov : > > [...] That's true regardless of implementation technique, parsers are rather > delicate. > > > > I think it's not the parsers themselves which are delicate, it is the language that they should recognize. > > > > A LALR-based parser generator does provide more information > when it detects shift/reduce and reduce/reduce conflicts, but I never > found this information useful. It was always quite the opposite of > being helpful - an indication that a LALR parser could not handle my > change and I had to look for workarounds. [...] > > > > Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. 
Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usablity problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) > > > > The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... > > > > Cheers, > > S. From rae at cs.brynmawr.edu Tue Oct 9 13:45:12 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 9 Oct 2018 09:45:12 -0400 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: I think one problem is that we don't even have bounded levels of backtracking, because (with view patterns) you can put expressions into patterns. Consider > f = do K x (z -> ... Do we have a constructor pattern with a view pattern inside it? Or do we have an expression with a required visible type application and a function type? (This last bit will be possible only with visible dependent quantification in terms, but I'm confident that Vlad will appreciate the example.) We'll need nested backtracking to sort this disaster out -- especially if we have another `do` in the ... What I'm trying to say here is that tracking the backtracking level in types doesn't seem like it will fly (tempting though it may be). Richard > On Oct 9, 2018, at 7:08 AM, Vladislav Zavialov wrote: > > It's a nice way to look at the problem, and we're facing the same > issues as with insufficiently powerful type systems. LALR is the Go of > parsing in this case :) > > I'd rather write Python and have a larger test suite than deal with > lack of generics in Go, if you allow me to take the analogy that far. 
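[Richard's `f = do K x (z -> ...` example above can be made compilable for the view-pattern reading. The other reading he mentions, a visible dependent quantifier in terms, is hypothetical future syntax and is not shown. The types and names below are invented for illustration.]

```haskell
{-# LANGUAGE ViewPatterns #-}
-- With ViewPatterns an arbitrary *expression* sits inside a *pattern*,
-- so a parser reading `K x (...` cannot classify the parenthesised
-- group until it has parsed a whole expression and then looked for `->`.

data T = K Int String

-- View pattern: apply `length` to the second field, match the result on n.
f :: T -> Int
f (K x (length -> n)) = x + n

-- The view expression may itself contain lambdas, `do` blocks, etc.,
-- and may mention variables bound earlier in the same pattern (here x),
-- so any backtracking strategy has to nest.
g :: T -> Int
g (K x ((\s -> length s + x) -> n)) = n

main :: IO ()
main = do
  print (f (K 1 "abc"))  -- 1 + length "abc" = 4
  print (g (K 2 "abc"))  -- length "abc" + 2 = 5
```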
> > In fact, we do have a fair share of boilerplate in our current grammar > due to lack of parametrisation. That's another issue that would be > solved by parser combinators (or by a fancier parser generator, but > I'm not aware of such one). > > On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones wrote: >> >> We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... >> >> I’d never thought of it that way before – interesting. >> >> >> >> Simon >> >> >> >> From: ghc-devs On Behalf Of Sven Panne >> Sent: 09 October 2018 08:23 >> To: vlad.z.4096 at gmail.com >> Cc: GHC developers >> Subject: Re: Parser.y rewrite with parser combinators >> >> >> >> Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov : >> >> [...] That's true regardless of implementation technique, parsers are rather >> delicate. >> >> >> >> I think it's not the parsers themselves which are delicate, it is the language that they should recognize. >> >> >> >> A LALR-based parser generator does provide more information >> when it detects shift/reduce and reduce/reduce conflicts, but I never >> found this information useful. It was always quite the opposite of >> being helpful - an indication that a LALR parser could not handle my >> change and I had to look for workarounds. [...] >> >> >> >> Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. 
If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usablity problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) >> >> >> >> The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... >> >> >> >> Cheers, >> >> S. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From vlad.z.4096 at gmail.com Tue Oct 9 14:02:47 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 9 Oct 2018 17:02:47 +0300 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: I agree with you. This example puts a nail on the coffin of the backtracking approach. I will have to think of something else, and at this point a full rewrite to parser combinators does not seem as appealing. Thanks! On Tue, Oct 9, 2018 at 4:45 PM Richard Eisenberg wrote: > > I think one problem is that we don't even have bounded levels of backtracking, because (with view patterns) you can put expressions into patterns. > > Consider > > > f = do K x (z -> ... > > Do we have a constructor pattern with a view pattern inside it? Or do we have an expression with a required visible type application and a function type? (This last bit will be possible only with visible dependent quantification in terms, but I'm confident that Vlad will appreciate the example.) 
We'll need nested backtracking to sort this disaster out -- especially if we have another `do` in the ... > > What I'm trying to say here is that tracking the backtracking level in types doesn't seem like it will fly (tempting though it may be). > > Richard > > > On Oct 9, 2018, at 7:08 AM, Vladislav Zavialov wrote: > > > > It's a nice way to look at the problem, and we're facing the same > > issues as with insufficiently powerful type systems. LALR is the Go of > > parsing in this case :) > > > > I'd rather write Python and have a larger test suite than deal with > > lack of generics in Go, if you allow me to take the analogy that far. > > > > In fact, we do have a fair share of boilerplate in our current grammar > > due to lack of parametrisation. That's another issue that would be > > solved by parser combinators (or by a fancier parser generator, but > > I'm not aware of such one). > > > > On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones wrote: > >> > >> We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... > >> > >> I’d never thought of it that way before – interesting. > >> > >> > >> > >> Simon > >> > >> > >> > >> From: ghc-devs On Behalf Of Sven Panne > >> Sent: 09 October 2018 08:23 > >> To: vlad.z.4096 at gmail.com > >> Cc: GHC developers > >> Subject: Re: Parser.y rewrite with parser combinators > >> > >> > >> > >> Am Di., 9. Okt. 2018 um 00:25 Uhr schrieb Vladislav Zavialov : > >> > >> [...] That's true regardless of implementation technique, parsers are rather > >> delicate. > >> > >> > >> > >> I think it's not the parsers themselves which are delicate, it is the language that they should recognize. 
> >> > >> > >> > >> A LALR-based parser generator does provide more information > >> when it detects shift/reduce and reduce/reduce conflicts, but I never > >> found this information useful. It was always quite the opposite of > >> being helpful - an indication that a LALR parser could not handle my > >> change and I had to look for workarounds. [...] > >> > >> > >> > >> Not that this would help at this point, but: The conflicts reported by parser generators like Happy are *extremely* valuable, they hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer and e.g. everybody proposing a syntactic extension to GHC should try to fit this into a grammar for Happy *before* proposing that extension. If you get conflicts, it is a very strong hint that the language is hard to parse by *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed by infinite lookahead, which is only a minor technical problem, but a major usablity problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) ;-) > >> > >> > >> > >> The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects... > >> > >> > >> > >> Cheers, > >> > >> S. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From svenpanne at gmail.com Tue Oct 9 14:18:32 2018 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 9 Oct 2018 16:18:32 +0200 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: Am Di., 9. 
Okt. 2018 um 15:45 Uhr schrieb Richard Eisenberg < rae at cs.brynmawr.edu>: > [...] What I'm trying to say here is that tracking the backtracking level > in types doesn't seem like it will fly (tempting though it may be). > ... and even if it did fly, parser combinators with backtracking have a strong tendency to introduce space leaks: To backtrack, you have to keep previous input somehow, at least up to some point. So to keep the memory requirements sane, you have to explicitly commit to one parse or another at some point. Different combinator libraries have different ways to do that, but you have to do that by hand somehow, and that's where the beauty and maintainability of the combinator approach really suffer. Note that I'm not against parser combinators, far from it, but I don't think they are necessarily the right tool for the problem at hand. The basic problem is: Haskell's syntax, especially with all those extensions, is quite tricky, and this will be reflected in any parser for it. IMHO a parser generator is the lesser evil here, at least it points you to the ugly places of your language (on a syntactic level). If Haskell had a few more syntactic hints, reading code would be easier, not only for a compiler, but (more importantly) for humans, too. Richard's code snippet is a good example where some hint would be very useful for the casual reader; in some sense humans have to "backtrack", too, when reading such code. -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Wed Oct 10 05:28:24 2018 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Wed, 10 Oct 2018 07:28:24 +0200 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: On Tue, Oct 9, 2018 at 1:09 PM Vladislav Zavialov wrote: > In fact, we do have a fair share of boilerplate in our current grammar > due to lack of parametrisation.
That's another issue that would be > solved by parser combinators (or by a fancier parser generator, but > I'm not aware of such one). > There is the Menhir [1] parser generator. It provides a decent abstraction mechanism. It also generates LR parsers, hence is more flexible than Happy. And it generates incremental parsers. Of course, one would have to make it output a Haskell parser first. But considering the effort it seems to have been to add a Coq backend, I'd assume it would be less of an effort than porting the entire GHC grammar to parser combinators. (One hard thing would be to decide how to replace the ML functor mechanism that Menhir uses to generate functions from parsers to parsers.) [1]: http://gallium.inria.fr/~fpottier/menhir/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide at well-typed.com Wed Oct 10 09:42:14 2018 From: davide at well-typed.com (David Eichmann) Date: Wed, 10 Oct 2018 10:42:14 +0100 Subject: Feedback request: GHC performance test-suite In-Reply-To: <75171465-522b-68d4-dfdf-cbadb6a2235b@well-typed.com> References: <75171465-522b-68d4-dfdf-cbadb6a2235b@well-typed.com> Message-ID: Hello all, After some time and feedback, bgamari and I have decided to slowly press on with the proposed changes to the performance test-suite. I'll continue to update the wiki page in the future and will inform ghc-devs of significant milestones. Note I've augmented the wiki page's drift issue section and moved it to future work. Thank you, David Eichmann P.S. On 13/09/18 09:24, David Eichmann wrote: > Hello all, > > I've recently resumed some work started by Jared Weakly on the GHC > test suite. This specifically regards performance tests. The work aims > to, among some other things, reduce manual work and to log performance > test results from our CI server. The proposed change is described in > more detail on this wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Performance/Tests.
I'd > appreciate any feedback or questions on this. > > Thank you and have a great day, > David Eichmann > -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From jweakly at pdx.edu Wed Oct 10 13:48:15 2018 From: jweakly at pdx.edu (Jared Weakly) Date: Wed, 10 Oct 2018 06:48:15 -0700 Subject: Feedback request: GHC performance test-suite In-Reply-To: References: <75171465-522b-68d4-dfdf-cbadb6a2235b@well-typed.com> Message-ID: Thanks, David! The rebase of the branch has been sitting half complete on my laptop for a shamefully long time as I've been swamped at work. Please feel free to reach out if you have questions and let me know if there's anything I can do to speed things up. On Wed, Oct 10, 2018, 2:42 AM David Eichmann wrote: > Hello all, > > After some time and feedback, bgamari and and I have decided to slowly > press on with the proposed changes to the performance test-suite. I'll > continue to update the wiki page > in the future > and will inform ghc-devs of significant milestones. Note I've augmented the > wiki page's drift issue section and moved it to future work. > > Thank you, > > David Eichmann > > P.S. > > On 13/09/18 09:24, David Eichmann wrote: > > Hello all, > > I've recently resumed some work started by Jared Weakly on the GHC test > suite. This specifically regards performance tests. The work aims to, among > some other things, reduce manual work and to log performance test results > from our CI server. The proposed change is described in more detail on this > wiki page: https://ghc.haskell.org/trac/ghc/wiki/Performance/Tests. I'd > appreciate any feedback or questions on this. 
> > Thank you and have a great day, > David Eichmann > > > -- > David Eichmann, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com > > Registered in England & Wales, OC335890 > 118 Wymering Mansions, Wymering Road, London W9 2NF, England > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Thu Oct 11 17:43:31 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 11 Oct 2018 20:43:31 +0300 Subject: Why align all pinned array payloads on 16 bytes? Message-ID: Hi, I just found out we currently align all pinned array payloads to 16 bytes and I'm wondering why. I don't see any comments/notes on this, and it's also not part of the primop documentation. We also have another primop for aligned allocation: newAlignedPinnedByteArray#. Given that alignment behavior of newPinnedByteArray# is not documented and we have another one for aligned allocation, perhaps we can remove alignment in newPinnedByteArray#. Does anyone remember what was the motivation for always aligning pinned arrays? Thanks Ömer From ben at well-typed.com Sun Oct 14 22:17:12 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 14 Oct 2018 18:17:12 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released Message-ID: <878t30npgc.fsf@smart-cactus.org> Hello everyone, The GHC team is pleased to announce the availability of GHC 8.4.4, a patch-level release in the 8.4 series. The source distribution, binary distributions, and documentation for this release are available at https://downloads.haskell.org/~ghc/8.4.4 This release fixes several bugs present in 8.4.3. These include: - A bug which could result in memory unsafety with certain uses of `touch#` has been resolved.
(#14346) - A compiler panic triggered by some GADT record updates has been fixed (#15499) - The `text` library has been updated, fixing several serious bugs in the version shipped with GHC 8.4.3 (see `text` issues #227, #221, and #197). - A serious code generation bug in the LLVM code generator, potentially resulting in incorrect evaluation of floating point expressions, has been fixed (#14251) As always, the full release notes can be found in the users guide, https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html Thanks to everyone who has contributed to developing, documenting, and testing this release! As always, let us know if you encounter trouble. How to get it ~~~~~~~~~~~~~ The easy way is to go to the web page, which should be self-explanatory: https://www.haskell.org/ghc/ We supply binary builds in the native package format for many platforms, and the source distribution is available from the same place. Packages will appear as they are built - if the package for your system isn't available yet, please try again later. Background ~~~~~~~~~~ Haskell is a standard lazy functional programming language. GHC is a state-of-the-art programming suite for Haskell. Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license. A wide variety of Haskell related resources (tutorials, libraries, specifications, documentation, compilers, interpreters, references, contact information, links to research groups) are available from the Haskell home page (see below).
On-line GHC-related resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Relevant URLs on the World-Wide Web: GHC home page https://www.haskell.org/ghc/ GHC developers' home page https://ghc.haskell.org/trac/ghc/ Haskell home page https://www.haskell.org/ Supported Platforms ~~~~~~~~~~~~~~~~~~~ The list of platforms we support, and the people responsible for them, is here: https://ghc.haskell.org/trac/ghc/wiki/Contributors Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform: https://ghc.haskell.org/trac/ghc/wiki/Building Developers ~~~~~~~~~~ We welcome new contributors. Instructions on accessing our source code repository, and getting started with hacking on GHC, are available from the GHC's developer's site run by Trac: https://ghc.haskell.org/trac/ghc/ Mailing lists ~~~~~~~~~~~~~ We run mailing lists for GHC users and bug reports; to subscribe, use the web interfaces at https://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-tickets There are several other haskell and ghc-related mailing lists on www.haskell.org; for the full list, see https://mail.haskell.org/cgi-bin/mailman/listinfo Some GHC developers hang out on #haskell on IRC, too: https://www.haskell.org/haskellwiki/IRC_channel Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Oct 15 22:13:57 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 15 Oct 2018 18:13:57 -0400 Subject: Coordinating the Hadrian merge Message-ID: <87tvlmn9i9.fsf@smart-cactus.org> Hi Andrey and Alp, Before ICFP we concluded that we will merge Hadrian into the GHC tree. 
This unfortunately took a back-seat priority-wise while I sorted out various release things but I think we are now in a position to make this happen. Andrey, would you be okay with my merging Hadrian as-is into the GHC tree? In the past we discussed squashing the project's early history however I've had very little luck doing this cleanly (primarily due to the difficulty of rebasing in the presence of merge commits). After merging there will be a period where we flush the pull request queue but I don't anticipate this causing much trouble. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From andrey.mokhov at newcastle.ac.uk Mon Oct 15 23:12:05 2018 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Mon, 15 Oct 2018 23:12:05 +0000 Subject: Coordinating the Hadrian merge In-Reply-To: References: <87tvlmn9i9.fsf@smart-cactus.org> Message-ID: Hi Ben, Yes, I'm fine to merge, but we should make it clear that Hadrian should not be used just yet: 1) It is currently broken due to some recent changes in GHC. 2) Alp made tremendous progress with fixing the testsuite failures, but there are still some failures left. 3) There are a few usability requests by Simon Marlow that we need to address. > In the past we discussed squashing the project's early history > however I've had very little luck doing this cleanly Ouch, it would be a bit grim to merge all those early commits. On the other hand, I looked at commits at the middle of Hadrian's history and they look quite sensible, just overly fine-grained. So, even if we could somehow squash the early history, that probably wouldn't give us much saving in terms of the commit count -- it would still be more than 1K.
P.S.: Don't forget to switch off commit notifications when you do the merge ;-) Cheers, Andrey -----Original Message----- From: Ben Gamari [mailto:ben at well-typed.com] Sent: 15 October 2018 23:14 To: Andrey Mokhov ; Alp Mestanogullari Cc: GHC developers Subject: Coordinating the Hadrian merge Hi Andrey and Alp, Before ICFP we concluded that we will merge Hadrian into the GHC tree. This unfortunately took a back-seat priority-wise while I sorted out various release things but I think we are now in a position to make this happen. Andrey, would you be okay with my merging Hadrian as-is into the GHC tree? In the past we discussed squashing the project's early history however I've had very little luck doing this cleanly (primarily due to the difficulty of rebasing in the presence of merge commits). After merging there will be a period where we flush the pull request queue but I don't anticipate this causing much trouble. Cheers, - Ben From carter.schonwald at gmail.com Tue Oct 16 01:22:02 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 15 Oct 2018 21:22:02 -0400 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: while I don't know the original context, some care may be needed ... depending on how this alignment assumption is accidentally used by users... it may result in really gross breakages On Thu, Oct 11, 2018 at 1:44 PM Ömer Sinan Ağacan wrote: > Hi, > > I just found out we currently align all pinned array payloads to 16 bytes > and > I'm wondering why. I don't see any comments/notes on this, and it's also > not > part of the primop documentation. We also have another primop for aligned > allocation: newAlignedPinnedByteArray#. Given that alignment behavior of > newPinnedByteArray# is not documented and we have another one for aligned > allocation, perhaps we can remove alignment in newPinnedByteArray#. > > Does anyone remember what was the motivation for always aligning pinned > arrays?
> > Thanks > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alp at well-typed.com Tue Oct 16 01:30:37 2018 From: alp at well-typed.com (Alp Mestanogullari) Date: Tue, 16 Oct 2018 03:30:37 +0200 Subject: Coordinating the Hadrian merge In-Reply-To: References: <87tvlmn9i9.fsf@smart-cactus.org> Message-ID: <15abc935-5771-5c5d-620d-3233a3f85e96@well-typed.com> Hello, Andrey: the hadrian submodule has been around for a while now, yet people have not exactly abandoned the make build system. Merging hadrian in the main ghc repo just means turning that submodule into a proper subdirectory, after all. I might be wrong but I really doubt this will make much of a difference for most ghc devs and suddenly catch everyone's attention. The most important point of merging, in my opinion, once we "unbreak" hadrian, will be to add at least one hadrian job in CI to make sure that this (breakage) never happens without us noticing right away, so ideally before differentials land. I don't see the merge as "alright, hadrian's ready, let's use it everyone", it's really about us hadrian contributors not finding the "catch-up" game all that fun after we've played it dozens of times. Without all those bumps on the road, who knows where hadrian would be right now. On 16/10/2018 01:12, Andrey Mokhov wrote: > Hi Ben, > > Yes, I'm fine to merge, but we should make it clear that Hadrian should not be used just yet: > > 1) It is currently broken due to some recent changes in GHC. > > 2) Alp made tremendous progress with fixing the testsuite failures, but there are still some failures left. > > 3) There are a few usability requests by Simon Marlow that we need to address. 
> >> In the past we discussed squashing the project's early history >> however I've had very little luck doing this cleanly > Ouch, it would be a bit grim to merge all those early commits. On the other hand, I looked at commits at the middle of Hadrian's history and they look quite sensible, just overly fine-grained. So, even if we could somehow squash the early history, that probably wouldn't give us much saving in terms of the commit count -- it would still be more than 1K. > > P.S.: Don't forget to switch off commit notifications when you do the merge ;-) > > Cheers, > Andrey > > -----Original Message----- > From: Ben Gamari [mailto:ben at well-typed.com] > Sent: 15 October 2018 23:14 > To: Andrey Mokhov ; Alp Mestanogullari > Cc: GHC developers > Subject: Coordinating the Hadrian merge > > Hi Andrey and Alp, > > Before ICFP we concluded that we will merge Hadrian into the GHC tree. > This unfortunately took a back-seat priority-wise while I sorted out > various release things but I think we are now in a position to make this > happen. > > Andrey, would you be okay with my merging Hadrian as-is into the GHC > tree? In the past we discussed squashing the project's early history > however I've had very little luck doing this cleanly (primarily due to > the difficulty of rebasing in the presence of merge commits) > > After merging there will be a period where we flush the pull request > queue but I don't anticipate this causing much trouble. 
> > Cheers, > > - Ben > -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England From ben at well-typed.com Tue Oct 16 01:32:43 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 15 Oct 2018 21:32:43 -0400 Subject: Coordinating the Hadrian merge In-Reply-To: References: <87tvlmn9i9.fsf@smart-cactus.org> Message-ID: <87efcqn0ax.fsf@smart-cactus.org> Andrey Mokhov writes: > Hi Ben, > > Yes, I'm fine to merge, but we should make it clear that Hadrian > should not be used just yet: > > 1) It is currently broken due to some recent changes in GHC. > > 2) Alp made tremendous progress with fixing the testsuite failures, but there are still some failures left. > > 3) There are a few usability requests by Simon Marlow that we need to address. > >> In the past we discussed squashing the project's early history >> however I've had very little luck doing this cleanly > Sure, I'm happy to make it clear that things are still in flux and that there are known weaknesses. That being said, I'm not sure it helps to actively discourage use. After all, there will be little incentive for others to help find and fix the remaining issues unless there are users. > Ouch, it would be a bit grim to merge all those early commits. On the > other hand, I looked at commits at the middle of Hadrian's history and > they look quite sensible, just overly fine-grained. So, even if we > could somehow squash the early history, that probably wouldn't give us > much saving in terms of the commit count -- it would still be more > than 1K. > Right; given that GHC itself has more than 50k commits I'm not terribly concerned about Hadrian's contribution. How should we handle ticket tracking post-merge? The easiest option would probably be to keep the existing tickets on GitHub and ask that new tickets be reported via Trac.
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From m at tweag.io Tue Oct 16 12:06:13 2018 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 16 Oct 2018 14:06:13 +0200 Subject: Goals for GHC 8.8 In-Reply-To: References: <87wosknaub.fsf@smart-cactus.org> <1534857692.2717.56.camel@jeltsch.info> <87h8jnn5iy.fsf@smart-cactus.org> <8736v7ms34.fsf@smart-cactus.org> Message-ID: Hi Ben, just a heads up: we are still on track for a Diff submission for linear types by end of October (the cut-off date you advertised at the top of this thread for feature work on GHC 8.8, and the one we stated we'd aim for in September). We might run into last-minute blockers of course, but so far so good. I've been told that we'll be hearing from the Committee before then about acceptance or rejection of the proposal. Best, -- Mathieu Boespflug Founder at http://tweag.io. On Wed, 5 Sep 2018 at 15:46, Boespflug, Mathieu wrote: > Hi Ben, > > yes - as for the implementation of the linear types extension, we're > aiming for the submission of a Diff before the 8.8 branch is cut. (If the > Committee has given the green light by then, of course.) > > Best, > > -- > Mathieu Boespflug > Founder at http://tweag.io. > > > On Tue, 21 Aug 2018 at 21:34, Ben Gamari wrote: > >> Mathieu Boespflug writes: >> >> > The proposal would need to be accepted by the GHC Steering Committee >> first >> > before that happens. >> > >> Absolutely; I just wasn't sure whether you were considering pushing for >> merge in the event that it was accepted. >> >> Cheers, >> >> - Ben >> >> -------------- next part -------------- An HTML attachment was scrubbed...
URL: From andrey.mokhov at newcastle.ac.uk Tue Oct 16 15:09:42 2018 From: andrey.mokhov at newcastle.ac.uk (Andrey Mokhov) Date: Tue, 16 Oct 2018 15:09:42 +0000 Subject: Coordinating the Hadrian merge In-Reply-To: References: <87tvlmn9i9.fsf@smart-cactus.org> <87efcqn0ax.fsf@smart-cactus.org> Message-ID: Thanks Alp and Ben! I fully agree with you. Let's go ahead. Ben: I guess you'll do the actual merge -- feel free to do this whenever you like. > How should we handle ticket tracking post-merge? The easiest option > would probably be to keep the existing tickets on GitHub and ask that > new tickets be reported via Trac. Yes, this sounds good. Cheers, Andrey -----Original Message----- From: Ben Gamari [mailto:ben at well-typed.com] Sent: 16 October 2018 02:33 To: Andrey Mokhov ; Alp Mestanogullari Cc: GHC developers Subject: RE: Coordinating the Hadrian merge Andrey Mokhov writes: > Hi Ben, > > Yes, I'm fine to merge, but we should make it clear that Hadrian > should not be used just yet: > > 1) It is currently broken due to some recent changes in GHC. > > 2) Alp made tremendous progress with fixing the testsuite failures, but there are still some failures left. > > 3) There are a few usability requests by Simon Marlow that we need to address. > >> In the past we discussed squashing the project's early history >> however I've had very little luck doing this cleanly > Sure, I'm happy to make it clear that things are still in flux and that there are known weaknesses. That being said, I'm not sure it helps to active discourage use. Afterall, there will be little incentive for others to help find and fix the remaining issues unless there are users. > Ouch, it would be a bit grim to merge all those early commits. On the > other hand, I looked at commits at the middle of Hadrian's history and > they look quite sensible, just overly fine-grained. 
So, even if we > could somehow squash the early history, that probably wouldn't give us > much saving in terms of the commit count -- it would still be more > than 1K. > Right; given that GHC itself has more than 50k commits I'm not terribly concerned about Hadrian's contribution. How should we handle ticket tracking post-merge? The easiest option would probably be to keep the existing tickets on GitHub and ask that new tickets be reported via Trac. Cheers, - Ben From ben at smart-cactus.org Tue Oct 16 17:51:20 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 16 Oct 2018 13:51:20 -0400 Subject: Treatment of unknown pragmas Message-ID: <8736t5n5kc.fsf@smart-cactus.org> Hi everyone, Recently Neil Mitchell opened a pull request [1] proposing a single-line change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the lexer. I'm a bit skeptical of this idea. After all, adding cases to the lexer for every tool that wants a pragma seems quite unsustainable. On the other hand, a reasonable counter-argument could be made on the basis of the Haskell Report, which specifically says that implementations should ignore unrecognized pragmas. If GHC did this (instead of warning, as it now does) then this wouldn't be a problem. Of course, silently ignoring mis-typed pragmas sounds terrible from a usability perspective. For this reason I proposed that the following happen: * The `{-# ... #-}` syntax be reserved in particular for compilers (it largely already is; the Report defines it as "compiler pragma" syntax). The next Report should also allow implementations to warn in the case of unrecognized pragmas. * We introduce a "tool pragma" convention (perhaps even standardized in the next Report). For this we can follow the model of Liquid Haskell: `{-@ $TOOL_NAME ... @-}`. Does this sound sensible? Cheers, - Ben [1] https://github.com/ghc/ghc/pull/204 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From vlad.z.4096 at gmail.com Tue Oct 16 18:13:17 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Tue, 16 Oct 2018 21:13:17 +0300 Subject: Treatment of unknown pragmas In-Reply-To: <8736t5n5kc.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> Message-ID: What about introducing -fno-warn-pragma=XXX? People who use HLint will add -fno-warn-pragma=HLINT to their build configuration. On Tue, Oct 16, 2018, 20:51 Ben Gamari wrote: > Hi everyone, > > Recently Neil Mitchell opened a pull request [1] proposing a single-line > change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the > lexer. I'm a bit skeptical of this idea. Afterall, adding cases to the > lexer for every tool that wants a pragma seems quite unsustainable. > > On the other hand, a reasonable counter-argument could be made on the > basis of the Haskell Report, which specifically says that > implementations should ignore unrecognized pragmas. If GHC did this > (instead of warning, as it now does) then this wouldn't be a problem. > > Of course, silently ignoring mis-typed pragmas sounds terrible from a > usability perspective. For this reason I proposed that the following > happen: > > * The `{-# ... #-}` syntax be reserved in particular for compilers (it > largely already is; the Report defines it as "compiler pragma" > syntax). The next Report should also allow implementations to warn in > the case of unrecognized pragmas. > > * We introduce a "tool pragma" convention (perhaps even standardized in > the next Report). For this we can follow the model of Liquid Haskell: > `{-@ $TOOL_NAME ... @-}`. > > Does this sound sensible? 
> > Cheers, > > - Ben > > > [1] https://github.com/ghc/ghc/pull/204 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 16 18:35:01 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 16 Oct 2018 18:35:01 +0000 Subject: Treatment of unknown pragmas In-Reply-To: <8736t5n5kc.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> Message-ID: I rather agree. We don't even need a convention, do we? /Any/ comment in {- -} is ignored by GHC /except/ {-# ... #-}. So tool users are free to pick whatever convention they like to identify the stuff for their tool. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 16 October 2018 18:51 | To: GHC developers ; haskell at haskell.org | Subject: Treatment of unknown pragmas | Hi everyone, | | Recently Neil Mitchell opened a pull request [1] proposing a single-line | change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the | lexer. I'm a bit skeptical of this idea. Afterall, adding cases to the | lexer for every tool that wants a pragma seems quite unsustainable. | | On the other hand, a reasonable counter-argument could be made on the | basis of the Haskell Report, which specifically says that | implementations should ignore unrecognized pragmas. If GHC did this | (instead of warning, as it now does) then this wouldn't be a problem. | | Of course, silently ignoring mis-typed pragmas sounds terrible from a | usability perspective. For this reason I proposed that the following | happen: | | * The `{-# ... #-}` syntax be reserved in particular for compilers (it | largely already is; the Report defines it as "compiler pragma" | syntax). The next Report should also allow implementations to warn in | the case of unrecognized pragmas.
| | * We introduce a "tool pragma" convention (perhaps even standardized in | the next Report). For this we can follow the model of Liquid Haskell: | `{-@ $TOOL_NAME ... @-}`. | | Does this sound sensible? | | Cheers, | | - Ben | | | [1] https://github.com/ghc/ghc/pull/204 From jweakly at pdx.edu Tue Oct 16 18:45:08 2018 From: jweakly at pdx.edu (Jared Weakly) Date: Tue, 16 Oct 2018 11:45:08 -0700 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> Message-ID: The main problem I see with this is that now N tools need to implement support for that flag and it will need to be configured for every tool separately. If we standardize on a tool pragma in the compiler, all that stays automatic as it is now (a huge plus for tooling, which should be as beginner-friendly as possible). It also, in my eyes, helps enforce a cleaner distinction between pragmas as a feature-gate and pragmas as a compiler/tooling directive. On Tue, Oct 16, 2018, 11:13 AM Vladislav Zavialov wrote: > What about introducing -fno-warn-pragma=XXX? People who use HLint will add > -fno-warn-pragma=HLINT to their build configuration. > > On Tue, Oct 16, 2018, 20:51 Ben Gamari wrote: > >> Hi everyone, >> >> Recently Neil Mitchell opened a pull request [1] proposing a single-line >> change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the >> lexer. I'm a bit skeptical of this idea. Afterall, adding cases to the >> lexer for every tool that wants a pragma seems quite unsustainable. >> >> On the other hand, a reasonable counter-argument could be made on the >> basis of the Haskell Report, which specifically says that >> implementations should ignore unrecognized pragmas. If GHC did this >> (instead of warning, as it now does) then this wouldn't be a problem. >> >> Of course, silently ignoring mis-typed pragmas sounds terrible from a >> usability perspective. For this reason I proposed that the following >> happen: >> >> * The `{-# ...
#-}` syntax be reserved in particular for compilers (it >> largely already is; the Report defines it as "compiler pragma" >> syntax). The next Report should also allow implementations to warn in >> the case of unrecognized pragmas. >> >> * We introduce a "tool pragma" convention (perhaps even standardized in >> the next Report). For this we can follow the model of Liquid Haskell: >> `{-@ $TOOL_NAME ... @-}`. >> >> Does this sound sensible? >> >> Cheers, >> >> - Ben >> >> >> [1] https://github.com/ghc/ghc/pull/204 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Oct 16 19:00:30 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 16 Oct 2018 20:00:30 +0100 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> Message-ID: I like the suggestion of a flag. For any realistic compilation you have to pass a large number of flags to GHC anyway. `stack`, `cabal` or so on can choose to pass the additional flag by default if they wish or make it more ergonomic to do so. On Tue, Oct 16, 2018 at 7:58 PM Jared Weakly wrote: > > The main problem I see with this is now N tools need to implement support for that flag and it will need to be configured for every tool separately. If we standardize on a tool pragma in the compiler, all that stays automatic as it is now (a huge plus for tooling, which should as beginner friendly as possible). 
It also, in my eyes, helps enforce a cleaner distinction between pragmas as a feature-gate and pragmas as a compiler/tooling directive > > On Tue, Oct 16, 2018, 11:13 AM Vladislav Zavialov wrote: >> >> What about introducing -fno-warn-pragma=XXX? People who use HLint will add -fno-warn-pragma=HLINT to their build configuration. >> >> On Tue, Oct 16, 2018, 20:51 Ben Gamari wrote: >>> >>> Hi everyone, >>> >>> Recently Neil Mitchell opened a pull request [1] proposing a single-line >>> change: Adding `{-# HLINT ... #-}` to the list of pragmas ignored by the >>> lexer. I'm a bit skeptical of this idea. Afterall, adding cases to the >>> lexer for every tool that wants a pragma seems quite unsustainable. >>> >>> On the other hand, a reasonable counter-argument could be made on the >>> basis of the Haskell Report, which specifically says that >>> implementations should ignore unrecognized pragmas. If GHC did this >>> (instead of warning, as it now does) then this wouldn't be a problem. >>> >>> Of course, silently ignoring mis-typed pragmas sounds terrible from a >>> usability perspective. For this reason I proposed that the following >>> happen: >>> >>> * The `{-# ... #-}` syntax be reserved in particular for compilers (it >>> largely already is; the Report defines it as "compiler pragma" >>> syntax). The next Report should also allow implementations to warn in >>> the case of unrecognized pragmas. >>> >>> * We introduce a "tool pragma" convention (perhaps even standardized in >>> the next Report). For this we can follow the model of Liquid Haskell: >>> `{-@ $TOOL_NAME ... @-}`. >>> >>> Does this sound sensible? 
>>> >>> Cheers >>> >>> - Ben >>> >>> >>> [1] https://github.com/ghc/ghc/pull/204 >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Oct 16 19:14:26 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 16 Oct 2018 15:14:26 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> Message-ID: <87zhvdln5l.fsf@smart-cactus.org> Vladislav Zavialov writes: > What about introducing -fno-warn-pragma=XXX? People who use HLint will > add -fno-warn-pragma=HLINT to their build configuration. > A warning flag is an interesting way to deal with the issue. On the other hand, it's not great from an ergonomic perspective; after all, this would mean that all users of HLint (and any other tool requiring special pragmas) would have to include this flag in their build configuration. A typical Haskell project already needs too much boilerplate of this kind, in my opinion. I think it makes a lot of sense to have a standard way for third parties to attach string-y information to Haskell source constructs. While it's not strictly speaking necessary to standardize the syntax, doing so minimizes the chance that tools overlap and hopefully reduces the language ecosystem learning curve. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From eric at seidel.io Tue Oct 16 19:48:10 2018 From: eric at seidel.io (Eric Seidel) Date: Tue, 16 Oct 2018 15:48:10 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <87zhvdln5l.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> Message-ID: <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> On Tue, Oct 16, 2018, at 15:14, Ben Gamari wrote: > For this we can follow the model of Liquid Haskell: `{-@ $TOOL_NAME ... @-}` LiquidHaskell does not use `{-@ LIQUID ... @-}`, we just write the annotation inside `{-@ ... @-}` :) > I think it makes a lot of sense to have a standard way for third-parties > to attach string-y information to Haskell source constructs. While it's > not strictly speaking necessary to standardize the syntax, doing > so minimizes the chance that tools overlap and hopefully reduces > the language ecosystem learning curve. This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? From ndmitchell at gmail.com Tue Oct 16 20:11:56 2018 From: ndmitchell at gmail.com (Neil Mitchell) Date: Tue, 16 Oct 2018 21:11:56 +0100 Subject: Treatment of unknown pragmas In-Reply-To: <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: > A warning flag is an interesting way to deal with the issue. On the > other hand, it's not great from an ergonomic perspective; afterall, this > would mean that all users of HLint (and any other tool requiring special Yep, this means every HLint user has to do an extra thing. 
I (the HLint author) now have a whole pile of "how do I disable warnings in Stack", and "what's the equivalent of this in Nix". Personally, it ups the support burden significantly enough that I wouldn't go this route. I think it might be a useful feature in general, as new tools could use the flag to prototype new types of warning, but I imagine once a feature gets popular it becomes too much fuss. > > I think it makes a lot of sense to have a standard way for third-parties > > to attach string-y information to Haskell source constructs. While it's > > not strictly speaking necessary to standardize the syntax, doing > > so minimizes the chance that tools overlap and hopefully reduces > > the language ecosystem learning curve. > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? Significant compilation performance penalty and extra recompilation. ANN pragmas are what HLint currently uses. > I'm a bit skeptical of this idea. Afterall, adding cases to the > lexer for every tool that wants a pragma seems quite unsustainable. I don't find this argument that convincing. Given the list already includes CATCH and DERIVE, the bar to entry can't have been _that_ high. And yet, the list remains pretty short. My guess is the demand is pretty low - we're just whitelisting a handful of additional words that aren't misspellings. Thanks, Neil From m at tweag.io Tue Oct 16 20:25:57 2018 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 16 Oct 2018 22:25:57 +0200 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing.
Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. I agree. GHC presumably gives warnings for most names because most possible pragma names really are misspellings of what the user intended to say. Some select few pragma names are likely *not* misspellings (like CATCH, DERIVE, HLINT, etc), so there the policy is reversed. Common usage is what tells us what is likely a misspelling and what is not. Simply tracking the common usage is the pragmatic choice. From eric at seidel.io Tue Oct 16 20:29:40 2018 From: eric at seidel.io (Eric Seidel) Date: Tue, 16 Oct 2018 16:29:40 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. The extra recompilation is annoying for HLint, true, since you probably don't care about your annotations being visible from other modules, whereas LiquidHaskell does. But I'm surprised by the compilation performance penalty. I would have expected ANN to be fairly cheap. That seems worthy of a bug report, regardless of the current discussion about unknown pragmas.
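[Archive editor's aside: the ANN-versus-pragma trade-off discussed above can be put side by side in source form. This is only a sketch — the module and the lint-rule text are made up for illustration, and the `{-# HLINT ... #-}` form is the one proposed in Neil's pull request, not something GHC accepts at the time of writing:]

```haskell
module Example where  -- hypothetical module, for illustration only

-- ANN-based suppression, roughly the shape HLint consumes today: the
-- payload is a genuine Haskell expression, so GHC must type-check and
-- compile it, and it is recorded in the module's interface file --
-- hence the compile-time and recompilation costs Neil mentions.
{-# ANN module ("HLint: ignore Use camelCase" :: String) #-}

-- The proposed pragma form: handled purely in the lexer, never
-- type-checked, and never stored in the interface file.
{-# HLINT ignore "Use camelCase" #-}

my_function :: Int -> Int
my_function x = x + 1
```

[The fragment makes no claim about what GHC or HLint actually accept; it only illustrates why the ANN route carries costs that a lexer-level pragma avoids.]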
From allbery.b at gmail.com Tue Oct 16 20:34:31 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 16 Oct 2018 16:34:31 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> Message-ID: The problem with ANN is it's part of the plugins API, and as such does things like compiling the expression into the program in case a plugin generates code using its value, plus things like recompilation checking end up assuming plugins are in use and doing extra checking. Using it as a compile-time pragma is actually fairly weird from that standpoint. On Tue, Oct 16, 2018 at 4:29 PM Eric Seidel wrote: > > > This sounds exactly like the existing ANN pragma, which is what I've > wanted LiquidHaskell to move towards for a long time. What is wrong with > using the ANN pragma? > > > > Significant compilation performance penalty and extra recompilation. > > ANN pragmas is what HLint currently uses. > > The extra recompilation is annoying for HLint, true, since you probably > don't care about your annotations being visible from other modules, whereas > LiquidHaskell does. > > But I'm surprised by the compilation performance penalty. I would have > expected ANN to be fairly cheap. That seems worthy of a bug report, > regardless of the current discussion about unknown pragmas. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marlowsd at gmail.com Tue Oct 16 20:43:57 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 16 Oct 2018 21:43:57 +0100 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. We can either (a) not protect people from mistyped pragmas, or (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. Cheers Simon On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > A warning flag is an interesting way to deal with the issue. On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for > third-parties > > > to attach string-y information to Haskell source constructs. 
While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've > wanted LiquidHaskell to move towards for a long time. What is wrong with > using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. > > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Oct 16 20:45:25 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 16 Oct 2018 16:45:25 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: Maybe the right answer is to ignore unknown OPTIONS_* pragmas and then use OPTIONS_HLINT? On Tue, Oct 16, 2018 at 4:44 PM Simon Marlow wrote: > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed > like the least worst option taking into account the various issues that > have already been described in this thread. 
I'm OK with adding HLINT; after > all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other > OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > We can either > (a) not protect people from mistyped pragmas, or > (b) protect people from mistyped pragma names, but then we have to bake in > the set of known pragmas > > We could choose to have a different convention for pragmas that GHC > doesn't know about (as Ben suggests), but then of course we don't get any > protection for mistyped pragma names when using that convention. > > Cheers > Simon > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > >> > A warning flag is an interesting way to deal with the issue. On the >> > other hand, it's not great from an ergonomic perspective; afterall, this >> > would mean that all users of HLint (and any other tool requiring special >> >> Yep, this means every HLint user has to do an extra thing. I (the >> HLint author) now have a whole pile of "how do I disable warnings in >> Stack", and "what's the equivalent of this in Nix". Personally, it ups >> the support level significantly that I wouldn't go this route. >> >> I think it might be a useful feature in general, as new tools could >> use the flag to prototype new types of warning, but I imagine once a >> feature gets popular it becomes too much fuss. >> >> > > I think it makes a lot of sense to have a standard way for >> third-parties >> > > to attach string-y information to Haskell source constructs. While >> it's >> > > not strictly speaking necessary to standardize the syntax, doing >> > > so minimizes the chance that tools overlap and hopefully reduces >> > > the language ecosystem learning curve. >> > >> > This sounds exactly like the existing ANN pragma, which is what I've >> wanted LiquidHaskell to move towards for a long time. What is wrong with >> using the ANN pragma? >> >> Significant compilation performance penalty and extra recompilation. 
>> ANN pragmas is what HLint currently uses. >> >> > I'm a bit skeptical of this idea. Afterall, adding cases to the >> > lexer for every tool that wants a pragma seems quite unsustainable. >> >> I don't find this argument that convincing. Given the list already >> includes CATCH and DERIVE, the bar can't have been _that_ high to >> entry. And yet, the list remains pretty short. My guess is the demand >> is pretty low - we're just whitelisting a handful of additional words >> that aren't misspellings. >> >> Thanks, Neil >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Oct 16 21:13:00 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 16 Oct 2018 22:13:00 +0100 Subject: Parser.y rewrite with parser combinators In-Reply-To: References: Message-ID: I personally love to hack things up with parser combinators, but for anything longer term where I want a degree of confidence that changes aren't going to introduce new problems I'd still use Happy. Yes it's a total pain sometimes, and LALR(1) is very restrictive, but I wouldn't want to lose the guarantees of unambiguity and performance. We have *always* had to shoehorn the Haskell grammar into LALR(1) - patterns and expressions had to be parsed using the same grammar fragment from the start due to the list comprehension syntax. And some post-processing is inevitable - it's technically not possible to parse Haskell without rearranging infix expressions later, because you don't know the fixities of imported operators. 
And layout is truly horrible to deal with - Happy's error token is designed purely to handle the layout rule, and it differs in semantics from yacc's error token for this reason (that is, if yacc's error token has a semantics, I could never figure out what it was supposed to do). Dealing with layout using parser combinators would probably require at least one layer of backtracking in addition to whatever other backtracking you needed to handle the other parts of the grammar. Cheers Simon On Tue, 9 Oct 2018 at 15:18, Sven Panne wrote: > Am Di., 9. Okt. 2018 um 15:45 Uhr schrieb Richard Eisenberg < > rae at cs.brynmawr.edu>: > >> [...] What I'm trying to say here is that tracking the backtracking level >> in types doesn't seem like it will fly (tempting though it may be). >> > > ... and even if it did fly, parser combinators with backtracking have a > strong tendency to introduce space leaks: To backtrack, you have too keep > previous input somehow, at least up to some point. So to keep the memory > requirements sane, you have to explicitly commit to one parse or another at > some point. Different combinator libraries have different ways to do that, > but you have to do that by hand somehow, and that's where the beauty and > maintainability of the combinator approach really suffers. > > Note that I'm not against parser combinators, far from it, but I don't > think they are necessarily the right tool for the problem at hand. The > basic problem is: Haskell's syntax, especially with all those extensions, > is quite tricky, and this will be reflected in any parser for it. IMHO a > parser generator is the lesser evil here, at least it points you to the > ugly places of your language (on a syntactic level). If Haskell had a few > more syntactic hints, reading code would be easier, not only for a > compiler, but (more importantly) for humans, too. 
Richard's code snippet is > a good example where some hint would be very useful for the casual reader, > in some sense humans have to "backtrack", too, when reading such code. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Oct 16 21:18:14 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 16 Oct 2018 22:18:14 +0100 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: I vaguely recall that this was because 16 byte alignment is the minimum you need for certain foreign types, and it's what malloc() does. Perhaps check the FFI spec and the guarantees that mallocForeignPtrBytes and friends provide? Cheers Simon On Thu, 11 Oct 2018 at 18:44, Ömer Sinan Ağacan wrote: > Hi, > > I just found out we currently align all pinned array payloads to 16 bytes > and > I'm wondering why. I don't see any comments/notes on this, and it's also > not > part of the primop documentation. We also have another primop for aligned > allocation: newAlignedPinnedByteArray#. Given that alignment behavior of > newPinnedByteArray# is not documented and we have another one for aligned > allocation, perhaps we can remove alignment in newPinnedByteArray#. > > Does anyone remember what was the motivation for always aligning pinned > arrays? > > Thanks > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Tue Oct 16 21:51:08 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 16 Oct 2018 17:51:08 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: <87tvlllfw8.fsf@smart-cactus.org> Eric Seidel writes: > On Tue, Oct 16, 2018, at 15:14, Ben Gamari wrote: > >> For this we can follow the model of Liquid Haskell: `{-@ $TOOL_NAME ... @-}` > > LiquidHaskell does not use `{-@ LIQUID ... @-}`, we just write the > annotation inside `{-@ ... @-}` :) > Ahh, I see. I saw [1] and assumed that all annotations included the LIQUID keyword. Apparently this isn't the case. Thanks for clarifying! Cheers, - Ben [1] https://github.com/ucsd-progsys/liquidhaskell#theorem-proving -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Oct 16 22:00:16 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 16 Oct 2018 18:00:16 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> Message-ID: <87r2gplfgx.fsf@smart-cactus.org> Brandon Allbery writes: > The problem with ANN is it's part of the plugins API, and as such does > things like compiling the expression into the program in case a plugin > generates code using its value, plus things like recompilation > checking end up assuming plugins are in use and doing extra checking. > Using it as a compile-time pragma is actually fairly weird from that > standpoint. > True. 
That being said, I wonder if we solve most of these issues by simply type-checking ANNs lazily. That is, just forkM ANNs during typechecking. This would mean that the user wouldn't see an error if the expression contained inside is invalid. On the other hand, the cost of ANNs would decrease significantly and plugins which use them would continue to work unmodified. Strict typechecking behavior could be enabled via a flag. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Oct 16 22:19:14 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 16 Oct 2018 18:19:14 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <87r2gplfgx.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> <87r2gplfgx.fsf@smart-cactus.org> Message-ID: <87o9btlelg.fsf@smart-cactus.org> Ben Gamari writes: > Brandon Allbery writes: > >> The problem with ANN is it's part of the plugins API, and as such does >> things like compiling the expression into the program in case a plugin >> generates code using its value, plus things like recompilation >> checking end up assuming plugins are in use and doing extra checking. >> Using it as a compile-time pragma is actually fairly weird from that >> standpoint. >> > True. That being said, I wonder if we solve most of these issues by > simply type-checking ANNs lazily. That is, just forkM ANNs during > typechecking. This would mean that the user wouldn't see an error if the > expression contained inside is invalid. On the other hand, the cost of > ANNs would decrease significantly and plugins which use them would > continue to work unmodified. Strict typechecking behavior could be > enabled via a flag. 
> I suppose the only issue with this idea is that we would also need to drop them from interface files, lest they would be forced. Perhaps this is sometimes reasonable, however. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From eric at seidel.io Tue Oct 16 22:28:54 2018 From: eric at seidel.io (Eric Seidel) Date: Tue, 16 Oct 2018 18:28:54 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <87r2gplfgx.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <1539721780.4113994.1544347624.1958F395@webmail.messagingengine.com> <87r2gplfgx.fsf@smart-cactus.org> Message-ID: Another option could be to introduce a lighter ANN that doesn’t actually embed anything into the module, instead just making the annotations available to plugins and API users during compilation of that particular module. It could also be restricted to string annotations to avoid invoking the type checker. IIRC HLint’s annotations are already strings, so this wouldn’t be a big deal. What is GHC providing in this case? Parsing the ANN pragmas (not the contents) and attaching them to the relevant Id, which is still a substantial benefit in my opinion. We have some code in LiquidHaskell that does this for local let-binders, and it’s a horrible hack; I’d much rather have GHC do it for us. Sent from my iPhone > On Oct 16, 2018, at 18:00, Ben Gamari wrote: > > Brandon Allbery writes: > >> The problem with ANN is it's part of the plugins API, and as such does >> things like compiling the expression into the program in case a plugin >> generates code using its value, plus things like recompilation >> checking end up assuming plugins are in use and doing extra checking. >> Using it as a compile-time pragma is actually fairly weird from that >> standpoint. >> > True. 
That being said, I wonder if we solve most of these issues by > simply type-checking ANNs lazily. That is, just forkM ANNs during > typechecking. This would mean that the user wouldn't see an error if the > expression contained inside is invalid. On the other hand, the cost of > ANNs would decrease significantly and plugins which use them would > continue to work unmodified. Strict typechecking behavior could be > enabled via a flag. > > Cheers, > > - Ben From simonpj at microsoft.com Tue Oct 16 22:34:09 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 16 Oct 2018 22:34:09 +0000 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: I’m still not understanding what’s wrong with {- HLINT blah blah -} GHC will ignore it. HLint can look at it. Simple. I must be missing something obvious. Simon From: ghc-devs On Behalf Of Simon Marlow Sent: 16 October 2018 21:44 To: Neil Mitchell Cc: ghc-devs Subject: Re: Treatment of unknown pragmas I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. We can either (a) not protect people from mistyped pragmas, or (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. Cheers Simon On Tue, 16 Oct 2018 at 21:12, Neil Mitchell > wrote: > A warning flag is an interesting way to deal with the issue. 
On the > other hand, it's not great from an ergonomic perspective; afterall, this > would mean that all users of HLint (and any other tool requiring special Yep, this means every HLint user has to do an extra thing. I (the HLint author) now have a whole pile of "how do I disable warnings in Stack", and "what's the equivalent of this in Nix". Personally, it ups the support level significantly that I wouldn't go this route. I think it might be a useful feature in general, as new tools could use the flag to prototype new types of warning, but I imagine once a feature gets popular it becomes too much fuss. > > I think it makes a lot of sense to have a standard way for third-parties > > to attach string-y information to Haskell source constructs. While it's > > not strictly speaking necessary to standardize the syntax, doing > > so minimizes the chance that tools overlap and hopefully reduces > > the language ecosystem learning curve. > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? Significant compilation performance penalty and extra recompilation. ANN pragmas is what HLint currently uses. > I'm a bit skeptical of this idea. Afterall, adding cases to the > lexer for every tool that wants a pragma seems quite unsustainable. I don't find this argument that convincing. Given the list already includes CATCH and DERIVE, the bar can't have been _that_ high to entry. And yet, the list remains pretty short. My guess is the demand is pretty low - we're just whitelisting a handful of additional words that aren't misspellings. Thanks, Neil _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allbery.b at gmail.com Tue Oct 16 22:39:01 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 16 Oct 2018 18:39:01 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: One problem is you have to release a new ghc every time someone comes up with a new pragma-using tool that starts to catch on. Another is that the more of these you have, the more likely a typo will inadvertently match some tool you don't even know about but ghc does. On Tue, Oct 16, 2018 at 6:34 PM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I’m still not understanding what’s wrong with > > > > {- HLINT blah blah -} > > > > GHC will ignore it. HLint can look at it. Simple. > > > > I must be missing something obvious. > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Simon > Marlow > *Sent:* 16 October 2018 21:44 > *To:* Neil Mitchell > *Cc:* ghc-devs > *Subject:* Re: Treatment of unknown pragmas > > > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed > like the least worst option taking into account the various issues that > have already been described in this thread. I'm OK with adding HLINT; after > all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other > OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > > > We can either > > (a) not protect people from mistyped pragmas, or > > (b) protect people from mistyped pragma names, but then we have to bake in > the set of known pragmas > > > > We could choose to have a different convention for pragmas that GHC > doesn't know about (as Ben suggests), but then of course we don't get any > protection for mistyped pragma names when using that convention. 
> > > > Cheers > > Simon > > > > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > > A warning flag is an interesting way to deal with the issue. On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for > third-parties > > > to attach string-y information to Haskell source constructs. While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've > wanted LiquidHaskell to move towards for a long time. What is wrong with > using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. 
> > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 16 22:42:46 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 16 Oct 2018 22:42:46 +0000 Subject: Goals for GHC 8.8 In-Reply-To: References: <87wosknaub.fsf@smart-cactus.org> <1534857692.2717.56.camel@jeltsch.info> <87h8jnn5iy.fsf@smart-cactus.org> <8736v7ms34.fsf@smart-cactus.org> Message-ID: Mathieu Things (especially major things) for a release should land in master before the cut-off date. But there is quite a way to go before linear types land in master… When Arnaud is ready to submit a diff, I (and, I earnestly hope, other) can begin a code review – but so far I have not reviewed a single line of code. This is a major diff and I expect there will be quite a bit of to and fro. Fortunately, now we are on a 6-monthly release cycle, the next release will be along before we know it. Actually I think it might well take several months of settling down before everyone (including Arnaud) is happy. Let’s see how it all works out. But I don’t think we can guarantee that it’ll be ready for 8.8. Simon From: ghc-devs > On Behalf Of Boespflug, Mathieu Sent: 16 October 2018 13:06 To: Ben > Cc: ghc-devs >; Richard > Subject: Re: Goals for GHC 8.8 Hi Ben, just a heads up: we are still on track for a Diff submission for linear types by end of October (the cut-off date you advertized at the top of this thread for feature work on GHC 8.8, and the one we stated we'd aim for in September). We might run into last minute blockers of course, but so far so good. 
I've been told that we'll be hearing from the Committee before then about acceptance or rejection of the proposal.

Best,
--
Mathieu Boespflug
Founder at http://tweag.io.

On Wed, 5 Sep 2018 at 15:46, Boespflug, Mathieu > wrote:

Hi Ben,

yes - as for the implementation of the linear types extension, we're aiming for the submission of a Diff before the 8.8 branch is cut. (If the Committee has given the green light by then, of course.)

Best,
--
Mathieu Boespflug
Founder at http://tweag.io.

On Tue, 21 Aug 2018 at 21:34, Ben Gamari > wrote:

Mathieu Boespflug > writes:
> The proposal would need to be accepted by the GHC Steering Committee first
> before that happens.
>
Absolutely; I just wasn't sure whether you were considering pushing for merge in the event that it was accepted.

Cheers,

- Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Tue Oct 16 22:44:37 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 16 Oct 2018 22:44:37 +0000
Subject: Treatment of unknown pragmas
In-Reply-To:
References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com>
Message-ID:

I’m still not getting it. GHC ignores everything between {- and -}. Why would I need to produce a new GHC if someone wants to use {- WIMWAM blah -}?

Simon

From: Brandon Allbery
Sent: 16 October 2018 23:39
To: Simon Peyton Jones
Cc: Simon Marlow ; Neil Mitchell ; ghc-devs at haskell.org Devs
Subject: Re: Treatment of unknown pragmas

One problem is you have to release a new ghc every time someone comes up with a new pragma-using tool that starts to catch on. Another is that the more of these you have, the more likely a typo will inadvertently match some tool you don't even know about but ghc does.

On Tue, Oct 16, 2018 at 6:34 PM Simon Peyton Jones via ghc-devs > wrote:

I’m still not understanding what’s wrong with

{- HLINT blah blah -}

GHC will ignore it.
HLint can look at it. Simple. I must be missing something obvious. Simon From: ghc-devs > On Behalf Of Simon Marlow Sent: 16 October 2018 21:44 To: Neil Mitchell > Cc: ghc-devs > Subject: Re: Treatment of unknown pragmas I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. We can either (a) not protect people from mistyped pragmas, or (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. Cheers Simon On Tue, 16 Oct 2018 at 21:12, Neil Mitchell > wrote: > A warning flag is an interesting way to deal with the issue. On the > other hand, it's not great from an ergonomic perspective; afterall, this > would mean that all users of HLint (and any other tool requiring special Yep, this means every HLint user has to do an extra thing. I (the HLint author) now have a whole pile of "how do I disable warnings in Stack", and "what's the equivalent of this in Nix". Personally, it ups the support level significantly that I wouldn't go this route. I think it might be a useful feature in general, as new tools could use the flag to prototype new types of warning, but I imagine once a feature gets popular it becomes too much fuss. > > I think it makes a lot of sense to have a standard way for third-parties > > to attach string-y information to Haskell source constructs. 
While it's > > not strictly speaking necessary to standardize the syntax, doing > > so minimizes the chance that tools overlap and hopefully reduces > > the language ecosystem learning curve. > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? Significant compilation performance penalty and extra recompilation. ANN pragmas is what HLint currently uses. > I'm a bit skeptical of this idea. Afterall, adding cases to the > lexer for every tool that wants a pragma seems quite unsustainable. I don't find this argument that convincing. Given the list already includes CATCH and DERIVE, the bar can't have been _that_ high to entry. And yet, the list remains pretty short. My guess is the demand is pretty low - we're just whitelisting a handful of additional words that aren't misspellings. Thanks, Neil _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.kjeldaas at gmail.com Tue Oct 16 22:50:09 2018 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Wed, 17 Oct 2018 00:50:09 +0200 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: The SSE types require 16-byte alignment. Most of the original SSE instructions have versions that accept non-aligned data though. Alexander On Tue, Oct 16, 2018 at 11:18 PM Simon Marlow wrote: > I vaguely recall that this was because 16 byte alignment is the minimum > you need for certain foreign types, and it's what malloc() does. 
Perhaps > check the FFI spec and the guarantees that mallocForeignPtrBytes and > friends provide? > > Cheers > Simon > > On Thu, 11 Oct 2018 at 18:44, Ömer Sinan Ağacan > wrote: > >> Hi, >> >> I just found out we currently align all pinned array payloads to 16 bytes >> and >> I'm wondering why. I don't see any comments/notes on this, and it's also >> not >> part of the primop documentation. We also have another primop for aligned >> allocation: newAlignedPinnedByteArray#. Given that alignment behavior of >> newPinnedByteArray# is not documented and we have another one for aligned >> allocation, perhaps we can remove alignment in newPinnedByteArray#. >> >> Does anyone remember what was the motivation for always aligning pinned >> arrays? >> >> Thanks >> >> Ömer >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Tue Oct 16 23:02:02 2018 From: amindfv at gmail.com (amindfv at gmail.com) Date: Tue, 16 Oct 2018 19:02:02 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: <5A7EAA95-4CFC-4F73-B5BE-5B16A0C28585@gmail.com> I think Brandon may have misread your example as "{-# HLINT ... #-}". One problem with "{- HLINT" (although I'm personally not in favor of the special-casing) is that if it's just a Haskell comment then it itself is vulnerable to typos. E.g. if I type "{- HILNT foo -}" (L and I swapped), hlint the tool will miss it. 
Tom > El 16 oct 2018, a las 18:44, Simon Peyton Jones via ghc-devs escribió: > > I’m still not getting it. GHC ignores everything between {- and -}. Why would I need to produce a new GHC if someone wants to us {- WIMWAM blah -}? > > Simon > > From: Brandon Allbery > Sent: 16 October 2018 23:39 > To: Simon Peyton Jones > Cc: Simon Marlow ; Neil Mitchell ; ghc-devs at haskell.org Devs > Subject: Re: Treatment of unknown pragmas > > One problem is you have to release a new ghc every time someone comes up with a new pragma-using tool that starts to catch on. Another is that the more of these you have, the more likely a typo will inadvertently match some tool you don't even know about but ghc does. > > On Tue, Oct 16, 2018 at 6:34 PM Simon Peyton Jones via ghc-devs wrote: > I’m still not understanding what’s wrong with > > {- HLINT blah blah -} > > GHC will ignore it. HLint can look at it. Simple. > > I must be missing something obvious. > > Simon > > From: ghc-devs On Behalf Of Simon Marlow > Sent: 16 October 2018 21:44 > To: Neil Mitchell > Cc: ghc-devs > Subject: Re: Treatment of unknown pragmas > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > We can either > (a) not protect people from mistyped pragmas, or > (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas > > We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. > > Cheers > Simon > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > A warning flag is an interesting way to deal with the issue. 
On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for third-parties > > > to attach string-y information to Haskell source constructs. While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. 
> > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Oct 16 23:13:08 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 16 Oct 2018 19:13:08 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <5A7EAA95-4CFC-4F73-B5BE-5B16A0C28585@gmail.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <5A7EAA95-4CFC-4F73-B5BE-5B16A0C28585@gmail.com> Message-ID: Yeh, I'd missed it was a normal comment and not a new pragma. Pretty much every solution has some screw case; sometimes you get to choose between a bunch of "simple, elegant, and wrong" options and have to decide which "and wrong" will be least expensive (or least frustrating). And a problem with normal comments (behind why I'd missed this was one) is I'm not sure they can be as firmly anchored to transformed ASTs; the ANN mechanism at least has that in its favor. Pragmas can as well, as indicated by e.g. {-# UNPACK #-}. Is this 100% true of random comments? And if it is, at what cost to compilations that don't care? I think this needs more than just a SrcSpan, at least for tools like hlint or Liquid Haskell that really want to associate these with AST nodes and maintain them across transformations. 
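[Editorial illustration of the point above: Brandon's worry about anchoring comments to AST nodes can be made concrete. The following is a toy sketch in plain Haskell — not the GHC API; the `Span`, `Decl`, and `Comment` types are invented for illustration — of the purely positional association a comment-reading tool has to do: attach each comment to the first declaration that starts at or after the comment's last line. Everything carrying only a source position works at parse time, but, as noted above, nothing here survives a transformation that moves or drops the declaration.]

```haskell
-- Hypothetical types standing in for a real parser's output.
data Span = Span { startLine :: Int, endLine :: Int }
  deriving (Show, Eq)

data Decl = Decl { declName :: String, declSpan :: Span }
  deriving (Show, Eq)

data Comment = Comment { commentText :: String, commentSpan :: Span }
  deriving (Show, Eq)

-- Attach each comment to the first declaration beginning at or after
-- the comment's last line.  Assumes the declaration list is sorted by
-- source position; comments trailing every declaration are dropped.
attach :: [Comment] -> [Decl] -> [(String, String)]
attach comments decls =
  [ (commentText c, declName d)
  | c <- comments
  , d <- take 1 [ d' | d' <- decls
                     , startLine (declSpan d') >= endLine (commentSpan c) ]
  ]
```

An ANN-style mechanism avoids this fragility by naming its target directly instead of relying on adjacency.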
On Tue, Oct 16, 2018 at 7:01 PM wrote: > I think Brandon may have misread your example as "{-# HLINT ... #-}". > > One problem with "{- HLINT" (although I'm personally not in favor of the > special-casing) is that if it's just a Haskell comment then it itself is > vulnerable to typos. E.g. if I type "{- HILNT foo -}" (L and I swapped), > hlint the tool will miss it. > > Tom > > El 16 oct 2018, a las 18:44, Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> escribió: > > I’m still not getting it. GHC *ignores* everything between {- and -}. > Why would I need to produce a new GHC if someone wants to us {- WIMWAM > blah -}? > > > > Simon > > > > *From:* Brandon Allbery > *Sent:* 16 October 2018 23:39 > *To:* Simon Peyton Jones > *Cc:* Simon Marlow ; Neil Mitchell < > ndmitchell at gmail.com>; ghc-devs at haskell.org Devs > *Subject:* Re: Treatment of unknown pragmas > > > > One problem is you have to release a new ghc every time someone comes up > with a new pragma-using tool that starts to catch on. Another is that the > more of these you have, the more likely a typo will inadvertently match > some tool you don't even know about but ghc does. > > > > On Tue, Oct 16, 2018 at 6:34 PM Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > > I’m still not understanding what’s wrong with > > > > {- HLINT blah blah -} > > > > GHC will ignore it. HLint can look at it. Simple. > > > > I must be missing something obvious. > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Simon > Marlow > *Sent:* 16 October 2018 21:44 > *To:* Neil Mitchell > *Cc:* ghc-devs > *Subject:* Re: Treatment of unknown pragmas > > > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed > like the least worst option taking into account the various issues that > have already been described in this thread. 
I'm OK with adding HLINT; after > all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other > OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > > > We can either > > (a) not protect people from mistyped pragmas, or > > (b) protect people from mistyped pragma names, but then we have to bake in > the set of known pragmas > > > > We could choose to have a different convention for pragmas that GHC > doesn't know about (as Ben suggests), but then of course we don't get any > protection for mistyped pragma names when using that convention. > > > > Cheers > > Simon > > > > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > > A warning flag is an interesting way to deal with the issue. On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for > third-parties > > > to attach string-y information to Haskell source constructs. While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've > wanted LiquidHaskell to move towards for a long time. What is wrong with > using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. 
> ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. > > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > -- > > brandon s allbery kf8nh > > allbery.b at gmail.com > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhpetersen at gmail.com Wed Oct 17 04:44:12 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Wed, 17 Oct 2018 13:44:12 +0900 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: <878t30npgc.fsf@smart-cactus.org> References: <878t30npgc.fsf@smart-cactus.org> Message-ID: On Mon, 15 Oct 2018 at 07:17, Ben Gamari wrote: > The GHC team is pleased to announce the availability of GHC 8.4.4 Thank you > As always, the full release notes can be found in the users guide, https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#base-library I think this base text is out of date, and could be dropped, right? 
I see that stm was also bumped: though it is not listed in https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#included-libraries Cheers, Jens From svenpanne at gmail.com Wed Oct 17 07:28:58 2018 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 17 Oct 2018 09:28:58 +0200 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: Am Di., 16. Okt. 2018 um 23:18 Uhr schrieb Simon Marlow : > I vaguely recall that this was because 16 byte alignment is the minimum > you need for certain foreign types, and it's what malloc() does. Perhaps > check the FFI spec and the guarantees that mallocForeignPtrBytes and > friends provide? > mallocForeignPtrBytes is defined in terms of malloc ( https://www.haskell.org/onlinereport/haskell2010/haskellch29.html#x37-28400029.1.3), which in turn has the following guarantee ( https://www.haskell.org/onlinereport/haskell2010/haskellch31.html#x39-28700031.1 ): "... All storage allocated by functions that allocate based on a size in bytes must be sufficiently aligned for any of the basic foreign types that fits into the newly allocated storage. ..." The largest basic foreign types are Word64/Double and probably Ptr/FunPtr/StablePtr ( https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-1700008.7), so per spec you need at least an 8-byte alignement. But in an SSE-world I would be *very* reluctant to use an alignment less strict than 16 bytes, otherwise people will probably hate you... :-] Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... 
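[Editorial illustration of the alignment discussion above: the guarantee Sven quotes can be probed empirically. This is a standalone sketch using only base — not GHC or nofib code — that prints the largest power of two dividing the address returned by mallocForeignPtrBytes.]

```haskell
import Data.Bits ((.&.))
import Data.Word (Word8)
import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtrBytes, withForeignPtr)
import Foreign.Ptr (ptrToIntPtr)

-- Largest power of two dividing an address, i.e. its effective alignment.
effectiveAlignment :: Int -> Int
effectiveAlignment addr = addr .&. negate addr

main :: IO ()
main = do
  fp <- mallocForeignPtrBytes 64 :: IO (ForeignPtr Word8)
  addr <- withForeignPtr fp (pure . fromIntegral . ptrToIntPtr)
  -- The FFI spec text quoted above only obliges alignment for the basic
  -- foreign types (8 bytes for Word64/Double on 64-bit platforms).
  putStrLn ("effective alignment: " ++ show (effectiveAlignment addr))
```

In practice current GHC prints a power of two of at least 16 here for pinned allocations, even though the spec wording would permit 8.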
URL:

From ndmitchell at gmail.com Wed Oct 17 07:44:54 2018
From: ndmitchell at gmail.com (Neil Mitchell)
Date: Wed, 17 Oct 2018 08:44:54 +0100
Subject: Treatment of unknown pragmas
In-Reply-To:
References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com>
Message-ID:

People expect pragmas that are machine readable to use the pragma syntax, and the Haskell report suggests that is the right thing to expect. They can be highlighted intelligently by IDEs, are immune from accidental mix ups with normal comments etc.

The fact that pragmas can be lower-case (probably a mistake?) means that {- hlint gets this wrong -} should probably be interpreted as an HLint directive, when it's clearly not intended to be. Note that we can't mandate {-@ or {-! as both are used by Liquid Haskell and Derive respectively to encode non-prefixed information.

In my view the three options are:

1) Do nothing. Tell HLint to use {- HLINT -} or find some unencumbered syntax. There's no point mandating a specific unencumbered syntax in the report, as the report already mandates a syntax, namely {-# #-}.

2) Whitelist HLint as a pragma. My preferred solution, but I realise that encoding knowledge of every tool into GHC is not a great solution.

3) Whitelist either X-* or TOOL as a pragma, so GHC has a universal ignored pragma, allowing HLint pragmas to be written as either {-# TOOL HLINT ... #-} or {-# X-HLINT ... #-}

Thanks, Neil

On Tue, Oct 16, 2018 at 11:44 PM Simon Peyton Jones wrote:
>
> I’m still not getting it. GHC ignores everything between {- and -}. Why would I need to produce a new GHC if someone wants to use {- WIMWAM blah -}?
> > > > Simon > > > > From: Brandon Allbery > Sent: 16 October 2018 23:39 > To: Simon Peyton Jones > Cc: Simon Marlow ; Neil Mitchell ; ghc-devs at haskell.org Devs > Subject: Re: Treatment of unknown pragmas > > > > One problem is you have to release a new ghc every time someone comes up with a new pragma-using tool that starts to catch on. Another is that the more of these you have, the more likely a typo will inadvertently match some tool you don't even know about but ghc does. > > > > On Tue, Oct 16, 2018 at 6:34 PM Simon Peyton Jones via ghc-devs wrote: > > I’m still not understanding what’s wrong with > > > > {- HLINT blah blah -} > > > > GHC will ignore it. HLint can look at it. Simple. > > > > I must be missing something obvious. > > > > Simon > > > > From: ghc-devs On Behalf Of Simon Marlow > Sent: 16 October 2018 21:44 > To: Neil Mitchell > Cc: ghc-devs > Subject: Re: Treatment of unknown pragmas > > > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > > > We can either > > (a) not protect people from mistyped pragmas, or > > (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas > > > > We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. > > > > Cheers > > Simon > > > > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > > A warning flag is an interesting way to deal with the issue. 
On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for third-parties > > > to attach string-y information to Haskell source constructs. While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. 
> > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > -- > > brandon s allbery kf8nh > > allbery.b at gmail.com From marlowsd at gmail.com Wed Oct 17 08:05:22 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 17 Oct 2018 09:05:22 +0100 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: Simon - GHC provides some protection against mistyped pragma names, in the form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} pragmas. If tools decide to use their own pragma syntax, they don't benefit from this. That's one downside, in addition to the others that Neil mentioned. You might say we shouldn't care about mistyped pragma names. If the user accidentally writes {- HLNIT -} and it is silently ignored, that's not our problem. OK, but we cared about it enough for the pragmas that GHC understands to add the special warning, and it's reasonable to expect that HLint users also care about it. (personally I have no stance on whether we should have this warning, there are upsides and downsides. But that's where we are now.) Cheers Simon On Tue, 16 Oct 2018 at 23:34, Simon Peyton Jones wrote: > I’m still not understanding what’s wrong with > > > > {- HLINT blah blah -} > > > > GHC will ignore it. HLint can look at it. Simple. > > > > I must be missing something obvious. 
> > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Simon > Marlow > *Sent:* 16 October 2018 21:44 > *To:* Neil Mitchell > *Cc:* ghc-devs > *Subject:* Re: Treatment of unknown pragmas > > > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed > like the least worst option taking into account the various issues that > have already been described in this thread. I'm OK with adding HLINT; after > all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other > OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > > > We can either > > (a) not protect people from mistyped pragmas, or > > (b) protect people from mistyped pragma names, but then we have to bake in > the set of known pragmas > > > > We could choose to have a different convention for pragmas that GHC > doesn't know about (as Ben suggests), but then of course we don't get any > protection for mistyped pragma names when using that convention. > > > > Cheers > > Simon > > > > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > > > A warning flag is an interesting way to deal with the issue. On the > > other hand, it's not great from an ergonomic perspective; afterall, this > > would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > > > > I think it makes a lot of sense to have a standard way for > third-parties > > > to attach string-y information to Haskell source constructs. 
While it's > > > not strictly speaking necessary to standardize the syntax, doing > > > so minimizes the chance that tools overlap and hopefully reduces > > > the language ecosystem learning curve. > > > > This sounds exactly like the existing ANN pragma, which is what I've > wanted LiquidHaskell to move towards for a long time. What is wrong with > using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > > > I'm a bit skeptical of this idea. Afterall, adding cases to the > > lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. > > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Oct 17 11:48:18 2018 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 17 Oct 2018 19:48:18 +0800 Subject: Treatment of unknown pragmas In-Reply-To: <993CA172-FDE7-49B3-AF84-A03AE299B968@lichtzwerge.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <993CA172-FDE7-49B3-AF84-A03AE299B968@lichtzwerge.com> Message-ID: <5B0A7440-C88F-4222-A8DF-E0299317A555@gmail.com> Does this need to be *this* hardcoded? Or could we just parse the pragma and compare it to a list of known pragmas to be parsed from a file (or settings value?). 
The change in question does: - pragmas = options_pragmas ++ ["cfiles", "contract"] + pragmas = options_pragmas ++ ["cfiles", "contract", "hlint"] to the `compiler/parser/Lexer.x`, and as such is somewhat hardcoded. So we already ignore a bunch of `option_` and those three pragmas. And I see <0,option_prags> { "{-#" { warnThen Opt_WarnUnrecognisedPragmas (text "Unrecognised pragma") (nested_comment lexToken) } } which I believe handles the unrecognisedPragmas case. Can't we have an ignored-pragmas value in the settings, that just lists all those we want to ignore, instead of hardcoding them in the Lexer? That at least feels to me like a less invasive (and easier to adapt) approach, that might be less controversial? Yes it's just moving goal posts, but it moves the logic into a runtime value instead of a compile time value. Cheers, Moritz > On Oct 17, 2018, at 4:05 PM, Simon Marlow wrote: > > Simon - GHC provides some protection against mistyped pragma names, in the form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} pragmas. If tools decide to use their own pragma syntax, they don't benefit from this. That's one downside, in addition to the others that Neil mentioned. > > You might say we shouldn't care about mistyped pragma names. If the user accidentally writes {- HLNIT -} and it is silently ignored, that's not our problem. OK, but we cared about it enough for the pragmas that GHC understands to add the special warning, and it's reasonable to expect that HLint users also care about it. > > (personally I have no stance on whether we should have this warning, there are upsides and downsides. But that's where we are now.) > > Cheers > Simon > > On Tue, 16 Oct 2018 at 23:34, Simon Peyton Jones wrote: > I’m still not understanding what’s wrong with > > > > {- HLINT blah blah -} > > > > GHC will ignore it. HLint can look at it. Simple. > > > > I must be missing something obvious.
> > > > Simon > > > > From: ghc-devs On Behalf Of Simon Marlow > Sent: 16 October 2018 21:44 > To: Neil Mitchell > Cc: ghc-devs > Subject: Re: Treatment of unknown pragmas > > > > I suggested to Neil that he add the {-# HLINT #-} pragma to GHC. It seemed like the least worst option taking into account the various issues that have already been described in this thread. I'm OK with adding HLINT; after all we already ignore OPTIONS_HADDOCK, OPTIONS_NHC98, a bunch of other OPTIONS, CFILES (a Hugs relic), and several more that GHC ignores. > > > > We can either > > (a) not protect people from mistyped pragmas, or > > (b) protect people from mistyped pragma names, but then we have to bake in the set of known pragmas > > > > We could choose to have a different convention for pragmas that GHC doesn't know about (as Ben suggests), but then of course we don't get any protection for mistyped pragma names when using that convention. > > > > Cheers > > Simon > > > > > > On Tue, 16 Oct 2018 at 21:12, Neil Mitchell wrote: > >> A warning flag is an interesting way to deal with the issue. On the >> other hand, it's not great from an ergonomic perspective; afterall, this >> would mean that all users of HLint (and any other tool requiring special > > Yep, this means every HLint user has to do an extra thing. I (the > HLint author) now have a whole pile of "how do I disable warnings in > Stack", and "what's the equivalent of this in Nix". Personally, it ups > the support level significantly that I wouldn't go this route. > > I think it might be a useful feature in general, as new tools could > use the flag to prototype new types of warning, but I imagine once a > feature gets popular it becomes too much fuss. > >>> I think it makes a lot of sense to have a standard way for third-parties >>> to attach string-y information to Haskell source constructs. 
While it's >>> not strictly speaking necessary to standardize the syntax, doing >>> so minimizes the chance that tools overlap and hopefully reduces >>> the language ecosystem learning curve. >> >> This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma? > > Significant compilation performance penalty and extra recompilation. > ANN pragmas is what HLint currently uses. > >> I'm a bit skeptical of this idea. Afterall, adding cases to the >> lexer for every tool that wants a pragma seems quite unsustainable. > > I don't find this argument that convincing. Given the list already > includes CATCH and DERIVE, the bar can't have been _that_ high to > entry. And yet, the list remains pretty short. My guess is the demand > is pretty low - we're just whitelisting a handful of additional words > that aren't misspellings. > > Thanks, Neil > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Wed Oct 17 14:02:01 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 17 Oct 2018 10:02:01 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: <87bm7sllil.fsf@smart-cactus.org> Simon Marlow writes: > Simon - GHC provides some protection against mistyped pragma names, in the > form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} > pragmas. If tools decide to use their own pragma syntax, they don't benefit > from this. That's one downside, in addition to the others that Neil > mentioned. 
> > You might say we shouldn't care about mistyped pragma names. If the user > accidentally writes {- HLNIT -} and it is silently ignored, that's not our > problem. OK, but we cared about it enough for the pragmas that GHC > understands to add the special warning, and it's reasonable to expect that > HLint users also care about it. > If this is the case then in my opinion HLint should be the one that checks for mis-spelling. If we look beyond HLint, there is no way that GHC could know generally what tokens are misspelled pragmas and which are tool names. I'm trying to view the pragma question from the perspective of setting a precedent for other tools. If a dozen Haskell tools were to approach us tomorrow and ask for similar treatment to HLint it's clear that hardcoding pragma lists in the lexer would be unsustainable. Is this likely to happen? Of course not. However, it is an indication to me that the root cause of this current debate is our lack of good extensible pragmas. It seems to me that introducing a tool pragma convention, from which tool users can claim namespaces at will, is the right way to fix this. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Wed Oct 17 14:08:28 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 17 Oct 2018 10:08:28 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <5B0A7440-C88F-4222-A8DF-E0299317A555@gmail.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <993CA172-FDE7-49B3-AF84-A03AE299B968@lichtzwerge.com> <5B0A7440-C88F-4222-A8DF-E0299317A555@gmail.com> Message-ID: <87a7ncll7r.fsf@smart-cactus.org> Moritz Angermann writes: > Does this need to be *this* hardcoded?
Or could we just parse the pragma and > compare it to a list of known pragmas to be parsed from a file (or settings value?). > To be clear, I don't think we want to start considering `settings` to be a user configuration file. In my mind `settings` is something that is produced and consumed by GHC itself and I don't believe we want to change that. > The change in question does: > > > - pragmas = options_pragmas ++ ["cfiles", "contract"] > + pragmas = options_pragmas ++ ["cfiles", "contract", "hlint"] > > to the `compiler/parser/Lexer.x`, and as such is somewhat hardcoded. So we already > ignore a bunch of `option_` and those three pragmas. > > And I see > > > <0,option_prags> { > "{-#" { warnThen Opt_WarnUnrecognisedPragmas (text "Unrecognised pragma") > (nested_comment lexToken) } > } > > which I believe handles the unrecognisedPragmas case. > > Can't we have a ignored-pragmas value in the settings, that just lists all those > we want to ignore, instead of hardcoding them in the Lexer? > > That at least feels to me like a less invasive (and easier to adapt) appraoch, that > might be less controversial? Yes it's just moving goal posts, but it moves the logic > into a runtime value instead of a compile time value. > I don't think it fundamentally changes the problem: another tool would still be unable to use the same syntax that HLint uses without getting a patch into GHC. This seems wrong to me. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
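Moritz's proposal above — reading the ignored-pragma whitelist from a runtime value rather than hardcoding names in `Lexer.x` — could look roughly like the following sketch. This is not a GHC patch; the file format (one pragma name per line) and the function names are assumptions for illustration only:

```haskell
-- Sketch of a runtime ignored-pragma whitelist (hypothetical, not GHC code).
import Data.Char (toLower)

-- Hypothetical format: one pragma name per line, e.g. "hlint\nliquid".
parseIgnoredPragmas :: String -> [String]
parseIgnoredPragmas = map (map toLower) . lines

-- Would this unknown pragma still trigger -Wunrecognised-pragmas?
-- Comparison is case-insensitive, mirroring how pragma names are matched.
shouldWarnUnrecognised :: [String] -> String -> Bool
shouldWarnUnrecognised ignored name = map toLower name `notElem` ignored

main :: IO ()
main = do
  let ignored = parseIgnoredPragmas "hlint\nliquid"
  print (shouldWarnUnrecognised ignored "HLint")  -- False: whitelisted
  print (shouldWarnUnrecognised ignored "HLNIT")  -- True: still warns
```

The point of the sketch is only that the lookup becomes data-driven: adding a tool means editing a configuration value, not patching the lexer.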
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Wed Oct 17 14:16:46 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 17 Oct 2018 10:16:46 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: <877eiglktw.fsf@smart-cactus.org> Neil Mitchell writes: > People expect pragmas that are machine readable to use the pragma > syntax, and the Haskell report suggests that is the right thing to > expect. They can be highlighted intelligently by IDEs, are immune from > accidental mix-ups with normal comments etc. The fact that pragmas can > be lower-case (probably a mistake?) means that {- hlint gets this > wrong -} should probably be interpreted as an HLint directive, when > it's clearly not intended to be. > I agree; having a syntax that can be easily distinguished from ordinary comments is important. > Note that we can't mandate {-@ or {-! as both are used by Liquid > Haskell and Derive respectively to encode non-prefixed information. > While this clash is unfortunate, I don't consider it to preclude their usage. Liquid Haskell is currently moving toward using ANN instead of this special syntax. I would also suggest that the fact that this conflict exists highlights the need for a better extensible pragma story. > In my view the three options are: > > 1) Do nothing. Tell HLint to use {- HLINT -} or find some unencumbered > syntax. There's no point mandating a specific unencumbered syntax in > the report, as the report already mandates a syntax, namely {-# #-}. > > 2) Whitelist HLint as a pragma. My preferred solution, but I realise > that encoding knowledge of every tool into GHC is not a great > solution. > > 3) Whitelist either X-* or TOOL as a pragma, so GHC has a universal > ignored pragma, allowing HLint pragmas to be written as either {-# > TOOL HLINT ...
#-} or {-# X-HLINT ... #-} > This is another option that sounds plausible, although the ergonomics is pretty poor. In general Haskell pragmas are rather long; this would make this problem worse. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Wed Oct 17 14:22:57 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 17 Oct 2018 15:22:57 +0100 Subject: Treatment of unknown pragmas In-Reply-To: <87bm7sllil.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> Message-ID: On Wed, 17 Oct 2018 at 15:02, Ben Gamari wrote: > Simon Marlow writes: > > > Simon - GHC provides some protection against mistyped pragma names, in > the > > form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} > > pragmas. If tools decide to use their own pragma syntax, they don't > benefit > > from this. That's one downside, in addition to the others that Neil > > mentioned. > > > > You might say we shouldn't care about mistyped pragma names. If the user > > accidentally writes {- HLNIT -} and it is silently ignored, that's not > our > > problem. OK, but we cared about it enough for the pragmas that GHC > > understands to add the special warning, and it's reasonable to expect > that > > HLint users also care about it. > > > If this is the case then in my opinion HLint should be the one that > checks for mis-spelling. But there's no way that HLint can know what is a misspelled pragma name. If we look beyond HLint, there is no way that > GHC could know generally what tokens are misspelled pragmas and which > are tool names. 
> Well this is the problem we created by adding -Wunrecognised-pragmas :) Now GHC has to know what all the correctly-spelled pragma names are, and the HLint diff is just following this path. Arguably -Wunrecognised-pragmas is ill-conceived. I'm surprised we didn't have this discussion when it was added (or maybe we did?). But since we have it, it comes with an obligation to have a centralised registry of pragma names, which is currently in GHC. (it doesn't have to be in the source code, of course) I'm trying to view the pragma question from the perspective of setting a > precedent for other tools. If a dozen Haskell tools were to approach us > tomorrow and ask for similar treatment to HLint it's clear that > hardcoding pragma lists in the lexer would be unsustainable. > > Is this likely to happen? Of course not. However, it is an indication to > me that the root cause of this current debate is our lack of good > extensible pragmas. It seems to me that introducing a tool pragma > convention, from which tool users can claim namespaces at will, is the > right way to fix this. > And sacrifice checking for misspelled pragma names in those namespaces? Sure we can say {-# TOOL FOO .. #-} is ignored by GHC, but then nothing will notice if you say {-# TOOL HLNIT ... #-} by mistake. If we decide to do that then fine, it just seems like an inconsistent design. Cheers Simon > > Cheers, > > - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From vlad.z.4096 at gmail.com Wed Oct 17 15:10:07 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Wed, 17 Oct 2018 18:10:07 +0300 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> Message-ID: > And sacrifice checking for misspelled pragma names in those namespaces? Sure we can say {-# TOOL FOO ..
#-} is ignored by GHC, but then nothing will notice if you say {-# TOOL HLNIT ... #-} by mistake. Yes! But we can't have the whitelist of pragmas hardcoded in GHC, as there may be arbitrarily named tools out there. Only the user knows what tools they use, so they must maintain their own whitelist in the build configuration. That's why we should have -Wunrecognized-pragmas -Wno-pragma=HLINT, -Wno-pragma=LIQUID, ... On Wed, Oct 17, 2018 at 5:23 PM Simon Marlow wrote: > > On Wed, 17 Oct 2018 at 15:02, Ben Gamari wrote: >> >> Simon Marlow writes: >> >> > Simon - GHC provides some protection against mistyped pragma names, in the >> > form of the -Wunrecognised-pragmas warning, but only for {-# ... #-} >> > pragmas. If tools decide to use their own pragma syntax, they don't benefit >> > from this. That's one downside, in addition to the others that Neil >> > mentioned. >> > >> > You might say we shouldn't care about mistyped pragma names. If the user >> > accidentally writes {- HLNIT -} and it is silently ignored, that's not our >> > problem. OK, but we cared about it enough for the pragmas that GHC >> > understands to add the special warning, and it's reasonable to expect that >> > HLint users also care about it. >> > >> If this is the case then in my opinion HLint should be the one that >> checks for mis-spelling. > > > But there's no way that HLint can know what is a misspelled pragma name. > >> If we look beyond HLint, there is no way that >> GHC could know generally what tokens are misspelled pragmas and which >> are tool names. > > > Well this is the problem we created by adding -Wunrecognised-pragmas :) Now GHC has to know what all the correctly-spelled pragma names are, and the HLint diff is just following this path. > > Arguably -Wunrecognised-pragmas is ill-conceived. I'm surprised we didn't have this discussion when it was added (or maybe we did?).
But since we have it, it comes with an obligation to have a centralised registry of pragma names, which is currently in GHC. (it doesn't have to be in the source code, of course) > >> I'm trying to view the pragma question from the perspective of setting a >> precedent for other tools. If a dozen Haskell tools were to approach us >> tomorrow and ask for similar treatment to HLint it's clear that >> hardcoding pragma lists in the lexer would be unsustainable. >> >> Is this likely to happen? Of course not. However, it is an indication to >> me that the root cause of this current debate is our lack of good >> extensible pragmas. It seems to me that introducing a tool pragma >> convention, from which tool users can claim namespaces at will, is the >> right way to fix this. > > > And sacrifice checking for misspelled pragma names in those namespaces? Sure we can say {-# TOOL FOO .. #-} is ignored by GHC, but then nothing will notice if you say {-# TOOL HLNIT ... #-} by mistake. If we decide to do that then fine, it just seems like an inconsistent design. > > Cheers > Simon > >> >> >> Cheers, >> >> - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at cs.brynmawr.edu Wed Oct 17 15:27:10 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 17 Oct 2018 11:27:10 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> Message-ID: <7E2ED6E7-E717-44ED-A7F9-77D02879FFD0@cs.brynmawr.edu> > On Oct 17, 2018, at 10:22 AM, Simon Marlow wrote: > > but then nothing will notice if you say {-# TOOL HLNIT ... #-} by mistake
Of course, this doesn't stop me from writing a tool named HLNIT and using those pragmas, but we'll never be able to guard against that. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazu at iij.ad.jp Thu Oct 18 07:23:46 2018 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 18 Oct 2018 16:23:46 +0900 (JST) Subject: [ANNOUNCE] GHC 8.6.1 released In-Reply-To: <20181005.120106.1048227468150704327.kazu@iij.ad.jp> References: <87wore1h9i.fsf@smart-cactus.org> <20181005.120106.1048227468150704327.kazu@iij.ad.jp> Message-ID: <20181018.162346.2208667461376171437.kazu@iij.ad.jp> Hi Evan, >> Has anyone installed the OS X binary distribution? I get: >> >> "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy >> libraries/ghc-prim dist-install "strip" '' '/usr/local' >> '/usr/local/lib/ghc-8.6.1' >> '/usr/local/share/doc/ghc-8.6.1/html/libraries' 'v p dyn' >> dyld: Library not loaded: /usr/local/opt/gmp/lib/libgmp.10.dylib >> Referenced from: >> /usr/local/src/hs/ghc-8.6.1/libraries/base/dist-install/build/libHSbase-4.12.0.0-ghc8.6.1.dylib >> Reason: image not found > > I met the same problem. See https://ghc.haskell.org/trac/ghc/ticket/15769 for workaround. --Kazu From monkleyon at gmail.com Thu Oct 18 14:00:19 2018 From: monkleyon at gmail.com (MarLinn) Date: Thu, 18 Oct 2018 16:00:19 +0200 Subject: Treatment of unknown pragmas In-Reply-To: <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> Message-ID: An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Thu Oct 18 15:45:51 2018 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Oct 2018 11:45:51 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: References: <878t30npgc.fsf@smart-cactus.org> Message-ID: <87y3avjm1f.fsf@smart-cactus.org> Jens Petersen writes: > On Mon, 15 Oct 2018 at 07:17, Ben Gamari wrote: >> The GHC team is pleased to announce the availability of GHC 8.4.4 > > Thank you > >> As always, the full release notes can be found in the users guide, > > https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#base-library > > I think this base text is out of date, and could be dropped, right? > Indeed it is. > I see that stm was also bumped: though it is not listed in > https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#included-libraries > Also true. However, in my mind this isn't nearly as significant as the `text` bump, which affects many users and fixes extremely bad misbehavior. Thanks for noticing these! Cheers - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Oct 18 19:45:17 2018 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Oct 2018 15:45:17 -0400 Subject: Help inform GHC's development priorities Message-ID: <87va5zjayf.fsf@smart-cactus.org> tl;dr. Please take a minute to express your thoughts on GHC's development priorities via this survey [1]. Hello everyone, The GHC developers want to ensure that we are working on problems that are of most importance to you, the Haskell community. To this end we are surveying the community [1], asking how our users interact with GHC and which problems need to be addressed most urgently. We hope you will take a few minutes to give us your thoughts [1].
Thanks in advance for your time and, finally, thanks to the financial supporters that make our work on GHC possible. Cheers, - Ben and the rest of the GHC developers [1] https://docs.google.com/forms/d/e/1FAIpQLSdh7sf2MqHoEmjt38r1cxCF-tV76OFCJqU6VabGzlOUKYqo-w/viewform -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From qdunkan at gmail.com Sun Oct 21 02:59:16 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Sat, 20 Oct 2018 19:59:16 -0700 Subject: [ANNOUNCE] GHC 8.6.1 released In-Reply-To: <20181018.162346.2208667461376171437.kazu@iij.ad.jp> References: <87wore1h9i.fsf@smart-cactus.org> <20181005.120106.1048227468150704327.kazu@iij.ad.jp> <20181018.162346.2208667461376171437.kazu@iij.ad.jp> Message-ID: On Thu, Oct 18, 2018 at 12:23 AM Kazu Yamamoto wrote: > > I met the same problem. > > See https://ghc.haskell.org/trac/ghc/ticket/15769 for workaround. Thanks for the note, this helped me find a solution. I updated the ticket with my experience. From juhpetersen at gmail.com Sun Oct 21 14:54:27 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Sun, 21 Oct 2018 23:54:27 +0900 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: <87y3avjm1f.fsf@smart-cactus.org> References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> Message-ID: More seriously, ghc-8.4.4 fails to build on ARM 32-bit and 64-bit due to the LLVM changes afaict. I opened https://ghc.haskell.org/trac/ghc/ticket/15780 for this. Jens From david at well-typed.com Sun Oct 21 15:06:09 2018 From: david at well-typed.com (David Feuer) Date: Sun, 21 Oct 2018 11:06:09 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: <87y3avjm1f.fsf@smart-cactus.org> References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> Message-ID: <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Did this release fix the dataToTag# issue?
I think that has a number of people concerned. On Oct 18, 2018, 11:46 AM, at 11:46 AM, Ben Gamari wrote: >Jens Petersen writes: > >> On Mon, 15 Oct 2018 at 07:17, Ben Gamari wrote: >>> The GHC team is pleased to announce the availability of GHC 8.4.4 >> >> Thank you >> >>> As always, the full release notes can be found in the users guide, >> >> >https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#base-library >> >> I think this base text is out of date, and could be dropped, right? >> >Indeed it is. > >> I see that stm was also bumped: though it is not listed in >> >https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#included-libraries >> >Also true. However, in my mind this isn't nearly as significant as the >`text` bump, which affects many users and fixes extremely bad >misbehavior. > >Thanks for noticing these! > >Cheers > >- Ben > > >------------------------------------------------------------------------ > >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From yotam2206 at gmail.com Sun Oct 21 17:44:23 2018 From: yotam2206 at gmail.com (Yotam Ohad) Date: Sun, 21 Oct 2018 20:44:23 +0300 Subject: Building on windows Message-ID: Hi, I've tried to build ghc on windows 10 with msys. I've installed it with stack as said on the guide. In the msys shell, after updating and installing everything I've tried to run `./configure --enable-tarballs-autodownload` and got the following error: configure: loading site script /mingw64/etc/config.site checking for gfind... no checking for find... /usr/bin/find checking for sort... /usr/bin/sort checking for GHC version date... inferred 8.7.20181017 checking for GHC Git commit id... inferred 46f2906d1c6e1fb732a90882487479a2ebf19ca1 checking for ghc... no configure: error: GHC is required. 
I have the haskell platform installed (8.4.3), yet it can't find GHC and I'm not sure if I missed something from the guide Yotam -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sun Oct 21 17:46:09 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 21 Oct 2018 18:46:09 +0100 Subject: Building on windows In-Reply-To: References: Message-ID: Which guide are you referring to? I don't know of any which say to use stack. On Sun, Oct 21, 2018 at 6:44 PM Yotam Ohad wrote: > > Hi, > > I've tried to build ghc on windows 10 with msys. I've installed it with stack as said on the guide. In the msys shell, after updating and installing everything I've tried to run `./configure --enable-tarballs-autodownload` and got the following error: > > configure: loading site script /mingw64/etc/config.site > checking for gfind... no > checking for find... /usr/bin/find > checking for sort... /usr/bin/sort > checking for GHC version date... inferred 8.7.20181017 > checking for GHC Git commit id... inferred 46f2906d1c6e1fb732a90882487479a2ebf19ca1 > checking for ghc... no > configure: error: GHC is required. > > I have the haskell platform installed (8.4.3), yet it can't find GHC and I'm not sure if I missed something from the guide > > Yotam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From yotam2206 at gmail.com Sun Oct 21 17:48:40 2018 From: yotam2206 at gmail.com (Yotam Ohad) Date: Sun, 21 Oct 2018 20:48:40 +0300 Subject: Building on windows In-Reply-To: References: Message-ID: This is it: https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows Step one, method B On Sun, Oct 21, 2018 at 8:46 PM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Which guide are you referring to? I don't know of any which say to use > stack. 
> On Sun, Oct 21, 2018 at 6:44 PM Yotam Ohad wrote: > > > > Hi, > > > > I've tried to build ghc on windows 10 with msys. I've installed it with > stack as said on the guide. In the msys shell, after updating and > installing everything I've tried to run `./configure > --enable-tarballs-autodownload` and got the following error: > > > > configure: loading site script /mingw64/etc/config.site > > checking for gfind... no > > checking for find... /usr/bin/find > > checking for sort... /usr/bin/sort > > checking for GHC version date... inferred 8.7.20181017 > > checking for GHC Git commit id... inferred > 46f2906d1c6e1fb732a90882487479a2ebf19ca1 > > checking for ghc... no > > configure: error: GHC is required. > > > > I have the haskell platform installed (8.4.3), yet it can't find GHC and > I'm not sure if I missed something from the guide > > > > Yotam > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sun Oct 21 17:51:06 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 21 Oct 2018 18:51:06 +0100 Subject: Building on windows In-Reply-To: References: Message-ID: I believe those instructions are only for setting up MSYS2. You are still expected to have `ghc` on your path. On Sun, Oct 21, 2018 at 6:48 PM Yotam Ohad wrote: > > This is it:https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows > Step one, method B > > On Sun, Oct 21, 2018 at 8:46 PM Matthew Pickering wrote: >> >> Which guide are you referring to? I don't know of any which say to use stack. >> On Sun, Oct 21, 2018 at 6:44 PM Yotam Ohad wrote: >> > >> > Hi, >> > >> > I've tried to build ghc on windows 10 with msys. I've installed it with stack as said on the guide. 
In the msys shell, after updating and installing everything I've tried to run `./configure --enable-tarballs-autodownload` and got the following error: >> > >> > configure: loading site script /mingw64/etc/config.site >> > checking for gfind... no >> > checking for find... /usr/bin/find >> > checking for sort... /usr/bin/sort >> > checking for GHC version date... inferred 8.7.20181017 >> > checking for GHC Git commit id... inferred 46f2906d1c6e1fb732a90882487479a2ebf19ca1 >> > checking for ghc... no >> > configure: error: GHC is required. >> > >> > I have the haskell platform installed (8.4.3), yet it can't find GHC and I'm not sure if I missed something from the guide >> > >> > Yotam >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From me at ara.io Mon Oct 22 08:45:34 2018 From: me at ara.io (Ara Adkins) Date: Mon, 22 Oct 2018 09:45:34 +0100 Subject: Using GHC Core as a Language Target Message-ID: Hey All, I was chatting to SPJ about the possibility of using GHC Core + the rest of the GHC compilation pipeline as a target for a functional language, and he mentioned that asking here would likely be more productive when it comes to the GHC API. I'm wondering where the best place would be for me to look in the API for building core expressions, and also whether it is possible to trigger the GHC code-generation pipeline from the core stage onwards. Best, Ara -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juhpetersen at gmail.com Mon Oct 22 08:54:17 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Mon, 22 Oct 2018 17:54:17 +0900 Subject: [ANNOUNCE] GHC 8.6.1 released In-Reply-To: <4E0D6D6B-0F2F-4070-A22E-8336A7905667@well-typed.com> References: <87wore1h9i.fsf@smart-cactus.org> <4E0D6D6B-0F2F-4070-A22E-8336A7905667@well-typed.com> Message-ID: On Mon, 24 Sep 2018 at 22:36, Ben Gamari wrote: > On September 24, 2018 2:09:13 AM CDT, Jens Petersen wrote: > >I have built 8.6.1 for Fedora 27, 28, 29, Rawhide, and EPEL7 in: > >https://copr.fedorainfracloud.org/coprs/petersen/ghc-8.6.1/ > >The repo also includes latest cabal-install. > Thanks Jens! This is a very helpful service. Btw I wanted to add that it is a lot easier for me now to test builds of ghc than it was in the past (mainly since I stopped doing bootstraps and testsuite builds, and also added a quick build option in my .spec build script). In Fedora we have 6 arch's now: 2 intel, 2 arm, s390x, and ppc64le. And the new Fedora module system means I can ship multiple versions of ghc for current Fedora releases (since 28), which is rather nice. :) Jens From ndmitchell at gmail.com Mon Oct 22 10:32:25 2018 From: ndmitchell at gmail.com (Neil Mitchell) Date: Mon, 22 Oct 2018 11:32:25 +0100 Subject: Treatment of unknown pragmas In-Reply-To: <87bm7sllil.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> Message-ID: > I'm trying to view the pragma question from the perspective of setting a > precedent for other tools. If a dozen Haskell tools were to approach us > tomorrow and ask for similar treatment to HLint it's clear that > hardcoding pragma lists in the lexer would be unsustainable. Why? Making the list 12 elements longer doesn't seem fatal or add any real complexity. 
And do we have any idea of 12 additional programs that might want settings adding? Maybe we just demand that the program be continuously maintained for over a decade :). > Is this likely to happen? Of course not. However, it is an indication to > me that the root cause of this current debate is our lack of a good > extensible pragmas. It seems to me that introducing a tool pragma > convention, from which tool users can claim namespaces at will, is the > right way to fix this. I'd suggest just adding HLINT as a known pragma. But given there isn't any consensus on that, why not add TOOL as a known pragma, and then we've got an extension point which requires only one single entry to the list? Thanks, Neil From omeragacan at gmail.com Mon Oct 22 10:43:49 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 22 Oct 2018 13:43:49 +0300 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: Thanks for all the answers. Another surprising thing about the pinned object allocation primops is that the aligned allocator allows alignment to bytes, rather than to words (the documentation doesn't say whether it's words or bytes, but it can be seen from the code that it's actually aligning to the given byte). Is there a use case for this or people mostly use alignment on word boundaries? Ömer Sven Panne , 17 Eki 2018 Çar, 10:29 tarihinde şunu yazdı: > > Am Di., 16. Okt. 2018 um 23:18 Uhr schrieb Simon Marlow : >> >> I vaguely recall that this was because 16 byte alignment is the minimum you need for certain foreign types, and it's what malloc() does. Perhaps check the FFI spec and the guarantees that mallocForeignPtrBytes and friends provide? 
> > > mallocForeignPtrBytes is defined in terms of malloc (https://www.haskell.org/onlinereport/haskell2010/haskellch29.html#x37-28400029.1.3), which in turn has the following guarantee (https://www.haskell.org/onlinereport/haskell2010/haskellch31.html#x39-28700031.1): > > "... All storage allocated by functions that allocate based on a size in bytes must be sufficiently aligned for any of the basic foreign types that fits into the newly allocated storage. ..." > > The largest basic foreign types are Word64/Double and probably Ptr/FunPtr/StablePtr (https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-1700008.7), so per spec you need at least an 8-byte alignement. But in an SSE-world I would be *very* reluctant to use an alignment less strict than 16 bytes, otherwise people will probably hate you... :-] > > Cheers, > S. From ben at well-typed.com Tue Oct 23 18:47:12 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 23 Oct 2018 14:47:12 -0400 Subject: Hadrian has been merged! Message-ID: <877ei87b6l.fsf@smart-cactus.org> Hi everyone, Hadrian is now merged into the GHC tree. For better or worse, due to the large number of merge commits in the Hadrian repository we were unable to squash the early history of the project. If you encounter trouble using the Hadrian build system please file a ticket on Trac, setting the "Component" field to "Build System (Hadrian)". I have renamed the old "Build System" component in Trac's issue tracker to "Build System (make)" to avoid confusion. Hadrian tickets opened before the merge will remain on GitHub. Likewise, if you would like to contribute to Hadrian please do so via Phabricator. The remaining pull requests against the GitHub repository will be merged in due course but further requests will be redirected to Phabricator. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Wed Oct 24 14:54:14 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 24 Oct 2018 15:54:14 +0100 Subject: Why align all pinned array payloads on 16 bytes? In-Reply-To: References: Message-ID: I don't imagine anyone wants to align to anything that's not a power of 2, or less than a word size. Still, unless the current generality results in extra complication or overheads I wouldn't change it. On Mon, 22 Oct 2018 at 11:44, Ömer Sinan Ağacan wrote: > Thanks for all the answers. Another surprising thing about the pinned > object > allocation primops is that the aligned allocator allows alignment to bytes, > rather than to words (the documentation doesn't say whether it's words or > bytes, > but it can be seen from the code that it's actually aligning to the given > byte). Is there a use case for this or people mostly use alignment on word > boundaries? > > Ömer > > Sven Panne , 17 Eki 2018 Çar, 10:29 tarihinde şunu > yazdı: > > > > Am Di., 16. Okt. 2018 um 23:18 Uhr schrieb Simon Marlow < > marlowsd at gmail.com>: > >> > >> I vaguely recall that this was because 16 byte alignment is the minimum > you need for certain foreign types, and it's what malloc() does. Perhaps > check the FFI spec and the guarantees that mallocForeignPtrBytes and > friends provide? > > > > > > mallocForeignPtrBytes is defined in terms of malloc ( > https://www.haskell.org/onlinereport/haskell2010/haskellch29.html#x37-28400029.1.3), > which in turn has the following guarantee ( > https://www.haskell.org/onlinereport/haskell2010/haskellch31.html#x39-28700031.1 > ): > > > > "... All storage allocated by functions that allocate based on a size > in bytes must be sufficiently aligned for any of the basic foreign types > that fits into the newly allocated storage. ..." 
> > > > The largest basic foreign types are Word64/Double and probably > Ptr/FunPtr/StablePtr ( > https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-1700008.7), > so per spec you need at least an 8-byte alignement. But in an SSE-world I > would be *very* reluctant to use an alignment less strict than 16 bytes, > otherwise people will probably hate you... :-] > > > > Cheers, > > S. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Wed Oct 24 15:05:01 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 24 Oct 2018 16:05:01 +0100 Subject: Shall we make -dsuppress-uniques default? In-Reply-To: References: Message-ID: For what it's worth, I put this in my .zshrc cleancore=(-ddump-simpl -dsuppress-coercions -dsuppress-var-kinds -dsuppress-idinfo -dsuppress-type-signatures -dsuppress-type-applications) and then ghc $cleancore -c Foo.hs but this is mainly for the use case of "I wonder if this thing is getting optimised the way I hope, let's have a look at the Core". There's also this little tool which is aimed at the same kind of thing: https://github.com/shachaf/ghc-core So I'd say there's definitely a demand for something, but it's not entirely clear what the something is. Someone could make a proposal... On Sat, 6 Oct 2018 at 00:12, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Like Richard I use the uniques all the time. > > I'd prefer to leave it as-is, unless there is widespread support for a > change > > S > > > | -----Original Message----- > | From: ghc-devs On Behalf Of Ömer Sinan > | Agacan > | Sent: 05 October 2018 20:02 > | To: rae at cs.brynmawr.edu > | Cc: ghc-devs > | Subject: Re: Shall we make -dsuppress-uniques default? > | > | > What do you say to GHC to get it to print the uniques that you don't > | like? > | > | I usually use one of these: -ddump-simpl, -dverbose-core2core, > | -ddump-simpl-iterations, -ddump-stg. 
All of these print variables with > | unique > | details and I literally never need those details. Rarely I use -ddump-cmm > | too. > | > | Agreed that having different defaults in different dumps/traces might > | work .. > | > | Ömer > | > | Richard Eisenberg , 5 Eki 2018 Cum, 21:54 > | tarihinde şunu yazdı: > | > > | > I'm in the opposite camp. More often than not, the biggest advantage of > | dumps during GHC development is to see the uniques. Indeed, I often > | ignore the actual names of variables and just work in my head with the > | uniques. > | > > | > Perhaps the more complete answer is to fine-tune what settings cause > | the uniques to be printed. -ddump-xx-trace should almost certainly. > | Perhaps other modes needn't. What do you say to GHC to get it to print > | the uniques that you don't like? > | > > | > Richard > | > > | > > On Oct 5, 2018, at 2:48 PM, Ömer Sinan Ağacan > | wrote: > | > > > | > > I asked this on IRC and didn't hear a lot of opposition, so as the > | next step > | > > I'd like to ask ghc-devs. > | > > > | > > I literally never need the details on uniques that we currently print > | by > | > > default. I either don't care about variables too much (when not > | comparing the > | > > output with some other output), or I need -dsuppress-uniques (when > | comparing > | > > outputs). The problem is I have to remember to add -dsuppress-uniques > | if I'm > | > > going to compare the outputs, and if I decide to compare outputs > | after the fact > | > > I need to re-generate them with -dsuppress-uniques. This takes time > | and effort. > | > > > | > > If you're also of the same opinion I suggest making -dsuppress- > | uniques default, > | > > and providing a -dno-suppress-uniques (if it doesn't already exist). 
> | > > > | > > Ömer > | > > _______________________________________________ > | > > ghc-devs mailing list > | > > ghc-devs at haskell.org > | > > > | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com > %7C07ec32bd26d149c457ab08d > | 62af537c9%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636743630029759709 > | &sdata=4DVsRJ4Burv2%2BZGf38py%2FNRqM5j5%2FJAUkJPrUl7%2F%2Fm0%3D&r > | eserved=0 > | > > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com > %7C07ec32bd26d149c457ab08d > | 62af537c9%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636743630029759709 > | &sdata=4DVsRJ4Burv2%2BZGf38py%2FNRqM5j5%2FJAUkJPrUl7%2F%2Fm0%3D&r > | eserved=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Oct 24 17:38:27 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 24 Oct 2018 13:38:27 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> Message-ID: <87mur35jou.fsf@smart-cactus.org> Neil Mitchell writes: >> I'm trying to view the pragma question from the perspective of setting a >> precedent for other tools. If a dozen Haskell tools were to approach us >> tomorrow and ask for similar treatment to HLint it's clear that >> hardcoding pragma lists in the lexer would be unsustainable. > > Why? 
Making the list 12 elements longer doesn't seem fatal or add any > real complexity. And do we have any idea of 12 additional programs > that might want settings adding? Maybe we just demand that the program > be continuously maintained for over a decade :). > Well, for one this would mean that any packages using these pragmas would be -Werror broken until a new GHC was released. To me this is a sure sign that we need a better story here. >> Is this likely to happen? Of course not. However, it is an indication to >> me that the root cause of this current debate is our lack of a good >> extensible pragmas. It seems to me that introducing a tool pragma >> convention, from which tool users can claim namespaces at will, is the >> right way to fix this. > > I'd suggest just adding HLINT as a known pragma. But given there isn't > any consensus on that, why not add TOOL as a known pragma, and then > we've got an extension point which requires only one single entry to > the list? > With my GHC hat on this seems acceptable. From a user perspective it has the problem of being quite verbose (and pragma verbosity is already a problem in Haskell, in my opinion). I'll admit, I still don't see the problem with just adopting a variant of standard comment syntax with a convention for tool name prefixes (for instance, the `{-! HLINT ... !-}` suggested earlier). This seems to me to be an all-around better solution: less verbose, easy to parse, and requires no changes to GHC. The downsides seem easily overcome: Editors can be easily modified to give this syntax the same treatment as compiler pragmas. The conflict with Liquid Haskell's syntax is merely temporary as they are moving towards using ANN pragmas anyways. However, even if it weren't a bit of temporary pain seems worthwhile to solve the tool pragma namespacing issue once and for all. However, this is just my opinion as a user. If people want GHC to ignore `{-# TOOL ... #-}` then I certainly won't object. 
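To make the trade-off concrete, here is a sketch of how the spellings discussed in this thread would look in a module. Only the ANN form is accepted by released GHC and HLint today; the HLINT, TOOL, and `{-! ... !-}` forms are proposals and are shown commented out, and the hint name is purely illustrative:

```haskell
module PragmaSketch where

-- Works today: HLint reads string-valued ANN annotations, and GHC
-- accepts ANN without complaint (at the cost of running the
-- annotation machinery during compilation).
{-# ANN module ("HLint: ignore Use camelCase" :: String) #-}

-- Proposal 1: whitelist HLINT in GHC's lexer.
-- {-# HLINT ignore "Use camelCase" #-}

-- Proposal 2: a single generic TOOL pragma as an extension point.
-- {-# TOOL HLINT ignore "Use camelCase" #-}

-- Proposal 3: a new comment syntax that needs no GHC changes.
-- {-! HLINT ignore "Use camelCase" !-}

-- A definition that would trigger the "Use camelCase" hint.
my_snake_case :: Int
my_snake_case = 42
```

Under the first two proposals GHC's lexer would stop emitting an unrecognised-pragma warning for these forms; under the third, GHC never sees a pragma at all.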
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From eric at seidel.io Wed Oct 24 18:26:11 2018 From: eric at seidel.io (Eric Seidel) Date: Wed, 24 Oct 2018 14:26:11 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <87mur35jou.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> <87mur35jou.fsf@smart-cactus.org> Message-ID: <1540405571.3651572.1553362800.2DB76FE3@webmail.messagingengine.com> It might be nice if GHC were to actually parse the extensible tool pragmas and insert them into the AST! Aesthetically, I prefer the {-# X_HLINT ... #-} syntax over the {-# TOOL HLINT ... #-} syntax, but I don't feel strongly about it. On Wed, Oct 24, 2018, at 13:38, Ben Gamari wrote: > Neil Mitchell writes: > > >> I'm trying to view the pragma question from the perspective of setting a > >> precedent for other tools. If a dozen Haskell tools were to approach us > >> tomorrow and ask for similar treatment to HLint it's clear that > >> hardcoding pragma lists in the lexer would be unsustainable. > > > > Why? Making the list 12 elements longer doesn't seem fatal or add any > > real complexity. And do we have any idea of 12 additional programs > > that might want settings adding? Maybe we just demand that the program > > be continuously maintained for over a decade :). > > > Well, for one this would mean that any packages using these pragmas > would be -Werror broken until a new GHC was released. To me this is a > sure sign that we need a better story here. > > > >> Is this likely to happen? Of course not. However, it is an indication to > >> me that the root cause of this current debate is our lack of a good > >> extensible pragmas. 
It seems to me that introducing a tool pragma > >> convention, from which tool users can claim namespaces at will, is the > >> right way to fix this. > > > > I'd suggest just adding HLINT as a known pragma. But given there isn't > > any consensus on that, why not add TOOL as a known pragma, and then > > we've got an extension point which requires only one single entry to > > the list? > > > With my GHC hat on this seems acceptable. > > From a user perspective it has the problem of being quite verbose (and > pragma verbosity is already a problem in Haskell, in my opinion). I'll > admit, I still don't see the problem with just adopting a variant of > standard comment syntax with a convention for tool name prefixes (for > instance, the `{-! HLINT ... !-}` suggested earlier). This seems to me > to be an all-around better solution: less verbose, easy to parse, > and requires no changes to GHC. > > The downsides seem easily overcome: Editors can be easily modified to > give this syntax the same treatment as compiler pragmas. The conflict > with Liquid Haskell's syntax is merely temporary as they are moving > towards using ANN pragmas anyways. However, even if it weren't a bit of > temporary pain seems worthwhile to solve the tool pragma namespacing > issue once and for all. However, this is just my opinion as a user. > > If people want GHC to ignore `{-# TOOL ... #-}` then I certainly won't > object. 
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > Email had 1 attachment: > + signature.asc > 1k (application/pgp-signature) From metaniklas at gmail.com Thu Oct 25 08:37:59 2018 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 25 Oct 2018 10:37:59 +0200 Subject: Treatment of unknown pragmas In-Reply-To: <87mur35jou.fsf@smart-cactus.org> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> <87mur35jou.fsf@smart-cactus.org> Message-ID: <8F6232A6-9CD6-4DF3-87D9-E7A32EE8EBD7@gmail.com> Hi! Why not follow the standard in that pragmas were intended for all tools consuming Haskell and not for GHCs exclusive use? All that would require is to make the warning opt-in. Making other tools use a new syntax for the same functionality seems suboptimal to me. Regards, Niklas > 24 okt. 2018 kl. 19:38 skrev Ben Gamari : > > Neil Mitchell writes: > >>> I'm trying to view the pragma question from the perspective of setting a >>> precedent for other tools. If a dozen Haskell tools were to approach us >>> tomorrow and ask for similar treatment to HLint it's clear that >>> hardcoding pragma lists in the lexer would be unsustainable. >> >> Why? Making the list 12 elements longer doesn't seem fatal or add any >> real complexity. And do we have any idea of 12 additional programs >> that might want settings adding? Maybe we just demand that the program >> be continuously maintained for over a decade :). >> > Well, for one this would mean that any packages using these pragmas > would be -Werror broken until a new GHC was released. To me this is a > sure sign that we need a better story here. > > >>> Is this likely to happen? Of course not. 
However, it is an indication to >>> me that the root cause of this current debate is our lack of a good >>> extensible pragmas. It seems to me that introducing a tool pragma >>> convention, from which tool users can claim namespaces at will, is the >>> right way to fix this. >> >> I'd suggest just adding HLINT as a known pragma. But given there isn't >> any consensus on that, why not add TOOL as a known pragma, and then >> we've got an extension point which requires only one single entry to >> the list? >> > With my GHC hat on this seems acceptable. > > From a user perspective it has the problem of being quite verbose (and > pragma verbosity is already a problem in Haskell, in my opinion). I'll > admit, I still don't see the problem with just adopting a variant of > standard comment syntax with a convention for tool name prefixes (for > instance, the `{-! HLINT ... !-}` suggested earlier). This seems to me > to be an all-around better solution: less verbose, easy to parse, > and requires no changes to GHC. > > The downsides seem easily overcome: Editors can be easily modified to > give this syntax the same treatment as compiler pragmas. The conflict > with Liquid Haskell's syntax is merely temporary as they are moving > towards using ANN pragmas anyways. However, even if it weren't a bit of > temporary pain seems worthwhile to solve the tool pragma namespacing > issue once and for all. However, this is just my opinion as a user. > > If people want GHC to ignore `{-# TOOL ... #-}` then I certainly won't > object. 
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From sh.najd at gmail.com Thu Oct 25 11:36:10 2018 From: sh.najd at gmail.com (Shayan Najd) Date: Thu, 25 Oct 2018 13:36:10 +0200 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms Message-ID: Dear GHC hackers, On our work on the new front-end AST for GHC [0] based on TTG [1], we would like to use a pattern synonym like the following [2]: {{{ pattern LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a pattern LL s m <- (decomposeSrcSpan -> (m , s)) where LL s m = composeSrcSpan (m , s) }}} We know that any match on `LL` patterns, makes the pattern matching total, as it uses a view pattern with a total output pattern (i.e., in `decomposeSrcSpan -> (m , s)`, the pattern `(m , s)` is total). As far as I understand, currently COMPLETE pragmas cannot be used with such a polymorphic pattern. What do you suggest us to do to avoid the false incomplete pattern matching warnings? Thanks, Shayan [0] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow [1] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/TreesThatGrowGuidance [2] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations From rae at cs.brynmawr.edu Thu Oct 25 12:51:29 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 25 Oct 2018 08:51:29 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: Message-ID: <934862AE-D6D6-4296-A9DC-AA5786BA9070@cs.brynmawr.edu> This sounds like an infelicity in COMPLETE pragmas. Do we have a documented reason why fixing this is impossible? 
Richard > On Oct 25, 2018, at 7:36 AM, Shayan Najd wrote: > > Dear GHC hackers, > > On our work on the new front-end AST for GHC [0] based on TTG [1], we > would like to use a pattern synonym like the following [2]: > > {{{ > pattern LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a > pattern LL s m <- (decomposeSrcSpan -> (m , s)) > where > LL s m = composeSrcSpan (m , s) > }}} > > We know that any match on `LL` patterns, makes the pattern matching > total, as it uses a view pattern with a total output pattern (i.e., in > `decomposeSrcSpan -> (m , s)`, the pattern `(m , s)` is total). > > As far as I understand, currently COMPLETE pragmas cannot be used with > such a polymorphic pattern. > > What do you suggest us to do to avoid the false incomplete pattern > matching warnings? > > Thanks, > Shayan > > [0] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow > [1] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/TreesThatGrowGuidance > [2] https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/HandlingSourceLocations > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ryan.gl.scott at gmail.com Thu Oct 25 13:30:42 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 25 Oct 2018 09:30:42 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms Message-ID: The fact that `LL` can't be used in a COMPLETE pragma is a consequence of its current design. Per the users' guide [1]: To make things more formal, when the pattern-match checker requests a set of constructors for some data type constructor T, the checker returns: * The original set of data constructors for T * Any COMPLETE sets of type T Note the use of the phrase *type constructor*. The return type of all constructor-like things in a COMPLETE set must all be headed by the same type constructor T. 
Since `LL`'s return type is simply a type variable `a`, this simply doesn't work with the design of COMPLETE sets. But to be perfectly honest, I feel that trying to put `LL` into a COMPLETE set is like putting a square peg into a round hole. The original motivation for COMPLETE sets, as given in this wiki page [2], is to support using pattern synonyms in an abstract matter—that is, to ensure that users who match on pattern synonyms don't have any internal implementation details of those pattern synonyms leak into error messages. This is well and good for many use cases, but there are also many use cases where we don't *care* about abstraction. Sometimes, we simply define a pattern synonym to be a convenient shorthand for a complicated pattern to facilitate code reuse, and nothing more. `LL` is a perfect example of this, in my opinion. `LL` is simply a thin wrapper around the use of `decomposeSrcSpan` as a view pattern. Trying to put `LL` into a COMPLETE set is silly since our intention isn't to hide the implementation details of decomposing a `SrcSpan`, but rather to avoid the need to copy-paste `(decomposeSrcSpan -> (m , s))` in a bazillion patterns. Correspondingly, any use of `LL` ought to be treated as if the `(decomposeSrcSpan -> (m , s))` pattern were inlined—and from the pattern-match coverage checker's point of view, that *is* exhaustive! What's the moral of the story here? To me, this is a sign that the design space of pattern synonym coverage checking isn't rich enough. In addition to the existing {-# COMPLETE #-} machinery that we have today, I think we need to have a separate pragma for pattern synonyms that are intended to be transparent, non-abstract wrappers around patterns ({-# TRANSPARENT #-}, perhaps). Ryan S. 
----- [1] https://downloads.haskell.org/~ghc/8.6.1/docs/html/users_guide/glasgow_exts.html#complete-pragma [2] https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs From rae at cs.brynmawr.edu Thu Oct 25 14:20:14 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 25 Oct 2018 10:20:14 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: Message-ID: <9A0657C6-1248-4FE9-9A55-8F874ED04F04@cs.brynmawr.edu> In a rare move, I disagree with Ryan here. Why don't we want LL to be abstract? I personally don't want to be thinking of some desugaring to a view pattern when I say LL. I want just to be pattern matching. Is there a reason we can't extend COMPLETE pragmas to cover this case? Richard > On Oct 25, 2018, at 9:30 AM, Ryan Scott wrote: > > The fact that `LL` can't be used in a COMPLETE pragma is a consequence > of its current design. Per the users' guide [1]: > > To make things more formal, when the pattern-match checker > requests a set of constructors for some data type constructor T, the > checker returns: > > * The original set of data constructors for T > * Any COMPLETE sets of type T > > Note the use of the phrase *type constructor*. The return type of all > constructor-like things in a COMPLETE set must all be headed by the > same type constructor T. Since `LL`'s return type is simply a type > variable `a`, this simply doesn't work with the design of COMPLETE > sets. > > But to be perfectly honest, I feel that trying to put `LL` into a > COMPLETE set is like putting a square peg into a round hole. The > original motivation for COMPLETE sets, as given in this wiki page [2], > is to support using pattern synonyms in an abstract matter—that is, to > ensure that users who match on pattern synonyms don't have any > internal implementation details of those pattern synonyms leak into > error messages. 
This is well and good for many use cases, but there > are also many use cases where we don't *care* about abstraction. > Sometimes, we simply define a pattern synonym to be a convenient > shorthand for a complicated pattern to facilitate code reuse, and > nothing more. > > `LL` is a perfect example of this, in my opinion. `LL` is simply a > thin wrapper around the use of `decomposeSrcSpan` as a view pattern. > Trying to put `LL` into a COMPLETE set is silly since our intention > isn't to hide the implementation details of decomposing a `SrcSpan`, > but rather to avoid the need to copy-paste `(decomposeSrcSpan -> (m , > s))` in a bazillion patterns. Correspondingly, any use of `LL` ought > to be treated as if the `(decomposeSrcSpan -> (m , s))` pattern were > inlined—and from the pattern-match coverage checker's point of view, > that *is* exhaustive! > > What's the moral of the story here? To me, this is a sign that the > design space of pattern synonym coverage checking isn't rich enough. > In addition to the existing {-# COMPLETE #-} machinery that we have > today, I think we need to have a separate pragma for pattern synonyms > that are intended to be transparent, non-abstract wrappers around > patterns ({-# TRANSPARENT #-}, perhaps). > > Ryan S. > ----- > [1] https://downloads.haskell.org/~ghc/8.6.1/docs/html/users_guide/glasgow_exts.html#complete-pragma > [2] https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Thu Oct 25 14:35:01 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 25 Oct 2018 15:35:01 +0100 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: Message-ID: I think that `LL` is precisely intended for abstraction. 
If this `COMPLETE` pragma were possible to implement then a user shouldn't know the difference between the old and new representations. The reason that `COMPLETE` pragmas are designed like this is that it's how the pattern match checker is defined. The collection of patterns used for the checking are queried by the type of the patterns so it made sense to associate each `COMPLETE` set with a specific type. On Thu, Oct 25, 2018 at 2:31 PM Ryan Scott wrote: > > The fact that `LL` can't be used in a COMPLETE pragma is a consequence > of its current design. Per the users' guide [1]: > > To make things more formal, when the pattern-match checker > requests a set of constructors for some data type constructor T, the > checker returns: > > * The original set of data constructors for T > * Any COMPLETE sets of type T > > Note the use of the phrase *type constructor*. The return type of all > constructor-like things in a COMPLETE set must all be headed by the > same type constructor T. Since `LL`'s return type is simply a type > variable `a`, this simply doesn't work with the design of COMPLETE > sets. > > But to be perfectly honest, I feel that trying to put `LL` into a > COMPLETE set is like putting a square peg into a round hole. The > original motivation for COMPLETE sets, as given in this wiki page [2], > is to support using pattern synonyms in an abstract matter—that is, to > ensure that users who match on pattern synonyms don't have any > internal implementation details of those pattern synonyms leak into > error messages. This is well and good for many use cases, but there > are also many use cases where we don't *care* about abstraction. > Sometimes, we simply define a pattern synonym to be a convenient > shorthand for a complicated pattern to facilitate code reuse, and > nothing more. > > `LL` is a perfect example of this, in my opinion. `LL` is simply a > thin wrapper around the use of `decomposeSrcSpan` as a view pattern. 
> Trying to put `LL` into a COMPLETE set is silly since our intention > isn't to hide the implementation details of decomposing a `SrcSpan`, > but rather to avoid the need to copy-paste `(decomposeSrcSpan -> (m , > s))` in a bazillion patterns. Correspondingly, any use of `LL` ought > to be treated as if the `(decomposeSrcSpan -> (m , s))` pattern were > inlined—and from the pattern-match coverage checker's point of view, > that *is* exhaustive! > > What's the moral of the story here? To me, this is a sign that the > design space of pattern synonym coverage checking isn't rich enough. > In addition to the existing {-# COMPLETE #-} machinery that we have > today, I think we need to have a separate pragma for pattern synonyms > that are intended to be transparent, non-abstract wrappers around > patterns ({-# TRANSPARENT #-}, perhaps). > > Ryan S. > ----- > [1] https://downloads.haskell.org/~ghc/8.6.1/docs/html/users_guide/glasgow_exts.html#complete-pragma > [2] https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ryan.gl.scott at gmail.com Thu Oct 25 14:40:06 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 25 Oct 2018 10:40:06 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms Message-ID: You *can* put `LL` into a COMPLETE set, but under the stipulation that you specify which type constructor it's monomorphized to. To quote the wiki page on COMPLETE sets: In the case where all the patterns are polymorphic, a user must provide a type signature but we accept the definition regardless of the type signature they provide. The type constructor for the whole set of patterns is the type constructor as specified by the user. If the user does not provide a type signature then the definition is rejected as ambiguous. 
This design is a consequence of the design of the pattern match checker. Complete sets of patterns must be identified relative to a type. This is a sanity check, as users would never be able to match on all constructors if the set of patterns is inconsistent in this manner.

In other words, this would work provided that you'd be willing to list every single instance of `HasSrcSpan` in its own COMPLETE set. It's tedious, but possible.

Ryan S.
-----
[1] https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs#Typing

From sh.najd at gmail.com  Thu Oct 25 15:59:47 2018
From: sh.najd at gmail.com (Shayan Najd)
Date: Thu, 25 Oct 2018 17:59:47 +0200
Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms
In-Reply-To: 
References: 
Message-ID: 

> [Ryan Scott:]
> In other words, this would work provided that you'd be willing to list
> every single instance of `HasSrcSpan` in its own COMPLETE set. It's
> tedious, but possible.

Yes, for the cases where `LL` is used monomorphically, we can use the per-instance COMPLETE pragma, but it does not scale to the generic cases where `LL` is used polymorphically. Consider the following, where `Exp` and `Pat` are both instances of `HasSrcSpan`:

{{{
{-# COMPLETE LL :: Exp #-}
{-# COMPLETE LL :: Pat #-}

unLocExp :: Exp -> Exp
unLocExp (LL _ m) = m

unLocPat :: Pat -> Pat
unLocPat (LL _ m) = m

unLocGen :: HasSrcSpan a => a -> SrcSpanLess a
unLocGen (LL _ m) = m
}}}

In the functions `unLocExp` and `unLocPat`, the related COMPLETE pragmas rightly avoid the false incomplete pattern matching warnings. However, to avoid the false incomplete pattern matching warning in `unLocGen`, either I should add a catch-all case `unLocGen _ = error "Impossible!"`, or expose the internal view patterns and hence break the abstraction: `unLocGen (decomposeSrcSpan -> (_ , m)) = m`. We want to avoid both solutions, and unfortunately this problem arises frequently enough.
For example, in GHC, there are plenty of such generic cases (e.g., `sL1` or the like in the parser). I believe the source of the issue is what you just mentioned: > [Ryan Scott:] > Complete sets of patterns must be identified relative to a type. Technically `HasSrcSpan a0 |- a0` is a type, for a Skolem variable `a0`; I understand if you mean relative to a concrete type, but I don't understand why (I have no experience with GHC's totality checker code). Why can't it be syntactic? We should allow programmers to express things like "when you see `LL` treat the related pattern matching group as total". /Shayan On Thu, Oct 25, 2018 at 4:40 PM Ryan Scott wrote: > > You *can* put `LL` into a COMPLETE set, but under the stipulation that > you specify which type constructor it's monomorphized to. To quote the > wiki page on COMPLETE sets: > > In the case where all the patterns are polymorphic, a user must > provide a type signature but we accept the definition regardless of > the type signature they provide. The type constructor for the whole > set of patterns is the type constructor as specified by the user. If > the user does not provide a type signature then the definition is > rejected as ambiguous. > > This design is a consequence of the design of the pattern match > checker. Complete sets of patterns must be identified relative to a > type. This is a sanity check as users would never be able to match on > all constructors if the set of patterns is inconsistent in this > manner. > > In other words, this would work provided that you'd be willing to list > every single instance of `HasSrcSpan` in its own COMPLETE set. It's > tedious, but possible. > > Ryan S. 
> -----
> [1] https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs#Typing
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From sylvain at haskus.fr  Thu Oct 25 16:24:02 2018
From: sylvain at haskus.fr (Sylvain Henry)
Date: Thu, 25 Oct 2018 18:24:02 +0200
Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms
In-Reply-To: 
References: 
Message-ID: 

> In the case where all the patterns are polymorphic, a user must
> provide a type signature but we accept the definition regardless of
> the type signature they provide.

Currently we can specify the type *constructor* in a COMPLETE pragma:

pattern J :: a -> Maybe a
pattern J a = Just a

pattern N :: Maybe a
pattern N = Nothing

{-# COMPLETE N, J :: Maybe #-}

Instead if we could specify the type with its free vars, we could refer to them in conlike signatures:

{-# COMPLETE N, [ J :: a -> Maybe a ] :: Maybe a #-}

The COMPLETE pragma for LL could be:

{-# COMPLETE [ LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a #-}

I'm borrowing the list comprehension syntax on purpose because it would allow to define a set of conlikes from a type-list (see my request [1]):

{-# COMPLETE [ V :: (c :< cs) => c -> Variant cs
             | c <- cs
             ] :: Variant cs #-}

> To make things more formal, when the pattern-match checker
> requests a set of constructors for some data type constructor T, the
> checker returns:
>
> * The original set of data constructors for T
> * Any COMPLETE sets of type T
>
> Note the use of the phrase *type constructor*. The return type of all
> constructor-like things in a COMPLETE set must all be headed by the
> same type constructor T. Since `LL`'s return type is simply a type
> variable `a`, this simply doesn't work with the design of COMPLETE
> sets.
Could we use a mechanism similar to instance resolution (with FlexibleInstances) for the checker to return matching COMPLETE sets instead?

--Sylvain

[1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From iavor.diatchki at gmail.com  Thu Oct 25 18:20:13 2018
From: iavor.diatchki at gmail.com (Iavor Diatchki)
Date: Thu, 25 Oct 2018 11:20:13 -0700
Subject: Using GHC Core as a Language Target
In-Reply-To: 
References: 
Message-ID: 

Hello,

I have not done what you are asking, but here is how I'd approach the problem.

1. Assuming you already have some Core, you'd have to figure out how to include it with the rest of the GHC pipeline:
   * A lot of the code that glues everything together is in `compiler/main`. Modules of interest seem to be `DriverPipeline`, `HscMain`, and `PipelineMonad`.
   * A quick look suggests that maybe you want to call `hscGenHardCode` in `HscMain`, with your core program inside the `CgGuts` argument.
   * Exactly how you set things up probably depends on how much of the rest of the Haskell ecosystem you are trying to integrate with (separate compilation, avoiding recompilation, support for packages, etc.)

2. The syntax for Core is in `compiler/coreSyn`, with the basic AST being in module `CoreSyn`. Module `MkCore` has a lot of helpers for working with core syntax.

3. The "desugarer" (in `compiler/deSugar`) is the GHC phase that translates the front end syntax (hsSyn) into core, so that should have lots of examples of how to generate core.

Cheers,
-Iavor

On Mon, Oct 22, 2018 at 1:46 AM Ara Adkins wrote:
> Hey All,
>
> I was chatting to SPJ about the possibility of using GHC Core + the rest
> of the GHC compilation pipeline as a target for a functional language, and
> he mentioned that asking here would likely be more productive when it comes
> to the GHC API.
> > I'm wondering where the best place would be for me to look in the API for > building core expressions, and also whether it is possible to trigger the > GHC code-generation pipeline from the core stage onwards. > > Best, > Ara > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Thu Oct 25 18:51:26 2018 From: me at ara.io (Ara Adkins) Date: Thu, 25 Oct 2018 19:51:26 +0100 Subject: Using GHC Core as a Language Target In-Reply-To: References: Message-ID: <70EC1EE1-5F42-40BC-8883-3A3E0E17E599@ara.io> Heya, Those are exactly the kind of pointers I was hoping for. Thanks Iavor. I’m sure I’ll have more questions with time, but that’s a great starting point. _ara > On 25 Oct 2018, at 19:20, Iavor Diatchki wrote: > > Hello, > > I have not done what you are asking, but here is how I'd approach the problem. > > 1. Assuming you already have some Core, you'd have to figure out how to include it with the rest of the GHC pipeline: > * A lot of the code that glues everything together is in `compiler/main`. Modules of interest seem to be `DriverPipeline`, `HscMain`, and `PipelineMoand` > * A quick looks suggests that maybe you want to call `hscGenHardCode` in `HscMain`, with your core program inside the `CgGuts` argument. > * Exactly how you setup things probably depends on how much of the rest of the Haskell ecosystem you are trying to integrate with (separate compilation, avoiding recompilation, support for packages, etc.) > > 2. The syntax for Core is in `compiler/coreSyn`, with the basic AST being in module `CoreSyn`. Module `MkCore` has a lot of helpers for working with core syntax. > > 3. The "desugarer" (in `compiler/deSugar`) is the GHC phase that translates the front end syntax (hsSyn) into core, so that should have lots of examples of how to generate core. 
> > Cheers, > -Iavor > > > > > > > >> On Mon, Oct 22, 2018 at 1:46 AM Ara Adkins wrote: >> Hey All, >> >> I was chatting to SPJ about the possibility of using GHC Core + the rest of the GHC compilation pipeline as a target for a functional language, and he mentioned that asking here would likely be more productive when it comes to the GHC API. >> >> I'm wondering where the best place would be for me to look in the API for building core expressions, and also whether it is possible to trigger the GHC code-generation pipeline from the core stage onwards. >> >> Best, >> Ara >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Oct 25 20:32:46 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 25 Oct 2018 16:32:46 -0400 Subject: Treatment of unknown pragmas In-Reply-To: <8F6232A6-9CD6-4DF3-87D9-E7A32EE8EBD7@gmail.com> References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> <87mur35jou.fsf@smart-cactus.org> <8F6232A6-9CD6-4DF3-87D9-E7A32EE8EBD7@gmail.com> Message-ID: <87bm7h6a3b.fsf@smart-cactus.org> Niklas Larsson writes: > Hi! > > Why not follow the standard in that pragmas were intended for all > tools consuming Haskell ... That much isn't clear to me. The Report defines the syntax very specifically to be for "compiler pragmas" to be used by "compiler implementations". I personally consider "the compiler" to be something different from tools like HLint. Of course, on the other hand it also specified that implementations should ignore unknown pragmas, so the original authors clearly didn't anticipate that non-compiler tooling would be so common. > ... and not for GHCs exclusive use? 
> All that would require is to make the warning opt-in.

Disabling the unknown pragma warning by default would mean that users would not be warned if they mis-spelled LANGAGE or INILNE, which could result in frustrating error messages for the uninitiated. It seems to me that we should try to avoid this given just how common these pragmas are in practice.

Finally, I think it would be generally useful to have a properly namespaced syntax for tooling pragmas. After all, we otherwise end up with tools claiming random bits of syntax, resulting in an unnecessarily steep learning curve and potentially syntactically-colliding tools.

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From rae at cs.brynmawr.edu  Fri Oct 26 03:59:09 2018
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Thu, 25 Oct 2018 23:59:09 -0400
Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms
In-Reply-To: 
References: 
Message-ID: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu>

I'm afraid I don't understand what your new syntax means. And, while I know it doesn't work today, what's wrong (in theory) with

{-# COMPLETE LL #-}

No types! (That's a rare thing for me to extol...)

I feel I must be missing something here.

Thanks,
Richard

> On Oct 25, 2018, at 12:24 PM, Sylvain Henry wrote:
>
> > In the case where all the patterns are polymorphic, a user must
> > provide a type signature but we accept the definition regardless of
> > the type signature they provide.
> Currently we can specify the type *constructor* in a COMPLETE pragma: > > pattern J :: a -> Maybe a > pattern J a = Just a > > pattern N :: Maybe a > pattern N = Nothing > > {-# COMPLETE N, J :: Maybe #-} > > Instead if we could specify the type with its free vars, we could refer to them in conlike signatures: > > {-# COMPLETE N, [ J :: a -> Maybe a ] :: Maybe a #-} > The COMPLETE pragma for LL could be: > > {-# COMPLETE [ LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a #-} > > I'm borrowing the list comprehension syntax on purpose because it would allow to define a set of conlikes from a type-list (see my request [1]): > > {-# COMPLETE [ V :: (c :< cs) => c -> Variant cs > | c <- cs > ] :: Variant cs > #-} > > > To make things more formal, when the pattern-match checker > > requests a set of constructors for some data type constructor T, the > > checker returns: > > > > * The original set of data constructors for T > > * Any COMPLETE sets of type T > > > > Note the use of the phrase *type constructor*. The return type of all > > constructor-like things in a COMPLETE set must all be headed by the > > same type constructor T. Since `LL`'s return type is simply a type > > variable `a`, this simply doesn't work with the design of COMPLETE > > sets. > > Could we use a mechanism similar to instance resolution (with FlexibleInstances) for the checker to return matching COMPLETE sets instead? > > > --Sylvain > > > [1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From marlowsd at gmail.com  Fri Oct 26 07:01:12 2018
From: marlowsd at gmail.com (Simon Marlow)
Date: Fri, 26 Oct 2018 08:01:12 +0100
Subject: Treatment of unknown pragmas
In-Reply-To: <87bm7h6a3b.fsf@smart-cactus.org>
References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> <87mur35jou.fsf@smart-cactus.org> <8F6232A6-9CD6-4DF3-87D9-E7A32EE8EBD7@gmail.com> <87bm7h6a3b.fsf@smart-cactus.org>
Message-ID: 

What pragma syntax should other Haskell compilers use? I don't think it's fair for GHC to have exclusive rights to the pragma syntax from the report, and other compilers should not be relegated to using {-# X-FOOHC ... #-}. But now we have all the same issues again.

Cheers
Simon

On Thu, 25 Oct 2018 at 21:32, Ben Gamari wrote:
> Niklas Larsson writes:
>
> > Hi!
> >
> > Why not follow the standard in that pragmas were intended for all
> > tools consuming Haskell ...
>
> That much isn't clear to me. The Report defines the syntax very
> specifically to be for "compiler pragmas" to be used by "compiler
> implementations". I personally consider "the compiler" to be something
> different from tools like HLint.
>
> Of course, on the other hand it also specified that implementations
> should ignore unknown pragmas, so the original authors clearly didn't
> anticipate that non-compiler tooling would be so common.
>
> > ... and not for GHCs exclusive use?
> > All that would require is to make the warning opt-in.
>
> Disabling the unknown pragma warning by default would mean that users
> not be warned if they mis-spelled LANGAGE or INILNE, which could result
> in frustrating error messages for the uninitiated. It seems to me that
> we should try to avoid this given just how common these pragmas are in
> practice.
>
> Finally, in general I think it would be generally useful to have a
> properly namespaced syntax for tooling pragmas.
> Afterall, we otherwise end up with tools claiming random bits of syntax,
> resulting in an unnecessarily steep learning curve and potentially
> syntactically-colliding tools.
>
> Cheers,
>
> - Ben
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sylvain at haskus.fr  Fri Oct 26 09:20:55 2018
From: sylvain at haskus.fr (Sylvain Henry)
Date: Fri, 26 Oct 2018 11:20:55 +0200
Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms
In-Reply-To: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu>
References: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu>
Message-ID: <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr>

Sorry I wasn't clear. I'm not an expert on the topic but it seems to me that there are two orthogonal concerns:

1) How does the checker retrieve COMPLETE sets.

Currently it seems to "attach" them to data type constructors (e.g. Maybe). If instead it retrieved them by matching types (e.g. "Maybe a", "a") we could write:

{-# COMPLETE LL #-}

From an implementation point of view, it seems to me that finding complete sets would become similar to finding (flexible, overlapping) class instances. Pseudo-code:

class Complete a where
  conlikes :: [ConLike]

instance Complete (Maybe a) where
  conlikes = [Nothing @a, Just @a]

instance Complete (Maybe a) where
  conlikes = [N @a, J @a]

instance Complete a where
  conlikes = [LL @a]

...

2) COMPLETE set depending on the matched type.

It is a thread hijack from me but while we are thinking about refactoring COMPLETE pragmas to support polymorphism, maybe we could support this too. The idea is to build a different set of conlikes depending on the matched type. Pseudo-code:

instance Complete (Variant cs) where
  conlikes = [V @c | c <- cs] -- cs is a type list

(I don't really care about the pragma syntax)

Sorry for the thread hijack!
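The instance-resolution analogy can be modelled in a few lines. This is only a toy sketch (all names are invented, none of it is GHC code): each COMPLETE set is registered with a head type, and lookup matches heads one-way against the scrutinee type, so a bare type-variable head, i.e. a polymorphic {-# COMPLETE LL #-}, matches every scrutinee:

```haskell
-- Toy model of type-directed COMPLETE-set lookup (invented names).
data Ty = TyCon String [Ty] | TyVar String
  deriving (Eq, Show)

type ConLike = String

-- One registered COMPLETE set: a head type plus its conlikes.
data CompleteSet = CompleteSet { headTy :: Ty, setConLikes :: [ConLike] }

-- One-way matching: a head variable matches anything, like a
-- FlexibleInstances head. (Repeated variables are not checked for
-- consistency; that is enough for this sketch.)
matchesHead :: Ty -> Ty -> Bool
matchesHead (TyVar _) _ = True
matchesHead (TyCon c as) (TyCon c' bs) =
  c == c' && length as == length bs && and (zipWith matchesHead as bs)
matchesHead _ _ = False

-- All sets whose head matches the scrutinee's type.
lookupSets :: [CompleteSet] -> Ty -> [[ConLike]]
lookupSets sets tau = [setConLikes s | s <- sets, matchesHead (headTy s) tau]

registered :: [CompleteSet]
registered =
  [ CompleteSet (TyCon "Maybe" [TyVar "a"]) ["Nothing", "Just"]
  , CompleteSet (TyCon "Maybe" [TyVar "a"]) ["N", "J"]
  , CompleteSet (TyVar "a")                 ["LL"]  -- {-# COMPLETE LL #-}
  ]
```

With these registrations, a scrutinee of type `Maybe Int` retrieves all three sets, while `Bool` retrieves only the polymorphic `["LL"]` set.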
Regards,
Sylvain

On 10/26/18 5:59 AM, Richard Eisenberg wrote:
> I'm afraid I don't understand what your new syntax means. And, while I
> know it doesn't work today, what's wrong (in theory) with
>
> {-# COMPLETE LL #-}
>
> No types! (That's a rare thing for me to extol...)
>
> I feel I must be missing something here.
>
> Thanks,
> Richard
>
>> On Oct 25, 2018, at 12:24 PM, Sylvain Henry wrote:
>>
>> > In the case where all the patterns are polymorphic, a user must
>> > provide a type signature but we accept the definition regardless of
>> > the type signature they provide.
>>
>> Currently we can specify the type *constructor* in a COMPLETE pragma:
>>
>> pattern J :: a -> Maybe a
>> pattern J a = Just a
>>
>> pattern N :: Maybe a
>> pattern N = Nothing
>>
>> {-# COMPLETE N, J :: Maybe #-}
>>
>> Instead if we could specify the type with its free vars, we could
>> refer to them in conlike signatures:
>>
>> {-# COMPLETE N, [ J :: a -> Maybe a ] :: Maybe a #-}
>>
>> The COMPLETE pragma for LL could be:
>>
>> {-# COMPLETE [ LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a #-}
>>
>> I'm borrowing the list comprehension syntax on purpose because it
>> would allow to define a set of conlikes from a type-list (see my
>> request [1]):
>>
>> {-# COMPLETE [ V :: (c :< cs) => c -> Variant cs | c <- cs ] :: Variant cs #-}
>>
>> > To make things more formal, when the pattern-match checker
>> > requests a set of constructors for some data type constructor T, the
>> > checker returns:
>> >
>> > * The original set of data constructors for T
>> > * Any COMPLETE sets of type T
>> >
>> > Note the use of the phrase *type constructor*. The return type of all
>> > constructor-like things in a COMPLETE set must all be headed by the
>> > same type constructor T. Since `LL`'s return type is simply a type
>> > variable `a`, this simply doesn't work with the design of COMPLETE
>> > sets.
>>
>> Could we use a mechanism similar to instance resolution (with
>> FlexibleInstances) for the checker to return matching COMPLETE sets
>> instead?
>>
>> --Sylvain
>>
>> [1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com  Fri Oct 26 11:42:02 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 26 Oct 2018 11:42:02 +0000
Subject: ghc-prim package-data.mk failed
Message-ID: 

This has started happening when I do 'sh validate -no-clean':

"inplace/bin/ghc-cabal" configure libraries/ghc-prim dist-install --with-ghc="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-stage1" --with-ghc-pkg="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-pkg" --disable-library-for-ghci --enable-library-vanilla --enable-library-for-ghci --disable-library-profiling --enable-shared --with-hscolour="/home/simonpj/.cabal/bin/HsColour" --configure-option=CFLAGS="-Wall -fno-stack-protector -Werror=unused-but-set-variable -Wno-error=inline" --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " --gcc-options="-Wall -fno-stack-protector -Werror=unused-but-set-variable -Wno-error=inline " --with-gcc="gcc" --with-ld="ld.gold" --with-ar="ar" --with-alex="/home/simonpj/.cabal/bin/alex" --with-happy="/home/simonpj/.cabal/bin/happy"
Configuring ghc-prim-0.5.3...
configure: WARNING: unrecognized options: --with-compiler
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/bin/gcc accepts -g... yes
checking for /usr/bin/gcc option to accept ISO C89... none needed
checking whether GCC supports __atomic_ builtins... no
configure: creating ./config.status
config.status: error: cannot find input file: `ghc-prim.buildinfo.in'
libraries/ghc-prim/ghc.mk:4: recipe for target 'libraries/ghc-prim/dist-install/package-data.mk' failed
make[1]: *** [libraries/ghc-prim/dist-install/package-data.mk] Error 1
Makefile:122: recipe for target 'all' failed
make: *** [all] Error 2

I think it is fixed by saying 'sh validate' (i.e. starting from scratch), but that is slow. I'm not 100% certain about the circumstances under which it happens, but can anyone help me diagnose what is going on when it does?

Thanks
Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rae at cs.brynmawr.edu  Fri Oct 26 17:04:45 2018
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Fri, 26 Oct 2018 13:04:45 -0400
Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms
In-Reply-To: <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr>
References: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu> <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr>
Message-ID: 
Here's a stab at a formalization of this idea, written in metatheory, not Haskell: Let C : Type -> Set of set of patterns. C will be the lookup function for complete sets. Suppose we have a pattern match, at type tau, matching against patterns Ps. Let S = C(tau). S is then a set of sets of patterns. The question is this: Is there a set s in S such that Ps is a superset of s? If yes, then the match is complete. What do we think of this design? Of course, the challenge is in building C, but we'll tackle that next. Richard > On Oct 26, 2018, at 5:20 AM, Sylvain Henry wrote: > > Sorry I wasn't clear. I'm not an expert on the topic but it seems to me that there are two orthogonal concerns: > > 1) How does the checker retrieve COMPLETE sets. > > Currently it seems to "attach" them to data type constructors (e.g. Maybe). If instead it retrieved them by matching types (e.g. "Maybe a", "a") we could write: > > {-# COMPLETE LL #-} > From an implementation point of view, it seems to me that finding complete sets would become similar to finding (flexible, overlapping) class instances. Pseudo-code: > class Complete a where > conlikes :: [ConLike] > instance Complete (Maybe a) where > conlikes = [Nothing @a, Just @a] > instance Complete (Maybe a) where > conlikes = [N @a, J @a] > instance Complete a where > conlikes = [LL @a] > ... > > 2) COMPLETE set depending on the matched type. > > It is a thread hijack from me but while we are thinking about refactoring COMPLETE pragmas to support polymorphism, maybe we could support this too. The idea is to build a different set of conlikes depending on the matched type. Pseudo-code: > instance Complete (Variant cs) where > conlikes = [V @c | c <- cs] -- cs is a type list > (I don't really care about the pragma syntax) > > Sorry for the thread hijack! > Regards, > Sylvain > > On 10/26/18 5:59 AM, Richard Eisenberg wrote: >> I'm afraid I don't understand what your new syntax means. 
And, while I know it doesn't work today, what's wrong (in theory) with >> >> {-# COMPLETE LL #-} >> >> No types! (That's a rare thing for me to extol...) >> >> I feel I must be missing something here. >> >> Thanks, >> Richard >> >>> On Oct 25, 2018, at 12:24 PM, Sylvain Henry > wrote: >>> >>> > In the case where all the patterns are polymorphic, a user must >>> > provide a type signature but we accept the definition regardless of >>> > the type signature they provide. >>> Currently we can specify the type *constructor* in a COMPLETE pragma: >>> >>> pattern J :: a -> Maybe a >>> pattern J a = Just a >>> >>> pattern N :: Maybe a >>> pattern N = Nothing >>> >>> {-# COMPLETE N, J :: Maybe #-} >>> >>> Instead if we could specify the type with its free vars, we could refer to them in conlike signatures: >>> >>> {-# COMPLETE N, [ J :: a -> Maybe a ] :: Maybe a #-} >>> The COMPLETE pragma for LL could be: >>> >>> {-# COMPLETE [ LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a #-} >>> >>> I'm borrowing the list comprehension syntax on purpose because it would allow to define a set of conlikes from a type-list (see my request [1]): >>> >>> {-# COMPLETE [ V :: (c :< cs) => c -> Variant cs >>> | c <- cs >>> ] :: Variant cs >>> #-} >>> >>> > To make things more formal, when the pattern-match checker >>> > requests a set of constructors for some data type constructor T, the >>> > checker returns: >>> > >>> > * The original set of data constructors for T >>> > * Any COMPLETE sets of type T >>> > >>> > Note the use of the phrase *type constructor*. The return type of all >>> > constructor-like things in a COMPLETE set must all be headed by the >>> > same type constructor T. Since `LL`'s return type is simply a type >>> > variable `a`, this simply doesn't work with the design of COMPLETE >>> > sets. >>> >>> Could we use a mechanism similar to instance resolution (with FlexibleInstances) for the checker to return matching COMPLETE sets instead? 
>>> >>> --Sylvain >>> >>> >>> [1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Fri Oct 26 18:32:51 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 26 Oct 2018 14:32:51 -0400 Subject: slow execution of built executables on a Mac Message-ID: Hi devs, I have a shiny, new iMac in my office. It's thus frustrating that it takes my iMac longer to build GHC than my trusty 28-month-old laptop. Building devel2 on a fresh checkout takes just about an hour. By contrast, my laptop is done after 30 minutes of work (same build settings). The laptop has a 2.8GHz Intel i7 running macOS 10.13.5; the desktop has a 3.5GHz Intel i5 running macOS 10.13.6. Both bootstrapped from the binary distro of GHC 8.6.1. Watching GHC build, everything is snappy enough during the stage-1 build. But then, as soon as we start using GHC-produced executables, things slow down. It's most noticeable in the rts_dist_HC phase, which crawls. Stage 2 is pretty slow, too. So: is there anything anyone knows about recent Macs not liking locally built executables? Or is there some local setting that I need to update? The prepackaged GHC seems to work well, so that gives me hope that someone knows what setting to tweak. Thanks! 
Richard From carter.schonwald at gmail.com Fri Oct 26 20:43:21 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 26 Oct 2018 16:43:21 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Message-ID: Hey David, i'm looking at the git history and it doesn't seem to have any commits between 8.4.3 and 8.4.4 related to the dataToTag issue does any haskell code in the wild trigger the bug on 8.4 series? On Sun, Oct 21, 2018 at 11:06 AM David Feuer wrote: > Did this release fix the dataToTag# issue? I think that has a number of > people concerned. > On Oct 18, 2018, at 11:46 AM, Ben Gamari wrote: >> >> Jens Petersen writes: >> >> On Mon, 15 Oct 2018 at 07:17, Ben Gamari wrote: >>> >>>> The GHC team is pleased to announce the availability of GHC 8.4.4 >>>> >>> >>> Thank you >>> >>> As always, the full release notes can be found in the users guide, >>>> >>> >>> https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#base-library >>> >>> I think this base text is out of date, and could be dropped, right? >> >> >> Indeed it is. >> >> I see that stm was also bumped: though it is not listed in >>> https://downloads.haskell.org/~ghc/8.4.4/docs/html/users_guide/8.4.4-notes.html#included-libraries >> >> >> Also true. However, in my mind this isn't nearly as significant as the >> `text` bump, which affects many users and fixes extremely bad >> misbehavior. >> >> Thanks for noticing these!
>> >> Cheers >> >> - Ben >> >> ------------------------------ >> >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Oct 26 20:56:32 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 26 Oct 2018 16:56:32 -0400 Subject: can't get docs / sphinx building for ghc 8.4.4 Message-ID: hello all, i'm getting the following error quite consistently on OSX, for getting the docs working / built and it's quite impossible to install if the docs aren't fully built i've bolded below an excerpt how do i fix thisssss please help meeeee :) many thanks. *Extension error:* *The 'ghc-flag' directive is already registered to domain std* *docs/users_guide/ghc.mk:16 : recipe for target 'docs/users_guide/users_guide.pdf' failed* *make[1]: *** [docs/users_guide/users_guide.pdf] Error 2* *make[1]: *** Waiting for unfinished jobs....* *Extension error:* *The 'ghc-flag' directive is already registered to domain std* *Extension error:* *The 'ghc-flag' directive is already registered to domain std* *docs/users_guide/ghc.mk:28 : recipe for target 'docs/users_guide/build-man/ghc.1' failed* the full end of the build is as follows: /usr/local/bin/ginstall -c -m 755 utils/genapply/dist/build/tmp/genapply inplace/lib/bin/genapply cp libffi/build/inst/lib/libffi.a rts/dist/build/libCffi.a "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -H32m -O -Wall -this-unit-id ghc-prim-0.5.2.0 -hide-all-packages -i -ilibraries/ghc-prim/. -ilibraries/ghc-prim/dist-install/build -Ilibraries/ghc-prim/dist-install/build -ilibraries/ghc-prim/dist-install/build/./autogen -Ilibraries/ghc-prim/dist-install/build/./autogen -Ilibraries/ghc-prim/.
-optP-include -optPlibraries/ghc-prim/dist-install/build/./autogen/cabal_macros.h -package-id rts -this-unit-id ghc-prim -XHaskell2010 -O2 -no-user-package-db -rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir libraries/ghc-prim/dist-install/build -hidir libraries/ghc-prim/dist-install/build -stubdir libraries/ghc-prim/dist-install/build -dynamic-too -c libraries/ghc-prim/./GHC/Tuple.hs -o libraries/ghc-prim/dist-install/build/GHC/Tuple.o -dyno libraries/ghc-prim/dist-install/build/GHC/Tuple.dyn_o Running Sphinx v1.8.1 Running Sphinx v1.8.1 Running Sphinx v1.8.1 loading pickled environment... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 4 source files that are out of date updating environment: 0 added, 0 changed, 0 removed looking for now-outdated files... none found pickling environment... done preparing documents... done writing output... [ 25%] index Extension error: The 'ghc-flag' directive is already registered to domain std docs/users_guide/ghc.mk:16: recipe for target 'docs/users_guide/users_guide.pdf' failed make[1]: *** [docs/users_guide/users_guide.pdf] Error 2 make[1]: *** Waiting for unfinished jobs.... Extension error: The 'ghc-flag' directive is already registered to domain std Extension error: The 'ghc-flag' directive is already registered to domain std docs/users_guide/ghc.mk:28: recipe for target 'docs/users_guide/build-man/ghc.1' failed make[1]: *** [docs/users_guide/build-man/ghc.1] Error 2 docs/users_guide/ghc.mk:16: recipe for target 'docs/users_guide/build-html/users_guide/index.html' failed make[1]: *** [docs/users_guide/build-html/users_guide/index.html] Error 2 writing output... [100%] markup /Users/carter/dev-checkouts/ghc-tree/ghc-8.4.4-checkout-clang-build/utils/haddock/doc/invoking.rst:457: WARNING: unknown option: -cpp generating indices... genindex writing additional pages... search copying static files... done copying extra files... 
done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded, 1 warning. The HTML pages are in .build-html. cp -R utils/haddock/doc/.build-html utils/haddock/doc/haddock Makefile:122: recipe for target 'all' failed make: *** [all] Error 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Fri Oct 26 21:02:28 2018 From: david.feuer at gmail.com (David Feuer) Date: Fri, 26 Oct 2018 17:02:28 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Message-ID: On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald wrote: > > Hey David, i'm looking at the git history and it doesn't seem to have any commits between 8.4.3 and 8.4.4 related to the dataToTag issue > > does any haskell code in the wild trigger the bug on 8.4 series? I don't think anyone knows. It seems clear that it's considerably easier to trigger the bug in 8.6, but as far as I can tell, there's no reason to believe that it couldn't be triggered by realistic code in 8.4. From carter.schonwald at gmail.com Fri Oct 26 21:02:53 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 26 Oct 2018 17:02:53 -0400 Subject: slow execution of built executables on a Mac In-Reply-To: References: Message-ID: theory one: are you setting the intree gmp flag? and/or don't have the gmp library installed? maybe your ghc is using integer simple! theory two: don't set -j greater than about 8 or perhaps 16? (some parts of ghc are slower on too parallel a setup?) those are my two off the cuff guesses you could also look at system monitor to see if you're being IO bound or memory or CPU bound (possible failure situation: the repo is under dropbox or something and dropbox is eating your memory bandwidth trying to sync stuff?)
On Fri, Oct 26, 2018 at 2:33 PM Richard Eisenberg wrote: > Hi devs, > > I have a shiny, new iMac in my office. It's thus frustrating that it takes > my iMac longer to build GHC than my trusty 28-month-old laptop. Building > devel2 on a fresh checkout takes just about an hour. By contrast, my laptop > is done after 30 minutes of work (same build settings). The laptop has a > 2.8GHz Intel i7 running macOS 10.13.5; the desktop has a 3.5GHz Intel i5 > running macOS 10.13.6. Both bootstrapped from the binary distro of GHC > 8.6.1. > > Watching GHC build, everything is snappy enough during the stage-1 build. > But then, as soon as we start using GHC-produced executables, things slow > down. It's most noticeable in the rts_dist_HC phase, which crawls. Stage 2 > is pretty slow, too. > > So: is there anything anyone knows about recent Macs not liking locally > built executables? Or is there some local setting that I need to update? > The prepackaged GHC seems to work well, so that gives me hope that someone > knows what setting to tweak. > > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Fri Oct 26 21:15:10 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 26 Oct 2018 17:15:10 -0400 Subject: slow execution of built executables on a Mac In-Reply-To: References: Message-ID: > On Oct 26, 2018, at 5:02 PM, Carter Schonwald wrote: > > are you setting the intree gmp flag? and/or dont have gmp the library installed? maybe your ghc is using integer simple! Intriguing possibility. I haven't done anything to install gmp. I'm now at home, away from this machine, but I'll try this next week. I have a hunch you're right. Perhaps we should have a warning when this is the case? 
> > theory two: don't set -j greater than about 8 or perhaps 16? (some parts of ghc are slower on too parallel a setup?) I'm at -j5, so this isn't the problem. > > you could also look at system monitor to see if you're being IO bound or memory or CPU bound I tried this earlier, and it looked like I wasn't CPU-bound. I suspected a slow SSD somehow (if that's even possible). But watching the build this morning showed me that stage 1 is snappy while later work isn't, so that made me doubt my earlier guess. And installing the 6GB of LaTeX wasn't unduly slow, either. > (possible failure situation: the repo is under dropbox or something and dropbox is eating your memory bandwidth trying to sync stuff?) Thanks for the idea, but I know to keep GHC away from Dropbox. :) I'll give 90% confidence on the gmp thing. Thanks! Richard From a.pelenitsyn at gmail.com Fri Oct 26 21:18:07 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Fri, 26 Oct 2018 17:18:07 -0400 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Message-ID: David, when you say "dataToTag# issue", you mean #15696? It seems from the discussion there that it is still under investigation. -- Best, Artem On Fri, 26 Oct 2018 at 17:02 David Feuer wrote: > On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald > wrote: > > > > Hey David, i'm looking at the git history and it doesn't seem to have any > commits between 8.4.3 and 8.4.4 related to the dataToTag issue > > > > does any haskell code in the wild trigger the bug on 8.4 series? > > I don't think anyone knows. It seems clear that it's considerably > easier to trigger the bug in 8.6, but > as far as I can tell, there's no reason to believe that it couldn't be > triggered by realistic code in > 8.4.
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at seidel.io Fri Oct 26 21:30:13 2018 From: eric at seidel.io (Eric Seidel) Date: Fri, 26 Oct 2018 17:30:13 -0400 Subject: slow execution of built executables on a Mac In-Reply-To: References: Message-ID: <1540589413.2763151.1556127096.4141CEFF@webmail.messagingengine.com> rts_dist_HC is where it builds the various versions of the RTS, right? I noticed a similar slowness building the RTS at ICFP on my MacBook Pro (macOS 10.12). I don't think my battery lasted long enough to get to stage 2. I'm afraid I don't have any clue why it's gotten so slow, but you're not alone! I did notice that it seems to build many more variants of the RTS than before, but I haven't tried to build GHC in a long time. On Fri, Oct 26, 2018, at 14:32, Richard Eisenberg wrote: > Hi devs, > > I have a shiny, new iMac in my office. It's thus frustrating that it > takes my iMac longer to build GHC than my trusty 28-month-old laptop. > Building devel2 on a fresh checkout takes just about an hour. By > contrast, my laptop is done after 30 minutes of work (same build > settings). The laptop has a 2.8GHz Intel i7 running macOS 10.13.5; the > desktop has a 3.5GHz Intel i5 running macOS 10.13.6. Both bootstrapped > from the binary distro of GHC 8.6.1. > > Watching GHC build, everything is snappy enough during the stage-1 > build. But then, as soon as we start using GHC-produced executables, > things slow down. It's most noticeable in the rts_dist_HC phase, which > crawls. Stage 2 is pretty slow, too. > > So: is there anything anyone knows about recent Macs not liking locally > built executables? Or is there some local setting that I need to update? > The prepackaged GHC seems to work well, so that gives me hope that > someone knows what setting to tweak. 
> > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Sat Oct 27 01:43:18 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 26 Oct 2018 21:43:18 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu> <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr> Message-ID: off hand, once we're in viewpattern / pattern synonym land, ORDER of the abstracted constructors matters! consider foo,bar,baz,quux,boom :: Nat -> String plus some pattern synonyms i name "PowerOfTwo", "Even" and "Odd" foo (PowerOfTwo x) = "power of two" foo (Even x) = "even" foo (Odd x) = "odd" bar (Even x) = "even" bar (Odd x) = "odd" baz (PowerOfTwo x) = "power of two" baz (Odd x) = "odd" quux (Even x) = "even" quux (Odd x) = "odd" quux (PowerOfTwo x) = "power of two" boom (Even x) = "even" boom (PowerOfTwo x) = "power of two" boom (Odd x) = "odd" foo and bar are both total definitions with unambiguous meanings, even though bar's patterns are a suffix of foo's! baz is partial! both boom and quux have a redundant overlapping case, power of two! so some thoughts 1) order matters! 2) pattern synonyms at type T are part of an infinite lattice, Top element == accept everything, Bottom element = reject everything 3) PowerOfTwo <= Even in the Lattice for Natural, both are "incomparable" with Odd 4) for a simple case on a single value at type T, assume c1 <= c2 , then if c1 x -> ... is before c2 x -> in the cases, then both are useful/computationally meaningful OTHERWISE when it's case x :: T of c2 x -> ... c1 x -> ...
then the 'c1 x' is redundant. This is slightly orthogonal to other facets of this discussion so far, but i realized that Richard's Set of Sets of patterns model misses some useful/meaningful examples/extra structure from a) the implicit lattice of different patterns possibly being super/subsets (which is still something of an approximation, but with these examples I shared above I hope i've sketched out some motivation ) b) we can possibly model HOW ordering of clauses impacts coverage/totality/redundancy of clauses I'm not sure if it'd be pleasant/good from a user experience perspective to have this sort of partial ordering modelling stuff, but it certainly seems like it would help distinguish some useful examples where the program meaning / coverage is sensitive to clause ordering i can try to spell this out more if there's interest, but I wanted to share while the iron was hot best! -Carter On Fri, Oct 26, 2018 at 1:05 PM Richard Eisenberg wrote: > Aha. So you're viewing complete sets as a type-directed property, where we > can take a type and look up what complete sets of patterns of that type > might be. > > Then, when checking a pattern-match for completeness, we use the inferred > type of the pattern, access its complete sets, and see if these match up. > (Perhaps an implementation may optimize this process.) > > What I like about this approach is that it works well with GADTs, where, > e.g., VNil is a complete set for type Vec a Zero but not for Vec a n. > > I take back my claim of "No types!" then, as this does sound like it has > the right properties. > > For now, I don't want to get bogged down by syntax -- let's figure out how > the idea should work first, and then we can worry about syntax. > > Here's a stab at a formalization of this idea, written in metatheory, not > Haskell: > > Let C : Type -> Set of set of patterns. C will be the lookup function for > complete sets. Suppose we have a pattern match, at type tau, matching > against patterns Ps.
Let S = C(tau). S is then a set of sets of patterns. > The question is this: Is there a set s in S such that Ps is a superset of > s? If yes, then the match is complete. > > What do we think of this design? Of course, the challenge is in building > C, but we'll tackle that next. > > Richard > > On Oct 26, 2018, at 5:20 AM, Sylvain Henry wrote: > > Sorry I wasn't clear. I'm not an expert on the topic but it seems to me > that there are two orthogonal concerns: > > 1) How does the checker retrieve COMPLETE sets. > > Currently it seems to "attach" them to data type constructors (e.g. > Maybe). If instead it retrieved them by matching types (e.g. "Maybe a", > "a") we could write: > > {-# COMPLETE LL #-} > > From an implementation point of view, it seems to me that finding complete > sets would become similar to finding (flexible, overlapping) class > instances. Pseudo-code: > > class Complete a where > conlikes :: [ConLike] > instance Complete (Maybe a) where > conlikes = [Nothing @a, Just @a] > instance Complete (Maybe a) where > conlikes = [N @a, J @a] > instance Complete a where > conlikes = [LL @a] > ... > > > 2) COMPLETE set depending on the matched type. > > It is a thread hijack from me but while we are thinking about refactoring > COMPLETE pragmas to support polymorphism, maybe we could support this too. > The idea is to build a different set of conlikes depending on the matched > type. Pseudo-code: > > instance Complete (Variant cs) where > conlikes = [V @c | c <- cs] -- cs is a type list > > (I don't really care about the pragma syntax) > > Sorry for the thread hijack! > Regards, > Sylvain > > > On 10/26/18 5:59 AM, Richard Eisenberg wrote: > > I'm afraid I don't understand what your new syntax means. And, while I > know it doesn't work today, what's wrong (in theory) with > > {-# COMPLETE LL #-} > > No types! (That's a rare thing for me to extol...) > > I feel I must be missing something here. 
> > Thanks, > Richard > > On Oct 25, 2018, at 12:24 PM, Sylvain Henry wrote: > > > In the case where all the patterns are polymorphic, a user must > > provide a type signature but we accept the definition regardless of > > the type signature they provide. > > Currently we can specify the type *constructor* in a COMPLETE pragma: > > pattern J :: a -> Maybe a > pattern J a = Just a > pattern N :: Maybe a > pattern N = Nothing > {-# COMPLETE N, J :: Maybe #-} > > > Instead if we could specify the type with its free vars, we could refer to > them in conlike signatures: > > {-# COMPLETE N, [ J :: a -> Maybe a ] :: Maybe a #-} > > The COMPLETE pragma for LL could be: > > {-# COMPLETE [ LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a #-} > > > I'm borrowing the list comprehension syntax on purpose because it would > allow to define a set of conlikes from a type-list (see my request [1]): > > {-# COMPLETE [ V :: (c :< cs) => c -> Variant cs > | c <- cs > ] :: Variant cs > #-} > > > To make things more formal, when the pattern-match checker > requests a > set of constructors for some data type constructor T, the > checker > returns: > > * The original set of data constructors for T > * Any COMPLETE > sets of type T > > Note the use of the phrase **type constructor**. The > return type of all > constructor-like things in a COMPLETE set must all be > headed by the > same type constructor T. Since `LL`'s return type is simply > a type > variable `a`, this simply doesn't work with the design of COMPLETE > > sets. > > Could we use a mechanism similar to instance resolution (with > FlexibleInstances) for the checker to return matching COMPLETE sets instead?
> > --Sylvain > > > [1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Sat Oct 27 07:23:14 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sat, 27 Oct 2018 10:23:14 +0300 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Message-ID: Hi all, Just a quick update about #16969. The primop itself is buggy in 8.4 (and it should be buggy even in older versions -- although I haven't confirmed this) and 2 of the 3 regressions added for it currently fail with GHC 8.4.4. I don't know what the plan is for fixing it in 8.4, Ben may say more about this, but I'm guessing that we'll see another 8.4 release. So if you're using the primop directly, just don't! If you're not using it directly, then as David says the bug is much harder to trigger in GHC 8.4 (and even older versions) than in GHC 8.6, but we don't know if it's _impossible_ to trigger in GHC 8.4 and older versions. We fixed the bug in GHC HEAD weeks ago (with Phab:5201), current investigation in #15696 is not blocking any releases, we're just tying up some loose ends and doing refactoring to handle some similar primops more uniformly. This is only refactoring and documentation -- known bugs are already fixed. 
(I now realize that it would've been better to do this in a separate ticket to avoid confusion) Ömer Artem Pelenitsyn , 27 Eki 2018 Cmt, 00:18 tarihinde şunu yazdı: > > David, when you say "dataToTag# issue", you mean #15696? It seems from the discussion there that it is still under investigation. > > -- > Best, Artem > > On Fri, 26 Oct 2018 at 17:02 David Feuer wrote: >> >> On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald >> wrote: >> > >> > Hey David, i'm looking at the git history andit doesn't seem to have any commits between 8.4.3 and 8.4.4 related to the dataToTag issue >> > >> > does any haskell code in the while trigger the bug on 8.4 series? >> >> I don't think anyone knows. It seems clear that it's considerably >> easier to trigger the bug in 8.6, but >> as far as I can tell, there's no reason to believe that it couldn't be >> triggered by realistic code in >> 8.4. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Sat Oct 27 07:28:41 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sat, 27 Oct 2018 10:28:41 +0300 Subject: [ANNOUNCE] GHC 8.4.4 released In-Reply-To: References: <878t30npgc.fsf@smart-cactus.org> <87y3avjm1f.fsf@smart-cactus.org> <84655ecb-3249-4536-b978-355340796dc1@well-typed.com> Message-ID: Sorry for the typos in my previous email. #16969 -> #15696 (https://ghc.haskell.org/trac/ghc/ticket/15696) regressions -> regression tests Phab:5201 -> Phab:D5201 (https://phabricator.haskell.org/D5201) By "the primop" I mean dataToTag#. Ömer Ömer Sinan Ağacan , 27 Eki 2018 Cmt, 10:23 tarihinde şunu yazdı: > > Hi all, > > Just a quick update about #16969. 
> > The primop itself is buggy in 8.4 (and it should be buggy even in older > versions -- although I haven't confirmed this) and 2 of the 3 regressions added > for it currently fail with GHC 8.4.4. I don't know what the plan is for fixing > it in 8.4, Ben may say more about this, but I'm guessing that we'll see another > 8.4 release. > > So if you're using the primop directly, just don't! If you're not using it > directly, then as David says the bug is much harder to trigger in GHC 8.4 (and > even older versions) than in GHC 8.6, but we don't know if it's _impossible_ to > trigger in GHC 8.4 and older versions. > > We fixed the bug in GHC HEAD weeks ago (with Phab:5201), current investigation > in #15696 is not blocking any releases, we're just tying up some loose ends and > doing refactoring to handle some similar primops more uniformly. This is only > refactoring and documentation -- known bugs are already fixed. > > (I now realize that it would've been better to do this in a separate ticket to > avoid confusion) > > Ömer > > Artem Pelenitsyn , 27 Eki 2018 Cmt, 00:18 > tarihinde şunu yazdı: > > > > David, when you say "dataToTag# issue", you mean #15696? It seems from the discussion there that it is still under investigation. > > > > -- > > Best, Artem > > > > On Fri, 26 Oct 2018 at 17:02 David Feuer wrote: > >> > >> On Fri, Oct 26, 2018 at 4:43 PM Carter Schonwald > >> wrote: > >> > > >> > Hey David, i'm looking at the git history andit doesn't seem to have any commits between 8.4.3 and 8.4.4 related to the dataToTag issue > >> > > >> > does any haskell code in the while trigger the bug on 8.4 series? > >> > >> I don't think anyone knows. It seems clear that it's considerably > >> easier to trigger the bug in 8.6, but > >> as far as I can tell, there's no reason to believe that it couldn't be > >> triggered by realistic code in > >> 8.4. 
> >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Sun Oct 28 06:27:54 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 28 Oct 2018 02:27:54 -0400 Subject: Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> <1539719290.3266129.1544285568.4036F292@webmail.messagingengine.com> <87bm7sllil.fsf@smart-cactus.org> <87mur35jou.fsf@smart-cactus.org> <8F6232A6-9CD6-4DF3-87D9-E7A32EE8EBD7@gmail.com> <87bm7h6a3b.fsf@smart-cactus.org> Message-ID: <878t2i60ws.fsf@smart-cactus.org> Simon Marlow writes: > What pragma syntax should other Haskell compilers use? I don't think it's > fair for GHC to have exclusive rights to the pragma syntax form the report, > and other compilers should not be relegated to using {-# X-FOOHC ... #-}. > But now we have all the same issues again. > In my mind other compilers are of course free to use the {-# ... #-} syntax has they see fit and GHC has no other choice but to accommodate. Arguably the report should have just specified that the {-# ... #-} syntax as namespaced, avoiding this whole situation. In the case of tools we have the opportunity to correct this mistake since no strong convention has established itself yet. This is what I am advocating that we do. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Sun Oct 28 15:04:21 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Sun, 28 Oct 2018 11:04:21 -0400 Subject: [Haskell] Treatment of unknown pragmas In-Reply-To: References: <8736t5n5kc.fsf@smart-cactus.org> <87zhvdln5l.fsf@smart-cactus.org> Message-ID: Hello Daniel, Annotations API was discussed earlier in this thread. Main points against are: Neil: Significant compilation performance penalty and extra recompilation. ANN pragmas is what HLint currently uses. Brandon: The problem with ANN is it's part of the plugins API, and as such does things like compiling the expression into the program in case a plugin generates code using its value, plus things like recompilation checking end up assuming plugins are in use and doing extra checking. Using it as a compile-time pragma is actually fairly weird from that standpoint. -- Best, Artem On Sat, 27 Oct 2018 at 22:12 Daniel Wagner wrote: > I don't have a really strong opinion, but... isn't this (attaching > string-y data to source constructs) pretty much exactly what GHC's > annotation pragma is for? > ~d > > On Tue, Oct 16, 2018 at 3:14 PM Ben Gamari wrote: > >> Vladislav Zavialov writes: >> >> > What about introducing -fno-warn-pragma=XXX? People who use HLint will >> > add -fno-warn-pragma=HLINT to their build configuration. >> > >> A warning flag is an interesting way to deal with the issue. On the >> other hand, it's not great from an ergonomic perspective; afterall, this >> would mean that all users of HLint (and any other tool requiring special >> pragmas) include this flag in their build configuration. A typical >> Haskell project already needs too much such boilerplate, in my opinion. >> >> I think it makes a lot of sense to have a standard way for third-parties >> to attach string-y information to Haskell source constructs. 
While it's >> not strictly speaking necessary to standardize the syntax, doing >> so minimizes the chance that tools overlap and hopefully reduces >> the language ecosystem learning curve. >> >> Cheers, >> >> - Ben >> _______________________________________________ >> > Haskell mailing list >> Haskell at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell >> > _______________________________________________ > Haskell mailing list > Haskell at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Oct 28 17:56:24 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 28 Oct 2018 13:56:24 -0400 Subject: GHC 8.8 freeze in three weeks Message-ID: <874ld65518.fsf@smart-cactus.org> Hello everyone, Incredibly enough we are quickly coming up on three months since the nominal GHC 8.6.1 release date. This means that the GHC 8.8 branch is quickly approaching. We will plan on cutting the branch on November 18th. Note that this is in three weeks. If you have a patch that you would like to see in GHC 8.8 and it isn't yet up on Phabricator, do let me know. Moreover, if you have something that will be in GHC 8.8 and you haven't yet added it to the release status page [1], please add it. Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.8.1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Oct 29 01:24:13 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 28 Oct 2018 21:24:13 -0400 Subject: GHC's Fall 2018 HCAR submission Message-ID: <87va5l4kau.fsf@smart-cactus.org> Hello everyone, The Haskell Community Activities Report is coming up and I have prepared the skeleton for GHC's contribution [1]. 
If you have a project cooking or have recently had a patch land do have a look to make sure it's recognized in the submission. Cheers, - Ben https://ghc.haskell.org/trac/ghc/wiki/Status/Oct18 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at cs.brynmawr.edu Mon Oct 29 02:54:35 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 28 Oct 2018 22:54:35 -0400 Subject: Visible dependent quantification / CUSKs Message-ID: Hi all, I see visible dependent quantification and top-level kind signatures on the release plan for GHC 8.8. Is there a diff for these I've missed? Or is something in the works? Sorry if I've just missed it go by! Thanks, Richard From rae at cs.brynmawr.edu Mon Oct 29 03:54:56 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 28 Oct 2018 23:54:56 -0400 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu> <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr> Message-ID: > On Oct 26, 2018, at 9:43 PM, Carter Schonwald wrote: > > ORDER of the abstracted constructors matters! That's a very good point. So we don't have a set of sets -- we have a set of lists (where normal constructors -- which have no overlap -- would appear in the lists in every possible permutation). Again, please don't take my set of lists too seriously from an implementation point of view. We clearly wouldn't implement it this way. But I want to use this abstraction to understand the shape of the problem and what an idealized solution might look like before worrying about implementation and syntax. Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sylvain at haskus.fr Mon Oct 29 13:51:50 2018 From: sylvain at haskus.fr (Sylvain Henry) Date: Mon, 29 Oct 2018 14:51:50 +0100 Subject: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms In-Reply-To: References: <7031558E-BDD9-4571-A5C3-CFF486455BEA@cs.brynmawr.edu> <46855143-c211-92d8-5106-6ea0ae972b68@haskus.fr> Message-ID: <574c5795-97cb-481a-6c1e-44f06b55fc87@haskus.fr> I've just found this related ticket: https://ghc.haskell.org/trac/ghc/ticket/14422 On 10/26/18 7:04 PM, Richard Eisenberg wrote: > Aha. So you're viewing complete sets as a type-directed property, > where we can take a type and look up what complete sets of patterns of > that type might be. > > Then, when checking a pattern-match for completeness, we use the > inferred type of the pattern, access its complete sets, and check > whether these match up. (Perhaps an implementation may optimize this process.) > > What I like about this approach is that it works well with GADTs, > where, e.g., VNil is a complete set for type Vec a Zero but not for > Vec a n. > > I take back my claim of "No types!" then, as this does sound like it > has the right properties. > > For now, I don't want to get bogged down by syntax -- let's figure out > how the idea should work first, and then we can worry about syntax. > > Here's a stab at a formalization of this idea, written in metatheory, > not Haskell: > > Let C : Type -> Set of set of patterns. C will be the lookup function > for complete sets. Suppose we have a pattern match, at type tau, > matching against patterns Ps. Let S = C(tau). S is then a set of sets > of patterns. The question is this: Is there a set s in S such that Ps > is a superset of s? If yes, then the match is complete. > > What do we think of this design? Of course, the challenge is in > building C, but we'll tackle that next. > > Richard > >> On Oct 26, 2018, at 5:20 AM, Sylvain Henry > > wrote: >> >> Sorry I wasn't clear.
I'm not an expert on the topic but it seems to >> me that there are two orthogonal concerns: >> >> 1) How does the checker retrieve COMPLETE sets. >> >> Currently it seems to "attach" them to data type constructors (e.g. >> Maybe). If instead it retrieved them by matching types (e.g. "Maybe >> a", "a") we could write: >> >> {-# COMPLETE LL #-} >> >> From an implementation point of view, it seems to me that finding >> complete sets would become similar to finding (flexible, overlapping) >> class instances. Pseudo-code: >> >> class Complete a where >>   conlikes :: [ConLike] >> instance Complete (Maybe a) where >> conlikes = [Nothing @a, Just @a] >> instance Complete (Maybe a) where >> conlikes = [N @a, J @a] >> instance Complete a where >>   conlikes = [LL @a] >> ... >> >> >> 2) COMPLETE set depending on the matched type. >> >> It is a thread hijack from me but while we are thinking about >> refactoring COMPLETE pragmas to support polymorphism, maybe we could >> support this too. The idea is to build a different set of conlikes >> depending on the matched type. Pseudo-code: >> >> instance Complete (Variant cs) where >> conlikes = [V @c | c <- cs] -- cs is a type list >> >> (I don't really care about the pragma syntax) >> >> Sorry for the thread hijack! >> Regards, >> Sylvain >> >> >> On 10/26/18 5:59 AM, Richard Eisenberg wrote: >>> I'm afraid I don't understand what your new syntax means. And, while >>> I know it doesn't work today, what's wrong (in theory) with >>> >>> {-# COMPLETE LL #-} >>> >>> No types! (That's a rare thing for me to extol...) >>> >>> I feel I must be missing something here. >>> >>> Thanks, >>> Richard >>> >>>> On Oct 25, 2018, at 12:24 PM, Sylvain Henry >>> > wrote: >>>> >>>> > In the case where all the patterns are polymorphic, a user must >>>> > provide a type signature but we accept the definition regardless of >>>> > the type signature they provide. 
>>>> Currently we can specify the type *constructor* in a COMPLETE pragma: >>>> >>>> pattern J :: a -> Maybe a >>>> pattern J a = Just a >>>> pattern N :: Maybe a >>>> pattern N = Nothing >>>> {-# COMPLETE N, J :: Maybe #-} >>>> >>>> >>>> Instead if we could specify the type with its free vars, we could >>>> refer to them in conlike signatures: >>>> >>>> {-# COMPLETE N, [J :: a -> Maybe a ] :: Maybe a #-} >>>> >>>> The COMPLETE pragma for LL could be: >>>> >>>> {-# COMPLETE [LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] >>>> :: a #-} >>>> >>>> I'm borrowing the list comprehension syntax on purpose because it >>>> would allow defining a set of conlikes from a type-list (see my >>>> request [1]): >>>> >>>> {-# COMPLETE [V :: (c :< cs) => c -> Variant cs | c <- cs ] :: >>>> Variant cs #-} >>>> >>>> > To make things more formal, when the pattern-match checker >>>> > requests a set of constructors for some data type constructor T, the >>>> > checker returns: >>>> > >>>> > * The original set of data constructors for T >>>> > * Any COMPLETE sets of type T >>>> > >>>> > Note the use of the phrase *type constructor*. The return type of all >>>> > constructor-like things in a COMPLETE set must all be headed by the >>>> > same type constructor T. Since `LL`'s return type is simply a type >>>> > variable `a`, this simply doesn't work with the design of COMPLETE >>>> > sets. >>>> >>>> Could we use a mechanism similar to instance resolution (with >>>> FlexibleInstances) for the checker to return matching COMPLETE sets >>>> instead? >>>> >>>> --Sylvain >>>> >>>> >>>> [1]https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > -------------- next part -------------- An HTML attachment was scrubbed...
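The monomorphic case that works today, which the thread uses as its baseline, compiles as an ordinary module (GHC ≥ 8.2; a minimal sketch):

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- J and N rename Maybe's constructors. The COMPLETE pragma tells the
-- coverage checker that a match covering both is exhaustive, so
-- 'describe' compiles without an incomplete-patterns warning. The
-- polymorphic-return-type case (LL, whose return type is a bare type
-- variable) is exactly what this form cannot express, which is the
-- problem discussed above.
pattern J :: a -> Maybe a
pattern J a = Just a

pattern N :: Maybe a
pattern N = Nothing

{-# COMPLETE N, J #-}

describe :: Maybe Int -> String
describe N     = "nothing"
describe (J x) = "just " ++ show x

main :: IO ()
main = mapM_ (putStrLn . describe) [N, J 42]
```

The pragma may also name the type constructor explicitly, as in Sylvain's `{-# COMPLETE N, J :: Maybe #-}`; either way the set is keyed on a type constructor, never a type variable.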
URL: From ben at smart-cactus.org Mon Oct 29 17:01:50 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 29 Oct 2018 13:01:50 -0400 Subject: CI status Message-ID: <87r2g84rgm.fsf@smart-cactus.org> Hi everyone, As you likely have noticed, GHC's CI is a bit of a mess since the Hadrian merge, especially for differentials based on commits prior to the merge. The problem is a tiresome limitation of Harbormaster's CI strategy [1]. I have taken this opportunity to finally move testing of Differentials to CircleCI. Unfortunately this has taken longer than anticipated due to more tiresome Phabricator issues. At the moment the only guidance I can offer for those in need of CI is to sit tight; the problem is being worked on. Cheers, - Ben [1] Namely, Harbormaster attempts to reuse working trees to perform builds. However, it fails to remove stale submodules, meaning any attempt to checkout a post-Hadrian-merge commit fails. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From qdunkan at gmail.com Mon Oct 29 18:24:30 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Mon, 29 Oct 2018 11:24:30 -0700 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: <87va5l4kau.fsf@smart-cactus.org> References: <87va5l4kau.fsf@smart-cactus.org> Message-ID: There are some incomplete sentences under "Further improvements to runtime performance:" Also, congratulations to Tamar Christina for becoming the new IO manager! Also, when I copy paste the links in the "At the time of writing" section, the backslashes in the search query mess it up. Maybe a markdown rendering step would remove those? On Sun, Oct 28, 2018 at 6:24 PM Ben Gamari wrote: > > Hello everyone, > > The Haskell Community Activities Report is coming up and I have prepared > the skeleton for GHC's contribution [1]. 
If you have a project cooking > or have recently had a patch land do have a look to make sure it's > recognized in the submission. > > Cheers, > > - Ben > > > https://ghc.haskell.org/trac/ghc/wiki/Status/Oct18 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From a.pelenitsyn at gmail.com Mon Oct 29 18:42:12 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 29 Oct 2018 14:42:12 -0400 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: References: <87va5l4kau.fsf@smart-cactus.org> Message-ID: On Mon, 29 Oct 2018 at 14:25 Evan Laforge wrote: > Also, when I copy paste the links in the "At the time of writing" > section, the backslashes in the search query mess it up. Maybe a > markdown rendering step would remove those? > I'm not sure It makes much sense to use {{{...}}} to format the document in the first place. Maybe use normal wiki-text? I can decorate links if this is approved. -- Best, Artem > On Sun, Oct 28, 2018 at 6:24 PM Ben Gamari wrote: > > > > Hello everyone, > > > > The Haskell Community Activities Report is coming up and I have prepared > > the skeleton for GHC's contribution [1]. If you have a project cooking > > or have recently had a patch land do have a look to make sure it's > > recognized in the submission. > > > > Cheers, > > > > - Ben > > > > > > https://ghc.haskell.org/trac/ghc/wiki/Status/Oct18 > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From csaba.hruska at gmail.com Mon Oct 29 19:11:38 2018 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Mon, 29 Oct 2018 20:11:38 +0100 Subject: Using GHC Core as a Language Target In-Reply-To: <70EC1EE1-5F42-40BC-8883-3A3E0E17E599@ara.io> References: <70EC1EE1-5F42-40BC-8883-3A3E0E17E599@ara.io> Message-ID: There is also a nice intro blog post about GHC internals with an example how to compile a custom constructed module AST. - Dive into GHC: Pipeline - Dive into GHC: Intermediate Forms - Dive into GHC: Targeting Core Cheers, Csaba Hruska On Thu, Oct 25, 2018 at 8:51 PM Ara Adkins wrote: > Heya, > > Those are exactly the kind of pointers I was hoping for. Thanks Iavor. > > I’m sure I’ll have more questions with time, but that’s a great starting > point. > > _ara > > On 25 Oct 2018, at 19:20, Iavor Diatchki wrote: > > Hello, > > I have not done what you are asking, but here is how I'd approach the > problem. > > 1. Assuming you already have some Core, you'd have to figure out how to > include it with the rest of the GHC pipeline: > * A lot of the code that glues everything together is in > `compiler/main`. Modules of interest seem to be `DriverPipeline`, > `HscMain`, and `PipelineMoand` > * A quick looks suggests that maybe you want to call `hscGenHardCode` > in `HscMain`, with your core program inside the `CgGuts` argument. > * Exactly how you setup things probably depends on how much of the > rest of the Haskell ecosystem you are trying to integrate with (separate > compilation, avoiding recompilation, support for packages, etc.) > > 2. The syntax for Core is in `compiler/coreSyn`, with the basic AST being > in module `CoreSyn`. Module `MkCore` has a lot of helpers for working > with core syntax. > > 3. The "desugarer" (in `compiler/deSugar`) is the GHC phase that > translates the front end syntax (hsSyn) into core, so that should have lots > of examples of how to generate core. 
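To make the shape of the task concrete, here is a deliberately tiny *toy model* of Core's expression type — not the real `CoreSyn.Expr`, which additionally carries types, coercions, and binders with uniques — just an illustration of the kind of tree one builds (via `MkCore` helpers such as `mkCoreLams` and `mkCoreApps` in the real API) before handing it to the rest of the pipeline:

```haskell
-- Toy stand-in for Core expressions; the real definition lives in
-- compiler/coreSyn/CoreSyn.hs and is considerably richer.
data Expr
  = Var String       -- occurrence of an identifier
  | Lit Integer      -- literal
  | App Expr Expr    -- application
  | Lam String Expr  -- lambda
  deriving (Eq, Show)

-- \x -> f x 1, built directly as a tree
example :: Expr
example = Lam "x" (App (App (Var "f") (Var "x")) (Lit 1))

-- Node count, just to exercise the structure
size :: Expr -> Int
size (Var _)   = 1
size (Lit _)   = 1
size (App f x) = 1 + size f + size x
size (Lam _ b) = 1 + size b

main :: IO ()
main = print (example, size example)
```

All names here are illustrative; consult the module pointers above (`CoreSyn`, `MkCore`, the desugarer) for the real constructors and smart builders in your GHC version.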
> > Cheers, > -Iavor > > > > > > > > On Mon, Oct 22, 2018 at 1:46 AM Ara Adkins wrote: > >> Hey All, >> >> I was chatting to SPJ about the possibility of using GHC Core + the rest >> of the GHC compilation pipeline as a target for a functional language, and >> he mentioned that asking here would likely be more productive when it comes >> to the GHC API. >> >> I'm wondering where the best place would be for me to look in the API for >> building core expressions, and also whether it is possible to trigger the >> GHC code-generation pipeline from the core stage onwards. >> >> Best, >> Ara >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Oct 29 19:32:20 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 29 Oct 2018 15:32:20 -0400 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: References: <87va5l4kau.fsf@smart-cactus.org> Message-ID: <87bm7c4khq.fsf@smart-cactus.org> Artem Pelenitsyn writes: > On Mon, 29 Oct 2018 at 14:25 Evan Laforge wrote: > >> Also, when I copy paste the links in the "At the time of writing" >> section, the backslashes in the search query mess it up. Maybe a >> markdown rendering step would remove those? >> > > I'm not sure It makes much sense to use {{{...}}} to format the document in > the first place. Maybe use normal wiki-text? I can decorate links if this > is approved. > This is what I have done in the past. However, it is unfortunately quite labor intensive since I need to convert the document to Wiki markup, then back to TeX for submission. 
Instead with this iteration I have decided to try just writing the thing in Markdown and convert to TeX at the end. I was hoping to install a Trac processor for Markdown rendering but sadly Trac put up resistance. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From me at ara.io Mon Oct 29 19:37:33 2018 From: me at ara.io (Ara Adkins) Date: Mon, 29 Oct 2018 19:37:33 +0000 Subject: Using GHC Core as a Language Target In-Reply-To: References: <70EC1EE1-5F42-40BC-8883-3A3E0E17E599@ara.io> Message-ID: That’s a brilliant resource! Thanks so much for the links. _ara > On 29 Oct 2018, at 19:11, Csaba Hruska wrote: > > There is also a nice intro blog post about GHC internals with an example how to compile a custom constructed module AST. > Dive into GHC: Pipeline > Dive into GHC: Intermediate Forms > Dive into GHC: Targeting Core > Cheers, > Csaba Hruska > >> On Thu, Oct 25, 2018 at 8:51 PM Ara Adkins wrote: >> Heya, >> >> Those are exactly the kind of pointers I was hoping for. Thanks Iavor. >> >> I’m sure I’ll have more questions with time, but that’s a great starting point. >> >> _ara >> >>> On 25 Oct 2018, at 19:20, Iavor Diatchki wrote: >>> >>> Hello, >>> >>> I have not done what you are asking, but here is how I'd approach the problem. >>> >>> 1. Assuming you already have some Core, you'd have to figure out how to include it with the rest of the GHC pipeline: >>> * A lot of the code that glues everything together is in `compiler/main`. Modules of interest seem to be `DriverPipeline`, `HscMain`, and `PipelineMoand` >>> * A quick looks suggests that maybe you want to call `hscGenHardCode` in `HscMain`, with your core program inside the `CgGuts` argument. 
>>> * Exactly how you setup things probably depends on how much of the rest of the Haskell ecosystem you are trying to integrate with (separate compilation, avoiding recompilation, support for packages, etc.) >>> >>> 2. The syntax for Core is in `compiler/coreSyn`, with the basic AST being in module `CoreSyn`. Module `MkCore` has a lot of helpers for working with core syntax. >>> >>> 3. The "desugarer" (in `compiler/deSugar`) is the GHC phase that translates the front end syntax (hsSyn) into core, so that should have lots of examples of how to generate core. >>> >>> Cheers, >>> -Iavor >>> >>> >>> >>> >>> >>> >>> >>>> On Mon, Oct 22, 2018 at 1:46 AM Ara Adkins wrote: >>>> Hey All, >>>> >>>> I was chatting to SPJ about the possibility of using GHC Core + the rest of the GHC compilation pipeline as a target for a functional language, and he mentioned that asking here would likely be more productive when it comes to the GHC API. >>>> >>>> I'm wondering where the best place would be for me to look in the API for building core expressions, and also whether it is possible to trigger the GHC code-generation pipeline from the core stage onwards. >>>> >>>> Best, >>>> Ara >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From a.pelenitsyn at gmail.com Mon Oct 29 20:14:41 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 29 Oct 2018 16:14:41 -0400 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: <87bm7c4khq.fsf@smart-cactus.org> References: <87va5l4kau.fsf@smart-cactus.org> <87bm7c4khq.fsf@smart-cactus.org> Message-ID: Hi Ben, I see. 
Have you considered using online converting tools like try-pandoc[1]? Is it still painful? [1]: https://pandoc.org/try/ On Mon, 29 Oct 2018 at 15:32 Ben Gamari wrote: > Artem Pelenitsyn writes: > > > On Mon, 29 Oct 2018 at 14:25 Evan Laforge wrote: > > > >> Also, when I copy paste the links in the "At the time of writing" > >> section, the backslashes in the search query mess it up. Maybe a > >> markdown rendering step would remove those? > >> > > > > I'm not sure It makes much sense to use {{{...}}} to format the document > in > > the first place. Maybe use normal wiki-text? I can decorate links if this > > is approved. > > > This is what I have done in the past. However, it is unfortunately quite > labor intensive since I need to convert the document to Wiki markup, > then back to TeX for submission. > > Instead with this iteration I have decided to try just writing the thing > in Markdown and convert to TeX at the end. I was hoping to install a > Trac processor for Markdown rendering but sadly Trac put up resistance. > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Oct 29 20:39:23 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 29 Oct 2018 16:39:23 -0400 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: References: <87va5l4kau.fsf@smart-cactus.org> <87bm7c4khq.fsf@smart-cactus.org> Message-ID: <878t2g4he2.fsf@smart-cactus.org> Artem Pelenitsyn writes: > Hi Ben, > > I see. Have you considered using online converting tools like > try-pandoc[1]? Is it still painful? > I do use Pandoc to convert from Markdown to Latex. However, Pandoc does not support Trac's wiki syntax. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From a.pelenitsyn at gmail.com Mon Oct 29 21:26:57 2018 From: a.pelenitsyn at gmail.com (Artem Pelenitsyn) Date: Mon, 29 Oct 2018 17:26:57 -0400 Subject: GHC's Fall 2018 HCAR submission In-Reply-To: <878t2g4he2.fsf@smart-cactus.org> References: <87va5l4kau.fsf@smart-cactus.org> <87bm7c4khq.fsf@smart-cactus.org> <878t2g4he2.fsf@smart-cactus.org> Message-ID: Ben, I assume, writing a Pandoc writer for TracWiki shouldn't be that hard. Would that be of any help? -- Best, Artem On Mon, 29 Oct 2018 at 16:39 Ben Gamari wrote: > Artem Pelenitsyn writes: > > > Hi Ben, > > > > I see. Have you considered using online converting tools like > > try-pandoc[1]? Is it still painful? > > > I do use Pandoc to convert from Markdown to Latex. However, Pandoc does > not support Trac's wiki syntax. > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Oct 30 04:54:42 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 00:54:42 -0400 Subject: The future of Phabricator Message-ID: <87zhuw2fw2.fsf@smart-cactus.org> TL;DR. For several reasons I think we should consider alternatives to Phabricator. My view is that GitLab seems like the best option. Hello everyone, Over the past year I have been growing increasingly weary of our continued dependence on Phabricator. Without a doubt, its code review interface is the best I have used. However, for a myriad of reasons I am recently questioning whether it is still the best tool for GHC's needs. # The problem There are a number of reasons why I am currently uncertain about Phabricator. For one, at this point we have no options for support in the event that something goes wrong as the company responsible for Phabricator, Phacility, has closed their support channels to non-paying customers. 
Furthermore, in the past year or two Phacility has been placing their development resources in the parts their customers pay them for, which appear to be much different than the parts that we actively use. For this reason, some parts that we rely on seem oddly half-finished. This concern was recently underscored by some rather unfortunate misfeatures in Harbormaster which resulted in broken CI after the Hadrian merge, and now apparent bugs which have made it difficult to migrate to the CircleCI integration we previously prepared. Perhaps most importantly, in our recent development priorities survey our use of Phabricator was the most common complaint by a fair margin, both in the case of respondents who have contributed patches and those who have not. On the whole, past contributors and potential future contributors alike have strongly indicated that they want a more Git-like experience. Of course, this wasn't terribly surprising; this is just the most recent case where contributors have made this preference known. Frankly, in a sense it is hard to blame them. The fact that users need to install a PHP tool, Arcanist, to contribute anything but documentation patches has always seemed like unnecessary friction to me and I would be quite happy to be rid of it. Indeed we have had a quite healthy number of GitHub documentation patches since we started accepting them. This makes me think that there may indeed be potential contributors that we are leaving on the table.

# What to do

With Rackspace support ending at the end of the year, now may be a good time to consider whether we really want to continue down this road. Phabricator is great at code review, but I am less and less certain that it is worth the maintenance uncertainty and potential lost contributors that it costs. Moreover, good alternatives seem closer at hand than they were when we deployed Phabricator.
## Move to GitHub

When people complain about our infrastructure, they often use GitHub as the example of what they would like to see. However, realistically I have a hard time seeing GitHub as a viable option. Its feature set is simply insufficient to handle the needs of a larger project like GHC without significant external tooling (as seen in the case of Rust-lang). The concrete reasons have been well-documented in previous discussions but, to summarize:

* its review functionality is extremely painful to use with larger patches
* rebased patches lead to extreme confusion and lost review comments
* it lacks support for pre-receive hooks, which serve as a last line of defense to prevent unintended submodule changes
* its inability to deal with external tickets is problematic
* there is essentially no possibility that we could eventually migrate GHC's tickets to GitHub's ticket tracker without considerable data loss (much less manage the ticket load that would result), meaning that we would forever be stuck with maintaining Trac
* on a personal note, its search functionality has often left me empty-handed

On the whole, these issues seem pretty hard to surmount.

## Move to GitLab

In using GitLab for another project over the past months I have been positively surprised by its quality. It handles rebased merge requests far better than GitHub, has functional search, and has quite a usable review interface. Furthermore, upstream has been extremely responsive to suggestions for improvement [1]. Even out of the box it seems to be flexible enough to accommodate our needs, supporting integration with external issue trackers, offering reasonable release management features, and supporting code owners to automate review triaging (replacing much of the functionality of Phabricator's Herald). Finally, other FOSS projects' [3] recent migrations from Phabricator to GitLab have shown that GitLab-the-company is quite willing to offer help when needed.
I took some time this weekend to try setting up a quick GHC instance [2] to play around with. Even after just a few hours of playing around I think the result is surprisingly usable. Out of curiosity I also played around with importing some tickets from Trac (building on Matt Pickering's Trac-to-Maniphest migration tool). With relatively little effort I was even able to get nearly all of our tickets (as of 9 months ago) imported while preserving ticket numbers (although there are naturally a few wrinkles that would need to be worked out). Naturally, I think we should treat the question of ticket tracker migration as an orthogonal one to code review, but it is good to know that this is possible. ## Continue with Phabricator Continuing with Phabricator is of course an option. Its review functionality is great and it has served us reasonably well. However, compared to GitLab and even GitHub of today its features seem less distinguished than they once did. Moreover, the prospect of having to maintain a largely stagnant product with no support strikes me as a slightly dangerous game to play. Working around the issues we have recently encountered has already cost a non-negligible amount of time. # The bottom line If it wasn't clear already, I think that we should strongly consider a move to GitLab. At this point it seems clear that it isn't going to vanish, has a steady pace of development, is featureful, and available. However, these are just my thoughts. What do you think? 
Cheers, - Ben [1] 11.4 will ship with a file tree view in the code review interface, which I reported (https://gitlab.com/gitlab-org/gitlab-ce/issues/46474) as being one of the Phabricator features I missed the most during review. [2] https://gitlab.home.smart-cactus.org/ghc/ghc/issues/14641 [3] The GNOME and freedesktop.org projects have recently migrated, the former from a hodge-podge of self-hosted services and the latter from Phabricator. -------------- next part -------------- A non-text attachment was scrubbed...
Is this worth opening a ticket, given that this is an older version of the compiler? Has something like been fixed since then or might this be present in newer versions as well? -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Tue Oct 30 09:01:47 2018 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 30 Oct 2018 10:01:47 +0100 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <87zhuw2fw2.fsf@smart-cactus.org> (Ben Gamari's message of "Tue, 30 Oct 2018 00:54:42 -0400") References: <87zhuw2fw2.fsf@smart-cactus.org> Message-ID: <87muqv6c5g.fsf@gmail.com> On 2018-10-30 at 00:54:42 -0400, Ben Gamari wrote: > TL;DR. For several reasons I think we should consider alternatives to > Phabricator. My view is that GitLab seems like the best option. > > > Hello everyone, > > Over the past year I have been growing increasingly weary of our > continued dependence on Phabricator. Without a doubt, its code review > interface is the best I have used. However, for a myriad of reasons I > am recently questioning whether it is still the best tool for GHC's > needs. TL;DR. IMO, Phabricator is still the best tool we know about for *GHC's needs* ;-) [...] > For one, at this point we have no options for support in the event that > something goes wrong as the company responsible for Phabricator, > has closed their support channels to non-paying customers. While it's certainly a good idea to have an emergency plan ready, I don't think we need to act prematurely before something has actually happened. Phabricator is open-source and therefore there's little that can go so catastrophically wrong that we wouldn't give us more than enough time to act upon. 
(Also, there's still the option of becoming part of that paying-customers group and thus influencing their focus -- after all, we'd be contributing to improving an OSS codebase, and not a proprietary closed product such as GitHub.)

> Furthermore, in the past year or two Phacility has been placing their
> development resources in the parts their customers pay them for, which
> appear to be much different from the parts that we actively use. For
> this reason, some parts that we rely on seem oddly half-finished.
>
> This concern was recently underscored by some rather unfortunate
> misfeatures in Harbormaster which resulted in broken CI after the
> Hadrian merge, and now apparent bugs which have made it difficult to
> migrate to the CircleCI integration we previously prepared.
>
> Perhaps most importantly, in our recent development priorities survey
> our use of Phabricator was the most common complaint by a fair margin,
> both in the case of respondents who have contributed patches and those
> who have not. On the whole, past contributors and potential future
> contributors alike have strongly indicated that they want a more
> Git-like experience. Of course, this wasn't terribly surprising; this
> is just the most recent case where contributors have made this
> preference known.
>
> Frankly, in a sense it is hard to blame them. The fact that users need
> to install a PHP tool, Arcanist, to contribute anything but
> documentation patches has always seemed like unnecessary friction to me
> and I would be quite happy to be rid of it.
[...]
> Indeed we have had a quite healthy number of GitHub documentation
> patches since we started accepting them.
While I do agree that Phabricator's impedance mismatch with Git idioms has bugged me ever since we started using it (I even started implementing https://github.com/haskell-infra/arc-lite as a proof of concept but ran out of time), I still consider some of its features unparalleled in PR-based workflows as provided by GitHub or GitLab. For example, to me the support for stacked diffs outweighs any subjective inconvenience brought forward against Phabricator:

https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/?fbclid=IwAR3JyQP5uCn6ENiHOTWd41y5D-U0_CCJ55_23nzKeUYTjgLASHu2dq5QCc0

The PR workflow is perfectly well suited for trivial documentation patches to a project; but those are contributions that frankly are of minor importance. They're surely nice to have, but they're not the kind of contributions that are essential to GHC's long-term sustainability IMO.

The reality IMO is that everybody tends to come up with this or that complaint about a tool which isn't their favorite one, but it's hardly a real barrier to contribution. In fact, I bet the majority of the people who now complain about Phabricator not being GitHub will be vocally unhappy about having to create a GitLab account and that GitLab is not GitHub...
Sure, by not trying to make everyone (specifically non-MVP contributors) happy we might lose some typo fixes or whatever. But do we really want to optimize the workflows for casual drive-by contributors, who may contribute a couple of trivial patches and then never be seen again, at the expense of making life harder for what is already a complex enough task: managing and reviewing complex patches to GHC? There it is paramount to use the best possible code-review facilities, and not to shift the cost from contributors onto the even more important people: the ones maintaining the project and having intimate knowledge of GHC's internals, who unfortunately have a very tight time and cognitive budget to spend on GHC, and who are the ones we really want to be able to review those patches with as little cognitive overhead as possible.

So yes, Phabricator is optimized for reviewers, and that's IMO a very good thing; it outweighs the benefit of bending over backwards to make as many contributors as possible happy, of whom there are orders of magnitude more than there are GHC maintainers & experts.

An interesting talk in this context is "Rebuilding the Cathedral" by Nadia Eghbal:

https://www.youtube.com/watch?v=VS6IpvTWwkQ

It makes the very point that the most important people for a project's sustainability are its maintainers rather than its contributors.

From marlowsd at gmail.com  Tue Oct 30 09:07:54 2018
From: marlowsd at gmail.com (Simon Marlow)
Date: Tue, 30 Oct 2018 09:07:54 +0000
Subject: [GHC DevOps Group] The future of Phabricator
In-Reply-To: <87zhuw2fw2.fsf@smart-cactus.org>
References: <87zhuw2fw2.fsf@smart-cactus.org>
Message-ID: 

I'm entirely happy to move, provided (1) whatever we move to provides the functionality we need, and (2) it's clearly what the community wants (considering both current and future contributors).
In the past when moving to GitHub was brought up, there were a handful of core contributors who argued strongly in favour of Phabricator; do we think that's changed? Do we have any indication of whether the survey respondents who were anti-Phabricator would be pro- or anti-GitLab?

Personally I'd like to optimise for more code review, because I think that more than anything else will increase quality and community ownership of the project. If using new tooling will make code review a more central part of our workflow, then that would be a good thing. Right now I think we're very Trac-centric, and the integration between Trac and Phabricator isn't great; if we could move to a solution with tighter integration between tickets/code-review/wiki, that would be an improvement in my view. But not GitHub, for the reasons you gave.

Would GitLab solve the CI issues? I don't think you mentioned that explicitly.

Cheers
Simon

On Tue, 30 Oct 2018 at 04:54, Ben Gamari wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arian.vanputten at gmail.com  Tue Oct 30 09:16:59 2018
From: arian.vanputten at gmail.com (Arian van Putten)
Date: Tue, 30 Oct 2018 10:16:59 +0100
Subject: [GHC DevOps Group] The future of Phabricator
In-Reply-To: 
References: <87zhuw2fw2.fsf@smart-cactus.org>
Message-ID: 

Gitlab has built-in CI support. This means it's well-integrated. I would expect the CI to improve.

On Tue, Oct 30, 2018, 10:08 Simon Marlow wrote:
> [...]
>
> Cheers
> Simon

> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From david at well-typed.com  Tue Oct 30 09:45:40 2018
From: david at well-typed.com (David Feuer)
Date: Tue, 30 Oct 2018 05:45:40 -0400
Subject: The future of Phabricator
In-Reply-To: <87zhuw2fw2.fsf@smart-cactus.org>
References: <87zhuw2fw2.fsf@smart-cactus.org>
Message-ID: <460df54f-817f-4abb-8cc2-5f4dc26397e8@well-typed.com>

What's to prevent GitLab from doing what Phabricator has once enough companies have committed to it?

David Feuer
Well-Typed Consultant

On Oct 30, 2018, 12:55 AM, Ben Gamari wrote:
>[...]
>
>Cheers,
>
>- Ben

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthewtpickering at gmail.com  Tue Oct 30 11:53:18 2018
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Tue, 30 Oct 2018 11:53:18 +0000
Subject: The future of Phabricator
In-Reply-To: <87zhuw2fw2.fsf@smart-cactus.org>
References: <87zhuw2fw2.fsf@smart-cactus.org>
Message-ID: 

The compelling argument against Phabricator is that (as Ben mentions) parts of the product have remained unfinished whilst seemingly low-priority features are worked on for months. I think at the start Austin had a lot of success interacting with the maintainers, but now you can't make a new ticket unless you are a paying customer.

A compelling argument to move to gitlab is the possibility of tighter integration between the patches and tickets. I'm saying this as someone who much prefers using arcanist and the phabricator diff model to the git PR model, but at the end of the day, everyone who contributes to GHC is able to use both models, as most projects are hosted on github.

I would be interested in reading more about the GNOME and freedesktop switch to gitlab. In particular the technical details of the migration.
So I fully support Ben's judgement here and hope that we can make a decision with haste. Cheers, Matt On Tue, Oct 30, 2018 at 4:55 AM Ben Gamari wrote: > > > TL;DR. For several reasons I think we should consider alternatives to > Phabricator. My view is that GitLab seems like the best option. > > > Hello everyone, > > Over the past year I have been growing increasingly weary of our > continued dependence on Phabricator. Without a doubt, its code review > interface is the best I have used. However, for a myriad of reasons I > am recently questioning whether it is still the best tool for GHC's > needs. > > > # The problem > > There are a number of reasons why I am currently uncertain about > Phabricator. > > For one, at this point we have no options for support in the event that > something goes wrong as the company responsible for Phabricator, > Phacility, has closed their support channels to non-paying customers. > Furthermore, in the past year or two Phacility has been placing their > development resources in the parts their customers pay them for, which > appear to be much different that the parts that we actively use. For > this reason, some parts that we rely on seem oddly half-finished. > > This concern was recently underscored by some rather unfortunate > misfeatures in Harbormaster which resulted in broken CI after the > Hadrian merge and now apparent bugs which have made it difficult to > migrate to the CircleCI integration we previously prepared. > > Perhaps most importantly, in our recent development priorities survey > our use of Phabricator was the most common complaint by a fair margin, > both in case of respondents who have contributed patches and those who > have not. On the whole, past contributors and potential future > contributors alike have strongly indicated that they want a more > Git-like experience. Of course, this wasn't terribly surprising; this > is just the most recent case where contributors have made this > preference known. 
> > Frankly, in a sense it is hard to blame them. The fact that users need > to install a PHP tool, Arcanist, to contribute anything but > documentation patches has always seemed like unnecessary friction to me > and I would be quite happy to be rid of it. Indeed we have had a quite > healthy number of GitHub documentation patches since we started > accepting them. This makes me think that there may indeed be potential > contributors that we are leaving on the table. > > > # What to do > > With Rackspace support ending at the end of the year, now may be a good > time to consider whether we really want to continue on this road. > Phabricator is great at code review but I am less and less certain that > it is worth the maintenance uncertainty and potential lost contributors > that it costs. > > Moreover, good alternatives seem closer at-hand than they were when we > deployed Phabricator. > > > ## Move to GitHub > > When people complain about our infrastructure, they often use GitHub as > the example of what they would like to see. However, realistically I > have a hard time seeing GitHub as a viable option. Its feature set is simply > insufficient to handle the needs of a larger project like GHC > without significant external tooling (as seen in the case of Rust-lang). 
> > The concrete reasons have been well-documented in previous discussions > but, to summarize, > > * its review functionality is extremely painful to use with larger > patches > > * rebased patches lead to extreme confusion and lost review comments > > * it lacks support for pre-receive hooks, which serve as a last line of > defense to prevent unintended submodule changes > > * its inability to deal with external tickets is problematic > > * there is essentially no possibility that we could eventually migrate > GHC's tickets to GitHub's ticket tracker without considerable data > loss (much less manage the ticket load that would result), meaning > that we would forever be stuck with maintaining Trac. > > * on a personal note, its search functionality has often left me > empty-handed > > On the whole, these issues seem pretty hard to surmount. > > > ## Move to GitLab > > In using GitLab for another project over the past months I have been > positively surprised by its quality. It handles rebased merge requests > far better than GitHub, has functional search, and quite a usable review > interface. Furthermore, upstream has been extremely responsive to > suggestions for improvement [1]. Even out-of-the-box it seems to be > flexible enough to accommodate our needs, supporting integration with > external issue trackers, having reasonable release management features, > and support for code owners to automate review triaging (replacing much > of the functionality of Phabricator's Herald). > > Finally, other FOSS projects' [3] recent migrations from Phabricator to > GitLab have shown that GitLab-the-company is quite willing to offer help > when needed. I took some time this weekend to try setting up a quick GHC > instance [2] to play around with. Even after just a few hours of playing > around I think the result is surprisingly usable. 
> > Out of curiosity I also played around with importing some tickets from > Trac (building on Matt Pickering's Trac-to-Maniphest migration tool). > With relatively little effort I was even able to get nearly all of our > tickets (as of 9 months ago) imported while preserving ticket numbers > (although there are naturally a few wrinkles that would need to be > worked out). Naturally, I think we should treat the question of ticket > tracker migration as an orthogonal one to code review, but it is good to > know that this is possible. > > > ## Continue with Phabricator > > Continuing with Phabricator is of course an option. Its review > functionality is great and it has served us reasonably well. However, > compared to GitLab and even GitHub of today its features seem less > distinguished than they once did. Moreover, the prospect of having to > maintain a largely stagnant product with no support strikes me as a > slightly dangerous game to play. Working around the issues we have > recently encountered has already cost a non-negligible amount of time. > > > # The bottom line > > If it wasn't clear already, I think that we should strongly consider a > move to GitLab. At this point it seems clear that it isn't going to > vanish, has a steady pace of development, is featureful, and available. > > However, these are just my thoughts. What do you think? 
> > Cheers, > > - Ben > > > [1] 11.4 will ship with a file tree view in the code review interface, > which I reported > (https://gitlab.com/gitlab-org/gitlab-ce/issues/46474) as being is > one of the Phabricator features I missed the most during review > > [2] https://gitlab.home.smart-cactus.org/ghc/ghc/issues/14641 > > [3] The GNOME and freedesktop.org projects have recently migrated, the > former from a hodge-podge of self-hosted services and the latter > from Phabricator > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From hvriedel at gmail.com Tue Oct 30 12:02:37 2018 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 30 Oct 2018 13:02:37 +0100 Subject: The future of Phabricator In-Reply-To: (Matthew Pickering's message of "Tue, 30 Oct 2018 11:53:18 +0000") References: <87zhuw2fw2.fsf@smart-cactus.org> Message-ID: <87in1j63s2.fsf@gmail.com> On 2018-10-30 at 11:53:18 +0000, Matthew Pickering wrote: [...] > A compelling argument to move to gitlab is the possibility of tighter > integration between the patches and tickets. You don't need to move to GitLab to achieve that, do you? In fact, we had this project where somebody invested quite a lot of time & effort to implement a proof of concept for migrating Trac tickets into Phabricator which you might remember; it was generally well received but afaik this was silently forgotten about and so the ball was dropped in pushing it further: https://mail.haskell.org/pipermail/ghc-devs/2016-December/013444.html I'd much rather sacrifice Trac's benefits by consolidating Trac tickets into Phabricator (if the tighter integration between code-review & ticketing is the main compelling argument) than to give up on both, Phabricator *and* Trac. 
From chak at justtesting.org Tue Oct 30 12:39:48 2018 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Tue, 30 Oct 2018 13:39:48 +0100 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <87zhuw2fw2.fsf@smart-cactus.org> References: <87zhuw2fw2.fsf@smart-cactus.org> Message-ID: <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> Hi Ben, Thanks a lot for the summary of the situation. As you know, I do dislike Phabricator for the many reasons that you are listing, and it would be nice to finally move to a better system. In particular, it is worth emphasising the fact highlighted by the survey, namely that “Phabricator was the most common complaint by a fair margin, both in case of respondents who have contributed patches and those who have not. On the whole, past contributors and potential future contributors alike have strongly indicated that they want a more Git-like experience.” On the one hand, we want broader participation in GHC development; on the other hand, we constantly ignore the single largest source of frustration for contributors. That just doesn’t make a lot of sense to me, even if it can be explained by the inertia of some existing developers. Unfortunately, if I am not mistaken, GitLab also has a big problem. It requires the use of GitLab CI — i.e., we cannot use CircleCI and Appveyor with it. (At least, that is my current understanding. Please correct me if I am wrong.) Given that large organisations work with large code bases on GitHub, I am still puzzled why GHC somehow cannot do that. (I do understand that the dev process that has been established within GHC is naturally focused around Phabricator and its tools. However, that doesn’t mean it couldn’t be changed to work as well as before, but with another tool.)
In any case, I think, you didn’t mention one of the options we did discuss previously, namely to use GitHub together with a service that adds more sophisticated code review functionality, such as https://reviewable.io This would solve the CI issues without further ado. Cheers, Manuel > Am 30.10.2018 um 05:54 schrieb Ben Gamari : > > > TL;DR. For several reasons I think we should consider alternatives to > Phabricator. My view is that GitLab seems like the best option. > > > Hello everyone, > > Over the past year I have been growing increasingly weary of our > continued dependence on Phabricator. Without a doubt, its code review > interface is the best I have used. However, for a myriad of reasons I > am recently questioning whether it is still the best tool for GHC's > needs. > > > # The problem > > There are a number of reasons why I am currently uncertain about > Phabricator. > > For one, at this point we have no options for support in the event that > something goes wrong as the company responsible for Phabricator, > Phacility, has closed their support channels to non-paying customers. > Furthermore, in the past year or two Phacility has been placing their > development resources in the parts their customers pay them for, which > appear to be much different that the parts that we actively use. For > this reason, some parts that we rely on seem oddly half-finished. > > This concern was recently underscored by some rather unfortunate > misfeatures in Harbormaster which resulted in broken CI after the > Hadrian merge and now apparent bugs which have made it difficult to > migrate to the CircleCI integration we previously prepared. > > Perhaps most importantly, in our recent development priorities survey > our use of Phabricator was the most common complaint by a fair margin, > both in case of respondents who have contributed patches and those who > have not. 
On the whole, past contributors and potential future > contributors alike have strongly indicated that they want a more > Git-like experience. Of course, this wasn't terribly surprising; this > is just the most recent case where contributors have made this > preference known. > > Frankly, in a sense it is hard to blame them. The fact that users need > to install a PHP tool, Arcanist, to contribute anything but > documentation patches has always seemed like unnecessary friction to me > and I would be quite happy to be rid of it. Indeed we have had a quite > healthy number of GitHub documentation patches since we started > accepting them. This makes me thing that there may indeed be potential > contributoes that we are leaving on the table. > > > # What to do > > With Rackspace support ending at the end of year, now may be a good > time to consider whether we really want to continue on this road. > Phabricator is great at code review but I am less and less certain that > it is worth the maintenance uncertainty and potential lost contributors > that it costs. > > Moreover, good alternatives seem closer at-hand than they were when we > deployed Phabricator. > > > ## Move to GitHub > > When people complain about our infrastructure, they often use GitHub as > the example of what they would like to see. However, realistically I > have a hard time seeing GitHub as a viable option. Its feature set is simply > insufficient enough to handle the needs of a larger project like GHC > without significant external tooling (as seen in the case of Rust-lang). 
> > The concrete reasons have been well-documented in previous discussions > but, to summarize, > > * its review functionality is extremely painful to use with larger > patches > > * rebased patches lead to extreme confusion and lost review comments > > * it lacks support for pre-receive hooks, which serve as a last line of > defense to prevent unintended submodule changes > > * its inability to deal with external tickets is problematic > > * there is essentially no possibility that we could eventually migrate > GHC's tickets to GitHub's ticket tracker without considerable data > loss (much less manage the ticket load that would result), meaning > that we would forever be stuck with maintaining Trac. > > * on a personal note, its search functionality has often left me > empty-handed > > On the whole, these issues seem pretty hard to surmount. > > > ## Move to GitLab > > In using GitLab for another project over the past months I have been > positively surprised by its quality. It handles rebased merge requests > far better than GitHub, has functional search, and quite a usable review > interface. Furthermore, upstream has been extremely responsive to > suggestions for improvement [1]. Even out-of-the-box it seems to be > flexible enough to accommodate our needs, supporting integration with > external issue trackers, having reasonable release management features, > and support for code owners to automate review triaging (replacing much > of the functionality of Phabricator's Herald). > > Finally, other FOSS projects' [3] recent migrations from Phabrictor to > GitLab have shown that GitLab-the-company is quite willing to offer help > when needed. I took some time this weekend to try setting up a quick GHC > instance [2] to play around with. Even after just a few hours of playing > around I think the result is surprisingly usable. 
> > Out of curiosity I also played around with importing some tickets from > Trac (building on Matt Pickering's Trac-to-Maniphest migration tool). > With relatively little effort I was even able to get nearly all of our > tickets (as of 9 months ago) imported while preserving ticket numbers > (although there are naturally a few wrinkles that would need to be > worked out). Naturally, I think we should treat the question of ticket > tracker migration as an orthogonal one to code review, but it is good to > know that this is possible. > > > ## Continue with Phabricator > > Continuing with Phabricator is of course an option. Its review > functionality is great and it has served us reasonably well. However, > compared to GitLab and even GitHub of today its features seem less > distinguished than they once did. Moreover, the prospect of having to > maintain a largely stagnant product with no support strikes me as a > slightly dangerous game to play. Working around the issues we have > recently encountered has already cost a non-negligible amount of time. > > > # The bottom line > > If it wasn't clear already, I think that we should strongly consider a > move to GitLab. At this point it seems clear that it isn't going to > vanish, has a steady pace of development, is featureful, and available. > > However, these are just my thoughts. What do you think? 
> > Cheers, > > - Ben > > > [1] 11.4 will ship with a file tree view in the code review interface, > which I reported > (https://gitlab.com/gitlab-org/gitlab-ce/issues/46474) as being is > one of the Phabricator features I missed the most during review > > [2] https://gitlab.home.smart-cactus.org/ghc/ghc/issues/14641 > > [3] The GNOME and freedesktop.org projects have recently migrated, the > former from a hodge-podge of self-hosted services and the latter > from Phabricator > > _______________________________________________ > Ghc-devops-group mailing list > Ghc-devops-group at haskell.org > https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devops-group -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Oct 30 13:28:54 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 30 Oct 2018 13:28:54 +0000 Subject: Hitting RTS bug on GHC 8.0.2 In-Reply-To: References: Message-ID: Looking at the code I can't see how that assertion could possibly fail. Is it reproducible? On Tue, 30 Oct 2018 at 08:38, Harendra Kumar wrote: > Hi, > > I got the following crash in one of my CI tests ( > https://travis-ci.org/composewell/streamly/jobs/448112763): > > test: internal error: RELEASE_LOCK: I do not own this lock: rts/Messages.c > 54 > (GHC version 8.0.2 for x86_64_unknown_linux) > Please report this as a GHC bug: > http://www.haskell.org/ghc/reportabug > > I have hit this just once yet. Is this worth opening a ticket, given that > this is an older version of the compiler? Has something like been fixed > since then or might this be present in newer versions as well? > > -harendra > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Tue Oct 30 13:50:26 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 09:50:26 -0400 Subject: The future of Phabricator In-Reply-To: <460df54f-817f-4abb-8cc2-5f4dc26397e8@well-typed.com> References: <87zhuw2fw2.fsf@smart-cactus.org> <460df54f-817f-4abb-8cc2-5f4dc26397e8@well-typed.com> Message-ID: <87pnvr35nm.fsf@smart-cactus.org> David Feuer writes: > What's to prevent GitLab from doing what Phabricator has once enough > companies have committed to it? > In principle, nothing. However, in general GitLab-the-company seems significantly more devoted to the idea of GitLab as an open-source project than Phacility was to Phabricator. In truth, Phabricator was never really a healthy FOSS project. Yes, the source was available but the maintainers were quite clear that they had no intention of accepting unsolicited patches. GitLab, on the other hand, encourages external contributors, has actively supported adoption by open source projects and has an active set of maintainers. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Oct 30 13:51:22 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 09:51:22 -0400 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <20181030130343.bf3f3qydf6jj47be@nullzig.kosmikus.org> References: <87zhuw2fw2.fsf@smart-cactus.org> <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> <20181030130343.bf3f3qydf6jj47be@nullzig.kosmikus.org> Message-ID: <87o9bb35lz.fsf@smart-cactus.org> Andres Löh writes: > Hi. > >> Unfortunately, if I am not mistaken, GitLab also has a big problem. It requires the use of GitLab CI — i.e., we cannot use CircleCI and Appveyor with it. (At least, that is my current understanding. Please correct me if I am wrong.) > > Just a clarification on this issue.
> > I might be wrong, but my understanding is that: > > - Gitlab offers its own Gitlab CI, but it doesn't force you to use it, > and doesn't prevent you from using other CI solutions. > > - Web-based CI solutions have to specifically support Gitlab for you to > be able to use them with Gitlab. > > - To my knowledge, Appveyor supports Gitlab, but Circle and Travis > currently do not. I know that there are issues open for these systems > to support Gitlab, but I have no idea whether this is likely to happen > anytime soon. For example, for Circle, the discussion seems to be > here: https://circleci.com/ideas/?idea=CCI-I-248 > That is entirely correct; however, we have already invested the effort to build a bridge between Phabricator and CircleCI (only to have deployment complicated by an apparent Phabricator bug). The implementation of this didn't take particularly long and I expect migrating this work to GitLab would be if anything easier (since GitLab has a more-standard REST interface than Phabricator's Conduit). Cheers, - ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Oct 30 14:34:02 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 10:34:02 -0400 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> References: <87zhuw2fw2.fsf@smart-cactus.org> <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> Message-ID: <87lg6f33my.fsf@smart-cactus.org> Manuel M T Chakravarty writes: > Hi Ben, > ... > > Given that large organisations work with large code bases on GitHub, I > am still puzzled why GHC somehow cannot do that. (I do understand that > the dev process that has been established within GHC is naturally > focused around Phabricator and its tools. 
However, that doesn’t mean > it couldn’t be changed to work as well as before, but with another > tool.) > > In any case, I think, you didn’t mention one of the options we did > discuss previously, namely to use GitHub together with a service that > adds more sophisticated code review functionality, such as > > https://reviewable.io > Some of the issues I list with GitHub are entirely orthogonal to GitHub's code review tool. While Rust has shown that large open-source projects can use GitHub, they have also demonstrated that it requires a remarkable amount of automation (I counted three distinct bots in use on the first random pull request I opened). In my own discussions with Rust-lang maintainers they have noted that even with this tooling they are still somewhat unhappy with the amount of manual busywork working within GitHub requires. More generally, I think the move to CircleCI in a way underscores why I'm a bit hesitant to move to another silo. While it generally does "just work", there have been several cases where I have had to have multi-week interactions with CircleCI support to work through inscrutable build issues. Moreover, we continue to be bitten by the inability to prioritize jobs and, despite efforts, still have no ability to build for non-Linux/amd64 platforms. Consequently I'm rather skittish about moving to another platform where we have limited insight into issues, no influence over direction of development, and no ability to fix bugs where necessary. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Tue Oct 30 14:55:33 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 30 Oct 2018 14:55:33 +0000 Subject: ghc-in-ghci memory usage Message-ID: I just tried using the ghc-in-ghci script again and it appears that the memory usage problems have been resolved.
Probably thanks to Simon Marlow, who fixed a lot of space leaks in ghc. A reminder of how to use it: ./utils/ghc-in-ghci/run.sh -j will load ghc into ghci so you can `:r` to recompile when you make a change. Cheers, Matt From ben at well-typed.com Tue Oct 30 15:04:34 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 11:04:34 -0400 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <87muqv6c5g.fsf@gmail.com> References: <87zhuw2fw2.fsf@smart-cactus.org> <87muqv6c5g.fsf@gmail.com> Message-ID: <87h8h3327z.fsf@smart-cactus.org> Herbert Valerio Riedel writes: > On 2018-10-30 at 00:54:42 -0400, Ben Gamari wrote: >> TL;DR. For several reasons I think we should consider alternatives to >> Phabricator. My view is that GitLab seems like the best option. >> >> >> Hello everyone, >> >> Over the past year I have been growing increasingly weary of our >> continued dependence on Phabricator. Without a doubt, its code review >> interface is the best I have used. However, for a myriad of reasons I >> am recently questioning whether it is still the best tool for GHC's >> needs. > > TL;DR. IMO, Phabricator is still the best tool we know about for *GHC's needs* ;-) > > [...] > >> For one, at this point we have no options for support in the event that >> something goes wrong as the company responsible for Phabricator, >> has closed their support channels to non-paying customers. > > While it's certainly a good idea to have an emergency plan ready, I > don't think we need to act prematurely before something has actually > happened. Phabricator is open-source and therefore there's little that > can go so catastrophically wrong that we wouldn't give us more than enough > time to act upon. > My point is that we have already had more than one issue that has cost significant time to resolve. Most recent is the Harbormaster issue, which led to me having to drop everything to fix Harbormaster, which broke as a result of merging Hadrian [1].
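(Editorial aside: footnote [1] below attributes the breakage to a reused working directory holding stale submodule state. A build bot that recycles checkouts generally needs an explicit cleanup step along the following lines. This is only an illustrative sketch run against a throwaway repository — the debris file name is invented and this is not Harbormaster's actual configuration.)

```shell
# Sketch of a pre-build cleanup step for a CI bot that reuses working
# directories. Demonstrated on a throwaway repository so it is safe to
# run anywhere; "stale-submodule-leftover.o" is invented debris.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
touch stale-submodule-leftover.o        # simulate leftovers from a previous build
git clean -dffx                         # remove untracked and ignored files and directories
git submodule sync --recursive          # re-point submodule URLs after branch changes
git submodule update --init --recursive # restore the submodule revisions the tree expects
test ! -e stale-submodule-leftover.o && echo "workdir is clean"
```

The doubled `-ff` in `git clean` matters here: a single `-f` refuses to delete directories that are themselves git repositories, which is exactly what a stale submodule checkout is.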
I spent an hour trying to identify the issue and another hour trying to find a workaround before deciding to give up and just try deploying the CircleCI/Phabricator bridge that we had waiting to be deployed. Unfortunately, I then encountered another Harbormaster bug [2], which I still haven't worked out. After a couple of hours of scratching my head at this one I took to writing this email. [1] Harbormaster attempts to reuse working directories but does an inadequate job of cleaning stale submodules. This results in any post-merge builds failing at `git checkout`. [2] Harbormaster's "Send HTTP Request" build step nondeterministically resets connections after receiving valid responses, failing with the less-than-descriptive error message of "HTTP 28". I still have no idea why. > (Also, there's still the option of becoming part of that > paying-customers group and thus influence their focus -- after all, we'd > be contributing to a improving an OSS codebase; and not a proprietary > closed product such as GitHub) > At this point I'm not convinced that the benefits that Phabricator brings us are worth the significant expense that doing this would incur. >> Furthermore, in the past year or two Phacility has been placing their >> development resources in the parts their customers pay them for, which >> appear to be much different that the parts that we actively use. For >> this reason, some parts that we rely on seem oddly half-finished. >> >> This concern was recently underscored by some rather unfortunate >> misfeatures in Harbormaster which resulted in broken CI after the >> Hadrian merge and now apparent bugs which have made it difficult to >> migrate to the CircleCI integration we previously prepared. >> >> Perhaps most importantly, in our recent development priorities survey >> our use of Phabricator was the most common complaint by a fair margin, >> both in case of respondents who have contributed patches and those who >> have not. 
On the whole, past contributors and potential future >> contributors alike have strongly indicated that they want a more >> Git-like experience. Of course, this wasn't terribly surprising; this >> is just the most recent case where contributors have made this >> preference known. >> >> Frankly, in a sense it is hard to blame them. The fact that users need >> to install a PHP tool, Arcanist, to contribute anything but >> documentation patches has always seemed like unnecessary friction to me >> and I would be quite happy to be rid of it. > > [...] > >> Indeed we have had a quite healthy number of GitHub documentation >> patches since we started accepting them. > > > While I do agree that Phabricator's impedance mismatch with Git idioms > has bugged me ever since we started using it (I even started > implementing https://github.com/haskell-infra/arc-lite as a proof of > concept but ran out of time), I still consider some of its features > unparalleled in PR-based workflows as provided by GitHub or GitLab. > > For example, to me the support for stacked diffs outweights any > subjective inconvenience brought forward against Phabricator > While I agree that stacked diffs are fantastic (I have used this feature often) and without match in the PR model, there are two problems with this argument: 1. In practice they are rarely used in GHC's case. There are a few reasons for this: * many patches are really a single atomic change or otherwise too small to benefit from * contributors are often unaware of the feature, typically coming from a PR-centric model * maintaining (more specifically, rebasing) stacks of differentials is a real headache with arcanist, which leaves the entirety of the work to be performed manually by the user. This is a serious problem since long-lived, large patches are precisely when you want to use a stack. * Phabricator's CI model doesn't properly account for stacking (e.g. 
test the patch by applying each patch in succession to the base commit), meaning the manual steps described above don't even get properly checked by CI. 2. With a slight change in thinking it's possible to get most of the benefit under a PR model. Namely, consider a PR to be a stack of differentials, with each commit being an atomic change in that stack. I have started using this model in another GHC-related project I have been working on and it works quite well, especially since GitLab has good support for reviewing commit-by-commit. > https://jg.gg/2018/09/29/stacked-diffs-versus-pull-requests/?fbclid=IwAR3JyQP5uCn6ENiHOTWd41y5D-U0_CCJ55_23nzKeUYTjgLASHu2dq5QCc0 > > The PR workflow is perfectly well-suited for trivial documentation > patches to a project; but that's for contributions that frankly are of > minor importance; they're surely nice to have, but they're not the kind > of contributions that are essential to GHC's long-term sustainability > IMO. > > The reality IMO is that everybody tends to come up with this or that > complaint about a tool which isn't their favorite one, but it's hardly a > real barrier to contribution. In fact, I bet the majority of the people > that now complain about phabricator not being GitHub will be vocally > unhappy about having to create a GitLab account and that GitLab is not > GitHub... > Indeed this is true; I fully suspect that even if we did move away from Phabricator there would still be detractors of whatever tool we choose. However, I do expect that number to be a drastic reduction from the number of complaints we hear about Phabricator. Moreover, given the number of documentation contributions that materialized since we started accepting pull requests I am hopeful that a reduction in on-boarding friction will lead to an uptick in larger contributions as well.
No doubt it won't be as large as the uptick in documentation contributions, but I think there is a non-zero number of potential contributors who can do useful things around the compiler who are currently scared away by Phabricator. > Sure, by not trying to make everyone (specifically non-MVP contributors) > happy we might loose some typo fixes or whatever, but do we really want > to optimize the workflows for casual drive-by contributors which may > contribute a couple of trivial patches and then never be seen again, at > the expense of making life harder for what is already a complex enough > task: managing and reviewing complex patches to GHC where it's paramount > to use the best possible code-review facilities, and not shift the cost > from contributors to the even more important people, the ones maintaing > the projects as well as having intimate knowledge about the internals of > GHCs (but unfortunately have a very tight time & cognitive time budget > to spend on GHC, and which are the ones we really want to be able to > review those patches with as little cognitive overhead as possible. > My point is that in my experience GitLab's review experience has been close enough to Phabricator's functionality that it hasn't affected my review productivity too much on patches of significant size. There are of course things I miss in Phabricator: * MRs are still an approximation of stacked diffs * Differential's ability to leave comments on lines untouched by the patch under review On the other hand, GitLab has features that Phabricator lacks, * the ability to merge an MR after CI finishes > So yes, phabricator is optimized for reviewers, and that's IMO a very > good thing and outweights the benefit of trying to bend over backwards > to make as many contributors as possible happy, of which there are > orders of magnitudes more than there GHC maintainers&experts. 
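(Editorial aside: Ben's "PR as a stack of differentials" workflow above amounts to keeping each commit on a branch atomic and revising the stack with an interactive rebase rather than appending fixup commits. The sketch below builds such a stack in a throwaway repository; the branch, file names, and commit messages are invented for illustration and do not refer to any real GHC patch.)

```shell
# Build a reviewable stack: one branch, one atomic commit per logical
# change, read by reviewers one commit at a time. All names are invented.
repo=$(mktemp -d)
cd "$repo"
git init -q .
base=$(git symbolic-ref --short HEAD)   # default branch name (master or main)
git -c user.name=ghc -c user.email=ghc@example.com commit -q --allow-empty -m "base"
git checkout -q -b lexer-refactor
echo "split lexer" > lexer.txt
git add lexer.txt
git -c user.name=ghc -c user.email=ghc@example.com commit -q -m "parser: split lexer into its own module"
echo "use new lexer" > parser.txt
git add parser.txt
git -c user.name=ghc -c user.email=ghc@example.com commit -q -m "parser: use the new lexer interface"
# A reviewer walks the stack bottom-up, one commit per logical change:
git log --oneline --reverse "$base..lexer-refactor"
# To revise an earlier step, rewrite the stack instead of appending fixups:
#   git rebase -i "$base"
```

Under this model a per-commit review view plays the role of Phabricator's dependent revisions, at the cost of maintaining the stack by hand with `git rebase -i`.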
>
I really don't view this as a matter of optimising for contributors over reviewers. As someone who does a significant amount of reviewing I have found GitLab to be good enough. It isn't as anaemic as GitHub and has gained most of the features that make Differential usable (namely, reasonable preservation of context in the face of rebasing, the ability to diff iterations of an MR, a file tree interface to orient yourself in a large patch during review, intelligent highlighting of differences within a line, and transactional reviews). In practice I've not found myself in terrible want of anything in particular when doing review on GitLab. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Oct 30 15:14:52 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 30 Oct 2018 11:14:52 -0400 Subject: ghc-prim package-data.mk failed In-Reply-To: References: Message-ID: <87efc731qv.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > This has started happening when I do 'sh validate -no-clean' > Hi Simon, I suspect you have stale content in your tree that `make clean` isn't deleting. Could you try running `git clean -dxf` (note that this will delete any untracked files in your tree) and try validating again? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From harendra.kumar at gmail.com Tue Oct 30 15:33:06 2018 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 30 Oct 2018 21:03:06 +0530 Subject: Hitting RTS bug on GHC 8.0.2 In-Reply-To: References: Message-ID: Hit it only once. Cannot reproduce it after that. I will update if I hit it again.
-harendra On Tue, 30 Oct 2018 at 18:59, Simon Marlow wrote: > Looking at the code I can't see how that assertion could possibly fail. > Is it reproducible? > > On Tue, 30 Oct 2018 at 08:38, Harendra Kumar > wrote: > >> Hi, >> >> I got the following crash in one of my CI tests ( >> https://travis-ci.org/composewell/streamly/jobs/448112763): >> >> test: internal error: RELEASE_LOCK: I do not own this lock: >> rts/Messages.c 54 >> (GHC version 8.0.2 for x86_64_unknown_linux) >> Please report this as a GHC bug: >> http://www.haskell.org/ghc/reportabug >> >> I have hit this just once yet. Is this worth opening a ticket, given that >> this is an older version of the compiler? Has something like been fixed >> since then or might this be present in newer versions as well? >> >> -harendra >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Tue Oct 30 16:05:27 2018 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 30 Oct 2018 17:05:27 +0100 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <87lg6f33my.fsf@smart-cactus.org> References: <87zhuw2fw2.fsf@smart-cactus.org> <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> <87lg6f33my.fsf@smart-cactus.org> Message-ID: Hi Ben, On Tue, 30 Oct 2018 at 15:34, Ben Gamari wrote: > > ... > > Some of the issues I list with GitHub are entirely orthogonal to > GitHub's code review tool. > > While Rust has shown that large open-source projects can use GitHub, > they have also demonstrated that it requires a remarkable amount of > automation. Could you say more about how this would affect GHC? The issues with GitHub that were listed in your original email are all to do with reviews (and to my knowledge addressed by layering reviewable.io on top of GitHub as Manuel says), except a couple. 
Cribbing from followup emails as well, I end up with the following list: * Poor integration with external issue trackers (or at any rate with Trac), I assume meaning, hard to transactionally close issues upon PR merge and other ticket status updates. * No merge-on-green-CI button. So keeping the review UX issues aside for a moment, are there other GitHub limitations that you anticipate would warrant automation bots à la Rust-lang? I'm not too worried about the CI story. The hard part with CircleCI isn't CircleCI, it's getting to a green CircleCI. But once we're there, moving to a green OtherCI shouldn't be much work. Best, Mathieu From sylvain at haskus.fr Tue Oct 30 16:09:26 2018 From: sylvain at haskus.fr (Sylvain Henry) Date: Tue, 30 Oct 2018 17:09:26 +0100 Subject: ghc-prim package-data.mk failed In-Reply-To: References: Message-ID: <9b19ab1e-0688-8a68-d00d-8ec339314ccc@haskus.fr> Hi Simon, IIRC you have to delete "libraries/ghc-prim/configure" which is a left-over after d7fa8695324d6e0c3ea77228f9de93d529afc23e Sylvain On 26/10/2018 13:42, Simon Peyton Jones via ghc-devs wrote: > > This has started happening when I do ‘sh validate –no-clean’ > > "inplace/bin/ghc-cabal" configure libraries/ghc-prim dist-install > --with-ghc="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-stage1" > --with-ghc-pkg="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-pkg" > --disable-library-for-ghci --enable-library-vanilla > --enable-library-for-ghci --disable-library-profiling --enable-shared > --with-hscolour="/home/simonpj/.cabal/bin/HsColour" > --configure-option=CFLAGS="-Wall -fno-stack-protector > -Werror=unused-but-set-variable -Wno-error=inline" > --configure-option=LDFLAGS="  " --configure-option=CPPFLAGS="   " > --gcc-options="-Wall -fno-stack-protector    > -Werror=unused-but-set-variable -Wno-error=inline   " --with-gcc="gcc" > --with-ld="ld.gold" --with-ar="ar" > --with-alex="/home/simonpj/.cabal/bin/alex" > --with-happy="/home/simonpj/.cabal/bin/happy" > > Configuring 
ghc-prim-0.5.3... > > configure: WARNING: unrecognized options: --with-compiler > > checking for gcc... /usr/bin/gcc > > checking whether the C compiler works... yes > > checking for C compiler default output file name... a.out > > checking for suffix of executables... > > checking whether we are cross compiling... no > > checking for suffix of object files... o > > checking whether we are using the GNU C compiler... yes > > checking whether /usr/bin/gcc accepts -g... yes > > checking for /usr/bin/gcc option to accept ISO C89... none needed > > checking whether GCC supports __atomic_ builtins... no > > configure: creating ./config.status > > config.status: error: cannot find input file: `ghc-prim.buildinfo.in' > > *libraries/ghc-prim/ghc.mk:4: recipe for target > 'libraries/ghc-prim/dist-install/package-data.mk' failed* > > make[1]: *** [libraries/ghc-prim/dist-install/package-data.mk] Error 1 > > Makefile:122: recipe for target 'all' failed > > make: *** [all] Error 2 > > I think it is fixed by saying ‘sh validate’ (i.e. start from > scratch).  But that is slow. > > I’m not 100% certain about the circumstances under which it happens, > but can anyone help me diagnose what is going on when it does? > > Thanks > > SImon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Tue Oct 30 16:49:57 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 30 Oct 2018 12:49:57 -0400 Subject: ghc-prim package-data.mk failed In-Reply-To: <9b19ab1e-0688-8a68-d00d-8ec339314ccc@haskus.fr> References: <9b19ab1e-0688-8a68-d00d-8ec339314ccc@haskus.fr> Message-ID: <87zhuv1iry.fsf@smart-cactus.org> Sylvain Henry writes: > Hi Simon, > > IIRC you have to delete "libraries/ghc-prim/configure" which is a > left-over after d7fa8695324d6e0c3ea77228f9de93d529afc23e > Yes, this sounds right. Thanks Sylvain! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Oct 30 17:47:45 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 13:47:45 -0400 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: References: <87zhuw2fw2.fsf@smart-cactus.org> <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> <87lg6f33my.fsf@smart-cactus.org> Message-ID: <87y3af1g3n.fsf@smart-cactus.org> "Boespflug, Mathieu" writes: > Hi Ben, > > On Tue, 30 Oct 2018 at 15:34, Ben Gamari wrote: >> >> ... >> >> Some of the issues I list with GitHub are entirely orthogonal to >> GitHub's code review tool. >> >> While Rust has shown that large open-source projects can use GitHub, >> they have also demonstrated that it requires a remarkable amount of >> automation. > > Could you say more about how this would affect GHC? The issues with > GitHub that were listed in your original email are all to do with > reviews (and to my knowledge addressed by layering reviewable.io on > top of GitHub as Manuel says), except a couple. Cribbing from followup > emails as well, I end up with the following list: > It occurs to me that I never did sit down to write up my thoughts on reviewable. 
I tried doing a few reviews with it [1] and indeed it is quite good; in many ways it is comparable to Differential. It's a bit sluggish to load (loading a moderate-sized patch took over 15 seconds in Firefox) but after that it seems quite usable. The comment functionality is great; the ability to leave comments even on lines that were untouched by the patch is noted. However, it really feels like a band-aid, introducing another layer of indirection and a distinct conversation venue all to make up for what are plain deficiencies in GitHub's core product. Moreover, given that using it implies that we also need to buy in to the other deficiencies in GitHub's core product, it's not clear to me why we would go this direction when there are open-source, more featureful alternatives that also have a history of being adopted by large open-source projects. I suspect it will make little difference to contributors; one can authenticate to both with GitHub credentials and the UX is fairly similar. To me, the choice seems fairly clear-cut. [1] Admittedly these were single-shot reviews and lacked the usual back-and-forth that one typically has during review, but they were on moderate-size patches, so I think they are fairly representative. > * Poor integration with external issue trackers (or at any rate with > Trac), I assume meaning, hard to transactionally close issues upon PR > merge and other ticket status updates. > * No merge-on-green-CI button. > > So keeping the review UX issues aside for a moment, are there other > GitHub limitations that you anticipate would warrant automation bots à > la Rust-lang? > Ultimately Rust's tools all exist for a reason. Bors works around GitHub's lacking ability to merge-on-CI-pass, Highfive addresses the lack of a flexible code owner notification system, among other things. Both of these are features that we either have already or would like to have.
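For intuition, the merge-on-CI-pass behaviour that Bors layers onto GitHub boils down to a small poll-then-merge loop. Here is a minimal shell sketch; `ci_status` and `do_merge` are hypothetical stubs standing in for the forge/CI API calls a real bot would make, not actual tools:

```shell
# Minimal sketch of a Bors-style merge-when-green loop.  ci_status and
# do_merge are hypothetical stubs; a real bot would call the forge's API.
ci_status() { cat ci-status.txt; }            # stub: pretend to query CI
do_merge()  { echo "merged $1"; }             # stub: pretend to merge

poll_and_merge() {
  while :; do
    case "$(ci_status)" in
      success) do_merge "$1"; return 0 ;;                    # green: merge
      failure) echo "CI failed for $1; not merging"; return 1 ;;
      *)       sleep 1 ;;                                    # still running
    esac
  done
}

echo success > ci-status.txt
poll_and_merge "pr-1234"                      # prints "merged pr-1234"
```

GitLab ships essentially this behaviour natively as its "Merge when pipeline succeeds" button, which is why no external bot is needed there.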
Furthermore, we already recognize that there are holes in our current CI story: relying on cross-compilation to validate non-Linux/amd64 architectures both complicates troubleshooting and requires that we fix issues in GHC's build system that I would rather not tie CI to. GitLab would allow us to potentially continue using CircleCI for "normal" platforms while giving us the ability to easily fill these holes with GitLab's native CI support (which I have used in a few projects; it is both easy to configure and flexible). On the whole, I simply see very few advantages to using GitHub over GitLab; the latter simply seems to me to be a generally superior product. Furthermore, we should remember that no product will be perfect; in light of this it is important to consider a) the openness of the implementation, and b) the responsiveness of the upstream developer. GitLab has the advantage of being open-source with an extremely responsive upstream; this stands in contrast to GitHub, where I have had straightforward pull requests [2] languish for the better part of a year before being merged and deployed. [2] https://github.com/github/markup/pull/925 > I'm not too worried about the CI story. The hard part with CircleCI > isn't CircleCI, it's getting to a green CircleCI. But once we're > there, moving to a green OtherCI shouldn't be much work. > Right, and we are largely already there! Hadrian, Darwin, Fedora, and Debian/amd64 builds are all currently green; i386 is hours from passing (a regression recently snuck in which I pinned down this morning), the LLVM build is pending a fix for a long-standing critical bug (#14251), and we are now two tests away from slow validation being green. The remaining piece is moving differential validation to CircleCI. This was one of the motivations for starting this discussion. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ggreif at gmail.com Tue Oct 30 18:32:10 2018 From: ggreif at gmail.com (Gabor Greif) Date: Tue, 30 Oct 2018 19:32:10 +0100 Subject: slow execution of built executables on a Mac In-Reply-To: References: Message-ID: Maybe a symptom of an APFS bug? https://gregoryszorc.com/blog/2018/10/29/global-kernel-locks-in-apfs/ Just came across this, might be worth a look. Cheers, Gabor On 10/26/18, Richard Eisenberg wrote: > Hi devs, > > I have a shiny, new iMac in my office. It's thus frustrating that it takes > my iMac longer to build GHC than my trusty 28-month-old laptop. Building > devel2 on a fresh checkout takes just about an hour. By contrast, my laptop > is done after 30 minutes of work (same build settings). The laptop has a > 2.8GHz Intel i7 running macOS 10.13.5; the desktop has a 3.5GHz Intel i5 > running macOS 10.13.6. Both bootstrapped from the binary distro of GHC > 8.6.1. > > Watching GHC build, everything is snappy enough during the stage-1 build. > But then, as soon as we start using GHC-produced executables, things slow > down. It's most noticeable in the rts_dist_HC phase, which crawls. Stage 2 > is pretty slow, too. > > So: is there anything anyone knows about recent Macs not liking locally > built executables? Or is there some local setting that I need to update? The > prepackaged GHC seems to work well, so that gives me hope that someone knows > what setting to tweak. > > Thanks!
> Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at well-typed.com Tue Oct 30 19:22:28 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 30 Oct 2018 15:22:28 -0400 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: References: <87zhuw2fw2.fsf@smart-cactus.org> Message-ID: <87tvl31bps.fsf@smart-cactus.org> Simon Marlow writes: > I'm entirely happy to move, provided (1) whatever we move to provides the > functionality we need, and (2) it's clearly what the community wants > (considering both current and future contributors). In the past when moving > to GitHub was brought up, there were a handful of core contributors who > argued strongly in favour of Phabricator, do we think that's changed? Do we > have any indication of whether the survey respondents who were > anti-Phabricator would be pro- or anti-GitLab? > The comments fell into several buckets: a. Those who spoke in favor of GitHub in particular b. Those who spoke in favor of GitHub and GitLab c. Those who spoke against Phabricator I seem to recall that (a) was the largest group. No one explicitly stated that they would be against GitLab, although this is not terribly surprising given we didn't ask. Frankly I doubt there would be people who would actively support GitHub but not GitLab given how similar the workflows are. However, collecting data for this hunch is one of the reasons for this thread. > Personally I'd like to optimise for more code review, because I think that > more than anything else will increase quality and community ownership of > the project. If using new tooling will make code review a more central part > of our workflow, then that would be a good thing. Agreed, currently we have too few reviewers for the volume of code we are pushing into the tree. 
> Right now I think we're > very Trac-centric, and the integration between Trac and Phabricator isn't > great; if we could move to a solution with tighter integration between > tickets/code-review/wiki, that would be an improvement in my view. But not > GitHub, for the reasons you gave. > Yes, I agree. Currently I spend too much time keeping tickets in sync and this is almost entirely wasted time. > Would GitLab solve the CI issues? I don't think you mentioned that > explicitly. > It helps, yes. As Andres pointed out, AppVeyor has native support for GitLab, which we use for Windows validation. Furthermore, GitLab's native CI would allow us to test non-x86 platforms. CircleCI lacks GitLab support; however, I believe the integration we have already developed for Phabricator could be easily adapted for GitLab. Moreover, given that the "Add GitLab support" request is at the top of CircleCI's feature request tracker, it seems likely that there will be native support in the future. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From m at tweag.io Tue Oct 30 21:50:48 2018 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 30 Oct 2018 22:50:48 +0100 Subject: [GHC DevOps Group] The future of Phabricator In-Reply-To: <87y3af1g3n.fsf@smart-cactus.org> References: <87zhuw2fw2.fsf@smart-cactus.org> <2377A9DA-7459-4321-995D-EFB67D0104FE@justtesting.org> <87lg6f33my.fsf@smart-cactus.org> <87y3af1g3n.fsf@smart-cactus.org> Message-ID: Hi Ben, On Tue, 30 Oct 2018 at 18:47, Ben Gamari wrote: > > ... > > It occurs to me that I never did sit down to write up my thoughts on > reviewable. I tried doing a few reviews with it [1] and indeed it is > quite good; in many ways it is comparable to Differential. [...]
> However, it really feels like a band-aid, introducing another layer of > indirection and a distinct conversation venue all to make up for what > are plain deficiencies in GitHub's core product. Sure. That sounds fine to me though, or indeed no different than, say, using GitHub XOR GitLab for code hosting, Phabricator for review (and only for that), and Trac for tickets (not ideal, but no worse than the status quo). If Phabricator (the paid-for hosted version) or Reviewable.io really are the superior review tools, and if as review tools they integrate seamlessly with GitHub (or GitLab), then that's an option worth considering. The important things are: reducing the maintenance burden (by preferring hosted solutions) while still meeting developer requirements and supporting a workflow that is familiar to most. > > So keeping the review UX issues aside for a moment, are there other > > GitHub limitations that you anticipate would warrant automation bots à > > la Rust-lang? > > > Ultimately Rust's tools all exist for a reason. Bors works around > GitHub's lacking ability to merge-on-CI-pass, Highfive addresses the > lack of a flexible code owner notification system, among other things. > Both of these are features that we either have already or would like to > have. ... and I assume, based on your positive assessment, that both are out-of-the-box features of GitLab that meet the requirements? > On the whole, I simply see very few advantages to using GitHub over > GitLab; the latter simply seems to me to be a generally superior product. That may well be the case. The main argument for GitHub is taking advantage of its network effect. But a big part of that is not having to manage a new set of credentials elsewhere, as well as remembering different user names for the same collaborators on different platforms. You're saying I can use my GitHub credentials to authenticate on GitLab. So in the end we possibly wouldn't be losing much of that network effect.
> > I'm not too worried about the CI story. The hard part with CircleCI > > isn't CircleCI, it's getting to a green CircleCI. But once we're > > there, moving to a green OtherCI shouldn't be much work. > > > Right, and we are largely already there! That's great to hear. From matthewtpickering at gmail.com Wed Oct 31 13:40:30 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 31 Oct 2018 13:40:30 +0000 Subject: CircleCI currently failing - possibly due to lack of disk space? Message-ID: The bridge which sends diffs to circleci is currently failing with the following error: {"args":["clone","ssh://git at phabricator-origin.haskell.org:2222/diffusion/GHCDIFF/GHC-Differentials.git","/tmp/ghc-diffs-1008dbb33e68821b"],"prog":"git","code":128,"err":"Clonage dans '/tmp/ghc-diffs-1008dbb33e68821b'...\nfatal: Impossible d'accéder au répertoire de travail courant: No such file or directory\nfatal: échec de index-pack\n","type":"Non-zero exit code","out":""} https://phabricator.haskell.org/harbormaster/build/62371/2/ https://github.com/alpmestan/phab-circleci-bridge/blob/master/src/Main.hs#L272 The repo is 150mb big so it is possible that the machine running the bridge has run out of disk space as it clones this repo repeatedly. Could someone look into this? Cheers, Matt From alp at well-typed.com Wed Oct 31 14:10:58 2018 From: alp at well-typed.com (Alp Mestanogullari) Date: Wed, 31 Oct 2018 15:10:58 +0100 Subject: CircleCI currently failing - possibly due to lack of disk space? In-Reply-To: References: Message-ID: I'm looking into this. 
On 31/10/2018 14:40, Matthew Pickering wrote: > The bridge which sends diffs to circleci is currently failing with the > following error: > > {"args":["clone","ssh://git at phabricator-origin.haskell.org:2222/diffusion/GHCDIFF/GHC-Differentials.git","/tmp/ghc-diffs-1008dbb33e68821b"],"prog":"git","code":128,"err":"Clonage > dans '/tmp/ghc-diffs-1008dbb33e68821b'...\nfatal: Impossible d'accéder > au répertoire de travail courant: No such file or directory\nfatal: > échec de index-pack\n","type":"Non-zero exit code","out":""} > > https://phabricator.haskell.org/harbormaster/build/62371/2/ > > https://github.com/alpmestan/phab-circleci-bridge/blob/master/src/Main.hs#L272 > > The repo is 150mb big so it is possible that the machine running the > bridge has run out of disk space as it clones this repo repeatedly. > Could someone look into this? > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England From mgsloan at gmail.com Wed Oct 31 19:52:27 2018 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 31 Oct 2018 12:52:27 -0700 Subject: ghc-in-ghci memory usage In-Reply-To: References: Message-ID: Great, I'm glad it's working well for you! I've realized that when working on ghc-in-ghci, I didn't know about freezing stage 1 to speed up GHC builds. For me the primary motivation was that otherwise builds took quite long. Of course, ghc-in-ghci is still quite useful, due to being able to use the repl, avoiding static link times, etc. I'm not sure what the best way would be to effectively communicate with newcomers about freezing stage 1, but it seems important for build times. 
It may be easy to get dissuaded if every development iteration involves a ton of waiting for the build. Perhaps hadrian can be more intelligent about avoiding stage 1 rebuilds? I realize that's likely to be tricky from a correctness perspective. One thing I'm keen on for ghc-in-ghci is getting it to load without -fobject-code. This would often mean a much longer initial start (no use of stored object files), but should make reloads quite a lot faster, since it's generating bytecode instead. The main tricky bit there is the use of unboxed tuples, since ghci cannot bytecode-compile code that uses them. So, this either means adding support for unboxed tuples to bytecode, which seems quite challenging, or having something clever that only uses object-code compilation where needed. -Michael On Tue, Oct 30, 2018 at 7:56 AM Matthew Pickering wrote: > > I just tried using the ghc-in-ghci script again and it appears that > the memory usage problems have been resolved. Probably thanks to Simon > Marlow who fixed a lot of space leaks in ghc. > > A reminder of how to use it: > > ./utils/ghc-in-ghci/run.sh -j > > will load ghc into ghci so you can `:r` to recompile when you make a change. > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Wed Oct 31 20:54:46 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 31 Oct 2018 20:54:46 +0000 Subject: ghc-in-ghci memory usage In-Reply-To: References: Message-ID: The information about freezing stage2 is in the first paragraph after the basics on the newcomers page. So it is already very prominent. https://ghc.haskell.org/trac/ghc/wiki/Newcomers#Fastrebuilding That being said, this will all change when hadrian is the default and I'm not sure how to achieve this with Hadrian. I'm sure this will have been accounted for. 
Cheers, Matt On Wed, Oct 31, 2018 at 7:53 PM Michael Sloan wrote: > > Great, I'm glad it's working well for you! > > I've realized that when working on ghc-in-ghci, I didn't know about > freezing stage 1 to speed up GHC builds. For me the primary > motivation was that otherwise builds took quite long. Of course, > ghc-in-ghci is still quite useful, due to being able to use the repl, > avoiding static link times, etc. > > I'm not sure what the best way would be to effectively communicate > with newcomers about freezing stage 1, but it seems important for > build times. It may be easy to get dissuaded if every development > iteration involves a ton of waiting for the build. Perhaps hadrian > can be more intelligent about avoiding stage 1 rebuilds? I realize > that's likely to be tricky from a correctness perspective. > > One thing I'm keen on for ghc-in-ghci is getting it to load without > -fobject-code. This would often mean a much longer initial start (no > use of stored object files), but should make reloads quite a lot > faster, since it's generating bytecode instead. The main tricky bit > there is the use of unboxed tuples, since ghci cannot bytecode-compile > code that uses them. So, this either means adding support for unboxed > tuples to bytecode, which seems quite challenging, or having something > clever that only uses object-code compilation where needed. > > -Michael > > On Tue, Oct 30, 2018 at 7:56 AM Matthew Pickering > wrote: > > > > I just tried using the ghc-in-ghci script again and it appears that > > the memory usage problems have been resolved. Probably thanks to Simon > > Marlow who fixed a lot of space leaks in ghc. > > > > A reminder of how to use it: > > > > ./utils/ghc-in-ghci/run.sh -j > > > > will load ghc into ghci so you can `:r` to recompile when you make a change. 
> > > > Cheers, > > > > Matt > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From alp at well-typed.com Wed Oct 31 21:01:06 2018 From: alp at well-typed.com (Alp Mestanogullari) Date: Wed, 31 Oct 2018 22:01:06 +0100 Subject: ghc-in-ghci memory usage In-Reply-To: References: Message-ID: <9055ea02-e087-7d09-b1b5-a91e383e76a2@well-typed.com> Hadrian has the `--freeze1` flag, whose description is: freeze Stage1 GHC, i.e. do not rebuild it even if some of its source files are out-of-date. This allows to significantly reduce the rebuild time when you are working on a feature that affects both Stage1 and Stage2 compilers, but may lead to incorrect build results. To unfreeze Stage1 GHC simply drop the|--freeze1|flag and Hadrian will rebuild all out-of-date files. (from https://ghc.haskell.org/trac/ghc/wiki/Building/Hadrian/QuickStart#Commandlineoptions) On 31/10/2018 21:54, Matthew Pickering wrote: > The information about freezing stage2 is in the first paragraph after > the basics on the newcomers page. So it is already very prominent. > > https://ghc.haskell.org/trac/ghc/wiki/Newcomers#Fastrebuilding > > That being said, this will all change when hadrian is the default and > I'm not sure how to achieve this with Hadrian. I'm sure this will have > been accounted for. > > Cheers, > > Matt > On Wed, Oct 31, 2018 at 7:53 PM Michael Sloan wrote: >> Great, I'm glad it's working well for you! >> >> I've realized that when working on ghc-in-ghci, I didn't know about >> freezing stage 1 to speed up GHC builds. For me the primary >> motivation was that otherwise builds took quite long. Of course, >> ghc-in-ghci is still quite useful, due to being able to use the repl, >> avoiding static link times, etc. >> >> I'm not sure what the best way would be to effectively communicate >> with newcomers about freezing stage 1, but it seems important for >> build times. 
It may be easy to get dissuaded if every development >> iteration involves a ton of waiting for the build. Perhaps hadrian >> can be more intelligent about avoiding stage 1 rebuilds? I realize >> that's likely to be tricky from a correctness perspective. >> >> One thing I'm keen on for ghc-in-ghci is getting it to load without >> -fobject-code. This would often mean a much longer initial start (no >> use of stored object files), but should make reloads quite a lot >> faster, since it's generating bytecode instead. The main tricky bit >> there is the use of unboxed tuples, since ghci cannot bytecode-compile >> code that uses them. So, this either means adding support for unboxed >> tuples to bytecode, which seems quite challenging, or having something >> clever that only uses object-code compilation where needed. >> >> -Michael >> >> On Tue, Oct 30, 2018 at 7:56 AM Matthew Pickering >> wrote: >>> I just tried using the ghc-in-ghci script again and it appears that >>> the memory usage problems have been resolved. Probably thanks to Simon >>> Marlow who fixed a lot of space leaks in ghc. >>> >>> A reminder of how to use it: >>> >>> ./utils/ghc-in-ghci/run.sh -j >>> >>> will load ghc into ghci so you can `:r` to recompile when you make a change. >>> >>> Cheers, >>> >>> Matt >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pi.boy.travis at gmail.com Wed Oct 31 22:21:12 2018 From: pi.boy.travis at gmail.com (Travis Whitaker) Date: Wed, 31 Oct 2018 15:21:12 -0700 Subject: Validating with LLVM Message-ID: Hello GHC Devs, I'm working on a very tiny patch for GHC. The patch concerns the LLVM code generator, and I'd like to run the validate script. ./validate ignores mk/build.mk (which is probably correct) and it doesn't seem to be using the LLVM backend. LLVM tools are on the PATH; I'm working from the ghc-8.4 branch so I'm using LLVM 5. Is it possible to validate with the LLVM backend? If not, what's considered a sufficiently thorough test? Apologies if I'm missing something obvious. Thanks! Travis -------------- next part -------------- An HTML attachment was scrubbed... URL:
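On the question above about making ./validate use the LLVM backend: one approach is a hedged sketch only, assuming the make-based build system of this era reads mk/validate.mk during validation and honours the usual GhcLibHcOpts/GhcStage2HcOpts variables there. The idea is to put the -fllvm flags in mk/validate.mk rather than mk/build.mk, since validate ignores the latter:

```shell
# Sketch under the assumptions above: ask validate's build to compile the
# libraries and the stage-2 compiler via the LLVM backend by writing the
# flags into mk/validate.mk (read by validate) instead of mk/build.mk
# (ignored by validate).
mkdir -p mk
cat > mk/validate.mk <<'EOF'
GhcLibHcOpts    += -fllvm
GhcStage2HcOpts += -fllvm
EOF
# then, from the root of the GHC tree:
#   ./validate --no-clean
```

Whether the LLVM-built stage 2 then runs the full testsuite is the "sufficiently thorough test" question; checking the validate settings files shipped in the tree for the branch in question is the safest way to confirm the exact variable names.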