From juhpetersen at gmail.com Tue May 1 06:48:21 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Tue, 1 May 2018 15:48:21 +0900 Subject: [ANNOUNCE] GHC 8.4.2 released In-Reply-To: <87in8md9v9.fsf@smart-cactus.org> References: <87in8md9v9.fsf@smart-cactus.org> Message-ID: I have built ghc-8.4.2 for Fedora and EPEL7 in https://copr.fedorainfracloud.org/coprs/petersen/ghc-8.4.2/ Thanks, Jens From alicekoroleva239 at gmail.com Tue May 1 07:09:19 2018 From: alicekoroleva239 at gmail.com (alice) Date: Tue, 1 May 2018 10:09:19 +0300 Subject: Poly-kinded type family In-Reply-To: <8EBDB324-22A1-43D5-8F3C-53FECF2B40C6@cs.brynmawr.edu> References: <6167FAEE-DCE0-4447-A93A-BC705DFDA4B9@gmail.com> <8EBDB324-22A1-43D5-8F3C-53FECF2B40C6@cs.brynmawr.edu> Message-ID: <0A2E1DA2-AD85-4F24-AFD8-B313DB6F4AB6@gmail.com> Thanks a lot, this helped! But sorry for asking, before this problem I evaluated Cmp on some values with * kinds, and I used (mkTemplateAnonTyConBinders [ liftedTypeKind, liftedTypeKind ]) to make the input kinds for TyCon. Changing function to mkTemplateTyConBinders (right now this part looks like 'binders = mkTemplateTyConBinders [ liftedTypeKind, liftedTypeKind ] (\ks -> ks)') made my type family not evaluating: :kind! Cmp 4 5 Cmp 4 5 :: Ordering = Cmp 4 5 And before that change: :kind! Cmp (Proxy 5) (Proxy 4) Cmp (Proxy 5) (Proxy 4) :: Ordering = 'GT I can see from debug output that before that change functions in BuiltInSynFamily like matchFamCmpType (has the same meaning as matchFamCmpNat) had been applied to values, but now they aren’t. What am I missing? > 30 апр. 2018 г., в 17:38, Richard Eisenberg написал(а): > > Hi Alice, > > You'll need mkTemplateTyConBinders, not the two variants of that function you use below. The problem is that both mkTemplateKindTyConBinders and mkTemplateAnonTyConBinders pull Uniques starting from the same value, and so GHC gets very confused when multiple arguments to your TyCon have the same Uniques. 
mkTemplateTyConBinders, on the other hand, allows you to specify dependency among your arguments without confusing Uniques. You can see a usage of this function in TysPrim.proxyPrimTyCon. > > I hope this helps! > Richard > > PS: I've made this mistake myself several times, and it's quite baffling to debug! > >> On Apr 30, 2018, at 8:27 AM, alice > wrote: >> >> Hello. I’m trying to make a type family with resolving it inside type checking classes, like CmpNat. Its type signature is «type family Cmp (a :: k1) (b :: k2) :: Ordering», so the function is poly kinded. But it seems unclear how to make its TyCon. I’ve tried to follow the CmpNat and CmpSymbol style, and also saw Any TyCon in TysWiredIn. So this is my attempt: >> >> typeCmpTyCon :: TyCon >> typeCmpTyCon = >> mkFamilyTyCon name >> (binders ++ (mkTemplateAnonTyConBinders [ input_kind1, input_kind2 ])) >> orderingKind >> Nothing >> (BuiltInSynFamTyCon ops) >> Nothing >> NotInjective >> where >> name = mkWiredInTyConName UserSyntax gHC_TYPELITS (fsLit "Cmp") >> typeCmpTyFamNameKey typeCmpTyCon >> ops = BuiltInSynFamily >> { sfMatchFam = matchFamCmpType >> , sfInteractTop = interactTopCmpType >> , sfInteractInert = \_ _ _ _ -> [] >> } >> binders@[kv1, kv2] = mkTemplateKindTyConBinders [ liftedTypeKind, liftedTypeKind ] >> input_kind1 = mkTyVarTy (binderVar kv1) >> input_kind2 = mkTyVarTy (binderVar kv2) >> >> ghci says this: >> >> :kind Cmp >> Cmp :: forall {k0} {k1}. k0 -> k1 -> Ordering >> >> But then I try to apply it to some values and get this exception: >> >> :kind! Cmp 4 5 >> >> :1:15: error: >> • Expected kind ‘k0’, but ‘4’ has kind ‘Nat’ >> • In the first argument of ‘Cmp’, namely ‘4’ >> In the type ‘Cmp 4 5’ >> >> :1:17: error: >> • Expected kind ‘k1’, but ‘5’ has kind ‘Nat’ >> • In the second argument of ‘Cmp’, namely ‘5’ >> In the type ‘Cmp 4 5’ >> >> Does anyone know where I made a mistake? Any help would be appreciated. 
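[For reference, the kind-directed matching being wired in here can be sketched at the library level with ordinary type families. This is a hedged sketch, not the wired-in implementation: the module name CmpSketch is invented, and the catch of the exercise — that reduction only fires once the kind arguments are fixed — is exactly what the dependent binders provide.]

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE TypeFamilies #-}
module CmpSketch where

import GHC.TypeLits (CmpNat, CmpSymbol, Nat, Symbol)

-- A poly-kinded comparison family: k1 and k2 are implicit dependent
-- arguments, playing the role of the two kind binders in the TyCon above.
type family Cmp (a :: k1) (b :: k2) :: Ordering

-- Reduction is kind-directed: each instance fixes k1 and k2, so
-- ':kind! Cmp 4 5' can pick the Nat instance.
type instance Cmp (a :: Nat)    (b :: Nat)    = CmpNat a b
type instance Cmp (a :: Symbol) (b :: Symbol) = CmpSymbol a b
```

[This mirrors the failure above: with anonymous (non-dependent) binders the argument kinds k0 and k1 are unrelated fresh variables that a Nat argument cannot instantiate, hence "Expected kind 'k0', but '4' has kind 'Nat'".]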
>> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Tue May 1 09:26:22 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 1 May 2018 12:26:22 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: <877epwi7bb.fsf@smart-cactus.org> References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: I have an idea but it doesn't explain everything; SRTs are used to collect CAFs, and CAFs are always added to the oldest generation's mut_list when allocated [1]. When we're scavenging a mut_list we know we're not doing a major GC, and because mut_list of oldest generation has all the newly allocated CAFs, which will be scavenged anyway, no need to scavenge SRTs for those. Also, static objects are always evacuated to the oldest gen [2], so any CAFs that are alive but not in the mut_list of the oldest gen will stay alive after a non-major GC, again no need to scavenge SRTs to keep these alive. This also explains why it's OK to not collect static objects (and not treat them as roots) in non-major GCs. However this doesn't explain - Why it's OK to scavenge large objects with scavenge_one(). - Why we scavenge SRTs in non-major collections in other places (e.g. scavenge_block()). Simon, could you say a few words about this? [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 Ömer 2018-03-28 17:49 GMT+03:00 Ben Gamari : > Hi Simon, > > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It > appears that it is primarily used for remembered set entries but it's > not at all clear why this means that we can safely ignore SRTs (e.g. in > the FUN and THUNK cases). > > Can you shed some light on this? 
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Tue May 1 12:22:33 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 1 May 2018 12:22:33 +0000 Subject: Testsuite wobbles Message-ID: Friends I'm seeing test-suite wobbles like these: Unexpected failures: /tmp/ghctest-9ez8i8lx/test spaces/./ghci.debugger/scripts/break006.run break006 [bad stdout] (ghci) /tmp/ghctest-9ez8i8lx/test spaces/./ghci.debugger/scripts/hist001.run hist001 [bad stdout] (ghci) /tmp/ghctest-9ez8i8lx/test spaces/./ghci.debugger/scripts/hist002.run hist002 [bad stdout] (ghci) /tmp/ghctest-9ez8i8lx/test spaces/./profiling/should_run/scc001.run scc001 [bad profile] (profasm) It turns out that they are all variations in the order of output, e.g. --- ./scripts/break006.run/break006.stdout.normalised 2018-05-01 13:20:13.297808002 +0100 +++ ./scripts/break006.run/break006.run.stdout.normalised 2018-05-01 13:20:13.297808002 +0100 @@ -4,14 +4,14 @@ x :: Integer = 1 xs :: [Integer] = [2,3] xs :: [Integer] = [2,3] -f :: Integer -> a = _ x :: Integer = 1 +f :: Integer -> a = _ _result :: [a] = _ y = (_t1::a) y = 2 xs :: [Integer] = [2,3] -f :: Integer -> Integer = _ x :: Integer = 1 +f :: Integer -> Integer = _ _result :: [Integer] = _ y :: Integer = 2 _t1 :: Integer = 2 *** unexpected failure for break006(ghci) I've seen this go to and fro recently. It looks as if the output is not deterministic somehow. (The order in which the bindings are printed is immaterial.) Does this happen for others? Does the master CI build show this problem? I don't know whether to commit these changes, because I don't know if it affects anyone else. And it's certainly tiresome for me. Any ideas? Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Tue May 1 18:24:51 2018 From: david.feuer at gmail.com (David Feuer) Date: Tue, 01 May 2018 18:24:51 +0000 Subject: Open up the issues tracker on ghc-proposals Message-ID: Sometimes, a language extension idea could benefit from some community discussion before it's ready for a formal proposal. I'd like to propose that we open up the GitHub issues tracker for ghc-proposals to serve as a place to discuss pre-proposal ideas. Once those discussions converge on one or a few specific plans, someone can write a proper proposal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue May 1 19:10:04 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 1 May 2018 20:10:04 +0100 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Your explanation is basically right. scavenge_one() is only used for a non-major collection, where we aren't traversing SRTs. Admittedly this is a subtle point that could almost certainly be documented better, I probably just overlooked it. More inline: On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: > I have an idea but it doesn't explain everything; > > SRTs are used to collect CAFs, and CAFs are always added to the oldest > generation's mut_list when allocated [1]. > > When we're scavenging a mut_list we know we're not doing a major GC, and > because mut_list of oldest generation has all the newly allocated CAFs, > which > will be scavenged anyway, no need to scavenge SRTs for those. > > Also, static objects are always evacuated to the oldest gen [2], so any > CAFs > that are alive but not in the mut_list of the oldest gen will stay alive > after > a non-major GC, again no need to scavenge SRTs to keep these alive. > > This also explains why it's OK to not collect static objects (and not treat > them as roots) in non-major GCs. 
> > However this doesn't explain > > - Why it's OK to scavenge large objects with scavenge_one(). > I don't understand - perhaps you could elaborate on why you think it might not be OK? Large objects are treated exactly the same as small objects with respect to their lifetimes. > - Why we scavenge SRTs in non-major collections in other places (e.g. > scavenge_block()). > If you look at scavenge_fun_srt() and co, you'll see that they return immediately if !major_gc. > Simon, could you say a few words about this? > Was that enough words? I have more if necessary :) Cheers Simon > > [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 > [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 > > Ömer > > 2018-03-28 17:49 GMT+03:00 Ben Gamari : > > Hi Simon, > > > > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It > > appears that it is primarily used for remembered set entries but it's > > not at all clear why this means that we can safely ignore SRTs (e.g. in > > the FUN and THUNK cases). > > > > Can you shed some light on this? > > > > Cheers, > > > > - Ben > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Tue May 1 19:23:58 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 1 May 2018 15:23:58 -0400 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: References: Message-ID: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> I like this idea, but I think this can be done as a PR, which seems a better fit for collaborative building. 
The author can specify that a proposal is a "pre-proposal", with the goal of fleshing it out before committee submission. If it becomes necessary, we can furnish a tag to label these, but I'm honestly not sure we'll need to. Richard > On May 1, 2018, at 2:24 PM, David Feuer wrote: > > Sometimes, a language extension idea could benefit from some community discussion before it's ready for a formal proposal. I'd like to propose that we open up the GitHub issues tracker for ghc-proposals to serve as a place to discuss pre-proposal ideas. Once those discussions converge on one or a few specific plans, someone can write a proper proposal. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Tue May 1 19:31:24 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 01 May 2018 15:31:24 -0400 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> References: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> Message-ID: <4d296e592c5da615b7c4d712f74bcbec0bdb8c81.camel@joachim-breitner.de> Hi, the distinction between an issue and a pull request is a rather thin one, and the GitHub API actually allows you to upgrade an issue to a pull request… So no need to be picky about the precise form: If someone has something interesting to say in an issue without yet having some text to attach to it, by all means go for it! Cheers, Joachim On Tuesday, 2018-05-01 at 15:23 -0400, Richard Eisenberg wrote: > I like this idea, but I think this can be done as a PR, which seems a better fit for collaborative building.
> > Richard > > > On May 1, 2018, at 2:24 PM, David Feuer wrote: > > > > Sometimes, a language extension idea could benefit from some community discussion before it's ready for a formal proposal. I'd like to propose that we open up the GitHub issues tracker for ghc-proposals to serve as a place to discuss pre-proposal ideas. Once those discussions converge on one or a few specific plans, someone can write a proper proposal. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From david.feuer at gmail.com Tue May 1 19:35:18 2018 From: david.feuer at gmail.com (David Feuer) Date: Tue, 01 May 2018 19:35:18 +0000 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> References: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> Message-ID: I suppose that would be a reasonable alternative. But to keep discussion tracking reasonable, I suspect it would be best to close the pre-proposal PR and open a new one (with mutual links) for the actual proposal if and when the time comes. On Tue, May 1, 2018, 3:24 PM Richard Eisenberg wrote: > I like this idea, but I think this can be done as a PR, which seems a > better fit for collaborative building. The author can specify that a > proposal is a "pre-proposal", with the goal of fleshing it out before > committee submission. 
If it becomes necessary, we can furnish a tag to > label these, but I'm honestly not sure we'll need to. > > Richard > > > On May 1, 2018, at 2:24 PM, David Feuer wrote: > > > > Sometimes, a language extension idea could benefit from some community > discussion before it's ready for a formal proposal. I'd like to propose > that we open up the GitHub issues tracker for ghc-proposals to serve as a > place to discuss pre-proposal ideas. Once those discussions converge on one > or a few specific plans, someone can write a proper proposal. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Tue May 1 19:48:38 2018 From: david at well-typed.com (David Feuer) Date: Tue, 01 May 2018 15:48:38 -0400 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: <4d296e592c5da615b7c4d712f74bcbec0bdb8c81.camel@joachim-breitner.de> References: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> <4d296e592c5da615b7c4d712f74bcbec0bdb8c81.camel@joachim-breitner.de> Message-ID: <1687428.QazQtrTAYr@squirrel> The ghc-proposals repository does not have the issue tracker enabled, so it's currently impossible to open an issue. On Tuesday, May 1, 2018 3:31:24 PM EDT Joachim Breitner wrote: > Hi, > > the distiction between an issue and a pull request is a rather thin > one, and the Github API actually allows you to upgrade an issue to a > pull request… > > So no need to be picky about the precise form: If someone has something > interesting to say in an issue without yet having some text to attach > to it, by all means go for it! > > Cheers, > Joachim > > Am Dienstag, den 01.05.2018, 15:23 -0400 schrieb Richard Eisenberg: > > I like this idea, but I think this can be done as a PR, which seems a better fit for collaborative building. 
The author can specify that a proposal is a "pre-proposal", with the goal of fleshing it out before committee submission. If it becomes necessary, we can furnish a tag to label these, but I'm honestly not sure we'll need to. > > > > Richard > > > > > On May 1, 2018, at 2:24 PM, David Feuer wrote: > > > > > > Sometimes, a language extension idea could benefit from some community discussion before it's ready for a formal proposal. I'd like to propose that we open up the GitHub issues tracker for ghc-proposals to serve as a place to discuss pre-proposal ideas. Once those discussions converge on one or a few specific plans, someone can write a proper proposal. > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From mail at joachim-breitner.de Tue May 1 20:09:59 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 01 May 2018 16:09:59 -0400 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: <1687428.QazQtrTAYr@squirrel> References: <8387DDD4-E8AD-489E-8D4A-851F95992957@cs.brynmawr.edu> <4d296e592c5da615b7c4d712f74bcbec0bdb8c81.camel@joachim-breitner.de> <1687428.QazQtrTAYr@squirrel> Message-ID: Fixed! Am Dienstag, den 01.05.2018, 15:48 -0400 schrieb David Feuer: > The ghc-proposals repository does not have the issue tracker enabled, so it's currently impossible to open an issue. 
> > On Tuesday, May 1, 2018 3:31:24 PM EDT Joachim Breitner wrote: > > Hi, > > > > the distiction between an issue and a pull request is a rather thin > > one, and the Github API actually allows you to upgrade an issue to a > > pull request… > > > > So no need to be picky about the precise form: If someone has something > > interesting to say in an issue without yet having some text to attach > > to it, by all means go for it! > > > > Cheers, > > Joachim > > > > Am Dienstag, den 01.05.2018, 15:23 -0400 schrieb Richard Eisenberg: > > > I like this idea, but I think this can be done as a PR, which seems a better fit for collaborative building. The author can specify that a proposal is a "pre-proposal", with the goal of fleshing it out before committee submission. If it becomes necessary, we can furnish a tag to label these, but I'm honestly not sure we'll need to. > > > > > > Richard > > > > > > > On May 1, 2018, at 2:24 PM, David Feuer wrote: > > > > > > > > Sometimes, a language extension idea could benefit from some community discussion before it's ready for a formal proposal. I'd like to propose that we open up the GitHub issues tracker for ghc-proposals to serve as a place to discuss pre-proposal ideas. Once those discussions converge on one or a few specific plans, someone can write a proper proposal. > > > > _______________________________________________ > > > > ghc-devs mailing list > > > > ghc-devs at haskell.org > > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From rae at cs.brynmawr.edu Wed May 2 02:36:26 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 1 May 2018 22:36:26 -0400 Subject: Poly-kinded type family In-Reply-To: <0A2E1DA2-AD85-4F24-AFD8-B313DB6F4AB6@gmail.com> References: <6167FAEE-DCE0-4447-A93A-BC705DFDA4B9@gmail.com> <8EBDB324-22A1-43D5-8F3C-53FECF2B40C6@cs.brynmawr.edu> <0A2E1DA2-AD85-4F24-AFD8-B313DB6F4AB6@gmail.com> Message-ID: <765A6345-10E9-4076-86A4-BD450B5E3271@cs.brynmawr.edu> I'm afraid I don't know a quick answer to that one. Does anyone else? If no one answers, write back in a few days and I'll look into it. Richard > On May 1, 2018, at 3:09 AM, alice wrote: > > Thanks a lot, this helped! > > But sorry for asking, before this problem I evaluated Cmp on some values with * kinds, and I used (mkTemplateAnonTyConBinders [ liftedTypeKind, liftedTypeKind ]) to make the input kinds for TyCon. > Changing function to mkTemplateTyConBinders (right now this part looks like 'binders = mkTemplateTyConBinders [ liftedTypeKind, liftedTypeKind ] (\ks -> ks)') made my type family not evaluating: > > :kind! Cmp 4 5 > Cmp 4 5 :: Ordering > = Cmp 4 5 > > And before that change: > > :kind! Cmp (Proxy 5) (Proxy 4) > Cmp (Proxy 5) (Proxy 4) :: Ordering > = 'GT > > I can see from debug output that before that change functions in BuiltInSynFamily like matchFamCmpType (has the same meaning as matchFamCmpNat) had been applied to values, but now they aren’t. What am I missing? > >> 30 апр. 2018 г., в 17:38, Richard Eisenberg > написал(а): >> >> Hi Alice, >> >> You'll need mkTemplateTyConBinders, not the two variants of that function you use below. The problem is that both mkTemplateKindTyConBinders and mkTemplateAnonTyConBinders pull Uniques starting from the same value, and so GHC gets very confused when multiple arguments to your TyCon have the same Uniques. 
mkTemplateTyConBinders, on the other hand, allows you to specify dependency among your arguments without confusing Uniques. You can see a usage of this function in TysPrim.proxyPrimTyCon. >> >> I hope this helps! >> Richard >> >> PS: I've made this mistake myself several times, and it's quite baffling to debug! >> >>> On Apr 30, 2018, at 8:27 AM, alice > wrote: >>> >>> Hello. I’m trying to make a type family with resolving it inside type checking classes, like CmpNat. Its type signature is «type family Cmp (a :: k1) (b :: k2) :: Ordering», so the function is poly kinded. But it seems unclear how to make its TyCon. I’ve tried to follow the CmpNat and CmpSymbol style, and also saw Any TyCon in TysWiredIn. So this is my attempt: >>> >>> typeCmpTyCon :: TyCon >>> typeCmpTyCon = >>> mkFamilyTyCon name >>> (binders ++ (mkTemplateAnonTyConBinders [ input_kind1, input_kind2 ])) >>> orderingKind >>> Nothing >>> (BuiltInSynFamTyCon ops) >>> Nothing >>> NotInjective >>> where >>> name = mkWiredInTyConName UserSyntax gHC_TYPELITS (fsLit "Cmp") >>> typeCmpTyFamNameKey typeCmpTyCon >>> ops = BuiltInSynFamily >>> { sfMatchFam = matchFamCmpType >>> , sfInteractTop = interactTopCmpType >>> , sfInteractInert = \_ _ _ _ -> [] >>> } >>> binders@[kv1, kv2] = mkTemplateKindTyConBinders [ liftedTypeKind, liftedTypeKind ] >>> input_kind1 = mkTyVarTy (binderVar kv1) >>> input_kind2 = mkTyVarTy (binderVar kv2) >>> >>> ghci says this: >>> >>> :kind Cmp >>> Cmp :: forall {k0} {k1}. k0 -> k1 -> Ordering >>> >>> But then I try to apply it to some values and get this exception: >>> >>> :kind! Cmp 4 5 >>> >>> :1:15: error: >>> • Expected kind ‘k0’, but ‘4’ has kind ‘Nat’ >>> • In the first argument of ‘Cmp’, namely ‘4’ >>> In the type ‘Cmp 4 5’ >>> >>> :1:17: error: >>> • Expected kind ‘k1’, but ‘5’ has kind ‘Nat’ >>> • In the second argument of ‘Cmp’, namely ‘5’ >>> In the type ‘Cmp 4 5’ >>> >>> Does anyone know where I made a mistake? Any help would be appreciated. 
>>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qdunkan at gmail.com Wed May 2 03:09:55 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Tue, 1 May 2018 20:09:55 -0700 Subject: ghc 8.4.1 and trac 13930 Message-ID: I recently noticed that with -O1, ghc was optimizing some code if Trace.traceShowId "b" $ True then return $ Left $ Serialize.BadMagic (Serialize.magicBytes magic) file_magic else first Serialize.UnserializeError <$> Exception.evaluate (Serialize.decode rest) to always evaluate the 'else' branch. This is fixed in 8.4.2, so I'm pretty sure it's https://ghc.haskell.org/trac/ghc/ticket/13930 I assume there's no point trying to get a minimal reproduction, since it's a known issue and fixed. Still, https://downloads.haskell.org/~ghc/8.4.2/docs/html/users_guide/8.4.2-notes.html says "incorrectly optimised, resulting in space leaks". Maybe it should instead say "incorrectly optimised, resulting in taking the wrong branch in 'if' expressions"? That's a bit more alarming, and is a stronger "upgrade to 8.4.2" signal. From ben at smart-cactus.org Wed May 2 04:58:21 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 02 May 2018 00:58:21 -0400 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: <8736zaejrr.fsf@smart-cactus.org> Evan Laforge writes: > I recently noticed that with -O1, ghc was optimizing some code > > if Trace.traceShowId "b" $ True > then return $ Left $ Serialize.BadMagic (Serialize.magicBytes > magic) file_magic > else first Serialize.UnserializeError <$> Exception.evaluate > (Serialize.decode rest) > > to always evaluate the 'else' branch. 
This is fixed in 8.4.2, so I'm > pretty sure it's https://ghc.haskell.org/trac/ghc/ticket/13930 > > I assume there's no point trying to get a minimal reproduction, since > it's a known issue and fixed. Still, > https://downloads.haskell.org/~ghc/8.4.2/docs/html/users_guide/8.4.2-notes.html > says "incorrectly optimised, resulting in space leaks". > > Maybe it should instead say "incorrectly optimised, resulting in > taking the wrong branch in 'if' expressions"? That's a bit more > alarming, and is a stronger "upgrade to 8.4.2" signal. Yes, I suppose the language in the release notes does rather understate the degree of the incorrectness. I'll push a new version of the manual with some stronger language tomorrow. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From qdunkan at gmail.com Wed May 2 05:21:06 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Tue, 1 May 2018 22:21:06 -0700 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: <8736zaejrr.fsf@smart-cactus.org> References: <8736zaejrr.fsf@smart-cactus.org> Message-ID: On Tue, May 1, 2018 at 9:58 PM, Ben Gamari wrote: > Yes, I suppose the language in the release notes does rather understate > the degree of the incorrectness. > > I'll push a new version of the manual with some stronger language > tomorrow. Good deal, thanks for the quick response. From omeragacan at gmail.com Wed May 2 06:03:05 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 2 May 2018 09:03:05 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Thanks Simon, this is really helpful. > If you look at scavenge_fun_srt() and co, you'll see that they return > immediately if !major_gc. Thanks for pointing this out -- I didn't realize it's returning early when !major_gc and this caused a lot of confusion. 
Now everything makes sense. I'll add a note for scavenging SRTs and refer to it in relevant code and submit a diff. Ömer 2018-05-01 22:10 GMT+03:00 Simon Marlow : > Your explanation is basically right. scavenge_one() is only used for a > non-major collection, where we aren't traversing SRTs. Admittedly this is a > subtle point that could almost certainly be documented better, I probably > just overlooked it. > > More inline: > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: >> >> I have an idea but it doesn't explain everything; >> >> SRTs are used to collect CAFs, and CAFs are always added to the oldest >> generation's mut_list when allocated [1]. >> >> When we're scavenging a mut_list we know we're not doing a major GC, and >> because mut_list of oldest generation has all the newly allocated CAFs, >> which >> will be scavenged anyway, no need to scavenge SRTs for those. >> >> Also, static objects are always evacuated to the oldest gen [2], so any >> CAFs >> that are alive but not in the mut_list of the oldest gen will stay alive >> after >> a non-major GC, again no need to scavenge SRTs to keep these alive. >> >> This also explains why it's OK to not collect static objects (and not >> treat >> them as roots) in non-major GCs. >> >> However this doesn't explain >> >> - Why it's OK to scavenge large objects with scavenge_one(). > > > I don't understand - perhaps you could elaborate on why you think it might > not be OK? Large objects are treated exactly the same as small objects with > respect to their lifetimes. > >> >> - Why we scavenge SRTs in non-major collections in other places (e.g. >> scavenge_block()). > > > If you look at scavenge_fun_srt() and co, you'll see that they return > immediately if !major_gc. > >> >> Simon, could you say a few words about this? > > > Was that enough words? 
I have more if necessary :) > > Cheers > Simon > > >> >> >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 >> >> Ömer >> >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : >> > Hi Simon, >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It >> > appears that it is primarily used for remembered set entries but it's >> > not at all clear why this means that we can safely ignore SRTs (e.g. in >> > the FUN and THUNK cases). >> > >> > Can you shed some light on this? >> > >> > Cheers, >> > >> > - Ben >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From simonpj at microsoft.com Wed May 2 08:12:41 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 2 May 2018 08:12:41 +0000 Subject: Open up the issues tracker on ghc-proposals In-Reply-To: <5ae91578.299.6bed.19608@clear.net.nz> References: <5ae91578.299.6bed.19608@clear.net.nz> Message-ID: | > Sometimes, a language extension idea could benefit from | some community discussion before it's ready for a formal proposal. | | Can I point out it's not only ghc developers who make proposals. I'd | rather you post this idea more widely. The Right Thing is surely for the main GHC proposals page https://github.com/ghc-proposals/ghc-proposals to describe how you can put up a "pre-proposal". That is, document the entire process in one, easy to find, place. Mind you, I'm unclear about the distinction between a pre-proposal and a proposal. Both are drafts that invite community discussion, prior to submitting to the committee for decision.
Simon | -----Original Message----- | From: Glasgow-haskell-users On Behalf Of Anthony Clayden | Sent: 02 May 2018 02:34 | To: glasgow-haskell-users at haskell.org; ghc-devs at haskell.org | Subject: Re: Open up the issues tracker on ghc-proposals | | > On May 1, 2018, at 2:24 PM, David Feuer wrote: | > | > Sometimes, a language extension idea could benefit from | some community discussion before it's ready for a formal proposal. | | Can I point out it's not only ghc developers who make proposals. I'd | rather you post this idea more widely. | | As a datapoint, I found ghc-users and the café just fine for those | discussions. | Ghc-users seems to have very low traffic/is rather wasted currently. | And I believe a lot of people pre-discuss on reddit. | For ideas that have been on the back burner for a long time, there's | often wiki pages. (For example re Quantified | Constraints.) | | > I'd like to propose that we open up the GitHub issues | tracker for ghc-proposals to serve as a place to discuss pre-proposal | ideas. Once those discussions converge on one or a few specific plans, | someone can write a proper proposal. | | I'm not against that. There gets to be a lot of cruft on some | discussions about proposals, so I'd expect we could archive it all | once a proposal is more formalised. | | AntC | | _______________________________________________ | Glasgow-haskell-users mailing list | Glasgow-haskell-users at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users From simonpj at microsoft.com Wed May 2 08:24:15 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 2 May 2018 08:24:15 +0000 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: | I recently noticed that with -O1, ghc was optimizing some code | | if Trace.traceShowId "b" $ True | then ... | else ... | | to always evaluate the 'else' branch. Are you certain this is #13930? I don't see an obvious connection. 
It seems really really terrible to "optimise" True to False! I think #13930 was fixed by #5129, which in turn was about discarding a call to 'evaluate'. That is different to turning True to False. But there's probably some more complicated context to your use-case that means my understanding is faulty. If you are confident that it's securely fixed, well and good. But when bugs disappear I always worry that they are still there, just concealed by some other change. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Evan | Laforge | Sent: 02 May 2018 04:10 | To: ghc-devs at haskell.org | Subject: ghc 8.4.1 and trac 13930 | | I recently noticed that with -O1, ghc was optimizing some code | | if Trace.traceShowId "b" $ True | then return $ Left $ Serialize.BadMagic (Serialize.magicBytes | magic) file_magic | else first Serialize.UnserializeError <$> Exception.evaluate | (Serialize.decode rest) | | to always evaluate the 'else' branch. This is fixed in 8.4.2, so I'm | pretty sure it's https://ghc.haskell.org/trac/ghc/ticket/13930 | | I assume there's no point trying to get a minimal reproduction, since | it's a known issue and fixed. Still, | https://downloads.haskell.org/~ghc/8.4.2/docs/html/users_guide/8.4.2- | notes.html | says "incorrectly optimised, resulting in space leaks". | | Maybe it should instead say "incorrectly optimised, resulting in | taking the wrong branch in 'if' expressions"? That's a bit more | alarming, and is a stronger "upgrade to 8.4.2" signal. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From michal.terepeta at gmail.com Wed May 2 11:30:29 2018 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Wed, 02 May 2018 11:30:29 +0000 Subject: Question about ArrayArray# Message-ID: Hi all, I have a quick question about ArrayArray#. Is it safe to store *both* a ByteArray# and ArrayArray# within the *same* ArrayArray#?
For instance: - at index 0 of an ArrayArray# I store a different ArrayArray#, - at index 1 of that same ArrayArray# I store a ByteArray#. It seems to me that this should be safe/supported from the point of view of the runtime system: - both ArrayArray# and ByteArray# have the same kind/runtime representation, - the arrays have a header that tells rts/GC what they are/how to handle them. (But I, as a user, would be responsible for using the right primop with the right index to read them back) Is this correct? Thanks a lot! - Michal From juhpetersen at gmail.com Wed May 2 14:05:58 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Wed, 2 May 2018 23:05:58 +0900 Subject: building ghc once or twice? Message-ID: I have been packaging ghc for a long time... In older times I think it was recommended to first do a (quick) build of a new version of ghc (with the previous version) and then to do a (perf) rebuild of the new version against itself. In fact I am still building ghc this way for Fedora: though it seems like this is overhead nowadays...? (I think one major reason was to get stable ABI hashes for the core library packages.) These days should I just do a single default or perf build of a new ghc version against a previous stable release, or does it still make sense to continue to build in two steps like I have been doing? Any pros or cons? Jens From ben at smart-cactus.org Wed May 2 14:45:55 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 02 May 2018 10:45:55 -0400 Subject: building ghc once or twice? In-Reply-To: References: Message-ID: <87wowmce01.fsf@smart-cactus.org> Jens Petersen writes: > I have been packaging ghc for a long time... > > In older times I think it was recommended to first do a (quick) build > of a new version of ghc (with the previous version) and then to do a > (perf) rebuild of the new version against itself. > In fact I am still building ghc this way for Fedora: though it seems > like this is overhead nowadays...? 
> (I think one major reason was to get stable ABI hashes for the core > library packages.) > > These days should I just do a single default or perf build of a new > ghc version against a previous stable release, or does it still make > sense to continue to build in two steps like I have been doing? > Any pros or cons? > Indeed; GHC's build system already performs a two-stage bootstrapping so it shouldn't be necessary to do multiple builds yourself. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From qdunkan at gmail.com Wed May 2 17:55:13 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Wed, 2 May 2018 10:55:13 -0700 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: On Wed, May 2, 2018 at 1:24 AM, Simon Peyton Jones wrote: > | I recently noticed that with -O1, ghc was optimizing some code > | > | if Trace.traceShowId "b" $ True > | then ... > | else ... > | > | to always evaluate the 'else' branch. > > Are you certain this is #13930? I don't see an obvious connection. It seems really really terrible to "optimise" True to False! > > I think #13930 was fixed by #5129, which in turn was about discarding a call to 'evaluate'. That is different to turning True to False. > > But there's probably some more complicated context to your use-case that means my understanding is faulty. > > If you are confident that it's securely fixed, well and good. But when bugs disappear I always worry that they are still there, just concealed by some other change. I'm not totally confident, which is I why I asked. It does seem to be related to the presence of Exception.evaluate, but it also comes and goes depending on how many things are in the condition and branches. It does seem to be gone in 8.4.1, but I'm also a bit nervous when I don't know exactly why something was fixed. 
I'll try to reduce this to as small an expression as possible that still triggers True -> False. From qdunkan at gmail.com Wed May 2 18:39:03 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Wed, 2 May 2018 11:39:03 -0700 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: Ok, here's a short module: import qualified Control.Exception as Exception main :: IO () main = do unserialize putStrLn "all is well" unserialize :: IO Char unserialize = if definitelyTrue then do return 'a' else do Exception.evaluate (error "wrong place") {-# NOINLINE definitelyTrue #-} definitelyTrue :: Bool definitelyTrue = True When compiled with -O on 8.4.1, this should print "wrong place". Without -O, or with 8.4.2, or if True can be inlined, or without evaluate, all is well. From simonpj at microsoft.com Wed May 2 19:27:26 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 2 May 2018 19:27:26 +0000 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: Wow. Could you open a ticket? I just tried with 8.2.2 which is what I have on this laptop, but it printed "all is well". Does that mean it was fine in 8.2, went wrong in 8.4.1 and was fixed in 8.4.2? Simon | -----Original Message----- | From: Evan Laforge | Sent: 02 May 2018 19:39 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: ghc 8.4.1 and trac 13930 | | Ok, here's a short module: | | import qualified Control.Exception as Exception | | main :: IO () | main = do | unserialize | putStrLn "all is well" | | unserialize :: IO Char | unserialize = | if definitelyTrue | then do | return 'a' | else do | Exception.evaluate (error "wrong place") | | {-# NOINLINE definitelyTrue #-} | definitelyTrue :: Bool | definitelyTrue = True | | | When compiled with -O on 8.4.1, this should print "wrong place". | Without -O, or with 8.4.2, or if True can be inlined, or without | evaluate, all is well. 
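The role of Exception.evaluate in the reproduction above is worth spelling out: evaluate forces its argument to weak head normal form inside IO, so an error reached through it must throw at exactly that point. A minimal standalone sketch of that forcing behaviour, independent of the compiler bug (the message strings here are illustrative, not from the thread):

```haskell
import qualified Control.Exception as Exception

main :: IO ()
main = do
  -- evaluate forces the error thunk to WHNF inside IO, so try can catch
  -- the resulting ErrorCall; plain laziness alone would not raise it here.
  r <- Exception.try (Exception.evaluate (error "boom" :: Char))
         :: IO (Either Exception.ErrorCall Char)
  case r of
    Left _  -> putStrLn "evaluate forced the error, as expected"
    Right _ -> putStrLn "unexpected: error was not forced"
```

On a correct compiler this prints the first message, which is why taking the 'else' branch in the module above is immediately observable rather than silent.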
From qdunkan at gmail.com Wed May 2 19:36:47 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Wed, 2 May 2018 12:36:47 -0700 Subject: ghc 8.4.1 and trac 13930 In-Reply-To: References: Message-ID: On Wed, May 2, 2018 at 12:27 PM, Simon Peyton Jones wrote: > Wow. Could you open a ticket? Done: https://ghc.haskell.org/trac/ghc/ticket/15114 > I just tried with 8.2.2 which is what I have on this laptop, but it printed "all is well". Does that mean it was fine in 8.2, went wrong in 8.4.1 and was fixed in 8.4.2? It seems likely! From carter.schonwald at gmail.com Thu May 3 12:40:29 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 03 May 2018 12:40:29 +0000 Subject: Question about ArrayArray# In-Reply-To: References: Message-ID: I think Ed’s structs package explicitly makes use of this :) On Wed, May 2, 2018 at 7:31 AM Michal Terepeta wrote: > Hi all, > > I have a quick question about ArrayArray#. Is it safe to store *both* an > ByteArray# and ArrayArray# within the *same* ArrayArray#? For instance: > - at index 0 of an ArrayArray# I store a different ArrayArray#, > - at index 1 of that same ArrayArray# I store a ByteArray#. > > It seems to me that this should be safe/supported from the point of view of > the runtime system: > - both ArrayArray# and ByteArray# have the same kind/runtime > representation, > - the arrays have a header that tells rts/GC what they are/how to handle > them. > (But I, as a user, would be responsible for using the right primop with the > right index to read them back) > > Is this correct? > > Thanks a lot! > > - Michal > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.terepeta at gmail.com Thu May 3 14:34:05 2018 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Thu, 03 May 2018 14:34:05 +0000 Subject: Question about ArrayArray# In-Reply-To: References: Message-ID: On Thu, May 3, 2018 at 2:40 PM Carter Schonwald wrote: > I think Ed’s structs package explicitly makes use of this :) > Oh, interesting! Thanks for the pointer! Looking at Ed's code, he seems to be doing something similar to what I'm also interested in: having a SmallArray# that at one index points to another SmallArray# and at another one to a ByteArray#. (my use case involves multiple small arrays, so I'd rather use SmallArray# than ArrayArray#): https://github.com/ekmett/structs/blob/master/src/Data/Struct/Internal.hs#L146 So I guess my second question becomes: is anyone aware of some rts/GC invariants/expectations that would be broken by doing this? (ignoring the issue of getting every `unsafeCoerce#` right :) Thanks! - Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu May 3 17:49:48 2018 From: ben at well-typed.com (Ben Gamari) Date: Thu, 03 May 2018 13:49:48 -0400 Subject: Phabricator upgraded Message-ID: <87d0ycd3yg.fsf@smart-cactus.org> Hi everyone, I just finished performing a bit of work on Phabricator. In addition to doing a small upgrade, I disabled the draft state that newly-submitted differentials were previously ending up in. Hopefully this will restore sanity to the code review process. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From pi.boy.travis at gmail.com Fri May 4 07:44:04 2018 From: pi.boy.travis at gmail.com (Travis Whitaker) Date: Fri, 4 May 2018 00:44:04 -0700 Subject: Fwd: Availability of Coercible Instances at TH Runtime In-Reply-To: References: Message-ID: Given that Coercible instances are Deeply Magical, perhaps I'm being a bit naive here, but I recently tried to write a TH function that can check if one type is a newtype of another (or a newtype of a newtype of another, etc). coercibleToFrom :: Type -> Type -> Q Bool coercibleToFrom tx ty = (&&) <$> isInstance ''Coercible [tx, ty] <*> isInstance ''Coercible [ty, tx] If this worked as I'd hoped, I'm almost certain checking reflexively is redundant. However, I can't seem to get reifyInstances to ever return an InstanceDec for a Coercible instance. Given that these instances are generated on the fly by the typechecker, there's no way to make them available at TH runtime, correct? And, given that, is there an easy way to find out with TH whether or not I'll be able to use coerce without taking all the Decs apart to hunt for NewtypeD? Travis -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Fri May 4 18:01:52 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 4 May 2018 14:01:52 -0400 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: References: Message-ID: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> I don't think there's an easy way to do this. We could imagine extending Quasi to have a method to check for coercibility, but I don't think there's a way to do this in the current TH API. Sorry!
Richard > On May 4, 2018, at 3:44 AM, Travis Whitaker wrote: > > Given that Coercible instances are Deeply Magical, perhaps I'm being a bit naive here, but I recently tried to write a TH function that can check if one type is a newtype of another (or a newtype of a newtype of another, etc). > > coercibleToFrom :: Type -> Type -> Q Bool > coercibleToFrom tx ty = (&&) <$> isInstance ''Coercible [tx, ty] > <*> isInstance ''Coercible [ty, tx] > > If this worked as I'd hoped, I'm almost certain checking reflexively is redundant. However, I can't seem to get reifyInstances to ever return an InstanceDec for a Coercible instance. Given that these instances are generated on the fly by the typechecker, there's no way to make them available at TH runtime, correct? And, given that, is there an easy way to find out with TH whether not I'll be able to use coerce without taking all the Decs apart to hunt for NewtypeD? > > Travis > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From pi.boy.travis at gmail.com Sat May 5 02:01:43 2018 From: pi.boy.travis at gmail.com (Travis Whitaker) Date: Fri, 4 May 2018 19:01:43 -0700 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> References: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> Message-ID: A workaround that's suitable for what I'm after is defining a class like this for some fixed T. instance (Coercible a T) => C a I'm curious what it'd take to add a qReifyCoercible method; does the renamer know anything about Coercible? Thanks, Travis On Fri, May 4, 2018 at 11:01 AM, Richard Eisenberg wrote: > I don't think there's an easy way to do this. 
We could imagine extending > Quasi to have a method to check for coercibility, but I don't think there's > a way to do this in the current TH API. Sorry! > > Richard > > On May 4, 2018, at 3:44 AM, Travis Whitaker > wrote: > > Given that Coercible instances are Deeply Magical, perhaps I'm being a bit > naive here, but I recently tried to write a TH function that can check if > one type is a newtype of another (or a newtype of a newtype of another, > etc). > > coercibleToFrom :: Type -> Type -> Q Bool > coercibleToFrom tx ty = (&&) <$> isInstance ''Coercible [tx, ty] > <*> isInstance ''Coercible [ty, tx] > > If this worked as I'd hoped, I'm almost certain checking reflexively is > redundant. However, I can't seem to get reifyInstances to ever return an > InstanceDec for a Coercible instance. Given that these instances are > generated on the fly by the typechecker, there's no way to make them > available at TH runtime, correct? And, given that, is there an easy way to > find out with TH whether not I'll be able to use coerce without taking all > the Decs apart to hunt for NewtypeD? > > Travis > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Sat May 5 10:07:06 2018 From: lonetiger at gmail.com (Phyx) Date: Sat, 5 May 2018 11:07:06 +0100 Subject: End of Windows Vista support in GHC-8.6? In-Reply-To: References: <87inaa4esg.fsf@smart-cactus.org> Message-ID: Hi Simon, Whatever happened to this? The wiki was updated but I don't see a commit actually removing vista support. Did you end up not doing this anymore? Thanks, Tamar On Mon, Mar 5, 2018 at 7:21 PM, Simon Jakobi wrote: > Thanks everyone! > > I have updated https://ghc.haskell.org/trac/ghc/wiki/Platforms/Windows > accordingly. 
> > Cheers, > Simon > > 2018-03-05 18:29 GMT+01:00 Phyx : > >> >> >> On Mon, Mar 5, 2018, 17:23 Ben Gamari wrote: >> >>> Simon Jakobi via ghc-devs writes: >>> >>> > Hi! >>> > >>> > Given that Vista’s EOL was in April 2017 >>> > >> a-end-of-support> >>> > i assume that there’s no intention to keep supporting it in GHC-8.6!? >>> > >>> > I’m asking because I intend to use a function >>> > >> 405488.aspx> >>> > that requires Windows 7 or newer for #13362 >>> > . >>> > >>> Given that it's EOL'd, dropping Vista sounds reasonable to me. >>> >>> Tamar, any objection? >>> >> >> No objections, however do make sure to test both 32 and 64 bit builds of >> ghc when you use the API, it's new enough and rare enough that it may not >> be implemented in both mingw-64 tool chains (we've had similar issues >> before). >> >> Thanks, >> Tamar >> >> >>> Cheers, >>> >>> - Ben >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chessai1996 at gmail.com Sat May 5 16:33:11 2018 From: chessai1996 at gmail.com (Daniel Cartwright) Date: Sat, 5 May 2018 12:33:11 -0400 Subject: potential for GHC benchmarks w.r.t. optimisations being incorrect Message-ID: I am admittedly unsure of how GHC's optimisation benchmarks are currently implemented/carried out, but I feel as though this paper and its findings could be relevant to GHC devs: http://cis.upenn.edu/~cis501/papers/producing-wrong-data.pdf Basically, according to this paper, the cache effects of changing where the stack starts based on the number of environment variables are huge for many compiler benchmarks, and adjusting for this effect shows that gcc -O3 is only in actuality 1% faster than gcc -O2. Some further thoughts, per http://aftermath.rocks/2016/04/11/wrong_data/ : "The question they looked at was the following: does the compiler’s -O3 optimization flag result in speedups over -O2? 
This question is investigated in the light of measurement biases caused by two sources: Unix environment size, and linking order. The former refers to the total size of the representation of Unix environment variables (such as PATH, HOME, etc.). Typically, these variables are part of the memory image of each process. The call stack begins where the environment ends. This gives rise to the following hypothesis: changing the sizes of (unused!) environment variables can change the alignment of variables on the stack and thus the performance of the program under test due to different behavior of hardware buffers such as caches or TLBs. (This is the source of the hypothetical example in the first paragraph, which I made up. On the machine where I am typing this, my user name appears in 12 of the environment variables that are set by default. All other things being equal, another user with a user name of a different length will have an environment size that differs by a multiple of 12 bytes.)" "So does this hypothesis hold? Yes. Using a simple computational kernel the authors observe that changing the size of the environment can often cause a slowdown of 33% and, in one particular case, by 300%. On larger benchmarks the effects are less pronounced but still present. Using the C programs from the standard SPEC CPU2006 benchmark suite, the effects of -O2 and -O3 optimizations were compared across a wide range of environment sizes. For several of the programs a wide range of variations was observed, and the results often included both positive and negative observations. The effects were not correlated with the environment size. All this means that for some benchmarks, a compiler engineer might by accident test a purported optimization in a lucky environment and observe a 10% speedup, while users of the same optimization in an unlucky environment may have a 10% slowdown on the same workload." I write this out of curiosity, as well as concern, over how this may affect GHC.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From klebinger.andreas at gmx.at Sat May 5 19:23:47 2018 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sat, 05 May 2018 21:23:47 +0200 Subject: Basic Block Layout in the NCG Message-ID: <5AEE04C3.6090603@gmx.at> Does anyone have good hints for literature on basic block layout algorithms? I've run into a few examples where the current algorithm falls apart while working on Cmm. There is a trac ticket https://ghc.haskell.org/trac/ghc/ticket/15124#ticket where I tracked some of the issues I ran into. As it stands some cmm optimizations are far outweighed by accidental changes they cause in the layout of basic blocks. The main problem seems to be that the current codegen only considers the last jump in a basic block as relevant for code layout. This works well for linear chains of control flow but behaves badly and somewhat unpredictably when dealing with branch heavy code where blocks have more than one successor or calls. In particular if we have a loop A jmp B call C call D which we enter into at block B from Block E we would like something like: E,B,C,D,A Which means with some luck C/D might still be in cache if we return from the call. However we can currently get: E,B,A,X,D,X,C where X are other unrelated blocks. This happens since call edges are invisible to the layout algorithm. It even happens when we have (conditional) jumps from B to C and C to D since these are invisible as well! I came across cases where inverting conditions led to big performance losses since suddenly block layout got all messed up. (~4% slowdown for the worst offenders). So I'm looking for solutions there. From mail at joachim-breitner.de Sat May 5 20:42:11 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 05 May 2018 16:42:11 -0400 Subject: potential for GHC benchmarks w.r.t.
optimisations being incorrect In-Reply-To: References: Message-ID: <084b09cd2fb905d07dbbacc8a3a099c26dd38ec2.camel@joachim-breitner.de> Hi, On Saturday, 05.05.2018, 12:33 -0400, Daniel Cartwright wrote: > I write this out of curiosity, as well as concern, over how this may affect GHC. our performance measurements are pretty non-scientific. For many decades, developers just ran our benchmark suite (nofib) before and after their change, hopefully on a cleanly built working copy, and pasted the most interesting numbers in the commit logs. Maybe some went for coffee to have an otherwise relatively quiet machine (or have some remote setup), maybe not. In the end, the run-time performance numbers are often ignored and we focus on comparing the effects of *dynamic heap allocations*, which are much more stable across different environments, and which we believe are a good proxy for actual performance, at least for the kind of high-level optimizations that we work on in the core-to-core pipeline. But this assumption is folklore, and not scientifically investigated. Two years or so ago we started collecting performance numbers for every commit to the GHC repository, and I wrote a tool to print comparisons: https://perf.haskell.org/ghc/ This runs on a dedicated physical machine, and still the run-time numbers were varying too widely and gave us many false warnings (and probably reported many false improvements which we of course were happy to believe). I have since switched to measuring only dynamic instruction counts with valgrind. This means that we cannot detect improvements or regressions due to certain low-level stuff, but we gain the ability to reliably measure *something* that we expect to change when we improve (or accidentally worsen) the high-level transformations. I wish there were a better way of getting a reliable, stable number that reflects the actual performance.
Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From svenpanne at gmail.com Sun May 6 12:42:56 2018 From: svenpanne at gmail.com (Sven Panne) Date: Sun, 6 May 2018 14:42:56 +0200 Subject: Basic Block Layout in the NCG In-Reply-To: <5AEE04C3.6090603@gmx.at> References: <5AEE04C3.6090603@gmx.at> Message-ID: 2018-05-05 21:23 GMT+02:00 Andreas Klebinger : > [...] I came across cases where inverting conditions lead to big > performance losses since suddenly block layout > got all messed up. (~4% slowdown for the worst offenders). [...] > 4% is far from being "big", look e.g. at https://dendibakh.github.io/blog/2018/01/18/Code_alignment_issues where changing just the alignment of the code led to a 10% difference. :-/ The code itself or its layout wasn't changed at all. The "Producing Wrong Data Without Doing Anything Obviously Wrong!" paper gives more funny examples. I'm not saying that code layout has no impact, quite the opposite. The main point is: Do we really have a benchmarking machinery in place which can tell you if you've improved the real run time or made it worse? I doubt that, at least at the scale of a few percent. To reach just that simple yes/no conclusion, you would need quite a heavy machinery involving randomized linking order, varying environments (in the sense of "number and contents of environment variables"), various CPU models etc. If you do not do that, modern HW will leave you with a lot of "WTF?!" moments and wrong conclusions. -------------- next part -------------- An HTML attachment was scrubbed... URL: From klebinger.andreas at gmx.at Sun May 6 14:41:06 2018 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sun, 06 May 2018 16:41:06 +0200 Subject: potential for GHC benchmarks w.r.t.
optimisations being incorrect In-Reply-To: <084b09cd2fb905d07dbbacc8a3a099c26dd38ec2.camel@joachim-breitner.de> References: <084b09cd2fb905d07dbbacc8a3a099c26dd38ec2.camel@joachim-breitner.de> Message-ID: <5AEF1402.6080407@gmx.at> Joachim Breitner wrote: > This runs on a dedicated physical machine, and still the run-time > numbers were varying too widely and gave us many false warnings (and > probably reported many false improvements which we of course were happy > to believe). I have since switched to measuring only dynamic > instruction counts with valgrind. This means that we cannot detect > improvement or regressions due to certain low-level stuff, but we gain > the ability to reliably measure *something* that we expect to change > when we improve (or accidentally worsen) the high-level > transformations. While this matches my experience with the default settings, I had good results by tuning the number of measurements nofib does. With a high number of NoFibRuns (30+), disabling frequency scaling, stopping background tasks and walking away from the computer till it was done I got noise down to differences of about +/-0.2% for subsequent runs. This doesn't eliminate alignment bias and the like but at least it gives fairly reproducible results. Sven Panne wrote: > 4% is far from being "big", look e.g. at > https://dendibakh.github.io/blog/2018/01/18/Code_alignment_issues > > where changing just the alignment of the code lead to a 10% > difference. :-/ The code itself or its layout wasn't changed at all. > The "Producing Wrong Data Without Doing Anything Obviously Wrong!" > paper gives more funny examples. > > I'm not saying that code layout has no impact, quite the opposite. The > main point is: Do we really have a benchmarking machinery in place > which can tell you if you've improved the real run time or made it > worse? I doubt that, at least at the scale of a few percent.
To reach > just that simple yes/no conclusion, you would need quite a heavy > machinery involving randomized linking order, varying environments (in > the sense of "number and contents of environment variables"), various > CPU models etc. If you do not do that, modern HW will leave you with a > lot of "WTF?!" moments and wrong conclusions. You raise good points. While the example in the blog seems a bit contrived, with the whole loop fitting in a cache line, the principle is a real concern. I've hit alignment issues and WTF moments plenty of times in the past when looking at micro benchmarks. However on the scale of nofib so far I haven't really seen this happen. It's good to be aware of the chance for a whole suite to give wrong results though. I wonder if this effect is limited by GHC's tendency to use 8 byte alignment for all code (at least with tables next to code)? If we only consider 16byte (DSB Buffer) and 32 Byte (Cache Lines) relevant this reduces the possibilities by a lot after all. In the particular example I've hit however it's pretty obvious that alignment is not the issue. (And I still verified that). In the end how big the impact of a better layout would be in general is hard to quantify. Hence the question if anyone has pointers to good literature which looks into this. Cheers Andreas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun May 6 14:59:22 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 06 May 2018 10:59:22 -0400 Subject: potential for GHC benchmarks w.r.t.
optimisations being incorrect In-Reply-To: <5AEF1402.6080407@gmx.at> References: <084b09cd2fb905d07dbbacc8a3a099c26dd38ec2.camel@joachim-breitner.de> <5AEF1402.6080407@gmx.at> Message-ID: <98be05955d47fbeebb7bab9ff8c4b6c98dcb2e6d.camel@joachim-breitner.de> Hi, On Sunday, 06.05.2018 at 16:41 +0200, Andreas Klebinger wrote: > With a high number of NoFibRuns (30+), disabling frequency scaling, > stopping background tasks and walking away from the computer > till it was done, I got noise down to differences of about +/-0.2% for > subsequent runs. > > This doesn't eliminate alignment bias and the like but at least it > gives fairly reproducible results. That’s true, but it leaves alignment bias. This bit me in my work on Call Arity, as I write in my thesis: Initially, I attempted to use the actual run time measurements, but it turned out to be a mostly pointless endeavour. For example the knights benchmark would become 9% slower when enabling Call Arity (i.e. when comparing (A) to (B)), a completely unexpected result, given that the changes to the GHC Core code were reasonable. Further investigation using performance data obtained from the CPU indicated that with the changed code, the CPU’s instruction decoder was idling for more cycles, hinting at cache effects and/or bad program layout. Indeed: When I compiled the code with the compiler flag -g, which includes debugging information in the resulting binary, but should otherwise not affect the relative performance characteristics much, the unexpected difference vanished. I conclude that non-local changes to the Haskell or Core code will change the layout of the generated program code in unpredictable ways and render such run time measurements mostly meaningless. This conclusion has been drawn before [MDHS09], and recently, tools to mitigate this effect, e.g. by randomising the code layout [CB13], were created. Unfortunately, these currently target specific C compilers, so I could not use them here.
In the following measurements, I avoid this problem by not measuring program execution time, but simply by counting the number of instructions performed. This way, the variability in execution time due to code layout does not affect the results. To obtain the instruction counts I employ valgrind [NS07], which runs the benchmarks on a virtual CPU and thus produces more reliable and reproducible measurements. Unpleasant experience. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From kavon at farvard.in Sun May 6 16:51:51 2018 From: kavon at farvard.in (Kavon Farvardin) Date: Sun, 06 May 2018 11:51:51 -0500 Subject: Basic Block Layout in the NCG In-Reply-To: References: <5AEE04C3.6090603@gmx.at> Message-ID: > 4% is far from being "big", look e.g. at https://dendibakh.github.io/ > blog/2018/01/18/Code_alignment_issues where changing just the > alignment of the code lead to a 10% difference. :-/ The code itself > or its layout wasn't changed at all. The "Producing Wrong Data > Without Doing Anything Obviously Wrong!" paper gives more funny > examples. This particular instance of performance difference due to aligning blocks/functions is not very surprising (it is mentioned in section 3.4.1.5 of the Intel Optimization Manual). I also think 4% on large, non-trivial programs is "bigger" than 10% on the tiny example in that blog post (especially since the alignment issues disappear above 1024 iterations of the loop). ~kavon On Sun, 2018-05-06 at 14:42 +0200, Sven Panne wrote: > 2018-05-05 21:23 GMT+02:00 Andreas Klebinger t>: > > [...] I came across cases where inverting conditions lead to big > > performance losses since suddenly block layout > > got all messed up. (~4% slowdown for the worst offenders). [...] 
> > > 4% is far from being "big", look e.g. at https://dendibakh.github.io/ > blog/2018/01/18/Code_alignment_issues where changing just the > alignment of the code lead to a 10% difference. :-/ The code itself > or its layout wasn't changed at all. The "Producing Wrong Data > Without Doing Anything Obviously Wrong!" paper gives more funny > examples. > > I'm not saying that code layout has no impact, quite the opposite. > The main point is: Do we really have a benchmarking machinery in > place which can tell you if you've improved the real run time or made > it worse? I doubt that, at least at the scale of a few percent. To > reach just that simple yes/no conclusion, you would need quite a > heavy machinery involving randomized linking order, varying > environments (in the sense of "number and contents of environment > variables"), various CPU models etc. If you do not do that, modern HW > will leave you with a lot of "WTF?!" moments and wrong conclusions. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From kavon at farvard.in Sun May 6 18:17:56 2018 From: kavon at farvard.in (Kavon Farvardin) Date: Sun, 06 May 2018 13:17:56 -0500 Subject: Basic Block Layout in the NCG In-Reply-To: <5AEE04C3.6090603@gmx.at> References: <5AEE04C3.6090603@gmx.at> Message-ID: <32a627e23de63e52a344d1fb1166350352354e39.camel@farvard.in> > Does anyone have good hints for literature on basic block layout > algorithms? Here are some thoughts: * Branch Probability * Any good code layout algorithm should take branch probabilities into account. 
From what I've seen, we already have a few likely-branch heuristics baked into the generation/transformation of Cmm, though perhaps it's worth doing more to add probabilities as in [1,3]. The richer type information in STG could come in handy. I think the first step in leveraging branch probability information is to use a Float to represent the magnitude of likeliness instead of a Bool. Target probabilities on CmmSwitches could also help create smarter SwitchPlans. Slides 20-21 in [2] demonstrate a lower-cost decision tree based on these probabilities. * Code Layout * The best all-in-one source for static code positioning I've seen is in [5], and might be a good starting point for exploring that space. More importantly, [5] talks about function positioning, which is something I think we're missing. A more sophisticated extension to [5]'s function positioning can be found in [6]. Keeping in mind that LLVM is tuned to optimize loops within functions, at a high level LLVM does the following [4]: The algorithm works from the inner-most loop within a function outward, and at each stage walks through the basic blocks, trying to coalesce them into sequential chains where allowed by the CFG (or demanded by heavy probabilities). Finally, it walks the blocks in topological order, and the first time it reaches a chain of basic blocks, it schedules them in the function in-order. There are also plenty of heuristics such as "tail duplication" to deal with diamonds and other odd cases in the CFG that are harder to lay out. Unfortunately, there don't seem to be any sources cited. We may want to develop our own heuristics to modify the CFG for better layout as well. [1] Thomas Ball, James R. Larus. Branch Prediction for Free (https://doi.org/10.1145/173262.155119) [2] Hans Wennborg. The recent switch lowering improvements. (http://llvm.org/devmtg/2015-10/slides/Wennborg-SwitchLowering.pdf) See also: https://www.youtube.com/watch?v=gMqSinyL8uk [3] James E. Smith.
A study of branch prediction strategies (https://dl.acm.org/citation.cfm?id=801871) [4] http://llvm.org/doxygen/MachineBlockPlacement_8cpp_source.html [5] Karl Pettis, Robert C. Hansen. Profile guided code positioning. (https://doi.org/10.1145/93542.93550) [6] Hashemi et al. Efficient procedure mapping using cache line coloring (https://doi.org/10.1145/258915.258931) ~kavon On Sat, 2018-05-05 at 21:23 +0200, Andreas Klebinger wrote: > Does anyone have good hints for literature on basic block layout > algorithms? > I've run into a few examples where the current algorithm falls apart > while working on Cmm. > > There is a trac ticket https://ghc.haskell.org/trac/ghc/ticket/15124#ticket > where I tracked some of the issues I ran into. > > As it stands some Cmm optimizations are far outweighed by > accidental changes they cause in the layout of basic blocks. > > The main problem seems to be that the current codegen only considers > the last jump > in a basic block as relevant for code layout. > > This works well for linear chains of control flow but behaves badly > and somewhat > unpredictably when dealing with branch-heavy code where blocks have > more than > one successor or calls. > > In particular if we have a loop > > A jmp B call C call D > > which we enter into at block B from Block E > we would like something like: > > E,B,C,D,A > > Which means with some luck C/D might be still in cache if we return > from the call. > > However we can currently get: > > E,B,A,X,D,X,C > > where X are other unrelated blocks. This happens since call edges > are invisible to the layout algorithm. > It even happens when we have (conditional) jumps from B to C and C > to D > since these are invisible as well!
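The chain-merging scheme of Pettis and Hansen [5], which Kavon points to, can be sketched in a few lines of Haskell. This is a toy model, not GHC's NCG: block names and edge weights are invented, and call/return edges are simply given explicit weights. Every block starts in its own chain; then, for each CFG edge from heaviest to lightest, the chain ending at the edge's source is glued to the chain starting at its target. On the loop example from the quoted message this recovers the desired E,B,C,D,A order:

```haskell
import Data.List (find, sortBy)
import Data.Ord (Down (..), comparing)

type Block = String
type Edge  = ((Block, Block), Int)  -- ((from, to), weight); weights are invented

-- Bottom-up chain formation a la Pettis-Hansen: start with singleton
-- chains, then merge along edges in order of decreasing weight.
layout :: [Block] -> [Edge] -> [[Block]]
layout blocks edges = foldl merge [[b] | b <- blocks] heaviestFirst
  where
    heaviestFirst = map fst (sortBy (comparing (Down . snd)) edges)
    merge chains (a, b) =
      case (find ((== a) . last) chains, find ((== b) . head) chains) of
        -- Only merge when `a` ends one chain and `b` starts a *different* one.
        (Just ca, Just cb)
          | ca /= cb -> (ca ++ cb) : filter (`notElem` [ca, cb]) chains
        _ -> chains

-- The example from the thread: loop A -> B, calls B -> C -> D, entry E -> B.
demo :: [Block]
demo = concat (layout ["E", "B", "C", "D", "A"]
                      [ (("E", "B"), 10)   -- entry into the loop
                      , (("B", "C"), 10)   -- call edge
                      , (("C", "D"), 10)   -- call edge
                      , (("A", "B"), 9) ]) -- loop back-edge
-- demo == ["E","B","C","D","A"], the layout asked for above
```

A real implementation would derive the weights from profiles or branch-probability heuristics and fall back to a topological walk for chains the merging step never connects.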
> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: This is a digitally signed message part URL: From klebinger.andreas at gmx.at Sun May 6 19:58:31 2018 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sun, 06 May 2018 21:58:31 +0200 Subject: Basic Block Layout in the NCG In-Reply-To: <32a627e23de63e52a344d1fb1166350352354e39.camel@farvard.in> References: <5AEE04C3.6090603@gmx.at> <32a627e23de63e52a344d1fb1166350352354e39.camel@farvard.in> Message-ID: <5AEF5E67.6050403@gmx.at> On branch probability: I've actually created a patch to add more probabilities in the recent past, which included probabilities on CmmSwitches, although it made little difference when only tagging error branches. Partially that also ran into issues with code layout, which is why I put it on ice for now. The full patch is here: https://phabricator.haskell.org/D4327 I think this really has to be done back to front, as GHC currently throws away all likelihood information before we get to block layout, which makes it very hard to take advantage of this information. Code layout: That seems exactly like the kind of pointers I was looking for! I do wonder how well some of them aged. For example [5] and [6] assume at most a two-way associative cache. But as you said it should be at least a good starting point. I will put your links into the ticket so they are easily found once I (or someone else!) has time to look deeper into this. Cheers Andreas
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon May 7 02:39:36 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 6 May 2018 22:39:36 -0400 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: References: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> Message-ID: > On May 4, 2018, at 10:01 PM, Travis Whitaker wrote: > > I'm curious what it'd take to add a qReifyCoercible method; does the renamer know anything about Coercible? No, it doesn't, but the Q monad is actually a wrapper around TcM, the type-checker monad. I don't think it would be all that hard to do this. Could be a suitable project for someone new to GHC hacking... Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From kane at kane.cx Mon May 7 04:37:58 2018 From: kane at kane.cx (David Kraeutmann) Date: Mon, 7 May 2018 06:37:58 +0200 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: References: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> Message-ID: <3174c31d-43eb-b4a9-6345-c7707b1565e9@kane.cx> Tangentially related, but wasn't there a plan at some point to integrate TH more tightly with GHC? On 5/7/2018 4:39 AM, Richard Eisenberg wrote: > > No, it doesn't, but the Q monad is actually a wrapper around TcM, the > type-checker monad. I don't think it would be all that hard to do > this. Could be a suitable project for someone new to GHC hacking... From svenpanne at gmail.com Mon May 7 11:04:38 2018 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 7 May 2018 13:04:38 +0200 Subject: potential for GHC benchmarks w.r.t. optimisations being incorrect In-Reply-To: <5AEF1402.6080407@gmx.at> References: <084b09cd2fb905d07dbbacc8a3a099c26dd38ec2.camel@joachim-breitner.de> <5AEF1402.6080407@gmx.at> Message-ID: 2018-05-06 16:41 GMT+02:00 Andreas Klebinger : > [...] 
If we only consider 16byte (DSB Buffer) and 32 Byte (Cache Lines) > relevant this reduces the possibilities by a lot after all. [...] > Nitpick: Cache lines on basically all Intel/AMD processors contain 64 bytes, see e.g. http://www.agner.org/optimize/microarchitecture.pdf -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon May 7 14:14:30 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 7 May 2018 10:14:30 -0400 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: <3174c31d-43eb-b4a9-6345-c7707b1565e9@kane.cx> References: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> <3174c31d-43eb-b4a9-6345-c7707b1565e9@kane.cx> Message-ID: <56765644-F1B8-4059-95EB-D234B8C49C03@cs.brynmawr.edu> Yes: https://ghc.haskell.org/trac/ghc/wiki/TemplateHaskell/Introspective I think this may be waiting for the trees-that-grow work to be completed, and as far as I know, no one is actively working on this. But I still think it would be a Good Thing. Richard > On May 7, 2018, at 12:37 AM, David Kraeutmann wrote: > > Tangentially related, but wasn't there a plan at some point to integrate TH more tightly with GHC? > > On 5/7/2018 4:39 AM, Richard Eisenberg wrote: >> >> No, it doesn't, but the Q monad is actually a wrapper around TcM, the type-checker monad. I don't think it would be all that hard to do this. Could be a suitable project for someone new to GHC hacking... > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From alan.zimm at gmail.com Mon May 7 15:16:57 2018 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 7 May 2018 17:16:57 +0200 Subject: TTG hsSyn for Batch and Interactive Parsing Message-ID: I want to be able to run the GHC parser in one of two modes, batch which functions as now, and interactive which will (eventually) be incremental. 
In addition, the hsSyn AST for each will have different TTG[1] annotations, so that it can better support IDE usage. I think this can be done by changing the types in HsExtension to introduce a 'Process' type as follows:

data Pass = Parsed Process | Renamed | Typechecked deriving (Data)

data Process = Batch | Interactive deriving (Show, Data)

We then rename the pass synonyms so that batch is the default:

type GhcPs = GhcPass ('Parsed 'Batch)
type GhcPsI = GhcPass ('Parsed 'Interactive)

I have attached a simple proof of concept file, which emulates parsing and renaming. Is this an appropriate approach to take? Alan [1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ParserTypes.hs Type: text/x-haskell Size: 2250 bytes Desc: not available URL: From sh.najd at gmail.com Mon May 7 18:31:33 2018 From: sh.najd at gmail.com (Shayan Najd) Date: Mon, 7 May 2018 19:31:33 +0100 Subject: Availability of Coercible Instances at TH Runtime In-Reply-To: <56765644-F1B8-4059-95EB-D234B8C49C03@cs.brynmawr.edu> References: <5D5A8174-A549-4703-AAC8-D5CA335D43F4@cs.brynmawr.edu> <3174c31d-43eb-b4a9-6345-c7707b1565e9@kane.cx> <56765644-F1B8-4059-95EB-D234B8C49C03@cs.brynmawr.edu> Message-ID: > I think this may be waiting for the trees-that-grow work to be completed, and as far as I know, no one is actively working on this.
> > Richard > >> On May 7, 2018, at 12:37 AM, David Kraeutmann wrote: >> >> Tangentially related, but wasn't there a plan at some point to integrate TH more tightly with GHC? >> >> On 5/7/2018 4:39 AM, Richard Eisenberg wrote: >>> >>> No, it doesn't, but the Q monad is actually a wrapper around TcM, the type-checker monad. I don't think it would be all that hard to do this. Could be a suitable project for someone new to GHC hacking... >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Mon May 7 19:34:07 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 7 May 2018 22:34:07 +0300 Subject: Use NULL instead of END_X_QUEUE closures? Message-ID: Currently we sometimes use special closures to mark end of lists of different objects. Some examples: - END_TSO_QUEUE - END_STM_WATCH_QUEUE - END_STM_CHUNK_LIST But we also use NULL for the same thing, e.g. in weak pointer lists (old_weak_ptr_list, weak_ptr_list). I'm wondering why we need special marker objects (which are actual closures with info tables) instead of using NULL consistently. Current approach causes a minor problem when working on the RTS because every time I traverse a list I need to remember how the list is terminated (e.g. NULL when traversing weak pointer lists, END_TSO_QUEUE when traversing TSO lists). Ömer From fryguybob at gmail.com Tue May 8 00:52:47 2018 From: fryguybob at gmail.com (Ryan Yates) Date: Mon, 7 May 2018 20:52:47 -0400 Subject: Use NULL instead of END_X_QUEUE closures? In-Reply-To: References: Message-ID: Hi Ömer, These are pointed to by objects traversed by GC. They have info tables like any other heap object that GC can understand. 
I think this is a much simpler invariant to hold than to have some heap objects point to NULL. Ryan On Mon, May 7, 2018 at 3:34 PM, Ömer Sinan Ağacan wrote: > Currently we sometimes use special closures to mark end of lists of > different > objects. Some examples: > > - END_TSO_QUEUE > - END_STM_WATCH_QUEUE > - END_STM_CHUNK_LIST > > But we also use NULL for the same thing, e.g. in weak pointer lists > (old_weak_ptr_list, weak_ptr_list). > > I'm wondering why we need special marker objects (which are actual closures > with info tables) instead of using NULL consistently. Current approach > causes a > minor problem when working on the RTS because every time I traverse a list > I > need to remember how the list is terminated (e.g. NULL when traversing weak > pointer lists, END_TSO_QUEUE when traversing TSO lists). > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue May 8 08:20:10 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 8 May 2018 08:20:10 +0000 Subject: Basic Block Layout in the NCG In-Reply-To: <5AEF5E67.6050403@gmx.at> References: <5AEE04C3.6090603@gmx.at> <32a627e23de63e52a344d1fb1166350352354e39.camel@farvard.in> <5AEF5E67.6050403@gmx.at> Message-ID: There is good info in this thread … do add it to the ticket #15124 Simon From: ghc-devs On Behalf Of Andreas Klebinger Sent: 06 May 2018 20:59 To: Kavon Farvardin Cc: ghc-devs at haskell.org Subject: Re: Basic Block Layout in the NCG -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue May 8 08:54:44 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 8 May 2018 08:54:44 +0000 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: At first blush, “running the parser in two modes” and “changing the Pass” type don’t match up in my mind. One seems quite local (how to run the parser). The other seems more pervasive. Can you say more about your proposed design, perhaps even on a wiki page? Simon From: ghc-devs On Behalf Of Alan & Kim Zimmerman Sent: 07 May 2018 16:17 To: ghc-devs Subject: TTG hsSyn for Batch and Interactive Parsing I want to be able to run the GHC parser in one of two modes, batch which functions as now, and interactive which will (eventually) be incremental. In addition, the hsSyn AST for each will have different TTG[1] annotations, so that it can better support IDE usage.
I think this can be done by changing the types in HsExtension to introduce a 'Process' type as follows

    data Pass = Parsed Process | Renamed | Typechecked
         deriving (Data)

    data Process = Batch | Interactive
         deriving (Show, Data)

We then rename the pass synonyms so that batch is the default

    type GhcPs  = GhcPass ('Parsed 'Batch)
    type GhcPsI = GhcPass ('Parsed 'Interactive)

I have attached a simple proof of concept file, which emulates parsing and renaming.

Is this an appropriate approach to take?

Alan

[1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alan.zimm at gmail.com Tue May 8 20:02:24 2018
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Tue, 8 May 2018 22:02:24 +0200
Subject: TTG hsSyn for Batch and Interactive Parsing
In-Reply-To:
References:
Message-ID:

I have started a wiki page at https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/IdeSupport

On 8 May 2018 at 10:54, Simon Peyton Jones wrote:
> At first blush, “running the parser in two modes” and “changing the Pass type”
> don’t match up in my mind. One seems quite local (how to run the
> parser). The other seems more pervasive.
>
> Can you say more about your proposed design, perhaps even on a wiki page?
>
> Simon
>
> *From:* ghc-devs *On Behalf Of *Alan & Kim Zimmerman
> *Sent:* 07 May 2018 16:17
> *To:* ghc-devs
> *Subject:* TTG hsSyn for Batch and Interactive Parsing
>
> I want to be able to run the GHC parser in one of two modes, batch which
> functions as now, and interactive which will (eventually) be incremental.
>
> In addition, the hsSyn AST for each will have different TTG[1]
> annotations, so that it can better support IDE usage.
> > I think this can be done by changing the types in HsExtension to introduce > a 'Process' type as follows > > data Pass = Parsed Process | Renamed | Typechecked > deriving (Data) > > data Process = Batch | Interactive > deriving (Show, Data) > > We then rename the pass synonyms so that batch is the default > > type GhcPs = GhcPass ('Parsed 'Batch) > type GhcPsI = GhcPass ('Parsed 'Interactive) > > I have attached a simple proof of concept file, which emulates parsing and > renaming. > > Is this an appropriate approach to take? > > Alan > > > > [1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ > ImplementingTreesThatGrow > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed May 9 08:15:00 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 9 May 2018 08:15:00 +0000 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: Thanks. I am absolutely behind this objective: I propose to move the API Annotations to where they belong, inside the AST. Indeed I thought that was always part of the TTG plan. But I don’t understand what this has to do with interactive vs batch parsing. Why don’t you unconditionally retain API-annotation info? How would GhcPs be used differently to GhcPsI? You might want to answer by clarifying on the wiki page, so that it is a persistent record of the design debugged in dialogue by email. Simon From: Alan & Kim Zimmerman Sent: 08 May 2018 21:02 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: TTG hsSyn for Batch and Interactive Parsing I have started a wiki page at https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/IdeSupport On 8 May 2018 at 10:54, Simon Peyton Jones > wrote: At first blush, “running the parser in two modes” and “changing the Pass” type don’t match up in my mind. One seems quite local (how to run the parser). The other seems more pervasive. 
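One concrete answer to “How would GhcPs be used differently to GhcPsI?” is that TTG extension families can dispatch on the new Pass index. The following is a hedged sketch using simplified, invented stand-ins (the real extension points live in HsExtension; XVar and SrcSpan here are toy versions, not GHC's definitions): the interactive-parser pass keeps exact source locations for IDE use, while every other pass keeps nothing extra.

```haskell
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}

-- Illustrative stand-ins for GHC's real types.
data Process = Batch | Interactive
data Pass    = Parsed Process | Renamed | Typechecked

data GhcPass (p :: Pass)

type GhcPs  = GhcPass ('Parsed 'Batch)
type GhcPsI = GhcPass ('Parsed 'Interactive)

data SrcSpan = SrcSpan Int Int deriving Show

-- A hypothetical TTG extension point: only the interactive parser pass
-- carries a source span in variable nodes.
type family XVar p where
  XVar (GhcPass ('Parsed 'Interactive)) = SrcSpan
  XVar (GhcPass p)                      = ()

data HsVar p = HsVar (XVar p) String

-- The two parser passes share one AST definition but differ in payload:
varI :: HsVar GhcPsI
varI = HsVar (SrcSpan 1 5) "x"

varB :: HsVar GhcPs
varB = HsVar () "x"
```

A single parser implementation can then produce either tree, filling the extension field with a span or with () depending on the mode it was invoked in.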
Can you say more about your proposed design, perhaps even on a wiki page? Simon From: ghc-devs > On Behalf Of Alan & Kim Zimmerman Sent: 07 May 2018 16:17 To: ghc-devs > Subject: TTG hsSyn for Batch and Interactive Parsing I want to be able to run the GHC parser in one of two modes, batch which functions as now, and interactive which will (eventually) be incremental. In addition, the hsSyn AST for each will have different TTG[1] annotations, so that it can better support IDE usage. I think this can be done by changing the types in HsExtension to introduce a 'Process' type as follows data Pass = Parsed Process | Renamed | Typechecked deriving (Data) data Process = Batch | Interactive deriving (Show, Data) We then rename the pass synonyms so that batch is the default type GhcPs = GhcPass ('Parsed 'Batch) type GhcPsI = GhcPass ('Parsed 'Interactive) I have attached a simple proof of concept file, which emulates parsing and renaming. Is this an appropriate approach to take? Alan [1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.jakobi at googlemail.com Wed May 9 08:48:41 2018 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Wed, 9 May 2018 10:48:41 +0200 Subject: End of Windows Vista support in GHC-8.6? In-Reply-To: References: <87inaa4esg.fsf@smart-cactus.org> Message-ID: Thanks for the reminder, Tamar! I have just created https://phabricator.haskell.org/D4679. I would be very grateful if you could test it on 32bit Windows. Cheers, Simon 2018-05-05 12:07 GMT+02:00 Phyx : > Hi Simon, > > Whatever happened to this? The wiki was updated but I don't see a commit > actually removing vista support. > > Did you end up not doing this anymore? > > Thanks, > Tamar > > On Mon, Mar 5, 2018 at 7:21 PM, Simon Jakobi > wrote: >> >> Thanks everyone! >> >> I have updated https://ghc.haskell.org/trac/ghc/wiki/Platforms/Windows >> accordingly. 
>> >> Cheers, >> Simon >> >> 2018-03-05 18:29 GMT+01:00 Phyx : >>> >>> >>> >>> On Mon, Mar 5, 2018, 17:23 Ben Gamari wrote: >>>> >>>> Simon Jakobi via ghc-devs writes: >>>> >>>> > Hi! >>>> > >>>> > Given that Vista’s EOL was in April 2017 >>>> > >>>> > >>>> > i assume that there’s no intention to keep supporting it in GHC-8.6!? >>>> > >>>> > I’m asking because I intend to use a function >>>> > >>>> > >>>> > that requires Windows 7 or newer for #13362 >>>> > . >>>> > >>>> Given that it's EOL'd, dropping Vista sounds reasonable to me. >>>> >>>> Tamar, any objection? >>> >>> >>> No objections, however do make sure to test both 32 and 64 bit builds of >>> ghc when you use the API, it's new enough and rare enough that it may not be >>> implemented in both mingw-64 tool chains (we've had similar issues before). >>> >>> Thanks, >>> Tamar >>> >>>> >>>> Cheers, >>>> >>>> - Ben >> >> > From lonetiger at gmail.com Wed May 9 18:12:30 2018 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Wed, 9 May 2018 19:12:30 +0100 Subject: End of Windows Vista support in GHC-8.6? In-Reply-To: References: <87inaa4esg.fsf@smart-cactus.org> Message-ID: <5af33a0f.1c69fb81.81a1.f8e1@mx.google.com> I’ve started a build, note that you will also need a PR to Hadrian to update WINVER. From: Simon Jakobi Sent: Wednesday, May 9, 2018 09:49 To: Phyx Cc: Ben Gamari; ghc-devs at haskell.org Devs Subject: Re: End of Windows Vista support in GHC-8.6? Thanks for the reminder, Tamar! I have just created https://phabricator.haskell.org/D4679. I would be very grateful if you could test it on 32bit Windows. Cheers, Simon 2018-05-05 12:07 GMT+02:00 Phyx : > Hi Simon, > > Whatever happened to this? The wiki was updated but I don't see a commit > actually removing vista support. > > Did you end up not doing this anymore? > > Thanks, > Tamar > > On Mon, Mar 5, 2018 at 7:21 PM, Simon Jakobi > wrote: >> >> Thanks everyone! 
>> >> I have updated https://ghc.haskell.org/trac/ghc/wiki/Platforms/Windows >> accordingly. >> >> Cheers, >> Simon >> >> 2018-03-05 18:29 GMT+01:00 Phyx : >>> >>> >>> >>> On Mon, Mar 5, 2018, 17:23 Ben Gamari wrote: >>>> >>>> Simon Jakobi via ghc-devs writes: >>>> >>>> > Hi! >>>> > >>>> > Given that Vista’s EOL was in April 2017 >>>> > >>>> > >>>> > i assume that there’s no intention to keep supporting it in GHC-8.6!? >>>> > >>>> > I’m asking because I intend to use a function >>>> > >>>> > >>>> > that requires Windows 7 or newer for #13362 >>>> > . >>>> > >>>> Given that it's EOL'd, dropping Vista sounds reasonable to me. >>>> >>>> Tamar, any objection? >>> >>> >>> No objections, however do make sure to test both 32 and 64 bit builds of >>> ghc when you use the API, it's new enough and rare enough that it may not be >>> implemented in both mingw-64 tool chains (we've had similar issues before). >>> >>> Thanks, >>> Tamar >>> >>>> >>>> Cheers, >>>> >>>> - Ben >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed May 9 20:12:23 2018 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 9 May 2018 22:12:23 +0200 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: I have updated the Wiki. On 9 May 2018 at 10:15, Simon Peyton Jones wrote: > Thanks. > > > > I am absolutely behind this objective: > > I propose to move the API Annotations to where they belong, inside the AST. > > Indeed I thought that was always part of the TTG plan. > > > > *But I don’t understand what this has to do with interactive vs batch > parsing. *Why don’t you unconditionally retain API-annotation info? How > would GhcPs be used differently to GhcPsI? > > > > You might want to answer by clarifying on the wiki page, so that it is a > persistent record of the design debugged in dialogue by email. 
> > > > Simon > > > > *From:* Alan & Kim Zimmerman > *Sent:* 08 May 2018 21:02 > *To:* Simon Peyton Jones > *Cc:* ghc-devs > *Subject:* Re: TTG hsSyn for Batch and Interactive Parsing > > > > I have started a wiki page at https://ghc.haskell.org/trac/ghc/wiki/ > ImplementingTreesThatGrow/IdeSupport > > > > On 8 May 2018 at 10:54, Simon Peyton Jones wrote: > > At first blush, “running the parser in two modes” and “changing the Pass” > type don’t match up in my mind. One seems quite local (how to run the > parser). The other seems more pervasive. > > > > Can you say more about your proposed design, perhaps even on a wiki page? > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Alan & Kim > Zimmerman > *Sent:* 07 May 2018 16:17 > *To:* ghc-devs > *Subject:* TTG hsSyn for Batch and Interactive Parsing > > > > I want to be able to run the GHC parser in one of two modes, batch which > functions as now, and interactive which will (eventually) be incremental. > > In addition, the hsSyn AST for each will have different TTG[1] > annotations, so that it can better support IDE usage. > > I think this can be done by changing the types in HsExtension to introduce > a 'Process' type as follows > > data Pass = Parsed Process | Renamed | Typechecked > deriving (Data) > > data Process = Batch | Interactive > deriving (Show, Data) > > We then rename the pass synonyms so that batch is the default > > type GhcPs = GhcPass ('Parsed 'Batch) > type GhcPsI = GhcPass ('Parsed 'Interactive) > > I have attached a simple proof of concept file, which emulates parsing and > renaming. > > Is this an appropriate approach to take? > > Alan > > > > [1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ > ImplementingTreesThatGrow > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simon.jakobi at googlemail.com Wed May 9 22:47:09 2018 From: simon.jakobi at googlemail.com (Simon Jakobi) Date: Thu, 10 May 2018 00:47:09 +0200 Subject: End of Windows Vista support in GHC-8.6? In-Reply-To: <5af33a0f.1c69fb81.81a1.f8e1@mx.google.com> References: <87inaa4esg.fsf@smart-cactus.org> <5af33a0f.1c69fb81.81a1.f8e1@mx.google.com> Message-ID: Thanks a lot, Tamar! I'll make the PR for Hadrian once my patch is accepted. 2018-05-09 20:12 GMT+02:00 : > I’ve started a build, note that you will also need a PR to Hadrian to update > WINVER. > > > > From: Simon Jakobi > Sent: Wednesday, May 9, 2018 09:49 > To: Phyx > Cc: Ben Gamari; ghc-devs at haskell.org Devs > Subject: Re: End of Windows Vista support in GHC-8.6? > > > > Thanks for the reminder, Tamar! > > > > I have just created https://phabricator.haskell.org/D4679. I would be > > very grateful if you could test it on 32bit Windows. > > > > Cheers, > > Simon > > > > 2018-05-05 12:07 GMT+02:00 Phyx : > >> Hi Simon, > >> > >> Whatever happened to this? The wiki was updated but I don't see a commit > >> actually removing vista support. > >> > >> Did you end up not doing this anymore? > >> > >> Thanks, > >> Tamar > >> > >> On Mon, Mar 5, 2018 at 7:21 PM, Simon Jakobi > >> wrote: > >>> > >>> Thanks everyone! > >>> > >>> I have updated https://ghc.haskell.org/trac/ghc/wiki/Platforms/Windows > >>> accordingly. > >>> > >>> Cheers, > >>> Simon > >>> > >>> 2018-03-05 18:29 GMT+01:00 Phyx : > >>>> > >>>> > >>>> > >>>> On Mon, Mar 5, 2018, 17:23 Ben Gamari wrote: > >>>>> > >>>>> Simon Jakobi via ghc-devs writes: > >>>>> > >>>>> > Hi! > >>>>> > > >>>>> > Given that Vista’s EOL was in April 2017 > >>>>> > > >>>>> > >>>>> > > >>>>> > i assume that there’s no intention to keep supporting it in GHC-8.6!? > >>>>> > > >>>>> > I’m asking because I intend to use a function > >>>>> > > >>>>> > >>>>> > > >>>>> > that requires Windows 7 or newer for #13362 > >>>>> > . 
> >>>>> > > >>>>> Given that it's EOL'd, dropping Vista sounds reasonable to me. > >>>>> > >>>>> Tamar, any objection? > >>>> > >>>> > >>>> No objections, however do make sure to test both 32 and 64 bit builds of > >>>> ghc when you use the API, it's new enough and rare enough that it may >>>> not be > >>>> implemented in both mingw-64 tool chains (we've had similar issues >>>> before). > >>>> > >>>> Thanks, > >>>> Tamar > >>>> > >>>>> > >>>>> Cheers, > >>>>> > >>>>> - Ben > >>> > >>> > >> > > From simonpj at microsoft.com Thu May 10 14:31:58 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 10 May 2018 14:31:58 +0000 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: Thanks. But I still don’t see the connection with “interactive”. Why should maintaining API annotations have anything to do with interactivity? Maybe data Process = WithApiAnnotations | WithoutApiAnnotations I understand you want two different variants of the syntax tree, but I don’t understand what functions might produce or consume them. In particular does the parser produce (HsSyn (GhcPs WithApiAnnotations)) or without? Simon From: Alan & Kim Zimmerman Sent: 09 May 2018 21:12 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: TTG hsSyn for Batch and Interactive Parsing I have updated the Wiki. On 9 May 2018 at 10:15, Simon Peyton Jones > wrote: Thanks. I am absolutely behind this objective: I propose to move the API Annotations to where they belong, inside the AST. Indeed I thought that was always part of the TTG plan. But I don’t understand what this has to do with interactive vs batch parsing. Why don’t you unconditionally retain API-annotation info? How would GhcPs be used differently to GhcPsI? You might want to answer by clarifying on the wiki page, so that it is a persistent record of the design debugged in dialogue by email. 
Simon From: Alan & Kim Zimmerman > Sent: 08 May 2018 21:02 To: Simon Peyton Jones > Cc: ghc-devs > Subject: Re: TTG hsSyn for Batch and Interactive Parsing I have started a wiki page at https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow/IdeSupport On 8 May 2018 at 10:54, Simon Peyton Jones > wrote: At first blush, “running the parser in two modes” and “changing the Pass” type don’t match up in my mind. One seems quite local (how to run the parser). The other seems more pervasive. Can you say more about your proposed design, perhaps even on a wiki page? Simon From: ghc-devs > On Behalf Of Alan & Kim Zimmerman Sent: 07 May 2018 16:17 To: ghc-devs > Subject: TTG hsSyn for Batch and Interactive Parsing I want to be able to run the GHC parser in one of two modes, batch which functions as now, and interactive which will (eventually) be incremental. In addition, the hsSyn AST for each will have different TTG[1] annotations, so that it can better support IDE usage. I think this can be done by changing the types in HsExtension to introduce a 'Process' type as follows data Pass = Parsed Process | Renamed | Typechecked deriving (Data) data Process = Batch | Interactive deriving (Show, Data) We then rename the pass synonyms so that batch is the default type GhcPs = GhcPass ('Parsed 'Batch) type GhcPsI = GhcPass ('Parsed 'Interactive) I have attached a simple proof of concept file, which emulates parsing and renaming. Is this an appropriate approach to take? Alan [1] Trees That Grow https://ghc.haskell.org/trac/ghc/wiki/ImplementingTreesThatGrow -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From alan.zimm at gmail.com Thu May 10 21:35:02 2018
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Thu, 10 May 2018 23:35:02 +0200
Subject: TTG hsSyn for Batch and Interactive Parsing
In-Reply-To:
References:
Message-ID:

I have updated the Wiki with the clearer names, and noted that a single parser definition would still be used, as at present, but it would only keep the extra info if requested to.

The naming around interactive and batch is to anticipate a further step I would like to take: to make the parser fully incremental, in the sense that it would process as input the prior parse tree and a list of changes to the source, and then generate a fresh parse tree, with the changed nodes marked. This mode would be tightly coupled to an external tool like haskell-ide-engine, to manage the bookkeeping around this.

My thinking for this is to use the approach presented in the paper "Efficient and Flexible Incremental Parsing" by Wagner and Graham [1]. The plan is to modify `happy`, so that we can reuse the existing GHC Parser.y with minor modifications. This is the same approach as used in the library tree-sitter [2], which is a very active project on GitHub. WIP is at [3], but it is at a very early stage.

Regards
Alan

[1] https://pdfs.semanticscholar.org/4d22/fab95c78b3c23fa9dff88fb82976edc213c2.pdf
[2] https://github.com/tree-sitter/tree-sitter
[3] https://github.com/alanz/happy/tree/incremental
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From matthewtpickering at gmail.com Fri May 11 14:53:32 2018
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Fri, 11 May 2018 15:53:32 +0100
Subject: Motivation for refineDefaultAlt
Message-ID:

Hi all,

Does anyone know the motivation for refineDefaultAlt?

The comment states

    -- | Refine the default alternative to a 'DataAlt', if there is a unique way to do so.
OK - so the code transforms something like

    case x of { DEFAULT -> e }
    ===>
    case x of { Foo a1 a2 a3 -> e }

but why is this necessary or desirable?

Perhaps you know Simon (Jakobi)?

Cheers,

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Fri May 11 15:03:05 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 11 May 2018 15:03:05 +0000
Subject: Motivation for refineDefaultAlt
In-Reply-To:
References:
Message-ID:

Because if e contains

    …(case x of Foo p q -> e2)…

as a sub-expression, we’d like to simplify it.

Sorry that is not documented; please do add that to the comments in the source code.

Simon

From: ghc-devs On Behalf Of Matthew Pickering
Sent: 11 May 2018 15:54
To: GHC developers
Subject: Motivation for refineDefaultAlt

Hi all,

Does anyone know the motivation for refineDefaultAlt?

The comment states

    -- | Refine the default alternative to a 'DataAlt', if there is a unique way to do so.

OK - so the code transforms something like

    case x of { DEFAULT -> e }
    ===>
    case x of { Foo a1 a2 a3 -> e }

but why is this necessary or desirable?

Perhaps you know Simon (Jakobi)?

Cheers,

Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simon.jakobi at googlemail.com Fri May 11 15:17:28 2018
From: simon.jakobi at googlemail.com (Simon Jakobi)
Date: Fri, 11 May 2018 17:17:28 +0200
Subject: Motivation for refineDefaultAlt
In-Reply-To:
References:
Message-ID:

Hi!

I thought refineDefaultAlt was about scenarios like this:

    data D = C0 | C1 | C2

    case e of
      DEFAULT -> e0
      C0 -> e1
      C1 -> e1

When we apply combineIdenticalAlts to this expression, it can't combine the alts for C0 and C1, as we already have a default case.

If we apply refineDefaultAlt first, we get

    case e of
      C0 -> e1
      C1 -> e1
      C2 -> e0

and combineIdenticalAlts can turn that into

    case e of
      DEFAULT -> e1
      C2 -> e0

But that's just my own interpretation and possibly not the original motivation.
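Simon's scenario can be modelled in a few lines of Haskell. This is a toy version of refineDefaultAlt over invented Alt and constructor-name types — purely illustrative, not GHC's actual Core-level code — which refines a DEFAULT exactly when a single constructor of the scrutinee's type is left uncovered:

```haskell
import Data.List ((\\))

-- Toy model of Core case alternatives (illustrative; the real function
-- works over Core's alternatives and DataCons).
data Alt = DataAlt String | Default
  deriving (Eq, Show)

-- If the alternatives contain a DEFAULT and exactly one constructor of
-- the scrutinee's type is not covered explicitly, replace the DEFAULT
-- with that constructor; otherwise leave the alternatives alone.
refineDefaultAlt :: [String] -> [Alt] -> [Alt]
refineDefaultAlt allCons alts
  | Default `elem` alts
  , [missing] <- allCons \\ [c | DataAlt c <- alts]
  = [ if a == Default then DataAlt missing else a | a <- alts ]
  | otherwise
  = alts

-- On Simon's example (data D = C0 | C1 | C2):
-- refineDefaultAlt ["C0","C1","C2"] [Default, DataAlt "C0", DataAlt "C1"]
--   == [DataAlt "C2", DataAlt "C0", DataAlt "C1"]
```

With the DEFAULT turned into C2, a combineIdenticalAlts-style pass is then free to merge the identical C0/C1 right-hand sides into a new DEFAULT.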
Cheers,
Simon

2018-05-11 17:03 GMT+02:00 Simon Peyton Jones via ghc-devs :
> Because if e contains
>
>     …(case x of Foo p q -> e2)…
>
> as a sub-expression, we’d like to simplify it.
>
> Sorry that is not documented; please do add that to the comments in the
> source code.
>
> Simon
>
> From: ghc-devs On Behalf Of Matthew Pickering
> Sent: 11 May 2018 15:54
> To: GHC developers
> Subject: Motivation for refineDefaultAlt
>
> Hi all,
>
> Does anyone know the motivation for refineDefaultAlt?
>
> The comment states
>
>     -- | Refine the default alternative to a 'DataAlt', if there is a unique
>     way to do so.
>
> OK - so the code transforms something like
>
>     case x of { DEFAULT -> e }
>     ===>
>     case x of { Foo a1 a2 a3 -> e }
>
> but why is this necessary or desirable?
>
> Perhaps you know Simon (Jakobi)?
>
> Cheers,
>
> Matt
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>

From matthewtpickering at gmail.com Fri May 11 15:54:45 2018
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Fri, 11 May 2018 16:54:45 +0100
Subject: Motivation for refineDefaultAlt
In-Reply-To:
References:
Message-ID:

To round off Simon's email with a concrete example and explanation.

```
{-# LANGUAGE BangPatterns #-}
module Test where

mid x = x
{-# NOINLINE mid #-}

data Foo = Foo1 ()

test :: Foo -> ()
test x =
  case x of
    !_ -> mid (case x of
                 Foo1 x1 -> x1)
```

refineDefaultAlt fills in the DEFAULT here with `Foo1 ip1`, and then x becomes bound to `Foo1 ip1`, so it is inlined into the other case, which causes the KnownBranch optimisation to kick in.

Simon J's point also seems plausible. Especially as it's called just before combineIdenticalAlts.

Thanks everyone!

On Fri, May 11, 2018 at 4:17 PM, Simon Jakobi wrote:
> Hi!
> > I thought refineDefaultAlt was about scenarios like this: > > data D = C0 | C1 | C2 > > case e of > DEFAULT -> e0 > C0 -> e1 > C1 -> e1 > > When we apply combineIdenticalAlts to this expression, it can't > combine the alts for C0 and C1, as we already have a default case. > > If we apply refineDefaultAlt first, we get > > case e of > C0 -> e1 > C1 -> e1 > C2 -> e0 > > and combineIdenticalAlts can turn that into > > case e of > DEFAULT -> e1 > C2 -> e0 > > But that's just my own interpretation and possibly not the original motivation. > > Cheers, > Simon > > > > 2018-05-11 17:03 GMT+02:00 Simon Peyton Jones via ghc-devs > : >> Because if e contains >> >> …(case x of Foo p q -> e2)… >> >> as a sub-expression, we’d like to simplify it. >> >> >> >> Sorry that is not documented; please do add that to the comments in the >> source code. >> >> >> >> Simon >> >> >> >> From: ghc-devs On Behalf Of Matthew Pickering >> Sent: 11 May 2018 15:54 >> To: GHC developers >> Subject: Motivation for refineDefaultAlt >> >> >> >> Hi all, >> >> >> >> Does anyone know the motivation for refineDefaultAlt? >> >> The comment states >> >> - -- | Refine the default alternative to a 'DataAlt', if there is a unique >> way to do so. >> >> OK - so the code transforms something like >> >> case x of { DEFAULT -> e } >> ===> >> >> case x of { Foo a1 a2 a3 -> e } >> >> >> but why is this necessary or desirable? >> >> Perhaps you know Simon (Jakobi)? >> >> Cheers, >> >> >> >> Matt >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> From peter.d.podlovics at gmail.com Sun May 13 15:18:19 2018 From: peter.d.podlovics at gmail.com (Peter Podlovics) Date: Sun, 13 May 2018 17:18:19 +0200 Subject: Potential improvements for CSE, strictness analyzer, let-floating Message-ID: Hi all, During the summer, as a university project, I would like to make some contributions to GHC. 
There are three topics in particular that piqued my interest: common subexpression elimination, strictness analysis, and let-floating.

I would like to ask you whether there is any room for improvement in these parts of the compiler. Could you give me some pointers?

Thanks in advance,
Peter
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simon.jakobi at googlemail.com Sun May 13 18:35:56 2018
From: simon.jakobi at googlemail.com (Simon Jakobi)
Date: Sun, 13 May 2018 20:35:56 +0200
Subject: Potential improvements for CSE, strictness analyzer, let-floating
In-Reply-To:
References:
Message-ID:

Hi Peter,

as a start, here are a few tickets concerning CSE:
https://ghc.haskell.org/trac/ghc/query?status=!closed&keywords=~CSE

I'm not sure if there are keywords for strictness analysis or let-floating on Trac. Here's the full list of keywords: https://ghc.haskell.org/trac/ghc/report/25?max=500.

Hope that helps,
Simon

2018-05-13 17:18 GMT+02:00 Peter Podlovics :
> Hi all,
>
> During the summer, as a university project, I would like to make some
> contributions to GHC. There are three topics in particular that piqued my
> interest: common subexpression elimination, strictness analysis, and
> let-floating.
>
> I would like to ask you whether there is any room for improvement in these
> parts of the compiler. Could you give me some pointers?
>
> Thanks in advance,
> Peter
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>

From peter.d.podlovics at gmail.com Sun May 13 21:19:33 2018
From: peter.d.podlovics at gmail.com (Peter Podlovics)
Date: Sun, 13 May 2018 23:19:33 +0200
Subject: Potential improvements for CSE, strictness analyzer, let-floating
In-Reply-To:
References:
Message-ID:

Thanks for the trac tickets about CSE. However, I would like to get some advice on how these features can be improved on *generally*.
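For orientation on the first of Peter's three topics, the core idea behind a CSE pass can be shown on a toy expression language. This is purely a sketch under simplifying assumptions (it ignores shadowing, capture, and everything else GHC's real CSE pass in CSE.hs has to handle):

```haskell
import qualified Data.Map.Strict as M

-- A toy expression language; deriving Ord lets expressions key a Map.
data Expr
  = Var String
  | Lit Int
  | Add Expr Expr
  | Let String Expr Expr
  deriving (Eq, Ord, Show)

-- Bottom-up CSE: rebuild an expression, and whenever the rebuilt
-- subexpression is already bound by an enclosing let, replace it with
-- that let's variable.
cse :: M.Map Expr String -> Expr -> Expr
cse env e =
  case M.lookup e' env of
    Just v  -> Var v
    Nothing -> e'
  where
    e' = case e of
      Add a b     -> Add (cse env a) (cse env b)
      Let x rhs b ->
        let rhs' = cse env rhs
        in Let x rhs' (cse (M.insert rhs' x env) b)
      _           -> e

-- cse M.empty (Let "t" (Add (Var "x") (Var "y"))
--                      (Add (Add (Var "x") (Var "y")) (Lit 1)))
--   == Let "t" (Add (Var "x") (Var "y")) (Add (Var "t") (Lit 1))
```

The duplicated `x + y` in the let body is replaced by the existing binder `t`; the interesting open problems in GHC are precisely the cases this sketch punts on.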
Are there any known limitations or areas where they are lacking in some aspect, or feature requests associated with them? Something worthy of a couple-month-long project.

Regards,
Peter

On Sun, May 13, 2018 at 8:35 PM, Simon Jakobi wrote:
> Hi Peter,
>
> as a start, here are a few tickets concerning CSE:
> https://ghc.haskell.org/trac/ghc/query?status=!closed&keywords=~CSE
>
> I'm not sure if there are keywords for strictness analysis or
> let-floating on Trac. Here's the full list of keywords:
> https://ghc.haskell.org/trac/ghc/report/25?max=500.
>
> Hope that helps,
> Simon
>
> 2018-05-13 17:18 GMT+02:00 Peter Podlovics :
> > Hi all,
> >
> > During the summer, as a university project, I would like to make some
> > contributions to GHC. There are three topics in particular that piqued my
> > interest: common subexpression elimination, strictness analysis, and
> > let-floating.
> >
> > I would like to ask you whether there is any room for improvement in these
> > parts of the compiler. Could you give me some pointers?
> >
> > Thanks in advance,
> > Peter
> >
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mail at nh2.me Sun May 13 23:46:38 2018
From: mail at nh2.me (Niklas Hambüchen)
Date: Mon, 14 May 2018 01:46:38 +0200
Subject: How does the IO manager handle reading regular files?
Message-ID:

I just got reminded that epoll() has no effect on regular files on Linux by reading an nginx article [1] [2] and why that is [3] [4].

By what means does the IO manager make reads (wraps around the read() syscall on Linux) non-blocking?

Does it always use read() in `foreign import safe` (or `interruptible`) so that an OS thread is spawned?

It would be great if somebody could point me to the code where that's done (not again: for *regular* files).
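To illustrate the `foreign import safe` mechanism Niklas asks about: a safe call makes the RTS release the capability for the call's duration, so a blocking read() on a regular file (which epoll/select cannot help with) does not stall the other Haskell threads multiplexed onto the same OS thread. A minimal POSIX-only sketch — the names below are invented for illustration and are not GHC's internals; GHC's actual logic lives in GHC.IO.FD and also has unsafe/non-blocking paths:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign (Ptr, allocaBytes)
import Foreign.C.Types (CChar, CInt (..), CSize (..))
import System.Posix.Types (CSsize (..))

-- A 'safe' import: the RTS arranges that other Haskell threads keep
-- running while this (possibly blocking) syscall is in flight.
foreign import ccall safe "read"
  c_safe_read :: CInt -> Ptr CChar -> CSize -> IO CSsize

-- Read up to n bytes from a file descriptor into a temporary buffer,
-- returning the byte count (or -1 on error, as read() does).
readSome :: CInt -> Int -> IO Int
readSome fd n =
  allocaBytes n $ \buf ->
    fromIntegral <$> c_safe_read fd buf (fromIntegral n)
```

Marking the import `unsafe` instead would make the call cheaper but would block the whole capability if read() blocks — which is exactly the trade-off the Note near readRawBufferPtr discusses.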
Thanks!
Niklas

[1]: https://www.nginx.com/blog/thread-pools-boost-performance-9x/
[2]: https://stackoverflow.com/questions/8057892/epoll-on-regular-files
[3]: https://jvns.ca/blog/2017/06/03/async-io-on-linux--select--poll--and-epoll/
[4]: https://groups.google.com/forum/#!topic/comp.os.linux.development.system/K-fC-G6P4EA

From zubin.duggal at gmail.com Mon May 14 12:31:48 2018
From: zubin.duggal at gmail.com (Zubin Duggal)
Date: Mon, 14 May 2018 18:01:48 +0530
Subject: HIE Files
Message-ID:

Hello, I will be working on a GSOC project that will allow GHC to output a new .hie file to be written next to .hi files. It will contain information about the typechecked Haskell AST, allowing tooling (like haddock's --hyperlinked-source and haskell-ide-engine) to work without having to parse, rename and typecheck files all over again.

I have made a GHC wiki page containing more details here: https://ghc.haskell.org/trac/ghc/wiki/HIEFiles

Looking forward to any comments and suggestions.

Thanks, Zubin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Mon May 14 13:30:41 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 14 May 2018 13:30:41 +0000
Subject: HIE Files
In-Reply-To:
References:
Message-ID:

Interesting. Please do keep the wiki page up to date so that it accurately describes the current design. For example, I hope you’ll flesh out what a “simplified, source aware, annotated AST derived from the Renamed/Typechecked Source” really is.

Why not put the .hie-file info into the .hi file? (Optionally, of course.)

What tools/libraries do you plan to produce to allow clients to read a .hie file and make sense of the contents?

Simon

From: ghc-devs On Behalf Of Zubin Duggal
Sent: 14 May 2018 13:32
To: ghc-devs at haskell.org
Cc: Joachim Breitner ; Gershom B
Subject: HIE Files

Hello, I will be working on a GSOC project that will allow GHC to output a new .hie file to be written next to .hi files.
It will contain information about the typechecked Haskell AST, allowing tooling(like haddocks --hyperlinked-source and haskell-ide-engine) to work without having to parse, rename and typecheck files all over again. I have made a GHC wiki page containing more details here: https://ghc.haskell.org/trac/ghc/wiki/HIEFiles Looking forward to any comments and suggestions. Thanks, Zubin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon May 14 13:36:58 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 14 May 2018 09:36:58 -0400 Subject: How does the IO manager handle reading regular files? In-Reply-To: References: Message-ID: <878t8mgxzf.fsf@smart-cactus.org> Niklas Hambüchen writes: > I just got reminded that epoll() has no effect on regular files on > Linux by reading an nginx article [1] [2] and why that is [3] [4]. > > By what means does the IO manager make reads (wraps around the read() > syscall on Linux) non-blocking? > > Does it always use read() in `foreign import safe` (or > `interruptible`) so that an OS thread is spawned? > > It would be great if somebody could point me to the code where that's > done (not again: for *regular* files). > I believe the relevant implementation is the RawIO instance defined in GHC.IO.FD. The read implementation in particular is GHC.IO.FD.readRawBufferPtr. There is a useful Note directly above this function. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon May 14 15:01:48 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 14 May 2018 11:01:48 -0400 Subject: Plan for GHC 8.6.1 In-Reply-To: References: <87h8o6d8p8.fsf@smart-cactus.org> Message-ID: <874ljagu27.fsf@smart-cactus.org> Matthew Pickering writes: > Perhaps Nested CPR will be ready :) ? 
https://phabricator.haskell.org/D4244 > > I am also working on the linear types branch. Arnaud is quite keen for > it to be ready for 8.6 but we still have a bit to go. >

I'll admit that I'm a bit worried that the linear types branch may be a bit late given that the proposal only went to the committee last week. That being said, I'm happy to keep all options on the table.

Regardless, it might be a good idea to put up a patch sooner rather than later so we can begin the review process.

Cheers, - Ben

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL:

From carter.schonwald at gmail.com Mon May 14 18:48:46 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 14 May 2018 14:48:46 -0400 Subject: Plan for GHC 8.6.1 In-Reply-To: <874ljagu27.fsf@smart-cactus.org> References: <87h8o6d8p8.fsf@smart-cactus.org> <874ljagu27.fsf@smart-cactus.org> Message-ID:

Even aside from committee feedback easily being weeks away for such a complex proposal, I’m also concerned about how the current proposed design changes to core seem a bit fragile with linear types. Perhaps I’m misunderstanding some of the current planned details, but I’m fairly confident that I’ve got a median or better comprehension.

Type checking as if the code were inlined (per join points and related case expressions, as the linear core doc says) tends to be a symptom of the types not quite modelling the right information. Likewise, would not that sort of checking create a possible quadratic blowup when linting/type-checking Core? (And quadratic blowups are bad when debugging/checking possibly large Core programs in the course of any GHC debugging or the like!)

That said, putting a checkpoint on Phab for feedback of a technical sort is definitely something that would help. The ghc proposal spec is vaguer than I’d like for something like this.
And a lot of important details I care about will be visible in the code that are lacking in the associated proposal and paper. On Mon, May 14, 2018 at 11:02 AM Ben Gamari wrote: > Matthew Pickering writes: > > > Perhaps Nested CPR will be ready :) ? > https://phabricator.haskell.org/D4244 > > > > I am also working on the linear types branch. Arnaud is quite keen for > > it to be ready for 8.6 but we still have a bit to go. > > > I'll admit that I'm a bit worried that the linear types branch may be a > bit late given that the proposal only went to the committee last week. > That being said, I'm happy to keep all options on the table. > > Regardless, it might be a good idea to put up a patch sooner rather than > later so we can begin the review process. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.d.podlovics at gmail.com Mon May 14 20:56:17 2018 From: peter.d.podlovics at gmail.com (Peter Podlovics) Date: Mon, 14 May 2018 22:56:17 +0200 Subject: HIE Files In-Reply-To: References: Message-ID: Hi, Sometimes, when working with a type-checked AST, it can be useful to know the types of subexpressions as well. A simple use case would be any kind of static analysis of the code. Currently, the type-checker discards all intermediate results after the type checking ends, so the AST only has info about nodes with names. Storing the types of subexpressions somewhere would be a great benefit for many tools. Do you plan on including these intermediate results in the HIE file? Regards, Peter On Mon, May 14, 2018 at 3:30 PM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Interesting. > > > > Please do keep the wiki page up to date so that it accurately describes > the current design. 
For example, I hope you’ll flesh out what a “simplified, > source aware, annotated AST derived from the Renamed/Typechecked Source” really > is. > > > > Why not put the .hie-file info into the .hi file? (Optionally, of > course.) > > > > What tools/libraries do you plan to produce to allow clients to read a > .hie file and make send of the contents? > > > > Simon > > > > > > > > *From:* ghc-devs *On Behalf Of *Zubin > Duggal > *Sent:* 14 May 2018 13:32 > *To:* ghc-devs at haskell.org > *Cc:* Joachim Breitner ; Gershom B < > gershomb at gmail.com> > *Subject:* HIE Files > > > > Hello, > > I will be working on a GSOC project that will allow GHC to output a new > .hie file to be written next to .hi files. It will contain information > about the typechecked Haskell AST, allowing tooling(like haddocks > --hyperlinked-source and haskell-ide-engine) to work without having to > parse, rename and typecheck files all over again. > > I have made a GHC wiki page containing more details here: > > https://ghc.haskell.org/trac/ghc/wiki/HIEFiles > > > > Looking forward to any comments and suggestions. > > Thanks, > > Zubin. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Mon May 14 21:15:52 2018 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Mon, 14 May 2018 23:15:52 +0200 Subject: How does the IO manager handle reading regular files? In-Reply-To: <878t8mgxzf.fsf@smart-cactus.org> References: <878t8mgxzf.fsf@smart-cactus.org> Message-ID: <3d754cd7-da43-5604-0684-25849616ee35@nh2.me> Hey Ben, thanks for your quick reply. I think there's a problem. On 14/05/2018 15.36, Ben Gamari wrote: > I believe the relevant implementation is the RawIO instance defined in > GHC.IO.FD. The read implementation in particular is > GHC.IO.FD.readRawBufferPtr. 
There is a useful Note directly above > this function.

Reading through the code at http://hackage.haskell.org/package/base-4.11.1.0/docs/src/GHC.IO.FD.html#readRawBufferPtr

The first line jumped to my eye:

  | isNonBlocking fd = unsafe_read -- unsafe is ok, it can't block

This looks suspicious. And indeed, the following program does NOT keep printing things in the printing thread, and instead blocks for 30 seconds:

```
module Main where

import Control.Concurrent
import Control.Monad
import qualified Data.ByteString as BS
import System.Environment

main :: IO ()
main = do
  args <- getArgs
  case args of
    [file] -> do
      forkIO $ forever $ do
        putStrLn "still running"
        threadDelay 100000 -- 0.1 s
      bs <- BS.readFile file
      putStrLn $ "Read " ++ show (BS.length bs) ++ " bytes"
    _ -> error "Pass 1 argument (a file)"
```

when compiled with `~/.stack/programs/x86_64-linux/ghc-8.2.2/bin/ghc --make -O -threaded blocking-regular-file-read-test.hs` on my Ubuntu 16.04 and on a 2GB file like `./blocking-regular-file-read-test /mnt/images/ubuntu-18.04-desktop-amd64.iso`

And `strace -f -e open,read` on it shows:

  open("/mnt/images/ubuntu-18.04-desktop-amd64.iso", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11
  read(11,

So GHC is trying to use `O_NONBLOCK` on regular files, which cannot work and will block when used through unsafe foreign calls like that.

Is this a known problem? Otherwise I'll go ahead and file a ticket.

From ben at well-typed.com Mon May 14 21:22:53 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 14 May 2018 17:22:53 -0400 Subject: How does the IO manager handle reading regular files? In-Reply-To: <3d754cd7-da43-5604-0684-25849616ee35@nh2.me> References: <878t8mgxzf.fsf@smart-cactus.org> <3d754cd7-da43-5604-0684-25849616ee35@nh2.me> Message-ID: <87tvraexul.fsf@smart-cactus.org>

Niklas Hambüchen writes: ... > > So GHC is trying to use `O_NONBLOCK` on regular files, which cannot > work and will block when used through unsafe foreign calls like that. >

Yikes!

> Is this a known problem?
> Doesn't sound familiar to me. Sounds like a ticket is in order. Thanks for spotting all of these nasty I/O issues; who knew there could be so many of them. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at nh2.me Mon May 14 21:29:27 2018 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Mon, 14 May 2018 23:29:27 +0200 Subject: How does the IO manager handle reading regular files? In-Reply-To: <3d754cd7-da43-5604-0684-25849616ee35@nh2.me> References: <878t8mgxzf.fsf@smart-cactus.org> <3d754cd7-da43-5604-0684-25849616ee35@nh2.me> Message-ID: <23a92084-def7-3035-8cba-27f498265f18@nh2.me> Also funny but perhaps not too surprising: If in my code, you replace `forkIO` by e.g. `forkOn 2`, then nondeterministically, sometimes the program hangs and sometimes it works with +RTS -N2. The higher you set -N, the more likely it is to work. If you put both the putStrLn loop and the readFile into `forkOn 0` and `forkOn 1` each, and run with +RTS -N3, then it always works as expected. From gershomb at gmail.com Tue May 15 06:27:50 2018 From: gershomb at gmail.com (Gershom B) Date: Tue, 15 May 2018 02:27:50 -0400 Subject: HIE Files In-Reply-To: References: Message-ID: On Mon, May 14, 2018 at 9:30 AM, Simon Peyton Jones wrote: > > Why not put the .hie-file info into the .hi file? (Optionally, of course.) > Simon, I'm curious what benefits you think we might get from this? (I'm one of the mentors on this GSoC project btw). > What tools/libraries do you plan to produce to allow clients to read a .hie file and make send of the contents? For GSoC as a proof of concept the idea is to teach haddock's hyperlinked-source backend to use this information to add type-annotation-on-hover to the colorized, hyperlinked, html source. 
I think what is anticipated more broadly is that other tools like the Haskell IDE Engine (which Zubin has contributed to in the past) will also be able to make use of these files to provide ide and tooling features in a more lightweight way than needing to directly interface with the GHC API. (This by the way is one of the key benefits of keeping the file separate from standard hi files -- it should be parseable and consumable without needing to link in GHC). -g From simonpj at microsoft.com Tue May 15 08:42:44 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 May 2018 08:42:44 +0000 Subject: HIE Files In-Reply-To: References: Message-ID: | > Why not put the .hie-file info into the .hi file? (Optionally, of | > course.) | > | | Simon, I'm curious what benefits you think we might get from this? | (I'm one of the mentors on this GSoC project btw). Well, I've always thought that we should really put the .hi file into the .o file! Having two files risks getting things out of sync, and three makes that worse. The file is just a place to keep a blob of info. What's the motivation for having two .hie as well as .hi? | | > What tools/libraries do you plan to produce to allow clients to read | a .hie file and make send of the contents? | | For GSoC as a proof of concept the idea is to teach haddock's | hyperlinked-source backend to use this information to add type- | annotation-on-hover to the colorized, hyperlinked, html source. That's great. But would it not be good to offer a library, with a well-defined API, that allows a client (including Haddock) to parse those .hie files into syntax trees or whatever? You'll need to do that to allow the haddock thing you describe -- and it'd be much better to make the parser (and doubtless lots of utility function like finding things in the tree) available to any client not just haddock. And that in turn raises the questions of WHAT syntax tree. HsSyn? Template Haskell? Haskell-src-exts? Or something new? 
Shayan and Alan are busy parameterising HsSyn to make it non-GHC-specific, and directly usable for this kind of endeavour ("Trees that grow"). It'd be great to build on their work.

| with the GHC API. (This by the way is one of the key benefits of | keeping the file separate from standard hi files -- it should be | parseable and consumable without needing to link in GHC).

Yes, not linking in GHC is a reasonable goal; but having two files and file formats is not a necessary consequence of that goal. Nothing stops us making a library to parse .hi files -- indeed the entire iface/ directory in GHC is quite well separated for that precise purpose.

None of this is to criticise the plan. I think it's a great idea to make more info more readily available to more tools. I'm just poking at it a bit 😊.

Simon

From zubin.duggal at gmail.com Tue May 15 09:13:10 2018 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Tue, 15 May 2018 14:43:10 +0530 Subject: HIE Files In-Reply-To: References: Message-ID:

> > And that in turn raises the questions of WHAT syntax tree. HsSyn? > Template Haskell? Haskell-src-exts? Or something new? Shayan and Alan > are busy parameterising HsSyn to make it non-GHC-specific, and directly > usable for this kind of endeavour ("Trees that grow"). It'd be great to > build on their work.

Mainly, we need information on every Token that appears in the original source. My plan is to further group Tokens into a simple rose-tree based on how they occur in HsSyn. We intentionally want to avoid capturing too much information so the format doesn't change much with changes to the GHC AST. I've made a file describing roughly what the data structures involved should look like: https://gist.github.com/wz1000/edf14747bd890b08c01c226d5bc6a1d6

The plan is to group the Tokens together in a tree in a way similar to what structured-haskell-mode does.
(The gifs in the following link might provide some idea) https://github.com/chrisdone/structured-haskell-mode/ For example, here is what structured-haskell-mode outputs for a small snippet of code: https://gist.github.com/wz1000/db42d4f533ba7d2345934906b312f743 We want something similar for the HIE AST, but grouped into a tree, where each node(roughly corresponding to HsSyn constructors) points to all the subnodes and tokens it spans over. That's great. But would it not be good to offer a library, with a > well-defined API, that allows a client (including Haddock) to parse those > .hie files into syntax trees or whatever? You'll need to do that to allow > the haddock thing you describe -- and it'd be much better to make the > parser (and doubtless lots of utility function like finding things in the > tree) available to any client not just haddock. > Yes, a library to consume these files is definitely something we need, and I believe it will grow out naturally as we work out the integration with haddock and haskell-ide-engine. On 15 May 2018 at 14:12, Simon Peyton Jones wrote: > | > Why not put the .hie-file info into the .hi file? (Optionally, of > | > course.) > | > > | > | Simon, I'm curious what benefits you think we might get from this? > | (I'm one of the mentors on this GSoC project btw). > > Well, I've always thought that we should really put the .hi file into the > .o file! Having two files risks getting things out of sync, and three > makes that worse. The file is just a place to keep a blob of info. What's > the motivation for having two .hie as well as .hi? > > | > | > What tools/libraries do you plan to produce to allow clients to read > | a .hie file and make send of the contents? > | > | For GSoC as a proof of concept the idea is to teach haddock's > | hyperlinked-source backend to use this information to add type- > | annotation-on-hover to the colorized, hyperlinked, html source. > > That's great. 
But would it not be good to offer a library, with a > well-defined API, that allows a client (including Haddock) to parse those > .hie files into syntax trees or whatever? You'll need to do that to allow > the haddock thing you describe -- and it'd be much better to make the > parser (and doubtless lots of utility function like finding things in the > tree) available to any client not just haddock. > > And that in turn raises the questions of WHAT syntax tree. HsSyn? > Template Haskell? Haskell-src-exts? Or something new? Shayan and Alan > are busy parameterising HsSyn to make it non-GHC-specific, and directly > usable for this kind of endeavour ("Trees that grow"). It'd be great to > build on their work. > > | with the GHC API. (This by the way is one of the key benefits of > | keeping the file separate from standard hi files -- it should be > | parseable and consumable without needing to link in GHC). > > Yes, not linking in GHC is a reasonable goal; but having two files and > file formats is not a necessary consequence of that goal. Nothing stops us > making a library to parse .hi files -- indeed the entire iface/ directory > in GHC is quite well separated for that precise purpose. > > None of this is to criticise the plan. I think it's a great idea to make > more info more readily available to more tools. I'm just poking at it a > bit 😊. > > Simon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue May 15 09:19:59 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 May 2018 09:19:59 +0000 Subject: HIE Files In-Reply-To: References: Message-ID: Mainly, we need information on every Token that appears in the original source. Good idea. Alan Zimmerman’s exact-print stuff has precisely that goal, I believe. So it’d be worth talking to him; perhaps by working together you can make much more rapid progress. Or not – but a conversation would be helpful in any case. 
I’m very happy to see more attention and effort being devoted to this space. Thank you! Simon From: Zubin Duggal Sent: 15 May 2018 10:13 To: Simon Peyton Jones Cc: Gershom B ; ghc-devs at haskell.org; Joachim Breitner ; Shayan Najd ; Alan & Kim Zimmerman Subject: Re: HIE Files And that in turn raises the questions of WHAT syntax tree. HsSyn? Template Haskell? Haskell-src-exts? Or something new? Shayan and Alan are busy parameterising HsSyn to make it non-GHC-specific, and directly usable for this kind of endeavour ("Trees that grow"). It'd be great to build on their work. Mainly, we need information on every Token that appears in the original source. My plan is to further group Tokens into a simple rose-tree based on how they occur in HsSyn. We intentionally want to avoid capturing too much information so the format doesn't change much with changes to the GHC AST. I've made a file describing roughly what the data structures involved should look like https://gist.github.com/wz1000/edf14747bd890b08c01c226d5bc6a1d6 The plan is to group the Tokens together in a tree in way similar to what structured-haskell-mode does. (The gifs in the following link might provide some idea) https://github.com/chrisdone/structured-haskell-mode/ For example, here is what structured-haskell-mode outputs for a small snippet of code: https://gist.github.com/wz1000/db42d4f533ba7d2345934906b312f743 We want something similar for the HIE AST, but grouped into a tree, where each node(roughly corresponding to HsSyn constructors) points to all the subnodes and tokens it spans over. That's great. But would it not be good to offer a library, with a well-defined API, that allows a client (including Haddock) to parse those .hie files into syntax trees or whatever? You'll need to do that to allow the haddock thing you describe -- and it'd be much better to make the parser (and doubtless lots of utility function like finding things in the tree) available to any client not just haddock. 
Yes, a library to consume these files is definitely something we need, and I believe it will grow out naturally as we work out the integration with haddock and haskell-ide-engine. On 15 May 2018 at 14:12, Simon Peyton Jones > wrote: | > Why not put the .hie-file info into the .hi file? (Optionally, of | > course.) | > | | Simon, I'm curious what benefits you think we might get from this? | (I'm one of the mentors on this GSoC project btw). Well, I've always thought that we should really put the .hi file into the .o file! Having two files risks getting things out of sync, and three makes that worse. The file is just a place to keep a blob of info. What's the motivation for having two .hie as well as .hi? | | > What tools/libraries do you plan to produce to allow clients to read | a .hie file and make send of the contents? | | For GSoC as a proof of concept the idea is to teach haddock's | hyperlinked-source backend to use this information to add type- | annotation-on-hover to the colorized, hyperlinked, html source. That's great. But would it not be good to offer a library, with a well-defined API, that allows a client (including Haddock) to parse those .hie files into syntax trees or whatever? You'll need to do that to allow the haddock thing you describe -- and it'd be much better to make the parser (and doubtless lots of utility function like finding things in the tree) available to any client not just haddock. And that in turn raises the questions of WHAT syntax tree. HsSyn? Template Haskell? Haskell-src-exts? Or something new? Shayan and Alan are busy parameterising HsSyn to make it non-GHC-specific, and directly usable for this kind of endeavour ("Trees that grow"). It'd be great to build on their work. | with the GHC API. (This by the way is one of the key benefits of | keeping the file separate from standard hi files -- it should be | parseable and consumable without needing to link in GHC). 
Yes, not linking in GHC is a reasonable goal; but having two files and file formats is not a necessary consequence of that goal. Nothing stops us making a library to parse .hi files -- indeed the entire iface/ directory in GHC is quite well separated for that precise purpose. None of this is to criticise the plan. I think it's a great idea to make more info more readily available to more tools. I'm just poking at it a bit 😊. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu May 17 15:16:29 2018 From: ben at well-typed.com (Ben Gamari) Date: Thu, 17 May 2018 11:16:29 -0400 Subject: Phabricator reboot Message-ID: <87vabme2ig.fsf@smart-cactus.org> Hi everyone, Due to some strange behavior from our Phabricator instance over the last few days I'm going to be bringing it down for a quick reboot in about 30 minutes. The downtime should only be a couple of minutes long. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Fri May 18 14:13:02 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 18 May 2018 14:13:02 +0000 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: This sounds fine. But I still don’t understand There would still be a single parser definition in Parser.y, which would make use of functions to add the additional info to the generated source tree, which would be NOPs if the information was not being kept. This is similar to what happens at present with the Api Annotations. What is the type of the parser? Does it produce parser :: String -> HsSyn (GhcPass (Parsed WithApiAnnotation) or parser :: String -> HsSyn (GhcPass (Parsed WithoutApiAnnotation) ? We can’t make the result type depend on DynFlags! 
(Yet!)

  parser :: DynFlags -> String
         -> HsSyn (GhcPass (Parsed (if … then WithApiAnnotations
                                         else WithoutApiAnnotations)))

So I’m puzzled.

Nomenclature. I’d say “NoApiAnnotations” rather than “WithoutApiAnnotations”.

Also: do we have data to show that it’s not OK to always keep API annotations? That would be simpler, wouldn’t it?

Incidentally the Haddock stuff, decorations of type (Maybe LHsDocString), somehow belongs in this world too, doesn’t it?

Simon

From: Alan & Kim Zimmerman Sent: 10 May 2018 22:35 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: TTG hsSyn for Batch and Interactive Parsing

I have updated the Wiki with the clearer names, and noted that a single parser definition would still be used, as at present, but would only keep the extra info if it was requested to.

The naming around interactive and batch is to anticipate a further step I would like to take, to make the parser fully incremental, in the sense that it would process as input the prior parse tree and a list of changes to the source, and then generate a fresh parse tree, with the changed nodes marked. This mode would be tightly coupled to an external tool like haskell-ide-engine, to manage the bookkeeping around this.

My thinking for this is to use the approach presented in the paper "Efficient and Flexible Incremental Parsing" by Wagner and Graham [1]. The plan is to modify `happy`, so that we can reuse the existing GHC Parser.y with minor modifications. This is the same approach as used in the library tree-sitter [2], which is a very active project on GitHub. WIP is at [3], but it is very early stage.

Regards Alan

[1] https://pdfs.semanticscholar.org/4d22/fab95c78b3c23fa9dff88fb82976edc213c2.pdf [2] https://github.com/tree-sitter/tree-sitter [3] https://github.com/alanz/happy/tree/incremental

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From alan.zimm at gmail.com Fri May 18 14:31:13 2018 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 18 May 2018 16:31:13 +0200 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: On 18 May 2018 at 16:13, Simon Peyton Jones wrote: > > > We can’t make the result type depend on DynFlags! (Yet)( > > * parser :: DynFlags -> String * > > * -> HsSyn (GhcPass (Parsed (if … * > > * then WithApiAnnotations* > > * else WihoutsApiAnnotations)* > > We could conceptually have parser :: DynFlags -> String -> Either (HsSyn (Parsed WithApiAnnotations)) (HsSyn (Parsed NoApiAnnotations)) The main point is that the next phase can make use of either of the variants. And it may be simplest to just always use the annotations, the ParsedSource is normally discarded after renaming anyway. Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri May 18 14:57:35 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 18 May 2018 14:57:35 +0000 Subject: TTG hsSyn for Batch and Interactive Parsing In-Reply-To: References: Message-ID: If we are always going to generate a parse tree with annotations from the parser, let’s not generate two! I’m fine with always generating the annotations, but we just need to check that it doesn’t have insupportable costs. Simon From: Alan & Kim Zimmerman Sent: 18 May 2018 15:31 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: TTG hsSyn for Batch and Interactive Parsing On 18 May 2018 at 16:13, Simon Peyton Jones > wrote: We can’t make the result type depend on DynFlags! (Yet)( parser :: DynFlags -> String -> HsSyn (GhcPass (Parsed (if … then WithApiAnnotations else WihoutsApiAnnotations) We could conceptually have parser :: DynFlags -> String -> Either (HsSyn (Parsed WithApiAnnotations)) (HsSyn (Parsed NoApiAnnotations)) The main point is that the next phase can make use of either of the variants. 
And it may be simplest to just always use the annotations, the ParsedSource is normally discarded after renaming anyway. Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat May 19 17:51:19 2018 From: ben at well-typed.com (Ben Gamari) Date: Sat, 19 May 2018 13:51:19 -0400 Subject: 8.6.1 status Message-ID: <87muwvedpn.fsf@smart-cactus.org> Hi everyone, As noted a few weeks ago, the 8.6.1 fork is quickly approaching. Currently the plan is to cut the branch on Friday, 1 May 2018. After the branch there will likely be a few days of bug fixing and clean-up, followed by the alpha 1 release. Let me know if you have a patch which you would like to see included that hasn't yet been submitted for review. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sat May 19 17:56:20 2018 From: ben at well-typed.com (Ben Gamari) Date: Sat, 19 May 2018 13:56:20 -0400 Subject: 8.6.1 status In-Reply-To: <87muwvedpn.fsf@smart-cactus.org> References: <87muwvedpn.fsf@smart-cactus.org> Message-ID: <87k1rzedhb.fsf@smart-cactus.org> Ben Gamari writes: > Hi everyone, > > As noted a few weeks ago, the 8.6.1 fork is quickly approaching. > Currently the plan is to cut the branch on Friday, 1 May 2018. Silly me; the above is supposed to read "Friday, 1 June 2018". Sorry for the confusion! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL:

From mail at joachim-breitner.de Sun May 20 22:03:47 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 20 May 2018 18:03:47 -0400 Subject: Add HeapView functionality (ec22f7d) In-Reply-To: <20180520154321.085C23ABA0@ghc.haskell.org> References: <20180520154321.085C23ABA0@ghc.haskell.org> Message-ID: <14cf0483c36bf85edec0c01b315066dbfcb3ab1d.camel@joachim-breitner.de>

Hi,

On Sunday, 20.05.2018, at 15:43 +0000, git at git.haskell.org wrote: > commit ec22f7ddc81b40a9dbcf140e5cf44730cb776d00 > Author: Patrick Dougherty > Date: Wed May 16 16:50:13 2018 -0400 > > Add HeapView functionality > > This pulls parts of Joachim Breitner's ghc-heap-view library inside GHC. > The bits added are the C hooks into the RTS and a basic Haskell wrapper > to these C hooks. The main reason for these to be added to GHC proper > is that the code needs to be kept in sync with the closure types > defined by the RTS. It is expected that the version of HeapView shipped > with GHC will always work with that version of GHC and that extra > functionality can be layered on top with a library like ghc-heap-view > distributed via Hackage.

whoohoo, this is great! I did not expect that the horrifying (but maybe not horrible) hack that I wrote so many years ago would become an official feature :-) I’ll be curious in what ways our community will use (or abuse) this feature…

Patrick, you mention that a library like ghc-heap-view will still be required. Are you interested in updating it to use the new interfaces, or even take it over completely? Compatibility with ghc-vis (http://felsin9.de/nnis/ghc-vis/) is probably the most important goal here. I’m CC’ing the author of ghc-vis, Dennis. Or maybe ghc-vis should use the GHC-provided wrappers directly? Dennis, are you still invested in maintaining ghc-vis?
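For readers wondering what the merged interface looks like from user code, here is a minimal sketch. The module and function names are assumed from the patch description (the `GHC.Exts.Heap` wrapper it introduces) rather than verified against a released API, so treat them as illustrative:

```
-- Sketch: decoding a heap closure with the newly merged wrapper.
-- `getClosureData` is assumed from the patch; the exact API may change.
import GHC.Exts.Heap (getClosureData)

main :: IO ()
main = do
  let xs = [1, 2, 3 :: Int]
  closure <- getClosureData xs  -- inspect the (:) closure at the head of xs
  print closure                 -- shows the constructor and its pointer/data fields
```

Higher-level functionality, such as following the `Box`ed pointer fields recursively to render a whole heap graph, is what a Hackage library layered on top would provide.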
Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Mon May 21 09:35:06 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 21 May 2018 09:35:06 +0000 Subject: abi-depends field Message-ID: I'm seeing lots of these errors. What's this "ignoring (possibly broken) abi-depends field" stuff? Or have I failed to update sufficiently vigorously somehow? Simon =====> T13350(normal) 1 of 1 [0, 0, 0] cd "./should_compile/T13350/T13350.run" && $MAKE -s --no-print-directory T13350 Actual stdout output differs from expected: diff -uw "/dev/null" "./should_compile/T13350/T13350.run/T13350.run.stdout.normalised" --- /dev/null 2018-04-26 12:34:15.027999213 +0100 +++ ./should_compile/T13350/T13350.run/T13350.run.stdout.normalised 2018-05-21 10:33:05.181794031 +0100 @@ -0,0 +1 @@ +ignoring (possibly broken) abi-depends field for packages *** unexpected failure for T13350(normal) Unexpected results from: TEST="T13350" SUMMARY for test run started at Mon May 21 10:32:53 2018 BST 0:00:12 spent to go through 1 total tests, which gave rise to 1 test cases, of which 0 were skipped 0 had missing libraries 0 expected passes 0 expected failures 0 caused framework failures 0 caused framework warnings 0 unexpected passes 1 unexpected failures 0 unexpected stat failures Unexpected failures: should_compile/T13350/T13350.run T13350 [bad stdout] (normal) -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alicekoroleva239 at gmail.com Mon May 21 12:14:53 2018 From: alicekoroleva239 at gmail.com (alice) Date: Mon, 21 May 2018 15:14:53 +0300 Subject: Built-in type family with one input argument Message-ID: Hello, I’ve made a built-in function that takes a list and returns a list: `type family MySet [k] -> [k]`. I think I did something wrong because right now I’m getting an exception like this: ``` Extracted [Int, Bool, Char] — input list 1 List_ty '[Bool, Char, Int] — sorted list 1 Extracted [Int, Bool, Char] — input list 2 (?) List_ty '[Bool, Char, Int] — sorted list 2 (?) ghc: panic! (the 'impossible' happened) (GHC version 8.4.2 for x86_64-apple-darwin): ASSERT failed! Bad coercion hole co_a2QJ: MySet '[Bool, Char, Int] nominal (MySet '[Int, Bool, Char] :: [*]) ~# ('[Bool, Char, Int] :: [*]) Call stack: CallStack (from HasCallStack): callStackDoc, called at compiler/utils/Outputable.hs:1208:22 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcMType.hs:306:25 in ghc:TcMType Call stack: CallStack (from HasCallStack): callStackDoc, called at compiler/utils/Outputable.hs:1150:37 in ghc:Outputable pprPanic, called at compiler/utils/Outputable.hs:1206:5 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcMType.hs:306:25 in ghc:TcMType Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug ``` when I try to check the ~ relation: ``` test :: Proxy (GHC.TypeLits.MySet '[Int, Bool, Char]) -> Proxy (GHC.TypeLits.MySet '[Int, Bool, Char]) test = id ``` But when I do something like this: ``` test :: (GHC.TypeLits.MySet '[Int, Bool, Char] ~ as, GHC.TypeLits.MySet '[Int, Bool, Char] ~ bs) => Proxy as -> Proxy bs test = id ``` everything is fine. I think the problem is that I tried to follow TcTypeNats#typeSymbolCmpTyCon etc. style, but all the functions there have two inputs and one output with built-in kind (I tried to do the [k] kind myself), so probably I understood something wrong.
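[Editor's aside: for comparison, an ordinary (non-built-in) closed type family of the same shape is accepted without a panic, which suggests the issue lies in how the built-in family's reduction coercions are constructed rather than in the `[k] -> [k]` shape itself. A minimal sketch — `MySetUser` is a hypothetical stand-in, and the family is simply the identity here, since the actual sorting can only be done inside the compiler:]

```haskell
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies #-}
module MySetSketch where

import Data.Proxy (Proxy)

-- A user-level stand-in with the same shape as the built-in MySet.
type family MySetUser (xs :: [k]) :: [k] where
  MySetUser xs = xs

-- The analogue of the failing case above: with an ordinary closed
-- type family, this type-checks, because both sides reduce to the
-- same type and GHC constructs the coercion itself.
test :: Proxy (MySetUser '[Int, Bool, Char])
     -> Proxy (MySetUser '[Int, Bool, Char])
test = id
```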
Could someone give me a hint what I did wrong, please? I can show my code if needed. Thank you for your time, Alice. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon May 21 12:41:51 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 21 May 2018 12:41:51 +0000 Subject: abi-depends field In-Reply-To: References: Message-ID: Hello, anyone? At the moment I am doing a manual-diff on this batch of unexpected failures, which is jolly annoying. Should we just revert some patch for now? Unexpected failures: cabal/ghcpkg02.run ghcpkg02 [bad stdout] (normal) backpack/cabal/bkpcabal05/bkpcabal05.run bkpcabal05 [bad stdout] (normal) backpack/cabal/bkpcabal07/bkpcabal07.run bkpcabal07 [bad stdout] (normal) backpack/cabal/bkpcabal04/bkpcabal04.run bkpcabal04 [bad stdout] (normal) backpack/cabal/bkpcabal06/bkpcabal06.run bkpcabal06 [bad stdout] (normal) cabal/cabal04/cabal04.run cabal04 [bad stdout] (normal) cabal/cabal09/cabal09.run cabal09 [bad stdout] (normal) backpack/cabal/T14304/T14304.run T14304 [bad stdout] (normal) cabal/T12733/T12733.run T12733 [bad stdout] (normal) cabal/cabal03/cabal03.run cabal03 [bad stdout] (normal) cabal/cabal05/cabal05.run cabal05 [bad stdout] (normal) driver/T3007/T3007.run T3007 [bad stdout] (normal) driver/T1372/T1372.run T1372 [bad stdout] (normal) patsyn/should_compile/T13350/T13350.run T13350 [bad stdout] (normal) typecheck/bug1465/bug1465.run bug1465 [bad stdout] (normal) From: ghc-devs On Behalf Of Simon Peyton Jones via ghc-devs Sent: 21 May 2018 10:35 To: ghc-devs Subject: abi-depends field I'm seeing lots of these errors. What's this "ignoring (possibly broken) abi-depends field" stuff? Or have I failed to update sufficiently vigorously somwhow? 
Simon =====> T13350(normal) 1 of 1 [0, 0, 0] cd "./should_compile/T13350/T13350.run" && $MAKE -s --no-print-directory T13350 Actual stdout output differs from expected: diff -uw "/dev/null" "./should_compile/T13350/T13350.run/T13350.run.stdout.normalised" --- /dev/null 2018-04-26 12:34:15.027999213 +0100 +++ ./should_compile/T13350/T13350.run/T13350.run.stdout.normalised 2018-05-21 10:33:05.181794031 +0100 @@ -0,0 +1 @@ +ignoring (possibly broken) abi-depends field for packages *** unexpected failure for T13350(normal) Unexpected results from: TEST="T13350" SUMMARY for test run started at Mon May 21 10:32:53 2018 BST 0:00:12 spent to go through 1 total tests, which gave rise to 1 test cases, of which 0 were skipped 0 had missing libraries 0 expected passes 0 expected failures 0 caused framework failures 0 caused framework warnings 0 unexpected passes 1 unexpected failures 0 unexpected stat failures Unexpected failures: should_compile/T13350/T13350.run T13350 [bad stdout] (normal) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon May 21 13:09:13 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 21 May 2018 09:09:13 -0400 Subject: abi-depends field In-Reply-To: References: Message-ID: <878t8ddukr.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Hello, anyone? > At the moment I am doing a manual-diff on this batch of unexpected failures, which is jolly annoying. > Should we just revert some patch for now? > Sigh, this is due to 1cdc14f9c014f1a520638f7c0a01799ac6d104e6, which I applied as a bug-fix for 8.4.3. We likely ought to revert it if it is causing trouble. Edward, do you know why GHC's testsuite might be triggering this warning? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Mon May 21 16:46:27 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 21 May 2018 16:46:27 +0000 Subject: Potential improvements for CSE, strictness analyzer, let-floating In-Reply-To: References: Message-ID: I’ve always thought that so-called “late lambda lifting” is a not-very-well-explored candidate for perf improvement. Nick Frisby did some preliminary work, but it would (I believe) reward some careful attention. https://ghc.haskell.org/trac/ghc/wiki/LateLamLift Simon From: ghc-devs On Behalf Of Peter Podlovics Sent: 13 May 2018 16:18 To: ghc-devs at haskell.org Subject: Potential improvements for CSE, strictness analyzer, let-floating Hi all, During the summer, as a university project, I would like to make some contributions to GHC. There are three topics in particular that piqued my interest: common subexpression elimination, strictness analysis, and let-floating. I would like to ask you whether there is any room for improvement in these parts of the compiler. Could you give me some pointers? Thanks in advance, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Tue May 22 10:07:34 2018 From: mail at nh2.me (Niklas Hambüchen) Date: Tue, 22 May 2018 12:07:34 +0200 Subject: ZuriHac 2018 GHC DevOps track - Request for Contributions In-Reply-To: References: Message-ID: On 08/04/2018 15.01, Michal Terepeta wrote: > I'd be happy to help. :) I know a bit about the backend (e.g., cmm level), but it might be tricky to find there some smaller/self-contained projects that would fit ZuriHac. Hey Michal, that's great. Is there a topic you would like to give a talk about, or a pet peeve task that you'd like to tick off with the help of new potential contributors in a hacking session?
Other topics that might be nice and that you might know about are "How do I add a new primop to GHC", handling all the way from the call on the Haskell side to emitting the code, or (if I remember that correctly) checking out the issue that GHC doesn't do certain optimisations yet (such as emitting less-than-full-word instructions, e.g. for adding two Word8s, or some missing strength reductions as in [1]). > You've mentioned performance regression tests - maybe we could also work on improving nofib? For sure! Shall we run a hacking session together where we let attendees work on both performance regression tests and nofib? It seems these two fit well together. Niklas [1]: https://stackoverflow.com/questions/23315001/maximizing-haskell-loop-performance-with-ghc/23322255#23322255 From mail at nh2.me Tue May 22 10:12:02 2018 From: mail at nh2.me (Niklas Hambüchen) Date: Tue, 22 May 2018 12:12:02 +0200 Subject: ZuriHac 2018 GHC DevOps track - Request for Contributions In-Reply-To: References: Message-ID: <81282005-b128-0f44-7061-2ec02a47180f@nh2.me> Hey Ömer, On 09/04/2018 06.56, Ömer Sinan Ağacan wrote: > I'd also be happy to help. At the very least I can be around as a mentor, but > if I can find a suitable task I may also host a hacking session. That's awesome! Do you have a topic that you'd be especially interested in for running as a hacking session? In case not, mentoring help would also be very appreciated for other hacking sessions.
The tentative topics we have right now are: * Adding performance regression tests * Finding and fixing Hadrian issues * Back-end/Codegen * CI infrastructure * General mentoring & working on any GHC topic Best, Niklas From mail at joachim-breitner.de Tue May 22 12:40:23 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 22 May 2018 08:40:23 -0400 Subject: 505 Error on https://www.haskell.org/ghc/license.html Message-ID: <97d8d6aa08832e77fb1574e5c1ca4ca1e64f3a72.camel@joachim-breitner.de> Hi, on https://www.haskell.org/ghc/license.html I see 500 Internal Server Error nginx before and after the text. I don’t expect many people to look there, but still ugly. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From alicekoroleva239 at gmail.com Tue May 22 13:38:19 2018 From: alicekoroleva239 at gmail.com (alice) Date: Tue, 22 May 2018 16:38:19 +0300 Subject: Cannot compute type fingerprint in plugin, kinds/types are mixed Message-ID: <9E22A4CB-3ADB-4AF9-B106-5F187D0E9785@gmail.com> Hello again. I’m trying to make a function in a type-checking plugin that takes TyCoRep#Type and makes its fingerprint that matches `typeRepFingerprint (typeRep :: TypeRep (type))`. As I understood, the type’s fingerprint is made in DsBinds#dsEvTypeable by generating code that makes the fingerprint, so I can’t reuse functions which are already written. My goal is to write a function that makes fingerprints of types which are not type variables, such as Int, Maybe Int, ‘[Int] etc. So I tried to follow `mkTrCon` and `mkTrApp` style, and I think I managed to process simple types like Int, Maybe Int — right now I’m processing TyConApp only. But when I tried to make a promoted data fingerprint like ‘[Int] I faced some problems.
For example, for making the `’Just Int` fingerprint, first I have to compute the 'Just fingerprint by computing 'Just tyCon’s and * fingerprints, then combining them into one fingerprint. Then I have to apply ‘Just to Int. But when I try to do something like that while type checking, I cannot separate the two cases that match runtime’s `TypeRep (a b)` (a representation for a type application) and `TypeRep a` (a representation for a type constructor) because they are both represented by TyConApp. Also I can’t separate tyCon kinds and types for a type application because they are merged into one list which is stored in `TyConApp _ [KindOrType]`. The [KindOrType] list for `’Just Int` is `[*, Int]`. Is there any way to separate these two cases and kinds/types? Perhaps there is an easier way to make this function? If not, does the type fingerprint function seem possible to make? Any help would be appreciated, Alice. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue May 22 15:39:23 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 22 May 2018 11:39:23 -0400 Subject: 505 Error on https://www.haskell.org/ghc/license.html In-Reply-To: <97d8d6aa08832e77fb1574e5c1ca4ca1e64f3a72.camel@joachim-breitner.de> References: <97d8d6aa08832e77fb1574e5c1ca4ca1e64f3a72.camel@joachim-breitner.de> Message-ID: <87tvqzd7ix.fsf@smart-cactus.org> Joachim Breitner writes: > Hi, > > on https://www.haskell.org/ghc/license.html I see > > 500 Internal Server Error nginx > > before and after the text. I don’t expect many people to look there, > but still ugly. > Fixed. Thanks for letting me know. Cheers, -Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nicolas.frisby at gmail.com Tue May 22 15:52:11 2018 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Tue, 22 May 2018 08:52:11 -0700 Subject: Cannot compute type fingerprint in plugin, kinds/types are mixed In-Reply-To: <9E22A4CB-3ADB-4AF9-B106-5F187D0E9785@gmail.com> References: <9E22A4CB-3ADB-4AF9-B106-5F187D0E9785@gmail.com> Message-ID: Hi Alice. I'm having trouble following your question -- I'm not familiar with the internals of fingerprint generation. But, have you scoured the compiler/typecheck/TcTypeable.hs file? It seems likely to have similar logic to what you seem to be seeking. HTH. -Nick On Tue, May 22, 2018, 06:38 alice wrote: > Hello again. I’m trying to make a function in a type-checking plugin that > takes TyCoRep#Type and makes its fingerprint that matches > `typeRepFingerprint (typeRep :: TypeRep (type))`. As I understood the > type’s fingerprint is made in DsBinds#dsEvTypeable by generating a code > that makes the fingerprint, so I can’t reuse functions which are already > written. > > My goal is to write a function that makes fingerprints of types which are > not type variables such as Int, Maybe Int, ‘[Int] etc. So I tried to follow > `mkTrCon` and `mkTrApp` style, and I think I managed to process simple > types like Int, Maybe Int — right now I’m processing TyConApp only. But > when I tried to make promoted data fingerprint like ‘[Int] I faced some > problems. > > For example, for making `’Just Int` fingerprint first I have to compute > 'Just fingerprint by computing 'Just tyCon’s and * fingerprints, then > combining them into one fingerprint. Then I have to apply ‘Just to Int. 
But > when I try to do something like that while type checking I cannot separate > two cases that match runtime’s`TypeRep (a b)` (a representation for a type > application) and `TypeRep a` (a representation for a type constructor) > because they are represented by TyConApp. Also I can’t separate tyCon kinds > and types for type application because they are merged into one list which > is stored in `TyConApp _ [KindOrType]`. [KindOrType] list for `’Just Int` > is `[*, Int]`. Is there any way to separate this two cases and kinds/types? > > Probably there is an easier way to make this function? If not, does type > fingerprint function seem possible to make? > > Any help would be appreciated, > > Alice. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Tue May 22 20:37:34 2018 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 22 May 2018 16:37:34 -0400 Subject: abi-depends field In-Reply-To: <878t8ddukr.fsf@smart-cactus.org> References: <878t8ddukr.fsf@smart-cactus.org> Message-ID: <1527021423-sup-4955@sabre> The unexpected failures are benign; they are what you'd expect with the fix. I recommend accepting all of the changes. Edward Excerpts from Ben Gamari's message of 2018-05-21 09:09:13 -0400: > Simon Peyton Jones via ghc-devs writes: > > > Hello, anyone? > > At the moment I am doing a manual-diff on this batch of unexpected failures, which is jolly annoying. > > Should we just revert some patch for now? > > > Sigh, this is due to 1cdc14f9c014f1a520638f7c0a01799ac6d104e6, which I > applied as a bug-fix for 8.4.3. We likely ought to revert it if it is > causing trouble. > > Edward, do you know why GHC's testsuite might be triggering this warning? 
> > Cheers, > > - Ben From lonetiger at gmail.com Wed May 23 00:39:53 2018 From: lonetiger at gmail.com (Phyx) Date: Wed, 23 May 2018 01:39:53 +0100 Subject: 8.6.1 status In-Reply-To: <87k1rzedhb.fsf@smart-cactus.org> References: <87muwvedpn.fsf@smart-cactus.org> <87k1rzedhb.fsf@smart-cactus.org> Message-ID: I think I'll have to punt my changes to 8.8.1. The I/O manager is mostly working, but it's taking some time to iron out the corner cases and ensure the behavior is not different than what people expect with mio, even though the model is quite different. I have 12 failing tests, but each takes me a reasonable amount of time to diagnose and fix without breaking other things, and I still need to optimize it all and add networking support. The linker patches require some more extensive testing, so I don't want to rush those through either. At this point, both patches are huge, so they would take a considerable amount of time to review anyway; they wouldn't stand much of a chance even if I put them up tomorrow. Thanks, Tamar On Sat, May 19, 2018, 18:56 Ben Gamari wrote: > Ben Gamari writes: > > > Hi everyone, > > > > As noted a few weeks ago, the 8.6.1 fork is quickly approaching. > > Currently the plan is to cut the branch on Friday, 1 May 2018. > > Silly me; the above is supposed to read "Friday, 1 June 2018". > > Sorry for the confusion! > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From erkokl at gmail.com Wed May 23 00:45:55 2018 From: erkokl at gmail.com (Levent Erkok) Date: Tue, 22 May 2018 17:45:55 -0700 Subject: 8.6.1 status In-Reply-To: References: <87muwvedpn.fsf@smart-cactus.org> <87k1rzedhb.fsf@smart-cactus.org> Message-ID: I wish I could offer more than just a request, but the following bug: https://ghc.haskell.org/trac/ghc/ticket/15105 is rendering some oft-used packages on Macs unusable. (doctest being the prime example in my case.) It would truly be awesome if it were fixed in the next release. -Levent. On Tue, May 22, 2018 at 5:39 PM, Phyx wrote: > I think I'll have to punt my changes for 8.8.1, > > The I/O manager is mostly working but it's taking some time to iron out > the. Corner cases and ensure the behavior is not different than what people > expect with mio even though the model is quite different. > > I have 12 failing tests but each take me a reasonable amount of time to > diagnose and fix without breaking other things and I still need to optimize > it all and add networking support. > > The linker patches require some more extensive testing so I don't want to > rush those through either. > > At this point, both patches are huge so they would take a considerable > amount of time to review anyway so they wouldn't stand much of a chance > even if I put them up tomorrow. > > Thanks, > Tamar > > On Sat, May 19, 2018, 18:56 Ben Gamari wrote: >> Ben Gamari writes: >> >> > Hi everyone, >> > >> > As noted a few weeks ago, the 8.6.1 fork is quickly approaching. >> > Currently the plan is to cut the branch on Friday, 1 May 2018. >> >> Silly me; the above is supposed to read "Friday, 1 June 2018". >> >> Sorry for the confusion!
>> >> Cheers, >> >> - Ben >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed May 23 01:23:55 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 22 May 2018 21:23:55 -0400 Subject: 8.6.1 status In-Reply-To: References: <87muwvedpn.fsf@smart-cactus.org> <87k1rzedhb.fsf@smart-cactus.org> Message-ID: <87k1rvcggp.fsf@smart-cactus.org> Phyx writes: > I think I'll have to punt my changes for 8.8.1, > Sad but understandable. Good luck and let us know how things go! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From alicekoroleva239 at gmail.com Wed May 23 04:10:18 2018 From: alicekoroleva239 at gmail.com (alice) Date: Wed, 23 May 2018 07:10:18 +0300 Subject: Cannot compute type fingerprint in plugin, kinds/types are mixed In-Reply-To: References: <9E22A4CB-3ADB-4AF9-B106-5F187D0E9785@gmail.com> Message-ID: <674E18A7-09C2-4528-891C-4D24C37E2304@gmail.com> Yes, I’ve looked through this file, but it seems to me that I can't use anything there in my function. > On 22 May 2018, at 18:52, Nicolas Frisby wrote: > > Hi Alice. > > I'm having trouble following your question -- I'm not familiar with the internals of fingerprint generation. > > But, have you scoured the compiler/typecheck/TcTypeable.hs file? It seems likely to have similar logic to what you seem to be seeking. > > HTH. -Nick > > On Tue, May 22, 2018, 06:38 alice > wrote: > Hello again.
I’m trying to make a function in a type-checking plugin that takes TyCoRep#Type and makes its fingerprint that matches `typeRepFingerprint (typeRep :: TypeRep (type))`. As I understood the type’s fingerprint is made in DsBinds#dsEvTypeable by generating a code that makes the fingerprint, so I can’t reuse functions which are already written. > > My goal is to write a function that makes fingerprints of types which are not type variables such as Int, Maybe Int, ‘[Int] etc. So I tried to follow `mkTrCon` and `mkTrApp` style, and I think I managed to process simple types like Int, Maybe Int — right now I’m processing TyConApp only. But when I tried to make promoted data fingerprint like ‘[Int] I faced some problems. > > For example, for making `’Just Int` fingerprint first I have to compute 'Just fingerprint by computing 'Just tyCon’s and * fingerprints, then combining them into one fingerprint. Then I have to apply ‘Just to Int. But when I try to do something like that while type checking I cannot separate two cases that match runtime’s`TypeRep (a b)` (a representation for a type application) and `TypeRep a` (a representation for a type constructor) because they are represented by TyConApp. Also I can’t separate tyCon kinds and types for type application because they are merged into one list which is stored in `TyConApp _ [KindOrType]`. [KindOrType] list for `’Just Int` is `[*, Int]`. Is there any way to separate this two cases and kinds/types? > > Probably there is an easier way to make this function? If not, does type fingerprint function seem possible to make? > > Any help would be appreciated, > > Alice. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xnningxie at gmail.com Wed May 23 15:34:46 2018 From: xnningxie at gmail.com (Ningning Xie) Date: Wed, 23 May 2018 11:34:46 -0400 Subject: Build Failure Message-ID: Hi everyone, I pulled from the head this morning and would like to rebase my local changes on it. Even before rebasing, I got a build error. Everything was fine the last time I built (almost one week ago). Then I tried to clone a fresh repo, and the build gave me the same error message. After a bit of searching, the error message seems to be specific to macOS (mine is Sierra 10.12.6). Platform info (let me know if you need more information): --- repo/ghc » which ghc ghc: aliased to stack ghc --- repo/ghc » stack exec -- ghc --version The Glorious Glasgow Haskell Compilation System, version 7.10.3 --- repo/ghc » gcc --version Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 9.0.0 (clang-900.0.39.2) Target: x86_64-apple-darwin16.7.0 Thread model: posix InstalledDir: /Library/Developer/CommandLineTools/usr/bin Has anyone encountered the same problem, and is there any clue on how to solve it?
Thanks, Ningning "inplace/bin/ghc-stage1" -hisuf p_hi -osuf p_o -hcsuf p_hc -static -prof -eventlog -H32m -O -Wall -this-unit-id integer-gmp-1.0.2.0 -hide-all-packages -i -ilibraries/integer-gmp/src/ -ilibraries/integer-gmp/dist-install/build -Ilibraries/integer-gmp/dist-install/build -ilibraries/integer-gmp/dist-install/build/./autogen -Ilibraries/integer-gmp/dist-install/build/./autogen -Ilibraries/integer-gmp/include -Ilibraries/integer-gmp/dist-install/build/include -optP-include -optPlibraries/integer-gmp/dist-install/build/./autogen/cabal_macros.h -package-id ghc-prim-0.5.3 -this-unit-id integer-gmp -Wall -XHaskell2010 -O2 -no-user-package-db -rtsopts -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir libraries/integer-gmp/dist-install/build -hidir libraries/integer-gmp/dist-install/build -stubdir libraries/integer-gmp/dist-install/build -c libraries/integer-gmp/src//GHC/Integer/Logarithms.hs -o libraries/integer-gmp/dist-install/build/GHC/Integer/Logarithms.p_o -dyno libraries/integer-gmp/dist-install/build/GHC/Integer/Logarithms.dyn_o /var/folders/zq/tn9b58wn37b0yby0vm8_7_cr0000gn/T/ghc47691_0/ghc_3.s:221:8: error: error: unsupported relocation with subtraction expression, symbol '_integerzmgmp_GHCziIntegerziType_quotInteger_closure' can not be undefined in a subtraction expression .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_sMi_info)+0 ^ | 221 | .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_sMi_info)+0 | ^ /var/folders/zq/tn9b58wn37b0yby0vm8_7_cr0000gn/T/ghc47691_0/ghc_3.s:318:8: error: error: unsupported relocation with subtraction expression, symbol '_integerzmgmp_GHCziIntegerziType_quotInteger_closure' can not be undefined in a subtraction expression .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_cPd_info)+0 ^ | 318 | .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_cPd_info)+0 | ^
/var/folders/zq/tn9b58wn37b0yby0vm8_7_cr0000gn/T/ghc47691_0/ghc_3.s:390:8: error: error: unsupported relocation with subtraction expression, symbol '_integerzmgmp_GHCziIntegerziType_quotInteger_closure' can not be undefined in a subtraction expression .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_cP8_info)+0 ^ | 390 | .long _integerzmgmp_GHCziIntegerziType_quotInteger_closure-(_cP8_info)+0 | ^ `gcc' failed in phase `Assembler'. (Exit code: 1) make[1]: *** [libraries/integer-gmp/dist-install/build/GHC/Integer/Logarithms.p_o] Error 1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed May 23 15:40:33 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 23 May 2018 11:40:33 -0400 Subject: Build Failure In-Reply-To: References: Message-ID: <87a7sqcrdg.fsf@smart-cactus.org> Ningning Xie writes: > Hi everyone, > > I pulled from the head this morning and would like to rebase my local > changes on it. Even before I do rebase, I got a build error. Everything is > fine last time I built (almost one week ago). Then I tried to glone a new > repo and the build gives me the same error message. > Hi Ningning, Yes, this is a known issue. I have a patch (https://phabricator.haskell.org/D4715); unfortunately it hasn't yet validated, so it hasn't yet been merged. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
There's also a useful wiki page. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From qdunkan at gmail.com Wed May 23 18:23:18 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Wed, 23 May 2018 11:23:18 -0700 Subject: 8.6.1 status In-Reply-To: References: <87muwvedpn.fsf@smart-cactus.org> <87k1rzedhb.fsf@smart-cactus.org> Message-ID: Also -fdefer-type-errors in ghci is broken for 8.4: https://ghc.haskell.org/trac/ghc/ticket/14963 I understand it's a complicated problem, but it would be sad if it were broken for all of 8.6 as well. At least how about a quick patch that emits a warning when it sees --interactive -fdefer-type-errors and says this combination is no longer supported? On Tue, May 22, 2018 at 5:45 PM, Levent Erkok wrote: > I wish I can offer more than just a request, but the following bug: > > https://ghc.haskell.org/trac/ghc/ticket/15105 > > is rendering some oft-used packages on Macs unusable. (doctest being the > prime example in my case.) > > It would truly be awesome if it was fixed in the next release. > > -Levent. > > On Tue, May 22, 2018 at 5:39 PM, Phyx wrote: >> >> I think I'll have to punt my changes for 8.8.1, >> >> The I/O manager is mostly working but it's taking some time to iron out >> the. Corner cases and ensure the behavior is not different than what people >> expect with mio even though the model is quite different. >> >> I have 12 failing tests but each take me a reasonable amount of time to >> diagnose and fix without breaking other things and I still need to optimize >> it all and add networking support. >> >> The linker patches require some more extensive testing so I don't want to >> rush those through either. >> >> At this point, both patches are huge so they would take a considerable >> amount of time to review anyway so they wouldn't stand much of a chance even >> if I put them up tomorrow. 
>> >> Thanks, >> Tamar >> >> On Sat, May 19, 2018, 18:56 Ben Gamari wrote: >>> >>> Ben Gamari writes: >>> >>> > Hi everyone, >>> > >>> > As noted a few weeks ago, the 8.6.1 fork is quickly approaching. >>> > Currently the plan is to cut the branch on Friday, 1 May 2018. >>> >>> Silly me; the above is supposed to read "Friday, 1 June 2018". >>> >>> Sorry for the confusion! >>> >>> Cheers, >>> >>> - Ben >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From mail at joachim-breitner.de Thu May 24 12:21:33 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 24 May 2018 14:21:33 +0200 Subject: Inconsistency in CoreSubst invariant Message-ID: Hi, Stephanie stumbled on this apparent inconsistency in CoreSubst, about what ought to be in the in_scope_set of a Subst. On the one hand, the file specifies #in_scope_invariant# The in-scope set contains at least those 'Id's and 'TyVar's that will be in scope /after/ applying the substitution to a term. Precisely, the in-scope set must be a superset of the free vars of the substitution range that might possibly clash with locally-bound variables in the thing being substituted in. Note that the first sentence does not actually imply the second (unless you replace “Precisely” with “In particular”). 
But the comment even explicitly states:

    Make it empty, if you know that all the free vars of the
    substitution are fresh, and hence can't possibly clash

Looking at the code I see that lookupIdSubst indeed expects all
variables to be in either the actual substitution or the in_scope_set:

    lookupIdSubst :: SDoc -> Subst -> Id -> CoreExpr
    lookupIdSubst doc (Subst in_scope ids _ _) v
      | not (isLocalId v) = Var v
      | Just e  <- lookupVarEnv ids v = e
      | Just v' <- lookupInScope in_scope v = Var v'
            -- Vital! See Note [Extending the Subst]
      | otherwise = WARN( True, text "CoreSubst.lookupIdSubst" <+> doc <+> ppr v
                                $$ ppr in_scope)
                    Var v

Note the warning!

It seems that one of these three is true:

 A  The invariant should be the first sentence; in particular, the
    in_scope_set contains all the free variables that are not substituted.
    The rest of that comment needs to be updated to reflect that.

 B  The invariant should be the second sentence, and the WARN is bogus,
    i.e. it WARNs about situations that are actually ok.
    The rest of that comment needs to be updated, and the WARN removed.

 C  The invariant should be the second sentence, and the WARN is still
    ok there because, well, it is only a warning and only appears in
    DEBUG builds.
    The rest of that comment needs to be updated; the WARN remains.

Which one is it?

Cheers,
Joachim

--
Joachim Breitner
  mail at joachim-breitner.de
  http://www.joachim-breitner.de/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From rae at cs.brynmawr.edu  Fri May 25 01:47:42 2018
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Thu, 24 May 2018 21:47:42 -0400
Subject: Inconsistency in CoreSubst invariant
In-Reply-To: 
References: 
Message-ID: <8A4EB754-28C6-43BE-A753-78490FF3DD71@cs.brynmawr.edu>

> On May 24, 2018, at 8:21 AM, Joachim Breitner wrote:
>
> Which one is it?

See Note [The substitution invariant] in TyCoRep.
That applies to types, not terms, but I'd be shocked if terms had a
different situation. That would suggest that the answer is (A) (and
that the WARNing is correct).

Richard

From simonpj at microsoft.com  Fri May 25 11:33:01 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 25 May 2018 11:33:01 +0000
Subject: Inconsistency in CoreSubst invariant
In-Reply-To: 
References: 
Message-ID: 

Ha! That comment is out of date. More up to date is Note [The
substitution invariant] in TyCoRep. I've updated it (and will commit in
a moment) to say the stuff below.

Does that answer the question?

Simon

{- Note [The substitution invariant]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When calling (substTy subst ty) it should be the case that the in-scope
set in the substitution is a superset of both:

  (SIa) The free vars of the range of the substitution
  (SIb) The free vars of ty minus the domain of the substitution

The same rules apply to other substitutions (notably CoreSubst.Subst).

* Reason for (SIa). Consider
      substTy [a :-> Maybe b] (forall b. b->a)
  we must rename the forall b, to get
      forall b2. b2 -> Maybe b
  Making 'b' part of the in-scope set forces this renaming to take place.

* Reason for (SIb). Consider
      substTy [a :-> Maybe b] (forall b. (a,b,x))
  Then if we use the in-scope set {b}, satisfying (SIa), there is a
  danger we will rename the forall'd variable to 'x' by mistake, getting
  this:
      forall x. (Maybe b, x, x)
  Breaking (SIb) caused the bug from #11371.

Note: if the free vars of the range of the substitution are freshly
created, then the problems of (SIa) can't happen, and so it would be
sound to ignore (SIa).

| -----Original Message-----
| From: ghc-devs On Behalf Of Joachim Breitner
| Sent: 24 May 2018 13:22
| To: ghc-devs at haskell.org
| Subject: Inconsistency in CoreSubst invariant
| | On the one hand, the file specifies | | #in_scope_invariant# The in-scope set contains at least those 'Id's | and 'TyVar's that will be in scope /after/ applying the | substitution | to a term. Precisely, the in-scope set must be a superset of the | free vars of the substitution range that might possibly clash with | locally-bound variables in the thing being substituted in. | | Note that the first sentence does not actually imply the second | (unless you replace “Precisely” with “In particular”). But the comment | even explicitly states: | | Make it empty, if you know that all the free vars of the | substitution are fresh, and hence can't possibly clash | | | | Looking at the code I see that lookupIdSubst indeed expects all | variables in either the actual substitution or in the in_scope_set: | | lookupIdSubst :: SDoc -> Subst -> Id -> CoreExpr | lookupIdSubst doc (Subst in_scope ids _ _) v | | not (isLocalId v) = Var v | | Just e <- lookupVarEnv ids v = e | | Just v' <- lookupInScope in_scope v = Var v' | -- Vital! See Note [Extending the Subst] | | otherwise = WARN( True, text "CoreSubst.lookupIdSubst" <+> doc | <+> ppr v | $$ ppr in_scope) | Var v | | Note the warning! | | It seems that one of these three are true: | | A The invariant should be the first sentence; in particular; the | in_scope_set contains all the free variables that are not | substituted. | The rest of that comment needs to be updated to reflect that. | | B The invariant should be the second sentence, and the WARN | is bogus, i.e. WARNs about situations that are actually ok. | The rest of that comment needs to be updated, and the WARN removed. | | C The invariant should be the second sentence, and the WARN | is still ok there because, well, it is only a warning and only | appears in DEBUG builds. | The rest of that comment needs to be updated, the WARN remains. | | Which one is it? 
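As an aside for readers of the archive: the capture problem this in-scope set guards against can be reproduced with a toy substitution over a miniature lambda calculus. The following is an illustrative sketch, not GHC's actual CoreSubst code; like GHC, it trusts the caller to obey the invariant that the in-scope set covers the free variables of the payload being substituted in.

```haskell
import           Data.Set (Set)
import qualified Data.Set as Set

data Expr = Var String | App Expr Expr | Lam String Expr
  deriving (Eq, Show)

freeVars :: Expr -> Set String
freeVars (Var v)   = Set.singleton v
freeVars (App a b) = freeVars a `Set.union` freeVars b
freeVars (Lam v b) = Set.delete v (freeVars b)

-- subst inScope x arg body: substitute arg for x in body.
-- Mirrors the invariant under discussion: inScope must contain (at
-- least) the free variables of arg, so that a binder clash can be
-- detected by looking at inScope alone.
subst :: Set String -> String -> Expr -> Expr -> Expr
subst inScope x arg = go inScope
  where
    go _ (Var y)
      | y == x    = arg
      | otherwise = Var y
    go scope (App a b) = App (go scope a) (go scope b)
    go scope (Lam y body)
      | y == x = Lam y body                    -- x is shadowed: stop
      | y `Set.member` scope =                 -- possible clash: rename
          let y' = fresh (scope `Set.union` freeVars body) y
          in  Lam y' (go (Set.insert y' scope) (rename y y' body))
      | otherwise = Lam y (go (Set.insert y scope) body)

-- Rename a binder; safe only because the new name is fresh.
rename :: String -> String -> Expr -> Expr
rename old new = goR
  where
    goR (Var v)   = Var (if v == old then new else v)
    goR (App a b) = App (goR a) (goR b)
    goR (Lam v b)
      | v == old  = Lam v b
      | otherwise = Lam v (goR b)

fresh :: Set String -> String -> String
fresh avoid v =
  head [ v' | n <- [1 :: Int ..], let v' = v ++ show n
            , v' `Set.notMember` avoid ]
```

For example, substituting `b` for `a` in `\b. a b` with in-scope set {b} renames the binder, producing `\b1. b b1` rather than the capturing `\b. b b`.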
|
| Cheers,
| Joachim
|
| --
| Joachim Breitner
|   mail at joachim-breitner.de
|   http://www.joachim-breitner.de/

From mail at joachim-breitner.de  Fri May 25 11:36:18 2018
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Fri, 25 May 2018 13:36:18 +0200
Subject: Inconsistency in CoreSubst invariant
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Friday, 25 May 2018 at 11:33 +0000, Simon Peyton Jones wrote:
> Ha! That comment is out of date. More up to date is Note [The
> substitution invariant] in TyCoRep. I've updated it (and will commit
> in a moment) to say the stuff below.
>
> Does that answer the question?

indeed it does!

Thanks,
Joachim

--
Joachim Breitner
  mail at joachim-breitner.de
  http://www.joachim-breitner.de/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: This is a digitally signed message part
URL: 

From simonpj at microsoft.com  Fri May 25 16:51:57 2018
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 25 May 2018 16:51:57 +0000
Subject: An idea for a different style of metaprogramming evaluation
 using the optimiser
In-Reply-To: 
References: 
Message-ID: 

(Clearing up my inbox.) Sorry to be slow on this, Matthew.

A difficulty I see is that the optimiser must exhaustively evaluate
until it gets to a quote. And that is not easy. If you had

    $$$(if (reverse [1,2,3] == [3,2,1]) then .. else .. )

you'd probably get stuck. Or at least you'd have to be able to inline
reverse, and (==), and all the typeclass machinery for dealing with (==).
A merit of the current setup is that evaluation in a splice can call
arbitrary libraries, including compiled ones (not just bytecode!). Also,
the current setup lets you inspect the environment a bit with the Q
monad.

We already have typed and untyped TH; as yet I'm not convinced that the
advantages of a third route would pay for the costs. But don't let me
discourage you. I love being convinced! (Bandwidth to think about this
is limited though, hence the delay.)

Simon

| -----Original Message-----
| From: Matthew Pickering
| Sent: 27 February 2018 11:18
| To: Simon Peyton Jones
| Subject: Re: An idea for a different style of metaprogramming
| evaluation using the optimiser
|
| Sorry I wasn't clear. I'll try to flesh out the power example in much
| more detail. The general idea is that instead of using the bytecode
| interpreter to evaluate splices, the optimiser is often sufficient.
|
| The benefits of this are:
|
| 1. Cross-platform.
| 2. Simpler to use, as you are not manually creating ASTs.
| 3. More predictable, as there are never any IO actions.
|
| The motivation is to tell the inliner precisely what to do, rather
| than trying to convince it to do what you like, as it is quite
| unpredictable.
|
| Take the staged power example.
|
| ```
| power :: Int -> R Int -> R Int
| power n k = if n == 0
|               then .< 1 >.
|               else .< ~k * ~(power (n-1) k) >.
| ```
|
| I propose introducing two new pieces of syntax, $$$(..) and
| [||| |||], which are pronounced splice and quote. We also introduce a
| new type constructor R, for representation. We can informally give the
| types of quote and splice as a -> R a and R a -> a respectively.
|
| What are the limitations of $$$() and [||| |||]? The only way to make
| `R` fragments is by quoting. It is also not allowed to inspect the
| quoted code. The only way to interact with it is by splicing.
|
| This would then mean we'd stage power like so.
|
| ```
| power :: Int -> R Int -> R Int
| power n k = if n == 0
|               then [||| 1 |||]
|               else [||| $$$(k) * $$$(power (n-1) k) |||]
| ```
|
| Then to use power, we might call it like:
|
| ```
| $$$(power 3 [||| x |||])
| ```
|
| which we hope evaluates to x * x * x * 1.
|
| The difference I am proposing is that instead of the bytecode
| interpreter evaluating the splice, it is the optimiser which does the
| evaluation.
|
| Concretely, we interpret a splice as meaning "evaluate as much as
| possible" and a quote as meaning "don't evaluate me any more".
| "Evaluate as much as possible" means being very keen to inline
| functions, including recursive ones. This sounds bad, but it is the
| user who is in control by where they put the annotations.
|
| So in this example, we would evaluate as follows. E marks an
| "evaluation context" where we are very keen to evaluate. (Below, k
| abbreviates the quoted argument [||| x |||].)
|
| ```
| $$$(power 1 [||| x |||])
| =>
| E: power 1 [||| x |||]
| =>
| E: if 1 == 0 then [||| 1 |||] else ...
| => (Eval condition)
| E: [||| $$$(k) * $$$(power (1-1) k) |||]
| => (Quote removes evaluation context)
| $$$(k) * $$$(power (1-1) k)
| => (Eval $$$(k))
| x * $$$(power (1-1) k)
| => (Eval $$$(power (1-1) k))
| x * E: power (1-1) k
| => (Unroll as we are in E)
| x * E: if 0 == 0 then [||| 1 |||] ...
| =>
| x * E: [||| 1 |||]
| =>
| x * 1
| ```
|
| So we can completely evaluate the splices properly in the evaluator if
| the definitions are simple enough. In this example, that isn't
| actually going to be the case, as the optimiser doesn't evaluate
| `1 == 0` for integers, because (==) is implemented using a primitive.
|
| If we rewrite the example using an inductive data type for the
| recursive argument, then we can see that it would work correctly.
|
| ```
| data Nat = Z | S Nat
|
| power :: Nat -> R Int -> R Int
| power n k = case n of
|               Z     -> [||| 1 |||]
|               (S n) -> [||| $$$(k) * $$$(power n k) |||]
| ```
|
| Then it is perhaps clearer that the optimiser is all we need in order
| to evaluate $$$(power (S Z) [||| x |||]). The current implementation
| of $$ and [|| ||] would invoke the bytecode interpreter to do this
| evaluation, but that is unnecessary for this simple example, which the
| optimiser could do just as well.
|
| ```
| $$$(power (S Z) [||| x |||])
| =>
| E: power (S Z) [||| x |||]
| =>
| E: case (S Z) of
|      Z     -> [||| 1 |||]
|      (S n) -> [||| $$$([||| x |||]) * $$$(power n [||| x |||]) |||]
| => (Case of known constructor)
| E: [||| $$$([||| x |||]) * $$$(power Z [||| x |||]) |||]
| => Quote
| $$$([||| x |||]) * $$$(power Z [||| x |||])
| => Splice
| x * $$$(power Z [||| x |||])
| =>
| x * E: power Z [||| x |||]
| => Inline power as we are in E
| x * E: (case Z of Z -> [||| 1 |||]; S n -> ...)
| => Case of known constructor
| x * E: [||| 1 |||]
| =>
| x * 1
| ```
|
| Hope that clears some things up.
|
| Matt
|
| On Tue, Feb 27, 2018 at 10:37 AM, Simon Peyton Jones wrote:
| > Matthew, I'm afraid I don't understand the proposal at all. Can you
| > give a few examples?
| >
| > S
| >
| > | -----Original Message-----
| > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of
| > | Matthew Pickering
| > | Sent: 27 February 2018 09:59
| > | To: GHC developers
| > | Subject: An idea for a different style of metaprogramming
| > | evaluation using the optimiser
| > |
| > | I've had an idea for a while of a different way we could evaluate
| > | TH-like splices which would be more lightweight and easier to work
| > | with.
| > |
| > | The idea is to create a third quotation/splicing mechanism which
| > | has no introspection (like MetaML) but then to evaluate these
| > | quotes and splices in the optimiser rather than using the bytecode
| > | interpreter.
| > |
| > | I am motivated by the many examples of recursive functions in
| > | libraries which, when given a statically known argument, should be
| > | able to be unrolled to produce much better code. It is
| > | understandable that the compiler doesn't try to do this itself,
| > | but there should be an easy way for the user to direct the
| > | compiler to do so. (See also
| > | https://www.reddit.com/r/haskell/comments/7yvb43/ghc_compiletime_evaluation/)
| > |
| > | An example to be concrete:
| > |
| > | Take the power function such that power k n computes k^n.
| > |
| > | power :: Int -> Int -> Int
| > | power k n = if n == 0
| > |               then 1
| > |               else k * power k (n - 1)
| > |
| > | If we statically know n, then we can create a staged version. We
| > | use R to indicate that an argument is dynamically known.
| > |
| > | power :: R Int -> Int -> R Int
| > | power k n = if n == 0
| > |               then .< 1 >.
| > |               else .< ~k * ~(power k (n-1)) >.
| > |
| > | One way to implement this in Haskell is to use typed Template
| > | Haskell quoting and splicing.
| > | The key thing to notice about why this works is that in order to
| > | splice `power k (n-1)` we need to evaluate `power k (n-1)` so that
| > | we have something of type `R Int` which we can then splice back
| > | into the quote.
| > |
| > | The way this is currently implemented is that the bytecode
| > | interpreter runs these splices in order to find out what they
| > | evaluate to.
| > |
| > | The idea is that instead, we add another mode which runs splices
| > | in the core-to-core optimiser.
| > | The optimiser performs evaluation by beta reduction, inlining and
| > | constant folding. For simple definitions on algebraic data types
| > | it does a very good job of eliminating overhead, as long as the
| > | call is not recursive. If we can just get the optimiser to inline
| > | recursive calls in a controlled manner as well, it should do a
| > | good job on the unrolled definition.
| > |
| > | The benefits of this are that it works across all platforms and
| > | fits nicely into the existing compilation pipeline. It also fits
| > | in nicely with the intuition that a "quote" means to stop
| > | evaluating and a "splice" means to evaluate.
| > |
| > | A disadvantage is that the optimiser is only a *partial* evaluator
| > | rather than an evaluator. It gets stuck evaluating things
| > | containing primitives and so on. I don't foresee this as a major
| > | issue, but it is a limitation that library authors should be aware
| > | of when writing their staged programs. To go back to the power
| > | example, the recursive condition would have to be an inductively
| > | defined natural (data N = Z | S N) rather than an Int, as the
| > | comparison operator for integers can't be evaluated by the
| > | optimiser. It is already rather easy to write staged programs
| > | which loop if you don't carefully consider the staging, so this
| > | seems no worse than the status quo.
| > |
| > | Does this sound at all sensible to anyone else?
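For comparison, the staged power function can be written today with Template Haskell, whose splices are run by the bytecode interpreter, which is exactly the status quo the proposal wants to sidestep. The sketch below uses untyped quotes and splices for brevity; the typed variant ($$ and [|| ||], with Q (TExp Int) in place of Q Exp) has the same shape. Because of the stage restriction, power must be defined in a different module from any splice that uses it.

```haskell
{-# LANGUAGE TemplateHaskell #-}

import Language.Haskell.TH (Exp, Q, pprint, runQ)

-- Builds the code of k^n; n must be known at compile time.
power :: Int -> Q Exp -> Q Exp
power n k
  | n == 0    = [| 1 |]
  | otherwise = [| $k * $(power (n - 1) k) |]

-- At a use site (in another module, due to the stage restriction):
--   cube :: Int -> Int
--   cube x = $(power 3 [| x |])   -- unrolls to x * (x * (x * 1))
```

The generated code can also be inspected at runtime with `runQ` and `pprint`, without splicing it at all.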
| > |
| > | Matt
| > | _______________________________________________
| > | ghc-devs mailing list
| > | ghc-devs at haskell.org
| > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From sean at mistersg.net  Sun May 27 01:38:48 2018
From: sean at mistersg.net (Sean D Gillespie)
Date: Sat, 26 May 2018 21:38:48 -0400
Subject: Unable to build on NixOS
Message-ID: <20180527013847.GA4688@sean-nixos>

Howdy,

I am unable to build the latest revision of GHC on NixOS. I can build
older revisions. Here's my error:

===--- building final phase
make --no-print-directory -f ghc.mk phase=final all
"inplace/bin/ghc-stage1" -hisuf p_hi -osuf p_o -hcsuf p_hc -static -prof
-eventlog -H32m -O -Wall -this-unit-id ghc-heap-8.5 -hide-all-packages -i
-ilibraries/ghc-heap/. -ilibraries/ghc-heap/dist-install/build
-Ilibraries/ghc-heap/dist-install/build
-ilibraries/ghc-heap/dist-install/build/./autogen
-Ilibraries/ghc-heap/dist-install/build/./autogen -Ilibraries/ghc-heap/.
-optP-include -optPlibraries/ghc-heap/dist-install/build/./autogen/cabal_macros.h
-package-id base-4.12.0.0 -package-id ghc-prim-0.5.3 -package-id rts
-Wall -XHaskell2010 -O2 -no-user-package-db -rtsopts
-Wno-deprecated-flags -Wnoncanonical-monad-instances
-odir libraries/ghc-heap/dist-install/build
-hidir libraries/ghc-heap/dist-install/build
-stubdir libraries/ghc-heap/dist-install/build -split-sections
-c libraries/ghc-heap/./GHC/Exts/Heap/Closures.hs
-o libraries/ghc-heap/dist-install/build/GHC/Exts/Heap/Closures.p_o
-dyno libraries/ghc-heap/dist-install/build/GHC/Exts/Heap/Closures.dyn_o

libraries/ghc-heap/GHC/Exts/Heap/Closures.hs:23:1: error:
    Could not find module `GHC.Exts.Heap.InfoTableProf'
    It is a member of the hidden package `ghc-heap-8.5'.
    You can run `:set -package ghc-heap' to expose it.
    (Note: this unloads all the modules in the current scope.)
    Use -v to see a list of the files searched for.
   |
23 | import GHC.Exts.Heap.InfoTableProf
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
make[1]: *** [libraries/ghc-heap/ghc.mk:4: libraries/ghc-heap/dist-install/build/GHC/Exts/Heap/Closures.p_o] Error 1
make: *** [Makefile:127: all] Error 2

For reference, here's my shell.nix:

{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghcHEAD" }:

let
  inherit (nixpkgs) pkgs;
  ghc = pkgs.haskell.packages.${compiler}.ghc;
in
with nixpkgs; lib.overrideDerivation ghc
  (drv: {
    name = "ghc-dev";
    nativeBuildInputs = drv.nativeBuildInputs ++ [
      arcanist
      git
      python36Packages.sphinx
      texlive.combined.scheme-basic
    ];
  })

Any help would be appreciated.
Thanks Sean G From patrick.doc at ameritech.net Sun May 27 17:56:15 2018 From: patrick.doc at ameritech.net (Patrick Dougherty) Date: Sun, 27 May 2018 12:56:15 -0500 Subject: Unable to build on NixOS In-Reply-To: <20180527013847.GA4688@sean-nixos> References: <20180527013847.GA4688@sean-nixos> Message-ID: <61999454-1a65-4b9e-a5da-329f180070ea@Spark> Huh, So this is a bug I thought I dealt with :/ In the short term, I've found that often simply trying the build again can fix it. This is a dependency issue that I don't 100% understand. For some more technical background, the "InfoTableProf" module is only built/needed when it is used with PROFILING. It uses CPP to "peek" into the StgInfoTable, which changes under profiling. My impression is that dependency resolution decides that module isn't necessary, so then it is missing when it goes to use it. Again, I am not sure here, that's just what seemed to be the issue. Best, Patrick Dougherty On May 26, 2018, 8:39 PM -0500, Sean D Gillespie , wrote: > Howdy, > > I am unable to build the latest revision of GHC on NixOS. I can build older revisions. > Here's my error: > > ===--- building final phase > make --no-print-directory -f ghc.mk phase=final all > "inplace/bin/ghc-stage1" -hisuf p_hi -osuf p_o -hcsuf p_hc -static -prof -eventlog -H32m -O -Wall -this-unit-id ghc-heap-8.5 -hide-all-packages -i -ilibraries/ghc-heap/. -ilibraries/gh > c-heap/dist-install/build -Ilibraries/ghc-heap/dist-install/build -ilibraries/ghc-heap/dist-install/build/./autogen -Ilibraries/ghc-heap/dist-install/build/./autogen -Ilibraries/ghc-heap/. 
> -optP-include -optPlibraries/ghc-heap/dist-install/build/./autogen/cabal_macros.h -package-id base-4.12.0.0 -package-id ghc-prim-0.5.3 -package-id rts -Wall -XHaskell2010 -O2 -no-user-packa > ge-db -rtsopts -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir libraries/ghc-heap/dist-install/build -hidir libraries/ghc-heap/dist-install/build -stubdir libraries/ghc-heap/ > dist-install/build -split-sections -c libraries/ghc-heap/./GHC/Exts/Heap/Closures.hs -o libraries/ghc-heap/dist-install/build/GHC/Exts/Heap/Closures.p_o -dyno libraries/ghc-heap/dist-instal > l/build/GHC/Exts/Heap/Closures.dyn_o > > libraries/ghc-heap/GHC/Exts/Heap/Closures.hs:23:1: error: > Could not find module `GHC.Exts.Heap.InfoTableProf' > It is a member of the hidden package `ghc-heap-8.5'. > You can run `:set -package ghc-heap' to expose it. > (Note: this unloads all the modules in the current scope.) > Use -v to see a list of the files searched for. > | > 23 | import GHC.Exts.Heap.InfoTableProf > | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > make[1]: *** [libraries/ghc-heap/ghc.mk:4: libraries/ghc-heap/dist-install/build/GHC/Exts/Heap/Closures.p_o] Error 1 > make: *** [Makefile:127: all] Error 2 > > For reference, here's my shell.nix: > > { nixpkgs ? import {}, compiler ? "ghcHEAD" }: > > let > inherit (nixpkgs) pkgs; > ghc = pkgs.haskell.packages.${compiler}.ghc; > in > with nixpkgs; lib.overrideDerivation ghc > (drv: { > name = "ghc-dev"; > nativeBuildInputs = drv.nativeBuildInputs ++ [ > arcanist > git > python36Packages.sphinx > texlive.combined.scheme-basic > ]; > }) > > Any help would be appreciated. > > Thanks > Sean G > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From sean at mistersg.net  Sun May 27 22:30:50 2018
From: sean at mistersg.net (Sean D Gillespie)
Date: Sun, 27 May 2018 18:30:50 -0400
Subject: Unable to build on NixOS
In-Reply-To: <61999454-1a65-4b9e-a5da-329f180070ea@Spark>
References: <20180527013847.GA4688@sean-nixos>
 <61999454-1a65-4b9e-a5da-329f180070ea@Spark>
Message-ID: <20180527223050.GA4261@sean-nixos>

> For some more technical background, the "InfoTableProf" module is only
> built/needed when it is used with PROFILING. It uses CPP to "peek" into
> the StgInfoTable, which changes under profiling.

I think my issue was that nixpkgs.ghcHEAD's configurePhase was
overwriting my build.mk. That might explain why profiling was enabled.

From maoe at foldr.in  Mon May 28 07:23:44 2018
From: maoe at foldr.in (Mitsutoshi Aoe)
Date: Mon, 28 May 2018 16:23:44 +0900
Subject: Steps to propose a new primop in GHC
Message-ID: 

Hi devs,

I'm thinking of adding a primop to GHC, but I'm not sure how I should
proceed. The primop I have in mind is something like:

    traceEventBinary# :: Addr# -> Int# -> State# s -> State# s

This function is similar to the existing traceEvent#, but it takes a
chunk of bytes rather than a null-terminated string. It is useful for
tracing custom user events (e.g. network packet arrival timestamps in a
network application) in eventlogs. At the library level, it is supposed
to be used like the tracing functions in Debug.Trace, but with a
ByteString argument:

    traceEventBinary :: ByteString -> a -> a
    traceEventBinaryIO :: ByteString -> IO ()

Note that these can't live in base because of the dependency on
bytestring.

So how should I proceed from here? Am I supposed to submit a GHC
proposal, or should I ask on the libraries list? This is not a
prominently visible change in GHC: it affects only ghc-prim and has no
effect on base.

Thanks,
Mitsutoshi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From maoe at foldr.in Mon May 28 07:25:51 2018 From: maoe at foldr.in (Mitsutoshi Aoe) Date: Mon, 28 May 2018 16:25:51 +0900 Subject: Steps to propose a new primop in GHC In-Reply-To: References: Message-ID: I forgot to mention that I have prototype implementation: * https://github.com/maoe/ghc/tree/traceEventBinary * https://github.com/maoe/ghc-trace-events/blob/feature/traceEventBinary/src/Debug/Trace/ByteString.hs#L46-L58 Some details still need to be sorted out though. Regards, Mitsutoshi 2018年5月28日(月) 16:23 Mitsutoshi Aoe : > Hi devs, > > I'm thinking to add a primop in GHC but not sure how I should proceed. The > primop I have in mind is something like: > > traceEventBinary# :: Addr# -> Int# -> State# s -> State# s > > This function is similar to the existing traceEvent# but it takes a chunk > of bytes rather than a null-terminated string. It is useful to trace custom > user events (e.g. network packet arrival timestamps in an network > application) in eventlogs. At library level, it is supposed to be used like > the tracing functions in Debug.Trace but with ByteString argument: > > traceEventBinary :: ByteString -> a -> a > traceEventBinaryIO :: ByteString -> IO () > > Note that this can't live in base because of the dependency on bytestring. > > So how should I proceed from here? Am I supposed to submit a GHC proposal > or should I ask on the libraries list? This is not a prominently visible > change in GHC. It rather affects only ghc-prim and no effects in base. > > Thanks, > Mitsutoshi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.terepeta at gmail.com Mon May 28 17:13:47 2018 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 28 May 2018 19:13:47 +0200 Subject: ZuriHac 2018 GHC DevOps track - Request for Contributions In-Reply-To: References: Message-ID: Hi Niklas, Sorry for slow reply - I'm totally snowed under at the moment. 
I should be able to give some overview/examples of what are primops and how they go through the compilation pipeline. And talk a bit about the Cmm-level parts of GHC. But I won't have much time to prepare, so there might be fair amount of improvisation... Are you coming to this week's HaskellerZ meetup? We could chat a bit more about this. Cheers! - Michal On Tue, May 22, 2018 at 12:07 PM Niklas Hambüchen wrote: > On 08/04/2018 15.01, Michal Terepeta wrote: > > I'd be happy to help. :) I know a bit about the backend (e.g., cmm > level), but it might be tricky to find there some smaller/self-contained > projects that would fit ZuriHac. > > Hey Michal, > > that's great. Is there a topic you would like to give a talk about, or a > pet peeve task that you'd like to tick off with the help of new potential > contributors in a hacking session? > > Other topics that might be nice and that you might know about are "How do > I add a new primop to GHC", handling all the way from the call on the > Haskell side to emitting the code, or (if I remember that correctly) > checking out that issue that GHC doesn't do certain optimisations yet (such > as emitting less-than-full-word instructions e.g. for adding two Word8s, or > lack of some strength reductions as in [1]). > > > You've mentioned performance regression tests - maybe we could also work > on improving nofib? > > For sure! > Shall we run a hacking session together where we let attendees work on > both performance regression tests and nofib? It seems these two fit well > together. > > Niklas > > [1]: > https://stackoverflow.com/questions/23315001/maximizing-haskell-loop-performance-with-ghc/23322255#23322255 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Tue May 29 03:05:24 2018 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Tue, 29 May 2018 11:05:24 +0800 Subject: Why do we prevent static archives from being loaded when DYNAMIC_GHC_PROGRAMS=YES? 
Message-ID: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de>

Dear friends,

when we build GHC with DYNAMIC_GHC_PROGRAMS=YES, we essentially prevent
ghc/ghci from using archives (.a). Is there a technical reason behind
this? The only reasoning I've come across so far was: insist on using
dynamic/shared objects, because the user said so when building GHC.

In that case, however, we don't prevent GHC from building archive-only
(static) libraries. And as a consequence, when we later try to build
another archive of a different library that depends via TH on the former
library, GHC will bail and complain that we don't have the relevant
dynamic/shared object. Of course we don't: we explicitly didn't build
it. But the linker code we have in GHC is perfectly capable of loading
archives. So why don't we want to fall back to archives?

Similarly, as @deech asked on twitter[1], why do we prevent GHCi from
loading static libraries?

I'd like to understand the technical reason/rationale for this behavior.
Can someone help me out here? If there is no fundamental reason for this
behavior, I'd like to go ahead and try to lift it.

Thank you!

Cheers,
Moritz

---
[1]: https://twitter.com/deech/status/1001182709555908608

From ggreif at gmail.com  Tue May 29 10:12:53 2018
From: ggreif at gmail.com (Gabor Greif)
Date: Tue, 29 May 2018 12:12:53 +0200
Subject: -ddump-simpl-phases
Message-ID: 

Hi all,

the manual mentions `-ddump-simpl-phases` but there is no such flag [1].
How should we fix this?

Cheers,

    Gabor

[1]
$ git grep ddump-simpl-phases
docs/users_guide/debugging.rst:    outputs even more information than ``-ddump-simpl-phases``.

From ggreif at gmail.com  Tue May 29 13:16:55 2018
From: ggreif at gmail.com (Gabor Greif)
Date: Tue, 29 May 2018 15:16:55 +0200
Subject: LclId -> GblId question
Message-ID: 

Hi devs,

I have a simple question, but could not find an answer yet.

The same variable (I checked!) appears in two dumps with different names
and different external visibilities.
Which pass transforms this variable to a global id, and why? Shouldn't a LclId remain local along the entire optimisation chain? Any hint appreciated! Cheers and thanks, Gabor Snippets from dumps below ############################################################## -rw-r--r-- 1 ggreif sw12 3281225 May 29 14:14 TcSMonad.dump-stranal -- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0} lvl_sOra :: TcTyVarDetails [LclId, Unf=Unf{Src=, TopLvl=True, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_sOra = ghc-prim-0.5.3:GHC.Magic.noinline @ TcTyVarDetails vanillaSkolemTv ############################################################## -rw-r--r-- 1 ggreif sw12 1438015 May 29 14:14 TcSMonad.dump-simpl -- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0} TcSMonad.isFilledMetaTyVar_maybe2 :: TcTyVarDetails [GblId, Unf=Unf{Src=, TopLvl=True, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] TcSMonad.isFilledMetaTyVar_maybe2 = ghc-prim-0.5.3:GHC.Magic.noinline @ TcTyVarDetails vanillaSkolemTv ############################################################## From ben at well-typed.com Tue May 29 20:07:23 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 29 May 2018 16:07:23 -0400 Subject: [ANNOUNCE] GHC 8.4.3 released Message-ID: <87lgc2kyyx.fsf@smart-cactus.org> Hello everyone, The GHC team is pleased to announce the availability of GHC 8.4.3. The source distribution, binary distributions, and documentation for this release are available at https://downloads.haskell.org/~ghc/8.4.3 This release includes a few bug fixes including: * A code generation bug resulting in crashing of some programs using UnboxedSums has been fixed (#15038). * #14381, where Cabal and GHC would disagree about abi-depends, resulting in build failures, has been worked around. 
Note that the work-around patch has already been shipped by several distributions in previous releases, so this change may not be visible to you. * By popular demand, GHC now logs a message when it reads a package environment file, hopefully eliminating some of the confusion wrought by this feature. * GHC now emits assembler agreeable to newer versions of GNU binutils, fixing #15068. * SmallArray#s can now be compacted into a compact region. Thanks to everyone who has contributed to developing, documenting, and testing this release! As always, let us know if you encounter trouble. How to get it ~~~~~~~~~~~~~ The easy way is to go to the web page, which should be self-explanatory: http://www.haskell.org/ghc/ We supply binary builds in the native package format for many platforms, and the source distribution is available from the same place. Packages will appear as they are built - if the package for your system isn't available yet, please try again later. Background ~~~~~~~~~~ Haskell is a standardized lazy functional programming language. GHC is a state-of-the-art programming suite for Haskell. Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license. A wide variety of Haskell-related resources (tutorials, libraries, specifications, documentation, compilers, interpreters, references, contact information, links to research groups) are available from the Haskell home page (see below).
On-line GHC-related resources ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Relevant URLs: GHC home page https://www.haskell.org/ghc/ GHC developers' home page https://ghc.haskell.org/trac/ghc/ Haskell home page https://www.haskell.org/ Supported Platforms ~~~~~~~~~~~~~~~~~~~ The list of platforms we support, and the people responsible for them, is here: https://ghc.haskell.org/trac/ghc/wiki/TeamGHC Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform: https://ghc.haskell.org/trac/ghc/wiki/Building Developers ~~~~~~~~~~ We welcome new contributors. Instructions on accessing our source code repository, and getting started with hacking on GHC, are available from the GHC's developer's site: https://ghc.haskell.org/trac/ghc/ Mailing lists ~~~~~~~~~~~~~ We run mailing lists for GHC users and bug reports; to subscribe, use the web interfaces at https://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-tickets There are several other haskell and ghc-related mailing lists on www.haskell.org; for the full list, see https://mail.haskell.org/cgi-bin/mailman/listinfo Many GHC developers hang out on #haskell on IRC: https://www.haskell.org/haskellwiki/IRC_channel Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here: https://www.haskell.org/ghc/reportabug -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue May 29 20:33:57 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 29 May 2018 16:33:57 -0400 Subject: Steps to propose a new primop in GHC In-Reply-To: References: Message-ID: <87in76kxqn.fsf@smart-cactus.org> Mitsutoshi Aoe writes: > Hi devs, > > I'm thinking to add a primop in GHC but not sure how I should proceed. 
The > primop I have in mind is something like: > > traceEventBinary# :: Addr# -> Int# -> State# s -> State# s > > This function is similar to the existing traceEvent# but it takes a chunk > of bytes rather than a null-terminated string. It is useful to trace custom > user events (e.g. network packet arrival timestamps in a network > application) in eventlogs. At library level, it is supposed to be used like > the tracing functions in Debug.Trace but with a ByteString argument: > > traceEventBinary :: ByteString -> a -> a > traceEventBinaryIO :: ByteString -> IO () > > Note that this can't live in base because of the dependency on bytestring. > > So how should I proceed from here? Am I supposed to submit a GHC proposal > or should I ask on the libraries list? This is not a prominently visible > change in GHC. It rather affects only ghc-prim and has no effect on base. > Hmm, that is a good question. I have also needed something like your traceEventBinary# in the past and I think adding the primop is rather non-controversial. As far as adding a wrapper in `base`, I think we can just go ahead and do it. The `Debug.Trace` module isn't defined by the Haskell Report so I don't think there's a need to involve the CLC here. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue May 29 21:26:48 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 29 May 2018 17:26:48 -0400 Subject: -ddump-simpl-phases In-Reply-To: References: Message-ID: <87fu2akvak.fsf@smart-cactus.org> Gabor Greif writes: > Hi all, > > the manual mentions `-ddump-simpl-phases` but there is no such flag > [1]. How should we fix this? > How does D4750 look to you? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From maoe at foldr.in Tue May 29 22:55:34 2018 From: maoe at foldr.in (Mitsutoshi Aoe) Date: Wed, 30 May 2018 07:55:34 +0900 Subject: Steps to propose a new primop in GHC In-Reply-To: <87in76kxqn.fsf@smart-cactus.org> References: <87in76kxqn.fsf@smart-cactus.org> Message-ID: <862bd170-6443-438c-b0d7-6d8f09f8f75b@Spark> Hi Ben, Thanks for your reply. I take it that at least for the GHC part I can submit the diff to phab and ask for review. I’ll do it. > As far as adding a wrapper in `base`, I think we can just go ahead and do it. Note that the wrapper cannot live in base due to the dependency on bytestring. I’m thinking to put it in my ghc-trace-events for now. Regards, Mitsutoshi On May 30, 2018, 5:33 +0900, Ben Gamari wrote: > Mitsutoshi Aoe writes: > > > Hi devs, > > > > I'm thinking to add a primop in GHC but not sure how I should proceed. The > > primop I have in mind is something like: > > > > traceEventBinary# :: Addr# -> Int# -> State# s -> State# s > > > > This function is similar to the existing traceEvent# but it takes a chunk > > of bytes rather than a null-terminated string. It is useful to trace custom > > user events (e.g. network packet arrival timestamps in a network > > application) in eventlogs. At library level, it is supposed to be used like > > the tracing functions in Debug.Trace but with a ByteString argument: > > > > traceEventBinary :: ByteString -> a -> a > > traceEventBinaryIO :: ByteString -> IO () > > > > Note that this can't live in base because of the dependency on bytestring. > > > > So how should I proceed from here? Am I supposed to submit a GHC proposal > > or should I ask on the libraries list? This is not a prominently visible > > change in GHC. It rather affects only ghc-prim and has no effect on base. > > > Hmm, that is a good question.
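To make the ByteString-level wrapper under discussion concrete, here is a minimal sketch of the glue it would need. This is illustrative only: traceEventBinary# did not exist at this point, so a dummy consumer (emitUserEvent, a name invented here) with the analogous pointer/length interface stands in for the eventual primop call, and it returns the length purely for demonstration; unsafeUseAsCStringLen comes from the bytestring package.

```haskell
module Main (main) where

import qualified Data.ByteString.Char8 as BC
import Data.ByteString.Unsafe (unsafeUseAsCStringLen)
import Foreign.C.String (CString)

-- Stand-in for the proposed primop: a real wrapper would pass the
-- underlying Addr# and Int# to traceEventBinary#. Since that primop
-- does not exist here, this dummy just hands back the length it saw.
emitUserEvent :: CString -> Int -> IO Int
emitUserEvent _addr len = pure len

-- The ByteString-level wrapper: unsafeUseAsCStringLen exposes the
-- payload's pointer and length without copying the bytes.
traceEventBinaryIO :: BC.ByteString -> IO Int
traceEventBinaryIO bs =
  unsafeUseAsCStringLen bs $ \(addr, len) -> emitUserEvent addr len

main :: IO ()
main = do
  n <- traceEventBinaryIO (BC.pack "packet-at-1527638400")
  putStrLn ("emitted " ++ show n ++ " bytes")  -- emitted 20 bytes
```

The zero-copy unsafeUseAsCStringLen step is the part that forces the bytestring dependency, which is why the wrapper cannot live in base.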
I have also needed something like your > traceEventBinary# in the past and I think adding the primop is rather > non-controversial. > > As far as adding a wrapper in `base`, I think we can just go ahead and > do it. The `Debug.Trace` module isn't defined by the Haskell Report so I > don't think there's a need to involve the CLC here. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed May 30 01:36:50 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 29 May 2018 21:36:50 -0400 Subject: Steps to propose a new primop in GHC In-Reply-To: <862bd170-6443-438c-b0d7-6d8f09f8f75b@Spark> References: <87in76kxqn.fsf@smart-cactus.org> <862bd170-6443-438c-b0d7-6d8f09f8f75b@Spark> Message-ID: <87a7shlyab.fsf@smart-cactus.org> Mitsutoshi Aoe writes: > Hi Ben, > > Thanks for your reply. I take it that at least for the GHC part I can > submit the diff to phab and ask for review. I’ll do it. > Absolutely. I'm looking forward to seeing it. >> As far as adding a wrapper in `base`, I think we can just go ahead and > do it. > > Note that the wrapper cannot live in base due to the dependency on > bytestring. I’m thinking to put it in my ghc-trace-events for now. > Yes, of course. Silly me. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 483 bytes Desc: not available URL: From qdunkan at gmail.com Wed May 30 04:41:40 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Tue, 29 May 2018 21:41:40 -0700 Subject: ghci recompilation avoidance Message-ID: After switching to git, I discovered that ghci is interpreting a lot of modules when it should have loaded the .o files (i.e. I get 'SomeModule (interpreted)' instead of 'Skipped SomeModule'). It turns out that git checkouts update the modtime on checked-out files, even when they get reverted back to their original contents.
Shake has an option ChangeModtimeAndDigestInput to check file contents in addition to modtime to not get fooled by this, but ghc is still doing a plain modtime check. So shake correctly skips the rebuild on those modules, but ghci recompiles them anyway. This means I'm better off just disabling shake's digest check, since otherwise I can just never recompile that stuff at all. Would it be reasonable to do the same kind of check as shake in ghc? Namely, shake does a quick check if modtime has changed, but even if it has, it checks the file contents digest to make sure. My understanding is that ghc does the quick modtime check, and then does an expensive interface check. This would augment that to become a quick modtime check, then a quick-ish digest check, and then the expensive interface check. I guess the old input file digest will have to be stored somewhere, presumably in the .hi file, so it's not a totally trivial change. The benefit should be that anyone working with git should be able to reuse more .o files after branch checkouts. The two relevant places seem to be GHC.loadModule and DriverPipeline.runPhase. I'm willing to have a go at this if people think it's a good idea and I can get some pointers on the .hi plumbing if I get hung up there. From ky3 at atamo.com Wed May 30 06:46:54 2018 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Wed, 30 May 2018 13:46:54 +0700 Subject: ghci recompilation avoidance In-Reply-To: References: Message-ID: > It turns out that git checkouts update the modtime on checked-out files, even when they get reverted back to their original contents. Looks to me the problem's right here. Namely, git checkout. If the contents didn't change, the modtime shouldn't either. What's the reason behind changing it? Have you brought this up to the git maintainers? Compensating for flaws in co-tools costs code and complexity in ghc we would rather do without. 
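As a sketch of the check Evan is proposing (cheap modtime comparison first, and a content digest only as a tie-breaker when the modtime has moved), something along these lines could work. This is purely illustrative: the helper names are invented, a simple FNV-1a hash stands in for the MD5-based Fingerprint machinery GHC actually uses, and a real patch would persist the stored digest in the .hi file rather than pass it in.

```haskell
module Main (main) where

import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BC
import Data.Bits (xor)
import Data.Word (Word64)
import System.Directory (getModificationTime)
import Data.Time.Clock (UTCTime(..))
import Data.Time.Calendar (fromGregorian)

-- FNV-1a over the file contents; a stand-in for a real fingerprint.
digest :: BS.ByteString -> Word64
digest = BS.foldl' step 0xcbf29ce484222325
  where step h b = (h `xor` fromIntegral b) * 0x100000001b3

data Recomp = UpToDate | NeedsRecompile deriving (Eq, Show)

-- Quick modtime comparison first; only when the modtime moved do we
-- pay for reading and hashing the file before declaring it out of date.
checkSource :: FilePath -> UTCTime -> Word64 -> IO Recomp
checkSource src oldTime oldDigest = do
  newTime <- getModificationTime src
  if newTime == oldTime
    then pure UpToDate
    else do
      newDigest <- digest <$> BS.readFile src
      pure (if newDigest == oldDigest then UpToDate else NeedsRecompile)

main :: IO ()
main = do
  let src  = "Demo-src.txt"                -- hypothetical module source
      body = BC.pack "module Demo where"
  BS.writeFile src body
  -- A deliberately stale recorded modtime forces the digest path:
  let stale = UTCTime (fromGregorian 1970 1 1) 0
  same    <- checkSource src stale (digest body)
  changed <- checkSource src stale (digest body + 1)
  print (same, changed)  -- (UpToDate,NeedsRecompile)
```

The point of the sketch is the short-circuit order: a touched-but-identical file (the git checkout case) costs one hash, while an untouched file costs only a stat.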
In the meantime, it shouldn't be hard to kludge up some shell scripts that run before and after git checkout to reset the modtime back to what it should be. On Wednesday, May 30, 2018, Evan Laforge wrote: > After switching to git, I discovered that ghci is interpreting a lot of > modules > when it should have loaded the .o files (i.e. I get 'SomeModule > (interpreted)' > instead of 'Skipped SomeModule'). > > It turns out that git checkouts update the modtime on checked-out files, > even > when they get reverted back to their original contents. Shake has an > option > ChangeModtimeAndDigestInput to check file contents in addition to modtime > to not > get fooled by this, but ghc is still doing a plain modtime check. So shake > correctly skips the rebuild on those modules, but ghci recompiles them > anyway. > This means I'm better off just disabling shake's digest check, since > otherwise > I can just never recompile that stuff at all. > > Would it be reasonable to do the same kind of check as shake in ghc? > Namely, > shake does a quick check if modtime has changed, but even if it has, it > checks > the file contents digest to make sure. My understanding is that ghc does > the > quick modtime check, and then does an expensive interface check. This > would > augment that to become a quick modtime check, then a quick-ish digest > check, > and then the expensive interface check. > > I guess the old input file digest will have to be stored somewhere, > presumably > in the .hi file, so it's not a totally trivial change. The benefit should > be > that anyone working with git should be able to reuse more .o files after > branch checkouts. > > The two relevant places seem to be GHC.loadModule and > DriverPipeline.runPhase. > I'm willing to have a go at this if people think it's a good idea and I can > get some pointers on the .hi plumbing if I get hung up there. 
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From qdunkan at gmail.com Wed May 30 07:37:29 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Wed, 30 May 2018 00:37:29 -0700 Subject: ghci recompilation avoidance In-Reply-To: References: Message-ID: On Tue, May 29, 2018 at 11:46 PM, Kim-Ee Yeoh wrote: >> It turns out that git checkouts update the modtime on checked-out files, >> even > when they get reverted back to their original contents. > > Looks to me the problem's right here. Namely, git checkout. > > If the contents didn't change, the modtime shouldn't either. What's the > reason behind changing it? The contents do change, but then they change back again. Say you visit another branch then come back. > Have you brought this up to the git maintainers? Compensating for flaws in > co-tools costs code and complexity in ghc we would rather do without. Here's an explanation with some links: https://confluence.atlassian.com/bbkb/preserving-file-timestamps-with-git-and-mercurial-781386524.html > In the meantime, it shouldn't be hard to kludge up some shell scripts that > run before and after git checkout to reset the modtime back to what it > should be. That sounds like code and complexity too! Only it would be repeated in every repo. And it sounds pretty hard too. I'd have to keep and update a Map (Branch, FilePath) ModTime, and hook every checkout to record and restore every modified file. At that point I've more or less written my own checkout command. From juhpetersen at gmail.com Wed May 30 08:06:03 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Wed, 30 May 2018 17:06:03 +0900 Subject: build testsuite or not? Message-ID: Okay I have one more question about my packaging of ghc for Fedora. 
For a long time I have always built the testsuite for every release perf build on all archs (for an example see the build.log links on https://koji.fedoraproject.org/koji/buildinfo?buildID=1086491). Is this a useful, meaningful thing to do? I thought it was good to have it as a reference for ghc builds on Fedora and EPEL, but it does add a considerable amount of time to builds (especially for the slower ARM archs) so it is not without cost. So I am wondering how useful it is to continue running the testsuite for each "production" build I do. What do others and other distros, etc. do for final releases? Thanks, Jens From ggreif at gmail.com Wed May 30 08:08:10 2018 From: ggreif at gmail.com (Gabor Greif) Date: Wed, 30 May 2018 10:08:10 +0200 Subject: LclId -> GblId question In-Reply-To: References: Message-ID: Never mind, I figured it out. It is the CoreTidy pass of the compilation pipeline: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/HscMain Cheers, Gabor On 5/29/18, Gabor Greif wrote: > Hi devs, > > I have a simple question, but could not find an answer yet. The same > variable (I checked!) appears in two dumps with different names and > different external visibilities. > Which pass transforms this variable to a global id, and why? Shouldn't > a LclId remain local along the entire optimisation chain? > > Any hint appreciated!
> > Cheers and thanks, > > Gabor > > Snippets from dumps below > > ############################################################## > > -rw-r--r-- 1 ggreif sw12 3281225 May 29 14:14 TcSMonad.dump-stranal > > > -- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0} > lvl_sOra :: TcTyVarDetails > [LclId, > Unf=Unf{Src=, TopLvl=True, Value=False, ConLike=False, > WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] > lvl_sOra > = ghc-prim-0.5.3:GHC.Magic.noinline > @ TcTyVarDetails vanillaSkolemTv > > ############################################################## > > -rw-r--r-- 1 ggreif sw12 1438015 May 29 14:14 TcSMonad.dump-simpl > > > -- RHS size: {terms: 2, types: 1, coercions: 0, joins: 0/0} > TcSMonad.isFilledMetaTyVar_maybe2 :: TcTyVarDetails > [GblId, > Unf=Unf{Src=, TopLvl=True, Value=False, ConLike=False, > WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] > TcSMonad.isFilledMetaTyVar_maybe2 > = ghc-prim-0.5.3:GHC.Magic.noinline > @ TcTyVarDetails vanillaSkolemTv > > ############################################################## > From ben at well-typed.com Wed May 30 15:32:28 2018 From: ben at well-typed.com (Ben Gamari) Date: Wed, 30 May 2018 11:32:28 -0400 Subject: build testsuite or not? In-Reply-To: References: Message-ID: <874lipkvlk.fsf@smart-cactus.org> Jens Petersen writes: > Okay I have one more question about my packaging of ghc for Fedora. > > For long I always build the testsuite for every release perf build on all archs > (for an example see the build.log links on > https://koji.fedoraproject.org/koji/buildinfo?buildID=1086491). > > Is this a useful meaningful thing to do? > > I thought it good to have it as a reference for ghc builds on Fedora > and EPEL, but it does add a considerable amount of time to builds > (specially for the slower ARM arch's) so it is not without cost. > At the moment I would say that our testsuite is unreliable enough in non-validate configurations that this has relatively little value.
This is something we are working on fixing and hopefully things will be more reliable in the future. I generally validate the tree prior to cutting a release. This of course won't catch environment- and distribution-specific issues, but I think it's a pretty good proxy for correctness. In other words, unless you find yourself looking at the testsuite output yourself, I think it would be fine to disable it. > So I am wondering how useful it is to continue running the testsuite > for each "production" build I do. > What do others and other distros, etc do for final releases? > As far as I know neither Debian nor NixOS run GHC's testsuite. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Wed May 30 15:45:26 2018 From: ben at well-typed.com (Ben Gamari) Date: Wed, 30 May 2018 11:45:26 -0400 Subject: Unable to build on NixOS In-Reply-To: <61999454-1a65-4b9e-a5da-329f180070ea@Spark> References: <20180527013847.GA4688@sean-nixos> <61999454-1a65-4b9e-a5da-329f180070ea@Spark> Message-ID: <87y3g1jgfg.fsf@smart-cactus.org> Patrick Dougherty writes: > Huh, > > So this is a bug I thought I dealt with :/ > In the short term, I've found that often simply trying the build again > can fix it. This is a dependency issue that I don't 100% understand. > I also encountered this (although it's rather unlikely to occur with high build parallelism, which is how it snuck through validation). Anyways, I filed #15197 and pushed D4753 as a possible workaround. I believe the root cause is a bug in `ghc -M`'s treatment of ways. Cheers, - Ben [1] https://phabricator.haskell.org/D4753 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Wed May 30 20:43:34 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 30 May 2018 16:43:34 -0400 Subject: Loading GHC into GHCi (and ghcid) Message-ID: Hi all, Csongor has informed me that he has worked out how to load GHC into GHCi which can then be used with ghcid for a more interactive development experience. 1. Put this .ghci file in compiler/ https://gist.github.com/mpickering/73749e7783f40cc762fec171b879704c 2. Run "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" from inside compiler/ It may take a while and require a little bit of memory but in the end all 500 or so modules will be loaded. It can also be used with ghcid. 
ghcid -c "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" Hopefully someone who has more RAM than I do can give it a try. Can anyone suggest a suitable place on the wiki for this information? Cheers, Matt From juhpetersen at gmail.com Thu May 31 04:31:20 2018 From: juhpetersen at gmail.com (Jens Petersen) Date: Thu, 31 May 2018 13:31:20 +0900 Subject: build testsuite or not? In-Reply-To: <874lipkvlk.fsf@smart-cactus.org> References: <874lipkvlk.fsf@smart-cactus.org> Message-ID: Thanks, Ben, for your helpful reply. Okay, then I think I will disable the testsuite for most Fedora builds. I never really look at them these days any more, to be honest. Cheers, Jens On 31 May 2018 at 00:32, Ben Gamari wrote: > Jens Petersen writes: > >> Okay I have one more question about my packaging of ghc for Fedora. >> >> For long I always build the testsuite for every release perf build on all archs >> (for an example see the build.log links on >> https://koji.fedoraproject.org/koji/buildinfo?buildID=1086491). >> >> Is this a useful meaningful thing to do? >> >> I thought it good to have it as a reference for ghc builds on Fedora >> and EPEL, but it does add a considerable amount of time to builds >> (specially for the slower ARM arch's) so it is not without cost. >> > At the moment I would say that our testsuite is unreliable enough in > non-validate configurations that this has relatively little value. This > is something we are working on fixing and hopefully things will be more > reliable in the future. > > I generally validate the tree prior to cutting a release. This of course > won't catch environment- and distribution-specific issues, but I think > it's a pretty good proxy for correctness. > > In other words, unless you find yourself looking at the testsuite > output yourself, I think it would be fine to disable it. > >> So I am wondering how useful it is to continue running the testsuite
>> What do others and other distros, etc do for final releases? >> > As far as I know neither Debian nor NixOS run GHC's testsuite. > > Cheers, > > - Ben > From ryan.gl.scott at gmail.com Thu May 31 10:48:25 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 31 May 2018 06:48:25 -0400 Subject: -fghci-leak-check apparently causes many tests to fail Message-ID: I recently ran the testsuite and experienced a very large number of testsuite failures, all of which seem to involve the new -fghci-leak-check flag. Here is the list of tests that fail: Unexpected failures: ghci/prog001/prog001.run prog001 [bad stdout] (ghci) ghci/prog002/prog002.run prog002 [bad stdout] (ghci) ghci/prog003/prog003.run prog003 [bad stdout] (ghci) ghci/prog010/ghci.prog010.run ghci.prog010 [bad stdout] (ghci) ghci/prog013/prog013.run prog013 [bad stdout] (ghci) ghci/prog012/prog012.run prog012 [bad stdout] (ghci) ghci/prog009/ghci.prog009.run ghci.prog009 [bad stdout] (ghci) ghci/scripts/ghci025.run ghci025 [bad stdout] (ghci) ghci/scripts/ghci038.run ghci038 [bad stdout] (ghci) ghci/scripts/ghci057.run ghci057 [bad stdout] (ghci) ghci/scripts/T2182ghci.run T2182ghci [bad stdout] (ghci) ghci/scripts/ghci058.run ghci058 [bad stdout] (ghci) ghci/scripts/T6106.run T6106 [bad stdout] (ghci) ghci/scripts/T8353.run T8353 [bad stdout] (ghci) ghci/scripts/T9293.run T9293 [bad stdout] (ghci) ghci/scripts/T10989.run T10989 [bad stdout] (ghci) ghci/should_run/T13825-ghci.run T13825-ghci [bad stdout] (ghci) ghci.debugger/scripts/print007.run print007 [bad stdout] (ghci) ghci.debugger/scripts/break009.run break009 [bad stdout] (ghci) ghci.debugger/scripts/break008.run break008 [bad stdout] (ghci) ghci.debugger/scripts/break026.run break026 [bad stdout] (ghci) perf/space_leaks/T4029.run T4029 [bad stdout] (ghci) And the full failing test output can be found here [1]. (I won't post it inline, since it's quite large). Are these changes expected? 
I'm not at all familiar with -fghci-leak-check, so I don't know if we should accept the new output or not. Ryan S. ----- [1] https://gist.githubusercontent.com/RyanGlScott/f920737287049b82947e1c47cdbc2b94/raw/4fe68d47cc78675424e09cf451be556c6f430d08/gistfile1.txt -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu May 31 19:13:30 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 31 May 2018 19:13:30 +0000 Subject: Trac email Message-ID: Devs Trac has entirely stopped sending me email, so I have no idea what's happening on the GHC front. Could someone unglue it? Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskelier.artem at gmail.com Thu May 31 19:25:37 2018 From: haskelier.artem at gmail.com (Artem Pelenitsyn) Date: Thu, 31 May 2018 21:25:37 +0200 Subject: Trac email In-Reply-To: References: Message-ID: Hello, It stopped working for me from like Sunday and wasn't working for about two days. Then, on Tuesday, it silently started working again. Although, I haven't checked that I get all the emails. -- Best, Artem On Thu, 31 May 2018, 21:13 Simon Peyton Jones via ghc-devs, < ghc-devs at haskell.org> wrote: > Devs > > Trac has entirely stopped sending me email, so I have no idea what’s > happening on the GHC front. Could someone unglue it? > > Thanks! > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu May 31 21:43:06 2018 From: lonetiger at gmail.com (Phyx) Date: Thu, 31 May 2018 17:43:06 -0400 Subject: Trac email In-Reply-To: References: Message-ID: The mail servers were backed up with spam apparently. 
No emails were delivered or received to any of the mailing lists https://www.reddit.com/r/haskell/comments/8mza0y/whats_up_with_mailhaskellorg_dark_since_sunday/ Cheers, Tamar On Thu, May 31, 2018, 15:26 Artem Pelenitsyn wrote: > Hello, > > It stopped working for me from like Sunday and wasn't working for about > two days. Then, on Tuesday, it silently started working again. Although, I > haven't checked that I get all the emails. > > -- > Best, Artem > > > On Thu, 31 May 2018, 21:13 Simon Peyton Jones via ghc-devs, < > ghc-devs at haskell.org> wrote: > >> Devs >> >> Trac has entirely stopped sending me email, so I have no idea what’s >> happening on the GHC front. Could someone unglue it? >> >> Thanks! >> >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu May 31 21:53:46 2018 From: lonetiger at gmail.com (Phyx) Date: Thu, 31 May 2018 17:53:46 -0400 Subject: -fghci-leak-check apparently causes many tests to fail In-Reply-To: References: Message-ID: I don't know what -fghci-leak-check does at all, but if they are to be expected we shouldn't accept the changes. Instead change the default options in the testsuite to pass -fno-ghci-leak-check (I assume that exists) On Thu, May 31, 2018, 06:49 Ryan Scott wrote: > I recently ran the testsuite and experienced a very large number of > testsuite failures, all of which seem to involve the new -fghci-leak-check > flag. 
Here is the list of tests that fail: > > Unexpected failures: > ghci/prog001/prog001.run prog001 [bad stdout] (ghci) > ghci/prog002/prog002.run prog002 [bad stdout] (ghci) > ghci/prog003/prog003.run prog003 [bad stdout] (ghci) > ghci/prog010/ghci.prog010.run ghci.prog010 [bad stdout] (ghci) > ghci/prog013/prog013.run prog013 [bad stdout] (ghci) > ghci/prog012/prog012.run prog012 [bad stdout] (ghci) > ghci/prog009/ghci.prog009.run ghci.prog009 [bad stdout] (ghci) > ghci/scripts/ghci025.run ghci025 [bad stdout] (ghci) > ghci/scripts/ghci038.run ghci038 [bad stdout] (ghci) > ghci/scripts/ghci057.run ghci057 [bad stdout] (ghci) > ghci/scripts/T2182ghci.run T2182ghci [bad stdout] (ghci) > ghci/scripts/ghci058.run ghci058 [bad stdout] (ghci) > ghci/scripts/T6106.run T6106 [bad stdout] (ghci) > ghci/scripts/T8353.run T8353 [bad stdout] (ghci) > ghci/scripts/T9293.run T9293 [bad stdout] (ghci) > ghci/scripts/T10989.run T10989 [bad stdout] (ghci) > ghci/should_run/T13825-ghci.run T13825-ghci [bad stdout] (ghci) > ghci.debugger/scripts/print007.run print007 [bad stdout] (ghci) > ghci.debugger/scripts/break009.run break009 [bad stdout] (ghci) > ghci.debugger/scripts/break008.run break008 [bad stdout] (ghci) > ghci.debugger/scripts/break026.run break026 [bad stdout] (ghci) > perf/space_leaks/T4029.run T4029 [bad stdout] (ghci) > > And the full failing test output can be found here [1]. (I won't post it > inline, since it's quite large). > > Are these changes expected? I'm not at all familiar with > -fghci-leak-check, so I don't know if we should accept the new output or > not. > > Ryan S. > ----- > [1] > https://gist.githubusercontent.com/RyanGlScott/f920737287049b82947e1c47cdbc2b94/raw/4fe68d47cc78675424e09cf451be556c6f430d08/gistfile1.txt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: