From carter.schonwald at gmail.com Mon Mar 2 05:27:26 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 2 Mar 2020 00:27:26 -0500 Subject: Blocking MVar# primops not performing stack checks? In-Reply-To: References: Message-ID: The simplest way to answer this is if you can help us construct a program, whether as Haskell or cmm, which tickles the failure you suspect is there ? The rts definitely gets less love overall. And there’s fewer folks involved in those layers overall. On Wed, Feb 26, 2020 at 10:03 AM Shao, Cheng wrote: > Hi all, > > When an MVar# primop blocks, it jumps to a function in > HeapStackCheck.cmm which pushes a RET_SMALL stack frame before > returning to the scheduler (e.g. the takeMVar# primop jumps to > stg_block_takemvar for stack adjustment). But these functions directly > bump Sp without checking for possible stack overflow, I wonder if it > is a bug? > > Cheers, > Cheng > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Mon Mar 2 20:20:12 2020 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Mon, 2 Mar 2020 12:20:12 -0800 Subject: Typechecker plugin proposal, ticket #15147 Message-ID: If you know of any typechecker plugin authors I've missed, please add them to the thread. In the comments of ticket https://gitlab.haskell.org/ghc/ghc/issues/15147 several of us agreed that the behavior of the typechecker plugin interface should change: GHC should no longer unflatten the fmvs in the Wanteds before passing them to the plugin. This is presumably a breaking change to plugins. We might be able to get by with some sort of flag indicating if the plugin expects the Wanteds flattened or not, but ideally GHC would just always pass them flattened. 
I'm unaware of any established policy about interface changes at this level, whether we've somehow committed to backwards-compatibility here or not. Anyone know? Plugin authors: would you look over the ticket comments and share your thoughts here? We're looking to build some sort of consensus about how to proceed without shocking the API users. Thank you for your time. -Nick From omeragacan at gmail.com Tue Mar 3 15:10:40 2020 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Tue, 3 Mar 2020 18:10:40 +0300 Subject: Reason for skipping sanity checking threads and mut_lists before a GC? Message-ID: Hi, With `+RTS -DS` we call this function before and after a GC:

    void checkSanity (bool after_gc, bool major_gc)
    {
        checkFullHeap(after_gc && major_gc);
        checkFreeListSanity();

        // always check the stacks in threaded mode, because checkHeap()
        // does nothing in this case.
        if (after_gc) {
            checkMutableLists();
            checkGlobalTSOList(true);
        }
    }

For some reason this skips mut lists and threads before a GC, and I don't understand why that is necessary. Does anyone know the reason? Thanks, Ömer From lexi.lambda at gmail.com Tue Mar 3 18:48:47 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Tue, 3 Mar 2020 12:48:47 -0600 Subject: Feasibility of native RTS support for continuations? In-Reply-To: <64005FF6-1AE4-4423-BE66-C10AE54E684B@gmail.com> References: <72859DAE-06D5-473F-BA92-AD6A40543C97@gmail.com> <3BDD2CB6-5170-4C37-AB0E-24E7317AE1C7@gmail.com> <64005FF6-1AE4-4423-BE66-C10AE54E684B@gmail.com> Message-ID: As a small update on this for anyone following along, I submitted a GHC proposal about a week ago to add the discussed primops (albeit with some tweaked names). For those who haven’t seen it already, the pull request is here: https://github.com/ghc-proposals/ghc-proposals/pull/313 So far, the reception has been quite positive, so I’m optimistic about getting these added. 
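For readers unfamiliar with the feature in the proposal: the primops give native, first-class delimited continuations. Their behaviour can be previewed today (much more slowly) with the classic shift/reset operators in Control.Monad.Trans.Cont from the transformers package. This is only an illustrative sketch of the semantics, not the proposal's API.

```haskell
import Control.Monad.Trans.Cont (evalCont, reset, shift)

-- `reset` installs a delimiter; `shift` captures the continuation up to
-- the nearest enclosing delimiter as an ordinary function `k`.
example :: Int
example = evalCont . reset $ do
  x <- shift $ \k -> pure (k 10 + k 20)  -- run the captured continuation twice
  pure (x + 1)
-- (10 + 1) + (20 + 1) = 32
```

The captured continuation `k` here is `(+ 1)` up to the delimiter; invoking it twice is exactly the kind of thing that requires copying stack segments when done natively in the RTS.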
Of course, if anyone has any concerns, please voice them in the PR thread! Thanks, Alexis From chessai1996 at gmail.com Wed Mar 4 06:32:05 2020 From: chessai1996 at gmail.com (chessai .) Date: Tue, 3 Mar 2020 22:32:05 -0800 Subject: Number of threads in haskell program Message-ID: Hi devs, Recently I became interested in obtaining the rough number of (non-GC) threads that are alive in a Haskell program. Intuitively this seemed like something that the RTS would expose in some way - but I couldn't find any such exposition in base. I then thought that I could simply total the size of each run_queue_hd in each capability, but the machinery for doing that doesn't seem to be exposed. Any thoughts? Thanks From klebinger.andreas at gmx.at Wed Mar 4 12:47:26 2020 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 4 Mar 2020 13:47:26 +0100 Subject: Blocking MVar# primops not performing stack checks? In-Reply-To: References: Message-ID: I just took a look at the implementation and it looks like you are right Cheng. I opened a ticket here: https://gitlab.haskell.org/ghc/ghc/issues/17893 Carter Schonwald schrieb am 02.03.2020 um 06:27: > The simplest way to answer this is if you can help us construct a > program, whether as Haskell or cmm, which tickles the failure you > suspect is there ? > > The rts definitely gets less love overall.  And there’s fewer folks > involved in those layers overall. > > > > On Wed, Feb 26, 2020 at 10:03 AM Shao, Cheng > wrote: > > Hi all, > > When an MVar# primop blocks, it jumps to a function in > HeapStackCheck.cmm which pushes a RET_SMALL stack frame before > returning to the scheduler (e.g. the takeMVar# primop jumps to > stg_block_takemvar for stack adjustment). But these functions directly > bump Sp without checking for possible stack overflow, I wonder if it > is a bug? 
> > Cheers, > Cheng > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Wed Mar 4 13:47:47 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 04 Mar 2020 08:47:47 -0500 Subject: Number of threads in haskell program In-Reply-To: References: Message-ID: <87tv349qpt.fsf@smart-cactus.org> "chessai ." writes: > Hi devs, > > Recently I became interested in obtaining the rough number of (non-GC) > threads that are alive in a Haskell program. Intuitively this seemed > like something that the RTS would expose in some way - but I couldn't > find any such exposition in base. I then thought that I could simply > total the size of each run_queue_hd in each capability, but the > machinery for doing that doesn't seem to be exposed. Any thoughts? > Hi Chessai, Indeed I also needed this functionality in the past. I have a patch (just posted as !2816), which may need some polish, that adds a `listThreads#` primop. Perhaps you want to pick it up? Cheers, - Ben From matthewtpickering at gmail.com Thu Mar 5 08:16:03 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 5 Mar 2020 08:16:03 +0000 Subject: Advice implementing new constraint entailment rules Message-ID: Hello, I am attempting to implement two new constraint entailment rules which dictate how a new constraint form, "CodeC", can be used to satisfy constraints. 
The main idea is that all constraints store the level at which they are introduced and required (in the Template Haskell sense of level) and that only constraints of the right level can be used. The "CodeC" constraint form allows the level of constraints to be manipulated. In order to implement this I want to add two constraint rewriting rules in the following way: 1. If in a given, `CodeC C @ n` ~> `C @ n+1` 2. If in a wanted, `CodeC C @ n` ~> `C @ n-1` Can someone give me some pointers about the specific part of the constraint solver where I should add these rules? I am unsure if this rewriting of wanted constraints already occurs or not. Cheers, Matt From simonpj at microsoft.com Thu Mar 5 09:24:02 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 5 Mar 2020 09:24:02 +0000 Subject: Advice implementing new constraint entailment rules In-Reply-To: References: Message-ID: Hi Matt I think you are right to say that we need to apply proper staging to the constraint solver. But I don't understand your constraint rewriting rules. Before moving to the implementation, could we discuss the specification? You already have some typeset rules in a paper of some kind, which I commented on some time ago. Could you elaborate those rules with class constraints? Then we'd have something tangible to debate. Thanks Simon | -----Original Message----- | From: ghc-devs On Behalf Of Matthew | Pickering | Sent: 05 March 2020 08:16 | To: GHC developers | Subject: Advice implementing new constraint entailment rules | | Hello, | | I am attempting to implement two new constraint entailment rules which | dictate how a new constraint form, "CodeC", can be used to | satisfy constraints. | | The main idea is that all constraints store the level at which they are | introduced and required (in the Template Haskell sense of level) and | that only constraints of the right level can be used. 
| | The "CodeC" constraint form allows the level of constraints to be | manipulated. | | Therefore the two rules | | In order to implement this I want to add two constraint rewriting | rules in the following way: | | 1. If in a given, `CodeC C @ n` ~> `C @ n+1` | 2. If in a wanted `CodeC C @ n` -> `C @ n - 1` | | Can someone give me some pointers about the specific part of the | constraint solver where I should add these rules? I am unsure if this | rewriting of wanted constraints already occurs or not. | | Cheers, | | Matt | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C52ec5ca4f50c496b25e808d7 | c0dd8534%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637189929963530670&a | mp;sdata=0T2O%2FaAcIU9Yl61x2uPzl4zUG4P3jl6iA97baIDlSsM%3D&reserved=0 From christiaan.baaij at gmail.com Fri Mar 6 15:21:54 2020 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Fri, 6 Mar 2020 16:21:54 +0100 Subject: Class op rules Message-ID: Hello, The other day I was experimenting with RULES and got this warning: src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] Rule "map Pack" may never fire because rule "Class op pack" for ‘pack’ might fire first Probable fix: add phase [n] or [~n] to the competing rule | 2159 | {-# RULES "map Pack" map pack = id #-} The warning seems to suggests two things: 1. "Class op" -> "dictionary projection" are implemented as rewrite rules and executed the same way as other user-defined RULES 2. These rules run first, and you cannot run anything before them Now my question is, is 1. actually true? or is that warning just a (white) lie? If 1. is actually true, would there be any objections to adding a "-1" phase: where RULES specified to start from phase "-1" onward fire before any of the Class op rules. 
I'm quite willing to implement the above if A) Class op rules are actually implemented as builtin RULES; B) there are no objections to this "-1" phase. Thanks, Christiaan From conal at conal.net Fri Mar 6 17:37:04 2020 From: conal at conal.net (Conal Elliott) Date: Fri, 6 Mar 2020 09:37:04 -0800 Subject: Class op rules In-Reply-To: References: Message-ID: Thank you for raising this issue, Christiaan! The current policy (very early class-op inlining) is a major difficulty and the main source of fragility in my compiling-to-categories implementation. I have a tediously programmed and delicately balanced collection of techniques to intercept and transform class ops to non-ops early and then transform back late for elimination, but it doesn't work in all situations. Since class operations roughly correspond to operations in various algebraic abstractions---interfaces with laws---I often want to exploit exactly those laws as rewrite rules, and yet those rules currently cannot be used dependably. - Conal On Fri, Mar 6, 2020 at 7:22 AM Christiaan Baaij wrote: > Hello, > > The other day I was experimenting with RULES and got this warning: > > src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] > Rule "map Pack" may never fire > because rule "Class op pack" for ‘pack’ might fire first > Probable fix: add phase [n] or [~n] to the competing rule > | > 2159 | {-# RULES "map Pack" map pack = id #-} > > The warning seems to suggests two things: > 1. "Class op" -> "dictionary projection" are implemented as rewrite rules > and executed the same way as other user-defined RULES > 2. These rules run first, and you cannot run anything before them > > Now my question is, is 1. actually true? or is that warning just a (white) > lie? > If 1. 
is actually true, would there be any objections to adding a "-1" > phase: where RULES specified to start from phase "-1" onward fire before > any of the Class op rules. > I'm quite willing to implement the above if A) Class op rules are actually > implemented as builtin RULES; B) there a no objections to this "-1" phase. > > Thanks, > Christiaan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From christiaan.baaij at gmail.com Fri Mar 6 18:29:35 2020 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Fri, 6 Mar 2020 19:29:35 +0100 Subject: Typechecker plugin proposal, ticket #15147 In-Reply-To: References: Message-ID: I actually have conflicting needs: 1. ghc-typelits-natnormalise, a solver for type-level Nat equations, would benefit from both unflattened givens and unflattened wanteds! Why unflattened givens? Because from `[G] 2*x + a ~ 2*x + b` I can derive `a ~ b`, which I can then use to solve `[W] 3*y + a ~ 3*y + b`. So for both givens and wanteds I want unflattened constraints, because my simplifier "simplifies" by eliminating equal terms on either side of the '~'. I actually had to write my own unflattening function for givens. Now, if flattened wanteds would mean that a single unflattened `[W] 3*y + a ~ 3*y + b` is split up into multiple flattened wanteds, that could complicate ghc-typelits-natnormalise, although I cannot be sure. So if I could be kept in the loop, and test an API with flattened wanteds early, I could give more feedback on that. 2. ghc-typelits-knownnat, which solves complicated KnownNat constraints from simpler ones. E.g. given [G] KnownNat a, [G] KnownNat b, and a [W] KnownNat (a + b), it can create that dictionary using some "magic" dictionary functions. 
Again, here I benefit from unflattened wanteds because I see [W] KnownNat (a + b) instead of [W] KnownNat fmv. 3. ghc-typelits-extra, which adds additional operations on types of kind Nat, e.g. LogBase. This one probably benefits from flattened wanteds, so I can solve one "magic" type family at a time. So if I had to hazard a guess, I think I'd want to receive my wanteds as an "Either [flattened] [unflattened]" and then return a "Solved (Either [flattened] [unflattened])" as well. On Mon, 2 Mar 2020 at 21:20, Nicolas Frisby wrote: > If you know of any typechecker plugin authors I've missed, please add them > to the thread. > > In the comments of ticket https://gitlab.haskell.org/ghc/ghc/issues/15147 > several of us agreed that the behavior of the typechecker plugin interface > should change: GHC should no longer unflatten the fmvs in the Wanteds > before passing them to the plugin. > > This is presumably a breaking change to plugins. We might be able to get > by with some sort of flag indicating if the plugin expects the Wanteds > flattened or not, but ideally GHC would just always pass them flattened. > I'm unaware of any established policy about interface changes at this > level, whether we've somehow committed to backwards-compatibility here or > not. Anyone know? > > Plugin authors: would you look over the ticket comments and share your > thoughts here? We're looking to build some sort of consensus about how to > proceed without shocking the API users. > > Thank you for your time. -Nick > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Fri Mar 6 19:11:21 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 6 Mar 2020 14:11:21 -0500 Subject: Class op rules In-Reply-To: References: Message-ID: so i did some poking around see eg https://gitlab.haskell.org/ghc/ghc/blob/4898df1cc25132dc9e2599d4fa4e1bbc9423cda5/compiler/basicTypes/BasicTypes.hs#L1187-1207 , and at the moment, the simplifier phase number ordering internally (in order from first to last) "Initial phase" --- essentially positive infinity .... -- currently we can add new phases here 2 1 0 ------- This actually surprised me, as i've always (at least from how rules are usually written in eg vector) thought it was counting UP! @Christiaan ... so we'd need a Pre initial phase count? that happens before Initial phase? On Fri, Mar 6, 2020 at 10:22 AM Christiaan Baaij wrote: > Hello, > > The other day I was experimenting with RULES and got this warning: > > src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] > Rule "map Pack" may never fire > because rule "Class op pack" for ‘pack’ might fire first > Probable fix: add phase [n] or [~n] to the competing rule > | > 2159 | {-# RULES "map Pack" map pack = id #-} > > The warning seems to suggests two things: > 1. "Class op" -> "dictionary projection" are implemented as rewrite rules > and executed the same way as other user-defined RULES > 2. These rules run first, and you cannot run anything before them > > Now my question is, is 1. actually true? or is that warning just a (white) > lie? > If 1. is actually true, would there be any objections to adding a "-1" > phase: where RULES specified to start from phase "-1" onward fire before > any of the Class op rules. > I'm quite willing to implement the above if A) Class op rules are actually > implemented as builtin RULES; B) there a no objections to this "-1" phase. 
> > Thanks, > Christiaan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Mar 6 23:02:41 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 6 Mar 2020 23:02:41 +0000 Subject: Class op rules In-Reply-To: References: Message-ID: Here’s how it works: * The rewrite from opi (D m1 … mn) --> mi is done by a BuiltinRule: see MkId.mkDictSelId, and the BuiltinRule that is made there. * At the moment, BuiltinRules are always active (in all phases), see GHC.Core.ruleActivation. To allow them to be selectively active, we’d have to give them a ru_act fiels, like ordinary Rules. That would not be hard. * The phases go * InitialPhase * 2 * 1 * 0 * We could make classop rules active only in phase 1 and 0, say. I don’t know what the consequences would be; running the classop to pick a method out of a dictionary in turn reveals new function applications that might want to work in phase 2, say. * Of course you can always add more phases, but that adds compile time. * Would you want the classop phase to be fixed for every classop? Or controllable for each classop individually. E.g. class C a where { op :: {-# INLINE [2] op #-} } Here the intent is that, since the pragmas is in the class decl, the pragma applies to the method selector. I remember Conal raising this before, but I’ve forgotten the resolution. I’m entirely open to changes here, if someone is willing to do the work, including checking for consequences. Simon From: ghc-devs On Behalf Of Conal Elliott Sent: 06 March 2020 17:37 To: Christiaan Baaij Cc: ghc-devs Subject: Re: Class op rules Thank you for raising this issue, Christiaan! The current policy (very early class-op inlining) is a major difficulty and the main source of fragility in my compiling-to-categories implementation. 
I have a tediously programmed and delicately balanced collection of techniques to intercept and transform class ops to non-ops early and then transform back late for elimination, but it doesn't work in all situations. Since class operations roughly correspond to operations in various algebraic abstractions---interfaces with laws---I often want to exploit exactly those laws as rewrite rules, and yet those rules currently cannot be used dependably. - Conal On Fri, Mar 6, 2020 at 7:22 AM Christiaan Baaij > wrote: Hello, The other day I was experimenting with RULES and got this warning: src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] Rule "map Pack" may never fire because rule "Class op pack" for ‘pack’ might fire first Probable fix: add phase [n] or [~n] to the competing rule | 2159 | {-# RULES "map Pack" map pack = id #-} The warning seems to suggests two things: 1. "Class op" -> "dictionary projection" are implemented as rewrite rules and executed the same way as other user-defined RULES 2. These rules run first, and you cannot run anything before them Now my question is, is 1. actually true? or is that warning just a (white) lie? If 1. is actually true, would there be any objections to adding a "-1" phase: where RULES specified to start from phase "-1" onward fire before any of the Class op rules. I'm quite willing to implement the above if A) Class op rules are actually implemented as builtin RULES; B) there a no objections to this "-1" phase. Thanks, Christiaan _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Sat Mar 7 01:56:25 2020 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Sat, 7 Mar 2020 09:56:25 +0800 Subject: Class op rules In-Reply-To: References: Message-ID: As a workaround, can you try this? 
https://stackoverflow.com/a/32133083/477476 On Fri, Mar 6, 2020, 23:23 Christiaan Baaij wrote: > Hello, > > The other day I was experimenting with RULES and got this warning: > > src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] > Rule "map Pack" may never fire > because rule "Class op pack" for ‘pack’ might fire first > Probable fix: add phase [n] or [~n] to the competing rule > | > 2159 | {-# RULES "map Pack" map pack = id #-} > > The warning seems to suggests two things: > 1. "Class op" -> "dictionary projection" are implemented as rewrite rules > and executed the same way as other user-defined RULES > 2. These rules run first, and you cannot run anything before them > > Now my question is, is 1. actually true? or is that warning just a (white) > lie? > If 1. is actually true, would there be any objections to adding a "-1" > phase: where RULES specified to start from phase "-1" onward fire before > any of the Class op rules. > I'm quite willing to implement the above if A) Class op rules are actually > implemented as builtin RULES; B) there a no objections to this "-1" phase. > > Thanks, > Christiaan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Sat Mar 7 08:53:15 2020 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Sat, 7 Mar 2020 09:53:15 +0100 Subject: Class op rules In-Reply-To: References: Message-ID: Thanks for explaining Simon! So I personally could live with a situation where it is controllable for each classop individually - and I'd do the work to get that in. Where if the developer doesn't specify an INLINE pragma, it defaults to AlwaysActive. 
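Concretely, the per-method control being discussed might look like the following. This is a sketch of hypothetical semantics only: today GHC accepts such a pragma, but (per this thread) it does not delay the built-in class-op rule.

```haskell
class Pack a where
  pack :: a -> a
  pack = id                 -- default method, so the pragma has a binding
  -- Proposed reading: keep the built-in "Class op pack" rewrite inactive
  -- until phase 2, so user RULES mentioning `pack` can fire first.
  -- (Hypothetical semantics; currently this pragma has no such effect.)
  {-# INLINE [2] pack #-}

instance Pack Int

packInt :: Int -> Int
packInt = pack
```

Under this reading, a developer who writes no pragma would get today's AlwaysActive behaviour, so existing code would be unaffected.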
That way, the change only affects users who have currently annotated their class op with an {-# INLINE[N] op #-} (I would have to scour hackage to see if anyone has currently doing that... I hope not...) But now that this has been brought up: currently, does adding an {-# INLINE op #-} (with or without phase) for a class op actually do anything? Or is it basically superfluous because the class op already gets a BuiltInRule that's equal to INLINE AlwaysActive? Or does it affect whether a default implementation for the method gets inlined into the dictionary? If the latter, I guess we should use SPECIALIZE instead of INLINE for controlling the rule phase of the class op... unless that SPECIALIZE also has an effect on the class op default implementation... Thanks, Christiaan On Sat, 7 Mar 2020 at 00:02, Simon Peyton Jones wrote: > Here’s how it works: > > > > - The rewrite from opi (D m1 … mn) à mi > > is done by a BuiltinRule: see MkId.mkDictSelId, and the BuiltinRule that > is made there. > > > > - At the moment, BuiltinRules are always active (in all phases), see > GHC.Core.ruleActivation. To allow them to be selectively active, we’d have > to give them a ru_act fiels, like ordinary Rules. That would not be hard. > > > > - The phases go > - InitialPhase > - 2 > - 1 > - 0 > > > > - We could make classop rules active only in phase 1 and 0, say. I > don’t know what the consequences would be; running the classop to pick a > method out of a dictionary in turn reveals new function applications that > might want to work in phase 2, say. > > > > - Of course you can always add more phases, but that adds compile time. > > > > - Would you want the classop phase to be fixed for every classop? Or > controllable for each classop individually. E.g. class C a where { op > :: {-# INLINE [2] op #-} } > > Here the intent is that, since the pragmas is in the class decl, the > pragma applies to the method selector. 
> > > > I remember Conal raising this before, but I’ve forgotten the resolution. > I’m entirely open to changes here, if someone is willing to do the work, > including checking for consequences. > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Conal > Elliott > *Sent:* 06 March 2020 17:37 > *To:* Christiaan Baaij > *Cc:* ghc-devs > *Subject:* Re: Class op rules > > > > Thank you for raising this issue, Christiaan! The current policy (very > early class-op inlining) is a major difficulty and the main source of > fragility in my compiling-to-categories implementation. I have a tediously > programmed and delicately balanced collection of techniques to intercept > and transform class ops to non-ops early and then transform back late for > elimination, but it doesn't work in all situations. Since class operations > roughly correspond to operations in various algebraic > abstractions---interfaces with laws---I often want to exploit exactly those > laws as rewrite rules, and yet those rules currently cannot be used > dependably. - Conal > > > > On Fri, Mar 6, 2020 at 7:22 AM Christiaan Baaij < > christiaan.baaij at gmail.com> wrote: > > Hello, > > > > The other day I was experimenting with RULES and got this warning: > > > > src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] > Rule "map Pack" may never fire > because rule "Class op pack" for ‘pack’ might fire first > Probable fix: add phase [n] or [~n] to the competing rule > | > 2159 | {-# RULES "map Pack" map pack = id #-} > > > > The warning seems to suggests two things: > > 1. "Class op" -> "dictionary projection" are implemented as rewrite rules > and executed the same way as other user-defined RULES > > 2. These rules run first, and you cannot run anything before them > > > > Now my question is, is 1. actually true? or is that warning just a (white) > lie? > > If 1. 
is actually true, would there be any objections to adding a "-1" > phase: where RULES specified to start from phase "-1" onward fire before > any of the Class op rules. > > I'm quite willing to implement the above if A) Class op rules are actually > implemented as builtin RULES; B) there a no objections to this "-1" phase. > > > > Thanks, > > Christiaan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Sat Mar 7 09:19:39 2020 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Sat, 7 Mar 2020 10:19:39 +0100 Subject: Class op rules In-Reply-To: References: Message-ID: That workaround is fragile for me: When I put everything into one file, the "fromList/toList" rule fires. However, when I put the test1 and main definitions into a separate file, the "fromList/toList" rule no longer fires. The reason for that seems to be that " fromList' = fromList " is rewritten to " fromList' = fromList' ", and then the strictness/demand analysis flags it up as always bottoming. Then in the file where we write `test1 x = fromList (toList x)`, it gets rewritten to `test1 x = fromList' (toList x)`, after which (because of the always bottoming) it gets rewritten to `test1 _ = case fromList' of {}` On Sat, 7 Mar 2020 at 02:56, Dr. ÉRDI Gergő wrote: > As a workaround, can you try this? 
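For reference, the workaround under discussion has roughly this shape (a reconstruction for illustration: the `Rev` wrapper and the `IsList'` class are invented here; only `fromList'`, `toList`, and `test1` follow the names in the messages). The full workaround also adds a rule redirecting the class op `fromList` to the wrapper, which is the step that becomes the self-reference described below; that rule is omitted from this runnable sketch.

```haskell
newtype Rev a = Rev [a]   -- hypothetical container for illustration

class IsList' f where
  toList   :: f a -> [a]
  fromList :: [a] -> f a

instance IsList' Rev where
  toList (Rev xs) = xs
  fromList = Rev

-- The workaround: a NOINLINE wrapper that the built-in "Class op" rules
-- cannot rewrite, so the user rule below has a chance to match.
fromList' :: IsList' f => [a] -> f a
fromList' = fromList
{-# NOINLINE fromList' #-}

{-# RULES "fromList/toList" forall x. fromList' (toList x) = x #-}

test1 :: IsList' f => f a -> f a
test1 x = fromList' (toList x)
```

Whether or not the rule fires, `test1` computes the identity here, which is what makes the fragility hard to notice until the bottoming variant appears.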
> https://stackoverflow.com/a/32133083/477476 > > On Fri, Mar 6, 2020, 23:23 Christiaan Baaij > wrote: > >> Hello, >> >> The other day I was experimenting with RULES and got this warning: >> >> src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing] >> Rule "map Pack" may never fire >> because rule "Class op pack" for ‘pack’ might fire first >> Probable fix: add phase [n] or [~n] to the competing rule >> | >> 2159 | {-# RULES "map Pack" map pack = id #-} >> >> The warning seems to suggests two things: >> 1. "Class op" -> "dictionary projection" are implemented as rewrite rules >> and executed the same way as other user-defined RULES >> 2. These rules run first, and you cannot run anything before them >> >> Now my question is, is 1. actually true? or is that warning just a >> (white) lie? >> If 1. is actually true, would there be any objections to adding a "-1" >> phase: where RULES specified to start from phase "-1" onward fire before >> any of the Class op rules. >> I'm quite willing to implement the above if A) Class op rules are >> actually implemented as builtin RULES; B) there a no objections to this >> "-1" phase. >> >> Thanks, >> Christiaan >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Mar 9 10:02:05 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 9 Mar 2020 10:02:05 +0000 Subject: Class op rules In-Reply-To: References: Message-ID: But now that this has been brought up: currently, does adding an {-# INLINE op #-} (with or without phase) for a class op actually do anything? Currently it does nothing, I think, and therefore should perhaps be rejected today. So I hope no one is doing that. 
Worth double checking: * Hackage, to check that no one has INLINE on a method in a class decl * GHC, to check that an INLINE on method in a class decl is ignored. In contrast, an INLINE on an instance decl should mean that that particular class op instance method is inlined. I hope this works correctly today. So I personally could live with a situation where it is controllable for each classop individually - and I'd do the work to get that in. OK. This is actually user facing, so I think the right thing is to write a short GHC Proposal. It needn’t take long to get approved. But it does get more eyes on it. And it provides a solid write up to refer to from the implementation. You could draw on this thread for the raw material, so it would not be hard to write. Simon From: Christiaan Baaij Sent: 07 March 2020 08:53 To: Simon Peyton Jones Cc: Conal Elliott ; ghc-devs Subject: Re: Class op rules Thanks for explaining Simon! So I personally could live with a situation where it is controllable for each classop individually - and I'd do the work to get that in. Where if the developer doesn't specify an INLINE pragma, it defaults to AlwaysActive. That way, the change only affects users who have currently annotated their class op with an {-# INLINE[N] op #-} (I would have to scour hackage to see if anyone has currently doing that... I hope not...) But now that this has been brought up: currently, does adding an {-# INLINE op #-} (with or without phase) for a class op actually do anything? Or is it basically superfluous because the class op already gets a BuiltInRule that's equal to INLINE AlwaysActive? Or does it affect whether a default implementation for the method gets inlined into the dictionary? If the latter, I guess we should use SPECIALIZE instead of INLINE for controlling the rule phase of the class op... unless that SPECIALIZE also has an effect on the class op default implementation... 
Thanks,
Christiaan

On Sat, 7 Mar 2020 at 00:02, Simon Peyton Jones > wrote:

Here's how it works:

* The rewrite from opi (D m1 … mn) --> mi is done by a BuiltinRule: see MkId.mkDictSelId, and the BuiltinRule that is made there.
* At the moment, BuiltinRules are always active (in all phases), see GHC.Core.ruleActivation. To allow them to be selectively active, we'd have to give them a ru_act field, like ordinary Rules. That would not be hard.
* The phases go
  * InitialPhase
  * 2
  * 1
  * 0
* We could make classop rules active only in phase 1 and 0, say. I don't know what the consequences would be; running the classop to pick a method out of a dictionary in turn reveals new function applications that might want to work in phase 2, say.
* Of course you can always add more phases, but that adds compile time.
* Would you want the classop phase to be fixed for every classop? Or controllable for each classop individually. E.g.

    class C a where { op :: {-# INLINE [2] op #-} }

Here the intent is that, since the pragma is in the class decl, the pragma applies to the method selector. I remember Conal raising this before, but I've forgotten the resolution.

I'm entirely open to changes here, if someone is willing to do the work, including checking for consequences.

Simon

From: ghc-devs > On Behalf Of Conal Elliott
Sent: 06 March 2020 17:37
To: Christiaan Baaij >
Cc: ghc-devs >
Subject: Re: Class op rules

Thank you for raising this issue, Christiaan! The current policy (very early class-op inlining) is a major difficulty and the main source of fragility in my compiling-to-categories implementation. I have a tediously programmed and delicately balanced collection of techniques to intercept and transform class ops to non-ops early and then transform back late for elimination, but it doesn't work in all situations.
Since class operations roughly correspond to operations in various algebraic abstractions---interfaces with laws---I often want to exploit exactly those laws as rewrite rules, and yet those rules currently cannot be used dependably.

- Conal

On Fri, Mar 6, 2020 at 7:22 AM Christiaan Baaij > wrote:

Hello,

The other day I was experimenting with RULES and got this warning:

src/Clash/Sized/Vector.hs:2159:11: warning: [-Winline-rule-shadowing]
    Rule "map Pack" may never fire
      because rule "Class op pack" for ‘pack’ might fire first
    Probable fix: add phase [n] or [~n] to the competing rule
     |
2159 | {-# RULES "map Pack" map pack = id #-}

The warning seems to suggest two things:
1. "Class op" -> "dictionary projection" are implemented as rewrite rules and executed the same way as other user-defined RULES
2. These rules run first, and you cannot run anything before them

Now my question is, is 1. actually true? or is that warning just a (white) lie?
If 1. is actually true, would there be any objections to adding a "-1" phase: where RULES specified to start from phase "-1" onward fire before any of the Class op rules.
I'm quite willing to implement the above if A) Class op rules are actually implemented as builtin RULES; B) there are no objections to this "-1" phase.

Thanks,
Christiaan
_______________________________________________
ghc-devs mailing list
ghc-devs at haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From david.feuer at gmail.com Mon Mar 9 17:27:39 2020
From: david.feuer at gmail.com (David Feuer)
Date: Mon, 9 Mar 2020 13:27:39 -0400
Subject: Selector thunks again
Message-ID: 

The fragility of this feature remains frustrating. A few days ago, I wrote this code for building a complete binary tree from its breadth-first traversal. (This is an improvement of a version by Will Ness.)
data Tree a = Empty | Node a (Tree a) (Tree a) deriving Show

-- An infinite list
data IL a = a :< IL a
infixr 5 :<

bft :: [a] -> Tree a
bft xs = tree
  where
    tree :< subtrees = go xs subtrees

    go :: [a] -> IL (Tree a) -> IL (Tree a)
    go (a : as) ~(b1 :< ~(b2 :< bs)) =
      Node a b1 b2 :< go as bs
    go [] _ = fix (Empty :<)

When GHC compiles the lazy patterns, we get something essentially like this:

    go (a : as) ys =
      Node a (case ys of b1 :< _ -> b1)
             (case ys of _ :< b2 :< _ -> b2)
        :< go as (case ys of _ :< _ :< bs -> bs)

Now `case ys of b1 :< _ -> b1` is a selector thunk, which is cool. The GC can reduce it as soon as either of the other thunks is forced. But neither of the other two case expressions is a selector thunk, so neither will ever be reduced by the GC. If I consume the result tree using an inorder traversal, for example, then all the elements in the left subtree of the root will remain live until I start to consume the right subtree of the root.

I can instead write this:

    go (a : as) ys = Node a b1 b2 :< go as bs
      where
        {-# NOINLINE b2bs #-}
        b1 :< b2bs = ys
        b2 :< bs = b2bs

Now all the suspended selections are selector thunks, so things should clean up nicely. There are still three problems, though. The first is that this is harder to read. The second is that now we have four suspended selections instead of three. Finally, if b1 is not the first one forced, we'll need to force two thunks instead of one.

Can't we do any better?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com Mon Mar 9 21:33:41 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 9 Mar 2020 21:33:41 +0000
Subject: Selector thunks again
In-Reply-To: 
References: 
Message-ID: 

David

Yes, quite right. Do make a ticket. But it's not easy to see a truly robust way to fix this.
Selector thunks look like

    a = case x of (p,q) -> p

Your example has thunks of the form

    a = case x of (_,b) -> case b of (c,_) -> c

We want this thunk to vanish entirely if x is bound to a pair whose second component is a pair. But, if x is bound to a pair whose second component is not yet evaluated, we want the thunk to partially vanish, becoming

    a = case b of (c,_) -> c

where b is the second component of x. This seems hard in general.

You could imagine translating the example into

    b = case x of (_,b) -> b
    a = case b of (c,_) -> c

Here I have built two thunks rather than one, but each is (independently) a selector thunk. We get just the right thing happening if x is evaluated but its second component is not. Hooray. What's not nice is that execution is slower in the case where the selector-thunk mechanism doesn't fire.

Another possibility: make selector thunks carry a kind of "path" indicating which fields to pick out as they select a nested component of the data structure. A kind of selector-thunk chain.

Another possibility: for suitable selector-like thunks, compile special code that is run by the garbage collector.

This deserves a ticket and a wiki page. Feel free to plunder the above.

Simon

From: ghc-devs On Behalf Of David Feuer
Sent: 09 March 2020 17:28
To: ghc-devs
Subject: Selector thunks again

The fragility of this feature remains frustrating. A few days ago, I wrote this code for building a complete binary tree from its breadth-first traversal. (This is an improvement of a version by Will Ness.)
data Tree a = Empty | Node a (Tree a) (Tree a) deriving Show

-- An infinite list
data IL a = a :< IL a
infixr 5 :<

bft :: [a] -> Tree a
bft xs = tree
  where
    tree :< subtrees = go xs subtrees

    go :: [a] -> IL (Tree a) -> IL (Tree a)
    go (a : as) ~(b1 :< ~(b2 :< bs)) =
      Node a b1 b2 :< go as bs
    go [] _ = fix (Empty :<)

When GHC compiles the lazy patterns, we get something essentially like this:

    go (a : as) ys =
      Node a (case ys of b1 :< _ -> b1)
             (case ys of _ :< b2 :< _ -> b2)
        :< go as (case ys of _ :< _ :< bs -> bs)

Now `case ys of b1 :< _ -> b1` is a selector thunk, which is cool. The GC can reduce it as soon as either of the other thunks is forced. But neither of the other two case expressions is a selector thunk, so neither will ever be reduced by the GC. If I consume the result tree using an inorder traversal, for example, then all the elements in the left subtree of the root will remain live until I start to consume the right subtree of the root.

I can instead write this:

    go (a : as) ys = Node a b1 b2 :< go as bs
      where
        {-# NOINLINE b2bs #-}
        b1 :< b2bs = ys
        b2 :< bs = b2bs

Now all the suspended selections are selector thunks, so things should clean up nicely. There are still three problems, though. The first is that this is harder to read. The second is that now we have four suspended selections instead of three. Finally, if b1 is not the first one forced, we'll need to force two thunks instead of one.

Can't we do any better?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From csaba.hruska at gmail.com Tue Mar 10 11:31:21 2020
From: csaba.hruska at gmail.com (Csaba Hruska)
Date: Tue, 10 Mar 2020 12:31:21 +0100
Subject: GHC HEAD (Quickest) segfaults during compilation of optparse-applicative
Message-ID: 

Hello,

GHC HEAD (9668781a36941e7552fcec38f6d4e1d5ec3ef6d1) compiled with the Quickest flavour segfaults when compiling optparse-applicative-0.15.1.0. But it compiles fine with the Quick or the default flavours.
[image: image.png] Everything compiles fine (with vanilla Quickest GHC HEAD) until the Options.Applicative.Help.Core module where GHC segfaults. (as I understand the error code) Is this a known bug? If not then I'll submit an issue. Thanks, Csaba -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 147901 bytes Desc: not available URL: From csaba.hruska at gmail.com Tue Mar 10 12:07:06 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Tue, 10 Mar 2020 13:07:06 +0100 Subject: GHC HEAD (Quickest) segfaults during compilation of optparse-applicative In-Reply-To: References: Message-ID: I've created a bug sample project on github: https://github.com/csabahruska/ghc-bug-sample Follow the readme to reproduce the bug. Cheers, Csaba On Tue, Mar 10, 2020 at 12:31 PM Csaba Hruska wrote: > Hello, > > GHC HEAD (9668781a36941e7552fcec38f6d4e1d5ec3ef6d1) compiled with the > Quickest flavour segfaults when compiles optparse-applicative-0.15.1.0. > But it compiles fine with the Quick or the default flavours. > [image: image.png] > Everything compiles fine (with vanilla Quickest GHC HEAD) until the > Options.Applicative.Help.Core module where GHC segfaults. (as I understand > the error code) > > Is this a known bug? > If not then I'll submit an issue. > > Thanks, > Csaba > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png
Type: image/png
Size: 147901 bytes
Desc: not available
URL: 

From omeragacan at gmail.com Tue Mar 10 12:18:32 2020
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Tue, 10 Mar 2020 15:18:32 +0300
Subject: GHC HEAD (Quickest) segfaults during compilation of optparse-applicative
In-Reply-To: 
References: 
Message-ID: 

Can you file a bug report please with reproduction instructions? I know of one heap corruption bug (#17785, I'm currently debugging it) but it's impossible to tell whether this is the same bug or not just by looking at your screenshot.

In practice whether they're the same bug or not is quite hard to answer, and it also does not matter too much, for several reasons:

- Segfaults/heap corruption usually happens as a result of interaction of many different features and code so usually it's impossible to tell, just by looking at the reproducer, what is responsible.

- It's usually a good idea to debug different reproducers at the same time. If they're caused by the same problem then one of the reproducers may lead to the bug faster/easier than others so it's a good idea to explore them in parallel. If they're caused by different bugs then exploring all in parallel does not cause any extra work (unless you're context switching at a high rate, which I usually avoid).

- More reproducers = more tests, which are good.

Also, when you ask whether this is a known bug on the mailing list it's effectively the same as just submitting a bug report: either way you don't search it yourself and ask for other devs to do it.

Thanks,

Ömer

On Tue, 10 Mar 2020 at 15:08, Csaba Hruska wrote:
>
> I've created a bug sample project on github: https://github.com/csabahruska/ghc-bug-sample
> Follow the readme to reproduce the bug.
> > Cheers, > Csaba > > On Tue, Mar 10, 2020 at 12:31 PM Csaba Hruska wrote: >> >> Hello, >> >> GHC HEAD (9668781a36941e7552fcec38f6d4e1d5ec3ef6d1) compiled with the Quickest flavour segfaults when compiles optparse-applicative-0.15.1.0. >> But it compiles fine with the Quick or the default flavours. >> >> Everything compiles fine (with vanilla Quickest GHC HEAD) until the Options.Applicative.Help.Core module where GHC segfaults. (as I understand the error code) >> >> Is this a known bug? >> If not then I'll submit an issue. >> >> Thanks, >> Csaba > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From csaba.hruska at gmail.com Tue Mar 10 15:26:06 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Tue, 10 Mar 2020 16:26:06 +0100 Subject: GHC HEAD (Quickest) segfaults during compilation of optparse-applicative In-Reply-To: References: Message-ID: OK, I'll submit an issue on gitlab. On Tue, Mar 10, 2020 at 1:19 PM Ömer Sinan Ağacan wrote: > Can you file a bug report please with reproduction instructions? I know of > one > heap corruption bug (#17785, I'm currently debugging it) but it's > impossible to > tell whether this is the same bug or not just by looking at your > screenshot. > > In practice whether they're the same bug or not is quote hard to answer, > and it > also does not matter too much, for several reasons: > > - Segfaults/heap corruption usually happens as a result of interaction of > many > different features and code so usually it's impossible to tell, just by > looking at the reproducer, what is responsible. > > - It's usually a good idea to debug different reproducers at the same > time. If > they're caused by the same problem then one of the reproducer may lead > to the > bug faster/easier than others so it's a good idea to explore them in > parallel. 
> > If they're caused by different bugs then exploring all in parallel does > not > cause any extra work (unless you're context switching at a high rate, > which I > usually avoid). > > - More reproducers = more tests, which are good. > > Also, when you ask whether this is a known bug on the mailing list it's > effectively the same as just submitting a bug report: either way you don't > search it yourself and ask for other devs to do it. > > Thanks, > > Ömer > > Csaba Hruska , 10 Mar 2020 Sal, 15:08 > tarihinde şunu yazdı: > > > > I've created a bug sample project on github: > https://github.com/csabahruska/ghc-bug-sample > > Follow the readme to reproduce the bug. > > > > Cheers, > > Csaba > > > > On Tue, Mar 10, 2020 at 12:31 PM Csaba Hruska > wrote: > >> > >> Hello, > >> > >> GHC HEAD (9668781a36941e7552fcec38f6d4e1d5ec3ef6d1) compiled with the > Quickest flavour segfaults when compiles optparse-applicative-0.15.1.0. > >> But it compiles fine with the Quick or the default flavours. > >> > >> Everything compiles fine (with vanilla Quickest GHC HEAD) until the > Options.Applicative.Help.Core module where GHC segfaults. (as I understand > the error code) > >> > >> Is this a known bug? > >> If not then I'll submit an issue. > >> > >> Thanks, > >> Csaba > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Tue Mar 10 23:05:04 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 10 Mar 2020 23:05:04 +0000 Subject: performance testing Message-ID: <1B67C1E9-B77E-4B7D-BE57-E6FDABC8E1CC@richarde.dev> Hi all, I'm very confused by how to do performance testing. I have a patch that seems to cause a performance regression. So I built the patch locally (`validate` flavor, with `make`) and then reproduced the regression. Good. 
Then, I make a small change, rebuild, commit, and then test. But now it seems that the baseline has changed. (This took some time to discover after seemingly non-deterministic results!)

So: how can I make several different commits, with different experiments, switch between them at will, all without changing my baseline? I do *not* have the magic "metric decrease" bit in my commit message. Or, at least, I didn't put it there.

Thanks!
Richard

From davide at well-typed.com Wed Mar 11 09:59:33 2020
From: davide at well-typed.com (David Eichmann)
Date: Wed, 11 Mar 2020 09:59:33 +0000
Subject: performance testing
In-Reply-To: <1B67C1E9-B77E-4B7D-BE57-E6FDABC8E1CC@richarde.dev>
References: <1B67C1E9-B77E-4B7D-BE57-E6FDABC8E1CC@richarde.dev>
Message-ID: <3499a1c5-2d0d-e00a-20e9-1fdd401e788d@well-typed.com>

Richard

The performance tests seem to be causing more confusion than I'd have liked. A baseline is established from previous performance test runs. There is a wiki page that may help: https://gitlab.haskell.org/ghc/ghc/wikis/building/running-tests/performance-tests. Let me know if that's not sufficiently helpful.

David E

On 3/10/20 11:05 PM, Richard Eisenberg wrote:
> Hi all,
>
> I'm very confused by how to do performance testing.
>
> I have a patch that seems to cause a performance regression. So I built the patch locally (`validate` flavor, with `make`) and then reproduced the regression. Good. Then, I make a small change, rebuild, commit, and then test. But now it seems that the baseline has changed. (This took some time to discover after seemingly non-deterministic results!)
>
> So: how can I make several different commits, with different experiments, switch between them at will, all without changing my baseline? I do *not* have the magic "metric decrease" bit in my commit message. Or, at least, I didn't put it there.
>
> Thanks!
> Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England From rae at richarde.dev Wed Mar 11 11:24:14 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 11 Mar 2020 11:24:14 +0000 Subject: performance testing In-Reply-To: <3499a1c5-2d0d-e00a-20e9-1fdd401e788d@well-typed.com> References: <1B67C1E9-B77E-4B7D-BE57-E6FDABC8E1CC@richarde.dev> <3499a1c5-2d0d-e00a-20e9-1fdd401e788d@well-typed.com> Message-ID: Ah yes -- very helpful. So the baseline is always (implicitly) the parent commit, if that commit has performance info in the set of notes. So if I have this situation (time flows down) origin/master: blah wip/xyz: big cool change (that slows things down) attempt1: comment out some of big cool change attempt2: comment out more of big cool change and I do a perf test on attempt1, it's quite likely that the perf test will *pass*, because it's comparing to the previous commit. And then when I do a perf test on attempt2 and see a metric *decrease*, that might just be because I've fixed the perf problem... but I haven't actually made an improvement. (This story is true. Names have been changed to protect the innocent.) Is my understanding accurate here? Sorry for not finding the appropriate wiki page sooner. It felt like I was witnessing non-determinism, but it actually makes sense now. Thanks! Richard > On Mar 11, 2020, at 9:59 AM, David Eichmann wrote: > > Richard > > The performance tests seem to be causing more confusion than I'd of liked. A baseline is established from previous performance test runs. There is a wiki page that may help: https://gitlab.haskell.org/ghc/ghc/wikis/building/running-tests/performance-tests. Let me know if that's not sufficiently helpful. 
> > David E > > On 3/10/20 11:05 PM, Richard Eisenberg wrote: >> Hi all, >> >> I'm very confused by how to do performance testing. >> >> I have a patch that seems to cause a performance regression. So I built the patch locally (`validate` flavor, with `make`) and then reproduced the regression. Good. Then, I make a small change, rebuild, commit, and then test. But now it seems that the baseline has changed. (This took some time to discover after seemingly non-deterministic results!) >> >> So: how can I make several different commits, with different experiments, switch between them at will, all without changing my baseline? I do *not* have the magic "metric decrease" bit in my commit message. Or, at least, I didn't put it there. >> >> Thanks! >> Richard >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- > David Eichmann, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com > > Registered in England & Wales, OC335890 > 118 Wymering Mansions, Wymering Road, London W9 2NF, England > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From davide at well-typed.com Wed Mar 11 12:10:08 2020 From: davide at well-typed.com (David Eichmann) Date: Wed, 11 Mar 2020 12:10:08 +0000 Subject: performance testing In-Reply-To: <997b3b4c-303b-8c7f-7dc7-8ed9d60995b3@well-typed.com> References: <997b3b4c-303b-8c7f-7dc7-8ed9d60995b3@well-typed.com> Message-ID: <0b42b736-2e51-853d-f5c9-6166f6f13676@well-typed.com> -------- Forwarded Message -------- Subject: Re: performance testing Date: Wed, 11 Mar 2020 12:08:52 +0000 From: David Eichmann To: Richard Eisenberg > Ah yes -- very helpful. So the baseline is always (implicitly) the > parent commit, if that commit has performance info in the set of > notes. 
So if I have this situation (time flows down) > > origin/master: blah > wip/xyz: big cool change (that slows things down) > attempt1: comment out some of big cool change > attempt2: comment out more of big cool change With you so far. > and I do a perf test on attempt1, it's quite likely that the perf test > will*pass*, because it's comparing to the previous commit. Yes, assuming you actually ran performance tests on the previous commit with a clean working tree (remember we don't record performance metrics if the git tree has changes, though you'll see a warning in the test output in that case). I'd suggest outputting a graph as described in the wiki if you're wondering which commits have recorded performance metrics. > And then when I do a perf test on attempt2 and see a metric*decrease*, > that might just be because I've fixed the perf problem... but I > haven't actually made an improvement. Right. The decrease is compared to attempt1, but this may not be a decrease compared to earlier commits. Again I'd suggest graphing the data if you want to inspect this yourself. David E -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Sat Mar 14 23:20:21 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Sat, 14 Mar 2020 18:20:21 -0500 Subject: Specializing functions with implicit parameters Message-ID: Hi all, I discovered today that GHC never specializes functions with implicit parameters. This is not that surprising—I wouldn’t expect GHC to specialize the implicit parameters themselves—but it’s unfortunate because it means a single implicit parameter somewhere can transitively destroy specialization that would otherwise be very helpful. 
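For concreteness, the kind of definition at issue might look like the following. This is an illustrative sketch with invented names, not code from the original message:

```haskell
{-# LANGUAGE ImplicitParams #-}
module IPSpecDemo where

-- A function with both an implicit parameter and an ordinary class
-- constraint; ?verbose is passed much like an extra dictionary argument.
bar :: (?verbose :: Bool, Show a) => a -> String
bar x = if ?verbose then show x else ""

-- A call at the known type Int. The hope discussed here is that GHC
-- could specialise the Show dictionary at this call while leaving
-- ?verbose alone, i.e. produce something like
--   bar_Int :: (?verbose :: Bool) => Int -> String
useBar :: Int -> String
useBar n = let ?verbose = True in bar n
```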
Is there any obstacle to specializing these functions' other dictionaries and leaving the implicit parameters alone? That is, if I have a function

    foo :: (?foo :: Bool, Show a) => a -> String

could GHC specialize `foo @Int` to

    foo :: (?foo :: Bool) => Int -> String

treating the implicit parameter little differently from an ordinary function argument?

As far as I can tell, there isn't any real obstacle to doing this, so unless I'm missing something, I might give it a try myself. I just wanted to make sure I wasn't missing anything before diving in.

Thanks,
Alexis

From sandy at sandymaguire.me Sun Mar 15 01:03:52 2020
From: sandy at sandymaguire.me (Sandy Maguire)
Date: Sat, 14 Mar 2020 18:03:52 -0700
Subject: Specializing functions with implicit parameters
In-Reply-To: 
References: 
Message-ID: 

What GHC are you testing against? I suspect https://gitlab.haskell.org/ghc/ghc/merge_requests/668 will fix this.

On Sat, Mar 14, 2020 at 4:20 PM Alexis King wrote:
> Hi all,
>
> I discovered today that GHC never specializes functions with implicit
> parameters. This is not that surprising—I wouldn't expect GHC to specialize
> the implicit parameters themselves—but it's unfortunate because it means a
> single implicit parameter somewhere can transitively destroy specialization
> that would otherwise be very helpful.
> > Thanks, > Alexis > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Sun Mar 15 02:47:00 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Sat, 14 Mar 2020 21:47:00 -0500 Subject: Specializing functions with implicit parameters In-Reply-To: References: Message-ID: <28F73A98-388B-4CBC-ACAA-D8DE20F1C7FC@gmail.com> > On Mar 14, 2020, at 20:03, Sandy Maguire wrote: > > What GHC are you testing against? I suspect https://gitlab.haskell.org/ghc/ghc/merge_requests/668 will fix this. I’ve tested against HEAD. I think the change you link is helpful, but it doesn’t quite get there: the usage gets dumped before specHeader even gets a chance to look at the call. The relevant bit of code is here: https://gitlab.haskell.org/ghc/ghc/blob/1de3ab4a147eeb0b34b24a3c0e91f174e6e5cb79/compiler/specialise/Specialise.hs#L2274-2302 Specifically, this line seals the deal: ClassPred cls _ -> not (isIPClass cls) -- Superclasses can't be IPs So maybe the right fix is just to change the role of type_determines_value so that it turns SpecDicts into UnspecArgs, and then with your change everything would just happily work out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Mar 15 19:23:33 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 15 Mar 2020 15:23:33 -0400 Subject: Specializing functions with implicit parameters In-Reply-To: <28F73A98-388B-4CBC-ACAA-D8DE20F1C7FC@gmail.com> References: <28F73A98-388B-4CBC-ACAA-D8DE20F1C7FC@gmail.com> Message-ID: Hey Alexis, ive been kicking around some ideas for a specializing lambda former for various uses i've wanted to make tractable, I assume you dont care about polymorphic recursion in the cases you want to specialize? 
(some of the stuff i want to be able to express requires a sort of type/value binder that needs to be "normalized" before desugaring, but where current meta programming cant express the primops i want ghc to support! so roughly a sortah binder thats like c+ templates, but for types/values that lets me guarantee compositions will specialize before core happens) On Sat, Mar 14, 2020 at 10:47 PM Alexis King wrote: > On Mar 14, 2020, at 20:03, Sandy Maguire wrote: > > What GHC are you testing against? I suspect > https://gitlab.haskell.org/ghc/ghc/merge_requests/668 will fix this. > > > I’ve tested against HEAD. I think the change you link is helpful, but it > doesn’t *quite* get there: the usage gets dumped before specHeader even > gets a chance to look at the call. The relevant bit of code is here: > > > https://gitlab.haskell.org/ghc/ghc/blob/1de3ab4a147eeb0b34b24a3c0e91f174e6e5cb79/compiler/specialise/Specialise.hs#L2274-2302 > > Specifically, this line seals the deal: > > ClassPred cls _ -> not (isIPClass cls) -- Superclasses can't be IPs > > So maybe the right fix is just to change the role of type_determines_value > so that it turns SpecDicts into UnspecArgs, and then with your change > everything would just happily work out. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Mar 16 15:59:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 16 Mar 2020 11:59:49 -0400 Subject: [HiW'20] Call for Talks Message-ID: <878sk08f4a.fsf@smart-cactus.org> Hello everyone, Haskell Implementors Workshop is calling for talk proposals. Co-located with ICFP, HiW is an ideal place to describe a Haskell library, a Haskell extension, compiler, works-in-progress, demo a new Haskell-related tool, or even propose future lines of Haskell development. 
The deadline for submissions is July 2nd 2020.

Call for Talks
==============

The 12th Haskell Implementors' Workshop is to be held alongside ICFP 2020 this year in New Jersey. It is a forum for people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure, to share their work and discuss future directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and selected by a small program committee. There will be no published proceedings. The workshop will be informal and interactive, with open spaces in the timetable and room for ad-hoc discussion, demos and lightning talks.

Scope and Target Audience
-------------------------

It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium which is also co-located with ICFP 2020. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings – although we will aim to make talk videos, slides and presented data available with the consent of the speakers.

The Implementors' Workshop is an ideal place to describe a Haskell extension, describe works-in-progress, demo a new Haskell-related tool, or even propose future lines of Haskell development. Members of the wider Haskell community are encouraged to attend the workshop – we need your feedback to keep the Haskell ecosystem thriving. Students working with Haskell are especially encouraged to share their work.

The scope covers any of the following topics.
There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets:

- Compilation techniques
- Language features and extensions
- Type system implementation
- Concurrency and parallelism: language design and implementation
- Performance, optimization and benchmarking
- Virtual machines and run-time systems
- Libraries and tools for development or deployment

Talks
-----

We invite proposals from potential speakers for talks and demonstrations. We are aiming for 20-minute talks with 5 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 300 words.

Submissions can be made via HotCRP at https://icfp-hiw20.hotcrp.com/ until July 2nd (anywhere on earth).

We will also have a lightning talks session. These have been very well received in recent years, and we aim to increase the time available to them. Lightning talks will be ~7 minutes and are scheduled on the day of the workshop. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators.

Logistics
---------

We recognize the on-going threat that COVID-19 poses to our participants' safety. While August is nearly half a year away, we must account for the possibility that the virus continues to pose a significant threat into the summer. For this reason, we are investigating options that would allow remote presentations this year. In light of this, we urge potential presenters not to be discouraged from submitting and encourage participants to keep time open in their calendars, regardless of the on-going COVID situation.
Rest assured that the conference organizers are working to ensure that the Implementors’ Workshop can be held safely and productively, regardless of how the COVID-19 situation evolves.

Program Committee
-----------------

- Andrey Mokhov (Newcastle University)
- Ben Gamari (Well-Typed LLP)
- Christian Baaij (QBayLogic)
- George Karachalias (Tweag I/O)
- Klara Marntirosian (KU Leuven)
- Matthew Pickering (University of Bristol)
- Ryan Scott (Indiana University Bloomington)

Best wishes,
Ben

From simonpj at microsoft.com Mon Mar 16 17:08:31 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 16 Mar 2020 17:08:31 +0000
Subject: Specializing functions with implicit parameters
In-Reply-To: <28F73A98-388B-4CBC-ACAA-D8DE20F1C7FC@gmail.com>
References: <28F73A98-388B-4CBC-ACAA-D8DE20F1C7FC@gmail.com>
Message-ID:

Spot on Alexis. Should not be hard to fix this. I think the best thing would be in mkCallUDs

* not to use SpecDict for implicit parameters; instead use UnspecArg
* don’t require length theta = length dicts. Need to think about what else instead. Isn’t it implied by the arity test?

Make a ticket! Happy to help if you or Sandy need anything from me.

Simon

From: ghc-devs On Behalf Of Alexis King
Sent: 15 March 2020 02:47
To: Sandy Maguire
Cc: ghc-devs
Subject: Re: Specializing functions with implicit parameters

On Mar 14, 2020, at 20:03, Sandy Maguire > wrote:

What GHC are you testing against? I suspect https://gitlab.haskell.org/ghc/ghc/merge_requests/668 will fix this.

I’ve tested against HEAD. I think the change you link is helpful, but it doesn’t quite get there: the usage gets dumped before specHeader even gets a chance to look at the call.
The relevant bit of code is here: https://gitlab.haskell.org/ghc/ghc/blob/1de3ab4a147eeb0b34b24a3c0e91f174e6e5cb79/compiler/specialise/Specialise.hs#L2274-2302 Specifically, this line seals the deal: ClassPred cls _ -> not (isIPClass cls) -- Superclasses can't be IPs So maybe the right fix is just to change the role of type_determines_value so that it turns SpecDicts into UnspecArgs, and then with your change everything would just happily work out. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandy at sandymaguire.me Wed Mar 18 01:55:50 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Tue, 17 Mar 2020 18:55:50 -0700 Subject: Getting the inferred types of TH's UnboundVarEs Message-ID: Hi all, I'm writing some TH code that should generate property tests. For example, the expression: $(generate [e| law "idempotent" (insert a (insert a b) == insert a b) |]) should generate the code property $ \a b -> insert a (insert a b) === insert a b I do this by looking for UnboundVarEs in the Exp returned by the [e| quote, and binding them in a lambda. All of this works. However, now I'm trying to get the inferred types of `a` and `b` in the above. GHC clearly is typechecking the quote, since it will fail if I replace `b` with something nonsensical. *Is there some existent way to get the inferred type of an UnboundVarE --- ideally without reimplementing the typechecker?* Thanks! Sandy -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Mar 18 08:04:18 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 18 Mar 2020 08:04:18 +0000 Subject: Getting the inferred types of TH's UnboundVarEs In-Reply-To: References: Message-ID: Good morning Sandy, thanks for your email. I don't think that GHC will typecheck the quote until you splice it in. What exactly do you mean that it fails if `b` is replaced with something different? 
What are you hoping to do with this information? This reminds me a bit of the `qTypecheck` action I have implemented on another branch - https://gitlab.haskell.org/ghc/ghc/issues/17565#note_242199 Cheers, Matt On Wed, Mar 18, 2020 at 1:56 AM Sandy Maguire wrote: > > Hi all, > > I'm writing some TH code that should generate property tests. For example, the expression: > > $(generate [e| law "idempotent" (insert a (insert a b) == insert a b) |]) > > should generate the code > > property $ \a b -> insert a (insert a b) === insert a b > > I do this by looking for UnboundVarEs in the Exp returned by the [e| quote, and binding them in a lambda. All of this works. > > However, now I'm trying to get the inferred types of `a` and `b` in the above. GHC clearly is typechecking the quote, since it will fail if I replace `b` with something nonsensical. Is there some existent way to get the inferred type of an UnboundVarE --- ideally without reimplementing the typechecker? > > Thanks! > Sandy > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Wed Mar 18 18:46:47 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 18 Mar 2020 14:46:47 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org Message-ID: hey everyone, because so much important stuff for the community lives in this org, it makes sense to make 2FA required for the org. Are there any good reasons to either wait to do this, or not do it? Feedback welcome! (If there are no objections I'll do it Friday or this weekend, so there's some lead time for anyone who's not set up for that yet.) Best wishes and great health to all -carter -------------- next part -------------- An HTML attachment was scrubbed...
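Sandy's transformation (collecting `UnboundVarE`s from the quoted `Exp` and binding them in a lambda) can be sketched directly against the template-haskell syntax types. This is a minimal, hypothetical illustration rather than code from the thread: `collectUnbound` and `bindUnbound` are invented names, and the traversal covers only a few `Exp` constructors; a real version would want a generic traversal (e.g. via Data.Generics).

```haskell
import Data.List (nub)
import Language.Haskell.TH

-- Hypothetical sketch: gather the names of unbound variables in an Exp,
-- left to right, without duplicates. Only a handful of constructors are
-- handled here; a complete version would cover every Exp constructor.
collectUnbound :: Exp -> [Name]
collectUnbound = nub . go
  where
    go (UnboundVarE n)   = [n]
    go (AppE f x)        = go f ++ go x
    go (InfixE ml op mr) = foldMap go ml ++ go op ++ foldMap go mr
    go (ParensE x)       = go x
    go _                 = []

-- Bind each unbound variable in a lambda, turning UnboundVarE occurrences
-- into ordinary VarE references, so that
--   insert a (insert a b) == insert a b
-- becomes
--   \a b -> insert a (insert a b) == insert a b
bindUnbound :: Exp -> Exp
bindUnbound e = LamE (map VarP (collectUnbound e)) (rename e)
  where
    rename (UnboundVarE n)   = VarE n
    rename (AppE f x)        = AppE (rename f) (rename x)
    rename (InfixE ml op mr) = InfixE (fmap rename ml) (rename op) (fmap rename mr)
    rename (ParensE x)       = ParensE (rename x)
    rename x                 = x
```

Applied to the `Exp` for `insert a (insert a b) == insert a b`, `collectUnbound` yields `a` and `b`, and `bindUnbound` produces the `\a b -> ...` lambda that a `generate` splice would then pass to `property`.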
URL: From sandy at sandymaguire.me Wed Mar 18 18:54:00 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Wed, 18 Mar 2020 11:54:00 -0700 Subject: Getting the inferred types of TH's UnboundVarEs In-Reply-To: References: Message-ID: I mean if `insert :: a -> Container a -> Container a`, and I call it with `[e| insert 5 True |]`, the quote will fail. The goal here is to generate `Fn f` patterns in the property lambda whenever the `UnboundVarE` is a function. For example, today if I am given this: [e| law "length/map" (length as == length (map f as)) |] the generated code will be property $ \as f -> length as === length (map f as) when I would prefer to generate property $ \as (Fn f) -> length as === length (map f as) which will have significantly better UX. I'm willing to write a bad typechecker for `Exp`s, but really hoping I won't have to. Thanks! On Wed, Mar 18, 2020 at 1:04 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Good morning Sandy, thanks for your email. > > I don't think that GHC will typecheck the quote until you splice it > in. What exactly do you mean that it fails if `b` is replaced with > something different? > > What are you hoping to do with this information? > > This reminds me a bit of the `qTypecheck` action I have implemented on > another branch - > https://gitlab.haskell.org/ghc/ghc/issues/17565#note_242199 > > Cheers, > > Matt > > On Wed, Mar 18, 2020 at 1:56 AM Sandy Maguire > wrote: > > > > Hi all, > > > > I'm writing some TH code that should generate property tests. For > example, the expression: > > > > $(generate [e| law "idempotent" (insert a (insert a b) == insert a b) |]) > > > > should generate the code > > > > property $ \a b -> insert a (insert a b) === insert a b > > > > I do this by looking for UnboundVarEs in the Exp returned by the [e| > quote, and binding them in a lambda. All of this works. > > > > However, now I'm trying to get the inferred types of `a` and `b` in the > above. 
GHC clearly is typechecking the quote, since it will fail if I > replace `b` with something nonsensical. Is there some existent way to get > the inferred type of an UnboundVarE --- ideally without reimplementing the > typechecker? > > > > Thanks! > > Sandy > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Wed Mar 18 19:45:21 2020 From: david.feuer at gmail.com (David Feuer) Date: Wed, 18 Mar 2020 15:45:21 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: That's not a lot of lead time. On Wed, Mar 18, 2020, 2:47 PM Carter Schonwald wrote: > hey everyone, because so much important stuff for the community, it makes > sense to add 2fa required for the org, are there any good reasons to either > wait to do this, or not do it? Feedback welcome! > > (if theres no objections i'll do it friday or this weekend, so theres some > lead time for anyone who's not setup for that yet) > > Best wishes and great health to all > -carter > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Mar 18 20:09:01 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 18 Mar 2020 16:09:01 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: true, otoh, 2fa in various usable forms has been widely available for years, and we can reonboard people pretty easily. Its critical haskell infra and to the best of my knowledge, current 2fa tooling is pretty accessible to everyone globally. 
If someone has specific issues we can address them as they arise! On Wed, Mar 18, 2020 at 3:45 PM David Feuer wrote: > That's not a lot of lead time. > > On Wed, Mar 18, 2020, 2:47 PM Carter Schonwald > wrote: > >> hey everyone, because so much important stuff for the community, it makes >> sense to add 2fa required for the org, are there any good reasons to either >> wait to do this, or not do it? Feedback welcome! >> >> (if theres no objections i'll do it friday or this weekend, so theres >> some lead time for anyone who's not setup for that yet) >> >> Best wishes and great health to all >> -carter >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chessai1996 at gmail.com Wed Mar 18 21:28:51 2020 From: chessai1996 at gmail.com (chessai .) Date: Wed, 18 Mar 2020 14:28:51 -0700 Subject: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: I agree with Carter here, 2FA is very accessible and if someone runs into a problem we can just tell them they need 2FA. It wouldn't be much more than a 10 minute disruption. Perhaps we could send out notice that it will take into effect at a certain point, maybe giving people a week or two. But we should really have this be mandatory. On Wed, Mar 18, 2020, 1:09 PM Carter Schonwald wrote: > true, otoh, 2fa in various usable forms has been widely available for > years, and we can reonboard people pretty easily. Its critical haskell > infra and to the best of my knowledge, current 2fa tooling is pretty > accessible to everyone globally. If someone has specific issues we can > address them as they arise! > > On Wed, Mar 18, 2020 at 3:45 PM David Feuer wrote: > >> That's not a lot of lead time. 
>> >> On Wed, Mar 18, 2020, 2:47 PM Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> hey everyone, because so much important stuff for the community, it >>> makes sense to add 2fa required for the org, are there any good reasons to >>> either wait to do this, or not do it? Feedback welcome! >>> >>> (if theres no objections i'll do it friday or this weekend, so theres >>> some lead time for anyone who's not setup for that yet) >>> >>> Best wishes and great health to all >>> -carter >>> _______________________________________________ >>> Libraries mailing list >>> Libraries at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries >>> >> -- > You received this message because you are subscribed to the Google Groups > "haskell-core-libraries" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to haskell-core-libraries+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/haskell-core-libraries/CAHYVw0xA-Fzh1G-YUxWtV1HgOg%3D5o4jwYKNOipU-7dpYrYmA-g%40mail.gmail.com > > . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandreR_B at outlook.com Wed Mar 18 21:38:31 2020 From: alexandreR_B at outlook.com (Alexandre Rodrigues Baldé) Date: Wed, 18 Mar 2020 21:38:31 +0000 Subject: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: , Message-ID: This is a sensible measure, but I doubt people that contribute to GHC via GitHub (even if just for read access) are on this mailing list. Perhaps an issue can be created to notify people of this, rather than let them run into errors and wonder what they did wrong. ________________________________ From: ghc-devs on behalf of chessai .
Sent: Wednesday, March 18, 2020 9:28:51 PM To: Carter Schonwald Cc: David Feuer ; Haskell Libraries ; ghc-devs ; core-libraries-committee at haskell.org Subject: Re: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org I agree with Carter here, 2FA is very accessible and if someone runs into a problem we can just tell them they need 2FA. It wouldn't be much more than a 10 minute disruption. Perhaps we could send out notice that it will take into effect at a certain point, maybe giving people a week or two. But we should really have this be mandatory. On Wed, Mar 18, 2020, 1:09 PM Carter Schonwald > wrote: true, otoh, 2fa in various usable forms has been widely available for years, and we can reonboard people pretty easily. Its critical haskell infra and to the best of my knowledge, current 2fa tooling is pretty accessible to everyone globally. If someone has specific issues we can address them as they arise! On Wed, Mar 18, 2020 at 3:45 PM David Feuer > wrote: That's not a lot of lead time. On Wed, Mar 18, 2020, 2:47 PM Carter Schonwald > wrote: hey everyone, because so much important stuff for the community, it makes sense to add 2fa required for the org, are there any good reasons to either wait to do this, or not do it? Feedback welcome! (if theres no objections i'll do it friday or this weekend, so theres some lead time for anyone who's not setup for that yet) Best wishes and great health to all -carter _______________________________________________ Libraries mailing list Libraries at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries -- You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group. To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/haskell-core-libraries/CAHYVw0xA-Fzh1G-YUxWtV1HgOg%3D5o4jwYKNOipU-7dpYrYmA-g%40mail.gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Mar 18 22:07:25 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 18 Mar 2020 18:07:25 -0400 Subject: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: theres a reason i emailed ghc-devs + libraries, to make sure its visible! we can remediate any perms that get busted that need to be reinstated, but its been sitting like it is for long enough :) On Wed, Mar 18, 2020 at 5:38 PM Alexandre Rodrigues Baldé < alexandreR_B at outlook.com> wrote: > This is a sensible measure, but I doubt people that contribute to GHC via > GitHub (even if just for read access) are on this maling list. > > > > Perhaps an issue can be created to notify people of this, rather than let > them run into errors and wonder what they did wrong. > > > ------------------------------ > *De:* ghc-devs em nome de chessai . < > chessai1996 at gmail.com> > *Enviado:* Wednesday, March 18, 2020 9:28:51 PM > *Para:* Carter Schonwald > *Cc:* David Feuer ; Haskell Libraries < > libraries at haskell.org>; ghc-devs ; > core-libraries-committee at haskell.org > > *Assunto:* Re: [core libraries] Re: intent to enable 2fa requirement for > github.com/haskell org > > I agree with Carter here, 2FA is very accessible and if someone runs into > a problem we can just tell them they need 2FA. It wouldn't be much more > than a 10 minute disruption. Perhaps we could send out notice that it will > take into effect at a certain point, maybe giving people a week or two. But > we should really have this be mandatory. 
> > On Wed, Mar 18, 2020, 1:09 PM Carter Schonwald > wrote: > > true, otoh, 2fa in various usable forms has been widely available for > years, and we can reonboard people pretty easily. Its critical haskell > infra and to the best of my knowledge, current 2fa tooling is pretty > accessible to everyone globally. If someone has specific issues we can > address them as they arise! > > On Wed, Mar 18, 2020 at 3:45 PM David Feuer wrote: > > That's not a lot of lead time. > > On Wed, Mar 18, 2020, 2:47 PM Carter Schonwald > wrote: > > hey everyone, because so much important stuff for the community, it makes > sense to add 2fa required for the org, are there any good reasons to either > wait to do this, or not do it? Feedback welcome! > > (if theres no objections i'll do it friday or this weekend, so theres some > lead time for anyone who's not setup for that yet) > > Best wishes and great health to all > -carter > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > > -- > You received this message because you are subscribed to the Google Groups > "haskell-core-libraries" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to haskell-core-libraries+unsubscribe at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/haskell-core-libraries/CAHYVw0xA-Fzh1G-YUxWtV1HgOg%3D5o4jwYKNOipU-7dpYrYmA-g%40mail.gmail.com > > . > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Mar 18 23:05:16 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 18 Mar 2020 19:05:16 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: No. You don’t. You can use a yubi key and or a totp tool like google Authenticator or 1Password etc. 
no phones required On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts wrote: > On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote: > > hey everyone, because so much important stuff for the community, it > > makes sense to add 2fa required for the org, are there any good > > reasons to either wait to do this, or not do it? Feedback welcome! > > I think I might get cut off. > > Is it not still the case that github's 2fa needs a program running on a > mobile phone, or an SMS-capable mobile phone? Is there any support for > normal tools running on a normal Linux machine? > > (I think last time I tried to use the SMS route, it refused to send SMS > messages to my landline, despite the fact that I can receive them) > > > Duncan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Mar 18 23:52:36 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 18 Mar 2020 19:52:36 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: Awesome! After talking with several folks, feedback has been that best practices are to make sure the notice is a week before hand. So what I’ll do is personally reach out to those who aren’t 2fa enabled in the Haskell gh org (and haven’t commented on this thread )and ask them to enable 2fa on their GitHub account. Perhaps I should attach a 2fa options explainer ! I’ll look at folks responses and if everyone active has made the switch over, I’ll look to do a transition next Monday or Tuesday. Be well! (Nyc and many other places are pretty strange right now :/ ) -Carter On Wed, Mar 18, 2020 at 7:42 PM Duncan Coutts wrote: > On Wed, 2020-03-18 at 19:05 -0400, Carter Schonwald wrote: > > No. You don’t. You can use a yubi key and or a totp tool like google > > Authenticator or 1Password etc. no phones required > > It took me a while, but I have successfully managed to turn 2FA back > into 1FA. 
> > In case it helps anyone else, generate your 2FA response with > > $ oathtool --totp -b $the-2fa-secret > > Where $the-2fa-secret is the code github gives you after the recovery > codes (initially shown as a barcode, but they'll give you the actual > code if you click the link). > > > On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts > wrote: > > > On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote: > > > > hey everyone, because so much important stuff for the community, it > > > > makes sense to add 2fa required for the org, are there any good > > > > reasons to either wait to do this, or not do it? Feedback welcome! > > > > > > I think I might get cut off. > > > > > > Is it not still the case that github's 2fa needs a program running on a > > > mobile phone, or an SMS-capable mobile phone? Is there any support for > > > normal tools running on a normal Linux machine? > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed Mar 18 19:05:39 2020 From: ben at well-typed.com (Ben Gamari) Date: Wed, 18 Mar 2020 15:05:39 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: <3F698C66-BFAE-46DA-B902-BE079C8DD69E@well-typed.com> I agree that this would be a good idea. Cheers, — Ben On March 18, 2020 2:46:47 PM EDT, Carter Schonwald wrote: >hey everyone, because so much important stuff for the community, it >makes >sense to add 2fa required for the org, are there any good reasons to >either >wait to do this, or not do it? Feedback welcome! > >(if theres no objections i'll do it friday or this weekend, so theres >some >lead time for anyone who's not setup for that yet) > >Best wishes and great health to all >-carter -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Thu Mar 19 08:57:26 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 19 Mar 2020 08:57:26 +0000 Subject: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: I have not been following this. What is the consequence for a regular GHC developer, or someone contributing to GHC? In any announcement please give a way to verify “am I affected?” Thanks Simon From: haskell-core-libraries at googlegroups.com On Behalf Of Carter Schonwald Sent: 18 March 2020 23:53 To: Duncan Coutts Cc: Haskell Libraries ; core-libraries-committee at haskell.org; ghc-devs Subject: [core libraries] Re: intent to enable 2fa requirement for github.com/haskell org Awesome! After talking with several folks, feedback has been that best practices are to make sure the notice is a week before hand. So what I’ll do is personally reach out to those who aren’t 2fa enabled in the Haskell gh org (and haven’t commented on this thread )and ask them to enable 2fa on their GitHub account. Perhaps I should attach a 2fa options explainer ! I’ll look at folks responses and if everyone active has made the switch over, I’ll look to do a transition next Monday or Tuesday. Be well! (Nyc and many other places are pretty strange right now :/ ) -Carter On Wed, Mar 18, 2020 at 7:42 PM Duncan Coutts > wrote: On Wed, 2020-03-18 at 19:05 -0400, Carter Schonwald wrote: > No. You don’t. You can use a yubi key and or a totp tool like google > Authenticator or 1Password etc. no phones required It took me a while, but I have successfully managed to turn 2FA back into 1FA. In case it helps anyone else, generate your 2FA response with $ oathtool --totp -b $the-2fa-secret Where $the-2fa-secret is the code github gives you after the recovery codes (initially shown as a barcode, but they'll give you the actual code if you click the link). 
> On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts > wrote: > > On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote: > > > hey everyone, because so much important stuff for the community, it > > > makes sense to add 2fa required for the org, are there any good > > > reasons to either wait to do this, or not do it? Feedback welcome! > > > > I think I might get cut off. > > > > Is it not still the case that github's 2fa needs a program running on a > > mobile phone, or an SMS-capable mobile phone? Is there any support for > > normal tools running on a normal Linux machine? > > -- You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group. To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/haskell-core-libraries/CAHYVw0x5CTOmQDLp3%2B89muQ%2BvXgmcmgo%3DgCHs8kjBHOMb%3D5Ksw%40mail.gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Thu Mar 19 09:44:15 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 19 Mar 2020 09:44:15 +0000 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: <2EC32AA7-BA58-4456-AAB0-0950A7D2DC8A@richarde.dev> > On Mar 18, 2020, at 11:52 PM, Carter Schonwald wrote: > > After talking with several folks, feedback has been that best practices are to make sure the notice is a week before hand. > > So what I’ll do is personally reach out to those who aren’t 2fa enabled in the Haskell gh org (and haven’t commented on this thread )and ask them to enable 2fa on their GitHub account. Perhaps I should attach a 2fa options explainer ! > > I’ll look at folks responses and if everyone active has made the switch over, I’ll look to do a transition next Monday or Tuesday. > If best practices are to wait a week... 
shouldn't we wait a week? There's no fire here. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Thu Mar 19 09:51:13 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 19 Mar 2020 09:51:13 +0000 Subject: Getting the inferred types of TH's UnboundVarEs In-Reply-To: References: Message-ID: <29C61A5A-E3B2-455A-8377-43531D24889D@richarde.dev> Good to see you around, Sandy! > On Mar 18, 2020, at 6:54 PM, Sandy Maguire wrote: > > I mean if `insert :: a -> Container a -> Container a`, and I call it with `[e| insert 5 True |]`, the quote will fail. I don't observe this. Specifically, when I compile > {-# LANGUAGE TemplateHaskellQuotes #-} > > module Bug where > > import Prelude ( Bool(..), undefined ) > > data Container a > > insert :: a -> Container a -> Container a > insert = undefined > > quote = [e| insert 5 True |] GHC happily succeeds. I think what you want, though, is reasonable: you want the ability to send an expression through GHC's type-checker. I think we'd need to extend TH to be able to support this, and it will be hard to come up with a good design, I think. (Specifically, I'm worried about interactions with top-level defined entities, whose types might not really be known by the time of splice processing.) This might all be worthwhile -- singletons would be able to be improved with this, for example -- but it's not cheap, sadly. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Mar 19 14:55:29 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 19 Mar 2020 10:55:29 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: <2EC32AA7-BA58-4456-AAB0-0950A7D2DC8A@richarde.dev> References: <2EC32AA7-BA58-4456-AAB0-0950A7D2DC8A@richarde.dev> Message-ID: @ Simon: you already have 2fa enabled, youre not on the list of users who do *not* have 2fa enabled. 
Its just an extra login prompt the first time you login from a new device or do anything in the "are you sure you want to do that change". SO enabling 2fa is largely invisible to contributors aside from the 5 minutes to setup, and the message i sent out directly to every person who would be impacted that hasn't already replied to this email thread listed a number of options that could choose (though i should have also included a url, but if anyones confused i hope they ask and I can help) @richard indeed, this is why i also directly and individually emailed every member/contributor of the github haskell org individually (who doesnt have 2fa setup). Some of them dont have an easy to track down email address! Basically everyone who's been active in the past two years has responded already or indicated they'll set it up this coming weekend. (in 1-2 cases, it helped remind that they'd forgotten to setup 2fa even though they had planned to ) On Thu, Mar 19, 2020 at 5:44 AM Richard Eisenberg wrote: > > > On Mar 18, 2020, at 11:52 PM, Carter Schonwald > wrote: > > After talking with several folks, feedback has been that best practices > are to make sure the notice is a week before hand. > > So what I’ll do is personally reach out to those who aren’t 2fa enabled in > the Haskell gh org (and haven’t commented on this thread )and ask them to > enable 2fa on their GitHub account. Perhaps I should attach a 2fa options > explainer ! > > I’ll look at folks responses and if everyone active has made the switch > over, I’ll look to do a transition next Monday or Tuesday. > > > If best practices are to wait a week... shouldn't we wait a week? There's > no fire here. > > Richard > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sandy at sandymaguire.me Thu Mar 19 17:31:10 2020 From: sandy at sandymaguire.me (Sandy Maguire) Date: Thu, 19 Mar 2020 10:31:10 -0700 Subject: Getting the inferred types of TH's UnboundVarEs In-Reply-To: <29C61A5A-E3B2-455A-8377-43531D24889D@richarde.dev> References: <29C61A5A-E3B2-455A-8377-43531D24889D@richarde.dev> Message-ID: I'm also generating code at the same time, and might have gotten confused by that interaction :) In the meantime I guess I'll implement HM. The world will be a much better place when TTG is finished and we have ghc-as-an-easy-to-use-library :) On Thu, Mar 19, 2020 at 2:51 AM Richard Eisenberg wrote: > Good to see you around, Sandy! > > On Mar 18, 2020, at 6:54 PM, Sandy Maguire wrote: > > I mean if `insert :: a -> Container a -> Container a`, and I call it with > `[e| insert 5 True |]`, the quote will fail. > > > I don't observe this. Specifically, when I compile > > {-# LANGUAGE TemplateHaskellQuotes #-} > > module Bug where > > import Prelude ( Bool(..), undefined ) > > data Container a > > insert :: a -> Container a -> Container a > insert = undefined > > quote = [e| insert 5 True |] > > > GHC happily succeeds. > > I think what you want, though, is reasonable: you want the ability to send > an expression through GHC's type-checker. I think we'd need to extend TH to > be able to support this, and it will be hard to come up with a good design, > I think. (Specifically, I'm worried about interactions with top-level > defined entities, whose types might not really be known by the time of > splice processing.) This might all be worthwhile -- singletons would be > able to be improved with this, for example -- but it's not cheap, sadly. > > Richard > -------------- next part -------------- An HTML attachment was scrubbed... 
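The `Fn` rewrite Sandy described earlier in this thread is mechanical once the type information exists. Below is a hedged sketch, assuming the current template-haskell API (where `ConP` also carries a list of type arguments, empty here): `mkPropPats` and `mkProperty` are hypothetical helper names, and the `isFun` predicate stands in for exactly the type information the thread is asking how to obtain.

```haskell
import Language.Haskell.TH

-- Hypothetical sketch: choose a pattern for each variable to be bound in
-- the generated property lambda. Function-typed variables get QuickCheck's
-- `Fn` pattern (from Test.QuickCheck.Function), so the property receives a
-- showable, shrinkable function; everything else is a plain variable.
-- `isFun` is a stand-in for the missing type information.
mkPropPats :: (Name -> Bool) -> [Name] -> [Pat]
mkPropPats isFun = map toPat
  where
    toPat n
      | isFun n   = ConP (mkName "Fn") [] [VarP n]  -- matches \(Fn f) -> ...
      | otherwise = VarP n                          -- matches \as     -> ...

-- Assemble `property $ \as (Fn f) -> body` as an Exp.
mkProperty :: (Name -> Bool) -> [Name] -> Exp -> Exp
mkProperty isFun ns body =
  AppE (VarE (mkName "property")) (LamE (mkPropPats isFun ns) body)
```

On the `length/map` example, `mkPropPats` with a predicate that marks `f` as a function yields the patterns for `\as (Fn f) -> ...`; how to implement `isFun` without reimplementing the typechecker is precisely the open question above.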
URL: From rae at richarde.dev Fri Mar 20 09:55:33 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 20 Mar 2020 09:55:33 +0000 Subject: Getting the inferred types of TH's UnboundVarEs In-Reply-To: References: <29C61A5A-E3B2-455A-8377-43531D24889D@richarde.dev> Message-ID: <14B1A719-797F-4ABD-8F56-D5A8AF29CC0D@richarde.dev> > On Mar 19, 2020, at 5:31 PM, Sandy Maguire wrote: > > The world will be a much better place when TTG is finished and we have ghc-as-an-easy-to-use-library As much as any software is ever "finished", I'd say TTG is finished. That is, I think the structure is ready for us to consider e.g. Introspective Template Haskell (https://gitlab.haskell.org/ghc/ghc/wikis/template-haskell/introspective ), which may be what you were thinking of when you wrote the sentence above. This would be a good deal of work, but I think it would move us forward nicely, and I think it's a reasonable time to contemplate doing this, if one were motivated. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat Mar 21 16:45:50 2020 From: ben at well-typed.com (Ben Gamari) Date: Sat, 21 Mar 2020 12:45:50 -0400 Subject: Brief GitLab downtime Message-ID: <87o8sp7j2d.fsf@smart-cactus.org> Hi everyone, I'll be rebooting gitlab.haskell.org for a short upgrade. Shouldn't be more than a few minutes. As usual, I'll email when things are back up. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sat Mar 21 18:51:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Sat, 21 Mar 2020 14:51:49 -0400 Subject: Brief GitLab downtime In-Reply-To: <87o8sp7j2d.fsf@smart-cactus.org> References: <87o8sp7j2d.fsf@smart-cactus.org> Message-ID: <87lfnt7d8f.fsf@smart-cactus.org> Ben Gamari writes: > Hi everyone, > > I'll be rebooting gitlab.haskell.org for a short upgrade. Shouldn't be > more than a few minutes. As usual, I'll email when things are back up. > Unfortunately due to an operating system bug [1] this upgrade has resulted in down-time. I'm working on resolving the issue but it may be a while longer. Cheers, - Ben [1] https://github.com/NixOS/nixpkgs/issues/69360#issuecomment-558823357 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sun Mar 22 01:06:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Sat, 21 Mar 2020 21:06:49 -0400 Subject: Brief GitLab downtime In-Reply-To: <87o8sp7j2d.fsf@smart-cactus.org> References: <87o8sp7j2d.fsf@smart-cactus.org> Message-ID: <87imix6vve.fsf@smart-cactus.org> Ben Gamari writes: > Hi everyone, > > I'll be rebooting gitlab.haskell.org for a short upgrade. Shouldn't be > more than a few minutes. As usual, I'll email when things are back up. > Service has been restored. Many apologies for the unexpected downtime. Hopefully things should now be in better shape to avoid this sort of problem in the future. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sun Mar 22 01:20:59 2020 From: ben at well-typed.com (Ben Gamari) Date: Sat, 21 Mar 2020 21:20:59 -0400 Subject: Brief GitLab downtime In-Reply-To: <87imix6vve.fsf@smart-cactus.org> References: <87o8sp7j2d.fsf@smart-cactus.org> <87imix6vve.fsf@smart-cactus.org> Message-ID: <87eetl6v7t.fsf@smart-cactus.org> Ben Gamari writes: > Ben Gamari writes: > >> Hi everyone, >> >> I'll be rebooting gitlab.haskell.org for a short upgrade. Shouldn't be >> more than a few minutes. As usual, I'll email when things are back up. >> > Service has been restored. Many apologies for the unexpected downtime. > Hopefully things should now be in better shape to avoid this sort of problem > in the future. > There may be one more brief outage in a few minutes while a support engineer does a hardware check on our server. However, this should be fairly short-lived. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Tue Mar 24 10:57:32 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 24 Mar 2020 10:57:32 +0000 Subject: Roadmap to compacting ModIface Message-ID: Hello all, I have written down the remaining steps which need to be taken in order to compact a ModIface, which we hope will be useful for applications such as IDEs to reduce GC time. https://gitlab.haskell.org/ghc/ghc/issues/17097#roadmap-to-compacting-a-modiface If there is anyone who wishes to help with this project then please ping me on IRC. So far this is joint work between myself and Daniel G. The first step we need to take is to get 1675 merged which replaces the type backing a FastString from a ByteString to a ShortByteString (and hence from a pinned ByteArray to an unpinned ByteArray). 
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1675

Cheers,

Matt

From ben at well-typed.com  Tue Mar 24 15:02:01 2020
From: ben at well-typed.com (Ben Gamari)
Date: Tue, 24 Mar 2020 11:02:01 -0400
Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1 released
Message-ID: <87369xbxuo.fsf@smart-cactus.org>

Hello all,

The GHC team is happy to announce the availability of GHC 8.10.1. Source
and binary distributions are available at the usual place:

    https://downloads.haskell.org/ghc/8.10.1/

GHC 8.10.1 brings a number of new features including:

 * The new UnliftedNewtypes extension, allowing newtypes around unlifted
   types.

 * The new StandaloneKindSignatures extension, which allows users to give
   top-level kind signatures to type, type family, and class declarations.

 * A new warning, -Wderiving-defaults, to draw attention to ambiguous
   deriving clauses.

 * A number of improvements in code generation.

 * A new GHCi command, :instances, for listing the class instances
   available for a type.

 * An upgraded Windows toolchain lifting the MAX_PATH limitation.

 * A new, low-latency garbage collector.

 * Improved support for profiling, including support for sending profiler
   samples to the eventlog, allowing correlation between the profile and
   other program events.

Note that at the moment we still require that macOS Catalina users exempt
the binary distribution from the notarization requirement by running
`xattr -cr .` on the unpacked tree before running `make install`. This
situation will hopefully be improved for GHC 8.10.2 with the resolution
of #17418 [1].

Cheers,

- Ben

[1] https://gitlab.haskell.org/ghc/ghc/issues/17418
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Mar 24 15:16:14 2020 From: ben at well-typed.com (Ben Gamari) Date: Tue, 24 Mar 2020 11:16:14 -0400 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1 released In-Reply-To: <87369xbxuo.fsf@smart-cactus.org> References: <87369xbxuo.fsf@smart-cactus.org> Message-ID: <87zhc5aimf.fsf@smart-cactus.org> Ben Gamari writes: > Hello all, > > The GHC team is happy to announce the availability of GHC 8.10.1. Source > and binary distributions are available at the usual place: > > https://downloads.haskell.org/ghc/8.10.1/ Note that the release notes can be found here: https://downloads.haskell.org/ghc/8.10.1/docs/html/users_guide/8.10.1-notes.html Further, the migration guide can be found here: https://gitlab.haskell.org/ghc/ghc/-/wikis/migration/8.10 Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Tue Mar 24 21:27:35 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 24 Mar 2020 21:27:35 +0000 Subject: Roadmap to compacting ModIface In-Reply-To: References: Message-ID: Thanks for writing this down Matthew. But I look at #17097 and I am baffled. Why is that the right list of tasks? Why do we need FastStrings backed by an unpinned ByteArray? (And similarly for each other bullet.) What will the API look like if this project is successful? Why do we want ModIfaces in a compact region? To reduce residency? Do we have data showing that this is a real issue in practice. I feel as if a wiki page to explain the problem and articulate the proposed solution would make it easier for outsiders to contribute. 
Thanks

Simon

| -----Original Message-----
| From: ghc-devs On Behalf Of Matthew
| Pickering
| Sent: 24 March 2020 10:58
| To: GHC developers
| Subject: Roadmap to compacting ModIface
|
| Hello all,
|
| I have written down the remaining steps which need to be taken in
| order to compact a ModIface, which we hope will be useful for
| applications such as IDEs to reduce GC time.
|
| https://gitlab.haskell.org/ghc/ghc/issues/17097#roadmap-to-compacting-a-modiface
|
| If there is anyone who wishes to help with this project then please
| ping me on IRC. So far this is joint work between myself and Daniel G.
|
| The first step we need to take is to get 1675 merged which replaces
| the type backing a FastString from a ByteString to a ShortByteString
| (and hence from a pinned ByteArray to an unpinned ByteArray).
|
| https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1675
|
| Cheers,
|
| Matt
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From matthewtpickering at gmail.com  Tue Mar 24 21:30:35 2020
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Tue, 24 Mar 2020 21:30:35 +0000
Subject: Roadmap to compacting ModIface
In-Reply-To: References: Message-ID:

The things which can't be compacted are

* Pinned objects
* Functions
* Mutable variables

It is only a hypothesis at the moment that compacting a ModIface will
help GC times in an IDE, but in order to try it we have to implement
this roadmap.

It is certainly true that the EPS can get very large for realistic
projects with hundreds of dependencies, and not traversing it during
GC could be a huge win.

Cheers,

Matt

On Tue, Mar 24, 2020 at 9:27 PM Simon Peyton Jones
wrote:
>
> Thanks for writing this down Matthew.
>
> But I look at #17097 and I am baffled. Why is that the right list of
> tasks? Why do we need FastStrings backed by an unpinned ByteArray? (And
> similarly for each other bullet.) What will the API look like if this
> project is successful? Why do we want ModIfaces in a compact region? To
> reduce residency? Do we have data showing that this is a real issue in
> practice?
>
> I feel as if a wiki page to explain the problem and articulate the
> proposed solution would make it easier for outsiders to contribute.
>
> Thanks
>
> Simon
>
> | -----Original Message-----
> | From: ghc-devs On Behalf Of Matthew
> | Pickering
> | Sent: 24 March 2020 10:58
> | To: GHC developers
> | Subject: Roadmap to compacting ModIface
> |
> | Hello all,
> |
> | I have written down the remaining steps which need to be taken in
> | order to compact a ModIface, which we hope will be useful for
> | applications such as IDEs to reduce GC time.
> |
> | https://gitlab.haskell.org/ghc/ghc/issues/17097#roadmap-to-compacting-a-modiface
> |
> | If there is anyone who wishes to help with this project then please
> | ping me on IRC. So far this is joint work between myself and Daniel G.
> |
> | The first step we need to take is to get 1675 merged which replaces
> | the type backing a FastString from a ByteString to a ShortByteString
> | (and hence from a pinned ByteArray to an unpinned ByteArray).
> |
> | https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1675
> |
> | Cheers,
> |
> | Matt
> | _______________________________________________
> | ghc-devs mailing list
> | ghc-devs at haskell.org
> | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From omeragacan at gmail.com  Wed Mar 25 10:06:03 2020
From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=)
Date: Wed, 25 Mar 2020 13:06:03 +0300
Subject: Roadmap to compacting ModIface
In-Reply-To: References: Message-ID:

How is ModIface used by IDEs exactly? I'd expect IDEs to use ModDetails,
not ModIface.

They're basically two different representations of the same thing (a
module interface), but ModIface is more focused on serialization and
deserialization (the type is designed to make that easy) and ModDetails
is what GHC is using to e.g. type check an imported module.

For example, in batch mode we add ModDetails to the module graph, not
ModIface, because that's what we use to compile downstream.
(In one-shot mode GHC makes ModDetails for imported modules after reading the interfaces using IfaceToCore.typecheckIface) Ömer Matthew Pickering , 25 Mar 2020 Çar, 00:31 tarihinde şunu yazdı: > > The things which can't be compacted are > > * Pinned objects > * Functions > * Mutable variables > > It is only a hypothesis at the moment that compacting a ModIface will > help GC times in an IDE, but in order to try it we have to implement > this roadmap.. > > It is certainly true that the EPS can get very large for realistic > projects with hundreds of dependencies, and not traversing it during > GC could be a huge win. > > Cheers, > > Matt > > On Tue, Mar 24, 2020 at 9:27 PM Simon Peyton Jones > wrote: > > > > Thanks for writing this down Matthew. > > > > But I look at #17097 and I am baffled. Why is that the right list of tasks? Why do we need FastStrings backed by an unpinned ByteArray? (And similarly for each other bullet.) What will the API look like if this project is successful? Why do we want ModIfaces in a compact region? To reduce residency? Do we have data showing that this is a real issue in practice. > > > > I feel as if a wiki page to explain the problem and articulate the proposed solution would make it easier for outsiders to contribute. > > > > Thanks > > > > Simon > > > > | -----Original Message----- > > | From: ghc-devs On Behalf Of Matthew > > | Pickering > > | Sent: 24 March 2020 10:58 > > | To: GHC developers > > | Subject: Roadmap to compacting ModIface > > | > > | Hello all, > > | > > | I have written down the remaining steps which need to be taken in > > | order to compact a ModIface, which we hope will be useful for > > | applications such as IDEs to reduce GC time. 
> > |
> > | https://gitlab.haskell.org/ghc/ghc/issues/17097#roadmap-to-compacting-a-modiface
> > |
> > | If there is anyone who wishes to help with this project then please
> > | ping me on IRC. So far this is joint work between myself and Daniel G.
> > |
> > | The first step we need to take is to get 1675 merged which replaces
> > | the type backing a FastString from a ByteString to a ShortByteString
> > | (and hence from a pinned ByteArray to an unpinned ByteArray).
> > |
> > | https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1675
> > |
> > | Cheers,
> > |
> > | Matt
> > | _______________________________________________
> > | ghc-devs mailing list
> > | ghc-devs at haskell.org
> > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From matthewtpickering at gmail.com  Wed Mar 25 10:16:49 2020
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Wed, 25 Mar 2020 10:16:49 +0000
Subject:
Roadmap to compacting ModIface
In-Reply-To: References: Message-ID:

Omer, in particular the EPS contains the PackageIfaceTable, which
contains a map of modules to ModIface for external packages. This can end
up containing a lot of ModIfaces in a big project.

In every `HomeModInfo` there is also a `ModIface` which can be compacted
in order to save GC traversal.

ModIface is also chosen because it's possible to serialise/deserialise
it, and hence it is more likely to be easily compactable. ModDetails will
contain too much stuff which can't be compacted; just looking now, the
`TypeEnv` could certainly not be compacted.

Cheers,

Matt

On Wed, Mar 25, 2020 at 10:06 AM Ömer Sinan Ağacan wrote:
>
> How is ModIface used by IDEs exactly? I'd expect IDEs to use ModDetails, not
> ModIface.
>
> They're basically two different representations of the same thing (a module
> interface), but ModIface is more focused on serialization and deserialization
> (the type is designed to make that easy) and ModDetails is what GHC is using to
> e.g. type check an imported module.
>
> For example, in batch mode we add ModDetails to the module graph, not ModIface,
> because that's what we use to compile downstream.
>
> (In one-shot mode GHC makes ModDetails for imported modules after reading the
> interfaces using IfaceToCore.typecheckIface)
>
> Ömer
>
> Matthew Pickering , 25 Mar 2020 Çar,
> 00:31 tarihinde şunu yazdı:
> >
> > The things which can't be compacted are
> >
> > * Pinned objects
> > * Functions
> > * Mutable variables
> >
> > It is only a hypothesis at the moment that compacting a ModIface will
> > help GC times in an IDE, but in order to try it we have to implement
> > this roadmap.
> >
> > It is certainly true that the EPS can get very large for realistic
> > projects with hundreds of dependencies, and not traversing it during
> > GC could be a huge win.
> >
> > Cheers,
> >
> > Matt
> >
> > On Tue, Mar 24, 2020 at 9:27 PM Simon Peyton Jones
> > wrote:
> > >
> > > Thanks for writing this down Matthew.
> > >
> > > But I look at #17097 and I am baffled. Why is that the right list of
> > > tasks? Why do we need FastStrings backed by an unpinned ByteArray?
> > > (And similarly for each other bullet.) What will the API look like if
> > > this project is successful? Why do we want ModIfaces in a compact
> > > region? To reduce residency? Do we have data showing that this is a
> > > real issue in practice?
> > >
> > > I feel as if a wiki page to explain the problem and articulate the
> > > proposed solution would make it easier for outsiders to contribute.
> > >
> > > Thanks
> > >
> > > Simon
> > >
> > > | -----Original Message-----
> > > | From: ghc-devs On Behalf Of Matthew
> > > | Pickering
> > > | Sent: 24 March 2020 10:58
> > > | To: GHC developers
> > > | Subject: Roadmap to compacting ModIface
> > > |
> > > | Hello all,
> > > |
> > > | I have written down the remaining steps which need to be taken in
> > > | order to compact a ModIface, which we hope will be useful for
> > > | applications such as IDEs to reduce GC time.
> > > |
> > > | https://gitlab.haskell.org/ghc/ghc/issues/17097#roadmap-to-compacting-a-modiface
> > > |
> > > | If there is anyone who wishes to help with this project then please
> > > | ping me on IRC. So far this is joint work between myself and Daniel G.
> > > |
> > > | The first step we need to take is to get 1675 merged which replaces
> > > | the type backing a FastString from a ByteString to a ShortByteString
> > > | (and hence from a pinned ByteArray to an unpinned ByteArray).
> > > |
> > > | https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1675
> > > |
> > > | Cheers,
> > > |
> > > | Matt
> > > | _______________________________________________
> > > | ghc-devs mailing list
> > > | ghc-devs at haskell.org
> > > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From carter.schonwald at gmail.com  Wed Mar 25 16:47:55 2020
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Wed, 25 Mar 2020 12:47:55 -0400
Subject: intent to enable 2fa requirement for github.com/haskell org
In-Reply-To: References: Message-ID:

Duncan, David, please figure out 2fa tools that work for you and enable
them:

https://github.com/tadfisher/pass-otp

https://github.com/solokeys/solo

https://github.com/herrjemand/awesome-webauthn#hardware-authenticators

https://1password.com/

https://keepass.info/download.html

if you are having trouble figuring out tools you're comfortable using,
please share with us those constraints we can help you!
I'm here to help (and I'm delaying enabling another day or two to provide
help to some active contributors who are having their own difficulties
setting up this stuff).

On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts wrote:
> On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote:
> > hey everyone, because so much important stuff for the community, it
> > makes sense to add 2fa required for the org, are there any good
> > reasons to either wait to do this, or not do it? Feedback welcome!
>
> I think I might get cut off.
>
> Is it not still the case that github's 2fa needs a program running on a
> mobile phone, or an SMS-capable mobile phone? Is there any support for
> normal tools running on a normal Linux machine?
>
> (I think last time I tried to use the SMS route, it refused to send SMS
> messages to my landline, despite the fact that I can receive them)
>
>
> Duncan
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From giorgio at marinel.li  Wed Mar 25 17:30:20 2020
From: giorgio at marinel.li (Giorgio Marinelli)
Date: Wed, 25 Mar 2020 18:30:20 +0100
Subject: intent to enable 2fa requirement for github.com/haskell org
In-Reply-To: References: Message-ID:

I use the following 2fa tools. They also offer import/export
functionality.

- andOTP (Android) https://github.com/andOTP/andOTP
- OTPClient (Linux) https://github.com/paolostivanin/OTPClient

Regards,


Giorgio

On Wed, 25 Mar 2020 at 17:48, Carter Schonwald wrote:
>
> Duncan, David, please figure out 2fa tools that work for you and enable them,
>
>
> https://github.com/tadfisher/pass-otp
>
> https://github.com/solokeys/solo
>
> https://github.com/herrjemand/awesome-webauthn#hardware-authenticators
>
> https://1password.com/
>
> https://keepass.info/download.html
>
>
> if you are having trouble figuring out tools you're comfortable using, please share with us those constraints we can help you!
> > im here to help (and i'm delaying enabling another day or two to provide help to some active contributors who are having their own difficulties setitng up this stuff) > > On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts wrote: >> >> On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote: >> > hey everyone, because so much important stuff for the community, it >> > makes sense to add 2fa required for the org, are there any good >> > reasons to either wait to do this, or not do it? Feedback welcome! >> >> I think I might get cut off. >> >> Is it not still the case that github's 2fa needs a program running on a >> mobile phone, or an SMS-capable mobile phone? Is there any support for >> normal tools running on a normal Linux machine? >> >> (I think last time I tried to use the SMS route, it refused to send SMS >> messages to my landline, despite the fact that I can receive them) >> >> >> Duncan >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Wed Mar 25 19:46:49 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 25 Mar 2020 15:46:49 -0400 Subject: intent to enable 2fa requirement for github.com/haskell org In-Reply-To: References: Message-ID: Yeah, there’s def an incredible diversity of tools that are great in this space. And there’s at this point decent tools for almost every platform constraint imaginable. On Wed, Mar 25, 2020 at 1:30 PM Giorgio Marinelli wrote: > I use the following 2fa tools. They offer also import/export > functionalities. 
> > - andOTP (Android) https://github.com/andOTP/andOTP > - OTPClient (Linux) https://github.com/paolostivanin/OTPClient > > Regards, > > > Giorgio > > On Wed, 25 Mar 2020 at 17:48, Carter Schonwald > wrote: > > > > Duncan, David, please figure out 2fa tools that work for you and enable > them, > > > > > > https://github.com/tadfisher/pass-otp > > > > https://github.com/solokeys/solo > > > > https://github.com/herrjemand/awesome-webauthn#hardware-authenticators > > > > https://1password.com/ > > > > https://keepass.info/download.html > > > > > > if you are having trouble figuring out tools you're comfortable using, > please share with us those constraints we can help you! > > > > im here to help (and i'm delaying enabling another day or two to provide > help to some active contributors who are having their own difficulties > setitng up this stuff) > > > > On Wed, Mar 18, 2020 at 6:16 PM Duncan Coutts > wrote: > >> > >> On Wed, 2020-03-18 at 14:46 -0400, Carter Schonwald wrote: > >> > hey everyone, because so much important stuff for the community, it > >> > makes sense to add 2fa required for the org, are there any good > >> > reasons to either wait to do this, or not do it? Feedback welcome! > >> > >> I think I might get cut off. > >> > >> Is it not still the case that github's 2fa needs a program running on a > >> mobile phone, or an SMS-capable mobile phone? Is there any support for > >> normal tools running on a normal Linux machine? > >> > >> (I think last time I tried to use the SMS route, it refused to send SMS > >> messages to my landline, despite the fact that I can receive them) > >> > >> > >> Duncan > >> > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Fri Mar 27 22:32:44 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 27 Mar 2020 22:32:44 +0000 Subject: Advice implementing new constraint entailment rules In-Reply-To: References: Message-ID: I have made some progress towards the implementation but am stuck on how to get the right desugaring. For example if the source program is foo :: CodeC (Show a) => Code (a -> String) foo = [| show |] Then the current approach is to canonicalise all the constraints to remove the `CodeC`. The issue with this I have found is that the evidence gets bound in the wrong place: ``` foo d = let d' = $d in [| show d' |] ``` It should rather be ``` foo d = [| let d' = $d in show d' |] ``` Now I am trying to think of ways to make the evidence binding be bound in the right place. So there are a few things I thought of, 1. Attempt to float evidence bindings back inwards to the right level after they are solved, this doesn't feel great as they are floated outwards already. 2. Don't canonicalise the constraints in the normal manner, when leaving the context of a quote, capture the wanted constraints (in this example Show a) and emit a (CodeC (Show a)) constraint whilst inserting the evidence binding inside the quote. I prefer option 2 but inside `WantedConstraints` there are `Ct`s which may be already canonicalised. Trying a few examples shows me that the `Show a` constraint in this example is not canonicalised already but it feels a bit dirty to dig into a `Ct` to find non canonical constraints to re-emit. Any hints about how to make sure the evidence is bound in the correct place? Matt On Thu, Mar 5, 2020 at 9:24 AM Simon Peyton Jones wrote: > > Hi Matt > > I think you are right to say that we need to apply proper staging to the constraint solver. But I don't understand your constraint rewriting rules. > > Before moving to the implementation, could we discuss the specification? 
You already have some typeset rules in a paper of some kind, which I commented on some time ago. Could you elaborate those rules with class constraints? Then we'd have something tangible to debate. > > Thanks > > Simon > > | -----Original Message----- > | From: ghc-devs On Behalf Of Matthew > | Pickering > | Sent: 05 March 2020 08:16 > | To: GHC developers > | Subject: Advice implementing new constraint entailment rules > | > | Hello, > | > | I am attempting to implement two new constraint entailment rules which > | dictate how to implement a new constraint form "CodeC" can be used to > | satisfy constraints. > | > | The main idea is that all constraints store the level they they are > | introduced and required (in the Template Haskell sense of level) and > | that only constraints of the right level can be used. > | > | The "CodeC" constraint form allows the level of constraints to be > | manipulated. > | > | Therefore the two rules > | > | In order to implement this I want to add two constraint rewriting > | rules in the following way: > | > | 1. If in a given, `CodeC C @ n` ~> `C @ n+1` > | 2. If in a wanted `CodeC C @ n` -> `C @ n - 1` > | > | Can someone give me some pointers about the specific part of the > | constraint solver where I should add these rules? I am unsure if this > | rewriting of wanted constraints already occurs or not. 
> |
> | Cheers,
> |
> | Matt
> | _______________________________________________
> | ghc-devs mailing list
> | ghc-devs at haskell.org
> | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask
> | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-
> | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C52ec5ca4f50c496b25e808d7
> | c0dd8534%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637189929963530670&a
> | mp;sdata=0T2O%2FaAcIU9Yl61x2uPzl4zUG4P3jl6iA97baIDlSsM%3D&reserved=0

From lexi.lambda at gmail.com  Sun Mar 29 08:38:31 2020
From: lexi.lambda at gmail.com (Alexis King)
Date: Sun, 29 Mar 2020 03:38:31 -0500
Subject: Fusing loops by specializing on functions with SpecConstr?
Message-ID: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com>

Hi all,

I have recently been toying with FRP, and I’ve noticed that traditional formulations generate a lot of tiny loops that GHC does a very poor job optimizing. Here’s a simplified example:

    newtype SF a b = SF { runSF :: a -> (b, SF a b) }

    add1_snd :: SF (String, Int) (String, Int)
    add1_snd = second add1 where
      add1 = SF $ \a -> let !b = a + 1 in (b, add1)
      second f = SF $ \(a, b) ->
        let !(c, f') = runSF f b
        in ((a, c), second f')

Here, `add1_snd` is defined in terms of two recursive bindings, `add1` and `second`. Because they’re both recursive, GHC doesn’t know what to do with them, and the optimized program still has two separate recursive knots. But this is a missed optimization, as `add1_snd` is equivalent to the following definition, which fuses the two loops together and consequently has just one recursive knot:

    add1_snd_fused :: SF (String, Int) (String, Int)
    add1_snd_fused = SF $ \(a, b) ->
      let !c = b + 1
      in ((a, c), add1_snd_fused)

How could GHC get from `add1_snd` to `add1_snd_fused`? In theory, SpecConstr could do it!
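As a quick sanity check that the two formulations above really do agree, the definitions can be run side by side. This sketch repeats the definitions from the message; `runList`, which steps a signal function through a list of inputs, is a helper invented for this check and is not part of the thread:

```haskell
{-# LANGUAGE BangPatterns #-}

newtype SF a b = SF { runSF :: a -> (b, SF a b) }

-- The two-knot version, exactly as in the message.
add1_snd :: SF (String, Int) (String, Int)
add1_snd = second add1 where
  add1 = SF $ \a -> let !b = a + 1 in (b, add1)
  second f = SF $ \(a, b) ->
    let !(c, f') = runSF f b
    in ((a, c), second f')

-- The fused, single-knot version.
add1_snd_fused :: SF (String, Int) (String, Int)
add1_snd_fused = SF $ \(a, b) ->
  let !c = b + 1
  in ((a, c), add1_snd_fused)

-- Drive a signal function over a list of inputs, collecting the outputs.
runList :: SF a b -> [a] -> [b]
runList _  []     = []
runList sf (x:xs) = let (y, sf') = runSF sf x in y : runList sf' xs
```

Running both over the same inputs produces identical outputs; the difference is purely in how many recursive loops the optimized code contains.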
Suppose we specialize `second` at the call pattern `second add1`:

    {-# RULE "second/add1" second add1 = second_add1 #-}

    second_add1 = SF $ \(a, b) ->
      let !(c, f') = runSF add1 b
      in ((a, c), second f')

This doesn’t immediately look like an improvement, but we’re actually almost there. If we unroll `add1` once on the RHS of `second_add1`, the simplifier will get us the rest of the way. We’ll end up with

    let !b1 = b + 1
        !(c, f') = (b1, add1)
    in ((a, c), second f')

and after substituting f' to get `second add1`, the RULE will tie the knot for us.

This may look like small potatoes in isolation, but real programs can generate hundreds of these tiny, tiny loops, and fusing them together would be a big win. The only problem is SpecConstr doesn’t currently specialize on functions! The original paper, “Call-pattern Specialisation for Haskell Programs,” mentions this as a possibility in Section 6.2, but it points out that actually doing this in practice would be pretty tricky:

> Specialising for function arguments is more slippery than for
> constructor arguments. In the example above the argument was a
> simple variable, but what if it was instead a lambda term? [...]
>
> The trouble is that lambda abstractions are much more fragile than
> constructor applications, in the sense that simple transformations
> may make two abstractions look different although they have the
> same value.

Still, the difference this could make in a program of mine is so large that I am interested in exploring it anyway. I am wondering if anyone has investigated this possibility any further since the paper was published, or if anyone knows of other use cases that would benefit from this capability.

Thanks,
Alexis

From sgraf1337 at gmail.com  Sun Mar 29 14:33:47 2020
From: sgraf1337 at gmail.com (Sebastian Graf)
Date: Sun, 29 Mar 2020 16:33:47 +0200
Subject: Fusing loops by specializing on functions with SpecConstr?
In-Reply-To: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com>
References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com>
Message-ID: 

Hi Alexis,

I've been wondering the same things and have worked on it on and off. See my progress in https://gitlab.haskell.org/ghc/ghc/issues/855#note_149482 and https://gitlab.haskell.org/ghc/ghc/issues/915#note_241520.

The big problem with solving the higher-order specialisation problem through SpecConstr (which is what I did in my reports in #855) is indeed that it's hard to

1. Anticipate what the rewritten program looks like without doing a Simplifier pass after each specialisation, so that we can see and exploit new specialisation opportunities. SpecConstr does use the simple Core optimiser, but that often is not enough IIRC (think of ArgOccs from recursive calls). In particular, it will not do RULE rewrites. Interleaving SpecConstr with the Simplifier, apart from being nigh impossible conceptually, is computationally intractable and would quickly drift off into Partial Evaluation swamp.

2. Make the RULE engine match and rewrite call sites at all the call patterns to which they could apply. I.e., `f (\x -> Just (x + 1))` calls its argument with one argument and scrutinises the resulting Maybe (that's what is described by the argument's `ArgOcc`), so that we want to specialise to a call pattern `f (\x -> Just )`, giving rise to the specialisation `$sf ctx`, where `ctx x` describes the `` part. In an ideal world, we want a (higher-order pattern unification) RULE for `forall f ctx. f (\x -> Just (ctx x)) ==> $sf ctx`. But from what I remember, GHC's RULE engine works quite differently from that and isn't even concerned with finding unifiers at all; it just matches concrete call sites without meta variables against RULEs with meta variables.

Note that matching on specific Ids binding functions is just an approximation using representational equality (on the Id's Unique) rather than some sort of more semantic equality.
My latest endeavour into the matter in #915 from December was using types as the representational entity and type class specialisation. I think I got ultimately blocked on https://gitlab.haskell.org/ghc/ghc/issues/17548, but apparently I didn't document the problematic program.

Maybe my failure so far is that I want it to apply to and optimise all cases, even for more complex stream pipelines, rather than just doing a better best-effort job.

Hope that helps. Anyway, I'm also really keen on nailing this! It's one of my high-risk, high-reward research topics. So if you need someone to collaborate/exchange ideas with, I'm happy to help!

All the best,
Sebastian

On Sun, Mar 29, 2020 at 10:39 AM Alexis King <lexi.lambda at gmail.com> wrote:

> Hi all,
>
> I have recently been toying with FRP, and I’ve noticed that
> traditional formulations generate a lot of tiny loops that GHC does
> a very poor job optimizing. Here’s a simplified example:
>
>     newtype SF a b = SF { runSF :: a -> (b, SF a b) }
>
>     add1_snd :: SF (String, Int) (String, Int)
>     add1_snd = second add1 where
>       add1 = SF $ \a -> let !b = a + 1 in (b, add1)
>       second f = SF $ \(a, b) ->
>         let !(c, f') = runSF f b
>         in ((a, c), second f')
>
> Here, `add1_snd` is defined in terms of two recursive bindings,
> `add1` and `second`. Because they’re both recursive, GHC doesn’t
> know what to do with them, and the optimized program still has two
> separate recursive knots. But this is a missed optimization, as
> `add1_snd` is equivalent to the following definition, which fuses
> the two loops together and consequently has just one recursive knot:
>
>     add1_snd_fused :: SF (String, Int) (String, Int)
>     add1_snd_fused = SF $ \(a, b) ->
>       let !c = b + 1
>       in ((a, c), add1_snd_fused)
>
> How could GHC get from `add1_snd` to `add1_snd_fused`? In theory,
> SpecConstr could do it!
Suppose we specialize `second` at the call > pattern `second add1`: > > {-# RULE "second/add1" second add1 = second_add1 #-} > > second_add1 = SF $ \(a, b) -> > let !(c, f') = runSF add1 b > in ((a, c), second f') > > This doesn’t immediately look like an improvement, but we’re > actually almost there. If we unroll `add1` once on the RHS of > `second_add1`, the simplifier will get us the rest of the way. We’ll > end up with > > let !b1 = b + 1 > !(c, f') = (b1, add1) > in ((a, c), second f') > > and after substituting f' to get `second add1`, the RULE will tie > the knot for us. > > This may look like small potatoes in isolation, but real programs > can generate hundreds of these tiny, tiny loops, and fusing them > together would be a big win. The only problem is SpecConstr doesn’t > currently specialize on functions! The original paper, “Call-pattern > Specialisation for Haskell Programs,” mentions this as a possibility > in Section 6.2, but it points out that actually doing this in > practice would be pretty tricky: > > > Specialising for function arguments is more slippery than for > > constructor arguments. In the example above the argument was a > > simple variable, but what if it was instead a lambda term? [...] > > > > The trouble is that lambda abstractions are much more fragile than > > constructor applications, in the sense that simple transformations > > may make two abstractions look different although they have the > > same value. > > Still, the difference this could make in a program of mine is so > large that I am interested in exploring it anyway. I am wondering if > anyone has investigated this possibility any further since the paper > was published, or if anyone knows of other use cases that would > benefit from this capability. 
> > Thanks,
> > Alexis
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com  Mon Mar 30 22:31:42 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 30 Mar 2020 22:31:42 +0000
Subject: Advice implementing new constraint entailment rules
In-Reply-To: 
References: 
Message-ID: 

Do you know about implication constraints? If not, look back at the OutsideIn(X) paper. An implication constraint carries with it the place to put its Given bindings: see the ic_binds field of Constraint.Implication. And that is exactly what you want.

I suspect you'll want an implication to carry a stage, as well as its skolem vars and givens. Stepping inside an implication would then trigger the unwrapping of CodeC constraints from outside.

We could have a Skype call to discuss if you like

Simon

| -----Original Message-----
| From: Matthew Pickering
| Sent: 27 March 2020 22:33
| To: Simon Peyton Jones
| Cc: GHC developers
| Subject: Re: Advice implementing new constraint entailment rules
|
| I have made some progress towards the implementation but am stuck on how
| to get the right desugaring.
|
| For example if the source program is
|
|     foo :: CodeC (Show a) => Code (a -> String)
|     foo = [| show |]
|
| Then the current approach is to canonicalise all the constraints to remove
| the `CodeC`. The issue with this I have found is that the evidence gets
| bound in the wrong place:
|
| ```
| foo d = let d' = $d in [| show d' |]
| ```
|
| It should rather be
|
| ```
| foo d = [| let d' = $d in show d' |]
| ```
|
| Now I am trying to think of ways to make the evidence binding be bound in
| the right place. So there are a few things I thought of,
|
| 1.
Attempt to float evidence bindings back inwards to the right level | after they are solved, this doesn't feel great as they are floated | outwards already. | 2. Don't canonicalise the constraints in the normal manner, when leaving | the context of a quote, capture the wanted constraints (in this example | Show a) and emit a (CodeC (Show a)) constraint whilst inserting the | evidence binding inside the quote. | | I prefer option 2 but inside `WantedConstraints` there are `Ct`s which may | be already canonicalised. Trying a few examples shows me that the `Show a` | constraint in this example is not canonicalised already but it feels a bit | dirty to dig into a `Ct` to find non canonical constraints to re-emit. | | Any hints about how to make sure the evidence is bound in the correct | place? | | Matt | | On Thu, Mar 5, 2020 at 9:24 AM Simon Peyton Jones | wrote: | > | > Hi Matt | > | > I think you are right to say that we need to apply proper staging to the | constraint solver. But I don't understand your constraint rewriting | rules. | > | > Before moving to the implementation, could we discuss the specification? | You already have some typeset rules in a paper of some kind, which I | commented on some time ago. Could you elaborate those rules with class | constraints? Then we'd have something tangible to debate. | > | > Thanks | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs On Behalf Of Matthew | > | Pickering | > | Sent: 05 March 2020 08:16 | > | To: GHC developers | > | Subject: Advice implementing new constraint entailment rules | > | | > | Hello, | > | | > | I am attempting to implement two new constraint entailment rules | > | which dictate how to implement a new constraint form "CodeC" can be | > | used to satisfy constraints. 
| > | | > | The main idea is that all constraints store the level they they are | > | introduced and required (in the Template Haskell sense of level) and | > | that only constraints of the right level can be used. | > | | > | The "CodeC" constraint form allows the level of constraints to be | > | manipulated. | > | | > | Therefore the two rules | > | | > | In order to implement this I want to add two constraint rewriting | > | rules in the following way: | > | | > | 1. If in a given, `CodeC C @ n` ~> `C @ n+1` 2. If in a wanted | > | `CodeC C @ n` -> `C @ n - 1` | > | | > | Can someone give me some pointers about the specific part of the | > | constraint solver where I should add these rules? I am unsure if | > | this rewriting of wanted constraints already occurs or not. | > | | > | Cheers, | > | | > | Matt | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | | > | https://nam06.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmai | > | l.hask | > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | > | | > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C52ec5ca4f50c496b25 | > | e808d7 | > | c0dd8534%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C63718992996353 | > | 0670&a | > | | > | mp;sdata=0T2O%2FaAcIU9Yl61x2uPzl4zUG4P3jl6iA97baIDlSsM%3D&reserv | > | ed=0 From simonpj at microsoft.com Tue Mar 31 11:12:44 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 31 Mar 2020 11:12:44 +0000 Subject: Fusing loops by specializing on functions with SpecConstr? In-Reply-To: References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com> Message-ID: Wow – tricky stuff! I would never have thought of trying to optimise that program, but it’s fascinating that you get lots and lots of them from FRP. * Don’t lose this thread! Make a ticket, or a wiki page. If the former, put the main payload (including Alexis’s examples) into the Descriptions, not deep in the discussion. 
* I wonder whether it’d be possible to adjust the FRP library to generate easier-to-optimise code. Probably not, but worth asking.
* Alexis’s proposed solution relies on
  * Specialising on a function argument. Clearly this must be possible, and it’d be very beneficial.
  * Unrolling one layer of a recursive function. That seems harder: how do we know to *stop* unrolling as we successively simplify? One idea: do one layer of unrolling by hand, perhaps even in FRP source code:

        add1rec = SF (\a -> let !b = a+1 in (b,add1rec))
        add1 = SF (\a -> let !b = a+1 in (b,add1rec))

Simon

From: ghc-devs On Behalf Of Sebastian Graf
Sent: 29 March 2020 15:34
To: Alexis King
Cc: ghc-devs
Subject: Re: Fusing loops by specializing on functions with SpecConstr?

Hi Alexis,

I've been wondering the same things and have worked on it on and off. See my progress in https://gitlab.haskell.org/ghc/ghc/issues/855#note_149482 and https://gitlab.haskell.org/ghc/ghc/issues/915#note_241520.

The big problem with solving the higher-order specialisation problem through SpecConstr (which is what I did in my reports in #855) is indeed that it's hard to

1. Anticipate what the rewritten program looks like without doing a Simplifier pass after each specialisation, so that we can see and exploit new specialisation opportunities. SpecConstr does use the simple Core optimiser but, that often is not enough IIRC (think of ArgOccs from recursive calls). In particular, it will not do RULE rewrites. Interleaving SpecConstr with the Simplifier, apart from nigh impossible conceptually, is computationally intractable and would quickly drift off into Partial Evaluation swamp.

2. Make the RULE engine match and rewrite call sites in all call patterns they can apply.
I.e., `f (\x -> Just (x +1))` calls its argument with one argument and scrutinises the resulting Maybe (that's what is described by the argument's `ArgOcc`), so that we want to specialise to a call pattern `f (\x -> Just )`, giving rise to the specialisation `$sf ctx`, where `ctx x` describes the `` part. In an ideal world, we want a (higher-order pattern unification) RULE for `forall f ctx. f (\x -> Just (ctx x)) ==> $sf ctx`. But from what I remember, GHC's RULE engine works quite different from that and isn't even concerned with finding unifiers (rather than just matching concrete call sites without meta variables against RULEs with meta variables) at all. Note that matching on specific Ids binding functions is just an approximation using representional equality (on the Id's Unique) rather than some sort of more semantic equality. My latest endeavour into the matter in #915 from December was using types as the representational entity and type class specialisation. I think I got ultimately blocked on thttps://gitlab.haskell.org/ghc/ghc/issues/17548, but apparently I didn't document the problematic program. Maybe my failure so far is that I want it to apply and optimise all cases and for more complex stream pipelines, rather than just doing a better best effort job. Hope that helps. Anyway, I'm also really keen on nailing this! It's one of my high-risk, high-reward research topics. So if you need someone to collaborate/exchange ideas with, I'm happy to help! All the best, Sebastian Am So., 29. März 2020 um 10:39 Uhr schrieb Alexis King >: Hi all, I have recently been toying with FRP, and I’ve noticed that traditional formulations generate a lot of tiny loops that GHC does a very poor job optimizing. 
Here’s a simplified example: newtype SF a b = SF { runSF :: a -> (b, SF a b) } add1_snd :: SF (String, Int) (String, Int) add1_snd = second add1 where add1 = SF $ \a -> let !b = a + 1 in (b, add1) second f = SF $ \(a, b) -> let !(c, f') = runSF f b in ((a, c), second f') Here, `add1_snd` is defined in terms of two recursive bindings, `add1` and `second`. Because they’re both recursive, GHC doesn’t know what to do with them, and the optimized program still has two separate recursive knots. But this is a missed optimization, as `add1_snd` is equivalent to the following definition, which fuses the two loops together and consequently has just one recursive knot: add1_snd_fused :: SF (String, Int) (String, Int) add1_snd_fused = SF $ \(a, b) -> let !c = b + 1 in ((a, c), add1_snd_fused) How could GHC get from `add1_snd` to `add1_snd_fused`? In theory, SpecConstr could do it! Suppose we specialize `second` at the call pattern `second add1`: {-# RULE "second/add1" second add1 = second_add1 #-} second_add1 = SF $ \(a, b) -> let !(c, f') = runSF add1 b in ((a, c), second f') This doesn’t immediately look like an improvement, but we’re actually almost there. If we unroll `add1` once on the RHS of `second_add1`, the simplifier will get us the rest of the way. We’ll end up with let !b1 = b + 1 !(c, f') = (b1, add1) in ((a, c), second f') and after substituting f' to get `second add1`, the RULE will tie the knot for us. This may look like small potatoes in isolation, but real programs can generate hundreds of these tiny, tiny loops, and fusing them together would be a big win. The only problem is SpecConstr doesn’t currently specialize on functions! The original paper, “Call-pattern Specialisation for Haskell Programs,” mentions this as a possibility in Section 6.2, but it points out that actually doing this in practice would be pretty tricky: > Specialising for function arguments is more slippery than for > constructor arguments. 
In the example above the argument was a
> simple variable, but what if it was instead a lambda term? [...]
>
> The trouble is that lambda abstractions are much more fragile than
> constructor applications, in the sense that simple transformations
> may make two abstractions look different although they have the
> same value.

Still, the difference this could make in a program of mine is so large that I am interested in exploring it anyway. I am wondering if anyone has investigated this possibility any further since the paper was published, or if anyone knows of other use cases that would benefit from this capability.

Thanks,
Alexis
_______________________________________________
ghc-devs mailing list
ghc-devs at haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sgraf1337 at gmail.com  Tue Mar 31 13:08:15 2020
From: sgraf1337 at gmail.com (Sebastian Graf)
Date: Tue, 31 Mar 2020 15:08:15 +0200
Subject: Fusing loops by specializing on functions with SpecConstr?
In-Reply-To: 
References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com>
Message-ID: 

We can formulate SF as a classic Stream that needs an `a` to produce its next element of type `b` like this (SF2 below):

    {-# LANGUAGE BangPatterns #-}
    {-# LANGUAGE GADTs #-}

    module Lib where

    newtype SF a b = SF { runSF :: a -> (b, SF a b) }

    inc1 :: SF Int Int
    inc1 = SF $ \a -> let !b = a+1 in (b, inc1)

    data Step s a = Yield !s a

    data SF2 a b where
      SF2 :: !(a -> s -> Step s b) -> !s -> SF2 a b

    inc2 :: SF2 Int Int
    inc2 = SF2 go ()
      where go a _ = let !b = a+1 in Yield () b

    runSF2 :: SF2 a b -> a -> (b, SF2 a b)
    runSF2 (SF2 f s) a = case f a s of Yield s' b -> (b, SF2 f s')

Note the absence of recursion in inc2. This resolves the tension around having to specialise for a function argument that is recursive and having to do the unrolling.
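The point about combinators becoming non-recursive in this encoding can be illustrated with a mapping combinator. The sketch below repeats the SF2 definitions from the message so it stands alone; `secondSF2` is a hypothetical name, not something from the thread:

```haskell
{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE GADTs #-}

data Step s a = Yield !s a

data SF2 a b where
  SF2 :: !(a -> s -> Step s b) -> !s -> SF2 a b

runSF2 :: SF2 a b -> a -> (b, SF2 a b)
runSF2 (SF2 f s) a = case f a s of Yield s' b -> (b, SF2 f s')

inc2 :: SF2 Int Int
inc2 = SF2 go ()
  where go a _ = let !b = a + 1 in Yield () b

-- A mapping combinator in this encoding. Unlike `second` for SF, it is
-- not recursive itself: it only wraps the step function and threads the
-- state. The only recursion lives in whoever drives the stepper.
secondSF2 :: SF2 b c -> SF2 (a, b) (a, c)
secondSF2 (SF2 f s0) = SF2 go s0
  where
    go (a, b) s = case f b s of Yield s' c -> Yield s' (a, c)
```

Because `secondSF2` is non-recursive, GHC can inline it freely, which is exactly the property the SF version lacks.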
I bet that similar to stream fusion, we can arrange that only the consumer has to be explicitly recursive. Indeed, I think this will help you inline mapping combinators such as `second`, because it won't be recursive itself anymore. Now we "only" have to solve the same problems as with good old stream fusion. The tricky case (after realising that we need to add `Skip` to `Step` for `filterSF2`) is when we want to optimise a signal of signals, e.g. something like `concatMapSF2 :: (b -> SF2 a c) -> SF2 a b -> SF2 a c` or some such. And here we are again in #855/#915. Also if you need convincing that we can embed any SF into SF2, look at this: embed :: SF Int Int -> SF2 Int Int embed origSF = SF2 go origSF where go a sf = case runSF sf a of (b, sf') -> Yield sf' b Please do open a ticket about this, though. It's an interesting data point! Cheers, Sebastian Am Di., 31. März 2020 um 13:12 Uhr schrieb Simon Peyton Jones < simonpj at microsoft.com>: > Wow – tricky stuff! I would never have thought of trying to optimise > that program, but it’s fascinating that you get lots and lots of them from > FRP. > > > > - Don’t lose this thread! Make a ticket, or a wiki page. If the > former, put the main payload (including Alexis’s examples) into the > Descriptions, not deep in the discussion. > - I wonder whether it’d be possible to adjust the FRP library to > generate easier-to-optimise code. Probably not, but worth asking. > - Alexis’s proposed solution relies on > - Specialising on a function argument. Clearly this must be > possible, and it’d be very beneficial. > - Unrolling one layer of a recursive function. That seems harder: > how we know to **stop** unrolling as we successively simplify? 
One > idea: do one layer of unrolling by hand, perhaps even in FRP source code: > > add1rec = SF (\a -> let !b = a+1 in (b,add1rec)) > > add1 = SF (\a -> let !b = a+1 in (b,add1rec)) > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Sebastian > Graf > *Sent:* 29 March 2020 15:34 > *To:* Alexis King > *Cc:* ghc-devs > *Subject:* Re: Fusing loops by specializing on functions with SpecConstr? > > > > Hi Alexis, > > > > I've been wondering the same things and have worked on it on and off. See > my progress in https://gitlab.haskell.org/ghc/ghc/issues/855#note_149482 > > and https://gitlab.haskell.org/ghc/ghc/issues/915#note_241520 > > . > > > > The big problem with solving the higher-order specialisation problem > through SpecConstr (which is what I did in my reports in #855) is indeed > that it's hard to > > 1. Anticipate what the rewritten program looks like without doing a > Simplifier pass after each specialisation, so that we can see and exploit > new specialisation opportunities. SpecConstr does use the simple Core > optimiser but, that often is not enough IIRC (think of ArgOccs from > recursive calls). In particular, it will not do RULE rewrites. Interleaving > SpecConstr with the Simplifier, apart from nigh impossible conceptually, is > computationally intractable and would quickly drift off into Partial > Evaluation swamp. > 2. Make the RULE engine match and rewrite call sites in all call > patterns they can apply. > I.e., `f (\x -> Just (x +1))` calls its argument with one argument and > scrutinises the resulting Maybe (that's what is described by the argument's > `ArgOcc`), so that we want to specialise to a call pattern `f (\x -> Just > )`, giving rise to the specialisation `$sf ctx`, > where `ctx x` describes the `` part. In an ideal > world, we want a (higher-order pattern unification) RULE for `forall f ctx. > f (\x -> Just (ctx x)) ==> $sf ctx`. 
But from what I remember, GHC's RULE > engine works quite different from that and isn't even concerned with > finding unifiers (rather than just matching concrete call sites without > meta variables against RULEs with meta variables) at all. > > Note that matching on specific Ids binding functions is just an > approximation using representional equality (on the Id's Unique) rather > than some sort of more semantic equality. My latest endeavour into the > matter in #915 from December was using types as the representational entity > and type class specialisation. I think I got ultimately blocked on thttps:// > gitlab.haskell.org/ghc/ghc/issues/17548 > , > but apparently I didn't document the problematic program. > > > > Maybe my failure so far is that I want it to apply and optimise all cases > and for more complex stream pipelines, rather than just doing a better best > effort job. > > > > Hope that helps. Anyway, I'm also really keen on nailing this! It's one of > my high-risk, high-reward research topics. So if you need someone to > collaborate/exchange ideas with, I'm happy to help! > > > > All the best, > > Sebastian > > > > Am So., 29. März 2020 um 10:39 Uhr schrieb Alexis King < > lexi.lambda at gmail.com>: > > Hi all, > > I have recently been toying with FRP, and I’ve noticed that > traditional formulations generate a lot of tiny loops that GHC does > a very poor job optimizing. Here’s a simplified example: > > newtype SF a b = SF { runSF :: a -> (b, SF a b) } > > add1_snd :: SF (String, Int) (String, Int) > add1_snd = second add1 where > add1 = SF $ \a -> let !b = a + 1 in (b, add1) > second f = SF $ \(a, b) -> > let !(c, f') = runSF f b > in ((a, c), second f') > > Here, `add1_snd` is defined in terms of two recursive bindings, > `add1` and `second`. Because they’re both recursive, GHC doesn’t > know what to do with them, and the optimized program still has two > separate recursive knots. 
But this is a missed optimization, as > `add1_snd` is equivalent to the following definition, which fuses > the two loops together and consequently has just one recursive knot: > > add1_snd_fused :: SF (String, Int) (String, Int) > add1_snd_fused = SF $ \(a, b) -> > let !c = b + 1 > in ((a, c), add1_snd_fused) > > How could GHC get from `add1_snd` to `add1_snd_fused`? In theory, > SpecConstr could do it! Suppose we specialize `second` at the call > pattern `second add1`: > > {-# RULE "second/add1" second add1 = second_add1 #-} > > second_add1 = SF $ \(a, b) -> > let !(c, f') = runSF add1 b > in ((a, c), second f') > > This doesn’t immediately look like an improvement, but we’re > actually almost there. If we unroll `add1` once on the RHS of > `second_add1`, the simplifier will get us the rest of the way. We’ll > end up with > > let !b1 = b + 1 > !(c, f') = (b1, add1) > in ((a, c), second f') > > and after substituting f' to get `second add1`, the RULE will tie > the knot for us. > > This may look like small potatoes in isolation, but real programs > can generate hundreds of these tiny, tiny loops, and fusing them > together would be a big win. The only problem is SpecConstr doesn’t > currently specialize on functions! The original paper, “Call-pattern > Specialisation for Haskell Programs,” mentions this as a possibility > in Section 6.2, but it points out that actually doing this in > practice would be pretty tricky: > > > Specialising for function arguments is more slippery than for > > constructor arguments. In the example above the argument was a > > simple variable, but what if it was instead a lambda term? [...] > > > > The trouble is that lambda abstractions are much more fragile than > > constructor applications, in the sense that simple transformations > > may make two abstractions look different although they have the > > same value. > > Still, the difference this could make in a program of mine is so > large that I am interested in exploring it anyway. 
I am wondering if > anyone has investigated this possibility any further since the paper > was published, or if anyone knows of other use cases that would > benefit from this capability. > > Thanks, > Alexis > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sloboegen1998 at gmail.com Tue Mar 31 19:57:45 2020 From: sloboegen1998 at gmail.com (=?UTF-8?B?0JXQstCz0LXQvdC40Lkg0KHQu9C+0LHQvtC00LrQuNC9?=) Date: Tue, 31 Mar 2020 22:57:45 +0300 Subject: License for grammar Message-ID: Hi all! I implemented Haskell grammar for ANTLRv4 based on HaskellReport 2010 and GHC source (Parser.y and Lexer.x files). Link: https://github.com/antlr/grammars-v4/blob/master/haskell/Haskell.g4 Could someone please help me figuring out which license this grammar should be published on? From carter.schonwald at gmail.com Tue Mar 31 20:33:54 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 31 Mar 2020 16:33:54 -0400 Subject: License for grammar In-Reply-To: References: Message-ID: Very cool! Mit / bsd 3 or bsd 2 or Apache are all reasonable On Tue, Mar 31, 2020 at 3:58 PM Евгений Слободкин wrote: > Hi all! > > I implemented Haskell grammar for ANTLRv4 based on HaskellReport 2010 > and GHC source (Parser.y and Lexer.x files). > > Link: https://github.com/antlr/grammars-v4/blob/master/haskell/Haskell.g4 > > Could someone please help me figuring out which license this grammar > should be published on? > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From lexi.lambda at gmail.com  Tue Mar 31 21:18:26 2020
From: lexi.lambda at gmail.com (Alexis King)
Date: Tue, 31 Mar 2020 16:18:26 -0500
Subject: Fusing loops by specializing on functions with SpecConstr?
In-Reply-To: 
References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com>
Message-ID: <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com>

Sebastian and Simon,

Thank you both for your responses—they are all quite helpful! I agree with both of you that figuring out how to do this kind of specialization without any guidance from the programmer seems rather intractable. It’s too hard to divine where it would actually be beneficial, and even if you could, it seems likely that other optimizations would get in the way of it actually working out.

I’ve been trying to figure out if it would be possible to help the optimizer out by annotating the program with special combinators like the existing ones provided by GHC.Magic. However, I haven’t been able to come up with anything yet that seems like it would actually work.

> On Mar 31, 2020, at 06:12, Simon Peyton Jones wrote:
>
> Wow – tricky stuff! I would never have thought of trying to optimise that program, but it’s fascinating that you get lots and lots of them from FRP.

For context, the reason you get all these tiny loops is that arrowized FRP uses the Arrow and ArrowChoice interfaces to build its programs, and those interfaces use tiny combinator functions like these:

    first :: Arrow a => a b c -> a (b, d) (c, d)
    (***) :: Arrow a => a b d -> a c e -> a (b, c) (d, e)
    (|||) :: ArrowChoice a => a b d -> a c d -> a (Either b c) d

This means you end up with programs built out of dozens or hundreds of uses of these tiny combinators. You get code that looks like

    first (left (arr f >>> g ||| right h) *** second i)

and this is a textbook situation where you want to specialize and inline all the combinators!
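To see what “specialize and inline all the combinators” would buy, here are those combinators hand-specialised to the plain function arrow. This is a sketch of my own; the primed/bang names are invented to avoid clashing with the real Control.Arrow exports:

```haskell
-- The combinators above, hand-specialised to the function arrow (->).
-- For plain functions these reduce to direct tuple and Either
-- manipulation, which GHC optimizes without difficulty.
first' :: (b -> c) -> (b, d) -> (c, d)
first' f (b, d) = (f b, d)

(***!) :: (b -> d) -> (c -> e) -> (b, c) -> (d, e)
(f ***! g) (b, c) = (f b, g c)

(|||!) :: (b -> d) -> (c -> d) -> Either b c -> d
(f |||! g) (Left b)  = f b
(f |||! g) (Right c) = g c
```

For SF, the desired outcome of specialization is code of exactly this shape, but with the recursive knot of the signal function tied around it.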
For arrows without this tricky recursion, doing that works as intended, and GHC’s simplifier will do what it’s supposed to, and you get fast code. But with FRP, each of these combinators is recursive. This means you often get really awful code that looks like this: arr (\case { Nothing -> Left (); Just x -> Right x }) >>> (f ||| g) This converts a Maybe to an Either, then branches on it. It’s analogous to writing something like this in direct-style code: let y = case x of { Nothing -> Left (); Just x -> Right x } in case y of { Left () -> f; Right x -> g x } We really want the optimizer to eliminate the intermediate Either and just branch on it directly, and if GHC could fuse these tiny recursive loops, it could! But without that, all this pointless shuffling of values around remains in the optimized program. > I wonder whether it’d be possible to adjust the FRP library to generate easier-to-optimise code. Probably not, but worth asking. I think it’s entirely possible to somehow annotate these combinators to communicate this information to the optimizer, but I don’t know what the annotations ought to look like. (That’s the research part!) But I’m not very optimistic about getting the library to generate easier-to-optimize code with the tools available today. Sebastian’s example of SF2 and stream fusion sort of works, but in my experience, something like that doesn’t handle enough cases well enough to work on real arrow programs. > Unrolling one layer of a recursive function. That seems harder: how we know to *stop* unrolling as we successively simplify? One idea: do one layer of unrolling by hand, perhaps even in FRP source code: > add1rec = SF (\a -> let !b = a+1 in (b,add1rec)) > add1 = SF (\a -> let !b = a+1 in (b,add1rec)) Yes, I was playing with the idea at one point of some kind of RULE that inserts GHC.Magic.inline on the specialized RHS. 
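One way to picture that idea as hand-written source, rather than the RULE-generated version being proposed (`inline` is the real GHC.Magic function; the `SF` type here is a reconstruction from the fragment above):

```haskell
{-# LANGUAGE BangPatterns #-}
import GHC.Magic (inline)

newtype SF a b = SF (a -> (b, SF a b))

add1rec :: SF Int Int
add1rec = SF (\a -> let !b = a + 1 in (b, add1rec))

-- One hand-unrolled layer; `inline` asks GHC to unfold the recursive
-- reference once more wherever `add1` itself is inlined.
add1 :: SF Int Int
add1 = SF (\a -> let !b = a + 1 in (b, inline add1rec))
```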
That way the programmer could ask for the unrolling explicitly, as otherwise it seems unreasonable to ask the compiler to figure it out. > On Mar 31, 2020, at 08:08, Sebastian Graf wrote: > > We can formulate SF as a classic Stream that needs an `a` to produce its next element of type `b` like this (SF2 below) This is a neat trick, though I’ve had trouble getting it to work reliably in my experiments (even though I was using GHC.Types.SPEC). That said, I also feel like I don’t understand the subtleties of SpecConstr very well, so it could have been my fault. The more fundamental problem I’ve found with that approach is that it doesn’t do very well for arrow combinators like (***) and (|||), which come up very often in arrow programs but rarely in streaming. Fusing long chains of first/second/left/right is actually pretty easy with ordinary RULEs, but (***) and (|||) are much harder, since they have multiple continuations. It seems at first appealing to rewrite `f *** g` into `first f >>> second g`, which solves the immediate problem, but this is actually a lot less efficient after repeated rewritings. You end up rewriting `(f ||| g) *** h` into `first (left f) >>> first (right g) >>> second h`, turning two distinct branches into four, and larger programs have much worse exponential blowups. So that’s where I’ve gotten stuck! I’ve been toying with the idea of thinking about expression “shells”, so if you have something like first (a ||| b) >>> c *** second (d ||| e) >>> f then you have a “shell” of the shape first (● ||| ●) >>> ● *** second (● ||| ●) >>> ● which theoretically serves as a key for the specialization. You can then generate a specialization and a rule: $s a b c d e f = ... {-# RULE forall a b c d e f. first (a ||| b) >>> c *** second (d ||| e) >>> f = $s a b c d e f #-} The question then becomes: how do you specify what these shells are, and how do you specify how to transform the shell into a specialized function? 
I don’t know, but it’s something a Core plugin could theoretically do. Maybe it makes sense for this domain-specific optimization to be a Core pass that runs before the simplifier, like the typeclass specializer currently is, but I haven’t explored that yet. Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Tue Mar 31 22:05:16 2020 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Wed, 1 Apr 2020 00:05:16 +0200 Subject: Fusing loops by specializing on functions with SpecConstr? In-Reply-To: <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com> References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com> <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com> Message-ID: > > This is a neat trick, though I’ve had trouble getting it to work reliably > in my experiments (even though I was using GHC.Types.SPEC). That said, I > also feel like I don’t understand the subtleties of SpecConstr very well, > so it could have been my fault. > Yeah, SPEC is quite unreliable, because IIRC at some point it's either consumed or irrelevant. But none of the combinators you mentioned should rely on SpecConstr! They are all non-recursive, so the Simplifier will take care of "specialisation". And it works just fine, I just tried it: https://gist.github.com/sgraf812/d15cd3ee9cc9bd2e72704f90567ef35b `test` there is optimised reasonably well. The problem is that we don't have the concrete a..f so we can't cancel away all allocations. If you give me a closed program where we fail to optimise away every bit of allocation (and it isn't due to size concerns), then I would be surprised. Although there might be a bug in how I encoded the streams, maybe we can be a bit stricter here or there if need be. `test2 = (double &&& inc) >>> arr (uncurry (+)) :: SF Int Int` is such a function that we optimise down to (the equivalent of) `arr (\n -> 3*n+1)`. Maybe you can give a medium-sized program where you think GHC does a poor job at optimising? 
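A minimal reconstruction of the stream-style encoding under discussion, assembled from code fragments later in this thread; the linked gist may differ in details:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- One step of the machine: the next state plus the produced value.
data Step s b = Yield s b

-- A signal function as a stepper over an existentially hidden state.
data SF a b = forall s. SF (a -> s -> Step s b) s

-- Composition threads the two states in a tuple. Crucially, it is
-- NOT recursive, so the Simplifier can inline and specialise it.
compose :: SF a b -> SF b c -> SF a c
compose (SF f s0) (SF g t0) = SF h (s0, t0)
  where
    h a (s, t) = case f a s of
      Yield s' b -> case g b t of
        Yield t' c -> Yield (s', t') c
```

With this encoding, a chain of combinators reduces to one big non-recursive step function over a nested-tuple state — which is also where the `((), ((), ...))` state types that Alexis complains about later in the thread come from.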
On Tue, 31 Mar 2020 at 23:18, Alexis King < lexi.lambda at gmail.com> wrote: > Sebastian and Simon, > > Thank you both for your responses—they are all quite helpful! I agree with > both of you that figuring out how to do this kind of specialization without > any guidance from the programmer seems rather intractable. It’s too hard to > divine where it would actually be beneficial, and even if you could, it > seems likely that other optimizations would get in the way of it actually > working out. > > I’ve been trying to figure out if it would be possible to help the > optimizer out by annotating the program with special combinators like the > existing ones provided by GHC.Magic. However, I haven’t been able to come > up with anything yet that seems like it would actually work. > > On Mar 31, 2020, at 06:12, Simon Peyton Jones > wrote: > > Wow – tricky stuff! I would never have thought of trying to optimise > that program, but it’s fascinating that you get lots and lots of them from > FRP. > > > For context, the reason you get all these tiny loops is that arrowized FRP > uses the Arrow and ArrowChoice interfaces to build its programs, and those > interfaces use tiny combinator functions like these: > > first :: Arrow a => a b c -> a (b, d) (c, d) > (***) :: Arrow a => a b d -> a c e -> a (b, c) (d, e) > (|||) :: ArrowChoice a => a b d -> a c d -> a (Either b c) d > > This means you end up with programs built out of dozens or hundreds of > uses of these tiny combinators. You get code that looks like > > first (left (arr f >>> g ||| right h) *** second i) > > and this is a textbook situation where you want to specialize and inline > all the combinators! For arrows without this tricky recursion, doing that > works as intended, and GHC’s simplifier will do what it’s supposed to, and > you get fast code. > > But with FRP, each of these combinators is recursive.
This means you often > get really awful code that looks like this: > > arr (\case { Nothing -> Left (); Just x -> Right x }) >>> (f ||| g) > > This converts a Maybe to an Either, then branches on it. It’s analogous to > writing something like this in direct-style code: > > let y = case x of { Nothing -> Left (); Just x -> Right x } > in case y of { Left () -> f; Right x -> g x } > > We really want the optimizer to eliminate the intermediate Either and just > branch on it directly, and if GHC could fuse these tiny recursive loops, it > could! But without that, all this pointless shuffling of values around > remains in the optimized program. > > > - I wonder whether it’d be possible to adjust the FRP library to > generate easier-to-optimise code. Probably not, but worth asking. > > > I think it’s entirely possible to somehow annotate these combinators to > communicate this information to the optimizer, but I don’t know what the > annotations ought to look like. (That’s the research part!) > > But I’m not very optimistic about getting the library to generate > easier-to-optimize code with the tools available today. Sebastian’s example > of SF2 and stream fusion sort of works, but in my experience, something > like that doesn’t handle enough cases well enough to work on real arrow > programs. > > > - Unrolling one layer of a recursive function. That seems harder: how > we know to **stop** unrolling as we successively simplify? One > idea: do one layer of unrolling by hand, perhaps even in FRP source code: > > add1rec = SF (\a -> let !b = a+1 in (b,add1rec)) > add1 = SF (\a -> let !b = a+1 in (b,add1rec)) > > > Yes, I was playing with the idea at one point of some kind of RULE that > inserts GHC.Magic.inline on the specialized RHS. That way the programmer > could ask for the unrolling explicitly, as otherwise it seems unreasonable > to ask the compiler to figure it out. 
> > On Mar 31, 2020, at 08:08, Sebastian Graf wrote: > > We can formulate SF as a classic Stream that needs an `a` to produce its > next element of type `b` like this (SF2 below) > > > This is a neat trick, though I’ve had trouble getting it to work reliably > in my experiments (even though I was using GHC.Types.SPEC). That said, I > also feel like I don’t understand the subtleties of SpecConstr very well, > so it could have been my fault. > > The more fundamental problem I’ve found with that approach is that it > doesn’t do very well for arrow combinators like (***) and (|||), which come > up very often in arrow programs but rarely in streaming. Fusing long chains > of first/second/left/right is actually pretty easy with ordinary RULEs, but > (***) and (|||) are much harder, since they have multiple continuations. > > It seems at first appealing to rewrite `f *** g` into `first f >>> second > g`, which solves the immediate problem, but this is actually a lot less > efficient after repeated rewritings. You end up rewriting `(f ||| g) *** h` > into `first (left f) >>> first (right g) >>> second h`, turning two > distinct branches into four, and larger programs have much worse > exponential blowups. > > So that’s where I’ve gotten stuck! I’ve been toying with the idea of > thinking about expression “shells”, so if you have something like > > first (a ||| b) >>> c *** second (d ||| e) >>> f > > then you have a “shell” of the shape > > first (● ||| ●) >>> ● *** second (● ||| ●) >>> ● > > which theoretically serves as a key for the specialization. You can then > generate a specialization and a rule: > > $s a b c d e f = ... > {-# RULE forall a b c d e f. > first (a ||| b) >>> c *** second (d ||| e) >>> f = $s a b c d > e f #-} > > The question then becomes: how do you specify what these shells are, and > how do you specify how to transform the shell into a specialized function? > I don’t know, but it’s something a Core plugin could theoretically do. 
> Maybe it makes sense for this domain-specific optimization to be a Core > pass that runs before the simplifier, like the typeclass specializer > currently is, but I haven’t explored that yet. > > Alexis > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Mar 31 22:49:44 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 31 Mar 2020 22:49:44 +0000 Subject: Fusing loops by specializing on functions with SpecConstr? In-Reply-To: <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com> References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com> <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com> Message-ID: Joachim: this conversation is triggering some hind-brain neurons related to exitification, or something like that. I recall that we discovered we could get some surprising fusion of recursive functions expressed as join points. Something like f . g . h where h loops for a while and returns, and same for g and f. Then the call to g landed up in the return branch of h, and same for f. But I can’t find anything in writing. The Exitify module doesn’t say much. I thought we had a wiki page but I can’t find it. Can you remember? Thanks Simon From: Alexis King Sent: 31 March 2020 22:18 To: Sebastian Graf ; Simon Peyton Jones Cc: ghc-devs Subject: Re: Fusing loops by specializing on functions with SpecConstr? Sebastian and Simon, Thank you both for your responses—they are all quite helpful! I agree with both of you that figuring out how to do this kind of specialization without any guidance from the programmer seems rather intractable. It’s too hard to divine where it would actually be beneficial, and even if you could, it seems likely that other optimizations would get in the way of it actually working out. I’ve been trying to figure out if it would be possible to help the optimizer out by annotating the program with special combinators like the existing ones provided by GHC.Magic. 
However, I haven’t been able to come up with anything yet that seems like it would actually work. On Mar 31, 2020, at 06:12, Simon Peyton Jones > wrote: Wow – tricky stuff! I would never have thought of trying to optimise that program, but it’s fascinating that you get lots and lots of them from FRP. For context, the reason you get all these tiny loops is that arrowized FRP uses the Arrow and ArrowChoice interfaces to build its programs, and those interfaces use tiny combinator functions like these: first :: Arrow a => a b c -> a (b, d) (c, d) (***) :: Arrow a => a b d -> a c e -> a (b, c) (d, e) (|||) :: ArrowChoice a => a b d -> a c d -> a (Either b c) d This means you end up with programs built out of dozens or hundreds of uses of these tiny combinators. You get code that looks like first (left (arr f >>> g ||| right h) *** second i) and this is a textbook situation where you want to specialize and inline all the combinators! For arrows without this tricky recursion, doing that works as intended, and GHC’s simplifier will do what it’s supposed to, and you get fast code. But with FRP, each of these combinators is recursive. This means you often get really awful code that looks like this: arr (\case { Nothing -> Left (); Just x -> Right x }) >>> (f ||| g) This converts a Maybe to an Either, then branches on it. It’s analogous to writing something like this in direct-style code: let y = case x of { Nothing -> Left (); Just x -> Right x } in case y of { Left () -> f; Right x -> g x } We really want the optimizer to eliminate the intermediate Either and just branch on it directly, and if GHC could fuse these tiny recursive loops, it could! But without that, all this pointless shuffling of values around remains in the optimized program. * I wonder whether it’d be possible to adjust the FRP library to generate easier-to-optimise code. Probably not, but worth asking. 
I think it’s entirely possible to somehow annotate these combinators to communicate this information to the optimizer, but I don’t know what the annotations ought to look like. (That’s the research part!) But I’m not very optimistic about getting the library to generate easier-to-optimize code with the tools available today. Sebastian’s example of SF2 and stream fusion sort of works, but in my experience, something like that doesn’t handle enough cases well enough to work on real arrow programs. * Unrolling one layer of a recursive function. That seems harder: how we know to *stop* unrolling as we successively simplify? One idea: do one layer of unrolling by hand, perhaps even in FRP source code: add1rec = SF (\a -> let !b = a+1 in (b,add1rec)) add1 = SF (\a -> let !b = a+1 in (b,add1rec)) Yes, I was playing with the idea at one point of some kind of RULE that inserts GHC.Magic.inline on the specialized RHS. That way the programmer could ask for the unrolling explicitly, as otherwise it seems unreasonable to ask the compiler to figure it out. On Mar 31, 2020, at 08:08, Sebastian Graf > wrote: We can formulate SF as a classic Stream that needs an `a` to produce its next element of type `b` like this (SF2 below) This is a neat trick, though I’ve had trouble getting it to work reliably in my experiments (even though I was using GHC.Types.SPEC). That said, I also feel like I don’t understand the subtleties of SpecConstr very well, so it could have been my fault. The more fundamental problem I’ve found with that approach is that it doesn’t do very well for arrow combinators like (***) and (|||), which come up very often in arrow programs but rarely in streaming. Fusing long chains of first/second/left/right is actually pretty easy with ordinary RULEs, but (***) and (|||) are much harder, since they have multiple continuations. 
It seems at first appealing to rewrite `f *** g` into `first f >>> second g`, which solves the immediate problem, but this is actually a lot less efficient after repeated rewritings. You end up rewriting `(f ||| g) *** h` into `first (left f) >>> first (right g) >>> second h`, turning two distinct branches into four, and larger programs have much worse exponential blowups. So that’s where I’ve gotten stuck! I’ve been toying with the idea of thinking about expression “shells”, so if you have something like first (a ||| b) >>> c *** second (d ||| e) >>> f then you have a “shell” of the shape first (● ||| ●) >>> ● *** second (● ||| ●) >>> ● which theoretically serves as a key for the specialization. You can then generate a specialization and a rule: $s a b c d e f = ... {-# RULE forall a b c d e f. first (a ||| b) >>> c *** second (d ||| e) >>> f = $s a b c d e f #-} The question then becomes: how do you specify what these shells are, and how do you specify how to transform the shell into a specialized function? I don’t know, but it’s something a Core plugin could theoretically do. Maybe it makes sense for this domain-specific optimization to be a Core pass that runs before the simplifier, like the typeclass specializer currently is, but I haven’t explored that yet. Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Tue Mar 31 23:16:50 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Tue, 31 Mar 2020 18:16:50 -0500 Subject: Fusing loops by specializing on functions with SpecConstr? In-Reply-To: References: <2E809F34-633F-4089-BEED-F38929F8BFD0@gmail.com> <296140F4-DCAC-4ACD-80F8-6F99B37C7316@gmail.com> Message-ID: <208D122C-E0E7-4B28-969B-19A792E358C5@gmail.com> > On Mar 31, 2020, at 17:05, Sebastian Graf wrote: > > Yeah, SPEC is quite unreliable, because IIRC at some point it's either consumed or irrelevant. But none of the combinators you mentioned should rely on SpecConstr! 
They are all non-recursive, so the Simplifier will take care of "specialisation". And it works just fine, I just tried it Ah! You are right, I did not read carefully enough and misinterpreted. That approach is clever, indeed. I had tried something similar with a CPS encoding, but the piece I was missing was using the existential to tie the final knot. I have tried it out on some of my experiments. It’s definitely a significant improvement, but it isn’t perfect. Here’s a small example: mapMaybeSF :: SF a b -> SF (Maybe a) (Maybe b) mapMaybeSF f = proc v -> case v of Just x -> do y <- f -< x returnA -< Just y Nothing -> returnA -< Nothing Looking at the optimized core, it’s true that the conversion of Maybe to Either and back again gets eliminated, which is wonderful! But what’s less wonderful is the value passed around through `s`: mapMaybeSF = \ (@ a) (@ b) (f :: SF a b) -> case f of { SF @ s f2 s2 -> SF (\ (a1 :: Maybe a) (ds2 :: ((), ((), (((), (((), (((), s), ())), ((), ((), ())))), ((), ()))))) -> Yikes! GHC has no obvious way to clean this type up, so it will just grow indefinitely, and we end up doing a dozen pattern-matches in the body followed by another dozen allocations, just wrapping and unwrapping tuples. Getting rid of that seems probably a lot more tractable than fusing the recursive loops, but I’m still not immediately certain how to do it. GHC would have to somehow deduce that `s` is existentially-bound, so it can rewrite something like SF (\a ((), x) -> ... Yield ((), y) b ...) ((), s) to SF (\a x -> ... Yield y b) s by parametricity. Is that an unreasonable ask? I don’t know! Another subtlety I considered involves recursive arrows, where I currently depend on laziness in (|||). 
Here’s one example: mapSF :: SF a b -> SF [a] [b] mapSF f = proc xs -> case xs of x:xs -> do y <- f -< x ys <- mapSF f -< xs returnA -< (y:ys) [] -> returnA -< [] Currently, GHC will just compile this to `mapSF f = mapSF f` under your implementation, since (|||) and (>>>) are both strict. However, I think this is not totally intractable—we can easily introduce an explicit `lazy` combinator to rein in strictness: lazy :: SF a b -> SF a b lazy sf0 = SF g (Unit sf0) where g a (Unit sf1) = case runSF sf1 a of (b, sf2) -> Yield (Unit sf2) b And now we can write `lazy (mapSF f)` at the point of the recursive call to avoid the infinite loop. This defeats some optimizations, of course, but `mapSF` is fundamentally recursive, so there’s only so much we can really expect. So perhaps my needs here are less ambitious, after all! Getting rid of all those redundant tuples is my next question, but that’s rather unrelated from what we’ve been talking about so far. Alexis
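Filling in the missing pieces so the `lazy` combinator above is self-contained (the `Step`, `SF`, and `runSF` definitions are reconstructed from fragments earlier in the thread, and `Unit` is assumed to be a plain single-constructor wrapper):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data Step s b = Yield s b
data SF a b = forall s. SF (a -> s -> Step s b) s

-- Run one step, repackaging the new state into a fresh SF.
runSF :: SF a b -> a -> (b, SF a b)
runSF (SF f s) a = case f a s of Yield s' b -> (b, SF f s')

-- An opaque box for the state, so a recursively defined SF hides
-- behind a single lazily constructed layer.
data Unit a = Unit a

lazy :: SF a b -> SF a b
lazy sf0 = SF g (Unit sf0)
  where g a (Unit sf1) = case runSF sf1 a of (b, sf2) -> Yield (Unit sf2) b

-- A small driver for experimenting:
runList :: SF a b -> [a] -> [b]
runList _ [] = []
runList sf (x:xs) = case runSF sf x of (y, sf') -> y : runList sf' xs
-- e.g. runList (lazy (SF (\a s -> Yield s (a * 2)) ())) [1,2,3] == [2,4,6]
```

The key point is that `Unit sf0` is built lazily: the recursive occurrence (e.g. `lazy (mapSF f)`) is only forced when that branch of the program is actually stepped, which is what breaks the `mapSF f = mapSF f` loop.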