From simonpj at microsoft.com Fri Jun 1 07:17:14 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Jun 2018 07:17:14 +0000 Subject: Trac email In-Reply-To: References: Message-ID: Alas I am still getting no Trac mail whatsoever, despite Tamar’s work below. Can anyone help? It’s quite disabling. Thanks Simon From: Phyx Sent: 31 May 2018 22:43 To: Artem Pelenitsyn Cc: Simon Peyton Jones ; ghc-devs at haskell.org Subject: Re: Trac email The mail servers were backed up with spam apparently. No emails were delivered or received to any of the mailing lists https://www.reddit.com/r/haskell/comments/8mza0y/whats_up_with_mailhaskellorg_dark_since_sunday/ Cheers, Tamar On Thu, May 31, 2018, 15:26 Artem Pelenitsyn > wrote: Hello, It stopped working for me from like Sunday and wasn't working for about two days. Then, on Tuesday, it silently started working again. Although, I haven't checked that I get all the emails. -- Best, Artem On Thu, 31 May 2018, 21:13 Simon Peyton Jones via ghc-devs, > wrote: Devs Trac has entirely stopped sending me email, so I have no idea what’s happening on the GHC front. Could someone unglue it? Thanks! Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Fri Jun 1 10:57:23 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Fri, 1 Jun 2018 06:57:23 -0400 Subject: -fghci-leak-check apparently causes many tests to fail Message-ID: One thing I forgot to mention is that these test failures only seem to occur with the `quick` build flavor, and I couldn't reproduce them with ./validate. Is -fghci-leak-check expected to have different behavior if stage-2 GHC is built without optimization? Ryan S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Fri Jun 1 12:49:00 2018 From: lonetiger at gmail.com (Phyx) Date: Fri, 1 Jun 2018 08:49:00 -0400 Subject: Trac email In-Reply-To: References: Message-ID: No that was Gershom whom fixed it before. Perhaps he knows what's up? Tamar On Fri, Jun 1, 2018, 03:17 Simon Peyton Jones wrote: > Alas I am still getting no Trac mail whatsoever, despite Tamar’s work > below. Can anyone help? It’s quite disabling. > > > > Thanks > > > > Simon > > > > *From:* Phyx > *Sent:* 31 May 2018 22:43 > *To:* Artem Pelenitsyn > *Cc:* Simon Peyton Jones ; ghc-devs at haskell.org > *Subject:* Re: Trac email > > > > The mail servers were backed up with spam apparently. No emails were > delivered or received to any of the mailing lists > https://www.reddit.com/r/haskell/comments/8mza0y/whats_up_with_mailhaskellorg_dark_since_sunday/ > > > > > Cheers, > > Tamar > > On Thu, May 31, 2018, 15:26 Artem Pelenitsyn > wrote: > > Hello, > > > > It stopped working for me from like Sunday and wasn't working for about > two days. Then, on Tuesday, it silently started working again. Although, I > haven't checked that I get all the emails. > > > > -- > > Best, Artem > > > > On Thu, 31 May 2018, 21:13 Simon Peyton Jones via ghc-devs, < > ghc-devs at haskell.org> wrote: > > Devs > > Trac has entirely stopped sending me email, so I have no idea what’s > happening on the GHC front. Could someone unglue it? > > Thanks! 
> > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Fri Jun 1 13:27:24 2018 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Jun 2018 09:27:24 -0400 Subject: Trac email In-Reply-To: References: Message-ID: <878t7yk56z.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Alas I am still getting no Trac mail whatsoever, despite Tamar’s work > below. Can anyone help? It’s quite disabling. > Hmm, this is unfortunate. Gershom, do you happen to know what is going on here? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gershomb at gmail.com Fri Jun 1 14:33:27 2018 From: gershomb at gmail.com (Gershom B) Date: Fri, 1 Jun 2018 07:33:27 -0700 Subject: Trac email In-Reply-To: <878t7yk56z.fsf@smart-cactus.org> References: <878t7yk56z.fsf@smart-cactus.org> Message-ID: The mail queue is clear, and I’ve confirmed that I (and others) can get Trac emails. I checked, and Simon’s email had been automatically disabled from receiving emails from the ghc-tickets list due to bounces. I re-enabled it, so perhaps that suffices. If not, at least it’ll give us a lead on _why_ things might be bouncing... Simon: contact me off-thread if you’d like, with what other email lists you want to receive messages from, if you want to confirm that they also haven’t stopped sending to you due to the bounces. —Gershom On June 1, 2018 at 9:27:27 AM, Ben Gamari (ben at well-typed.com) wrote: Simon Peyton Jones via ghc-devs writes: > Alas I am still getting no Trac mail whatsoever, despite Tamar’s work > below. Can anyone help? It’s quite disabling. > Hmm, this is unfortunate. Gershom, do you happen to know what is going on here? Cheers, - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpeddie at gmail.com Sat Jun 2 05:23:22 2018 From: mpeddie at gmail.com (Matt Peddie) Date: Sat, 2 Jun 2018 15:23:22 +1000 Subject: accuracy of asinh and atanh Message-ID: Hi devs, I tried to use asinh :: Double -> Double and discovered that it's inaccurate compared to my system library (GNU libm), even returning -Infinity in place of finite values in the neighborhood of -22 for large negative arguments. `atanh` is also inaccurate compared to the system library. I wrote up a more detailed description of the problem, including plots, in the README file at https://github.com/peddie/ghc-inverse-hyperbolic -- this repository is a package that can help you examine the error for yourself or generate the plots, and it also contains accurate pure-Haskell translations of the system library's implementation of these functions. What's the next step to fixing this in GHC? Cheers Matt Peddie From klebinger.andreas at gmx.at Sat Jun 2 10:13:32 2018 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sat, 02 Jun 2018 12:13:32 +0200 Subject: Combining Bag/OrdList? Message-ID: <5B126DCC.5090207@gmx.at> We have OrdList which does: Provide trees (of instructions), so that lists of instructions can be appended in linear time.
And Bag which claims to be: an unordered collection with duplicates However the actual implementation of Bag is also a tree of things. Given that we have snocBag and consBag, that implies to me it's also an ordered collection. I wondered whether, besides someone having to do the work, there is a reason why these couldn't be combined into a single data structure? Their implementations seem similar enough as far as I can tell. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kavon at farvard.in Sat Jun 2 16:00:29 2018 From: kavon at farvard.in (Kavon Farvardin) Date: Sat, 2 Jun 2018 11:00:29 -0500 Subject: Combining Bag/OrdList? In-Reply-To: <5B126DCC.5090207@gmx.at> References: <5B126DCC.5090207@gmx.at> Message-ID: If we have an algorithm that only needs a Bag, then we are free to improve the implementation of Bag in the future so that it doesn’t preserve order under the hood (e.g., use a hash table). So, I personally think it’s useful to have around. Sent from my phone. > On Jun 2, 2018, at 5:13 AM, Andreas Klebinger wrote: > > We have OrdList which does: > > Provide trees (of instructions), so that lists of instructions > can be appended in linear time. > > And Bag which claims to be: > > an unordered collection with duplicates > > However the actual implementation of Bag is also a tree of things. > Given that we have snocBag and consBag, that implies to me it's > also an ordered collection. > > I wondered whether, besides someone having to do the work, there is a reason why these couldn't be combined > into a single data structure? Their implementations seem similar enough as far as I can tell. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From klebinger.andreas at gmx.at Sat Jun 2 17:03:02 2018 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Sat, 02 Jun 2018 19:03:02 +0200 Subject: Combining Bag/OrdList? In-Reply-To: References: <5B126DCC.5090207@gmx.at> Message-ID: <5B12CDC6.7070301@gmx.at> > we are free to improve the implementation of Bag in the future so that it doesn’t preserve order Imo we lost that ability by exposing consBag & snocBag, which imply that there is a front and a back, and which at first glance also seem to be used in GHC with exactly that behavior in mind. I agree with the thought that not guaranteeing an ordering might have benefits. But in practice they are almost the same data structure with slightly different interfaces. > Kavon Farvardin > Saturday, 2 June 2018 18:00 > If we have an algorithm that only needs a Bag, then we are free to > improve the implementation of Bag in the future so that it doesn’t > preserve order under the hood (e.g., use a hash table). So, I > personally think it’s useful to have around. > > Sent from my phone. > > > Andreas Klebinger > Saturday, 2 June 2018 12:13 > We have OrdList which does: > > Provide trees (of instructions), so that lists of instructions > can be appended in linear time. > > And Bag which claims to be: > > an unordered collection with duplicates > > However the actual implementation of Bag is also a tree of things. > Given that we have snocBag and consBag, that implies to me it's > also an ordered collection. > > I wondered whether, besides someone having to do the work, there is a reason > why these couldn't be combined > into a single data structure? Their implementations seem similar > enough as far as I can tell. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at smart-cactus.org Sat Jun 2 17:39:16 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 02 Jun 2018 13:39:16 -0400 Subject: Combining Bag/OrdList? In-Reply-To: <5B12CDC6.7070301@gmx.at> References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> Message-ID: <87in71hyv4.fsf@smart-cactus.org> Andreas Klebinger writes: > > we are free to improve the implementation of Bag in the future so > that it doesn’t preserve order > > Imo we lost that ability by exposing consBag & snocBag which imply that > there is a front and a back. > Which at first glance also seem to be already used in GHC with that > behavior in mind. > It looks to me like many of the applications of snocBag should really be using OrdList. In my opinion we should keep the two types apart and simply be more careful about when we use each. There is value in being precise about whether or not ordering of a structure is relevant, even if we don't take advantage of this in the structure's representation. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From jweakly at pdx.edu Sat Jun 2 17:55:10 2018 From: jweakly at pdx.edu (Jared Weakly) Date: Sat, 2 Jun 2018 10:55:10 -0700 Subject: Combining Bag/OrdList? In-Reply-To: <87in71hyv4.fsf@smart-cactus.org> References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> <87in71hyv4.fsf@smart-cactus.org> Message-ID: > It looks to me like many of the applications of snocBag should really be using OrdList. Do you think there's benefit in refactoring to use ordList and then removing snoc/cons from the bag API (instead providing only operations that make no assumptions about ordering)? Jared On Sat, Jun 2, 2018, 10:39 AM Ben Gamari wrote: > Andreas Klebinger writes: > > > > we are free to improve the implementation of Bag in the future so > > that it doesn’t preserve order > > > > Imo we lost that ability by exposing consBag & snocBag which imply that > > there is a front and a back. > > Which at first glance also seem to be already used in GHC with that > > behavior in mind. > > > It looks to me like many of the applications of snocBag should really be > using OrdList. > > In my opinion we should keep the two types apart and simply be more > careful about when we use each. There is value in being precise about > whether or not ordering of a structure is relevant, even if we don't > take advantage of this in the structure's representation. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sat Jun 2 17:57:12 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 02 Jun 2018 13:57:12 -0400 Subject: Combining Bag/OrdList? In-Reply-To: References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> <87in71hyv4.fsf@smart-cactus.org> Message-ID: <87fu25hy19.fsf@smart-cactus.org> Jared Weakly writes: >> It looks to me like many of the applications of snocBag should really be > using OrdList. > > Do you think there's benefit in refactoring to use ordList and then > removing snoc/cons from the bag API (instead providing only operations that > make no assumptions about ordering)? > Absolutely. I think that is the right direction to move. Please do give it a stab if you would like! 
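To make the proposed split concrete, here is a minimal standalone sketch of an order-agnostic Bag API of the kind being discussed. It is only an illustration: the list representation and the name addToBag are hypothetical rather than GHC's actual Bag module, and any caller that relies on a front or a back would move to OrdList instead.

module BagSketch (Bag, emptyBag, unitBag, addToBag, unionBags, mapBag, bagToList) where

-- A deliberately opaque bag; callers may not rely on any ordering.
newtype Bag a = Bag [a]

emptyBag :: Bag a
emptyBag = Bag []

unitBag :: a -> Bag a
unitBag x = Bag [x]

-- Replaces consBag/snocBag: adds an element without implying a front or a back.
addToBag :: a -> Bag a -> Bag a
addToBag x (Bag xs) = Bag (x : xs)

unionBags :: Bag a -> Bag a -> Bag a
unionBags (Bag xs) (Bag ys) = Bag (xs ++ ys)

mapBag :: (a -> b) -> Bag a -> Bag b
mapBag f (Bag xs) = Bag (map f xs)

-- The order of the resulting list is unspecified.
bagToList :: Bag a -> [a]
bagToList (Bag xs) = xs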
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Sat Jun 2 18:12:11 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 02 Jun 2018 20:12:11 +0200 Subject: Combining Bag/OrdList? In-Reply-To: References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> <87in71hyv4.fsf@smart-cactus.org> Message-ID: <55f0722aa9818d1f28c55c97594c6be26ef1c7de.camel@joachim-breitner.de> Hi, Am Samstag, den 02.06.2018, 10:55 -0700 schrieb Jared Weakly: > > It looks to me like many of the applications of snocBag should really be using OrdList. > > Do you think there's benefit in refactoring to use ordList and then > removing snoc/cons from the bag API (instead providing only > operations that make no assumptions about ordering)? would you remove `toList` (which has to fix an ordering)? Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Sat Jun 2 18:27:24 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 02 Jun 2018 14:27:24 -0400 Subject: Combining Bag/OrdList? In-Reply-To: <55f0722aa9818d1f28c55c97594c6be26ef1c7de.camel@joachim-breitner.de> References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> <87in71hyv4.fsf@smart-cactus.org> <55f0722aa9818d1f28c55c97594c6be26ef1c7de.camel@joachim-breitner.de> Message-ID: <878t7xhwmw.fsf@smart-cactus.org> Joachim Breitner writes: > Hi, > > Am Samstag, den 02.06.2018, 10:55 -0700 schrieb Jared Weakly: >> > It looks to me like many of the applications of snocBag should really be using OrdList. >> >> Do you think there's benefit in refactoring to use ordList and then >> removing snoc/cons from the bag API (instead providing only >> operations that make no assumptions about ordering)? > > would you remove `toList` (which has to fix an ordering)? > Yes, this seems unavoidable. However, the documentation would make it clear that the returned order is arbitrary. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ekmett at gmail.com Mon Jun 4 04:30:54 2018 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 4 Jun 2018 00:30:54 -0400 Subject: accuracy of asinh and atanh In-Reply-To: References: Message-ID: Note: From skimming your readme it is worth noting that log1p _is_ in base now (alongside expm1, log1pexp, and log1mexp). We added them all a couple of years back as a result of the very thread linked in your README. You need to `import Numeric` to see them, though. Switching to more accurate functions for doubles and floats for asinh, atanh, etc. to exploit this sort of functionality at least seems to make a lot of sense. That can be done locally without any user API impact as the current definitions aren't supplied as defaults, merely as pointwise implementations instance by instance. Things will just become more accurate. In that same spirit, we can probably crib a better version for complex numbers from somewhere as well, as it follows the same general simplistic formula right now, even if it can't be plugged directly into the equations you've given. 
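For the Double case specifically, the kind of rewrite in question can be sketched with the log1p now exported from Numeric. This is only an illustration of the idea, not glibc's actual algorithm: the names asinhSketch/atanhSketch are made up and the cutoff for the large-argument branch is arbitrary.

import Numeric (log1p)

-- Sketch: asinh via its odd symmetry plus log1p, so the formula never
-- evaluates x + sqrt (x*x + 1) where it cancels towards zero.
asinhSketch :: Double -> Double
asinhSketch x
  | x < 0     = negate (asinhSketch (negate x))
  | x > 1e10  = log x + log 2   -- sqrt (x*x + 1) is indistinguishable from x here,
                                -- and this branch also avoids overflow in x*x
  | otherwise = log1p (x + x*x / (1 + sqrt (x*x + 1)))

-- Sketch: atanh x = 0.5 * log ((1 + x) / (1 - x)), rewritten through log1p.
atanhSketch :: Double -> Double
atanhSketch x
  | x < 0     = negate (atanhSketch (negate x))
  | otherwise = 0.5 * log1p (2 * x / (1 - x))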
For that matter, the log1p definition we're using for complex numbers was the best I could come up with, but there may well be a more accurate version you can find down in the mines of libm or another math library written by real analysts.

log1p x@(a :+ b)
  | abs a < 0.5 && abs b < 0.5
  , u <- 2*a + a*a + b*b
  = log1p (u / (1 + sqrt (u + 1))) :+ atan2 (1 + a) b
  | otherwise = log (1 + x)

So, here's a +1 from the libraries committee side towards improving the situation. From there, it's a small matter of implementation. Here's where I'd usually get Ben involved. Hi Ben! -Edward On Sat, Jun 2, 2018 at 1:23 AM, Matt Peddie wrote: > Hi devs, > > I tried to use asinh :: Double -> Double and discovered that it's > inaccurate compared to my system library (GNU libm), even returning > -Infinity in place of finite values in the neighborhood of -22 for > large negative arguments. `atanh` is also inaccurate compared to the > system library. I wrote up a more detailed description of the problem > including plots in the README file at > https://github.com/peddie/ghc-inverse-hyperbolic -- this repository is a > package that can help you examine the error for yourself or generate > the plots, and it also contains accurate pure-Haskell translations of > the system library's implementation for these functions. What's the > next step to fixing this in GHC? > > Cheers > > Matt Peddie > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jun 4 08:29:30 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Jun 2018 08:29:30 +0000 Subject: Combining Bag/OrdList? In-Reply-To: <87in71hyv4.fsf@smart-cactus.org> References: <5B126DCC.5090207@gmx.at> <5B12CDC6.7070301@gmx.at> <87in71hyv4.fsf@smart-cactus.org> Message-ID: | > Imo we lost that ability by exposing consBag & snocBag which imply | > that there is a front and a back. Excellent point! I agree with Ben here. * We should rename consBag/snocBag to extendBag * And use OrdList instead of Bag in any places where the order matters. Figuring out which those places are would require a little study. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Ben Gamari | Sent: 02 June 2018 18:39 | To: Andreas Klebinger ; Kavon Farvardin | | Cc: ghc-devs at haskell.org | Subject: Re: Combining Bag/OrdList? | | Andreas Klebinger writes: | | > > we are free to improve the implementation of Bag in the future so | > that it doesn’t preserve order | > | > Imo we lost that ability by exposing consBag & snocBag which imply | > that there is a front and a back. | > Which at first glance also seem to be already used in GHC with that | > behavior in mind. | > | It looks to me like many of the applications of snocBag should really | be using OrdList. | | In my opinion we should keep the two types apart and simply be more | careful about when we use each. There is value in being precise about | whether or not ordering of a structure is relevant, even if we don't | take advantage of this in the structure's representation.
| | Cheers, | | - Ben From mail at nh2.me Tue Jun 5 16:25:07 2018 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 5 Jun 2018 18:25:07 +0200 Subject: ZuriHac 2018 GHC DevOps track - Request for Contributions In-Reply-To: References: Message-ID: <310542f0-3d97-6bb0-5d71-46b82906ec14@nh2.me> Hey Michal, sorry for the late reply on my side too, I had a surprisingly busy weekend. I think what you propose is great, let's do it and allocate you a talk slot. Do you have a preferred time? I think having some improvisation in it is totally OK: After all, we're aiming at GHC beginners, so the point of the talks is to get them motivated and some ideas what topics they could work on. Best, Niklas On 28/05/2018 19.13, Michal Terepeta wrote: > Hi Niklas, > > Sorry for slow reply - I'm totally snowed under at the moment. > > I should be able to give some overview/examples of what are primops and how they go through the compilation pipeline. And talk a bit about the Cmm-level parts of GHC. But I won't have much time to prepare, so there might be fair amount of improvisation... > > Are you coming to this week's HaskellerZ meetup? We could chat a bit more about this. > > Cheers! From rahulmutt at gmail.com Wed Jun 6 07:58:29 2018 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Wed, 6 Jun 2018 13:28:29 +0530 Subject: revertCAFs and -fexternal-interpreter Message-ID: Hello devs, I noticed that in ghc/GHCi/UI.hs, the calls to 'revertCAFs' are made in the compiler's RTS instead of the interpreter's RTS. When -fexternal-interpreter is on this distinction is visible, otherwise they are one and the same so it works as intended. Shouldn't there be a RevertCAFs data constructor in `libraries/ghci/GHCi/Message.hs` to tell the interpreter process to revert the CAFs in its heap? Thanks, Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed... URL: From whosekiteneverfly at gmail.com Wed Jun 6 23:45:04 2018 From: whosekiteneverfly at gmail.com (Yuji Yamamoto) Date: Thu, 7 Jun 2018 08:45:04 +0900 Subject: Add taggedTrace to Debug.Trace Message-ID: Nice to meet you, GHC Developers! I'm new to contributing to GHC. Today let me suggest new APIs of the Debug.Trace module, named: - taggedTraceShowId :: Show a => String -> a -> a - taggedTraceWith :: (a -> String) -> String -> a -> a These are inspired by Elm's Debug.log function. The prefix "tagged" is named after its argument . I mean, these new APIs prepend a string as a tag to the output by traceShowId etc. It helps us recognize what the printed values stand for. I frequently want such functions and write them manually or copy-and-paste from the Debug.TraceUtils. I'm tired of that. That's why I made this suggestion. *Comparison with the existing solution* - Debug.TraceUtils : - Essentially, this suggestion is to add APIs already implemented by TraceUtils. - As the document of TraceUtils suggests, we can copy and paste the functions from its source, but it's still tiresome. - Combine Debug.Trace.traceShowId with Debug.Trace.trace: - e.g. trace "Tag" $ traceShowId x - A bit hard to type. - trace always prints a newline, which makes it difficult to tell the tags from the printed value. After receiving some feedback here, I'm going to submit to https://github.com/ghc-proposals/ghc-proposals Thanks in advance! 
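For concreteness, here is a minimal sketch of the two proposed functions in terms of the existing Debug.Trace API. The "tag: value" separator is just one possible formatting choice and not part of the proposal.

import Debug.Trace (trace)

-- Sketch only: prepend the tag to whatever the rendering function produces.
taggedTraceWith :: (a -> String) -> String -> a -> a
taggedTraceWith f tag x = trace (tag ++ ": " ++ f x) x

taggedTraceShowId :: Show a => String -> a -> a
taggedTraceShowId = taggedTraceWith show

So taggedTraceShowId "userCount" 42 would print something like userCount: 42 and return 42 unchanged.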
-- Yuji Yamamoto twitter: @igrep GitHub: https://github.com/igrep GitLab: https://gitlab.com/igrep Facebook: http://www.facebook.com/igrep Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep -------------- next part -------------- An HTML attachment was scrubbed... URL: From zubin.duggal at gmail.com Thu Jun 7 10:38:01 2018 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Thu, 7 Jun 2018 16:08:01 +0530 Subject: Typechecker doesn't preserve HsPar in renamed source. Message-ID: Hello all, The typechecker doesn't preserve parentheses that occur at the head of applications. This results in some weird SrcSpans in the TypecheckedSource. For example, given the code foo a b c = (bar a) b c the typechecker will emit an HsApp with head spanning over `bar a) b` and tail spanning over `c`. Notice that the opening parenthesis is not included. On the other hand, the renamer will generate the expected SrcSpans that always include both parentheses, or neither. This becomes an issue when you want to associate RenamedSource with its corresponding TypecheckedSource, as the SrcSpans no longer match and overlap partially. This occurs due to this line in TcExpr.hs: tcApp m_herald (L _ (HsPar _ fun)) args res_ty = tcApp m_herald fun args res_ty I have a work-in-progress fix here: https://github.com/wz1000/ghc/commit/3b6db5a35dc8677a7187e349a85ffd51f452452a I have also created a ticket on trac: https://ghc.haskell.org/trac/ghc/ticket/15242#ticket -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Jun 7 10:53:43 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 07 Jun 2018 12:53:43 +0200 Subject: [commit: ghc] master: Remove ad-hoc special case in occAnal (c16382d) In-Reply-To: <20180607100617.F2E603ABA2@ghc.haskell.org> References: <20180607100617.F2E603ABA2@ghc.haskell.org> Message-ID: <4f587e2dbbe747427b16cd1d0cb2f9ad5938b168.camel@joachim-breitner.de> Dear Simon (and everyone else), On Thursday, 07.06.2018, 10:06 +0000, git at git.haskell.org wrote: > I did look at the > tiny increase in allocation for cacheprof and concluded that it was > unimportant (I forget the details). Don't bother with cacheprof; its allocation number is non-deterministic: https://ghc.haskell.org/trac/ghc/ticket/8611 I wonder how many hours were wasted by people trying to find some causal connection between their change and cacheprof… Maybe we should just remove cacheprof? Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Thu Jun 7 15:40:45 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 07 Jun 2018 11:40:45 -0400 Subject: Add taggedTrace to Debug.Trace In-Reply-To: References: Message-ID: <87fu1yhafc.fsf@smart-cactus.org> Yuji Yamamoto writes: > Nice to meet you, GHC Developers! > I'm new to contributing to GHC. > Hi Yuji! Thanks for your proposal. I think this is likely best handled by the Core Libraries Committee (CC'd). Let's see what they say. > Today let me suggest new APIs of the Debug.Trace > > module, named: > > - taggedTraceShowId :: Show a => String -> a -> a > - taggedTraceWith :: (a -> String) -> String -> a -> a > > These are inspired by Elm's Debug.log > > function. > The prefix "tagged" is named after its argument > .
> > I mean, these new APIs prepend a string as a tag to the output by > traceShowId etc. > It helps us recognize what the printed values stand for. > I frequently want such functions and write them manually or copy-and-paste > from the Debug.TraceUtils. > I'm tired of that. That's why I made this suggestion. > > *Comparison with the existing solution* > > - Debug.TraceUtils > > : > - Essentially, this suggestion is to add APIs already implemented by > TraceUtils. > - As the document of TraceUtils suggests, we can copy and paste the > functions from its source, but it's still tiresome. > - Combine Debug.Trace.traceShowId with Debug.Trace.trace: > - e.g. trace "Tag" $ traceShowId x > - A bit hard to type. > - trace always prints a newline, which makes it difficult to tell the > tags from the printed value. > > After receiving some feedback here, I'm going to submit to > https://github.com/ghc-proposals/ghc-proposals > Thanks in advance! > Personally, I do like the "With" variant as I regularly find myself needing things like this. I'm a bit unsure of whether we want to bake the "tag" notion into the interface, however. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ekmett at gmail.com Thu Jun 7 15:55:23 2018 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 7 Jun 2018 17:55:23 +0200 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: <87fu1yhafc.fsf@smart-cactus.org> References: <87fu1yhafc.fsf@smart-cactus.org> Message-ID: <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> What different users would do with such a prefix, how to display it, etc. varies just enough that i’m somewhat hesitant to grow the API. I’m a very weak -1. But I’d happily let anybody else on the committee override that if they had a strong preference. -Edward > On Jun 7, 2018, at 5:40 PM, Ben Gamari wrote: > > Yuji Yamamoto writes: > >> Nice to meet you, GHC Developers! >> I'm new to contributing to GHC. >> > Hi Yuji! > > Thanks for your proposal. > > I think this is likely best handled by the Core Libraries Committee > (CC'd). Let's see what they say. > > > >> Today let me suggest new APIs of the Debug.Trace >> >> module, named: >> >> - taggedTraceShowId :: Show a => String -> a -> a >> - taggedTraceWith :: (a -> String) -> String -> a -> a >> >> These are inspired by Elm's Debug.log >> >> function. >> The prefix "tagged" is named after its argument >> . >> >> I mean, these new APIs prepend a string as a tag to the output by >> traceShowId etc. >> It helps us recognize what the printed values stand for. >> I frequently want such functions and write them manually or copy-and-paste >> from the Debug.TraceUtils. >> I'm tired of that. That's why I made this suggestion. >> >> *Comparison with the existing solution* >> >> - Debug.TraceUtils >> >> : >> - Essentially, this suggestion is to add APIs already implemented by >> TraceUtils. >> - As the document of TraceUtils suggests, we can copy and paste the >> functions from its source, but it's still tiresome. >> - Combine Debug.Trace.traceShowId with Debug.Trace.trace: >> - e.g. trace "Tag" $ traceShowId x >> - A bit hard to type. >> - trace always prints a newline, which makes it difficult to tell the >> tags from the printed value. >> >> After receiving some feedback here, I'm going to submit to >> https://github.com/ghc-proposals/ghc-proposals >> Thanks in advance! 
>> > > Personally, I do like the "With" variant as I regularly find myself > needing things like this. I'm a bit unsure of whether we want to bake > the "tag" notion into the interface, however. > > Cheers, > > - Ben > > -- > You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group. > To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. From ben at smart-cactus.org Thu Jun 7 16:59:45 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 07 Jun 2018 12:59:45 -0400 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> References: <87fu1yhafc.fsf@smart-cactus.org> <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> Message-ID: <87a7s6h6rn.fsf@smart-cactus.org> Edward Kmett writes: > What different users would do with such a prefix, how to display it, > etc. varies just enough that i’m somewhat hesitant to grow the API. > I’m a very weak -1. But I’d happily let anybody else on the committee > override that if they had a strong preference. > Right, as I mentioned I'm not sure about the tagging idea. However, I have found things of the form `(a -> String) -> a -> a` handy in the past. Then again, it's pretty trivial to open-code this when needed. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Thu Jun 7 20:34:32 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 7 Jun 2018 21:34:32 +0100 Subject: -fghci-leak-check apparently causes many tests to fail In-Reply-To: References: Message-ID: Sorry, only just saw this. -fghci-leak-check is a new flag I added to prevent regressions of the space leak that was fixed in https://phabricator.haskell.org/D4659 If you're seeing errors from this, then we should fix them. Could you open a ticket and assign to me please? Cheers Simon On 1 June 2018 at 11:57, Ryan Scott wrote: > One thing I forgot to mention is that these test failures only seem to > occur with the `quick` build flavor, and I couldn't reproduce them with > ./validate. Is -fghci-leak-check expected to have different behavior if > stage-2 GHC is built without optimization? > > Ryan S. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jun 7 20:38:16 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 7 Jun 2018 21:38:16 +0100 Subject: revertCAFs and -fexternal-interpreter In-Reply-To: References: Message-ID: Yes, very probably this is a bug. Please file a ticket and assign to me (or better still send a diff!). Cheers Simon On 6 June 2018 at 08:58, Rahul Muttineni wrote: > Hello devs, > > I noticed that in ghc/GHCi/UI.hs, the calls to 'revertCAFs' are made in > the compiler's RTS instead of the interpreter's RTS. When > -fexternal-interpreter is on this distinction is visible, otherwise they > are one and the same so it works as intended. > > Shouldn't there be a RevertCAFs data constructor in > `libraries/ghci/GHCi/Message.hs` to tell the interpreter process to > revert the CAFs in its heap? 
> > Thanks, > Rahul Muttineni > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Thu Jun 7 20:41:20 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 7 Jun 2018 16:41:20 -0400 Subject: -fghci-leak-check apparently causes many tests to fail In-Reply-To: References: Message-ID: > If you're seeing errors from this, then we should fix them. Could you open a ticket and assign to me please? I've opened Trac #15246 [1] for this. Ryan S. ----- [1] https://ghc.haskell.org/trac/ghc/ticket/15246 On Thu, Jun 7, 2018 at 4:34 PM, Simon Marlow wrote: > Sorry, only just saw this. -fghci-leak-check is a new flag I added to > prevent regressions of the space leak that was fixed in > https://phabricator.haskell.org/D4659 > > If you're seeing errors from this, then we should fix them. Could you > open a ticket and assign to me please? > > Cheers > Simon > > On 1 June 2018 at 11:57, Ryan Scott wrote: > >> One thing I forgot to mention is that these test failures only seem to >> occur with the `quick` build flavor, and I couldn't reproduce them with >> ./validate. Is -fghci-leak-check expected to have different behavior if >> stage-2 GHC is built without optimization? >> >> Ryan S. >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jun 7 20:47:56 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 7 Jun 2018 21:47:56 +0100 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: For loading large amounts of code into GHCi, you want to add -j<n> +RTS -A128m, where <n> is the number of cores on your machine. We've found that parallel compilation works really well in GHCi provided you use a nice large allocation area for the GC. This dramatically speeds up working with large numbers of modules in GHCi. (500 is small!) Cheers Simon On 30 May 2018 at 21:43, Matthew Pickering wrote: > Hi all, > > Csongor has informed me that he has worked out how to load GHC into > GHCi which can then be used with ghcid for a more interactive > development experience. > > 1. Put this .ghci file in compiler/ > > https://gist.github.com/mpickering/73749e7783f40cc762fec171b879704c > > 2. Run "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > from inside compiler/ > > It may take a while and require a little bit of memory but in the end > all 500 or so modules will be loaded. > > It can also be used with ghcid. > > ghcid -c "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > > Hopefully someone who has more RAM than I. > > Can anyone suggest the suitable place on the wiki for this information? > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kiss.csongor.kiss at gmail.com Thu Jun 7 20:55:00 2018 From: kiss.csongor.kiss at gmail.com (Csongor Kiss) Date: Thu, 7 Jun 2018 16:55:00 -0400 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: <3DC80775-7D8A-43B0-959B-91EA055B3D31@gmail.com> Indeed, it's a lot faster with these flags, thanks for the tip! Best, Csongor > On 7 Jun 2018, at 16:47, Simon Marlow wrote: > > For loading large amounts of code into GHCi, you want to add -j<n> +RTS -A128m, where <n> is the number of cores on your machine. We've found that parallel compilation works really well in GHCi provided you use a nice large allocation area for the GC. This dramatically speeds up working with large numbers of modules in GHCi. (500 is small!) > > Cheers > Simon > > On 30 May 2018 at 21:43, Matthew Pickering > wrote: > Hi all, > > Csongor has informed me that he has worked out how to load GHC into > GHCi which can then be used with ghcid for a more interactive > development experience. > > 1. Put this .ghci file in compiler/ > > https://gist.github.com/mpickering/73749e7783f40cc762fec171b879704c > > 2. Run "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > from inside compiler/ > > It may take a while and require a little bit of memory but in the end > all 500 or so modules will be loaded. > > It can also be used with ghcid. > > ghcid -c "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > > Hopefully someone who has more RAM than I. > > Can anyone suggest the suitable place on the wiki for this information? > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jun 7 21:00:45 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 7 Jun 2018 22:00:45 +0100 Subject: Why do we prevent static archives from being loaded when DYNAMIC_GHC_PROGRAMS=YES? In-Reply-To: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de> References: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de> Message-ID: There's a technical restriction. The static code would be compiled with the small memory model, so it would have 32-bit relocations for external references, assuming that those references would resolve to something in the low 2GB of the address space. But we would be trying to link it against shared libraries which could be loaded anywhere in the address space. If the static code was compiled with -fPIC then it might be possible, but there's also the restriction that we wouldn't be able to dlopen() a shared library that depends on the statically linked code, because the system linker can't see the symbols that the RTS linker has loaded. GHC doesn't currently know about this restriction, so it would probably go ahead and try, and things would break. Cheers Simon On 29 May 2018 at 04:05, Moritz Angermann wrote: > Dear friends, > > when we build GHC with DYNAMIC_GHC_PROGRAMS=YES, we essentially prevent > ghc/ghci > from using archives (.a). Is there a technical reason behind this? The > only reasoning so far I've come across was: insist on using dynamic/shared > objects, > because the user said so when building GHC. > > In that case, we don't however prevent GHC from building archive (static) > only > libraries.
And as a consequence when we later try to build another > archive of > a different library, that depends via TH on the former library, GHC will > bail > and complain that we don't have the relevant dynamic/shared object. Of > course we > don't we explicitly didn't build it. But the linker code we have in GHC is > perfectly capable of loading archives. So why don't we want to fall back > to > archives? > > Similarly, as @deech asked on twitter[1], why we prevent GHCi from loading > static > libraries? > > I'd like to understand the technical reason/rational for this behavior. > Can > someone help me out here? If there is no fundamental reason for this > behavior, > I'd like to go ahead and try to lift it. > > Thank you! > > Cheers, > Moritz > > --- > [1]: https://twitter.com/deech/status/1001182709555908608 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Jun 7 21:05:03 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 07 Jun 2018 17:05:03 -0400 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: <8736xygvey.fsf@smart-cactus.org> Matthew Pickering writes: > Hi all, > > Csongor has informed me that he has worked out how to load GHC into > GHCi which can then be used with ghcid for a more interactive > development experience. > > 1. Put this .ghci file in compiler/ > > https://gist.github.com/mpickering/73749e7783f40cc762fec171b879704c > > 2. Run "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > from inside compiler/ > > It may take a while and require a little bit of memory but in the end > all 500 or so modules will be loaded. > > It can also be used with ghcid. > > ghcid -c "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > > Hopefully someone who has more RAM than I. > > Can anyone suggest the suitable place on the wiki for this information? > How about on a new page (e.g. Building/InGhci) linked to from, * https://ghc.haskell.org/trac/ghc/wiki/Building * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions (in the Tips & Tricks section) It might also be a good idea to add a script to the tree capturing this pattern. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From qdunkan at gmail.com Thu Jun 7 21:25:31 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Thu, 7 Jun 2018 14:25:31 -0700 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On Thu, Jun 7, 2018 at 1:47 PM, Simon Marlow wrote: > For loading large amounts of code into GHCi, you want to add -j +RTS > -A128m where is the number of cores on your machine. We've found that > parallel compilation works really well in GHCi provided you use a nice large > allocation area for the GC. This dramatically speeds up working with large > numbers of modules in GHCi. (500 is small!) This is a bit of a thread hijack (feel free to change the subject), but I also have a workflow that involves loading a lot of modules in ghci (500-700). As long as I can coax ghci to load them, things are fast and work well, but my impression is that this isn't a common workflow, and specifically ghc developers don't do this, because just about every ghc release will break it in one way or another (e.g. 
by putting more flags in the recompile check hash), and no one seems to understand what I'm talking about when I suggest features to improve it (e.g. the recent msg about modtime and recompilation avoidance). Given the uphill battle, I've been thinking that linking most of those modules into a package and loading much fewer will be a better supported workflow. It's actually less convenient, because now it's divided between package level (which require a restart and relink if they change) and ghci level (which don't), but is maybe less likely to be broken by ghc changes. Also, all those loaded module consume a huge amount of memory, which I haven't tracked down yet, but maybe packages will load more efficiently. But ideally I would prefer to continue to not use packages, and in fact do per-module more aggressively for larger codebases, because the need to restart ghci (or the ghc API-using program) and do a lengthy relink every time a module in the "wrong place" changed seems like it could get annoying (in fact it already is, for a cabal-oriented workflow). Does the workflow at Facebook involve loading tons of individual modules as I do? Or do they get packed into packages? If it's the many modules, do you have recommendations making that work well and keeping it working? If packages are the way you're "supposed" to do things, then is there any idea about how hard it would be to reload packages at runtime? If both modules and packages can be reloaded, is there an intended conceptual difference between a package and an unpackaged collection of modules? To illustrate, I would put packages purely as a way to organize builds and distribution, and have no meaning at the compiler level, which is how I gather C compilers traditionally work (e.g. 'cc a.o b.o c.o' is the same as 'ar abc.a a.o b.o c.o; cc abc.a'). But that's clearly not how ghc sees it! thanks! From niteria at gmail.com Thu Jun 7 21:48:38 2018 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 7 Jun 2018 23:48:38 +0200 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: What version of GHC are you using? There have been some significant improvements like https://phabricator.haskell.org/rGHCb8fec6950ad99cbf11cd22698b8d5ab35afb828f, that only just made it into GHC 8.4. Some of them maybe haven't made it into a release yet. You could try building https://github.com/niteria/ghc/commits/ghc-8.0.2-facebook and see how well it works for you. Cheers, Bartosz czw., 7 cze 2018 o 23:26 Evan Laforge napisał(a): > > On Thu, Jun 7, 2018 at 1:47 PM, Simon Marlow wrote: > > For loading large amounts of code into GHCi, you want to add -j +RTS > > -A128m where is the number of cores on your machine. We've found that > > parallel compilation works really well in GHCi provided you use a nice large > > allocation area for the GC. This dramatically speeds up working with large > > numbers of modules in GHCi. (500 is small!) > > This is a bit of a thread hijack (feel free to change the subject), > but I also have a workflow that involves loading a lot of modules in > ghci (500-700). As long as I can coax ghci to load them, things are > fast and work well, but my impression is that this isn't a common > workflow, and specifically ghc developers don't do this, because just > about every ghc release will break it in one way or another (e.g. by > putting more flags in the recompile check hash), and no one seems to > understand what I'm talking about when I suggest features to improve > it (e.g. 
the recent msg about modtime and recompilation avoidance). > > Given the uphill battle, I've been thinking that linking most of those > modules into a package and loading much fewer will be a better > supported workflow. It's actually less convenient, because now it's > divided between package level (which require a restart and relink if > they change) and ghci level (which don't), but is maybe less likely to > be broken by ghc changes. Also, all those loaded module consume a > huge amount of memory, which I haven't tracked down yet, but maybe > packages will load more efficiently. > > But ideally I would prefer to continue to not use packages, and in > fact do per-module more aggressively for larger codebases, because the > need to restart ghci (or the ghc API-using program) and do a lengthy > relink every time a module in the "wrong place" changed seems like it > could get annoying (in fact it already is, for a cabal-oriented > workflow). > > Does the workflow at Facebook involve loading tons of individual > modules as I do? Or do they get packed into packages? If it's the > many modules, do you have recommendations making that work well and > keeping it working? If packages are the way you're "supposed" to do > things, then is there any idea about how hard it would be to reload > packages at runtime? If both modules and packages can be reloaded, is > there an intended conceptual difference between a package and an > unpackaged collection of modules? To illustrate, I would put packages > purely as a way to organize builds and distribution, and have no > meaning at the compiler level, which is how I gather C compilers > traditionally work (e.g. 'cc a.o b.o c.o' is the same as 'ar abc.a a.o > b.o c.o; cc abc.a'). But that's clearly not how ghc sees it! > > > thanks! > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From qdunkan at gmail.com Thu Jun 7 23:33:29 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Thu, 7 Jun 2018 16:33:29 -0700 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On Thu, Jun 7, 2018 at 2:48 PM, Bartosz Nitka wrote: > What version of GHC are you using? > There have been some significant improvements like > https://phabricator.haskell.org/rGHCb8fec6950ad99cbf11cd22698b8d5ab35afb828f, > that only just made it into GHC 8.4. I did in fact notice a very nice speedup in 8.4, this explains it. Finally I know who to thank for it! Thank you very much for that fix, it really makes a difference. Are there more goodies in the 8.0.2 facebook branch, or have they all made it into 8.4? As loaded modules seem to consume a lot of memory, I've considered trying GHC.Compact on them, but haven't looked into what that would entail. Have you considered something like that? From whosekiteneverfly at gmail.com Fri Jun 8 00:19:34 2018 From: whosekiteneverfly at gmail.com (Yuji Yamamoto) Date: Fri, 8 Jun 2018 09:19:34 +0900 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: References: <87fu1yhafc.fsf@smart-cactus.org> <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> <87a7s6h6rn.fsf@smart-cactus.org> Message-ID: Almost any formatting will do. At least I never care. I assume those APIs would be used for very ad-hoc use (like the other APIs in Debug.Trace). And debug codes put by such cases are deleted or disabled by NoTrace package in production. I want handy default functions available without batteries. 
Detailed formatting for debug messages should be configured by third-parties' logging libraries. 2018年6月8日(金) 4:29 Andrew Martin : > I am -1 on this. Such a function requires making a decision about > formatting. What does a user expect from > > > taggedTraceShowId "meganum" (42 :: Int) > > Any of these are reasonable: > > meganum: 42 > meganum [42] > [meganum]: 42 > > In different applications I've worked on, I've wanted different flavors of > something like this. Since there's no obvious choice, I don't think base is > a good place for such a function. > > On Thu, Jun 7, 2018 at 12:59 PM, Ben Gamari wrote: > >> Edward Kmett writes: >> >> > What different users would do with such a prefix, how to display it, >> > etc. varies just enough that i’m somewhat hesitant to grow the API. >> > I’m a very weak -1. But I’d happily let anybody else on the committee >> > override that if they had a strong preference. >> > >> Right, as I mentioned I'm not sure about the tagging idea. However, I >> have found things of the form `(a -> String) -> a -> a` handy in the past. >> Then again, it's pretty trivial to open-code this when needed. >> >> Cheers, >> >> - Ben >> >> -- >> You received this message because you are subscribed to the Google Groups >> "haskell-core-libraries" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to haskell-core-libraries+unsubscribe at googlegroups.com. >> For more options, visit https://groups.google.com/d/optout. >> > > > > -- > -Andrew Thaddeus Martin > -- 山本悠滋 twitter: @igrep GitHub: https://github.com/igrep GitLab: https://gitlab.com/igrep Facebook: http://www.facebook.com/igrep Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Jun 8 07:18:00 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 8 Jun 2018 08:18:00 +0100 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On 8 June 2018 at 00:33, Evan Laforge wrote: > On Thu, Jun 7, 2018 at 2:48 PM, Bartosz Nitka wrote: > > What version of GHC are you using? > > There have been some significant improvements like > > https://phabricator.haskell.org/rGHCb8fec6950ad99cbf11cd22698b > 8d5ab35afb828f, > > that only just made it into GHC 8.4. > > I did in fact notice a very nice speedup in 8.4, this explains it. > Finally I know who to thank for it! Thank you very much for that fix, > it really makes a difference. > > Are there more goodies in the 8.0.2 facebook branch, or have they all > made it into 8.4? > > As loaded modules seem to consume a lot of memory, I've considered > trying GHC.Compact on them, but haven't looked into what that would > entail. Have you considered something like that? > I think I looked into this and found that it wasn't going to be easy, but I forget exactly why. Off the top of my head: - you can't compact mutable things: perhaps the FastString table would give us problems here - there is lots of deliberate laziness to support demand-loading of interface files, compaction would force all of it - you can't compact functions, so if there are any functions in ModIface or ModDetails we would have to avoid compacting those parts of the structure somehow - there are cycles and sharing in these structures so we would need to use the more expensive compaction method that keeps a hash table, which is 10x slower than cheap compaction Probably worth looking into to find out exactly what the problems are though. 
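For reference, a minimal sketch of what the compact-regions API looks like, assuming the ghc-compact package that ships with GHC 8.2 and later. This is only an illustration of the API, not a proposal for how GHCi would use it, and the data being compacted here is made up.

import GHC.Compact (compact, compactWithSharing, getCompact, compactSize)

main :: IO ()
main = do
  -- 'compact' deep-evaluates a value and copies it into its own region.
  -- Functions, mutable objects and pinned data are rejected with a runtime
  -- exception, one of the obstacles listed above for ModIface/ModDetails.
  region <- compact ([1 .. 1000 :: Int], "interface-ish data")
  print (length (fst (getCompact region)))

  -- The sharing-preserving variant tolerates internal sharing and cycles, at
  -- the cost of a hash table during the copy (the "10x slower" method above).
  shared <- compactWithSharing (replicate 3 "shared string")
  print (getCompact shared)

  -- Compacted data can be sized up cheaply, which helps when profiling memory use.
  compactSize region >>= print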
Cheers Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Jun 8 07:29:03 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 8 Jun 2018 08:29:03 +0100 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On 7 June 2018 at 22:25, Evan Laforge wrote: > On Thu, Jun 7, 2018 at 1:47 PM, Simon Marlow wrote: > > For loading large amounts of code into GHCi, you want to add -j +RTS > > -A128m where is the number of cores on your machine. We've found that > > parallel compilation works really well in GHCi provided you use a nice > large > > allocation area for the GC. This dramatically speeds up working with > large > > numbers of modules in GHCi. (500 is small!) > > This is a bit of a thread hijack (feel free to change the subject), > but I also have a workflow that involves loading a lot of modules in > ghci (500-700). As long as I can coax ghci to load them, things are > fast and work well, but my impression is that this isn't a common > workflow, and specifically ghc developers don't do this, because just > about every ghc release will break it in one way or another (e.g. by > putting more flags in the recompile check hash), and no one seems to > understand what I'm talking about when I suggest features to improve > it (e.g. the recent msg about modtime and recompilation avoidance). > > Given the uphill battle, I've been thinking that linking most of those > modules into a package and loading much fewer will be a better > supported workflow. It's actually less convenient, because now it's > divided between package level (which require a restart and relink if > they change) and ghci level (which don't), but is maybe less likely to > be broken by ghc changes. Also, all those loaded module consume a > huge amount of memory, which I haven't tracked down yet, but maybe > packages will load more efficiently. > > But ideally I would prefer to continue to not use packages, and in > fact do per-module more aggressively for larger codebases, because the > need to restart ghci (or the ghc API-using program) and do a lengthy > relink every time a module in the "wrong place" changed seems like it > could get annoying (in fact it already is, for a cabal-oriented > workflow). > > Does the workflow at Facebook involve loading tons of individual > modules as I do? Yes, our workflow involves loading a large number of modules into GHCi. However, we have run into memory issues, which was the reason for the recent work on fixing this space leak: https://phabricator.haskell.org/D4659 As it is, this workflow is OK thanks to Bartosz' work on speedups for large numbers of modules, tweaking the RTS flags as I mentioned and some other fixes we've made in GHCi to avoid performance issues. (all of this is upstream, incidentally). There is probably low-hanging fruit to be had in reducing the memory usage of GHCi, nobody has really attacked this with the heap profiler for a while. However, I imagine at some point loading everything into GHCi will become unsustainable and we'll have to explore other strategies. There are a couple of options here: - pre-compile modules so that GHCi is loading the .o instead of interpreted code - move some of the code into pre-compiled packages, as you mentioned Cheers Simon > > Or do they get packed into packages? 
If it's the > many modules, do you have recommendations making that work well and > keeping it working? If packages are the way you're "supposed" to do > things, then is there any idea about how hard it would be to reload > packages at runtime? If both modules and packages can be reloaded, is > there an intended conceptual difference between a package and an > unpackaged collection of modules? To illustrate, I would put packages > purely as a way to organize builds and distribution, and have no > meaning at the compiler level, which is how I gather C compilers > traditionally work (e.g. 'cc a.o b.o c.o' is the same as 'ar abc.a a.o > b.o c.o; cc abc.a'). But that's clearly not how ghc sees it! > > > thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Jun 8 15:37:36 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 08 Jun 2018 17:37:36 +0200 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: <8736xygvey.fsf@smart-cactus.org> References: <8736xygvey.fsf@smart-cactus.org> Message-ID: <39d783b293f6008963a1273198605a2c8f3570cc.camel@joachim-breitner.de> Hi, On Thursday, 07.06.2018, at 17:05 -0400, Ben Gamari wrote: > How about on a new page (e.g. Building/InGhci) linked to from, > > * https://ghc.haskell.org/trac/ghc/wiki/Building > * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions (in the Tips > & Tricks section) > > It might also be a good idea to add a script to the tree capturing this > pattern. yes pretty please! Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From qdunkan at gmail.com Fri Jun 8 18:18:34 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Fri, 8 Jun 2018 11:18:34 -0700 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On Fri, Jun 8, 2018 at 12:29 AM, Simon Marlow wrote: > heap profiler for a while.
However, I imagine at some point loading > > everything into GHCi will become unsustainable and we'll have to explore > > other strategies. There are a couple of options here: > > - pre-compile modules so that GHCi is loading the .o instead of > interpreted > > code > > This is what I do, which is why I was complaining about GHC tending to > break it. But when it's working, it works well, I load 500+ modules > in under a second. > > > - move some of the code into pre-compiled packages, as you mentioned > > I was wondering about the tradeoffs between these two approaches, > compiled modules vs. packages. Compiled modules have the advantage > that you can reload without restarting ghci and relinking a large > library, but no one seems to notice when they break. Whereas if ghc > broke package loading it would get noticed right away. Could they be > unified so that, say, -package xyz is equivalent to adding the package > root (with all the .hi and .o files) to the -i list? I guess the low > level loading mechanism of loading a .so vs. a bunch of individual .o > files is different. > I'm slightly surprised that it keeps breaking for you, given that this is a core feature of GHCi and we have multiple tests for it. You'll need to remind me - what were the bugs specifically? Maybe we need more tests. There really are fundamental differences in how the compiler treats these two methods though, and I don't see an easy way to reconcile them. Loading object files happens as part of the compilation manager that manages the compilations for all the modules in the current package, whereas packages are assumed to be pre-compiled and are linked on-demand after all the compilation is done. Cheers Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From qdunkan at gmail.com Sat Jun 9 00:30:42 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Fri, 8 Jun 2018 17:30:42 -0700 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: On Fri, Jun 8, 2018 at 11:46 AM, Simon Marlow wrote: > I'm slightly surprised that it keeps breaking for you, given that this is a > core feature of GHCi and we have multiple tests for it. You'll need to > remind me - what were the bugs specifically? Maybe we need more tests. Most recently, 8.2 had this problem: https://ghc.haskell.org/trac/ghc/ticket/13604 I seem to recall an older version also had the same problem, in that it was too sensitive about hash changes, but I think it was a plain bug, unlike the ticket above which is arguable correct though inconvenient. I also remember requesting the "why did it reload" message (e.g. flags changed, etc.), probably due to some earlier change that made compiled modules not load. It's been so long I forget the details, sorry! Currently -fdefer-type-errors is broken: https://ghc.haskell.org/trac/ghc/ticket/14963 This is not related to loading .o files, but ghci in general. Also currently there's an issue where ghc uses modtime and then the elaborate recompilation check to determine whether to recompile to binary, but it seems ghci uses just the modtime check. I think this has always been there, but I'm only just noticing it recently because of switching to git. I don't completely understand what's going on here yet so I may be misrepresenting the situation. More tests would be welcome! I guess we could compile some modules, and ensure the binary continues to load after various kinds of poking and prodding. I'd be willing to contribute tests that represent my workflow. 
> There really are fundamental differences in how the compiler treats these > two methods though, and I don't see an easy way to reconcile them. Loading > object files happens as part of the compilation manager that manages the > compilations for all the modules in the current package, whereas packages > are assumed to be pre-compiled and are linked on-demand after all the > compilation is done. Ah, too bad. But just out of curiosity, is there anything about the OS level linking that's fundamentally different from ghci loading individual .o files, or is this more a result of how ghc and ghci have evolved? I know you did some work to unload object files; could the same thing be used to unload and reload packages dynamically? Even if it were manual, it could be a big improvement over having to shut down and restart the whole system because a package changed. From chak at justtesting.org Sun Jun 10 19:58:42 2018 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Sun, 10 Jun 2018 21:58:42 +0200 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: <04492A67-2285-425F-9608-C609A26D99CD@justtesting.org> > On 09.06.2018 at 02:30, Evan Laforge wrote: > Currently -fdefer-type-errors is broken: > https://ghc.haskell.org/trac/ghc/ticket/14963 This is not related to > loading .o files, but ghci in general. Which is a good indication that our CI story is still crap. Cheers, Manuel -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From nicolas.frisby at gmail.com Mon Jun 11 15:21:08 2018 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Mon, 11 Jun 2018 08:21:08 -0700 Subject: [GHC] #15147: Type checker plugin receives Wanteds that are not completely unflattened In-Reply-To: <061.773bb808a092a29e57b71a1df3383edd@haskell.org> References: <046.7de470c11f4eabed34171ca75a3b8063@haskell.org> <061.773bb808a092a29e57b71a1df3383edd@haskell.org> Message-ID: That way of saying it clarifies the expectations for me. And doesn't seem too burdensome for the plugin author. Thus I think this ticket could be resolved by updating the documentation. (Though I still would like for a plugin to be able to request the flattened Wanteds. Separate ticket?) In particular this sentence in the User Guide "[The plugin] will be invoked at two points in the constraint solving process: after simplification of given constraints, and after unflattening of wanted constraints." would benefit from some elaboration. Specifically, "unflattening of wanted constraints" is somewhat ambiguous: until you spelled it out, I was thinking that if a constraint is flattened, it doesn't have any flattening variables in it. However, I'm inferring here that the jargon is used to mean that "unflattening a wanted constraint" only eliminates fmvs, possibly leaving fsks behind? That's what I've been confused about (until now, I think). Thanks.
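(To make the jargon concrete, here is a small self-contained example of where these flattening variables arise; the module is made up, and the comments only restate the behaviour described above:

{-# LANGUAGE TypeFamilies #-}
module FlattenExample where

type family F a

-- While checking the body of 'g', the Given (F a ~ b) is flattened into a
-- CFunEqCan of the shape  F a ~ fsk  (plus  fsk ~ b), so a flattening
-- skolem stands for F a from then on.  Per the ticket comment, those Given
-- CFunEqCans and their fsks survive to the plugin, whereas Wanted
-- CFunEqCans and their fmvs are removed by unflattening.
g :: F a ~ b => b -> F a
g x = x

So a plugin can still be handed Wanteds that mention that fsk, even though all fmvs have been unflattened away.)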
On Tue, Jun 5, 2018, 01:48 GHC wrote: > #15147: Type checker plugin receives Wanteds that are not completely > unflattened > -------------------------------------+------------------------------------- > Reporter: nfrisby | Owner: (none) > Type: bug | Status: new > Priority: normal | Milestone: 8.6.1 > Component: Compiler (Type | Version: 8.4.1 > checker) | Keywords: > Resolution: | TypeCheckerPlugins > Operating System: Unknown/Multiple | Architecture: > | Unknown/Multiple > Type of failure: None/Unknown | Test Case: > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): > Wiki Page: | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > > Perhaps I'm misunderstanding something > > I didn't express it very clearly. As it stands, the Given CFunEqCan's > remain, and hence so do the fsks. The Wanted CFunEqCans are removed > (currently) along with the fmvs. > > So yes, currently Wanteds can contain fsks, whose definition is given by a > CFunEqCan. I would have thought that most plugins would not find it hard > to deal with that. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whosekiteneverfly at gmail.com Tue Jun 12 00:18:30 2018 From: whosekiteneverfly at gmail.com (Yuji Yamamoto) Date: Tue, 12 Jun 2018 09:18:30 +0900 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: References: <87fu1yhafc.fsf@smart-cactus.org> <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> <87a7s6h6rn.fsf@smart-cactus.org> Message-ID: Can anyone give me more feedback? I'm interested especially in my last reply. 2018年6月8日(金) 9:19 Yuji Yamamoto : > Almost any formatting will do. At least I never care. > I assume those APIs would be used for very ad-hoc use (like the other APIs > in Debug.Trace). > And debug codes put by such cases are deleted or disabled by NoTrace > package in production. > I want handy default functions available without batteries. > Detailed formatting for debug messages should be configured by > third-parties' logging libraries. > > 2018年6月8日(金) 4:29 Andrew Martin : > >> I am -1 on this. Such a function requires making a decision about >> formatting. What does a user expect from >> >> > taggedTraceShowId "meganum" (42 :: Int) >> >> Any of these are reasonable: >> >> meganum: 42 >> meganum [42] >> [meganum]: 42 >> >> In different applications I've worked on, I've wanted different flavors >> of something like this. Since there's no obvious choice, I don't think base >> is a good place for such a function. >> >> On Thu, Jun 7, 2018 at 12:59 PM, Ben Gamari wrote: >> >>> Edward Kmett writes: >>> >>> > What different users would do with such a prefix, how to display it, >>> > etc. varies just enough that i’m somewhat hesitant to grow the API. >>> > I’m a very weak -1. But I’d happily let anybody else on the committee >>> > override that if they had a strong preference. >>> > >>> Right, as I mentioned I'm not sure about the tagging idea. However, I >>> have found things of the form `(a -> String) -> a -> a` handy in the >>> past. >>> Then again, it's pretty trivial to open-code this when needed. >>> >>> Cheers, >>> >>> - Ben >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "haskell-core-libraries" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to haskell-core-libraries+unsubscribe at googlegroups.com. 
>>> For more options, visit https://groups.google.com/d/optout. >>> >> >> >> >> -- >> -Andrew Thaddeus Martin >> > > > -- > 山本悠滋 > twitter: @igrep > GitHub: https://github.com/igrep > GitLab: https://gitlab.com/igrep > Facebook: http://www.facebook.com/igrep > Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep > -- 山本悠滋 twitter: @igrep GitHub: https://github.com/igrep GitLab: https://gitlab.com/igrep Facebook: http://www.facebook.com/igrep Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Tue Jun 12 02:38:46 2018 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Mon, 11 Jun 2018 19:38:46 -0700 Subject: [GHC] #15147: Type checker plugin receives Wanteds that are not completely unflattened In-Reply-To: References: <046.7de470c11f4eabed34171ca75a3b8063@haskell.org> <061.773bb808a092a29e57b71a1df3383edd@haskell.org> Message-ID: Whoops, I replied via email instead of commenting on the ticket. I've done so now. Sorry for the mailing list noise. On Mon, Jun 11, 2018, 08:21 Nicolas Frisby wrote: > That way of saying it clarifies the expectations for me. And doesn't seem > too burdensome for the plugin author. > > Thus I think this ticket could be resolved by updating the documentation. > (Though I still would like for a plugin to be able to request the flattened > Wanteds. Separate ticket?) > > In particular this sentence in the User Guide > > "[The plugin] will be invoked at two points in the constraint solving > process: after simplification of given constraints, and after unflattening > of wanted constraints." > > would benefit from some elaboration. Specifically, "unflattening of wanted > constraints" is somewhat ambiguous: until you spelled it out, I was > thinking that if a constraint is flattened, it doesn't have any flattening > variables in it. However, I'm inferring here that the jargon is used to > mean that "unflattening a wanted constraint" only eliminates fmvs, possibly > leaving fsks behind? That's what I've been confused about (until now, I > think). Thanks. > > > On Tue, Jun 5, 2018, 01:48 GHC wrote: > >> #15147: Type checker plugin receives Wanteds that are not completely >> unflattened >> >> -------------------------------------+------------------------------------- >> Reporter: nfrisby | Owner: (none) >> Type: bug | Status: new >> Priority: normal | Milestone: 8.6.1 >> Component: Compiler (Type | Version: 8.4.1 >> checker) | Keywords: >> Resolution: | TypeCheckerPlugins >> Operating System: Unknown/Multiple | Architecture: >> | Unknown/Multiple >> Type of failure: None/Unknown | Test Case: >> Blocked By: | Blocking: >> Related Tickets: | Differential Rev(s): >> Wiki Page: | >> >> -------------------------------------+------------------------------------- >> >> Comment (by simonpj): >> >> > Perhaps I'm misunderstanding something >> >> I didn't express it very clearly. As it stands, the Given CFunEqCan's >> remain, and hence so do the fsks. The Wanted CFunEqCans are removed >> (currently) along with the fmvs. >> >> So yes, currently Wanteds can contain fsks, whose definition is given by >> a >> CFunEqCan. I would have thought that most plugins would not find it hard >> to deal with that. >> >> -- >> Ticket URL: >> GHC >> The Glasgow Haskell Compiler >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at cs.brynmawr.edu Tue Jun 12 03:34:19 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 11 Jun 2018 23:34:19 -0400 Subject: Typechecker doesn't preserve HsPar in renamed source. In-Reply-To: References: Message-ID: This looks pretty good to me. What's "in progress" about it? I would want to see a comment on the declaration for HsArgPar with an example, and a test case. Thanks! Richard > On Jun 7, 2018, at 6:38 AM, Zubin Duggal wrote: > > Hello all, > > The typechecker doesn't preserve parenthesis that occur at the head of applications. > > This results in some weird SrcSpans in the TypecheckedSource > > For example, given code > > foo a b c = (bar a) b c > > The typechecker will emit an HsApp with head spanning over `bar a) b` and tail spanning over `c`. > Notice that the opening parenthesis is not included. > > On the other hand, the renamer will generate the expected SrcSpans that always include both parenthesis, or neither. This becomes an issue when you want to associate RenamedSource with its corresponding TypecheckedSource, as the SrcSpans no longer match and overlap partially. > > This occurs due to this line in TcExpr.hs > > tcApp m_herald (L _ (HsPar _ fun)) args res_ty > = tcApp m_herald fun args res_ty > > I have a work in progress fix here: https://github.com/wz1000/ghc/commit/3b6db5a35dc8677a7187e349a85ffd51f452452a > > I have also created a ticket on trac: https://ghc.haskell.org/trac/ghc/ticket/15242#ticket > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From zubin.duggal at gmail.com Tue Jun 12 03:51:34 2018 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Tue, 12 Jun 2018 09:21:34 +0530 Subject: Typechecker doesn't preserve HsPar in renamed source. In-Reply-To: References: Message-ID: At the time I hadn't modified tcSeq and tcTagToEnum to take HsArgPars into account. I have now done that, and also added a test case. I've also submitted the fix to Phab, over here: https://phabricator.haskell.org/D4822 On 12 June 2018 at 09:04, Richard Eisenberg wrote: > This looks pretty good to me. What's "in progress" about it? > > I would want to see a comment on the declaration for HsArgPar with an > example, and a test case. > > Thanks! > Richard > > On Jun 7, 2018, at 6:38 AM, Zubin Duggal wrote: > > Hello all, > > The typechecker doesn't preserve parenthesis that occur at the head of > applications. > > This results in some weird SrcSpans in the TypecheckedSource > > For example, given code > > foo a b c = (bar a) b c > > The typechecker will emit an HsApp with head spanning over `bar a) b` and > tail spanning over `c`. > Notice that the opening parenthesis is not included. > > On the other hand, the renamer will generate the expected SrcSpans that > always include both parenthesis, or neither. This becomes an issue when you > want to associate RenamedSource with its corresponding TypecheckedSource, > as the SrcSpans no longer match and overlap partially. 
> > This occurs due to this line in TcExpr.hs > > tcApp m_herald (L _ (HsPar _ fun)) args res_ty > = tcApp m_herald fun args res_ty > > I have a work in progress fix here: https://github.com/wz1000/ghc/commit/ > 3b6db5a35dc8677a7187e349a85ffd51f452452a > > I have also created a ticket on trac: https://ghc.haskell.org/trac/ > ghc/ticket/15242#ticket > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Tue Jun 12 03:54:02 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 11 Jun 2018 23:54:02 -0400 Subject: Typechecker doesn't preserve HsPar in renamed source. In-Reply-To: References: Message-ID: <7BF7F0EB-04CC-4456-9E06-C203A869D660@cs.brynmawr.edu> Great -- and glad to see this getting appropriate attention over there. Richard > On Jun 11, 2018, at 11:51 PM, Zubin Duggal wrote: > > At the time I hadn't modified tcSeq and tcTagToEnum to take HsArgPars into account. I have now done that, and also added a test case. > > I've also submitted the fix to Phab, over here: https://phabricator.haskell.org/D4822 > > On 12 June 2018 at 09:04, Richard Eisenberg > wrote: > This looks pretty good to me. What's "in progress" about it? > > I would want to see a comment on the declaration for HsArgPar with an example, and a test case. > > Thanks! > Richard > >> On Jun 7, 2018, at 6:38 AM, Zubin Duggal > wrote: >> >> Hello all, >> >> The typechecker doesn't preserve parenthesis that occur at the head of applications. >> >> This results in some weird SrcSpans in the TypecheckedSource >> >> For example, given code >> >> foo a b c = (bar a) b c >> >> The typechecker will emit an HsApp with head spanning over `bar a) b` and tail spanning over `c`. >> Notice that the opening parenthesis is not included. >> >> On the other hand, the renamer will generate the expected SrcSpans that always include both parenthesis, or neither. This becomes an issue when you want to associate RenamedSource with its corresponding TypecheckedSource, as the SrcSpans no longer match and overlap partially. >> >> This occurs due to this line in TcExpr.hs >> >> tcApp m_herald (L _ (HsPar _ fun)) args res_ty >> = tcApp m_herald fun args res_ty >> >> I have a work in progress fix here: https://github.com/wz1000/ghc/commit/3b6db5a35dc8677a7187e349a85ffd51f452452a >> >> I have also created a ticket on trac: https://ghc.haskell.org/trac/ghc/ticket/15242#ticket >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From qdunkan at gmail.com Tue Jun 12 05:24:44 2018 From: qdunkan at gmail.com (Evan Laforge) Date: Mon, 11 Jun 2018 22:24:44 -0700 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: References: <87fu1yhafc.fsf@smart-cactus.org> <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> <87a7s6h6rn.fsf@smart-cactus.org> Message-ID: I agree that tags are necessary, and in fact I recently ported my internal debug library to cabal because now that I'm doing more stuff with cabal I find life too difficult without my accustomed debug library. For reference, it is this, but nothing fancy: https://github.com/elaforge/el-debug/blob/master/src/EL/Debug.hs So that's a vote for those things being worth it. 
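(For concreteness, the core of such a tagged helper is a one-liner; a rough sketch with a made-up name and an arbitrary output format, which is exactly the bikeshed at issue:

import Debug.Trace (trace)

-- Renders as "tag: value", i.e. the first of Andrew's three options.
traceTagged :: Show a => String -> a -> a
traceTagged tag x = trace (tag ++ ": " ++ show x) x

so traceTagged "meganum" (42 :: Int) returns 42 and prints "meganum: 42".)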
But there are plenty of little things in there which mean I would still use my own library, not Debug.Trace, even if it did have a few extra functions. Such as pretty printing, forcing to avoid interleaved output, timeouts to avoid hanging, function return value tracing, and probably many more. And even after that, since there's no global agreed-upon Pretty class, I had to remove the Pretty variants, which means it loses a lot. And it may seem petty, but since I type them all the time, I'd want to use my short names, rather than the increasingly long and clunky ones in Debug.Trace. So rather than adding little bits to Debug.Trace to nudge it towards usefulness, maybe it would be better to make your own ideal debug trace library, and just use that wherever you go. On Mon, Jun 11, 2018 at 5:18 PM, Yuji Yamamoto wrote: > Can anyone give me more feedback? > I'm interested especially in my last reply. > > > 2018年6月8日(金) 9:19 Yuji Yamamoto : >> >> Almost any formatting will do. At least I never care. >> I assume those APIs would be used for very ad-hoc use (like the other APIs >> in Debug.Trace). >> And debug codes put by such cases are deleted or disabled by NoTrace >> package in production. >> I want handy default functions available without batteries. >> Detailed formatting for debug messages should be configured by >> third-parties' logging libraries. >> >> 2018年6月8日(金) 4:29 Andrew Martin : >>> >>> I am -1 on this. Such a function requires making a decision about >>> formatting. What does a user expect from >>> >>> > taggedTraceShowId "meganum" (42 :: Int) >>> >>> Any of these are reasonable: >>> >>> meganum: 42 >>> meganum [42] >>> [meganum]: 42 >>> >>> In different applications I've worked on, I've wanted different flavors >>> of something like this. Since there's no obvious choice, I don't think base >>> is a good place for such a function. >>> >>> On Thu, Jun 7, 2018 at 12:59 PM, Ben Gamari wrote: >>>> >>>> Edward Kmett writes: >>>> >>>> > What different users would do with such a prefix, how to display it, >>>> > etc. varies just enough that i’m somewhat hesitant to grow the API. >>>> > I’m a very weak -1. But I’d happily let anybody else on the committee >>>> > override that if they had a strong preference. >>>> > >>>> Right, as I mentioned I'm not sure about the tagging idea. However, I >>>> have found things of the form `(a -> String) -> a -> a` handy in the >>>> past. >>>> Then again, it's pretty trivial to open-code this when needed. >>>> >>>> Cheers, >>>> >>>> - Ben >>>> >>>> -- >>>> You received this message because you are subscribed to the Google >>>> Groups "haskell-core-libraries" group. >>>> To unsubscribe from this group and stop receiving emails from it, send >>>> an email to haskell-core-libraries+unsubscribe at googlegroups.com. >>>> For more options, visit https://groups.google.com/d/optout. 
>>> >>> >>> >>> >>> -- >>> -Andrew Thaddeus Martin >> >> >> >> -- >> 山本悠滋 >> twitter: @igrep >> GitHub: https://github.com/igrep >> GitLab: https://gitlab.com/igrep >> Facebook: http://www.facebook.com/igrep >> Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep > > > > -- > 山本悠滋 > twitter: @igrep > GitHub: https://github.com/igrep > GitLab: https://gitlab.com/igrep > Facebook: http://www.facebook.com/igrep > Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From lonetiger at gmail.com Tue Jun 12 13:07:18 2018 From: lonetiger at gmail.com (Phyx) Date: Tue, 12 Jun 2018 14:07:18 +0100 Subject: Why do we prevent static archives from being loaded when DYNAMIC_GHC_PROGRAMS=YES? In-Reply-To: References: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de> Message-ID: You could work around the dlopen issue as long as the static library is compiled with -fPIC by using --whole-archive (assuming you permit dangling references which will need to be resolved later) and making a shared library out of the static code. But you'd have to create one shared library per static library and preserve the order so you don't end up with symbol collisions. And you'd likely not want to do it this on every relink. But i think the - fPIC is a much greater hurdle. Very few of the static libraries a user may want to use would have this likely. I think it'll end up being quite a messy situation.. On Thu, Jun 7, 2018, 22:01 Simon Marlow wrote: > There's a technical restriction. The static code would be compiled with > the small memory model, so it would have 32-bit relocations for external > references, assuming that those references would resolve to something in > the low 2GB of the address space. But we would be trying to link it against > shared libraries which could be loaded anywhere in the address space. > > If the static code was compiled with -fPIC then it might be possible, but > there's also the restriction that we wouldn't be able to dlopen() a shared > library that depends on the statically linked code, because the system > linker can't see the symbols that the RTS linker has loaded. GHC doesn't > currently know about this restriction, so it would probably go ahead and > try, and things would break. > > Cheers > Simon > > > On 29 May 2018 at 04:05, Moritz Angermann wrote: > >> Dear friends, >> >> when we build GHC with DYNAMIC_GHC_PROGRAMS=YES, we essentially prevent >> ghc/ghci >> from using archives (.a). Is there a technical reason behind this? The >> only >> only reasoning so far I've came across was: insist on using >> dynamic/shared objects, >> because the user said so when building GHC. >> >> In that case, we don't however prevent GHC from building archive (static) >> only >> libraries. And as a consequence when we later try to build another >> archive of >> a different library, that depends via TH on the former library, GHC will >> bail >> and complain that we don't have the relevant dynamic/shared object. Of >> course we >> don't we explicitly didn't build it. But the linker code we have in GHC >> is >> perfectly capable of loading archives. So why don't we want to fall back >> to >> archives? >> >> Similarly, as @deech asked on twitter[1], why we prevent GHCi from >> loading static >> libraries? >> >> I'd like to understand the technical reason/rational for this behavior. >> Can >> someone help me out here? 
If there is no fundamental reason for this >> behavior, >> I'd like to go ahead and try to lift it. >> >> Thank you! >> >> Cheers, >> Moritz >> >> --- >> [1]: https://twitter.com/deech/status/1001182709555908608 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Wed Jun 13 01:30:14 2018 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 13 Jun 2018 09:30:14 +0800 Subject: Why do we prevent static archives from being loaded when DYNAMIC_GHC_PROGRAMS=YES? In-Reply-To: References: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de> Message-ID: Thank you both for the replies. My issue with the current situation is that I can navigate myself into a situation where I’m stuck. By asking ghc to build static libraries, it will later fall over when it tries to load those. Guess what I really want is to turn the DYNAMIC_GHC_PROGRAMS into a runtime flag. That might help with getting out of the situation without resorting to building two ghcs. Cheers, Moritz Sent from my iPhone > On 12 Jun 2018, at 9:07 PM, Phyx wrote: > > You could work around the dlopen issue as long as the static library is compiled with -fPIC by using --whole-archive (assuming you permit dangling references which will need to be resolved later) and making a shared library out of the static code. But you'd have to create one shared library per static library and preserve the order so you don't end up with symbol collisions. > > And you'd likely not want to do it this on every relink. But i think the - fPIC is a much greater hurdle. Very few of the static libraries a user may want to use would have this likely. > > I think it'll end up being quite a messy situation.. > > >> On Thu, Jun 7, 2018, 22:01 Simon Marlow wrote: >> There's a technical restriction. The static code would be compiled with the small memory model, so it would have 32-bit relocations for external references, assuming that those references would resolve to something in the low 2GB of the address space. But we would be trying to link it against shared libraries which could be loaded anywhere in the address space. >> >> If the static code was compiled with -fPIC then it might be possible, but there's also the restriction that we wouldn't be able to dlopen() a shared library that depends on the statically linked code, because the system linker can't see the symbols that the RTS linker has loaded. GHC doesn't currently know about this restriction, so it would probably go ahead and try, and things would break. >> >> Cheers >> Simon >> >> >>> On 29 May 2018 at 04:05, Moritz Angermann wrote: >>> Dear friends, >>> >>> when we build GHC with DYNAMIC_GHC_PROGRAMS=YES, we essentially prevent ghc/ghci >>> from using archives (.a). Is there a technical reason behind this? The only >>> only reasoning so far I've came across was: insist on using dynamic/shared objects, >>> because the user said so when building GHC. >>> >>> In that case, we don't however prevent GHC from building archive (static) only >>> libraries. 
And as a consequence when we later try to build another archive of >>> a different library, that depends via TH on the former library, GHC will bail >>> and complain that we don't have the relevant dynamic/shared object. Of course we >>> don't we explicitly didn't build it. But the linker code we have in GHC is >>> perfectly capable of loading archives. So why don't we want to fall back to >>> archives? >>> >>> Similarly, as @deech asked on twitter[1], why we prevent GHCi from loading static >>> libraries? >>> >>> I'd like to understand the technical reason/rational for this behavior. Can >>> someone help me out here? If there is no fundamental reason for this behavior, >>> I'd like to go ahead and try to lift it. >>> >>> Thank you! >>> >>> Cheers, >>> Moritz >>> >>> --- >>> [1]: https://twitter.com/deech/status/1001182709555908608 >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From whosekiteneverfly at gmail.com Wed Jun 13 05:05:09 2018 From: whosekiteneverfly at gmail.com (Yuji Yamamoto) Date: Wed, 13 Jun 2018 14:05:09 +0900 Subject: [core libraries] Re: Add taggedTrace to Debug.Trace In-Reply-To: References: <87fu1yhafc.fsf@smart-cactus.org> <3EB9E25F-EC67-40C1-909D-69874F711B40@gmail.com> <87a7s6h6rn.fsf@smart-cactus.org> Message-ID: > But there are > plenty of little things in there which mean I would still use my own > library, not Debug.Trace, even if it did have a few extra functions. Oh, I see. That's actually possible. Okay, I'll withdraw my suggestion this time. Thanks! 2018年6月12日(火) 14:25 Evan Laforge : > I agree that tags are necessary, and in fact I recently ported my > internal debug library to cabal because now that I'm doing more stuff > with cabal I find life too difficult without my accustomed debug > library. For reference, it is this, but nothing fancy: > https://github.com/elaforge/el-debug/blob/master/src/EL/Debug.hs > > So that's a vote for those things being worth it. But there are > plenty of little things in there which mean I would still use my own > library, not Debug.Trace, even if it did have a few extra functions. > Such as pretty printing, forcing to avoid interleaved output, timeouts > to avoid hanging, function return value tracing, and probably many > more. And even after that, since there's no global agreed-upon Pretty > class, I had to remove the Pretty variants, which means it loses a > lot. And it may seem petty, but since I type them all the time, I'd > want to use my short names, rather than the increasingly long and > clunky ones in Debug.Trace. > > So rather than adding little bits to Debug.Trace to nudge it towards > usefulness, maybe it would be better to make your own ideal debug > trace library, and just use that wherever you go. > > > On Mon, Jun 11, 2018 at 5:18 PM, Yuji Yamamoto > wrote: > > Can anyone give me more feedback? > > I'm interested especially in my last reply. > > > > > > 2018年6月8日(金) 9:19 Yuji Yamamoto : > >> > >> Almost any formatting will do. At least I never care. 
> >> I assume those APIs would be used for very ad-hoc use (like the other > APIs > >> in Debug.Trace). > >> And debug codes put by such cases are deleted or disabled by NoTrace > >> package in production. > >> I want handy default functions available without batteries. > >> Detailed formatting for debug messages should be configured by > >> third-parties' logging libraries. > >> > >> 2018年6月8日(金) 4:29 Andrew Martin : > >>> > >>> I am -1 on this. Such a function requires making a decision about > >>> formatting. What does a user expect from > >>> > >>> > taggedTraceShowId "meganum" (42 :: Int) > >>> > >>> Any of these are reasonable: > >>> > >>> meganum: 42 > >>> meganum [42] > >>> [meganum]: 42 > >>> > >>> In different applications I've worked on, I've wanted different flavors > >>> of something like this. Since there's no obvious choice, I don't think > base > >>> is a good place for such a function. > >>> > >>> On Thu, Jun 7, 2018 at 12:59 PM, Ben Gamari > wrote: > >>>> > >>>> Edward Kmett writes: > >>>> > >>>> > What different users would do with such a prefix, how to display it, > >>>> > etc. varies just enough that i’m somewhat hesitant to grow the API. > >>>> > I’m a very weak -1. But I’d happily let anybody else on the > committee > >>>> > override that if they had a strong preference. > >>>> > > >>>> Right, as I mentioned I'm not sure about the tagging idea. However, I > >>>> have found things of the form `(a -> String) -> a -> a` handy in the > >>>> past. > >>>> Then again, it's pretty trivial to open-code this when needed. > >>>> > >>>> Cheers, > >>>> > >>>> - Ben > >>>> > >>>> -- > >>>> You received this message because you are subscribed to the Google > >>>> Groups "haskell-core-libraries" group. > >>>> To unsubscribe from this group and stop receiving emails from it, send > >>>> an email to haskell-core-libraries+unsubscribe at googlegroups.com. > >>>> For more options, visit https://groups.google.com/d/optout. > >>> > >>> > >>> > >>> > >>> -- > >>> -Andrew Thaddeus Martin > >> > >> > >> > >> -- > >> 山本悠滋 > >> twitter: @igrep > >> GitHub: https://github.com/igrep > >> GitLab: https://gitlab.com/igrep > >> Facebook: http://www.facebook.com/igrep > >> Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep > > > > > > > > -- > > 山本悠滋 > > twitter: @igrep > > GitHub: https://github.com/igrep > > GitLab: https://gitlab.com/igrep > > Facebook: http://www.facebook.com/igrep > > Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -- 山本悠滋 twitter: @igrep GitHub: https://github.com/igrep GitLab: https://gitlab.com/igrep Facebook: http://www.facebook.com/igrep Google+: https://plus.google.com/u/0/+YujiYamamoto_igrep -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Wed Jun 13 13:42:23 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 13 Jun 2018 14:42:23 +0100 Subject: Why do we prevent static archives from being loaded when DYNAMIC_GHC_PROGRAMS=YES? In-Reply-To: References: <5434F789-04B7-4B62-8B81-9609741D0DEA@lichtzwerge.de> Message-ID: I'm not sure how you could make DYNAMIC_GHC_PROGRAMS into a runtime flag, since it controls whether GHC itself (and the other tools) are built as dynamic executables. If GHC is a dynamic executable, then it can only load -fPIC code to link at runtime. 
Cheers Simon On 13 June 2018 at 02:30, Moritz Angermann wrote: > Thank you both for the replies. > > My issue with the current situation is that I can navigate myself into a > situation where I’m stuck. By asking ghc to build static libraries, it will > later fall over when it tries to load those. > > Guess what I really want is to turn the DYNAMIC_GHC_PROGRAMS into a > runtime flag. > > That might help with getting out of the situation without resorting to > building two ghcs. > > Cheers, > Moritz > > Sent from my iPhone > > On 12 Jun 2018, at 9:07 PM, Phyx wrote: > > You could work around the dlopen issue as long as the static library is > compiled with -fPIC by using --whole-archive (assuming you permit dangling > references which will need to be resolved later) and making a shared > library out of the static code. But you'd have to create one shared library > per static library and preserve the order so you don't end up with symbol > collisions. > > And you'd likely not want to do it this on every relink. But i think the - > fPIC is a much greater hurdle. Very few of the static libraries a user may > want to use would have this likely. > > I think it'll end up being quite a messy situation.. > > > On Thu, Jun 7, 2018, 22:01 Simon Marlow wrote: > >> There's a technical restriction. The static code would be compiled with >> the small memory model, so it would have 32-bit relocations for external >> references, assuming that those references would resolve to something in >> the low 2GB of the address space. But we would be trying to link it against >> shared libraries which could be loaded anywhere in the address space. >> >> If the static code was compiled with -fPIC then it might be possible, but >> there's also the restriction that we wouldn't be able to dlopen() a shared >> library that depends on the statically linked code, because the system >> linker can't see the symbols that the RTS linker has loaded. GHC doesn't >> currently know about this restriction, so it would probably go ahead and >> try, and things would break. >> >> Cheers >> Simon >> >> >> On 29 May 2018 at 04:05, Moritz Angermann wrote: >> >>> Dear friends, >>> >>> when we build GHC with DYNAMIC_GHC_PROGRAMS=YES, we essentially prevent >>> ghc/ghci >>> from using archives (.a). Is there a technical reason behind this? The >>> only >>> only reasoning so far I've came across was: insist on using >>> dynamic/shared objects, >>> because the user said so when building GHC. >>> >>> In that case, we don't however prevent GHC from building archive >>> (static) only >>> libraries. And as a consequence when we later try to build another >>> archive of >>> a different library, that depends via TH on the former library, GHC will >>> bail >>> and complain that we don't have the relevant dynamic/shared object. Of >>> course we >>> don't we explicitly didn't build it. But the linker code we have in GHC >>> is >>> perfectly capable of loading archives. So why don't we want to fall >>> back to >>> archives? >>> >>> Similarly, as @deech asked on twitter[1], why we prevent GHCi from >>> loading static >>> libraries? >>> >>> I'd like to understand the technical reason/rational for this behavior. >>> Can >>> someone help me out here? If there is no fundamental reason for this >>> behavior, >>> I'd like to go ahead and try to lift it. >>> >>> Thank you! 
>>> >>> Cheers, >>> Moritz >>> >>> --- >>> [1]: https://twitter.com/deech/status/1001182709555908608 >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jun 13 16:09:10 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 13 Jun 2018 16:09:10 +0000 Subject: Strace Message-ID: Tamar I'm getting megabytes of output from 'sh validate' on windows. It looks like this 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 291 152036 [main] sh 2880 faccessat: returning 0 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: normal write, 7 bytes ispipe() 1 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: normal read, 7 bytes ispipe() 1 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary mode 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, mask_bits 0 but with hundreds of thousands of lines. (I have not counted) I believe that it may be the result of this line, earlier in the log cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test spaces/./plugins/plugins07.run" && strace $MAKE -s --no-print-directory -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# Note the strace. That in turn was added in your commit commit 60fb2b2160aa16194b74262f4df8fad5af171b0f Author: Tamar Christina Date: Mon May 28 19:34:11 2018 +0100 Clean up Windows testsuite failures Summary: Another round and attempt at getting these down to 0. Could you perhaps have made a mistake here? Currently validate is unusable. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Wed Jun 13 16:18:43 2018 From: lonetiger at gmail.com (Phyx) Date: Wed, 13 Jun 2018 17:18:43 +0100 Subject: Strace In-Reply-To: References: Message-ID: Hi Simon, The strace is only supposed to run when the normal test pre_cmd fails. If it's running that often it means your tests are all failing during pre_cmd with a framework failure https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/testlib.py But maybe I shouldn't turn this on my default. I'll pramaterize it when I get home. Tamar. On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones wrote: > Tamar > > I’m getting *megabytes* of output from ‘sh validate’ on windows. 
It > looks like this > > 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 > > 291 152036 [main] sh 2880 faccessat: returning 0 > > 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > wfres 0, wores 1, bytes 7 > > 179457 1608947 [main] make 11484 > fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 > > 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > normal write, 7 bytes ispipe() 1 > > 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: > normal read, 7 bytes ispipe() 1 > > 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) > > 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary > mode > > 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) > > 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking > > 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, > mask_bits 0 > > but with hundreds of thousands of lines. (I have not counted) > > I believe that it may be the result of this line, earlier in the log > > cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test > spaces/./plugins/plugins07.run" && *strace* $MAKE -s --no-print-directory > -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# > > Note the strace. > > That in turn was added in your commit > > commit 60fb2b2160aa16194b74262f4df8fad5af171b0f > > Author: Tamar Christina > > Date: Mon May 28 19:34:11 2018 +0100 > > > > Clean up Windows testsuite failures > > > > Summary: > > Another round and attempt at getting these down to 0. > > Could you perhaps have made a mistake here? Currently validate is > unusable. > > Thanks! > > Simon > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jun 13 16:24:11 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 13 Jun 2018 16:24:11 +0000 Subject: Strace In-Reply-To: References: Message-ID: OK – so maybe the root cause is a framework failure – and indeed for the last few weeks I’ve seen Framework failures: plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) I have just learned to live with these failures, because I knew you were working on making things better. But it sounds as if they are still taking place. So: * Yes, please make it not happen by default * If you don’t get these framework failures, can we work together to resolve them? Thanks Simon From: Phyx Sent: 13 June 2018 17:19 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Strace Hi Simon, The strace is only supposed to run when the normal test pre_cmd fails. If it's running that often it means your tests are all failing during pre_cmd with a framework failure https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/testlib.py But maybe I shouldn't turn this on my default. I'll pramaterize it when I get home. Tamar. On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones > wrote: Tamar I’m getting megabytes of output from ‘sh validate’ on windows. 
It looks like this 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 291 152036 [main] sh 2880 faccessat: returning 0 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: normal write, 7 bytes ispipe() 1 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: normal read, 7 bytes ispipe() 1 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary mode 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, mask_bits 0 but with hundreds of thousands of lines. (I have not counted) I believe that it may be the result of this line, earlier in the log cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test spaces/./plugins/plugins07.run" && strace $MAKE -s --no-print-directory -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# Note the strace. That in turn was added in your commit commit 60fb2b2160aa16194b74262f4df8fad5af171b0f Author: Tamar Christina > Date: Mon May 28 19:34:11 2018 +0100 Clean up Windows testsuite failures Summary: Another round and attempt at getting these down to 0. Could you perhaps have made a mistake here? Currently validate is unusable. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Wed Jun 13 19:47:26 2018 From: lonetiger at gmail.com (Phyx) Date: Wed, 13 Jun 2018 20:47:26 +0100 Subject: Strace In-Reply-To: References: Message-ID: Hi Simon, On Wed, Jun 13, 2018 at 5:24 PM, Simon Peyton Jones wrote: > OK – so maybe the root cause is a framework failure – and indeed for the > last few weeks I’ve seen > > Framework failures: > > plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) > > plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) > > plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) > > > > I have just learned to live with these failures, because I knew you were > working on making things better. But it sounds as if they are still taking > place. > The commit I made should have reduced the amount of failing tests to 0. framework failures are always quite unusual. > > So: > > - Yes, please make it not happen by default > > I've removed the code, if you update it should be gone. It was there and on by default because I was trying to debug failures on Harbormaster, I realized a switch isn't very useful as I won't be able to toggle it for Harbormaster anyway. > > - > - If you don’t get these framework failures, can we work together to > resolve them? > > These don't happen for me nor on Harbormaster, try picking a test, e.g T10420 run only that test to make sure it's not a threading issue: make TEST=T10420 test -C testsuite/tests If it still gives a framework error then do at the top level make VERBOSE=3 TEST=T10420 test -C testsuite/tests once it runs, the output should contain the command it ran as a pre_cmd, and the stdout and stderr from the pre_cmd output. Could you then send the error? if it doesn't show any of this, try make CLEANP=0 VERBOSE=3 TEST= T10420 test -C testsuite/tests --trace and copy and paste the pre_cmd command, which should just replay the action it did. 
Cheers, Tamar > > Thanks > > > > Simon > > > > *From:* Phyx > *Sent:* 13 June 2018 17:19 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Strace > > > > Hi Simon, > > > > The strace is only supposed to run when the normal test pre_cmd fails. > > If it's running that often it means your tests are all failing during > pre_cmd with a framework failure > > https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9 > db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/ > testlib.py > > > > But maybe I shouldn't turn this on my default. I'll pramaterize it when I > get home. > > > > Tamar. > > > > On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones > wrote: > > Tamar > > I’m getting *megabytes* of output from ‘sh validate’ on windows. It > looks like this > > 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 > > 291 152036 [main] sh 2880 faccessat: returning 0 > > 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > wfres 0, wores 1, bytes 7 > > 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: > wfres 0, wores 1, bytes 7 > > 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > normal write, 7 bytes ispipe() 1 > > 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: > normal read, 7 bytes ispipe() 1 > > 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) > > 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary > mode > > 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) > > 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking > > 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, > mask_bits 0 > > but with hundreds of thousands of lines. (I have not counted) > > I believe that it may be the result of this line, earlier in the log > > cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test > spaces/./plugins/plugins07.run" && *strace* $MAKE -s --no-print-directory > -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# > > Note the strace. > > That in turn was added in your commit > > commit 60fb2b2160aa16194b74262f4df8fad5af171b0f > > Author: Tamar Christina > > Date: Mon May 28 19:34:11 2018 +0100 > > > > Clean up Windows testsuite failures > > > > Summary: > > Another round and attempt at getting these down to 0. > > Could you perhaps have made a mistake here? Currently validate is > unusable. > > Thanks! > > Simon > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Fri Jun 15 08:34:31 2018 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 15 Jun 2018 10:34:31 +0200 Subject: [commit: ghc] master: Embrace -XTypeInType, add -XStarIsType (d650729) In-Reply-To: <20180614190732.057773ABA3@ghc.haskell.org> References: <20180614190732.057773ABA3@ghc.haskell.org> Message-ID: My `happy` chokes on the unicode sequence you added: (if isUnicode $1 then "★" else "*") Casn this be done with unicode escapes somehow? 
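(If it helps, one way to do that, assuming the action block is ordinary Haskell code, is to spell the character with a numeric escape instead of a literal:

(if isUnicode $1 then "\x2605" else "*")

where "\x2605" (equivalently "\9733") denotes U+2605 BLACK STAR, the same character as the literal above.)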
Cheers, Gabor PS: Happy Version 1.19.9 Copyright (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2005 Simon Marlow On 6/14/18, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : > http://ghc.haskell.org/trac/ghc/changeset/d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60/ghc > >>--------------------------------------------------------------- > > commit d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > Author: Vladislav Zavialov > Date: Thu Jun 14 15:02:36 2018 -0400 > > Embrace -XTypeInType, add -XStarIsType > > Summary: > Implement the "Embrace Type :: Type" GHC proposal, > .../ghc-proposals/blob/master/proposals/0020-no-type-in-type.rst > > GHC 8.0 included a major change to GHC's type system: the Type :: Type > axiom. Though casual users were protected from this by hiding its > features behind the -XTypeInType extension, all programs written in GHC > 8+ have the axiom behind the scenes. In order to preserve backward > compatibility, various legacy features were left unchanged. For example, > with -XDataKinds but not -XTypeInType, GADTs could not be used in types. > Now these restrictions are lifted and -XTypeInType becomes a redundant > flag that will be eventually deprecated. > > * Incorporate the features currently in -XTypeInType into the > -XPolyKinds and -XDataKinds extensions. > * Introduce a new extension -XStarIsType to control how to parse * in > code and whether to print it in error messages. > > Test Plan: Validate > > Reviewers: goldfire, hvr, bgamari, alanz, simonpj > > Reviewed By: goldfire, simonpj > > Subscribers: rwbarton, thomie, mpickering, carter > > GHC Trac Issues: #15195 > > Differential Revision: https://phabricator.haskell.org/D4748 > > >>--------------------------------------------------------------- > > d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > .gitignore | 1 + > .gitmodules | 4 +- > compiler/basicTypes/DataCon.hs | 22 +- > compiler/basicTypes/Name.hs | 21 +- > compiler/basicTypes/RdrName.hs | 96 +++- > compiler/basicTypes/SrcLoc.hs | 5 +- > compiler/deSugar/DsMeta.hs | 7 +- > compiler/hsSyn/Convert.hs | 37 +- > compiler/hsSyn/HsDecls.hs | 9 +- > compiler/hsSyn/HsExtension.hs | 16 +- > compiler/hsSyn/HsInstances.hs | 5 - > compiler/hsSyn/HsTypes.hs | 117 +---- > compiler/iface/IfaceType.hs | 8 +- > compiler/main/DynFlags.hs | 31 ++ > compiler/main/DynFlags.hs-boot | 1 + > compiler/main/HscTypes.hs | 3 +- > compiler/parser/Lexer.x | 104 +++-- > compiler/parser/Parser.y | 88 ++-- > compiler/parser/RdrHsSyn.hs | 190 ++++---- > compiler/prelude/PrelNames.hs | 7 +- > compiler/prelude/PrelNames.hs-boot | 3 +- > compiler/prelude/TysWiredIn.hs | 24 +- > compiler/rename/RnEnv.hs | 43 +- > compiler/rename/RnSource.hs | 4 +- > compiler/rename/RnTypes.hs | 186 ++------ > compiler/typecheck/TcDeriv.hs | 14 +- > compiler/typecheck/TcHsType.hs | 82 ++-- > compiler/typecheck/TcInstDcls.hs | 4 +- > compiler/typecheck/TcMType.hs | 2 +- > compiler/typecheck/TcPatSyn.hs | 2 +- > compiler/typecheck/TcRnTypes.hs | 6 - > compiler/typecheck/TcSplice.hs | 4 +- > compiler/typecheck/TcTyClsDecls.hs | 43 +- > compiler/types/Kind.hs | 33 +- > compiler/types/TyCoRep.hs | 1 + > compiler/types/TyCon.hs | 8 +- > compiler/types/Type.hs | 11 +- > compiler/types/Unify.hs | 2 +- > compiler/utils/Outputable.hs | 11 +- > docs/users_guide/8.6.1-notes.rst | 30 +- > docs/users_guide/glasgow_exts.rst | 482 > +++++++++------------ > libraries/base/Data/Data.hs | 4 +- > libraries/base/Data/Kind.hs | 2 +- > libraries/base/Data/Proxy.hs | 2 +- > 
libraries/base/Data/Type/Equality.hs | 4 +- > libraries/base/Data/Typeable.hs | 26 +- > libraries/base/Data/Typeable/Internal.hs | 1 - > libraries/base/GHC/Base.hs | 3 +- > libraries/base/GHC/Err.hs | 2 +- > libraries/base/GHC/Generics.hs | 50 +-- > libraries/base/Type/Reflection/Unsafe.hs | 2 +- > libraries/base/tests/CatEntail.hs | 4 +- > .../ghc-boot-th/GHC/LanguageExtensions/Type.hs | 1 + > libraries/ghc-prim/GHC/Magic.hs | 3 +- > libraries/ghc-prim/GHC/Types.hs | 8 +- > testsuite/tests/codeGen/should_fail/T13233.hs | 2 +- > testsuite/tests/dependent/ghci/T11549.script | 2 +- > testsuite/tests/dependent/ghci/T14238.stdout | 2 +- > testsuite/tests/dependent/should_compile/Dep1.hs | 2 +- > testsuite/tests/dependent/should_compile/Dep2.hs | 2 +- > testsuite/tests/dependent/should_compile/Dep3.hs | 2 +- > .../tests/dependent/should_compile/DkNameRes.hs | 9 + > .../dependent/should_compile/InferDependency.hs | 6 - > .../dependent/should_compile/KindEqualities.hs | 2 +- > .../dependent/should_compile/KindEqualities2.hs | 3 +- > .../tests/dependent/should_compile/KindLevels.hs | 2 +- > .../tests/dependent/should_compile/RAE_T32b.hs | 24 +- > testsuite/tests/dependent/should_compile/Rae31.hs | 23 +- > .../tests/dependent/should_compile/RaeBlogPost.hs | 27 +- > .../tests/dependent/should_compile/RaeJobTalk.hs | 2 +- > testsuite/tests/dependent/should_compile/T11405.hs | 2 +- > testsuite/tests/dependent/should_compile/T11635.hs | 2 +- > testsuite/tests/dependent/should_compile/T11711.hs | 1 - > testsuite/tests/dependent/should_compile/T11719.hs | 6 +- > testsuite/tests/dependent/should_compile/T11966.hs | 1 - > testsuite/tests/dependent/should_compile/T12176.hs | 2 +- > testsuite/tests/dependent/should_compile/T12442.hs | 4 +- > testsuite/tests/dependent/should_compile/T12742.hs | 2 +- > testsuite/tests/dependent/should_compile/T13910.hs | 9 +- > testsuite/tests/dependent/should_compile/T13938.hs | 3 +- > .../tests/dependent/should_compile/T13938a.hs | 3 +- > testsuite/tests/dependent/should_compile/T14038.hs | 3 +- > .../tests/dependent/should_compile/T14066a.hs | 2 +- > testsuite/tests/dependent/should_compile/T14556.hs | 3 +- > testsuite/tests/dependent/should_compile/T14720.hs | 3 +- > testsuite/tests/dependent/should_compile/T14749.hs | 2 +- > testsuite/tests/dependent/should_compile/T14991.hs | 3 +- > testsuite/tests/dependent/should_compile/T9632.hs | 2 +- > .../tests/dependent/should_compile/TypeLevelVec.hs | 2 +- > testsuite/tests/dependent/should_compile/all.T | 1 + > .../dependent/should_compile/dynamic-paper.hs | 27 +- > .../tests/dependent/should_compile/mkGADTVars.hs | 2 +- > .../tests/dependent/should_fail/BadTelescope.hs | 2 +- > .../tests/dependent/should_fail/BadTelescope2.hs | 2 +- > .../tests/dependent/should_fail/BadTelescope3.hs | 2 +- > .../tests/dependent/should_fail/BadTelescope4.hs | 2 +- > testsuite/tests/dependent/should_fail/DepFail1.hs | 2 +- > .../tests/dependent/should_fail/InferDependency.hs | 2 +- > .../tests/dependent/should_fail/KindLevelsB.hs | 9 - > .../tests/dependent/should_fail/KindLevelsB.stderr | 5 - > .../tests/dependent/should_fail/PromotedClass.hs | 2 +- > testsuite/tests/dependent/should_fail/RAE_T32a.hs | 28 +- > .../tests/dependent/should_fail/RAE_T32a.stderr | 6 +- > .../tests/dependent/should_fail/RenamingStar.hs | 2 +- > .../dependent/should_fail/RenamingStar.stderr | 10 +- > testsuite/tests/dependent/should_fail/SelfDep.hs | 2 + > .../tests/dependent/should_fail/SelfDep.stderr | 8 +- > testsuite/tests/dependent/should_fail/T11407.hs | 2 +- > 
testsuite/tests/dependent/should_fail/T11473.hs | 2 +- > testsuite/tests/dependent/should_fail/T12081.hs | 2 +- > testsuite/tests/dependent/should_fail/T12174.hs | 2 +- > testsuite/tests/dependent/should_fail/T13135.hs | 4 +- > testsuite/tests/dependent/should_fail/T13601.hs | 2 +- > testsuite/tests/dependent/should_fail/T13780a.hs | 2 +- > testsuite/tests/dependent/should_fail/T13780b.hs | 3 +- > testsuite/tests/dependent/should_fail/T13780c.hs | 2 +- > .../tests/dependent/should_fail/T13780c.stderr | 6 +- > testsuite/tests/dependent/should_fail/T14066.hs | 4 +- > testsuite/tests/dependent/should_fail/T14066c.hs | 2 +- > testsuite/tests/dependent/should_fail/T14066d.hs | 2 +- > testsuite/tests/dependent/should_fail/T14066e.hs | 2 +- > testsuite/tests/dependent/should_fail/T14066f.hs | 2 +- > testsuite/tests/dependent/should_fail/T14066g.hs | 2 +- > testsuite/tests/dependent/should_fail/T14066h.hs | 2 +- > testsuite/tests/dependent/should_fail/T15245.hs | 10 + > .../tests/dependent/should_fail/T15245.stderr | 7 + > .../tests/dependent/should_fail/TypeSkolEscape.hs | 2 +- > testsuite/tests/dependent/should_fail/all.T | 2 +- > testsuite/tests/dependent/should_run/T11964a.hs | 2 +- > testsuite/tests/deriving/should_compile/T11416.hs | 3 +- > testsuite/tests/deriving/should_compile/T11732a.hs | 2 +- > testsuite/tests/deriving/should_compile/T11732b.hs | 2 +- > testsuite/tests/deriving/should_compile/T11732c.hs | 2 +- > testsuite/tests/deriving/should_compile/T14331.hs | 2 +- > testsuite/tests/deriving/should_compile/T14579.hs | 3 +- > testsuite/tests/deriving/should_compile/T14932.hs | 4 +- > testsuite/tests/deriving/should_fail/T12512.hs | 2 +- > testsuite/tests/deriving/should_fail/T14728a.hs | 2 +- > testsuite/tests/deriving/should_fail/T14728b.hs | 2 +- > testsuite/tests/deriving/should_fail/T15073.hs | 2 +- > testsuite/tests/determinism/determ004/determ004.hs | 2 +- > testsuite/tests/determinism/determ014/A.hs | 6 +- > testsuite/tests/driver/T4437.hs | 1 + > testsuite/tests/gadt/T7293.hs | 6 +- > testsuite/tests/gadt/T7293.stderr | 4 +- > testsuite/tests/gadt/T7294.hs | 6 +- > testsuite/tests/gadt/T7294.stderr | 4 +- > testsuite/tests/generics/GEq/GEq1.hs | 5 +- > testsuite/tests/ghci/scripts/T10321.hs | 3 +- > testsuite/tests/ghci/scripts/T11252.script | 2 +- > testsuite/tests/ghci/scripts/T11376.script | 2 +- > testsuite/tests/ghci/scripts/T12550.script | 2 +- > testsuite/tests/ghci/scripts/T13407.script | 4 +- > testsuite/tests/ghci/scripts/T13963.script | 2 +- > testsuite/tests/ghci/scripts/T13988.hs | 2 +- > testsuite/tests/ghci/scripts/T7873.script | 2 +- > testsuite/tests/ghci/scripts/T7939.hs | 4 +- > testsuite/tests/ghci/scripts/T8357.hs | 5 +- > testsuite/tests/indexed-types/should_compile/HO.hs | 5 +- > .../tests/indexed-types/should_compile/Numerals.hs | 7 +- > .../tests/indexed-types/should_compile/T12369.hs | 4 +- > .../tests/indexed-types/should_compile/T12522b.hs | 8 +- > .../tests/indexed-types/should_compile/T12938.hs | 2 +- > .../tests/indexed-types/should_compile/T13244.hs | 2 +- > .../tests/indexed-types/should_compile/T13398b.hs | 2 +- > .../tests/indexed-types/should_compile/T14162.hs | 3 +- > .../tests/indexed-types/should_compile/T14554.hs | 5 +- > .../tests/indexed-types/should_compile/T15122.hs | 2 +- > .../tests/indexed-types/should_compile/T2219.hs | 4 +- > .../tests/indexed-types/should_compile/T7585.hs | 6 +- > .../tests/indexed-types/should_compile/T9747.hs | 9 +- > .../tests/indexed-types/should_fail/T12522a.hs | 6 +- > 
.../tests/indexed-types/should_fail/T12522a.stderr | 6 +- > .../tests/indexed-types/should_fail/T13674.hs | 4 +- > .../tests/indexed-types/should_fail/T13784.hs | 5 +- > .../tests/indexed-types/should_fail/T13784.stderr | 14 +- > .../tests/indexed-types/should_fail/T13877.hs | 6 +- > .../tests/indexed-types/should_fail/T13972.hs | 2 +- > .../tests/indexed-types/should_fail/T14175.hs | 2 +- > .../tests/indexed-types/should_fail/T14246.hs | 8 +- > .../tests/indexed-types/should_fail/T14246.stderr | 2 +- > .../tests/indexed-types/should_fail/T14369.hs | 2 +- > testsuite/tests/indexed-types/should_fail/T2544.hs | 4 +- > .../tests/indexed-types/should_fail/T2544.stderr | 8 +- > .../tests/indexed-types/should_fail/T3330c.hs | 6 +- > .../tests/indexed-types/should_fail/T3330c.stderr | 10 +- > testsuite/tests/indexed-types/should_fail/T4174.hs | 10 +- > .../tests/indexed-types/should_fail/T4174.stderr | 6 +- > testsuite/tests/indexed-types/should_fail/T7786.hs | 4 +- > .../tests/indexed-types/should_fail/T7786.stderr | 25 +- > testsuite/tests/indexed-types/should_fail/T7967.hs | 10 +- > .../tests/indexed-types/should_fail/T7967.stderr | 12 +- > testsuite/tests/indexed-types/should_fail/T9036.hs | 7 +- > .../tests/indexed-types/should_fail/T9036.stderr | 2 +- > testsuite/tests/indexed-types/should_fail/T9662.hs | 4 +- > .../tests/indexed-types/should_fail/T9662.stderr | 6 +- > .../tests/indexed-types/should_run/T11465a.hs | 1 - > .../should_run/overloadedrecflds_generics.hs | 5 +- > .../should_run/overloadedrecfldsrun07.hs | 6 +- > .../parser/should_compile/DumpParsedAst.stderr | 109 ++--- > .../tests/parser/should_compile/DumpRenamedAst.hs | 2 +- > .../parser/should_compile/DumpRenamedAst.stderr | 62 ++- > testsuite/tests/parser/should_compile/T10379.hs | 2 +- > testsuite/tests/parser/should_fail/T15209.stderr | 2 +- > testsuite/tests/parser/should_fail/all.T | 5 + > testsuite/tests/parser/should_fail/readFail036.hs | 4 +- > .../tests/parser/should_fail/readFail036.stderr | 4 +- > testsuite/tests/parser/should_fail/typeops_A.hs | 1 + > .../tests/parser/should_fail/typeops_A.stderr | 2 + > testsuite/tests/parser/should_fail/typeops_B.hs | 1 + > .../tests/parser/should_fail/typeops_B.stderr | 2 + > testsuite/tests/parser/should_fail/typeops_C.hs | 1 + > .../tests/parser/should_fail/typeops_C.stderr | 2 + > testsuite/tests/parser/should_fail/typeops_D.hs | 1 + > .../tests/parser/should_fail/typeops_D.stderr | 2 + > .../tests/partial-sigs/should_compile/T15039a.hs | 12 +- > .../partial-sigs/should_compile/T15039a.stderr | 11 +- > .../tests/partial-sigs/should_compile/T15039b.hs | 12 +- > .../partial-sigs/should_compile/T15039b.stderr | 44 +- > .../tests/partial-sigs/should_compile/T15039c.hs | 12 +- > .../partial-sigs/should_compile/T15039c.stderr | 11 +- > .../tests/partial-sigs/should_compile/T15039d.hs | 12 +- > .../partial-sigs/should_compile/T15039d.stderr | 44 +- > .../tests/partial-sigs/should_fail/T14040a.hs | 2 +- > testsuite/tests/partial-sigs/should_fail/T14584.hs | 2 +- > .../tests/partial-sigs/should_fail/T14584.stderr | 2 +- > testsuite/tests/patsyn/should_compile/T12698.hs | 2 +- > testsuite/tests/patsyn/should_compile/T12968.hs | 2 +- > testsuite/tests/patsyn/should_compile/T13768.hs | 8 +- > testsuite/tests/patsyn/should_compile/T14058.hs | 2 +- > testsuite/tests/patsyn/should_compile/T14058a.hs | 3 +- > testsuite/tests/patsyn/should_fail/T14507.hs | 4 +- > testsuite/tests/patsyn/should_fail/T14507.stderr | 2 +- > testsuite/tests/patsyn/should_fail/T14552.hs | 2 +- > 
testsuite/tests/perf/compiler/T12227.hs | 17 +- > testsuite/tests/perf/compiler/T12545a.hs | 3 +- > testsuite/tests/perf/compiler/T13035.hs | 13 +- > testsuite/tests/perf/compiler/T13035.stderr | 2 +- > testsuite/tests/perf/compiler/T9872d.hs | 186 ++++++-- > testsuite/tests/pmcheck/complete_sigs/T14253.hs | 2 +- > testsuite/tests/pmcheck/should_compile/T14086.hs | 2 +- > testsuite/tests/pmcheck/should_compile/T3927b.hs | 8 +- > testsuite/tests/polykinds/MonoidsTF.hs | 4 +- > testsuite/tests/polykinds/PolyKinds10.hs | 27 +- > testsuite/tests/polykinds/SigTvKinds3.hs | 2 +- > testsuite/tests/polykinds/T10134a.hs | 3 +- > testsuite/tests/polykinds/T10934.hs | 6 +- > testsuite/tests/polykinds/T11142.hs | 2 +- > testsuite/tests/polykinds/T11399.hs | 2 +- > testsuite/tests/polykinds/T11480b.hs | 24 +- > testsuite/tests/polykinds/T11520.hs | 2 +- > testsuite/tests/polykinds/T11523.hs | 1 - > testsuite/tests/polykinds/T11554.hs | 2 +- > testsuite/tests/polykinds/T11616.hs | 2 +- > testsuite/tests/polykinds/T11640.hs | 2 +- > testsuite/tests/polykinds/T11648.hs | 4 +- > testsuite/tests/polykinds/T11648b.hs | 2 +- > testsuite/tests/polykinds/T11821a.hs | 2 +- > testsuite/tests/polykinds/T12055.hs | 4 +- > testsuite/tests/polykinds/T12055a.hs | 4 +- > testsuite/tests/polykinds/T12593.hs | 2 +- > testsuite/tests/polykinds/T12668.hs | 2 +- > testsuite/tests/polykinds/T12718.hs | 2 +- > testsuite/tests/polykinds/T13391.hs | 7 - > testsuite/tests/polykinds/T13391.stderr | 7 - > testsuite/tests/polykinds/T13625.hs | 2 +- > testsuite/tests/polykinds/T13659.hs | 4 +- > testsuite/tests/polykinds/T13659.stderr | 2 +- > testsuite/tests/polykinds/T13738.hs | 2 +- > testsuite/tests/polykinds/T13985.stderr | 10 +- > testsuite/tests/polykinds/T14174.hs | 2 +- > testsuite/tests/polykinds/T14174a.hs | 7 +- > testsuite/tests/polykinds/T14209.hs | 2 +- > testsuite/tests/polykinds/T14270.hs | 2 +- > testsuite/tests/polykinds/T14450.hs | 4 +- > testsuite/tests/polykinds/T14450.stderr | 2 +- > testsuite/tests/polykinds/T14515.hs | 3 +- > testsuite/tests/polykinds/T14520.hs | 4 +- > testsuite/tests/polykinds/T14555.hs | 4 +- > testsuite/tests/polykinds/T14561.hs | 2 +- > testsuite/tests/polykinds/T14563.hs | 2 +- > testsuite/tests/polykinds/T14580.hs | 2 +- > testsuite/tests/polykinds/T14710.stderr | 8 - > testsuite/tests/polykinds/T14846.hs | 2 +- > testsuite/tests/polykinds/T14873.hs | 3 +- > testsuite/tests/polykinds/T15170.hs | 2 +- > testsuite/tests/polykinds/T5716.hs | 3 +- > testsuite/tests/polykinds/T5716.stderr | 10 +- > testsuite/tests/polykinds/T6021.stderr | 4 - > testsuite/tests/polykinds/T6035.hs | 4 +- > testsuite/tests/polykinds/T6039.stderr | 12 +- > testsuite/tests/polykinds/T6093.hs | 7 +- > testsuite/tests/polykinds/T7404.stderr | 4 - > testsuite/tests/polykinds/T7594.hs | 6 +- > testsuite/tests/polykinds/T7594.stderr | 9 +- > testsuite/tests/polykinds/T8566.hs | 8 +- > testsuite/tests/polykinds/T8566.stderr | 8 +- > testsuite/tests/polykinds/T8566a.hs | 8 +- > testsuite/tests/polykinds/T8985.hs | 8 +- > testsuite/tests/polykinds/T9222.hs | 3 +- > testsuite/tests/polykinds/T9222.stderr | 6 +- > testsuite/tests/polykinds/all.T | 5 +- > testsuite/tests/printer/Ppr040.hs | 2 +- > testsuite/tests/printer/Ppr045.hs | 1 + > testsuite/tests/rename/should_fail/T11592.hs | 2 +- > testsuite/tests/rename/should_fail/T13947.stderr | 2 +- > .../tests/simplCore/should_compile/T13025a.hs | 6 +- > testsuite/tests/simplCore/should_compile/T13658.hs | 2 +- > .../tests/simplCore/should_compile/T14270a.hs | 3 +- > 
.../tests/simplCore/should_compile/T15186A.hs | 2 +- > testsuite/tests/simplCore/should_compile/T4903a.hs | 10 +- > testsuite/tests/simplCore/should_run/T13750a.hs | 13 +- > testsuite/tests/th/T11463.hs | 2 +- > testsuite/tests/th/T11484.hs | 2 +- > testsuite/tests/th/T13642.hs | 2 +- > testsuite/tests/th/T13781.hs | 2 +- > testsuite/tests/th/T14060.hs | 2 +- > testsuite/tests/th/T14869.hs | 2 +- > testsuite/tests/th/T8031.hs | 4 +- > testsuite/tests/th/TH_RichKinds2.hs | 5 +- > testsuite/tests/th/TH_RichKinds2.stderr | 2 +- > .../tests/typecheck/should_compile/SplitWD.hs | 2 +- > testsuite/tests/typecheck/should_compile/T10432.hs | 5 +- > testsuite/tests/typecheck/should_compile/T11237.hs | 4 +- > testsuite/tests/typecheck/should_compile/T11348.hs | 1 - > testsuite/tests/typecheck/should_compile/T11524.hs | 1 - > testsuite/tests/typecheck/should_compile/T11723.hs | 2 +- > testsuite/tests/typecheck/should_compile/T11811.hs | 2 +- > testsuite/tests/typecheck/should_compile/T12133.hs | 4 +- > testsuite/tests/typecheck/should_compile/T12381.hs | 2 +- > testsuite/tests/typecheck/should_compile/T12734.hs | 38 +- > .../tests/typecheck/should_compile/T12734a.hs | 31 +- > .../tests/typecheck/should_compile/T12734a.stderr | 9 +- > .../tests/typecheck/should_compile/T12785a.hs | 2 +- > testsuite/tests/typecheck/should_compile/T12911.hs | 2 +- > testsuite/tests/typecheck/should_compile/T12919.hs | 2 +- > testsuite/tests/typecheck/should_compile/T12987.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13083.hs | 5 +- > testsuite/tests/typecheck/should_compile/T13333.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13337.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13343.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13458.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13603.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13643.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13822.hs | 3 +- > testsuite/tests/typecheck/should_compile/T13871.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13879.hs | 2 +- > .../tests/typecheck/should_compile/T13915a.hs | 2 +- > .../tests/typecheck/should_compile/T13915b.hs | 2 +- > testsuite/tests/typecheck/should_compile/T13943.hs | 2 +- > testsuite/tests/typecheck/should_compile/T14441.hs | 3 +- > .../tests/typecheck/should_compile/T14934a.hs | 3 +- > testsuite/tests/typecheck/should_compile/all.T | 4 +- > testsuite/tests/typecheck/should_compile/tc191.hs | 2 +- > testsuite/tests/typecheck/should_compile/tc205.hs | 4 +- > testsuite/tests/typecheck/should_compile/tc269.hs | 3 +- > .../should_compile/valid_hole_fits_interactions.hs | 2 +- > .../tests/typecheck/should_fail/ClassOperator.hs | 4 +- > .../typecheck/should_fail/ClassOperator.stderr | 16 +- > .../typecheck/should_fail/CustomTypeErrors04.hs | 2 +- > .../typecheck/should_fail/CustomTypeErrors05.hs | 2 +- > .../tests/typecheck/should_fail/LevPolyBounded.hs | 2 +- > testsuite/tests/typecheck/should_fail/T11313.hs | 2 - > .../tests/typecheck/should_fail/T11313.stderr | 8 +- > testsuite/tests/typecheck/should_fail/T11724.hs | 2 +- > testsuite/tests/typecheck/should_fail/T11963.hs | 29 -- > .../tests/typecheck/should_fail/T11963.stderr | 20 - > testsuite/tests/typecheck/should_fail/T12648.hs | 6 +- > testsuite/tests/typecheck/should_fail/T12709.hs | 3 +- > .../tests/typecheck/should_fail/T12709.stderr | 8 +- > testsuite/tests/typecheck/should_fail/T12785b.hs | 8 +- > testsuite/tests/typecheck/should_fail/T12973.hs | 2 +- > 
testsuite/tests/typecheck/should_fail/T13105.hs | 2 +- > testsuite/tests/typecheck/should_fail/T13446.hs | 4 +- > testsuite/tests/typecheck/should_fail/T13909.hs | 2 +- > testsuite/tests/typecheck/should_fail/T13929.hs | 2 +- > .../tests/typecheck/should_fail/T13983.stderr | 2 +- > testsuite/tests/typecheck/should_fail/T14350.hs | 2 +- > testsuite/tests/typecheck/should_fail/T14904a.hs | 2 +- > testsuite/tests/typecheck/should_fail/T14904b.hs | 2 +- > testsuite/tests/typecheck/should_fail/T7645.hs | 4 +- > testsuite/tests/typecheck/should_fail/T7645.stderr | 5 +- > testsuite/tests/typecheck/should_fail/all.T | 1 - > .../tests/typecheck/should_run/EtaExpandLevPoly.hs | 4 +- > .../typecheck/should_run/KindInvariant.script | 6 +- > testsuite/tests/typecheck/should_run/T11120.hs | 2 +- > testsuite/tests/typecheck/should_run/T12809.hs | 2 +- > testsuite/tests/typecheck/should_run/T13435.hs | 3 +- > testsuite/tests/typecheck/should_run/TypeOf.hs | 2 +- > testsuite/tests/typecheck/should_run/TypeRep.hs | 4 +- > testsuite/tests/unboxedsums/sum_rr.hs | 2 +- > 391 files changed, 1865 insertions(+), 1997 deletions(-) > > Diff suppressed because of size. To see it, use: > > git diff-tree --root --patch-with-stat --no-color --find-copies-harder > --ignore-space-at-eol --cc d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-commits > From ggreif at gmail.com Fri Jun 15 08:49:41 2018 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 15 Jun 2018 10:49:41 +0200 Subject: [commit: ghc] master: UNREG: PprC: add support for of W16 literals (Ticket #15237) (01c9d95) In-Reply-To: <20180615081034.7BE153ABA3@ghc.haskell.org> References: <20180615081034.7BE153ABA3@ghc.haskell.org> Message-ID: Thanks for fixing this! I am in the process of building an unregisterised MIPS64 cross-compiler and just noticed this warning running by: HC [stage 1] libraries/base/dist-install/build/GHC/Show.p_o /tmp/ghc414_0/ghc_7.hc: In function '_c53i': /tmp/ghc414_0/ghc_7.hc:1483:17: error: warning: integer constant is so large that it is unsigned _s4Lo = (_s4Ld+-9223372036854775808) + (_s4Lg + _s4L9); ^ | 1483 | _s4Lo = (_s4Ld+-9223372036854775808) + (_s4Lg + _s4L9); | ^ Not sure whether I should be worried (there seem to be others of this kind) or a simple change in the datatype (int -> unsigned) could silence this. Cheers, Gabor On 6/15/18, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : > http://ghc.haskell.org/trac/ghc/changeset/01c9d95aca12caf5c954320a2a82335b32568554/ghc > >>--------------------------------------------------------------- > > commit 01c9d95aca12caf5c954320a2a82335b32568554 > Author: Sergei Trofimovich > Date: Thu Jun 14 23:13:16 2018 +0100 > > UNREG: PprC: add support for of W16 literals (Ticket #15237) > > Fix UNREG build failure for 32-bit targets. > > This change is an equivalent of commit > 0238a6c78102d43dae2f56192bd3486e4f9ecf1d > ("UNREG: PprC: add support for of W32 literals") > > The change allows combining two subwords into one word > on 32-bit targets. Tested on nios2-unknown-linux-gnu. 
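As a standalone illustration of that combining step (made-up names, not the PprC code itself — it only mimics the shiftL/.|. arithmetic for the two byte orders):

```
import Data.Bits (shiftL, (.|.))
import Data.Word (Word16, Word32)

-- Pack two 16-bit statics into one 32-bit word; `a` plays the role of
-- the subword that comes first in the static data.
combineW16 :: Bool -> Word16 -> Word16 -> Word32
combineW16 bigEndian a b
  | bigEndian = (fromIntegral a `shiftL` 16) .|. fromIntegral b
  | otherwise = (fromIntegral b `shiftL` 16) .|. fromIntegral a

main :: IO ()
main = do
  print (combineW16 True  0xAAAA 0xBBBB)  -- 0xAAAABBBB on big-endian targets
  print (combineW16 False 0xAAAA 0xBBBB)  -- 0xBBBBAAAA on little-endian targets
```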
> > GHC Trac Issues: #15237 > > Signed-off-by: Sergei Trofimovich > > >>--------------------------------------------------------------- > > 01c9d95aca12caf5c954320a2a82335b32568554 > compiler/cmm/PprC.hs | 8 ++++++++ > 1 file changed, 8 insertions(+) > > diff --git a/compiler/cmm/PprC.hs b/compiler/cmm/PprC.hs > index e46fff1..8b30bbf 100644 > --- a/compiler/cmm/PprC.hs > +++ b/compiler/cmm/PprC.hs > @@ -546,6 +546,14 @@ pprStatics dflags (CmmStaticLit (CmmInt a W32) : > rest) > else pprStatics dflags (CmmStaticLit (CmmInt ((shiftL b 32) .|. a) W64) > : > rest) > +pprStatics dflags (CmmStaticLit (CmmInt a W16) : > + CmmStaticLit (CmmInt b W16) : rest) > + | wordWidth dflags == W32 > + = if wORDS_BIGENDIAN dflags > + then pprStatics dflags (CmmStaticLit (CmmInt ((shiftL a 16) .|. b) W32) > : > + rest) > + else pprStatics dflags (CmmStaticLit (CmmInt ((shiftL b 16) .|. a) W32) > : > + rest) > pprStatics dflags (CmmStaticLit (CmmInt _ w) : _) > | w /= wordWidth dflags > = pprPanic "pprStatics: cannot emit a non-word-sized static literal" (ppr > w) > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-commits > From simonpj at microsoft.com Fri Jun 15 09:04:58 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Jun 2018 09:04:58 +0000 Subject: Strace In-Reply-To: References: Message-ID: I've removed the code, if you update it should be gone Yes, that’s great. The commit I made should have reduced the amount of failing tests to 0. framework failures are always quite unusual. Definitely not zero! I’m getting these failing tests Unexpected failures: plugins/plugins07.run plugins07 [bad exit code] (normal) plugins/plugins09.run plugins09 [bad stdout] (normal) plugins/plugins11.run plugins11 [bad stdout] (normal) plugins/T10420.run T10420 [bad exit code] (normal) plugins/T11244.run T11244 [bad stderr] (normal) plugins/plugin-recomp-pure.run plugin-recomp-pure [bad exit code] (normal) plugins/plugin-recomp-impure.run plugin-recomp-impure [bad exit code] (normal) plugins/plugin-recomp-flags.run plugin-recomp-flags [bad exit code] (normal) rts/stack002.run stack002 [exit code non-0] (normal) rts/T3236.run T3236 [exit code non-0] (normal) rts/testwsdeque.run testwsdeque [exit code non-0] (threaded1) /../libraries/Win32/tests/T4452.run T4452 [bad exit code] (normal) Unexpected stat failures: perf/compiler/T6048.run T6048 [stat not good enough] (optasm) perf/compiler/T12234.run T12234 [stat not good enough] (optasm) perf/compiler/T12150.run T12150 [stat not good enough] (optasm) perf/should_run/T15226.run T15226 [stat too good] (normal) perf/should_run/T15226a.run T15226a [stat too good] (normal) perf/compiler/MultiLayerModules.run MultiLayerModules [stat not good enough] (normal) Framework failures: plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) plugins/plugin-recomp-pure.run plugin-recomp-pure [normal] (pre_cmd failed: 2) plugins/plugin-recomp-impure.run plugin-recomp-impure [normal] (pre_cmd failed: 2) plugins/plugin-recomp-flags.run plugin-recomp-flags [normal] (pre_cmd failed: 2) Framework warnings: . T13701 [numfield-no-expected] (No expected value found for bytes allocated in num_field check) I’ll send you the info you wanted for T10420 in a separate email. Thanks for helping! 
Simon From: Phyx Sent: 13 June 2018 20:47 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Strace Hi Simon, On Wed, Jun 13, 2018 at 5:24 PM, Simon Peyton Jones > wrote: OK – so maybe the root cause is a framework failure – and indeed for the last few weeks I’ve seen Framework failures: plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) I have just learned to live with these failures, because I knew you were working on making things better. But it sounds as if they are still taking place. The commit I made should have reduced the amount of failing tests to 0. framework failures are always quite unusual. So: * Yes, please make it not happen by default I've removed the code, if you update it should be gone. It was there and on by default because I was trying to debug failures on Harbormaster, I realized a switch isn't very useful as I won't be able to toggle it for Harbormaster anyway. * * If you don’t get these framework failures, can we work together to resolve them? These don't happen for me nor on Harbormaster, try picking a test, e.g T10420 run only that test to make sure it's not a threading issue: make TEST=T10420 test -C testsuite/tests If it still gives a framework error then do at the top level make VERBOSE=3 TEST=T10420 test -C testsuite/tests once it runs, the output should contain the command it ran as a pre_cmd, and the stdout and stderr from the pre_cmd output. Could you then send the error? if it doesn't show any of this, try make CLEANP=0 VERBOSE=3 TEST= T10420 test -C testsuite/tests --trace and copy and paste the pre_cmd command, which should just replay the action it did. Cheers, Tamar Thanks Simon From: Phyx > Sent: 13 June 2018 17:19 To: Simon Peyton Jones > Cc: ghc-devs at haskell.org Subject: Re: Strace Hi Simon, The strace is only supposed to run when the normal test pre_cmd fails. If it's running that often it means your tests are all failing during pre_cmd with a framework failure https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/testlib.py But maybe I shouldn't turn this on my default. I'll pramaterize it when I get home. Tamar. On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones > wrote: Tamar I’m getting megabytes of output from ‘sh validate’ on windows. It looks like this 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 291 152036 [main] sh 2880 faccessat: returning 0 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: normal write, 7 bytes ispipe() 1 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: normal read, 7 bytes ispipe() 1 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary mode 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, mask_bits 0 but with hundreds of thousands of lines. 
(I have not counted) I believe that it may be the result of this line, earlier in the log cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test spaces/./plugins/plugins07.run" && strace $MAKE -s --no-print-directory -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# Note the strace. That in turn was added in your commit commit 60fb2b2160aa16194b74262f4df8fad5af171b0f Author: Tamar Christina > Date: Mon May 28 19:34:11 2018 +0100 Clean up Windows testsuite failures Summary: Another round and attempt at getting these down to 0. Could you perhaps have made a mistake here? Currently validate is unusable. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Fri Jun 15 09:08:30 2018 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 15 Jun 2018 11:08:30 +0200 Subject: [commit: ghc] master: UNREG: PprC: add support for of W16 literals (Ticket #15237) (01c9d95) In-Reply-To: <20180615100005.5e9a0ebb@sf> References: <20180615081034.7BE153ABA3@ghc.haskell.org> <20180615100005.5e9a0ebb@sf> Message-ID: Hi Sergei, thanks for your swift response! I did: ``` $ mips64-wrsmllib64-linux-gcc -E -dM - wrote: > On Fri, 15 Jun 2018 10:49:41 +0200 > Gabor Greif wrote: > >> Thanks for fixing this! >> >> I am in the process of building an unregisterised MIPS64 >> cross-compiler and just noticed this warning running by: >> >> HC [stage 1] libraries/base/dist-install/build/GHC/Show.p_o >> /tmp/ghc414_0/ghc_7.hc: In function '_c53i': >> >> /tmp/ghc414_0/ghc_7.hc:1483:17: error: >> warning: integer constant is so large that it is unsigned >> _s4Lo = (_s4Ld+-9223372036854775808) + (_s4Lg + _s4L9); >> ^ >> | >> 1483 | _s4Lo = (_s4Ld+-9223372036854775808) + (_s4Lg + _s4L9); >> | ^ >> >> Not sure whether I should be worried (there seem to be others of this >> kind) or a simple change in the datatype (int -> unsigned) could >> silence this. > > The overflow looks fishy. -9223372036854775808 is 0x8000000000000000. > What ABI your mips64 targets to? 64 or n32? I'd like to reproduce it > locally. > > Simplest way to check for ABI (mine is N32): > $ mips64-unknown-linux-gnu-gcc -E -dM - #define _MIPS_SIM _ABIN32 > > -- > > Sergei > From vlad.z.4096 at gmail.com Fri Jun 15 09:50:56 2018 From: vlad.z.4096 at gmail.com (Vladislav Zavialov) Date: Fri, 15 Jun 2018 12:50:56 +0300 Subject: [commit: ghc] master: Embrace -XTypeInType, add -XStarIsType (d650729) In-Reply-To: References: <20180614190732.057773ABA3@ghc.haskell.org> Message-ID: Hi Gabor, Indeed, I can reproduce this issue. This is happening because your locale does not support Unicode. It is probably something like this: $ locale -a C POSIX Rather than fix this particular issue, I suggest we forbid Unicode in GHC sources using the linter (the one that checks for lines too long, etc) to avoid such problems in the future. > Can this be done with unicode escapes somehow? Yes, that would be '\x2605'. All the best, - Vladislav On Jun 15, 2018 11:34, "Gabor Greif" wrote: > My `happy` chokes on the unicode sequence you added: > > (if isUnicode $1 then "★" else "*") > > Casn this be done with unicode escapes somehow? 
> > Cheers, > > Gabor > > PS: Happy Version 1.19.9 Copyright (c) 1993-1996 Andy Gill, Simon > Marlow (c) 1997-2005 Simon Marlow > > On 6/14/18, git at git.haskell.org wrote: > > Repository : ssh://git at git.haskell.org/ghc > > > > On branch : master > > Link : > > > http://ghc.haskell.org/trac/ghc/changeset/d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60/ghc > > > >>--------------------------------------------------------------- > > > > commit d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > > Author: Vladislav Zavialov > > Date: Thu Jun 14 15:02:36 2018 -0400 > > > > Embrace -XTypeInType, add -XStarIsType > > > > Summary: > > Implement the "Embrace Type :: Type" GHC proposal, > > .../ghc-proposals/blob/master/proposals/0020-no-type-in-type.rst > > > > GHC 8.0 included a major change to GHC's type system: the Type :: > Type > > axiom. Though casual users were protected from this by hiding its > > features behind the -XTypeInType extension, all programs written in > GHC > > 8+ have the axiom behind the scenes. In order to preserve backward > > compatibility, various legacy features were left unchanged. For > example, > > with -XDataKinds but not -XTypeInType, GADTs could not be used in > types. > > Now these restrictions are lifted and -XTypeInType becomes a > redundant > > flag that will be eventually deprecated. > > > > * Incorporate the features currently in -XTypeInType into the > > -XPolyKinds and -XDataKinds extensions. > > * Introduce a new extension -XStarIsType to control how to parse * in > > code and whether to print it in error messages. > > > > Test Plan: Validate > > > > Reviewers: goldfire, hvr, bgamari, alanz, simonpj > > > > Reviewed By: goldfire, simonpj > > > > Subscribers: rwbarton, thomie, mpickering, carter > > > > GHC Trac Issues: #15195 > > > > Differential Revision: https://phabricator.haskell.org/D4748 > > > > > >>--------------------------------------------------------------- > > > > d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > > .gitignore | 1 + > > .gitmodules | 4 +- > > compiler/basicTypes/DataCon.hs | 22 +- > > compiler/basicTypes/Name.hs | 21 +- > > compiler/basicTypes/RdrName.hs | 96 +++- > > compiler/basicTypes/SrcLoc.hs | 5 +- > > compiler/deSugar/DsMeta.hs | 7 +- > > compiler/hsSyn/Convert.hs | 37 +- > > compiler/hsSyn/HsDecls.hs | 9 +- > > compiler/hsSyn/HsExtension.hs | 16 +- > > compiler/hsSyn/HsInstances.hs | 5 - > > compiler/hsSyn/HsTypes.hs | 117 +---- > > compiler/iface/IfaceType.hs | 8 +- > > compiler/main/DynFlags.hs | 31 ++ > > compiler/main/DynFlags.hs-boot | 1 + > > compiler/main/HscTypes.hs | 3 +- > > compiler/parser/Lexer.x | 104 +++-- > > compiler/parser/Parser.y | 88 ++-- > > compiler/parser/RdrHsSyn.hs | 190 ++++---- > > compiler/prelude/PrelNames.hs | 7 +- > > compiler/prelude/PrelNames.hs-boot | 3 +- > > compiler/prelude/TysWiredIn.hs | 24 +- > > compiler/rename/RnEnv.hs | 43 +- > > compiler/rename/RnSource.hs | 4 +- > > compiler/rename/RnTypes.hs | 186 ++------ > > compiler/typecheck/TcDeriv.hs | 14 +- > > compiler/typecheck/TcHsType.hs | 82 ++-- > > compiler/typecheck/TcInstDcls.hs | 4 +- > > compiler/typecheck/TcMType.hs | 2 +- > > compiler/typecheck/TcPatSyn.hs | 2 +- > > compiler/typecheck/TcRnTypes.hs | 6 - > > compiler/typecheck/TcSplice.hs | 4 +- > > compiler/typecheck/TcTyClsDecls.hs | 43 +- > > compiler/types/Kind.hs | 33 +- > > compiler/types/TyCoRep.hs | 1 + > > compiler/types/TyCon.hs | 8 +- > > compiler/types/Type.hs | 11 +- > > compiler/types/Unify.hs | 2 +- > > compiler/utils/Outputable.hs | 11 +- > > docs/users_guide/8.6.1-notes.rst 
| 30 +- > > docs/users_guide/glasgow_exts.rst | 482 > > +++++++++------------ > > libraries/base/Data/Data.hs | 4 +- > > libraries/base/Data/Kind.hs | 2 +- > > libraries/base/Data/Proxy.hs | 2 +- > > libraries/base/Data/Type/Equality.hs | 4 +- > > libraries/base/Data/Typeable.hs | 26 +- > > libraries/base/Data/Typeable/Internal.hs | 1 - > > libraries/base/GHC/Base.hs | 3 +- > > libraries/base/GHC/Err.hs | 2 +- > > libraries/base/GHC/Generics.hs | 50 +-- > > libraries/base/Type/Reflection/Unsafe.hs | 2 +- > > libraries/base/tests/CatEntail.hs | 4 +- > > .../ghc-boot-th/GHC/LanguageExtensions/Type.hs | 1 + > > libraries/ghc-prim/GHC/Magic.hs | 3 +- > > libraries/ghc-prim/GHC/Types.hs | 8 +- > > testsuite/tests/codeGen/should_fail/T13233.hs | 2 +- > > testsuite/tests/dependent/ghci/T11549.script | 2 +- > > testsuite/tests/dependent/ghci/T14238.stdout | 2 +- > > testsuite/tests/dependent/should_compile/Dep1.hs | 2 +- > > testsuite/tests/dependent/should_compile/Dep2.hs | 2 +- > > testsuite/tests/dependent/should_compile/Dep3.hs | 2 +- > > .../tests/dependent/should_compile/DkNameRes.hs | 9 + > > .../dependent/should_compile/InferDependency.hs | 6 - > > .../dependent/should_compile/KindEqualities.hs | 2 +- > > .../dependent/should_compile/KindEqualities2.hs | 3 +- > > .../tests/dependent/should_compile/KindLevels.hs | 2 +- > > .../tests/dependent/should_compile/RAE_T32b.hs | 24 +- > > testsuite/tests/dependent/should_compile/Rae31.hs | 23 +- > > .../tests/dependent/should_compile/RaeBlogPost.hs | 27 +- > > .../tests/dependent/should_compile/RaeJobTalk.hs | 2 +- > > testsuite/tests/dependent/should_compile/T11405.hs | 2 +- > > testsuite/tests/dependent/should_compile/T11635.hs | 2 +- > > testsuite/tests/dependent/should_compile/T11711.hs | 1 - > > testsuite/tests/dependent/should_compile/T11719.hs | 6 +- > > testsuite/tests/dependent/should_compile/T11966.hs | 1 - > > testsuite/tests/dependent/should_compile/T12176.hs | 2 +- > > testsuite/tests/dependent/should_compile/T12442.hs | 4 +- > > testsuite/tests/dependent/should_compile/T12742.hs | 2 +- > > testsuite/tests/dependent/should_compile/T13910.hs | 9 +- > > testsuite/tests/dependent/should_compile/T13938.hs | 3 +- > > .../tests/dependent/should_compile/T13938a.hs | 3 +- > > testsuite/tests/dependent/should_compile/T14038.hs | 3 +- > > .../tests/dependent/should_compile/T14066a.hs | 2 +- > > testsuite/tests/dependent/should_compile/T14556.hs | 3 +- > > testsuite/tests/dependent/should_compile/T14720.hs | 3 +- > > testsuite/tests/dependent/should_compile/T14749.hs | 2 +- > > testsuite/tests/dependent/should_compile/T14991.hs | 3 +- > > testsuite/tests/dependent/should_compile/T9632.hs | 2 +- > > .../tests/dependent/should_compile/TypeLevelVec.hs | 2 +- > > testsuite/tests/dependent/should_compile/all.T | 1 + > > .../dependent/should_compile/dynamic-paper.hs | 27 +- > > .../tests/dependent/should_compile/mkGADTVars.hs | 2 +- > > .../tests/dependent/should_fail/BadTelescope.hs | 2 +- > > .../tests/dependent/should_fail/BadTelescope2.hs | 2 +- > > .../tests/dependent/should_fail/BadTelescope3.hs | 2 +- > > .../tests/dependent/should_fail/BadTelescope4.hs | 2 +- > > testsuite/tests/dependent/should_fail/DepFail1.hs | 2 +- > > .../tests/dependent/should_fail/InferDependency.hs | 2 +- > > .../tests/dependent/should_fail/KindLevelsB.hs | 9 - > > .../tests/dependent/should_fail/KindLevelsB.stderr | 5 - > > .../tests/dependent/should_fail/PromotedClass.hs | 2 +- > > testsuite/tests/dependent/should_fail/RAE_T32a.hs | 28 +- > > 
.../tests/dependent/should_fail/RAE_T32a.stderr | 6 +- > > .../tests/dependent/should_fail/RenamingStar.hs | 2 +- > > .../dependent/should_fail/RenamingStar.stderr | 10 +- > > testsuite/tests/dependent/should_fail/SelfDep.hs | 2 + > > .../tests/dependent/should_fail/SelfDep.stderr | 8 +- > > testsuite/tests/dependent/should_fail/T11407.hs | 2 +- > > testsuite/tests/dependent/should_fail/T11473.hs | 2 +- > > testsuite/tests/dependent/should_fail/T12081.hs | 2 +- > > testsuite/tests/dependent/should_fail/T12174.hs | 2 +- > > testsuite/tests/dependent/should_fail/T13135.hs | 4 +- > > testsuite/tests/dependent/should_fail/T13601.hs | 2 +- > > testsuite/tests/dependent/should_fail/T13780a.hs | 2 +- > > testsuite/tests/dependent/should_fail/T13780b.hs | 3 +- > > testsuite/tests/dependent/should_fail/T13780c.hs | 2 +- > > .../tests/dependent/should_fail/T13780c.stderr | 6 +- > > testsuite/tests/dependent/should_fail/T14066.hs | 4 +- > > testsuite/tests/dependent/should_fail/T14066c.hs | 2 +- > > testsuite/tests/dependent/should_fail/T14066d.hs | 2 +- > > testsuite/tests/dependent/should_fail/T14066e.hs | 2 +- > > testsuite/tests/dependent/should_fail/T14066f.hs | 2 +- > > testsuite/tests/dependent/should_fail/T14066g.hs | 2 +- > > testsuite/tests/dependent/should_fail/T14066h.hs | 2 +- > > testsuite/tests/dependent/should_fail/T15245.hs | 10 + > > .../tests/dependent/should_fail/T15245.stderr | 7 + > > .../tests/dependent/should_fail/TypeSkolEscape.hs | 2 +- > > testsuite/tests/dependent/should_fail/all.T | 2 +- > > testsuite/tests/dependent/should_run/T11964a.hs | 2 +- > > testsuite/tests/deriving/should_compile/T11416.hs | 3 +- > > testsuite/tests/deriving/should_compile/T11732a.hs | 2 +- > > testsuite/tests/deriving/should_compile/T11732b.hs | 2 +- > > testsuite/tests/deriving/should_compile/T11732c.hs | 2 +- > > testsuite/tests/deriving/should_compile/T14331.hs | 2 +- > > testsuite/tests/deriving/should_compile/T14579.hs | 3 +- > > testsuite/tests/deriving/should_compile/T14932.hs | 4 +- > > testsuite/tests/deriving/should_fail/T12512.hs | 2 +- > > testsuite/tests/deriving/should_fail/T14728a.hs | 2 +- > > testsuite/tests/deriving/should_fail/T14728b.hs | 2 +- > > testsuite/tests/deriving/should_fail/T15073.hs | 2 +- > > testsuite/tests/determinism/determ004/determ004.hs | 2 +- > > testsuite/tests/determinism/determ014/A.hs | 6 +- > > testsuite/tests/driver/T4437.hs | 1 + > > testsuite/tests/gadt/T7293.hs | 6 +- > > testsuite/tests/gadt/T7293.stderr | 4 +- > > testsuite/tests/gadt/T7294.hs | 6 +- > > testsuite/tests/gadt/T7294.stderr | 4 +- > > testsuite/tests/generics/GEq/GEq1.hs | 5 +- > > testsuite/tests/ghci/scripts/T10321.hs | 3 +- > > testsuite/tests/ghci/scripts/T11252.script | 2 +- > > testsuite/tests/ghci/scripts/T11376.script | 2 +- > > testsuite/tests/ghci/scripts/T12550.script | 2 +- > > testsuite/tests/ghci/scripts/T13407.script | 4 +- > > testsuite/tests/ghci/scripts/T13963.script | 2 +- > > testsuite/tests/ghci/scripts/T13988.hs | 2 +- > > testsuite/tests/ghci/scripts/T7873.script | 2 +- > > testsuite/tests/ghci/scripts/T7939.hs | 4 +- > > testsuite/tests/ghci/scripts/T8357.hs | 5 +- > > testsuite/tests/indexed-types/should_compile/HO.hs | 5 +- > > .../tests/indexed-types/should_compile/Numerals.hs | 7 +- > > .../tests/indexed-types/should_compile/T12369.hs | 4 +- > > .../tests/indexed-types/should_compile/T12522b.hs | 8 +- > > .../tests/indexed-types/should_compile/T12938.hs | 2 +- > > .../tests/indexed-types/should_compile/T13244.hs | 2 +- > > 
.../tests/indexed-types/should_compile/T13398b.hs | 2 +- > > .../tests/indexed-types/should_compile/T14162.hs | 3 +- > > .../tests/indexed-types/should_compile/T14554.hs | 5 +- > > .../tests/indexed-types/should_compile/T15122.hs | 2 +- > > .../tests/indexed-types/should_compile/T2219.hs | 4 +- > > .../tests/indexed-types/should_compile/T7585.hs | 6 +- > > .../tests/indexed-types/should_compile/T9747.hs | 9 +- > > .../tests/indexed-types/should_fail/T12522a.hs | 6 +- > > .../tests/indexed-types/should_fail/T12522a.stderr | 6 +- > > .../tests/indexed-types/should_fail/T13674.hs | 4 +- > > .../tests/indexed-types/should_fail/T13784.hs | 5 +- > > .../tests/indexed-types/should_fail/T13784.stderr | 14 +- > > .../tests/indexed-types/should_fail/T13877.hs | 6 +- > > .../tests/indexed-types/should_fail/T13972.hs | 2 +- > > .../tests/indexed-types/should_fail/T14175.hs | 2 +- > > .../tests/indexed-types/should_fail/T14246.hs | 8 +- > > .../tests/indexed-types/should_fail/T14246.stderr | 2 +- > > .../tests/indexed-types/should_fail/T14369.hs | 2 +- > > testsuite/tests/indexed-types/should_fail/T2544.hs | 4 +- > > .../tests/indexed-types/should_fail/T2544.stderr | 8 +- > > .../tests/indexed-types/should_fail/T3330c.hs | 6 +- > > .../tests/indexed-types/should_fail/T3330c.stderr | 10 +- > > testsuite/tests/indexed-types/should_fail/T4174.hs | 10 +- > > .../tests/indexed-types/should_fail/T4174.stderr | 6 +- > > testsuite/tests/indexed-types/should_fail/T7786.hs | 4 +- > > .../tests/indexed-types/should_fail/T7786.stderr | 25 +- > > testsuite/tests/indexed-types/should_fail/T7967.hs | 10 +- > > .../tests/indexed-types/should_fail/T7967.stderr | 12 +- > > testsuite/tests/indexed-types/should_fail/T9036.hs | 7 +- > > .../tests/indexed-types/should_fail/T9036.stderr | 2 +- > > testsuite/tests/indexed-types/should_fail/T9662.hs | 4 +- > > .../tests/indexed-types/should_fail/T9662.stderr | 6 +- > > .../tests/indexed-types/should_run/T11465a.hs | 1 - > > .../should_run/overloadedrecflds_generics.hs | 5 +- > > .../should_run/overloadedrecfldsrun07.hs | 6 +- > > .../parser/should_compile/DumpParsedAst.stderr | 109 ++--- > > .../tests/parser/should_compile/DumpRenamedAst.hs | 2 +- > > .../parser/should_compile/DumpRenamedAst.stderr | 62 ++- > > testsuite/tests/parser/should_compile/T10379.hs | 2 +- > > testsuite/tests/parser/should_fail/T15209.stderr | 2 +- > > testsuite/tests/parser/should_fail/all.T | 5 + > > testsuite/tests/parser/should_fail/readFail036.hs | 4 +- > > .../tests/parser/should_fail/readFail036.stderr | 4 +- > > testsuite/tests/parser/should_fail/typeops_A.hs | 1 + > > .../tests/parser/should_fail/typeops_A.stderr | 2 + > > testsuite/tests/parser/should_fail/typeops_B.hs | 1 + > > .../tests/parser/should_fail/typeops_B.stderr | 2 + > > testsuite/tests/parser/should_fail/typeops_C.hs | 1 + > > .../tests/parser/should_fail/typeops_C.stderr | 2 + > > testsuite/tests/parser/should_fail/typeops_D.hs | 1 + > > .../tests/parser/should_fail/typeops_D.stderr | 2 + > > .../tests/partial-sigs/should_compile/T15039a.hs | 12 +- > > .../partial-sigs/should_compile/T15039a.stderr | 11 +- > > .../tests/partial-sigs/should_compile/T15039b.hs | 12 +- > > .../partial-sigs/should_compile/T15039b.stderr | 44 +- > > .../tests/partial-sigs/should_compile/T15039c.hs | 12 +- > > .../partial-sigs/should_compile/T15039c.stderr | 11 +- > > .../tests/partial-sigs/should_compile/T15039d.hs | 12 +- > > .../partial-sigs/should_compile/T15039d.stderr | 44 +- > > .../tests/partial-sigs/should_fail/T14040a.hs | 2 +- > > 
testsuite/tests/partial-sigs/should_fail/T14584.hs | 2 +- > > .../tests/partial-sigs/should_fail/T14584.stderr | 2 +- > > testsuite/tests/patsyn/should_compile/T12698.hs | 2 +- > > testsuite/tests/patsyn/should_compile/T12968.hs | 2 +- > > testsuite/tests/patsyn/should_compile/T13768.hs | 8 +- > > testsuite/tests/patsyn/should_compile/T14058.hs | 2 +- > > testsuite/tests/patsyn/should_compile/T14058a.hs | 3 +- > > testsuite/tests/patsyn/should_fail/T14507.hs | 4 +- > > testsuite/tests/patsyn/should_fail/T14507.stderr | 2 +- > > testsuite/tests/patsyn/should_fail/T14552.hs | 2 +- > > testsuite/tests/perf/compiler/T12227.hs | 17 +- > > testsuite/tests/perf/compiler/T12545a.hs | 3 +- > > testsuite/tests/perf/compiler/T13035.hs | 13 +- > > testsuite/tests/perf/compiler/T13035.stderr | 2 +- > > testsuite/tests/perf/compiler/T9872d.hs | 186 ++++++-- > > testsuite/tests/pmcheck/complete_sigs/T14253.hs | 2 +- > > testsuite/tests/pmcheck/should_compile/T14086.hs | 2 +- > > testsuite/tests/pmcheck/should_compile/T3927b.hs | 8 +- > > testsuite/tests/polykinds/MonoidsTF.hs | 4 +- > > testsuite/tests/polykinds/PolyKinds10.hs | 27 +- > > testsuite/tests/polykinds/SigTvKinds3.hs | 2 +- > > testsuite/tests/polykinds/T10134a.hs | 3 +- > > testsuite/tests/polykinds/T10934.hs | 6 +- > > testsuite/tests/polykinds/T11142.hs | 2 +- > > testsuite/tests/polykinds/T11399.hs | 2 +- > > testsuite/tests/polykinds/T11480b.hs | 24 +- > > testsuite/tests/polykinds/T11520.hs | 2 +- > > testsuite/tests/polykinds/T11523.hs | 1 - > > testsuite/tests/polykinds/T11554.hs | 2 +- > > testsuite/tests/polykinds/T11616.hs | 2 +- > > testsuite/tests/polykinds/T11640.hs | 2 +- > > testsuite/tests/polykinds/T11648.hs | 4 +- > > testsuite/tests/polykinds/T11648b.hs | 2 +- > > testsuite/tests/polykinds/T11821a.hs | 2 +- > > testsuite/tests/polykinds/T12055.hs | 4 +- > > testsuite/tests/polykinds/T12055a.hs | 4 +- > > testsuite/tests/polykinds/T12593.hs | 2 +- > > testsuite/tests/polykinds/T12668.hs | 2 +- > > testsuite/tests/polykinds/T12718.hs | 2 +- > > testsuite/tests/polykinds/T13391.hs | 7 - > > testsuite/tests/polykinds/T13391.stderr | 7 - > > testsuite/tests/polykinds/T13625.hs | 2 +- > > testsuite/tests/polykinds/T13659.hs | 4 +- > > testsuite/tests/polykinds/T13659.stderr | 2 +- > > testsuite/tests/polykinds/T13738.hs | 2 +- > > testsuite/tests/polykinds/T13985.stderr | 10 +- > > testsuite/tests/polykinds/T14174.hs | 2 +- > > testsuite/tests/polykinds/T14174a.hs | 7 +- > > testsuite/tests/polykinds/T14209.hs | 2 +- > > testsuite/tests/polykinds/T14270.hs | 2 +- > > testsuite/tests/polykinds/T14450.hs | 4 +- > > testsuite/tests/polykinds/T14450.stderr | 2 +- > > testsuite/tests/polykinds/T14515.hs | 3 +- > > testsuite/tests/polykinds/T14520.hs | 4 +- > > testsuite/tests/polykinds/T14555.hs | 4 +- > > testsuite/tests/polykinds/T14561.hs | 2 +- > > testsuite/tests/polykinds/T14563.hs | 2 +- > > testsuite/tests/polykinds/T14580.hs | 2 +- > > testsuite/tests/polykinds/T14710.stderr | 8 - > > testsuite/tests/polykinds/T14846.hs | 2 +- > > testsuite/tests/polykinds/T14873.hs | 3 +- > > testsuite/tests/polykinds/T15170.hs | 2 +- > > testsuite/tests/polykinds/T5716.hs | 3 +- > > testsuite/tests/polykinds/T5716.stderr | 10 +- > > testsuite/tests/polykinds/T6021.stderr | 4 - > > testsuite/tests/polykinds/T6035.hs | 4 +- > > testsuite/tests/polykinds/T6039.stderr | 12 +- > > testsuite/tests/polykinds/T6093.hs | 7 +- > > testsuite/tests/polykinds/T7404.stderr | 4 - > > testsuite/tests/polykinds/T7594.hs | 6 +- > > 
testsuite/tests/polykinds/T7594.stderr | 9 +- > > testsuite/tests/polykinds/T8566.hs | 8 +- > > testsuite/tests/polykinds/T8566.stderr | 8 +- > > testsuite/tests/polykinds/T8566a.hs | 8 +- > > testsuite/tests/polykinds/T8985.hs | 8 +- > > testsuite/tests/polykinds/T9222.hs | 3 +- > > testsuite/tests/polykinds/T9222.stderr | 6 +- > > testsuite/tests/polykinds/all.T | 5 +- > > testsuite/tests/printer/Ppr040.hs | 2 +- > > testsuite/tests/printer/Ppr045.hs | 1 + > > testsuite/tests/rename/should_fail/T11592.hs | 2 +- > > testsuite/tests/rename/should_fail/T13947.stderr | 2 +- > > .../tests/simplCore/should_compile/T13025a.hs | 6 +- > > testsuite/tests/simplCore/should_compile/T13658.hs | 2 +- > > .../tests/simplCore/should_compile/T14270a.hs | 3 +- > > .../tests/simplCore/should_compile/T15186A.hs | 2 +- > > testsuite/tests/simplCore/should_compile/T4903a.hs | 10 +- > > testsuite/tests/simplCore/should_run/T13750a.hs | 13 +- > > testsuite/tests/th/T11463.hs | 2 +- > > testsuite/tests/th/T11484.hs | 2 +- > > testsuite/tests/th/T13642.hs | 2 +- > > testsuite/tests/th/T13781.hs | 2 +- > > testsuite/tests/th/T14060.hs | 2 +- > > testsuite/tests/th/T14869.hs | 2 +- > > testsuite/tests/th/T8031.hs | 4 +- > > testsuite/tests/th/TH_RichKinds2.hs | 5 +- > > testsuite/tests/th/TH_RichKinds2.stderr | 2 +- > > .../tests/typecheck/should_compile/SplitWD.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T10432.hs | 5 +- > > testsuite/tests/typecheck/should_compile/T11237.hs | 4 +- > > testsuite/tests/typecheck/should_compile/T11348.hs | 1 - > > testsuite/tests/typecheck/should_compile/T11524.hs | 1 - > > testsuite/tests/typecheck/should_compile/T11723.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T11811.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T12133.hs | 4 +- > > testsuite/tests/typecheck/should_compile/T12381.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T12734.hs | 38 +- > > .../tests/typecheck/should_compile/T12734a.hs | 31 +- > > .../tests/typecheck/should_compile/T12734a.stderr | 9 +- > > .../tests/typecheck/should_compile/T12785a.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T12911.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T12919.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T12987.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13083.hs | 5 +- > > testsuite/tests/typecheck/should_compile/T13333.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13337.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13343.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13458.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13603.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13643.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13822.hs | 3 +- > > testsuite/tests/typecheck/should_compile/T13871.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13879.hs | 2 +- > > .../tests/typecheck/should_compile/T13915a.hs | 2 +- > > .../tests/typecheck/should_compile/T13915b.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T13943.hs | 2 +- > > testsuite/tests/typecheck/should_compile/T14441.hs | 3 +- > > .../tests/typecheck/should_compile/T14934a.hs | 3 +- > > testsuite/tests/typecheck/should_compile/all.T | 4 +- > > testsuite/tests/typecheck/should_compile/tc191.hs | 2 +- > > testsuite/tests/typecheck/should_compile/tc205.hs | 4 +- > > testsuite/tests/typecheck/should_compile/tc269.hs | 3 +- > > .../should_compile/valid_hole_fits_interactions.hs | 2 +- > > 
.../tests/typecheck/should_fail/ClassOperator.hs | 4 +- > > .../typecheck/should_fail/ClassOperator.stderr | 16 +- > > .../typecheck/should_fail/CustomTypeErrors04.hs | 2 +- > > .../typecheck/should_fail/CustomTypeErrors05.hs | 2 +- > > .../tests/typecheck/should_fail/LevPolyBounded.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T11313.hs | 2 - > > .../tests/typecheck/should_fail/T11313.stderr | 8 +- > > testsuite/tests/typecheck/should_fail/T11724.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T11963.hs | 29 -- > > .../tests/typecheck/should_fail/T11963.stderr | 20 - > > testsuite/tests/typecheck/should_fail/T12648.hs | 6 +- > > testsuite/tests/typecheck/should_fail/T12709.hs | 3 +- > > .../tests/typecheck/should_fail/T12709.stderr | 8 +- > > testsuite/tests/typecheck/should_fail/T12785b.hs | 8 +- > > testsuite/tests/typecheck/should_fail/T12973.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T13105.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T13446.hs | 4 +- > > testsuite/tests/typecheck/should_fail/T13909.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T13929.hs | 2 +- > > .../tests/typecheck/should_fail/T13983.stderr | 2 +- > > testsuite/tests/typecheck/should_fail/T14350.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T14904a.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T14904b.hs | 2 +- > > testsuite/tests/typecheck/should_fail/T7645.hs | 4 +- > > testsuite/tests/typecheck/should_fail/T7645.stderr | 5 +- > > testsuite/tests/typecheck/should_fail/all.T | 1 - > > .../tests/typecheck/should_run/EtaExpandLevPoly.hs | 4 +- > > .../typecheck/should_run/KindInvariant.script | 6 +- > > testsuite/tests/typecheck/should_run/T11120.hs | 2 +- > > testsuite/tests/typecheck/should_run/T12809.hs | 2 +- > > testsuite/tests/typecheck/should_run/T13435.hs | 3 +- > > testsuite/tests/typecheck/should_run/TypeOf.hs | 2 +- > > testsuite/tests/typecheck/should_run/TypeRep.hs | 4 +- > > testsuite/tests/unboxedsums/sum_rr.hs | 2 +- > > 391 files changed, 1865 insertions(+), 1997 deletions(-) > > > > Diff suppressed because of size. To see it, use: > > > > git diff-tree --root --patch-with-stat --no-color > --find-copies-harder > > --ignore-space-at-eol --cc d650729f9a0f3b6aa5e6ef2d5fba337f6f70fa60 > > _______________________________________________ > > ghc-commits mailing list > > ghc-commits at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-commits > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat Jun 16 16:26:55 2018 From: ben at well-typed.com (Ben Gamari) Date: Sat, 16 Jun 2018 12:26:55 -0400 Subject: accuracy of asinh and atanh In-Reply-To: References: Message-ID: <87zhzur92o.fsf@smart-cactus.org> An embedded and charset-unspecified text was scrubbed... Name: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From brutallesale at gmail.com Sun Jun 17 14:01:26 2018 From: brutallesale at gmail.com (sasa bogicevic) Date: Sun, 17 Jun 2018 16:01:26 +0200 Subject: #10789 Message-ID: <59E92FEC-2D23-4283-ACE5-5AE1FFA8938F@gmail.com> Hello, I am looking at this task https://ghc.haskell.org/trac/ghc/ticket/10789 and need some help on implementing it. 
With the help of @int_index I found the place in TcErrors.hs where the error printing occurs, and I think the check that I need to add will look similar to this one:
https://github.com/ghc/ghc/blob/master/compiler/typecheck/TcErrors.hs#L1935

So I guess that we need to check if one of the kinds of the two types we are comparing defaults to * (or Type if you will) and then add a new warning that will be more descriptive as to why the failure happened. Maybe there is a way to check whether what we are comparing are actually type families; that would make the job easier, I guess.

Richard Eisenberg offered some help on this, but I am not sure how to get hold of him, so I'd appreciate any help I could get.

Thanks,
Sasa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: Message signed with OpenPGP
URL: 

From pi.boy.travis at gmail.com  Mon Jun 18 01:59:00 2018
From: pi.boy.travis at gmail.com (Travis Whitaker)
Date: Sun, 17 Jun 2018 18:59:00 -0700
Subject: Use of -dead_strip_dylibs by default makes the -framework flag appear broken.
Message-ID: 

Hello Haskell Friends,

GHC always passes -dead_strip_dylibs to the linker on macOS. This means that Haskell programs that use Objective-C-style dynamic binding (via objc_getClass or similar) won't actually be able to find the Objective-C methods they need at runtime. Here's an example illustrating the problem. Consider this small example program:

#include <stdio.h>

extern void *objc_getClass(char *n);

void test_get_class(char *n)
{
    void *cp = objc_getClass(n);
    if(cp == NULL)
    {
        printf("Didn't find class %s\n", n);
    }
    else
    {
        printf("Found class %s\n", n);
    }
}

int main(int argc, char *argv[])
{
    test_get_class(argv[1]);
    return 0;
}

Building like this:

clang -o hasclass main.c -lobjc -L/usr/lib -framework Foundation -F /System/Library/Frameworks/

Yields an executable that works like this:

$ ./hasclass NSObject
Found class NSObject
$ ./hasclass NSString
Found class NSString
$ ./hasclass NSDate
Found class NSDate

otool shows that we're linked against Foundation properly:

hasclass:
    /usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)
    /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation (compatibility version 300.0.0, current version 1452.23.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.50.4)

Now consider this equivalent Haskell example:

module Main where

import Foreign.C.String
import Foreign.Ptr
import System.Environment

foreign import ccall objc_getClass :: CString -> IO (Ptr a)

testGetClass :: String -> IO ()
testGetClass n = withCString n $ \cn -> do
    cp <- objc_getClass cn
    let m | cp == nullPtr = "Didn't find class " ++ n
          | otherwise     = "Found class " ++ n
    putStrLn m

main :: IO ()
main = getArgs >>= (testGetClass . head)

Building like this:

ghc -o hasclass Main.hs -lobjc -L/usr/lib -framework foundation -framework-path /System/Library/Frameworks/

Yields an executable that works like this:

$ ./hasclass NSObject
Found class NSObject
$ ./hasclass NSString
Didn't find class NSString
$ ./hasclass NSDate
Didn't find class NSDate

otool shows that our load commands for Foundation are missing:

hasclass:
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.50.4)
    /usr/lib/libobjc.A.dylib (compatibility version 1.0.0, current version 228.0.0)
    /nix/store/7jdxjpy1p5ynl9qrr3ymx01973a1abf6-gmp-6.1.2/lib/libgmp.10.dylib (compatibility version 14.0.0, current version 14.2.0)
    /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)

Interestingly, the testGetClass function will work just fine in GHCi, since it always loads all of the shared objects and frameworks it's asked to. As far as I can tell, the only way to get a hasclass executable with the correct behavior is to do the final linking manually with Clang.

My understanding is that this behavior was introduced to work around symbol count limitations introduced in macOS Sierra. It would be nice to wrap the frameworks passed to the linker in some flags that spare them from -dead_strip_dylibs. I haven't found such a feature in my limited digging around, but perhaps someone who knows more about systems programming on macOS will have an idea. Statically linking against the system frameworks would be a workable stopgap solution, but I have yet to find an easy way to do that.

I'm curious what others' thoughts are on this issue; it's very difficult to call Objective-C methods from Haskell (without generating Objective-C) without a fix for this.

Regards,

Travis Whitaker
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rae at cs.brynmawr.edu  Mon Jun 18 03:20:59 2018
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Sun, 17 Jun 2018 23:20:59 -0400
Subject: #10789
In-Reply-To: <59E92FEC-2D23-4283-ACE5-5AE1FFA8938F@gmail.com>
References: <59E92FEC-2D23-4283-ACE5-5AE1FFA8938F@gmail.com>
Message-ID: <7182701F-7066-4443-9DD8-4BFABEFA28BB@cs.brynmawr.edu>

> On Jun 17, 2018, at 10:01 AM, sasa bogicevic wrote:
>
> So I guess that we need to check if one of the kinds of two types we are comparing defaults to * (or Type if you will) and then
> add new warning that will be more descriptive as to why the failure happened. Maybe there is a way to check if what we are
> comparing are actually type families so that would make the job easier I guess.

I don't think the problem is particular to `Type` or defaulting. Instead, the problem is when one of the two mismatched types is a type family application where the type family has equations that pattern-match on an invisible parameter, and it's that invisible-parameter matching that's gone awry. Now that I think about it, detecting these particular conditions might be tricky: you might need to edit code in FamInstEnv that does type family equation lookup to return diagnostic information if a match fails. (I would look at reduceTyFamApp_maybe, and perhaps it can return something more interesting than Nothing in the failure case.)

>
> Richard Eisenberg offered some help on this but I am not sure how to grab hold of him so I'd appreciate any help I could get.
>

Just email! :)

Thanks for looking into this!
Richard
-------------- next part --------------
An HTML attachment was scrubbed...
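(For concreteness, a rough sketch of the shape such a change could take. Everything below is invented for illustration only: GHC's reduceTyFamApp_maybe currently reports failure as a plain Nothing, as noted above, and none of these constructor names exist in the tree; a real patch would have to thread whatever extra information is returned through to TcErrors.)

module ReduceResultSketch where

-- Illustrative only: a result type for type-family reduction that keeps
-- enough information to explain a failure, instead of collapsing every
-- failure into Nothing.  'co' and 'ty' stand in for GHC's Coercion and Type.
data ReduceResult co ty
  = Reduced co ty          -- an equation matched: the proof and the reduct
  | NoEquationMatched      -- no equation matched at all
  | InvisibleArgMismatch   -- an equation matched the visible arguments but
                           -- failed on an invisible (kind) argument; this is
                           -- the case that deserves the better error message

-- How an error-reporting caller might consume it:
explainReduction :: ReduceResult co ty -> String
explainReduction (Reduced _ _)        = "type family application reduced"
explainReduction NoEquationMatched    = "no equation matched"
explainReduction InvisibleArgMismatch =
  "an equation failed to match only because of an invisible kind argument"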
URL: From omeragacan at gmail.com Mon Jun 18 07:03:49 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 18 Jun 2018 10:03:49 +0300 Subject: req_interp tests Message-ID: Hi, I have a few problems with req_interp tests. First, req_interp doesn't actually skip the test, it runs it but expects it to fail. This causes problems when testing stage 1 compiler because 629 req_interp tests are run for no reason. Ideally I think req_interp would skip the test, and for the error messages ("not build for interactive use" etc.) we'd have a few stage 1 tests (maybe we alrady have this). This would make the testsuite much faster for testing stage 1. Second, combination of req_interp and compile_fail currently doesn't work, because req_interp makes a failing test pass, but compile_fail expects the test to fail. See T3953 as an example. Making req_interp skip the test fixes this problem as well. So I'd like to make req_interp skip the test instead of expecting it to fail. Any objections to this? Ömer From simonpj at microsoft.com Mon Jun 18 08:35:02 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jun 2018 08:35:02 +0000 Subject: DEBUG-on Message-ID: Ben We don't really test with a DEBUG-enabled compiler. And yet, those assertions are all there for a reason. In our CI infrastructure, I wonder if we might do a regression-test run on at least one architecture with DEBUG on? e.g. https://ghc.haskell.org/trac/ghc/ticket/14904 Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jun 18 08:41:13 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jun 2018 08:41:13 +0000 Subject: #10789 In-Reply-To: <7182701F-7066-4443-9DD8-4BFABEFA28BB@cs.brynmawr.edu> References: <59E92FEC-2D23-4283-ACE5-5AE1FFA8938F@gmail.com> <7182701F-7066-4443-9DD8-4BFABEFA28BB@cs.brynmawr.edu> Message-ID: Richard is right. Let's attach this useful info to the ticket, rather than ghc-devs. I've done that for this exchange. https://ghc.haskell.org/trac/ghc/ticket/10789#comment:18 Simon From: ghc-devs On Behalf Of Richard Eisenberg Sent: 18 June 2018 04:21 To: sasa bogicevic Cc: ghc-devs at haskell.org Subject: Re: #10789 On Jun 17, 2018, at 10:01 AM, sasa bogicevic > wrote: So I guess that we need to check if one of the kinds of two types we are comparing defaults to * (or Type if you will) and then add new warning that will be more descriptive as to why the failure happened. Maybe there is a way to check if what we are comparing are actually type families so that would make the job easier I guess. I don't think the problem is particular to `Type` or defaulting. Instead, the problem is when one of the two mismatched types is a type family application where the type family has equations that pattern-match on an invisible parameter, and it's that invisible-parameter matching that's gone awry. Now that I think about it, detecting these particular conditions might be tricky: you might need to edit code in FamInstEnv that does type family equation lookup to return diagnostic information if a match fails. (I would look at reduceTyFamApp_maybe, and perhaps it can return something more interesting than Nothing in the failure case.) Richard Eisenberg offered some help on this but I am not sure how to grab hold of him so I'd appreciate any help I could get. Just email! :) Thanks for looking into this! Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From omeragacan at gmail.com Mon Jun 18 08:45:05 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 18 Jun 2018 11:45:05 +0300 Subject: DEBUG-on In-Reply-To: References: Message-ID: If we're going to test with a DEBUG-enabled compiler we may also want to enable sanity checks. I've been recently using this a lot and it really catches a lot of bugs that can go unnoticed without sanity checks. I recently filed #15241 for some of the tests that currently fail the sanity checks. Ömer Simon Peyton Jones via ghc-devs , 18 Haz 2018 Pzt, 11:35 tarihinde şunu yazdı: > > Ben > > We don’t really test with a DEBUG-enabled compiler. And yet, those assertions are all there for a reason. > > In our CI infrastructure, I wonder if we might do a regression-test run on at least one architecture with DEBUG on? > > e.g. https://ghc.haskell.org/trac/ghc/ticket/14904 > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jun 18 08:48:59 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jun 2018 08:48:59 +0000 Subject: DEBUG-on In-Reply-To: References: Message-ID: good idea! | -----Original Message----- | From: Ömer Sinan Ağacan | Sent: 18 June 2018 09:45 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: DEBUG-on | | If we're going to test with a DEBUG-enabled compiler we may also want to | enable sanity checks. I've been recently using this a lot and it really | catches a lot of bugs that can go unnoticed without sanity checks. I | recently filed #15241 for some of the tests that currently fail the sanity | checks. | | Ömer | | Simon Peyton Jones via ghc-devs , 18 Haz 2018 Pzt, | 11:35 tarihinde şunu yazdı: | > | > Ben | > | > We don’t really test with a DEBUG-enabled compiler. And yet, those | assertions are all there for a reason. | > | > In our CI infrastructure, I wonder if we might do a regression-test run on | at least one architecture with DEBUG on? | > | > e.g. https://ghc.haskell.org/trac/ghc/ticket/14904 | > | > Simon | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h | > askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01%7Csi | > monpj%40microsoft.com%7C61cb7f7d0544447dd64c08d5d4f7dfdd%7C72f988bf86f | > 141af91ab2d7cd011db47%7C1%7C0%7C636649083434417033&sdata=a%2FhgLYRxKsb | > 2Mh4RoH9KC1MuIqcf1gzv%2FGRyzc7Sh9w%3D&reserved=0 From brutallesale at gmail.com Mon Jun 18 09:03:56 2018 From: brutallesale at gmail.com (sasa bogicevic) Date: Mon, 18 Jun 2018 11:03:56 +0200 Subject: #10789 In-Reply-To: References: <59E92FEC-2D23-4283-ACE5-5AE1FFA8938F@gmail.com> <7182701F-7066-4443-9DD8-4BFABEFA28BB@cs.brynmawr.edu> Message-ID: <0A4128F4-6D79-4DA9-8E00-F750B4A1372B@gmail.com> Thanks! So probably not good issue for the first PR but I will not be intimidated by the complexity. Sasa > On 18 Jun 2018, at 10:41, Simon Peyton Jones wrote: > > Richard is right. > > Let’s attach this useful info to the ticket, rather than ghc-devs. I’ve done that for this exchange. 
> https://ghc.haskell.org/trac/ghc/ticket/10789#comment:18 > > Simon > > From: ghc-devs > On Behalf Of Richard Eisenberg > Sent: 18 June 2018 04:21 > To: sasa bogicevic > > Cc: ghc-devs at haskell.org > Subject: Re: #10789 > > > > > On Jun 17, 2018, at 10:01 AM, sasa bogicevic > wrote: > > So I guess that we need to check if one of the kinds of two types we are comparing defaults to * (or Type if you will) and then > add new warning that will be more descriptive as to why the failure happened. Maybe there is a way to check if what we are > comparing are actually type families so that would make the job easier I guess. > > I don't think the problem is particular to `Type` or defaulting. Instead, the problem is when one of the two mismatched types is a type family application where the type family has equations that pattern-match on an invisible parameter, and it's that invisible-parameter matching that's gone awry. Now that I think about it, detecting these particular conditions might be tricky: you might need to edit code in FamInstEnv that does type family equation lookup to return diagnostic information if a match fails. (I would look at reduceTyFamApp_maybe, and perhaps it can return something more interesting than Nothing in the failure case.) > > > > Richard Eisenberg offered some help on this but I am not sure how to grab hold of him so I'd appreciate any help I could get. > > > Just email! :) > > Thanks for looking into this! > Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jun 18 10:44:26 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Jun 2018 10:44:26 +0000 Subject: Strace In-Reply-To: References: Message-ID: Tamar OK, the first thing I tried was this * cd testsuite/tests/plugins * make Note no “-j”, so this is single threaded. Output is below. Yes, fewer failures, so it does seem that many of the failures I see in a generic test suite run are to do with concurrency. So that’s Bug #1. But even if Bug #1 was solved, I seem to get three failures that happen even in the absence of concurrent testing. This is Bug #2 (or maybe 2,3,4). Happy to run more commands! Simon .../tests/plugins$ make PYTHON="python3" "python3" ../../driver/runtests.py -e "ghc_compiler_always_flags='-dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output'" -e config.compiler_debugged=False -e ghc_with_native_codegen=1 -e config.have_vanilla=True -e config.have_dynamic=False -e config.have_profiling=False -e ghc_with_threaded_rts=1 -e ghc_with_dynamic_rts=0 -e config.have_interp=True -e config.unregisterised=False -e config.have_gdb=False -e config.have_readelf=True -e config.ghc_dynamic_by_default=False -e config.ghc_dynamic=False -e ghc_with_smp=1 -e ghc_with_llvm=0 -e windows=True -e darwin=False -e config.in_tree_compiler=True -e config.cleanup=True -e config.local=True --rootdir=. 
--config-file=../../config/ghc -e 'config.confdir="../../config"' -e 'config.platform="x86_64-unknown-mingw32"' -e 'config.os="mingw32"' -e 'config.arch="x86_64"' -e 'config.wordsize="64"' -e 'config.timeout=int() or config.timeout' -e 'config.exeext=".exe"' -e 'config.top="/c/code/HEAD/testsuite"' --config 'compiler="/c/code/HEAD/inplace/bin/ghc-stage2.exe"' --config 'ghc_pkg="/c/code/HEAD/inplace/bin/ghc-pkg.exe"' --config 'haddock=' --config 'hp2ps="/c/code/HEAD/inplace/bin/hp2ps.exe"' --config 'hpc="/c/code/HEAD/inplace/bin/hpc.exe"' --config 'gs="gs"' --config 'timeout_prog="../../timeout/install-inplace/bin/timeout.exe"' -e "config.stage=2" \ \ \ \ \ \ \ Timeout is 300 Found 1 .T files... Beginning test run at Mon Jun 18 11:24:57 2018 GMTST --- ./plugins09.run/plugins09.stdout.normalised 2018-06-18 11:27:47.971987800 +0100 +++ ./plugins09.run/plugins09.run.stdout.normalised 2018-06-18 11:27:47.972177400 +0100 @@ -1,9 +0,0 @@ -parsePlugin(a,b) -interfacePlugin: Prelude -interfacePlugin: GHC.Float -interfacePlugin: GHC.Base -interfacePlugin: GHC.Types -typeCheckPlugin (rn) -typeCheckPlugin (tc) -interfacePlugin: GHC.Integer.Type -interfacePlugin: GHC.Natural --- ./plugins11.run/plugins11.stdout.normalised 2018-06-18 11:28:40.307222400 +0100 +++ ./plugins11.run/plugins11.run.stdout.normalised 2018-06-18 11:28:40.307675400 +0100 @@ -1,9 +0,0 @@ -parsePlugin() -interfacePlugin: Prelude -interfacePlugin: GHC.Float -interfacePlugin: GHC.Base -interfacePlugin: GHC.Types -typeCheckPlugin (rn) -typeCheckPlugin (tc) -interfacePlugin: GHC.Integer.Type -interfacePlugin: GHC.Natural --- ./T11244.run/T11244.stderr.normalised 2018-06-18 11:32:21.142334600 +0100 +++ ./T11244.run/T11244.run.stderr.normalised 2018-06-18 11:32:21.145148100 +0100 @@ -1,4 +1,4 @@ -: Could not load module ‘RuleDefiningPlugin’ +: Could not find module ‘RuleDefiningPlugin’ It is a member of the hidden package ‘rule-defining-plugin-0.1’. You can run ‘:set -package rule-defining-plugin’ to expose it. (Note: this unloads all the modules in the current scope.) 
====> Scanning ./all.T =====> plugins01(normal) 1 of 25 [0, 0, 0] cd "./plugins01.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins01 TOP=/c/code/HEAD/testsuite cd "./plugins01.run" && $MAKE -s --no-print-directory plugins01 =====> plugins02(normal) 2 of 25 [0, 0, 0] cd "./plugins02.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins02 TOP=/c/code/HEAD/testsuite cd "./plugins02.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c plugins02.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -package-db simple-plugin/pkg.plugins02/local.package.conf -fplugin Simple.BadlyTypedPlugin -package simple-plugin -static =====> plugins03(normal) 3 of 25 [0, 0, 0] cd "./plugins03.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins03 TOP=/c/code/HEAD/testsuite cd "./plugins03.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c plugins03.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -package-db simple-plugin/pkg.plugins03/local.package.conf -fplugin Simple.NonExistentPlugin -package simple-plugin =====> plugins04(normal) 4 of 25 [0, 0, 0] cd "./plugins04.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" --make plugins04 -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -package ghc -fplugin HomePackagePlugin =====> plugins05(normal) 5 of 25 [0, 0, 0] cd "./plugins05.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" --make -o plugins05 plugins05 -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -package ghc cd "./plugins05.run" && ./plugins05 =====> plugins07(normal) 7 of 25 [0, 0, 0] cd "./plugins07.run" && $MAKE -s --no-print-directory -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite cd "./plugins07.run" && $MAKE -s --no-print-directory plugins07 =====> plugins08(normal) 8 of 25 [0, 0, 0] cd "./plugins08.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins08 TOP=/c/code/HEAD/testsuite cd "./plugins08.run" && $MAKE -s --no-print-directory plugins08 =====> plugins09(normal) 9 of 25 [0, 0, 0] cd "./plugins09.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins09 TOP=/c/code/HEAD/testsuite cd "./plugins09.run" && $MAKE -s --no-print-directory plugins09 Actual stdout output differs from expected: diff -uw "./plugins09.run/plugins09.stdout.normalised" "./plugins09.run/plugins09.run.stdout.normalised" *** unexpected failure for plugins09(normal) =====> plugins10(normal) 10 of 25 [0, 1, 0] cd "./plugins10.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins10 TOP=/c/code/HEAD/testsuite cd "./plugins10.run" && $MAKE -s --no-print-directory plugins10 =====> plugins11(normal) 11 of 25 [0, 1, 0] cd "./plugins11.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins11 TOP=/c/code/HEAD/testsuite cd "./plugins11.run" && $MAKE -s --no-print-directory plugins11 Actual stdout output differs from expected: diff -uw "./plugins11.run/plugins11.stdout.normalised" "./plugins11.run/plugins11.run.stdout.normalised" *** unexpected failure for plugins11(normal) 
=====> plugins12(normal) 12 of 25 [0, 2, 0] cd "./plugins12.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins12 TOP=/c/code/HEAD/testsuite cd "./plugins12.run" && $MAKE -s --no-print-directory plugins12 =====> plugins13(normal) 13 of 25 [0, 2, 0] cd "./plugins13.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins13 TOP=/c/code/HEAD/testsuite cd "./plugins13.run" && $MAKE -s --no-print-directory plugins13 =====> plugins14(normal) 14 of 25 [0, 2, 0] cd "./plugins14.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins14 TOP=/c/code/HEAD/testsuite cd "./plugins14.run" && $MAKE -s --no-print-directory plugins14 =====> plugins15(normal) 15 of 25 [0, 2, 0] cd "./plugins15.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins15 TOP=/c/code/HEAD/testsuite cd "./plugins15.run" && $MAKE -s --no-print-directory plugins15 =====> T10420(normal) 16 of 25 [0, 2, 0] cd "./T10420.run" && $MAKE -s --no-print-directory -C rule-defining-plugin package.T10420 TOP=/c/code/HEAD/testsuite cd "./T10420.run" && $MAKE -s --no-print-directory T10420 =====> T10294(normal) 17 of 25 [0, 2, 0] cd "./T10294.run" && $MAKE -s --no-print-directory -C annotation-plugin package.T10294 TOP=/c/code/HEAD/testsuite cd "./T10294.run" && $MAKE -s --no-print-directory T10294 =====> T10294a(normal) 18 of 25 [0, 2, 0] cd "./T10294a.run" && $MAKE -s --no-print-directory -C annotation-plugin package.T10294a TOP=/c/code/HEAD/testsuite cd "./T10294a.run" && $MAKE -s --no-print-directory T10294a =====> frontend01(normal) 19 of 25 [0, 2, 0] cd "./frontend01.run" && $MAKE -s --no-print-directory frontend01 =====> T11244(normal) 20 of 25 [0, 2, 0] cd "./T11244.run" && $MAKE -s --no-print-directory -C rule-defining-plugin package.T11244 TOP=/c/code/HEAD/testsuite cd "./T11244.run" && $MAKE -s --no-print-directory T11244 Actual stderr output differs from expected: diff -uw "./T11244.run/T11244.stderr.normalised" "./T11244.run/T11244.run.stderr.normalised" *** unexpected failure for T11244(normal) =====> T12567a(normal) 21 of 25 [0, 3, 0] cd "./T12567a.run" && $MAKE -s --no-print-directory -C simple-plugin package.T12567a TOP=/c/code/HEAD/testsuite cd "./T12567a.run" && $MAKE -s --no-print-directory T12567a =====> T14335(normal) 22 of 25 [0, 3, 0] cd "./T14335.run" && $MAKE -s --no-print-directory -C simple-plugin package.plugins01 TOP=/c/code/HEAD/testsuite cd "./T14335.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c T14335.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output -package-db simple-plugin/pkg.plugins01/local.package.conf -fplugin Simple.Plugin -fexternal-interpreter -package simple-plugin -static =====> plugin-recomp-pure(normal) 23 of 25 [0, 3, 0] cd "./plugin-recomp-pure.run" && $MAKE -s --no-print-directory -C plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite cd "./plugin-recomp-pure.run" && $MAKE -s --no-print-directory plugin-recomp-pure =====> plugin-recomp-impure(normal) 24 of 25 [0, 3, 0] cd "./plugin-recomp-impure.run" && $MAKE -s --no-print-directory -C plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite cd "./plugin-recomp-impure.run" && $MAKE -s --no-print-directory plugin-recomp-impure =====> plugin-recomp-flags(normal) 25 of 25 [0, 3, 0] cd "./plugin-recomp-flags.run" && $MAKE -s --no-print-directory -C plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite cd "./plugin-recomp-flags.run" && 
$MAKE -s --no-print-directory plugin-recomp-flags Unexpected results from: TEST="T11244 plugins09 plugins11" SUMMARY for test run started at Mon Jun 18 11:24:57 2018 GMTST 0:11:18 spent to go through 25 total tests, which gave rise to 35 test cases, of which 11 were skipped 0 had missing libraries 19 expected passes 2 expected failures 0 caused framework failures 0 caused framework warnings 0 unexpected passes 3 unexpected failures 0 unexpected stat failures Unexpected failures: plugins09.run plugins09 [bad stdout] (normal) plugins11.run plugins11 [bad stdout] (normal) T11244.run T11244 [bad stderr] (normal) make: *** [../../mk/test.mk:329: test] Error 1 .../tests/plugins$ From: Phyx Sent: 13 June 2018 20:47 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Strace Hi Simon, On Wed, Jun 13, 2018 at 5:24 PM, Simon Peyton Jones > wrote: OK – so maybe the root cause is a framework failure – and indeed for the last few weeks I’ve seen Framework failures: plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) I have just learned to live with these failures, because I knew you were working on making things better. But it sounds as if they are still taking place. The commit I made should have reduced the amount of failing tests to 0. framework failures are always quite unusual. So: * Yes, please make it not happen by default I've removed the code, if you update it should be gone. It was there and on by default because I was trying to debug failures on Harbormaster, I realized a switch isn't very useful as I won't be able to toggle it for Harbormaster anyway. * * If you don’t get these framework failures, can we work together to resolve them? These don't happen for me nor on Harbormaster, try picking a test, e.g T10420 run only that test to make sure it's not a threading issue: make TEST=T10420 test -C testsuite/tests If it still gives a framework error then do at the top level make VERBOSE=3 TEST=T10420 test -C testsuite/tests once it runs, the output should contain the command it ran as a pre_cmd, and the stdout and stderr from the pre_cmd output. Could you then send the error? if it doesn't show any of this, try make CLEANP=0 VERBOSE=3 TEST= T10420 test -C testsuite/tests --trace and copy and paste the pre_cmd command, which should just replay the action it did. Cheers, Tamar Thanks Simon From: Phyx > Sent: 13 June 2018 17:19 To: Simon Peyton Jones > Cc: ghc-devs at haskell.org Subject: Re: Strace Hi Simon, The strace is only supposed to run when the normal test pre_cmd fails. If it's running that often it means your tests are all failing during pre_cmd with a framework failure https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/testlib.py But maybe I shouldn't turn this on my default. I'll pramaterize it when I get home. Tamar. On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones > wrote: Tamar I’m getting megabytes of output from ‘sh validate’ on windows. 
It looks like this 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 291 152036 [main] sh 2880 faccessat: returning 0 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: wfres 0, wores 1, bytes 7 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: normal write, 7 bytes ispipe() 1 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: normal read, 7 bytes ispipe() 1 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary mode 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, mask_bits 0 but with hundreds of thousands of lines. (I have not counted) I believe that it may be the result of this line, earlier in the log cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test spaces/./plugins/plugins07.run" && strace $MAKE -s --no-print-directory -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# Note the strace. That in turn was added in your commit commit 60fb2b2160aa16194b74262f4df8fad5af171b0f Author: Tamar Christina > Date: Mon May 28 19:34:11 2018 +0100 Clean up Windows testsuite failures Summary: Another round and attempt at getting these down to 0. Could you perhaps have made a mistake here? Currently validate is unusable. Thanks! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Mon Jun 18 12:43:05 2018 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Mon, 18 Jun 2018 14:43:05 +0200 Subject: Functor, Foldable and Traversable for Expr Message-ID: Hi everyone, I'm repeatedly wondering why there are no `Functor`, `Foldable` and `Traversable` instances for `Expr`. Is this just by lack of motive? I could help there: I was looking for a function that would tell me if an expression mentions `makeStatic`. After spending some minutes searching in the code base, I decided to roll my own thing in `CoreUtils`. I really couldn't think about a good name, so I settled for `anyReferenceMatching :: (b -> Bool) -> Expr b -> Bool` and realized that I could generalize the function to `foldMapExpr :: Monoid m => (b -> m) -> Expr b -> m`. Occasionally this need pops up and I really want to avoid writing my own traversals over the syntax tree. So, would anyone object to a patch implementing these instances? Thanks Sebastian -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Jun 18 13:36:42 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 18 Jun 2018 09:36:42 -0400 Subject: DEBUG-on In-Reply-To: References: Message-ID: <87lgbcqkrf.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Ben Hi Simon, > We don't really test with a DEBUG-enabled compiler. And yet, those > assertions are all there for a reason. In our CI infrastructure, I > wonder if we might do a regression-test run on at least one > architecture with DEBUG on? e.g. > https://ghc.haskell.org/trac/ghc/ticket/14904 We actually now do precisely this. Since a few weeks ago we have a nightly `validate --slow` (which enables -DDEBUG in the stage 2 compiler) job that runs on the CircleCI infrastructure. Thanks to Alp's work it even appears to pass. 
However, in light of #14904 I wonder if we are failing to catch an exit code, since it sounds like it should have been failing.

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From sgraf1337 at gmail.com  Mon Jun 18 14:28:57 2018
From: sgraf1337 at gmail.com (Sebastian Graf)
Date: Mon, 18 Jun 2018 16:28:57 +0200
Subject: Functor, Foldable and Traversable for Expr
In-Reply-To: 
References: 
Message-ID: 

OK, so just deriving the instances doesn't yield the expected behavior, because the `Var` case explicitly mentions `Id`s instead of the type parameter `b`. Even if that would be changed, it's not easy to pin down over which parts of the syntax tree we should 'map'. Should we include binding sites of local variables? I'm inclined to say No, but only because I have my concrete use case in mind.

It's probably best to have non-derived, non-typeclass functions `foldMapVars :: Monoid m => (Id -> m) -> Expr b -> m`, or a variant where the mapping function also gets supplied a value of `data NameSite = Lam | Let | VarRef` (for lack of a better name).

On Mon, 18 Jun 2018 at 14:43, Sebastian Graf <sgraf1337 at gmail.com> wrote:

> Hi everyone,
>
> I'm repeatedly wondering why there are no `Functor`, `Foldable` and
> `Traversable` instances for `Expr`.
>
> Is this just by lack of motive?
> I could help there: I was looking for a function that would tell me if an
> expression mentions `makeStatic`. After spending some minutes searching in
> the code base, I decided to roll my own thing in `CoreUtils`.
> I really couldn't think about a good name, so I settled for
> `anyReferenceMatching :: (b -> Bool) -> Expr b -> Bool` and realized that I
> could generalize the function to `foldMapExpr :: Monoid m => (b -> m) ->
> Expr b -> m`.
>
> Occasionally this need pops up and I really want to avoid writing my own
> traversals over the syntax tree. So, would anyone object to a patch
> implementing these instances?
>
> Thanks
> Sebastian
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ben at well-typed.com  Mon Jun 18 15:38:38 2018
From: ben at well-typed.com (Ben Gamari)
Date: Mon, 18 Jun 2018 11:38:38 -0400
Subject: Functor, Foldable and Traversable for Expr
In-Reply-To: 
References: 
Message-ID: <87d0woqf46.fsf@smart-cactus.org>

Sebastian Graf writes:

> OK, so just deriving the instances doesn't yield the expected behavior,
> because the `Var` case explicitly mentions `Id`s instead of the type
> parameter `b`.
> Even if that would be changed, it's not easy to pin down over which parts
> of the syntax tree we should 'map'.
> Should we include binding sites of local variables? I'm inclined to say No,
> but only because I have my concrete use case in mind.
>
> It's probably best to have non-derived, non-typeclass functions
> `foldMapVars :: Monoid m => (Id -> m) -> Expr b -> m`, or a variant where
> the mapping function also gets supplied a value of `data NameSite = Lam |
> Let | VarRef` (for lack of a better name).
>
Agreed, I think there is enough subtlety here that it's best to be explicit about which things in particular you want to traverse.

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
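(A minimal sketch of the explicit, non-typeclass traversal discussed above, using Sebastian's proposed name and signature. It is written against CoreSyn as it looks around this time; treat the exact constructor list and module names as assumptions rather than a patch. It folds over variable occurrences only and deliberately ignores the binders in Lam, Let and Case, which is one concrete answer to the "which parts do we map over" question.)

module FoldMapVars (foldMapVars) where

import CoreSyn
import Id (Id)

-- Fold a monoidal function over every occurrence of a variable (the Var
-- case only).  Binders are skipped on purpose; widening this to binding
-- sites, or tagging each call with a NameSite, is easy to bolt on.
foldMapVars :: Monoid m => (Id -> m) -> Expr b -> m
foldMapVars f = go
  where
    go (Var v)           = f v
    go (Lit _)           = mempty
    go (App e a)         = go e <> go a
    go (Lam _ e)         = go e
    go (Let b e)         = goBind b <> go e
    go (Case e _ _ alts) = go e <> foldMap goAlt alts
    go (Cast e _)        = go e
    go (Tick _ e)        = go e
    go (Type _)          = mempty
    go (Coercion _)      = mempty

    goBind (NonRec _ rhs) = go rhs
    goBind (Rec pairs)    = foldMap (go . snd) pairs

    goAlt (_, _, rhs)     = go rhs

(The `makeStatic` check then becomes `getAny . foldMapVars (Any . isTheIdYouCareAbout)` for whatever predicate one has at hand; the predicate name here is a placeholder.)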
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From lonetiger at gmail.com Mon Jun 18 19:32:07 2018 From: lonetiger at gmail.com (Phyx) Date: Mon, 18 Jun 2018 20:32:07 +0100 Subject: Strace In-Reply-To: References: Message-ID: Hi Simon, T11244 and the plugin output changes have been done after my commit to fix the tests, hence the new std issues. Those have to be fixed up again... I'm curious about the threading issues though. One way to find out is by running the entire testsuite with -j again but with VERBOSE=3 and piping to a file. This will show the underlying tool's output instead of throwing it away. This will of course generate lots of data but shouldn't be at the level of the strace output, but should give an indication of what's going on. Cheers, Tamar On Mon, Jun 18, 2018, 11:44 Simon Peyton Jones wrote: > Tamar > > > > OK, the first thing I tried was this > > - cd testsuite/tests/plugins > - make > > Note no “-j”, so this is single threaded. > > > > Output is below. Yes, fewer failures, so it does seem that many of the > failures I see in a generic test suite run are to do with concurrency. So > that’s Bug #1. > > > > But even if Bug #1 was solved, I seem to get three failures that happen > even in the absence of concurrent testing. This is Bug #2 (or maybe 2,3,4). > > > > Happy to run more commands! > > > > Simon > > > > *.../tests/plugins$ make* > > *PYTHON="python3" "python3" ../../driver/runtests.py -e > "ghc_compiler_always_flags='-dcore-lint -dcmm-lint -no-user-package-db > -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output'" > -e config.compiler_debugged=False -e ghc_with_native_codegen=1 -e > config.have_vanilla=True -e config.have_dynamic=False -e > config.have_profiling=False -e ghc_with_threaded_rts=1 -e > ghc_with_dynamic_rts=0 -e config.have_interp=True -e > config.unregisterised=False -e config.have_gdb=False -e > config.have_readelf=True -e config.ghc_dynamic_by_default=False -e > config.ghc_dynamic=False -e ghc_with_smp=1 -e ghc_with_llvm=0 -e > windows=True -e darwin=False -e config.in_tree_compiler=True -e > config.cleanup=True -e config.local=True --rootdir=. 
> --config-file=../../config/ghc -e 'config.confdir="../../config"' -e > 'config.platform="x86_64-unknown-mingw32"' -e 'config.os="mingw32"' -e > 'config.arch="x86_64"' -e 'config.wordsize="64"' -e 'config.timeout=int() > or config.timeout' -e 'config.exeext=".exe"' -e > 'config.top="/c/code/HEAD/testsuite"' --config > 'compiler="/c/code/HEAD/inplace/bin/ghc-stage2.exe"' --config > 'ghc_pkg="/c/code/HEAD/inplace/bin/ghc-pkg.exe"' --config 'haddock=' > --config 'hp2ps="/c/code/HEAD/inplace/bin/hp2ps.exe"' --config > 'hpc="/c/code/HEAD/inplace/bin/hpc.exe"' --config 'gs="gs"' --config > 'timeout_prog="../../timeout/install-inplace/bin/timeout.exe"' -e > "config.stage=2" \* > > * \* > > * \* > > * \* > > * \* > > * \* > > * \* > > > > *Timeout is 300* > > *Found 1 .T files...* > > *Beginning test run at Mon Jun 18 11:24:57 2018 GMTST* > > *--- ./plugins09.run/plugins09.stdout.normalised 2018-06-18 > 11:27:47.971987800 +0100* > > *+++ ./plugins09.run/plugins09.run.stdout.normalised 2018-06-18 > 11:27:47.972177400 +0100* > > *@@ -1,9 +0,0 @@* > > *-parsePlugin(a,b)* > > *-interfacePlugin: Prelude* > > *-interfacePlugin: GHC.Float* > > *-interfacePlugin: GHC.Base* > > *-interfacePlugin: GHC.Types* > > *-typeCheckPlugin (rn)* > > *-typeCheckPlugin (tc)* > > *-interfacePlugin: GHC.Integer.Type* > > *-interfacePlugin: GHC.Natural* > > *--- ./plugins11.run/plugins11.stdout.normalised 2018-06-18 > 11:28:40.307222400 +0100* > > *+++ ./plugins11.run/plugins11.run.stdout.normalised 2018-06-18 > 11:28:40.307675400 +0100* > > *@@ -1,9 +0,0 @@* > > *-parsePlugin()* > > *-interfacePlugin: Prelude* > > *-interfacePlugin: GHC.Float* > > *-interfacePlugin: GHC.Base* > > *-interfacePlugin: GHC.Types* > > *-typeCheckPlugin (rn)* > > *-typeCheckPlugin (tc)* > > *-interfacePlugin: GHC.Integer.Type* > > *-interfacePlugin: GHC.Natural* > > *--- ./T11244.run/T11244.stderr.normalised 2018-06-18 > 11:32:21.142334600 +0100* > > *+++ ./T11244.run/T11244.run.stderr.normalised 2018-06-18 > 11:32:21.145148100 +0100* > > *@@ -1,4 +1,4 @@* > > *-: Could not load module ‘RuleDefiningPlugin’* > > *+: Could not find module ‘RuleDefiningPlugin’* > > *It is a member of the hidden package ‘rule-defining-plugin-0.1’.* > > *You can run ‘:set -package rule-defining-plugin’ to expose it.* > > *(Note: this unloads all the modules in the current scope.)* > > *====> Scanning ./all.T* > > *=====> plugins01(normal) 1 of 25 [0, 0, 0]* > > *cd "./plugins01.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins01 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins01.run" && $MAKE -s --no-print-directory plugins01 * > > *=====> plugins02(normal) 2 of 25 [0, 0, 0]* > > *cd "./plugins02.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins02 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins02.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c > plugins02.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts > -fno-warn-missed-specialisations -fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output > -package-db simple-plugin/pkg.plugins02/local.package.conf -fplugin > Simple.BadlyTypedPlugin -package simple-plugin -static* > > *=====> plugins03(normal) 3 of 25 [0, 0, 0]* > > *cd "./plugins03.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins03 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins03.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c > plugins03.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts > -fno-warn-missed-specialisations 
-fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output > -package-db simple-plugin/pkg.plugins03/local.package.conf -fplugin > Simple.NonExistentPlugin -package simple-plugin* > > *=====> plugins04(normal) 4 of 25 [0, 0, 0]* > > *cd "./plugins04.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" > --make plugins04 -dcore-lint -dcmm-lint -no-user-package-db -rtsopts > -fno-warn-missed-specialisations -fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output > -package ghc -fplugin HomePackagePlugin* > > *=====> plugins05(normal) 5 of 25 [0, 0, 0]* > > *cd "./plugins05.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" --make > -o plugins05 plugins05 -dcore-lint -dcmm-lint -no-user-package-db -rtsopts > -fno-warn-missed-specialisations -fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output > -package ghc* > > *cd "./plugins05.run" && ./plugins05 * > > *=====> plugins07(normal) 7 of 25 [0, 0, 0]* > > *cd "./plugins07.run" && $MAKE -s --no-print-directory -C > rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins07.run" && $MAKE -s --no-print-directory plugins07 * > > *=====> plugins08(normal) 8 of 25 [0, 0, 0]* > > *cd "./plugins08.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins08 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins08.run" && $MAKE -s --no-print-directory plugins08 * > > *=====> plugins09(normal) 9 of 25 [0, 0, 0]* > > *cd "./plugins09.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins09 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins09.run" && $MAKE -s --no-print-directory plugins09 * > > *Actual stdout output differs from expected:* > > *diff -uw "./plugins09.run/plugins09.stdout.normalised" > "./plugins09.run/plugins09.run.stdout.normalised"* > > **** unexpected failure for plugins09(normal)* > > *=====> plugins10(normal) 10 of 25 [0, 1, 0]* > > *cd "./plugins10.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins10 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins10.run" && $MAKE -s --no-print-directory plugins10 * > > *=====> plugins11(normal) 11 of 25 [0, 1, 0]* > > *cd "./plugins11.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins11 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins11.run" && $MAKE -s --no-print-directory plugins11 * > > *Actual stdout output differs from expected:* > > *diff -uw "./plugins11.run/plugins11.stdout.normalised" > "./plugins11.run/plugins11.run.stdout.normalised"* > > **** unexpected failure for plugins11(normal)* > > *=====> plugins12(normal) 12 of 25 [0, 2, 0]* > > *cd "./plugins12.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins12 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins12.run" && $MAKE -s --no-print-directory plugins12 * > > *=====> plugins13(normal) 13 of 25 [0, 2, 0]* > > *cd "./plugins13.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins13 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins13.run" && $MAKE -s --no-print-directory plugins13 * > > *=====> plugins14(normal) 14 of 25 [0, 2, 0]* > > *cd "./plugins14.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins14 TOP=/c/code/HEAD/testsuite* > > *cd "./plugins14.run" && $MAKE -s --no-print-directory plugins14 * > > *=====> plugins15(normal) 15 of 25 [0, 2, 0]* > > *cd "./plugins15.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins15 TOP=/c/code/HEAD/testsuite* > > *cd 
"./plugins15.run" && $MAKE -s --no-print-directory plugins15 * > > *=====> T10420(normal) 16 of 25 [0, 2, 0]* > > *cd "./T10420.run" && $MAKE -s --no-print-directory -C > rule-defining-plugin package.T10420 TOP=/c/code/HEAD/testsuite* > > *cd "./T10420.run" && $MAKE -s --no-print-directory T10420 * > > *=====> T10294(normal) 17 of 25 [0, 2, 0]* > > *cd "./T10294.run" && $MAKE -s --no-print-directory -C annotation-plugin > package.T10294 TOP=/c/code/HEAD/testsuite* > > *cd "./T10294.run" && $MAKE -s --no-print-directory T10294 * > > *=====> T10294a(normal) 18 of 25 [0, 2, 0]* > > *cd "./T10294a.run" && $MAKE -s --no-print-directory -C annotation-plugin > package.T10294a TOP=/c/code/HEAD/testsuite* > > *cd "./T10294a.run" && $MAKE -s --no-print-directory T10294a * > > *=====> frontend01(normal) 19 of 25 [0, 2, 0]* > > *cd "./frontend01.run" && $MAKE -s --no-print-directory frontend01 * > > *=====> T11244(normal) 20 of 25 [0, 2, 0]* > > *cd "./T11244.run" && $MAKE -s --no-print-directory -C > rule-defining-plugin package.T11244 TOP=/c/code/HEAD/testsuite* > > *cd "./T11244.run" && $MAKE -s --no-print-directory T11244 * > > *Actual stderr output differs from expected:* > > *diff -uw "./T11244.run/T11244.stderr.normalised" > "./T11244.run/T11244.run.stderr.normalised"* > > **** unexpected failure for T11244(normal)* > > *=====> T12567a(normal) 21 of 25 [0, 3, 0]* > > *cd "./T12567a.run" && $MAKE -s --no-print-directory -C simple-plugin > package.T12567a TOP=/c/code/HEAD/testsuite* > > *cd "./T12567a.run" && $MAKE -s --no-print-directory T12567a * > > *=====> T14335(normal) 22 of 25 [0, 3, 0]* > > *cd "./T14335.run" && $MAKE -s --no-print-directory -C simple-plugin > package.plugins01 TOP=/c/code/HEAD/testsuite* > > *cd "./T14335.run" && "/c/code/HEAD/inplace/bin/ghc-stage2.exe" -c > T14335.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts > -fno-warn-missed-specialisations -fshow-warning-groups > -fdiagnostics-color=never -fno-diagnostics-show-caret -dno-debug-output > -package-db simple-plugin/pkg.plugins01/local.package.conf -fplugin > Simple.Plugin -fexternal-interpreter -package simple-plugin -static* > > *=====> plugin-recomp-pure(normal) 23 of 25 [0, 3, 0]* > > *cd "./plugin-recomp-pure.run" && $MAKE -s --no-print-directory -C > plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite* > > *cd "./plugin-recomp-pure.run" && $MAKE -s --no-print-directory > plugin-recomp-pure * > > *=====> plugin-recomp-impure(normal) 24 of 25 [0, 3, 0]* > > *cd "./plugin-recomp-impure.run" && $MAKE -s --no-print-directory -C > plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite* > > *cd "./plugin-recomp-impure.run" && $MAKE -s --no-print-directory > plugin-recomp-impure * > > *=====> plugin-recomp-flags(normal) 25 of 25 [0, 3, 0]* > > *cd "./plugin-recomp-flags.run" && $MAKE -s --no-print-directory -C > plugin-recomp package.plugins01 TOP=/c/code/HEAD/testsuite* > > *cd "./plugin-recomp-flags.run" && $MAKE -s --no-print-directory > plugin-recomp-flags * > > > > *Unexpected results from:* > > *TEST="T11244 plugins09 plugins11"* > > > > *SUMMARY for test run started at Mon Jun 18 11:24:57 2018 GMTST* > > *0:11:18 spent to go through* > > * 25 total tests, which gave rise to* > > * 35 test cases, of which* > > * 11 were skipped* > > > > * 0 had missing libraries* > > * 19 expected passes* > > * 2 expected failures* > > > > * 0 caused framework failures* > > * 0 caused framework warnings* > > * 0 unexpected passes* > > * 3 unexpected failures* > > * 0 unexpected stat failures* > > > > *Unexpected 
failures:* > > * plugins09.run plugins09 [bad stdout] (normal)* > > * plugins11.run plugins11 [bad stdout] (normal)* > > * T11244.run T11244 [bad stderr] (normal)* > > > > *make: *** [../../mk/test.mk:329 : test] Error 1* > > *.../tests/plugins$* > > > > *From:* Phyx > *Sent:* 13 June 2018 20:47 > > > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Strace > > > > > > Hi Simon, > > > > On Wed, Jun 13, 2018 at 5:24 PM, Simon Peyton Jones > wrote: > > OK – so maybe the root cause is a framework failure – and indeed for the > last few weeks I’ve seen > > Framework failures: > > plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) > > plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) > > plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) > > > > I have just learned to live with these failures, because I knew you were > working on making things better. But it sounds as if they are still taking > place. > > > > The commit I made should have reduced the amount of failing tests to 0. > framework failures are always quite unusual. > > > > > > So: > > - Yes, please make it not happen by default > > I've removed the code, if you update it should be gone. It was there and > on by default because I was trying to debug failures on Harbormaster, I > > realized a switch isn't very useful as I won't be able to toggle it for > Harbormaster anyway. > > > > > - > - If you don’t get these framework failures, can we work together to > resolve them? > > These don't happen for me nor on Harbormaster, try picking a test, e.g > T10420 > > > > run only that test to make sure it's not a threading issue: > > > > make TEST=T10420 test -C testsuite/tests > > > > If it still gives a framework error then do at the top level > > > > make VERBOSE=3 TEST=T10420 test -C testsuite/tests > > > > once it runs, the output should contain the command it ran as a pre_cmd, > and the stdout and > > stderr from the pre_cmd output. Could you then send the error? > > > > if it doesn't show any of this, try > > > > make CLEANP=0 VERBOSE=3 TEST= T10420 test -C testsuite/tests --trace > > > > and copy and paste the pre_cmd command, which should just replay the > action it did. > > > > > > Cheers, > > Tamar > > > > > > Thanks > > > > Simon > > > > *From:* Phyx > *Sent:* 13 June 2018 17:19 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Strace > > > > Hi Simon, > > > > The strace is only supposed to run when the normal test pre_cmd fails. > > If it's running that often it means your tests are all failing during > pre_cmd with a framework failure > > https://git.haskell.org/ghc.git/blobdiff/4778cba1dbb6adf495930322d7f9e9 > db0af60d8f..60fb2b2160aa16194b74262f4df8fad5af171b0f:/testsuite/driver/ > testlib.py > > > > But maybe I shouldn't turn this on my default. I'll pramaterize it when I > get home. > > > > Tamar. > > > > On Wed, Jun 13, 2018, 17:09 Simon Peyton Jones > wrote: > > Tamar > > I’m getting *megabytes* of output from ‘sh validate’ on windows. 
It > looks like this > > 629 151745 [main] sh 2880 fhandler_base::fhaccess: returning 0 > > 291 152036 [main] sh 2880 faccessat: returning 0 > > 7757 159793 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > wfres 0, wores 1, bytes 7 > > 179457 1608947 [main] make 11484 fhandler_base_overlapped::wait_overlapped: > wfres 0, wores 1, bytes 7 > > 99 159892 [main] sh 2880 fhandler_base_overlapped::wait_overlapped: > normal write, 7 bytes ispipe() 1 > > 180 1609127 [main] make 11484 fhandler_base_overlapped::wait_overlapped: > normal read, 7 bytes ispipe() 1 > > 139 160031 [main] sh 2880 write: 7 = write(1, 0x6000396A0, 7) > > 142 1609269 [main] make 11484 fhandler_base::read: returning 7, binary > mode > > 139 1609408 [main] make 11484 read: 7 = read(5, 0x60005B4B0, 7) > > 136 1609544 [main] make 11484 read: read(5, 0x60005B4B7, 193) blocking > > 4693 164724 [main] sh 2880 set_signal_mask: setmask 0, newmask 80000, > mask_bits 0 > > but with hundreds of thousands of lines. (I have not counted) > > I believe that it may be the result of this line, earlier in the log > > cd "/c/Users/simonpj/AppData/Local/Temp/ghctest-8fa9s6rk/test > spaces/./plugins/plugins07.run" && *strace* $MAKE -s --no-print-directory > -C rule-defining-plugin package.plugins07 TOP=/c/code/HEAD/testsuite# > > Note the strace. > > That in turn was added in your commit > > commit 60fb2b2160aa16194b74262f4df8fad5af171b0f > > Author: Tamar Christina > > Date: Mon May 28 19:34:11 2018 +0100 > > > > Clean up Windows testsuite failures > > > > Summary: > > Another round and attempt at getting these down to 0. > > Could you perhaps have made a mistake here? Currently validate is > unusable. > > Thanks! > > Simon > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Jun 18 22:49:48 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 18 Jun 2018 18:49:48 -0400 Subject: Use of -dead_strip_dylibs by default makes the -framework flag appear broken. In-Reply-To: References: Message-ID: <87y3fbpv5l.fsf@smart-cactus.org> Travis Whitaker writes: > Hello Haskell Friends, > Hi Travis, This behavior was introduced in https://phabricator.haskell.org/rGHCb592bd98ff25730bbe3c13d6f62a427df8c78e28 to mitigate macOS's very restrictive linker command limit (which limits the number of direct dependencies that an object may have; see #14444). Perhaps it would help if we omitted -dead_strip_dylibs when doing the final link of an executable? This would allow the user to specify the -framework flags during the final link, (presumably) ensuring that they are linked in untouched. I'm sure Moritz will have insightful to add here. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From moritz.angermann at gmail.com Tue Jun 19 01:50:37 2018 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Tue, 19 Jun 2018 09:50:37 +0800 Subject: Use of -dead_strip_dylibs by default makes the -framework flag appear broken. In-Reply-To: <87y3fbpv5l.fsf@smart-cactus.org> References: <87y3fbpv5l.fsf@smart-cactus.org> Message-ID: <3EDB28C4-8FF2-492D-9E24-C98020D90736@gmail.com> Hi, I haven't found a quick fix, which is a bit annoying. A few observations: As we never reference anything from Foundation, we strip it away, and a runtime test of Foundation classes returns false. That does make sense in the dead_strip setting. 
But as Travis pointed out is contrary to what the explicit providing of the framework flag would suggest. This is however not only limited to frameworks we will strip *any* dynamic library that is not referenced. As such it would also mean that linking -lxyz, without referencing any symbol from xyz, would drop it. And any runtime symbol lookup for symbols from -xyz *will* also fail. A rather low-impact fix might be to just add a ghc flag to disable the -dead_strip_dylibs passing. That would result in either linking any library provided, used or not, and eventually run into the LOAD COMMAND SIZE LIMIT issues if the flag is enabled. I would still suggest to do the dead_strip_dylibs by default. When using static libraries you might need to pass -load_all to prevent dead_stripping. In general though, when trying to lookup symbols that are not within the set of referenced libraries at compile time, one might want to dynamically load them at runtime anyway (which is what ghci is doing, I believe). A rather radical alternative would be to try the direct dependencies only path and try to make the indirect libraries work. I don't recall why I couldn't get them to work the last time I tried. I might have just been missing a single linker flag. Cheers, Moritz > On Jun 19, 2018, at 6:49 AM, Ben Gamari wrote: > > Travis Whitaker writes: > >> Hello Haskell Friends, >> > Hi Travis, > > This behavior was introduced in > https://phabricator.haskell.org/rGHCb592bd98ff25730bbe3c13d6f62a427df8c78e28 > to mitigate macOS's very restrictive linker command limit (which limits > the number of direct dependencies that an object may have; see #14444). > > Perhaps it would help if we omitted -dead_strip_dylibs when doing the > final link of an executable? This would allow the user to specify the > -framework flags during the final link, (presumably) ensuring that they > are linked in untouched. > > I'm sure Moritz will have insightful to add here. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From ryan.gl.scott at gmail.com Tue Jun 19 13:53:20 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Tue, 19 Jun 2018 09:53:20 -0400 Subject: How do I make a constraint tuple in Core? Message-ID: I'm currently working on some code in which I need to produce a Core Type that mentions a constraint tuple. I thought that there must surely exist some way to construct a constraint tuple using the GHC API, but to my astonishment, I could not find anything. The closest thing I found was mk_tuple [1], which gives you the ability to make boxed and unboxed tuples, but not constraint tuples. I then thought to myself, "But wait, PartialTypeSignatures has to create constraint tuples, right? How does that part of the code work?" To my horror, I discovered that PartialTypeSignatures actually creates *boxed* tuples (see mk_ctuple here [2]), then hackily treats them as constraint tuples, as explained in Note [Extra-constraint holes in partial type signatures] [3]. I tried reading that Note, but I couldn't follow the details. Is there a simpler way to create a constraint tuple that I'm not aware of? Ryan S. 
----- [1] http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/prelude/TysWiredIn.hs#l810 [2] http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcBinds.hs#l1036 [3] http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcHsType.hs#l2367 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jun 19 14:07:14 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 19 Jun 2018 15:07:14 +0100 Subject: How do I make a constraint tuple in Core? In-Reply-To: References: Message-ID: How about `tc_tuple`? On Tue, Jun 19, 2018 at 2:53 PM, Ryan Scott wrote: > I'm currently working on some code in which I need to produce a Core Type > that mentions a constraint tuple. I thought that there must surely exist > some way to construct a constraint tuple using the GHC API, but to my > astonishment, I could not find anything. The closest thing I found was > mk_tuple [1], which gives you the ability to make boxed and unboxed tuples, > but not constraint tuples. > > I then thought to myself, "But wait, PartialTypeSignatures has to create > constraint tuples, right? How does that part of the code work?" To my > horror, I discovered that PartialTypeSignatures actually creates *boxed* > tuples (see mk_ctuple here [2]), then hackily treats them as constraint > tuples, as explained in Note [Extra-constraint holes in partial type > signatures] [3]. I tried reading that Note, but I couldn't follow the > details. > > Is there a simpler way to create a constraint tuple that I'm not aware of? > > Ryan S. > ----- > [1] > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/prelude/TysWiredIn.hs#l810 > [2] > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcBinds.hs#l1036 > [3] > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcHsType.hs#l2367 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ryan.gl.scott at gmail.com Tue Jun 19 15:48:08 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Tue, 19 Jun 2018 11:48:08 -0400 Subject: How do I make a constraint tuple in Core? In-Reply-To: References: Message-ID: Unfortunately, I can't directly use tc_tuple, since I don't have access to the Haskell AST forms I need to make that work (I'm constructing everything directly in Core). On the other hand, the implementation of tc_tuple does have one nugget of wisdom in that it reveals how GHC creates a constraint tuple *type constructor*. Namely, `tcLookupTyCon (cTupleTyConName arity)` for some `arity`. That's still a bit inconvenient, as `tcLookupTyCon` forces me to work in a monadic context (whereas the code I've been working on has been pure up to this point). Is there not a pure way to retrieve a constraint tuple type constructor? Ryan S. On Tue, Jun 19, 2018 at 10:07 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > How about `tc_tuple`? > > On Tue, Jun 19, 2018 at 2:53 PM, Ryan Scott > wrote: > > I'm currently working on some code in which I need to produce a Core Type > > that mentions a constraint tuple. I thought that there must surely exist > > some way to construct a constraint tuple using the GHC API, but to my > > astonishment, I could not find anything. 
The closest thing I found was > > mk_tuple [1], which gives you the ability to make boxed and unboxed > tuples, > > but not constraint tuples. > > > > I then thought to myself, "But wait, PartialTypeSignatures has to create > > constraint tuples, right? How does that part of the code work?" To my > > horror, I discovered that PartialTypeSignatures actually creates *boxed* > > tuples (see mk_ctuple here [2]), then hackily treats them as constraint > > tuples, as explained in Note [Extra-constraint holes in partial type > > signatures] [3]. I tried reading that Note, but I couldn't follow the > > details. > > > > Is there a simpler way to create a constraint tuple that I'm not aware > of? > > > > Ryan S. > > ----- > > [1] > > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/prelude/TysWiredIn.hs#l810 > > [2] > > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcBinds.hs#l1036 > > [3] > > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcHsType.hs#l2367 > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jun 19 17:09:32 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 19 Jun 2018 18:09:32 +0100 Subject: How do I make a constraint tuple in Core? In-Reply-To: References: Message-ID: Can you be a bit more precise about what you are doing? Constructing core like this is quite hairy. The "tuple" part doesn't really exist in core, a constraint tuple is curried. So foo :: (C1 a, C2 a) => ... desugars to `foo = /\ a . \ $dC1 . \$dC2 -> ...`. Cheers, Matt On Tue, Jun 19, 2018 at 4:48 PM, Ryan Scott wrote: > Unfortunately, I can't directly use tc_tuple, since I don't have access to > the Haskell AST forms I need to make that work (I'm constructing everything > directly in Core). On the other hand, the implementation of tc_tuple does > have one nugget of wisdom in that it reveals how GHC creates a constraint > tuple *type constructor*. Namely, `tcLookupTyCon (cTupleTyConName arity)` > for some `arity`. > > That's still a bit inconvenient, as `tcLookupTyCon` forces me to work in a > monadic context (whereas the code I've been working on has been pure up to > this point). Is there not a pure way to retrieve a constraint tuple type > constructor? > > Ryan S. > > On Tue, Jun 19, 2018 at 10:07 AM Matthew Pickering > wrote: >> >> How about `tc_tuple`? >> >> On Tue, Jun 19, 2018 at 2:53 PM, Ryan Scott >> wrote: >> > I'm currently working on some code in which I need to produce a Core >> > Type >> > that mentions a constraint tuple. I thought that there must surely exist >> > some way to construct a constraint tuple using the GHC API, but to my >> > astonishment, I could not find anything. The closest thing I found was >> > mk_tuple [1], which gives you the ability to make boxed and unboxed >> > tuples, >> > but not constraint tuples. >> > >> > I then thought to myself, "But wait, PartialTypeSignatures has to create >> > constraint tuples, right? How does that part of the code work?" 
To my >> > horror, I discovered that PartialTypeSignatures actually creates *boxed* >> > tuples (see mk_ctuple here [2]), then hackily treats them as constraint >> > tuples, as explained in Note [Extra-constraint holes in partial type >> > signatures] [3]. I tried reading that Note, but I couldn't follow the >> > details. >> > >> > Is there a simpler way to create a constraint tuple that I'm not aware >> > of? >> > >> > Ryan S. >> > ----- >> > [1] >> > >> > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/prelude/TysWiredIn.hs#l810 >> > [2] >> > >> > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcBinds.hs#l1036 >> > [3] >> > >> > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcHsType.hs#l2367 >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > From ryan.gl.scott at gmail.com Tue Jun 19 17:15:08 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Tue, 19 Jun 2018 13:15:08 -0400 Subject: How do I make a constraint tuple in Core? In-Reply-To: References: Message-ID: > Can you be a bit more precise about what you are doing? Constructing > core like this is quite hairy. I'm modifying TcGenGenerics to use an experimental representation type that leverages ConstraintKinds (and thus constraint tuples) in its type. The function I'm modifying is tc_mkRepTy [1], which constructs the Core Type that's used for Rep/Rep1 in derived Generic instances. > The "tuple" part doesn't really exist in core Sure it does! It does in this code, at least: data Foo c a where MkFoo :: c => a -> Foo c a f :: Foo (Eq a, Show a) -> String f (MkFoo x) = show x According to ghci -ddump-simpl, that gives you the following (unoptimized) Core: f :: forall a. Foo (Eq a, Show a) a -> String f = \ (@ a_a2RQ) (ds_d2S2 :: Foo (Eq a_a2RQ, Show a_a2RQ) a_a2RQ) -> case ds_d2S2 of { MkFoo $d(%,%)_a2RS x_a2Ry -> show @ a_a2RQ (GHC.Classes.$p2(%,%) @ (Eq a_a2RQ) @ (Show a_a2RQ) $d(%,%)_a2RS) x_a2Ry } Notice the $d(%,%)_a2RS and $p2(%,%) bits, which correspond to a constraint tuple dictionary and one of its superclass selectors, respectively. Ryan S. ----- [1] http://git.haskell.org/ghc.git/blob/26e9806ada8823160dd63ca2c34556e5848b2f45:/compiler/typecheck/TcGenGenerics.hs#l513 On Tue, Jun 19, 2018 at 1:09 PM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Can you be a bit more precise about what you are doing? Constructing > core like this is quite hairy. > > The "tuple" part doesn't really exist in core, a constraint tuple is > curried. So foo :: (C1 a, C2 a) => ... desugars to `foo = /\ a . \ > $dC1 . \$dC2 -> ...`. > > Cheers, > > Matt > > > > On Tue, Jun 19, 2018 at 4:48 PM, Ryan Scott > wrote: > > Unfortunately, I can't directly use tc_tuple, since I don't have access > to > > the Haskell AST forms I need to make that work (I'm constructing > everything > > directly in Core). On the other hand, the implementation of tc_tuple does > > have one nugget of wisdom in that it reveals how GHC creates a constraint > > tuple *type constructor*. Namely, `tcLookupTyCon (cTupleTyConName arity)` > > for some `arity`. > > > > That's still a bit inconvenient, as `tcLookupTyCon` forces me to work in > a > > monadic context (whereas the code I've been working on has been pure up > to > > this point). 
Is there not a pure way to retrieve a constraint tuple type > > constructor? > > > > Ryan S. > > > > On Tue, Jun 19, 2018 at 10:07 AM Matthew Pickering > > wrote: > >> > >> How about `tc_tuple`? > >> > >> On Tue, Jun 19, 2018 at 2:53 PM, Ryan Scott > >> wrote: > >> > I'm currently working on some code in which I need to produce a Core > >> > Type > >> > that mentions a constraint tuple. I thought that there must surely > exist > >> > some way to construct a constraint tuple using the GHC API, but to my > >> > astonishment, I could not find anything. The closest thing I found was > >> > mk_tuple [1], which gives you the ability to make boxed and unboxed > >> > tuples, > >> > but not constraint tuples. > >> > > >> > I then thought to myself, "But wait, PartialTypeSignatures has to > create > >> > constraint tuples, right? How does that part of the code work?" To my > >> > horror, I discovered that PartialTypeSignatures actually creates > *boxed* > >> > tuples (see mk_ctuple here [2]), then hackily treats them as > constraint > >> > tuples, as explained in Note [Extra-constraint holes in partial type > >> > signatures] [3]. I tried reading that Note, but I couldn't follow the > >> > details. > >> > > >> > Is there a simpler way to create a constraint tuple that I'm not aware > >> > of? > >> > > >> > Ryan S. > >> > ----- > >> > [1] > >> > > >> > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/prelude/TysWiredIn.hs#l810 > >> > [2] > >> > > >> > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcBinds.hs#l1036 > >> > [3] > >> > > >> > > http://git.haskell.org/ghc.git/blob/676c5754e3f9e1beeb5f01e0265ffbdc0e6f49e9:/compiler/typecheck/TcHsType.hs#l2367 > >> > > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jun 20 08:20:37 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 20 Jun 2018 11:20:37 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Hi Simon, I'm confused about this code again. You said > scavenge_one() is only used for a non-major collection, where we aren't > traversing SRTs. But I think this is not true; scavenge_one() is also used to scavenge large objects (in scavenge_large()), which are scavenged even in major GCs. So it seems like we never really scavenge SRTs of large objects. This doesn't look right to me. Am I missing anything? Can large objects not refer to static objects? Thanks Ömer Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 tarihinde şunu yazdı: > > Thanks Simon, this is really helpful. > > > If you look at scavenge_fun_srt() and co, you'll see that they return > > immediately if !major_gc. > > Thanks for pointing this out -- I didn't realize it's returning early when > !major_gc and this caused a lot of confusion. Now everything makes sense. > > I'll add a note for scavenging SRTs and refer to it in relevant code and submit > a diff. > > Ömer > > 2018-05-01 22:10 GMT+03:00 Simon Marlow : > > Your explanation is basically right. scavenge_one() is only used for a > > non-major collection, where we aren't traversing SRTs. 
Admittedly this is a > > subtle point that could almost certainly be documented better, I probably > > just overlooked it. > > > > More inline: > > > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: > >> > >> I have an idea but it doesn't explain everything; > >> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest > >> generation's mut_list when allocated [1]. > >> > >> When we're scavenging a mut_list we know we're not doing a major GC, and > >> because mut_list of oldest generation has all the newly allocated CAFs, > >> which > >> will be scavenged anyway, no need to scavenge SRTs for those. > >> > >> Also, static objects are always evacuated to the oldest gen [2], so any > >> CAFs > >> that are alive but not in the mut_list of the oldest gen will stay alive > >> after > >> a non-major GC, again no need to scavenge SRTs to keep these alive. > >> > >> This also explains why it's OK to not collect static objects (and not > >> treat > >> them as roots) in non-major GCs. > >> > >> However this doesn't explain > >> > >> - Why it's OK to scavenge large objects with scavenge_one(). > > > > > > I don't understand - perhaps you could elaborate on why you think it might > > not be OK? Large objects are treated exactly the same as small objects with > > respect to their lifetimes. > > > >> > >> - Why we scavenge SRTs in non-major collections in other places (e.g. > >> scavenge_block()). > > > > > > If you look at scavenge_fun_srt() and co, you'll see that they return > > immediately if !major_gc. > > > >> > >> Simon, could you say a few words about this? > > > > > > Was that enough words? I have more if necessary :) > > > > Cheers > > Simon > > > > > >> > >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 > >> > >> Ömer > >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : > >> > Hi Simon, > >> > > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It > >> > appears that it is primarily used for remembered set entries but it's > >> > not at all clear why this means that we can safely ignore SRTs (e.g. in > >> > the FUN and THUNK cases). > >> > > >> > Can you shed some light on this? > >> > > >> > Cheers, > >> > > >> > - Ben > >> > > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > From marlowsd at gmail.com Wed Jun 20 11:32:41 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 20 Jun 2018 12:32:41 +0100 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Interesting point. I don't think there are any large objects with SRTs, but we should document the invariant because we're relying on it. Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by the RTS, and none of these have SRTs. We did have plans to allocate memory for large dynamic objects using `allocate()` from compiled code, in which case we could have large objects that could be THUNK, FUN, etc. and could have an SRT, in which case we would need to revisit this. You might want to take a look at Note [big objects] in GCUtils.c, which is relevant here. 
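As an aside that may help with the scenario raised later in this thread: a STACK can end up as a large object through nothing more exotic than deep non-tail recursion, which makes threadStackOverflow allocate additional stack chunks via allocate(). A minimal, hypothetical reproducer follows; whether a given chunk actually crosses the large-object threshold depends on the RTS stack-chunk settings.

-- Each level pushes a continuation frame, so the Haskell stack keeps
-- growing until the RTS has to allocate a fresh stack chunk.
deep :: Int -> Int
deep 0 = 0
deep n = 1 + deep (n - 1)

main :: IO ()
main = print (deep 1000000)
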
Cheers Simon On 20 June 2018 at 09:20, Ömer Sinan Ağacan wrote: > Hi Simon, > > I'm confused about this code again. You said > > > scavenge_one() is only used for a non-major collection, where we aren't > > traversing SRTs. > > But I think this is not true; scavenge_one() is also used to scavenge large > objects (in scavenge_large()), which are scavenged even in major GCs. So it > seems like we never really scavenge SRTs of large objects. This doesn't > look > right to me. Am I missing anything? Can large objects not refer to static > objects? > > Thanks > > Ömer > > Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 > tarihinde şunu yazdı: > > > > Thanks Simon, this is really helpful. > > > > > If you look at scavenge_fun_srt() and co, you'll see that they return > > > immediately if !major_gc. > > > > Thanks for pointing this out -- I didn't realize it's returning early > when > > !major_gc and this caused a lot of confusion. Now everything makes sense. > > > > I'll add a note for scavenging SRTs and refer to it in relevant code and > submit > > a diff. > > > > Ömer > > > > 2018-05-01 22:10 GMT+03:00 Simon Marlow : > > > Your explanation is basically right. scavenge_one() is only used for a > > > non-major collection, where we aren't traversing SRTs. Admittedly this > is a > > > subtle point that could almost certainly be documented better, I > probably > > > just overlooked it. > > > > > > More inline: > > > > > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan > wrote: > > >> > > >> I have an idea but it doesn't explain everything; > > >> > > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest > > >> generation's mut_list when allocated [1]. > > >> > > >> When we're scavenging a mut_list we know we're not doing a major GC, > and > > >> because mut_list of oldest generation has all the newly allocated > CAFs, > > >> which > > >> will be scavenged anyway, no need to scavenge SRTs for those. > > >> > > >> Also, static objects are always evacuated to the oldest gen [2], so > any > > >> CAFs > > >> that are alive but not in the mut_list of the oldest gen will stay > alive > > >> after > > >> a non-major GC, again no need to scavenge SRTs to keep these alive. > > >> > > >> This also explains why it's OK to not collect static objects (and not > > >> treat > > >> them as roots) in non-major GCs. > > >> > > >> However this doesn't explain > > >> > > >> - Why it's OK to scavenge large objects with scavenge_one(). > > > > > > > > > I don't understand - perhaps you could elaborate on why you think it > might > > > not be OK? Large objects are treated exactly the same as small objects > with > > > respect to their lifetimes. > > > > > >> > > >> - Why we scavenge SRTs in non-major collections in other places (e.g. > > >> scavenge_block()). > > > > > > > > > If you look at scavenge_fun_srt() and co, you'll see that they return > > > immediately if !major_gc. > > > > > >> > > >> Simon, could you say a few words about this? > > > > > > > > > Was that enough words? I have more if necessary :) > > > > > > Cheers > > > Simon > > > > > > > > >> > > >> > > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c# > L445-L449 > > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 > > >> > > >> Ömer > > >> > > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : > > >> > Hi Simon, > > >> > > > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge > SRTs. 
It > > >> > appears that it is primarily used for remembered set entries but > it's > > >> > not at all clear why this means that we can safely ignore SRTs > (e.g. in > > >> > the FUN and THUNK cases). > > >> > > > >> > Can you shed some light on this? > > >> > > > >> > Cheers, > > >> > > > >> > - Ben > > >> > > > >> > _______________________________________________ > > >> > ghc-devs mailing list > > >> > ghc-devs at haskell.org > > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > >> > > > >> _______________________________________________ > > >> ghc-devs mailing list > > >> ghc-devs at haskell.org > > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kovanikov at gmail.com Thu Jun 21 09:06:39 2018 From: kovanikov at gmail.com (Dmitriy Kovanikov) Date: Thu, 21 Jun 2018 17:06:39 +0800 Subject: GHC API question: resolving dependencies for modules Message-ID: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Hello! I’m trying to use GHC as a library. And my goal is to be able to gather information about where each function or data type came from. I’ve started by simply calling `getNamesInScope` function and observing its result. Here is my code: * Main.hs: https://lpaste.net/9026688686753841152 And here is the code for my test modules: * test/X.hs: https://lpaste.net/6844657232357883904 * test/Y.hs: https://lpaste.net/8673289058127970304 Unfortunately, my implementation doesn't work since I’m not very familiar with GHC API. And I see the following errors after executing my `Main.hs` file (I’m using ghc-8.2.2): * error messages: https://lpaste.net/3316737208131518464 Could you please point me to places or parts of GHC API or some documentation about module dependencies and how to make ghc see imports of other modules? I can’t find simple and small enough usage example of this part of the library. Thanks in advance, Dmitrii Kovanikov -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Jun 21 09:51:20 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 21 Jun 2018 10:51:20 +0100 Subject: GHC API question: resolving dependencies for modules In-Reply-To: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> References: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Message-ID: This doesn't answer your question directly but if you want to gather information about a module then using a source plugin would probably be easier and more robust than using the GHC API. You need to write a function of type: ``` ModSummary -> TcGblEnv -> TcM TcGblEnv ``` In `TcGblEnv` you will find `tcg_rdr_env` which contains all top-level things and describes how they came to be in scope. Source plugins will be in GHC 8.6. Cheers, Matt On Thu, Jun 21, 2018 at 10:06 AM, Dmitriy Kovanikov wrote: > Hello! > > I’m trying to use GHC as a library. And my goal is to be able to gather > information about where each function or data type came from. I’ve started > by simply calling `getNamesInScope` function and observing its result. Here > is my code: > > * Main.hs: https://lpaste.net/9026688686753841152 > > And here is the code for my test modules: > > * test/X.hs: https://lpaste.net/6844657232357883904 > * test/Y.hs: https://lpaste.net/8673289058127970304 > > Unfortunately, my implementation doesn't work since I’m not very familiar > with GHC API. 
> And I see the following errors after executing my `Main.hs` file (I’m using > ghc-8.2.2): > > * error messages: https://lpaste.net/3316737208131518464 > > Could you please point me to places or parts of GHC API or some > documentation about module dependencies and how to make ghc see imports of > other modules? I can’t find simple and small enough usage example of this > part of the library. > > Thanks in advance, > Dmitrii Kovanikov > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From omeragacan at gmail.com Thu Jun 21 10:42:08 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 21 Jun 2018 13:42:08 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by > the RTS, and none of these have SRTs. Is is not possible to allocate a large STACK? I'm currently observing this in gdb: >>> call *Bdescr(0x4200ec9000) $2 = { start = 0x4200ec9000, free = 0x4200ed1000, link = 0x4200100e80, u = { back = 0x4200103980, bitmap = 0x4200103980, scan = 0x4200103980 }, gen = 0x77b4b8, gen_no = 1, dest_no = 1, node = 0, flags = 1027, <-- BF_LARGE | BF_EVACUTED | ... blocks = 8, _padding = {[0] = 0, [1] = 0, [2] = 0} } >>> call printClosure(0x4200ec9000) 0x4200ec9000: STACK >>> call checkClosure(0x4200ec9000) $3 = 4096 -- makes sense, larger than 3277 bytes So I have a large STACK object, and STACKs can refer to static objects. But when we scavenge this object we don't scavenge its SRTs because we use scavenge_one(). This seems wrong to me. Ömer Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu yazdı: > > Interesting point. I don't think there are any large objects with SRTs, but we should document the invariant because we're relying on it. > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by the RTS, and none of these have SRTs. > > We did have plans to allocate memory for large dynamic objects using `allocate()` from compiled code, in which case we could have large objects that could be THUNK, FUN, etc. and could have an SRT, in which case we would need to revisit this. You might want to take a look at Note [big objects] in GCUtils.c, which is relevant here. > > Cheers > Simon > > > On 20 June 2018 at 09:20, Ömer Sinan Ağacan wrote: >> >> Hi Simon, >> >> I'm confused about this code again. You said >> >> > scavenge_one() is only used for a non-major collection, where we aren't >> > traversing SRTs. >> >> But I think this is not true; scavenge_one() is also used to scavenge large >> objects (in scavenge_large()), which are scavenged even in major GCs. So it >> seems like we never really scavenge SRTs of large objects. This doesn't look >> right to me. Am I missing anything? Can large objects not refer to static >> objects? >> >> Thanks >> >> Ömer >> >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 >> tarihinde şunu yazdı: >> > >> > Thanks Simon, this is really helpful. >> > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return >> > > immediately if !major_gc. >> > >> > Thanks for pointing this out -- I didn't realize it's returning early when >> > !major_gc and this caused a lot of confusion. Now everything makes sense. >> > >> > I'll add a note for scavenging SRTs and refer to it in relevant code and submit >> > a diff. 
>> > >> > Ömer >> > >> > 2018-05-01 22:10 GMT+03:00 Simon Marlow : >> > > Your explanation is basically right. scavenge_one() is only used for a >> > > non-major collection, where we aren't traversing SRTs. Admittedly this is a >> > > subtle point that could almost certainly be documented better, I probably >> > > just overlooked it. >> > > >> > > More inline: >> > > >> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: >> > >> >> > >> I have an idea but it doesn't explain everything; >> > >> >> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest >> > >> generation's mut_list when allocated [1]. >> > >> >> > >> When we're scavenging a mut_list we know we're not doing a major GC, and >> > >> because mut_list of oldest generation has all the newly allocated CAFs, >> > >> which >> > >> will be scavenged anyway, no need to scavenge SRTs for those. >> > >> >> > >> Also, static objects are always evacuated to the oldest gen [2], so any >> > >> CAFs >> > >> that are alive but not in the mut_list of the oldest gen will stay alive >> > >> after >> > >> a non-major GC, again no need to scavenge SRTs to keep these alive. >> > >> >> > >> This also explains why it's OK to not collect static objects (and not >> > >> treat >> > >> them as roots) in non-major GCs. >> > >> >> > >> However this doesn't explain >> > >> >> > >> - Why it's OK to scavenge large objects with scavenge_one(). >> > > >> > > >> > > I don't understand - perhaps you could elaborate on why you think it might >> > > not be OK? Large objects are treated exactly the same as small objects with >> > > respect to their lifetimes. >> > > >> > >> >> > >> - Why we scavenge SRTs in non-major collections in other places (e.g. >> > >> scavenge_block()). >> > > >> > > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return >> > > immediately if !major_gc. >> > > >> > >> >> > >> Simon, could you say a few words about this? >> > > >> > > >> > > Was that enough words? I have more if necessary :) >> > > >> > > Cheers >> > > Simon >> > > >> > > >> > >> >> > >> >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 >> > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 >> > >> >> > >> Ömer >> > >> >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : >> > >> > Hi Simon, >> > >> > >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It >> > >> > appears that it is primarily used for remembered set entries but it's >> > >> > not at all clear why this means that we can safely ignore SRTs (e.g. in >> > >> > the FUN and THUNK cases). >> > >> > >> > >> > Can you shed some light on this? 
>> > >> > >> > >> > Cheers, >> > >> > >> > >> > - Ben >> > >> > >> > >> > _______________________________________________ >> > >> > ghc-devs mailing list >> > >> > ghc-devs at haskell.org >> > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > >> > >> _______________________________________________ >> > >> ghc-devs mailing list >> > >> ghc-devs at haskell.org >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > >> > > > > From omeragacan at gmail.com Thu Jun 21 10:58:52 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 21 Jun 2018 13:58:52 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Here's an example where we allocate a large (4K) stack: >>> bt #0 allocateMightFail (cap=0x7f366808cfc0 , n=4096) at rts/sm/Storage.c:876 #1 0x00007f3667e4a85d in allocate (cap=0x7f366808cfc0 , n=4096) at rts/sm/Storage.c:849 #2 0x00007f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0 , tso=0x4200152a68) at rts/Threads.c:600 #3 0x00007f3667e12a64 in schedule (initialCapability=0x7f366808cfc0 , task=0x78c970) at rts/Schedule.c:520 #4 0x00007f3667e1215f in scheduleWaitThread (tso=0x4200105388, ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533 #5 0x00007f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78, p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530 #6 0x00007f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8, main_closure=0x736ef8, rts_config=...) t rts/RtsMain.c:72 #7 0x00000000004f738f in main () This is based on an old tree so source locations may not be correct, it's this code in threadStackOverflow(): // Charge the current thread for allocating stack. Stack usage is // non-deterministic, because the chunk boundaries might vary from // run to run, but accounting for this is better than not // accounting for it, since a deep recursion will otherwise not be // subject to allocation limits. cap->r.rCurrentTSO = tso; new_stack = (StgStack*) allocate(cap, chunk_size); cap->r.rCurrentTSO = NULL; SET_HDR(new_stack, &stg_STACK_info, old_stack->header.prof.ccs); TICK_ALLOC_STACK(chunk_size); Ömer Ömer Sinan Ağacan , 21 Haz 2018 Per, 13:42 tarihinde şunu yazdı: > > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by > > the RTS, and none of these have SRTs. > > Is is not possible to allocate a large STACK? I'm currently observing this in > gdb: > > >>> call *Bdescr(0x4200ec9000) > $2 = { > start = 0x4200ec9000, > free = 0x4200ed1000, > link = 0x4200100e80, > u = { > back = 0x4200103980, > bitmap = 0x4200103980, > scan = 0x4200103980 > }, > gen = 0x77b4b8, > gen_no = 1, > dest_no = 1, > node = 0, > flags = 1027, <-- BF_LARGE | BF_EVACUTED | ... > blocks = 8, > _padding = {[0] = 0, [1] = 0, [2] = 0} > } > > >>> call printClosure(0x4200ec9000) > 0x4200ec9000: STACK > > >>> call checkClosure(0x4200ec9000) > $3 = 4096 -- makes sense, larger than 3277 bytes > > So I have a large STACK object, and STACKs can refer to static objects. But > when we scavenge this object we don't scavenge its SRTs because we use > scavenge_one(). This seems wrong to me. > > Ömer > > Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu yazdı: > > > > Interesting point. I don't think there are any large objects with SRTs, but we should document the invariant because we're relying on it. > > > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by the RTS, and none of these have SRTs. 
> > > > We did have plans to allocate memory for large dynamic objects using `allocate()` from compiled code, in which case we could have large objects that could be THUNK, FUN, etc. and could have an SRT, in which case we would need to revisit this. You might want to take a look at Note [big objects] in GCUtils.c, which is relevant here. > > > > Cheers > > Simon > > > > > > On 20 June 2018 at 09:20, Ömer Sinan Ağacan wrote: > >> > >> Hi Simon, > >> > >> I'm confused about this code again. You said > >> > >> > scavenge_one() is only used for a non-major collection, where we aren't > >> > traversing SRTs. > >> > >> But I think this is not true; scavenge_one() is also used to scavenge large > >> objects (in scavenge_large()), which are scavenged even in major GCs. So it > >> seems like we never really scavenge SRTs of large objects. This doesn't look > >> right to me. Am I missing anything? Can large objects not refer to static > >> objects? > >> > >> Thanks > >> > >> Ömer > >> > >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 > >> tarihinde şunu yazdı: > >> > > >> > Thanks Simon, this is really helpful. > >> > > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return > >> > > immediately if !major_gc. > >> > > >> > Thanks for pointing this out -- I didn't realize it's returning early when > >> > !major_gc and this caused a lot of confusion. Now everything makes sense. > >> > > >> > I'll add a note for scavenging SRTs and refer to it in relevant code and submit > >> > a diff. > >> > > >> > Ömer > >> > > >> > 2018-05-01 22:10 GMT+03:00 Simon Marlow : > >> > > Your explanation is basically right. scavenge_one() is only used for a > >> > > non-major collection, where we aren't traversing SRTs. Admittedly this is a > >> > > subtle point that could almost certainly be documented better, I probably > >> > > just overlooked it. > >> > > > >> > > More inline: > >> > > > >> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: > >> > >> > >> > >> I have an idea but it doesn't explain everything; > >> > >> > >> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest > >> > >> generation's mut_list when allocated [1]. > >> > >> > >> > >> When we're scavenging a mut_list we know we're not doing a major GC, and > >> > >> because mut_list of oldest generation has all the newly allocated CAFs, > >> > >> which > >> > >> will be scavenged anyway, no need to scavenge SRTs for those. > >> > >> > >> > >> Also, static objects are always evacuated to the oldest gen [2], so any > >> > >> CAFs > >> > >> that are alive but not in the mut_list of the oldest gen will stay alive > >> > >> after > >> > >> a non-major GC, again no need to scavenge SRTs to keep these alive. > >> > >> > >> > >> This also explains why it's OK to not collect static objects (and not > >> > >> treat > >> > >> them as roots) in non-major GCs. > >> > >> > >> > >> However this doesn't explain > >> > >> > >> > >> - Why it's OK to scavenge large objects with scavenge_one(). > >> > > > >> > > > >> > > I don't understand - perhaps you could elaborate on why you think it might > >> > > not be OK? Large objects are treated exactly the same as small objects with > >> > > respect to their lifetimes. > >> > > > >> > >> > >> > >> - Why we scavenge SRTs in non-major collections in other places (e.g. > >> > >> scavenge_block()). > >> > > > >> > > > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return > >> > > immediately if !major_gc. 
> >> > > > >> > >> > >> > >> Simon, could you say a few words about this? > >> > > > >> > > > >> > > Was that enough words? I have more if necessary :) > >> > > > >> > > Cheers > >> > > Simon > >> > > > >> > > > >> > >> > >> > >> > >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 > >> > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 > >> > >> > >> > >> Ömer > >> > >> > >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : > >> > >> > Hi Simon, > >> > >> > > >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It > >> > >> > appears that it is primarily used for remembered set entries but it's > >> > >> > not at all clear why this means that we can safely ignore SRTs (e.g. in > >> > >> > the FUN and THUNK cases). > >> > >> > > >> > >> > Can you shed some light on this? > >> > >> > > >> > >> > Cheers, > >> > >> > > >> > >> > - Ben > >> > >> > > >> > >> > _______________________________________________ > >> > >> > ghc-devs mailing list > >> > >> > ghc-devs at haskell.org > >> > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > >> > > >> > >> _______________________________________________ > >> > >> ghc-devs mailing list > >> > >> ghc-devs at haskell.org > >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > > >> > > > > > > From marlowsd at gmail.com Thu Jun 21 18:27:52 2018 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 21 Jun 2018 19:27:52 +0100 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: When scavenge_one() sees a STACK, it calls scavenge_stack() which traverses the stack frames, including their SRTs. So I don't understand what's going wrong for you - how are the SRTs not being traversed? Cheers Simon On 21 June 2018 at 11:58, Ömer Sinan Ağacan wrote: > Here's an example where we allocate a large (4K) stack: > > >>> bt > #0 allocateMightFail (cap=0x7f366808cfc0 , > n=4096) at rts/sm/Storage.c:876 > #1 0x00007f3667e4a85d in allocate (cap=0x7f366808cfc0 > , n=4096) at rts/sm/Storage.c:849 > #2 0x00007f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0 > , tso=0x4200152a68) at rts/Threads.c:600 > #3 0x00007f3667e12a64 in schedule > (initialCapability=0x7f366808cfc0 , task=0x78c970) at > rts/Schedule.c:520 > #4 0x00007f3667e1215f in scheduleWaitThread (tso=0x4200105388, > ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533 > #5 0x00007f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78, > p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530 > #6 0x00007f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8, > main_closure=0x736ef8, rts_config=...) t rts/RtsMain.c:72 > #7 0x00000000004f738f in main () > > This is based on an old tree so source locations may not be correct, it's > this > code in threadStackOverflow(): > > // Charge the current thread for allocating stack. Stack usage is > // non-deterministic, because the chunk boundaries might vary from > // run to run, but accounting for this is better than not > // accounting for it, since a deep recursion will otherwise not be > // subject to allocation limits. 
> cap->r.rCurrentTSO = tso; > new_stack = (StgStack*) allocate(cap, chunk_size); > cap->r.rCurrentTSO = NULL; > > SET_HDR(new_stack, &stg_STACK_info, old_stack->header.prof.ccs); > TICK_ALLOC_STACK(chunk_size); > > Ömer > Ömer Sinan Ağacan , 21 Haz 2018 Per, 13:42 > tarihinde şunu yazdı: > > > > > Large objects can only be primitive objects, like MUT_ARR_PTRS, > allocated by > > > the RTS, and none of these have SRTs. > > > > Is is not possible to allocate a large STACK? I'm currently observing > this in > > gdb: > > > > >>> call *Bdescr(0x4200ec9000) > > $2 = { > > start = 0x4200ec9000, > > free = 0x4200ed1000, > > link = 0x4200100e80, > > u = { > > back = 0x4200103980, > > bitmap = 0x4200103980, > > scan = 0x4200103980 > > }, > > gen = 0x77b4b8, > > gen_no = 1, > > dest_no = 1, > > node = 0, > > flags = 1027, <-- BF_LARGE | BF_EVACUTED | ... > > blocks = 8, > > _padding = {[0] = 0, [1] = 0, [2] = 0} > > } > > > > >>> call printClosure(0x4200ec9000) > > 0x4200ec9000: STACK > > > > >>> call checkClosure(0x4200ec9000) > > $3 = 4096 -- makes sense, larger than 3277 bytes > > > > So I have a large STACK object, and STACKs can refer to static objects. > But > > when we scavenge this object we don't scavenge its SRTs because we use > > scavenge_one(). This seems wrong to me. > > > > Ömer > > > > Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde > şunu yazdı: > > > > > > Interesting point. I don't think there are any large objects with > SRTs, but we should document the invariant because we're relying on it. > > > > > > Large objects can only be primitive objects, like MUT_ARR_PTRS, > allocated by the RTS, and none of these have SRTs. > > > > > > We did have plans to allocate memory for large dynamic objects using > `allocate()` from compiled code, in which case we could have large objects > that could be THUNK, FUN, etc. and could have an SRT, in which case we > would need to revisit this. You might want to take a look at Note [big > objects] in GCUtils.c, which is relevant here. > > > > > > Cheers > > > Simon > > > > > > > > > On 20 June 2018 at 09:20, Ömer Sinan Ağacan > wrote: > > >> > > >> Hi Simon, > > >> > > >> I'm confused about this code again. You said > > >> > > >> > scavenge_one() is only used for a non-major collection, where we > aren't > > >> > traversing SRTs. > > >> > > >> But I think this is not true; scavenge_one() is also used to scavenge > large > > >> objects (in scavenge_large()), which are scavenged even in major GCs. > So it > > >> seems like we never really scavenge SRTs of large objects. This > doesn't look > > >> right to me. Am I missing anything? Can large objects not refer to > static > > >> objects? > > >> > > >> Thanks > > >> > > >> Ömer > > >> > > >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 > > >> tarihinde şunu yazdı: > > >> > > > >> > Thanks Simon, this is really helpful. > > >> > > > >> > > If you look at scavenge_fun_srt() and co, you'll see that they > return > > >> > > immediately if !major_gc. > > >> > > > >> > Thanks for pointing this out -- I didn't realize it's returning > early when > > >> > !major_gc and this caused a lot of confusion. Now everything makes > sense. > > >> > > > >> > I'll add a note for scavenging SRTs and refer to it in relevant > code and submit > > >> > a diff. > > >> > > > >> > Ömer > > >> > > > >> > 2018-05-01 22:10 GMT+03:00 Simon Marlow : > > >> > > Your explanation is basically right. scavenge_one() is only used > for a > > >> > > non-major collection, where we aren't traversing SRTs. 
Admittedly > this is a > > >> > > subtle point that could almost certainly be documented better, I > probably > > >> > > just overlooked it. > > >> > > > > >> > > More inline: > > >> > > > > >> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan > wrote: > > >> > >> > > >> > >> I have an idea but it doesn't explain everything; > > >> > >> > > >> > >> SRTs are used to collect CAFs, and CAFs are always added to the > oldest > > >> > >> generation's mut_list when allocated [1]. > > >> > >> > > >> > >> When we're scavenging a mut_list we know we're not doing a major > GC, and > > >> > >> because mut_list of oldest generation has all the newly > allocated CAFs, > > >> > >> which > > >> > >> will be scavenged anyway, no need to scavenge SRTs for those. > > >> > >> > > >> > >> Also, static objects are always evacuated to the oldest gen [2], > so any > > >> > >> CAFs > > >> > >> that are alive but not in the mut_list of the oldest gen will > stay alive > > >> > >> after > > >> > >> a non-major GC, again no need to scavenge SRTs to keep these > alive. > > >> > >> > > >> > >> This also explains why it's OK to not collect static objects > (and not > > >> > >> treat > > >> > >> them as roots) in non-major GCs. > > >> > >> > > >> > >> However this doesn't explain > > >> > >> > > >> > >> - Why it's OK to scavenge large objects with scavenge_one(). > > >> > > > > >> > > > > >> > > I don't understand - perhaps you could elaborate on why you think > it might > > >> > > not be OK? Large objects are treated exactly the same as small > objects with > > >> > > respect to their lifetimes. > > >> > > > > >> > >> > > >> > >> - Why we scavenge SRTs in non-major collections in other places > (e.g. > > >> > >> scavenge_block()). > > >> > > > > >> > > > > >> > > If you look at scavenge_fun_srt() and co, you'll see that they > return > > >> > > immediately if !major_gc. > > >> > > > > >> > >> > > >> > >> Simon, could you say a few words about this? > > >> > > > > >> > > > > >> > > Was that enough words? I have more if necessary :) > > >> > > > > >> > > Cheers > > >> > > Simon > > >> > > > > >> > > > > >> > >> > > >> > >> > > >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c# > L445-L449 > > >> > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c# > L1761-L1763 > > >> > >> > > >> > >> Ömer > > >> > >> > > >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : > > >> > >> > Hi Simon, > > >> > >> > > > >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge > SRTs. It > > >> > >> > appears that it is primarily used for remembered set entries > but it's > > >> > >> > not at all clear why this means that we can safely ignore SRTs > (e.g. in > > >> > >> > the FUN and THUNK cases). > > >> > >> > > > >> > >> > Can you shed some light on this? > > >> > >> > > > >> > >> > Cheers, > > >> > >> > > > >> > >> > - Ben > > >> > >> > > > >> > >> > _______________________________________________ > > >> > >> > ghc-devs mailing list > > >> > >> > ghc-devs at haskell.org > > >> > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > >> > >> > > > >> > >> _______________________________________________ > > >> > >> ghc-devs mailing list > > >> > >> ghc-devs at haskell.org > > >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > >> > > > > >> > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Fri Jun 22 06:43:23 2018 From: david.feuer at gmail.com (David Feuer) Date: Fri, 22 Jun 2018 02:43:23 -0400 Subject: atomicWriteIORef Message-ID: Currently, atomicWriteIORef is implemented using atomicModifyIORef. That seems pretty heavy for the job, allocating closures and forcing thunks. Is there a reason not to do it with casMutVar#? -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Fri Jun 22 06:54:23 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 22 Jun 2018 09:54:23 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: OK, finally everything makes sense I think. I was very confused by the code and previous emails where you said: > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by > the RTS, and none of these have SRTs. I was pointing out that this is not entirely correct; we allocate large stacks. But as you say scavenge_one() handles that case by scavenging stack SRTs. So in summary: - scavenge_one() is called to scavenge mut_lists and large objects. - When scavenging mut_lists no need to scaveng SRTs (see previous emails) - When scavenging large objects we know that certain objects can't be large (i.e. FUN, THUNK), but some others can (i.e. STACK), so scavenge_one() scavenges stack SRTs but does not scavenge FUN and THUNK SRTs. Ömer Simon Marlow , 21 Haz 2018 Per, 21:27 tarihinde şunu yazdı: > > When scavenge_one() sees a STACK, it calls scavenge_stack() which traverses the stack frames, including their SRTs. > > So I don't understand what's going wrong for you - how are the SRTs not being traversed? > > Cheers > Simon > > On 21 June 2018 at 11:58, Ömer Sinan Ağacan wrote: >> >> Here's an example where we allocate a large (4K) stack: >> >> >>> bt >> #0 allocateMightFail (cap=0x7f366808cfc0 , >> n=4096) at rts/sm/Storage.c:876 >> #1 0x00007f3667e4a85d in allocate (cap=0x7f366808cfc0 >> , n=4096) at rts/sm/Storage.c:849 >> #2 0x00007f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0 >> , tso=0x4200152a68) at rts/Threads.c:600 >> #3 0x00007f3667e12a64 in schedule >> (initialCapability=0x7f366808cfc0 , task=0x78c970) at >> rts/Schedule.c:520 >> #4 0x00007f3667e1215f in scheduleWaitThread (tso=0x4200105388, >> ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533 >> #5 0x00007f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78, >> p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530 >> #6 0x00007f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8, >> main_closure=0x736ef8, rts_config=...) t rts/RtsMain.c:72 >> #7 0x00000000004f738f in main () >> >> This is based on an old tree so source locations may not be correct, it's this >> code in threadStackOverflow(): >> >> // Charge the current thread for allocating stack. Stack usage is >> // non-deterministic, because the chunk boundaries might vary from >> // run to run, but accounting for this is better than not >> // accounting for it, since a deep recursion will otherwise not be >> // subject to allocation limits. 
>> cap->r.rCurrentTSO = tso; >> new_stack = (StgStack*) allocate(cap, chunk_size); >> cap->r.rCurrentTSO = NULL; >> >> SET_HDR(new_stack, &stg_STACK_info, old_stack->header.prof.ccs); >> TICK_ALLOC_STACK(chunk_size); >> >> Ömer >> Ömer Sinan Ağacan , 21 Haz 2018 Per, 13:42 >> tarihinde şunu yazdı: >> > >> > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by >> > > the RTS, and none of these have SRTs. >> > >> > Is is not possible to allocate a large STACK? I'm currently observing this in >> > gdb: >> > >> > >>> call *Bdescr(0x4200ec9000) >> > $2 = { >> > start = 0x4200ec9000, >> > free = 0x4200ed1000, >> > link = 0x4200100e80, >> > u = { >> > back = 0x4200103980, >> > bitmap = 0x4200103980, >> > scan = 0x4200103980 >> > }, >> > gen = 0x77b4b8, >> > gen_no = 1, >> > dest_no = 1, >> > node = 0, >> > flags = 1027, <-- BF_LARGE | BF_EVACUTED | ... >> > blocks = 8, >> > _padding = {[0] = 0, [1] = 0, [2] = 0} >> > } >> > >> > >>> call printClosure(0x4200ec9000) >> > 0x4200ec9000: STACK >> > >> > >>> call checkClosure(0x4200ec9000) >> > $3 = 4096 -- makes sense, larger than 3277 bytes >> > >> > So I have a large STACK object, and STACKs can refer to static objects. But >> > when we scavenge this object we don't scavenge its SRTs because we use >> > scavenge_one(). This seems wrong to me. >> > >> > Ömer >> > >> > Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu yazdı: >> > > >> > > Interesting point. I don't think there are any large objects with SRTs, but we should document the invariant because we're relying on it. >> > > >> > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by the RTS, and none of these have SRTs. >> > > >> > > We did have plans to allocate memory for large dynamic objects using `allocate()` from compiled code, in which case we could have large objects that could be THUNK, FUN, etc. and could have an SRT, in which case we would need to revisit this. You might want to take a look at Note [big objects] in GCUtils.c, which is relevant here. >> > > >> > > Cheers >> > > Simon >> > > >> > > >> > > On 20 June 2018 at 09:20, Ömer Sinan Ağacan wrote: >> > >> >> > >> Hi Simon, >> > >> >> > >> I'm confused about this code again. You said >> > >> >> > >> > scavenge_one() is only used for a non-major collection, where we aren't >> > >> > traversing SRTs. >> > >> >> > >> But I think this is not true; scavenge_one() is also used to scavenge large >> > >> objects (in scavenge_large()), which are scavenged even in major GCs. So it >> > >> seems like we never really scavenge SRTs of large objects. This doesn't look >> > >> right to me. Am I missing anything? Can large objects not refer to static >> > >> objects? >> > >> >> > >> Thanks >> > >> >> > >> Ömer >> > >> >> > >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 >> > >> tarihinde şunu yazdı: >> > >> > >> > >> > Thanks Simon, this is really helpful. >> > >> > >> > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return >> > >> > > immediately if !major_gc. >> > >> > >> > >> > Thanks for pointing this out -- I didn't realize it's returning early when >> > >> > !major_gc and this caused a lot of confusion. Now everything makes sense. >> > >> > >> > >> > I'll add a note for scavenging SRTs and refer to it in relevant code and submit >> > >> > a diff. >> > >> > >> > >> > Ömer >> > >> > >> > >> > 2018-05-01 22:10 GMT+03:00 Simon Marlow : >> > >> > > Your explanation is basically right. 
scavenge_one() is only used for a >> > >> > > non-major collection, where we aren't traversing SRTs. Admittedly this is a >> > >> > > subtle point that could almost certainly be documented better, I probably >> > >> > > just overlooked it. >> > >> > > >> > >> > > More inline: >> > >> > > >> > >> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: >> > >> > >> >> > >> > >> I have an idea but it doesn't explain everything; >> > >> > >> >> > >> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest >> > >> > >> generation's mut_list when allocated [1]. >> > >> > >> >> > >> > >> When we're scavenging a mut_list we know we're not doing a major GC, and >> > >> > >> because mut_list of oldest generation has all the newly allocated CAFs, >> > >> > >> which >> > >> > >> will be scavenged anyway, no need to scavenge SRTs for those. >> > >> > >> >> > >> > >> Also, static objects are always evacuated to the oldest gen [2], so any >> > >> > >> CAFs >> > >> > >> that are alive but not in the mut_list of the oldest gen will stay alive >> > >> > >> after >> > >> > >> a non-major GC, again no need to scavenge SRTs to keep these alive. >> > >> > >> >> > >> > >> This also explains why it's OK to not collect static objects (and not >> > >> > >> treat >> > >> > >> them as roots) in non-major GCs. >> > >> > >> >> > >> > >> However this doesn't explain >> > >> > >> >> > >> > >> - Why it's OK to scavenge large objects with scavenge_one(). >> > >> > > >> > >> > > >> > >> > > I don't understand - perhaps you could elaborate on why you think it might >> > >> > > not be OK? Large objects are treated exactly the same as small objects with >> > >> > > respect to their lifetimes. >> > >> > > >> > >> > >> >> > >> > >> - Why we scavenge SRTs in non-major collections in other places (e.g. >> > >> > >> scavenge_block()). >> > >> > > >> > >> > > >> > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return >> > >> > > immediately if !major_gc. >> > >> > > >> > >> > >> >> > >> > >> Simon, could you say a few words about this? >> > >> > > >> > >> > > >> > >> > > Was that enough words? I have more if necessary :) >> > >> > > >> > >> > > Cheers >> > >> > > Simon >> > >> > > >> > >> > > >> > >> > >> >> > >> > >> >> > >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 >> > >> > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 >> > >> > >> >> > >> > >> Ömer >> > >> > >> >> > >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : >> > >> > >> > Hi Simon, >> > >> > >> > >> > >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It >> > >> > >> > appears that it is primarily used for remembered set entries but it's >> > >> > >> > not at all clear why this means that we can safely ignore SRTs (e.g. in >> > >> > >> > the FUN and THUNK cases). >> > >> > >> > >> > >> > >> > Can you shed some light on this? 
>> > >> > >> > >> > >> > >> > Cheers, >> > >> > >> > >> > >> > >> > - Ben >> > >> > >> > >> > >> > >> > _______________________________________________ >> > >> > >> > ghc-devs mailing list >> > >> > >> > ghc-devs at haskell.org >> > >> > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > >> > >> > >> > >> _______________________________________________ >> > >> > >> ghc-devs mailing list >> > >> > >> ghc-devs at haskell.org >> > >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > >> > >> > > >> > > >> > > > > From george.colpitts at gmail.com Fri Jun 22 16:00:17 2018 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 22 Jun 2018 10:00:17 -0600 Subject: Plan for GHC 8.6.1 In-Reply-To: <87h8o6d8p8.fsf@smart-cactus.org> References: <87h8o6d8p8.fsf@smart-cactus.org> Message-ID: Hello Will ghc 8.6.1 use llvm 6.0? The page below doesn't mention it. GHC 8.4.2 and 8.4.3 seem to work with llvm 6.0 but I haven't done extensive testing. Thanks George On Thu, Apr 19, 2018 at 9:27 PM Ben Gamari wrote: > Hello fellow lazy purists, > > With GHC 8.4.2 out the door, it is time to begin looking forward to > 8.6.1. In keeping with our six-month release schedule, this release will > be targetted for early-September, with the stable branch being cut in > mid-to-late June. > > Remarkably, this is only 6 weeks away. If you have patches that you > would like to see in 8.6.1, please do put them up on Phabricator and the > 8.6.1 status page [1] in the coming weeks to ensure that there is > sufficient time for review. > > If you have a patch which you are concerned won't make the cut-off, do > say something. > > Cheers, > > - Ben > > > [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.6.1 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Jun 22 22:59:26 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 22 Jun 2018 18:59:26 -0400 Subject: atomicWriteIORef In-Reply-To: References: Message-ID: I conjecture, but might be wrong, that on a number of architectures, there isn't a native CAS plus memory ordering stuff? eg https://en.wikipedia.org/wiki/Memory_ordering on the other hand, it might be a simple oversight (in which case good catch!) i dont have SMP memory models properly digested well enough to say which though On Fri, Jun 22, 2018 at 2:43 AM David Feuer wrote: > Currently, atomicWriteIORef is implemented using atomicModifyIORef. That > seems pretty heavy for the job, allocating closures and forcing thunks. Is > there a reason not to do it with casMutVar#? > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Jun 24 15:11:02 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 24 Jun 2018 11:11:02 -0400 Subject: Plan for GHC 8.6.1 In-Reply-To: References: <87h8o6d8p8.fsf@smart-cactus.org> Message-ID: <878t74xls8.fsf@smart-cactus.org> Moritz Angermann writes: > Hi, > > I would almost go as far as saying 8.6 will work with LLVM4-6. GHC llvm codegen > is really only dependent on the textual IR, and even there only on the parts we > use. 
This used to be an issue, where LLVMs textual IR changed quite a bit, but > it looks like it hasn't for the last few releases. > > This of course does not insulate us from bugs in LLVM itself, which might or > might not affect us. > > Maybe we can be a bit more lenient with respect to LLVM versions? > I would really prefer not to be. We put in place the current policy for a few good reasons: not only was keeping up with the syntactic changes in a compatible way tiresome, but various sets of LLVM bugs meant significantly more work during ticket triage. While the relative syntactic stability of the last few LLVM releases may reduce the relevance of the first reason, the second reason holds just as well today as it did two years ago. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From george.colpitts at gmail.com Sun Jun 24 15:36:08 2018 From: george.colpitts at gmail.com (George Colpitts) Date: Sun, 24 Jun 2018 12:36:08 -0300 Subject: Plan for GHC 8.6.1 In-Reply-To: <878t74xls8.fsf@smart-cactus.org> References: <87h8o6d8p8.fsf@smart-cactus.org> <878t74xls8.fsf@smart-cactus.org> Message-ID: I agree. So back to my original question: will ghc 8.6.1 be moving to llvm 6 from llvm 5? Thanks George On Sun, Jun 24, 2018 at 12:11 PM Ben Gamari wrote: > Moritz Angermann writes: > > > Hi, > > > > I would almost go as far as saying 8.6 will work with LLVM4-6. GHC llvm > codegen > > is really only dependent on the textual IR, and even there only on the > parts we > > use. This used to be an issue, where LLVMs textual IR changed quite a > bit, but > > it looks like it hasn't for the last few releases. > > > > This of course does not insulate us from bugs in LLVM itself, which > might or > > might not affect us. > > > > Maybe we can be a bit more lenient with respect to LLVM versions? > > > I would really prefer not to be. We put in place the current policy for > a few good reasons: not only was keeping up with the syntactic changes > in a compatible way tiresome, but various sets of LLVM bugs meant > significantly more work during ticket triage. > > While the relative syntactic stability of the last few LLVM releases may > reduce the relevance of the first reason, the second reason holds just > as well today as it did two years ago. > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Jun 24 16:55:57 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 24 Jun 2018 12:55:57 -0400 Subject: Plan for GHC 8.6.1 In-Reply-To: References: <87h8o6d8p8.fsf@smart-cactus.org> <878t74xls8.fsf@smart-cactus.org> Message-ID: <876028xgx4.fsf@smart-cactus.org> George Colpitts writes: > I agree. > > So back to my original question: will ghc 8.6.1 be moving to llvm 6 from > llvm 5? > Ahh, whoops, missed the original question! 8.6 will use LLVM 6.0. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nicolas.frisby at gmail.com Sun Jun 24 18:00:46 2018 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Sun, 24 Jun 2018 11:00:46 -0700 Subject: Can a TC plugin create new skolem vars when simplifying Givens? 
Message-ID: I'm still spending the occasional weekend working on a type checker plugin for row types (actually "set" types at first, but I haven't thought of a less ambiguous term for that). One point of complexity in the plugin has to do with creating fresh variables when simplifying Givens. Some constraints are traditionally simplified by introducing a fresh variable. For Wanted constraints, that's easy (newFlexiTyVar). For Givens, though, I haven't figured out how to do it. This email is just to ask these two questions: 1) Is there a function to add a new skolem variable when simplifying Givens? 2) Assuming not, is there a strong reason for there to never be such a function? Here's a small indicative example. In the "simplify Givens" step, the plugin receives [G] (p `Union` Singleton A) ~ (q `Union` Singleton B) and I would ideally simplify that to [G] p ~ (x `Union` Singleton B) [G] q ~ (x `Union` Singleton A) for some fresh skolem variable x. But I don't see how to properly create a fresh skolem variable in the Givens. If these were Wanteds, I would just use newFlexiTyVar. I think this is analogous to a hypothetical type checker plugin that eta expands tuples. If we were to simplify [G] ... (x :: (k1,k2)) ... to [G] ... '(x1 :: k1,x2 :: k2) ... we'd have to generate x1 and x2 somehow. The only method I'm aware of for that is to use Fst x and Snd x instead (ie type families). That might be acceptable for the tuple expansion example, but I'm very reticent to use something like that for the set types plugin. I have a plan to get by without creating these variables when simplifying Givens, but it's not simple. I'd be delighted if it were possible to create them. Hence my two questions listed above. Thank you for your time. -Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From kovanikov at gmail.com Mon Jun 25 08:34:50 2018 From: kovanikov at gmail.com (Dmitriy Kovanikov) Date: Mon, 25 Jun 2018 16:34:50 +0800 Subject: GHC API question: resolving dependencies for modules In-Reply-To: References: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Message-ID: Thanks a lot for your suggestion! I’ve looked into GHC source plugins. And, specifically, into your `hashtag-coerce` example: * https://github.com/mpickering/hashtag-coerce As far as I can see, this allows only to analyse source code (to produce warnings or errors). While my actual goal is actually to refactor code automatically. And I would like to avoid full compilation process to make it work faster. Thanks, Dmitrii > On 21 Jun 2018, at 5:51 PM, Matthew Pickering wrote: > > This doesn't answer your question directly but if you want to gather > information about a module then using a source plugin would probably > be easier and more robust than using the GHC API. > > You need to write a function of type: > > ``` > ModSummary -> TcGblEnv -> TcM TcGblEnv > ``` > > In `TcGblEnv` you will find `tcg_rdr_env` which contains all top-level > things and describes how they came to be in scope. > > Source plugins will be in GHC 8.6. > > Cheers, > > Matt > > > On Thu, Jun 21, 2018 at 10:06 AM, Dmitriy Kovanikov wrote: >> Hello! >> >> I’m trying to use GHC as a library. And my goal is to be able to gather >> information about where each function or data type came from. I’ve started >> by simply calling `getNamesInScope` function and observing its result. 
Here >> is my code: >> >> * Main.hs: https://lpaste.net/9026688686753841152 >> >> And here is the code for my test modules: >> >> * test/X.hs: https://lpaste.net/6844657232357883904 >> * test/Y.hs: https://lpaste.net/8673289058127970304 >> >> Unfortunately, my implementation doesn't work since I’m not very familiar >> with GHC API. >> And I see the following errors after executing my `Main.hs` file (I’m using >> ghc-8.2.2): >> >> * error messages: https://lpaste.net/3316737208131518464 >> >> Could you please point me to places or parts of GHC API or some >> documentation about module dependencies and how to make ghc see imports of >> other modules? I can’t find simple and small enough usage example of this >> part of the library. >> >> Thanks in advance, >> Dmitrii Kovanikov >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Jun 25 17:58:48 2018 From: ben at well-typed.com (Ben Gamari) Date: Mon, 25 Jun 2018 13:58:48 -0400 Subject: GHC API question: resolving dependencies for modules In-Reply-To: References: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Message-ID: <87lgb2wxwu.fsf@smart-cactus.org> Dmitriy Kovanikov writes: > Thanks a lot for your suggestion! I’ve looked into GHC source plugins. > And, specifically, into your `hashtag-coerce` example: > > * https://github.com/mpickering/hashtag-coerce > > As far as I can see, this allows only to analyse source code (to > produce warnings or errors). It depends upon which type of source plugin you are using. There are a few points in the compilation pipeline that source plugins allow you to plug in to. Some of these allow the plugin to modify the AST while others only allow inspection. See the users guide [1] for details. Cheers, - Ben [1] https://github.com/ghc/ghc/blob/master/docs/users_guide/extending_ghc.rst#source-plugins -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Mon Jun 25 21:08:53 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 25 Jun 2018 22:08:53 +0100 Subject: GHC API question: resolving dependencies for modules In-Reply-To: References: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Message-ID: I'm sorry of my poor form for not really answering your question here but it's because no one really knows how to use the GHC API. If you are stuck on this path then you could look at how GHC uses the API or an existing user of the API like haddock or haskell-indexer. However, these solutions are not going to be as robust as using a plugin and you will end up with writing a lot of code probably in order to get it to work. Cheers, Matt On Mon, Jun 25, 2018 at 9:34 AM, Dmitriy Kovanikov wrote: > Thanks a lot for your suggestion! I’ve looked into GHC source plugins. > And, specifically, into your `hashtag-coerce` example: > > * https://github.com/mpickering/hashtag-coerce > > As far as I can see, this allows only to analyse source code (to produce > warnings or errors). > While my actual goal is actually to refactor code automatically. And I would > like to avoid full compilation process to make it work faster. 
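(A rough sketch of the source-plugin route discussed in this thread, assuming the GHC 8.6 Plugins API: a type-checker plugin that dumps the names recorded in `tcg_rdr_env` after type checking. The module name ScopePlugin and the exact imports are illustrative and may need adjusting for your GHC version.)

module ScopePlugin (plugin) where

import Control.Monad.IO.Class (liftIO)
import Plugins                (CommandLineOption, Plugin (..), defaultPlugin)
import HscTypes               (ModSummary (..))
import TcRnTypes              (TcGblEnv (..), TcM)
import RdrName                (GlobalRdrElt (..), globalRdrEnvElts)
import Module                 (moduleName, moduleNameString)
import Outputable             (ppr, showSDocUnsafe)

plugin :: Plugin
plugin = defaultPlugin { typeCheckResultAction = printScope }

-- Runs after type checking.  tcg_rdr_env holds every name that is in
-- scope in the module, together with how it came to be in scope.
printScope :: [CommandLineOption] -> ModSummary -> TcGblEnv -> TcM TcGblEnv
printScope _opts ms tcg = do
  let gres = globalRdrEnvElts (tcg_rdr_env tcg)
  liftIO $ putStrLn ("Names in scope in "
                     ++ moduleNameString (moduleName (ms_mod ms)) ++ ":")
  liftIO $ mapM_ (putStrLn . showSDocUnsafe . ppr . gre_name) gres
  return tcg

Compiling any module with -fplugin=ScopePlugin should then print its top-level scope, which is a starting point for this kind of analysis without driving the GHC API by hand.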
> > Thanks, > Dmitrii > > > On 21 Jun 2018, at 5:51 PM, Matthew Pickering > wrote: > > This doesn't answer your question directly but if you want to gather > information about a module then using a source plugin would probably > be easier and more robust than using the GHC API. > > You need to write a function of type: > > ``` > ModSummary -> TcGblEnv -> TcM TcGblEnv > ``` > > In `TcGblEnv` you will find `tcg_rdr_env` which contains all top-level > things and describes how they came to be in scope. > > Source plugins will be in GHC 8.6. > > Cheers, > > Matt > > > On Thu, Jun 21, 2018 at 10:06 AM, Dmitriy Kovanikov > wrote: > > Hello! > > I’m trying to use GHC as a library. And my goal is to be able to gather > information about where each function or data type came from. I’ve started > by simply calling `getNamesInScope` function and observing its result. Here > is my code: > > * Main.hs: https://lpaste.net/9026688686753841152 > > And here is the code for my test modules: > > * test/X.hs: https://lpaste.net/6844657232357883904 > * test/Y.hs: https://lpaste.net/8673289058127970304 > > Unfortunately, my implementation doesn't work since I’m not very familiar > with GHC API. > And I see the following errors after executing my `Main.hs` file (I’m using > ghc-8.2.2): > > * error messages: https://lpaste.net/3316737208131518464 > > Could you please point me to places or parts of GHC API or some > documentation about module dependencies and how to make ghc see imports of > other modules? I can’t find simple and small enough usage example of this > part of the library. > > Thanks in advance, > Dmitrii Kovanikov > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From omeragacan at gmail.com Tue Jun 26 06:54:30 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 26 Jun 2018 09:54:30 +0300 Subject: How do I add ghc-prim as a dep for ghc? Message-ID: I'm trying to add ghc-prim as a dependency to the ghc package. So far I've done these changes: diff --git a/compiler/ghc.cabal.in b/compiler/ghc.cabal.in index 01628dcad1..b9c3b3d02b 100644 --- a/compiler/ghc.cabal.in +++ b/compiler/ghc.cabal.in @@ -65,7 +65,8 @@ Library ghc-boot == @ProjectVersionMunged@, ghc-boot-th == @ProjectVersionMunged@, ghc-heap == @ProjectVersionMunged@, - ghci == @ProjectVersionMunged@ + ghci == @ProjectVersionMunged@, + ghc-prim if os(windows) Build-Depends: Win32 >= 2.3 && < 2.7 diff --git a/ghc.mk b/ghc.mk index c0b99c00f4..26c6e86c02 100644 --- a/ghc.mk +++ b/ghc.mk @@ -420,7 +420,8 @@ else # CLEANING # programs such as GHC and ghc-pkg, that we do not assume the stage0 # compiler already has installed (or up-to-date enough). -PACKAGES_STAGE0 = binary text transformers mtl parsec Cabal/Cabal hpc ghc-boot-th ghc-boot template-haskell ghc-heap ghci +PACKAGES_STAGE0 = binary text transformers mtl parsec Cabal/Cabal hpc \ + ghc-boot-th ghc-boot template-haskell ghc-heap ghci ghc-prim ifeq "$(Windows_Host)" "NO" PACKAGES_STAGE0 += terminfo endif But I'm getting this error: ghc-cabal: Encountered missing dependencies: ghc-prim ==0.5.3 Any ideas what else to edit? Thanks, Ömer From omeragacan at gmail.com Tue Jun 26 06:57:49 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 26 Jun 2018 09:57:49 +0300 Subject: Booting ghc with system-wide installed Cabal? Message-ID: Currently we have to build Cabal from scratch after every make clean. 
Ideally I should be able to skip this step by installing the correct versions of Cabal and cabal-install system-wide, but as far as I can see we currently doesn't support this. Any ideas on how to make this work? Thanks, Ömer From metaniklas at gmail.com Tue Jun 26 07:26:55 2018 From: metaniklas at gmail.com (Niklas Larsson) Date: Tue, 26 Jun 2018 09:26:55 +0200 Subject: Booting ghc with system-wide installed Cabal? In-Reply-To: References: Message-ID: Installing stuff system-wide without doing ‘make install’ would break my expectations for how the build works. Also, how would one return to a pristine state if it was done that way? // Niklas > 26 juni 2018 kl. 08:57 skrev Ömer Sinan Ağacan : > > Currently we have to build Cabal from scratch after every make clean. Ideally I > should be able to skip this step by installing the correct versions of Cabal > and cabal-install system-wide, but as far as I can see we currently doesn't > support this. Any ideas on how to make this work? > > Thanks, > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Tue Jun 26 07:31:50 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 26 Jun 2018 10:31:50 +0300 Subject: Booting ghc with system-wide installed Cabal? In-Reply-To: References: Message-ID: We don't have to break anyone's workflow. We could introduce an ENV var or a flag for this. > Also, how would one return to a pristine state if it was done that way? By unsetting the ENV var or not using the flag. Ömer Niklas Larsson , 26 Haz 2018 Sal, 10:26 tarihinde şunu yazdı: > > Installing stuff system-wide without doing ‘make install’ would break my expectations for how the build works. Also, how would one return to a pristine state if it was done that way? > > // Niklas > > > 26 juni 2018 kl. 08:57 skrev Ömer Sinan Ağacan : > > > > Currently we have to build Cabal from scratch after every make clean. Ideally I > > should be able to skip this step by installing the correct versions of Cabal > > and cabal-install system-wide, but as far as I can see we currently doesn't > > support this. Any ideas on how to make this work? > > > > Thanks, > > > > Ömer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Tue Jun 26 08:59:46 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 26 Jun 2018 08:59:46 +0000 Subject: Strace In-Reply-To: References: Message-ID: Great. https://ghc.haskell.org/trac/ghc/ticket/15313#ticket is created. I don’t know how to force them to be “cpu multirace on windows”. If you could do that sometime it’d be great. No rush. Thank you! Simon From: Phyx Sent: 26 June 2018 06:01 To: Simon Peyton Jones Subject: Re: Strace Hi Simon, Thanks for the log, that does give a clue. The command fails with setup.exe: 'C:/code/HEAD/inplace/bin/ghc-pkg.exe' exited with an error: ... 
rule-defining-plugin-0.1: cannot find any of ["libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z.a","libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z.p_a","libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.so","libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.dylib","HSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.dll"] on library path (use --force to override) make[2]: *** [Makefile:18: package.T10420] Error 1 so it's the `setup install` from https://github.com/ghc/ghc/blob/c2783ccf545faabd21a234a4dfc569cd856082b9/testsuite/tests/plugins/rule-defining-plugin/Makefile failing. unfortunately, all those tests run with -v0 which is annoying because now the verbosity of the testsuite doesn't control that of these tests. I'm not sure why these commands fail under heavy load though. I'll need to dive into the source of ghc-pkg to figure out what's happening. Notice that all the framework failures are these plugin tests which modify a package database. A wild guess is that ghc-pkg tries to take a lock on all package-databases or something when it's mutating one. But I'm not intimately familiar with the package store and this doesn't explain why it doesn't happen on Linux. for now one solution I can propose is to create a ticket to track these and mark these tests as cpu multirace on Windows, which will force them to run sequentially. I'll try to take a look at ghc-pkg this week and if I don't figure anything out I'll force the tests sequential on the short term. Cheers, Tamar On Mon, Jun 25, 2018 at 4:08 AM, Simon Peyton Jones > wrote: Tamar I tried this TEST_VERBOSITY="VERBOSE=3" sh validate --fast --no-clean >& log in the root directory. I get the framework failures, but I’m not sure the verbosity-control worked. Log attached SImon -------------- next part -------------- An HTML attachment was scrubbed... URL: From zubin.duggal at gmail.com Tue Jun 26 10:48:24 2018 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Tue, 26 Jun 2018 16:18:24 +0530 Subject: Update on HIE Files Message-ID: Hello all, I've been working on the HIE File ( https://ghc.haskell.org/trac/ghc/wiki/HIEFiles) GSOC project, The design of the data structure as well as the traversal of GHCs ASTs to collect all the relevant info is mostly complete. We traverse the Renamed and Typechecked AST to collect the following info about each SrcSpan 1) Its type, if it corresponds to a binding, pattern or expression 2) Details about any tokens in the original source corresponding to this span(keywords, symbols, etc.) 3) The set of Constructor/Type pairs that correspond to this span in the GHC AST 4) Details about all the identifiers that occur at this SrcSpan For each occurrence of an identifier(Name or ModuleName), we store its type(if it has one), and classify it as one of the following based on how it occurs: 1) Use 2) Import/Export 3) Pattern Binding, along with the scope of the binding, and the span of the entire binding location(including the RHS) if it occurs as part of a top level declaration, do binding or let/where binding 4) Value Binding, along with whether it is an instance binding or not, its scope, and the span of its entire binding site, including the RHS 5) Type Declaration (class or regular) (foo :: ...) 6) Declaration(class, type, instance, data, type family etc.) 
7) Type variable binding, along with its scope(which takes into account ScopedTypeVariables) I have updated the wiki page with more details about the Scopes associated with bindings: https://ghc.haskell.org/trac/ghc/wiki/HIEFiles#Scopeinformationaboutsymbols These annotated SrcSpans are then arranged into a interval/rose tree to aid lookups. We assume that no SrcSpans ever partially overlap, for any two SrcSpans that occur in the Renamed/Typechecked ASTs, either they are equal, disjoint, or strictly contained in each other. This assumption has mostly held out so far while testing on the entire ghc:HEAD tree, other than one case where the typechecker strips out parenthesis in the original source, which has been patched(see https://ghc.haskell.org/trac/ghc/ticket/15242). I have also written functions that lookup the binding site(including RHS) and scope of an identifier from the tree. Testing these functions on the ghc:HEAD tree, it succeeds in looking up scopes for almost all symbol occurrences in all source files, and I've also verified that the calculated scope always contains all the occurrences of the symbol. The few cases where this check fails is where the SrcSpans have been mangled by CPP(see https://ghc.haskell.org/trac/ghc/ticket/15279). The code for this currently lives here: https://github.com/haskell/haddock/compare/ghc-head...wz1000:hiefile-2 Moving forward, the plan for the rest of the summer is 1) Move this into the GHC tree and add a flag that controls generating this 2) Write serializers and deserializers for this info 3) Teach the GHC PackageDb about .hie files 4) Rewrite haddocks --hyperlinked-source to use .hie files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jun 26 10:53:36 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 26 Jun 2018 11:53:36 +0100 Subject: Update on HIE Files In-Reply-To: References: Message-ID: Have you considered how this feature interacts with source plugins? Could the generation of these files be implemented as a source plugin? That would mean that development of the feature would not be coupled to GHC releases. Cheers, Matt On Tue, Jun 26, 2018 at 11:48 AM, Zubin Duggal wrote: > Hello all, > > I've been working on the HIE File > (https://ghc.haskell.org/trac/ghc/wiki/HIEFiles) GSOC project, > > The design of the data structure as well as the traversal of GHCs ASTs to > collect all the relevant info is mostly complete. > > We traverse the Renamed and Typechecked AST to collect the following info > about each SrcSpan > > 1) Its type, if it corresponds to a binding, pattern or expression > 2) Details about any tokens in the original source corresponding to this > span(keywords, symbols, etc.) > 3) The set of Constructor/Type pairs that correspond to this span in the GHC > AST > 4) Details about all the identifiers that occur at this SrcSpan > > For each occurrence of an identifier(Name or ModuleName), we store its > type(if it has one), and classify it as one of the following based on how it > occurs: > > 1) Use > 2) Import/Export > 3) Pattern Binding, along with the scope of the binding, and the span of the > entire binding location(including the RHS) if it occurs as part of a top > level declaration, do binding or let/where binding > 4) Value Binding, along with whether it is an instance binding or not, its > scope, and the span of its entire binding site, including the RHS > 5) Type Declaration (class or regular) (foo :: ...) 
> 6) Declaration(class, type, instance, data, type family etc.) > 7) Type variable binding, along with its scope(which takes into account > ScopedTypeVariables) > > I have updated the wiki page with more details about the Scopes associated > with bindings: > https://ghc.haskell.org/trac/ghc/wiki/HIEFiles#Scopeinformationaboutsymbols > > These annotated SrcSpans are then arranged into a interval/rose tree to aid > lookups. > > We assume that no SrcSpans ever partially overlap, for any two SrcSpans that > occur in the Renamed/Typechecked ASTs, either they are equal, disjoint, or > strictly contained in each other. This assumption has mostly held out so far > while testing on the entire ghc:HEAD tree, other than one case where the > typechecker strips out parenthesis in the original source, which has been > patched(see https://ghc.haskell.org/trac/ghc/ticket/15242). > > I have also written functions that lookup the binding site(including RHS) > and scope of an identifier from the tree. Testing these functions on the > ghc:HEAD tree, it succeeds in looking up scopes for almost all symbol > occurrences in all source files, and I've also verified that the calculated > scope always contains all the occurrences of the symbol. The few cases where > this check fails is where the SrcSpans have been mangled by CPP(see > https://ghc.haskell.org/trac/ghc/ticket/15279). > > The code for this currently lives here: > https://github.com/haskell/haddock/compare/ghc-head...wz1000:hiefile-2 > > Moving forward, the plan for the rest of the summer is > > 1) Move this into the GHC tree and add a flag that controls generating this > 2) Write serializers and deserializers for this info > 3) Teach the GHC PackageDb about .hie files > 4) Rewrite haddocks --hyperlinked-source to use .hie files. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From zubin.duggal at gmail.com Tue Jun 26 11:09:22 2018 From: zubin.duggal at gmail.com (Zubin Duggal) Date: Tue, 26 Jun 2018 16:39:22 +0530 Subject: Update on HIE Files In-Reply-To: References: Message-ID: Hey Matt, In principle, there should be no problem interacting with source plugins, or implementing this as a source plugin, given that the generating function has type: enrichHie :: GhcMonad m => TypecheckedSource -> RenamedSource -> m (HieAST Type) The only reason the GhcMonad constraint is necessary and this is not a pure function is because desugarExpr has type deSugarExpr :: HscEnv -> LHsExpr GhcTc -> IO (Messages, Maybe CoreExpr) So we need a GhcMonad to get the HscEnv. We need to desugar expressions to get their Type. However, in a private email with Németh Boldizsár regarding implementing this as a source plugin, I had the following concerns: 1. Since HIE files are going to be used for haddock generation, and haddock is a pretty important part of the haskell ecosystem, GHC should be able to produce them by default without needing to install anything else. 2. Integrating HIE file generation into GHC itself will push the burden of maintaining support to whoever makes breaking changes to GHC, instead of whoever ends up maintaining the source plugin. This way, HIE files can be a first class citizen and evolve with GHC. 3. Concerns about portability of source plugins - it should work at least wherever haddock can currently work 4. I believe there are some issues with how plugins interact with GHCs recompilation avoidance? 
Given that HIE files are also meant to be used for interactive usage via haskell-ide-engine, this is a pretty big deal breaker. I understand (4) has been solved now, but the first three still remain. On 26 June 2018 at 16:23, Matthew Pickering wrote: > Have you considered how this feature interacts with source plugins? > > Could the generation of these files be implemented as a source plugin? > That would mean that development of the feature would not be coupled > to GHC releases. > > Cheers, > > Matt > > On Tue, Jun 26, 2018 at 11:48 AM, Zubin Duggal > wrote: > > Hello all, > > > > I've been working on the HIE File > > (https://ghc.haskell.org/trac/ghc/wiki/HIEFiles) GSOC project, > > > > The design of the data structure as well as the traversal of GHCs ASTs to > > collect all the relevant info is mostly complete. > > > > We traverse the Renamed and Typechecked AST to collect the following info > > about each SrcSpan > > > > 1) Its type, if it corresponds to a binding, pattern or expression > > 2) Details about any tokens in the original source corresponding to this > > span(keywords, symbols, etc.) > > 3) The set of Constructor/Type pairs that correspond to this span in the > GHC > > AST > > 4) Details about all the identifiers that occur at this SrcSpan > > > > For each occurrence of an identifier(Name or ModuleName), we store its > > type(if it has one), and classify it as one of the following based on > how it > > occurs: > > > > 1) Use > > 2) Import/Export > > 3) Pattern Binding, along with the scope of the binding, and the span of > the > > entire binding location(including the RHS) if it occurs as part of a top > > level declaration, do binding or let/where binding > > 4) Value Binding, along with whether it is an instance binding or not, > its > > scope, and the span of its entire binding site, including the RHS > > 5) Type Declaration (class or regular) (foo :: ...) > > 6) Declaration(class, type, instance, data, type family etc.) > > 7) Type variable binding, along with its scope(which takes into account > > ScopedTypeVariables) > > > > I have updated the wiki page with more details about the Scopes > associated > > with bindings: > > https://ghc.haskell.org/trac/ghc/wiki/HIEFiles# > Scopeinformationaboutsymbols > > > > These annotated SrcSpans are then arranged into a interval/rose tree to > aid > > lookups. > > > > We assume that no SrcSpans ever partially overlap, for any two SrcSpans > that > > occur in the Renamed/Typechecked ASTs, either they are equal, disjoint, > or > > strictly contained in each other. This assumption has mostly held out so > far > > while testing on the entire ghc:HEAD tree, other than one case where the > > typechecker strips out parenthesis in the original source, which has been > > patched(see https://ghc.haskell.org/trac/ghc/ticket/15242). > > > > I have also written functions that lookup the binding site(including RHS) > > and scope of an identifier from the tree. Testing these functions on the > > ghc:HEAD tree, it succeeds in looking up scopes for almost all symbol > > occurrences in all source files, and I've also verified that the > calculated > > scope always contains all the occurrences of the symbol. The few cases > where > > this check fails is where the SrcSpans have been mangled by CPP(see > > https://ghc.haskell.org/trac/ghc/ticket/15279). 
> > > > The code for this currently lives here: > > https://github.com/haskell/haddock/compare/ghc-head...wz1000:hiefile-2 > > > > Moving forward, the plan for the rest of the summer is > > > > 1) Move this into the GHC tree and add a flag that controls generating > this > > 2) Write serializers and deserializers for this info > > 3) Teach the GHC PackageDb about .hie files > > 4) Rewrite haddocks --hyperlinked-source to use .hie files. > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Tue Jun 26 13:53:34 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 26 Jun 2018 16:53:34 +0300 Subject: Scavenging SRTs in scavenge_one In-Reply-To: References: <877epwi7bb.fsf@smart-cactus.org> Message-ID: Documented in https://phabricator.haskell.org/D4893 Ömer Ömer Sinan Ağacan , 22 Haz 2018 Cum, 09:54 tarihinde şunu yazdı: > > OK, finally everything makes sense I think. I was very confused by the code and > previous emails where you said: > > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by > > the RTS, and none of these have SRTs. > > I was pointing out that this is not entirely correct; we allocate large stacks. > But as you say scavenge_one() handles that case by scavenging stack SRTs. > > So in summary: > > - scavenge_one() is called to scavenge mut_lists and large objects. > - When scavenging mut_lists no need to scaveng SRTs (see previous emails) > - When scavenging large objects we know that certain objects can't be large > (i.e. FUN, THUNK), but some others can (i.e. STACK), so scavenge_one() > scavenges stack SRTs but does not scavenge FUN and THUNK SRTs. > > Ömer > > Simon Marlow , 21 Haz 2018 Per, 21:27 tarihinde şunu yazdı: > > > > When scavenge_one() sees a STACK, it calls scavenge_stack() which traverses the stack frames, including their SRTs. > > > > So I don't understand what's going wrong for you - how are the SRTs not being traversed? > > > > Cheers > > Simon > > > > On 21 June 2018 at 11:58, Ömer Sinan Ağacan wrote: > >> > >> Here's an example where we allocate a large (4K) stack: > >> > >> >>> bt > >> #0 allocateMightFail (cap=0x7f366808cfc0 , > >> n=4096) at rts/sm/Storage.c:876 > >> #1 0x00007f3667e4a85d in allocate (cap=0x7f366808cfc0 > >> , n=4096) at rts/sm/Storage.c:849 > >> #2 0x00007f3667e16f46 in threadStackOverflow (cap=0x7f366808cfc0 > >> , tso=0x4200152a68) at rts/Threads.c:600 > >> #3 0x00007f3667e12a64 in schedule > >> (initialCapability=0x7f366808cfc0 , task=0x78c970) at > >> rts/Schedule.c:520 > >> #4 0x00007f3667e1215f in scheduleWaitThread (tso=0x4200105388, > >> ret=0x0, pcap=0x7ffef40dce78) at rts/Schedule.c:2533 > >> #5 0x00007f3667e25685 in rts_evalLazyIO (cap=0x7ffef40dce78, > >> p=0x736ef8, ret=0x0) at rts/RtsAPI.c:530 > >> #6 0x00007f3667e25f7a in hs_main (argc=16, argv=0x7ffef40dd0a8, > >> main_closure=0x736ef8, rts_config=...) t rts/RtsMain.c:72 > >> #7 0x00000000004f738f in main () > >> > >> This is based on an old tree so source locations may not be correct, it's this > >> code in threadStackOverflow(): > >> > >> // Charge the current thread for allocating stack. 
Stack usage is > >> // non-deterministic, because the chunk boundaries might vary from > >> // run to run, but accounting for this is better than not > >> // accounting for it, since a deep recursion will otherwise not be > >> // subject to allocation limits. > >> cap->r.rCurrentTSO = tso; > >> new_stack = (StgStack*) allocate(cap, chunk_size); > >> cap->r.rCurrentTSO = NULL; > >> > >> SET_HDR(new_stack, &stg_STACK_info, old_stack->header.prof.ccs); > >> TICK_ALLOC_STACK(chunk_size); > >> > >> Ömer > >> Ömer Sinan Ağacan , 21 Haz 2018 Per, 13:42 > >> tarihinde şunu yazdı: > >> > > >> > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by > >> > > the RTS, and none of these have SRTs. > >> > > >> > Is is not possible to allocate a large STACK? I'm currently observing this in > >> > gdb: > >> > > >> > >>> call *Bdescr(0x4200ec9000) > >> > $2 = { > >> > start = 0x4200ec9000, > >> > free = 0x4200ed1000, > >> > link = 0x4200100e80, > >> > u = { > >> > back = 0x4200103980, > >> > bitmap = 0x4200103980, > >> > scan = 0x4200103980 > >> > }, > >> > gen = 0x77b4b8, > >> > gen_no = 1, > >> > dest_no = 1, > >> > node = 0, > >> > flags = 1027, <-- BF_LARGE | BF_EVACUTED | ... > >> > blocks = 8, > >> > _padding = {[0] = 0, [1] = 0, [2] = 0} > >> > } > >> > > >> > >>> call printClosure(0x4200ec9000) > >> > 0x4200ec9000: STACK > >> > > >> > >>> call checkClosure(0x4200ec9000) > >> > $3 = 4096 -- makes sense, larger than 3277 bytes > >> > > >> > So I have a large STACK object, and STACKs can refer to static objects. But > >> > when we scavenge this object we don't scavenge its SRTs because we use > >> > scavenge_one(). This seems wrong to me. > >> > > >> > Ömer > >> > > >> > Simon Marlow , 20 Haz 2018 Çar, 14:32 tarihinde şunu yazdı: > >> > > > >> > > Interesting point. I don't think there are any large objects with SRTs, but we should document the invariant because we're relying on it. > >> > > > >> > > Large objects can only be primitive objects, like MUT_ARR_PTRS, allocated by the RTS, and none of these have SRTs. > >> > > > >> > > We did have plans to allocate memory for large dynamic objects using `allocate()` from compiled code, in which case we could have large objects that could be THUNK, FUN, etc. and could have an SRT, in which case we would need to revisit this. You might want to take a look at Note [big objects] in GCUtils.c, which is relevant here. > >> > > > >> > > Cheers > >> > > Simon > >> > > > >> > > > >> > > On 20 June 2018 at 09:20, Ömer Sinan Ağacan wrote: > >> > >> > >> > >> Hi Simon, > >> > >> > >> > >> I'm confused about this code again. You said > >> > >> > >> > >> > scavenge_one() is only used for a non-major collection, where we aren't > >> > >> > traversing SRTs. > >> > >> > >> > >> But I think this is not true; scavenge_one() is also used to scavenge large > >> > >> objects (in scavenge_large()), which are scavenged even in major GCs. So it > >> > >> seems like we never really scavenge SRTs of large objects. This doesn't look > >> > >> right to me. Am I missing anything? Can large objects not refer to static > >> > >> objects? > >> > >> > >> > >> Thanks > >> > >> > >> > >> Ömer > >> > >> > >> > >> Ömer Sinan Ağacan , 2 May 2018 Çar, 09:03 > >> > >> tarihinde şunu yazdı: > >> > >> > > >> > >> > Thanks Simon, this is really helpful. > >> > >> > > >> > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return > >> > >> > > immediately if !major_gc. 
> >> > >> > > >> > >> > Thanks for pointing this out -- I didn't realize it's returning early when > >> > >> > !major_gc and this caused a lot of confusion. Now everything makes sense. > >> > >> > > >> > >> > I'll add a note for scavenging SRTs and refer to it in relevant code and submit > >> > >> > a diff. > >> > >> > > >> > >> > Ömer > >> > >> > > >> > >> > 2018-05-01 22:10 GMT+03:00 Simon Marlow : > >> > >> > > Your explanation is basically right. scavenge_one() is only used for a > >> > >> > > non-major collection, where we aren't traversing SRTs. Admittedly this is a > >> > >> > > subtle point that could almost certainly be documented better, I probably > >> > >> > > just overlooked it. > >> > >> > > > >> > >> > > More inline: > >> > >> > > > >> > >> > > On 1 May 2018 at 10:26, Ömer Sinan Ağacan wrote: > >> > >> > >> > >> > >> > >> I have an idea but it doesn't explain everything; > >> > >> > >> > >> > >> > >> SRTs are used to collect CAFs, and CAFs are always added to the oldest > >> > >> > >> generation's mut_list when allocated [1]. > >> > >> > >> > >> > >> > >> When we're scavenging a mut_list we know we're not doing a major GC, and > >> > >> > >> because mut_list of oldest generation has all the newly allocated CAFs, > >> > >> > >> which > >> > >> > >> will be scavenged anyway, no need to scavenge SRTs for those. > >> > >> > >> > >> > >> > >> Also, static objects are always evacuated to the oldest gen [2], so any > >> > >> > >> CAFs > >> > >> > >> that are alive but not in the mut_list of the oldest gen will stay alive > >> > >> > >> after > >> > >> > >> a non-major GC, again no need to scavenge SRTs to keep these alive. > >> > >> > >> > >> > >> > >> This also explains why it's OK to not collect static objects (and not > >> > >> > >> treat > >> > >> > >> them as roots) in non-major GCs. > >> > >> > >> > >> > >> > >> However this doesn't explain > >> > >> > >> > >> > >> > >> - Why it's OK to scavenge large objects with scavenge_one(). > >> > >> > > > >> > >> > > > >> > >> > > I don't understand - perhaps you could elaborate on why you think it might > >> > >> > > not be OK? Large objects are treated exactly the same as small objects with > >> > >> > > respect to their lifetimes. > >> > >> > > > >> > >> > >> > >> > >> > >> - Why we scavenge SRTs in non-major collections in other places (e.g. > >> > >> > >> scavenge_block()). > >> > >> > > > >> > >> > > > >> > >> > > If you look at scavenge_fun_srt() and co, you'll see that they return > >> > >> > > immediately if !major_gc. > >> > >> > > > >> > >> > >> > >> > >> > >> Simon, could you say a few words about this? > >> > >> > > > >> > >> > > > >> > >> > > Was that enough words? I have more if necessary :) > >> > >> > > > >> > >> > > Cheers > >> > >> > > Simon > >> > >> > > > >> > >> > > > >> > >> > >> > >> > >> > >> > >> > >> > >> [1]: https://github.com/ghc/ghc/blob/master/rts/sm/Storage.c#L445-L449 > >> > >> > >> [2]: https://github.com/ghc/ghc/blob/master/rts/sm/Scav.c#L1761-L1763 > >> > >> > >> > >> > >> > >> Ömer > >> > >> > >> > >> > >> > >> 2018-03-28 17:49 GMT+03:00 Ben Gamari : > >> > >> > >> > Hi Simon, > >> > >> > >> > > >> > >> > >> > I'm a bit confused by scavenge_one; namely it doesn't scavenge SRTs. It > >> > >> > >> > appears that it is primarily used for remembered set entries but it's > >> > >> > >> > not at all clear why this means that we can safely ignore SRTs (e.g. in > >> > >> > >> > the FUN and THUNK cases). > >> > >> > >> > > >> > >> > >> > Can you shed some light on this? 
> >> > >> > >> > > >> > >> > >> > Cheers, > >> > >> > >> > > >> > >> > >> > - Ben > >> > >> > >> > > >> > >> > >> > _______________________________________________ > >> > >> > >> > ghc-devs mailing list > >> > >> > >> > ghc-devs at haskell.org > >> > >> > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > >> > >> > > >> > >> > >> _______________________________________________ > >> > >> > >> ghc-devs mailing list > >> > >> > >> ghc-devs at haskell.org > >> > >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > >> > > > >> > >> > > > >> > > > >> > > > > > > From simonpj at microsoft.com Tue Jun 26 14:18:53 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 26 Jun 2018 14:18:53 +0000 Subject: GHC API question: resolving dependencies for modules In-Reply-To: References: <631D1530-6D08-479F-822D-320DD3C33A97@gmail.com> Message-ID: | I'm sorry of my poor form for not really answering your question here but | it's because no one really knows how to use the GHC API. That may be true, but it's alarming if true. The GHC API has grown rather than being consciously and carefully designed. It would be Really Good if someone cares about using the API would like to write down the API they'd *like*, and agree it with others. Then we could implement it! Simon | -----Original Message----- | From: ghc-devs On Behalf Of Matthew Pickering | Sent: 25 June 2018 22:09 | To: Dmitriy Kovanikov | Cc: GHC developers | Subject: Re: GHC API question: resolving dependencies for modules | | If you are stuck on this path then you could look at how GHC uses the API or | an existing user of the API like haddock or haskell-indexer. | | However, these solutions are not going to be as robust as using a plugin and | you will end up with writing a lot of code probably in order to get it to | work. | | Cheers, | | Matt | | On Mon, Jun 25, 2018 at 9:34 AM, Dmitriy Kovanikov | wrote: | > Thanks a lot for your suggestion! I’ve looked into GHC source plugins. | > And, specifically, into your `hashtag-coerce` example: | > | > * | > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithu | > b.com%2Fmpickering%2Fhashtag-coerce&data=02%7C01%7Csimonpj%40micro | > soft.com%7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988bf86f141af91ab2d7c | > d011db47%7C1%7C0%7C636655577449540446&sdata=sC4G%2BjqdgA9%2B%2FJrl | > PJ10ZNls%2FTsmLc16kx8YGpAKkqI%3D&reserved=0 | > | > As far as I can see, this allows only to analyse source code (to | > produce warnings or errors). | > While my actual goal is actually to refactor code automatically. And I | > would like to avoid full compilation process to make it work faster. | > | > Thanks, | > Dmitrii | > | > | > On 21 Jun 2018, at 5:51 PM, Matthew Pickering | > | > wrote: | > | > This doesn't answer your question directly but if you want to gather | > information about a module then using a source plugin would probably | > be easier and more robust than using the GHC API. | > | > You need to write a function of type: | > | > ``` | > ModSummary -> TcGblEnv -> TcM TcGblEnv ``` | > | > In `TcGblEnv` you will find `tcg_rdr_env` which contains all top-level | > things and describes how they came to be in scope. | > | > Source plugins will be in GHC 8.6. | > | > Cheers, | > | > Matt | > | > | > On Thu, Jun 21, 2018 at 10:06 AM, Dmitriy Kovanikov | > | > wrote: | > | > Hello! | > | > I’m trying to use GHC as a library. And my goal is to be able to | > gather information about where each function or data type came from. 
| > I’ve started by simply calling `getNamesInScope` function and | > observing its result. Here is my code: | > | > * Main.hs: | > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flpast | > e.net%2F9026688686753841152&data=02%7C01%7Csimonpj%40microsoft.com | > %7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988bf86f141af91ab2d7cd011db47 | > %7C1%7C0%7C636655577449540446&sdata=gRr1Ze2i4NRXqOtlwqoI1mqEEv4ux2 | > oZs5ZbA1O1938%3D&reserved=0 | > | > And here is the code for my test modules: | > | > * test/X.hs: | > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flpast | > e.net%2F6844657232357883904&data=02%7C01%7Csimonpj%40microsoft.com | > %7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988bf86f141af91ab2d7cd011db47 | > %7C1%7C0%7C636655577449540446&sdata=QRMPG6I18wg9x7JQ6q2SpfQlUD1ag% | > 2Binofx3ZPj0TWM%3D&reserved=0 | > * test/Y.hs: | > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flpast | > e.net%2F8673289058127970304&data=02%7C01%7Csimonpj%40microsoft.com | > %7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988bf86f141af91ab2d7cd011db47 | > %7C1%7C0%7C636655577449540446&sdata=35E0iM%2BITqeE4SRWlt9czJkkvzsg | > JCixnRFvV6YLnO0%3D&reserved=0 | > | > Unfortunately, my implementation doesn't work since I’m not very | > familiar with GHC API. | > And I see the following errors after executing my `Main.hs` file (I’m | > using | > ghc-8.2.2): | > | > * error messages: | > https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Flpast | > e.net%2F3316737208131518464&data=02%7C01%7Csimonpj%40microsoft.com | > %7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988bf86f141af91ab2d7cd011db47 | > %7C1%7C0%7C636655577449540446&sdata=yF1UAiQbLOYPrmIKFpA4b2g5ooI%2B | > YBbMvNcRhOGH26A%3D&reserved=0 | > | > Could you please point me to places or parts of GHC API or some | > documentation about module dependencies and how to make ghc see | > imports of other modules? I can’t find simple and small enough usage | > example of this part of the library. | > | > Thanks in advance, | > Dmitrii Kovanikov | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h | > askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc-devs&data=02%7C01% | > 7Csimonpj%40microsoft.com%7Cdf4bf5c9bffc4815f65c08d5dadfe0d8%7C72f988b | > f86f141af91ab2d7cd011db47%7C1%7C0%7C636655577449540446&sdata=xa7Kq | > Meknxs6ru%2BN%2FO%2BwWaubGEWGyumWc5VY%2FYZrZxg%3D&reserved=0 | > | > | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cdf4bf5c9bffc4815f65c08d5da | dfe0d8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636655577449540446&s | data=xa7KqMeknxs6ru%2BN%2FO%2BwWaubGEWGyumWc5VY%2FYZrZxg%3D&reserved=0 From xnningxie at gmail.com Tue Jun 26 15:15:31 2018 From: xnningxie at gmail.com (Ningning Xie) Date: Tue, 26 Jun 2018 11:15:31 -0400 Subject: Literature Review of GHC Core Message-ID: Hi all, I was recently doing some GHC Core related stuff, and I found that it might be useful to have a theoretical understanding of the type system for writing better practical code. Therefore I started to read papers about GHC Core, or System FC, following the publication timeline, to try to understand each design choice involved in the type system. 
I wrote a simple literature review that I think might be useful for people like me. If you're thinking about learning System FC, you might find it helpful: https://github.com/xnning/GHC-Core-Literature-Review/blob/master/doc/doc.pdf Feedbacks are welcome. Best, Ningning -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Jun 26 15:28:32 2018 From: lonetiger at gmail.com (Phyx) Date: Tue, 26 Jun 2018 16:28:32 +0100 Subject: Strace In-Reply-To: References: Message-ID: Thanks Simon! Yup I'll mark them when I get home. It had occurred to me today that the difference might be in that ghc-pkg on windows uses locks and on Linux atomic replace. Not sure why the same wasn't done on Windows but that might be it. I'll mark them tomorrow :) Cheers, Tamar On Tue, Jun 26, 2018, 09:59 Simon Peyton Jones wrote: > Great. https://ghc.haskell.org/trac/ghc/ticket/15313#ticket is created. > > > > I don’t know how to force them to be “cpu multirace on windows”. If you > could do that sometime it’d be great. No rush. Thank you! > > > Simon > > > > *From:* Phyx > *Sent:* 26 June 2018 06:01 > *To:* Simon Peyton Jones > *Subject:* Re: Strace > > > > Hi Simon, > > > > Thanks for the log, that does give a clue. The command fails with > > > > setup.exe: 'C:/code/HEAD/inplace/bin/ghc-pkg.exe' exited with an error: > ... > rule-defining-plugin-0.1: cannot find any of > > ["libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z.a","libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z.p_a"," > libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.so > > ","libHSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.dylib","HSrule-defining-plugin-0.1-GxqqrdsQ5NRK9hAhEkvz8Z-ghc8.5.20180616.dll"] > on library path (use --force to override) > > make[2]: *** [Makefile:18: package.T10420] Error 1 > > > > so it's the `setup install` from > https://github.com/ghc/ghc/blob/c2783ccf545faabd21a234a4dfc569cd856082b9/testsuite/tests/plugins/rule-defining-plugin/Makefile > > failing. > > > > unfortunately, all those tests run with -v0 which is annoying because now > the verbosity of the testsuite doesn't control that of these tests. I'm not > sure why these commands fail under heavy load though. > > I'll need to dive into the source of ghc-pkg to figure out what's > happening. Notice that all the framework failures are these plugin tests > which modify a package database. A wild guess is that ghc-pkg tries > > to take a lock on all package-databases or something when it's mutating > one. But I'm not intimately familiar with the package store and this > doesn't explain why it doesn't happen on Linux. > > > > for now one solution I can propose is to create a ticket to track these > and mark these tests as cpu multirace on Windows, which will force them to > run sequentially. I'll try to take a look at ghc-pkg this week > > and if I don't figure anything out I'll force the tests sequential on the > short term. > > > > Cheers, > > Tamar > > > > > > On Mon, Jun 25, 2018 at 4:08 AM, Simon Peyton Jones > wrote: > > Tamar > > I tried this > > TEST_VERBOSITY="VERBOSE=3" sh validate --fast --no-clean >& log > > in the root directory. I get the framework failures, but I’m not sure the > verbosity-control worked. > > Log attached > > SImon > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chrisdone at gmail.com Tue Jun 26 15:40:36 2018 From: chrisdone at gmail.com (Christopher Done) Date: Tue, 26 Jun 2018 16:40:36 +0100 Subject: Tracking down instances from use-sites Message-ID: Hi all, Given a TypecheckedModule, what's the most direct way given a Var expression retrieved from the AST, to determine: 1) that it's a class method e.g. `read` 2) that it's a generic call (no instance chosen) e.g. `Read a => a -> String` 3) or if it's a resolved instance, then which instance is it and which package, module and declaration is that defined in? Starting with this file that has a TypecheckedModule in it: https://gist.github.com/chrisdone/6fcb9f1cba6324148d481fcd4eab6af6#file-ghc-api-hs-L23 I presume at this point that instance resolution has taken place. I'm not sure that dictionaries or chosen instances are inserted into the AST, or whether just the resolved types are inserted e.g. `Int -> String`, where I want e.g. `Read Int`, which might lead me to finding the matching instance from an InstEnv or so. I'd like to do some analyses of Haskell codebases, and the fact that calls to class methods are opaque is a bit of a road-blocker. Any handy tips? Prior work? It'd be neat in tooling to just hit a goto-definition key on `read` and be taken to the instance implementation rather than the class definition. Also, listing all functions that use throw# or functions defined in terms of throw# or FFI calls would be helpful, especially for doing audits. If I could immediately list all partial functions in a project, then list all call-sites, it would be a very convenient way when doing an audit to see whether partial functions (such as head) are used with the proper preconditions or not. Any tips appreciated, Chris From matthewtpickering at gmail.com Tue Jun 26 16:18:59 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 26 Jun 2018 17:18:59 +0100 Subject: Tracking down instances from use-sites In-Reply-To: References: Message-ID: Chris, I have also considered this question. 1. Look at the `idDetails` of the `Id`. A class selector is a `ClassOpId`. 2,3, When a class selector `foo` is typechecked, the instance information is of course resolved. The selector `foo` is then wrapped in a `HsWrapper` which when desugared will apply the type arguments and dictionary arguments. Thus, in order to understand what instance has been selected, we need to look into the `HsWrapper`. In particular, one of the constructors is the `WpEvApp` constructor which is what will apply the dictionary argument. In case 2, this will be a type variable. In case 3, this will be the dictionary variable. I'm not sure how to distinguish these two cases easily. Then once you have the dictionary id, you can use `idType` to get the type of the dictionary which will be something like `Show ()` in order to tell you which instance was selected. You can inspect the AST of a typechecked program using the `-ddump-tc-ast` flag. Finally, you should considering writing this as a source plugin rather than using the GHC API as it will be easier to run in a variety of different scenarios. Cheers, Matt On Tue, Jun 26, 2018 at 4:40 PM, Christopher Done wrote: > Hi all, > > Given a TypecheckedModule, what's the most direct way given a Var > expression retrieved from the AST, to determine: > > 1) that it's a class method e.g. `read` > 2) that it's a generic call (no instance chosen) e.g. 
`Read a => a -> String` > 3) or if it's a resolved instance, then which instance is it and which > package, module and declaration is that defined in? > > Starting with this file that has a TypecheckedModule in it: > https://gist.github.com/chrisdone/6fcb9f1cba6324148d481fcd4eab6af6#file-ghc-api-hs-L23 > > I presume at this point that instance resolution has taken place. I'm > not sure that dictionaries or chosen instances are inserted into the > AST, or whether just the resolved types are inserted e.g. `Int -> > String`, where I want e.g. `Read Int`, which might lead me to finding > the matching instance from an InstEnv or so. > > I'd like to do some analyses of Haskell codebases, and the fact that > calls to class methods are opaque is a bit of a road-blocker. Any > handy tips? Prior work? > > It'd be neat in tooling to just hit a goto-definition key on `read` > and be taken to the instance implementation rather than the class > definition. > > Also, listing all functions that use throw# or functions defined in > terms of throw# or FFI calls would be helpful, especially for doing > audits. If I could immediately list all partial functions in a > project, then list all call-sites, it would be a very convenient way > when doing an audit to see whether partial functions (such as head) > are used with the proper preconditions or not. > > Any tips appreciated, > > Chris > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chrisdone at gmail.com Tue Jun 26 17:04:31 2018 From: chrisdone at gmail.com (Christopher Done) Date: Tue, 26 Jun 2018 18:04:31 +0100 Subject: Tracking down instances from use-sites In-Reply-To: References: Message-ID: > The selector `foo` is then wrapped in a > `HsWrapper` which when desugared will apply the type arguments and > dictionary arguments. Nice! I'll give this a try and report back. Thanks. > Finally, you should considering writing this as a source plugin rather > than using the GHC API as it will be easier to run in a variety of > different scenarios. It took me a few minutes to find what you meant. For posterity, I think "frontend plugins" is the name: https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#frontend-plugins That sounds like a good idea. This is the first time I've seen this feature of GHC. Cheers! On Tue, 26 Jun 2018 at 17:19, Matthew Pickering wrote: > > Chris, > > I have also considered this question. > > 1. Look at the `idDetails` of the `Id`. A class selector is a `ClassOpId`. > 2,3, > > When a class selector `foo` is typechecked, the instance information > is of course resolved. The selector `foo` is then wrapped in a > `HsWrapper` which when desugared will apply the type arguments and > dictionary arguments. > Thus, in order to understand what instance has been selected, we need > to look into the `HsWrapper`. In particular, one of the constructors > is the `WpEvApp` constructor which is what will apply the dictionary > argument. > In case 2, this will be a type variable. In case 3, this will be the > dictionary variable. I'm not sure how to distinguish these two cases > easily. Then once you have the dictionary id, you can use `idType` to > get the type of the dictionary which will be something like `Show ()` > in order > to tell you which instance was selected. > > You can inspect the AST of a typechecked program using the > `-ddump-tc-ast` flag. 
> > Finally, you should considering writing this as a source plugin rather > than using the GHC API as it will be easier to run in a variety of > different scenarios. > > Cheers, > > Matt > > On Tue, Jun 26, 2018 at 4:40 PM, Christopher Done wrote: > > Hi all, > > > > Given a TypecheckedModule, what's the most direct way given a Var > > expression retrieved from the AST, to determine: > > > > 1) that it's a class method e.g. `read` > > 2) that it's a generic call (no instance chosen) e.g. `Read a => a -> String` > > 3) or if it's a resolved instance, then which instance is it and which > > package, module and declaration is that defined in? > > > > Starting with this file that has a TypecheckedModule in it: > > https://gist.github.com/chrisdone/6fcb9f1cba6324148d481fcd4eab6af6#file-ghc-api-hs-L23 > > > > I presume at this point that instance resolution has taken place. I'm > > not sure that dictionaries or chosen instances are inserted into the > > AST, or whether just the resolved types are inserted e.g. `Int -> > > String`, where I want e.g. `Read Int`, which might lead me to finding > > the matching instance from an InstEnv or so. > > > > I'd like to do some analyses of Haskell codebases, and the fact that > > calls to class methods are opaque is a bit of a road-blocker. Any > > handy tips? Prior work? > > > > It'd be neat in tooling to just hit a goto-definition key on `read` > > and be taken to the instance implementation rather than the class > > definition. > > > > Also, listing all functions that use throw# or functions defined in > > terms of throw# or FFI calls would be helpful, especially for doing > > audits. If I could immediately list all partial functions in a > > project, then list all call-sites, it would be a very convenient way > > when doing an audit to see whether partial functions (such as head) > > are used with the proper preconditions or not. > > > > Any tips appreciated, > > > > Chris > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Tue Jun 26 17:17:58 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 26 Jun 2018 18:17:58 +0100 Subject: Tracking down instances from use-sites In-Reply-To: References: Message-ID: Sorry, they are not "frontend plugins" but a new feature that will be in GHC 8.6. They are an implementation of this GHC proposal. https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0017-source-plugins.rst There is also this thread last year about the same topic which Simon answered in the same way that I did but you may find either explanation more useful. https://mail.haskell.org/pipermail/ghc-devs/2017-October/014826.html Cheers, Matt On Tue, Jun 26, 2018 at 6:04 PM, Christopher Done wrote: >> The selector `foo` is then wrapped in a >> `HsWrapper` which when desugared will apply the type arguments and >> dictionary arguments. > > Nice! I'll give this a try and report back. Thanks. > >> Finally, you should considering writing this as a source plugin rather >> than using the GHC API as it will be easier to run in a variety of >> different scenarios. > > It took me a few minutes to find what you meant. For posterity, I > think "frontend plugins" is the name: > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#frontend-plugins > > That sounds like a good idea. This is the first time I've seen this > feature of GHC. > > Cheers! 
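For reference, a minimal sketch of what one of those GHC 8.6 source plugins looks like, based on the proposal Matt links above. The module name and the no-op body here are purely illustrative; a real analysis would walk the typechecked bindings (tcg_binds) inside the action:

    module MySourcePlugin (plugin) where

    import Plugins   (CommandLineOption, Plugin (..), defaultPlugin)
    import HscTypes  (ModSummary)
    import TcRnTypes (TcGblEnv, TcM)

    plugin :: Plugin
    plugin = defaultPlugin { typeCheckResultAction = inspect }

    -- Runs after type checking, with access to the typechecked module;
    -- this sketch just passes the environment through unchanged.
    inspect :: [CommandLineOption] -> ModSummary -> TcGblEnv -> TcM TcGblEnv
    inspect _opts _summary tc_env = pure tc_env

Such a plugin is then enabled with -fplugin MySourcePlugin, either on the command line or per module, rather than by replacing the whole driver as a frontend plugin does.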
> > > > On Tue, 26 Jun 2018 at 17:19, Matthew Pickering > wrote: >> >> Chris, >> >> I have also considered this question. >> >> 1. Look at the `idDetails` of the `Id`. A class selector is a `ClassOpId`. >> 2,3, >> >> When a class selector `foo` is typechecked, the instance information >> is of course resolved. The selector `foo` is then wrapped in a >> `HsWrapper` which when desugared will apply the type arguments and >> dictionary arguments. >> Thus, in order to understand what instance has been selected, we need >> to look into the `HsWrapper`. In particular, one of the constructors >> is the `WpEvApp` constructor which is what will apply the dictionary >> argument. >> In case 2, this will be a type variable. In case 3, this will be the >> dictionary variable. I'm not sure how to distinguish these two cases >> easily. Then once you have the dictionary id, you can use `idType` to >> get the type of the dictionary which will be something like `Show ()` >> in order >> to tell you which instance was selected. >> >> You can inspect the AST of a typechecked program using the >> `-ddump-tc-ast` flag. >> >> Finally, you should considering writing this as a source plugin rather >> than using the GHC API as it will be easier to run in a variety of >> different scenarios. >> >> Cheers, >> >> Matt >> >> On Tue, Jun 26, 2018 at 4:40 PM, Christopher Done wrote: >> > Hi all, >> > >> > Given a TypecheckedModule, what's the most direct way given a Var >> > expression retrieved from the AST, to determine: >> > >> > 1) that it's a class method e.g. `read` >> > 2) that it's a generic call (no instance chosen) e.g. `Read a => a -> String` >> > 3) or if it's a resolved instance, then which instance is it and which >> > package, module and declaration is that defined in? >> > >> > Starting with this file that has a TypecheckedModule in it: >> > https://gist.github.com/chrisdone/6fcb9f1cba6324148d481fcd4eab6af6#file-ghc-api-hs-L23 >> > >> > I presume at this point that instance resolution has taken place. I'm >> > not sure that dictionaries or chosen instances are inserted into the >> > AST, or whether just the resolved types are inserted e.g. `Int -> >> > String`, where I want e.g. `Read Int`, which might lead me to finding >> > the matching instance from an InstEnv or so. >> > >> > I'd like to do some analyses of Haskell codebases, and the fact that >> > calls to class methods are opaque is a bit of a road-blocker. Any >> > handy tips? Prior work? >> > >> > It'd be neat in tooling to just hit a goto-definition key on `read` >> > and be taken to the instance implementation rather than the class >> > definition. >> > >> > Also, listing all functions that use throw# or functions defined in >> > terms of throw# or FFI calls would be helpful, especially for doing >> > audits. If I could immediately list all partial functions in a >> > project, then list all call-sites, it would be a very convenient way >> > when doing an audit to see whether partial functions (such as head) >> > are used with the proper preconditions or not. 
>> > >> > Any tips appreciated, >> > >> > Chris >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Tue Jun 26 17:21:33 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 26 Jun 2018 13:21:33 -0400 Subject: Tracking down instances from use-sites In-Reply-To: References: Message-ID: <877emlwjjb.fsf@smart-cactus.org> Christopher Done writes: > Hi all, > > Given a TypecheckedModule, what's the most direct way given a Var > expression retrieved from the AST, to determine: > > 1) that it's a class method e.g. `read` > 2) that it's a generic call (no instance chosen) e.g. `Read a => a -> String` > 3) or if it's a resolved instance, then which instance is it and which > package, module and declaration is that defined in? > > Starting with this file that has a TypecheckedModule in it: > https://gist.github.com/chrisdone/6fcb9f1cba6324148d481fcd4eab6af6#file-ghc-api-hs-L23 > > I presume at this point that instance resolution has taken place. I'm > not sure that dictionaries or chosen instances are inserted into the > AST, or whether just the resolved types are inserted e.g. `Int -> > String`, where I want e.g. `Read Int`, which might lead me to finding > the matching instance from an InstEnv or so. > > I'd like to do some analyses of Haskell codebases, and the fact that > calls to class methods are opaque is a bit of a road-blocker. Any > handy tips? Prior work? > > It'd be neat in tooling to just hit a goto-definition key on `read` > and be taken to the instance implementation rather than the class > definition. > Indeed that would be great. I believe (1) is quite straightforward: You can recognize a class operation by looking at the function's IdDetails (specifically looking for ClassOpId). This contains the Class to which the method belongs. Getting back to the instance is a bit trickier. I'll admit I don't know whether there is a convenient way to do this. However, I can try to fill in some background and give a few ideas. First let's review of how typeclass evidence is represented in HsSyn (apologies if this is already known): For concreteness, let's consider the program, showList :: Show a => [a] -> String showList x = show x After typechecking this will likely turn into something like (taken from the output of -ddump-tc -fprint-typechecker-elaboration): AbsBindsSig [a_a1hj] [$dShow_a1hl] {Exported type: Hi.showList :: forall a. Show a => [a] -> String [LclId] Bind: showList_a1hk x_azo = show @ [a_a1hj] $dShow_a1hn x_azo Evidence: EvBinds{[W] $dShow_a1hn = GHC.Show.$fShow[] @[a_a1hj] [$dShow_a1hl]}} This AbsBind represents a binding abstracted over a dictionary argument ($dShow_a1hl :: Show a_a1hj). The "Evidence" section gives a list of evidence bindings which the desugarer will wrap the RHS in; in this case the typechecker has built a `Show [a_a1hj]` instance from the `Show a => Show [a]` instance defined in GHC.Show and the abstracted `$dShow_A1hl` dictionary. The `show` call site will then look something like this in HsSyn: HsApp (HsWrap (WpEvApp $dShow_a1hn) (HsWrap (WpTyApp a_a1hj) (HsVar GHC.Show.show))) (HsVar x_azo) Here the typechecker has wrapped the (show x_azo) expression in a pair of HsWrappers which apply its type and dictionary arguments. This suggests an approach to identify "generic" call sites (item (2) above): look at whether the RHS of the call site's dictionary is lambda-bound or not. 
In the above case we see that it is not lambda-bound but rather a concrete dictionary: `GHC.Show.$fShow[]`. You can know that this is a dictionary by looking at its IdDetails (specifically, it is of the DFunId variety). By contrast if we have a generic call-site: printIt :: Show a => a -> IO () printIt x = putStrLn $ show x We see that we the evidence binding is headed by a lambda-bound dictionary: AbsBindsSig [a_a1AP] [$dShow_a1AR] {Exported type: printIt :: forall a. Show a => a -> IO () [LclId] Bind: printIt_a1AQ x_a12W = putStrLn $ show @ a_a1AP $dShow_a1AV x_a12W Evidence: EvBinds{[W] $dShow_a1AV = $dShow_a1AR}} Of course, in the case that you have a concrete dictionary you *also* want to know the source location of the instance declaration from which it arose. I'm afraid this may be quite challenging as this isn't information we currently keep. Currently interface files don't really keep any information that might be useful to IDE tooling users. It's possible that we could add such information, although it's unclear exactly what this would look like. It would be great to hear more from tooling users regarding what information they would like to see. Also relevant here is the HIE file GSoC project [1] being worked on this summer of Zubin Duggal (CC'd). > Also, listing all functions that use throw# or functions defined in > terms of throw# or FFI calls would be helpful, especially for doing > audits. If I could immediately list all partial functions in a > project, then list all call-sites, it would be a very convenient way > when doing an audit to see whether partial functions (such as head) > are used with the proper preconditions or not. > This may be non-trivial; you may be able to get something along these lines out of the strictness signature present in IdInfo. However, I suspect this will be a bit fragile (e.g. we don't even run demand analysis with -O0 IIRC). Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/HIEFiles -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Jun 26 17:27:43 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 26 Jun 2018 13:27:43 -0400 Subject: Literature Review of GHC Core In-Reply-To: References: Message-ID: <874lhpwj8z.fsf@smart-cactus.org> Ningning Xie writes: > Hi all, > > I was recently doing some GHC Core related stuff, and I found that it might > be useful to have a theoretical understanding of the type system for > writing better practical code. Therefore I started to read papers about GHC > Core, or System FC, following the publication timeline, to try to > understand each design choice involved in the type system. > Quite helpful indeed; thanks Ningning! I've added a link from the Commentary's contributed documentation list [1]. Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Commentary#ContributedDocumentation -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From chrisdone at gmail.com Tue Jun 26 18:07:02 2018 From: chrisdone at gmail.com (Christopher Done) Date: Tue, 26 Jun 2018 19:07:02 +0100 Subject: Tracking down instances from use-sites In-Reply-To: <877emlwjjb.fsf@smart-cactus.org> References: <877emlwjjb.fsf@smart-cactus.org> Message-ID: Ben, Thanks for the in-depth elaboration of what Mathew/Simon were describing! It seems within reach! 
> Of course, in the case that you have a concrete dictionary you *also* > want to know the source location of the instance declaration from which > it arose. I'm afraid this may be quite challenging as this isn't > information we currently keep. Currently interface files don't really > keep any information that might be useful to IDE tooling users. It's > possible that we could add such information, although it's unclear > exactly what this would look like. It would be great to hear more from > tooling users regarding what information they would like to see. Indeed, not having the exact source location was a stretch, I didn't have high hopes for that. However, the package and module is actually useful. Regarding that, I did find the following field: -- | @is_dfun_name = idName . is_dfun at . -- -- We use 'is_dfun_name' for the visibility check, -- 'instIsVisible', which needs to know the 'Module' which the -- dictionary is defined in. However, we cannot use the 'Module' -- attached to 'is_dfun' since doing so would mean we would -- potentially pull in an entire interface file unnecessarily. -- This was the cause of #12367. , is_dfun_name :: Name So it seems like I could use the Name to get a Module which contains a UnitId (package and version) and ModuleName. If I've already generated the right metadata for that package and module, then I can do the mapping. > Also relevant here is the HIE file GSoC project [1] being worked on this > summer of Zubin Duggal (CC'd). I think this would be a good use-case for that. > > Also, listing all functions that use throw# or functions defined in > > terms of throw# or FFI calls would be helpful, especially for doing > > audits. If I could immediately list all partial functions in a > > project, then list all call-sites, it would be a very convenient way > > when doing an audit to see whether partial functions (such as head) > > are used with the proper preconditions or not. > > This may be non-trivial; you may be able to get something along these > lines out of the strictness signature present in IdInfo. However, I > suspect this will be a bit fragile (e.g. we don't even run demand > analysis with -O0 IIRC). I was going to start with a very naive approach of creating a dependency graph merely based on presence in a declaration, not on use. E.g. foo = if False then head [] else 123 would still be flagged up as partial, even though upon inspection it isn't. But it uses `head`, so it should arouse suspicion. I'd want to review it myself and determine that it's safe and then mark it safe. In the least, I might mark such code as having potential for bugs. Cheers! From ben at well-typed.com Tue Jun 26 18:39:47 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 26 Jun 2018 14:39:47 -0400 Subject: How do I add ghc-prim as a dep for ghc? In-Reply-To: References: Message-ID: <87woulv1ci.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > I'm trying to add ghc-prim as a dependency to the ghc package. So far I've done > these changes: > snip > Any ideas what else to edit? > Did you rerun ./configure after modifying ghc.cabal.in? I would double-check that ghc.cabal contains the dependency. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From omeragacan at gmail.com Tue Jun 26 18:58:04 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 26 Jun 2018 21:58:04 +0300 Subject: How do I add ghc-prim as a dep for ghc? In-Reply-To: <87woulv1ci.fsf@smart-cactus.org> References: <87woulv1ci.fsf@smart-cactus.org> Message-ID: I did make distclean; ./boot; ./configure ... no luck. Checked ghc.cabal also. Ömer Ben Gamari , 26 Haz 2018 Sal, 21:39 tarihinde şunu yazdı: > > Ömer Sinan Ağacan writes: > > > I'm trying to add ghc-prim as a dependency to the ghc package. So far I've done > > these changes: > > > snip > > > Any ideas what else to edit? > > > Did you rerun ./configure after modifying ghc.cabal.in? I would > double-check that ghc.cabal contains the dependency. > > Cheers, > > - Ben From gershomb at gmail.com Wed Jun 27 03:01:51 2018 From: gershomb at gmail.com (Gershom B) Date: Tue, 26 Jun 2018 20:01:51 -0700 Subject: Update on HIE Files In-Reply-To: References: Message-ID: Another reason that this probably should go into the mainline rather than a plugin is that Zubin was explaining to me that the mechanisms introduced could improve and generalize the “:set +c” family of type and location information provided by ghci: https://downloads.haskell.org/~ghc/master/users-guide/ghci.html#ghci-cmd-:set%20+c —g On June 26, 2018 at 7:09:23 AM, Zubin Duggal (zubin.duggal at gmail.com) wrote: Hey Matt, In principle, there should be no problem interacting with source plugins, or implementing this as a source plugin, given that the generating function has type: enrichHie :: GhcMonad m => TypecheckedSource -> RenamedSource -> m (HieAST Type) The only reason the GhcMonad constraint is necessary and this is not a pure function is because desugarExpr has type deSugarExpr :: HscEnv -> LHsExpr GhcTc -> IO (Messages, Maybe CoreExpr) So we need a GhcMonad to get the HscEnv. We need to desugar expressions to get their Type. However, in a private email with Németh Boldizsár regarding implementing this as a source plugin, I had the following concerns: 1. Since HIE files are going to be used for haddock generation, and haddock is a pretty important part of the haskell ecosystem, GHC should be able to produce them by default without needing to install anything else. 2. Integrating HIE file generation into GHC itself will push the burden of maintaining support to whoever makes breaking changes to GHC, instead of whoever ends up maintaining the source plugin. This way, HIE files can be a first class citizen and evolve with GHC. 3. Concerns about portability of source plugins - it should work at least wherever haddock can currently work 4. I believe there are some issues with how plugins interact with GHCs recompilation avoidance? Given that HIE files are also meant to be used for interactive usage via haskell-ide-engine, this is a pretty big deal breaker. I understand (4) has been solved now, but the first three still remain. On 26 June 2018 at 16:23, Matthew Pickering wrote: > Have you considered how this feature interacts with source plugins? > > Could the generation of these files be implemented as a source plugin? > That would mean that development of the feature would not be coupled > to GHC releases. 
> > Cheers, > > Matt > > On Tue, Jun 26, 2018 at 11:48 AM, Zubin Duggal > wrote: > > Hello all, > > > > I've been working on the HIE File > > (https://ghc.haskell.org/trac/ghc/wiki/HIEFiles) GSOC project, > > > > The design of the data structure as well as the traversal of GHCs ASTs to > > collect all the relevant info is mostly complete. > > > > We traverse the Renamed and Typechecked AST to collect the following info > > about each SrcSpan > > > > 1) Its type, if it corresponds to a binding, pattern or expression > > 2) Details about any tokens in the original source corresponding to this > > span(keywords, symbols, etc.) > > 3) The set of Constructor/Type pairs that correspond to this span in the > GHC > > AST > > 4) Details about all the identifiers that occur at this SrcSpan > > > > For each occurrence of an identifier(Name or ModuleName), we store its > > type(if it has one), and classify it as one of the following based on > how it > > occurs: > > > > 1) Use > > 2) Import/Export > > 3) Pattern Binding, along with the scope of the binding, and the span of > the > > entire binding location(including the RHS) if it occurs as part of a top > > level declaration, do binding or let/where binding > > 4) Value Binding, along with whether it is an instance binding or not, > its > > scope, and the span of its entire binding site, including the RHS > > 5) Type Declaration (class or regular) (foo :: ...) > > 6) Declaration(class, type, instance, data, type family etc.) > > 7) Type variable binding, along with its scope(which takes into account > > ScopedTypeVariables) > > > > I have updated the wiki page with more details about the Scopes > associated > > with bindings: > > https://ghc.haskell.org/trac/ghc/wiki/HIEFiles# > Scopeinformationaboutsymbols > > > > These annotated SrcSpans are then arranged into a interval/rose tree to > aid > > lookups. > > > > We assume that no SrcSpans ever partially overlap, for any two SrcSpans > that > > occur in the Renamed/Typechecked ASTs, either they are equal, disjoint, > or > > strictly contained in each other. This assumption has mostly held out so > far > > while testing on the entire ghc:HEAD tree, other than one case where the > > typechecker strips out parenthesis in the original source, which has been > > patched(see https://ghc.haskell.org/trac/ghc/ticket/15242). > > > > I have also written functions that lookup the binding site(including RHS) > > and scope of an identifier from the tree. Testing these functions on the > > ghc:HEAD tree, it succeeds in looking up scopes for almost all symbol > > occurrences in all source files, and I've also verified that the > calculated > > scope always contains all the occurrences of the symbol. The few cases > where > > this check fails is where the SrcSpans have been mangled by CPP(see > > https://ghc.haskell.org/trac/ghc/ticket/15279). > > > > The code for this currently lives here: > > https://github.com/haskell/haddock/compare/ghc-head...wz1000:hiefile-2 > > > > Moving forward, the plan for the rest of the summer is > > > > 1) Move this into the GHC tree and add a flag that controls generating > this > > 2) Write serializers and deserializers for this info > > 3) Teach the GHC PackageDb about .hie files > > 4) Rewrite haddocks --hyperlinked-source to use .hie files. 
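(As an aside on the GhcMonad constraint Zubin explains earlier in this thread: the deSugarExpr step needed to recover the Type of a typechecked expression is small. A rough sketch against the GHC 8.4-era API, with an illustrative helper name:

    import Control.Monad.IO.Class (liftIO)
    import CoreUtils              (exprType)
    import Desugar                (deSugarExpr)
    import GHC                    (GhcMonad, getSession)
    import HsExpr                 (LHsExpr)
    import HsExtension            (GhcTc)
    import Type                   (Type)

    -- Desugar a typechecked expression purely to learn its Type.
    lexprType :: GhcMonad m => LHsExpr GhcTc -> m (Maybe Type)
    lexprType e = do
        hsc_env      <- getSession
        (_msgs, mbE) <- liftIO (deSugarExpr hsc_env e)
        pure (exprType <$> mbE)

Needing the HscEnv for this call is, as noted above, the only reason enrichHie is not a pure function.)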
> > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jun 27 09:35:26 2018 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 27 Jun 2018 12:35:26 +0300 Subject: How do I add ghc-prim as a dep for ghc? In-Reply-To: References: <87woulv1ci.fsf@smart-cactus.org> Message-ID: It turns out there are two GHC packages: ghc and ghc-bin. I needed to add to ghc-bin but I wasn't aware of it so added to ghc. Ömer Ömer Sinan Ağacan , 26 Haz 2018 Sal, 21:58 tarihinde şunu yazdı: > > I did make distclean; ./boot; ./configure ... no luck. Checked ghc.cabal also. > > Ömer > > > Ben Gamari , 26 Haz 2018 Sal, 21:39 tarihinde şunu yazdı: > > > > Ömer Sinan Ağacan writes: > > > > > I'm trying to add ghc-prim as a dependency to the ghc package. So far I've done > > > these changes: > > > > > snip > > > > > Any ideas what else to edit? > > > > > Did you rerun ./configure after modifying ghc.cabal.in? I would > > double-check that ghc.cabal contains the dependency. > > > > Cheers, > > > > - Ben From matthewtpickering at gmail.com Wed Jun 27 10:14:50 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 27 Jun 2018 11:14:50 +0100 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: <39d783b293f6008963a1273198605a2c8f3570cc.camel@joachim-breitner.de> References: <8736xygvey.fsf@smart-cactus.org> <39d783b293f6008963a1273198605a2c8f3570cc.camel@joachim-breitner.de> Message-ID: I added the wiki page now: https://ghc.haskell.org/trac/ghc/wiki/Building/InGhci Do you mean just adding the .ghci file? It seems that this might be something that would be good to add to hadrian so that it can control the locations of the object files rather than splurging them over the build tree. Cheers, Matt On Fri, Jun 8, 2018 at 4:37 PM, Joachim Breitner wrote: > Hi, > > Am Donnerstag, den 07.06.2018, 17:05 -0400 schrieb Ben Gamari: >> How about on a new page (e.g. Building/InGhci) linked to from, >> >> * https://ghc.haskell.org/trac/ghc/wiki/Building >> * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions (in the Tips >> & Tricks section) >> >> It might also be a good idea to add a script to the tree capturing this >> pattern. > > yes pretty please! > > Cheers, > Joachim > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at smart-cactus.org Wed Jun 27 16:30:08 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 27 Jun 2018 12:30:08 -0400 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: <8736xygvey.fsf@smart-cactus.org> <39d783b293f6008963a1273198605a2c8f3570cc.camel@joachim-breitner.de> Message-ID: <87r2ksur9a.fsf@smart-cactus.org> Matthew Pickering writes: > I added the wiki page now: https://ghc.haskell.org/trac/ghc/wiki/Building/InGhci > > Do you mean just adding the .ghci file? It seems that this might be > something that would be good to add to hadrian so that it can control > the locations of the object files rather than splurging them over the > build tree. > Just adding the .ghci file is a start. 
However, ideally it would be something that will be more resistant to bitrotting that we might even be able to test. For instance, a shell script or build system rule that can generate the .ghci file from GHC's cabal file and launch ghc --interactive. We could then test this during CI as I think there is enough interest in this workflow that we should really try to support it. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From asiamgenius at gmail.com Wed Jun 27 21:32:27 2018 From: asiamgenius at gmail.com (Abhiroop Sarkar) Date: Wed, 27 Jun 2018 22:32:27 +0100 Subject: Is it possible to enhance the vector STG registers(Xmm, Ymm, Zmm) with more information? Message-ID: Hello all, I am currently working on adding support for SIMD operations to the native code generator. One of the roadblocks I faced recently was the definition of the `globalRegType` function in "compiler/cmm/CmmExpr.hs". The `globalRegType` function maps the STG registers to the respective `CmmType` datatype. For Xmm, Ymm, Zmm registers the function defines globalRegType like this: https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmExpr.hs#L585-L587 Consider the case for an Xmm register, the above definition limits an Xmm register to hold only vectors of size 4. However we can store 2 64-bit Doubles or 16 Int8s or 8 Int16s and so on The function `globalRegType` is internally called by the function `cmmRegType` ( https://github.com/ghc/ghc/blob/838b69032566ce6ab3918d70e8d5e098d0bcee02/compiler/cmm/CmmExpr.hs#L275) which is itself used in a number of places in the x86 code generator. In fact depending on the result of the `cmmRegType` function is another important function `cmmTypeFormat` defined in Format.hs whose result is used to print the actual assembly instruction. I have extended all the other Format types to include VectorFormats, however this definition of the `globalRegType` seems incorrect to me. Looking at the signature of the function itself: `globalRegType :: DynFlags -> GlobalReg -> CmmType` its actually difficult to predict the CmmType by just looking at the GlobalReg in case of Xmm, Ymm, Zmm. So thats why my original question how do I go about solving this. Should I modify the GlobalReg type to contain more information like Width and Length(for Xmm, Ymm, Zmm) or do I somehow pass the length and width information to the globalRegType function? Thanks Abhiroop Sakar -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jun 27 23:29:26 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 27 Jun 2018 19:29:26 -0400 Subject: Is it possible to enhance the vector STG registers(Xmm, Ymm, Zmm) with more information? In-Reply-To: References: Message-ID: hrmm, i think i can help with this tomorrow/ rest of the week, (esp since i'm one of your mentors :) ) but if other folks have design ideas, more than happy to use as many ideas as possible! On Wed, Jun 27, 2018 at 5:32 PM Abhiroop Sarkar wrote: > Hello all, > > I am currently working on adding support for SIMD operations to the native > code generator. One of the roadblocks I faced recently was the definition > of the `globalRegType` function in "compiler/cmm/CmmExpr.hs". The > `globalRegType` function maps the STG registers to the respective `CmmType` > datatype. 
> > For Xmm, Ymm, Zmm registers the function defines globalRegType like this: > https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmExpr.hs#L585-L587 > > Consider the case for an Xmm register, the above definition limits an Xmm > register to hold only vectors of size 4. However we can store 2 64-bit > Doubles or 16 Int8s or 8 Int16s and so on > > The function `globalRegType` is internally called by the function > `cmmRegType` ( > https://github.com/ghc/ghc/blob/838b69032566ce6ab3918d70e8d5e098d0bcee02/compiler/cmm/CmmExpr.hs#L275) > which is itself used in a number of places in the x86 code generator. > > In fact depending on the result of the `cmmRegType` function is another > important function `cmmTypeFormat` defined in Format.hs whose result is > used to print the actual assembly instruction. > > I have extended all the other Format types to include VectorFormats, > however this definition of the `globalRegType` seems incorrect to me. > Looking at the signature of the function itself: > > `globalRegType :: DynFlags -> GlobalReg -> CmmType` > > its actually difficult to predict the CmmType by just looking at the > GlobalReg in case of Xmm, Ymm, Zmm. So thats why my original question how > do I go about solving this. Should I modify the GlobalReg type to contain > more information like Width and Length(for Xmm, Ymm, Zmm) or do I somehow > pass the length and width information to the globalRegType function? > > Thanks > Abhiroop Sakar > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgsloan at gmail.com Thu Jun 28 05:48:11 2018 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 27 Jun 2018 22:48:11 -0700 Subject: Loading GHC into GHCi (and ghcid) In-Reply-To: References: Message-ID: Wow! This is an absolute game changer for me with regards to ghc development. My usual workflow on large haskell projects is to use GHCI as much as possible for quick iterations. I'm really glad Csongor figured this out and that you sent an email about it. I've been messing with this for a bit today, and after a few code changes, I can to load GHCI into GHCI!! For example, I can modify "ghciWelcomeMessage", reload, and enter into the nested ghci: λ :r [491 of 492] Compiling GHCi.UI ( ../ghc/GHCi/UI.hs, tmp/GHCi/UI.o ) Ok, 492 modules loaded. λ :main -ignore-dot-ghci --interactive GHCi inception, version 8.7.20180627: http://www.haskell.org/ghc/ :? for help Prelude> unwords ["it", "works!"] "it works!" Prelude> :q Leaving GHCi. λ Prelude.unwords ["now,", "in", "outer", "GHCi"] "now, in outer GHCi" I will be opening up a PR soon that makes this convenient, once I've polished it up. -Michael On Wed, May 30, 2018 at 1:43 PM Matthew Pickering wrote: > > Hi all, > > Csongor has informed me that he has worked out how to load GHC into > GHCi which can then be used with ghcid for a more interactive > development experience. > > 1. Put this .ghci file in compiler/ > > https://gist.github.com/mpickering/73749e7783f40cc762fec171b879704c > > 2. Run "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > from inside compiler/ > > It may take a while and require a little bit of memory but in the end > all 500 or so modules will be loaded. > > It can also be used with ghcid. > > ghcid -c "../inplace/bin/ghc-stage2 --interactive -odir tmp -hidir tmp" > > Hopefully someone who has more RAM than I. 
> > Can anyone suggest the suitable place on the wiki for this information? > > Cheers, > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Jun 28 10:46:42 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Jun 2018 10:46:42 +0000 Subject: GHC.Prim.Int# is not at TyThing? In-Reply-To: References: Message-ID: Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? No, it does not indicate that! There still is a TyCon for Int#. Indeed you can see it defined in TysPrim.intPrimTyCon. Simon From: Ranjit Jhala Sent: 28 June 2018 06:08 To: ghc-devs at haskell.org; Simon Peyton Jones Subject: GHC.Prim.Int# is not at TyThing? Hi all, I am trying to update LiquidHaskell to GHC 8.4. In doing so, I find that I can no longer resolve (i.e. get the `TyThing`, and hence `TyCon`) corresponding to the name: GHC.Prim.Int# It seems like in older versions, we had λ> :i GHC.Prim.Int# data GHC.Prim.Int# -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: # but in GHC 8.4 this is changed so: λ> :i GHC.Prim.Int# data GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? If so, what does it correspond to? i.e. how is it represented as a `Type` in GHC? Any pointers would be most appreciated! Thanks! - Ranjit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Thu Jun 28 17:08:41 2018 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 28 Jun 2018 18:08:41 +0100 Subject: How do you build base with your own compiler? Message-ID: I've built the GHC compiler along with libraries/base in the canonical way. Now, I want to compile base with my own compiler frontend which will do some analysis. Here's what I've done so far: 1) I've compiled my frontend with the ghc-stage2 compiler and registered it. That works. 2) I've found the right GHC invocation which looks like this: "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -O0 -H64m -Wall -this-unit-id base-4.9.1.0 -hide-all-packages -i -ilibraries/base/. -ilibraries/base/dist-install/build -ilibraries/base/dist-install/build/autogen -Ilibraries/base/dist-install/build -Ilibraries/base/dist-install/build/autogen -Ilibraries/base/include -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include -optPlibraries/base/dist-install/build/autogen/cabal_macros.h -package-id ghc-prim-0.5.0.0 -package-id integer-gmp-1.0.0.1 -package-id rts -this-unit-id base -XHaskell2010 -O0 -no-user-package-db -rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir libraries/base/dist-install/build -hidir libraries/base/dist-install/build -stubdir libraries/base/dist-install/build -dynamic-too $hsfile So I (1) changed stage1 to stage2, so that I could use plugins, (2) simply added --frontend GhcFrontendPlugin -package frontend and ran that on every file under base/ in a loop: https://gist.github.com/chrisdone/3ca64592aed2053606d8814f2fa5d772 That seems to work. I basically have what I wanted. But I'd rather be able to invoke make with e.g. GHC_COMPILER=inplace/ghc/stage2 and EXTRA_HC_OPTS=" --frontend GhcFrontendPlugin -package frontend". Is there an easy flag to do that? 
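To make the TysPrim answer above concrete, here is a minimal sketch (GHC 8.4-era modules; the intHash* names are just illustrative) of grabbing the wired-in TyCon for Int# directly, with no name lookup involved:

    import TyCon   (TyCon)
    import Type    (Type, mkTyConTy)
    import TysPrim (intPrimTyCon)

    -- The wired-in TyCon behind GHC.Prim.Int#:
    intHashTyCon :: TyCon
    intHashTyCon = intPrimTyCon

    -- Its Type; TysPrim also exports this directly as intPrimTy.
    intHashTy :: Type
    intHashTy = mkTyConTy intPrimTyCon

So Int# is still an ordinary primitive, wired-in TyCon; only the way its kind is printed has changed, as the :k output quoted above shows (TYPE applied to a RuntimeRep instead of the old #).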
If not, can someone point me where in the makefile I could tweak this? Cheers From hvriedel at gmail.com Thu Jun 28 17:35:46 2018 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 28 Jun 2018 19:35:46 +0200 Subject: How do you build base with your own compiler? In-Reply-To: References: Message-ID: Note that `base` is mostly a normal cabal package and you can easily build it with `cabal new-build` if that's any help. On Thu, Jun 28, 2018 at 7:09 PM Christopher Done wrote: > > I've built the GHC compiler along with libraries/base in the canonical way. > > Now, I want to compile base with my own compiler frontend which will > do some analysis. Here's what I've done so far: > > 1) I've compiled my frontend with the ghc-stage2 compiler and > registered it. That works. > 2) I've found the right GHC invocation which looks like this: > > "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -O0 > -H64m -Wall -this-unit-id base-4.9.1.0 -hide-all-packages -i > -ilibraries/base/. -ilibraries/base/dist-install/build > -ilibraries/base/dist-install/build/autogen > -Ilibraries/base/dist-install/build > -Ilibraries/base/dist-install/build/autogen -Ilibraries/base/include > -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include > -optPlibraries/base/dist-install/build/autogen/cabal_macros.h > -package-id ghc-prim-0.5.0.0 -package-id integer-gmp-1.0.0.1 > -package-id rts -this-unit-id base -XHaskell2010 -O0 > -no-user-package-db -rtsopts -Wno-trustworthy-safe > -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir > libraries/base/dist-install/build -hidir > libraries/base/dist-install/build -stubdir > libraries/base/dist-install/build -dynamic-too $hsfile > > So I > > (1) changed stage1 to stage2, so that I could use plugins, > (2) simply added --frontend GhcFrontendPlugin -package frontend and > ran that on every file under base/ in a loop: > > https://gist.github.com/chrisdone/3ca64592aed2053606d8814f2fa5d772 > > That seems to work. I basically have what I wanted. > > But I'd rather be able to invoke make with e.g. > GHC_COMPILER=inplace/ghc/stage2 and EXTRA_HC_OPTS=" --frontend > GhcFrontendPlugin -package frontend". Is there an easy flag to do > that? > > If not, can someone point me where in the makefile I could tweak this? > > Cheers > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chrisdone at gmail.com Thu Jun 28 17:51:31 2018 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 28 Jun 2018 18:51:31 +0100 Subject: How do you build base with your own compiler? In-Reply-To: References: Message-ID: I did try that first. I did a cabal unpack and ran ./configure, but sadly it failed with "internal error": https://gist.github.com/chrisdone/fc784c6dcc2acdfc3d10a55fc4d5378d#file-output-txt-L289 If you have e.g. a docker image that it's known to build inside of, that would be awesome. I'd much prefer that over the whole compiling GHC rigmarole. Ciao! On Thu, 28 Jun 2018 at 18:35, Herbert Valerio Riedel wrote: > > Note that `base` is mostly a normal cabal package and you can easily > build it with `cabal new-build` if that's any help. > On Thu, Jun 28, 2018 at 7:09 PM Christopher Done wrote: > > > > I've built the GHC compiler along with libraries/base in the canonical way. > > > > Now, I want to compile base with my own compiler frontend which will > > do some analysis. 
Here's what I've done so far: > > > > 1) I've compiled my frontend with the ghc-stage2 compiler and > > registered it. That works. > > 2) I've found the right GHC invocation which looks like this: > > > > "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -O0 > > -H64m -Wall -this-unit-id base-4.9.1.0 -hide-all-packages -i > > -ilibraries/base/. -ilibraries/base/dist-install/build > > -ilibraries/base/dist-install/build/autogen > > -Ilibraries/base/dist-install/build > > -Ilibraries/base/dist-install/build/autogen -Ilibraries/base/include > > -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include > > -optPlibraries/base/dist-install/build/autogen/cabal_macros.h > > -package-id ghc-prim-0.5.0.0 -package-id integer-gmp-1.0.0.1 > > -package-id rts -this-unit-id base -XHaskell2010 -O0 > > -no-user-package-db -rtsopts -Wno-trustworthy-safe > > -Wno-deprecated-flags -Wnoncanonical-monad-instances -odir > > libraries/base/dist-install/build -hidir > > libraries/base/dist-install/build -stubdir > > libraries/base/dist-install/build -dynamic-too $hsfile > > > > So I > > > > (1) changed stage1 to stage2, so that I could use plugins, > > (2) simply added --frontend GhcFrontendPlugin -package frontend and > > ran that on every file under base/ in a loop: > > > > https://gist.github.com/chrisdone/3ca64592aed2053606d8814f2fa5d772 > > > > That seems to work. I basically have what I wanted. > > > > But I'd rather be able to invoke make with e.g. > > GHC_COMPILER=inplace/ghc/stage2 and EXTRA_HC_OPTS=" --frontend > > GhcFrontendPlugin -package frontend". Is there an easy flag to do > > that? > > > > If not, can someone point me where in the makefile I could tweak this? > > > > Cheers > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Thu Jun 28 18:00:59 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 28 Jun 2018 14:00:59 -0400 Subject: How do you build base with your own compiler? In-Reply-To: References: Message-ID: Hi, Am Donnerstag, den 28.06.2018, 18:08 +0100 schrieb Christopher Done: > I've built the GHC compiler along with libraries/base in the canonical way. > > Now, I want to compile base with my own compiler frontend which will > do some analysis. maybe https://github.com/nomeata/veggies/blob/master/boot.sh can be some inspiration? There I build boot using a GHC with a very different code generator. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From david at well-typed.com Fri Jun 29 05:28:02 2018 From: david at well-typed.com (David Feuer) Date: Fri, 29 Jun 2018 01:28:02 -0400 Subject: Should we have primitive fill-once variables? Message-ID: <7467676.ftxV8Eecj6@squirrel> IVars (write-once variables) can be useful for various purposes. In Haskell, they can be implemented using MVars. But MVars are not the lightest things in the world. I'm wondering if it might pay to implement something much lighter primitively. Here's a sketch of some things I'll tentatively call QVars. A QVar has two fields: a value (possibly null) and a stack (no need for a queue) of waiting threads. newEmptyQVar: Create a QVar with a null value and an empty queue. tryReadQVar: Just look at the value. 
readQVar: Check if the value is null (simple memory read). If not, use it. If so, push yourself onto the waiting stack (CAS loop). The code that will run when you're awakened will try to awaken the next thread if there is one (CAS loop). putQVar: Install a new value and get the old one (exchange). If the old value was null, mark the QVar dirty. Awaken the first thread if there is one (CAS loop). Return the old value if it was non-null (this can be used in library code to make duplicate writes, or non-equal duplicate writes, an error). I think we'd probably also want atomic modification operations, but I haven't figured out which ones yet. Implementation differences from MVars: * We have two fields instead of three (because we can get away with a stack instead of a queue). * We never need to lock the QVar closure. MVars do: they can change freely between empty and full, so it's necessary to coordinate between value and queue modifications. From simonpj at microsoft.com Fri Jun 29 07:20:14 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Jun 2018 07:20:14 +0000 Subject: GHC.Prim.Int# is not at TyThing? In-Reply-To: References: Message-ID: Are you sure that Int# is in (lexical scope) in the GlobalRdrEnv of the HscEnv? If not, looking up the String in the lexicial environment will fail. You can always just grab TysPrim.intPrimTyCon. Simon From: Ranjit Jhala Sent: 29 June 2018 00:55 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: GHC.Prim.Int# is not at TyThing? Dear Simon (and all), Thanks! Then it seems my problem is much worse: somehow the code I had that used the `HscEnv` to "resolve" names (i.e. get `Name` and then `TyThing` and then `TyCon`) from plain strings is no longer working with the GHC 8.4.3. My efforts to distill the relevant LH code into a simple test that shows the difference between the two versions (GHC 8.2 and 8.4) have proven fruitless so far. Can anyone point me to a (small?) example of using the GHC API that implements something like: lookupVarType :: String -> IO String which takes a `String` corresponding to the name of a top-level binder as input, and returns a `String` containing the TYPE of the binder as output? Thanks! - Ranjit. On Thu, Jun 28, 2018 at 3:48 AM Simon Peyton Jones > wrote: Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? No, it does not indicate that! There still is a TyCon for Int#. Indeed you can see it defined in TysPrim.intPrimTyCon. Simon From: Ranjit Jhala > Sent: 28 June 2018 06:08 To: ghc-devs at haskell.org; Simon Peyton Jones > Subject: GHC.Prim.Int# is not at TyThing? Hi all, I am trying to update LiquidHaskell to GHC 8.4. In doing so, I find that I can no longer resolve (i.e. get the `TyThing`, and hence `TyCon`) corresponding to the name: GHC.Prim.Int# It seems like in older versions, we had λ> :i GHC.Prim.Int# data GHC.Prim.Int# -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: # but in GHC 8.4 this is changed so: λ> :i GHC.Prim.Int# data GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? If so, what does it correspond to? i.e. how is it represented as a `Type` in GHC? Any pointers would be most appreciated! Thanks! - Ranjit. -------------- next part -------------- An HTML attachment was scrubbed... 
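Since Ranjit's question above asks for a small example, here is a rough sketch of lookupVarType against the GHC 8.4-era API, assuming a Ghc session with the module of interest loaded into the interactive scope. Note that parseName consults exactly the GlobalRdrEnv Simon mentions, so it only succeeds for names that are lexically in scope there:

    import GHC        (GhcMonad, TyThing (AnId), getSessionDynFlags,
                       lookupName, parseName)
    import Id         (idType)
    import Outputable (ppr, showSDoc)

    -- Render the type of a top-level binder given its name as a String.
    lookupVarType :: GhcMonad m => String -> m (Maybe String)
    lookupVarType str = do
        dflags <- getSessionDynFlags
        names  <- parseName str            -- fails if str is not in scope
        things <- mapM lookupName names
        pure $ case [ v | Just (AnId v) <- things ] of
            (v:_) -> Just (showSDoc dflags (ppr (idType v)))
            _     -> Nothing

Running this under runGhc, after setting up a session and loading the target module, gives the String -> IO String shape asked for.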
URL: From oleg.grenrus at iki.fi Fri Jun 29 13:14:48 2018 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Fri, 29 Jun 2018 16:14:48 +0300 Subject: Should we have primitive fill-once variables? In-Reply-To: <7467676.ftxV8Eecj6@squirrel> References: <7467676.ftxV8Eecj6@squirrel> Message-ID: <922c7b78-bb3b-0b2c-6b43-f68f0457592c@iki.fi> I have wanted something like that! Also similiar STM TQVar would be nice to have. For example `async` uses TMVar where the value is written only once. if once filled the QVar cannot be empty is useful property. For STM variant, I'd actually like to have putTQVar' :: QTVar a -> a -> STM (TVar a) i.e. giving me back a `TVar a`, for which I now the value is already there (i.e. reading won't block). Not sure what the would be such for IO, IORef doesn't feel right. - Oleg On 29.06.2018 08:28, David Feuer wrote: > IVars (write-once variables) can be useful for various purposes. In Haskell, they can be implemented using MVars. But MVars are not the lightest things in the world. I'm wondering if it might pay to implement something much lighter primitively. Here's a sketch of some things I'll tentatively call QVars. > > A QVar has two fields: a value (possibly null) and a stack (no need for a queue) of waiting threads. > > newEmptyQVar: Create a QVar with a null value and an empty queue. > > tryReadQVar: Just look at the value. > > readQVar: Check if the value is null (simple memory read). If not, use it. If so, push yourself onto the waiting stack (CAS loop). The code that will run when you're awakened will try to awaken the next thread if there is one (CAS loop). > > putQVar: Install a new value and get the old one (exchange). If the old value was null, mark the QVar dirty. Awaken the first thread if there is one (CAS loop). Return the old value if it was non-null (this can be used in library code to make duplicate writes, or non-equal duplicate writes, an error). > > I think we'd probably also want atomic modification operations, but I haven't figured out which ones yet. > > Implementation differences from MVars: > > * We have two fields instead of three (because we can get away with a stack instead of a queue). > > * We never need to lock the QVar closure. MVars do: they can change freely between empty and full, so it's necessary to coordinate between value and queue modifications. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Fri Jun 29 14:36:25 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Jun 2018 14:36:25 +0000 Subject: GHC.Prim.Int# is not at TyThing? In-Reply-To: References: Message-ID: Well what is lexically in scope is, well, whatever should be lexically in scope at that point. Yes, I suppose your lexicial environment might have changed, but I can’t speculate as to why. Starting with Strings makes you vulnerable to this. Starting with an “Orig” RdrName would be more robust. Simon From: Ranjit Jhala Sent: 29 June 2018 15:26 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: GHC.Prim.Int# is not at TyThing? Dear Simon, Yes I expect that the notion of what is in scope in the GlobalRdrEnv has changed across the GHC versions? Earlier, these lookups would succeed but now (likely due to some artifact of how we are using the API) they fail. You are right that we can get Int# from the wiredInTyCons; but the issue arises with other names (eg “Fractional”) Which are not (?) wiredIn. 
The silver lining is this may force us to redo the LH name resolution entirely in a “lazy” fashion only using those Var and Type That actually appear in the core being analyzed (as opposed to the current “eager” fashion where we use the hscenv to lookup all names in the LH prelude...) Thanks! Ranjit. On Fri, Jun 29, 2018 at 12:21 AM Simon Peyton Jones > wrote: Are you sure that Int# is in (lexical scope) in the GlobalRdrEnv of the HscEnv? If not, looking up the String in the lexicial environment will fail. You can always just grab TysPrim.intPrimTyCon. Simon From: Ranjit Jhala > Sent: 29 June 2018 00:55 To: Simon Peyton Jones > Cc: ghc-devs at haskell.org Subject: Re: GHC.Prim.Int# is not at TyThing? Dear Simon (and all), Thanks! Then it seems my problem is much worse: somehow the code I had that used the `HscEnv` to "resolve" names (i.e. get `Name` and then `TyThing` and then `TyCon`) from plain strings is no longer working with the GHC 8.4.3. My efforts to distill the relevant LH code into a simple test that shows the difference between the two versions (GHC 8.2 and 8.4) have proven fruitless so far. Can anyone point me to a (small?) example of using the GHC API that implements something like: lookupVarType :: String -> IO String which takes a `String` corresponding to the name of a top-level binder as input, and returns a `String` containing the TYPE of the binder as output? Thanks! - Ranjit. On Thu, Jun 28, 2018 at 3:48 AM Simon Peyton Jones > wrote: Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? No, it does not indicate that! There still is a TyCon for Int#. Indeed you can see it defined in TysPrim.intPrimTyCon. Simon From: Ranjit Jhala > Sent: 28 June 2018 06:08 To: ghc-devs at haskell.org; Simon Peyton Jones > Subject: GHC.Prim.Int# is not at TyThing? Hi all, I am trying to update LiquidHaskell to GHC 8.4. In doing so, I find that I can no longer resolve (i.e. get the `TyThing`, and hence `TyCon`) corresponding to the name: GHC.Prim.Int# It seems like in older versions, we had λ> :i GHC.Prim.Int# data GHC.Prim.Int# -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: # but in GHC 8.4 this is changed so: λ> :i GHC.Prim.Int# data GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep -- Defined in ‘GHC.Prim’ λ> :k GHC.Prim.Int# GHC.Prim.Int# :: TYPE 'GHC.Types.IntRep Does the above indicate that in fact, `GHC.Prim.Int#` DOES NOT (any longer) correspond to a `TyCon`? If so, what does it correspond to? i.e. how is it represented as a `Type` in GHC? Any pointers would be most appreciated! Thanks! - Ranjit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.trinkle at gmail.com Fri Jun 29 14:40:37 2018 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Fri, 29 Jun 2018 10:40:37 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: <922c7b78-bb3b-0b2c-6b43-f68f0457592c@iki.fi> References: <7467676.ftxV8Eecj6@squirrel> <922c7b78-bb3b-0b2c-6b43-f68f0457592c@iki.fi> Message-ID: This would make MonadFix's implementation much nicer, I think :) On Fri, Jun 29, 2018 at 9:14 AM, Oleg Grenrus wrote: > I have wanted something like that! Also similiar STM TQVar would be nice > to have. For example `async` uses TMVar where the value is written only > once. if once filled the QVar cannot be empty is useful property. > > For STM variant, I'd actually like to have > > putTQVar' :: QTVar a -> a -> STM (TVar a) > > i.e. giving me back a `TVar a`, for which I now the value is already > there (i.e. 
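One more concrete note on the robust-lookup route, since Fractional came up above: for the standard classes PrelNames already exports the exact Names, so no GlobalRdrEnv lookup is needed at all. A minimal sketch (GHC 8.4-era modules, assuming lookupGlobalName as exported by the GHC API; lookupFractional is an illustrative name):

    import GHC       (GhcMonad, TyThing, lookupGlobalName)
    import PrelNames (fractionalClassName)

    -- Resolve the Fractional class from its known Name, consulting the
    -- global type environment and interface files rather than whatever
    -- happens to be in the lexical scope of the interactive context.
    lookupFractional :: GhcMonad m => m (Maybe TyThing)
    lookupFractional = lookupGlobalName fractionalClassName

For names PrelNames does not know about, building an exact RdrName with RdrName.mkOrig (from a Module and an OccName) and resolving that is the robust alternative Simon suggests, since an Orig name pins down its defining module instead of relying on the lexical environment.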
reading won't block). Not sure what the would be such for > IO, IORef doesn't feel right. > > - Oleg > > On 29.06.2018 08:28, David Feuer wrote: > > IVars (write-once variables) can be useful for various purposes. In > Haskell, they can be implemented using MVars. But MVars are not the > lightest things in the world. I'm wondering if it might pay to implement > something much lighter primitively. Here's a sketch of some things I'll > tentatively call QVars. > > > > A QVar has two fields: a value (possibly null) and a stack (no need for > a queue) of waiting threads. > > > > newEmptyQVar: Create a QVar with a null value and an empty queue. > > > > tryReadQVar: Just look at the value. > > > > readQVar: Check if the value is null (simple memory read). If not, use > it. If so, push yourself onto the waiting stack (CAS loop). The code that > will run when you're awakened will try to awaken the next thread if there > is one (CAS loop). > > > > putQVar: Install a new value and get the old one (exchange). If the old > value was null, mark the QVar dirty. Awaken the first thread if there is > one (CAS loop). Return the old value if it was non-null (this can be used > in library code to make duplicate writes, or non-equal duplicate writes, an > error). > > > > I think we'd probably also want atomic modification operations, but I > haven't figured out which ones yet. > > > > Implementation differences from MVars: > > > > * We have two fields instead of three (because we can get away with a > stack instead of a queue). > > > > * We never need to lock the QVar closure. MVars do: they can change > freely between empty and full, so it's necessary to coordinate between > value and queue modifications. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsx at bluewin.ch Fri Jun 29 15:35:02 2018 From: rsx at bluewin.ch (Roland Senn) Date: Fri, 29 Jun 2018 17:35:02 +0200 Subject: Harbormaster: Build failure on OS/X build. How to fix it as a non Mac user Message-ID: <1530286502.26527.2.camel@bluewin.ch> Hi all, Phabricator informed me, that the builds on Harbourmaster for my changes failed.  Looking at the log at https://phabricator.haskell.org/B21368 I can see, that the builds on Linux and Windows succeeded, however the tests on OS/X failed. Looking at the failed tests at https://phabricator.haskell.org/harborma ster/build/48354/ it tells me that the 2 tests TEST="T5631 T6048" failed. Now, I don't own a machine with OS/X. How can I look, what really failed, and how can I try to fix it? Many thanks Roland From ben at smart-cactus.org Fri Jun 29 15:50:10 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 29 Jun 2018 11:50:10 -0400 Subject: Harbormaster: Build failure on OS/X build. How to fix it as a non Mac user In-Reply-To: <1530286502.26527.2.camel@bluewin.ch> References: <1530286502.26527.2.camel@bluewin.ch> Message-ID: <87lgaxvbgy.fsf@smart-cactus.org> Roland Senn writes: > Hi all, > Hi Roland, it looks like the failing tests are performance tests. These unfortunately do spuriously fail occasionally. Given the nature of your patch it seems extremely unlikely that the failure is your fault. 
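(For what it's worth, the two named tests can also be re-run in isolation against a local build; the GHC testsuite driver takes a TEST variable, so from the root of a built tree something along the lines of

    make test TEST="T5631 T6048"

will run just those two. Since they are performance tests, the interesting part of the output is the reported allocation figure versus the expected value recorded in the testsuite's all.T entry for each test.)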
Feel free to ignore it and I will sort it out when I go to merge. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From lonetiger at gmail.com Fri Jun 29 15:50:27 2018 From: lonetiger at gmail.com (Phyx) Date: Fri, 29 Jun 2018 16:50:27 +0100 Subject: Harbormaster: Build failure on OS/X build. How to fix it as a non Mac user In-Reply-To: <1530286502.26527.2.camel@bluewin.ch> References: <1530286502.26527.2.camel@bluewin.ch> Message-ID: Hi, They are stats failures. Not good enough, which means you have a regression in memory usage. Look at the full test log https://phabricator.haskell.org/harbormaster/build/48354/1/?l=100 They also failed on windows, but currently harbormaster doesn't fail when tests fail for windows. Clicking the build log will show you they failed on windows too. Regards, Tamar On Fri, Jun 29, 2018, 16:35 Roland Senn wrote: > Hi all, > > Phabricator informed me, that the builds on Harbourmaster for my > changes failed. > > Looking at the log at https://phabricator.haskell.org/B21368 I can see, > that the builds on Linux and Windows succeeded, however the tests on > OS/X failed. > > Looking at the failed tests at https://phabricator.haskell.org/harborma > ster/build/48354/ > it tells me > that the 2 tests TEST="T5631 T6048" > failed. > > Now, I don't own a machine with OS/X. How can I look, what really > failed, and how can I try to fix it? > > Many thanks > Roland > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Jun 29 15:51:07 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 29 Jun 2018 11:51:07 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: <7467676.ftxV8Eecj6@squirrel> References: <7467676.ftxV8Eecj6@squirrel> Message-ID: <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> Hi, when reading the subject I was expecting something like this: -- | Creates an empty IVar newIVar :: IO (IVar a) -- | pure! but blocks until the IVar is written readIVar :: IVar a -> a -- | tries to write to an IVar. -- Succeeds if it is empty (returning True) -- Does nothing if it has been written to (returning False) writeIVar :: IVar a -> a -> IO Bool Alternatively: -- | all in one newIVar :: IO (a, a -> IO Bool) Essentially a thunk, but with explicit control over filling it. In fact, people have implemented something like this using C-- hacks before: https://github.com/twanvl/unsafe-sequence > This would make MonadFix's implementation much nicer, I think :) This would suffice for MonadFix, right? Sorry for derailing the thread :-) Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From carter.schonwald at gmail.com Fri Jun 29 16:19:53 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 29 Jun 2018 12:19:53 -0400 Subject: Should we have primitive fill-once variables? 
In-Reply-To: <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> Message-ID: i'm a little confused, whats the order of reads here? Mvars have write wins, whats the order here? last writer runs first, first writer runs last? (wouldn't there be starvation issues?) On Fri, Jun 29, 2018 at 11:51 AM Joachim Breitner wrote: > Hi, > > when reading the subject I was expecting something like this: > > -- | Creates an empty IVar > newIVar :: IO (IVar a) > > -- | pure! but blocks until the IVar is written > readIVar :: IVar a -> a > > -- | tries to write to an IVar. > -- Succeeds if it is empty (returning True) > -- Does nothing if it has been written to (returning False) > writeIVar :: IVar a -> a -> IO Bool > > Alternatively: > > -- | all in one > newIVar :: IO (a, a -> IO Bool) > > > Essentially a thunk, but with explicit control over filling it. > In fact, people have implemented something like this using C-- hacks > before: https://github.com/twanvl/unsafe-sequence > > > This would make MonadFix's implementation much nicer, I think :) > > This would suffice for MonadFix, right? > > Sorry for derailing the thread :-) > > Cheers, > Joachim > > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Jun 29 16:20:07 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 29 Jun 2018 12:20:07 -0400 Subject: Should we have primitive fill-once variables? References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> Message-ID: *first write first On Fri, Jun 29, 2018 at 12:19 PM Carter Schonwald < carter.schonwald at gmail.com> wrote: > i'm a little confused, whats the order of reads here? > Mvars have write wins, whats the order here? last writer runs first, > first writer runs last? (wouldn't there be starvation issues?) > > On Fri, Jun 29, 2018 at 11:51 AM Joachim Breitner < > mail at joachim-breitner.de> wrote: > >> Hi, >> >> when reading the subject I was expecting something like this: >> >> -- | Creates an empty IVar >> newIVar :: IO (IVar a) >> >> -- | pure! but blocks until the IVar is written >> readIVar :: IVar a -> a >> >> -- | tries to write to an IVar. >> -- Succeeds if it is empty (returning True) >> -- Does nothing if it has been written to (returning False) >> writeIVar :: IVar a -> a -> IO Bool >> >> Alternatively: >> >> -- | all in one >> newIVar :: IO (a, a -> IO Bool) >> >> >> Essentially a thunk, but with explicit control over filling it. >> In fact, people have implemented something like this using C-- hacks >> before: https://github.com/twanvl/unsafe-sequence >> >> > This would make MonadFix's implementation much nicer, I think :) >> >> This would suffice for MonadFix, right? >> >> Sorry for derailing the thread :-) >> >> Cheers, >> Joachim >> >> -- >> Joachim Breitner >> mail at joachim-breitner.de >> http://www.joachim-breitner.de/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... 
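To pin down the semantics being asked about, Joachim's proposed interface can be modelled today on top of MVar. This is only a reference model for the behaviour (reads block until the variable is filled, and the first write wins), not the lightweight primitive David is sketching; in particular the unsafePerformIO is the usual hand-waving that a real implementation would avoid:

    import Control.Concurrent.MVar
    import System.IO.Unsafe (unsafePerformIO)

    newtype IVar a = IVar (MVar a)

    newIVar :: IO (IVar a)
    newIVar = IVar <$> newEmptyMVar

    -- Blocks until the IVar has been filled. readMVar does not take the
    -- value, so every reader sees the same result once it is written.
    readIVar :: IVar a -> a
    readIVar (IVar m) = unsafePerformIO (readMVar m)

    -- tryPutMVar fills an empty MVar and returns True; if it is already
    -- full it returns False and leaves the contents alone, so the first
    -- write wins.
    writeIVar :: IVar a -> a -> IO Bool
    writeIVar (IVar m) = tryPutMVar m

In this IVar-flavoured model, racing writes are resolved in favour of whichever lands first, and blocked readers are woken in whatever order the runtime chooses, which is harmless because they all observe the same value.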
URL: From carter.schonwald at gmail.com Fri Jun 29 16:24:20 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 29 Jun 2018 12:24:20 -0400 Subject: Plan for GHC 8.6.1 In-Reply-To: <876028xgx4.fsf@smart-cactus.org> References: <87h8o6d8p8.fsf@smart-cactus.org> <878t74xls8.fsf@smart-cactus.org> <876028xgx4.fsf@smart-cactus.org> Message-ID: current cabal has a nasty build failure on haddock errors, i'd really appeciate some eyeballs / attention / ideas on how to get my fix / PR for it over the finish line for GHC 8.6 https://github.com/haskell/cabal/pull/5269 (the issue is that haddock failures, such as on an empty package, fail an entire build) On Sun, Jun 24, 2018 at 12:56 PM Ben Gamari wrote: > George Colpitts writes: > > > I agree. > > > > So back to my original question: will ghc 8.6.1 be moving to llvm 6 from > > llvm 5? > > > Ahh, whoops, missed the original question! > > 8.6 will use LLVM 6.0. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Fri Jun 29 16:54:23 2018 From: david at well-typed.com (David Feuer) Date: Fri, 29 Jun 2018 12:54:23 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> Message-ID: <1871529.NxJevu4dU2@squirrel> On Friday, June 29, 2018 11:51:07 AM EDT Joachim Breitner wrote: > when reading the subject I was expecting something like this: > > -- | pure! but blocks until the IVar is written > readIVar :: IVar a -> a > > -- | tries to write to an IVar. > -- Succeeds if it is empty (returning True) > -- Does nothing if it has been written to (returning False) > writeIVar :: IVar a -> a -> IO Bool It really depends. Are there useful (compile-time or run-time) optimization for IVars (write-once) that don't apply to QVars (fill-once)? If so, we might indeed want to offer writeIVar as you suggest, and > readIVar :: IVar a -> (# a #) The unboxed tuple allows the value to be extracted from the IVar without being forced. If, however, we want QVars, we can always simulate that using accursedUnutterablePerformIO: > readIVarPure (IVar var) = case readIVar# var realWorld# of (# _, a #) -> (# a #) I don't know if there are useful IVar optimizations or not. If so, we should take them; if not, we should take the extra flexibility. > Alternatively: > > -- | all in one > newIVar :: IO (a, a -> IO Bool) Does this have some advantage as a primop? > In fact, people have implemented something like this using C-- hacks > before: https://github.com/twanvl/unsafe-sequence I'll have to take a look. > > This would make MonadFix's implementation much nicer, I think :) > > This would suffice for MonadFix, right? It should indeed. Side note: my implementation sketch for readQVar was missing one piece: after a reader pushes itself on the stack, it must check the QVar status a second time in case the QVar was filled between the read and the enstackment. If it's been filled, it needs to awaken the first thread the way writeQVar would. Indeed, it might as well check the QVar status on every trip through the CAS loop. 
From david at well-typed.com Fri Jun 29 16:58:16 2018 From: david at well-typed.com (David Feuer) Date: Fri, 29 Jun 2018 12:58:16 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> Message-ID: <5053508.LyOgC1aFov@squirrel> On Friday, June 29, 2018 12:19:53 PM EDT Carter Schonwald wrote: > i'm a little confused, whats the order of reads here? > Mvars have write wins, whats the order here? last writer runs first, first > writer runs last? (wouldn't there be starvation issues?) Writes would occur in no particular order (very much like atomicWriteIORef). Blocked reads would occur from last to first, and I think that's okay. If you think about this as being primarily intended to implement IVars, but with a little extra flexibility, you should get the right intuition. From david at well-typed.com Fri Jun 29 17:59:41 2018 From: david at well-typed.com (David Feuer) Date: Fri, 29 Jun 2018 13:59:41 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: <1871529.NxJevu4dU2@squirrel> References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> <1871529.NxJevu4dU2@squirrel> Message-ID: <3744595.jx1lIHKdY1@squirrel> On Friday, June 29, 2018 12:54:23 PM EDT David Feuer wrote: > The unboxed tuple allows the value to be extracted from the IVar without being forced. If, however, we want QVars, we can always simulate that using accursedUnutterablePerformIO: Actually, this might be silly, depending on implementation details. Twan's technique (which may or may not be directly relevant) overwrites a closure with its final value, which is a pretty good match for your approach. We still need to be able to implement tryRead[IQ]Var; I'm not sure what limitations that imposes. From mail at joachim-breitner.de Fri Jun 29 18:13:39 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 29 Jun 2018 14:13:39 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: <1871529.NxJevu4dU2@squirrel> References: <7467676.ftxV8Eecj6@squirrel> <5a84d26570302b4b06a952d18ffbf32e355deda5.camel@joachim-breitner.de> <1871529.NxJevu4dU2@squirrel> Message-ID: Hi, Am Freitag, den 29.06.2018, 12:54 -0400 schrieb David Feuer: > On Friday, June 29, 2018 11:51:07 AM EDT Joachim Breitner wrote: > > when reading the subject I was expecting something like this: > > > > -- | pure! but blocks until the IVar is written > > readIVar :: IVar a -> a > > > > -- | tries to write to an IVar. > > -- Succeeds if it is empty (returning True) > > -- Does nothing if it has been written to (returning False) > > writeIVar :: IVar a -> a -> IO Bool > > It really depends. Are there useful (compile-time or run-time) > optimization for IVars (write-once) that don't apply to QVars (fill- > once)? If so, we might indeed want to offer writeIVar as you suggest, > and I don’t know! Maybe the GC can treat a filled IVar differently (because it is no longer mutable?) But really, I don't know :-) Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From david at well-typed.com Fri Jun 29 18:22:38 2018 From: david at well-typed.com (David Feuer) Date: Fri, 29 Jun 2018 14:22:38 -0400 Subject: Should we have primitive fill-once variables? In-Reply-To: References: <7467676.ftxV8Eecj6@squirrel> <1871529.NxJevu4dU2@squirrel> Message-ID: <1792862.fH3rJv34vH@squirrel> On Friday, June 29, 2018 2:13:39 PM EDT Joachim Breitner wrote: > I don’t know! Maybe the GC can treat a filled IVar differently (because > it is no longer mutable?) But really, I don't know :-) That's a very good point. If we turn the IVar into a pure value, then it loses its "dirty" bit and the GC no longer has to check whether it's pointing to a newer generation. But this is getting into territory that's pretty murky to me. From ben at well-typed.com Sat Jun 30 21:26:34 2018 From: ben at well-typed.com (Ben Gamari) Date: Sat, 30 Jun 2018 17:26:34 -0400 Subject: [ANNOUNCE] GHC 8.6.1-alpha1 available Message-ID: <87fu14ufsp.fsf@smart-cactus.org> The GHC development team is pleased to announce the first alpha release leading up to GHC 8.6.1. The usual release artifacts are available from https://downloads.haskell.org/~ghc/8.6.1-alpha1 This is the first release (partially) generated using our new CI infrastructure. One known issue is that the haddock documentation is currently unavailable. This will be fixed in the next alpha release. Do let us know if you spot anything else amiss. As always, do let us know if you encounter any trouble in the course of testing. Thanks for your help! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sat Jun 30 21:33:10 2018 From: ben at well-typed.com (Ben Gamari) Date: Sat, 30 Jun 2018 17:33:10 -0400 Subject: [ANNOUNCE] GHC 8.6.1-alpha1 available In-Reply-To: <87fu14ufsp.fsf@smart-cactus.org> References: <87fu14ufsp.fsf@smart-cactus.org> Message-ID: <87d0w8ufhn.fsf@smart-cactus.org> Small correction inline. Ben Gamari writes: > The GHC development team is pleased to announce the first > alpha release leading up to GHC 8.6.1. The usual release artifacts > are available from > > https://downloads.haskell.org/~ghc/8.6.1-alpha1 > > This is the first release (partially) generated using our new CI > infrastructure. One known issue is that the haddock documentation is > currently unavailable. Correction: the issue is not restricted only to haddock documentation; the users guide is also not present. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Sat Jun 30 23:38:02 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 1 Jul 2018 00:38:02 +0100 Subject: [ANNOUNCE] GHC 8.6.1-alpha1 available In-Reply-To: <87fu14ufsp.fsf@smart-cactus.org> References: <87fu14ufsp.fsf@smart-cactus.org> Message-ID: Users of nix can test their package using the instructions in this gist. It should be straightforward as the 8.6.1 alpha will be downloaded from the binary cache. https://gist.github.com/mpickering/fd26e9f03d6cb88cbb91b90b6019f3dd The compiler will use patches form head.hackage in order to build dependencies. Any problems, let me know. 
Matt On Sat, Jun 30, 2018 at 10:26 PM, Ben Gamari wrote: > > The GHC development team is pleased to announce the first > alpha release leading up to GHC 8.6.1. The usual release artifacts > are available from > > https://downloads.haskell.org/~ghc/8.6.1-alpha1 > > This is the first release (partially) generated using our new CI > infrastructure. One known issue is that the haddock documentation is > currently unavailable. This will be fixed in the next alpha release. Do > let us know if you spot anything else amiss. > > As always, do let us know if you encounter any trouble in the course of > testing. Thanks for your help! > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >