From takenobu.hs at gmail.com Sat Oct 1 07:37:24 2016
From: takenobu.hs at gmail.com (Takenobu Tani)
Date: Sat, 1 Oct 2016 16:37:24 +0900
Subject: How, precisely, can we improve?
In-Reply-To:
References: <7D482DF4-9AF7-4F41-8406-06247028F808@gmail.com>
 <6BBCE6FC-D792-43FB-8291-2BCE2AF744A0@lichtzwerge.de>
 <1474990356.4064381.738574073.34877450@webmail.messagingengine.com>
 <1F34D095-6E7F-44B1-B47F-AA7E9CEFBEAC@cs.brynmawr.edu>
 <1474993116.4076453.738641009.0E1FACB7@webmail.messagingengine.com>
 <62BFC158-B5B6-4C65-9D24-7BB064DEF696@cs.brynmawr.edu>
 <73702D9C-F495-462C-8220-ECCE08B62AAB@cs.brynmawr.edu>
Message-ID:

Main discussion is here (https://github.com/ghc-proposals/ghc-proposals/pull/10).

BTW, I prepared a conceptual web page about the following.

> Furthermore, we provide a simple search box for multiple wiki sites.
> (Please wait for a while. I'll prepare a simple conceptual demonstration
> on the web.)

Here is a rapid prototype for searching multiple wiki sites. [1]

[1]: https://takenobu-hs.github.io/haskell-wiki-search

Regards,
Takenobu

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From carter.schonwald at gmail.com Sat Oct 1 18:41:31 2016
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Sat, 1 Oct 2016 14:41:31 -0400
Subject: How, precisely, can we improve?
In-Reply-To:
References: <7D482DF4-9AF7-4F41-8406-06247028F808@gmail.com>
 <6BBCE6FC-D792-43FB-8291-2BCE2AF744A0@lichtzwerge.de>
 <1474990356.4064381.738574073.34877450@webmail.messagingengine.com>
 <1F34D095-6E7F-44B1-B47F-AA7E9CEFBEAC@cs.brynmawr.edu>
 <1474993116.4076453.738641009.0E1FACB7@webmail.messagingengine.com>
 <62BFC158-B5B6-4C65-9D24-7BB064DEF696@cs.brynmawr.edu>
 <73702D9C-F495-462C-8220-ECCE08B62AAB@cs.brynmawr.edu>
Message-ID:

Thank you so much. I know I've said it before, but the slides / notes you've taken the time to write and circulate have been truly great.
How can I and others help encourage you and others to keep up the great work on that side?

Good synthesis notes that help orient new and old contributors to systems they don't know or have forgotten are a powerful resource. I hope my encouragement helps. But either way it's a skill / focus that takes time to develop and is worth celebrating / thanking.

As always, happy to be a resource to help, but 👍 to you Takenobu! :)

On Saturday, October 1, 2016, Takenobu Tani wrote:

> Main discussion is here (https://github.com/ghc-proposals/ghc-proposals/pull/10).
>
>
> BTW, I prepared a conceptual web page about the following.
>
> > Furthermore, we provide a simple search box for multiple wiki sites.
> > (Please wait for a while. I'll prepare a simple conceptual demonstration
> with the web.)
>
>
> Here is a rapid prototype for searching multiple wiki sites. [1]
>
>
> [1]: https://takenobu-hs.github.io/haskell-wiki-search
>
> Regards,
> Takenobu
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marlowsd at gmail.com Sat Oct 1 20:47:13 2016
From: marlowsd at gmail.com (Simon Marlow)
Date: Sat, 1 Oct 2016 21:47:13 +0100
Subject: Create a ghc-simple-patch-propose list? Re: Notes from Ben's "contribute to ghc" discussion
In-Reply-To:
References: <87ponqijpk.fsf@ben-laptop.smart-cactus.org>
 <4B26A359-0937-4BEA-A071-D24BDF26C2DB@cs.brynmawr.edu>
 <8760pgf1pz.fsf@ben-laptop.smart-cactus.org>
 <87a8erebb0.fsf@ben-laptop.smart-cactus.org>
Message-ID:

A nice trick for dealing with stacked diffs in Phabricator is to use "git rebase -i" to modify diffs in the middle of the stack. You can also insert "x arc diff" between lines to automatically update later diffs on Phabricator after a rebase lower down the stack.

You only need a single branch for the whole stack, and continually rebase it. I also push the whole branch to github to get Travis to build it, but that's optional.
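For instance, the rebase todo for a three-commit stack might look like the following (commit hashes and subjects are invented for illustration; "x" is shorthand for "exec", so each "arc diff" runs after the commit above it is applied, assuming arc has already recorded a Differential Revision in each commit message):

```
$ git rebase -i origin/master

  pick a1b2c3d first patch in the stack
  x arc diff
  pick e4f5a6b second patch, built on the first
  x arc diff
  pick 9f8e7d6 third patch, built on the second
  x arc diff
```

Editing any commit in the todo then automatically re-uploads every later Diff in the stack once the rebase resumes.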
Cheers
Simon

On 29 September 2016 at 03:27, Moritz Angermann wrote:

>
> >> Hence you can go wild on your local branches (use whatever
> >> development model suits your needs) and get one final squashed commit
> >> with an extensive summary.
> >>
> > Sure, but this leads to generally unreviewable patches IMHO. In order to
> > stay sane I generally split up my work into a set of standalone patches
> > with git rebase and then create a Diff of each of these commits.
> > Phabricator supports this by having a notion of dependencies between
> > Diffs, but arcanist has no sensible scheme for taking a branch and
> > conveniently producing a series of Diffs.
>
> Yes, this has been a constant source of frustration for us as well. Dealing
> with dependent diffs is just plain painful with arc :( What I usually end up
> doing, and I assume that's what you are describing, is:
>
> Turning
>
>   A -- B -- C -- D -- E -- F -- origin/master
>   ^
>   HEAD
>
> into:
>
>   branch B1:           E -- F -- origin/master
>                       /
>   branch B2:     C -- D
>                 /
>   branch B3: A -- B
>
> and producing three diffs:
>
>   $ git checkout E
>   $ arc diff origin/master  # producing D1
>
>   $ git checkout C
>   $ arc diff B1             # adding “depends on D1” into the summary field
>
>   $ git checkout A
>   $ arc diff B2             # adding “depends on D2” into the summary field
>
> and then rebase B2 and B3 when changes to D1 on B1 are necessary.
>
> Running `arc patch` with dependent diffs often resulted in trouble;
> this seems to be getting better with the staging areas though.
>
> So clearly we can see there are drawbacks. All I wanted to say in
> the previous email was essentially that from my experience frustration
> with arc often came from trying to make arc be git.
>
> Cheers,
> Moritz

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From allbery.b at gmail.com Sat Oct 1 20:49:26 2016
From: allbery.b at gmail.com (Brandon Allbery)
Date: Sat, 1 Oct 2016 16:49:26 -0400
Subject: Create a ghc-simple-patch-propose list?
Re: Notes from Ben's "contribute to ghc" discussion In-Reply-To: References: <87ponqijpk.fsf@ben-laptop.smart-cactus.org> <4B26A359-0937-4BEA-A071-D24BDF26C2DB@cs.brynmawr.edu> <8760pgf1pz.fsf@ben-laptop.smart-cactus.org> <87a8erebb0.fsf@ben-laptop.smart-cactus.org> Message-ID: On Sat, Oct 1, 2016 at 4:47 PM, Simon Marlow wrote: > A nice trick for dealing with stacked diffs in Phabricator is to use "git > rebase -i" to modify diffs in the middle of the stack. You can also insert > "x arc diff" between lines to automatically update later diffs on > Phabricator after a rebase lower down the stack. > > You only need a single branch for the whole stack, and continually rebase > it. I also push the whole branch to github to get Travis to build it, but > that's optional. > Perhaps someone could put a sample workflow on (one of...) the wiki(s). -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sat Oct 1 21:09:28 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 01 Oct 2016 17:09:28 -0400 Subject: [GHC Proposal] Type-indexed Typeable Message-ID: <87intbbwtz.fsf@ben-laptop.smart-cactus.org> Hello everyone, I just opened Pull Request #16 [1] against the ghc-proposals repository, describing the new Type-indexed Typeable machinery that we would like to introduce with GHC 8.2. Please give it a read through and leave your comments. Thanks! Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals/pull/16 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 472 bytes
Desc: not available
URL:

From ben at smart-cactus.org Sat Oct 1 21:44:06 2016
From: ben at smart-cactus.org (Ben Gamari)
Date: Sat, 01 Oct 2016 17:44:06 -0400
Subject: [GHC Proposal] Allow use of TypeApplications in SPECIALISE pragmas
Message-ID: <87bmz3bv89.fsf@ben-laptop.smart-cactus.org>

Hello everyone,

I just opened Pull Request #15 [1] against the ghc-proposals repository, describing a proposed extension to the syntax of the SPECIALISE pragma, allowing significantly more concise usage. Please give it a read-through and leave your comments. Thanks!

Cheers,

- Ben

[1] https://github.com/ghc-proposals/ghc-proposals/pull/15

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 472 bytes
Desc: not available
URL:

From takenobu.hs at gmail.com Sun Oct 2 00:53:26 2016
From: takenobu.hs at gmail.com (Takenobu Tani)
Date: Sun, 2 Oct 2016 09:53:26 +0900
Subject: How, precisely, can we improve?
In-Reply-To:
References: <7D482DF4-9AF7-4F41-8406-06247028F808@gmail.com>
 <6BBCE6FC-D792-43FB-8291-2BCE2AF744A0@lichtzwerge.de>
 <1474990356.4064381.738574073.34877450@webmail.messagingengine.com>
 <1F34D095-6E7F-44B1-B47F-AA7E9CEFBEAC@cs.brynmawr.edu>
 <1474993116.4076453.738641009.0E1FACB7@webmail.messagingengine.com>
 <62BFC158-B5B6-4C65-9D24-7BB064DEF696@cs.brynmawr.edu>
 <73702D9C-F495-462C-8220-ECCE08B62AAB@cs.brynmawr.edu>
Message-ID:

Hi Carter,

Thank you, I'm glad to hear it :) The Haskell community is beautiful and great.

Regards,
Takenobu

2016-10-02 3:41 GMT+09:00 Carter Schonwald :

> Thank you so much. I know I've said it before. But the slides / notes
> you've taken the time to write and circulate before have been truly great.
>
> How can I and others help encourage you and others to keep up the great
> work on that side?
>
> Good synthesis notes that help orient new and old contributors to
> systems they don't know or have forgotten are a powerful resource.
>
> I hope my encouragement helps. But either way it's a skill / focus that
> takes time to develop and is worth celebrating / thanking.
>
> As always, happy to be a resource to help, but 👍 to you Takenobu! :)
>
>
> On Saturday, October 1, 2016, Takenobu Tani wrote:
>
>> Main discussion is here (https://github.com/ghc-proposals/ghc-proposals/pull/10).
>>
>>
>> BTW, I prepared a conceptual web page about the following.
>>
>> > Furthermore, we provide a simple search box for multiple wiki sites.
>> > (Please wait for a while. I'll prepare a simple conceptual demonstration
>> with the web.)
>>
>>
>> Here is a rapid prototype for searching multiple wiki sites. [1]
>>
>>
>> [1]: https://takenobu-hs.github.io/haskell-wiki-search
>>
>> Regards,
>> Takenobu
>>

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rae at cs.brynmawr.edu Sun Oct 2 01:58:02 2016
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Sat, 1 Oct 2016 21:58:02 -0400
Subject: Getting rid of -XImpredicativeTypes
In-Reply-To:
References:
Message-ID:

> On Sep 30, 2016, at 6:36 PM, Baldur Blöndal wrote:
>
> Shot in the dark: Would extensions like QuantifiedConstraints or
> ImplicationConstraints, if implemented, help with ImpredicativeTypes?

I don't think so. The challenge with ImpredicativeTypes is retaining predictability of type inference, and I don't see how implication constraints help with this.

Richard

From ganesh at earth.li Sun Oct 2 11:06:57 2016
From: ganesh at earth.li (Ganesh Sittampalam)
Date: Sun, 2 Oct 2016 12:06:57 +0100
Subject: Getting rid of -XImpredicativeTypes
In-Reply-To:
References:
Message-ID:

Elsewhere in the thread, you said

> 1) ImpredicativeTypes enables types like `Maybe (forall a. a)`. Do
> those just disappear, or are they also enabled anyway? (I would guess
> the former.)
>
> Yes, they'd disappear.
but here you're talking about 'xs :: [forall a . a->a]' being possible with VTA - is the idea that such types will be possible but only with both explicit signatures and VTA? On 30/09/2016 16:29, Simon Peyton Jones via ghc-devs wrote: > > Alejandro: excellent point. I mis-spoke before. In my proposal we > WILL allow types like (Tree (forall a. a->a)). > > > > I’m trying to get round to writing a proposal (would someone else like > to write it – it should be short), but the idea is this: > > > > *When you have -XImpredicativeTypes* > > · *You can write a polytype in a visible type argument; eg. f > @(forall a. a->a)* > > · *You can write a polytype as an argument of a type in a > signature e.g. f :: [forall a. a->a] -> Int* > > * * > > *And that’s all. A unification variable STILL CANNOT be unified with > a polytype. The only way you can call a polymorphic function at a > polytype is to use Visible Type Application.* > > * * > > *So using impredicative types might be tiresome. E.g.* > > * type SID = forall a. a->a* > > * * > > * xs :: [forall a. a->a]* > > * xs = (:) @SID id ( (:) @SID id ([] @ SID))* > > * * > > *In short, if you call a function at a polytype, you must use VTA. > Simple, easy, predictable; and doubtless annoying. But possible*. > > > > Simon > > > > *From:*Alejandro Serrano Mena [mailto:trupill at gmail.com] > *Sent:* 26 September 2016 08:13 > *To:* Simon Peyton Jones > *Cc:* ghc-users at haskell.org; ghc-devs at haskell.org > *Subject:* Re: Getting rid of -XImpredicativeTypes > > > > What would be the story for the types of the arguments. Would I be > allowed to write the following? > > > f (lst :: [forall a. a -> a]) = head @(forall a. a -> a) lst 3 > > Regards, > > Alejandro > > > > 2016-09-25 20:05 GMT+02:00 Simon Peyton Jones via ghc-devs > >: > > Friends > > > > GHC has a flag -XImpredicativeTypes that makes a half-hearted > attempt to support impredicative polymorphism. But it is > vestigial…. if it works, it’s really a fluke. 
We don’t really > have a systematic story here at all. > > > > I propose, therefore, to remove it entirely. That is, if you use > -XImpredicativeTypes, you’ll get a warning that it does nothing > (ie. complete no-op) and you should remove it. > > > > Before I pull the trigger, does anyone think they are using it in > a mission-critical way? > > > > Now that we have Visible Type Application there is a workaround: > if you want to call a polymorphic function at a polymorphic type, > you can explicitly apply it to that type. For example: > > > > {-# LANGUAGE ImpredicativeTypes, TypeApplications, RankNTypes #-} > > module Vta where > > f x = id @(forall a. a->a) id @Int x > > > > You can also leave out the @Int part of course. > > > > Currently we have to use -XImpredicativeTypes to allow the > @(forall a. a->a). Is that sensible? Or should we allow it > regardless? I rather think the latter… if you have Visible Type > Application (i.e. -XTypeApplications) then applying to a polytype > is nothing special. So I propose to lift that restriction. > > > > I should go through the GHC Proposals Process for this, but I’m on > a plane, so I’m going to at least start with an email. > > > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Oct 2 12:18:29 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 2 Oct 2016 08:18:29 -0400 Subject: Getting rid of -XImpredicativeTypes In-Reply-To: References: Message-ID: On a more inane front, does this give a path to either making $ less magical, or better user facing errors when folks use compose (.) 
style code instead and hit impredicativity issues that $ magic would have handled?

On Sunday, October 2, 2016, Ganesh Sittampalam wrote:

> Elsewhere in the thread, you said
>
> 1) ImpredicativeTypes enables types like `Maybe (forall a. a)`. Do those
> just disappear, or are they also enabled anyway? (I would guess the former.)
> Yes, they’d disappear.
>
>
> but here you're talking about 'xs :: [forall a . a->a]' being possible
> with VTA - is the idea that such types will be possible but only with both
> explicit signatures and VTA?
>
> On 30/09/2016 16:29, Simon Peyton Jones via ghc-devs wrote:
>
> Alejandro: excellent point. I mis-spoke before. In my proposal we WILL
> allow types like (Tree (forall a. a->a)).
>
>
> I’m trying to get round to writing a proposal (would someone else like to
> write it – it should be short), but the idea is this:
>
>
> *When you have -XImpredicativeTypes*
>
> · *You can write a polytype in a visible type argument; eg. f
> @(forall a. a->a)*
>
> · *You can write a polytype as an argument of a type in a
> signature e.g. f :: [forall a. a->a] -> Int*
>
>
> *And that’s all. A unification variable STILL CANNOT be unified with a
> polytype. The only way you can call a polymorphic function at a polytype
> is to use Visible Type Application.*
>
>
> *So using impredicative types might be tiresome. E.g.*
>
> * type SID = forall a. a->a*
>
> * xs :: [forall a. a->a]*
> * xs = (:) @SID id ( (:) @SID id ([] @ SID))*
>
>
> *In short, if you call a function at a polytype, you must use VTA.
> Simple, easy, predictable; and doubtless annoying. But possible*.
>
>
> Simon
>
>
> *From:* Alejandro Serrano Mena [mailto:trupill at gmail.com]
> *Sent:* 26 September 2016 08:13
> *To:* Simon Peyton Jones
> *Cc:* ghc-users at haskell.org; ghc-devs at haskell.org
> *Subject:* Re: Getting rid of -XImpredicativeTypes
>
>
> What would be the story for the types of the arguments. Would I be allowed
> to write the following?
> > > f (lst :: [forall a. a -> a]) = head @(forall a. a -> a) lst 3 > > Regards, > > Alejandro > > > > 2016-09-25 20:05 GMT+02:00 Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org > >: > > Friends > > > > GHC has a flag -XImpredicativeTypes that makes a half-hearted attempt to > support impredicative polymorphism. But it is vestigial…. if it works, > it’s really a fluke. We don’t really have a systematic story here at all. > > > > I propose, therefore, to remove it entirely. That is, if you use > -XImpredicativeTypes, you’ll get a warning that it does nothing (ie. > complete no-op) and you should remove it. > > > > Before I pull the trigger, does anyone think they are using it in a > mission-critical way? > > > > Now that we have Visible Type Application there is a workaround: if you > want to call a polymorphic function at a polymorphic type, you can > explicitly apply it to that type. For example: > > > > {-# LANGUAGE ImpredicativeTypes, TypeApplications, RankNTypes #-} > > module Vta where > > f x = id @(forall a. a->a) id @Int x > > > > You can also leave out the @Int part of course. > > > > Currently we have to use -XImpredicativeTypes to allow the @(forall a. > a->a). Is that sensible? Or should we allow it regardless? I rather > think the latter… if you have Visible Type Application (i.e. > -XTypeApplications) then applying to a polytype is nothing special. So I > propose to lift that restriction. > > > > I should go through the GHC Proposals Process for this, but I’m on a > plane, so I’m going to at least start with an email. > > > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing listghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ben at smart-cactus.org Sun Oct 2 15:38:33 2016
From: ben at smart-cactus.org (Ben Gamari)
Date: Sun, 02 Oct 2016 11:38:33 -0400
Subject: Getting rid of -XImpredicativeTypes
In-Reply-To:
References:
Message-ID: <871szybw1y.fsf@ben-laptop.smart-cactus.org>

Carter Schonwald writes:

> On a more inane front, does this give a path to either making $ less
> magical, or better user-facing errors when folks use compose (.) style code
> instead and hit impredicativity issues that $ magic would have handled?
>
I don't believe this will have any effect on the behavior of ($). That is, unless you don't mind giving up the ability to write

    runST $ do ...

Cheers,

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 472 bytes
Desc: not available
URL:

From hvriedel at gmail.com Mon Oct 3 08:29:06 2016
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Mon, 03 Oct 2016 10:29:06 +0200
Subject: Allow top-level shadowing for imported names?
Message-ID: <877f9pkf8t.fsf@gnu.org>

Hi *,

I seem to recall this was already suggested in the past, but I can't seem to find it in the archives.
For simplicity I'll restate the idea:

    foo :: Int -> Int -> (Int,Int)
    foo x y = (bar x, bar y)
      where
        bar x = x+x

results merely in a name-shadowing warning (for -Wall):

    foo.hs:4:9: warning: [-Wname-shadowing]
        This binding for ‘x’ shadows the existing binding
          bound at foo.hs:2:5

However,

    import Data.Monoid

    (<>) :: String -> String -> String
    (<>) = (++)

    main :: IO ()
    main = putStrLn ("Hi" <> "There")

doesn't allow shadowing of (<>), but rather complains about ambiguity:

    bar.hs:7:23: error:
        Ambiguous occurrence ‘<>’
        It could refer to either ‘Data.Monoid.<>’,
                                 imported from ‘Data.Monoid’ at bar.hs:1:1-18
                              or ‘Main.<>’, defined at bar.hs:4:1

This is of course in line with the Haskell Report, which says in
https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3

| The entities exported by a module may be brought into scope in another
| module with an import declaration at the beginning of the module. The
| import declaration names the module to be imported and optionally
| specifies the entities to be imported. A single module may be imported
| by more than one import declaration. Imported names serve as top level
| declarations: they scope over the entire body of the module but may be
| shadowed by *local non-top-level bindings.*

However, why don't we allow this to be relaxed via a new language extension that allows top-level bindings to shadow imported names (and of course emits a warning)?

Unless I'm missing something, this would help to keep existing and working code compiling if new versions of libraries start exporting new symbols (which happen to clash with local top-level defs), rather than resulting in a fatal name-clash; and have no major downsides.
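For comparison, the workaround available today is to hide the clashing name at each import site, which must be updated whenever the set of clashes changes; the relaxation sketched above would make these hiding clauses unnecessary. A minimal sketch (the extra Prelude line is only needed on base versions where Prelude itself also re-exports (<>)):

```haskell
-- Today's workaround: hide the clashing import explicitly, so the
-- top-level definition is the only (<>) in scope.
import Prelude hiding ((<>))      -- only needed where Prelude exports (<>)
import Data.Monoid hiding ((<>))

(<>) :: String -> String -> String
(<>) = (++)

main :: IO ()
main = putStrLn ("Hi" <> "There")  -- unambiguous: only Main.<> is in scope
```

With the proposed extension, both hiding clauses could be dropped and the top-level (<>) would simply shadow the imported ones, with a warning.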
If this sounds like a good idea, I'll happily promote this into a proper proposal over at https://github.com/ghc-proposals/ghc-proposals; I mostly wanted to get early feedback here (and possibly find out if and where this was proposed before), before investing more time turning this into a fully fledged GHC proposal. Cheers, HVR -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 818 bytes Desc: not available URL: From ezyang at mit.edu Mon Oct 3 09:10:01 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 03 Oct 2016 02:10:01 -0700 Subject: Allow top-level shadowing for imported names? In-Reply-To: <877f9pkf8t.fsf@gnu.org> References: <877f9pkf8t.fsf@gnu.org> Message-ID: <1475485788-sup-1113@sabre> I don't see why not. (But then again I wasn't around for Haskell98!) Edward Excerpts from Herbert Valerio Riedel's message of 2016-10-03 10:29:06 +0200: > Hi *, > > I seem to recall this was already suggested in the past, but I can't > seem to find it in the archives. 
For simplicity I'll restate the idea: > > > foo :: Int -> Int -> (Int,Int) > foo x y = (bar x, bar y) > where > bar x = x+x > > results merely in a name-shadowing warning (for -Wall): > > foo.hs:4:9: warning: [-Wname-shadowing] > This binding for ‘x’ shadows the existing binding > bound at foo.hs:2:5 > > > However, > > import Data.Monoid > > (<>) :: String -> String -> String > (<>) = (++) > > main :: IO () > main = putStrLn ("Hi" <> "There") > > doesn't allow to shadow (<>), but rather complains about ambiguity: > > bar.hs:7:23: error: > Ambiguous occurrence ‘<>’ > It could refer to either ‘Data.Monoid.<>’, > imported from ‘Data.Monoid’ at bar.hs:1:1-18 > or ‘Main.<>’, defined at bar.hs:4:1 > > > This is of course in line with the Haskell Report, which says in > https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3 > > | The entities exported by a module may be brought into scope in another > | module with an import declaration at the beginning of the module. The > | import declaration names the module to be imported and optionally > | specifies the entities to be imported. A single module may be imported > | by more than one import declaration. Imported names serve as top level > | declarations: they scope over the entire body of the module but may be > | shadowed by *local non-top-level bindings.* > > > However, why don't we allow this to be relaxed via a new language > extensions, to allow top-level bindings to shadow imported names (and > of course emit a warning)? > > Unless I'm missing something, this would help to keep existing and > working code compiling if new versions of libraries start exporting new > symbols (which happen to clash with local top-level defs), rather than > resulting in a fatal name-clash; and have no major downsides. 
>
> If this sounds like a good idea, I'll happily promote this into a proper
> proposal over at https://github.com/ghc-proposals/ghc-proposals; I
> mostly wanted to get early feedback here (and possibly find out if and
> where this was proposed before), before investing more time turning
> this into a fully fledged GHC proposal.
>
> Cheers,
> HVR

From simonpj at microsoft.com Mon Oct 3 09:14:42 2016
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 3 Oct 2016 09:14:42 +0000
Subject: Getting rid of -XImpredicativeTypes
In-Reply-To:
References:
Message-ID:

Indeed, as I said “I mis-spoke before: In my proposal we WILL allow types like (Tree (forall a. a->a))”. So yes, such types will be possible in type signatures (with ImpredicativeTypes). But using functions with such type signatures will be tiresome, because you’ll have to use VTA on every occasion. E.g. if xs :: [forall a. a->a] then you can’t say (reverse xs), because that requires impredicative instantiation of reverse’s type argument. You must say

    reverse @(forall a. a->a) xs

Does that help?

Simon

From: Ganesh Sittampalam [mailto:ganesh at earth.li]
Sent: 02 October 2016 12:07
To: Simon Peyton Jones ; Alejandro Serrano Mena
Cc: ghc-devs at haskell.org; ghc-users at haskell.org
Subject: Re: Getting rid of -XImpredicativeTypes

Elsewhere in the thread, you said

1) ImpredicativeTypes enables types like `Maybe (forall a. a)`. Do those just disappear, or are they also enabled anyway? (I would guess the former.)

Yes, they’d disappear.

but here you're talking about 'xs :: [forall a . a->a]' being possible with VTA - is the idea that such types will be possible but only with both explicit signatures and VTA?

On 30/09/2016 16:29, Simon Peyton Jones via ghc-devs wrote:

Alejandro: excellent point. I mis-spoke before. In my proposal we WILL allow types like (Tree (forall a. a->a)).
I’m trying to get round to writing a proposal (would someone else like to write it – it should be short), but the idea is this: When you have -XImpredicativeTypes · You can write a polytype in a visible type argument; eg. f @(forall a. a->a) · You can write a polytype as an argument of a type in a signature e.g. f :: [forall a. a->a] -> Int And that’s all. A unification variable STILL CANNOT be unified with a polytype. The only way you can call a polymorphic function at a polytype is to use Visible Type Application. So using impredicative types might be tiresome. E.g. type SID = forall a. a->a xs :: [forall a. a->a] xs = (:) @SID id ( (:) @SID id ([] @ SID)) In short, if you call a function at a polytype, you must use VTA. Simple, easy, predictable; and doubtless annoying. But possible. Simon From: Alejandro Serrano Mena [mailto:trupill at gmail.com] Sent: 26 September 2016 08:13 To: Simon Peyton Jones Cc: ghc-users at haskell.org; ghc-devs at haskell.org Subject: Re: Getting rid of -XImpredicativeTypes What would be the story for the types of the arguments. Would I be allowed to write the following? > f (lst :: [forall a. a -> a]) = head @(forall a. a -> a) lst 3 Regards, Alejandro 2016-09-25 20:05 GMT+02:00 Simon Peyton Jones via ghc-devs >: Friends GHC has a flag -XImpredicativeTypes that makes a half-hearted attempt to support impredicative polymorphism. But it is vestigial…. if it works, it’s really a fluke. We don’t really have a systematic story here at all. I propose, therefore, to remove it entirely. That is, if you use -XImpredicativeTypes, you’ll get a warning that it does nothing (ie. complete no-op) and you should remove it. Before I pull the trigger, does anyone think they are using it in a mission-critical way? Now that we have Visible Type Application there is a workaround: if you want to call a polymorphic function at a polymorphic type, you can explicitly apply it to that type. 
For example: {-# LANGUAGE ImpredicativeTypes, TypeApplications, RankNTypes #-} module Vta where f x = id @(forall a. a->a) id @Int x You can also leave out the @Int part of course. Currently we have to use -XImpredicativeTypes to allow the @(forall a. a->a). Is that sensible? Or should we allow it regardless? I rather think the latter… if you have Visible Type Application (i.e. -XTypeApplications) then applying to a polytype is nothing special. So I propose to lift that restriction. I should go through the GHC Proposals Process for this, but I’m on a plane, so I’m going to at least start with an email. Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 3 09:44:08 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 3 Oct 2016 09:44:08 +0000 Subject: Allow top-level shadowing for imported names? In-Reply-To: <877f9pkf8t.fsf@gnu.org> References: <877f9pkf8t.fsf@gnu.org> Message-ID: Fine with me! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 03 October 2016 09:29 | To: ghc-devs | Subject: Allow top-level shadowing for imported names? | | Hi *, | | I seem to recall this was already suggested in the past, but I can't | seem to find it in the archives. 
For simplicity I'll restate the idea: | | | foo :: Int -> Int -> (Int,Int) | foo x y = (bar x, bar y) | where | bar x = x+x | | results merely in a name-shadowing warning (for -Wall): | | foo.hs:4:9: warning: [-Wname-shadowing] | This binding for ‘x’ shadows the existing binding | bound at foo.hs:2:5 | | | However, | | import Data.Monoid | | (<>) :: String -> String -> String | (<>) = (++) | | main :: IO () | main = putStrLn ("Hi" <> "There") | | doesn't allow to shadow (<>), but rather complains about ambiguity: | | bar.hs:7:23: error: | Ambiguous occurrence ‘<>’ | It could refer to either ‘Data.Monoid.<>’, | imported from ‘Data.Monoid’ at | bar.hs:1:1-18 | or ‘Main.<>’, defined at bar.hs:4:1 | | | This is of course in line with the Haskell Report, which says in | https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11- | 1010005.3 | | | The entities exported by a module may be brought into scope in | another | | module with an import declaration at the beginning of the module. | The | | import declaration names the module to be imported and optionally | | specifies the entities to be imported. A single module may be | imported | | by more than one import declaration. Imported names serve as top | level | | declarations: they scope over the entire body of the module but may | be | | shadowed by *local non-top-level bindings.* | | | However, why don't we allow this to be relaxed via a new language | extensions, to allow top-level bindings to shadow imported names (and | of course emit a warning)? | | Unless I'm missing something, this would help to keep existing and | working code compiling if new versions of libraries start exporting | new symbols (which happen to clash with local top-level defs), rather | than resulting in a fatal name-clash; and have no major downsides. 
| | If this sounds like a good idea, I'll happily promote this into a | proper proposal over at | https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithu | b.com%2Fghc-proposals%2Fghc- | proposals&data=01%7C01%7Csimonpj%40microsoft.com%7C6cb5253b609241e0b10 | 008d3eb675d5e%7C72f988bf86f141af91ab2d7cd011db47%7C1&sdata=COdAXpXOOox | mAnZSBnJfbF%2BTctssVUlqn%2BiccABrkF0%3D&reserved=0; I mostly wanted to | get early feedback here (and possibly find out if and where this was | proposed before), before investing more time turning this into a fully | fledged GHC proposal. | | Cheers, | HVR From rae at cs.brynmawr.edu Mon Oct 3 11:46:59 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 3 Oct 2016 07:46:59 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: <877f9pkf8t.fsf@gnu.org> References: <877f9pkf8t.fsf@gnu.org> Message-ID: <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> By all means make the proposal -- I like this idea. > On Oct 3, 2016, at 4:29 AM, Herbert Valerio Riedel wrote: > > Hi *, > > I seem to recall this was already suggested in the past, but I can't > seem to find it in the archives. 
For simplicity I'll restate the idea: > > > foo :: Int -> Int -> (Int,Int) > foo x y = (bar x, bar y) > where > bar x = x+x > > results merely in a name-shadowing warning (for -Wall): > > foo.hs:4:9: warning: [-Wname-shadowing] > This binding for ‘x’ shadows the existing binding > bound at foo.hs:2:5 > > > However, > > import Data.Monoid > > (<>) :: String -> String -> String > (<>) = (++) > > main :: IO () > main = putStrLn ("Hi" <> "There") > > doesn't allow to shadow (<>), but rather complains about ambiguity: > > bar.hs:7:23: error: > Ambiguous occurrence ‘<>’ > It could refer to either ‘Data.Monoid.<>’, > imported from ‘Data.Monoid’ at bar.hs:1:1-18 > or ‘Main.<>’, defined at bar.hs:4:1 > > > This is of course in line with the Haskell Report, which says in > https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3 > > | The entities exported by a module may be brought into scope in another > | module with an import declaration at the beginning of the module. The > | import declaration names the module to be imported and optionally > | specifies the entities to be imported. A single module may be imported > | by more than one import declaration. Imported names serve as top level > | declarations: they scope over the entire body of the module but may be > | shadowed by *local non-top-level bindings.* > > > However, why don't we allow this to be relaxed via a new language > extensions, to allow top-level bindings to shadow imported names (and > of course emit a warning)? > > Unless I'm missing something, this would help to keep existing and > working code compiling if new versions of libraries start exporting new > symbols (which happen to clash with local top-level defs), rather than > resulting in a fatal name-clash; and have no major downsides. 
> > If this sounds like a good idea, I'll happily promote this into a proper > proposal over at https://github.com/ghc-proposals/ghc-proposals; I > mostly wanted to get early feedback here (and possibly find out if and > where this was proposed before), before investing more time turning > this into a fully fledged GHC proposal. > > Cheers, > HVR > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Mon Oct 3 13:43:41 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 03 Oct 2016 09:43:41 -0400 Subject: Status of Harbormaster Message-ID: <8737kda6pe.fsf@ben-laptop.smart-cactus.org> Hello everyone, Over the last few weeks I have been gradually pushing away at increasing our Harbormaster coverage. I'm happy to report that Harbormaster should now test commits on, * x86_64 Ubuntu Linux * x86_64 Mac OS X Sierra * x86_64 Windows (although the bugs are still being worked out here) Differentials are tested on, * x86_64 Ubuntu Linux * x86_64 Windows For those of you following along at home I've roughly documented the configuration on the Phabricator Wiki [1]. One open question is whether we want to enable Differential building on the OS X box. The security implications of allowing essentially anonymous users to build and run untrusted code in our own CI environment are already quite sticky; to enable Differential building on the OS X box would mean that we would be running untrusted code on someone else's hardware, which seems like it may be a step too far. It would be nice to find a way to extend Differential builds to other platforms in the future, however, so we can ensure that we catch bad patches before they even make it to the tree. It's a bit unclear how far we should extend test coverage. In the future I think I will at very least add an i386 Ubuntu environment, but we could go farther still. 
For instance these platforms immediately come to mind, * x86_64 FreeBSD * x86_64 Solaris * ARM Linux (although this could be quite tricky given the speed of these machines) * AArch64 Linux There is certainly a maintenance and complexity tradeoff that comes with extending our coverage like this, however, so it's quite unclear where the right compromise lies. I'd love to hear what you think. Happy hacking, - Ben [1] https://phabricator.haskell.org/w/ghc_harbormaster/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From iavor.diatchki at gmail.com Mon Oct 3 18:12:09 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 3 Oct 2016 11:12:09 -0700 Subject: Allow top-level shadowing for imported names? In-Reply-To: <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> Message-ID: Hi, Lennart suggested that some time ago, here is the thread from the last time we discussed it: https://mail.haskell.org/pipermail/haskell-prime/2012-July/003702.html I think it is a good plan! -Iavor On Mon, Oct 3, 2016 at 4:46 AM, Richard Eisenberg wrote: > By all means make the proposal -- I like this idea. > > > On Oct 3, 2016, at 4:29 AM, Herbert Valerio Riedel > wrote: > > > > Hi *, > > > > I seem to recall this was already suggested in the past, but I can't > > seem to find it in the archives. 
For simplicity I'll restate the idea: > > > > > > foo :: Int -> Int -> (Int,Int) > > foo x y = (bar x, bar y) > > where > > bar x = x+x > > > > results merely in a name-shadowing warning (for -Wall): > > > > foo.hs:4:9: warning: [-Wname-shadowing] > > This binding for ‘x’ shadows the existing binding > > bound at foo.hs:2:5 > > > > > > However, > > > > import Data.Monoid > > > > (<>) :: String -> String -> String > > (<>) = (++) > > > > main :: IO () > > main = putStrLn ("Hi" <> "There") > > > > doesn't allow to shadow (<>), but rather complains about ambiguity: > > > > bar.hs:7:23: error: > > Ambiguous occurrence ‘<>’ > > It could refer to either ‘Data.Monoid.<>’, > > imported from ‘Data.Monoid’ at > bar.hs:1:1-18 > > or ‘Main.<>’, defined at bar.hs:4:1 > > > > > > This is of course in line with the Haskell Report, which says in > > https://www.haskell.org/onlinereport/haskell2010/ > haskellch5.html#x11-1010005.3 > > > > | The entities exported by a module may be brought into scope in another > > | module with an import declaration at the beginning of the module. The > > | import declaration names the module to be imported and optionally > > | specifies the entities to be imported. A single module may be imported > > | by more than one import declaration. Imported names serve as top level > > | declarations: they scope over the entire body of the module but may be > > | shadowed by *local non-top-level bindings.* > > > > > > However, why don't we allow this to be relaxed via a new language > > extensions, to allow top-level bindings to shadow imported names (and > > of course emit a warning)? > > > > Unless I'm missing something, this would help to keep existing and > > working code compiling if new versions of libraries start exporting new > > symbols (which happen to clash with local top-level defs), rather than > > resulting in a fatal name-clash; and have no major downsides. 
> > > > If this sounds like a good idea, I'll happily promote this into a proper > > proposal over at https://github.com/ghc-proposals/ghc-proposals; I > > mostly wanted to get early feedback here (and possibly find out if and > > where this was proposed before), before investing more time turning > > this into a fully fledged GHC proposal. > > > > Cheers, > > HVR > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ganesh at earth.li Mon Oct 3 22:36:08 2016 From: ganesh at earth.li (Ganesh Sittampalam) Date: Mon, 3 Oct 2016 23:36:08 +0100 Subject: Getting rid of -XImpredicativeTypes In-Reply-To: References: Message-ID: Oops, I completely missed you saying that despite reading your post multiple times and actually quoting it. Sorry about that. But yes, that makes it very clear, thanks. Doable, even if a pain in the neck. The motivation for my question was that I vaguely recalled encountering code that uses impredicative instantiation when upgrading darcs to support GHC 8.0. Using VTA will at least make it feasible to migrate even if it requires CPP, so I'm no longer worried about having to rewrite hurriedly. Given the type inference problems, I appreciate it's better to just give up than try to support something half-baked. On 03/10/2016 10:14, Simon Peyton Jones wrote: > Indeed, as I said “I mis-spoke before: In my proposal we WILL allow > types like (Tree (forall a. a->a))”. > > > > So yes, such types will be possible in type signatures (with > ImpredicativeTypes). But using functions with such type signatures will > be tiresome, because you’ll have to use VTA on every occasion. E.g. if > > xs :: [forall a. 
a->a] > > > > then you can’t say (reverse xs), because that requires impredicative > instantiation of reverse’s type argument. You must say > > reverse @(forall a. a->a) xs > > > > Does that help? > > Simon > > > > *From:*Ganesh Sittampalam [mailto:ganesh at earth.li] > *Sent:* 02 October 2016 12:07 > *To:* Simon Peyton Jones ; Alejandro Serrano Mena > > *Cc:* ghc-devs at haskell.org; ghc-users at haskell.org > *Subject:* Re: Getting rid of -XImpredicativeTypes > > > > Elsewhere in the thread, you said > > > 1) ImpredicativeTypes enables types like `Maybe (forall a. a)`. Do > those just disappear, or are they also enabled anyway? (I would > guess the former.) > > Yes, they’d disappear. > > > but here you're talking about 'xs :: [forall a . a->a]' being possible > with VTA - is the idea that such types will be possible but only with > both explicit signatures and VTA? > > On 30/09/2016 16:29, Simon Peyton Jones via ghc-devs wrote: > > Alejandro: excellent point. I mis-spoke before. In my proposal we > WILL allow types like (Tree (forall a. a->a)). > > > > I’m trying to get round to writing a proposal (would someone else > like to write it – it should be short), but the idea is this: > > > > *When you have -XImpredicativeTypes* > > · *You can write a polytype in a visible type argument; e.g. > f @(forall a. a->a)* > > · *You can write a polytype as an argument of a type in a > signature e.g. f :: [forall a. a->a] -> Int* > > * * > > *And that’s all. A unification variable STILL CANNOT be unified > with a polytype. The only way you can call a polymorphic function > at a polytype is to use Visible Type Application.* > > * * > > *So using impredicative types might be tiresome. E.g.* > > * type SID = forall a. a->a* > > * * > > * xs :: [forall a. a->a]* > > * xs = (:) @SID id ( (:) @SID id ([] @SID))* > > * * > > *In short, if you call a function at a polytype, you must use VTA. > Simple, easy, predictable; and doubtless annoying. But possible*. 
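For readers who have not used Visible Type Application itself, here is a minimal use of GHC 8.0's -XTypeApplications at ordinary monotypes, independent of the impredicativity question (the module name is just for illustration):

```haskell
{-# LANGUAGE TypeApplications #-}
module VtaDemo where

-- read :: forall a. Read a => String -> a
-- The @-argument instantiates the quantified variable explicitly,
-- instead of relying on inference or a result-type annotation.
n :: Int
n = read @Int "42"

-- show :: forall a. Show a => a -> String
s :: String
s = show @Double 2.5
```

Under Simon's sketch above, this same mechanism would be the only way to instantiate a type variable at a polytype, e.g. reverse @(forall a. a->a) xs.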
> > > > Simon > > > > *From:*Alejandro Serrano Mena [mailto:trupill at gmail.com] > *Sent:* 26 September 2016 08:13 > *To:* Simon Peyton Jones > > *Cc:* ghc-users at haskell.org ; > ghc-devs at haskell.org > *Subject:* Re: Getting rid of -XImpredicativeTypes > > > > What would be the story for the types of the arguments. Would I be > allowed to write the following? > > > f (lst :: [forall a. a -> a]) = head @(forall a. a -> a) lst 3 > > Regards, > > Alejandro > > > > 2016-09-25 20:05 GMT+02:00 Simon Peyton Jones via ghc-devs > >: > > Friends > > > > GHC has a flag -XImpredicativeTypes that makes a half-hearted > attempt to support impredicative polymorphism. But it is > vestigial…. if it works, it’s really a fluke. We don’t really > have a systematic story here at all. > > > > I propose, therefore, to remove it entirely. That is, if you > use -XImpredicativeTypes, you’ll get a warning that it does > nothing (ie. complete no-op) and you should remove it. > > > > Before I pull the trigger, does anyone think they are using it > in a mission-critical way? > > > > Now that we have Visible Type Application there is a workaround: > if you want to call a polymorphic function at a polymorphic > type, you can explicitly apply it to that type. For example: > > > > {-# LANGUAGE ImpredicativeTypes, TypeApplications, RankNTypes #-} > > module Vta where > > f x = id @(forall a. a->a) id @Int x > > > > You can also leave out the @Int part of course. > > > > Currently we have to use -XImpredicativeTypes to allow the > @(forall a. a->a). Is that sensible? Or should we allow it > regardless? I rather think the latter… if you have Visible > Type Application (i.e. -XTypeApplications) then applying to a > polytype is nothing special. So I propose to lift that > restriction. > > > > I should go through the GHC Proposals Process for this, but I’m > on a plane, so I’m going to at least start with an email. 
> > > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > From mmskeen at gmail.com Tue Oct 4 03:16:59 2016 From: mmskeen at gmail.com (Michael Skeen) Date: Mon, 3 Oct 2016 21:16:59 -0600 Subject: Patterns & Quality Attributes Research Message-ID: Dear Haskell Compiler Community, We are doing undergraduate research on software architecture patterns and quality attributes for Utah Valley University. We recently analyzed the work published on Haskell Compiler in the Architecture of Open Source Applications (AOSA) and referenced it in a paper we presented at the 13th Working IEEE/IFIP Conference on Software Architecture (WICSA), as attached. As a part of our continuing research we wish to validate our architectural analysis for Haskell Compiler with the current developers. We would like to know if we are missing any patterns or quality attributes that may have been included in Haskell Compiler, or if there are any we listed that aren’t used. Any additional comment on these topics you might have would also, of course, be welcome. We believe we found the following software architectural patterns in this application (for each, please indicate whether it is found in the architecture — yes / no / don’t know — with optional comments):

* Interpreter
* Layers
* Pipes & Filters
* Plugin
* Rule-Based System
* Other?

We also identified the following quality attributes (again, yes / no / don’t know, with optional comments):

* Scalability
* Usability
* Performance
* Portability
* Reliability
* Portability
* Maintainability
* Testability
* Modularity
* Robustness
* Other?

For your convenience, we have a complete list below of the patterns and quality attributes we referred to when conducting our research. 
To clarify, we are specifically studying architectural patterns, rather than design patterns such as the GoF patterns.

Architectural Patterns Considered: Active Repository, Batch, Blackboard, Broker, Client Server, Event System, Explicit Invocation, Implicit Invocation, Indirection Layer, Interceptor, Interpreter, Layers, Master and Commander, Microkernel, Model View Controller, Peer to Peer, Pipes and Filters, Plugin, Presentation Abstraction Control, Publish Subscribe, Reflection, Rule-Based System, Shared Repository, Simple Repository, State Based, Virtual Machine.

Quality Attributes Considered: Scalability, Usability, Extensibility, Performance, Portability, Flexibility, Reliability, Maintainability, Security, Testability, Capacity, Cost, Legality, Modularity, Robustness.

Please respond by October 17, if possible. Thank you for considering our request, and for your continued work on Haskell Compiler. Sincerely, Erich Gubler, Danielle Skinner, Brandon Leishman, Michael Skeen, Neil Harrison, Ph.D. (advisor) Reference: Neil B. Harrison, Erich Gubler, Danielle Skinner, "Software Architecture Pattern Morphology in Open-Source Systems", 2016 13th Working IEEE/IFIP Conference on Software Architecture (WICSA), pp. 91-98, doi:10.1109/WICSA.2016.8 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: PID4110571 (Morphology).pdf Type: application/pdf Size: 231943 bytes Desc: not available URL: From m at tweag.io Tue Oct 4 06:58:51 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 4 Oct 2016 08:58:51 +0200 Subject: GHC 8.0.2 status In-Reply-To: <8760peeglq.fsf@ben-laptop.smart-cactus.org> References: <8760peeglq.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, while in the eye of the cyclone, as we're waiting for these last few OS X issues to clear up... 
I was wondering, do you currently have a means to "vet" release candidates before cutting a new release? More to the point, to check that a point release like this doesn't include any breaking changes, it might be useful to try compiling all of Stackage with it, even if it's just once, much like the Stackage curators do. As you may have seen a few hours ago, e.g. singletons doesn't compile using the current GHC 8.0.2 candidate (tip of the ghc-8.0 branch). It did compile just fine using 8.0.1. And FWIW it also compiles okay with GHC HEAD. Likely this has been discussed before. So I'm just enquiring about the status of adding this item to the workflow. Best, -- Mathieu Boespflug Founder at http://tweag.io. On 29 September 2016 at 19:54, Ben Gamari wrote: > Hello everyone, > > The week before ICFP I was able to get the ghc-8.0 branch ready for an > 8.0.2 release, which I intended to cut this week. In the intervening > time an additional rather serious issue was reported as affecting Mac OS X > Sierra (#12479). Since this issue affects the usability of GHC on the > new OS X release, we'll be deferring the 8.0.2 release until it has been > resolved. > > While there appears to be an actionable path forward on this ticket, we > will need someone with an affected machine and an understanding of GHC's > use of dynamic linking to step up and implement it. Otherwise it looks like > the release will be delayed at least until October 9, when darchon has > time to have a look. > > Sorry to be the bearer of bad news! > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mle+hs at mega-nerd.com Tue Oct 4 07:18:52 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Tue, 4 Oct 2016 18:18:52 +1100 Subject: Status of Harbormaster In-Reply-To: <8737kda6pe.fsf@ben-laptop.smart-cactus.org> References: <8737kda6pe.fsf@ben-laptop.smart-cactus.org> Message-ID: <20161004181852.0f1a9f23294b76821e604813@mega-nerd.com> Ben Gamari wrote: > It's a bit unclear how far we should extend test coverage. In the future > I think I will at very least add an i386 Ubuntu environment, but we > could go farther still. For instance these platforms immediately come to > mind, > > * x86_64 FreeBSD > * x86_64 Solaris > * ARM Linux (although this could be quite tricky given the speed of > these machines) > * AArch64 Linux That would be awesome if we could get access to a decent machine (by which I mean server grade, with at least 4 cores and 8 Gig of RAM). The other option for ARM/Linux and AArch64/Linux is cross-compile builds. Just building GHC as a cross-compiler for these targets would shake out a lot of bugs. Let me know if you're interested. It's really pretty easy to set up on a Debian or Ubuntu system. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From karel.gardas at centrum.cz Tue Oct 4 07:23:09 2016 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 04 Oct 2016 09:23:09 +0200 Subject: Status of Harbormaster In-Reply-To: <20161004181852.0f1a9f23294b76821e604813@mega-nerd.com> References: <8737kda6pe.fsf@ben-laptop.smart-cactus.org> <20161004181852.0f1a9f23294b76821e604813@mega-nerd.com> Message-ID: <57F358DD.2060107@centrum.cz> On 10/ 4/16 09:18 AM, Erik de Castro Lopo wrote: >> * AArch64 Linux > > That would be awesome if we could get access to a decent machine (by which I mean > server grade, with at least 4 cores and 8 Gig of RAM). Just ask for an account on the GNU GCC Compile Farm. 
They do have an X-Gene v1 machine in the farm, a pretty powerful box, especially in comparison with my Pandaboard. I, for example, keep a POWER7 buildbot for GHC running there. Cheers, Karel From mle+hs at mega-nerd.com Tue Oct 4 07:28:27 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Tue, 4 Oct 2016 18:28:27 +1100 Subject: SafeHaskell vs TemplateHaskell Message-ID: <20161004182827.4eba5dfc97a78e8863861e93@mega-nerd.com> Hi all, I tried to fix trac ticket #12511 (template-haskell's Language.Haskell.Syntax module should be Trustworthy) but in doing so I began to think this is actually a bad idea. Specifically, I suspect it's actually possible to craft something using TH that bypasses the guarantees that Safe is supposed to ensure. Comments? Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From simonpj at microsoft.com Tue Oct 4 08:38:53 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 4 Oct 2016 08:38:53 +0000 Subject: GHC 8.0.2 status In-Reply-To: References: <8760peeglq.fsf@ben-laptop.smart-cactus.org> Message-ID: Smoke-testing with Stackage would be a great idea. In the past Michael Snoyman has kindly done that for us, but ultimately some automation would be good. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Boespflug, Mathieu Sent: 04 October 2016 07:59 To: Ben Gamari Cc: ghc-devs at haskell.org Subject: Re: GHC 8.0.2 status Hi Ben, while in the eye of the cyclone, as we're waiting for these last few OS X issues to clear up... I was wondering, do you currently have a means to "vet" release candidates before cutting a new release? More to the point, to check that a point release like these doesn't include any breaking changes it might be useful to try compiling all of Stackage with it, even if it's just once, much like the Stackage curators do. As you may have seen a few hours ago, e.g. 
singletons doesn't compile using the current GHC 8.0.2 candidate (tip of ghc-8.0 branch). It did compile just fine using 8.0.1. And FWIW it also compiles okay with GHC HEAD. Likely this has been discussed before. So I'm just enquiring about status regarding adding this item in the workflow. Best, -- Mathieu Boespflug Founder at http://tweag.io. On 29 September 2016 at 19:54, Ben Gamari > wrote: Hello everyone, The week before ICFP I was able to get the ghc-8.0 branch ready for an 8.0.2 release, which I intended to cut this week. In the intervening time an additional rather serious issue was reported affected Mac OS X Sierra (#12479). Since this issue affects the usability of GHC on the new OS X release, we'll be deferring the 8.0.2 release until it has been resolved. While there appears to be an actionable path forward on this ticket, we will need someone with an affected machine and an understanding of GHC's use of dynamic linking to step up to implement. Otherwise it looks like the release will be delayed at least until October 9, when darchon has time to have a look. Sorry to be the bearer of bad news! Cheers, - Ben _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue Oct 4 08:48:27 2016 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 4 Oct 2016 04:48:27 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> Message-ID: I for one would really like to see this go in. (I've commiserated with Lennart in the past about the fact that the previous proposal just sort of died.) It makes additions of names to libraries far less brittle. 
You can add a new export with a mere minor version bump, and many of the situations where that causes breakage can be fixed by this simple rule change. -Edward On Mon, Oct 3, 2016 at 2:12 PM, Iavor Diatchki wrote: > Hi, > > Lennart suggested that some time ago, here is the thread from the last > time we discussed it: > > https://mail.haskell.org/pipermail/haskell-prime/2012-July/003702.html > > I think it is a good plan! > > -Iavor > > > > On Mon, Oct 3, 2016 at 4:46 AM, Richard Eisenberg > wrote: > >> By all means make the proposal -- I like this idea. >> >> > On Oct 3, 2016, at 4:29 AM, Herbert Valerio Riedel >> wrote: >> > >> > Hi *, >> > >> > I seem to recall this was already suggested in the past, but I can't >> > seem to find it in the archives. For simplicity I'll restate the idea: >> > >> > >> > foo :: Int -> Int -> (Int,Int) >> > foo x y = (bar x, bar y) >> > where >> > bar x = x+x >> > >> > results merely in a name-shadowing warning (for -Wall): >> > >> > foo.hs:4:9: warning: [-Wname-shadowing] >> > This binding for ‘x’ shadows the existing binding >> > bound at foo.hs:2:5 >> > >> > >> > However, >> > >> > import Data.Monoid >> > >> > (<>) :: String -> String -> String >> > (<>) = (++) >> > >> > main :: IO () >> > main = putStrLn ("Hi" <> "There") >> > >> > doesn't allow to shadow (<>), but rather complains about ambiguity: >> > >> > bar.hs:7:23: error: >> > Ambiguous occurrence ‘<>’ >> > It could refer to either ‘Data.Monoid.<>’, >> > imported from ‘Data.Monoid’ at >> bar.hs:1:1-18 >> > or ‘Main.<>’, defined at bar.hs:4:1 >> > >> > >> > This is of course in line with the Haskell Report, which says in >> > https://www.haskell.org/onlinereport/haskell2010/haskellch5. >> html#x11-1010005.3 >> > >> > | The entities exported by a module may be brought into scope in another >> > | module with an import declaration at the beginning of the module. 
The >> > | import declaration names the module to be imported and optionally >> > | specifies the entities to be imported. A single module may be imported >> > | by more than one import declaration. Imported names serve as top level >> > | declarations: they scope over the entire body of the module but may be >> > | shadowed by *local non-top-level bindings.* >> > >> > >> > However, why don't we allow this to be relaxed via a new language >> > extensions, to allow top-level bindings to shadow imported names (and >> > of course emit a warning)? >> > >> > Unless I'm missing something, this would help to keep existing and >> > working code compiling if new versions of libraries start exporting new >> > symbols (which happen to clash with local top-level defs), rather than >> > resulting in a fatal name-clash; and have no major downsides. >> > >> > If this sounds like a good idea, I'll happily promote this into a proper >> > proposal over at https://github.com/ghc-proposals/ghc-proposals; I >> > mostly wanted to get early feedback here (and possibly find out if and >> > where this was proposed before), before investing more time turning >> > this into a fully fledged GHC proposal. >> > >> > Cheers, >> > HVR >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From shumovichy at gmail.com Tue Oct 4 11:12:54 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Tue, 04 Oct 2016 14:12:54 +0300 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> Message-ID: <1475579574.4131.13.camel@gmail.com> On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote: > It makes additions of names to libraries far less brittle. You can > add a > new export with a mere minor version bump, and many of the situations > where > that causes breakage can be fixed by this simple rule change. It would be true only if we also allow imports to shadow each other. Otherwise there will still be a big chance of a name clash. Can we generalize the proposal such that subsequent imports shadow preceding ones? In that case you may e.g. list local modules after libraries' modules, and be sure new identifiers in libraries will not clash with local ones. Obviously shadowing should still be a warning. > > -Edward > > On Mon, Oct 3, 2016 at 2:12 PM, Iavor Diatchki com> > wrote: > > > > > Hi, > > > > Lennart suggested that some time ago, here is the thread from the > > last > > time we discussed it: > > > > https://mail.haskell.org/pipermail/haskell-prime/2012-July/003702.h > > tml > > > > I think it is a good plan! > > > > -Iavor > > > > > > > > On Mon, Oct 3, 2016 at 4:46 AM, Richard Eisenberg > edu> > > wrote: > > > > > > > > By all means make the proposal -- I like this idea. > > > > > > > > > > > On Oct 3, 2016, at 4:29 AM, Herbert Valerio Riedel > > > ail.com> > > > wrote: > > > > > > > > > > > > Hi *, > > > > > > > > I seem to recall this was already suggested in the past, but I > > > > can't > > > > seem to find it in the archives. 
For simplicity I'll restate > > > > the idea: > > > > > > > > > > > >    foo :: Int -> Int -> (Int,Int) > > > >    foo x y = (bar x, bar y) > > > >      where > > > >        bar x = x+x > > > > > > > > results merely in a name-shadowing warning (for -Wall): > > > > > > > >    foo.hs:4:9: warning: [-Wname-shadowing] > > > >        This binding for ‘x’ shadows the existing binding > > > >          bound at foo.hs:2:5 > > > > > > > > > > > > However, > > > > > > > >    import Data.Monoid > > > > > > > >    (<>) :: String -> String -> String > > > >    (<>) = (++) > > > > > > > >    main :: IO () > > > >    main = putStrLn ("Hi" <> "There") > > > > > > > > doesn't allow to shadow (<>), but rather complains about > > > > ambiguity: > > > > > > > >    bar.hs:7:23: error: > > > >        Ambiguous occurrence ‘<>’ > > > >        It could refer to either ‘Data.Monoid.<>’, > > > >                                 imported from ‘Data.Monoid’ at > > > bar.hs:1:1-18 > > > > > > > >                              or ‘Main.<>’, defined at > > > > bar.hs:4:1 > > > > > > > > > > > > This is of course in line with the Haskell Report, which says > > > > in > > > > https://www.haskell.org/onlinereport/haskell2010/haskellch5. > > > html#x11-1010005.3 > > > > > > > > > > > > > > > > > > The entities exported by a module may be brought into scope > > > > > in another > > > > > module with an import declaration at the beginning of the > > > > > module. The > > > > > import declaration names the module to be imported and > > > > > optionally > > > > > specifies the entities to be imported. A single module may be > > > > > imported > > > > > by more than one import declaration. 
Imported names serve as > > > > > top level > > > > > declarations: they scope over the entire body of the module > > > > > but may be > > > > > shadowed by *local non-top-level bindings.* > > > > > > > > > > > > However, why don't we allow this to be relaxed via a new > > > > language > > > > extensions, to allow top-level bindings to shadow imported > > > > names (and > > > > of course emit a warning)? > > > > > > > > Unless I'm missing something, this would help to keep existing > > > > and > > > > working code compiling if new versions of libraries start > > > > exporting new > > > > symbols (which happen to clash with local top-level defs), > > > > rather than > > > > resulting in a fatal name-clash; and have no major downsides. > > > > > > > > If this sounds like a good idea, I'll happily promote this into > > > > a proper > > > > proposal over at https://github.com/ghc-proposals/ghc-proposals > > > > ; I > > > > mostly wanted to get early feedback here (and possibly find out > > > > if and > > > > where this was proposed before), before investing more time > > > > turning > > > > this into a fully fledged GHC proposal. 
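To make the proposed relaxation concrete, here is a sketch of how the second example above would behave if top-level shadowing of imports were permitted. The extension name below is invented purely for illustration; the point is that the local definition would win with a shadowing warning, while the imported name would remain reachable qualified:

```haskell
{-# LANGUAGE TopLevelShadowing #-}  -- hypothetical extension name
import Data.Monoid

-- Under the proposal, this local definition would shadow the imported
-- Data.Monoid.<> and emit a -Wname-shadowing style warning instead of
-- today's fatal "Ambiguous occurrence" error.
(<>) :: String -> String -> String
(<>) = (++)

main :: IO ()
main = do
  putStrLn ("Hi" <> "There")                -- the local (<>), i.e. (++)
  print (Just [1] Data.Monoid.<> Just [2])  -- imported (<>), still reachable qualified
```

Qualified access working unchanged is what keeps the relaxation backwards-compatible: only unqualified references change their resolution, and only with a warning.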
> > > > > > > > Cheers, > > > >  HVR > > > > _______________________________________________ > > > > ghc-devs mailing list > > > > ghc-devs at haskell.org > > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Tue Oct 4 11:20:36 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 4 Oct 2016 12:20:36 +0100 Subject: GHC 8.0.2 status In-Reply-To: References: <8760peeglq.fsf@ben-laptop.smart-cactus.org> Message-ID: The easiest way I found to build stackage recently was to take the cabal.config file [1] and then manipulate it to just install each package in turn. cabal install package-1 cabal install package-2 This was quite a bit easier than using stackage-curator if you just want to build packages to check that they work [1]: https://www.stackage.org/lts-7.2/cabal.config Matt On Tue, Oct 4, 2016 at 9:38 AM, Simon Peyton Jones via ghc-devs wrote: > Smoke-testing with Stackage would be a great idea. In the past Michael > Snoyman has kindly done that for us, but ultimately some automation would be > good. > > > > Simon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Boespflug, > Mathieu > Sent: 04 October 2016 07:59 > To: Ben Gamari > Cc: ghc-devs at haskell.org > Subject: Re: GHC 8.0.2 status > > > > Hi Ben, > > > > while in the eye of the cyclone, as we're waiting for these last few OS X > issues to clear up... 
I was wondering, do you currently have a means to > "vet" release candidates before cutting a new release? > > > > More to the point, to check that a point release like these doesn't include > any breaking changes it might be useful to try compiling all of Stackage > with it, even if it's just once, much like the Stackage curators do. As you > may have seen a few hours ago, e.g. singletons doesn't compile using the > current GHC 8.0.2 candidate (tip of ghc-8.0 branch). It did compile just > fine using 8.0.1. And FWIW it also compiles okay with GHC HEAD. > > > > Likely this has been discussed before. So I'm just enquiring about status > regarding adding this item in the workflow. > > > > Best, > > > -- > Mathieu Boespflug > Founder at http://tweag.io. > > > > On 29 September 2016 at 19:54, Ben Gamari wrote: > > Hello everyone, > > The week before ICFP I was able to get the ghc-8.0 branch ready for an > 8.0.2 release, which I intended to cut this week. In the intervening > time an additional rather serious issue was reported affected Mac OS X > Sierra (#12479). Since this issue affects the usability of GHC on the > new OS X release, we'll be deferring the 8.0.2 release until it has been > resolved. > > While there appears to be an actionable path forward on this ticket, we > will need someone with an affected machine and an understanding of GHC's > use of dynamic linking to step up to implement. Otherwise it looks like > the release will be delayed at least until October 9, when darchon has > time to have a look. > > Sorry to be the bearer of bad news! 
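The per-package build trick Matthew describes earlier in this thread (take the Stackage cabal.config and install each constrained package in turn) can be scripted. A rough sketch, where the three constraint lines are a made-up excerpt (a real LTS cabal.config has a couple of thousand), and the `echo` would be replaced by actually running the command:

```shell
# Fake excerpt of a Stackage cabal.config, for illustration only.
cat > cabal.config <<'EOF'
constraints: abstract-deque ==0.3,
             abstract-par ==0.3.3,
             aeson ==0.11.2.1
EOF

# Strip the "constraints:" keyword, trailing commas, and version bounds,
# leaving one bare package name per line, then emit one install per package.
sed -e 's/^constraints://' -e 's/,$//' -e 's/ ==.*//' cabal.config \
  | tr -d ' ' \
  | grep -v '^$' \
  | while read -r pkg; do
      echo "cabal install $pkg"   # swap echo for the real invocation
    done
```

Running each install separately, as Matthew notes, trades speed for a much simpler setup than stackage-curator when the goal is only "does this package still compile".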
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Tue Oct 4 11:50:58 2016 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 04 Oct 2016 13:50:58 +0200 Subject: Allow top-level shadowing for imported names? In-Reply-To: <1475579574.4131.13.camel@gmail.com> (Yuras Shumovich's message of "Tue, 04 Oct 2016 14:12:54 +0300") References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> Message-ID: <87lgy4qqn1.fsf@gmail.com> Hi, On 2016-10-04 at 13:12:54 +0200, Yuras Shumovich wrote: > On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote: > >> It makes additions of names to libraries far less brittle. You can >> add a >> new export with a mere minor version bump, and many of the situations >> where >> that causes breakage can be fixed by this simple rule change. > > It would be true only if we also allow imports to shadow each other. > Otherwise there will be a big chance for name clash yet. > > Can we generalize the proposal such that subsequent imports shadow > preceding ones? IMO, that would lead to fragile situations with hard-to-detect/debug problems unless you take warnings seriously. With the original proposal, the semantics of your code doesn't change if a library starts exposing a name it didn't before. There is a clear priority of what shadows what. However, when we allow the ordering of import statements to affect shadowing, it gets more brittle and surprising imho: For one, we have tooling which happily reorders/reformats import statements which would need to be made aware that the reordering symmetry has been broken.
Moreover, now we get into the situation that if in import Foo -- exports performCreditCardTransaction import Bar main = do -- .. performCreditCardTransaction ccNumber -- 'Bar' suddenly starts exporting performCreditCardTransaction as well (and doing something sinister with the ccNumber before handing it over to the real performCreditCardTransaction...), it can effectively change the semantics of a program and this would merely emit a warning which imho rather deserves to be a hard error. However, iirc there is a different idea to address this without breaking reordering-symmetry, e.g. by allowing explicitly enumerated names as in import Foo (performCreditCardTransaction) import Bar to shadow imports from other modules which didn't explicitly name the same import; effectively introducing a higher-priority scope for names imported explicitly. > In that case you may e.g. list local modules after libraries' modules, > and be sure new identifies in libraries will not clash with local > ones. Obviously shadowing should be a warning still. From rae at cs.brynmawr.edu Tue Oct 4 12:09:36 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 4 Oct 2016 08:09:36 -0400 Subject: SafeHaskell vs TemplateHaskell In-Reply-To: <20161004182827.4eba5dfc97a78e8863861e93@mega-nerd.com> References: <20161004182827.4eba5dfc97a78e8863861e93@mega-nerd.com> Message-ID: <6D2A6182-EDEC-4AE1-A3CF-0C188A9DA64B@cs.brynmawr.edu> Hi Erik, You're right that Template Haskell violates Safe Haskell guarantees, but that should be independent of the Language.Haskell.TH.Syntax module. The problem is the -XTemplateHaskell extension, not the module. So I think labeling the module Trustworthy is OK. Richard > On Oct 4, 2016, at 3:28 AM, Erik de Castro Lopo wrote: > > Hi all, > > I tried to fix trac ticket #12511 (template-haskell's Language.Haskell.Syntax > module should be Trustworthy) but in doing so I began to think this is actually > a bad idea. 
Specifically, I suspect it's actually possible to craft something > using TH that bypasses the guarantees that Safe is supposed to ensure. > > Comments? > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Oct 4 13:39:59 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 04 Oct 2016 09:39:59 -0400 Subject: Status of Harbormaster In-Reply-To: <20161004181852.0f1a9f23294b76821e604813@mega-nerd.com> References: <8737kda6pe.fsf@ben-laptop.smart-cactus.org> <20161004181852.0f1a9f23294b76821e604813@mega-nerd.com> Message-ID: <87bmz0p70w.fsf@ben-laptop.smart-cactus.org> Erik de Castro Lopo writes: > Ben Gamari wrote: > >> It's a bit unclear how far we should extend test coverage. In the future >> I think I will at very least add an i386 Ubuntu environment, but we >> could go farther still. For instance these platforms immediately come to >> mind, >> >> * x86_64 FreeBSD >> * x86_64 Solaris >> * ARM Linux (although this could be quite tricky given the speed of >> these machines) >> * AArch64 Linux > > That would be awesome if we could get access to a decent machine (by which I mean > server grade, with at least 4 cores and 8 Gig of RAM). > I've wondered how one of the Scaleway ARMs would fare. Their specs seem to be comparable to the Odroid XU4 which I use for my own builds, which barely passes. Perhaps dedicating some of the 50GB SSD to swap would help. > The other option for ARM/Linux and AArch64/Linux is cross-compile builds. > Just building GHC as a cross-compiler for these targets would shave out > a lot of bugs. Let me know if you're interested. It's really pretty easy > to set up on a Debian or Ubuntu system. > I am absolutely interested, but I'd first like to stabilize what we have.
Let's chat about cross-compilation once that happens. Don't hesitate to remind me in a few weeks. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From eacameron at gmail.com Tue Oct 4 15:13:03 2016 From: eacameron at gmail.com (Elliot Cameron) Date: Tue, 4 Oct 2016 11:13:03 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: <87lgy4qqn1.fsf@gmail.com> References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> <87lgy4qqn1.fsf@gmail.com> Message-ID: I second Herbert's concern. Giving semantics to import order is one of the greatest plagues of C, C++, Python, etc. It is worth avoiding at all costs. Herbert's suggestion re: explicitly enumerated names seems to hold promise, however. On Tue, Oct 4, 2016 at 7:50 AM, Herbert Valerio Riedel wrote: > Hi, > > On 2016-10-04 at 13:12:54 +0200, Yuras Shumovich wrote: > > On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote: > > > >> It makes additions of names to libraries far less brittle. You can > >> add a > >> new export with a mere minor version bump, and many of the situations > >> where > >> that causes breakage can be fixed by this simple rule change. > > > > It would be true only if we also allow imports to shadow each other. > > Otherwise there will be a big chance for name clash yet. > > > > Can we generalize the proposal such that subsequent imports shadow > > preceding ones? > > IMO, that would be lead to fragile situations with hard to detect/debug > problems unless you take warnings serious. > > With the original proposal, the semantics of your code doesn't change if > a library starts exposing a name it didn't before. There is a clear > priority of what shadows what. 
> > However, when we allow the ordering of import statements to affect > shadowing, it gets more brittle and surprising imho: > > For one, we have tooling which happily reorders/reformats import > statements which would need to be made aware that the reordering > symmetry has been broken. > > Moreover, now we get into the situation that if in > > import Foo -- exports performCreditCardTransaction > import Bar > > main = do > -- .. > performCreditCardTransaction ccNumber > -- > > 'Bar' suddenly starts exporting performCreditCardTransaction as well > (and doing something sinister with the ccNumber before handing it over > to the real performCreditCardTransaction...), it can effectively change > the semantics of a program and this would merely emit a warning which > imho rather deserves to be a hard error. > > However, iirc there is a different idea to address this without breaking > reordering-symmetry, e.g. by allowing explicitly enumerated names as in > > import Foo (performCreditCardTransaction) > import Bar > > to shadow imports from other modules which didn't explicitly name the > same import; effectively introducing a higher-priority scope for names > imported explicitly. > > > In that case you may e.g. list local modules after libraries' modules, > > and be sure new identifies in libraries will not clash with local > > ones. Obviously shadowing should be a warning still. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Tue Oct 4 15:46:55 2016 From: rwbarton at gmail.com (Reid Barton) Date: Tue, 4 Oct 2016 11:46:55 -0400 Subject: Allow top-level shadowing for imported names? 
In-Reply-To: <1475579574.4131.13.camel@gmail.com> References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> Message-ID: On Tue, Oct 4, 2016 at 7:12 AM, Yuras Shumovich wrote: > On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote: > >> It makes additions of names to libraries far less brittle. You can >> add a >> new export with a mere minor version bump, and many of the situations >> where >> that causes breakage can be fixed by this simple rule change. > > It would be true only if we also allow imports to shadow each other. > Otherwise there will be a big chance for name clash yet. Could you give a concrete example of what you are worried about? It's already legal to have a clash between imported names as long as you don't refer to the colliding name. For example if one of my imports A exports a name `foo` which I don't use, and then another import B starts to export the same name `foo`, there won't be any error as long as I continue to not use `foo`. Regards, Reid Barton From ezyang at mit.edu Tue Oct 4 18:00:44 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 04 Oct 2016 11:00:44 -0700 Subject: Allow top-level shadowing for imported names? In-Reply-To: <87lgy4qqn1.fsf@gmail.com> References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> <87lgy4qqn1.fsf@gmail.com> Message-ID: <1475604019-sup-7549@sabre> There is another options: names from local modules (same package) shadow names from external packages. But it is not obvious to me that this is a good idea. Edward Excerpts from Herbert Valerio Riedel's message of 2016-10-04 13:50:58 +0200: > Hi, > > On 2016-10-04 at 13:12:54 +0200, Yuras Shumovich wrote: > > On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote: > > > >> It makes additions of names to libraries far less brittle. 
You can > >> add a > >> new export with a mere minor version bump, and many of the situations > >> where > >> that causes breakage can be fixed by this simple rule change. > > > > It would be true only if we also allow imports to shadow each other. > > Otherwise there will be a big chance for name clash yet. > > > > Can we generalize the proposal such that subsequent imports shadow > > preceding ones? > > IMO, that would be lead to fragile situations with hard to detect/debug > problems unless you take warnings serious. > > With the original proposal, the semantics of your code doesn't change if > a library starts exposing a name it didn't before. There is a clear > priority of what shadows what. > > However, when we allow the ordering of import statements to affect > shadowing, it gets more brittle and surprising imho: > > For one, we have tooling which happily reorders/reformats import > statements which would need to be made aware that the reordering > symmetry has been broken. > > Moreover, now we get into the situation that if in > > import Foo -- exports performCreditCardTransaction > import Bar > > main = do > -- .. > performCreditCardTransaction ccNumber > -- > > 'Bar' suddenly starts exporting performCreditCardTransaction as well > (and doing something sinister with the ccNumber before handing it over > to the real performCreditCardTransaction...), it can effectively change > the semantics of a program and this would merely emit a warning which > imho rather deserves to be a hard error. > > However, iirc there is a different idea to address this without breaking > reordering-symmetry, e.g. by allowing explicitly enumerated names as in > > import Foo (performCreditCardTransaction) > import Bar > > to shadow imports from other modules which didn't explicitly name the > same import; effectively introducing a higher-priority scope for names > imported explicitly. > > > In that case you may e.g. 
list local modules after libraries' modules, > > and be sure new identifies in libraries will not clash with local > > ones. Obviously shadowing should be a warning still. > From gale at sefer.org Wed Oct 5 13:02:04 2016 From: gale at sefer.org (Yitzchak Gale) Date: Wed, 5 Oct 2016 16:02:04 +0300 Subject: Allow top-level shadowing for imported names? In-Reply-To: <87lgy4qqn1.fsf@gmail.com> References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> <87lgy4qqn1.fsf@gmail.com> Message-ID: Yuras Shumovich wrote: >> Can we generalize the proposal such that subsequent imports shadow >> preceding ones? Herbert Valerio Riedel wrote: > ...iirc there is a different idea... > allowing explicitly enumerated names... > to shadow imports from other modules which didn't explicitly name the > same import; effectively introducing a higher-priority scope for names > imported explicitly. Conversely - the original proposal should be modified to remain an error, not a warning, when the symbol was imported explicitly on an import list and then redefined locally at the top level. This is equivalent to defining a symbol twice in the same scope. Thanks, Yitz From carter.schonwald at gmail.com Wed Oct 5 13:20:45 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 5 Oct 2016 09:20:45 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> <87lgy4qqn1.fsf@gmail.com> Message-ID: Yeah... let's not have import order sensitivity. On Wednesday, October 5, 2016, Yitzchak Gale wrote: > Yuras Shumovich wrote: > >> Can we generalize the proposal such that subsequent imports shadow > >> preceding ones? > > Herbert Valerio Riedel wrote: > > ...iirc there is a different idea... > > allowing explicitly enumerated names... 
> > to shadow imports from other modules which didn't explicitly name the > > same import; effectively introducing a higher-priority scope for names > > imported explicitly. > > Conversely - the original proposal should be modified to remain > an error, not a warning, when the symbol was imported explicitly > on an import list and then redefined locally at the top level. > This is equivalent to defining a symbol twice in the same scope. > > Thanks, > Yitz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Wed Oct 5 14:27:51 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 5 Oct 2016 15:27:51 +0100 Subject: Create a ghc-simple-patch-propose list? Re: Notes from Ben's "contribute to ghc" discussion In-Reply-To: References: <87ponqijpk.fsf@ben-laptop.smart-cactus.org> <4B26A359-0937-4BEA-A071-D24BDF26C2DB@cs.brynmawr.edu> <8760pgf1pz.fsf@ben-laptop.smart-cactus.org> <87a8erebb0.fsf@ben-laptop.smart-cactus.org> Message-ID: I added a description of the workflow for multiple dependent diffs here: https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Workingwithmultipledependentdiffs Please let me know if anything doesn't make sense. Note that I never let arc squash my commits, keeping commits 1:1 with diffs makes things a lot simpler. On 1 October 2016 at 21:49, Brandon Allbery wrote: > On Sat, Oct 1, 2016 at 4:47 PM, Simon Marlow wrote: > >> A nice trick for dealing with stacked diffs in Phabricator is to use "git >> rebase -i" to modify diffs in the middle of the stack. You can also insert >> "x arc diff" between lines to automatically update later diffs on >> Phabricator after a rebase lower down the stack. >> >> You only need a single branch for the whole stack, and continually rebase >> it. 
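Spelled out, the interactive-rebase trick reads like this: with the whole stack on one branch, `git rebase -i` presents a todo list, and an `x` (exec) line inserted after each `pick` re-runs `arc diff` once that commit has been rewritten. The hashes and messages below are invented; Simon writes just `x arc diff`, but depending on your arc configuration you may need `arc diff HEAD^` so each diff covers exactly one commit, with arc matching each commit to its Phabricator revision via the `Differential Revision:` trailer in the commit message:

```text
pick 1a2b3c4 Part 1: refactor the scheduler loop
x arc diff HEAD^
pick 5d6e7f8 Part 2: add the new scheduling policy
x arc diff HEAD^
pick 9a0b1c2 Part 3: document the new policy
x arc diff HEAD^
```

If any exec step fails, the rebase pauses there, so the stack on Phabricator never gets ahead of what actually builds locally.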
I also push the whole branch to github to get Travis to build it, but >> that's optional. >> > > Perhaps someone could put a sample workflow on (one of...) the wiki(s). > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Wed Oct 5 16:34:14 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Wed, 5 Oct 2016 12:34:14 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: <877f9pkf8t.fsf@gnu.org> References: <877f9pkf8t.fsf@gnu.org> Message-ID: <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> I'm weakly against this proposal. I may compile with -Wall, but I read code by many people who don't. When I'm browsing a file and see e.g. import Network.Socket and then later in the file, I see a reference to "recvFrom", I currently know exactly what function is being called. I don't want to have to search around every time to make sure a function wasn't redefined in some dark corner of the module. This allows too much "sneakiness" for my taste. Tom > On Oct 3, 2016, at 04:29, Herbert Valerio Riedel wrote: > > Hi *, > > I seem to recall this was already suggested in the past, but I can't > seem to find it in the archives. 
For simplicity I'll restate the idea: > > > foo :: Int -> Int -> (Int,Int) > foo x y = (bar x, bar y) > where > bar x = x+x > > results merely in a name-shadowing warning (for -Wall): > > foo.hs:4:9: warning: [-Wname-shadowing] > This binding for ‘x’ shadows the existing binding > bound at foo.hs:2:5 > > > However, > > import Data.Monoid > > (<>) :: String -> String -> String > (<>) = (++) > > main :: IO () > main = putStrLn ("Hi" <> "There") > > doesn't allow to shadow (<>), but rather complains about ambiguity: > > bar.hs:7:23: error: > Ambiguous occurrence ‘<>’ > It could refer to either ‘Data.Monoid.<>’, > imported from ‘Data.Monoid’ at bar.hs:1:1-18 > or ‘Main.<>’, defined at bar.hs:4:1 > > > This is of course in line with the Haskell Report, which says in > https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3 > > | The entities exported by a module may be brought into scope in another > | module with an import declaration at the beginning of the module. The > | import declaration names the module to be imported and optionally > | specifies the entities to be imported. A single module may be imported > | by more than one import declaration. Imported names serve as top level > | declarations: they scope over the entire body of the module but may be > | shadowed by *local non-top-level bindings.* > > > However, why don't we allow this to be relaxed via a new language > extensions, to allow top-level bindings to shadow imported names (and > of course emit a warning)? > > Unless I'm missing something, this would help to keep existing and > working code compiling if new versions of libraries start exporting new > symbols (which happen to clash with local top-level defs), rather than > resulting in a fatal name-clash; and have no major downsides. 
> > If this sounds like a good idea, I'll happily promote this into a proper > proposal over at https://github.com/ghc-proposals/ghc-proposals; I > mostly wanted to get early feedback here (and possibly find out if and > where this was proposed before), before investing more time turning > this into a fully fledged GHC proposal. > > Cheers, > HVR > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From cma at bitemyapp.com Wed Oct 5 16:35:01 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Wed, 5 Oct 2016 11:35:01 -0500 Subject: Allow top-level shadowing for imported names? In-Reply-To: <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> References: <877f9pkf8t.fsf@gnu.org> <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> Message-ID: I agree with Tom on this. This isn't a good way to spend the cleverness budget. On Wed, Oct 5, 2016 at 11:34 AM, wrote: > I'm weakly against this proposal. I may compile with -Wall, but I read code by many people who don't. When I'm browsing a file and see e.g. > > import Network.Socket > > and then later in the file, I see a reference to "recvFrom", I currently know exactly what function is being called. I don't want to have to search around every time to make sure a function wasn't redefined in some dark corner of the module. > > This allows too much "sneakiness" for my taste. > > Tom > > >> On Oct 3, 2016, at 04:29, Herbert Valerio Riedel wrote: >> >> Hi *, >> >> I seem to recall this was already suggested in the past, but I can't >> seem to find it in the archives. 
For simplicity I'll restate the idea: >> >> >> foo :: Int -> Int -> (Int,Int) >> foo x y = (bar x, bar y) >> where >> bar x = x+x >> >> results merely in a name-shadowing warning (for -Wall): >> >> foo.hs:4:9: warning: [-Wname-shadowing] >> This binding for ‘x’ shadows the existing binding >> bound at foo.hs:2:5 >> >> >> However, >> >> import Data.Monoid >> >> (<>) :: String -> String -> String >> (<>) = (++) >> >> main :: IO () >> main = putStrLn ("Hi" <> "There") >> >> doesn't allow to shadow (<>), but rather complains about ambiguity: >> >> bar.hs:7:23: error: >> Ambiguous occurrence ‘<>’ >> It could refer to either ‘Data.Monoid.<>’, >> imported from ‘Data.Monoid’ at bar.hs:1:1-18 >> or ‘Main.<>’, defined at bar.hs:4:1 >> >> >> This is of course in line with the Haskell Report, which says in >> https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3 >> >> | The entities exported by a module may be brought into scope in another >> | module with an import declaration at the beginning of the module. The >> | import declaration names the module to be imported and optionally >> | specifies the entities to be imported. A single module may be imported >> | by more than one import declaration. Imported names serve as top level >> | declarations: they scope over the entire body of the module but may be >> | shadowed by *local non-top-level bindings.* >> >> >> However, why don't we allow this to be relaxed via a new language >> extensions, to allow top-level bindings to shadow imported names (and >> of course emit a warning)? >> >> Unless I'm missing something, this would help to keep existing and >> working code compiling if new versions of libraries start exporting new >> symbols (which happen to clash with local top-level defs), rather than >> resulting in a fatal name-clash; and have no major downsides. 
>> >> If this sounds like a good idea, I'll happily promote this into a proper >> proposal over at https://github.com/ghc-proposals/ghc-proposals; I >> mostly wanted to get early feedback here (and possibly find out if and >> where this was proposed before), before investing more time turning >> this into a fully fledged GHC proposal. >> >> Cheers, >> HVR >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Chris Allen Currently working on http://haskellbook.com From ekmett at gmail.com Wed Oct 5 16:47:40 2016 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 5 Oct 2016 12:47:40 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <1DBD3558-B94D-47A1-BB5B-55E5810B7C48@cs.brynmawr.edu> <1475579574.4131.13.camel@gmail.com> <87lgy4qqn1.fsf@gmail.com> Message-ID: That makes perfect sense to me. -Edward On Wed, Oct 5, 2016 at 9:02 AM, Yitzchak Gale wrote: > Yuras Shumovich wrote: > >> Can we generalize the proposal such that subsequent imports shadow > >> preceding ones? > > Herbert Valerio Riedel wrote: > > ...iirc there is a different idea... > > allowing explicitly enumerated names... > > to shadow imports from other modules which didn't explicitly name the > > same import; effectively introducing a higher-priority scope for names > > imported explicitly. > > Conversely - the original proposal should be modified to remain > an error, not a warning, when the symbol was imported explicitly > on an import list and then redefined locally at the top level. > This is equivalent to defining a symbol twice in the same scope. 
> > Thanks, > Yitz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgsloan at gmail.com Thu Oct 6 02:02:09 2016 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 5 Oct 2016 19:02:09 -0700 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> Message-ID: It is really good to think in terms of a cleverness budget. Every additional feature is not only implementation work, but also maintenance, extra cognitive overhead when adding new features, an extra thing for tooling outside of GHC to support. Personally, I'm on the fence on this one. Here are the things I see in favor of this proposal: 1) It is common practice to use -Wall. In fact, I think it should be a default, and should include more warnings than -Wall (-fwarn-tabs -fwarn-incomplete-uni-patterns -fwarn-incomplete-record-updates -fwarn-identities) 2) It lets us do things that are otherwise quite inconvenient. We can now easily shadow a bunch of identifiers without explicitly hiding them. Things I see against it: 1) It feels like a complicated way to avoid needing a "hiding ()" clause on your imports. 2) There is no good way to use this feature without creating a warning. I would like to be explicit in my name shadowing; I'm thinking of a pragma like {-# NO_WARN myFunction #-}, or, better yet, the more specific {-# SHADOWING myFunction #-} or so. What if instead we re-framed this as a "top-level where clause", like this: main :: IO () main = putStrLn ("Hi" <> "There") otherFunction :: IO () otherFunction = putStrLn ("I can " <> "also use it") -- NOTE: 0 indent!
where (<>) :: String -> String -> String (<>) = (++) I would also like to see this extension enabling other top level declarations in where clauses, except not typeclasses. I have discussed this before, IIRC, with Richard, and there was some complexity getting local data types playing well with scoped type variables. It seems like a very worthwhile extension in its own right. I like this top-level where clause, because it riffs on existing syntax. Also, there isn't very much danger in someone accidentally dedenting their where clause to 0 level and not realizing it. The only danger there is if they were also simultaneously relying on scope shadowing (and somehow the types still work out). -Michael On Wed, Oct 5, 2016 at 9:35 AM, Christopher Allen wrote: > I agree with Tom on this. This isn't a good way to spend the cleverness budget. > > On Wed, Oct 5, 2016 at 11:34 AM, wrote: >> I'm weakly against this proposal. I may compile with -Wall, but I read code by many people who don't. When I'm browsing a file and see e.g. >> >> import Network.Socket >> >> and then later in the file, I see a reference to "recvFrom", I currently know exactly what function is being called. I don't want to have to search around every time to make sure a function wasn't redefined in some dark corner of the module. >> >> This allows too much "sneakiness" for my taste. >> >> Tom >> >> >>> On Oct 3, 2016, at 04:29, Herbert Valerio Riedel wrote: >>> >>> Hi *, >>> >>> I seem to recall this was already suggested in the past, but I can't >>> seem to find it in the archives. 
For simplicity I'll restate the idea: >>> >>> >>> foo :: Int -> Int -> (Int,Int) >>> foo x y = (bar x, bar y) >>> where >>> bar x = x+x >>> >>> results merely in a name-shadowing warning (for -Wall): >>> >>> foo.hs:4:9: warning: [-Wname-shadowing] >>> This binding for ‘x’ shadows the existing binding >>> bound at foo.hs:2:5 >>> >>> >>> However, >>> >>> import Data.Monoid >>> >>> (<>) :: String -> String -> String >>> (<>) = (++) >>> >>> main :: IO () >>> main = putStrLn ("Hi" <> "There") >>> >>> doesn't allow to shadow (<>), but rather complains about ambiguity: >>> >>> bar.hs:7:23: error: >>> Ambiguous occurrence ‘<>’ >>> It could refer to either ‘Data.Monoid.<>’, >>> imported from ‘Data.Monoid’ at bar.hs:1:1-18 >>> or ‘Main.<>’, defined at bar.hs:4:1 >>> >>> >>> This is of course in line with the Haskell Report, which says in >>> https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3 >>> >>> | The entities exported by a module may be brought into scope in another >>> | module with an import declaration at the beginning of the module. The >>> | import declaration names the module to be imported and optionally >>> | specifies the entities to be imported. A single module may be imported >>> | by more than one import declaration. Imported names serve as top level >>> | declarations: they scope over the entire body of the module but may be >>> | shadowed by *local non-top-level bindings.* >>> >>> >>> However, why don't we allow this to be relaxed via a new language >>> extensions, to allow top-level bindings to shadow imported names (and >>> of course emit a warning)? >>> >>> Unless I'm missing something, this would help to keep existing and >>> working code compiling if new versions of libraries start exporting new >>> symbols (which happen to clash with local top-level defs), rather than >>> resulting in a fatal name-clash; and have no major downsides. 
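[Editorial note: for comparison, the only way to keep this last example compiling with a GHC of that era (before the Prelude itself exported (<>)) is to hide the clashing name explicitly at the import site. Under the proposal the hiding clause would become unnecessary — the definition would shadow with a warning instead.]

```haskell
import Data.Monoid hiding ((<>))  -- today's workaround: hide the clash up front

(<>) :: String -> String -> String
(<>) = (++)

main :: IO ()
main = putStrLn ("Hi" <> "There")
```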
>>> >>> If this sounds like a good idea, I'll happily promote this into a proper >>> proposal over at https://github.com/ghc-proposals/ghc-proposals; I >>> mostly wanted to get early feedback here (and possibly find out if and >>> where this was proposed before), before investing more time turning >>> this into a fully fledged GHC proposal. >>> >>> Cheers, >>> HVR >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > Chris Allen > Currently working on http://haskellbook.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From allbery.b at gmail.com Thu Oct 6 02:05:42 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 5 Oct 2016 22:05:42 -0400 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> Message-ID: On Wed, Oct 5, 2016 at 10:02 PM, Michael Sloan wrote: > What if instead we re-framed this as a "top-level where clause", like this: > > main :: IO () > main = putStrLn ("Hi" <> "There") > > other-function :: IO () > other-function = putStrLn ("I can " <> "also use it") > > -- NOTE: 0 indent! > > where > (<>) :: String -> String -> String > (<>) = (++) > This would actually be slightly odd parse-wise, as we're already *in* an unindented where clause (module ... where) -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mgsloan at gmail.com Thu Oct 6 02:59:51 2016 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 5 Oct 2016 19:59:51 -0700 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> Message-ID: On Wed, Oct 5, 2016 at 7:05 PM, Brandon Allbery wrote: > > On Wed, Oct 5, 2016 at 10:02 PM, Michael Sloan wrote: >> >> What if instead we re-framed this as a "top-level where clause", like >> this: >> >> main :: IO () >> main = putStrLn ("Hi" <> "There") >> >> other-function :: IO () >> other-function = putStrLn ("I can " <> "also use it") >> >> -- NOTE: 0 indent! >> >> where >> (<>) :: String -> String -> String >> (<>) = (++) > > > This would actually be slightly odd parse-wise, as we're already *in* an > unindented where clause (module ... where) Ahh, of course! Good point, that makes this idea rather unappealing - it is indeed inconsistent. Just throwing ideas out there! > -- > brandon s allbery kf8nh sine nomine associates > allbery.b at gmail.com ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net From simonpj at microsoft.com Thu Oct 6 15:57:13 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 6 Oct 2016 15:57:13 +0000 Subject: Typeable meeting In-Reply-To: <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> Message-ID: Richard: Ben and I concluded 1. Currently saturated (->) is treated differently to unsaturated (->), as witnessed by the special kinding rule for FunTy. The shortest path for TypeRep is to do the same: add TrFunTy. 1. Moreover we probably need FunCo for coercions; at the moment it's done with TyConAppCo, but that doesn't work with (->) :: * -> * -> *.
Having FunCo would be (a) more efficient for a very common case, (b) deal with the kinding issue. 2. splitAppTy_maybe currently will split a FunTy even when it is not used at kind *. This is probably wrong. Maybe it should return Nothing in the non-* case. Ben is trying this out. 3. Orthogonally, one could imagine generalising the kind of prefix (->) to (->) :: forall r1 r2. TYPE r1 -> TYPE r2 -> TYPE LiftedPtrRep Then FunTy would become an abbreviation/optimisation, to avoid a plethora of runtime-rep args. And splitAppTy_maybe could succeed all the time on FunTy. Ben is trying this. 4. Under (3) at what kinds is ((->) ?? LiftedPtrRep (Ord Int) Char) called? If we continue to distinguish * from Constraint, we'd need to instantiate ?? with ConstraintRep, and define type Constraint = TYPE ConstraintRep Orthogonally, one might wonder about combining * and Constraint, but we don't yet know how to do that well. | -----Original Message----- | From: Richard Eisenberg [mailto:rae at cs.brynmawr.edu] | Sent: 06 October 2016 15:08 | To: Ben Gamari | Cc: Simon Peyton Jones | Subject: Re: Typeable meeting | | Just seeing this now. (Please use my new address: rae at cs.brynmawr.edu). I | wasn't available yesterday at that time, anyway. | | Any conclusions reached? | | > On Oct 4, 2016, at 5:39 PM, Ben Gamari wrote: | > | > Simon Peyton Jones writes: | > | >> Ben, and copying Richard who is far better equipped to respond than | me. | >> | >> I'm way behind the curve here. | >> | >> If "this is precisely what mkFunCo does" why not use mkFunCo? Why do | >> you want to "generalise the arrow operation"? | >> | > Perhaps I should have said "this is what mkFunCo is supposed to do". | > However, mkFunCo needs to be adjusted under the generalized (->) | > scheme, since the (->) tycon will gain two additional type arguments | > (corresponding to the RuntimeReps of the argument and result types).
| > | >> I'm lost as to your goal and the reason why you need to wade through | this particular swamp. | >> | > There are a few reasons, | > | > * We want to provide a pattern to match against function types | > | > pattern TrFun :: TypeRep arg -> TypeRep res -> TypeRep (arg -> | > res) | > | > * T11120 currently fails on my branch as it contains a TypeRep of a | > promoted type containing an unboxed type. Namely, | > | > import Data.Typeable | > data CharHash = CharHash Char# | > main = print $ typeRep (Proxy :: Proxy 'CharHash) | > | > Note how 'CharHash has kind Char# -> Type. This kind occurs in one of | > the terms in `typeRep` needed in the above program, which currently | > results in core lint failures due to the restrictive kind of (->). | > For this reason (->) needs to be polymorphic over runtime | > representation. | > | > This issue is mentioned in ticket:11714#comment:1. | > | > * I believe there were a few other testcases that failed in similar | > ways to T11120. | > | > For what it's worth, the Type /~ Constraint issue that I'm currently | > fighting with in carrying out the (->) generalization is also the | > cause of another Typeable bug, #11715, so I think this is a worthwhile | > thing to sort out. | > | >> Also your text kind of stops... and then repeats itself. | >> | > Oh dear, I apologize for that. Clipboard malfunction. I've included a | > clean version below for reference. | > | >> We could talk at 2pm UK time tomorrow Weds. Richard would you be | free? | >> | > That would work for me. | > | > Cheers, | > | > - Ben | > | > | > | > | > # Background | > | > Consider what happens when we have the coercions, | > | > co1 :: (a ~ b) | > co2 :: (x ~ y) | > | > where | > | > a :: TYPE ra b :: TYPE rb | > x :: TYPE rx y :: TYPE ry | > | > and we want to construct, | > | > co :: (a -> x) ~ (b -> y) | > | > As I understand it this is precisely what Coercion.mkFunCo does. 
| > Specifically, co should be a TyConAppCo of (->) applied to co1, co2, | > and coercions showing the equalities ra~rx and rb~ry, | > | > (->) (co_ax :: ra ~ rx) (co_by :: rb ~ ry) co1 co2 | > | > Actually implementing mkFunCo to this specification seems to be not | > too difficult, | > | > -- | Build a function 'Coercion' from two other 'Coercion's. That | is, | > -- given @co1 :: a ~ b@ and @co2 :: x ~ y@ produce @co :: (a -> x) | > ~ (b -> y)@. | > mkFunCo :: Role -> Coercion -> Coercion -> Coercion | > mkFunCo r co1 co2 = | > mkTyConAppCo r funTyCon | > [ mkRuntimeRepCo r co1 -- ra ~ rx where a :: TYPE ra, x :: TYPE | > rx | > , mkRuntimeRepCo r co2 -- rb ~ ry where b :: TYPE rb, y :: TYPE | > ry | > , co1 -- a ~ x | > , co2 -- b ~ y | > ] | > | > -- | For a constraint @co :: (a :: TYPE rep1) ~ (b :: TYPE rep2)@ | > produce | > -- @co' :: rep1 ~ rep2 at . | > mkRuntimeRepCo :: Role -> Coercion -> Coercion | > mkRuntimeRepCo r co = mkNthCoRole r 0 $ mkKindCo co | > | Just (tycon, [co]) <- splitTyConAppCo_maybe $ mkKindCo co | > = co | > | otherwise | > = pprPanic "mkRuntimeRepCo: Non-TYPE" (ppr co) | > | > So far, so good (hopefully). | > | > # The Problem | > | > Consider what happens when one of the types is, e.g., HasCallStack. | > For the sake of argument let's say | > | > co1 = _R | > co2 = _R | > | > Recall that HasCallStack is, | > | > type HasCallStack = ((?callStack :: CallStack) :: Constraint) | > | > The problem arises when we attempt to evaluate | > | > mkFunCo r co1 co2 | > | > which will look at the kind coercion of co1, | > | > mkKindCo co1 === _R | > | > and then attempt to splitTyConAppCo it, under the assumption that the | > kind coercion is an application of TYPE, from which we can extract the | > RuntimeRep coercion. Instead we find a nullary TyConAppCo of | > Constraint; things then blow up. | > | > This is the problem. 
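[Editorial note: to make the failure mode concrete, a defensive variant of the draft mkRuntimeRepCo above could fall back instead of panicking when the kind is not literally an application of TYPE. This is purely illustrative — it is not what the message proposes, and `liftedRepRefl` is an invented placeholder, not a real GHC binding.]

```haskell
-- Illustrative only: same shape as the draft mkRuntimeRepCo above, but a
-- bare Constraint kind (a nullary TyConAppCo) falls into a fallback case
-- instead of hitting pprPanic. 'liftedRepRefl' stands in for a reflexive
-- coercion at the lifted representation.
mkRuntimeRepCo :: Role -> Coercion -> Coercion
mkRuntimeRepCo _r co
  | Just (_tycon, [repCo]) <- splitTyConAppCo_maybe (mkKindCo co)
  = repCo            -- kind was (TYPE rep): extract the RuntimeRep coercion
  | otherwise
  = liftedRepRefl    -- e.g. Constraint, leaning on eqType Constraint Type
```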
| > | > Ultimately it seems all of this is due to the fact that we play it | > fast- and-loose with Type and Constraint in Core (#11715), especially | > with implicit parameters. I worry that resolving this is more than I | > have time to chew on before 8.2. From rae at cs.brynmawr.edu Thu Oct 6 16:15:36 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 6 Oct 2016 12:15:36 -0400 Subject: Typeable meeting In-Reply-To: References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> Message-ID: <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> > On Oct 6, 2016, at 11:57 AM, Simon Peyton Jones wrote: > > 1. Currently saturated (->) is treated differently to unsaturated (->), as witnessed by the special kinding rule for FunTy. The shortest path for TypeRep is to do the same: add TrFunTy. Will this be changed after Ben's work to generalize the kind of (->)? > > 1. Moreover we probably need FunCo for coercions; at the moment it's done with TyConAppCo, but that doesn't work with (->) :: * -> * -> *. Having FunCo would be (a) more efficient for a very common case, (b) deal with the kinding issue. I've wondered why we haven't had this. But also: will this become unnecessary for use case (b) after Ben's work to generalize the kind of (->)? > > 2. splitAppTy_maybe currently will split a FunTy even when it is not used at kind *. This is probably wrong. Maybe it should return Nothing in the non-* case. Ben is trying this out. Seems reasonable. > > 3. Orthogonally, one could imagine generalising the kind of prefix (->) to > (->) :: forall r1 r2. TYPE r1 -> TYPE r2 -> Type LiftedPtrRep > Then FunTy would become an abbreviation/optimisation, to avoid a plethora of runtime-rep args. And splitAppTy_maybe could succeed all the time on FunTy. Ben is trying this. Right. I like this direction of travel. But it just struck me that (->) :: forall r1. TYPE r1 -> forall r2. 
TYPE r2 -> TYPE LiftedPtrRep might be a touch better, because it's more general. (We don't have deep skolemization at the type level. Yet.) > > 4. Under (3) at what kinds is ((->) ?? LiftedPtrRep (Ord Int) Char) called? If we continue to distinguish * from Constraint, we'd need to instantiate ?? with ConstraintRep, and define > type Constraint = TYPE ConstraintRep > Orthogonally, one might wonder about combining * and Constraint, but we don't yet know how to do that well. Urgh. This gives me the willies. The whole point of representation polymorphism is to common up things with the same representation. And here we're specifically taking two things with the same rep -- Type and Constraint -- and separating them. What's wrong with ((->) LiftedPtrRep LiftedPtrRep (Ord Int) Char)? Recall that `eqType Constraint Type` says `True`. It's only `tcEqType` that distinguishes between Constraint and Type. Richard > > | -----Original Message----- > | From: Richard Eisenberg [mailto:rae at cs.brynmawr.edu] > | Sent: 06 October 2016 15:08 > | To: Ben Gamari > | Cc: Simon Peyton Jones > | Subject: Re: Typeable meeting > | > | Just seeing this now. (Please use my new address: rae at cs.brynmawr.edu). I > | wasn't available yesterday at that time, anyway. > | > | Any conclusions reached? > | > | > On Oct 4, 2016, at 5:39 PM, Ben Gamari wrote: > | > > | > Simon Peyton Jones writes: > | > > | >> Ben, and copying Richard who is far better equipped to respond than > | me. > | >> > | >> I'm way behind the curve here. > | >> > | >> If "this is precisely what mkFunCo does" why not use mkFunCo? Why do > | >> you want to "generalise the arrow operation"? > | >> > | > Perhaps I should have said "this is what mkFunCo is supposed to do". > | > However, mkFunCo needs to be adjusted under the generalized (->) > | > scheme, since the (->) tycon will gain two additional type arguments > | > (corresponding to the RuntimeReps of the argument and result types). 
> | > mkFunCo needs to fill these type arguments when constructing its > | result. > | > > | >> I'm lost as to your goal and the reason why you need to wade through > | this particular swamp. > | >> > | > There are a few reasons, > | > > | > * We want to provide a pattern to match against function types > | > > | > pattern TrFun :: TypeRep arg -> TypeRep res -> TypeRep (arg -> > | > res) > | > > | > * T11120 currently fails on my branch as it contains a TypeRep of a > | > promoted type containing an unboxed type. Namely, > | > > | > import Data.Typeable > | > data CharHash = CharHash Char# > | > main = print $ typeRep (Proxy :: Proxy 'CharHash) > | > > | > Note how 'CharHash has kind Char# -> Type. This kind occurs in one of > | > the terms in `typeRep` needed in the above program, which currently > | > results in core lint failures due to the restrictive kind of (->). > | > For this reason (->) needs to be polymorphic over runtime > | > representation. > | > > | > This issue is mentioned in ticket:11714#comment:1. > | > > | > * I believe there were a few other testcases that failed in similar > | > ways to T11120. > | > > | > For what it's worth, the Type /~ Constraint issue that I'm currently > | > fighting with in carrying out the (->) generalization is also the > | > cause of another Typeable bug, #11715, so I think this is a worthwhile > | > thing to sort out. > | > > | >> Also your text kind of stops... and then repeats itself. > | >> > | > Oh dear, I apologize for that. Clipboard malfunction. I've included a > | > clean version below for reference. > | > > | >> We could talk at 2pm UK time tomorrow Weds. Richard would you be > | free? > | >> > | > That would work for me. 
> | > > | > Cheers, > | > > | > - Ben > | > > | > > | > > | > > | > # Background > | > > | > Consider what happens when we have the coercions, > | > > | > co1 :: (a ~ b) > | > co2 :: (x ~ y) > | > > | > where > | > > | > a :: TYPE ra b :: TYPE rb > | > x :: TYPE rx y :: TYPE ry > | > > | > and we want to construct, > | > > | > co :: (a -> x) ~ (b -> y) > | > > | > As I understand it this is precisely what Coercion.mkFunCo does. > | > Specifically, co should be a TyConAppCo of (->) applied to co1, co2, > | > and coercions showing the equalities ra~rx and rb~ry, > | > > | > (->) (co_ax :: ra ~ rx) (co_by :: rb ~ ry) co1 co2 > | > > | > Actually implementing mkFunCo to this specification seems to be not > | > too difficult, > | > > | > -- | Build a function 'Coercion' from two other 'Coercion's. That > | is, > | > -- given @co1 :: a ~ b@ and @co2 :: x ~ y@ produce @co :: (a -> x) > | > ~ (b -> y)@. > | > mkFunCo :: Role -> Coercion -> Coercion -> Coercion > | > mkFunCo r co1 co2 = > | > mkTyConAppCo r funTyCon > | > [ mkRuntimeRepCo r co1 -- ra ~ rx where a :: TYPE ra, x :: TYPE > | > rx > | > , mkRuntimeRepCo r co2 -- rb ~ ry where b :: TYPE rb, y :: TYPE > | > ry > | > , co1 -- a ~ x > | > , co2 -- b ~ y > | > ] > | > > | > -- | For a constraint @co :: (a :: TYPE rep1) ~ (b :: TYPE rep2)@ > | > produce > | > -- @co' :: rep1 ~ rep2 at . > | > mkRuntimeRepCo :: Role -> Coercion -> Coercion > | > mkRuntimeRepCo r co = mkNthCoRole r 0 $ mkKindCo co > | > | Just (tycon, [co]) <- splitTyConAppCo_maybe $ mkKindCo co > | > = co > | > | otherwise > | > = pprPanic "mkRuntimeRepCo: Non-TYPE" (ppr co) > | > > | > So far, so good (hopefully). > | > > | > # The Problem > | > > | > Consider what happens when one of the types is, e.g., HasCallStack. 
> | > For the sake of argument let's say > | > > | > co1 = _R > | > co2 = _R > | > > | > Recall that HasCallStack is, > | > > | > type HasCallStack = ((?callStack :: CallStack) :: Constraint) > | > > | > The problem arises when we attempt to evaluate > | > > | > mkFunCo r co1 co2 > | > > | > which will look at the kind coercion of co1, > | > > | > mkKindCo co1 === _R > | > > | > and then attempt to splitTyConAppCo it, under the assumption that the > | > kind coercion is an application of TYPE, from which we can extract the > | > RuntimeRep coercion. Instead we find a nullary TyConAppCo of > | > Constraint; things then blow up. > | > > | > This is the problem. > | > > | > Ultimately it seems all of this is due to the fact that we play it > | > fast- and-loose with Type and Constraint in Core (#11715), especially > | > with implicit parameters. I worry that resolving this is more than I > | > have time to chew on before 8.2. > From ben at smart-cactus.org Thu Oct 6 17:15:26 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 06 Oct 2016 13:15:26 -0400 Subject: Typeable meeting In-Reply-To: <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> Message-ID: <87zimhmma9.fsf@ben-laptop.smart-cactus.org> Thanks for summarizing this, Simon! Richard Eisenberg writes: >> On Oct 6, 2016, at 11:57 AM, Simon Peyton Jones wrote: >> >> 1. Currently saturated (->) is treated differently to unsaturated >> (->), as witnessed by the special kinding rule for FunTy. The >> shortest path for TypeRep is to do the same: add TrFunTy. > > Will this be changed after Ben's work to generalize the kind of (->)? I suspect not; I think we'll want TrFunTy regardless of the kind of (->). If nothing else it makes common TypeReps a bit more compact. >> 1. 
Moreover we probably need FunCo for coercions; at the moment it's >> done with TyConAppCo, but that doesn't work with (->) :: * -> * -> *. >> Having FunCo would be (a) more efficient for a very common case, (b) >> deal with the kinding issue. > > I've wondered why we haven't had this. But also: will this become > unnecessary for use case (b) after Ben's work to generalize the kind > of (->)? > There >> 2. splitAppTy_maybe currently will split a FunTy even when it is not >> used at kind *. This is probably wrong. Maybe it should return >> Nothing in the non-* case. Ben is trying this out. > > Seems reasonable. > Indeed, but as I mentioned on Phabricator it looks like this won't be entirely trivial. I've put this aside for a bit to get the branch building. >> >> 3. Orthogonally, one could imagine generalising the kind of prefix (->) to >> (->) :: forall r1 r2. TYPE r1 -> TYPE r2 -> Type LiftedPtrRep >> Then FunTy would become an abbreviation/optimisation, to avoid a plethora of runtime-rep args. And splitAppTy_maybe could succeed all the time on FunTy. Ben is trying this. > > Right. I like this direction of travel. > > But it just struck me that > > (->) :: forall r1. TYPE r1 -> forall r2. TYPE r2 -> TYPE LiftedPtrRep > > might be a touch better, because it's more general. (We don't have > deep skolemization at the type level. Yet.) > Seems reasonable. >> >> 4. Under (3) at what kinds is ((->) ?? LiftedPtrRep (Ord Int) Char) >> called? If we continue to distinguish * from Constraint, we'd need to >> instantiate ?? with ConstraintRep, and define >> type Constraint = TYPE ConstraintRep >> Orthogonally, one might wonder about combining * and Constraint, but >> we don't yet know how to do that well. > > Urgh. This gives me the willies. The whole point of representation > polymorphism is to common up things with the same representation. And > here we're specifically taking two things with the same rep -- Type > and Constraint -- and separating them. 
What's wrong with ((->) > LiftedPtrRep LiftedPtrRep (Ord Int) Char)? Recall that `eqType > Constraint Type` says `True`. It's only `tcEqType` that distinguishes > between Constraint and Type. > Indeed this seems to be one of the possible conclusions of #11715. However, I think the point of the approach Simon describes is that we want to handle the Constraint ~ Type issue separately from Typeable (and the variety of other issues that it has dragged in). Introducing another RuntimeRep is merely a way of maintaining the status quo (where Constraint /~ Type) until we want to consider unifying the two. Also, Simon didn't mention another conclusion from our meeting: 5. TrTyCon should somehow carry the TyCon's kind variable instantiations, not the final kind of the type. That is, currently we have, TrTyCon :: TyCon -> TypeRep k -> TypeRep (a :: k) What we want, however, is something closer to, TrTyCon :: TyCon -> [SomeTypeRep] -> TypeRep (a :: k) This carries a complication, however, since typeRepKind needs to somehow reconstruct the kind `k` from the information carried by the TrTyCon. TyCon must carry additional information to make this feasible. The naive approach would be, data TyCon = TyCon { tyConName :: String , tyConKind :: [SomeTypeRep] -> SomeTypeRep } This, however, suffers from the fact that it can no longer be serialized. Consequently we'll need to use something like, type KindBinder = Int data TyConKindRep = TyConKindVar KindBinder | TyConKindApp TyCon [TyConKindRep] data TyCon = TyCon { tyConName :: String , tyConKindRep :: TyConKindRep } tyConKind :: TyCon -> [SomeTypeRep] -> SomeTypeRep Note that for simplicity I kept TyCon un-indexed, meaning that all of the kind-reconstruction logic is in the TCB. However, I suspect this is okay especially since we really need to keep an eye on the effort required by the compiler when generating TyCons. 
Keep in mind that we generate a TyCon for every datatype we compile; the existing Typeable implementation takes pains to generate efficient, already-optimized TyCon bindings. Moreover, serializing TyCon's will clearly carry a non-trivial cost, which we'll need to advertise to users. Serious users like Cloud Haskell will likely want to maintain some sort of interning table for TypeReps. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Thu Oct 6 17:22:46 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 06 Oct 2016 13:22:46 -0400 Subject: Typeable meeting In-Reply-To: <87zimhmma9.fsf@ben-laptop.smart-cactus.org> References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> <87zimhmma9.fsf@ben-laptop.smart-cactus.org> Message-ID: <87wphlmly1.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Richard Eisenberg writes: > >>> On Oct 6, 2016, at 11:57 AM, Simon Peyton Jones wrote: >>> >>> 1. Moreover we probably need FunCo for coercions; at the moment it's >>> done with TyConAppCo, but that doesn't work with (->) :: * -> * -> *. >>> Having FunCo would be (a) more efficient for a very common case, (b) >>> deal with the kinding issue. >> >> I've wondered why we haven't had this. But also: will this become >> unnecessary for use case (b) after Ben's work to generalize the kind >> of (->)? >> > There > Oops! Looks like I trailed off here. My apologies. Strictly speaking I don't even think we *need* it today. I also don't think it's strictly necessary after the generalization. However, again, it seems like a nice representational optimization for a very common case. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From rae at cs.brynmawr.edu Thu Oct 6 19:46:55 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 6 Oct 2016 15:46:55 -0400 Subject: Typeable meeting In-Reply-To: <87zimhmma9.fsf@ben-laptop.smart-cactus.org> References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> <87zimhmma9.fsf@ben-laptop.smart-cactus.org> Message-ID: > On Oct 6, 2016, at 1:15 PM, Ben Gamari wrote: >>> 1. Currently saturated (->) is treated differently to unsaturated >>> (->), as witnessed by the special kinding rule for FunTy. The >>> shortest path for TypeRep is to do the same: add TrFunTy. >> >> Will this be changed after Ben's work to generalize the kind of (->)? > > I suspect not; I think we'll want TrFunTy regardless of the kind of > (->). If nothing else it makes common TypeReps a bit more compact. So this will just be an optimization, then? May be worth thinking about how to measure whether this is actually an improvement. I was also under the impression that TypeReps would carry fingerprints for efficiency.... perhaps this will make TypeReps smaller in memory, but I don't foresee gobs and gobs of TypeReps filling up a lot of memory, so I'm not convinced this compression will be worth its weight. I do agree that we need TrFunTy now (because of the weird kind of (->)), but I'm unconvinced that we need it in the future. > >>> >>> 4. Under (3) at what kinds is ((->) ?? LiftedPtrRep (Ord Int) Char) >>> called? If we continue to distinguish * from Constraint, we'd need to >>> instantiate ?? with ConstraintRep, and define >>> type Constraint = TYPE ConstraintRep >>> Orthogonally, one might wonder about combining * and Constraint, but >>> we don't yet know how to do that well. >> >> Urgh. This gives me the willies. 
The whole point of representation >> polymorphism is to common up things with the same representation. And >> here we're specifically taking two things with the same rep -- Type >> and Constraint -- and separating them. What's wrong with ((->) >> LiftedPtrRep LiftedPtrRep (Ord Int) Char)? Recall that `eqType >> Constraint Type` says `True`. It's only `tcEqType` that distinguishes >> between Constraint and Type. >> > Indeed this seems to be one of the possible conclusions of #11715. > > However, I think the point of the approach Simon describes is that we > want to handle the Constraint ~ Type issue separately from Typeable (and > the variety of other issues that it has dragged in). Introducing another > RuntimeRep is merely a way of maintaining the status quo (where > Constraint /~ Type) until we want to consider unifying the two. I believe that **right now**, if you say (eqType Constraint Type), you get True. So they're already unified! A problem is that this equality is not universally respected. If you try to splitAppTy Type, you get TYPE and LiftedPtrRep. If you try to splitAppTy Constraint, you fail. Perhaps the short-term thing is just to fix, e.g., splitAppTy.... much like we already special-case FunTy to be splittable, we can special-case Constraint to be splittable. I'm sure that the problem extends past just splitAppTy, but a scan through Type.hs, Kind.hs, and TcType.hs should hopefully show up all the problem sites. > > > Also, Simon didn't mention another conclusion from our meeting: > > 5. TrTyCon should somehow carry the TyCon's kind variable > instantiations, not the final kind of the type. Why? Will we want to decompose this? (I suppose we will. But do we have a use-case?) Why not carry both the kind instantiations and the final kind? That seems much simpler than the scheme below. 
> That is, currently we > have, > > TrTyCon :: TyCon -> TypeRep k -> TypeRep (a :: k) > > What we want, however, is something closer to, > > TrTyCon :: TyCon -> [SomeTypeRep] -> TypeRep (a :: k) To be concrete, I'm proposing TrTyCon :: TyCon -> [SomeTypeRep] -> TypeRep k -> TypeRep (a :: k) > > This carries a complication, however, since typeRepKind needs to somehow > reconstruct the kind `k` from the information carried by the TrTyCon. > TyCon must carry additional information to make this feasible. The naive > approach would be, > > data TyCon = TyCon { tyConName :: String > , tyConKind :: [SomeTypeRep] -> SomeTypeRep > } > > This, however, suffers from the fact that it can no longer be > serialized. Consequently we'll need to use something like, > > type KindBinder = Int > > data TyConKindRep = TyConKindVar KindBinder > | TyConKindApp TyCon [TyConKindRep] > > data TyCon = TyCon { tyConName :: String > , tyConKindRep :: TyConKindRep > } > > tyConKind :: TyCon -> [SomeTypeRep] -> SomeTypeRep I'm afraid this isn't quite sophisticated enough, because kind quantification isn't necessarily prenex anymore. For example: data (:~~:) :: forall k1. k1 -> forall k2. k2 -> Type Your TyConKindRep doesn't have any spot for kind quantification, so I'm assuming your plan is just to quantify everything up front... but that won't quite work, I think. Argh. Richard From ben at smart-cactus.org Thu Oct 6 21:36:19 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 06 Oct 2016 17:36:19 -0400 Subject: Typeable meeting In-Reply-To: References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> <87zimhmma9.fsf@ben-laptop.smart-cactus.org> Message-ID: <87lgy1ma7g.fsf@ben-laptop.smart-cactus.org> Richard Eisenberg writes: >> On Oct 6, 2016, at 1:15 PM, Ben Gamari wrote: >>>> 1. 
Currently saturated (->) is treated differently to unsaturated >>>> (->), as witnessed by the special kinding rule for FunTy. The >>>> shortest path for TypeRep is to do the same: add TrFunTy. >>> >>> Will this be changed after Ben's work to generalize the kind of (->)? >> >> I suspect not; I think we'll want TrFunTy regardless of the kind of >> (->). If nothing else it makes common TypeReps a bit more compact. > > So this will just be an optimization, then? May be worth thinking > about how to measure whether this is actually an improvement. I was > also under the impression that TypeReps would carry fingerprints for > efficiency.... perhaps this will make TypeReps smaller in memory, but > I don't foresee gobs and gobs of TypeReps filling up a lot of memory, > so I'm not convinced this compression will be worth its weight. > Perhaps; I seem to recall examining the effect of the reintroduction of FunTy in GHC to find that I was rather disappointed in the difference. > I do agree that we need TrFunTy now (because of the weird kind of > (->)), but I'm unconvinced that we need it in the future. Indeed we'll need to measure. >> Indeed this seems to be one of the possible conclusions of #11715. >> >> However, I think the point of the approach Simon describes is that we >> want to handle the Constraint ~ Type issue separately from Typeable (and >> the variety of other issues that it has dragged in). Introducing another >> RuntimeRep is merely a way of maintaining the status quo (where >> Constraint /~ Type) until we want to consider unifying the two. > > I believe that **right now**, if you say (eqType Constraint Type), you > get True. So they're already unified! A problem is that this equality > is not universally respected. If you try to splitAppTy Type, you get > TYPE and LiftedPtrRep. If you try to splitAppTy Constraint, you fail. > Perhaps the short-term thing is just to fix, e.g., splitAppTy.... 
much > like we already special-case FunTy to be splittable, we can > special-case Constraint to be splittable. I'm sure that the problem > extends past just splitAppTy, but a scan through Type.hs, Kind.hs, and > TcType.hs should hopefully show up all the problem sites. > The situation currently seems to be quite messy. I brought up all of this because this (in the context of coercions, as I discussed in one of our emails, Richard) was the most significant barrier I ran into while looking at the (->) generalization. >> Also, Simon didn't mention another conclusion from our meeting: >> >> 5. TrTyCon should somehow carry the TyCon's kind variable >> instantiations, not the final kind of the type. > > Why? Will we want to decompose this? (I suppose we will. But do we > have a use-case?) The motivation here isn't that we want to decompose this. It's that carrying the kind variable instantiations will shrink the size of your typical (non-kind-polymorphic) TypeRep and may actually simplify a few things in the implementation. Namely, I suspect that the trouble we currently experience due to recursive kind relationships will vanish. Relatedly, the fact that you no longer need to worry about recursive kind relationships simplifies serialization, as well as makes the serialized representation more compact. You would lose this benefit if you carried the result kind (either exclusively or in conjunction with the instantiations). > Why not carry both the kind instantiations and the > final kind? That seems much simpler than the scheme below. > Admittedly there is a bit of complexity here. I haven't yet tried implementing it but I suspect it's manageable. >> That is, currently we >> have, >> >> TrTyCon :: TyCon -> TypeRep k -> TypeRep (a :: k) >> >> What we want, however, is something closer to, >> >> TrTyCon :: TyCon -> [SomeTypeRep] -> TypeRep (a :: k) > > To be concrete, I'm proposing > > TrTyCon :: TyCon -> [SomeTypeRep] -> TypeRep k -> TypeRep (a :: k) > Right. Understood.
This means that consumers that deeply traverse TypeReps (e.g. serializers) still need to worry about not traversing the kinds of certain types (e.g. Type). >> >> This carries a complication, however, since typeRepKind needs to somehow >> reconstruct the kind `k` from the information carried by the TrTyCon. >> TyCon must carry additional information to make this feasible. The naive >> approach would be, >> >> data TyCon = TyCon { tyConName :: String >> , tyConKind :: [SomeTypeRep] -> SomeTypeRep >> } >> >> This, however, suffers from the fact that it can no longer be >> serialized. Consequently we'll need to use something like, >> >> type KindBinder = Int >> >> data TyConKindRep = TyConKindVar KindBinder >> | TyConKindApp TyCon [TyConKindRep] >> >> data TyCon = TyCon { tyConName :: String >> , tyConKindRep :: TyConKindRep >> } >> >> tyConKind :: TyCon -> [SomeTypeRep] -> SomeTypeRep > > I'm afraid this isn't quite sophisticated enough, because kind > quantification isn't necessarily prenex anymore. For example: > > data (:~~:) :: forall k1. k1 -> forall k2. k2 -> Type > > Your TyConKindRep doesn't have any spot for kind quantification, so > I'm assuming your plan is just to quantify everything up front... but > that won't quite work, I think. Argh. > Oh dear. This will require some thought. Thanks! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From rae at cs.brynmawr.edu Fri Oct 7 01:00:56 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 6 Oct 2016 21:00:56 -0400 Subject: Typeable meeting In-Reply-To: <87lgy1ma7g.fsf@ben-laptop.smart-cactus.org> References: <874m4t9hca.fsf@ben-laptop.smart-cactus.org> <87ponfokuj.fsf@ben-laptop.smart-cactus.org> <6F2B224E-B121-405F-9257-1C346D97BF59@cs.brynmawr.edu> <2D9A2C75-43D2-45E0-810A-997B73BEC0BD@cs.brynmawr.edu> <87zimhmma9.fsf@ben-laptop.smart-cactus.org> <87lgy1ma7g.fsf@ben-laptop.smart-cactus.org> Message-ID: > On Oct 6, 2016, at 5:36 PM, Ben Gamari wrote: > > The motivation here isn't that we want to decompose this. It's that > carrying the kind variable instantiations will shrink the size of your > typical (non-kind-polymorphic) TypeRep and may actually simplify a few > things in the implementation. Namely, I suspect that the trouble we > currently experience due to recursive kind relationships will vanish. Ah. This is great motivation. >> Why not carry both the kind instantiations and the >> final kind? That seems much simpler than the scheme below. >> > Admittedly there is a bit of complexity here. I haven't yet tried > implementing it but I suspect it's manageable. On further thought, my idea doesn't simplify any of the TyConKindRep stuff, so I retract my idea. > Oh dear. This will require some thought. Indeed. It's never easy. :) Richard From mail at joachim-breitner.de Fri Oct 7 03:49:17 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 06 Oct 2016 23:49:17 -0400 Subject: Travis broken (type checking patches by SPJ) Message-ID: <1475812157.3333.8.camel@joachim-breitner.de> Hi, our secondary CI infrastructure has been in bad shape for a week.
The first push that broke it was the one ending with commit fc4ef66 by SPJ: * fc4ef66 - Comments only (vor 7 Tagen) * b612da6 - Fix impredicativity (again) (vor 7 Tagen) * 3012c43 - Add Outputable Report in TcErrors (vor 7 Tagen) * 66a8c19 - Fix a bug in occurs checking (vor 7 Tagen) * 2fbfbca - Fix desugaring of pattern bindings (again) (vor 7 Tagen) * 0b533a2 - A bit of tracing about flattening (vor 7 Tagen) https://github.com/ghc/ghc/compare/f21eedbc223c...fc4ef667ce2b --- /dev/null 2015-01-28 16:31:58.000000000 +0000 +++ /tmp/ghctest-gSX4vv/test spaces/./boxy/Base1.run/Base1.comp.stderr.normalised 2016-09-30 12:44:57.182659875 +0000 @@ -0,0 +1,18 @@ + +Base1.hs:20:13: + Couldn't match type ‘a0 -> a0’ with ‘forall a. a -> a’ + Expected type: MEither Sid b + Actual type: MEither (a0 -> a0) b + In the expression: MLeft fid + In an equation for ‘test1’: test1 fid = MLeft fid + +Base1.hs:25:39: + Couldn't match type ‘a1 -> a1’ with ‘forall a. a -> a’ + Expected type: Maybe (Sid, Sid) + Actual type: Maybe (a1 -> a1, a2 -> a2) + In the expression: Just (x, y) + In a case alternative: MRight y -> Just (x, y) + In the expression: + case m of { + MRight y -> Just (x, y) + _ -> Nothing } *** unexpected failure for Base1(normal) Compile failed (exit code 1) errors were: ghc-stage2: panic! (the 'impossible' happened) (GHC version 8.1.20160930 for x86_64-unknown-linux): ASSERT failed! 
m_aAI Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1125:22 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcType.hs:979:47 in ghc:TcType Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1080:37 in ghc:Outputable pprPanic, called at compiler/utils/Outputable.hs:1123:5 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcType.hs:979:47 in ghc:TcType Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug *** unexpected failure for RaeBlogPost(normal) Compile failed (exit code 1) errors were: ghc-stage2: panic! (the 'impossible' happened) (GHC version 8.1.20160930 for x86_64-unknown-linux): ASSERT failed! m_awK Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1125:22 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcType.hs:979:47 in ghc:TcType Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1080:37 in ghc:Outputable pprPanic, called at compiler/utils/Outputable.hs:1123:5 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcType.hs:979:47 in ghc:TcType Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug *** unexpected failure for TypeLevelVec(normal) A subsequent commit by SPJ fixed the first one. It now compiled without -DDEBUG again, but the other two failures (observable only with -DDEBUG) are still there. Since then, Travis has been reporting failures for the master branch.
I only noticed now as I pushed something to master, and I got an email. Did you not get notifications about the breakage? If you did, was it unclear how to get to the log file? In any case: Simon, could you have a look and see if the ASSERT is pointing out a real bug introduced with your commits, or whether the ASSERT is wrong, so that we can build master with -DDEBUG again? Thanks, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From amindfv at gmail.com Fri Oct 7 08:20:09 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Fri, 7 Oct 2016 04:20:09 -0400 Subject: Type hole in pattern match Message-ID: Is this intended behavior?: foo :: _ (foo, _) = (True, True) Produces no warning or message at all even with -Wall Tom
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | amindfv at gmail.com | Sent: 07 October 2016 09:20 | To: ghc-devs at haskell.org | Subject: Type hole in pattern match | | Is this intended behavior?: | | foo :: _ | (foo, _) = (True, True) | | Produces no warning or message at all even with -Wall | | Tom | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=01%7C01%7Csimonpj%40microsoft.com%7C966e5c7ba691483e7ab808d3ee8 | ab1a8%7C72f988bf86f141af91ab2d7cd011db47%7C1&sdata=zujBlRyIkmN01tTFzfd22n | zBKbg9BGDUvMbEmBDNzj4%3D&reserved=0 From ben at well-typed.com Fri Oct 7 21:07:46 2016 From: ben at well-typed.com (Ben Gamari) Date: Fri, 07 Oct 2016 17:07:46 -0400 Subject: [GHC Proposal] Optional tuple parenthesization Message-ID: <8760p3n9zx.fsf@ben-laptop.smart-cactus.org> Hello everyone, Iceland_Jack has just opened Pull Requests #18 [1] and #19 [2] against the ghc-proposals repository, describing extensions to GHC's LambdaCase extension and pattern matching syntax on tuples. Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals/pull/18 [2] https://github.com/ghc-proposals/ghc-proposals/pull/19 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Sat Oct 8 16:43:52 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Sat, 8 Oct 2016 17:43:52 +0100 Subject: Default options for -threaded Message-ID: <57f92249.4d081c0a.292b6.e00a@mx.google.com> Hi All, A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why -N -qa isn’t the default for -threaded. I’m afraid I don’t know a good reason why it’s not. Can anyone help shed some light on this?
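For anyone experimenting with this: with the threaded RTS the capability count can also be changed from inside the program at run time, so an executable can opt into using every core itself rather than relying on the RTS default. A minimal sketch (the names are from Control.Concurrent and GHC.Conc; setNumCapabilities assumes GHC >= 7.6 and a binary linked with -threaded):

```haskell
import Control.Concurrent
  (getNumCapabilities, rtsSupportsBoundThreads, setNumCapabilities)
import GHC.Conc (getNumProcessors)

main :: IO ()
main = do
  procs <- getNumProcessors          -- cores visible to the RTS
  -- setNumCapabilities is only meaningful with the threaded RTS,
  -- i.e. when the program was linked with -threaded:
  if rtsSupportsBoundThreads
    then setNumCapabilities procs    -- roughly the effect of +RTS -N
    else putStrLn "non-threaded RTS; staying on one capability"
  caps <- getNumCapabilities
  putStrLn ("capabilities: " ++ show caps)
```

As far as I know there is no corresponding runtime toggle for -qa; thread affinity remains an RTS startup flag.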
If there is no good reason, would anyone object to making it the default? Thanks, Tamar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat Oct 8 16:55:17 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 08 Oct 2016 12:55:17 -0400 Subject: Default options for -threaded In-Reply-To: <57f92249.4d081c0a.292b6.e00a@mx.google.com> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> Message-ID: <87mvielr0q.fsf@ben-laptop.smart-cactus.org> lonetiger at gmail.com writes: > Hi All, > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > -N -qa isn’t the default for -threaded. > I'm not sure that scheduling on all of the cores on the user's machine by default is a good idea, especially given that our users have learned to expect the existing default. Enabling affinity by default seems reasonable if we have evidence that it helps the majority of applications, but we would first need to introduce an additional flag to disable it. In general I think -N1 is a reasonable default as it acknowledges the fact that deploying parallelism is not something that can be done blindly in many (most?) applications. To make effective use of parallelism the user needs to understand their hardware, their application, and its interaction with the runtime system and configure the RTS appropriately. Of course, this is just my two-cents. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From eric at seidel.io Sat Oct 8 17:13:02 2016 From: eric at seidel.io (Eric Seidel) Date: Sat, 08 Oct 2016 10:13:02 -0700 Subject: Default options for -threaded In-Reply-To: <87mvielr0q.fsf@ben-laptop.smart-cactus.org> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> Message-ID: <1475946782.2358803.749841641.12715F7B@webmail.messagingengine.com> I would prefer keeping -N1 as a default, especially now that the number of capabilities can be set at runtime. Programs can then use the more common -j flag to enable parallelism. Regarding -qa, I was experimenting with it over the summer and found its behavior a bit surprising. It did prevent threads from being moved between capabilities, but it also forced all of the threads (created with forkIO) to be *spawned* on the same capability, which was unexpected. So -N -qa was, in my experience, equivalent to -N1! On Sat, Oct 8, 2016, at 09:55, Ben Gamari wrote: > lonetiger at gmail.com writes: > > > Hi All, > > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > > -N -qa isn’t the default for -threaded. > > > I'm not sure that scheduling on all of the cores on the user's machine by > default is a good idea, especially given that our users have > learned to expect the existing default. Enabling affinity by default > seems reasonable if we have evidence that it helps the majority of > applications, but we would first need to introduce an additional > flag to disable it. > > In general I think -N1 is a reasonable default as it acknowledges the > fact that deploying parallelism is not something that can be done > blindly in many (most?) applications. To make effective use of > parallelism the user needs to understand their hardware, their > application, and its interaction with the runtime system and configure > the RTS appropriately. 
> > Of course, this is just my two-cents. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > Email had 1 attachment: > + signature.asc > 1k (application/pgp-signature) From david.feuer at gmail.com Sat Oct 8 19:27:18 2016 From: david.feuer at gmail.com (David Feuer) Date: Sat, 8 Oct 2016 15:27:18 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: The current Read instances for Float and Double look pretty iffy from an efficiency standpoint. Going through Rational is exceedingly weird: we have absolutely nothing to gain by dividing out the GCD, as far as I can tell. Then, in doing so, we read the digits of the integral part to form an Integer. This looks like a detour, and particularly bad when it has many digits. Wouldn't it be better to normalize the decimal representation first in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably less importantly, is there some way to avoid converting the mantissa to an Integer at all? The low digits may not end up making any difference whatsoever. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sat Oct 8 19:38:10 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 08 Oct 2016 15:38:10 -0400 Subject: Type hole in pattern match In-Reply-To: References: Message-ID: <87fuo6ljh9.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > For HEAD I get. > > $ ghc -c Foo.hs > > Foo.hs:5:8: warning: [-Wpartial-type-signatures] > * Found type wildcard `_' standing for `Bool' > * In the type signature: foo :: _ > > > Not sure about the 8.0 branch. There may well be ticket(s) about this; > worth a hunt, because it looks as if it's already fixed. > Sadly it looks like this issue is still present on ghc-8.0. A quick search through Trac didn't turn up any compelling candidate fixes, however. 
It looks like identifying the missing patch may require more work. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Sat Oct 8 20:10:15 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 08 Oct 2016 16:10:15 -0400 Subject: bkpcabal01 test failure on Mac OS X Message-ID: <87d1jalhzs.fsf@ben-laptop.smart-cactus.org> Hi Edward, Our new OS X build bot has noticed that the bkpcabal01 testcase introduced with your Backpack merge seems to fail on OS X with, =====> bkpcabal01(normal) 1 of 1 [0, 0, 0] cd "./backpack/cabal/bkpcabal01/bkpcabal01.run" && $MAKE -s --no-print-directory bkpcabal01 CLEANUP=1 Actual stderr output differs from expected: --- /dev/null 2016-10-08 22:51:23.000000000 +0300 +++ ./backpack/cabal/bkpcabal01/bkpcabal01.run/bkpcabal01.run.stderr.normalised 2016-10-08 22:51:23.000000000 +0300 @@ -0,0 +1,4 @@ +/Applications/Xcode-7.2/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: dist/build/p-0.1+FBOSaiWyMx9DR2UZVI6wQJ/objs-36887/libHSp-0.1+FBOSaiWyMx9DR2UZVI6wQJ.a(H.o) has no symbols +/Applications/Xcode-7.2/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/objs-36936/libHSq-0.1+70e5o6lPGfgGiochTG2tqQ.a(I.o) has no symbols +/Applications/Xcode-7.2/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: dist/build/p-0.1+FBOSaiWyMx9DR2UZVI6wQJ/objs-37078/libHSp-0.1+FBOSaiWyMx9DR2UZVI6wQJ.a(H.o) has no symbols +/Applications/Xcode-7.2/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/objs-37123/libHSq-0.1+70e5o6lPGfgGiochTG2tqQ.a(I.o) has no symbols *** unexpected failure for bkpcabal01(normal) It seems that this message is produced by `ar` while building the .a archive, 
bkpcabal01 bgamari$ make SETUP="./Setup -v3" bkpcabal01 ... ("/usr/bin/ar",["-r","-s","-v","dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/objs-39951/libHSq-0.1+70e5o6lPGfgGiochTG2tqQ.a","dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/Q.o","dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/I.o"]) ar: creating archive dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/objs-39951/libHSq-0.1+70e5o6lPGfgGiochTG2tqQ.a a - dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/Q.o a - dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/I.o /Applications/Xcode-7.2/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: dist/build/q-0.1+70e5o6lPGfgGiochTG2tqQ/objs-39951/libHSq-0.1+70e5o6lPGfgGiochTG2tqQ.a(I.o) has no symbols I've opened #12673 to track this issue but until we have a chance to fix it I've added `ignore_stderr` to the testcase on Darwin. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From tjakway at nyu.edu Sun Oct 9 14:32:10 2016 From: tjakway at nyu.edu (Thomas Jakway) Date: Sun, 9 Oct 2016 10:32:10 -0400 Subject: Better X87 Message-ID: I was looking through compiler/nativeGen/X86/Instr.hs and it's pretty hard not to notice the (hilarious) diatribe about the horror that is x87. git log -p says this was apparently written in 2009 by Ben.Lippmeier at anu.edu.au (92ee78e03c3670f56ebbbbfb0f67a00f9ea1305f). Since this has survived in X86/ all this time I'm guessing this is still an issue (another guess: we've gotten by because of SSE?). Is there any interest in improving x87 code generation? And if so, has anyone tried before? According to the comment there seems to be room for improvement. Sadly I don't think x87 is going away any time soon. -Thomas Jakway -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at smart-cactus.org Sun Oct 9 18:14:57 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 09 Oct 2016 14:14:57 -0400 Subject: Better X87 In-Reply-To: References: Message-ID: <87a8edl78e.fsf@ben-laptop.smart-cactus.org> Hi Thomas! Thomas Jakway writes: > I was looking through compiler/nativeGen/X86/Instr.hs > > and it's pretty hard not to notice the (hilarious) diatribe about the > horror that is x87. > Reading these notes is one of the joys of working on GHC. > git log -p says this was apparently written in 2009 by > Ben.Lippmeier at anu.edu.au (92ee78e03c3670f56ebbbbfb0f67a00f9ea1305f). > > Since this has survived in X86/ all this time I'm guessing this is still > an issue (another guess: we've gotten by because of SSE?). Is there any > interest in improving x87 code generation? And if so, has anyone tried > before? > As far as I know this is indeed still an issue, but one that (I would guess) relatively few people really feel. There are two reasons for this, * We have the LLVM backend which users needing high performance numerics tend to gravitate towards * We have -msse2 which is used by default on x86_64 (which is most users at this point). My impression is that this is still a "problem" but, unless you are yourself actively affected by it, there are probably more important ways to contribute (e.g. fix up the graph coloring register allocator). If you did want to fix up x87 support, I think it would be preferable to do so in a way that avoids complicating the register allocator; some day the monster that is x87 will die and we'd prefer not to have to rip out more of its tentacles from the code generator than necessary.
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From tjakway at nyu.edu Mon Oct 10 02:27:17 2016 From: tjakway at nyu.edu (Thomas Jakway) Date: Sun, 9 Oct 2016 22:27:17 -0400 Subject: Better X87 In-Reply-To: <87a8edl78e.fsf@ben-laptop.smart-cactus.org> References: <87a8edl78e.fsf@ben-laptop.smart-cactus.org> Message-ID: <75050bbd-9b67-dd21-0e04-8cc1973d17bc@nyu.edu> OK, makes sense, thanks. On 10/09/2016 02:14 PM, Ben Gamari wrote: > Hi Thomas! > > > Thomas Jakway writes: > >> I was looking through compiler/nativeGen/X86/Instr.hs >> >> and it's pretty hard not to notice the (hilarious) diatribe about the >> horror that is x87. >> > Reading these notes is one of the joys of working on GHC. > >> git log -p says this was apparently written in 2009 by >> Ben.Lippmeier at anu.edu.au (92ee78e03c3670f56ebbbbfb0f67a00f9ea1305f). >> >> Since this has survived in X86/ all this time I'm guessing this is still >> an issue (another guess: we've gotten by because of SSE?). Is there any >> interest in improving x87 code generation? And if so, has anyone tried >> before? >> > As far as I know this is indeed still an issue, but one that (I would > guess) relatively few people really feel. There are two reasons for > this, > > * We have the LLVM backend which users needing high performance > numerics tend to gravitate towards > > * We have -msse2 which is used by default on x86_64 (which is most > users at this point). > > My impression is that this is still a "problem" but, unless you are > yourself actively affected by it, there are probably more important > ways to contribute (e.g. fix up the graph coloring register allocator). > > If you did want to fix up x87 support, I think it would be preferable to do > so in a way that avoids complicating the register allocator; some day > the monster that is x87 will die and we'd prefer not to have to rip out > more of its tentacles from the code generator than necessary.
I think > the "more clever" approach described in the note would probably be > a good start: retain the virtual "registers" but try to be more clever > about assigning them to stack entries by looking at more than one > instruction at once. > > Cheers, > > - Ben > From gale at sefer.org Mon Oct 10 09:40:38 2016 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 10 Oct 2016 12:40:38 +0300 Subject: Allow top-level shadowing for imported names? In-Reply-To: References: <877f9pkf8t.fsf@gnu.org> <395A0079-3E9B-4D8F-AC6B-9DC204507B85@gmail.com> Message-ID: Michael Sloan wrote: > It is really good to think in terms of a cleverness budget... > Here are the things I see in favor of this proposal: > > 1) It is common practice to use -Wall... > 2) It lets us do things that are otherwise quite inconvenient... You missed the most important plus: 0) It fixes an inconsistency and thus simplifies Haskell syntax. So in my opinion this is not a cleverness proposal, it's a simplification. > 2) There is no good way to use this feature without creating a > warning. I'm not sure what you mean. There is already a warning for shadowing. Except shadowing from imports, where it is an error instead. The proposal is to eliminate that special case. > I would like to be explicit in my name shadowing I'm > thinking a pragma like {-# NO_WARN myFunction #-}, > or, better yet, the more specific > {-# SHADOWING myFunction #-} or so. The same applies to shadowing in every other context. Adding such a pragma might indeed be a nice idea. But it should apply consistently to shadowing in all contexts, not just for import shadowing. In any case, this would be a separate proposal.
-Yitz From gale at sefer.org Mon Oct 10 09:59:43 2016 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 10 Oct 2016 12:59:43 +0300 Subject: Reading floating point In-Reply-To: References: Message-ID: The way I understood it, it's because the type of "floating point" literals is Fractional a => a so the literal parser has no choice but to go via Rational. Once you have that, you use the same parser for those Read instances to ensure that the result is identical to what you would get if you parse it as a literal in every case. You could replace the Read parsers for Float and Double with much more efficient ones. But you would need to provide some other guarantee of consistency with literals. That would be more difficult to achieve than one might think - floating point is deceptively tricky. There are already several good parsers in the libraries, but I believe all of them can provide different results than literals in some cases. Yitz On Sat, Oct 8, 2016 at 10:27 PM, David Feuer wrote: > The current Read instances for Float and Double look pretty iffy from an > efficiency standpoint. Going through Rational is exceedingly weird: we have > absolutely nothing to gain by dividing out the GCD, as far as I can tell. > Then, in doing so, we read the digits of the integral part to form an > Integer. This looks like a detour, and particularly bad when it has many > digits. Wouldn't it be better to normalize the decimal representation first > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably less > importantly, is there some way to avoid converting the mantissa to an > Integer at all? The low digits may not end up making any difference > whatsoever.
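To make the consistency constraint concrete: a fractional literal is itself desugared to an application of fromRational, so a Read parser that goes through Rational agrees with literals by construction, and any faster replacement would have to reproduce that rounding behaviour exactly. A small illustration using only the Prelude and Data.Ratio:

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  -- The literal 0.1 means fromRational (1 % 10) at type Double,
  -- and the current Read instance parses via the same Rational
  -- route, so all three coincide bit-for-bit:
  let lit = 0.1 :: Double
  print (lit == fromRational (1 % 10))   -- True
  print (lit == read "0.1")              -- True
```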
> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From lonetiger at gmail.com Mon Oct 10 10:33:02 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 10 Oct 2016 11:33:02 +0100 Subject: Default options for -threaded In-Reply-To: <87mvielr0q.fsf@ben-laptop.smart-cactus.org> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> Message-ID: Oops, sorry, only just now seen this. It seems my overly aggressive filters couldn't decide where to put the email :) I do agree to some extent with this. I'd prefer if I made a mistake for my system not to hang. The one downside to this default though is that you can't just hand a program over to a user and have it run at full capabilities. Is it possible to set this from inside a program? My guess is no, since by the time you get to main the rts is already initialized? Would a useful alternative be to provide a compile flag that would change the default? e.g. opt-in? Since now there is a small burden on the end user. Cheers, Tamar On Sat, Oct 8, 2016 at 5:55 PM, Ben Gamari wrote: > lonetiger at gmail.com writes: > > > Hi All, > > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > > -N -qa isn’t the default for -threaded. > > > I'm not sure that scheduling on all of the cores on the user's machine by > default is a good idea, especially given that our users have > learned to expect the existing default. Enabling affinity by default > seems reasonable if we have evidence that it helps the majority of > applications, but we would first need to introduce an additional > flag to disable it. > > In general I think -N1 is a reasonable default as it acknowledges the > fact that deploying parallelism is not something that can be done
To make effective use of > parallelism the user needs to understand their hardware, their > application, and its interaction with the runtime system and configure > the RTS appropriately. > > Of course, this is just my two-cents. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Mon Oct 10 10:34:14 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 10 Oct 2016 11:34:14 +0100 Subject: Default options for -threaded In-Reply-To: <1475946782.2358803.749841641.12715F7B@webmail.messagingengine.com> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> <1475946782.2358803.749841641.12715F7B@webmail.messagingengine.com> Message-ID: Oh, this is surprising, I must admit I haven't tried forkIO, but with forkOS it doesn't move the threads across capabilities. Do you know if this is by design or a bug? On Sat, Oct 8, 2016 at 6:13 PM, Eric Seidel wrote: > I would prefer keeping -N1 as a default, especially now that the number > of capabilities can be set at runtime. Programs can then use the more > common -j flag to enable parallelism. > > Regarding -qa, I was experimenting with it over the summer and found its > behavior a bit surprising. It did prevent threads from being moved > between capabilities, but it also forced all of the threads (created > with forkIO) to be *spawned* on the same capability, which was > unexpected. So -N -qa was, in my experience, equivalent to -N1! > > On Sat, Oct 8, 2016, at 09:55, Ben Gamari wrote: > > lonetiger at gmail.com writes: > > > > > Hi All, > > > > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > > > -N -qa isn’t the default for -threaded. > > > > > I'm not sure that scheduling on all of the cores on the user's machine by > > default is a good idea, especially given that our users have > > learned to expect the existing default.
Enabling affinity by default > > seems reasonable if we have evidence that it helps the majority of > > applications, but we would first need to introduce an additional > > flag to disable it. > > > > In general I think -N1 is a reasonable default as it acknowledges the > > fact that deploying parallelism is not something that can be done > > blindly in many (most?) applications. To make effective use of > > parallelism the user needs to understand their hardware, their > > application, and its interaction with the runtime system and configure > > the RTS appropriately. > > > > Of course, this is just my two-cents. > > > > Cheers, > > > > - Ben > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > Email had 1 attachment: > > + signature.asc > > 1k (application/pgp-signature) > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Mon Oct 10 13:33:43 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 10 Oct 2016 09:33:43 -0400 Subject: Default options for -threaded In-Reply-To: References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> Message-ID: <8760p09vm0.fsf@ben-laptop.smart-cactus.org> Phyx writes: > Oops, sorry, only just now seen this. It seems my overly aggressive filters > couldn't decide where to put the email :) > > I do agree to some extend with this. I'd prefer if I made a mistake for my > system not to hang. The one downside to this default though is that you > can't just hand a program over to user and have it run at full capabilities. > > If it possible to set this from inside a program? My guess is no, since by > the time you get to main the rts is already initialized? 
> > Would a useful alternative be to provide a compile flag that would change > the default? e.g. opt-in? Since now there is a small burden on the end user. > There exist two pretty good tools for accomplishing what you want, 1. Call Control.Concurrent.setNumCapabilities [1] from within your application. 2. Use GHC's -with-rtsopts flag [2] to set the default RTS arguments during compilation of your application. Cheers, - Ben [1] http://localhost:7000/file/opt/exp/ghc/roots/8.0.1/share/doc/ghc-8.0.1/html/libraries/base-4.9.0.0/Control-Concurrent.html#v:setNumCapabilities [2] http://downloads.haskell.org/~ghc/master/users-guide//phases.html?highlight=#ghc-flag--with-rtsopts -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From eric at seidel.io Mon Oct 10 14:04:07 2016 From: eric at seidel.io (Eric Seidel) Date: Mon, 10 Oct 2016 07:04:07 -0700 Subject: Default options for -threaded In-Reply-To: References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> <1475946782.2358803.749841641.12715F7B@webmail.messagingengine.com> Message-ID: <1476108247.2837099.751232649.7154CDD2@webmail.messagingengine.com> Ah, I'm sorry, I believe I was thinking of -qm, which is supposed to prevent threads from being moved. I forgot these were separate options! And the latest version of the User's Guide includes a comment about -qm > This option is probably only of use for concurrent programs that explicitly schedule threads onto CPUs with Control.Concurrent.forkOn. which is exactly what I had to do. On Mon, Oct 10, 2016, at 03:34, Phyx wrote: > Oh, this is surprising, I must admit I haven't tried forkIO, but with > forkOS is doesn't move the threads across capabilities. > > Do you know if this is by design or a bug? 
> > On Sat, Oct 8, 2016 at 6:13 PM, Eric Seidel wrote: > > > I would prefer keeping -N1 as a default, especially now that the number > > of capabilities can be set at runtime. Programs can then use the more > > common -j flag to enable parallelism. > > > > Regarding -qa, I was experimenting with it over the summer and found its > > behavior a bit surprising. It did prevent threads from being moved > > between capabilities, but it also forced all of the threads (created > > with forkIO) to be *spawned* on the same capability, which was > > unexpected. So -N -qa was, in my experience, equivalent to -N1! > > > > On Sat, Oct 8, 2016, at 09:55, Ben Gamari wrote: > > > lonetiger at gmail.com writes: > > > > > > > Hi All, > > > > > > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > > > > -N -qa isn’t the default for -threaded. > > > > > > > I'm not sure that scheduling on all of the cores on the user's machine by > > > default is a good idea, especially given that our users have > > > learned to expect the existing default. Enabling affinity by default > > > seems reasonable if we have evidence that it helps the majority of > > > applications, but we would first need to introduce an additional > > > flag to disable it. > > > > > > In general I think -N1 is a reasonable default as it acknowledges the > > > fact that deploying parallelism is not something that can be done > > > blindly in many (most?) applications. To make effective use of > > > parallelism the user needs to understand their hardware, their > > > application, and its interaction with the runtime system and configure > > > the RTS appropriately. > > > > > > Of course, this is just my two-cents. 
> > > > > > Cheers, > > > > > > - Ben > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > Email had 1 attachment: > > > + signature.asc > > > 1k (application/pgp-signature) > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From simonpj at microsoft.com Mon Oct 10 14:24:38 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 10 Oct 2016 14:24:38 +0000 Subject: [Diffusion] [Build Failed] rGHCa6111b8cc14a: More tests for Trac #12522 In-Reply-To: <20161010140708.1540.10740.DB8AE9B5@phabricator.haskell.org> References: <20161010140708.1540.10740.DB8AE9B5@phabricator.haskell.org> Message-ID: This says “stat not good enough” for “max_bytes_used” on T1969. I pushed a “T1969 is ok” patch recently, because it IS ok on my (64-bit Linux) machine. If it’s not ok for our CI infrastructure, by all means un-push it or something. Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 10 October 2016 15:07 To: Simon Peyton Jones Subject: [Diffusion] [Build Failed] rGHCa6111b8cc14a: More tests for Trac #12522 Harbormaster failed to build B11303: rGHCa6111b8cc14a: More tests for Trac #12522! BRANCHES master USERS simonpj (Author) O7 (Auditor) COMMIT https://phabricator.haskell.org/rGHCa6111b8cc14a EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj, Harbormaster -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Mon Oct 10 14:42:51 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 10 Oct 2016 10:42:51 -0400 Subject: GHC 8.0.2 status Message-ID: <87twck8duc.fsf@ben-laptop.smart-cactus.org> Hello GHCers, Thanks to the work of darchon the last blocker for the 8.0.2 release (#12479) has nearly been resolved. After the fix has been merged I'll be doing some further testing of the ghc-8.0 branch and cut a source tarball for 8.0.2-rc1 later this week. If you intend on offering a binary release for 8.0.2 it would be great if you could plan on testing the tarball promptly so we can cut 8.0.2 and move on to planning for 8.2.1. Thanks for your help and patience! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Mon Oct 10 16:19:33 2016 From: lonetiger at gmail.com (Phyx) Date: Mon, 10 Oct 2016 16:19:33 +0000 Subject: Default options for -threaded In-Reply-To: <8760p09vm0.fsf@ben-laptop.smart-cactus.org> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> <8760p09vm0.fsf@ben-laptop.smart-cactus.org> Message-ID: Oh, thanks! I wasn't aware of either of these! Useful to know. That does cover the use case I could think of. Thanks, Tamar On Mon, Oct 10, 2016, 14:34 Ben Gamari wrote: > Phyx writes: > > > Oops, sorry, only just now seen this. It seems my overly aggressive > filters > > couldn't decide where to put the email :) > > > > I do agree to some extend with this. I'd prefer if I made a mistake for > my > > system not to hang. The one downside to this default though is that you > > can't just hand a program over to user and have it run at full > capabilities. > > > > If it possible to set this from inside a program? My guess is no, since > by > > the time you get to main the rts is already initialized?
> > > > Would a useful alternative be to provide a compile flag that would change > > the default? e.g. opt-in? Since now there is a small burden on the end > user. > > > There exist two pretty good tools for accomplishing what you want, > > 1. Call Control.Concurrent.setNumCapabilities [1] from within your > application. > > 2. Use GHC's -with-rtsopts flag [2] to set the default RTS arguments > during compilation of your application. > > Cheers, > > - Ben > > > [1] > http://localhost:7000/file/opt/exp/ghc/roots/8.0.1/share/doc/ghc-8.0.1/html/libraries/base-4.9.0.0/Control-Concurrent.html#v:setNumCapabilities > [2] > http://downloads.haskell.org/~ghc/master/users-guide//phases.html?highlight=#ghc-flag--with-rtsopts > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Oct 10 16:25:04 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 10 Oct 2016 12:25:04 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: I fully expect this to be somewhat tricky, yes. But some aspects of the current implementation strike me as pretty clearly non-optimal. What I meant about going through Rational is that given "625e-5", say, it calculates 625%100000, producing a fraction in lowest terms, before calling fromRational, which itself invokes fromRat'', a division function optimized for a special case that doesn't seem too relevant in this context. I could be mistaken, but I imagine even reducing to lowest terms is useless here. The separate treatment of the digits preceding and following the decimal point doesn't do anything obviously useful either. If we (effectively) normalize in decimal to an integral mantissa, for example, then we can convert the whole mantissa to an Integer at once; this will balance the merge tree better than converting the two pieces separately and combining. 
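A rough sketch of the normalization being suggested (the helper names here are invented for illustration; there is no sign handling or input validation, and the final power-of-ten scaling is not guaranteed to be correctly rounded, which is exactly the consistency caveat raised in this thread):

```haskell
module Main where

import Data.Char (digitToInt)
import Data.List (foldl')

-- Normalize a string like "ddd.dddEx" to an integral mantissa and a
-- decimal exponent, reading all the digits as a single Integer so the
-- digit-to-Integer conversion happens once rather than piecewise.
normalizeDecimal :: String -> (Integer, Int)
normalizeDecimal s = (mantissa, e0 - length fracPart)
  where
    (mant, expPart) = break (`elem` "eE") s
    e0 = case expPart of
           (_:rest) -> read rest :: Int
           []       -> 0
    (intPart, fracPart) = case break (== '.') mant of
                            (i, '.':f) -> (i, f)
                            (i, _)     -> (i, "")
    mantissa = foldl' (\acc d -> acc * 10 + toInteger (digitToInt d)) 0
                      (intPart ++ fracPart)

-- A single scaling step then gives the value; note this step is *not*
-- correctly rounded in general, which is the hard part of replacing
-- the Rational-based path.
toDouble :: String -> Double
toDouble s = fromInteger m * 10 ^^ e
  where (m, e) = normalizeDecimal s

main :: IO ()
main = do
  print (normalizeDecimal "625e-5")
  print (toDouble "625e-5")
```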
On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: The way I understood it, it's because the type of "floating point" literals is Fractional a => a so the literal parser has no choice but to go via Rational. Once you have that, you use the same parser for those Read instances to ensure that the result is identical to what you would get if you parse it as a literal in every case. You could replace the Read parsers for Float and Double with much more efficient ones. But you would need to provide some other guarantee of consistency with literals. That would be more difficult to achieve than one might think - floating point is deceivingly tricky. There are already several good parsers in the libraries, but I believe all of them can provide different results than literals in some cases. YItz On Sat, Oct 8, 2016 at 10:27 PM, David Feuer wrote: > The current Read instances for Float and Double look pretty iffy from an > efficiency standpoint. Going through Rational is exceedingly weird: we have > absolutely nothing to gain by dividing out the GCD, as far as I can tell. > Then, in doing so, we read the digits of the integral part to form an > Integer. This looks like a detour, and particularly bad when it has many > digits. Wouldn't it be better to normalize the decimal representation first > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably less > importantly, is there some way to avoid converting the mantissa to an > Integer at all? The low digits may not end up making any difference > whatsoever. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Mon Oct 10 17:56:20 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Oct 2016 10:56:20 -0700 Subject: Reading floating point In-Reply-To: References: Message-ID: The right solution is to fix things so we have scientific notation literal rep available. Any other contortions run into challenges in representability of things. That's of course ignoring denormalized floats, infinities, negative zero and perhaps NaNs. At the very least we need to efficiently and safely support everything but NaN. And I have some ideas for that I hope to share soon. On Monday, October 10, 2016, David Feuer wrote: > I fully expect this to be somewhat tricky, yes. But some aspects of the > current implementation strike me as pretty clearly non-optimal. What I > meant about going through Rational is that given "625e-5", say, it > calculates 625%100000, producing a fraction in lowest terms, before calling > fromRational, which itself invokes fromRat'', a division function optimized > for a special case that doesn't seem too relevant in this context. I could > be mistaken, but I imagine even reducing to lowest terms is useless here. > The separate treatment of the digits preceding and following the decimal > point doesn't do anything obviously useful either. If we (effectively) > normalize in decimal to an integral mantissa, for example, then we can > convert the whole mantissa to an Integer at once; this will balance the > merge tree better than converting the two pieces separately and combining. > > On Oct 10, 2016 6:00 AM, "Yitzchak Gale" > wrote: > > The way I understood it, it's because the type of "floating point" > literals is > > Fractional a => a > > so the literal parser has no choice but to go via Rational. Once you > have that, you use the same parser for those Read instances to ensure > that the result is identical to what you would get if you parse it as > a literal in every case.
> > You could replace the Read parsers for Float and Double with much more > efficient ones. But you would need to provide some other guarantee of > consistency with literals. That would be more difficult to achieve > than one might think - floating point is deceivingly tricky. There are > already several good parsers in the libraries, but I believe all of > them can provide different results than literals in some cases. > > YItz > > On Sat, Oct 8, 2016 at 10:27 PM, David Feuer > wrote: > > The current Read instances for Float and Double look pretty iffy from an > > efficiency standpoint. Going through Rational is exceedingly weird: we > have > > absolutely nothing to gain by dividing out the GCD, as far as I can tell. > > Then, in doing so, we read the digits of the integral part to form an > > Integer. This looks like a detour, and particularly bad when it has many > > digits. Wouldn't it be better to normalize the decimal representation > first > > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably less > > importantly, is there some way to avoid converting the mantissa to an > > Integer at all? The low digits may not end up making any difference > > whatsoever. > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Mon Oct 10 18:08:59 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Oct 2016 11:08:59 -0700 Subject: Better X87 In-Reply-To: <75050bbd-9b67-dd21-0e04-8cc1973d17bc@nyu.edu> References: <87a8edl78e.fsf@ben-laptop.smart-cactus.org> <75050bbd-9b67-dd21-0e04-8cc1973d17bc@nyu.edu> Message-ID: I actually think we should either remove x87 support, or make it not the default and factor it out more cleanly from the rest of the x86 code gen. Improving x87 as a focused way of getting one's feet wet sounds like a great low-risk project. I think we should change code gen on x86-32 to default to SSE-based Float and Double, with -mx87 (or whatever we want to call it / whatever the standard convention for naming it is) being something that has to be explicitly asked for. As far as I can tell the only current Intel 32-bit chips that don't support SSE-based Float and Double computation are the ones that are meant to compete with Atmel / Arduino etc, like the Edison / Quark boards https://en.m.wikipedia.org/wiki/Intel_Quark , as all the slightly beefier offerings like the Atom series of low-power systems all have SSE support. @thomas I'm happy to help mentor / collab / whatever on this, since I've been wanting to clean up this stuff for a while. Either way, I strongly support the work starting with a warm-up patch to swap the defaults. X87 floating point causes a lot of heartache for those who don't expect it, and it definitely makes it so that results on 32-bit by default don't match 64-bit results, or even results at other optimization levels. On Sunday, October 9, 2016, Thomas Jakway wrote: > OK, makes sense, thanks. > > On 10/09/2016 02:14 PM, Ben Gamari wrote: > >> Hi Thomas! >> >> >> Thomas Jakway writes: >> >> I was looking through compiler/nativeGen/X86/Instr.hs >> X86/Instr.hs#L71> >> and it's pretty hard not to notice the (hilarious) diatribe about the >> horror that is x87. >> >> Reading these notes is one of the joys of working on GHC.
>> >> git log -p says this was apparently written in 2009 by >>> Ben.Lippmeier at anu.edu.au (92ee78e03c3670f56ebbbbfb0f67a00f9ea1305f). >>> >>> Since this has survived in X86/ all this time I'm guessing this is still >>> an issue (another guess: we've gotten by because of SSE?). Is there any >>> interest in improving x87 code generation? And if so, has anyone tried >>> before? >>> >>> As far as I know this is indeed still an issue, but one that (I would >> guess) relatively few people really feel. There are two reasons for >> this, >> >> * We have the LLVM backend which users needing high performance >> numerics tend to gravitate towards >> >> * We have -msse2 which is used by default on x86_64 (which is most >> users as this point). >> >> My impression is that this is still a "problem" but, unless you are >> yourself actively affected by it, there are probably more important >> ways to contribute (e.g. fix up the graph coloring register allocator). >> >> If you did want to fix up x87 support, I think it would preferable to do >> so in a way that avoids complicating the register allocator; some day >> the monster that is x87 will die and we'd prefer not to have to rip out >> more of its tentacles from the code generator than necessary. I think >> the "more clever" approach described in the note would probably be >> a good start: retain the virtual "registers" but try to be more clever >> about assigning them to stack entries by looking at more than one >> instruction at once. >> >> Cheers, >> >> - Ben >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Mon Oct 10 20:11:17 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 10 Oct 2016 16:11:17 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: What does any of that have to do with the Read instances? On Oct 10, 2016 1:56 PM, "Carter Schonwald" wrote: > The right solution is to fix things so we have scientific notation literal > rep available. Any other contortions run into challenges in repsentavility > of things. That's of course ignoring denormalized floats, infinities, > negative zero and perhaps nans. > > At the very least we need to efficiently and safely support everything but > nan. And I have some ideas for that I hope to share soon. > > On Monday, October 10, 2016, David Feuer wrote: > >> I fully expect this to be somewhat tricky, yes. But some aspects of the >> current implementation strike me as pretty clearly non-optimal. What I >> meant about going through Rational is that given "625e-5", say, it >> calculates 625%100000, producing a fraction in lowest terms, before calling >> fromRational, which itself invokes fromRat'', a division function optimized >> for a special case that doesn't seem too relevant in this context. I could >> be mistaken, but I imagine even reducing to lowest terms is useless here. >> The separate treatment of the digits preceding and following the decimal >> point doesn't do anything obviously useful either. If we (effectively) >> normalize in decimal to an integral mantissa, for example, then we can >> convert the whole mantissa to an Integer at once; this will balance the >> merge tree better than converting the two pieces separately and combining. >> >> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >> >> The way I understood it, it's because the type of "floating point" >> literals is >> >> Fractional a => a >> >> so the literal parser has no choice but to go via Rational. 
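For reference, show already produces strings for Double that are not valid source literals, so an exact literals-only Read parser could not round-trip show's output; a small probe of standard behavior:

```haskell
module Main where

main :: IO ()
main = do
  -- show can emit strings with no corresponding literal syntax:
  putStrLn (show (1 / 0 :: Double))      -- Infinity
  putStrLn (show (negate 0 :: Double))   -- -0.0
  -- yet Read is expected to round-trip whatever show produces:
  print (read (show (6.25e-3 :: Double)) :: Double)
```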
Once you >> have that, you use the same parser for those Read instances to ensure >> that the result is identical to what you would get if you parse it as >> a literal in every case. >> >> You could replace the Read parsers for Float and Double with much more >> efficient ones. But you would need to provide some other guarantee of >> consistency with literals. That would be more difficult to achieve >> than one might think - floating point is deceivingly tricky. There are >> already several good parsers in the libraries, but I believe all of >> them can provide different results than literals in some cases. >> >> YItz >> >> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >> wrote: >> > The current Read instances for Float and Double look pretty iffy from an >> > efficiency standpoint. Going through Rational is exceedingly weird: we >> have >> > absolutely nothing to gain by dividing out the GCD, as far as I can >> tell. >> > Then, in doing so, we read the digits of the integral part to form an >> > Integer. This looks like a detour, and particularly bad when it has many >> > digits. Wouldn't it be better to normalize the decimal representation >> first >> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably less >> > importantly, is there some way to avoid converting the mantissa to an >> > Integer at all? The low digits may not end up making any difference >> > whatsoever. >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Oct 10 22:07:57 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Oct 2016 15:07:57 -0700 Subject: Reading floating point In-Reply-To: References: Message-ID: Read should accept exactly the valid source literals for a type. 
On Monday, October 10, 2016, David Feuer wrote: > What does any of that have to do with the Read instances? > > On Oct 10, 2016 1:56 PM, "Carter Schonwald" > wrote: > >> The right solution is to fix things so we have scientific notation >> literal rep available. Any other contortions run into challenges in >> repsentavility of things. That's of course ignoring denormalized floats, >> infinities, negative zero and perhaps nans. >> >> At the very least we need to efficiently and safely support everything >> but nan. And I have some ideas for that I hope to share soon. >> >> On Monday, October 10, 2016, David Feuer > > wrote: >> >>> I fully expect this to be somewhat tricky, yes. But some aspects of the >>> current implementation strike me as pretty clearly non-optimal. What I >>> meant about going through Rational is that given "625e-5", say, it >>> calculates 625%100000, producing a fraction in lowest terms, before calling >>> fromRational, which itself invokes fromRat'', a division function optimized >>> for a special case that doesn't seem too relevant in this context. I could >>> be mistaken, but I imagine even reducing to lowest terms is useless here. >>> The separate treatment of the digits preceding and following the decimal >>> point doesn't do anything obviously useful either. If we (effectively) >>> normalize in decimal to an integral mantissa, for example, then we can >>> convert the whole mantissa to an Integer at once; this will balance the >>> merge tree better than converting the two pieces separately and combining. >>> >>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >>> >>> The way I understood it, it's because the type of "floating point" >>> literals is >>> >>> Fractional a => a >>> >>> so the literal parser has no choice but to go via Rational. Once you >>> have that, you use the same parser for those Read instances to ensure >>> that the result is identical to what you would get if you parse it as >>> a literal in every case. 
>>> >>> You could replace the Read parsers for Float and Double with much more >>> efficient ones. But you would need to provide some other guarantee of >>> consistency with literals. That would be more difficult to achieve >>> than one might think - floating point is deceivingly tricky. There are >>> already several good parsers in the libraries, but I believe all of >>> them can provide different results than literals in some cases. >>> >>> YItz >>> >>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >>> wrote: >>> > The current Read instances for Float and Double look pretty iffy from >>> an >>> > efficiency standpoint. Going through Rational is exceedingly weird: we >>> have >>> > absolutely nothing to gain by dividing out the GCD, as far as I can >>> tell. >>> > Then, in doing so, we read the digits of the integral part to form an >>> > Integer. This looks like a detour, and particularly bad when it has >>> many >>> > digits. Wouldn't it be better to normalize the decimal representation >>> first >>> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably >>> less >>> > importantly, is there some way to avoid converting the mantissa to an >>> > Integer at all? The low digits may not end up making any difference >>> > whatsoever. >>> > >>> > >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Tue Oct 11 00:06:55 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 10 Oct 2016 20:06:55 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: It doesn't, and it never has. On Oct 10, 2016 6:08 PM, "Carter Schonwald" wrote: > Read should accept exactly the valid source literals for a type. 
> > On Monday, October 10, 2016, David Feuer wrote: > >> What does any of that have to do with the Read instances? >> >> On Oct 10, 2016 1:56 PM, "Carter Schonwald" >> wrote: >> >>> The right solution is to fix things so we have scientific notation >>> literal rep available. Any other contortions run into challenges in >>> repsentavility of things. That's of course ignoring denormalized floats, >>> infinities, negative zero and perhaps nans. >>> >>> At the very least we need to efficiently and safely support everything >>> but nan. And I have some ideas for that I hope to share soon. >>> >>> On Monday, October 10, 2016, David Feuer wrote: >>> >>>> I fully expect this to be somewhat tricky, yes. But some aspects of the >>>> current implementation strike me as pretty clearly non-optimal. What I >>>> meant about going through Rational is that given "625e-5", say, it >>>> calculates 625%100000, producing a fraction in lowest terms, before calling >>>> fromRational, which itself invokes fromRat'', a division function optimized >>>> for a special case that doesn't seem too relevant in this context. I could >>>> be mistaken, but I imagine even reducing to lowest terms is useless here. >>>> The separate treatment of the digits preceding and following the decimal >>>> point doesn't do anything obviously useful either. If we (effectively) >>>> normalize in decimal to an integral mantissa, for example, then we can >>>> convert the whole mantissa to an Integer at once; this will balance the >>>> merge tree better than converting the two pieces separately and combining. >>>> >>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >>>> >>>> The way I understood it, it's because the type of "floating point" >>>> literals is >>>> >>>> Fractional a => a >>>> >>>> so the literal parser has no choice but to go via Rational. 
Once you >>>> have that, you use the same parser for those Read instances to ensure >>>> that the result is identical to what you would get if you parse it as >>>> a literal in every case. >>>> >>>> You could replace the Read parsers for Float and Double with much more >>>> efficient ones. But you would need to provide some other guarantee of >>>> consistency with literals. That would be more difficult to achieve >>>> than one might think - floating point is deceivingly tricky. There are >>>> already several good parsers in the libraries, but I believe all of >>>> them can provide different results than literals in some cases. >>>> >>>> YItz >>>> >>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >>>> wrote: >>>> > The current Read instances for Float and Double look pretty iffy from >>>> an >>>> > efficiency standpoint. Going through Rational is exceedingly weird: >>>> we have >>>> > absolutely nothing to gain by dividing out the GCD, as far as I can >>>> tell. >>>> > Then, in doing so, we read the digits of the integral part to form an >>>> > Integer. This looks like a detour, and particularly bad when it has >>>> many >>>> > digits. Wouldn't it be better to normalize the decimal representation >>>> first >>>> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably >>>> less >>>> > importantly, is there some way to avoid converting the mantissa to an >>>> > Integer at all? The low digits may not end up making any difference >>>> > whatsoever. >>>> > >>>> > >>>> > _______________________________________________ >>>> > ghc-devs mailing list >>>> > ghc-devs at haskell.org >>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> > >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tjakway at nyu.edu Tue Oct 11 00:31:35 2016 From: tjakway at nyu.edu (Thomas Jakway) Date: Mon, 10 Oct 2016 20:31:35 -0400 Subject: Register Allocator Tests Message-ID: Can anyone point me to the register allocator tests (especially for the graph register allocator)? Can't seem to find them and grepping doesn't turn up much (pretty much just testsuite/tests/codeGen/should_run/cgrun028.h). From leo at halfaya.org Tue Oct 11 03:22:18 2016 From: leo at halfaya.org (John Leo) Date: Mon, 10 Oct 2016 20:22:18 -0700 Subject: when building latest GHC on Mac with Xcode 8: Symbol not found: _clock_gettime Message-ID: Hi everyone, I'm trying to compile ghc from the latest source and am hitting an error "Symbol not found: _clock_gettime". I'm on Mac El Capitan and recently installed Xcode 8 which I'm sure is what caused the problem. Using Google I found some relevant pages including this one https://mail.haskell.org/pipermail/ghc-devs/2016-July/012511.html but I've been unable to figure out what I can do to fix the problem. Any help would be appreciated. The tail end of my compilation output is below. Thanks in advance. John cat ghc/ghc.wrapper >> inplace/bin/ghc-stage2 chmod +x inplace/bin/ghc-stage2 "inplace/bin/ghc-stage2" -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC -dynamic -O -H64m -Wall -hide-all-packages -i -iutils/ghctags/. 
-iutils/ghctags/dist-install/build -Iutils/ghctags/dist-install/build -iutils/ghctags/dist-install/build/ghctags/autogen -Iutils/ghctags/dist-install/build/ghctags/autogen -optP-include -optPutils/ghctags/dist-install/build/ghctags/autogen/cabal_macros.h -package-id Cabal-1.25.0.0 -package-id base-4.9.0.0 -package-id containers-0.5.7.1 -package-id ghc-8.1 -XHaskell2010 -no-user-package-db -rtsopts -Wnoncanonical-monad-instances -odir utils/ghctags/dist-install/build -hidir utils/ghctags/dist-install/build -stubdir utils/ghctags/dist-install/build -c utils/ghctags/./Main.hs -o utils/ghctags/dist-install/build/Main.dyn_o "inplace/bin/ghc-stage2" -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC -dynamic -O -H64m -Wall -hide-all-packages -i -iutils/check-api-annotations/. -iutils/check-api-annotations/dist-install/build -Iutils/check-api-annotations/dist-install/build -iutils/check-api-annotations/dist-install/build/check-api-annotations/autogen -Iutils/check-api-annotations/dist-install/build/check-api-annotations/autogen -optP-include -optPutils/check-api-annotations/dist-install/build/check-api-annotations/autogen/cabal_macros.h -package-id Cabal-1.25.0.0 -package-id base-4.9.0.0 -package-id containers-0.5.7.1 -package-id directory-1.2.6.2 -package-id ghc-8.1 -Wall -XHaskell2010 -no-user-package-db -rtsopts -Wnoncanonical-monad-instances -odir utils/check-api-annotations/dist-install/build -hidir utils/check-api-annotations/dist-install/build -stubdir utils/check-api-annotations/dist-install/build -c utils/check-api-annotations/./Main.hs -o utils/check-api-annotations/dist-install/build/Main.dyn_o dyld: lazy symbol binding failed: Symbol not found: _clock_gettime Referenced from: /Users/leo/haskell/ghc/rts/dist/build/libHSrts_thr-ghc8.1.20161010.dylib (which was built for Mac OS X 10.12) Expected in: /usr/lib/libSystem.B.dylib dyld: Symbol not found: _clock_gettime Referenced from: /Users/leo/haskell/ghc/rts/dist/build/libHSrts_thr-ghc8.1.20161010.dylib (which was 
built for Mac OS X 10.12) Expected in: /usr/lib/libSystem.B.dylib dyld: lazy symbol binding failed: Symbol not found: _clock_gettime Referenced from: /Users/leo/haskell/ghc/rts/dist/build/libHSrts_thr-ghc8.1.20161010.dylib (which was built for Mac OS X 10.12) Expected in: /usr/lib/libSystem.B.dylib dyld: Symbol not found: _clock_gettime Referenced from: /Users/leo/haskell/ghc/rts/dist/build/libHSrts_thr-ghc8.1.20161010.dylib (which was built for Mac OS X 10.12) Expected in: /usr/lib/libSystem.B.dylib make[1]: *** [utils/ghctags/dist-install/build/Main.dyn_o] Trace/BPT trap: 5 make[1]: *** Waiting for unfinished jobs.... make[1]: *** [utils/check-api-annotations/dist-install/build/Main.dyn_o] Trace/BPT trap: 5 make: *** [all] Error 2 i -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Oct 11 03:27:10 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 10 Oct 2016 23:27:10 -0400 Subject: when building latest GHC on Mac with Xcode 8: Symbol not found: _clock_gettime In-Reply-To: References: Message-ID: On Mon, Oct 10, 2016 at 11:22 PM, John Leo wrote: > I'm trying to compile ghc from the latest source and am hitting an error > "Symbol not found: _clock_gettime". I'm on Mac El Capitan and recently > installed Xcode 8 which I'm sure is what caused the problem. Using Google > I found some relevant pages including this one > https://mail.haskell.org/pipermail/ghc-devs/2016-July/012511.html > > > but I've been unable to figure out what I can do to fix the problem. Any > help would be appreciated. > You need to download the 10.11 Command Line Tools from download.apple.com and reinstall them over the Xcode 8 command line tools, which are for 10.12 and will have problems like this. (Apple intends to correct this in Xcode 8.1.) You need a free Mac Developer account for this, or maybe you can find the 10.11 tools elsewhere. You will then need to clean and rebuild ghc. 
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From leo at halfaya.org Tue Oct 11 04:14:29 2016 From: leo at halfaya.org (John Leo) Date: Mon, 10 Oct 2016 21:14:29 -0700 Subject: when building latest GHC on Mac with Xcode 8: Symbol not found: _clock_gettime In-Reply-To: References: Message-ID: Thanks very much Brandon for your fast reply! That did the trick. I had to rerun configure as well since when I didn't do that I got a different but seemingly related error. But after clean, configure and make everything seems to work again. John On Mon, Oct 10, 2016 at 8:27 PM, Brandon Allbery wrote: > > On Mon, Oct 10, 2016 at 11:22 PM, John Leo wrote: > >> I'm trying to compile ghc from the latest source and am hitting an error >> "Symbol not found: _clock_gettime". I'm on Mac El Capitan and recently >> installed Xcode 8 which I'm sure is what caused the problem. Using Google >> I found some relevant pages including this one >> https://mail.haskell.org/pipermail/ghc-devs/2016-July/012511.html >> >> >> but I've been unable to figure out what I can do to fix the problem. Any >> help would be appreciated. >> > > You need to download the 10.11 Command Line Tools from download.apple.com > and reinstall them over the Xcode 8 command line tools, which are for 10.12 > and will have problems like this. (Apple intends to correct this in Xcode > 8.1.) You need a free Mac Developer account for this, or maybe you can find > the 10.11 tools elsewhere. You will then need to clean and rebuild ghc. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Tue Oct 11 05:50:35 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 10 Oct 2016 22:50:35 -0700 Subject: Reading floating point In-Reply-To: References: Message-ID: How is that not a bug? We should be able to read back floats On Monday, October 10, 2016, David Feuer wrote: > It doesn't, and it never has. > > On Oct 10, 2016 6:08 PM, "Carter Schonwald" > wrote: > >> Read should accept exactly the valid source literals for a type. >> >> On Monday, October 10, 2016, David Feuer > > wrote: >> >>> What does any of that have to do with the Read instances? >>> >>> On Oct 10, 2016 1:56 PM, "Carter Schonwald" >>> wrote: >>> >>>> The right solution is to fix things so we have scientific notation >>>> literal rep available. Any other contortions run into challenges in >>>> repsentavility of things. That's of course ignoring denormalized floats, >>>> infinities, negative zero and perhaps nans. >>>> >>>> At the very least we need to efficiently and safely support everything >>>> but nan. And I have some ideas for that I hope to share soon. >>>> >>>> On Monday, October 10, 2016, David Feuer wrote: >>>> >>>>> I fully expect this to be somewhat tricky, yes. But some aspects of >>>>> the current implementation strike me as pretty clearly non-optimal. What I >>>>> meant about going through Rational is that given "625e-5", say, it >>>>> calculates 625%100000, producing a fraction in lowest terms, before calling >>>>> fromRational, which itself invokes fromRat'', a division function optimized >>>>> for a special case that doesn't seem too relevant in this context. I could >>>>> be mistaken, but I imagine even reducing to lowest terms is useless here. >>>>> The separate treatment of the digits preceding and following the decimal >>>>> point doesn't do anything obviously useful either. 
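The alternative David sketches next — fold the decimal point into the exponent so that the entire digit string is converted to an Integer in one go — can be illustrated concretely. A minimal sketch of the idea only (the `normalize` helper is invented for illustration; it handles no signs and does no input validation, and is not the proposed library code):

```haskell
-- Parse a simple decimal string into an integral mantissa and a
-- base-10 exponent, e.g. "6.25e-3" -> (625, -5).
normalize :: String -> (Integer, Int)
normalize s =
  let (mant, rest) = span (\c -> c /= 'e' && c /= 'E') s
      e            = case rest of
                       []       -> 0
                       (_ : e') -> read e'
      (int, frac)  = case break (== '.') mant of
                       (i, '.' : f) -> (i, f)
                       (i, _)       -> (i, "")
      -- The whole digit string becomes one Integer conversion; the
      -- decimal point only shifts the exponent.
  in (read (int ++ frac), e - length frac)

main :: IO ()
main = do
  print (normalize "6.25e-3")   -- (625,-5)
  print (normalize "625e-5")    -- (625,-5)
  print (normalize "0.1")       -- (1,-1)
```

Note that "6.25e-3" and "625e-5" normalize to the same pair, so a single Integer-to-float conversion step could serve both.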
If we (effectively) >>>>> normalize in decimal to an integral mantissa, for example, then we can >>>>> convert the whole mantissa to an Integer at once; this will balance the >>>>> merge tree better than converting the two pieces separately and combining. >>>>> >>>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >>>>> >>>>> The way I understood it, it's because the type of "floating point" >>>>> literals is >>>>> >>>>> Fractional a => a >>>>> >>>>> so the literal parser has no choice but to go via Rational. Once you >>>>> have that, you use the same parser for those Read instances to ensure >>>>> that the result is identical to what you would get if you parse it as >>>>> a literal in every case. >>>>> >>>>> You could replace the Read parsers for Float and Double with much more >>>>> efficient ones. But you would need to provide some other guarantee of >>>>> consistency with literals. That would be more difficult to achieve >>>>> than one might think - floating point is deceivingly tricky. There are >>>>> already several good parsers in the libraries, but I believe all of >>>>> them can provide different results than literals in some cases. >>>>> >>>>> YItz >>>>> >>>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >>>>> wrote: >>>>> > The current Read instances for Float and Double look pretty iffy >>>>> from an >>>>> > efficiency standpoint. Going through Rational is exceedingly weird: >>>>> we have >>>>> > absolutely nothing to gain by dividing out the GCD, as far as I can >>>>> tell. >>>>> > Then, in doing so, we read the digits of the integral part to form an >>>>> > Integer. This looks like a detour, and particularly bad when it has >>>>> many >>>>> > digits. Wouldn't it be better to normalize the decimal >>>>> representation first >>>>> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably >>>>> less >>>>> > importantly, is there some way to avoid converting the mantissa to an >>>>> > Integer at all? 
The low digits may not end up making any difference >>>>> > whatsoever. >>>>> > >>>>> > >>>>> > _______________________________________________ >>>>> > ghc-devs mailing list >>>>> > ghc-devs at haskell.org >>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> > >>>>> >>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Tue Oct 11 05:54:08 2016 From: david.feuer at gmail.com (David Feuer) Date: Tue, 11 Oct 2016 01:54:08 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: It may currently be true for floats, but it's never been true in general, particularly with regard to records. Read is not actually designed to parse Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true Haskell parser, is type-directed, there are somewhat different trade-offs. On Oct 11, 2016 1:50 AM, "Carter Schonwald" wrote: > How is that not a bug? We should be able to read back floats > > On Monday, October 10, 2016, David Feuer wrote: > >> It doesn't, and it never has. >> >> On Oct 10, 2016 6:08 PM, "Carter Schonwald" >> wrote: >> >>> Read should accept exactly the valid source literals for a type. >>> >>> On Monday, October 10, 2016, David Feuer wrote: >>> >>>> What does any of that have to do with the Read instances? >>>> >>>> On Oct 10, 2016 1:56 PM, "Carter Schonwald" >>>> wrote: >>>> >>>>> The right solution is to fix things so we have scientific notation >>>>> literal rep available. Any other contortions run into challenges in >>>>> repsentavility of things. That's of course ignoring denormalized floats, >>>>> infinities, negative zero and perhaps nans. >>>>> >>>>> At the very least we need to efficiently and safely support everything >>>>> but nan. And I have some ideas for that I hope to share soon. >>>>> >>>>> On Monday, October 10, 2016, David Feuer >>>>> wrote: >>>>> >>>>>> I fully expect this to be somewhat tricky, yes. 
But some aspects of >>>>>> the current implementation strike me as pretty clearly non-optimal. What I >>>>>> meant about going through Rational is that given "625e-5", say, it >>>>>> calculates 625%100000, producing a fraction in lowest terms, before calling >>>>>> fromRational, which itself invokes fromRat'', a division function optimized >>>>>> for a special case that doesn't seem too relevant in this context. I could >>>>>> be mistaken, but I imagine even reducing to lowest terms is useless here. >>>>>> The separate treatment of the digits preceding and following the decimal >>>>>> point doesn't do anything obviously useful either. If we (effectively) >>>>>> normalize in decimal to an integral mantissa, for example, then we can >>>>>> convert the whole mantissa to an Integer at once; this will balance the >>>>>> merge tree better than converting the two pieces separately and combining. >>>>>> >>>>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >>>>>> >>>>>> The way I understood it, it's because the type of "floating point" >>>>>> literals is >>>>>> >>>>>> Fractional a => a >>>>>> >>>>>> so the literal parser has no choice but to go via Rational. Once you >>>>>> have that, you use the same parser for those Read instances to ensure >>>>>> that the result is identical to what you would get if you parse it as >>>>>> a literal in every case. >>>>>> >>>>>> You could replace the Read parsers for Float and Double with much more >>>>>> efficient ones. But you would need to provide some other guarantee of >>>>>> consistency with literals. That would be more difficult to achieve >>>>>> than one might think - floating point is deceivingly tricky. There are >>>>>> already several good parsers in the libraries, but I believe all of >>>>>> them can provide different results than literals in some cases. 
>>>>>> >>>>>> YItz >>>>>> >>>>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >>>>>> wrote: >>>>>> > The current Read instances for Float and Double look pretty iffy >>>>>> from an >>>>>> > efficiency standpoint. Going through Rational is exceedingly weird: >>>>>> we have >>>>>> > absolutely nothing to gain by dividing out the GCD, as far as I can >>>>>> tell. >>>>>> > Then, in doing so, we read the digits of the integral part to form >>>>>> an >>>>>> > Integer. This looks like a detour, and particularly bad when it has >>>>>> many >>>>>> > digits. Wouldn't it be better to normalize the decimal >>>>>> representation first >>>>>> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? Probably >>>>>> less >>>>>> > importantly, is there some way to avoid converting the mantissa to >>>>>> an >>>>>> > Integer at all? The low digits may not end up making any difference >>>>>> > whatsoever. >>>>>> > >>>>>> > >>>>>> > _______________________________________________ >>>>>> > ghc-devs mailing list >>>>>> > ghc-devs at haskell.org >>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>> > >>>>>> >>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue Oct 11 14:41:48 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 11 Oct 2016 07:41:48 -0700 Subject: Reading floating point In-Reply-To: References: Message-ID: Could you elaborate or point me to where this philosophy is articulated in commentary in base or in the language standards ? On Monday, October 10, 2016, David Feuer wrote: > It may currently be true for floats, but it's never been true in general, > particularly with regard to records. Read is not actually designed to parse > Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true > Haskell parser, is type-directed, there are somewhat different trade-offs. 
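David's remark about records can be made concrete. A small sketch (the type `T` is invented for illustration): the derived Read accepts the Show-style record syntax, but not every record expression that is legal in source code, because field values are parsed by Read rather than as Haskell expressions:

```haskell
data T = MkT { field :: Int } deriving (Read, Show)

main :: IO ()
main = do
  -- Show-style record syntax parses fine.
  print (read "MkT {field = 3}" :: T)
  -- A perfectly valid source expression does not: the field value must
  -- itself be Read-able, so "1 + 2" yields no parse at all.
  print (reads "MkT {field = 1 + 2}" :: [(T, String)])
```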
> > On Oct 11, 2016 1:50 AM, "Carter Schonwald" > wrote: > >> How is that not a bug? We should be able to read back floats >> >> On Monday, October 10, 2016, David Feuer > > wrote: >> >>> It doesn't, and it never has. >>> >>> On Oct 10, 2016 6:08 PM, "Carter Schonwald" >>> wrote: >>> >>>> Read should accept exactly the valid source literals for a type. >>>> >>>> On Monday, October 10, 2016, David Feuer wrote: >>>> >>>>> What does any of that have to do with the Read instances? >>>>> >>>>> On Oct 10, 2016 1:56 PM, "Carter Schonwald" < >>>>> carter.schonwald at gmail.com> wrote: >>>>> >>>>>> The right solution is to fix things so we have scientific notation >>>>>> literal rep available. Any other contortions run into challenges in >>>>>> repsentavility of things. That's of course ignoring denormalized floats, >>>>>> infinities, negative zero and perhaps nans. >>>>>> >>>>>> At the very least we need to efficiently and safely support >>>>>> everything but nan. And I have some ideas for that I hope to share soon. >>>>>> >>>>>> On Monday, October 10, 2016, David Feuer >>>>>> wrote: >>>>>> >>>>>>> I fully expect this to be somewhat tricky, yes. But some aspects of >>>>>>> the current implementation strike me as pretty clearly non-optimal. What I >>>>>>> meant about going through Rational is that given "625e-5", say, it >>>>>>> calculates 625%100000, producing a fraction in lowest terms, before calling >>>>>>> fromRational, which itself invokes fromRat'', a division function optimized >>>>>>> for a special case that doesn't seem too relevant in this context. I could >>>>>>> be mistaken, but I imagine even reducing to lowest terms is useless here. >>>>>>> The separate treatment of the digits preceding and following the decimal >>>>>>> point doesn't do anything obviously useful either. 
If we (effectively) >>>>>>> normalize in decimal to an integral mantissa, for example, then we can >>>>>>> convert the whole mantissa to an Integer at once; this will balance the >>>>>>> merge tree better than converting the two pieces separately and combining. >>>>>>> >>>>>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale" wrote: >>>>>>> >>>>>>> The way I understood it, it's because the type of "floating point" >>>>>>> literals is >>>>>>> >>>>>>> Fractional a => a >>>>>>> >>>>>>> so the literal parser has no choice but to go via Rational. Once you >>>>>>> have that, you use the same parser for those Read instances to ensure >>>>>>> that the result is identical to what you would get if you parse it as >>>>>>> a literal in every case. >>>>>>> >>>>>>> You could replace the Read parsers for Float and Double with much >>>>>>> more >>>>>>> efficient ones. But you would need to provide some other guarantee of >>>>>>> consistency with literals. That would be more difficult to achieve >>>>>>> than one might think - floating point is deceivingly tricky. There >>>>>>> are >>>>>>> already several good parsers in the libraries, but I believe all of >>>>>>> them can provide different results than literals in some cases. >>>>>>> >>>>>>> YItz >>>>>>> >>>>>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer >>>>>>> wrote: >>>>>>> > The current Read instances for Float and Double look pretty iffy >>>>>>> from an >>>>>>> > efficiency standpoint. Going through Rational is exceedingly >>>>>>> weird: we have >>>>>>> > absolutely nothing to gain by dividing out the GCD, as far as I >>>>>>> can tell. >>>>>>> > Then, in doing so, we read the digits of the integral part to form >>>>>>> an >>>>>>> > Integer. This looks like a detour, and particularly bad when it >>>>>>> has many >>>>>>> > digits. Wouldn't it be better to normalize the decimal >>>>>>> representation first >>>>>>> > in some fashion (e.g., to 0.xxxxxxexxx) and go from there? 
>>>>>>> Probably less >>>>>>> > importantly, is there some way to avoid converting the mantissa to >>>>>>> an >>>>>>> > Integer at all? The low digits may not end up making any difference >>>>>>> > whatsoever. >>>>>>> > >>>>>>> > >>>>>>> > _______________________________________________ >>>>>>> > ghc-devs mailing list >>>>>>> > ghc-devs at haskell.org >>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>>> > >>>>>>> >>>>>>> >>>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Oct 11 14:53:09 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 11 Oct 2016 10:53:09 -0400 Subject: Reading floating point In-Reply-To: References: Message-ID: On Tue, Oct 11, 2016 at 10:41 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Could you elaborate or point me to where this philosophy is articulated in > commentary in base or in the language standards ? https://www.haskell.org/onlinereport/haskell2010/haskellch11.html#x18-18600011.4 is instructive, insofar as one can assume the restrictions it quotes that do not agree with the semantics of Haskell imply a philosophy. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From karl at cs.tufts.edu Tue Oct 11 15:54:15 2016 From: karl at cs.tufts.edu (Karl Cronburg) Date: Tue, 11 Oct 2016 11:54:15 -0400 Subject: qualified module export Message-ID: Hello, I'm attempting to add support for export of qualified modules (feature request #8043), and any guidance would be greatly appreciated. 
Namely I'm very familiar with languages / grammars / happy and was easily able to add an appropriate production alternative to Parser.y to construct a new AST node when 'qualified module' is seen in the export list, i.e.: | 'module' modid {% amsu (sLL $1 $> (IEModuleContents $2)) [mj AnnModule $1] } | 'qualified' 'module' qconid --maybeas {% amsu (sLL $2 $> (IEModuleQualified $3)) [mj AnnQualified $1] } But now I'm lost in the compiler internals. Where should I be looking / focusing on? In particular: - Where do exported identifiers get added to the list of "[LIE Name]" in ExportAccum (in TcRnExports.hs)? Thanks, -Karl Cronburg- -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Tue Oct 11 17:04:02 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 11 Oct 2016 10:04:02 -0700 Subject: qualified module export In-Reply-To: References: Message-ID: Hello, There may be some more thinking to be done on the design of this feature. In particular, if a module `M` has en export declaration `module T`, this is not at all the same as adding `import T` in modules exporting `M`. The reason is that meaning of `module T` depends on what is in scope in `M` and with what names. For example: * `module T` may export only some of the names from `T` (e.g. if `M` contains `import T(onlyThisName)`); or, * `module T` may export the names from an entirely different module (e.g., if `M` contains `import S as T`); or, * `module T` may export a combination of multiple modules (e.g., if `M` contains `import S1 as T` and `import S2 as T`). So, I would expect an export of the form `qualified module T as X` to work in a similar fashion (for the full details on the current semantics you could have a look at [1]). The next issue would be that, currently, entities exported by a module are only identified by an unqualified name, and the imports introduce qualified names as necessary. 
It might make sense to allow modules to also export qualified names instead, but then we'd have to decide what happens on the importing end. Presumably, a simple `import M` would now bring both some qualified and some unqualified names. This means that the explicit import and hiding lists would have to support qualified names, seems doable. However, we'd also have to decide how `import M as X` works, in particular how does it affect imported qualified names. One option would be to have `X` replace the qualifier, so if `A.b` is imported via `import M as X`, the resulting name would be `X.b`. Another option would be to have `X` extend the qualifier, so `A.b` would become `X.A.b` locally. Neither seems perfect: the first one is somewhat surprising, where you might accidentally "overwrite" a qualifier and introduce name conflicts; the second does not allow exported qualified names to ever get shorter. I hope this is helpful, -Iavor [1] http://yav.github.io/publications/modules98.pdf On Tue, Oct 11, 2016 at 8:54 AM, Karl Cronburg wrote: > Hello, > > I'm attempting to add support for export of qualified modules (feature > request #8043), and any guidance would be greatly appreciated. Namely I'm > very familiar with languages / grammars / happy and was easily able to add > an appropriate production alternative to Parser.y to construct a new AST > node when 'qualified module' is seen in the export list, i.e.: > > | 'module' modid {% amsu (sLL $1 $> (IEModuleContents $2)) > [mj AnnModule $1] } > | 'qualified' 'module' qconid --maybeas > {% amsu (sLL $2 $> (IEModuleQualified $3)) > [mj AnnQualified $1] } > > But now I'm lost in the compiler internals. Where should I be looking / > focusing on? In particular: > > - Where do exported identifiers get added to the list of "[LIE Name]" in > ExportAccum (in TcRnExports.hs)? 
> > Thanks, > -Karl Cronburg- > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Oct 11 18:13:06 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 11 Oct 2016 14:13:06 -0400 Subject: Register Allocator Tests In-Reply-To: References: Message-ID: <877f9e92kt.fsf@ben-laptop.smart-cactus.org> Thomas Jakway writes: > Can anyone point me to the register allocator tests (especially for the > graph register allocator)? Can't seem to find them and grepping doesn't > turn up much (pretty much just > testsuite/tests/codeGen/should_run/cgrun028.h). > What sort of tests are you looking for in particular? I'm afraid all we have are regression tests covering the code generator as a whole. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From tjakway at nyu.edu Tue Oct 11 21:50:06 2016 From: tjakway at nyu.edu (Thomas Jakway) Date: Tue, 11 Oct 2016 17:50:06 -0400 Subject: Register Allocator Tests In-Reply-To: <877f9e92kt.fsf@ben-laptop.smart-cactus.org> References: <877f9e92kt.fsf@ben-laptop.smart-cactus.org> Message-ID: <95ac4293-15dd-babb-afb2-05b1bab45aee@nyu.edu> I read somewhere that fixing the graph register allocator would be a good project so I thought I'd look into it. I couldn't find any tickets about it on Trac though so I was poking around for tests to see what (if anything) was wrong with it. After I sent that last email I googled around for how to write ghc unit tests and this is the only thing I found. Is it not possible to unit test GHC? If not are there plans/discussions about this? I think it'd help document the code base if nothing else and it'd be a good way to get my feet wet. 
On 10/11/2016 02:13 PM, Ben Gamari wrote: > Thomas Jakway writes: > >> Can anyone point me to the register allocator tests (especially for the >> graph register allocator)? Can't seem to find them and grepping doesn't >> turn up much (pretty much just >> testsuite/tests/codeGen/should_run/cgrun028.h). >> > What sort of tests are you looking for in particular? I'm afraid all we > have are regression tests covering the code generator as a whole. > > Cheers, > > - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Oct 12 00:25:15 2016 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 11 Oct 2016 20:25:15 -0400 Subject: Register Allocator Tests In-Reply-To: <95ac4293-15dd-babb-afb2-05b1bab45aee@nyu.edu> References: <877f9e92kt.fsf@ben-laptop.smart-cactus.org> <95ac4293-15dd-babb-afb2-05b1bab45aee@nyu.edu> Message-ID: > Is it not possible to unit test GHC? You need to export functions you want to test, and then write a program that tests those functions using the `ghc` package. See https://github.com/ghc/ghc/blob/master/testsuite/tests/unboxedsums/unboxedsums_unit_tests.hs for an example. 2016-10-11 17:50 GMT-04:00 Thomas Jakway : > I read somewhere that fixing the graph register allocator would be a good > project so I thought I'd look into it. I couldn't find any tickets about it > on Trac though so I was poking around for tests to see what (if anything) > was wrong with it. > > After I sent that last email I googled around for how to write ghc unit > tests and this > is > the only thing I found. Is it not possible to unit test GHC? If not are > there plans/discussions about this? I think it'd help document the code > base if nothing else and it'd be a good way to get my feet wet. > On 10/11/2016 02:13 PM, Ben Gamari wrote: > > Thomas Jakway writes: > > > Can anyone point me to the register allocator tests (especially for the > graph register allocator)? 
Can't seem to find them and grepping doesn't > turn up much (pretty much just > testsuite/tests/codeGen/should_run/cgrun028.h). > > > What sort of tests are you looking for in particular? I'm afraid all we > have are regression tests covering the code generator as a whole. > > Cheers, > > - Ben > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjakway at nyu.edu Wed Oct 12 04:37:47 2016 From: tjakway at nyu.edu (Thomas Jakway) Date: Wed, 12 Oct 2016 00:37:47 -0400 Subject: Register Allocator Tests In-Reply-To: References: <877f9e92kt.fsf@ben-laptop.smart-cactus.org> <95ac4293-15dd-babb-afb2-05b1bab45aee@nyu.edu> Message-ID: Ah okay, thanks! On Oct 11, 2016 8:25 PM, "Ömer Sinan Ağacan" wrote: > > Is it not possible to unit test GHC? > > You need to export functions you want to test, and then write a program > that > tests those functions using the `ghc` package. > > See > https://github.com/ghc/ghc/blob/master/testsuite/tests/ > unboxedsums/unboxedsums_unit_tests.hs > for an example. > > 2016-10-11 17:50 GMT-04:00 Thomas Jakway : > >> I read somewhere that fixing the graph register allocator would be a good >> project so I thought I'd look into it. I couldn't find any tickets about it >> on Trac though so I was poking around for tests to see what (if anything) >> was wrong with it. >> >> After I sent that last email I googled around for how to write ghc unit >> tests and this >> >> is the only thing I found. Is it not possible to unit test GHC? If not >> are there plans/discussions about this? I think it'd help document the >> code base if nothing else and it'd be a good way to get my feet wet. 
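The unit-test file Ömer links to boils down to a simple pattern: the "unit test" is an ordinary executable whose `main` evaluates a list of named boolean checks and fails loudly if any is False. A hedged sketch of that shape (`layoutSlots` is an invented stand-in; a real test would instead import the function under test from a compiler module via the `ghc` package):

```haskell
-- Invented stand-in for a function exported from the compiler.
layoutSlots :: [Int] -> Int
layoutSlots = sum

-- Named checks, in the style of unboxedsums_unit_tests.hs.
tests :: [(String, Bool)]
tests =
  [ ("empty input", layoutSlots []        == 0)
  , ("three slots", layoutSlots [1, 1, 1] == 3)
  ]

main :: IO ()
main = case [name | (name, ok) <- tests, not ok] of
  []     -> putStrLn "all tests passed"
  failed -> error ("failed tests: " ++ unwords failed)
```

The testsuite then runs the executable like any other test program, so no separate test framework is needed.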
>> On 10/11/2016 02:13 PM, Ben Gamari wrote: >> >> Thomas Jakway writes: >> >> >> Can anyone point me to the register allocator tests (especially for the >> graph register allocator)? Can't seem to find them and grepping doesn't >> turn up much (pretty much just >> testsuite/tests/codeGen/should_run/cgrun028.h). >> >> >> What sort of tests are you looking for in particular? I'm afraid all we >> have are regression tests covering the code generator as a whole. >> >> Cheers, >> >> - Ben >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 12 12:27:24 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 12 Oct 2016 12:27:24 +0000 Subject: failing Message-ID: Ben: all builds are failing https://phabricator.haskell.org/harbormaster/ What’s up? I see a perf failure on T1969. Does not happen for me; and is only in residency, so just bump it? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Oct 12 12:38:09 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 12 Oct 2016 08:38:09 -0400 Subject: failing In-Reply-To: References: Message-ID: On October 12, 2016 8:27:24 AM EDT, Simon Peyton Jones via ghc-devs wrote: >Ben: all builds are failing >https://phabricator.haskell.org/harbormaster/ >What’s up? I see a perf failure on T1969. Does not happen for me; and >is only in residency, so just bump it? > >Simon > > >------------------------------------------------------------------------ > >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs Oh dear, this doesn't fail for me either. 
I suppose the best option for the time being is to simply bump it, but this does reiterate the need to do something about our performance test cases. Cheers, - Ben From simonpj at microsoft.com Wed Oct 12 14:24:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 12 Oct 2016 14:24:20 +0000 Subject: failing In-Reply-To: References: Message-ID: Can you? With comment etc. Simon | -----Original Message----- | From: Ben Gamari [mailto:ben at smart-cactus.org] | Sent: 12 October 2016 13:38 | To: Simon Peyton Jones ; Simon Peyton Jones via | ghc-devs ; Ben Gamari | Cc: ghc-devs at haskell.org | Subject: Re: failing | | On October 12, 2016 8:27:24 AM EDT, Simon Peyton Jones via ghc-devs | wrote: | >Ben: all builds are failing | >https://phabricator.haskell.org/harbormaster/ | >What’s up? I see a perf failure on T1969. Does not happen for me; | and | >is only in residency, so just bump it? | > | >Simon | > | > | >--------------------------------------------------------------------- | -- | >- | > | >_______________________________________________ | >ghc-devs mailing list | >ghc-devs at haskell.org | >https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail. | ha | >skell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=01%7C01%7Csimo | >npj%40microsoft.com%7C666f491b93a2430e990b08d3f29ca1d4%7C72f988bf86f1 | 41 | >af91ab2d7cd011db47%7C1&sdata=BiH9cQXWJiiyYhh9SlX4QOGDhYXusSUpwOxZa3f% | 2F | >nhg%3D&reserved=0 | | Oh dear, this doesn't fail for me either. I suppose the best option | for the time being is to simply bump it, but this does reiterate the | need to do something about our performance test cases. | | Cheers, | | - Ben From ben at smart-cactus.org Wed Oct 12 16:06:58 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 12 Oct 2016 12:06:58 -0400 Subject: failing In-Reply-To: References: Message-ID: <87mvi97dr1.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones writes: > Can you? With comment etc. > Of course. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Thu Oct 13 08:08:44 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 13 Oct 2016 09:08:44 +0100 Subject: qualified module export In-Reply-To: References: Message-ID: On 11 October 2016 at 18:04, Iavor Diatchki wrote: > Hello, > > There may be some more thinking to be done on the design of this feature. > In particular, if a module `M` has en export declaration `module T`, this > is not at all the same as adding `import T` in modules exporting `M`. The > reason is that meaning of `module T` depends on what is in scope in `M` and > with what names. For example: > * `module T` may export only some of the names from `T` (e.g. if `M` > contains `import T(onlyThisName)`); or, > * `module T` may export the names from an entirely different module > (e.g., if `M` contains `import S as T`); or, > * `module T` may export a combination of multiple modules (e.g., if `M` > contains `import S1 as T` and `import S2 as T`). > > So, I would expect an export of the form `qualified module T as X` to work > in a similar fashion (for the full details on the current semantics you > could have a look at [1]). > > The next issue would be that, currently, entities exported by a module are > only identified by an unqualified name, and the imports introduce qualified > names as necessary. It might make sense to allow modules to also export > qualified names instead, but then we'd have to decide what happens on the > importing end. Presumably, a simple `import M` would now bring both some > qualified and some unqualified names. This means that the explicit import > and hiding lists would have to support qualified names, seems doable. > However, we'd also have to decide how `import M as X` works, in particular > how does it affect imported qualified names. 
One option would be to have > `X` replace the qualifier, so if `A.b` is imported via `import M as X`, the > resulting name would be `X.b`. Another option would be to have `X` extend > the qualifier, so `A.b` would become `X.A.b` locally. Neither seems > perfect: the first one is somewhat surprising, where you might > accidentally "overwrite" a qualifier and introduce name conflicts; the > second does not allow exported qualified names to ever get shorter. > > Yes, I think this is an important consideration. It's much simpler if we can think of the set of names that a module exports as just strings (possibly containing dots), and an import brings those names into scope, possibly prepending a qualifier. That's a simple story, but it doesn't let you change the qualifier at import time. The question is, do we think it's important to allow that? Suppose Data.Text exported the Text type and everything else qualified by Text: Text.null, Text.concat, etc. Now you wouldn't be able to rename the qualifier to T if you wanted to. Many people do this. Perhaps people would lobby to have Data.Text.Unqualified so that they could do "import qualified Data.Text.Unqualified as T", but then we haven't really made anything better. Cheers Simon I hope this is helpful, > -Iavor > > [1] http://yav.github.io/publications/modules98.pdf > > > On Tue, Oct 11, 2016 at 8:54 AM, Karl Cronburg wrote: > >> Hello, >> >> I'm attempting to add support for export of qualified modules (feature >> request #8043), and any guidance would be greatly appreciated. 
Namely I'm >> very familiar with languages / grammars / happy and was easily able to add >> an appropriate production alternative to Parser.y to construct a new AST >> node when 'qualified module' is seen in the export list, i.e.: >> >> | 'module' modid {% amsu (sLL $1 $> (IEModuleContents $2)) >> [mj AnnModule $1] } >> | 'qualified' 'module' qconid --maybeas >> {% amsu (sLL $2 $> (IEModuleQualified $3)) >> [mj AnnQualified $1] } >> >> But now I'm lost in the compiler internals. Where should I be looking / >> focusing on? In particular: >> >> - Where do exported identifiers get added to the list of "[LIE Name]" in >> ExportAccum (in TcRnExports.hs)? >> >> Thanks, >> -Karl Cronburg- >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Oct 14 18:35:49 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 14 Oct 2016 19:35:49 +0100 Subject: Updated Phabricator Home Page Message-ID: Hi all, I have updated the homepage for our phabricator installation. I moved some things around, deleted some unused panels and updated the landing text. If anyone has any other gripes with the installation please message me and I will see if I can fix them! If you liked the old version better, you can install your own dashboard by going to the dashboard interface - https://phabricator.haskell.org/dashboard/ - and installing dashboard 2. You can also create your own personal dashboard this way if you want to display custom queries for example. 
Matt From simonpj at microsoft.com Fri Oct 14 21:38:43 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 14 Oct 2016 21:38:43 +0000 Subject: Aargh! Windows build is broken AGAIN Message-ID: I really wish I did not have to be the Windows integration server. Currently, from a clean build of HEAD, I'm getting libraries\base\GHC\Event\TimerManager.hs:62:3: error: error: #error not implemented for this operating system # error not implemented for this operating system ^ I'd revert something if I could, but I can't see what to revert. Help, please! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Fri Oct 14 22:23:49 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 14 Oct 2016 23:23:49 +0100 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: References: Message-ID: <58015af5.ca13c20a.19dbb.c427@mx.google.com> Hi Simon, Sorry for the broken build again. Since your last email I do run a nightly build, but you were about an hour and a half before today’s build! Anyway, I believe the offending commit is 8c6a3d68c0301bb985aa2a462936bbcf7584ae9c , This unconditionally adds GHC.Event which then includes that TimerManager which is defined for POSIX only. Reverting that should get you building again. Cheers, Tamar From: Simon Peyton Jones via ghc-devs Sent: Friday, October 14, 2016 22:38 To: ghc-devs at haskell.org Subject: Aargh! Windows build is broken AGAIN I really wish I did not have to be the Windows integration server. Currently, from a clean build of HEAD, I’m getting libraries\base\GHC\Event\TimerManager.hs:62:3: error:      error: #error not implemented for this operating system      # error not implemented for this operating system        ^ I’d revert something if I could, but I can’t see what to revert.  Help, please! Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mle+hs at mega-nerd.com Fri Oct 14 22:25:06 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Sat, 15 Oct 2016 09:25:06 +1100 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: References: Message-ID: <20161015092506.9eb1729c699fd0d6641ca587@mega-nerd.com> Simon Peyton Jones via ghc-devs wrote: > I really wish I did not have to be the Windows integration server. > Currently, from a clean build of HEAD, I'm getting > > libraries\base\GHC\Event\TimerManager.hs:62:3: error: > > error: #error not implemented for this operating system > > # error not implemented for this operating system > > ^ > I'd revert something if I could, but I can't see what to revert. Help, please! According to git annotate, that error line was added in 2012, so that commit is unlikely to be the cause. I'll log into my Windows VM and see if I can figure it out. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From lonetiger at gmail.com Fri Oct 14 22:28:00 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 14 Oct 2016 23:28:00 +0100 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: <58015af5.ca13c20a.19dbb.c427@mx.google.com> References: <58015af5.ca13c20a.19dbb.c427@mx.google.com> Message-ID: <58015bf0.012ac20a.83b3.bcd8@mx.google.com> Seems I forgot to do a reply all… From: lonetiger at gmail.com Sent: Friday, October 14, 2016 23:23 To: Simon Peyton Jones via ghc-devs Subject: RE: Aargh! Windows build is broken AGAIN Hi Simon, Sorry for the broken build again. Since your last email I do run a nightly build, but you were about an hour and a half before today’s build! Anyway, I believe the offending commit is 8c6a3d68c0301bb985aa2a462936bbcf7584ae9c , This unconditionally adds GHC.Event which then includes that TimerManager which is defined for POSIX only. Reverting that should get you building again.
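To spell out the failure mode Tamar identifies: `GHC.Event.TimerManager` selects its implementation with CPP on the host OS and falls through to an `#error` on unsupported platforms, so re-exporting it unconditionally from `GHC.Event` breaks every non-POSIX build. A stripped-down sketch of that guard pattern (the branch bodies here are invented for illustration; the real module gates far more code):

```haskell
{-# LANGUAGE CPP #-}
-- GHC's preprocessor defines <os>_HOST_OS macros, which is how modules
-- like TimerManager restrict themselves to the platforms they support;
-- any module importing them unconditionally inherits the restriction.
module Main where

platformNote :: String
#if defined(mingw32_HOST_OS)
platformNote = "Windows implementation goes here"
#elif defined(linux_HOST_OS) || defined(darwin_HOST_OS) || defined(freebsd_HOST_OS)
platformNote = "POSIX implementation goes here"
#else
# error not implemented for this operating system
#endif

main :: IO ()
main = putStrLn platformNote
```

Compiling such a module on an OS that matches no branch reproduces exactly the `#error not implemented for this operating system` message above.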
Cheers, Tamar From: Simon Peyton Jones via ghc-devs Sent: Friday, October 14, 2016 22:38 To: ghc-devs at haskell.org Subject: Aargh! Windows build is broken AGAIN I really wish I did not have to be the Windows integration server. Currently, from a clean build of HEAD, I’m getting libraries\base\GHC\Event\TimerManager.hs:62:3: error:      error: #error not implemented for this operating system      # error not implemented for this operating system        ^ I’d revert something if I could, but I can’t see what to revert.  Help, please! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Oct 14 22:33:50 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 14 Oct 2016 22:33:50 +0000 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: <58015af5.ca13c20a.19dbb.c427@mx.google.com> References: <58015af5.ca13c20a.19dbb.c427@mx.google.com> Message-ID: Ah, good catch! Thank you. Ryan: might you fix this, since you authored the offending commit? Thanks! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of lonetiger at gmail.com Sent: 14 October 2016 23:24 To: Simon Peyton Jones via ghc-devs Subject: RE: Aargh! Windows build is broken AGAIN Hi Simon, Sorry for the broken build again. Since your last email I do run a nightly build, but you were about an hour and a half before today’s build! Anyway, I believe the offending commit is 8c6a3d68c0301bb985aa2a462936bbcf7584ae9c , This unconditionally adds GHC.Event which then includes that TimerManager which is defined for POSIX only. Reverting that should get you building again. Cheers, Tamar From: Simon Peyton Jones via ghc-devs Sent: Friday, October 14, 2016 22:38 To: ghc-devs at haskell.org Subject: Aargh! Windows build is broken AGAIN I really wish I did not have to be the Windows integration server. 
Currently, from a clean build of HEAD, I’m getting libraries\base\GHC\Event\TimerManager.hs:62:3: error: error: #error not implemented for this operating system # error not implemented for this operating system ^ I’d revert something if I could, but I can’t see what to revert. Help, please! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Oct 14 22:45:25 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 14 Oct 2016 18:45:25 -0400 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: References: Message-ID: <878ttqv9bu.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I really wish I did not have to be the Windows integration server. Indeed; I sadly had to disable the Windows build bot for the time being but I'll have a look at fixing it this weekend. The end to your misery is in sight! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Fri Oct 14 22:59:03 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 14 Oct 2016 23:59:03 +0100 Subject: Testsuite threadsafety Message-ID: <58016338.86471c0a.9d599.3625@mx.google.com> Hi *, I’m trying to understand a few pieces of code in the testsuite, As it so happens quite a few tests randomly fail on newer msys2 and python installs: r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run T12407 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run') r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run T11463 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run') r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_4.run T12478_4 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 
'r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_3.run') (I say random, but the set of tests seems to be the same ones, just within that group a few randomly pass every so often. It’s mostly TH tests.) Anyone have any ideas? I’m not very familiar with the internals of the testsuite. Secondly, I’ve noticed all paths in the testsuite are relative paths. And this had me wondering: relative to what? I see that in do_test we actually change directories:

    837 if opts.pre_cmd:
    838     exit_code = runCmd('cd "{0}" && {1}'.format(opts.testdir, opts.pre_cmd))

So I am now assuming that the relative paths are relative to the cwd. Which brings up the question: how is this thread safe and working on Linux? Surely any two tests can change the cwd and then one of them would be writing to the wrong place? Am I missing something here? Cheers, Tamar -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at smart-cactus.org Fri Oct 14 23:12:03 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 14 Oct 2016 19:12:03 -0400 Subject: Testsuite threadsafety In-Reply-To: <58016338.86471c0a.9d599.3625@mx.google.com> References: <58016338.86471c0a.9d599.3625@mx.google.com> Message-ID: <874m4ev83g.fsf@ben-laptop.smart-cactus.org> lonetiger at gmail.com writes: > Hi *, > > I’m trying to understand a few pieces of code in the testsuite, > > As it so happens quite a few tests randomly fail on newer msys2 and python installs: > > r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run T12407 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run T11463 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_4.run T12478_4 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_4.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_3.run T12478_3 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_3.run') > > (I say random, but the set of tests seem to be the same ones, just within that group a few randomly pass every so often. It’s mostly TH tests.) > > Anyone have any ideas? I’m not very familiar with the internals of the testsuite. > > Secondly, I’ve noticed all paths in the testsuite are relative paths. And this hand me wondering, relative to what. > > I see that in do_test we actually change directories > > 837 if opts.pre_cmd: > 838 exit_code = runCmd('cd "{0}" && {1}'.format(opts.testdir, opts.pre_cmd)) > If I understand this correctly, this is merely spawning off a child shell process which then moves its own cwd to opts.testdir. 
This should not affect the cwd of the testsuite driver, which means that it should be perfectly safe. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ryan.gl.scott at gmail.com Sat Oct 15 02:31:05 2016 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Fri, 14 Oct 2016 22:31:05 -0400 Subject: Aargh! Windows build is broken AGAIN In-Reply-To: References: <58015af5.ca13c20a.19dbb.c427@mx.google.com> Message-ID: My apologies for the breakage! I just pushed [1], and I confirmed that things build again on Windows. Ryan S. ----- [1] http://git.haskell.org/ghc.git/commit/e39589e2e4f788565c4a7f02cb85802214a95757 On Fri, Oct 14, 2016 at 6:33 PM, Simon Peyton Jones wrote: > Ah, good catch! Thank you. > > > > Ryan: might you fix this, since you authored the offending commit? Thanks! > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of * > lonetiger at gmail.com > *Sent:* 14 October 2016 23:24 > *To:* Simon Peyton Jones via ghc-devs > *Subject:* RE: Aargh! Windows build is broken AGAIN > > > > Hi Simon, > > > > Sorry for the broken build again. Since your last email I do run a nightly > build, but you were about an hour and a half before today’s build! > > > > Anyway, I believe the offending commit is 8c6a3d68c0301bb985aa2a462936bbcf7584ae9c > , > > This unconditionally adds GHC.Event which then includes that TimerManager > which is defined for POSIX only. > > > > Reverting that should get you building again. > > > > Cheers, > > Tamar > > > > *From: *Simon Peyton Jones via ghc-devs > *Sent: *Friday, October 14, 2016 22:38 > *To: *ghc-devs at haskell.org > *Subject: *Aargh! Windows build is broken AGAIN > > > > I really wish I did not have to be the Windows integration server. 
> > Currently, from a clean build of HEAD, I’m getting > > libraries\base\GHC\Event\TimerManager.hs:62:3: error: > > error: #error not implemented for this operating system > > # error not implemented for this operating system > > ^ > > I’d revert something if I could, but I can’t see what to revert. Help, > please! > > Simon > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Sat Oct 15 09:08:31 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Sat, 15 Oct 2016 10:08:31 +0100 Subject: Testsuite threadsafety In-Reply-To: <874m4ev83g.fsf@ben-laptop.smart-cactus.org> References: <58016338.86471c0a.9d599.3625@mx.google.com> <874m4ev83g.fsf@ben-laptop.smart-cactus.org> Message-ID: <5801f20f.51ae1c0a.337b1.c65a@mx.google.com> Thanks! Had missed the spawning of a new process part. From: Ben Gamari Sent: Saturday, October 15, 2016 00:12 To: lonetiger at gmail.com; ghc-devs at haskell.org Subject: Re: Testsuite threadsafety lonetiger at gmail.com writes: > Hi *, > > I’m trying to understand a few pieces of code in the testsuite, > > As it so happens quite a few tests randomly fail on newer msys2 and python installs: > > r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run T12407 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12407.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run T11463 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T11463.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_4.run T12478_4 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_4.run') > r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_3.run T12478_3 [ext-interp] ([Error 183] Cannot create a file when that file already exists: 'r:/temp/ghctest-0u4c8o/test spaces/./th/T12478_3.run') > > (I say 
random, but the set of tests seem to be the same ones, just within that group a few randomly pass every so often. It’s mostly TH tests.) > > Anyone have any ideas? I’m not very familiar with the internals of the testsuite. > > Secondly, I’ve noticed all paths in the testsuite are relative paths. And this hand me wondering, relative to what. > > I see that in do_test we actually change directories > > 837 if opts.pre_cmd: > 838 exit_code = runCmd('cd "{0}" && {1}'.format(opts.testdir, opts.pre_cmd)) > If I understand this correctly, this is merely spawning off a child shell process which then moves its own cwd to opts.testdir. This should not affect the cwd of the testsuite driver, which means that it should be perfectly safe. Cheers, - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at robjhen.com Sat Oct 15 14:36:50 2016 From: rob at robjhen.com (Robert Henderson) Date: Sat, 15 Oct 2016 15:36:50 +0100 Subject: GHC Trac spam filter is rejecting new registrations Message-ID: <58023F02.60405@robjhen.com> Hi, I've been trying to register a new account on GHC Trac in order to submit a bug report, and I'm getting the following error: Submission rejected as potential spam SpamBayes determined spam probability of 90.82% Could this be a bug or issue with a recent release of the Trac software? 
I've noticed people complaining about the same problem on other websites that use Trac, e.g.: https://dev.haiku-os.org/ticket/12947 https://forum.openwrt.org/viewtopic.php?id=67711 Thanks, Rob Henderson From ben at smart-cactus.org Sat Oct 15 15:38:10 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 15 Oct 2016 11:38:10 -0400 Subject: GHC Trac spam filter is rejecting new registrations In-Reply-To: <58023F02.60405@robjhen.com> References: <58023F02.60405@robjhen.com> Message-ID: <87y41ptyfx.fsf@ben-laptop.smart-cactus.org> Robert Henderson writes: > Hi, > > I've been trying to register a new account on GHC Trac in order to > submit a bug report, and I'm getting the following error: > > Submission rejected as potential spam > SpamBayes determined spam probability of 90.82% > Oh dear, very sorry about that. I've adjusted the spam filter configuration; can you try again? > Could this be a bug or issue with a recent release of the Trac software? > I've noticed people complaining about the same problem on other websites > that use Trac, e.g.: > It's not a bug; it's just that spammers are quite good at emulating humans and unfortunately Trac doesn't have very strong tools for catching them. We use a Bayesian spam classifier to catch Trac spam, but sadly it's imperfect. Thanks for bringing up your issue! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Sun Oct 16 04:21:09 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 16 Oct 2016 00:21:09 -0400 Subject: [GHC Proposal] "Constraint to Bool" wired-in type family Message-ID: <87insssz4a.fsf@ben-laptop.smart-cactus.org> Hello everyone, Sylvain Henry just opened Pull Request #22 [1] against the ghc-proposals repository. 
This proposal describes a type family which would give users access to type-level evidence of the satisfiability of a constraint. Please feel free to read and discuss the proposal on the pull request. Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals/pull/22 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From michal.terepeta at gmail.com Sun Oct 16 13:03:05 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Sun, 16 Oct 2016 13:03:05 +0000 Subject: Dataflow analysis for Cmm Message-ID: Hi, I was looking at cleaning up a bit the situation with dataflow analysis for Cmm. In particular, I was experimenting with rewriting the current `cmm.Hoopl.Dataflow` module: - To only include the functionality to do analysis (since GHC doesn’t seem to use the rewriting part). Benefits: - Code simplification (we could remove a lot of unused code). - Makes it clear what we’re actually using from Hoopl. - To have an interface that works with transfer functions operating on a whole basic block (`Block CmmNode C C`). This means that it would be up to the user of the algorithm to traverse the whole block. Benefits: - Further simplifications. - We could remove `analyzeFwdBlocks` hack, which AFAICS is just a copy&paste of `analyzeFwd` but ignores the middle nodes (probably for efficiency of analyses that only look at the blocks). - More flexible (e.g., the clients could know which block they’re processing; we could consider memoizing some per block information, etc.). What do you think about this? I have a branch that implements the above: https://github.com/michalt/ghc/tree/dataflow2/1 It’s introducing a second parallel implementation (`cmm.Hoopl.Dataflow2` module), so that it's possible to run ./validate while comparing the results of the old implementation with the new one. Second question: how could we merge this?
(assuming that people are generally ok with the approach) Some ideas: - Change cmm/Hoopl/Dataflow module itself along with the three analyses that use it in one step. - Introduce the Dataflow2 module first, then switch the analyses, then remove any unused code that still depends on the old Dataflow module, finally remove the old Dataflow module itself. (Personally I'd prefer the second option, but I'm also ok with the first one) I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s the recommended workflow for code that’s not ready for review… Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Oct 17 00:32:02 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 16 Oct 2016 20:32:02 -0400 Subject: Status of GHC testsuite driver on Windows Message-ID: <87d1izstml.fsf@ben-laptop.smart-cactus.org> So I spent my weekend in the jungles of Windows compatibility layers. I'll spare you the details as they are gruesome, but here's a brief summary: * There are a few nasty bugs currently in msys2 which affect the GHC testsuite driver: * Mingw Python packages are terribly broken (#12554) * Msys Python packages are also broken, but differently and only with msys2-runtime >= 2.5.1 (#12660) * Both of these issues manifest as a failure to remove test directories; unfortunately this error is hidden by the testsuite driver and you will likely instead see an error of the form, [Error 183] Cannot create a file when that file already exists: ... from os.makedirs. This issue appears to happen more often when threading is enabled in the testsuite driver (e.g. `make test THREADS=4` after disabling the check in runtests.py that disables it), but can also happen in single-threaded mode. * If you see this issue, do the following: * Check #12554 for comments suggesting that the issue has been fixed upstream. If so, update msys2.
* Run `pacman -Q msys2-runtime` and verify that you are running a 2.5-series runtime * If you are running a 2.5-series runtime, you can simply downgrade to the last-known-good version, 2.5.0, by running, $ wget http://repo.msys2.org/msys/x86_64/msys2-runtime-2.5.0.17080.65c939c-1-x86_64.pkg.tar.xz $ pacman -U msys2-runtime-2.5.0.17080.65c939c-1-x86_64.pkg.tar.xz * If you are running any other runtime version then sadly you will need to reinstall msys2. This base tarball, http://repo.msys2.org/distrib/x86_64/msys2-base-x86_64-20160719.tar.xz, is known to work. * After you have an msys installation with a functioning runtime, you'll need to ensure that the testsuite driver runs with the msys python interpreter (located in /usr/bin/python), not the mingw interpreter (located in /mingw*/bin/python). This can be accomplished with `make test PYTHON=/usr/bin/python`. Unfortunately there's no easy way of doing this with `./validate`. The easiest (but terrible) hack is, $ cp /usr/bin/python /mingw64/bin/python * Armed with this knowledge, I should soon be able to bring the Windows build bot back online. * At some point someone is going to need to track down these bugs in CPython and/or msys2 if Windows support is going to remain viable. If you have time and interest in a challenge please let me know. Now to go drown my sorrows. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From jan.stolarek at p.lodz.pl Mon Oct 17 08:57:41 2016 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 17 Oct 2016 09:57:41 +0100 Subject: Dataflow analysis for Cmm In-Reply-To: References: Message-ID: <201610170957.41204.jan.stolarek@p.lodz.pl> Michał, The Dataflow module could indeed use a cleanup. I have made two attempts at this in the past but I don't think any of them was merged - see [1] and [2]. [2] was mostly type-directed simplifications.
It would be nice to have this included in one form or another. It sounds like you also have a more in-depth refactoring in mind. Personally, as long as it is semantically correct, I think it will be a good thing. I would especially support removing dead code that we don't really use.

[1] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2
[2] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2

> Second question: how could we merge this? (...)
I'm not sure if I understand. The end result after merging will be exactly the same, right? Are you asking for advice on the best way of doing this from a technical point of view? I would simply edit the existing module. Introducing a temporary second module seems like unnecessary extra work and would perhaps complicate the patch review.

> I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s
> the recommended workflow for code that’s not ready for review…
This is OK, but please remember to set the status of the revision to "Planned changes" after uploading it to Phab so it doesn't sit in the reviewing queue.

Janek

On Sunday, 16 October 2016, Michal Terepeta wrote:
> Hi,
>
> I was looking at cleaning up a bit the situation with dataflow analysis for
> Cmm.
> In particular, I was experimenting with rewriting the current
> `cmm.Hoopl.Dataflow` module:
> - To only include the functionality to do analysis (since GHC doesn’t seem
> to use
> the rewriting part).
> Benefits:
> - Code simplification (we could remove a lot of unused code).
> - Makes it clear what we’re actually using from Hoopl.
> - To have an interface that works with transfer functions operating on a
> whole
> basic block (`Block CmmNode C C`).
> This means that it would be up to the user of the algorithm to traverse
> the
> whole block.
> Benefits:
> - Further simplifications.
> - We could remove `analyzeFwdBlocks` hack, which AFAICS is just a > copy&paste > of `analyzeFwd` but ignores the middle nodes (probably for efficiency > of analyses that only look at the blocks). > - More flexible (e.g., the clients could know which block they’re > processing; > we could consider memoizing some per block information, etc.). > > What do you think about this? > > I have a branch that implements the above: > https://github.com/michalt/ghc/tree/dataflow2/1 > It’s introducing a second parallel implementation (`cmm.Hoopl.Dataflow2` > module), so that it's possible to run ./validate while comparing the > results of > the old implementation with the new one. > > Second question: how could we merge this? (assuming that people are > generally > ok with the approach) Some ideas: > - Change cmm/Hoopl/Dataflow module itself along with the three analyses > that use > it in one step. > - Introduce the Dataflow2 module first, then switch the analyses, then > remove > any unused code that still depends on the old Dataflow module, finally > remove > the old Dataflow module itself. > (Personally I'd prefer the second option, but I'm also ok with the first > one) > > I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s > the > recommended workflow for code that’s not ready for review… > > Thanks, > Michal --- Politechnika Łódzka Lodz University of Technology Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. This email contains information intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient or if you have received this message in error, please notify the sender and delete it from your system. 
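[Editorial note: Michal's proposed block-level interface, quoted above, is easier to see with a concrete sketch. The types below are simplified stand-ins — the real `Block`, `CmmNode`, and `FactBase` live in Hoopl and GHC's Cmm modules — and the fixed-point driver is deliberately naive, so treat all names and signatures here as illustrative, not as GHC's actual API.]

```haskell
-- Hypothetical sketch of a block-level dataflow interface; the type
-- names mimic Hoopl/Cmm but are simplified stand-ins, not GHC's API.
module DataflowSketch where

data C                          -- Hoopl-style "closed" shape index
data Block n e x = Block [n]    -- a basic block of nodes n

type Label = Int
type FactBase f = [(Label, f)]  -- facts per block label

data DataflowLattice f = DataflowLattice
  { factBot  :: f               -- bottom element
  , factJoin :: f -> f -> f     -- join of two facts
  }

-- The proposed interface: the transfer function receives a whole basic
-- block, so the client decides how to traverse its middle nodes.
type BlockTransfer n f = (Label, Block n C C) -> FactBase f -> FactBase f

-- Naive fixed-point driver: re-run the transfer over all blocks until
-- the fact base stops changing. (A real implementation would use a
-- worklist and the lattice's change flag instead of an Eq check.)
analyzeFwdBlocks :: Eq f
                 => DataflowLattice f
                 -> BlockTransfer n f
                 -> [(Label, Block n C C)]
                 -> FactBase f      -- entry facts
                 -> FactBase f
analyzeFwdBlocks _lattice transfer blocks = go
  where
    go fb
      | fb' == fb = fb
      | otherwise = go fb'
      where
        fb' = foldl (flip transfer) fb blocks
```

Because the client walks each block itself, per-block tricks like the `analyzeFwdBlocks` middle-node skipping mentioned above become an ordinary choice inside the transfer function rather than a separate driver variant.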
From rob at robjhen.com Mon Oct 17 13:05:21 2016 From: rob at robjhen.com (Robert Henderson) Date: Mon, 17 Oct 2016 14:05:21 +0100 Subject: GHC Trac spam filter is rejecting new registrations In-Reply-To: <87y41ptyfx.fsf@ben-laptop.smart-cactus.org> References: <58023F02.60405@robjhen.com> <87y41ptyfx.fsf@ben-laptop.smart-cactus.org> Message-ID: <5804CC91.9000501@robjhen.com> Thanks for fixing that, registration seems to be working fine now. Cheers, Rob On 15/10/16 16:38, Ben Gamari wrote: > Robert Henderson writes: > >> Hi, >> >> I've been trying to register a new account on GHC Trac in order to >> submit a bug report, and I'm getting the following error: >> >> Submission rejected as potential spam >> SpamBayes determined spam probability of 90.82% >> > Oh dear, very sorry about that. I've adjusted the spam filter > configuration; can you try again? > >> Could this be a bug or issue with a recent release of the Trac software? >> I've noticed people complaining about the same problem on other websites >> that use Trac, e.g.: >> > It's not a bug; it's just that spammers are quite good at emulating > humans and unfortunately Trac doesn't have very strong tools for > catching them. We use a Bayesian spam classifier to catch Trac spam, but > sadly it's imperfect. > > Thanks for bringing up your issue! > > Cheers, > > - Ben > From michal.terepeta at gmail.com Mon Oct 17 13:12:27 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 17 Oct 2016 13:12:27 +0000 Subject: Dataflow analysis for Cmm In-Reply-To: <201610170957.41204.jan.stolarek@p.lodz.pl> References: <201610170957.41204.jan.stolarek@p.lodz.pl> Message-ID: On Mon, Oct 17, 2016 at 10:57 AM Jan Stolarek wrote: > Michał, > > Dataflow module could indeed use cleanup. I have made two attempts at this > in the past but I don't > think any of them was merged - see [1] and [2]. [2] was mostly > type-directed simplifications. It > would be nice to have this included in one form or another. 
It sounds like > you also have a more > in-depth refactoring in mind. Personally as long as it is semantically > correct I think it will be > a good thing. I would especially support removing dead code that we don't > really use. > > [1] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2 > [2] https://github.com/jstolarek/ghc/commits/js-hoopl-cleanup-v2 Ok, I'll have a look at this! (did you intend to send two identical links?) > Second question: how could we merge this? (...) > I'm not sure if I understand. The end result after merging will be exactly > the same, right? Are > you asking for advice what is the best way of doing this from a technical > point if view? I would > simply edit the existing module. Introducing a temporary second module > seems like unnecessary > extra work and perhaps complicating the patch review. > Yes, the end result would be the same - I'm merely asking what would be preferred by GHC devs (i.e., I don't know how fine grained patches to GHC usually are). > > I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s > > the recommended workflow for code that’s not ready for review… > This is OK but please remember to set status of revision to "Planned > changes" after uploading it > to Phab so it doesn't sit in reviewing queue. > Cool, I didn't know about the "Planned changes" status. Thanks for mentioning it! Cheers, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Oct 17 13:21:21 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 17 Oct 2016 09:21:21 -0400 Subject: Status of GHC testsuite driver on Windows In-Reply-To: <87d1izstml.fsf@ben-laptop.smart-cactus.org> References: <87d1izstml.fsf@ben-laptop.smart-cactus.org> Message-ID: <871szfru0e.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > So I spent my weekend in the jungles Windows compatibility layers. 
I'll > spare you the details as they are gruesome but here's a brief summary, > > * There are a few nasty bugs currently in msys2 which affect the GHC > testsuite driver: > > * Mingw Python packages are terribly broken (#12554) > > * Msys Python packages are also broken, but differently and only > with msys2-runtime >= 2.5.1 (#12660) > My apologies, this was supposed to read #12661. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Mon Oct 17 14:47:56 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 17 Oct 2016 10:47:56 -0400 Subject: Dataflow analysis for Cmm In-Reply-To: References: <201610170957.41204.jan.stolarek@p.lodz.pl> Message-ID: <87y41nqbfn.fsf@ben-laptop.smart-cactus.org> Michal Terepeta writes: > On Mon, Oct 17, 2016 at 10:57 AM Jan Stolarek > wrote: > >> Second question: how could we merge this? (...) >> I'm not sure if I understand. The end result after merging will be exactly >> the same, right? Are >> you asking for advice what is the best way of doing this from a technical >> point if view? I would >> simply edit the existing module. Introducing a temporary second module >> seems like unnecessary >> extra work and perhaps complicating the patch review. >> > > Yes, the end result would be the same - I'm merely asking what would be > preferred by GHC devs (i.e., I don't know how fine grained patches to GHC > usually are). > It varies quite wildly. In general I would prefer fine-grained patches (but of course atomic) over coarse patches as they are easier to understand during review and after merge. Moreover, it's generally much easier to squash together patches that are too fine-grained than it is to split up a large patch, so I generally err on the side of finer rather than coarser during development. 
Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 454 bytes
Desc: not available
URL:

From cma at bitemyapp.com Mon Oct 17 17:08:01 2016
From: cma at bitemyapp.com (Christopher Allen)
Date: Mon, 17 Oct 2016 12:08:01 -0500
Subject: Improving GHC GC for latency-sensitive networked services
Message-ID:

It'd be unfortunate if more companies trying out Haskell came to the same result: https://blog.pusher.com/latency-working-set-ghc-gc-pick-two/#comment-2866985345 (They gave up and rewrote the service in Golang)

Most of the state of the art I'm aware of (such as from Azul Systems) is from when I was using a JVM language, which isn't necessarily applicable to GHC.

I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was abandoned because it penalized performance too much. Does the impression that there isn't the labor to maintain two GCs still hold? It seems like thread-local heaps would be pervasive.

Does anyone know what could be done in GHC itself to improve this situation? Stop-the-world is pretty painful when the otherwise excellent concurrency primitives are much of why you're using Haskell.

--- Chris Allen

From ben at well-typed.com Mon Oct 17 17:32:10 2016
From: ben at well-typed.com (Ben Gamari)
Date: Mon, 17 Oct 2016 13:32:10 -0400
Subject: Compact regions in users guide
Message-ID: <87y41mkhk5.fsf@ben-laptop.smart-cactus.org>

Hello Compact Regions authors,

It occurs to me that the compact regions support that is due to be included in GHC 8.2 is lacking any discussion in the users guide. At the very least we should have a mention in the release notes (this is one of the major features of 8.2, after all) and a brief overview of the feature elsewhere. It's a bit hard to say where the overview would fit (parallel.rst is an option, albeit imperfect; glasgow_exts.rst is another). I'll leave this up to you. I've opened #12413 [1] to track this task.
Do you suppose one of you could take a few minutes to finish this off? Thanks! Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/ticket/12413 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Mon Oct 17 18:10:07 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 17 Oct 2016 14:10:07 -0400 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: <87vawqkfsw.fsf@ben-laptop.smart-cactus.org> Christopher Allen writes: > It'd be unfortunate if more companies trying out Haskell came to the > same result: https://blog.pusher.com/latency-working-set-ghc-gc-pick-two/#comment-2866985345 > (They gave up and rewrote the service in Golang) > Aside: Go strikes me as an odd choice here; I would have thought they would just move to something like Rust or C++ to avoid GC entirely and still benefit from a reasonably expressive type system. Anyways, moving along... > Most of the state of the art I'm aware of (such as from Azul Systems) > is from when I was using a JVM language, which isn't necessarily > applicable for GHC. > > I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was > abandoned because it penalized performance too much. Does the > impression that there isn't the labor to maintain two GCs still hold? > It seems like thread-local heaps would be pervasive. > Yes, I believe that this indeed still holds. In general the RTS lacks hands and garbage collectors (especially parallel implementations) require a fair bit of background knowledge to maintain. > Does anyone know what could be done in GHC itself to improve this > situation? Stop-the-world is pretty painful when the otherwise > excellent concurrency primitives are much of why you're using Haskell. > Indeed it is quite painful. 
However, I suspect that compact regions (coming in 8.2) could help in many workloads. In the case of Pusher's workload (which isn't very precisely described, so I'm guessing here) I suspect you could take batches of N messages and add them to a compact region, essentially reducing the number of live heap objects (and hence work that the GC must perform) by a factor of N. Of course, in doing this you give up the ability to "retire" messages individually.

To recover this ability one could introduce a Haskell "garbage collector" task to scan the active regions and copy messages that should be kept into a new region, dropping those that should be retired. Here you benefit from the fact that copying into a compact region can be done in parallel (IIRC), allowing us to essentially implement a copying, non-stop-the-world GC in our Haskell program. This allows the runtime's GC to handle a large, static heap as though it were a constant factor smaller, hopefully reducing pause duration. That being said, this is all just wild speculation; I could be wrong, YMMV, etc.

Of course, another option is splitting your workload across multiple runtime systems. Cloud Haskell is a very nice tool for this which I've used on client projects with very good results. Obviously it isn't always possible to segment your heap as required by this approach, but it is quite effective when possible.

While clearly neither of these is as convenient as a more scalable garbage collector, they are both things we can (nearly) do today. Looking farther into the future, I know there is a group looking to add linear types to GHC/Haskell with a separate linear heap (which needn't be garbage collected). I'll let them elaborate if they so desire.

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
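[Editorial note: the batching scheme Ben describes can be sketched in a few lines. This assumes the `GHC.Compact` interface from the ghc-compact package (`compact`, `getCompact`) that eventually shipped with GHC 8.2; at the time of this thread the API was still in flux, and the `Message` type and liveness predicate are invented for the example.]

```haskell
-- Sketch only: assumes ghc-compact's GHC.Compact (GHC >= 8.2).
import GHC.Compact (Compact, compact, getCompact)

-- A made-up message type standing in for Pusher-style payloads.
data Message = Message { msgId :: Int, msgBody :: String }

-- Seal a batch of N messages into one compact region; the GC then
-- traces a single region instead of N individual heap objects.
sealBatch :: [Message] -> IO (Compact [Message])
sealBatch = compact

-- The "Haskell garbage collector" task: copy still-live messages from
-- old regions into a fresh region and drop the rest. The old regions
-- then become garbage wholesale.
evacuateLive :: (Message -> Bool)
             -> [Compact [Message]]
             -> IO (Compact [Message])
evacuateLive isLive oldRegions =
  compact (filter isLive (concatMap getCompact oldRegions))

main :: IO ()
main = do
  region  <- sealBatch [Message i ("payload " ++ show i) | i <- [1 .. 100]]
  region' <- evacuateLive (even . msgId) [region]
  print (length (getCompact region'))
```

As Ben notes, per-message retirement is traded away: retirement only takes effect when `evacuateLive` next runs, which is exactly the factor-of-N reduction in GC-visible objects that makes the scheme attractive.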
Name: signature.asc
Type: application/pgp-signature
Size: 454 bytes
Desc: not available
URL:

From jan.stolarek at p.lodz.pl Tue Oct 18 08:49:17 2016
From: jan.stolarek at p.lodz.pl (Jan Stolarek)
Date: Tue, 18 Oct 2016 09:57:41 +0100
Subject: Dataflow analysis for Cmm
In-Reply-To:
References: <201610170957.41204.jan.stolarek@p.lodz.pl>
Message-ID: <201610180949.17368.jan.stolarek@p.lodz.pl>

> (did you intend to send two identical links?)
No :-) The branches should be js-hoopl-cleanup-v1 and js-hoopl-cleanup-v2

> Yes, the end result would be the same - I'm merely asking what would be
> preferred by GHC devs (i.e., I don't know how fine grained patches to GHC
> usually are).
I don't think this should be visible in your final patch. In a perfect world each repository commit should provide exactly one piece of functionality - not more and not less. So your final patch should not reflect the intermediate steps you took to implement some functionality, because they are not really relevant. For the purpose of development and review it is fine to have a branch with lots of small commits, but before merging you should just squash them into one.

Janek

--- Politechnika Łódzka Lodz University of Technology

From simonpj at microsoft.com Tue Oct 18 09:58:40 2016
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 18 Oct 2016 09:58:40 +0000
Subject: Windows build
Message-ID:

On Windows I now get a lot of framework failures, below.

I have not tried them all, but some work fine when run individually; e.g.
make TEST=AssocTyDef09 Simon Unexpected passes: rts/T7037.run T7037 [unexpected] (normal) Unexpected failures: ghci/prog003/prog003.run prog003 [bad exit code] (ghci) plugins/plugins07.run plugins07 [bad exit code] (normal) Unexpected stat failures: perf/haddock/haddock.compiler.run haddock.compiler [stat not good enough] (normal) Framework failures: ./cabal/T5442b.run T5442b [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./codeGen/should_run/cgrun040.run cgrun040 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./concurrent/should_run/conc027.run conc027 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./deSugar/should_run/dsrun010.run dsrun010 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/sumnats/dph-sumnats-vseg.run dph-sumnats-vseg [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/words/dph-words-copy-fast.run dph-words-copy-fast [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./driver/T9963.run T9963 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/ghci044a.run ghci044a [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/T4127.run T4127 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/should_run/ghcirun001.run ghcirun001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./overloadedlists/should_fail/overloadedlistsfail02.run overloadedlistsfail02 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./package/package07e.run package07e [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/ParserNoForallUnicode.run ParserNoForallUnicode [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/T12051.run T12051 [runTest] (Unhandled exception: global name 
'WindowsError' is not defined) ./perf/should_run/T4830.run T4830 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) ./rts/stack003.run stack003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./rts/ffishutdown.run ffishutdown [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./simplCore/should_compile/T3234.run T3234 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_compile/tc217.run tc217 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail013.run tcfail013 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail110.run tcfail110 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/AssocTyDef09.run AssocTyDef09 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/stableptr003.run stableptr003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/IO/ioeGetErrorString001.run ioeGetErrorString001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Oct 18 10:06:38 2016 From: lonetiger at gmail.com (Phyx) Date: Tue, 18 Oct 2016 10:06:38 +0000 Subject: Windows build In-Reply-To: References: Message-ID: Hi Simon, What does which python 2 and which python 3 return? Along with uname -a? Tamar On Tue, Oct 18, 2016, 10:58 Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > On Windows I now get a lot of framework failures, below. > > I have not tried them all, but some work fine when run individually; e.g. 
> > make TEST=AssocTyDef09 > > Simon > > > > > > Unexpected passes: > > rts/T7037.run T7037 [unexpected] (normal) > > > > Unexpected failures: > > ghci/prog003/prog003.run prog003 [bad exit code] (ghci) > > plugins/plugins07.run plugins07 [bad exit code] (normal) > > > > Unexpected stat failures: > > perf/haddock/haddock.compiler.run haddock.compiler [stat not good > enough] (normal) > > > > Framework failures: > > ./cabal/T5442b.run T5442b > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./codeGen/should_run/cgrun040.run cgrun040 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./concurrent/should_run/conc027.run conc027 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./deSugar/should_run/dsrun010.run dsrun010 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./dph/sumnats/dph-sumnats-vseg.run > dph-sumnats-vseg [runTest] (Unhandled exception: global name 'WindowsError' > is not defined) > > ./dph/words/dph-words-copy-fast.run > dph-words-copy-fast [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > > ./driver/T9963.run T9963 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/scripts/ghci044a.run ghci044a > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/scripts/T4127.run T4127 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/should_run/ghcirun001.run ghcirun001 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./overloadedlists/should_fail/overloadedlistsfail02.run > overloadedlistsfail02 [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > > ./package/package07e.run package07e > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./parser/should_fail/ParserNoForallUnicode.run > ParserNoForallUnicode [runTest] 
(Unhandled exception: global name > 'WindowsError' is not defined) > > ./parser/should_fail/T12051.run T12051 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./perf/should_run/T4830.run T4830 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./plugins/plugins07.run plugins07 > [normal] (pre_cmd failed: 2) > > ./rts/stack003.run stack003 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./rts/ffishutdown.run ffishutdown > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./simplCore/should_compile/T3234.run T3234 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_compile/tc217.run tc217 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/tcfail013.run tcfail013 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/tcfail110.run tcfail110 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/AssocTyDef09.run AssocTyDef09 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ../../libraries/base/tests/stableptr003.run stableptr003 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ../../libraries/base/tests/IO/ioeGetErrorString001.run > ioeGetErrorString001 [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Tue Oct 18 10:30:18 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 18 Oct 2016 10:30:18 +0000 Subject: Windows build In-Reply-To: References: Message-ID: .../typecheck/should_fail$ which python /usr/bin/python .../typecheck/should_fail$ which python2 /usr/bin/python2 .../typecheck/should_fail$ which python3 /usr/bin/python3 .../typecheck/should_fail$ python --version Python 3.4.3 .../typecheck/should_fail$ python2 --version Python 2.7.11 .../typecheck/should_fail$ python3 --version Python 3.4.3 .../typecheck/should_fail$ From: Phyx [mailto:lonetiger at gmail.com] Sent: 18 October 2016 11:07 To: Simon Peyton Jones ; ghc-devs at haskell.org Subject: Re: Windows build Hi Simon, What does which python 2 and which python 3 return? Along with uname -a? Tamar On Tue, Oct 18, 2016, 10:58 Simon Peyton Jones via ghc-devs > wrote: On Windows I now get a lot of framework failures, below. I have not tried them all, but some work fine when run individually; e.g. 
make TEST=AssocTyDef09 Simon Unexpected passes: rts/T7037.run T7037 [unexpected] (normal) Unexpected failures: ghci/prog003/prog003.run prog003 [bad exit code] (ghci) plugins/plugins07.run plugins07 [bad exit code] (normal) Unexpected stat failures: perf/haddock/haddock.compiler.run haddock.compiler [stat not good enough] (normal) Framework failures: ./cabal/T5442b.run T5442b [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./codeGen/should_run/cgrun040.run cgrun040 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./concurrent/should_run/conc027.run conc027 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./deSugar/should_run/dsrun010.run dsrun010 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/sumnats/dph-sumnats-vseg.run dph-sumnats-vseg [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/words/dph-words-copy-fast.run dph-words-copy-fast [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./driver/T9963.run T9963 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/ghci044a.run ghci044a [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/T4127.run T4127 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/should_run/ghcirun001.run ghcirun001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./overloadedlists/should_fail/overloadedlistsfail02.run overloadedlistsfail02 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./package/package07e.run package07e [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/ParserNoForallUnicode.run ParserNoForallUnicode [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/T12051.run T12051 [runTest] (Unhandled exception: global name 
'WindowsError' is not defined) ./perf/should_run/T4830.run T4830 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) ./rts/stack003.run stack003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./rts/ffishutdown.run ffishutdown [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./simplCore/should_compile/T3234.run T3234 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_compile/tc217.run tc217 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail013.run tcfail013 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail110.run tcfail110 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/AssocTyDef09.run AssocTyDef09 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/stableptr003.run stableptr003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/IO/ioeGetErrorString001.run ioeGetErrorString001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Oct 18 10:39:24 2016 From: lonetiger at gmail.com (Phyx) Date: Tue, 18 Oct 2016 10:39:24 +0000 Subject: Windows build In-Reply-To: References: Message-ID: And uname -a? If you're on anything higher than 2.5.1 the runtime has a regression per Ben's email. If you're not. Try using python3 for testing. 
Make test PYTHON=/usr/bin/python3 On Tue, Oct 18, 2016, 11:30 Simon Peyton Jones wrote: > .../typecheck/should_fail$ which python > > /usr/bin/python > > .../typecheck/should_fail$ which python2 > > /usr/bin/python2 > > .../typecheck/should_fail$ which python3 > > /usr/bin/python3 > > .../typecheck/should_fail$ python --version > > Python 3.4.3 > > .../typecheck/should_fail$ python2 --version > > Python 2.7.11 > > .../typecheck/should_fail$ python3 --version > > Python 3.4.3 > > .../typecheck/should_fail$ > > > > *From:* Phyx [mailto:lonetiger at gmail.com] > *Sent:* 18 October 2016 11:07 > *To:* Simon Peyton Jones ; ghc-devs at haskell.org > *Subject:* Re: Windows build > > > > Hi Simon, > > > > What does which python 2 and which python 3 return? Along with uname -a? > > > > Tamar > > On Tue, Oct 18, 2016, 10:58 Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > > On Windows I now get a lot of framework failures, below. > > I have not tried them all, but some work fine when run individually; e.g. 
> > make TEST=AssocTyDef09 > > Simon > > > > > > Unexpected passes: > > rts/T7037.run T7037 [unexpected] (normal) > > > > Unexpected failures: > > ghci/prog003/prog003.run prog003 [bad exit code] (ghci) > > plugins/plugins07.run plugins07 [bad exit code] (normal) > > > > Unexpected stat failures: > > perf/haddock/haddock.compiler.run haddock.compiler [stat not good > enough] (normal) > > > > Framework failures: > > ./cabal/T5442b.run T5442b > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./codeGen/should_run/cgrun040.run cgrun040 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./concurrent/should_run/conc027.run conc027 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./deSugar/should_run/dsrun010.run dsrun010 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./dph/sumnats/dph-sumnats-vseg.run > dph-sumnats-vseg [runTest] (Unhandled exception: global name 'WindowsError' > is not defined) > > ./dph/words/dph-words-copy-fast.run > dph-words-copy-fast [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > > ./driver/T9963.run T9963 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/scripts/ghci044a.run ghci044a > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/scripts/T4127.run T4127 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./ghci/should_run/ghcirun001.run ghcirun001 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./overloadedlists/should_fail/overloadedlistsfail02.run > overloadedlistsfail02 [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > > ./package/package07e.run package07e > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./parser/should_fail/ParserNoForallUnicode.run > ParserNoForallUnicode [runTest] 
(Unhandled exception: global name > 'WindowsError' is not defined) > > ./parser/should_fail/T12051.run T12051 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./perf/should_run/T4830.run T4830 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./plugins/plugins07.run plugins07 > [normal] (pre_cmd failed: 2) > > ./rts/stack003.run stack003 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./rts/ffishutdown.run ffishutdown > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./simplCore/should_compile/T3234.run T3234 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_compile/tc217.run tc217 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/tcfail013.run tcfail013 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/tcfail110.run tcfail110 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ./typecheck/should_fail/AssocTyDef09.run AssocTyDef09 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ../../libraries/base/tests/stableptr003.run stableptr003 > [runTest] (Unhandled exception: global name 'WindowsError' is not defined) > > ../../libraries/base/tests/IO/ioeGetErrorString001.run > ioeGetErrorString001 [runTest] (Unhandled exception: global name > 'WindowsError' is not defined) > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... 
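[Editor's note] The framework failures above all stem from the testsuite driver referencing `WindowsError`, a Python built-in that exists only in Pythons built for native Windows; under an MSYS/POSIX Python (such as the Python 3.4.3 reported in this thread) the name is simply undefined. A common defensive pattern for such code is to alias it to `OSError` up front — this is an illustrative sketch, not the actual testsuite driver code:

```python
import os

# WindowsError only exists on native-Windows builds of Python (and in
# Python 3 it is merely an alias of OSError). Define it if missing, so
# that later 'except WindowsError' clauses do not fail with
# "global name 'WindowsError' is not defined".
try:
    WindowsError
except NameError:
    WindowsError = OSError

def try_remove(path):
    """Best-effort file removal, as a testsuite cleanup step might do."""
    try:
        os.remove(path)
    except WindowsError:
        pass  # on POSIX builds this catches plain OSError
```

With a guard like this in place, the same driver code runs under both native-Windows and MSYS Pythons.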
URL: From simonpj at microsoft.com Tue Oct 18 11:02:22 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 18 Oct 2016 11:02:22 +0000 Subject: Windows build In-Reply-To: References: Message-ID: Sorry I forgot .../typecheck/should_fail$ uname -a MSYS_NT-10.0 MSRC-4079181 2.5.1(0.297/5/3) 2016-05-16 10:51 x86_64 Msys I’ll try Make test PYTHON=/usr/bin/python3 Simon From: Phyx [mailto:lonetiger at gmail.com] Sent: 18 October 2016 11:39 To: Simon Peyton Jones ; ghc-devs at haskell.org Subject: Re: Windows build And uname -a? If you're on anything higher than 2.5.1 the runtime has a regression per Ben's email. If you're not. Try using python3 for testing. Make test PYTHON=/usr/bin/python3 On Tue, Oct 18, 2016, 11:30 Simon Peyton Jones > wrote: .../typecheck/should_fail$ which python /usr/bin/python .../typecheck/should_fail$ which python2 /usr/bin/python2 .../typecheck/should_fail$ which python3 /usr/bin/python3 .../typecheck/should_fail$ python --version Python 3.4.3 .../typecheck/should_fail$ python2 --version Python 2.7.11 .../typecheck/should_fail$ python3 --version Python 3.4.3 .../typecheck/should_fail$ From: Phyx [mailto:lonetiger at gmail.com] Sent: 18 October 2016 11:07 To: Simon Peyton Jones >; ghc-devs at haskell.org Subject: Re: Windows build Hi Simon, What does which python 2 and which python 3 return? Along with uname -a? Tamar On Tue, Oct 18, 2016, 10:58 Simon Peyton Jones via ghc-devs > wrote: On Windows I now get a lot of framework failures, below. I have not tried them all, but some work fine when run individually; e.g. 
make TEST=AssocTyDef09 Simon Unexpected passes: rts/T7037.run T7037 [unexpected] (normal) Unexpected failures: ghci/prog003/prog003.run prog003 [bad exit code] (ghci) plugins/plugins07.run plugins07 [bad exit code] (normal) Unexpected stat failures: perf/haddock/haddock.compiler.run haddock.compiler [stat not good enough] (normal) Framework failures: ./cabal/T5442b.run T5442b [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./codeGen/should_run/cgrun040.run cgrun040 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./concurrent/should_run/conc027.run conc027 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./deSugar/should_run/dsrun010.run dsrun010 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/sumnats/dph-sumnats-vseg.run dph-sumnats-vseg [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./dph/words/dph-words-copy-fast.run dph-words-copy-fast [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./driver/T9963.run T9963 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/ghci044a.run ghci044a [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/scripts/T4127.run T4127 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./ghci/should_run/ghcirun001.run ghcirun001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./overloadedlists/should_fail/overloadedlistsfail02.run overloadedlistsfail02 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./package/package07e.run package07e [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/ParserNoForallUnicode.run ParserNoForallUnicode [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./parser/should_fail/T12051.run T12051 [runTest] (Unhandled exception: global name 
'WindowsError' is not defined) ./perf/should_run/T4830.run T4830 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) ./rts/stack003.run stack003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./rts/ffishutdown.run ffishutdown [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./simplCore/should_compile/T3234.run T3234 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_compile/tc217.run tc217 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail013.run tcfail013 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/tcfail110.run tcfail110 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ./typecheck/should_fail/AssocTyDef09.run AssocTyDef09 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/stableptr003.run stableptr003 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) ../../libraries/base/tests/IO/ioeGetErrorString001.run ioeGetErrorString001 [runTest] (Unhandled exception: global name 'WindowsError' is not defined) _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 18 13:03:13 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 18 Oct 2016 13:03:13 +0000 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: | I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was | abandoned because it penalized performance too much. Does the | impression that there isn't the labor to maintain two GCs still hold? 
| It seems like thread-local heaps would be pervasive. I was optimistic about thread-local heaps, but while perf did improve a bit, the complexity of the implementation was extremely daunting. So we decided that the pain didn't justify the gain. I'm not sure it'd help much here, since the data is long-lived and might migrate into the global heap anyway. Most GCs rely on traversing live data. Here the live data is big. So really the only solution is to traverse it incrementally. You can still stop-the-world, but you have to be able to resume normal execution before GC is complete, thus smearing GC out into a series of slices, interleaved with (but not necessarily in parallel with) the main application. I believe that the OCaml runtime now has such a GC. It'd be lovely to have one for GHC. But I defer to Simon M Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Christopher Allen | Sent: 17 October 2016 18:08 | To: ghc-devs at haskell.org | Subject: Improving GHC GC for latency-sensitive networked services | | It'd be unfortunate if more companies trying out Haskell came to the | same result: | https://blog.pusher.com/latency-working-set-ghc-gc-pick-two/#comment-2866985345 | (They gave up and rewrote the service in Golang) | | Most of the state of the art I'm aware of (such as from Azul Systems) | is from when I was using a JVM language, which isn't necessarily | applicable for GHC. | | I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was | abandoned because it penalized performance too much. Does the | impression that there isn't the labor to maintain two GCs still hold? | It seems like thread-local heaps would be pervasive.
| | Does anyone know what could be done in GHC itself to improve this | situation? Stop-the-world is pretty painful when the otherwise | excellent concurrency primitives are much of why you're using Haskell. | | --- Chris Allen | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From harendra.kumar at gmail.com Tue Oct 18 13:37:11 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Tue, 18 Oct 2016 19:07:11 +0530 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: It will be awesome if we can spread the GC work instead of stopping the world for too long. I am a new entrant to the Haskell world, but something similar to this was the first real problem (other than lazy IO) that I faced with GHC. While I was debugging I had to learn how the GC works to really understand what's going on. Then I learnt to always strive to keep the retained heap to the minimum possible. But sometimes the minimum possible could be a lot. This blog article was sort of a deja vu for me. It seems this is not a rare problem. I guess the compact regions technique as suggested by Ben can be used to work around the problem, but it sounds like it is application-aware and users will have to discover the possibility of that solution; I might be mistaken, though. If we want GHC to work smoothly for performance-critical applications then we should perhaps find a cost-effective way to solve this in an application-transparent manner.
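[Editor's note] Harendra's observation — that what matters is the retained heap — echoes the point above that a stop-the-world copying collector must traverse (and copy) every live object, so the pause grows with the live set rather than with allocation rate. A back-of-the-envelope model, where the copy rate is an assumed illustrative figure and not a measured GHC constant:

```python
def major_gc_pause_ms(live_bytes, copy_rate_bytes_per_ms=1_000_000):
    """Estimate the stop-the-world pause of a copying major GC.

    The collector traverses and copies every live object, so the pause
    is linear in the live set. copy_rate_bytes_per_ms (~1 GB/s here)
    is an assumption for illustration only.
    """
    return live_bytes / copy_rate_bytes_per_ms

# 100 MB of live data -> on the order of 100 ms at ~1 GB/s,
# which is the ballpark of the pause/live-set figures quoted
# elsewhere in this thread.
print(major_gc_pause_ms(100_000_000))  # 100.0
```

Halving the retained heap halves the estimated pause, which is why "keep the retained heap to the minimum possible" is effective advice even without any changes to the collector itself.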
-harendra On 18 October 2016 at 18:33, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > | I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was > | abandoned because it penalized performance too much. Does the > | impression that there isn't the labor to maintain two GCs still hold? > | It seems like thread-local heaps would be pervasive. > > I was optimistic about thread-local heaps, but while perf did improve a > bit, the complexity of the implementation was extremely daunting. So we > decided that the pain didn't justify the gain. > > I'm not sure it'd help much here, since the data is long-lived and might > migrate into the global heap anyway. > > Most GCs rely on traversing live data. Here the live data is big. So > really the only solution is to traverse it incrementally. You can still > stop-the-world, but you have to be able to resume normal execution before > GC is complete, thus smearing GC out into a series of slices, interleaved > with (but not necessarily in parallel with) the main application. > > I believe that the OCaml runtime now has such a GC. It'd be lovely to > have one for GHC. > > But I defer to Simon M > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Christopher Allen > | Sent: 17 October 2016 18:08 > | To: ghc-devs at haskell.org > | Subject: Improving GHC GC for latency-sensitive networked services > | > | It'd be unfortunate if more companies trying out Haskell came to the > | same result: > | https://blog.pusher.com/latency-working-set-ghc-gc-pick-two/#comment-2866985345 > | (They gave up and rewrote the service in Golang) > | > | Most of the state of the art I'm aware of (such as from Azul Systems) > | is from when I was using a JVM language, which isn't necessarily > | applicable for GHC. > | > | I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was > | abandoned because it penalized performance too much. Does the > | impression that there isn't the labor to maintain two GCs still hold? > | It seems like thread-local heaps would be pervasive. > | > | Does anyone know what could be done in GHC itself to improve this > | situation? Stop-the-world is pretty painful when the otherwise > | excellent concurrency primitives are much of why you're using Haskell. > | > | --- Chris Allen > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From m at tweag.io Tue Oct 18 14:12:18 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Tue, 18 Oct 2016 16:12:18 +0200 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: Hi Chris, the GC pauses when using GHC have seldom been a serious issue in most of our projects at Tweag I/O. We do also have some projects with special requirements, however (strong synchrony between many machines that block frequently). For those the GC pauses are indeed a problem. And like most non-trivial problems, it's a combination of multiple solutions that'll help us reduce or eliminate these long pauses. The first line of work involves hacks to the GC. Making the GC incremental would certainly be nice. Local heaps might help for some workloads, but it's no silver bullet, as Simon PJ writes below. I think it would be very illuminating if Simon M or whoever else worked on early experiments regarding local heaps could post a detailed writeup as to what made the "complexity of the implementation extremely daunting" and the tradeoffs involved. Or a link, if there already is one. :) As Ben alluded to earlier and as Reddit discovered some weeks ago, as part of another line of work, we are donating some ongoing effort to help with the problem by simply taking some objects out of the GC-managed heap. Objects that the GC just doesn't have to deal with at all (either because allocated elsewhere or not at all, thanks to fusion) can relieve the pressure on the GC. But quite apart from our effort here, which does involve an extension to the type system to enable the programmer to make more of her/his intent clear to the compiler, I think the compact regions work that will be part of 8.2 is already a great step forward.
It requires some programmer assistance, but if it's GC pause times you're wrestling with, chances are you have a very hard use case indeed so providing that assistance is likely easier than most other things you'll have to deal with. Best, -- Mathieu Boespflug Founder at http://tweag.io. On 17 October 2016 at 19:08, Christopher Allen wrote: > It'd be unfortunate if more companies trying out Haskell came to the > same result: https://blog.pusher.com/latency-working-set-ghc-gc- > pick-two/#comment-2866985345 > (They gave up and rewrote the service in Golang) > > Most of the state of the art I'm aware of (such as from Azul Systems) > is from when I was using a JVM language, which isn't necessarily > applicable for GHC. > > I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was > abandoned because it penalized performance too much. Does the > impression that there isn't the labor to maintain two GCs still hold? > It seems like thread-local heaps would be pervasive. > > Does anyone know what could be done in GHC itself to improve this > situation? Stop-the-world is pretty painful when the otherwise > excellent concurrency primitives are much of why you're using Haskell. > > --- Chris Allen > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Oct 18 14:32:19 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 18 Oct 2016 15:32:19 +0100 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: Chris, There are a few things here. - There are different levels of latency-sensitivity. The system I work on at Facebook is latency sensitive and we have no problem with the GC (after we implemented a few optimisations and did some tuning). 
But we're ok with pauses up to 100ms or so, and our average pause time is <50ms with 100MB live data on large multicore machines. There's probably still scope to reduce that some more. - Thread-local heaps don't fix the pause-time issue. They reduce the pause time for a local collection but have no impact on the global collection, which is still unbounded in size. - I absolutely agree we should have incremental or concurrent collection. It's a big project though. Most of the technology is fairly well understood (just read https://www.amazon.co.uk/gp/product/1420082795/ref=pd_bxgy_14_img_2?ie=UTF8&psc=1&refRID=P08F0WS4W6Q6Q6K8CSCF) and I have some vague plans for what direction to take. - The issue is not so much maintaining multiple GCs. We already have 3 GCs (one of which is experimental and unsupported). The issue is more that a new kind of GC has non-local implications because it affects read- and write-barriers, and making a bad tradeoff can penalize the performance of all code. Perhaps you're willing to give up 10% of performance to get guaranteed 10ms pause times, but can we impose that 10% on everyone? If not, are you willing to recompile GHC and all your libraries? Cheers Simon On 17 October 2016 at 18:08, Christopher Allen wrote: > It'd be unfortunate if more companies trying out Haskell came to the > same result: https://blog.pusher.com/latency-working-set-ghc-gc- > pick-two/#comment-2866985345 > (They gave up and rewrote the service in Golang) > > Most of the state of the art I'm aware of (such as from Azul Systems) > is from when I was using a JVM language, which isn't necessarily > applicable for GHC. > > I understand Marlow's thread-local heaps experiment circa 7.2/7.4 was > abandoned because it penalized performance too much. Does the > impression that there isn't the labor to maintain two GCs still hold? > It seems like thread-local heaps would be pervasive. > > Does anyone know what could be done in GHC itself to improve this > situation? 
Stop-the-world is pretty painful when the otherwise > excellent concurrency primitives are much of why you're using Haskell. > > --- Chris Allen > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Tue Oct 18 14:46:13 2016 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 18 Oct 2016 16:46:13 +0200 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: I'll be lazy and answer the simplest question in this thread :) On 18/10/16 16:32, Simon Marlow wrote: > If not, are you willing to recompile GHC and all your libraries? Yes. From ben at well-typed.com Tue Oct 18 22:08:00 2016 From: ben at well-typed.com (Ben Gamari) Date: Tue, 18 Oct 2016 18:08:00 -0400 Subject: Master recently broke on OS X Message-ID: <871szdia4f.fsf@ben-laptop.smart-cactus.org> Hello Simon, It looks like one of the patches that you pushed to master today may have broken the build on OS X. According to Harbormaster something in the range of f148513ccd93..7129861397f8 caused T5611 to fail on the OS X build bot [1]. Could you have a look? Cheers, - Ben [1] https://phabricator.haskell.org/harbormaster/build/14220/?l=100 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Wed Oct 19 08:34:12 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 19 Oct 2016 09:34:12 +0100 Subject: Master recently broke on OS X In-Reply-To: <871szdia4f.fsf@ben-laptop.smart-cactus.org> References: <871szdia4f.fsf@ben-laptop.smart-cactus.org> Message-ID: It appears to be passing now. 
I did commit a sequence of 3 patches, 2 of which should have been squashed together (my bad) and the intermediate builds were broken, but the final state was OK except for a failure in setnumcapabilities001. I'll try to reproduce that one today. Cheers Simon On 18 October 2016 at 23:08, Ben Gamari wrote: > Hello Simon, > > It looks like one of the patches that you pushed to master today may > have broken the build on OS X. According to Harbormaster something in > the range of f148513ccd93..7129861397f8 caused T5611 to fail on the OS X > build bot [1]. Could you have a look? > > Cheers, > > - Ben > > > [1] https://phabricator.haskell.org/harbormaster/build/14220/?l=100 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.kjeldaas at gmail.com Wed Oct 19 10:28:07 2016 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Wed, 19 Oct 2016 12:28:07 +0200 Subject: Improving GHC GC for latency-sensitive networked services In-Reply-To: References: Message-ID: On Tue, Oct 18, 2016 at 4:46 PM, Niklas Hambüchen wrote: > I'll be lazy and answer the simplest question in this thread :) > > On 18/10/16 16:32, Simon Marlow wrote: > > If not, are you willing to recompile GHC and all your libraries? > > Yes. > I'll add that managing this is probably a lot easier now than it was back then. Today you would just add a flag in stack.yaml, get a cup of coffee, and the tooling would guarantee that there's no breakage. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chrisdone at gmail.com Wed Oct 19 11:45:58 2016 From: chrisdone at gmail.com (Christopher Done) Date: Wed, 19 Oct 2016 12:45:58 +0100 Subject: How to best display type variables with the same name Message-ID: We've encountered a problem in Intero which is that when inspecting types of expressions and patterns, sometimes it happens that the type, when pretty printing, yields variables of the same name but which have different provenance. Here's a summary of the issue: https://github.com/commercialhaskell/intero/issues/280#issuecomment- 254784904 And a strawman proposal of how it could be solved: https://github.com/commercialhaskell/intero/issues/280#issuecomment- 254787927 What do you think? Also, if I were to implement the strawman proposal, is it possible to recover from a `tyvar :: Type` its original quantification/its "forall"? I've had a look through the API briefly and it looks like a _maybe_. Ciao! -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Wed Oct 19 12:48:47 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 19 Oct 2016 08:48:47 -0400 Subject: How to best display type variables with the same name In-Reply-To: References: Message-ID: <0A1F0572-48D4-4A42-974A-DA90ECB8532C@cs.brynmawr.edu> Interesting problem & solution. Here's a wacky idea, from a position of utter ignorance about your environment: could you use color? Already, when I saw `b :: a` in the commentary there, where `b` is in scope as a type variable, it seemed wrong to me. In any case, I can answer your simpler question: yes, with some work, you can get from a tyvar to its provenance. A tyvar's Name will have its binding location in it. If you also keep track of binding locations as you spot foralls, you should be able to match them up. In theory. 
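[Editor's note] Richard's suggestion — keying each type variable on its binding location and matching occurrences back to foralls — is essentially the strawman from the Intero issue. A language-neutral sketch of such a display renamer, where the function, the `(name, binding_location)` representation, and the suffixing scheme are all hypothetical illustrations rather than anything in the GHC API:

```python
def disambiguate(tyvars):
    """tyvars: list of (name, binding_location) pairs, one per occurrence,
    e.g. ("a", "line 3"). Returns a display name for each occurrence:
    occurrences of the first-seen binder keep the bare name, while
    same-named variables from other binding sites get a numeric suffix,
    so two different 'a's print as 'a' and 'a1' instead of colliding."""
    seen = {}      # name -> binding locations, in first-seen order
    display = []
    for name, loc in tyvars:
        locs = seen.setdefault(name, [])
        if loc not in locs:
            locs.append(loc)
        idx = locs.index(loc)
        display.append(name if idx == 0 else f"{name}{idx}")
    return display

# Two distinct 'a's (bound at L1 and L5) render as 'a' and 'a1':
print(disambiguate([("a", "L1"), ("b", "L1"), ("a", "L5"), ("a", "L1")]))
# ['a', 'b', 'a1', 'a']
```

The same idea could instead drive the colouring Richard proposes: the `(name, binding_location)` key picks the colour rather than a suffix.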
Richard > On Oct 19, 2016, at 7:45 AM, Christopher Done wrote: > > We've encountered a problem in Intero which is that when inspecting types of expressions and patterns, sometimes it happens that the type, when pretty printing, yields variables of the same name but which have different provenance. > > Here's a summary of the issue: > > https://github.com/commercialhaskell/intero/issues/280#issuecomment-254784904 > > And a strawman proposal of how it could be solved: > > https://github.com/commercialhaskell/intero/issues/280#issuecomment-254787927 > > What do you think? > > Also, if I were to implement the strawman proposal, is it possible to recover from a `tyvar :: Type` its original quantification/its "forall"? I've had a look through the API briefly and it looks like a _maybe_. > > Ciao! > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 19 13:05:14 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 19 Oct 2016 15:05:14 +0200 Subject: How to best display type variables with the same name In-Reply-To: <0A1F0572-48D4-4A42-974A-DA90ECB8532C@cs.brynmawr.edu> References: <0A1F0572-48D4-4A42-974A-DA90ECB8532C@cs.brynmawr.edu> Message-ID: This sounds like a thing that should be in the GHC API (the tyvar to provenance lookup). Alan On Wed, Oct 19, 2016 at 2:48 PM, Richard Eisenberg wrote: > Interesting problem & solution. > > Here's a wacky idea, from a position of utter ignorance about your > environment: could you use color? Already, when I saw `b :: a` in the > commentary there, where `b` is in scope as a type variable, it seemed wrong > to me. > > In any case, I can answer your simpler question: yes, with some work, you > can get from a tyvar to its provenance. A tyvar's Name will have its > binding location in it. 
If you also keep track of binding locations as you > spot foralls, you should be able to match them up. In theory. > > Richard > > On Oct 19, 2016, at 7:45 AM, Christopher Done wrote: > > We've encountered a problem in Intero which is that when inspecting types > of expressions and patterns, sometimes it happens that the type, when > pretty printing, yields variables of the same name but which have different > provenance. > > Here's a summary of the issue: > > https://github.com/commercialhaskell/intero/issues/280# > issuecomment-254784904 > > And a strawman proposal of how it could be solved: > > https://github.com/commercialhaskell/intero/issues/280# > issuecomment-254787927 > > What do you think? > > Also, if I were to implement the strawman proposal, is it possible to > recover from a `tyvar :: Type` its original quantification/its "forall"? > I've had a look through the API briefly and it looks like a _maybe_. > > Ciao! > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Wed Oct 19 13:35:14 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 19 Oct 2016 09:35:14 -0400 Subject: Master recently broke on OS X In-Reply-To: References: <871szdia4f.fsf@ben-laptop.smart-cactus.org> Message-ID: <87oa2gh371.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: > It appears to be passing now. I did commit a sequence of 3 patches, 2 of > which should have been squashed together (my bad) and the intermediate > builds were broken, but the final state was OK except for a failure in > setnumcapabilities001. I'll try to reproduce that one today. > Thanks Simon! 
I think this emphasizes the need for a auto-push bot like what we discussed at HIW. This is something that I'll try to prototype this week. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From simonpj at microsoft.com Wed Oct 19 16:00:54 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 19 Oct 2016 16:00:54 +0000 Subject: How to best display type variables with the same name In-Reply-To: References: Message-ID: I’m afraid I didn’t understand the issue in the link below. It speaks of “querying the type”, but I’m not sure what that means. A GHCi session perhaps? Does this relate to the way GHCi displays types? I’m a bit lost. A from-the-beginning example, showing steps and what the unexpected behaviour is would be helpful (to me anyway) Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Christopher Done Sent: 19 October 2016 12:46 To: ghc-devs at haskell.org Subject: How to best display type variables with the same name We've encountered a problem in Intero which is that when inspecting types of expressions and patterns, sometimes it happens that the type, when pretty printing, yields variables of the same name but which have different provenance. Here's a summary of the issue: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254784904 And a strawman proposal of how it could be solved: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254787927 What do you think? Also, if I were to implement the strawman proposal, is it possible to recover from a `tyvar :: Type` its original quantification/its "forall"? I've had a look through the API briefly and it looks like a _maybe_. Ciao! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Thu Oct 20 14:05:29 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 20 Oct 2016 14:05:29 +0000 Subject: Perf on T10858 Message-ID: I’m getting this on HEAD on my Linux box (64 bit) cd "./perf/T10858.run" && "/5playpen/simonpj/HEAD-2/inplace/test spaces/ghc-stage2" -c T10858.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output -O +RTS -V0 -tT10858.comp.stats --machine-readable -RTS bytes allocated value is too low: (If this is because you have improved GHC, please update the test so that GHC doesn't regress again) Expected T10858(normal) bytes allocated: 241655120 +/-8% Lower bound T10858(normal) bytes allocated: 222322710 Upper bound T10858(normal) bytes allocated: 260987530 Actual T10858(normal) bytes allocated: 221938928 Deviation T10858(normal) bytes allocated: -8.2 % Does anyone else? It’s good – but why isn’t Harbormaster complaining? Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Thu Oct 20 14:20:42 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 20 Oct 2016 10:20:42 -0400 Subject: Perf on T10858 In-Reply-To: References: Message-ID: <87d1iv85l1.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I’m getting this on HEAD on my Linux box (64 bit) > > cd "./perf/T10858.run" && "/5playpen/simonpj/HEAD-2/inplace/test spaces/ghc-stage2" -c T10858.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output -O +RTS -V0 -tT10858.comp.stats --machine-readable -RTS > > bytes allocated value is too low: > > (If this is because you have improved GHC, please > > update the test so that GHC doesn't regress again) > > Expected T10858(normal) bytes allocated: 241655120 +/-8% > > Lower bound T10858(normal) bytes allocated: 222322710 > > Upper bound T10858(normal) bytes allocated: 260987530 > > Actual T10858(normal) bytes allocated: 221938928 > > Deviation T10858(normal) bytes allocated: -8.2 % > > Does anyone else? It’s good – but why isn’t Harbormaster complaining? > A very good question. It looks like the result is straddling the edge of acceptable so it's conceivable that Harbormaster is (for some reason) just below the failing threshold. We've seen this sort of small non-determinism in allocations in the past, although I don't have a compelling explanation for why. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Thu Oct 20 15:48:33 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 20 Oct 2016 11:48:33 -0400 Subject: Perf on T10858 In-Reply-To: <87d1iv85l1.fsf@ben-laptop.smart-cactus.org> References: <87d1iv85l1.fsf@ben-laptop.smart-cactus.org> Message-ID: <1476978513.3649.7.camel@joachim-breitner.de> Hi, Am Donnerstag, den 20.10.2016, 10:20 -0400 schrieb Ben Gamari: > > Simon Peyton Jones via ghc-devs writes: > > > I’m getting this on HEAD on my Linux box (64 bit) > > > > cd "./perf/T10858.run" &&  "/5playpen/simonpj/HEAD-2/inplace/test   spaces/ghc-stage2" -c T10858.hs -dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output  -O +RTS -V0 -tT10858.comp.stats --machine-readable -RTS > > > > bytes allocated value is too low: > > > > (If this is because you have improved GHC, please > > > > update the test so that GHC doesn't regress again) > > > >     Expected    T10858(normal) bytes allocated: 241655120 +/-8% > > > >     Lower bound T10858(normal) bytes allocated: 222322710 > > > >     Upper bound T10858(normal) bytes allocated: 260987530 > > > >     Actual      T10858(normal) bytes allocated: 221938928 > > > >     Deviation   T10858(normal) bytes allocated:      -8.2 % > > > > Does anyone else?  It’s good – but why isn’t Harbormaster complaining? > > > > A very good question. It looks like the result is straddling the edge of > acceptable so it's conceivable that Harbormaster is (for some reason) > just below the failing threshold. We've seen this sort of small > non-determinism in allocations in the past, although I don't have a > compelling explanation for why. this is confirmed here: https://perf.haskell.org/ghc/#graph/tests/alloc/T10858 It is close to the lower edge, and not 100% stable. 
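For reference, the quoted window follows directly from the expected value. Here is a small sketch in plain Haskell (numbers copied from the T10858 output above; this is illustration only, not the actual testsuite driver code):

```haskell
-- Reproduce the perf-test bound check being discussed. The +/-8% window
-- is applied to the expected allocation figure, and the measured value
-- falls just below the lower bound.
expected, actual :: Integer
expected = 241655120
actual   = 221938928

tolerance :: Double
tolerance = 0.08  -- the +/-8% window

lowerBound, upperBound :: Integer
lowerBound = round (fromIntegral expected * (1 - tolerance))
upperBound = round (fromIntegral expected * (1 + tolerance))

deviation :: Double
deviation = 100 * fromIntegral (actual - expected) / fromIntegral expected

main :: IO ()
main = do
  print (lowerBound, upperBound)  -- (222322710, 260987530), matching the log
  print (actual < lowerBound)     -- True: just outside the window
  print deviation                 -- roughly -8.16, reported as -8.2 %
```

With the actual value sitting within 0.2% of the lower bound, small run-to-run noise in allocations is enough to flip the test between passing and failing, which is consistent with Harbormaster not complaining.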
Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Thu Oct 20 18:15:38 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 20 Oct 2016 14:15:38 -0400 Subject: Dataflow analysis for Cmm In-Reply-To: <201610180949.17368.jan.stolarek@p.lodz.pl> References: <201610170957.41204.jan.stolarek@p.lodz.pl> <201610180949.17368.jan.stolarek@p.lodz.pl> Message-ID: <87wph27uph.fsf@ben-laptop.smart-cactus.org> Jan Stolarek writes: >> (did you intend to send two identical links?) > No :-) The branches should be js-hoopl-cleanup-v1 and js-hoopl-cleanup-v2 > >> Yes, the end result would be the same - I'm merely asking what would be >> preferred by GHC devs (i.e., I don't know how fine-grained patches to GHC >> usually are). > I don't think this should be visible in your final patch. In a perfect world each repository > commit should provide exactly one piece of functionality - not more and not less. So your final patch > should not reflect intermediate steps you took to implement some functionality because they are > not really relevant. For the purpose of development and review it is fine to have a branch with > lots of small commits but before merging you should just squash them into one. > I don't entirely agree. I personally find it very hard to review large patches as the amount of mental context generally seems to grow super-linearly in the amount of code touched. Moreover, I think it's important to remember that the need to read patches does not vanish the moment the patch is committed. To the contrary, review is merely the first of many instances in which a patch will be read.
Other instances include, * when the patch is backported * when someone is trying to rebase one of their own changes on top of the patch * when another contributor is trying to follow the evolution of a piece of the compiler * when someone is later trying to understand a bug in the patch Consequently, I think it is fairly important not to throw out the structure that multiple, sensibly sized commits provides. Of course there is a compromise to be struck here: we don't want dozens of five-line patches; however I think that one mega-patch swings too far to the other extreme in the case of most features. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From jan.stolarek at p.lodz.pl Thu Oct 20 19:21:49 2016 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 20 Oct 2016 20:21:49 +0100 Subject: Dataflow analysis for Cmm In-Reply-To: <87wph27uph.fsf@ben-laptop.smart-cactus.org> References: <201610180949.17368.jan.stolarek@p.lodz.pl> <87wph27uph.fsf@ben-laptop.smart-cactus.org> Message-ID: <201610202021.49803.jan.stolarek@p.lodz.pl> > I don't entirely agree. I personally find it very hard to review large > patches as the amount of mental context generally seems to grow > super-linearly in the amount of code touched. Moreover, I think it's > important to remember that the need to read patches does not vanish the > moment the patch is committed. To the contrary, review is merely the > first of many instances in which a patch will be read. Other instances > include, I wholeheartedly agree with everything you say. I don't see it as contradicting in any way principles that I outlined. It's just that sometimes doing a single logical change to the code requires a large patch and breaking it artificially into smaller patches might actually make matters worse. I believe this would be the case for this scenario. 
And honestly speaking I don't think that the patch here will be very big. But like you say, there's a compromise to be struck. Janek > > * when the patch is backported > > * when someone is trying to rebase one of their own changes on top of > the patch > > * when another contributor is trying to follow the evolution of a piece > of the compiler > > * when someone is later trying to understand a bug in the patch > > Consequently, I think it is fairly important not to throw out the > structure that multiple, sensibly sized commits provides. Of course > there is a compromise to be struck here: we don't want dozens of > five-line patches; however I think that one mega-patch swings too far to > the other extreme in the case of most features. > > Cheers, > > - Ben --- Politechnika Łódzka Lodz University of Technology Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. This email contains information intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient or if you have received this message in error, please notify the sender and delete it from your system. From ben at smart-cactus.org Thu Oct 20 19:43:49 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 20 Oct 2016 15:43:49 -0400 Subject: Dataflow analysis for Cmm In-Reply-To: <201610202021.49803.jan.stolarek@p.lodz.pl> References: <201610180949.17368.jan.stolarek@p.lodz.pl> <87wph27uph.fsf@ben-laptop.smart-cactus.org> <201610202021.49803.jan.stolarek@p.lodz.pl> Message-ID: <87twc67qmi.fsf@ben-laptop.smart-cactus.org> Jan Stolarek writes: >> I don't entirely agree. I personally find it very hard to review large >> patches as the amount of mental context generally seems to grow >> super-linearly in the amount of code touched. 
Moreover, I think it's >> important to remember that the need to read patches does not vanish the >> moment the patch is committed. To the contrary, review is merely the >> first of many instances in which a patch will be read. Other instances >> include, > > I wholeheartedly agree with everything you say. I don't see it as contradicting in any way > principles that I outlined. It's just that sometimes doing a single logical change to the code > requires a large patch and breaking it artificially into smaller patches might actually make > matters worse. I believe this would be the case for this scenario. And honestly speaking I don't > think that the patch here will be very big. But like you say, there's a compromise to be struck. > Ahh, it looks like I was probably reading more into what you wrote than you intended; my apologies! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From george.colpitts at gmail.com Thu Oct 20 21:11:54 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 20 Oct 2016 21:11:54 +0000 Subject: GHC 8.0.2 status In-Reply-To: <87twck8duc.fsf@ben-laptop.smart-cactus.org> References: <87twck8duc.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Where are we on this? 12479 was last updated 5 days ago and it is not clear who has the next action. Thanks George On Mon, Oct 10, 2016 at 11:43 AM Ben Gamari wrote: > Hello GHCers, > > Thanks to the work of darchon the last blocker for the 8.0.2 release > (#12479) has nearly been resolved. After the fix has been merged I'll be > doing some further testing of the ghc-8.0 branch and cut a source > tarball for 8.0.2-rc1 later this week. > > If you intend on offering a binary release for 8.0.2 it would be great > if you could plan on testing the tarball promptly so we can cut 8.0.2 > and move on to planning for 8.2.1. > > Thanks for your help and patience! 
> > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Oct 21 14:04:35 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 21 Oct 2016 15:04:35 +0100 Subject: Dataflow analysis for Cmm In-Reply-To: References: Message-ID: On 16 October 2016 at 14:03, Michal Terepeta wrote: > Hi, > > I was looking at cleaning up a bit the situation with dataflow analysis > for Cmm. > In particular, I was experimenting with rewriting the current > `cmm.Hoopl.Dataflow` module: > - To only include the functionality to do analysis (since GHC doesn’t seem > to use > the rewriting part). > Benefits: > - Code simplification (we could remove a lot of unused code). > - Makes it clear what we’re actually using from Hoopl. > - To have an interface that works with transfer functions operating on a > whole > basic block (`Block CmmNode C C`). > This means that it would be up to the user of the algorithm to traverse > the > whole block. > Ah! This is actually something I wanted to do but didn't get around to. When I was working on the code generator I found that using Hoopl for rewriting was prohibitively slow, which is why we're not using it for anything right now, but I think that pulling out the basic block transformation is possibly a way forwards that would let us use Hoopl. A lot of the code you're removing is my attempt at "optimising" the Hoopl dataflow algorithm to make it usable in GHC. (I don't mind removing this, it was a failed experiment really) > Benefits: > - Further simplifications. > - We could remove `analyzeFwdBlocks` hack, which AFAICS is just a > copy&paste > of `analyzeFwd` but ignores the middle nodes (probably for efficiency > of > analyses that only look at the blocks). 
> Aren't we using this in dataflowAnalFwdBlocks, that's used by procpointAnalysis? Cheers Simon - More flexible (e.g., the clients could know which block they’re > processing; > we could consider memoizing some per block information, etc.). > > What do you think about this? > > I have a branch that implements the above: > https://github.com/michalt/ghc/tree/dataflow2/1 > It’s introducing a second parallel implementation (`cmm.Hoopl.Dataflow2` > module), so that it's possible to run ./validate while comparing the > results of > the old implementation with the new one. > > Second question: how could we merge this? (assuming that people are > generally > ok with the approach) Some ideas: > - Change cmm/Hoopl/Dataflow module itself along with the three analyses > that use > it in one step. > - Introduce the Dataflow2 module first, then switch the analyses, then > remove > any unused code that still depends on the old Dataflow module, finally > remove > the old Dataflow module itself. > (Personally I'd prefer the second option, but I'm also ok with the first > one) > > I’m happy to export the code to Phab if you prefer - I wasn’t sure what’s > the > recommended workflow for code that’s not ready for review… > > Thanks, > Michal > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Fri Oct 21 14:07:16 2016 From: chrisdone at gmail.com (Christopher Done) Date: Fri, 21 Oct 2016 15:07:16 +0100 Subject: How to best display type variables with the same name In-Reply-To: References: Message-ID: On 19 October 2016 at 17:00, Simon Peyton Jones simonpj at microsoft.com wrote: I’m afraid I didn’t understand the issue in the link below. It speaks of “querying the type”, but I’m not sure what that means. A GHCi session perhaps? 
Does this relate to the way GHCi displays types? I’m a bit lost. A from-the-beginning example, showing steps and what the unexpected behaviour is would be helpful (to me anyway) Sure. I’ll explain from top-level down: - In this case “querying the type” means running the :type-at command in Intero (which itself is a fork of GHCi’s codebase around GHC 7.10): https://github.com/commercialhaskell/intero/blob/master/src/InteractiveUI.hs#L1693-L1713 It accepts a file name, line-col to line-col span and prints the type of that expression/pattern. As you can see in that function it uses printForUserModInfo (from GhcMonad), similar to (scroll above) the printForUser for GHCi’s regular :type command. - Where does that info come from? When we load a module in Intero, we perform an additional step of “collecting info” here: https://github.com/commercialhaskell/intero/blob/master/src/GhciInfo.hs#L73 That info, for each node in the AST, is ultimately stored in a SpanInfo: https://github.com/commercialhaskell/intero/blob/master/src/GhciTypes.hs#L28-L39 Which we then use for :type-at. So in summary we collect info from tm_typechecked_source, keep that for later, and then when the user’s editor asks via e.g. :type-at X.hs 1 5 1 7 “what is the type of the thing at this point?” we use GHC’s regular pretty printing function to print that type. That actually all works splendidly. For example, if we query

    foo g f = maybe g f
    -- ^ here or ^ here

it yields g :: b, and

    foo g f = maybe g f
    -- ^ here or ^ here

it yields: f :: a -> b

The tricky part arises in this example: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254784904 Which is that we have two perfectly cromulent types from the AST that are both a in isolation, but are actually different. They will have different Unique values in their Name’s and come from different implicit forall‘s. The question is what’s a good way to communicate this to the user?
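As a toy model of the strawman proposal, here is a sketch in plain Haskell (this is not the GHC API; the TyVar-as-pair representation, the uniques, and displayNames are all invented for illustration). A lone name is shown as-is, and clashing names get numeric suffixes:

```haskell
import qualified Data.Map.Strict as M
import Data.List (groupBy, sortOn)

-- Toy model: a tyvar is its user-visible name plus its Unique. Two
-- distinct tyvars can share a name; only the Unique tells them apart.
type TyVar = (String, Int)

-- Assign unambiguous display names: a lone "a" stays "a", while two
-- distinct "a"s are shown as "a1" and "a2".
displayNames :: [TyVar] -> M.Map TyVar String
displayNames tvs = M.fromList (concatMap rename groups)
  where
    groups = groupBy (\x y -> fst x == fst y) (sortOn fst tvs)
    rename [tv]  = [(tv, fst tv)]
    rename clash = [ (tv, fst tv ++ show i)
                   | (tv, i) <- zip clash [1 :: Int ..] ]

main :: IO ()
main = print (displayNames [("a", 101), ("a", 202), ("b", 303)])
-- fromList [(("a",101),"a1"),(("a",202),"a2"),(("b",303),"b")]
```

Renaming would have to happen over the whole set of types shown at once, so the two provenances of a are distinguished consistently across the display.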
This is partly a “user interface” question, and on the side a “given an ideal UI, do we have the necessary info in the GHC API?” question. If it helps, I could probably spend some time making an isolated module that uses the GHC API to compile a file and then report these types. Ciao! -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Oct 21 14:09:46 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 21 Oct 2016 15:09:46 +0100 Subject: Compact regions in users guide In-Reply-To: <87y41mkhk5.fsf@ben-laptop.smart-cactus.org> References: <87y41mkhk5.fsf@ben-laptop.smart-cactus.org> Message-ID: Yes we need some docs. But I expect the API to change before we're done with the implementation (it isn't really usable in its current state), so I'm deferring the docs until things settle down. Cheers Simon On 17 October 2016 at 18:32, Ben Gamari wrote: > Hello Compact Regions authors, > > It occurs to me that the compact regions support that is due to be > included in GHC 8.2 is lacking any discussion in the users guide. At > very least we should have a mention in the release notes (this is one of > the major features of 8.2, after all) and a brief overview of the feature > elsewhere. It's a bit hard saying where the overview would fit > (parallel.rst is an option, albeit imperfect; glasgow_exts.rst is > another). I'll leave this up to you. > > I've opened #12413 [1] to track this task. Do you suppose one of you > could take a few minutes to finish this off? > > Thanks! > > Cheers, > > - Ben > > > [1] https://ghc.haskell.org/trac/ghc/ticket/12413 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From marlowsd at gmail.com Fri Oct 21 16:29:16 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 21 Oct 2016 17:29:16 +0100 Subject: Better X87 In-Reply-To: References: Message-ID: I believe that comment goes even further back - it was probably Julian Seward who worked on the x86 code generator around 1999, if I recall correctly. ​ -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Oct 21 16:35:43 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 21 Oct 2016 17:35:43 +0100 Subject: Default options for -threaded In-Reply-To: <87mvielr0q.fsf@ben-laptop.smart-cactus.org> References: <57f92249.4d081c0a.292b6.e00a@mx.google.com> <87mvielr0q.fsf@ben-laptop.smart-cactus.org> Message-ID: On 8 October 2016 at 17:55, Ben Gamari wrote: > lonetiger at gmail.com writes: > > > Hi All, > > > > A user on https://ghc.haskell.org/trac/ghc/ticket/11054 has asked why > > -N -qa isn’t the default for -threaded. > > > I'm not sure that scheduling on all of the cores on the user's machine by > default is a good idea, especially given that our users have > learned to expect the existing default. Enabling affinity by default > seems reasonable if we have evidence that it helps the majority of > applications, but we would first need to introduce an additional > flag to disable it. > Affinity is almost always a bad idea in my experience. > In general I think -N1 is a reasonable default as it acknowledges the > fact that deploying parallelism is not something that can be done > blindly in many (most?) applications. To make effective use of > parallelism the user needs to understand their hardware, their > application, and its interaction with the runtime system and configure > the RTS appropriately. > > Agree on keeping -N1. Related to this, I think it's about time we made -threaded the default. We could add a -single-threaded option to get back the old behaviour. 
There is a small overhead to using -threaded, but -threaded is also required to make a lot of things work (e.g. waitForProcess in a multithreaded program, not to mention parallelism). Anyone interested in doing this? Cheers Simon > Of course, this is just my two-cents. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Fri Oct 21 17:19:18 2016 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Fri, 21 Oct 2016 10:19:18 -0700 Subject: Semantics of MVars and IORefs? Message-ID: Hello, recently, I and a few colleagues have been wondering about the interaction between IORefs and MVars, and we can't seem to find any explicit documentation stating the behavior, so I was wondering if anyone might know the answer. The question is: is it safe to use IORefs in a multi-threaded program, provided that the uses are within a "critical section" implemented with MVars. Here is a simple program to illustrate the situation: we have two threads, each thread takes a lock, increments a counter, then releases the lock: > import Control.Concurrent > import Data.IORef > > main :: IO () > main = > do lock <- newMVar () > counter <- newIORef 0 > forkIO (thread lock counter) > thread lock counter > > thread :: MVar () -> IORef Integer -> IO a > thread lock counter = > do takeMVar lock > value <- readIORef counter > print value > writeIORef counter (value + 1) > putMVar lock () > thread lock counter The question is if this program has a race condition or not, due to the use of IORefs? More explicitly, the concern is if a write to an IORef in one thread is guaranteed to be seen by a read from the same IORef in another thread, provided that there is proper synchronization between the two. 
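One way to sidestep the question entirely is to keep the counter inside the MVar itself, so that every access is an MVar operation and the publication semantics of IORef never come into play. A sketch of that variant (plain base, and terminating rather than looping forever; this is an illustration, not a claim about what the memory model guarantees for the IORef version):

```haskell
import Control.Concurrent

-- Each thread bumps a counter held *inside* an MVar 1000 times.
-- modifyMVar_ gives us take/put around the increment, so there is no
-- separate IORef whose visibility we would have to reason about.
main :: IO ()
main = do
  counter <- newMVar (0 :: Integer)
  done <- newEmptyMVar
  let bump = modifyMVar_ counter (\v -> return (v + 1))
  _ <- forkIO (mapM_ (const bump) [1 .. 1000 :: Int] >> putMVar done ())
  mapM_ (const bump) [1 .. 1000 :: Int]
  takeMVar done                -- wait for the forked thread to finish
  final <- readMVar counter
  print final                  -- 2000
```

Whether the original IORef-plus-lock version is also guaranteed correct is exactly the memory-model question being asked here.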
-Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From fryguybob at gmail.com Fri Oct 21 17:56:34 2016 From: fryguybob at gmail.com (Ryan Yates) Date: Fri, 21 Oct 2016 13:56:34 -0400 Subject: Semantics of MVars and IORefs? In-Reply-To: References: Message-ID: Hi Iavor, You might be interested in what Edward has written about this: http://blog.ezyang.com/2014/01/so-you-want-to-add-a-new-concurrency-primitive-to-ghc/ I would say when we do have a memory model for GHC the program you gave will almost certainly be correct. MVar operations should be full synchronization operations. There are some bugs on relaxed systems with initialization that I think are being addressed. I can't find the tickets at the moment. Ryan On Fri, Oct 21, 2016 at 1:19 PM, Iavor Diatchki wrote: > Hello, > > recently, I and a few colleagues have been wondering about the interaction > between IORefs and MVars, and we can't seem to find any explicit > documentation stating the behavior, so I was wondering if anyone might know > the answer. > > The question is: is it safe to use IORefs in a multi-threaded program, > provided that the uses are within a "critical section" implemented with > MVars. Here is a simple program to illustrate the situation: we have two > threads, each thread takes a lock, increments a counter, then releases the > lock: > > > import Control.Concurrent > > import Data.IORef > > > > main :: IO () > > main = > > do lock <- newMVar () > > counter <- newIORef 0 > > forkIO (thread lock counter) > > thread lock counter > > > > thread :: MVar () -> IORef Integer -> IO a > > thread lock counter = > > do takeMVar lock > > value <- readIORef counter > > print value > > writeIORef counter (value + 1) > > putMVar lock () > > thread lock counter > > The question is if this program has a race condition or not, due to the > use of IORefs? 
More explicitly, the concern is if a write to an IORef in > one thread is guaranteed to be seen by a read from the same IORef in > another thread, provided that there is proper synchronization between the > two. > > -Iavor > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Oct 21 21:20:36 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 21 Oct 2016 21:20:36 +0000 Subject: [Diffusion] [Build Failed] rGHC6ddba64287fe: Improve TcCanonical.unifyWanted and unifyDerived In-Reply-To: <20161021203203.9330.56560.6AD365AC@phabricator.haskell.org> References: <20161021203203.9330.56560.6AD365AC@phabricator.haskell.org> Message-ID: I'm still seeing this OSX failure, below. It seems to be to do with T5611, but I have no way to see what the problem is, and I don't think it's anything to do with me. When I click on "unlimited lines" to attempt to see the error, Firefox hangs, and I have to kill it. It's unsettling getting all these failure messages. Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 21 October 2016 21:32 To: Simon Peyton Jones Subject: [Diffusion] [Build Failed] rGHC6ddba64287fe: Improve TcCanonical.unifyWanted and unifyDerived Harbormaster failed to build B11529: rGHC6ddba64287fe: Improve TcCanonical.unifyWanted and unifyDerived! BRANCHES master USERS simonpj (Author) O11 (Auditor) COMMIT https://phabricator.haskell.org/rGHC6ddba64287fe EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj, Harbormaster -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthewtpickering at gmail.com Fri Oct 21 21:25:06 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 21 Oct 2016 22:25:06 +0100 Subject: [Diffusion] [Build Failed] rGHC6ddba64287fe: Improve TcCanonical.unifyWanted and unifyDerived In-Reply-To: References: <20161021203203.9330.56560.6AD365AC@phabricator.haskell.org> Message-ID: I can reproduce the failure locally but not always. No clue as to the sudden cause but here's the ticket and differential to mark the test as broken for now. https://ghc.haskell.org/trac/ghc/ticket/12751 https://phabricator.haskell.org/D2622 Matt On Fri, Oct 21, 2016 at 10:20 PM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I’m still seeing this OSX failure, below. > > > > It seems to be to do with T5611 but I have no way to see what the problem > is, and I don’t think it’s any thing to do with me > > > > When I click on “unlimited lines” to attempt to see the error, Firefox > hangs, and I have to kill it. > > > > It’s unsettling getting all these failure messages > > > > Simon > > > > *From:* noreply at phabricator.haskell.org [mailto:noreply at phabricator. > haskell.org] > *Sent:* 21 October 2016 21:32 > *To:* Simon Peyton Jones > *Subject:* [Diffusion] [Build Failed] rGHC6ddba64287fe: Improve > TcCanonical.unifyWanted and unifyDerived > > > > Harbormaster failed to build B11529: rGHC6ddba64287fe: Improve > TcCanonical.unifyWanted and unifyDerived! > > > > *BRANCHES* > > master > > > > *USERS* > > simonpj (Author) > O11 (Auditor) > > > > *COMMIT* > > https://phabricator.haskell.org/rGHC6ddba64287fe > > > > *EMAIL PREFERENCES* > > https://phabricator.haskell.org/settings/panel/emailpreferences/ > > > > *To: *simonpj, Harbormaster > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Sat Oct 22 12:39:26 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 22 Oct 2016 13:39:26 +0100 Subject: Windows Builder Disabled Message-ID: I have disabled the windows builder as it is consistently failing with the same error message. From this log for example: https://phabricator.haskell.org/harbormaster/build/14430/ ghc.exe: getMBlocks: VirtualAlloc MEM_COMMIT failed: The paging file is too small for this operation to complete. 764ghc.exe: failed to create OS thread: The paging file is too small for this operation to complete. Do you have any ideas Ben/Tamar? I can't see any recent commits which would have caused this failure. Matt From ben at well-typed.com Sat Oct 22 19:39:19 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 22 Oct 2016 15:39:19 -0400 Subject: Windows Builder Disabled In-Reply-To: References: Message-ID: <87a8dw6umw.fsf@ben-laptop.smart-cactus.org> Matthew Pickering writes: > I have disabled the windows builder as it is consistently failing with > the same error message. > > From this log for example: > https://phabricator.haskell.org/harbormaster/build/14430/ > > ghc.exe: getMBlocks: VirtualAlloc MEM_COMMIT failed: The paging file > is too small for this operation to complete. > 764ghc.exe: failed to create OS thread: The paging file is too small > for this operation to complete. > Thanks for doing this Matthew! > Do you have any ideas Ben/Tamar? I can't see any recent commits which > would have caused this failure. > Sadly no. This is quite odd. I'll need to look into this when I get back from Hac Phi. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Sat Oct 22 20:00:01 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Sat, 22 Oct 2016 21:00:01 +0100 Subject: Windows Builder Disabled In-Reply-To: <87a8dw6umw.fsf@ben-laptop.smart-cactus.org> References: <87a8dw6umw.fsf@ben-laptop.smart-cactus.org> Message-ID: <580bc541.0370c20a.c6071.026e@mx.google.com> Hi Matthew, These tests are only run sequentially, right? As in, Phab doesn’t build two diffs concurrently? If it used to pass then there might be something running on the server consuming memory. The pagefile is probably set to auto, but that means a certain % of the disk; if it needs more for some reason it won’t grow beyond this maximum. Someone would need to log in to see what’s going on 😊 Tamar From: Ben Gamari Sent: Saturday, October 22, 2016 20:39 To: Matthew Pickering; GHC developers; lonetiger at gmail.com Subject: Re: Windows Builder Disabled Matthew Pickering writes: > I have disabled the windows builder as it is consistently failing with > the same error message. > > From this log for example: > https://phabricator.haskell.org/harbormaster/build/14430/ > > ghc.exe: getMBlocks: VirtualAlloc MEM_COMMIT failed: The paging file > is too small for this operation to complete. > 764ghc.exe: failed to create OS thread: The paging file is too small > for this operation to complete. > Thanks for doing this Matthew! > Do you have any ideas Ben/Tamar? I can't see any recent commits which > would have caused this failure. > Sadly no. This is quite odd. I'll need to look into this when I get back from Hac Phi. Cheers, - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sat Oct 22 22:23:53 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 22 Oct 2016 22:23:53 +0000 Subject: [commit: ghc] master: A collection of type-inference refactorings.
(3f5673f) In-Reply-To: <20161021161635.9733E3A300@ghc.haskell.org> References: <20161021161635.9733E3A300@ghc.haskell.org> Message-ID: Friends You'll have seen a brief flurry of commits from me, of which this is the biggest. I've managed to spend some time on the typechecker recently, and this is the result. I'm pleased with the results (simpler code, more predictable behaviour), but although I worked through the changes with Richard, I may have introduced new bugs, so keep an eye out. More on the way! Simon | >--------------------------------------------------------------- | | commit 3f5673f34a2f761423027bf46f64f7499708725f | Author: Simon Peyton Jones | Date: Tue Sep 20 23:31:07 2016 +0100 | | A collection of type-inference refactorings. | | This patch does a raft of useful tidy-ups in the type checker. | I've been meaning to do this for some time, and finally made | time to do it en route to ICFP. | | 1. Modify TcType.ExpType to make a distinct data type, | InferResult for the Infer case, and consequential | refactoring. | | 2. Define a new function TcUnify.fillInferResult, to fill in | an InferResult. It uses TcMType.promoteTcType to promote | the type to the level of the InferResult. | See TcMType Note [Promoting a type] | This refactoring is in preparation for an improvement | to typechecking pattern bindings, coming next. | | I flirted with an elaborate scheme to give better | higher rank inference, but it was just too complicated. | See TcMType Note [Promotion and higher rank types] | | 3. Add to InferResult a new field ir_inst :: Bool to say | whether or not the type used to fill in the | InferResult should be deeply instantiated. See | TcUnify Note [Deep instantiation of InferResult]. | | 4. Add a TcLevel to SkolemTvs. 
This will be useful generally | | - it's a fast way to see if the type | variable escapes when floating (not used yet) | | - it provides a good consistency check when updating a | unification variable (TcMType.writeMetaTyVarRef, the | level_check_ok check) | | I originally had another reason (related to the flirting | in (2)), but I left it in because it seems like a step in | the right direction. | | 5. Reduce and simplify the plethora of uExpType, | tcSubType and related functions in TcUnify. It was | such an opaque mess and it's still not great, but it's | better. | | 6. Simplify the uo_expected field of TypeEqOrigin. Richard | had generalised it to an ExpType, but it was almost always | a Check type. Now it's back to being a plain TcType which | is much, much easier. | | 7. Improve error messages by refraining from skolemisation when | it's clear that there's an error: see | TcUnify Note [Don't skolemise unnecessarily] | | 8. Type.isPiTy and isForAllTy seem to be missing a coreView check, | so I added it | | 9. Kill off tcs_used_tcvs. Its purpose is to track the | givens used by wanted constraints. For dictionaries etc | we do that via the free vars of the /bindings/ in the | implication constraint ic_binds. But for coercions we | just do update-in-place in the type, rather than | generating a binding. So we need something analogous to | bindings, to track what coercions we have added. | | That was the purpose of tcs_used_tcvs. But it only | worked for a /single/ iteration, whereas we may have | multiple iterations of solving an implication. Look | at (the old) 'setImplicationStatus'. If the constraint | is unsolved, it just drops the used_tvs on the floor. | If it becomes solved next time round, we'll pick up | coercions used in that round, but ignore ones used in | the first round. | | There was an outright bug. Result = (potentially) bogus | unused-constraint errors. Constructing a case where this | actually happens seems quite tricky, so I did not do so.
| | Solution: expand EvBindsVar to include the (free vars of | the) coercions, so that the coercions are tracked in | essentially the same way as the bindings. | | This turned out to be much simpler. Less code, more | correct. | | 10. Make the ic_binds field in an implication have type | ic_binds :: EvBindsVar | instead of (as previously) | ic_binds :: Maybe EvBindsVar | This is notably simpler, and faster to use -- less | testing of the Maybe. But in the occaional situation | where we don't have anywhere to put the bindings, the | belt-and-braces error check is lost. So I put it back | as an ASSERT in 'setImplicationStatus' (see the use of | 'termEvidenceAllowed') | | All these changes led to quite bit of error message wibbling | | | >--------------------------------------------------------------- | | 3f5673f34a2f761423027bf46f64f7499708725f | compiler/ghci/RtClosureInspect.hs | 2 +- | compiler/typecheck/Inst.hs | 4 +- | compiler/typecheck/TcBinds.hs | 90 +-- | compiler/typecheck/TcErrors.hs | 44 +- | compiler/typecheck/TcEvidence.hs | 33 +- | compiler/typecheck/TcExpr.hs | 24 +- | compiler/typecheck/TcHsSyn.hs | 7 +- | compiler/typecheck/TcHsType.hs | 21 +- | compiler/typecheck/TcInstDcls.hs | 7 +- | compiler/typecheck/TcMType.hs | 324 ++++++--- | compiler/typecheck/TcMatches.hs | 19 +- | compiler/typecheck/TcPat.hs | 22 +- | compiler/typecheck/TcPatSyn.hs | 16 +- | compiler/typecheck/TcPluginM.hs | 16 +- | compiler/typecheck/TcRnDriver.hs | 9 +- | compiler/typecheck/TcRnMonad.hs | 28 +- | compiler/typecheck/TcRnTypes.hs | 29 +- | compiler/typecheck/TcSMonad.hs | 124 ++-- | compiler/typecheck/TcSimplify.hs | 58 +- | compiler/typecheck/TcType.hs | 78 ++- | compiler/typecheck/TcUnify.hs | 731 | +++++++++++++++------ | compiler/typecheck/TcValidity.hs | 2 +- | compiler/types/Type.hs | 2 + | compiler/vectorise/Vectorise/Generic/PData.hs | 2 +- | testsuite/tests/ado/ado004.stderr | 4 +- | .../tests/annotations/should_fail/annfail10.stderr | 12 +- | 
testsuite/tests/driver/T2182.stderr | 32 +- | testsuite/tests/gadt/gadt-escape1.stderr | 16 +- | testsuite/tests/gadt/gadt13.stderr | 10 +- | testsuite/tests/gadt/gadt7.stderr | 18 +- | .../tests/ghci.debugger/scripts/break012.stdout | 8 +- | .../tests/ghci.debugger/scripts/print022.stdout | 4 +- | testsuite/tests/ghci/scripts/T11524a.stdout | 4 +- | testsuite/tests/ghci/scripts/T2182ghci.stderr | 10 +- | .../tests/indexed-types/should_fail/T12386.hs | 9 + | .../tests/indexed-types/should_fail/T12386.stderr | 7 + | .../tests/indexed-types/should_fail/T5439.stderr | 16 +- | .../tests/indexed-types/should_fail/T7354.stderr | 8 +- | .../tests/parser/should_compile/read014.stderr | 2 +- | testsuite/tests/parser/should_fail/T7848.stderr | 5 +- | .../tests/parser/should_fail/readFail003.stderr | 4 +- | .../partial-sigs/should_compile/T10438.stderr | 14 +- | .../partial-sigs/should_compile/T11192.stderr | 16 +- | .../tests/patsyn/should_compile/T11213.stderr | 2 +- | testsuite/tests/patsyn/should_fail/mono.stderr | 4 +- | testsuite/tests/polykinds/T7438.stderr | 16 +- | testsuite/tests/rebindable/rebindable6.stderr | 12 +- | .../tests/rename/should_compile/T12597.stderr | 2 +- | testsuite/tests/roles/should_compile/T8958.stderr | 5 +- | .../simplCore/should_compile/noinline01.stderr | 4 +- | testsuite/tests/th/T11452.stderr | 2 +- | testsuite/tests/th/T2222.stderr | 2 +- | .../typecheck/should_compile/ExPatFail.stderr | 4 +- | .../should_compile/T12427.stderr} | 0 | .../tests/typecheck/should_compile/T12427a.stderr | 33 + | .../tests/typecheck/should_compile/tc141.stderr | 6 +- | .../tests/typecheck/should_fail/T10495.stderr | 10 +- | .../tests/typecheck/should_fail/T10619.stderr | 4 +- | .../tests/typecheck/should_fail/T12177.stderr | 19 +- | testsuite/tests/typecheck/should_fail/T3102.hs | 6 +- | testsuite/tests/typecheck/should_fail/T3102.stderr | 12 - | testsuite/tests/typecheck/should_fail/T7453.stderr | 50 +- | testsuite/tests/typecheck/should_fail/T7734.stderr | 12 
+- | testsuite/tests/typecheck/should_fail/T9109.stderr | 10 +- | testsuite/tests/typecheck/should_fail/T9318.stderr | 12 +- | .../tests/typecheck/should_fail/VtaFail.stderr | 2 +- | testsuite/tests/typecheck/should_fail/all.T | 2 +- | .../tests/typecheck/should_fail/tcfail002.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail004.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail005.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail013.stderr | 2 +- | .../tests/typecheck/should_fail/tcfail014.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail018.stderr | 2 +- | .../tests/typecheck/should_fail/tcfail032.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail099.stderr | 6 +- | .../tests/typecheck/should_fail/tcfail104.stderr | 10 +- | .../tests/typecheck/should_fail/tcfail140.stderr | 4 +- | .../tests/typecheck/should_fail/tcfail181.stderr | 2 +- | .../tests/warnings/should_compile/T12574.stderr | 2 +- | 79 files changed, 1321 insertions(+), 859 deletions(-) | | Diff suppressed because of size. To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 3f5673f34a2f761423027bf46f64f7499708725f | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | commits&data=02%7C01%7Csimonpj%40microsoft.com%7Ccf7d693b723d4b7b061b08d3 | f9cda27f%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636126634003055994& | sdata=p7fR8mA%2BXBSWTN%2B7pX6B2qs9zYM0CiXPsSI%2BvK21d8Q%3D&reserved=0 From simonpj at microsoft.com Sat Oct 22 23:11:03 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 22 Oct 2016 23:11:03 +0000 Subject: Aarrgh! Windows build broken again Message-ID: On Windows with HEAD I get C:/code/HEAD/inplace/mingw/bin/ld.exe: cannot find -lnuma Sigh. 
This didn't happen a day or two ago Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sat Oct 22 23:13:28 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 23 Oct 2016 00:13:28 +0100 Subject: Aarrgh! Windows build broken again In-Reply-To: References: Message-ID: Erik has a patch which fixes this. I will merge it now. Matt On Sun, Oct 23, 2016 at 12:11 AM, Simon Peyton Jones via ghc-devs wrote: > On Windows with HEAD I get > > C:/code/HEAD/inplace/mingw/bin/ld.exe: cannot find -lnuma > > Sigh. This didn’t happen a day or two ago > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From mle+hs at mega-nerd.com Sun Oct 23 05:59:49 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Sun, 23 Oct 2016 16:59:49 +1100 Subject: [commit: ghc] master: A collection of type-inference refactorings. (3f5673f) In-Reply-To: References: <20161021161635.9733E3A300@ghc.haskell.org> Message-ID: <20161023165949.eb3d18c2a409822556971d96@mega-nerd.com> Simon Peyton Jones via ghc-devs wrote: > You'll have seen a brief flurry of commits from me, of which this is the > biggest. I've managed to spend some time on the typechecker recently, and > this is the result. > > I'm pleased with the results (simpler code, more predictable behaviour), > but although I worked through the changes with Richard, I may have > introduced new bugs, so keep an eye out. 
Simon, I suspect these commits may be responsible for 11 test failures: Unexpected results from: TEST="T9857 T12007 T9732 T9783 T9867 unboxed-wrapper-naked match-unboxed num T12698 T4439 unboxed-wrapper" SUMMARY for test run started at Sun Oct 23 16:56:26 2016 AEDT 0:00:02 spent to go through 11 total tests, which gave rise to 47 test cases, of which 36 were skipped 0 had missing libraries 0 expected passes 0 expected failures 0 caused framework failures 0 unexpected passes 11 unexpected failures 0 unexpected stat failures Unexpected failures: deSugar/should_compile/T4439.run T4439 [exit code non-0] (normal) ghci/scripts/T12007.run T12007 [bad stderr] (ghci) patsyn/should_compile/num.run num [exit code non-0] (normal) patsyn/should_compile/T9732.run T9732 [exit code non-0] (normal) patsyn/should_compile/T9857.run T9857 [exit code non-0] (normal) patsyn/should_compile/T9867.run T9867 [exit code non-0] (normal) patsyn/should_compile/T12698.run T12698 [exit code non-0] (normal) patsyn/should_fail/unboxed-wrapper-naked.run unboxed-wrapper-naked [stderr mismatch] (normal) patsyn/should_run/T9783.run T9783 [exit code non-0] (normal) patsyn/should_run/match-unboxed.run match-unboxed [exit code non-0] (normal) patsyn/should_run/unboxed-wrapper.run unboxed-wrapper [exit code non-0] (normal) They all fail with exactly the same callstack: ghc-stage2: panic! (the 'impossible' happened) (GHC version 8.1.20161022 for x86_64-unknown-linux): ASSERT failed! 
Infer{apr,2 True} :: TYPE t_apq[tau:2] a_a1pr[tau:2] Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1125:22 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcUnify.hs:547:56 in ghc:TcUnify Call stack: CallStack (from HasCallStack): prettyCurrentCallStack, called at compiler/utils/Outputable.hs:1076:58 in ghc:Outputable callStackDoc, called at compiler/utils/Outputable.hs:1080:37 in ghc:Outputable pprPanic, called at compiler/utils/Outputable.hs:1123:5 in ghc:Outputable assertPprPanic, called at compiler/typecheck/TcUnify.hs:547:56 in ghc:TcUnify Cheers, Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From simonpj at microsoft.com Mon Oct 24 07:40:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 24 Oct 2016 07:40:20 +0000 Subject: Aarrgh! Windows build broken again In-Reply-To: References: Message-ID: Thanks for such a prompt reply -- I'll try that. Simon | -----Original Message----- | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] | Sent: 23 October 2016 00:13 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Aarrgh! Windows build broken again | | Erik has a patch which fixes this. I will merge it now. | | Matt | | On Sun, Oct 23, 2016 at 12:11 AM, Simon Peyton Jones via ghc-devs | wrote: | > On Windows with HEAD I get | > | > C:/code/HEAD/inplace/mingw/bin/ld.exe: cannot find -lnuma | > | > Sigh. 
This didn’t happen a day or two ago | > | > Simon | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h | > askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Ca7ec55f6a94d429104a008d3 | fad1098b%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6361277481201917 | 06&sdata=IswaW41WhOlEA2tHdxqjLfjaay3Eob%2B1RSo%2F3u4hqlI%3D&reserved=0 | > From trp at bluewin.ch Mon Oct 24 09:16:08 2016 From: trp at bluewin.ch (Peter Trommler) Date: Mon, 24 Oct 2016 11:16:08 +0200 Subject: Semantics of MVars and IORefs? In-Reply-To: References: Message-ID: <97D731AC-C482-48E6-8DE2-94388AA9BA38@bluewin.ch> Hi Iavor and Ryan, One ticket on memory model issues is #12469. At openSUSE we see several build failures only now because we recently switched to parallel Cabal builds. A compiled Cabal Setup that is called with -j sometimes segfaults on PowerPC. Actually, if I try building package OpenGL locally on my PowerMac Setup -j almost always fails with a segfault if n is the number of cores or higher. See also #12537. When building on Open(SUSE) Build Service the build fails only sometimes. We build all of LTS Haskell and a random selection of around 40 packages fail, most of them with segfaults in Setup but some with GHC panics. #12469 has examples. Peter > On 21.10.2016, at 19:56, Ryan Yates wrote: > > Hi Iavor, > > You might be interested in what Edward has written about this: > > http://blog.ezyang.com/2014/01/so-you-want-to-add-a-new-concurrency-primitive-to-ghc/ > > I would say when we do have a memory model for GHC the program you gave will almost certainly be correct. MVar operations should be full synchronization operations. There are some bugs on relaxed systems with initialization that I think are being addressed. I can't find the tickets at the moment. 
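The guarantee Ryan describes, namely that a putMVar/takeMVar pair acts as a full synchronization point so that plain IORef writes made before the putMVar are visible after the matching takeMVar, can be sketched in a few lines. This is only an illustration of the intended semantics; a program like this cannot by itself show that the barrier is actually emitted on a relaxed architecture such as PowerPC:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  ref  <- newIORef (0 :: Int)
  done <- newEmptyMVar
  _ <- forkIO $ do
    writeIORef ref 42  -- plain IORef write, no ordering of its own
    putMVar done ()    -- release: should publish the write above
  takeMVar done        -- acquire: pairs with the putMVar
  v <- readIORef ref   -- must observe 42 given full-barrier MVars
  print v
```

Under the intended semantics this program must print 42; a relaxed-memory initialization bug of the kind tracked in #12469 would show up as the reading thread occasionally observing a stale value instead.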
> > Ryan From mail at joachim-breitner.de Mon Oct 24 16:47:35 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 24 Oct 2016 12:47:35 -0400 Subject: Long-term storage of abandoned branches Message-ID: <1477327655.10367.1.camel@joachim-breitner.de> Hi, in https://ghc.haskell.org/trac/ghc/ticket/12618#comment:37 Simon raises a question that I was wondering about as well: Where do we want to store feature branches that contain useful work that might be picked up some time later (maybe much later¹)? So far, I have left them as wip/foobar branches. Which works ok, but it clutters up the branch namespace, and “wip” is a lie. Maybe I should move the branch to archive/foobar, to make it clear that this is not something actively worked on? An alternative is having it in Phab only, where it is “more out of the way”, and there is commentary attached to it. Linked from the appropriate ticket, the code is as accessible as  a git branch. But are we committed to keeping the Differential Revisions around for years? Also, if the branch contains many small commits, which presumably makes it more useful to whoever revives the project one day, it is easy to recover that from Phab? (I wonder because arc land squashes commits.) Greetings, Joachim ¹ this does happen, see    https://ghc.haskell.org/trac/ghc/ticket/1600#comment:52 -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From michal.terepeta at gmail.com Mon Oct 24 17:41:08 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 24 Oct 2016 17:41:08 +0000 Subject: Dataflow analysis for Cmm In-Reply-To: References: Message-ID: On Fri, Oct 21, 2016 at 4:04 PM Simon Marlow wrote: > On 16 October 2016 at 14:03, Michal Terepeta > wrote: > > Hi, > > I was looking at cleaning up a bit the situation with dataflow analysis > for Cmm. > In particular, I was experimenting with rewriting the current > `cmm.Hoopl.Dataflow` module: > - To only include the functionality to do analysis (since GHC doesn’t seem > to use > the rewriting part). > Benefits: > - Code simplification (we could remove a lot of unused code). > - Makes it clear what we’re actually using from Hoopl. > - To have an interface that works with transfer functions operating on a > whole > basic block (`Block CmmNode C C`). > This means that it would be up to the user of the algorithm to traverse > the > whole block. > > > Ah! This is actually something I wanted to do but didn't get around to. > When I was working on the code generator I found that using Hoopl for > rewriting was prohibitively slow, which is why we're not using it for > anything right now, but I think that pulling out the basic block > transformation is possibly a way forwards that would let us use Hoopl. > Right, I've also seen: https://plus.google.com/107890464054636586545/posts/dBbewpRfw6R https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/HooplPerformance but it seems that there weren't any follow-ups/conclusions on that. Also, I haven't started writing anything for the rewriting yet (only analysis for now). Btw. I'm currently experimenting with the GHC's fork of Dataflow module - and for now I'm not planning on pushing the changes to the upstream Hoopl. 
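A toy, self-contained analogue of the block-oriented interface described above may make the idea concrete. The types below are invented for illustration (the real code works over Hoopl's shape-indexed `Block CmmNode C C` and analysis-specific fact lattices); the point is that the framework hands the transfer function a whole block together with the incoming fact, and the analysis walks the nodes itself:

```haskell
-- Toy stand-ins for Cmm nodes and dataflow facts (invented for
-- illustration; GHC's real nodes are CmmNode and its facts are
-- analysis-specific lattices).
data Node = Assign String Int | Use String

type Block = [Node]    -- a basic block as a plain node list
type Fact  = [String]  -- fact: names defined so far

-- Block-level transfer function: the analysis, not the framework,
-- decides how to traverse the block. Here it visits every node; a
-- cheaper analysis could look only at the first and last nodes.
transferBlock :: Block -> Fact -> Fact
transferBlock block fact0 = foldl step fact0 block
  where
    step f (Assign v _)
      | v `elem` f = f
      | otherwise  = v : f
    step f (Use _) = f

main :: IO ()
main = print (transferBlock [Assign "x" 1, Use "x", Assign "y" 2] [])
-- prints ["y","x"]
```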
There are already projects that depend on the current interface of Hoopl (it's on Hackage after all) and it's going to be hard to make certain changes there. Hope that's ok with everyone! (also, we can always revisit this question later) A lot of the code you're removing is my attempt at "optimising" the Hoopl > dataflow algorithm to make it usable in GHC. (I don't mind removing this, > it was a failed experiment really) > Thanks for saying that! > Benefits: > - Further simplifications. > - We could remove `analyzeFwdBlocks` hack, which AFAICS is just a > copy&paste > of `analyzeFwd` but ignores the middle nodes (probably for efficiency > of > analyses that only look at the blocks). > > > Aren't we using this in dataflowAnalFwdBlocks, that's used by > procpointAnalysis? > Yes, sorry for confusion! What I meant is that analyzeFwdBlocks/dataflowAnalFwdBlocks is currently a special case of analyzeFwd/dataflowAnalFwd that only looks at first and last nodes. So if we move to block-oriented interface, it simply stops being a special case and fits the new interface (since it's the analysis that decides whether to look at the whole block or only parts of it). So it's removed in the sense of "removing a special case". Cheers, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnw at newartisans.com Mon Oct 24 18:13:30 2016 From: johnw at newartisans.com (John Wiegley) Date: Mon, 24 Oct 2016 11:13:30 -0700 Subject: Long-term storage of abandoned branches In-Reply-To: <1477327655.10367.1.camel@joachim-breitner.de> (Joachim Breitner's message of "Mon, 24 Oct 2016 12:47:35 -0400") References: <1477327655.10367.1.camel@joachim-breitner.de> Message-ID: >>>>> "JB" == Joachim Breitner writes: JB> Maybe I should move the branch to archive/foobar, to make it clear that JB> this is not something actively worked on? 
As a side note: you could move them into a different ref spec entirely, which is not pulled by default when someone clone's or pull's. For example, you could store them under: refs/archives/foobar Which would require a pullspec to be added before someone can gain access to these archives: git config remote.origin.fetch '+refs/archives/*:refs/remotes/origin/archives/*' GitHub does something very similar with its pull requests, which are stored under a "pull" ref spec, and are not cloned by default, but can be accessed by adding the appropriate fetch attribute: +refs/pull/*:refs/remotes/origin/pr/* -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From mail at joachim-breitner.de Mon Oct 24 18:39:52 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 24 Oct 2016 14:39:52 -0400 Subject: Long-term storage of abandoned branches In-Reply-To: References: <1477327655.10367.1.camel@joachim-breitner.de> Message-ID: <1477334392.10367.6.camel@joachim-breitner.de> Hi, Am Montag, den 24.10.2016, 11:13 -0700 schrieb John Wiegley: > JB> Maybe I should move the branch to archive/foobar, to make it clear that > JB> this is not something actively worked on? > > As a side note: you could move them into a different ref spec entirely, which > is not pulled by default when someone clone's or pull's. I know about the possibility, but I think that would simply be too undiscoverable then. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Mon Oct 24 23:25:12 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 24 Oct 2016 19:25:12 -0400 Subject: New home for the perf.haskell.org builder wanted Message-ID: <1477351512.14842.9.camel@joachim-breitner.de> Hi, although I have moved away from Karlsruhe three months ago, so far it is still my office PC driving https://perf.haskell.org/ghc/ But a new person is now using my desk and wants to use this machine, so I should really really move this away from there now. Sebastian Graf has been working on turning gipeda, the Frontend perf.haskell.org, into a more general service open to open source Haskell projects, and this is close, but not close enough to simply stop running the performance tests until he is good to go. He currently has a machine given from haskell.org to run this on, but it is a virtual machine and the measurements are too flaky for real use. So basically, I need a decent non-virtualized (or virtualized, but exclusive) machine to move my performance build runner to, as quickly as possible. The current specs are  * 8 core Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz   * 16 GB RAM but it does not matter too much, a slightly weaker machine would be able to keep up as well. I also do not necessarily need root access (but it would be beyond the point if the machine would do other stuff that incurs a heavy load). The same machine could then be used by Sebastian for his more general setup, once that is ready to go. Does anyone have something handy? Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From ryan.trinkle at gmail.com Tue Oct 25 13:57:55 2016 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Tue, 25 Oct 2016 09:57:55 -0400 Subject: New home for the perf.haskell.org builder wanted In-Reply-To: References: <1477351512.14842.9.camel@joachim-breitner.de> Message-ID: Hi Joachim and Sebastian, I think it would make sense to get a machine like that with Haskell.org funds. My company (Obsidian) would be happy to host it physically and cover internet/power/etc., although our facilities aren't too fancy (no redundant connections or anything like that). Best, Ryan On Tue, Oct 25, 2016 at 2:54 AM, Sebastian Graf wrote: > Hi, > > I'm the guy getting things going with the a more involved architecture ( > https://groups.google.com/forum/#!topic/haskell-cafe/Ak0eMiDVaCQ). I'm > currently reading into how to set up proper container images for master and > slave nodes, so > > I also do not necessarily need root access > > > If I'm not mistaken, root access will probably be needed for Docker, or > some way to submit container images to run stuff on. > > So long, > Sebastian > > On Tue, Oct 25, 2016 at 1:25 AM, Joachim Breitner < > mail at joachim-breitner.de> wrote: > >> Hi, >> >> although I have moved away from Karlsruhe three months ago, so far it >> is still my office PC driving >> https://perf.haskell.org/ghc/ >> >> But a new person is now using my desk and wants to use this machine, so >> I should really really move this away from there now. >> >> Sebastian Graf has been working on turning gipeda, the Frontend >> perf.haskell.org, into a more general service open to open source >> Haskell projects, and this is close, but not close enough to simply >> stop running the performance tests until he is good to go. 
>> He currently has a machine given from haskell.org to run this on, but >> it is a virtual machine and the measurements are too flaky for real >> use. >> >> So basically, I need a decent non-virtualized (or virtualized, but >> exclusive) machine to move my performance build runner to, as quickly >> as possible. The current specs are >> * 8 core Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz >> * 16 GB RAM >> but it does not matter too much, a slightly weaker machine would be >> able to keep up as well. I also do not necessarily need root access >> (but it would be beyond the point if the machine would do other stuff >> that incurs a heavy load). >> >> The same machine could then be used by Sebastian for his more general >> setup, once that is ready to go. >> >> Does anyone have something handy? >> >> Greetings, >> Joachim >> >> >> -- >> Joachim “nomeata” Breitner >> mail at joachim-breitner.de • https://www.joachim-breitner.de/ >> XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F >> Debian Developer: nomeata at debian.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Oct 25 14:01:29 2016 From: ben at well-typed.com (Ben Gamari) Date: Tue, 25 Oct 2016 10:01:29 -0400 Subject: GHC 8.0.2 status In-Reply-To: References: <87twck8duc.fsf@ben-laptop.smart-cactus.org> Message-ID: <87insg5xza.fsf@ben-laptop.smart-cactus.org> George Colpitts writes: > Hi > > Where are we on this? 12479 was last updated 5 days ago and it is not clear > who has the next action. > Unfortunately, the merge of #12479 took longer than expected and soon thereafter yet another terrible bug has reared its ugly head. See #12757. I'm trying to work out what to do about this today. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Tue Oct 25 15:56:10 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 25 Oct 2016 11:56:10 -0400 Subject: setnumcapabilities001 failure Message-ID: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> Hi Simon, It seems that setnumcapabilities001 still occasionally fails, although this time by a different mode: https://phabricator.haskell.org/harbormaster/build/14485/?l=100 Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Tue Oct 25 18:02:06 2016 From: ben at well-typed.com (Ben Gamari) Date: Tue, 25 Oct 2016 14:02:06 -0400 Subject: [Help needed] GHC HCAR Submission Contributors Message-ID: <87wpgwcnoh.fsf@ben-laptop.smart-cactus.org> Hello everyone, Haskell Communities and Activities Report submission season is once again upon us. If you are receiving this message then you have a major contribution listed on the GHC 8.2 status page and therefore should have a corresponding entry in GHC's HCAR submission. I've collected some notes for our submission in the usual place [1]. Please have a look and refine as you see fit. In particular, I'd really like to see some text added by: * Edward Yang: Backpack * Ryan Scott: Deriving strategies, new classes * Simon Marlow: NUMA and scheduler changes * Bartosz Nitka: Determinism Thanks for your help! Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Status/Oct16 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ecrockett0 at gmail.com Wed Oct 26 04:27:11 2016 From: ecrockett0 at gmail.com (Eric Crockett) Date: Wed, 26 Oct 2016 00:27:11 -0400 Subject: Tool for minimizing examples Message-ID: Devs: as I'm sure you know, the hardest part of reporting a GHC bug is finding a minimal example that triggers the bug. When I initially trigger a bug in my large code base, my workflow is something like: 1. write a driver that triggers the bug 2. do manual dead code elimination by removing unused files and functions 3. "human required" step to figure out what can be trimmed to further minimize 4. go to step 2 until the example is simple enough Since I work on a large library (>60 modules) and also report a fair number of bugs, I spend a nontrivial amount of time on step 2, which is completely mechanical. 
It would be nice to have a tool that can help out. Specifically, something that takes a "driver" file, and produces a copy of the code contents to a new directory sans unimported files, and unused functions from imported files. Ideally, this tool would make a "closed universe" assumption so that exported functions can also be eliminated as dead, if they are never used elsewhere. A bonus feature would be to remove unused imports, and even unused build-depends from the cabal file. Are there any tools out there that can do any portion of this process for me? Perhaps it is possible to output contents after the compiler does a DCE pass? Regards, Eric Crockett -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Oct 26 06:28:55 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 25 Oct 2016 23:28:55 -0700 Subject: Tool for minimizing examples In-Reply-To: References: Message-ID: <1477463162-sup-7604@sabre> I asked about this on Twitter a while back and John Regehr suggested that we give C-reduce a try. I have not yet but if you try it out I'm quite curious to see what happens. Edward Excerpts from Eric Crockett's message of 2016-10-26 00:27:11 -0400: > Devs: as I'm sure you know, the hardest part of reporting a GHC bug is > finding a minimal example that triggers the bug. When I initially trigger a > bug in my large code base, my workflow is something like: > > 1. write a driver that triggers the bug > 2. do manual dead code elimination by removing unused files and functions > 3. "human required" step to figure out what can be trimmed to further > minimize > 4. go to step 2 until example is simple enouogh > > Since I work on a large library (>60 modules) and also report a fair number > of bugs, I spend a nontrivial amount of time on step 2, which is completely > mechanical. It would be nice to have a tool that can help out. 
> Specifically, something that takes a "driver" file, and produces a copy of > the code contents to a new directory sans unimported files, and unused > functions from imported files. > > Ideally, this tool would make a "closed universe" assumption so that > exported functions can also be eliminated as dead, if they are never used > elsewhere. A bonus feature would be to remove unused imports, and even > unused build-depends from the cabal file. > > Are there any tools out there that can do any portion of this process for > me? Perhaps it is possible to output contents after the compiler does a DCE > pass? > > Regards, > Eric Crockett From simonpj at microsoft.com Wed Oct 26 07:21:40 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 26 Oct 2016 07:21:40 +0000 Subject: Tool for minimizing examples In-Reply-To: References: Message-ID: Great question. And just to add: I _really_ appreciate the fact that you make small examples. Thank you. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Eric Crockett Sent: 26 October 2016 05:27 To: ghc-devs Subject: Tool for minimizing examples Devs: as I'm sure you know, the hardest part of reporting a GHC bug is finding a minimal example that triggers the bug. When I initially trigger a bug in my large code base, my workflow is something like: 1. write a driver that triggers the bug 2. do manual dead code elimination by removing unused files and functions 3. "human required" step to figure out what can be trimmed to further minimize 4. go to step 2 until example is simple enouogh Since I work on a large library (>60 modules) and also report a fair number of bugs, I spend a nontrivial amount of time on step 2, which is completely mechanical. It would be nice to have a tool that can help out. Specifically, something that takes a "driver" file, and produces a copy of the code contents to a new directory sans unimported files, and unused functions from imported files. 
Ideally, this tool would make a "closed universe" assumption so that exported functions can also be eliminated as dead, if they are never used elsewhere. A bonus feature would be to remove unused imports, and even unused build-depends from the cabal file. Are there any tools out there that can do any portion of this process for me? Perhaps it is possible to output contents after the compiler does a DCE pass? Regards, Eric Crockett -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 26 10:43:22 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 26 Oct 2016 10:43:22 +0000 Subject: How to best display type variables with the same name In-Reply-To: References: Message-ID: Chris As far as I understand it: · You are plucking a type from the midst of a syntax tree, and displaying it. · That type might well mention type variables that are bound “further out” o either by a forall (if this is a sub-tree of a type) o or by a big lambda · There are some tricky UI issues in how to display such types to the user. Generally, I think it’s mainly up to you to track which type variables are in scope from “further out”. It’s not a property that is stable under transformation, so it’s not part of the TyVar. The typechecker itself uses “tidying” to avoid accidentally displaying distinct type variables in the same way. See TyCoRep.tidyType and related functions. They may be useful to you too. Hard for me to say more… I’m swamped, and there are genuine UI issues here. Maybe some folk on Haskell Café might be interested. Simon From: Christopher Done [mailto:chrisdone at gmail.com] Sent: 21 October 2016 15:07 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: How to best display type variables with the same name On 19 October 2016 at 17:00, Simon Peyton Jones simonpj at microsoft.com wrote: I’m afraid I didn’t understand the issue in the link below. 
It speaks of “querying the type”, but I’m not sure what that means. A GHCi session perhaps? Does this relate to the way GHCi displays types? I’m a bit lost. A from-the-beginning example, showing steps and what the unexpected behaviour is would be helpful (to me anyway) Sure. I’ll explain from top-level down: · In this case “querying the type” means running the :type-at command in Intero (which itself is a fork of GHCi’s codebase around GHC 7.10): https://github.com/commercialhaskell/intero/blob/master/src/InteractiveUI.hs#L1693-L1713 It accepts a file name, line-col to line-col span and prints the type of that expression/pattern. As you can see in that function it uses printForUserModInfo (from GhcMonad), similar to (scroll above) the printForUser for GHCi’s regular :type command. · Where does that info come from? When we load a module in Intero, we perform an additional step of “collecting info” here: https://github.com/commercialhaskell/intero/blob/master/src/GhciInfo.hs#L73 That info, for each node in the AST, is ultimately stored in a SpanInfo: https://github.com/commercialhaskell/intero/blob/master/src/GhciTypes.hs#L28-L39 Which we then use for :type-at. So in summary we collect info from tm_typechecked_source, keep that for later, and then when the user’s editor asks via e.g. :type-at X.hs 1 5 1 7 “what is the type of the thing at this point?” we use GHC’s regular pretty printing function to print that type. That actually all works splendidly. For example, if we query foo g f = maybe g f -- ^ here or ^ here yields g :: b foo g f = maybe g f -- ^ here or ^ here yields: f :: a -> b The tricky part arises in this example: https://github.com/commercialhaskell/intero/issues/280#issuecomment-254784904 Which is that we have two perfectly cromulent types from the AST that are both a in isolation, but are actually different. They will have different Unique values in their Name’s and come from different implicit forall‘s. 
The question is what's a good way to communicate this to the user? This is partly a "user interface" question, and on the side a "given an ideal UI, do we have the necessary info in the GHC API?" If it helps, I could probably spend some time making an isolated module that uses the GHC API to compile a file and then report these types. Ciao! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Oct 26 13:23:19 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 26 Oct 2016 09:23:19 -0400 Subject: Tool for minimizing examples In-Reply-To: References: Message-ID: <871sz35jnc.fsf@ben-laptop.smart-cactus.org> Eric Crockett writes: > Are there any tools out there that can do any portion of this process for > me? This is a good question. Unfortunately I don't know of any; I do my minimizations by hand. > Perhaps it is possible to output contents after the compiler does a DCE > pass? > Unfortunately I don't believe this would be helpful as DCE isn't done on the Haskell syntax tree representation and we have no external representation of Core. That being said, you might be able to write something without too much difficulty using the GHC API. I have a small toy project that I once used to explore the API; you might find that it's a useful place to start [1]. Sorry for the not-so-helpful response! Cheers, - Ben [1] https://github.com/bgamari/play-type-search -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Wed Oct 26 15:47:09 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 26 Oct 2016 11:47:09 -0400 Subject: IRC: Logging #ghc with ircbrowse.net In-Reply-To: <87poobpltf.fsf@ben-laptop.smart-cactus.org> References: <87poobpltf.fsf@ben-laptop.smart-cactus.org> Message-ID: <87oa273yf6.fsf@ben-laptop.smart-cactus.org> Hello everyone! 
I'm happy to report that #ghc is now available on Chris Done's lovely ircbrowse.net [1]. Thanks to Chris for his help in making this happen. Note that currently there are no logs for any time prior to the last few days. However, I have ZNC logs which Chris said he may be able to import which cover the last several years of #ghc activity. If there is no objection I would like to have him import these so we have a more permanent archive of the channel. Does anyone object? Cheers, - Ben [1] http://ircbrowse.net/ghc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Thu Oct 27 08:03:32 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 27 Oct 2016 09:03:32 +0100 Subject: setnumcapabilities001 failure In-Reply-To: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> Message-ID: How many cores does the builder machine have? (this should make it easier for me to repro) On 25 October 2016 at 16:56, Ben Gamari wrote: > > Hi Simon, > > It seems that setnumcapabilities001 still occassionally fails, although > this time by a different mode: > https://phabricator.haskell.org/harbormaster/build/14485/?l=100 > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 27 08:11:17 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 27 Oct 2016 08:11:17 +0000 Subject: OSX failing Message-ID: All OSX builds are failing with rts/Linker.c:6371:1: error: error: unused function 'machoGetMisalignment' [-Werror,-Wunused-function] See eg https://phabricator.haskell.org/harbormaster/build/14551/ So I get lots of “Phab failed” messages, and am now simply deleting them, which rather defeats the object of the exercise. Could someone fix this, or (I suppose) switch off the OSX build? 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Oct 27 08:51:31 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 27 Oct 2016 09:51:31 +0100 Subject: OSX failing In-Reply-To: References: Message-ID: I will revert the commit which caused the build failure as I am not able to fix it myself. It is hard to get this stuff right as differentials are not built on OSX. commit 488a9ed3440fe882ae043ba7f44fed4e84e679ce Author: Ben Gamari Date: Wed Oct 26 11:19:01 2016 -0400 rts/linker: Move loadArchive to new source file Test Plan: Validate Reviewers: erikd, simonmar, austin, DemiMarie Reviewed By: erikd, simonmar, DemiMarie Subscribers: hvr, thomie Differential Revision: https://phabricator.haskell.org/D2615 GHC Trac Issues: #12388 On Thu, Oct 27, 2016 at 9:11 AM, Simon Peyton Jones via ghc-devs wrote: > All OSX builds are failing with > > rts/Linker.c:6371:1: error: > > error: unused function 'machoGetMisalignment' [-Werror,-Wunused-function] > > See eg https://phabricator.haskell.org/harbormaster/build/14551/ > > So I get lots of “Phab failed” messages, and am now simply deleting them, > which rather defeats the object of the exercise. > > Could someone fix this, or (I suppose) switch off the OSX build? > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From chrisdone at gmail.com Thu Oct 27 09:35:47 2016 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 27 Oct 2016 10:35:47 +0100 Subject: How to best display type variables with the same name In-Reply-To: References: Message-ID: On 26 October 2016 at 11:43, Simon Peyton Jones wrote: > As far as I understand it: > > · You are plucking a type from the midst of a syntax tree, and > displaying it. 
> > · That type might well mention type variables that are bound > “further out” > > o either by a forall (if this is a sub-tree of a type) > > o or by a big lambda > > · There are some tricky UI issues in how to display such types to > the user. Right. > Generally, I think it’s mainly up to you to track which type variables are > in scope from “further out”. It’s not a property that is stable under > transformation, so it’s not part of the TyVar. Fair enough. That's useful to know, thanks. In that case I can explore alternative approaches. > The typechecker itself uses “tidying” to avoid accidentally displaying > distinct type variables in the same way. See TyCoRep.tidyType and related > functions. They may be useful to you too. Ah, the TidyEnv makes sense to me. I think that'll come in handy to review the code to make sure I don't miss anything. > Hard for me to say more… I’m swamped, and there are genuine UI issues here. > Maybe some folk on Haskell Café might be interested. Thanks! I've also got low bandwidth for this, I just posted it to have it out there, as it probably requires some brewing before anything will be implemented anyway. From chrisdone at gmail.com Thu Oct 27 09:37:17 2016 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 27 Oct 2016 10:37:17 +0100 Subject: How to best display type variables with the same name In-Reply-To: <0A1F0572-48D4-4A42-974A-DA90ECB8532C@cs.brynmawr.edu> References: <0A1F0572-48D4-4A42-974A-DA90ECB8532C@cs.brynmawr.edu> Message-ID: On 19 October 2016 at 13:48, Richard Eisenberg wrote: > Interesting problem & solution. > > Here's a wacky idea, from a position of utter ignorance about your > environment: could you use color? Already, when I saw `b :: a` in the > commentary there, where `b` is in scope as a type variable, it seemed wrong > to me. I think using colour when it's available is potentially a nice alternative to a[1] vs a[1]. 
From ben at well-typed.com Thu Oct 27 12:31:09 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 27 Oct 2016 08:31:09 -0400 Subject: OSX failing In-Reply-To: References: Message-ID: <87h97y3rea.fsf@ben-laptop.smart-cactus.org> Matthew Pickering writes: > I will revert the commit which caused the build failure as I am not > able to fix it myself. It is hard to get this stuff right as > differentials are not built on OSX. > Thanks Matthew. I'll try to clean this up. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Thu Oct 27 12:39:48 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 27 Oct 2016 08:39:48 -0400 Subject: setnumcapabilities001 failure In-Reply-To: References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> Message-ID: <87eg323qzv.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: > How many cores does the builder machine have? (this should make it easier > for me to repro) > It looks like 8. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Thu Oct 27 12:43:15 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 27 Oct 2016 13:43:15 +0100 Subject: setnumcapabilities001 failure In-Reply-To: <87eg323qzv.fsf@ben-laptop.smart-cactus.org> References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> Message-ID: I haven't been able to reproduce the failure yet. :( On 27 October 2016 at 13:39, Ben Gamari wrote: > Simon Marlow writes: > > > How many cores does the builder machine have? (this should make it > easier > > for me to repro) > > > It looks like 8. > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Thu Oct 27 13:18:14 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 27 Oct 2016 09:18:14 -0400 Subject: setnumcapabilities001 failure In-Reply-To: References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> Message-ID: <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: > I haven't been able to reproduce the failure yet. :( > Indeed I've also not seen it in my own local builds. It's quite a fragile failure. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From fryguybob at gmail.com Thu Oct 27 16:10:00 2016 From: fryguybob at gmail.com (Ryan Yates) Date: Thu, 27 Oct 2016 12:10:00 -0400 Subject: setnumcapabilities001 failure In-Reply-To: <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> Message-ID: Briefly looking at the code it seems like several global variables involved should be volatile: n_capabilities, enabled_capabilities, and capabilities. Perhaps in a loop like in scheduleDoGC the compiler moves the reads of n_capabilities or capabilities outside the loop. A failed requestSync in that loop would not get updated values for those global pointers. That particular loop isn't doing that optimization for me, but I think it could happen without volatile. Ryan On Thu, Oct 27, 2016 at 9:18 AM, Ben Gamari wrote: > Simon Marlow writes: > > > I haven't been able to reproduce the failure yet. :( > > > Indeed I've also not seen it in my own local builds. It's quite a fragile failure. 
> > Cheers, > > - Ben > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Oct 27 22:04:00 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Thu, 27 Oct 2016 23:04:00 +0100 Subject: Dynamic Linking help Message-ID: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> Hi *, I've been working for the past 4 or so months on reviving dynamic linking support for Windows in a way that has the most chance of working. My first patch in the series is up on Phabricator, and with this patch dynamic linking works again, but only for the threaded RTS. The reason for this is that the core libraries that get distributed with GHC get compiled with -threaded, and shared libraries on Windows can't have dangling symbols. 
So I have two options: 1) Replace all const externs with a function call. This would work, but isn’t ideal because it would break if someone in the future adds a new data entry instead of a function. And we have an extra function call everywhere. 2) I could do some hacks on the Windows side, e.g. compile the program to a shared library, embed the shared library inside the exe and on startup after loading the propert rts, load the DLL from (mmapped) memory and run the code. I don’t like either approach and am hoping someone here has a better solution for me. Thanks, Tamar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.trinkle at gmail.com Thu Oct 27 23:05:31 2016 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Thu, 27 Oct 2016 19:05:31 -0400 Subject: Specialization plugin Message-ID: Hi everyone, I'm trying my hand at writing a GHC plugin to generate specializations for all uses of a particular typeclass, and I've run into some trouble. I'd appreciate it if someone could point me in the right direction! I'm new to GHC development, so I may just be overlooking some simple stuff. The original problem: Reflex's interface is presented as a typeclass, which allows the underlying FRP engine to be selected by instance resolution. Although some programs make use of this (particularly the semantics test suite), most only ever use one implementation. However, since their code is typically written polymorphically, the implementation cannot be inlined and its rewrite rules cannot fire. This means that the typeclass My attempted solutions: * Initially, I wrote a plugin that adds INLINABLE pragmas to everything. This helped; small programs now generally see the inlining/rule-firings I was hoping for. However, in large programs, this does not occur. I'm looking into this, but also trying another approach: * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma to every binding whose type mentions Reflex. 
The trouble: Since SPECIALIZE pragmas seem to be removed during typechecking (DsBinds.dsSpec), I can't directly add them. I would like to, perhaps, invoke Specialise.specBind; however, I'm not sure how to obtain the necessary instance - as mentioned here , "only the type checker can conjure [dictionaries] up". Perhaps it's possible to explicitly create that dictionary somewhere and extract it for use during the plugin pass? Thanks, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Oct 28 01:33:14 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 27 Oct 2016 21:33:14 -0400 Subject: Linker reorganization Message-ID: <8760od45r9.fsf@ben-laptop.smart-cactus.org> Hello RTS people, Today I finally grew frustrated enough with my constant battle with the 7000 line tangle of CPP that is rts/Linker.c to do something about it. The result is D2643 through D2650. In short, I took the file and chopped it into more managable pieces: * linker/PEi386.[ch]: PE loading * linker/MachO.[ch]: MachO loading * linker/Elf.[ch]: ELF loading * linker/CacheFlush.[ch]: Platform-specific icache flushing logic * linker/SymbolExtras.[ch]: Symbol extras support logic * Linker.c: Everything necessary to glue all of the above together * LinkerInternals.h: Declarations shared by the above and declarations for Linker.c For the most part this involved just shuffling code around since there was some rough platform abstraction already in place. In fact, I tried quite hard to avoid performing any more intricate refactoring to keep the scope of the project in check. Consequently, this is only a start and the design is in places a bit awkward; there is still certainly no shortage of work remaining to be done. Regardless, I think this change an improvement over the current state of affairs. 
One concern that I have is that the RTS's header file structure (where everything is #include'd via Rts.h) doesn't work very well for this particular use, where we have a group of headers specific to a particular subsystem (e.g. linker/*.h). Consequently, these header files currently lack enclosing `extern "C"` blocks (as well as Begin/EndPrivate blocks). It would be easy to add these, but I was curious to hear if others had any better ideas. The refactoring was performed over several small-ish commits, so review shouldn't be so bad. I expect to rebase the LoadArchive.c refactoring performed in D2642 on top of this set once it has been merged. I will also offer to rebase DemiMarie's recent error-handling patch (D2652). I have tested the set on a variety of platforms, * x86-64 Linux * x86-64 Darwin * x86-64 FreeBSD * x86-64 Windows * ARM Linux What do you think? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ezyang at mit.edu Fri Oct 28 01:40:56 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 27 Oct 2016 18:40:56 -0700 Subject: Linker reorganization In-Reply-To: <8760od45r9.fsf@ben-laptop.smart-cactus.org> References: <8760od45r9.fsf@ben-laptop.smart-cactus.org> Message-ID: <1477618840-sup-4176@sabre> Great work! We've been mumbling about it for a while, good to see it actually done. Edward Excerpts from Ben Gamari's message of 2016-10-27 21:33:14 -0400: > Hello RTS people, > > Today I finally grew frustrated enough with my constant battle with the > 7000 line tangle of CPP that is rts/Linker.c to do something about it. > The result is D2643 through D2650. 
In short, I took the file and chopped > it into more managable pieces: > > * linker/PEi386.[ch]: PE loading > * linker/MachO.[ch]: MachO loading > * linker/Elf.[ch]: ELF loading > * linker/CacheFlush.[ch]: Platform-specific icache flushing logic > * linker/SymbolExtras.[ch]: Symbol extras support logic > * Linker.c: Everything necessary to glue all of the above together > * LinkerInternals.h: Declarations shared by the above and > declarations for Linker.c > > For the most part this involved just shuffling code around since there > was some rough platform abstraction already in place. In fact, I tried > quite hard to avoid performing any more intricate refactoring to keep > the scope of the project in check. Consequently, this is only a start > and the design is in places a bit awkward; there is still certainly no > shortage of work remaining to be done. Regardless, I think this change > an improvement over the current state of affairs. > > One concern that I have is that the RTS's header file structure (where > everything is #include'd via Rts.h) doesn't work very well for this > particular use, where we have a group of headers specific to a > particular subsystem (e.g. linker/*.h). Consequently, these header files > currently lack enclosing `extern "C"` blocks (as well as > Begin/EndPrivate blocks). It would be easy to add these, but I was > curious to hear if others had any better ideas. > > The refactoring was performed over several small-ish commits, so review > shouldn't be so bad. I expect to rebase the LoadArchive.c refactoring > performed in D2642 on top of this set once it has been merged. I will > also offer to rebase DemiMarie's recent error-handling patch (D2652). > > I have tested the set on a variety of platforms, > > * x86-64 Linux > * x86-64 Darwin > * x86-64 FreeBSD > * x86-64 Windows > * ARM Linux > > What do you think? 
> > Cheers, > > - Ben From ben at smart-cactus.org Fri Oct 28 01:52:25 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 27 Oct 2016 21:52:25 -0400 Subject: Dynamic Linking help In-Reply-To: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> References: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> Message-ID: <8737jh44va.fsf@ben-laptop.smart-cactus.org> lonetiger at gmail.com writes: > Hi *, > > I’ve been working the past 4 or so months on reviving dynamic linking > support for Windows in a way that has the most chance of working. > > My first patch in the series is up on Phabricator and with this patch > dynamic linking work again, but only for the threaded RTS. > Thanks for all of your work on this, Tamar! > The reason for this is because the core libraries that get distributed > with GHC get compiled with -threaded and shared libraries on Windows > can’t have dangling symbols. > Let me make sure we are on the same page here: By "dangling symbols" do you just mean symbols that the linker did not find a definition for at link time (e.g. as is the case with libHSrts symbols when we link libHSbase)? > In any case, I’m at the point now where I need to be able to delay the > selection of the runtime library till the final link. E.g. when the > exe or dll is made. The problem however is that when linked normally, > dependencies are satisfied by the Windows loader, before the program > is run. One way I could do this is with Window’s equivalent to SONAME. > Unfortunately this only works with SxS Assemblies, and I’ll need Admin > rights to be able to register the shared libraries. > Hmm, why? I thought recent Windows releases had a notion of "user local" installation, no? From what little I have heard it sounds like SxS assemblies is the right solution here. > This is a problem for running tests in the testsuite using the inplace GHC. > > Typically on Windows the way you would do this is by delay loading the > dll. 
This allows me to write some code on startup and manually load > the runtime dll. The Windows loader would then just use the loaded > dll. Unfortunately delay loading does not support const extern data. > Such as const extern RtsConfig defaultRtsConfig; > Silly Windows. > The RTS/GHC is full of such usage so it won’t be easy to change. > Though I’d only have to change those exposed by Rts.h. > > So I have two options: > 1) Replace all const externs with a function call. This would work, > but isn’t ideal because it would break if someone in the future > adds a new data entry instead of a function. And we have an extra > function call everywhere. Right, I'm really not a fan of this option. Crippling the RTS's use of C on account of arbitrary limitations of the Windows dynamic linker doesn't seem very appealing. > 2) I could do some hacks on the Windows side, e.g. compile the program > to a shared library, embed the shared library inside the exe and on > startup after loading the proper RTS, load the DLL from (mmapped) > memory and run the code. This sounds like it would get the job done, although it certainly adds complexity. Do you have any intuition for how much work it would be to implement this? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Fri Oct 28 06:58:44 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 28 Oct 2016 07:58:44 +0100 Subject: Linker reorganization In-Reply-To: <8760od45r9.fsf@ben-laptop.smart-cactus.org> References: <8760od45r9.fsf@ben-laptop.smart-cactus.org> Message-ID: On 28 October 2016 at 02:33, Ben Gamari wrote: > Hello RTS people, > > Today I finally grew frustrated enough with my constant battle with the > 7000 line tangle of CPP that is rts/Linker.c to do something about it. > The result is D2643 through D2650.
In short, I took the file and chopped > it into more manageable pieces: > > * linker/PEi386.[ch]: PE loading > * linker/MachO.[ch]: MachO loading > * linker/Elf.[ch]: ELF loading > * linker/CacheFlush.[ch]: Platform-specific icache flushing logic > * linker/SymbolExtras.[ch]: Symbol extras support logic > * Linker.c: Everything necessary to glue all of the above > together > * LinkerInternals.h: Declarations shared by the above and > declarations for Linker.c > > For the most part this involved just shuffling code around since there > was some rough platform abstraction already in place. In fact, I tried > quite hard to avoid performing any more intricate refactoring to keep > the scope of the project in check. Consequently, this is only a start > and the design is in places a bit awkward; there is still certainly no > shortage of work remaining to be done. Regardless, I think this change > is an improvement over the current state of affairs. > > I haven't looked through all the patches, but this is a great step forwards, thanks Ben! > One concern that I have is that the RTS's header file structure (where > everything is #include'd via Rts.h) doesn't work very well for this > particular use, where we have a group of headers specific to a > particular subsystem (e.g. linker/*.h). Consequently, these header files > currently lack enclosing `extern "C"` blocks (as well as > Begin/EndPrivate blocks). It would be easy to add these, but I was > curious to hear if others had any better ideas. > > Not sure I understand the problem. Rts.h is for *public* APIs, those that are accessible outside the RTS, but these APIs are mostly *internal*. The public-facing linker API is in includes/rts/Linker.h. We don't need extern "C" in the internal header files because we're never going to include these from C++ (we do in the external ones though). But we should have BeginPrivate.h/EndPrivate.h in the internal headers.
Cheers Simon > The refactoring was performed over several small-ish commits, so review > shouldn't be so bad. I expect to rebase the LoadArchive.c refactoring > performed in D2642 on top of this set once it has been merged. I will > also offer to rebase DemiMarie's recent error-handling patch (D2652). > > I have tested the set on a variety of platforms, > > * x86-64 Linux > * x86-64 Darwin > * x86-64 FreeBSD > * x86-64 Windows > * ARM Linux > > What do you think? > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Oct 28 07:02:36 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 28 Oct 2016 08:02:36 +0100 Subject: setnumcapabilities001 failure In-Reply-To: References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ryan, I don't think that's the issue. Those variables can only be modified in setNumCapabilities, which acquires *all* the capabilities before it makes any changes. There should be no other threads running RTS code(*) while we change the number of capabilities. In particular we shouldn't be in releaseGCThreads while enabled_capabilities is being changed. (*) well except for the parts at the boundary with the external world which run without a capability, such as rts_lock() which acquires a capability. Cheers Simon On 27 Oct 2016 17:10, "Ryan Yates" wrote: > Briefly looking at the code it seems like several global variables > involved should be volatile: n_capabilities, enabled_capabilities, and > capabilities. Perhaps in a loop like in scheduleDoGC the compiler moves > the reads of n_capabilities or capabilities outside the loop. A failed > requestSync in that loop would not get updated values for those global > pointers. That particular loop isn't doing that optimization for me, but I > think it could happen without volatile.
> > Ryan > > On Thu, Oct 27, 2016 at 9:18 AM, Ben Gamari wrote: > >> Simon Marlow writes: >> >> > I haven't been able to reproduce the failure yet. :( >> > >> Indeed I've also not seen it in my own local builds. It's quite a >> fragile failure. >> >> Cheers, >> >> - Ben >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fryguybob at gmail.com Fri Oct 28 10:58:36 2016 From: fryguybob at gmail.com (Ryan Yates) Date: Fri, 28 Oct 2016 06:58:36 -0400 Subject: setnumcapabilities001 failure In-Reply-To: References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> Message-ID: Right, it is compiler effects at this boundary that I'm worried about, values that are not read from memory after the changes have been made, not memory effects or data races. On Fri, Oct 28, 2016 at 3:02 AM, Simon Marlow wrote: > Hi Ryan, I don't think that's the issue. Those variables can only be > modified in setNumCapabilities, which acquires *all* the capabilities > before it makes any changes. There should be no other threads running RTS > code(*) while we change the number of capabilities. In particular we > shouldn't be in releaseGCThreads while enabled_capabilities is being > changed. > > (*) well except for the parts at the boundary with the external world > which run without a capability, such as rts_lock() which acquires a > capability. > > Cheers > Simon > > On 27 Oct 2016 17:10, "Ryan Yates" wrote: > >> Briefly looking at the code it seems like several global variables >> involved should be volatile: n_capabilities, enabled_capabilities, and >> capabilities. Perhaps in a loop like in scheduleDoGC the compiler moves >> the reads of n_capabilities or capabilities outside the loop.
A failed >> requestSync in that loop would not get updated values for those global >> pointers. That particular loop isn't doing that optimization for me, but I >> think it could happen without volatile. >> >> Ryan >> >> On Thu, Oct 27, 2016 at 9:18 AM, Ben Gamari wrote: >> >>> Simon Marlow writes: >>> >>> > I haven't been able to reproduce the failure yet. :( >>> > >>> Indeed I've also not seen it in my own local builds. It's quite an >>> fragile failure. >>> >>> Cheers, >>> >>> - Ben >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Fri Oct 28 12:19:43 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Fri, 28 Oct 2016 21:19:43 +0900 Subject: simple pictures about GHC development flow Message-ID: Hi devs, For myself and new contributors, I drew overview pictures about GHC development flow. GHC development flow http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf https://github.com/takenobu-hs/ghc-development-flow If I misunderstood something, please teach me. I'll correct or remove them. Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Oct 28 13:01:32 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 28 Oct 2016 09:01:32 -0400 Subject: simple pictures about GHC development flow In-Reply-To: References: Message-ID: <87zilo39w3.fsf@ben-laptop.smart-cactus.org> Takenobu Tani writes: > Hi devs, > > For myself and new contributors, I drew overview pictures about GHC > development flow. > > GHC development flow > http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf > https://github.com/takenobu-hs/ghc-development-flow > Thanks Takenobu! This is quite helpful. 
One minor inaccuracy I found was the spelling of "Arcanist" on page 12. Another is in the description of the ghc-proposals process on page 9. Specifically, I think the proposal process should look more like, write the proposal ↓ pull request ↓ discussion ←┐ ←──┐ ↓ │ │ revise proposal ┘ │ ↓ │ request review │ by steering committee │ ↓ │ wait for approval ───┘ ↓ create ticket Finally, I think it would be helpful if more attention could be given to the bug reporting protocol depicted on page 8. In particular, users have approached me in the past with questions about what the various ticket states mean. Really, it's (fairly) simple, * New: The ticket is waiting for someone to look at it and/or discussion is underway on how to fix the issue. * Assigned: Someone has said they are working on fixing the issue. * Patch: There is a patch to fix the issue that is awaiting review (it is typically listed in the "Differential Rev(s)" field of the ticket). * Merge: A patch fixing the issue is present in the `master` branch and we are considering backporting it to the stable branch (e.g. currently the `ghc-8.0` branch). * Closed: As of the release listed in the "Milestone" field the bug is considered resolved. I think a diagram describing this workflow could be quite helpful. Let me know if I can help. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Fri Oct 28 13:08:58 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 28 Oct 2016 09:08:58 -0400 Subject: Linker reorganization In-Reply-To: References: <8760od45r9.fsf@ben-laptop.smart-cactus.org> Message-ID: <87twbw39jp.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: >> One concern that I have is that the RTS's header file structure (where >> everything is #include'd via Rts.h) doesn't work very well for this >> particular use, where we have a group of headers specific to a >> particular subsystem (e.g. linker/*.h). Consequently, these header files >> currently lack enclosing `extern "C"` blocks (as well as >> Begin/EndPrivate blocks). It would be easy to add these, but I was >> curious to hear if others had any better ideas. >> >> > Not sure I understand the problem. Rts.h is for *public* APIs, those that > are accessible outside the RTS, but these APIs are mostly *internal*. The > public-facing linker API is in includes/rts/Linker.h. > > We don't need extern "C" in the internal header files because we're never > going to include these from C++ (we do in the external ones though). But we > should have BeginPrivate.h/EndPrivate.h in the internal headers. > Ahh, right; silly me. I'll just add the necessary BeginPrivates and hopefully we get this merged after Karel reports back with the results of his testing. Thanks Simon! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From marlowsd at gmail.com Fri Oct 28 13:10:02 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 28 Oct 2016 14:10:02 +0100 Subject: setnumcapabilities001 failure In-Reply-To: References: <87bmy85so5.fsf@ben-laptop.smart-cactus.org> <87eg323qzv.fsf@ben-laptop.smart-cactus.org> <87bmy63p7t.fsf@ben-laptop.smart-cactus.org> Message-ID: I see, but the compiler has no business caching things across requestSync(), which can in principle change anything: even if the compiler could see all the code, it would find a pthread_cond_wait() in there. Anyway I've found the problem - it was caused by a subsequent GC overwriting the values of gc_threads[].idle before the previous GC had finished releaseGCThreads(), which reads those values. Diff on the way... Cheers Simon On 28 October 2016 at 11:58, Ryan Yates wrote: > Right, it is compiler effects at this boundary that I'm worried about, > values that are not read from memory after the changes have been made, not > memory effects or data races. > > On Fri, Oct 28, 2016 at 3:02 AM, Simon Marlow wrote: > >> Hi Ryan, I don't think that's the issue. Those variables can only be >> modified in setNumCapabilities, which acquires *all* the capabilities >> before it makes any changes. There should be no other threads running RTS >> code(*) while we change the number of capabilities. In particular we >> shouldn't be in releaseGCThreads while enabled_capabilities is being >> changed. >> >> (*) well except for the parts at the boundary with the external world >> which run without a capability, such as rts_lock() which acquires a >> capability. >> >> Cheers >> Simon >> >> On 27 Oct 2016 17:10, "Ryan Yates" wrote: >> >>> Briefly looking at the code it seems like several global variables >>> involved should be volatile: n_capabilities, enabled_capabilities, and >>> capabilities.
Perhaps in a loop like in scheduleDoGC the compiler moves >>> the reads of n_capabilites or capabilites outside the loop. A failed >>> requestSync in that loop would not get updated values for those global >>> pointers. That particular loop isn't doing that optimization for me, but I >>> think it could happen without volatile. >>> >>> Ryan >>> >>> On Thu, Oct 27, 2016 at 9:18 AM, Ben Gamari >>> wrote: >>> >>>> Simon Marlow writes: >>>> >>>> > I haven't been able to reproduce the failure yet. :( >>>> > >>>> Indeed I've also not seen it in my own local builds. It's quite an >>>> fragile failure. >>>> >>>> Cheers, >>>> >>>> - Ben >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Oct 28 13:19:35 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 28 Oct 2016 09:19:35 -0400 Subject: simple pictures about GHC development flow In-Reply-To: References: Message-ID: <1477660775.1146.5.camel@joachim-breitner.de> Hi, Am Freitag, den 28.10.2016, 21:19 +0900 schrieb Takenobu Tani: > For myself and new contributors, I drew overview pictures about GHC > development flow. > >   GHC development flow >     http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf >     https://github.com/takenobu-hs/ghc-development-flow very nice! I wonder where we can keep it so that people will find it, and how to make sure it stays up-to-date. You could add travis to the tools sections. It is, in a way, a second line of CI defense: Runs a bit less, but is available when Harbormaster fails, and is a different environment. Also, if you fork GHC on github, travis will automatically test your commits. There is a box „committer flow“. What exactly is meant by that? Is there more to be said about that? 
Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From takenobu.hs at gmail.com Fri Oct 28 13:45:03 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Fri, 28 Oct 2016 22:45:03 +0900 Subject: simple pictures about GHC development flow In-Reply-To: <87zilo39w3.fsf@ben-laptop.smart-cactus.org> References: <87zilo39w3.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, Joachim, Thank you for your checking and reply! I'll consider your comments carefully and reflect your feedback, then reply. Please wait for a little while. Thank you very much :) , Takenobu 2016-10-28 22:01 GMT+09:00 Ben Gamari : > Takenobu Tani writes: > > > Hi devs, > > > > For myself and new contributors, I drew overview pictures about GHC > > development flow. > > > > GHC development flow > > http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf > > https://github.com/takenobu-hs/ghc-development-flow > > > Thanks Takenobu! This is quite helpful. > > One minor inaccuracy I found was the spelling of "Arcanist" on page > 12. Another is in the description of the ghc-proposals process on > page 9. Specifically, I think the proposal process should look more > like, > > write the proposal > ↓ > pull request > ↓ > discussion ←┐ ←──┐ > ↓ │ │ > revise proposal ┘ │ > ↓ │ > request review │ > by steering committee │ > ↓ │ > wait for approval ───┘ > ↓ > create ticket > > Finally, I think it would be helpful if more attention could be given to > the bug reporting protocol depicted on page 8. In particular, users have > approached me in the past with questions about what the various ticket > states mean.
Really, it's (fairly) simple, > > * New: The ticket is waiting for someone to look at it and/or > discussion is underway on how to fix the issue > > * Assigned: Someone has said they are working on fixing the issue. > > * Patch: There is a patch to fix the issue that is awaiting review (it > is typically listed in the "Differential Rev(s)" field of the ticket. > > * Merge: A patch fixing the issue is present in the `master` branch and > we are considering backporting it to the stable branch (e.g. > currently the `ghc-8.0` branch). > > * Closed: As of the release listed in the "Milestone" field the bug is > considered resolved. > > I think a diagram describing this workflow could be quite helpful. Let > me know if I can help. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Oct 28 16:36:31 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 28 Oct 2016 12:36:31 -0400 Subject: simple pictures about GHC development flow In-Reply-To: <1477660775.1146.5.camel@joachim-breitner.de> References: <1477660775.1146.5.camel@joachim-breitner.de> Message-ID: <87r3702zxs.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: > Hi, > > Am Freitag, den 28.10.2016, 21:19 +0900 schrieb Takenobu Tani: >> For myself and new contributors, I drew overview pictures about GHC >> development flow. >> >>   GHC development flow >>     http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf >>     https://github.com/takenobu-hs/ghc-development-flow > > very nice! I wonder where we can keep it so that people will find it, > and how to make sure it stays up-to-date. > > You could add travis to the tools sections. It is, in a way, a second > line of CI defense: Runs a bit less, but is available when Harbormaster > fails, and is a different environment. Also, if you fork GHC on github, > travis will automatically test your commits. > > There is a box „committer flow“. 
What exactly is meant by that? Is > there more to be said about that? > I think this means someone with commit bits simply pushing a patch without submitting to code review. Ideally we'd be able to deprecate this workflow in favor of the "auto-validating push" that you've proposed. I started looking at implementing this earlier this week; sadly Harbormaster doesn't make it easy as there is no way to manually fire off Harbormaster builds without creating a Diff. Nevertheless, I have an initial hack; perhaps I'll be able to finish it next week. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Fri Oct 28 17:41:29 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 28 Oct 2016 18:41:29 +0100 Subject: Dynamic Linking help In-Reply-To: <8737jh44va.fsf@ben-laptop.smart-cactus.org> References: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> <8737jh44va.fsf@ben-laptop.smart-cactus.org> Message-ID: <58138dc9.c6bdc20a.7d720.ffd5@mx.google.com> Hi Ben, Thanks for the reply! > > > Hi *, > > > > I’ve been working the past 4 or so months on reviving dynamic linking > > support for Windows in a way that has the most chance of working. > > > > My first patch in the series is up on Phabricator and with this patch > > dynamic linking work again, but only for the threaded RTS. > > > Thanks for all of your work on this, Tamar! > Home stretch :) > > > The reason for this is because the core libraries that get distributed > > with GHC get compiled with -threaded and shared libraries on Windows > > can’t have dangling symbols. > > > Let me make sure we are on the same page here: By "dangling symbols" do > you just mean symbols that the linker did not find a definition for at > link time (e.g. as is the case with libHSrts symbols when we link > libHSbase)? Yes, indeed. 
> > > In any case, I’m at the point now where I need to be able to delay the > > selection of the runtime library till the final link. E.g. when the > > exe or dll is made. The problem however is that when linked normally, > > dependencies are satisfied by the Windows loader, before the program > > is run. One way I could do this is with Window’s equivalent to SONAME. > > Unfortunately this only works with SxS Assemblies, and I’ll need Admin > > rights to be able to register the shared libraries. > > > Hmm, why? I thought recent Windows releases had a notion of "user local" > installation, no? From what little I have heard it sounds like SxS > assemblies is the right solution here. Yes, so to be clear, SxS absolutely solve this problem. For final installs. The majority of the issue is that the testsuite won't have the assemblies in the SxS cache. There *is* a sort of RPATH equivalent for SxS that can be used here, it however has two problems: 1) Even though the API has no such limit, the implementation in the Windows loader limits it per application to 5 entries. Obviously this won't be enough. So this is absolutely another option (maybe even preferable now that I think about it since it require almost no code change, mostly some build system changes.). Can do either one of two things: a) Copy all dll's to the lib folder when they're compiled instead of leaving them in place and add a single SxS search entry to find them. b) Turn of SxS in the testsuite for any Assemblies not the RTS, and add the inplace RTS directory to the SxS search path. Since it's only the RTS that's an issue. 2) The other problem is that the paths specified have to be relative to the application. (Of the top of my head) It doesn't support absolute paths. Which means I can't have GHC generate the entry because I have no idea where the testsuite intends to run the binary. One way around this is to have the testsuite generate the needed config file. That should be do-able. 
I'll investigate this method first. I had discarded it for some reason before but now can't remember... > > This is a problem for running tests in the testsuite using the inplace GHC. > > > > Typically on Windows the way you would do this is by delay loading the > > dll. This allows me to write some code on startup and manually load > > the runtime dll. The Windows loader would then just use the loaded > > dll. Unfortunately delay loading does not support const extern data. > > Such as const extern RtsConfig defaultRtsConfig; > > > Silly Windows. Yeah, unfortunately this is because this isn't done by any OS > functions. Lazy loading is purely something implemented by linkers on > Windows and the appropriate runtimes. In essence it just creates a > stub that replaces all functions, which first checks if the dll is > loaded, if not loads it and then calls the real function. Which is why > it only works for functions. > > The RTS/GHC is full of such usage so it won’t be easy to change. > > Though I’d only have to change those exposed by Rts.h. > > So I have two options: > 1) Replace all const externs with a function call. This would work, > but isn’t ideal because it would break if someone in the future > adds a new data entry instead of a function. And we have an extra > function call everywhere. > > Right, I'm really not a fan of this option. Crippling the RTS's use of C > on account of arbitrary limitations of the Windows dynamic linker > doesn't seem very appealing. > > I was not a fan of this either. Would imagine I would be chased around > with flaming pitchforks were I to do this... > > > 2) I could do some hacks on the Windows side, e.g. compile the program > > to a shared library, embed the shared library inside the exe and on > > startup after loading the proper RTS, load the DLL from (mmapped) > > memory and run the code. > > This sounds like it would get the job done, although it certainly adds > complexity.
Do you have any intuition for how much work it would be to > implement this? We sorta already do 80% of this. For Windows when making a dynamic version of GHC, the ghc-stage2.exe is a very thin shell, whose only purpose is to load the right libraries and change the search path to include the lib folder. The actual code is in a dll named e.g. ghc-stage2.exe.dll. The change needed would be to embed this dll into the exe (which is trivially done), and then load it from memory. This would require some work but there is enough wrapper code out there with appropriate licenses that we can use to accomplish this. Or at least get a running head start. > > Cheers, > > - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Fri Oct 28 17:42:53 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 28 Oct 2016 18:42:53 +0100 Subject: Linker reorganization In-Reply-To: <87twbw39jp.fsf@ben-laptop.smart-cactus.org> References: <8760od45r9.fsf@ben-laptop.smart-cactus.org> <87twbw39jp.fsf@ben-laptop.smart-cactus.org> Message-ID: <58138e1d.12111c0a.1ded5.48d1@mx.google.com>
> > > > We don't need extern "C" in the internal header files because we're never > > going to include these from C++ (we do in the external ones though). But we > > should have BeginPrivate.h/EndPrivate.h in the internal headers. > > > > Ahh, right; silly me. I'll just add the necessary BeginPrivates and > hopefully we get this merged after Karel reports back with the results > of his testing. Thanks Simon! Great work! I'm itching to take an Axe to the PE stuff. lots of constants can just go. > > Cheers, > > - Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Oct 28 18:17:15 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 28 Oct 2016 14:17:15 -0400 Subject: Dynamic Linking help In-Reply-To: <58138dc9.c6bdc20a.7d720.ffd5@mx.google.com> References: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> <8737jh44va.fsf@ben-laptop.smart-cactus.org> <58138dc9.c6bdc20a.7d720.ffd5@mx.google.com> Message-ID: <87oa242v9w.fsf@ben-laptop.smart-cactus.org> lonetiger at gmail.com writes: > Hi Ben, > > Thanks for the reply! > Sure. >> > Hi *, >> > >> > I’ve been working the past 4 or so months on reviving dynamic linking >> > support for Windows in a way that has the most chance of working. >> > >> > My first patch in the series is up on Phabricator and with this patch >> > dynamic linking work again, but only for the threaded RTS. >> > >> Thanks for all of your work on this, Tamar! >> > > Home stretch :) > Yay! >> Hmm, why? I thought recent Windows releases had a notion of "user local" >> installation, no? From what little I have heard it sounds like SxS >> assemblies is the right solution here. > > Yes, so to be clear, SxS absolutely solve this problem. For final installs. > The majority of the issue is that the testsuite won't have the assemblies in > the SxS cache. 
> > There *is* a sort of RPATH equivalent for SxS that can be used here, it however > has two problems: > 1) Even though the API has no such limit, the implementation in the Windows loader > limits it per application to 5 entries. (╯°□°)╯︵ ┻━┻ > Obviously this won't be enough. > So this is absolutely another option (maybe even preferable now that I think about it > since it require almost no code change, mostly some build system changes.). > Can do either one of two things: > a) Copy all dll's to the lib folder when they're compiled instead of leaving them in place and > add a single SxS search entry to find them. > b) Turn of SxS in the testsuite for any Assemblies not the RTS, and add the inplace RTS directory to > the SxS search path. Since it's only the RTS that's an issue. > > 2) The other problem is that the paths specified have to be relative to the application. > (Of the top of my head) It doesn't support absolute paths. Which > means I can't have GHC generate the entry because I have no idea > where the testsuite intends to run the binary. > One way around this is to have the testsuite generate the needed config file. That should be do-able. > > I'll investigate this method first. I had discarded it for some reason before but now can't remember... > Right, it sounds like this is a workable option and the fact that it requires adding no further complexity to the compiler is quite a merit. The only question is what other use-cases might run into this same issue. For instance, what happens when I run `cabal build` and try to run an executable from `dist/build`. Then I run `cabal install` and run it from `.cabal/bin`. Surely Cabal will need to take some sort of action in this case. I suppose this means that using plain `ghc -dynamic` alone is probably out of the question. >> > This is a problem for running tests in the testsuite using the inplace GHC. >> > >> > Typically on Windows the way you would do this is by delay loading the >> > dll. 
This allows me to write some code on startup and manually load >> > the runtime dll. The Windows loader would then just use the loaded >> > dll. Unfortunately delay loading does not support const extern data, >> > such as const extern RtsConfig defaultRtsConfig; >> > >> Silly Windows. > > Yeah, unfortunately this is because this isn't done by any OS > functions. Lazy loading is purely something implemented by linkers on > Windows and the appropriate runtimes. In essence it just creates a > stub that replaces all functions, which first checks if the dll is > loaded, if not loads it, and then calls the real function. Which is why > it only works for functions. > Ahhh. I see, that makes sense. >> >> > The RTS/GHC is full of such usage so it won’t be easy to change. >> > Though I’d only have to change those exposed by Rts.h. >> > >> > So I have two options: >> > 1) Replace all const externs with a function call. This would work, >> > but isn’t ideal because it would break if someone in the future >> > adds a new data entry instead of a function. And we have an extra >> > function call everywhere. >> >> Right, I'm really not a fan of this option. Crippling the RTS's use of C >> on account of arbitrary limitations of the Windows dynamic linker >> doesn't seem very appealing. > I was not a fan of this either. I imagine I would be chased around > with flaming pitchforks were I to do this. > Yes, sharp, rusty, flaming pitchforks. >> > 2) I could do some hacks on the Windows side, e.g. compile the program >> > to a shared library, embed the shared library inside the exe and on >> > startup, after loading the proper RTS, load the DLL from (mmapped) >> > memory and run the code. >> >> This sounds like it would get the job done, although it certainly adds >> complexity. Do you have any intuition for how much work it would be to >> implement this? > > We sorta already do 80% of this.
For Windows, when making a dynamic > version of GHC, the ghc-stage2.exe is a very thin shell, whose only > purpose is to load the right libraries and change the search path to > include the lib folder. The actual code is in a dll named e.g. > ghc-stage2.exe.dll. The change needed would be to embed this dll into > the exe (which is trivially done), and then load it from memory. This > would require some work, but there is enough wrapper code out there > with appropriate licenses that we can use to accomplish this. Or at > least get a running head start. > Alright, good to know. We can keep this one in our back pocket in case the option described above doesn't work out. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From lonetiger at gmail.com Fri Oct 28 18:48:28 2016 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Fri, 28 Oct 2016 19:48:28 +0100 Subject: Dynamic Linking help In-Reply-To: <87oa242v9w.fsf@ben-laptop.smart-cactus.org> References: <581279d0.a4a9c20a.86042.7f9f@mx.google.com> <8737jh44va.fsf@ben-laptop.smart-cactus.org> <58138dc9.c6bdc20a.7d720.ffd5@mx.google.com> <87oa242v9w.fsf@ben-laptop.smart-cactus.org> Message-ID: <58139d7c.c1341c0a.05cb.632d@mx.google.com> > > > > 2) The other problem is that the paths specified have to be relative to the application. > > (Off the top of my head) It doesn't support absolute paths. Which > > means I can't have GHC generate the entry because I have no idea > > where the testsuite intends to run the binary. > > One way around this is to have the testsuite generate the needed config file. That should be doable. > > > > I'll investigate this method first. I had discarded it for some reason before but now can't remember... > > > Right, it sounds like this is a workable option and the fact that it > requires adding no further complexity to the compiler is quite a merit.
> The only question is what other use cases might run into this same issue. > For instance, what happens when I run `cabal build` and try to run an > executable from `dist/build`. Then I run `cabal install` and run it from > `.cabal/bin`. Surely Cabal will need to take some sort of action in this > case. I suppose this means that using plain `ghc -dynamic` alone is > probably out of the question. No, this would only be the case for development versions of GHC. For the end user, the GHC core libraries *should* be registered in the SxS cache (much like the Microsoft, Intel, etc. compilers do). This is part of the distribution story we still have to have a chat about. For user libraries it wouldn't matter, since the application would check the SxS cache and it would always work. Also, SxS assembly creation is opt-in. So if you rely on a dll, even one created by GHC (and not a core library), then by default it won't be an SxS assembly (as is currently the case). This does, however, mean that if you just use the tarball, you can't run programs created with -dynamic. In the implementation I also have an override `-fno-gen-sxs-assembly`, which will create a binary that will not try to use SxS at all to find its dependencies. In this case you'd have to have the proper PATH entries. So by adjusting your PATH you can still get the development/in-place ghc to run from arbitrary locations. Since this method can't be used to assure the loader that the right runtime was already loaded, it does mean that Stack won't be able to support -dynamic out of the box. However, they can probably get it to work using the same methods the testsuite uses. The testsuite always compiles applications using this override, and then corrects the PATH entries before running. This is why a large part of the tests still run fine. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simonpj at microsoft.com Fri Oct 28 21:14:59 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 28 Oct 2016 21:14:59 +0000 Subject: Specialization plugin In-Reply-To: References: Message-ID: I’d really like to know why INLINABLE pragmas don’t work. Perhaps an example? Only the type checker currently can conjure up dictionaries. It would presumably not be impossible to do so later, but it’d be quite a new thing, involving invoking the constraint solver. The pattern-match overlap checker does this, but without needing to generate any evidence bindings. But let’s see what’s wrong with INLINABLE first. After all, if there’s a bug there, fixing it will benefit everyone. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ryan Trinkle Sent: 28 October 2016 00:06 To: ghc-devs at haskell.org Subject: Specialization plugin Hi everyone, I'm trying my hand at writing a GHC plugin to generate specializations for all uses of a particular typeclass, and I've run into some trouble. I'd appreciate it if someone could point me in the right direction! I'm new to GHC development, so I may just be overlooking some simple stuff. The original problem: Reflex's interface is presented as a typeclass, which allows the underlying FRP engine to be selected by instance resolution. Although some programs make use of this (particularly the semantics test suite), most only ever use one implementation. However, since their code is typically written polymorphically, the implementation cannot be inlined and its rewrite rules cannot fire. This means that the typeclass My attempted solutions: * Initially, I wrote a plugin that adds INLINABLE pragmas to everything. This helped; small programs now generally see the inlining/rule-firings I was hoping for. However, in large programs, this does not occur.
I'm looking into this, but also trying another approach: * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma to every binding whose type mentions Reflex. The trouble: Since SPECIALIZE pragmas seem to be removed during typechecking (DsBinds.dsSpec), I can't directly add them. I would like to, perhaps, invoke Specialise.specBind; however, I'm not sure how to obtain the necessary instance - as mentioned here, "only the type checker can conjure [dictionaries] up". Perhaps it's possible to explicitly create that dictionary somewhere and extract it for use during the plugin pass? Thanks, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sat Oct 29 12:36:37 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 29 Oct 2016 21:36:37 +0900 Subject: simple pictures about GHC development flow In-Reply-To: <87r3702zxs.fsf@ben-laptop.smart-cactus.org> References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, Joachim, 2016-10-29 1:36 GMT+09:00 Ben Gamari : > > There is a box „committer flow“. What exactly is meant by that? Is > > there more to be said about that? > > > I think this means someone with commit bits simply pushing a patch > without submitting to code review. Ideally we'd be able to deprecate > this workflow in favor of the "auto-validating push" that you've > proposed. I assumed that "committer flow" is simply pushing a patch without submitting to code review or without discussion. I thought committers [1] have the authority in case of typo or small modification. Do I misunderstand? 
[1]: https://ghc.haskell.org/trac/ghc/wiki/TeamGHC According to your advice, I will update the following: * page 12: correct "Arcanist" * page 9: update ghc-proposals flow * after page 8 (new page 9): add a simple diagram for the various ticket states * page 12: add "travis" Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat Oct 29 17:05:00 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 29 Oct 2016 13:05:00 -0400 Subject: [Help needed] GHC HCAR Submission Contributors In-Reply-To: <87wpgwcnoh.fsf@ben-laptop.smart-cactus.org> References: <87wpgwcnoh.fsf@ben-laptop.smart-cactus.org> Message-ID: <8760obaxxf.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Hello everyone, > > Haskell Communities and Activities Report submission season is once > again upon us. If you are receiving this message then you have a major > contribution listed on the GHC 8.2 status page and therefore should have > a corresponding entry in GHC's HCAR submission. I've collected some > notes for our submission in the usual place [1]. Please have a look and > refine as you see fit. > Hello everyone! Please note that today is the last day to amend the HCAR text [1]. I'll be sending out the final version tomorrow. Thanks for your contributions so far! Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Status/Oct16 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Sat Oct 29 17:07:14 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 29 Oct 2016 13:07:14 -0400 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> Message-ID: <8737jfaxtp.fsf@ben-laptop.smart-cactus.org> Takenobu Tani writes: > Hi Ben, Joachim, > > 2016-10-29 1:36 GMT+09:00 Ben Gamari : >> > There is a box „committer flow“. What exactly is meant by that? Is >> > there more to be said about that? >> > >> I think this means someone with commit bits simply pushing a patch >> without submitting to code review. Ideally we'd be able to deprecate >> this workflow in favor of the "auto-validating push" that you've >> proposed. > > I assumed that "committer flow" is simply pushing a patch without > submitting to code review or without discussion. > I thought committers [1] have the authority in case of typo or small > modification. > > Do I misunderstand? Nope, that's exactly right. Unfortunately, even "trivial" fixes have a tendency to break the tree (which has been happening too often recently) so I'm trying to push contributors to use CI whenever possible. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Sat Oct 29 17:59:30 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 29 Oct 2016 13:59:30 -0400 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> Message-ID: <1477763970.1364.3.camel@joachim-breitner.de> Hi, Am Samstag, den 29.10.2016, 21:36 +0900 schrieb Takenobu Tani: > Hi Ben, Joachim, > > 2016-10-29 1:36 GMT+09:00 Ben Gamari : > > > There is a box „committer flow“. What exactly is meant by that? Is > > > there more to be said about that? > > > > > I think this means someone with commit bits simply pushing a patch > > without submitting to code review. Ideally we'd be able to deprecate > > this workflow in favor of the "auto-validating push" that you've > > proposed. > > I assumed that "committer flow" is simply pushing a patch without > submitting to code review or without discussion. > I thought committers [1] have the authority in case of typo or small > modification. > Do I misunderstand? I see. The term “flow” suggested something deeper or more structured (which might be good, but is not the case). Maybe relabel it “direct commit”. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: This is a digitally signed message part URL: From ryan.trinkle at gmail.com Sun Oct 30 00:43:05 2016 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Sat, 29 Oct 2016 20:43:05 -0400 Subject: Specialization plugin In-Reply-To: References: Message-ID: I definitely plan to investigate the INLINABLE thing. In small programs, it all appears to be working as expected. However, in a larger (~50 modules) client project, I think (but have not 100% confirmed) that it is not specializing everything. The Core certainly has a good number of Reflex dictionaries floating around (I would ideally hope that they would all be eliminated). Perhaps there is an issue relating to INLINABLE crossing many module or package boundaries. I'll follow up when I have more info about exactly how that's breaking, along with an example I can share (the current one is confidential to my client). On Fri, Oct 28, 2016 at 5:14 PM, Simon Peyton Jones wrote: > I’d really like to know why INLINABLE pragmas don’t work. Perhaps an > example? > > > > Only the type checker currently can conjure up dictionaries. It would > presumably not be impossible to do so later, but it’d be quite a new thing, > involving invoking the constraint solver. The pattern-match overlap > checker does this; but without needing to generate any evidence bindings. > > > > But let’s see what’s wrong with INLINABLE first. After all if there’s a > bug there, fixing it will benefit everyone. > > > > SImon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Ryan > Trinkle > *Sent:* 28 October 2016 00:06 > *To:* ghc-devs at haskell.org > *Subject:* Specialization plugin > > > > Hi everyone, > > > > I'm trying my hand at writing a GHC plugin to generate specializations for > all uses of a particular typeclass, and I've run into some trouble. I'd > appreciate it if someone could point me in the right direction! 
I'm new to > GHC development, so I may just be overlooking some simple stuff. > > > > The original problem: > > Reflex's interface is presented as a typeclass, which allows the > underlying FRP engine to be selected by instance resolution. Although some > programs make use of this (particularly the semantics test suite), most > only ever use one implementation. However, since their code is typically > written polymorphically, the implementation cannot be inlined and its > rewrite rules cannot fire. This means that the typeclass > > > > My attempted solutions: > > * Initially, I wrote a plugin that adds INLINABLE pragmas to everything. > This helped; small programs now generally see the inlining/rule-firings I > was hoping for. However, in large programs, this does not occur. I'm > looking into this, but also trying another approach: > > * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma to > every binding whose type mentions Reflex. > > > > The trouble: > > Since SPECIALIZE pragmas seem to be removed during typechecking > (DsBinds.dsSpec), I can't directly add them. I would like to, perhaps, > invoke Specialise.specBind; however, I'm not sure how to obtain the > necessary instance - as mentioned here > , > "only the type checker can conjure [dictionaries] up". Perhaps it's > possible to explicitly create that dictionary somewhere and extract it for > use during the plugin pass? > > > > > > Thanks, > > Ryan > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From takenobu.hs at gmail.com Sun Oct 30 02:25:56 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 30 Oct 2016 11:25:56 +0900 Subject: simple pictures about GHC development flow In-Reply-To: <1477763970.1364.3.camel@joachim-breitner.de> References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> <1477763970.1364.3.camel@joachim-breitner.de> Message-ID: Hi Ben, Joachim, 2016-10-30 2:07 GMT+09:00 Ben Gamari : > Nope, that's exactly right. Unfortunately, even "trivial" fixes have a > tendency to break the tree (which has been happening too often recently) > so I'm trying to push contributors to use CI whenever possible. Thank you for the explanation. I understand why you are talking about Travis. 2016-10-30 2:59 GMT+09:00 Joachim Breitner : > I see. The term “flow” suggested something deeper or more structured > (which might be good, but is not the case). Maybe relabel it “direct > commit”. Thank you. I'll update the "committer flow" to "direct commit" on page 7. It describes the current situation more clearly. Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed...
URL: From takenobu.hs at gmail.com Sun Oct 30 09:58:13 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 30 Oct 2016 18:58:13 +0900 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> <1477763970.1364.3.camel@joachim-breitner.de> Message-ID: Hi Ben, Joachim and devs, I updated the following: * page 7: update "committer flow" to "direct commit" * page 9: add a simple diagram for the various ticket states * page 10: update ghc-proposals flow * page 13: correct "Arcanist" * page 13: add "travis" Here is Rev.2016-Oct-30: GHC development flow http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf https://github.com/takenobu-hs/ghc-development-flow Please teach me if I have misunderstood, especially page 9. Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Sun Oct 30 12:23:00 2016 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Sun, 30 Oct 2016 12:23:00 +0000 Subject: Specialization plugin In-Reply-To: References: Message-ID: One small question: what's the difference between adding INLINABLE everywhere, and just compiling with -fexpose-all-unfoldings, https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/using-optimisation.html#ghc-flag--fexpose-all-unfoldings? Is there any reason you couldn't use that flag as opposed to writing a plugin that adds INLINEABLE pragmas to all bindings? On 28 October 2016 at 00:05, Ryan Trinkle wrote: > Hi everyone, > > I'm trying my hand at writing a GHC plugin to generate specializations for > all uses of a particular typeclass, and I've run into some trouble. I'd > appreciate it if someone could point me in the right direction! I'm new to > GHC development, so I may just be overlooking some simple stuff. 
> > The original problem: > Reflex's interface is presented as a typeclass, which allows the > underlying FRP engine to be selected by instance resolution. Although some > programs make use of this (particularly the semantics test suite), most > only ever use one implementation. However, since their code is typically > written polymorphically, the implementation cannot be inlined and its > rewrite rules cannot fire. This means that the typeclass > > My attempted solutions: > * Initially, I wrote a plugin that adds INLINABLE pragmas to everything. > This helped; small programs now generally see the inlining/rule-firings I > was hoping for. However, in large programs, this does not occur. I'm > looking into this, but also trying another approach: > * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma to > every binding whose type mentions Reflex. > > The trouble: > Since SPECIALIZE pragmas seem to be removed during typechecking > (DsBinds.dsSpec), I can't directly add them. I would like to, perhaps, > invoke Specialise.specBind; however, I'm not sure how to obtain the > necessary instance - as mentioned here > , > "only the type checker can conjure [dictionaries] up". Perhaps it's > possible to explicitly create that dictionary somewhere and extract it for > use during the plugin pass? > > > Thanks, > Ryan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From george.colpitts at gmail.com Sun Oct 30 12:49:54 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Sun, 30 Oct 2016 12:49:54 +0000 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> <1477763970.1364.3.camel@joachim-breitner.de> Message-ID: Hi Takenobu Thanks for writing this up. A small suggestion, for the box "Add a test case" it might be good to add a reference to https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Adding Also it might be good to add something about the process of fixing doc "bugs" and improving the doc. I think these are areas where less experienced Haskell developers can add value and contribute to the ghc community. Thanks again George On Sun, Oct 30, 2016 at 6:58 AM Takenobu Tani wrote: > Hi Ben, Joachim and devs, > > I updated the following: > * page 7: update "committer flow" to "direct commit" > * page 9: add a simple diagram for the various ticket states > * page 10: update ghc-proposals flow > * page 13: correct "Arcanist" > * page 13: add "travis" > > Here is Rev.2016-Oct-30: > GHC development flow > http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf > https://github.com/takenobu-hs/ghc-development-flow > > Please teach me if I have misunderstood, especially page 9. > > Regards, > Takenobu > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.trinkle at gmail.com Mon Oct 31 00:50:45 2016 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Sun, 30 Oct 2016 20:50:45 -0400 Subject: Specialization plugin In-Reply-To: References: Message-ID: My understanding was that they ought to be the same. However, that didn't seem to be the case. 
In my small example, where the plugin did work, -fexpose-all-unfoldings did not. On Sun, Oct 30, 2016 at 8:23 AM, Christiaan Baaij < christiaan.baaij at gmail.com> wrote: > One small question: what's the difference between adding INLINABLE > everywhere, and just compiling with -fexpose-all-unfoldings, > https://downloads.haskell.org/~ghc/latest/docs/html/users_ > guide/using-optimisation.html#ghc-flag--fexpose-all-unfoldings? Is there > any reason you couldn't use that flag as opposed to writing a plugin that > adds INLINEABLE pragmas to all bindings? > > On 28 October 2016 at 00:05, Ryan Trinkle wrote: > >> Hi everyone, >> >> I'm trying my hand at writing a GHC plugin to generate specializations >> for all uses of a particular typeclass, and I've run into some trouble. >> I'd appreciate it if someone could point me in the right direction! I'm >> new to GHC development, so I may just be overlooking some simple stuff. >> >> The original problem: >> Reflex's interface is presented as a typeclass, which allows the >> underlying FRP engine to be selected by instance resolution. Although some >> programs make use of this (particularly the semantics test suite), most >> only ever use one implementation. However, since their code is typically >> written polymorphically, the implementation cannot be inlined and its >> rewrite rules cannot fire. This means that the typeclass >> >> My attempted solutions: >> * Initially, I wrote a plugin that adds INLINABLE pragmas to >> everything. This helped; small programs now generally see the >> inlining/rule-firings I was hoping for. However, in large programs, this >> does not occur. I'm looking into this, but also trying another approach: >> * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma >> to every binding whose type mentions Reflex. >> >> The trouble: >> Since SPECIALIZE pragmas seem to be removed during typechecking >> (DsBinds.dsSpec), I can't directly add them. 
I would like to, perhaps, >> invoke Specialise.specBind; however, I'm not sure how to obtain the >> necessary instance - as mentioned here >> , >> "only the type checker can conjure [dictionaries] up". Perhaps it's >> possible to explicitly create that dictionary somewhere and extract it for >> use during the plugin pass? >> >> >> Thanks, >> Ryan >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Mon Oct 31 11:02:23 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Mon, 31 Oct 2016 20:02:23 +0900 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> <1477763970.1364.3.camel@joachim-breitner.de> Message-ID: Hi George, 2016-10-30 21:49 GMT+09:00 George Colpitts : > Thanks for writing this up. A small suggestion, for the box "Add a test case" > it might be good to add a reference to > https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests/Adding Thank you for advice. A test case is important, but it is often forgotten. I'll add it to reference. > Also it might be good to add something about the process of fixing doc "bugs" and improving the doc. > > I think these are areas where less experienced Haskell developers can add value and contribute to the > ghc community. Indeed. It's good :) Update of documents is easy to contribute by new contributors. I'll understand the document process, then I'll try to draw the diagram. Thank you, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Mon Oct 31 20:56:51 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 31 Oct 2016 20:56:51 +0000 Subject: Specialization plugin In-Reply-To: References: Message-ID: I am helping Ryan investigate this. See https://ghc.haskell.org/trac/ghc/ticket/12791 for one example which we have identified so far. Matt On Fri, Oct 28, 2016 at 10:14 PM, Simon Peyton Jones via ghc-devs wrote: > I’d really like to know why INLINABLE pragmas don’t work. Perhaps an > example? > > > > Only the type checker currently can conjure up dictionaries. It would > presumably not be impossible to do so later, but it’d be quite a new thing, > involving invoking the constraint solver. The pattern-match overlap checker > does this; but without needing to generate any evidence bindings. > > > > But let’s see what’s wrong with INLINABLE first. After all if there’s a bug > there, fixing it will benefit everyone. > > > > SImon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ryan > Trinkle > Sent: 28 October 2016 00:06 > To: ghc-devs at haskell.org > Subject: Specialization plugin > > > > Hi everyone, > > > > I'm trying my hand at writing a GHC plugin to generate specializations for > all uses of a particular typeclass, and I've run into some trouble. I'd > appreciate it if someone could point me in the right direction! I'm new to > GHC development, so I may just be overlooking some simple stuff. > > > > The original problem: > > Reflex's interface is presented as a typeclass, which allows the underlying > FRP engine to be selected by instance resolution. Although some programs > make use of this (particularly the semantics test suite), most only ever use > one implementation. However, since their code is typically written > polymorphically, the implementation cannot be inlined and its rewrite rules > cannot fire. 
This means that the typeclass > > > > My attempted solutions: > > * Initially, I wrote a plugin that adds INLINABLE pragmas to everything. > This helped; small programs now generally see the inlining/rule-firings I > was hoping for. However, in large programs, this does not occur. I'm > looking into this, but also trying another approach: > > * Now, I am attempting to write a plugin that adds a SPECIALIZE pragma to > every binding whose type mentions Reflex. > > > > The trouble: > > Since SPECIALIZE pragmas seem to be removed during typechecking > (DsBinds.dsSpec), I can't directly add them. I would like to, perhaps, > invoke Specialise.specBind; however, I'm not sure how to obtain the > necessary instance - as mentioned here, "only the type checker can conjure > [dictionaries] up". Perhaps it's possible to explicitly create that > dictionary somewhere and extract it for use during the plugin pass? > > > > > > Thanks, > > Ryan > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >