From acfoltzer at gmail.com Fri Jun 3 21:33:51 2016 From: acfoltzer at gmail.com (Adam Foltzer) Date: Fri, 3 Jun 2016 14:33:51 -0700 Subject: Feedback on -Wredundant-constraints Message-ID: With 8.0.1 freshly arrived, I'm taking on the task of updating a number of our larger projects here at Galois. I made a couple of these comments briefly on a relevant Trac ticket[1], but I wanted to open this discussion to a wider audience. We tend to use -Wall (and -Werror, in CI environments), and so I've had to make decisions about how to handle the new -Wredundant-constraints warnings. So far, I've come to think of it as two different warnings that happen to be combined: Warning 1: a warning for constraints made redundant by superclass relationships, and Warning 2: a warning for unused constraints Overall I'm a fan of Warning 1. It seems very much in the spirit of other warnings such as unused imports. The only stumbling block is how it affects the 3-release compatibility plan with respect to, e.g., the AMP. Most of our code targets a 2-release window, though, so in every such case it has been fine for us to simply remove the offending constraint. Warning 2 on the other hand is far more dubious to me. In the best case, it finds constraints that through oversight or non-local changes are truly no longer necessary in the codebase. This is nice, but the much more common case in our code is that we've made a deliberate decision to include that constraint as part of our API design. The most painful example of this I've hit so far is in an API of related functions, where we've put the same constraint on each function even when the implementation of that particular function might not need that constraint. This is good for consistency and forward-looking compatibility (what if we need that constraint in the next version?). The warning's advice in this case makes the API harder to understand, and less abstract (the client shouldn't care or know that f needs Functor, but g doesn't, if both will always be used in a Functor context). On another level, Warning 2 is a warning that we could have given a more general type to a definition. We quite rightly don't do this for the non-constraint parts of the type signatures, so why are we doing it for the constraints? I'm happy that Warning 1 is now around, but Warning 2 feels much more like an opinionated lint check, and I really wish it wasn't part of -Wall. [1]: https://ghc.haskell.org/trac/ghc/ticket/10635#comment:15 -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Jun 4 13:11:19 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 4 Jun 2016 09:11:19 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: Message-ID: Agreed. Ive hit exactly these issues myself to the point where i err on suppressing that warning in my code now. In part because i use those unused constraints as a semantic contract. On Friday, June 3, 2016, Adam Foltzer wrote: > With 8.0.1 freshly arrived, I'm taking on the task of updating a number of > our larger projects here at Galois. I made a couple of these comments > briefly on a relevant Trac ticket[1], but I wanted to open this discussion > to a wider audience. > > We tend to use -Wall (and -Werror, in CI environments), and so I've had to > make decisions about how to handle the new -Wredundant-constraints > warnings. 
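For concreteness, a minimal, self-contained sketch of the two cases Adam describes (the module and function names here are invented for illustration):

    {-# OPTIONS_GHC -Wredundant-constraints #-}
    module RedundantDemo where

    -- Warning 1: Eq a is implied by Ord a's superclass, so the Eq
    -- constraint is reported as redundant.
    maxOf :: (Eq a, Ord a) => a -> a -> a
    maxOf x y = if x > y then x else y

    -- Warning 2: the implementation never uses the Functor dictionary,
    -- but an API author may keep the constraint deliberately so that
    -- every function in the interface shares the same context.
    label :: Functor f => String -> f a -> (String, f a)
    label name xs = (name, xs)
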
So far, I've come to think of it as two different warnings that > happen to be combined: > > Warning 1: a warning for constraints made redundant by superclass > relationships, and > Warning 2: a warning for unused constraints > > Overall I'm a fan of Warning 1. It seems very much in the spirit of other > warnings such as unused imports. The only stumbling block is how it affects > the 3-release compatibility plan with respect to, e.g., the AMP. Most of > our code targets a 2-release window, though, so in every such case it has > been fine for us to simply remove the offending constraint. > > Warning 2 on the other hand is far more dubious to me. In the best case, > it finds constraints that through oversight or non-local changes are truly > no longer necessary in the codebase. This is nice, but the much more common > case in our code is that we've made a deliberate decision to include that > constraint as part of our API design. > > The most painful example of this I've hit so far is in an API of related > functions, where we've put the same constraint on each function even when > the implementation of that particular function might not need that > constraint. This is good for consistency and forward-looking compatibility > (what if we need that constraint in the next version?). The warning's > advice in this case makes the API harder to understand, and less abstract > (the client shouldn't care or know that f needs Functor, but g doesn't, if > both will always be used in a Functor context). > > On another level, Warning 2 is a warning that we could have given a more > general type to a definition. We quite rightly don't do this for the > non-constraint parts of the type signatures, so why are we doing it for the > constraints? > > I'm happy that Warning 1 is now around, but Warning 2 feels much more like > an opinionated lint check, and I really wish it wasn't part of -Wall. > > [1]: https://ghc.haskell.org/trac/ghc/ticket/10635#comment:15 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Mon Jun 6 13:40:13 2016 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 6 Jun 2016 09:40:13 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: Message-ID: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> I've been bitten by this too and had to disable the warning. Let me propose an alternative; * -Wredundant-constraints becomes only your Warning 1. That is, it reports when a user writes a constraint that is fully equivalent to some other, strictly smaller constraint, like suggesting simplifying (Eq a, Ord a) to (Ord a). * -Wtype-overly-specific takes on Warning 2, and adds the ability to catch any type signature that's more specific than it needs to be. (Whether or not to add this to -Wall is for others to decide.) This is indeed a more lint-like warning, but HLint would be hard-pressed to figure this one out. * We really need a way of disabling/enabling warnings per declaration. I propose something like this: > {-# WARNINGS foo -Wno-type-overly-specific #-} > foo :: Int -> Int > foo x = x Richard On Jun 4, 2016, at 9:11 AM, Carter Schonwald wrote: > Agreed. Ive hit exactly these issues myself to the point where i err on suppressing that warning in my code now. In part because i use those unused constraints as a semantic contract. > > On Friday, June 3, 2016, Adam Foltzer wrote: > With 8.0.1 freshly arrived, I'm taking on the task of updating a number of our larger projects here at Galois. 
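As an illustration of the kind of signature Richard's proposed -Wtype-overly-specific would flag (the flag is only a proposal at this point, and the function below is invented for the example):

    -- The body never uses anything specific to Int or Bool, so the most
    -- general signature would be  swapPair :: (a, b) -> (b, a).
    swapPair :: (Int, Bool) -> (Bool, Int)
    swapPair (x, y) = (y, x)
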
I made a couple of these comments briefly on a relevant Trac ticket[1], but I wanted to open this discussion to a wider audience. > > We tend to use -Wall (and -Werror, in CI environments), and so I've had to make decisions about how to handle the new -Wredundant-constraints warnings. So far, I've come to think of it as two different warnings that happen to be combined: > > Warning 1: a warning for constraints made redundant by superclass relationships, and > Warning 2: a warning for unused constraints > > Overall I'm a fan of Warning 1. It seems very much in the spirit of other warnings such as unused imports. The only stumbling block is how it affects the 3-release compatibility plan with respect to, e.g., the AMP. Most of our code targets a 2-release window, though, so in every such case it has been fine for us to simply remove the offending constraint. > > Warning 2 on the other hand is far more dubious to me. In the best case, it finds constraints that through oversight or non-local changes are truly no longer necessary in the codebase. This is nice, but the much more common case in our code is that we've made a deliberate decision to include that constraint as part of our API design. > > The most painful example of this I've hit so far is in an API of related functions, where we've put the same constraint on each function even when the implementation of that particular function might not need that constraint. This is good for consistency and forward-looking compatibility (what if we need that constraint in the next version?). The warning's advice in this case makes the API harder to understand, and less abstract (the client shouldn't care or know that f needs Functor, but g doesn't, if both will always be used in a Functor context). > > On another level, Warning 2 is a warning that we could have given a more general type to a definition. We quite rightly don't do this for the non-constraint parts of the type signatures, so why are we doing it for the constraints? > > I'm happy that Warning 1 is now around, but Warning 2 feels much more like an opinionated lint check, and I really wish it wasn't part of -Wall. > > [1]: https://ghc.haskell.org/trac/ghc/ticket/10635#comment:15 > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Jun 6 14:37:53 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Jun 2016 10:37:53 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> Message-ID: Strong emphatic agreement. Both that it should split up thusly and that the latter shouldn't be in WALL. Otherwise it poisons the well for all be the most sophisticated users of type level programming. I don't suppose there's any hope of having this resolved prior to GHC 8.2 because it is a real usability regression? Because I think we can all agree that the proposed change would not break any 8.0 series code, and positively impact everyone. On Jun 6, 2016 9:40 AM, "Richard Eisenberg" wrote: > I've been bitten by this too and had to disable the warning. > > Let me propose an alternative; > * -Wredundant-constraints becomes only your Warning 1. 
That is, it reports > when a user writes a constraint that is fully equivalent to some other, > strictly smaller constraint, like suggesting simplifying (Eq a, Ord a) to > (Ord a). > > * -Wtype-overly-specific takes on Warning 2, and adds the ability to catch > any type signature that's more specific than it needs to be. (Whether or > not to add this to -Wall is for others to decide.) This is indeed a more > lint-like warning, but HLint would be hard-pressed to figure this one out. > > * We really need a way of disabling/enabling warnings per declaration. I > propose something like this: > > > {-# WARNINGS foo -Wno-type-overly-specific #-} > > foo :: Int -> Int > > foo x = x > > Richard > > On Jun 4, 2016, at 9:11 AM, Carter Schonwald > wrote: > > Agreed. Ive hit exactly these issues myself to the point where i err on > suppressing that warning in my code now. In part because i use those > unused constraints as a semantic contract. > > On Friday, June 3, 2016, Adam Foltzer wrote: > >> With 8.0.1 freshly arrived, I'm taking on the task of updating a number >> of our larger projects here at Galois. I made a couple of these comments >> briefly on a relevant Trac ticket[1], but I wanted to open this discussion >> to a wider audience. >> >> We tend to use -Wall (and -Werror, in CI environments), and so I've had >> to make decisions about how to handle the new -Wredundant-constraints >> warnings. So far, I've come to think of it as two different warnings that >> happen to be combined: >> >> Warning 1: a warning for constraints made redundant by superclass >> relationships, and >> Warning 2: a warning for unused constraints >> >> Overall I'm a fan of Warning 1. It seems very much in the spirit of other >> warnings such as unused imports. The only stumbling block is how it affects >> the 3-release compatibility plan with respect to, e.g., the AMP. Most of >> our code targets a 2-release window, though, so in every such case it has >> been fine for us to simply remove the offending constraint. >> >> Warning 2 on the other hand is far more dubious to me. In the best case, >> it finds constraints that through oversight or non-local changes are truly >> no longer necessary in the codebase. This is nice, but the much more common >> case in our code is that we've made a deliberate decision to include that >> constraint as part of our API design. >> >> The most painful example of this I've hit so far is in an API of related >> functions, where we've put the same constraint on each function even when >> the implementation of that particular function might not need that >> constraint. This is good for consistency and forward-looking compatibility >> (what if we need that constraint in the next version?). The warning's >> advice in this case makes the API harder to understand, and less abstract >> (the client shouldn't care or know that f needs Functor, but g doesn't, if >> both will always be used in a Functor context). >> >> On another level, Warning 2 is a warning that we could have given a more >> general type to a definition. We quite rightly don't do this for the >> non-constraint parts of the type signatures, so why are we doing it for the >> constraints? >> >> I'm happy that Warning 1 is now around, but Warning 2 feels much more >> like an opinionated lint check, and I really wish it wasn't part of -Wall. 
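Until such a split happens, the workaround mentioned in this thread is to keep -Wall but suppress just this warning per module; a sketch of one way to do that (the module name is invented):

    {-# OPTIONS_GHC -Wall -Wno-redundant-constraints #-}
    module MyApi where
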
>> >> [1]: https://ghc.haskell.org/trac/ghc/ticket/10635#comment:15 >> > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at seidel.io Mon Jun 6 16:16:39 2016 From: eric at seidel.io (Eric Seidel) Date: Mon, 06 Jun 2016 09:16:39 -0700 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> Message-ID: <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> On Mon, Jun 6, 2016, at 07:37, Carter Schonwald wrote: > I don't suppose there's any hope of having this resolved prior to GHC 8.2 > because it is a real usability regression? Because I think we can all > agree > that the proposed change would not break any 8.0 series code, and > positively impact everyone. Do you mean prior to 8.0.2? From carter.schonwald at gmail.com Mon Jun 6 17:50:20 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Jun 2016 13:50:20 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> Message-ID: It's a tall ask, but why not? On Jun 6, 2016 12:16 PM, "Eric Seidel" wrote: > > > On Mon, Jun 6, 2016, at 07:37, Carter Schonwald wrote: > > I don't suppose there's any hope of having this resolved prior to GHC 8.2 > > because it is a real usability regression? Because I think we can all > > agree > > that the proposed change would not break any 8.0 series code, and > > positively impact everyone. > > Do you mean prior to 8.0.2? > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Jun 6 21:59:48 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Jun 2016 17:59:48 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> Message-ID: To better elaborate : I definitely want this for the next major version i.e. 8.2.* , but I'm also wondering if perhaps the "overly constrained type" warning should be flat out removed from Wall even in ghc 8.0.2, On Monday, June 6, 2016, Carter Schonwald wrote: > It's a tall ask, but why not? > On Jun 6, 2016 12:16 PM, "Eric Seidel" > wrote: > >> >> >> On Mon, Jun 6, 2016, at 07:37, Carter Schonwald wrote: >> > I don't suppose there's any hope of having this resolved prior to GHC >> 8.2 >> > because it is a real usability regression? Because I think we can all >> > agree >> > that the proposed change would not break any 8.0 series code, and >> > positively impact everyone. >> >> Do you mean prior to 8.0.2? >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Mon Jun 6 22:39:07 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 6 Jun 2016 18:39:07 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> Message-ID: I strongly agree with per-declaration warning suppression. But I'd like to leave both warnings on by default in -Wall. 1. Sometimes an upstream library will drop a constraint. The warning lets me know I can drop it too. 2. Sometimes an implementation evolves from a draft that requires a constraint to a final form that does not; it's easy to forget to check whether any constraints have become redundant. It's true that this is somewhat analogous to a function that is less polymorphic than it can be. But my answer to that is the opposite: I'd be very happy to be warned when I write a top-level function with a specific type when it could be parametrically polymorphic in that type. On Jun 6, 2016 9:40 AM, "Richard Eisenberg" wrote: > I've been bitten by this too and had to disable the warning. > > Let me propose an alternative; > * -Wredundant-constraints becomes only your Warning 1. That is, it reports > when a user writes a constraint that is fully equivalent to some other, > strictly smaller constraint, like suggesting simplifying (Eq a, Ord a) to > (Ord a). > > * -Wtype-overly-specific takes on Warning 2, and adds the ability to catch > any type signature that's more specific than it needs to be. (Whether or > not to add this to -Wall is for others to decide.) This is indeed a more > lint-like warning, but HLint would be hard-pressed to figure this one out. > > * We really need a way of disabling/enabling warnings per declaration. I > propose something like this: > > > {-# WARNINGS foo -Wno-type-overly-specific #-} > > foo :: Int -> Int > > foo x = x > > Richard > > On Jun 4, 2016, at 9:11 AM, Carter Schonwald > wrote: > > Agreed. Ive hit exactly these issues myself to the point where i err on > suppressing that warning in my code now. In part because i use those > unused constraints as a semantic contract. > > On Friday, June 3, 2016, Adam Foltzer wrote: > >> With 8.0.1 freshly arrived, I'm taking on the task of updating a number >> of our larger projects here at Galois. I made a couple of these comments >> briefly on a relevant Trac ticket[1], but I wanted to open this discussion >> to a wider audience. >> >> We tend to use -Wall (and -Werror, in CI environments), and so I've had >> to make decisions about how to handle the new -Wredundant-constraints >> warnings. So far, I've come to think of it as two different warnings that >> happen to be combined: >> >> Warning 1: a warning for constraints made redundant by superclass >> relationships, and >> Warning 2: a warning for unused constraints >> >> Overall I'm a fan of Warning 1. It seems very much in the spirit of other >> warnings such as unused imports. The only stumbling block is how it affects >> the 3-release compatibility plan with respect to, e.g., the AMP. Most of >> our code targets a 2-release window, though, so in every such case it has >> been fine for us to simply remove the offending constraint. >> >> Warning 2 on the other hand is far more dubious to me. In the best case, >> it finds constraints that through oversight or non-local changes are truly >> no longer necessary in the codebase. 
This is nice, but the much more common >> case in our code is that we've made a deliberate decision to include that >> constraint as part of our API design. >> >> The most painful example of this I've hit so far is in an API of related >> functions, where we've put the same constraint on each function even when >> the implementation of that particular function might not need that >> constraint. This is good for consistency and forward-looking compatibility >> (what if we need that constraint in the next version?). The warning's >> advice in this case makes the API harder to understand, and less abstract >> (the client shouldn't care or know that f needs Functor, but g doesn't, if >> both will always be used in a Functor context). >> >> On another level, Warning 2 is a warning that we could have given a more >> general type to a definition. We quite rightly don't do this for the >> non-constraint parts of the type signatures, so why are we doing it for the >> constraints? >> >> I'm happy that Warning 1 is now around, but Warning 2 feels much more >> like an opinionated lint check, and I really wish it wasn't part of -Wall. >> >> [1]: https://ghc.haskell.org/trac/ghc/ticket/10635#comment:15 >> > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Jun 9 07:14:53 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 09 Jun 2016 00:14:53 -0700 Subject: Call for talks: Haskell Implementors Workshop 2016, Aug 24, Nara Message-ID: <1465456335-sup-6629@sabre> Call for Contributions ACM SIGPLAN Haskell Implementors' Workshop http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016 Nara, Japan, 24 September, 2016 Co-located with ICFP 2016 http://www.icfpconference.org/icfp2016/ Important dates --------------- Proposal Deadline: Monday, 8 August, 2016 Notification: Monday, 22 August, 2016 Workshop: Saturday, 24 September, 2016 The 8th Haskell Implementors' Workshop is to be held alongside ICFP 2016 this year in Nara. It is a forum for people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure, to share their work and discuss future directions and collaborations with others. Talks and/or demos are proposed by submitting an abstract, and selected by a small program committee. There will be no published proceedings; the workshop will be informal and interactive, with a flexible timetable and plenty of room for ad-hoc discussion, demos, and impromptu short talks. Scope and target audience ------------------------- It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium which is also co-located with ICFP 2016. The Haskell Symposium is for the publication of Haskell-related research. In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers. In the Haskell Implementors' Workshop, we hope to study the underlying technology. 
We want to bring together anyone interested in the nitty-gritty details behind turning plain-text source code into a deployed product. Having said that, members of the wider Haskell community are more than welcome to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving. The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets: * Compilation techniques * Language features and extensions * Type system implementation * Concurrency and parallelism: language design and implementation * Performance, optimisation and benchmarking * Virtual machines and run-time systems * Libraries and tools for development or deployment Talks ----- At this stage we would like to invite proposals from potential speakers for talks and demonstrations. We are aiming for 20 minute talks with 10 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 300 words. Submissions should be made via HotCRP. The website is: https://icfp-hiw16.hotcrp.com/ We will also have a lightning talks session which will be organised on the day. These talks will be 5-10 minutes, depending on available time. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators. Organisers ---------- * Joachim Breitner (Karlsruhe Institut f?r Technologie) * Duncan Coutts (Well Typed) * Michael Snoyman (FP Complete) * Luite Stegeman (ghcjs) * Niki Vazou (UCSD) * Stephanie Weirich (University of Pennsylvania) * Edward Z. Yang - chair (Stanford University) From ezyang at mit.edu Thu Jun 9 07:17:18 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 09 Jun 2016 00:17:18 -0700 Subject: Call for talks: Haskell Implementors Workshop 2016, Sep 24 (FIXED), Nara Message-ID: <1465456550-sup-6234@sabre> (...and now with the right date in the subject line!) Call for Contributions ACM SIGPLAN Haskell Implementors' Workshop http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016 Nara, Japan, 24 September, 2016 Co-located with ICFP 2016 http://www.icfpconference.org/icfp2016/ Important dates --------------- Proposal Deadline: Monday, 8 August, 2016 Notification: Monday, 22 August, 2016 Workshop: Saturday, 24 September, 2016 The 8th Haskell Implementors' Workshop is to be held alongside ICFP 2016 this year in Nara. It is a forum for people involved in the design and development of Haskell implementations, tools, libraries, and supporting infrastructure, to share their work and discuss future directions and collaborations with others. Talks and/or demos are proposed by submitting an abstract, and selected by a small program committee. There will be no published proceedings; the workshop will be informal and interactive, with a flexible timetable and plenty of room for ad-hoc discussion, demos, and impromptu short talks. Scope and target audience ------------------------- It is important to distinguish the Haskell Implementors' Workshop from the Haskell Symposium which is also co-located with ICFP 2016. The Haskell Symposium is for the publication of Haskell-related research. 
In contrast, the Haskell Implementors' Workshop will have no proceedings -- although we will aim to make talk videos, slides and presented data available with the consent of the speakers. In the Haskell Implementors' Workshop, we hope to study the underlying technology. We want to bring together anyone interested in the nitty-gritty details behind turning plain-text source code into a deployed product. Having said that, members of the wider Haskell community are more than welcome to attend the workshop -- we need your feedback to keep the Haskell ecosystem thriving. The scope covers any of the following topics. There may be some topics that people feel we've missed, so by all means submit a proposal even if it doesn't fit exactly into one of these buckets: * Compilation techniques * Language features and extensions * Type system implementation * Concurrency and parallelism: language design and implementation * Performance, optimisation and benchmarking * Virtual machines and run-time systems * Libraries and tools for development or deployment Talks ----- At this stage we would like to invite proposals from potential speakers for talks and demonstrations. We are aiming for 20 minute talks with 10 minutes for questions and changeovers. We want to hear from people writing compilers, tools, or libraries, people with cool ideas for directions in which we should take the platform, proposals for new features to be implemented, and half-baked crazy ideas. Please submit a talk title and abstract of no more than 300 words. Submissions should be made via HotCRP. The website is: https://icfp-hiw16.hotcrp.com/ We will also have a lightning talks session which will be organised on the day. These talks will be 5-10 minutes, depending on available time. Suggested topics for lightning talks are to present a single idea, a work-in-progress project, a problem to intrigue and perplex Haskell implementors, or simply to ask for feedback and collaborators. Organisers ---------- * Joachim Breitner (Karlsruhe Institut f?r Technologie) * Duncan Coutts (Well Typed) * Michael Snoyman (FP Complete) * Luite Stegeman (ghcjs) * Niki Vazou (UCSD) * Stephanie Weirich (University of Pennsylvania) * Edward Z. Yang - chair (Stanford University) From carter.schonwald at gmail.com Sat Jun 11 16:38:08 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 11 Jun 2016 12:38:08 -0400 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> Message-ID: Either way, do we have a strong agreement that for ghc 8.2 that this warning should be split up and the noiser part should be moved out of the default wall set? On Monday, June 6, 2016, Carter Schonwald wrote: > To better elaborate : I definitely want this for the next major version > i.e. 8.2.* , but I'm also wondering if perhaps the "overly constrained > type" warning should be flat out removed from Wall even in ghc 8.0.2, > > On Monday, June 6, 2016, Carter Schonwald > wrote: > >> It's a tall ask, but why not? >> On Jun 6, 2016 12:16 PM, "Eric Seidel" wrote: >> >>> >>> >>> On Mon, Jun 6, 2016, at 07:37, Carter Schonwald wrote: >>> > I don't suppose there's any hope of having this resolved prior to GHC >>> 8.2 >>> > because it is a real usability regression? Because I think we can all >>> > agree >>> > that the proposed change would not break any 8.0 series code, and >>> > positively impact everyone. >>> >>> Do you mean prior to 8.0.2? 
>>> _______________________________________________ >>> Glasgow-haskell-users mailing list >>> Glasgow-haskell-users at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelburge at pobox.com Sat Jun 11 19:12:12 2016 From: michaelburge at pobox.com (Michael Burge) Date: Sat, 11 Jun 2016 12:12:12 -0700 Subject: Allow extra commas in module declarations or lists? Message-ID: Some languages like Perl allow you to include additional commas in your lists, so that you can freely reorder them without worrying about adding or removing commas from the last or first item: my @array = ( 1, 2, 3, ); Perl even allows multiple redundant commas everywhere after the first element, which is less useful but has come up on occasion: my @array = (1,,,2,,,,,,,,,3,,,,,); I propose allowing an optional single extra comma at the end in module declarations, record declarations, record constructors, and list constructors: module Example ( module Example, SomeConstructor(..), ) where data SomeConstructor = SomeConstructor { foo :: Int, bar :: Int, } baz :: SomeConstructor -> SomeConstructor baz x = x { foo = 5, bar = 10, } qux :: [ Int ] qux = [ 1, 2, 3, ] What do you think? -------------- next part -------------- An HTML attachment was scrubbed... URL: From migmit at gmail.com Sat Jun 11 20:02:13 2016 From: migmit at gmail.com (MigMit) Date: Sat, 11 Jun 2016 22:02:13 +0200 Subject: Allow extra commas in module declarations or lists? In-Reply-To: References: Message-ID: I think that's the greatest idea since monads. Seriously. ?????????? ? iPad > 11 ???? 2016 ?., ? 21:12, Michael Burge ???????(?): > > Some languages like Perl allow you to include additional commas in your lists, so that you can freely reorder them without worrying about adding or removing commas from the last or first item: > > my @array = ( > 1, > 2, > 3, > ); > > Perl even allows multiple redundant commas everywhere after the first element, which is less useful but has come up on occasion: > my @array = (1,,,2,,,,,,,,,3,,,,,); > > I propose allowing an optional single extra comma at the end in module declarations, record declarations, record constructors, and list constructors: > > module Example ( > module Example, > SomeConstructor(..), > ) where > > data SomeConstructor = SomeConstructor { > foo :: Int, > bar :: Int, > } > > baz :: SomeConstructor -> SomeConstructor > baz x = x { > foo = 5, > bar = 10, > } > > qux :: [ Int ] > qux = [ > 1, > 2, > 3, > ] > > What do you think? > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users From mail at joachim-breitner.de Sun Jun 12 12:24:04 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 12 Jun 2016 14:24:04 +0200 Subject: Allow extra commas in module declarations or lists? In-Reply-To: References: Message-ID: <1465734244.2063.10.camel@joachim-breitner.de> Hi, Am Samstag, den 11.06.2016, 12:12 -0700 schrieb Michael Burge: > What do you think? For the module header, this is already possible. For the term language, it unfortunately clashes with things like TupleSections. I believe this has been discussed a few times in the past, e.g. https://mail.haskell.org/pipermail/haskell-prime/2013-May/003833.html Greetings, Joachim -- Joachim ?nomeata? Breitner ? mail at joachim-breitner.de ? 
https://www.joachim-breitner.de/ ? XMPP: nomeata at joachim-breitner.de?? OpenPGP-Key: 0xF0FBF51F ? Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From harendra.kumar at gmail.com Sun Jun 12 18:38:38 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 13 Jun 2016 00:08:38 +0530 Subject: CMM-to-ASM: Register allocation wierdness Message-ID: Hi, I am implementing unicode normalization in Haskell. I challenged myself to match the performance with the best C/C++ implementation, the best being the ICU library. I am almost there, beating it in one of the benchmarks and within 30% for others. I am out of all application level tricks that I could think of and now need help from the compiler. I started with a bare minimum loop and adding functionality incrementally watching where the performance trips. At one point I saw that adding just one 'if' condition reduced the performance by half. I looked at what's going on at the assembly level. Here is a github gist of the assembly instructions executed in the fast path of the loop, corresponding cmm snippets and also the full cmm corresponding to the loop: https://gist.github.com/harendra-kumar/7d34c6745f604a15a872768e57cd2447 I have annotated the assembly code with labels matching the corresponding CMM. With the addition of another "if" condition the loop which was pretty simple till now suddenly got bloated with a lot of register reassignments. Here is a snippet of the register movements added: # _n4se: # swap r14 <-> r11 => 0x408d6b: mov %r11,0x98(%rsp) => 0x408d73: mov %r14,%r11 => 0x408d76: mov 0x98(%rsp),%r14 # reassignments # rbx -> r10 -> r9 -> r8 -> rdi -> rsi -> rdx -> rcx -> rbx => 0x408d7e: mov %rbx,0x90(%rsp) => 0x408d86: mov %rcx,%rbx => 0x408d89: mov %rdx,%rcx => 0x408d8c: mov %rsi,%rdx => 0x408d8f: mov %rdi,%rsi => 0x408d92: mov %r8,%rdi => 0x408d95: mov %r9,%r8 => 0x408d98: mov %r10,%r9 => 0x408d9b: mov 0x90(%rsp),%r10 . . . loop logic here which uses only %rax, %r10 and %r9 . . . . # _n4s8: # shuffle back to original assignments => 0x4090dc: mov %r14,%r11 => 0x4090df: mov %r9,%r10 => 0x4090e2: mov %r8,%r9 => 0x4090e5: mov %rdi,%r8 => 0x4090e8: mov %rsi,%rdi => 0x4090eb: mov %rdx,%rsi => 0x4090ee: mov %rcx,%rdx => 0x4090f1: mov %rbx,%rcx => 0x4090f4: mov %rax,%rbx => 0x4090f7: mov 0x88(%rsp),%rax => 0x4090ff: jmpq 0x408d2a The registers seem to be getting reassigned here, data flowing from one to the next. In this particular path a lot of these register movements seem unnecessary and are only undone at the end without being used. Maybe this is because these are reusable blocks and the movement is necessary when used in some other path? Can this be avoided? Or at least avoided in a certain fast path somehow by hinting the compiler? Any pointers to the GHC code will be appreciated. I am not yet much familiar with the GHC code but can dig deeper pretty quickly. But before that I hope someone knowledgeable in this area can shed some light on this at a conceptual level or if at all it can be improved. I can provide more details and experiment more if needed. Thanks, Harendra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eric at seidel.io Sun Jun 12 20:02:35 2016 From: eric at seidel.io (Eric Seidel) Date: Sun, 12 Jun 2016 13:02:35 -0700 Subject: Feedback on -Wredundant-constraints In-Reply-To: References: <984EDFF8-1F80-4CDE-9E7A-2FFB1C7BE08B@cis.upenn.edu> <1465229799.761202.629427369.1BC5B3CD@webmail.messagingengine.com> Message-ID: <1465761755.3176975.635439865.6688CF05@webmail.messagingengine.com> I don't have any personal experience with the new warning, but it does sound to me like there are two separate warnings (redundant vs unused constraints) combined under a single flag. So I would support separating them! On Sat, Jun 11, 2016, at 09:38, Carter Schonwald wrote: > Either way, do we have a strong agreement that for ghc 8.2 that this > warning should be split up and the noiser part should be moved out of the > default wall set? > > On Monday, June 6, 2016, Carter Schonwald > wrote: > > > To better elaborate : I definitely want this for the next major version > > i.e. 8.2.* , but I'm also wondering if perhaps the "overly constrained > > type" warning should be flat out removed from Wall even in ghc 8.0.2, > > > > On Monday, June 6, 2016, Carter Schonwald > > wrote: > > > >> It's a tall ask, but why not? > >> On Jun 6, 2016 12:16 PM, "Eric Seidel" wrote: > >> > >>> > >>> > >>> On Mon, Jun 6, 2016, at 07:37, Carter Schonwald wrote: > >>> > I don't suppose there's any hope of having this resolved prior to GHC > >>> 8.2 > >>> > because it is a real usability regression? Because I think we can all > >>> > agree > >>> > that the proposed change would not break any 8.0 series code, and > >>> > positively impact everyone. > >>> > >>> Do you mean prior to 8.0.2? > >>> _______________________________________________ > >>> Glasgow-haskell-users mailing list > >>> Glasgow-haskell-users at haskell.org > >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > >>> > >> From harendra.kumar at gmail.com Mon Jun 13 07:29:33 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 13 Jun 2016 12:59:33 +0530 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: Message-ID: My earlier experiment was on GHC-7.10.3. I repeated this on GHC-8.0.1 and the assembly traced was exactly the same except for a marginal improvement. The 8.0.1 code generator removed the r14/r11 swap but the rest of the register ring shift remains the same. I have updated the github gist with the 8.0.1 trace: https://gist.github.com/harendra-kumar/7d34c6745f604a15a872768e57cd2447 thanks, harendra On 13 June 2016 at 00:08, Harendra Kumar wrote: > Hi, > > I am implementing unicode normalization in Haskell. I challenged myself to > match the performance with the best C/C++ implementation, the best being > the ICU library. I am almost there, beating it in one of the benchmarks and > within 30% for others. I am out of all application level tricks that I > could think of and now need help from the compiler. > > I started with a bare minimum loop and adding functionality incrementally > watching where the performance trips. At one point I saw that adding just > one 'if' condition reduced the performance by half. I looked at what's > going on at the assembly level. Here is a github gist of the assembly > instructions executed in the fast path of the loop, corresponding cmm > snippets and also the full cmm corresponding to the loop: > > https://gist.github.com/harendra-kumar/7d34c6745f604a15a872768e57cd2447 > > I have annotated the assembly code with labels matching the corresponding > CMM. 
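For anyone who wants to reproduce this kind of side-by-side comparison, GHC can dump the intermediate Cmm and the final assembly itself; a typical invocation might look like the following (the file name is hypothetical, and this is not necessarily how the traces in the gist were produced):

    ghc -O2 -ddump-cmm -ddump-asm -ddump-to-file Normalize.hs
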
> > With the addition of another "if" condition the loop which was pretty > simple till now suddenly got bloated with a lot of register reassignments. > Here is a snippet of the register movements added: > > # _n4se: > # swap r14 <-> r11 > => 0x408d6b: mov %r11,0x98(%rsp) > => 0x408d73: mov %r14,%r11 > => 0x408d76: mov 0x98(%rsp),%r14 > > # reassignments > # rbx -> r10 -> r9 -> r8 -> rdi -> rsi -> rdx -> rcx -> rbx > => 0x408d7e: mov %rbx,0x90(%rsp) > => 0x408d86: mov %rcx,%rbx > => 0x408d89: mov %rdx,%rcx > => 0x408d8c: mov %rsi,%rdx > => 0x408d8f: mov %rdi,%rsi > => 0x408d92: mov %r8,%rdi > => 0x408d95: mov %r9,%r8 > => 0x408d98: mov %r10,%r9 > => 0x408d9b: mov 0x90(%rsp),%r10 > . > . > . > loop logic here which uses only %rax, %r10 and %r9 . > . > . > . > # _n4s8: > # shuffle back to original assignments > => 0x4090dc: mov %r14,%r11 > => 0x4090df: mov %r9,%r10 > => 0x4090e2: mov %r8,%r9 > => 0x4090e5: mov %rdi,%r8 > => 0x4090e8: mov %rsi,%rdi > => 0x4090eb: mov %rdx,%rsi > => 0x4090ee: mov %rcx,%rdx > => 0x4090f1: mov %rbx,%rcx > => 0x4090f4: mov %rax,%rbx > => 0x4090f7: mov 0x88(%rsp),%rax > > => 0x4090ff: jmpq 0x408d2a > > > The registers seem to be getting reassigned here, data flowing from one to > the next. In this particular path a lot of these register movements seem > unnecessary and are only undone at the end without being used. > > Maybe this is because these are reusable blocks and the movement is > necessary when used in some other path? > > Can this be avoided? Or at least avoided in a certain fast path somehow by > hinting the compiler? Any pointers to the GHC code will be appreciated. I > am not yet much familiar with the GHC code but can dig deeper pretty > quickly. But before that I hope someone knowledgeable in this area can shed > some light on this at a conceptual level or if at all it can be improved. I > can provide more details and experiment more if needed. > > Thanks, > Harendra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anthony_clayden at clear.net.nz Wed Jun 15 03:29:59 2016 From: anthony_clayden at clear.net.nz (AntC) Date: Wed, 15 Jun 2016 03:29:59 +0000 (UTC) Subject: ORF for fields of higher-ranked type [was: TDNR without new operators or syntax changes] References: <1463402347025-5835927.post@n5.nabble.com> <1463469688161-5835978.post@n5.nabble.com> <20160521151419.GA8247@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Message-ID: > Adam Gundry writes: > ... Having spent more time thinking about record field overloading > than perhaps I should, ... Thanks Adam, another thing on the back burner ... The earlier design for SORF tried to support higher-ranked fields. https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/SORF That had to be abandoned, until explicit type application was available IIRC. We now have type application in GHC 8.0. Is there some hope for higher-rank type fields? AntC From dominic at steinitz.org Thu Jun 16 07:11:59 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Thu, 16 Jun 2016 07:11:59 +0000 (UTC) Subject: CMM-to-ASM: Register allocation wierdness References: Message-ID: > Hi, I am implementing unicode normalization in Haskell. I > challenged myself to match the performance with the best C/C++ > implementation, the best being the ICU library. I am almost there, > beating it in one of the benchmarks and within 30% for others. I am > out of all application level tricks that I could think of and now > need help from the compiler. 
I can't answer your question but I am very happy that someone is looking at performance issues. I am sad that no-one has responded. More generally, is there a story around what is happening to improve performance? Dominic. From ben at smart-cactus.org Thu Jun 16 08:29:39 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 16 Jun 2016 10:29:39 +0200 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: Message-ID: <87mvml5yfg.fsf@smart-cactus.org> Harendra Kumar writes: > My earlier experiment was on GHC-7.10.3. I repeated this on GHC-8.0.1 and > the assembly traced was exactly the same except for a marginal improvement. > The 8.0.1 code generator removed the r14/r11 swap but the rest of the > register ring shift remains the same. I have updated the github gist with > the 8.0.1 trace: > Have you tried compiling with -fregs-graph [1] (the graph-coloring allocator)? By default GHC uses a very naive linear register allocator which I'd imagine may produce these sorts of results. At some point there was an effort to make -fregs-graph the default (see #2790) but it is unfortunately quite slow despite having a relatively small impact on produced-code quality in most cases. However, in your case it may be worth enabling. Note, however, that the graph coloring allocator has a few quirks of its own (see #8657 and #7697). It actually came to my attention while researching this that the -fregs-graph flag is currently silently ignored [2]. Unfortunately this means you'll need to build a new compiler if you want to try using it. Simon Marlow: If we really want to disable this option we should at very least issue an error when the user requests it. However, really it seems to me like we shouldn't disable it at all; why not just allow the user to use it and add a note to the documentation stating that the graph coloring allocator may fail with some programs and if it breaks the user gets to keep both pieces? All-in-all, the graph coloring allocator is in great need of some love; Harendra, perhaps you'd like to have a try at dusting it off and perhaps look into why it regresses in compiler performance? It would be great if we could use it by default. Cheers, - Ben [1] http://downloads.haskell.org/~ghc/master/users-guide//using-optimisation.html?highlight=register%20graph#ghc-flag--fregs-graph [2] https://git.haskell.org/ghc.git/commitdiff/f0a7261a39bd1a8c5217fecba56c593c353f198c -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From harendra.kumar at gmail.com Thu Jun 16 10:53:12 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Thu, 16 Jun 2016 16:23:12 +0530 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: <87mvml5yfg.fsf@smart-cactus.org> References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: On 16 June 2016 at 13:59, Ben Gamari wrote: > > It actually came to my attention while researching this that the > -fregs-graph flag is currently silently ignored [2]. Unfortunately this > means you'll need to build a new compiler if you want to try using it. Yes I did try -fregs-graph and -fregs-iterative both. To debug why nothing changed I had to compare the executables produced with and without the flags and found them identical. A note in the manual could have saved me some time since that's the first place to go for help. I was wondering if I am making a mistake in the build and if it is not being rebuilt properly. 
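For reference, the two allocator flags under discussion are normally requested on the command line or per module, roughly as sketched below (the file name is hypothetical, and, as noted above, the flags were silently ignored by the released compilers at the time):

    ghc -O2 -fregs-graph -fregs-iterative Normalize.hs

    {-# OPTIONS_GHC -fregs-graph -fregs-iterative #-}   -- per-module form
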
Your note confirms my observation, it indeed does not change anything. > All-in-all, the graph coloring allocator is in great need of some love; > Harendra, perhaps you'd like to have a try at dusting it off and perhaps > look into why it regresses in compiler performance? It would be great if > we could use it by default. Yes, I can try that. In fact I was going in that direction and then stopped to look at what llvm does. llvm gave me impressive results in some cases though not so great in others. I compared the code generated by llvm and it perhaps did a better job in theory (used fewer instructions) but due to more spilling the end result was pretty similar. But I found a few interesting optimizations that llvm did. For example, there was a heap adjustment and check in the looping path which was redundant and was readjusted in the loop itself without use. LLVM either removed the redundant _adjustments_ in the loop or moved them out of the loop. But it did not remove the corresponding heap _checks_. That makes me wonder if the redundant heap checks can also be moved or removed. If we can do some sort of loop analysis at the CMM level itself and avoid or remove the redundant heap adjustments as well as checks or at least float them out of the cycle wherever possible. That sort of optimization can make a significant difference to my case at least. Since we are explicitly aware of the heap at the CMM level there may be an opportunity to do better than llvm if we optimize the generated CMM or the generation of CMM itself. A thought that came to my mind was whether we should focus on getting better code out of the llvm backend or the native code generator. LLVM seems pretty good at the specialized task of code generation and low level optimization, it is well funded, widely used and has a big community support. That allows us to leverage that huge effort and take advantage of the new developments. Does it make sense to outsource the code generation and low level optimization tasks to llvm and ghc focussing on higher level optimizations which are harder to do at the llvm level? What are the downsides of using llvm exclusively in future? -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Thu Jun 16 11:18:50 2016 From: karel.gardas at centrum.cz (Karel Gardas) Date: Thu, 16 Jun 2016 13:18:50 +0200 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: <57628B1A.6050002@centrum.cz> On 06/16/16 12:53 PM, Harendra Kumar wrote: > A thought that came to my mind was whether we should focus on getting > better code out of the llvm backend or the native code generator. LLVM > seems pretty good at the specialized task of code generation and low > level optimization, it is well funded, widely used and has a big > community support. That allows us to leverage that huge effort and take > advantage of the new developments. Does it make sense to outsource the > code generation and low level optimization tasks to llvm and ghc > focussing on higher level optimizations which are harder to do at the > llvm level? What are the downsides of using llvm exclusively in future? 
Good reading IMHO about the topic is here: https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend Cheers, Karel From harendra.kumar at gmail.com Thu Jun 16 12:10:19 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Thu, 16 Jun 2016 17:40:19 +0530 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: <57628B1A.6050002@centrum.cz> References: <87mvml5yfg.fsf@smart-cactus.org> <57628B1A.6050002@centrum.cz> Message-ID: That's a nice read, thanks for the pointer. I agree with the solution presented there. If we can do that it will be awesome. If help is needed I can spend some time on it. One of the things that I noticed is that the code can be optimized significantly if we know the common case so that we can optimize that path at the expense of less common path. At times I saw wild difference in performance just by a very small change in the source. I could attribute the difference to code blocks having moved and differently placed jump instructions or change in register allocations impacting the common case more. This could be avoided if we know the common case. The common case is not visible or obvious to low level tools. It is easier to write the code in a low level language like C such that it is closer to how it will run on the processor, we can also easily influence gcc from the source level. It is harder to do the same in a high level language like Haskell. Perhaps there is no point in doing so. What we can do instead is to use the llvm toolchain to perform feedback directed optimization and it will adjust the low level code accordingly based on your feedback runs. That will be entirely free since it can be done at the llvm level. My point is that it will pay off in things like that if we invest in integrating llvm better. -harendra On 16 June 2016 at 16:48, Karel Gardas wrote: > On 06/16/16 12:53 PM, Harendra Kumar wrote: > >> A thought that came to my mind was whether we should focus on getting >> better code out of the llvm backend or the native code generator. LLVM >> seems pretty good at the specialized task of code generation and low >> level optimization, it is well funded, widely used and has a big >> community support. That allows us to leverage that huge effort and take >> advantage of the new developments. Does it make sense to outsource the >> code generation and low level optimization tasks to llvm and ghc >> focussing on higher level optimizations which are harder to do at the >> llvm level? What are the downsides of using llvm exclusively in future? >> > > Good reading IMHO about the topic is here: > https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend > > Cheers, > Karel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jun 16 12:10:18 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 16 Jun 2016 12:10:18 +0000 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: <87mvml5yfg.fsf@smart-cactus.org> References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: | All-in-all, the graph coloring allocator is in great need of some love; | Harendra, perhaps you'd like to have a try at dusting it off and perhaps | look into why it regresses in compiler performance? It would be great if | we could use it by default. I second this. Volunteers are sorely needed. 
Simon From ben at smart-cactus.org Thu Jun 16 12:37:17 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 16 Jun 2016 14:37:17 +0200 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: <87y46548ea.fsf@smart-cactus.org> Ccing David Spitzenberg, who has thought about proc-point splitting, which is relevant for reasons that we will see below. Harendra Kumar writes: > On 16 June 2016 at 13:59, Ben Gamari wrote: >> >> It actually came to my attention while researching this that the >> -fregs-graph flag is currently silently ignored [2]. Unfortunately this >> means you'll need to build a new compiler if you want to try using it. > > Yes I did try -fregs-graph and -fregs-iterative both. To debug why nothing > changed I had to compare the executables produced with and without the > flags and found them identical. A note in the manual could have saved me > some time since that's the first place to go for help. I was wondering if I > am making a mistake in the build and if it is not being rebuilt > properly. Your note confirms my observation, it indeed does not change > anything. > Indeed; I've opened D2335 [1] to reenable -fregs-graph and add an appropriate note to the users guide. >> All-in-all, the graph coloring allocator is in great need of some love; >> Harendra, perhaps you'd like to have a try at dusting it off and perhaps >> look into why it regresses in compiler performance? It would be great if >> we could use it by default. > > Yes, I can try that. In fact I was going in that direction and then stopped > to look at what llvm does. llvm gave me impressive results in some cases > though not so great in others. I compared the code generated by llvm and it > perhaps did a better job in theory (used fewer instructions) but due to > more spilling the end result was pretty similar. > For the record, I have also struggled with register spilling issues in the past. See, for instance, #10012, which describes a behavior which arises from the C-- sinking pass's unwillingness to duplicate code across branches. While in general it's good to avoid the code bloat that this duplication implies, in the case shown in that ticket duplicating the computation would be significantly less code than the bloat from spilling the needed results. > But I found a few interesting optimizations that llvm did. For example, > there was a heap adjustment and check in the looping path which was > redundant and was readjusted in the loop itself without use. LLVM either > removed the redundant _adjustments_ in the loop or moved them out of the > loop. But it did not remove the corresponding heap _checks_. That makes me > wonder if the redundant heap checks can also be moved or removed. If we can > do some sort of loop analysis at the CMM level itself and avoid or remove > the redundant heap adjustments as well as checks or at least float them out > of the cycle wherever possible. That sort of optimization can make a > significant difference to my case at least. Since we are explicitly aware > of the heap at the CMM level there may be an opportunity to do better than > llvm if we optimize the generated CMM or the generation of CMM itself. > Very interesting, thanks for writing this down! Indeed if these checks really are redundant then we should try to avoid them. Do you have any code you could share that demosntrates this? It would be great to open Trac tickets to track some of the optimization opportunities that you noted we may be missing. 
Trac tickets are far easier to track over longer durations than mailing list conversations, which tend to get lost in the noise after a few weeks pass. > A thought that came to my mind was whether we should focus on getting > better code out of the llvm backend or the native code generator. LLVM > seems pretty good at the specialized task of code generation and low level > optimization, it is well funded, widely used and has a big community > support. That allows us to leverage that huge effort and take advantage of > the new developments. Does it make sense to outsource the code generation > and low level optimization tasks to llvm and ghc focussing on higher level > optimizations which are harder to do at the llvm level? What are the > downsides of using llvm exclusively in future? > There is indeed a question of where we wish to focus our optimization efforts. However, I think using LLVM exclusively would be a mistake. LLVM is a rather large dependency that has in the past been rather difficult to track (this is why we now only target one LLVM release in a given GHC release). Moreover, it's significantly slower than our existing native code generator. There are a number of reasons for this, some of which are fixable. For instance, we currently make no effort to tell LLVM which passes are worth running and which we've handled; this is something which should be fixed but will require a rather significant investment by someone to determine how GHC's and LLVM's passes overlap, how they interact, and generally which are helpful (see GHC #11295). Furthermore, there are a few annoying impedance mismatches between Cmm and LLVM's representation. This can be seen in our treatment of proc points: when we need to take the address of a block within a function LLVM requires that we break the block into a separate procedure, hiding many potential optimizations from the optimizer. This was discussed further on this list earlier this year [2]. It would be great to eliminate proc-point splitting but doing so will almost certainly require cooperation from LLVM. Cheers, - Ben [1] https://phabricator.haskell.org/D2335 [2] https://mail.haskell.org/pipermail/ghc-devs/2015-November/010535.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From harendra.kumar at gmail.com Fri Jun 17 09:09:03 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 17 Jun 2016 14:39:03 +0530 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: <87y46548ea.fsf@smart-cactus.org> References: <87mvml5yfg.fsf@smart-cactus.org> <87y46548ea.fsf@smart-cactus.org> Message-ID: Thanks Ben! I have my responses inline below. On 16 June 2016 at 18:07, Ben Gamari wrote: > > Indeed; I've opened D2335 [1] to reenable -fregs-graph and add an > appropriate note to the users guide. > Thanks! That was quick. > For the record, I have also struggled with register spilling issues in > the past. See, for instance, #10012, which describes a behavior which > arises from the C-- sinking pass's unwillingness to duplicate code > across branches. While in general it's good to avoid the code bloat that > this duplication implies, in the case shown in that ticket duplicating > the computation would be significantly less code than the bloat from > spilling the needed results. 
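
To picture the trade-off described in the quoted paragraph, here is a contrived Haskell shape (a sketch of my own, not code from ticket #10012): `x` is cheap to compute, but each alternative calls `work` before consuming it, so the value of `x` has to survive that call (typically a stack spill and reload) unless the compiler is willing to duplicate, i.e. sink, the computation of `x` into each branch.

    module SpillSketch where

    -- `x` is demanded in every alternative, but only after the call to
    -- `work`, so it must be kept live across that call; recomputing the
    -- two arithmetic operations per branch may well be less code than
    -- the spill/reload traffic.
    f :: Int -> Int -> Int
    f a b =
      let x = a * b + a
      in case compare a b of
           LT -> work a       + x
           EQ -> work b       + x
           GT -> work (a + b) + x

    -- Kept out of line so each branch really contains a call that
    -- clobbers registers.
    work :: Int -> Int
    work n = n * n + n
    {-# NOINLINE work #-}

Whether duplicating pays off depends on how expensive the duplicated expression is, which is exactly the judgement the sinking pass has to make; the reply below continues from the quoted paragraph.
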
> Not sure if this is possible but when unsure we can try both and compare if the duplication results in significantly more code than no duplication and make a decision based on that. Though that will slow down the compilation. Maybe we can bundle slower passes in something like -O3, meaning it will be slow and may or may not provide better results? > > But I found a few interesting optimizations that llvm did. For example, > > there was a heap adjustment and check in the looping path which was > > redundant and was readjusted in the loop itself without use. LLVM either > > removed the redundant _adjustments_ in the loop or moved them out of the > > loop. But it did not remove the corresponding heap _checks_. That makes > me > > wonder if the redundant heap checks can also be moved or removed. If we > can > > do some sort of loop analysis at the CMM level itself and avoid or remove > > the redundant heap adjustments as well as checks or at least float them > out > > of the cycle wherever possible. That sort of optimization can make a > > significant difference to my case at least. Since we are explicitly aware > > of the heap at the CMM level there may be an opportunity to do better > than > > llvm if we optimize the generated CMM or the generation of CMM itself. > > > Very interesting, thanks for writing this down! Indeed if these checks > really are redundant then we should try to avoid them. Do you have any > code you could share that demosntrates this? > The gist that I provided in this email thread earlier demonstrates it. Here it is again: https://gist.github.com/harendra-kumar/7d34c6745f604a15a872768e57cd2447 If you look at the CMM trace in the gist. Start at label c4ic where we allocate space on heap (+48). Now, there are many possible paths from this point on some of those use the heap and some don't. I have marked those which use the heap by curly braces, the rest do not use it at all. 1) c4ic (allocate) -> c4mw -> {c4pv} -> ... 2) c4ic (allocate) -> c4mw -> c4pw -> ((c4pr -> ({c4pe} -> ... | c4ph -> ...)) | cp4ps -> ...) If we can place this allocation at both c4pv and c4pe instead of the common parent then we can save the fast path from this check. The same thing applies to the allocation at label c4jd as well. I have the code to produce this CMM, I can commit it on a branch and leave it in the github repository so that we can use it for fixing. > It would be great to open Trac tickets to track some of the optimization > Will do. > There is indeed a question of where we wish to focus our optimization > efforts. However, I think using LLVM exclusively would be a mistake. > LLVM is a rather large dependency that has in the past been rather > difficult to track (this is why we now only target one LLVM release in a > given GHC release). Moreover, it's significantly slower than our > existing native code generator. There are a number of reasons for this, > some of which are fixable. For instance, we currently make no effort to > tell > LLVM which passes are worth running and which we've handled; this is > something which should be fixed but will require a rather significant > investment by someone to determine how GHC's and LLVM's passes overlap, > how they interact, and generally which are helpful (see GHC #11295). > > Furthermore, there are a few annoying impedance mismatches between Cmm > and LLVM's representation. 
This can be seen in our treatment of proc > points: when we need to take the address of a block within a function > LLVM requires that we break the block into a separate procedure, hiding > many potential optimizations from the optimizer. This was discussed > further on this list earlier this year [2]. It would be great to > eliminate proc-point splitting but doing so will almost certainly > require cooperation from LLVM. > It sounds like we need to continue with both for now and see how the llvm option pans out. There is clearly no reason for a decisive tilt towards llvm in near future. -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sat Jun 18 13:01:33 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 18 Jun 2016 13:01:33 +0000 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: Takenobu, and others Thanks. Several people have mentioned that email from me is ending up in spam. It turns out to be a fault in the Haskell.org mailman setup, which was mis-forwarding my email. Apparently it is fixed now. If THIS message ends up in spam with a complaint like that below, could you let me know? Thanks Simon From: Takenobu Tani [mailto:takenobu.hs at gmail.com] Sent: 18 June 2016 08:18 To: Simon Peyton Jones Subject: Re: CMM-to-ASM: Register allocation wierdness Hi Simon, I report to you about your mails. Maybe, your mails don't reach to Gmail users. I don't know why, but your mails have been distributed to "Spam" folder in Gmail. Gmail displays following message: "Why is this message in Spam? It has a from address in microsoft.com but has failed microsoft.com's required tests for authentication." For reference, I attach the screen of my Gmail of spam folder. Recent your mails have been detected as spam. Please check your mail settings. Regards, Takenobu 2016-06-16 21:10 GMT+09:00 Simon Peyton Jones >: | All-in-all, the graph coloring allocator is in great need of some love; | Harendra, perhaps you'd like to have a try at dusting it off and perhaps | look into why it regresses in compiler performance? It would be great if | we could use it by default. I second this. Volunteers are sorely needed. Simon _______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sat Jun 18 14:44:31 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 18 Jun 2016 23:44:31 +0900 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: Hi Simon, I've received this email in "inbox" folder of Gmail. It is all right now. Thank you, Takenobu 2016-06-18 22:01 GMT+09:00 Simon Peyton Jones : > Takenobu, and others > > Thanks. Several people have mentioned that email from me is ending up in > spam. > > It turns out to be a fault in the Haskell.org mailman setup, which was > mis-forwarding my email. > > Apparently it is fixed now. 
*If THIS message ends up in spam with a > complaint like that below, could you let me know?* > > Thanks > > Simon > > > > > *From:* Takenobu Tani [mailto:takenobu.hs at gmail.com] > *Sent:* 18 June 2016 08:18 > *To:* Simon Peyton Jones > *Subject:* Re: CMM-to-ASM: Register allocation wierdness > > > > Hi Simon, > > > > I report to you about your mails. > > > > Maybe, your mails don't reach to Gmail users. > > I don't know why, but your mails have been distributed to "Spam" folder in > Gmail. > > > > Gmail displays following message: > > "Why is this message in Spam? It has a from address in microsoft.com > > but has failed microsoft.com > 's > required tests for authentication." > > > > > > For reference, I attach the screen of my Gmail of spam folder. > > Recent your mails have been detected as spam. > > Please check your mail settings. > > > > Regards, > > Takenobu > > > > 2016-06-16 21:10 GMT+09:00 Simon Peyton Jones : > > | All-in-all, the graph coloring allocator is in great need of some love; > | Harendra, perhaps you'd like to have a try at dusting it off and perhaps > | look into why it regresses in compiler performance? It would be great if > | we could use it by default. > > I second this. Volunteers are sorely needed. > > Simon > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Jun 19 04:30:51 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 19 Jun 2016 00:30:51 -0400 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: This email would have been marked spam had I not unmarked all your emails as spam. Also a gmail user :/ On Saturday, June 18, 2016, Simon Peyton Jones wrote: > Takenobu, and others > > Thanks. Several people have mentioned that email from me is ending up in > spam. > > It turns out to be a fault in the Haskell.org mailman setup, which was > mis-forwarding my email. > > Apparently it is fixed now. *If THIS message ends up in spam with a > complaint like that below, could you let me know?* > > Thanks > > Simon > > > > > *From:* Takenobu Tani [mailto:takenobu.hs at gmail.com > ] > *Sent:* 18 June 2016 08:18 > *To:* Simon Peyton Jones > > *Subject:* Re: CMM-to-ASM: Register allocation wierdness > > > > Hi Simon, > > > > I report to you about your mails. > > > > Maybe, your mails don't reach to Gmail users. > > I don't know why, but your mails have been distributed to "Spam" folder in > Gmail. > > > > Gmail displays following message: > > "Why is this message in Spam? It has a from address in microsoft.com > > but has failed microsoft.com > 's > required tests for authentication." > > > > > > For reference, I attach the screen of my Gmail of spam folder. > > Recent your mails have been detected as spam. > > Please check your mail settings. > > > > Regards, > > Takenobu > > > > 2016-06-16 21:10 GMT+09:00 Simon Peyton Jones >: > > | All-in-all, the graph coloring allocator is in great need of some love; > | Harendra, perhaps you'd like to have a try at dusting it off and perhaps > | look into why it regresses in compiler performance? It would be great if > | we could use it by default. > > I second this. Volunteers are sorely needed. 
> > Simon > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Jun 19 04:35:17 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 19 Jun 2016 00:35:17 -0400 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: On Sun, Jun 19, 2016 at 12:30 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > This email would have been marked spam had I not unmarked all your emails > as spam. Also a gmail user :/ Same. I forwarded my received headers to the infra folks; they made some more adjustments to Mailman, which need to be tested. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Jun 19 08:33:53 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 19 Jun 2016 10:33:53 +0200 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> <87y46548ea.fsf@smart-cactus.org> Message-ID: <87fus9r30u.fsf@smart-cactus.org> Harendra Kumar writes: > Thanks Ben! I have my responses inline below. > No worries! > On 16 June 2016 at 18:07, Ben Gamari wrote: > >> For the record, I have also struggled with register spilling issues in >> the past. See, for instance, #10012, which describes a behavior which >> arises from the C-- sinking pass's unwillingness to duplicate code >> across branches. While in general it's good to avoid the code bloat that >> this duplication implies, in the case shown in that ticket duplicating >> the computation would be significantly less code than the bloat from >> spilling the needed results. >> > > Not sure if this is possible but when unsure we can try both and compare if > the duplication results in significantly more code than no duplication and > make a decision based on that. Though that will slow down the compilation. > Maybe we can bundle slower passes in something like -O3, meaning it will be > slow and may or may not provide better results? > Indeed this would be one option although I suspect we can do better. I have discussed the problem with a few people and have some ideas on how to proceed. Unfortunately I've been suffering from a chronic lack of time recently. snip >> Very interesting, thanks for writing this down! Indeed if these checks >> really are redundant then we should try to avoid them. Do you have any >> code you could share that demosntrates this? >> > snip > > I have the code to produce this CMM, I can commit it on a branch and leave > it in the github repository so that we can use it for fixing. > Indeed it would be great if you could provide the program that produced this code. >> It would be great to open Trac tickets to track some of the optimization >> > > Will do. > Thanks! >> >> Furthermore, there are a few annoying impedance mismatches between Cmm >> and LLVM's representation. This can be seen in our treatment of proc >> points: when we need to take the address of a block within a function >> LLVM requires that we break the block into a separate procedure, hiding >> many potential optimizations from the optimizer. 
This was discussed >> further on this list earlier this year [2]. It would be great to >> eliminate proc-point splitting but doing so will almost certainly >> require cooperation from LLVM. >> > > It sounds like we need to continue with both for now and see how the llvm > option pans out. There is clearly no reason for a decisive tilt towards > llvm in near future. > I agree. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sun Jun 19 15:59:48 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 19 Jun 2016 11:59:48 -0400 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> Message-ID: Agreed. There's also some other mismatches between ghc and llvm in a few fun / interesting ways! There's a lot of room for improvement in both code gens, but there's also a lot of room to improve the ease of experimenting with improvements. Eg we don't have a peephole pass per target, so those get hacked into the pretty printing code last time I checked On Thursday, June 16, 2016, Ben Gamari > wrote: > > Ccing David Spitzenberg, who has thought about proc-point splitting, which > is relevant for reasons that we will see below. > > > Harendra Kumar writes: > > > On 16 June 2016 at 13:59, Ben Gamari wrote: > >> > >> It actually came to my attention while researching this that the > >> -fregs-graph flag is currently silently ignored [2]. Unfortunately this > >> means you'll need to build a new compiler if you want to try using it. > > > > Yes I did try -fregs-graph and -fregs-iterative both. To debug why > nothing > > changed I had to compare the executables produced with and without the > > flags and found them identical. A note in the manual could have saved me > > some time since that's the first place to go for help. I was wondering > if I > > am making a mistake in the build and if it is not being rebuilt > > properly. Your note confirms my observation, it indeed does not change > > anything. > > > Indeed; I've opened D2335 [1] to reenable -fregs-graph and add an > appropriate note to the users guide. > > >> All-in-all, the graph coloring allocator is in great need of some love; > >> Harendra, perhaps you'd like to have a try at dusting it off and perhaps > >> look into why it regresses in compiler performance? It would be great if > >> we could use it by default. > > > > Yes, I can try that. In fact I was going in that direction and then > stopped > > to look at what llvm does. llvm gave me impressive results in some cases > > though not so great in others. I compared the code generated by llvm and > it > > perhaps did a better job in theory (used fewer instructions) but due to > > more spilling the end result was pretty similar. > > > For the record, I have also struggled with register spilling issues in > the past. See, for instance, #10012, which describes a behavior which > arises from the C-- sinking pass's unwillingness to duplicate code > across branches. While in general it's good to avoid the code bloat that > this duplication implies, in the case shown in that ticket duplicating > the computation would be significantly less code than the bloat from > spilling the needed results. > > > But I found a few interesting optimizations that llvm did. 
For example, > > there was a heap adjustment and check in the looping path which was > > redundant and was readjusted in the loop itself without use. LLVM either > > removed the redundant _adjustments_ in the loop or moved them out of the > > loop. But it did not remove the corresponding heap _checks_. That makes > me > > wonder if the redundant heap checks can also be moved or removed. If we > can > > do some sort of loop analysis at the CMM level itself and avoid or remove > > the redundant heap adjustments as well as checks or at least float them > out > > of the cycle wherever possible. That sort of optimization can make a > > significant difference to my case at least. Since we are explicitly aware > > of the heap at the CMM level there may be an opportunity to do better > than > > llvm if we optimize the generated CMM or the generation of CMM itself. > > > Very interesting, thanks for writing this down! Indeed if these checks > really are redundant then we should try to avoid them. Do you have any > code you could share that demosntrates this? > > It would be great to open Trac tickets to track some of the optimization > opportunities that you noted we may be missing. Trac tickets are far > easier to track over longer durations than mailing list conversations, > which tend to get lost in the noise after a few weeks pass. > > > A thought that came to my mind was whether we should focus on getting > > better code out of the llvm backend or the native code generator. LLVM > > seems pretty good at the specialized task of code generation and low > level > > optimization, it is well funded, widely used and has a big community > > support. That allows us to leverage that huge effort and take advantage > of > > the new developments. Does it make sense to outsource the code generation > > and low level optimization tasks to llvm and ghc focussing on higher > level > > optimizations which are harder to do at the llvm level? What are the > > downsides of using llvm exclusively in future? > > > > There is indeed a question of where we wish to focus our optimization > efforts. However, I think using LLVM exclusively would be a mistake. > LLVM is a rather large dependency that has in the past been rather > difficult to track (this is why we now only target one LLVM release in a > given GHC release). Moreover, it's significantly slower than our > existing native code generator. There are a number of reasons for this, > some of which are fixable. For instance, we currently make no effort to > tell > LLVM which passes are worth running and which we've handled; this is > something which should be fixed but will require a rather significant > investment by someone to determine how GHC's and LLVM's passes overlap, > how they interact, and generally which are helpful (see GHC #11295). > > Furthermore, there are a few annoying impedance mismatches between Cmm > and LLVM's representation. This can be seen in our treatment of proc > points: when we need to take the address of a block within a function > LLVM requires that we break the block into a separate procedure, hiding > many potential optimizations from the optimizer. This was discussed > further on this list earlier this year [2]. It would be great to > eliminate proc-point splitting but doing so will almost certainly > require cooperation from LLVM. 
> > Cheers, > > - Ben > > > [1] https://phabricator.haskell.org/D2335 > [2] https://mail.haskell.org/pipermail/ghc-devs/2015-November/010535.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Jun 19 19:08:09 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 19 Jun 2016 19:08:09 +0000 Subject: Simon's email classified as spam Message-ID: <436234deb9584094820a5dc045276315@DB4PR30MB030.064d.mgd.msft.net> Dear GHC devs/users This is another test to see if email from me, relayed via Haskell.org, ends up in your spam folder. Gershom thinks he’s fixed it (below). Can I trespass on your patience once more? Just let me know if this email ends up in your inbox or spam. Can you cc John and Gershom (but perhaps not everyone else)? Thanks Simon | From: Gershom B [mailto:gershomb at gmail.com] | Sent: 18 June 2016 18:53 | To: Simon Peyton Jones ; John Wiegley | | Cc: Michael Burge | Subject: Re: FW: CMM-to-SAM: Register allocation weirdness | | Simon — I just found two possible sources of the problem (first: the top | level config didn’t take hold due to other errors when updating — fixed that, | and second, it might be possible the top level config isn’t retroactively | applied to all lists — so i added the config to the relevant lists directly). | | I think if you try one more time it might work (fingers crossed). -------------- next part -------------- An HTML attachment was scrubbed... URL: From agocorona at gmail.com Sun Jun 19 19:26:29 2016 From: agocorona at gmail.com (Alberto G. Corona ) Date: Sun, 19 Jun 2016 21:26:29 +0200 Subject: Simon's email classified as spam In-Reply-To: <436234deb9584094820a5dc045276315@DB4PR30MB030.064d.mgd.msft.net> References: <436234deb9584094820a5dc045276315@DB4PR30MB030.064d.mgd.msft.net> Message-ID: All Ok. it was not marked as spam 2016-06-19 21:08 GMT+02:00 Simon Peyton Jones via Glasgow-haskell-users < glasgow-haskell-users at haskell.org>: > Dear GHC devs/users > > This is another test to see if email from me, relayed via Haskell.org, > ends up in your spam folder. Gershom thinks he’s fixed it (below). Can I > trespass on your patience once more? > > Just let me know if this email ends up in your inbox or spam. Can you cc > John and Gershom (but perhaps not everyone else)? Thanks > > Simon > > > > | From: Gershom B [mailto:gershomb at gmail.com] > > | Sent: 18 June 2016 18:53 > > | To: Simon Peyton Jones ; John Wiegley > > | > > | Cc: Michael Burge > > | Subject: Re: FW: CMM-to-SAM: Register allocation weirdness > > | > > | Simon — I just found two possible sources of the problem (first: the top > > | level config didn’t take hold due to other errors when updating — fixed > that, > > | and second, it might be possible the top level config isn’t retroactively > > | applied to all lists — so i added the config to the relevant lists > directly). > > | > > | I think if you try one more time it might work (fingers crossed). > > > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > > -- Alberto. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gershomb at gmail.com Sun Jun 19 20:44:56 2016 From: gershomb at gmail.com (Gershom B) Date: Sun, 19 Jun 2016 16:44:56 -0400 Subject: Simon's email classified as spam In-Reply-To: <436234deb9584094820a5dc045276315@DB4PR30MB030.064d.mgd.msft.net> References: <436234deb9584094820a5dc045276315@DB4PR30MB030.064d.mgd.msft.net> Message-ID: Dear all, thanks for the many responses. It appears that this is now fixed. (no need to send more). Cheers, Gershom On June 19, 2016 at 3:08:28 PM, Simon Peyton Jones via Glasgow-haskell-users (glasgow-haskell-users at haskell.org) wrote: > Dear GHC devs/users > This is another test to see if email from me, relayed via Haskell.org, ends up in your spam > folder. Gershom thinks he’s fixed it (below). Can I trespass on your patience once more? > Just let me know if this email ends up in your inbox or spam. Can you cc John and Gershom (but > perhaps not everyone else)? Thanks > Simon > > > | From: Gershom B [mailto:gershomb at gmail.com] > > | Sent: 18 June 2016 18:53 > > | To: Simon Peyton Jones ; John Wiegley > > | > > | Cc: Michael Burge > > | Subject: Re: FW: CMM-to-SAM: Register allocation weirdness > > | > > | Simon — I just found two possible sources of the problem (first: the top > > | level config didn’t take hold due to other errors when updating — fixed that, > > | and second, it might be possible the top level config isn’t retroactively > > | applied to all lists — so i added the config to the relevant lists directly). > > | > > | I think if you try one more time it might work (fingers crossed). > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > From adam at well-typed.com Wed Jun 22 08:49:30 2016 From: adam at well-typed.com (Adam Gundry) Date: Wed, 22 Jun 2016 09:49:30 +0100 Subject: ORF for fields of higher-ranked type [was: TDNR without new operators or syntax changes] In-Reply-To: References: <1463402347025-5835927.post@n5.nabble.com> <1463469688161-5835978.post@n5.nabble.com> <20160521151419.GA8247@24f89f8c-e6a1-4e75-85ee-bb8a3743bb9f> Message-ID: On 15/06/16 04:29, AntC wrote: > ... > > The earlier design for SORF tried to support higher-ranked fields. > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/SORF > > That had to be abandoned, > until explicit type application was available IIRC. > > We now have type application in GHC 8.0. > > Is there some hope for higher-rank type fields? Unfortunately, doing ORF with higher-rank fields is a bit of a non-starter, even with explicit type application, because the combination would break bidirectional type inference and require impredicativity. This was part of the reason we ended up preferring an explicit syntactic marker for ORF-style overloaded labels. For example, consider this (rather artificial) type: data T = MkT { foo :: ((forall a . a -> a) -> Bool) -> Bool } -- foo :: T -> ((forall a . a -> a) -> Bool) -> Bool Suppose `t :: T`. When type-checking `foo t (\ k -> k k True)`, the compiler will infer (look up) the type of `foo` and use it to check the types of the arguments. The second argument type-checks only because we are "pushing in" the type `(forall a . a -> a) -> Bool` and hence we know that the type of `k` will be `forall a . a -> a`. Now suppose we want to type-check `#foo t (\ k -> k k True)` using ORF instead. 
That ends up deferring a constraint `HasField r "foo" a` to the constraint solver, and inferring a type `a` for `#foo t`, so we can't type-check the second argument. Only the constraint solver will figure out that `a` should be impredicatively instantiated with a polytype. We end up needing to do type inference in the presence of impredicativity, which is a Hard Problem. There is some work aimed at improving GHC's type inference for impredicativity, so perhaps there's hope for this in the future. Explicit type application makes it possible (in principle, modulo #11352) for the user to write something like #foo @T @(((forall a . a -> a) -> Bool) -> Bool) t (\ k -> k k True) although they might not want to! But the ramifications haven't been fully thought through, e.g. we'd need to be able to solve the constraint HasField T "foo" (((forall a . a -> a) -> Bool) -> Bool) even though it has a polytype as an argument. Sorry to be the bearer of bad news, Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From harendra.kumar at gmail.com Sat Jun 25 21:26:47 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 26 Jun 2016 02:56:47 +0530 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: <87fus9r30u.fsf@smart-cactus.org> References: <87mvml5yfg.fsf@smart-cactus.org> <87y46548ea.fsf@smart-cactus.org> <87fus9r30u.fsf@smart-cactus.org> Message-ID: On 19 June 2016 at 14:03, Ben Gamari wrote: > > Indeed it would be great if you could provide the program that produced > this code. > > >> It would be great to open Trac tickets to track some of the optimization Ok, I created an account on ghc trac and raised two tickets: #12231 & #12232. Yay! I also added the code branch to reproduce this on github ( https://github.com/harendra-kumar/unicode-transforms/tree/ghc-trac-12231). -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Jun 26 08:53:09 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 26 Jun 2016 10:53:09 +0200 Subject: CMM-to-ASM: Register allocation wierdness In-Reply-To: References: <87mvml5yfg.fsf@smart-cactus.org> <87y46548ea.fsf@smart-cactus.org> <87fus9r30u.fsf@smart-cactus.org> Message-ID: <87k2hcl4ay.fsf@smart-cactus.org> Harendra Kumar writes: > On 19 June 2016 at 14:03, Ben Gamari wrote: > >> >> Indeed it would be great if you could provide the program that produced >> this code. >> >> >> It would be great to open Trac tickets to track some of the optimization > > > Ok, I created an account on ghc trac and raised two tickets: #12231 & > #12232. Yay! I also added the code branch to reproduce this on github ( > https://github.com/harendra-kumar/unicode-transforms/tree/ghc-trac-12231). > Great job summarizing the issue. Thanks! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL:
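
Returning to the higher-rank field example in Adam Gundry's message above, here is a self-contained sketch using the type exactly as he gives it (the inhabitant `t` and the name `demo` are illustrative additions, not from the message). It shows the ordinary monomorphic selector being accepted for the reason he describes, and notes where the overloaded version would fall over:

    {-# LANGUAGE RankNTypes #-}
    module HigherRankField where

    -- The type from Adam's message: the field itself takes a rank-2 argument.
    data T = MkT { foo :: ((forall a . a -> a) -> Bool) -> Bool }

    -- An illustrative inhabitant (not from the message).
    t :: T
    t = MkT (\ g -> g id)

    -- Accepted: the selector's known type pushes
    --     (forall a . a -> a) -> Bool
    -- into the lambda, so `k` is known to be polymorphic and `k k True`
    -- type-checks (the inner occurrence of `k` is instantiated at a
    -- monotype).
    demo :: Bool
    demo = foo t (\ k -> k k True)

    -- With an overloaded `#foo`, the field type would only emerge from
    -- solving a HasField-style constraint, so nothing is pushed into the
    -- lambda and the same argument would be rejected; making it go
    -- through needs the impredicative instantiation discussed in the
    -- message.
    -- demoBad = #foo t (\ k -> k k True)   -- would not type-check as-is
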