From mle+hs at mega-nerd.com Mon Jan 2 18:45:03 2017 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Tue, 3 Jan 2017 05:45:03 +1100 Subject: Github repos for boot libraries Message-ID: <20170103054503.4561c40ab79dc07a109c366e@mega-nerd.com> Hi all, Currently if I go to the Github mirror for a boot library like transformers: https://github.com/ghc/packages-transformers I see the text: Mirror of packages-transformers repository. DO NOT SUBMIT PULL REQUESTS HERE This may well be true, but it is far less than useful, because although it tells me I can't submit a pull request, it doesn't tell me what I should do to get my issue addressed. Would it be possible to get these messages updated for all of these mirrored repos? Thanks, Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From ekmett at gmail.com Tue Jan 3 01:24:00 2017 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 2 Jan 2017 20:24:00 -0500 Subject: Github repos for boot libraries In-Reply-To: <20170103054503.4561c40ab79dc07a109c366e@mega-nerd.com> References: <20170103054503.4561c40ab79dc07a109c366e@mega-nerd.com> Message-ID: For reference, the master repository for transformers is at http://hub.darcs.net/ross/transformers We should probably edit the 'website' link for that github repository to at least point there. I don't have access to do so, however. Subtly pinging Herbert, by adding him here. =) -Edward On Mon, Jan 2, 2017 at 1:45 PM, Erik de Castro Lopo wrote: > Hi all, > > Currently if I go to the Github mirror for a boot library like > transformers: > > https://github.com/ghc/packages-transformers > > I see the text: > > Mirror of packages-transformers repository. DO NOT SUBMIT PULL > REQUESTS HERE > > This may well be true, but it is far less than useful, because although it > tells me I can't submit a pull request, it doesn't tell me what I should do > to get my issue addressed. 
> > Would it be possible to get these messages updated for all of these > mirrored > repos? > > Thanks, > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jan 3 15:06:40 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 3 Jan 2017 15:06:40 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: Thanks everyone for the comments so far. If you use Trac regularly then please comment. Thinking this is a bad idea but not commenting is not particularly useful as it leaves everyone in limbo. I moved the site to a smaller instance so it would cost me less money to host. http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ Matt On Wed, Dec 21, 2016 at 10:12 AM, Matthew Pickering wrote: > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps. I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com > > All the user accounts are automatically generated. If you want to see > the tracker from your perspective then send me an email or ping me on > IRC and I can set the password of the relevant account. > > NOTE: This is not a decision, the existence of this prototype is to > show that the migration is feasible in a satisfactory way and to > remove hypothetical arguments from the discussion. 
> > I must also thank Dan Palmer and Herbert who helped me along the way. > Dan was responsible for the first implementation and setting up much > of the infrastructure at the Haskell Exchange hackathon in October. We > extensively used the API bindings which Herbert had been working on. > > Further information below! > > Matt > > ===================================================================== > > Reasons > ====== > > Why this change? The main argument is consolidation. Having many > different services is confusing for new and old contributors. > Phabricator has proved effective as a code review tool. It is modern > and actively developed with a powerful feature set which we currently > only use a small fraction of. > > Trac is showing signs of its age. It is old and slow, users regularly > lose comments through accidentally refreshing their browser. Further to > this, the integration with other services is quite poor. Commits do > not close tickets which mention them and the only link to commits is a > comment. Querying the tickets is also quite difficult, I usually > resort to using google search or my emails to find the relevant > ticket. > > > Why is Phabricator better? > ==================== > > Through learning more about Phabricator, there are many small things > that I think it does better which will improve the usability of the > issue tracker. I will list a few but I urge you to try it out. > > * Commits which mention ticket numbers are currently posted as trac > comments. There is better integration in phabricator as linking to > commits has first-class support. > * Links with differentials are also more direct than the current > custom field which means you must update two places when posting a > differential. > * Fields are verified so that misspelling user names is not possible > (see #12623 where Ben misspelled his name for example) > * This is also true for projects and other fields. 
Inspecting these > fields on trac you will find that the formatting on each ticket is > often quite different. > * Keywords are much more useful as the set of used keywords is discoverable. > * Related tickets are much more substantial as the status of related > tickets is reflected in the parent ticket. > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) > > Implementation > ============ > > Keywords are implemented as projects. A project is a combination of a > tag which can be used with any Phabricator object, a workboard to > organise tasks and a group of people who care about the topic. Not all > keywords are migrated. Only keywords with at least 5 tickets were > added to avoid lots of useless projects. The state of keywords is > still a bit unsatisfactory but I wanted to take this chance to clean > them up. > > Custom fields such as architecture and OS are replaced by *projects* > just like keywords. This has the same advantage as other projects. > Users can be subscribed to projects and receive emails when new > tickets are tagged with a project. The large majority of tickets have > very little additional metadata set. I also implemented these as > custom fields but found the result to be less satisfactory. > > Some users who have trac accounts do not have phab accounts. > Fortunately it is easy to create new user accounts for these users > which have empty passwords which can be recovered by the appropriate > email address. This means tickets can be properly attributed in the > migration. > > The ticket numbers are maintained. I still advocate moving the > infrastructure tickets in order to maintain this mapping. Especially > as there has been little activity in the last year. > > Tickets are linked to the relevant commits, differentials and other > tickets. There are 3000 dummy differentials which are used to test > that the linking works correctly. 
Of course with real data, the proper > differential would be > linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) > > There are a couple of issues currently with the migration. There are a > few issues in the parser which converts trac markup to remarkup. Most > comments are very simple, with just paragraphs and code blocks, but > complex items like lists are sometimes parsed incorrectly. Definition > lists are converted to tables as there is no equivalent in remarkup. > Trac ticket links are converted to phab ticket links. > > The ideal time to migrate is before the end of January. The busiest > time for the issue tracker is before and after a new major release. > With 8.2 planned for around April this gives the transition a few > months to settle. We can close the trac issue tracker and continue to > serve it or preferably redirect users to the new ticket tracker. I don't plan > to migrate the wiki at this stage as I do not feel that the parser is > robust enough although there are now few other technical challenges > blocking this direction. From ezyang at mit.edu Tue Jan 3 17:21:14 2017 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 03 Jan 2017 12:21:14 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: <1483464029-sup-2265@sabre> Hi Matthew, Thanks for doing the work for setting up this prototype, it definitely helps in making an informed decision about the switch. Some comments: 1. In your original email, you stated that many of the custom fields were going to be replaced with Phabricator "projects" (their equivalent of tags). First, I noticed a little trouble where some fields were just completely lost. Compare: http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095 https://ghc.haskell.org/trac/ghc/ticket/8095 AFAICT, we lost the version the bug was found in, architecture, and component. I hope we can keep these details in the real migration! 
Also, related tickets don't seem to be working (see example above); migration problem? 2. Following on to this, I am actually a fan of Trac having a separate field per logically distinct attribute, rather than GitHub style "EVERYTHING IS A TAG". The primary reason for this is during bug submission: I find that if everything is a tag, they all show up in a giant list of ALL THE TAGS, and I never bother adding all the fields I should. Whereas in Trac, every relevant field is in front of me, and I remember to toggle things as necessary. Sometimes they don't matter, but sometimes they do (e.g., arch!) and I think *seeing* every field you might want to fill in is helpful for remembering, at least for me as an expert user. 3. One thing that is bad about Trac is that its built-in search bar is useless. So here, Phabricator is an improvement. However, it seems like Phabricator search is less good than Trac's advanced ticket query. Here's what I like about Trac's version: - It sorts by priority, and then by ticket number. There is a long tail of bugs that I don't really care about, and this sorting helps me ignore the low priority ones which I don't care about. I find sort-by-relevance *particularly frustrating* because I find that these search engines don't have particularly good relevance metrics, and the chronology hint of sorting by ticket number is much better. Phabricator doesn't offer any control over search. - Trac search devotes only one row per ticket, so it is really easy to scan through and find the one I'm looking for. Both GH and Phabricator insist on putting a useless second row, fluffing up the results and making it difficult to scan. - It searches tickets only. In Phabricator I always have to type in "Maniphest Task" into document types to get into the bug finding view. Maybe there's a way to set up a default search for something like this? 
Thanks, Edward Excerpts from Matthew Pickering's message of 2016-12-21 10:12:56 +0000: > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps. I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com > > All the user accounts are automatically generated. If you want to see > the tracker from your perspective then send me an email or ping me on > IRC and I can set the password of the relevant account. > > NOTE: This is not a decision, the existence of this prototype is to > show that the migration is feasible in a satisfactory way and to > remove hypothetical arguments from the discussion. > > I must also thank Dan Palmer and Herbert who helped me along the way. > Dan was responsible for the first implementation and setting up much > of the infrastructure at the Haskell Exchange hackathon in October. We > extensively used the API bindings which Herbert had been working on. > > Further information below! > > Matt > > ===================================================================== > > Reasons > ====== > > Why this change? The main argument is consolidation. Having many > different services is confusing for new and old contributors. > Phabricator has proved effective as a code review tool. It is modern > and actively developed with a powerful feature set which we currently > only use a small fraction of. > > Trac is showing signs of its age. It is old and slow, users regularly > lose comments through accidently refreshing their browser. Further to > this, the integration with other services is quite poor. Commits do > not close tickets which mention them and the only link to commits is a > comment. 
Querying the tickets is also quite difficult, I usually > resort to using google search or my emails to find the relevant > ticket. > > > Why is Phabricator better? > ==================== > > Through learning more about Phabricator, there are many small things > that I think it does better which will improve the usability of the > issue tracker. I will list a few but I urge you to try it out. > > * Commits which mention ticket numbers are currently posted as trac > comments. There is better integration in phabricator as linking to > commits has first-class support. > * Links with differentials are also more direct than the current > custom field which means you must update two places when posting a > differential. > * Fields are verified so that mispelling user names is not possible > (see #12623 where Ben mispelled his name for example) > * This is also true for projects and other fields. Inspecting these > fields on trac you will find that the formatting on each ticket is > often quite different. > * Keywords are much more useful as the set of used keywords is discoverable. > * Related tickets are much more substantial as the status of related > tickets is reflected to parent ticket. > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) > > Implementation > ============ > > Keywords are implemented as projects. A project is a combination of a > tag which can be used with any Phabricator object, a workboard to > organise tasks and a group of people who care about the topic. Not all > keywords are migrated. Only keywords with at least 5 tickets were > added to avoid lots of useless projects. The state of keywords is > still a bit unsatisfactory but I wanted to take this chance to clean > them up. > > Custom fields such as architecture and OS are replaced by *projects* > just like keywords. This has the same advantage as other projects. > Users can be subscribed to projects and receive emails when new > tickets are tagged with a project. 
The large majority of tickets have > very little additional metadata set. I also implemented these as > custom fields but found the the result to be less satisfactory. > > Some users who have trac accounts do not have phab accounts. > Fortunately it is easy to create new user accounts for these users > which have empty passwords which can be recovered by the appropriate > email address. This means tickets can be properly attributed in the > migration. > > The ticket numbers are maintained. I still advocate moving the > infrastructure tickets in order to maintain this mapping. Especially > as there has been little activity in thr the last year. > > Tickets are linked to the relevant commits, differentials and other > tickets. There are 3000 dummy differentials which are used to test > that the linking works correctly. Of course with real data, the proper > differential would be > linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) > > There are a couple of issues currently with the migration. There are a > few issues in the parser which converts trac markup to remarkup. Most > comments have very simple with just paragraphs and code blocks but > complex items like lists are sometimes parsed incorrectly. Definition > lists are converted to tables as there are no equivalent in remarkup. > Trac ticket links are converted to phab ticket links. > > The ideal time to migrate is before the end of January The busiest > time for the issue tracker is before and after a new major release. > With 8.2 planned for around April this gives the transition a few > months to settle. We can close the trac issue tracker and continue to > serve it or preferably redirect users to the new ticket. I don't plan > to migrate the wiki at this stage as I do not feel that the parser is > robust enough although there are now few other technical challenges > blocking this direction. 
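The trac-to-remarkup link rewriting mentioned above amounts to a small text substitution. A rough sketch of the idea, for illustration only — this is not the actual migration code, and the function name and regex are hypothetical:

```python
import re

def convert_ticket_links(text: str) -> str:
    """Rewrite trac-style ticket references (#1234) to
    Maniphest-style references (T1234)."""
    # \d+ restricts the match to all-digit references, and the
    # trailing \b keeps the rewrite from firing inside longer
    # tokens such as #123abc.
    return re.sub(r"#(\d+)\b", r"T\1", text)
```

The word boundary matters because trac applied almost no validation to its text fields, so ticket-like fragments can appear in the middle of arbitrary prose.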
From matthewtpickering at gmail.com Tue Jan 3 23:32:42 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 3 Jan 2017 23:32:42 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: <1483464029-sup-2265@sabre> References: <1483464029-sup-2265@sabre> Message-ID: Good comments, Edward. I think the answers to all three of your points will be insightful. 1. As for why it appears some information from the ticket is missing: The version field is not very useful as there are only two active branches at once. Instead we want a project which marks tickets which apply to the 8.0.2 branch for example. Tickets which refer to ancient versions of GHC should be closed if they can't be reproduced with HEAD. It is also not used very often, only updated on about 600 tickets. A tag for the architecture field is only added if the architecture is set to something other than the default. The ticket you linked had the default value and in fact, architecture was irrelevant to the ticket as it was about the type checker so it could be argued including this option is confusing to the reporter. I'm also skeptical about how useful the component field is. I've never personally used it and it is not accurate for the majority of tickets. If someone disagrees then please say but it is really not used very much. Finally, the related tickets look slightly off. This is a recurring problem with trac: there is no validation for any of the text fields. The parser I wrote assumed that tickets started with a # but in this example the first related ticket was initially hashless. There are many examples like this which I have come across. In this case I think I can relax it to search for any contiguous set of numbers and recover the correct information. The dump of the database which I had is about two months old which explains why the most recent changes are not included. Here is how often each field has been updated. 
 field        | count
--------------+-------
 _comment0    |  1839
 _comment1    |   364
 _comment10   |     1
 _comment11   |     1
 _comment12   |     1
 _comment13   |     1
 _comment14   |     1
 _comment15   |     1
 _comment16   |     1
 _comment2    |   123
 _comment3    |    53
 _comment4    |    23
 _comment5    |    13
 _comment6    |     4
 _comment7    |     4
 _comment8    |     3
 _comment9    |     1
 architecture |  2025
 blockedby    |  1103
 blocking     |  1112
 cc           |  5358
 comment      | 75967
 component    |  1217
 description  |  1919
 differential |  2410
 difficulty   |  4968
 failure      |  1427
 keywords     |   833
 milestone    | 13695
 os           |  1964
 owner        |  4870
 patch        |    26
 priority     |  2495
 related      |  1869
 reporter     |     7
 resolution   | 10446
 severity     |    83
 status       | 14612
 summary      |   827
 testcase     |  2386
 type         |   811
 version      |   687
 wikipage     |   873

2. This is a legitimate point. I will say in response that the majority of triaging is done by those who regularly use the bug tracker. These people will know the correct tags to use. There are occasionally people who submit tickets and ask questions about what to fill in these fields or fill them in incorrectly. Removing them for a uniform system is advantageous in my opinion.

3. Here is how the search works. It is context sensitive: if you search from the home page then you search everything. If you search after clicking on "maniphest" to enter the maniphest application it will only search tickets. There is a similar advanced search for Maniphest - http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/advanced/ Using this you can sort by priority and then by ticket number. Here is an example searching the tickets with the PatternSynonyms tag - http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/WjCq6bD27idM/#R I agree with you about the default layout. I would also like a more compact view but I feel this style is the prevailing modern web dev trend.

Thanks for your comments.

Matt

On Tue, Jan 3, 2017 at 5:21 PM, Edward Z. 
Yang wrote: > Hi Matthew, > > Thanks for doing the work for setting up this prototype, it definitely > helps in making an informed decision about the switch. > > Some comments: > > 1. In your original email, you stated that many of the custom fields > were going to be replaced with Phabricator "projects" (their > equivalent of tags). First, I noticed a little trouble where > some fields were just completely lost. Compare: > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095 > https://ghc.haskell.org/trac/ghc/ticket/8095 > > AFAICT, we lost version the bug was found in, > architecture, component. I hope we can keep these details > in the real migration! Also related tickets doesn't seem > to be working (see example above); migration problem? From rae at cs.brynmawr.edu Wed Jan 4 03:15:28 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 3 Jan 2017 22:15:28 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <1483464029-sup-2265@sabre> Message-ID: <851d871c-1d2c-e366-57cc-9713f6580aa5@cs.brynmawr.edu> On 1/3/17 6:32 PM, Matthew Pickering wrote: > The version field is not very useful as there are only two active > branches at once. Instead we want a project which marks tickets which > apply to the 8.0.2 branch for example. Tickets which refer to ancient > versions of GHC should be closed if they can't be reproduced with > HEAD. It is also not used very often, only updated on about 600 > tickets. I strongly disagree here. It is not uncommon to have a user report a bug against an old version of GHC. When the reporter sets a version of, say, 7.6.3 these days, we know that the bug is quite clearly old and may already be fixed. > I'm also skeptical about how useful the component field is. I've never > personally used it and it is not accurate for the majority of tickets. > If someone disagrees then please say but it is really not used very > much. 
I use the Template Haskell component field to quickly search for all TH tickets. That said, the line between Component and Keyword is very murky and ought to be straightened out. I'm thus not against scrapping Component, but I wouldn't want data loss during the conversion: an updated Component field should be retained as a tag. Richard From ezyang at mit.edu Wed Jan 4 04:34:16 2017 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 03 Jan 2017 23:34:16 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <1483464029-sup-2265@sabre> Message-ID: <1483504046-sup-5311@sabre> Excerpts from Matthew Pickering's message of 2017-01-03 23:32:42 +0000: > The version field is not very useful as there are only two active > branches at once. Instead we want a project which marks tickets which > apply to the 8.0.2 branch for example. Tickets which refer to ancient > versions of GHC should be closed if they can't be reproduced with > HEAD. It is also not used very often, only updated on about 600 > tickets. Echoing Richard's comment, it's a helpful reminder to the reporter to say what version of GHC the bug was on. If it's old, the first thing to do is check if it still fails in HEAD. > I'm also skeptical about how useful the component field is. I've never > personally used it and it is not accurate for the majority of tickets. > If someone disagrees then please say but it is really not used very > much. I think that there is a substantial subset of tickets that have a good "component" characterization. I have historically found "Runtime system", "Linker", "Build system", "Profiling" and a number of others very useful. Yes, a lot of tickets get dumped in "Compiler", but many have very good categorization! > 2. This is a legitimate point. I will say in response that the > majority of triaging is done by those who regularly use the bug > tracker. These people will know the correct tags to use. 
There are > occasionally people who submit tickets and ask questions about what to > fill in these fields or fill them in incorrectly. Removing them for a > uniform system is advantageous in my opinion. I am a "regular user" of Cabal's GitHub bug tracker, and I find it difficult to remember to apply all of the tags that I'm "supposed" to for any given ticket. It got better when I reorged all the tags to give them a prefix for the "category" they were in, but I still have to scroll through the list that contains ALL THE TAGS when really I just want to select one per category. For example, priority in GH is a complete lost cause because none of the display mechanisms take priority into account. Another benefit of tags by category is in tabular views, you can ask to group things by category, or priority, etc. Tag based systems rarely have this kind of UI. > I agree with you about the default layout. I would also like a more > compact view but I feel this style is the prevailing modern web dev > trend. Well, this is something we can fix with a little CSS :) Edward > Thanks for your comments. > > Matt > > On Tue, Jan 3, 2017 at 5:21 PM, Edward Z. Yang wrote: > > Hi Matthew, > > > > Thanks for doing the work for setting up this prototype, it definitely > > helps in making an informed decision about the switch. > > > > Some comments: > > > > 1. In your original email, you stated that many of the custom fields > > were going to be replaced with Phabricator "projects" (their > > equivalent of tags). First, I noticed a little trouble where > > some fields were just completely lost. Compare: > > > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095 > > https://ghc.haskell.org/trac/ghc/ticket/8095 > > > > AFAICT, we lost version the bug was found in, > > architecture, component. I hope we can keep these details > > in the real migration! Also related tickets doesn't seem > > to be working (see example above); migration problem? 
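The category-prefix workaround Edward describes for flat tag lists (giving each tag a prefix for its category) can be made concrete with a small grouping helper. The tag names and the `group_tags` function below are hypothetical examples for illustration, not actual Cabal or GHC labels:

```python
from collections import defaultdict

def group_tags(tags):
    """Split prefix-style tags ("category: value") into a mapping
    from category to values; tags without a prefix go under "misc"."""
    groups = defaultdict(list)
    for tag in tags:
        category, sep, value = tag.partition(":")
        if sep:  # tag had a "category:" prefix
            groups[category.strip()].append(value.strip())
        else:    # uncategorised tag
            groups["misc"].append(tag.strip())
    return dict(groups)
```

Grouping by prefix recovers roughly the per-field view that Trac gives for free, which is the UI point being argued here.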
From ben at well-typed.com Wed Jan 4 05:28:45 2017 From: ben at well-typed.com (Ben Gamari) Date: Wed, 04 Jan 2017 00:28:45 -0500 Subject: [ANNOUNCE] Formation of the initial GHC Steering Committee Message-ID: <87o9zn75rm.fsf@ben-laptop.smart-cactus.org> Dear Haskell community, Over the past months we have discussed changes to GHC's process for collecting, discussing, and considering new language extensions, compiler features, and the like. Happily, we are now ready to move forward with our new proposal process. Towards this end, we have formed the GHC Steering Committee which will be responsible for evaluating the proposals that run through the process. The committee consists of the following members (with GitHub user names given parenthetically), * Chris Allen (@bitemyapp) * Joachim Breitner (@nomeata) * Manuel M T Chakravarty (@mchakravarty) * Iavor Diatchki (@yav) * Atze Dijkstra (@atzedijkstra) * Richard Eisenberg (@goldfirere) * Ben Gamari (@bgamari) * Simon Marlow (@simonmar) * Ryan Newton (@rrnewton) * Simon Peyton-Jones (@simonpj) The body will be chaired jointly by Simon Marlow and Simon Peyton-Jones. Since the ghc-proposals repository was created, it has accumulated nearly thirty pull requests describing a variety of compelling changes. We will consider these proposals to be at the beginning of their four-week discussion period. The goal of this discussion is to find and eliminate weaknesses of the proposal. The final proposal should address all valid points raised in the discussion. When you believe the proposal has converged, bring it to the steering committee and summarize the discussion in a pull request comment. If you would like to contribute a new proposal, please refer to the directions given in the ghc-proposals' repository README [1] and proposal submission guidelines [2]. 
Cheers, - Ben, on behalf of the GHC Steering Committee [1] https://github.com/ghc-proposals/ghc-proposals [2] https://github.com/ghc-proposals/ghc-proposals/blob/master/proposal-submission.rst -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Wed Jan 4 10:18:16 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 4 Jan 2017 10:18:16 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: <1483504046-sup-5311@sabre> References: <1483464029-sup-2265@sabre> <1483504046-sup-5311@sabre> Message-ID: I am persuaded that component is useful. Richard makes the point that there is a murky divide between component and keywords. This is right and it indicates that we should keep the component field but also homogenise it with the keywords (in the form of projects). I have included which fields are used frequently in the footer of this message. The arguments for version are not convincing. The first thing you do when working on a ticket anyway is to try and reproduce the bug with a test case in HEAD. The date a ticket was reported is as good an indicator as the version. I also noted that I neglected to update the dateUpdated field for tickets so queries by date last modified do not currently work and some dates may appear strange when searching. Edward: There is support for bucketing by project as well. See this query for example, http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/AnBbna53Q.ue/#R In these discussions we should remember that phabricator is not trac. I am not trying to exactly recreate the trac experience because otherwise we might as well carry on using trac. 
Matt

 newvalue                | count | date last used
-------------------------+-------+------------------
 Core Libraries          |   171 | 1467895032829170
 Compiler (Type checker) |   162 | 1473084454218219
 Runtime System          |   128 | 1465462467207643
 GHCi                    |    78 | 1469178319691539
 Build System            |    67 | 1466005572342115
 libraries/base          |    63 | 1434353266824396
 Template Haskell        |    60 | 1471813543330680
 Compiler                |    57 | 1454111515909737
 Compiler (Parser)       |    48 | 1469089960346131
 libraries (other)       |    43 | 1426680041417003
 Documentation           |    34 | 1469109265405752
 Runtime System (Linker) |    31 | 1465457535216398
 Profiling               |    27 | 1464370392337006
 Compiler (NCG)          |    24 | 1464454221111344
 Driver                  |    23 | 1464988993920936
 Compiler (Linking)      |    20 | 1453415934718049
 Package system          |    19 | 1445379269305504
 Test Suite              |    19 | 1466101320438273
 libraries/process       |    17 | 1380279285602376
 Compiler (LLVM)         |    17 | 1453643706521552
 Trac & Git              |    13 | 1417192612294646
 Compiler (CodeGen)      |    12 | 1466888486602141
 Code Coverage           |     9 | 1463929919796252
 Compiler (FFI)          |     9 | 1438109440242842
 libraries/unix          |     8 | 1393611940333801
 Data Parallel Haskell   |     8 | 1427089471985001
 None                    |     8 | 1457637409612864
 hsc2hs                  |     5 | 1433929228414856
 libraries/network       |     5 | 1225140383000000
 libraries/random        |     4 | 1404824781090514
 libraries/directory     |     4 | 1355926496000000
 External Core           |     4 | 1365855698000000
 libraries/pretty        |     4 | 1321743759000000
 ghc-pkg                 |     4 | 1447954686599857
 GHC API                 |     3 | 1463577929270933
 libraries/HGL           |     3 | 1214961444000000
 Prelude                 |     2 | 1313617037000000
 libraries/haskell98     |     1 | 1186695581000000
 libraries (old-time)    |     1 | 1215426256000000
 Visual Haskell          |     1 | 1166430307000000
 NoFib benchmark suite   |     1 | 1405260335339305

On Wed, Jan 4, 2017 at 4:34 AM, Edward Z. Yang wrote: > Excerpts from Matthew Pickering's message of 2017-01-03 23:32:42 +0000: >> The version field is not very useful as there are only two active >> branches at once. Instead we want a project which marks tickets which >> apply to the 8.0.2 branch for example. 
Tickets which refer to ancient >> versions of GHC should be closed if they can't be reproduced with >> HEAD. It is also not used very often, only updated on about 600 >> tickets. > > Echoing Richard's comment, it's a helpful reminder to the reporter > to say what version of GHC the bug was on. If it's old, the first > thing to do is check if it still fails in HEAD. > >> I'm also skeptical about how useful the component field is. I've never >> personally used it and it is not accurate for the majority of tickets. >> If someone disagrees then please say but it is really not used very >> much. > > I think that there is a substantial subset of tickets that have a good > "component" characterization. I have historically found "Runtime > system", "Linker", "Build system", "Profiling" and a number of others > very useful. Yes, a lot of tickets get dumped in "Compiler", but many > have very good categorization! > >> 2. This is a legitimate point. I will say in response that the >> majority of triaging is done by those who regularly use the bug >> tracker. These people will know the correct tags to use. There are >> occasionally people who submit tickets and ask questions about what to >> fill in these fields or fill them in incorrectly. Removing them for a >> uniform system is advantageous in my opinion. > > I am a "regular user" of Cabal's GitHub bug tracker, and I find it > difficult to remember to apply all of the tags that I'm "supposed" to > for any given ticket. It got better when I reorged all the tags > to give them a prefix for the "category" they were in, but I still > have to scroll through the list that contains ALL THE TAGS when > really I just want to select one per category. For example, priority > in GH is a complete lost cause because none of the display mechanisms > take priority into account. > > Another benefit of tags by category is in tabular views, you can ask > to group things by category, or priority, etc. 
Tag based systems rarely
> have this kind of UI.
>
>> I agree with you about the default layout. I would also like a more
>> compact view but I feel this style is the prevailing modern web dev
>> trend.
>
> Well, this is something we can fix with a little CSS :)
>
> Edward
>
>> Thanks for your comments.
>>
>> Matt
>>
>> On Tue, Jan 3, 2017 at 5:21 PM, Edward Z. Yang wrote:
>> > Hi Matthew,
>> >
>> > Thanks for doing the work for setting up this prototype, it definitely
>> > helps in making an informed decision about the switch.
>> >
>> > Some comments:
>> >
>> > 1. In your original email, you stated that many of the custom fields
>> > were going to be replaced with Phabricator "projects" (their
>> > equivalent of tags). First, I noticed a little trouble where
>> > some fields were just completely lost. Compare:
>> >
>> > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095
>> > https://ghc.haskell.org/trac/ghc/ticket/8095
>> >
>> > AFAICT, we lost the version the bug was found in, the
>> > architecture, and the component. I hope we can keep these details
>> > in the real migration! Also, related tickets don't seem
>> > to be working (see example above); migration problem?

From matthewtpickering at gmail.com  Wed Jan  4 10:20:51 2017
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Wed, 4 Jan 2017 10:20:51 +0000
Subject: Trac to Phabricator (Maniphest) migration prototype
In-Reply-To: References: <1483464029-sup-2265@sabre> <1483504046-sup-5311@sabre>
Message-ID: 

I should have looked more closely at the implementation: the component
field was already preserved. There is a bug where it is not set properly
if it was set by the ticket reporter. I will look into this problem this
evening!

Matt

On Wed, Jan 4, 2017 at 10:18 AM, Matthew Pickering wrote:
> I am persuaded that component is useful. Richard makes the point that
> there is a murky divide between component and keywords.
This is right > and it indicates that we should keep the component field but also > homogenise it was the keywords (in the form of projects). > > I have included which fields are used frequently in the footer of this message. > > The arguments for version are not convincing. The first thing you do > when working on a ticket anyway is to try and reproduce the bug with a > test case in HEAD. The date a ticket reported is as good an indicator > of version. > > I also noted that I neglected to update the dateUpdated field for > tickets so queries by date last modified do not currently work and > some dates may appear strange when searching. > > Edward: There is support for bucketing by project as well. See this > query for example, > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/AnBbna53Q.ue/#R > > In these discussions we should remember that phabricator is not trac. > I am not trying to exactly recreate the trac experience because > otherwise we might as well carry on using trac. 
> > Matt > > newvalue | count | data last used > > -------------------------+-------+------------------ > Core Libraries | 171 | 1467895032829170 > Compiler (Type checker) | 162 | 1473084454218219 > Runtime System | 128 | 1465462467207643 > GHCi | 78 | 1469178319691539 > Build System | 67 | 1466005572342115 > libraries/base | 63 | 1434353266824396 > Template Haskell | 60 | 1471813543330680 > Compiler | 57 | 1454111515909737 > Compiler (Parser) | 48 | 1469089960346131 > libraries (other) | 43 | 1426680041417003 > Documentation | 34 | 1469109265405752 > Runtime System (Linker) | 31 | 1465457535216398 > Profiling | 27 | 1464370392337006 > Compiler (NCG) | 24 | 1464454221111344 > Driver | 23 | 1464988993920936 > Compiler (Linking) | 20 | 1453415934718049 > Package system | 19 | 1445379269305504 > Test Suite | 19 | 1466101320438273 > libraries/process | 17 | 1380279285602376 > Compiler (LLVM) | 17 | 1453643706521552 > Trac & Git | 13 | 1417192612294646 > Compiler (CodeGen) | 12 | 1466888486602141 > Code Coverage | 9 | 1463929919796252 > Compiler (FFI) | 9 | 1438109440242842 > libraries/unix | 8 | 1393611940333801 > Data Parallel Haskell | 8 | 1427089471985001 > None | 8 | 1457637409612864 > hsc2hs | 5 | 1433929228414856 > libraries/network | 5 | 1225140383000000 > libraries/random | 4 | 1404824781090514 > libraries/directory | 4 | 1355926496000000 > External Core | 4 | 1365855698000000 > libraries/pretty | 4 | 1321743759000000 > ghc-pkg | 4 | 1447954686599857 > GHC API | 3 | 1463577929270933 > libraries/HGL | 3 | 1214961444000000 > Prelude | 2 | 1313617037000000 > libraries/haskell98 | 1 | 1186695581000000 > libraries (old-time) | 1 | 1215426256000000 > Visual Haskell | 1 | 1166430307000000 > NoFib benchmark suite | 1 | 1405260335339305 > > On Wed, Jan 4, 2017 at 4:34 AM, Edward Z. Yang wrote: >> Excerpts from Matthew Pickering's message of 2017-01-03 23:32:42 +0000: >>> The version field is not very useful as there are only two active >>> branches at once. 
Instead we want a project which marks tickets which >>> apply to the 8.0.2 branch for example. Tickets which refer to ancient >>> versions of GHC should be closed if they can't be reproduced with >>> HEAD. It is also not used very often, only updated on about 600 >>> tickets. >> >> Echoing Richard's comment, it's a helpful reminder to the reporter >> to say what version of GHC the bug was on. If it's old, the first >> thing to do is check if it still fails in HEAD. >> >>> I'm also skeptical about how useful the component field is. I've never >>> personally used it and it is not accurate for the majority of tickets. >>> If someone disagrees then please say but it is really not used very >>> much. >> >> I think that there is a substantial subset of tickets that have a good >> "component" characterization. I have historically found "Runtime >> system", "Linker", "Build system", "Profiling" and a number of others >> very useful. Yes, a lot of tickets get dumped in "Compiler", but many >> have very good categorization! >> >>> 2. This is a legitimate point. I will say in response that the >>> majority of triaging is done by those who regularly use the bug >>> tracker. These people will know the correct tags to use. There are >>> occasionally people who submit tickets and ask questions about what to >>> fill in these fields or fill them in incorrectly. Removing them for a >>> uniform system is advantageous in my opinion. >> >> I am a "regular user" of Cabal's GitHub bug tracker, and I find it >> difficult to remember to apply all of the tags that I'm "supposed" to >> for any given ticket. It got better when I reorged all the tags >> to give them a prefix for the "category" they were in, but I still >> have to scroll through the list that contains ALL THE TAGS when >> really I just want to select one per category. For example, priority >> in GH is a complete lost cause because none of the display mechanisms >> take priority into account. 
>> >> Another benefit of tags by category is in tabular views, you can ask >> to group things by category, or priority, etc. Tag based systems rarely >> have this kind of UI. >> >>> I agree with you about the default layout. I would also like a more >>> compact view but I feel this style is the prevailing modern web dev >>> trend. >> >> Well, this is something we can fix with a little CSS :) >> >> Edward >> >>> Thanks for your comments. >>> >>> Matt >>> >>> On Tue, Jan 3, 2017 at 5:21 PM, Edward Z. Yang wrote: >>> > Hi Matthew, >>> > >>> > Thanks for doing the work for setting up this prototype, it definitely >>> > helps in making an informed decision about the switch. >>> > >>> > Some comments: >>> > >>> > 1. In your original email, you stated that many of the custom fields >>> > were going to be replaced with Phabricator "projects" (their >>> > equivalent of tags). First, I noticed a little trouble where >>> > some fields were just completely lost. Compare: >>> > >>> > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095 >>> > https://ghc.haskell.org/trac/ghc/ticket/8095 >>> > >>> > AFAICT, we lost version the bug was found in, >>> > architecture, component. I hope we can keep these details >>> > in the real migration! Also related tickets doesn't seem >>> > to be working (see example above); migration problem? From simonpj at microsoft.com Wed Jan 4 10:38:26 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 4 Jan 2017 10:38:26 +0000 Subject: Lightweight Concurrency Branch In-Reply-To: References: Message-ID: David KC never finished work on this stuff. I’m copying him because I’m sure he’d be happy to help. KC: can you summarise where you left it? I think it’s very interesting work, and has the potential to make GHC’s RTS much more malleable, by moving more of it into Haskell libraries instead of deeply-magic C code. 
But it’s not easy, because we are reluctant to lose performance, and
because there are interactions with STM, weak pointers, foreign function
calls, etc. I think it’d require a bit of commitment to make a go of it.

Simon

From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Daniel Bennet
Sent: 28 December 2016 17:23
To: ghc-devs at haskell.org
Subject: Lightweight Concurrency Branch

The lightweight concurrency branch is highly interesting and relevant to
my interests; however, the ghc-lwc2 branch hasn't been updated in several
years even though it's listed as an active branch at
https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches

The wiki page for the work hasn't been updated in almost two years either:
https://ghc.haskell.org/trac/ghc/wiki/LightweightConcurrency

Relevant papers:

Composable Scheduler Activations for Haskell (2014)
https://timharris.uk/papers/2014-composable-tr.pdf

Composable Scheduler Activations for Haskell (2016)
http://kcsrk.info/papers/schedact_jfp16.pdf

What remains for integrating this branch into GHC?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com  Wed Jan  4 12:54:29 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Wed, 4 Jan 2017 12:54:29 +0000
Subject: FW: Lightweight Concurrency Branch
In-Reply-To: References: Message-ID: 

Reply from KC… see below.

Simon

From: KC Sivaramakrishnan [mailto:sk826 at cam.ac.uk]
Sent: 04 January 2017 11:29
To: Simon Peyton Jones
Cc: Daniel Bennet ; ghc-devs at haskell.org; KC (sk826 at hermes.cam.ac.uk)
Subject: Re: Lightweight Concurrency Branch

Hi Simon, David,

Indeed, as Simon mentioned, the work was never finished. The
implementation is (or was) at a stage where we can run some small
non-trivial benchmarks (Section 7 of the JFP paper).
The interactions with FFI, MVars, STM, and asynchronous exceptions worked
well (though we most probably do things a little differently now), until
we encountered the interaction with the blackholing mechanism.

The crux of the problem is that the blackholing mechanism interacts with
the scheduler, and if the scheduler functionality itself is written in
Haskell, then we have the potential for a deadlock. Some of the details
are presented in Section 6.5, but we never got around to a clean solution
that I could get working properly. Hence the decision not to formalize it
completely in the paper, and I am sure there are some edge cases that I
hadn't thought about. This is a particularly tricky issue, and the
current design unfortunately does not lend itself to a clean solution.

Although the interaction with the asynchronous exception mechanism was
not formalized, it works well and passes the tests. Not much effort was
put into making the implementation go particularly fast: the performance
was comparable on average but varied quite a bit on edge cases; some of
the results in the paper show this.

If one were to revive the project, I would suggest starting from the
design, using the existing code as a prototype, but writing the code from
scratch; pleasantly, there isn't much new code in this branch. The
project does need a substantial amount of work to bring it upstream with
the newer RTS mechanisms. I am very happy to provide more details and
eager to assist with the work, but my time commitments mean that I cannot
lead this effort.

Kind Regards,
KC

On Wed, Jan 4, 2017 at 10:38 AM, Simon Peyton Jones wrote:

David

KC never finished work on this stuff. I’m copying him because I’m sure
he’d be happy to help.

KC: can you summarise where you left it?

I think it’s very interesting work, and has the potential to make GHC’s
RTS much more malleable, by moving more of it into Haskell libraries
instead of deeply-magic C code.
But it’s not easy, because we are reluctant to lose performance, and because there are interactions with STM, weak pointers, foreign function calls, etc. I think it’d require a bit of commitment to make a go of it. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Daniel Bennet Sent: 28 December 2016 17:23 To: ghc-devs at haskell.org Subject: Lightweight Concurrency Branch The lightweight concurrency branch is highly interesting and relevant to my interests, however, the ghc-lwc2 branch hasn't been updated in several years even though it's listed as an active branch at https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches The wiki page for the work hasn't been updated in almost two years either, https://ghc.haskell.org/trac/ghc/wiki/LightweightConcurrency Relevant papers: Composable Scheduler Activations for Haskell (2014) https://timharris.uk/papers/2014-composable-tr.pdf Composable Scheduler Activations for Haskell (2016) http://kcsrk.info/papers/schedact_jfp16.pdf What remains for integrating this branch into GHC? -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Wed Jan 4 13:30:15 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 4 Jan 2017 14:30:15 +0100 Subject: FW: Lightweight Concurrency Branch In-Reply-To: References: Message-ID: Hi KC, if blackholes only appear during thunk evaluation, could the problem you describe below be worked around by simply imposing that the scheduler never creates black holes? Say by leveraging GHC's new -XStrict language extension? -- Mathieu Boespflug Founder at http://tweag.io. On 4 January 2017 at 13:54, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Reply from KC… see below. 
> > > > Simon > > > > *From:* KC Sivaramakrishnan [mailto:sk826 at cam.ac.uk] > *Sent:* 04 January 2017 11:29 > *To:* Simon Peyton Jones > *Cc:* Daniel Bennet ; ghc-devs at haskell.org; KC > (sk826 at hermes.cam.ac.uk) > *Subject:* Re: Lightweight Concurrency Branch > > > > Hi Simon, David, > > > > Indeed as Simon mentioned it, the work was never finished. The > implementation is(was) at a stage where we can run some small non-trivial > benchmarks (Section 7 of the JFP paper). The interactions with FFI, MVars, > STM, asynchronous exceptions worked well (though we most probably do things > a little differently now), until we encountered interaction with blackholes > mechanism. > > > > The crux of the problem is that the blackholing mechanism interacts with > the scheduler and if the scheduler functionality itself is written in > Haskell, then we have the potential for a deadlock. Some of the details are > presented in Section 6.5, but we never got around to a clean solution that > I could get working properly. Hence the reason for not formalizing it > completely in the paper, and I am sure there are some edge cases that I > hadn't thought about. This is a particularly tricky issue, and the current > design unfortunately does not lend itself to a clean solution. > > > > Although the asynchronous exception mechanism interaction was not > formalized, it works well and passes the tests. Not much effort was put in > to make the implementation go particularly fast. While the performance was > comparable on average, but varied quite a bit on edge cases; some of the > results in the paper show this. > > > > If one were to revive the project, I would suggest starting from the > design, using the existing code as a prototype, but write code from > scratch; pleasantly there isn't much new code in this branch. The project > does need substantial amount of work to make it upstream with the newer RTS > mechanisms. 
I am very happy to provide more details and eager to assist > with the work, but my time commitments mean that I cannot lead this effort. > > > > Kind Regards, > > KC > > > > On Wed, Jan 4, 2017 at 10:38 AM, Simon Peyton Jones > wrote: > > David > > > > KC never finished work on this stuff. I’m copying him because I’m sure > he’d be happy to help. > > > > KC: can you summarise where you left it? > > > > I think it’s very interesting work, and has the potential to make GHC’s > RTS much more malleable, by moving more of it into Haskell libraries > instead of deeply-magic C code. > > > > But it’s not easy, because we are reluctant to lose performance, and > because there are interactions with STM, weak pointers, foreign function > calls, etc. I think it’d require a bit of commitment to make a go of it. > > > > > > Simon > > > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Daniel > Bennet > *Sent:* 28 December 2016 17:23 > *To:* ghc-devs at haskell.org > *Subject:* Lightweight Concurrency Branch > > > > The lightweight concurrency branch is highly interesting and relevant to > my interests, however, the ghc-lwc2 branch hasn't been updated in several > years even though it's listed as an active branch at > https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches > > > > The wiki page for the work hasn't been updated in almost two years either, > https://ghc.haskell.org/trac/ghc/wiki/LightweightConcurrency > > > > Relevant papers: > > Composable Scheduler Activations for Haskell (2014) > > https://timharris.uk/papers/2014-composable-tr.pdf > > > > > > Composable Scheduler Activations for Haskell (2016) > > http://kcsrk.info/papers/schedact_jfp16.pdf > > > > > What remains for integrating this branch into GHC? 
> > _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From varma.sunjay at gmail.com  Wed Jan  4 14:14:59 2017
From: varma.sunjay at gmail.com (Sunjay Varma)
Date: Wed, 4 Jan 2017 09:14:59 -0500
Subject: Contributing Examples to the Documentation
Message-ID: 

Hi,
I'm considering contributing examples to the documentation. I wanted to
start with something like Data.List, because it is one of the modules I
end up using the most. I think a few examples for each function would
help users understand them better. I find myself referring back to books
like Learn You a Haskell because I don't remember exactly what I'm
supposed to do with a function.

Doing one module seems like a good start, and hopefully we can have some
other people begin to add their own examples too.

Is this a worthwhile contribution? I haven't contributed before, so I
think it's prudent to ask before I add something no one wants.

Are there any examples of modules with good code examples that I should
use as a reference? I want to include both the code and the output of
the example, as if the user were running GHCi. Are there any guidelines
for contributing documentation?

When I say Data.List, I really mean Data.Foldable and Data.Traversable,
since that is where the functions are actually implemented.

I noticed the GitHub repo said that pull requests were okay for
easy-to-review documentation changes. Can I open a pull request there,
or should I follow another process?

Please let me know when you can. I don't have an exact timeline for when
this will be done, but hopefully I'll have something in the next few
weeks. I don't anticipate that it will take long once I sit down to do it.

I've always complained about a lack of examples and never done anything
about it.
Hopefully I can practice what I preach and contribute some in order to
make the documentation a little better for everyone.

Thanks for helping to make this language so great!
Sunjay
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rae at cs.brynmawr.edu  Wed Jan  4 14:18:02 2017
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Wed, 4 Jan 2017 09:18:02 -0500
Subject: Trac to Phabricator (Maniphest) migration prototype
In-Reply-To: References: <1483464029-sup-2265@sabre> <1483504046-sup-5311@sabre>
Message-ID: <6F3AFE3B-F83B-4632-9366-BB0D5F196A70@cs.brynmawr.edu>

> On Jan 4, 2017, at 5:18 AM, Matthew Pickering wrote:
>
> I am persuaded that component is useful. Richard makes the point that
> there is a murky divide between component and keywords. This is right
> and it indicates that we should keep the component field but also
> homogenise it with the keywords (in the form of projects).

This seems sensible, yes.

>
> The arguments for version are not convincing. The first thing you do
> when working on a ticket anyway is to try and reproduce the bug with a
> test case in HEAD. The date a ticket was reported is as good an
> indicator of version.

I still strongly disagree with you here. You've described the first
thing *you* do when working on a ticket, but that's not the first thing
*I* do. My first step is to decide whether or not I care about a ticket,
roughly like this:

- Read ticket title. If it's not about my area, stop.
- If the ticket is obviously a bug (panic, core lint error):
  * Check the version reported. If the version is old and I'm squeezed
    for time at the moment, stop.
  * Try to reproduce at the version reported. If I can't, report this
    on the ticket.
  * If HEAD is around on my machine and I'm sufficiently interested,
    try to repro on HEAD. Report my findings.
- If the ticket is not obviously a bug (complicated type-level
  shenanigans that are either accepted or rejected):
  * Think Hard about whether the behavior is expected or not and report
    accordingly.
  * If the behavior is indeed a bug, continue with the steps outlined
    above.

Note that version numbers play a key role in the triage process! If we
had a bot that could try to repro every reported bug on HEAD and could
report its findings, I would find the need for a version number less
pressing. But until then, it's very, very helpful.

>
> I also noted that I neglected to update the dateUpdated field for
> tickets so queries by date last modified do not currently work and
> some dates may appear strange when searching.
>
> Edward: There is support for bucketing by project as well. See this
> query for example,
> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/AnBbna53Q.ue/#R
>
> In these discussions we should remember that phabricator is not trac.
> I am not trying to exactly recreate the trac experience because
> otherwise we might as well carry on using trac.

Indeed. And I think using a tag list instead of the current
keywords/component interface would be an improvement. At the same time,
we need to identify workflows that go well with Trac but which might be
disrupted in the changeover. I'm not arguing that any disrupted
workflows are a deal-breaker, but we need to know what they are.

Richard

From matthewtpickering at gmail.com  Wed Jan  4 14:20:18 2017
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Wed, 4 Jan 2017 14:20:18 +0000
Subject: Contributing Examples to the Documentation
In-Reply-To: References: Message-ID: 

I think this will be welcomed, go for it!

All patches should be submitted on Phabricator.
There are some straightforward instructions on how to submit a patch on
the wiki: https://ghc.haskell.org/trac/ghc/wiki/Phabricator

If you want to build the documentation, you can modify mk/build.mk and
add the line HADDOCK_DOCS = YES (and probably enable the quick build
flavour by uncommenting the relevant line).

Message back if you need help, or ask in #ghc on freenode.

Matt

On Wed, Jan 4, 2017 at 2:14 PM, Sunjay Varma wrote:
> Hi,
> I'm considering contributing examples to the documentation. I wanted to
> start with something like Data.List because it is one of the modules I end
> up using the most. I think a few examples for each function would help users
> understand them better. I find myself referring back to books like Learn You
> a Haskell because I don't remember exactly what I'm supposed to do with a
> function.
>
> Doing one module seems like a good start and hopefully we can have some
> other people begin to add their own examples too.
>
> Is this a worthwhile contribution? I haven't contributed before and so I
> think it's prudent to ask before I add something no one wants.
Hopefully I can practice what I preach and contribute some in > order to make the documentation a little better for everyone. > > Thanks for helping to make this language so great! > Sunjay > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From varma.sunjay at gmail.com Wed Jan 4 21:36:39 2017 From: varma.sunjay at gmail.com (Sunjay Varma) Date: Wed, 4 Jan 2017 16:36:39 -0500 Subject: Contributing Examples to the Documentation In-Reply-To: References: Message-ID: Hi all, Are there any modules with good code examples that I should use as a reference? I want to include both the code and output of the example as if the user was running ghci. Are there any guidelines for contributing documentation? Thanks! Sunjay On Jan 4, 2017 9:20 AM, "Matthew Pickering" wrote: I think this will be welcomed, go for it! All patches should be submitted on Phabricator. There are some straightforward instructions on how to submit a patch on the wiki - https://ghc.haskell.org/trac/ghc/wiki/Phabricator If you want to build the documentation then you can modify mk/build.mk and add the line HADDOCK_DOCS = YES (and probably enable the quick build flavour by uncommenting the line). Message back if you need help or #ghc on freenode. Matt On Wed, Jan 4, 2017 at 2:14 PM, Sunjay Varma wrote: > Hi, > I'm considering contributing examples to the documentation. I wanted to > start with something like Data.List because it is one of the modules I end > up using the most. I think a few examples for each function would help users > understand them better. I find myself referring back to books like learn you > a haskell because I don't remember exactly what I'm supposed to do with a > function. > > Doing one module seems like a good start and hopefully we can have some > other people begin to add their own examples too. > > Is this a worthwhile contribution? 
I haven't contributed before and so I > think it's prudent to ask before I add something no one wants. > > Are there any examples of modules with good code examples that I should use > as a reference? I want to include both the code and output of the example as > if the user was running ghci. Are there any guidelines for contributing > documentation? > > When I say Data.List, I really mean Data.Foldable and Data.Traversable since > that is where the functions are actually implemented. > > I noticed the GitHub repo said that Pull Requests were okay for easy to > review documentation changes. Can I open a pull request there or should I > follow another process? > > Please let me know when you can. I don't have an exact timeline for when > this will be done, but hopefully I'll have something in the next few weeks. > I don't anticipate that it will take long once I sit down to do it. > > I've always complained about a lack of examples and never done anything > about it. Hopefully I can practice what I preach and contribute some in > order to make the documentation a little better for everyone. > > Thanks for helping to make this language so great! > Sunjay > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Wed Jan 4 22:01:26 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 04 Jan 2017 17:01:26 -0500 Subject: Contributing Examples to the Documentation In-Reply-To: References: Message-ID: <874m1epjrd.fsf@ben-laptop.smart-cactus.org> Sunjay Varma writes: > Hi, > I'm considering contributing examples to the documentation. I wanted to > start with something like Data.List because it is one of the modules I end > up using the most. I think a few examples for each function would help > users understand them better. 
I find myself referring back to books like > learn you a haskell because I don't remember exactly what I'm supposed to > do with a function. > > Doing one module seems like a good start and hopefully we can have some > other people begin to add their own examples too. > > Is this a worthwhile contribution? I haven't contributed before and so I > think it's prudent to ask before I add something no one wants. > This would be amazing! Many people have remarked that GHC's library documentation is in need of examples; we just need someone to step up and start contributing patches. > Are there any examples of modules with good code examples that I should use > as a reference? I want to include both the code and output of the example > as if the user was running ghci. Are there any guidelines for contributing > documentation? > I think Edward's lens library is probably a good example here. I know of few good examples in GHC itself. > When I say Data.List, I really mean Data.Foldable and Data.Traversable > since that is where the functions are actually implemented. > These are great places to start. > I noticed the GitHub repo said that Pull Requests were okay for easy to > review documentation changes. Can I open a pull request there or should I > follow another process? > We prefer to take patches on Phabricator. However, to lower the barrier to small patches like this I have suggested in the past that we accept GitHub pull requests for small changes. > Please let me know when you can. I don't have an exact timeline for when > this will be done, but hopefully I'll have something in the next few weeks. > I don't anticipate that it will take long once I sit down to do it. > Great. Let us know if you encounter friction. > I've always complained about a lack of examples and never done anything > about it. Hopefully I can practice what I preach and contribute some in > order to make the documentation a little better for everyone. 
> > Thanks for helping to make this language so great! And thanks to you for helping as well! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Jan 4 22:39:05 2017 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 05 Jan 2017 07:39:05 +0900 (JST) Subject: Contributing Examples to the Documentation In-Reply-To: References: Message-ID: <20170105.073905.1610806035760344071.kazu@iij.ad.jp> Hi Sunjay, > Are there any modules with good code examples that I should use as a > reference? I want to include both the code and output of the example as if > the user was running ghci. Are there any guidelines for contributing > documentation? I would suggest to use the doctest style so that your examples can be automatically tested. I would like to move the test suites of the containers library to doctest. --Kazu From varma.sunjay at gmail.com Wed Jan 4 22:44:56 2017 From: varma.sunjay at gmail.com (Sunjay Varma) Date: Wed, 4 Jan 2017 17:44:56 -0500 Subject: Contributing Examples to the Documentation In-Reply-To: <20170105.073905.1610806035760344071.kazu@iij.ad.jp> References: <20170105.073905.1610806035760344071.kazu@iij.ad.jp> Message-ID: Hi Kazu, If I use the doctest style, do I use ">>>" for the prompt? That's more a Python thing. Maybe ">" would be more appropriate for Haskell? Sunjay On Jan 4, 2017 5:39 PM, "Kazu Yamamoto" wrote: Hi Sunjay, > Are there any modules with good code examples that I should use as a > reference? I want to include both the code and output of the example as if > the user was running ghci. Are there any guidelines for contributing > documentation? I would suggest to use the doctest style so that your examples can be automatically tested. I would like to move the test suites of the containers library to doctest. 
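(To make the doctest suggestion concrete: a Haddock comment with ">>>" lines reads as a GHCi session, and doctest runs each line and checks it against the output shown beneath it. The function below is a made-up illustration, not a proposed addition to any library.)

```haskell
-- | Keep only the even elements of a list.
--
-- Each '>>>' line is treated as GHCi input, and the line after it
-- as the expected output, so doctest can verify the example:
--
-- >>> keepEven [1, 2, 3, 4]
-- [2,4]
--
-- >>> keepEven []
-- []
keepEven :: [Int] -> [Int]
keepEven = filter even
```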
--Kazu -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsf at seereason.com Thu Jan 5 00:01:35 2017 From: dsf at seereason.com (David Fox) Date: Wed, 4 Jan 2017 16:01:35 -0800 Subject: Contributing Examples to the Documentation In-Reply-To: References: Message-ID: There are lots of examples in the lens package. I can't think of a better model. On Wed, Jan 4, 2017 at 1:36 PM, Sunjay Varma wrote: > Hi all, > Are there any modules with good code examples that I should use as a > reference? I want to include both the code and output of the example as if > the user was running ghci. Are there any guidelines for contributing > documentation? > > Thanks! > Sunjay > > > On Jan 4, 2017 9:20 AM, "Matthew Pickering" > wrote: > > I think this will be welcomed, go for it! > > All patches should be submitted on Phabricator. There are some > straightforward instructions on how to submit a patch on the wiki - > https://ghc.haskell.org/trac/ghc/wiki/Phabricator > > If you want to build the documentation then you can modify mk/build.mk > and add the line HADDOCK_DOCS = YES (and probably enable the quick > build flavour by uncommenting the line). > > Message back if you need help or #ghc on freenode. > > Matt > > On Wed, Jan 4, 2017 at 2:14 PM, Sunjay Varma > wrote: > > Hi, > > I'm considering contributing examples to the documentation. I wanted to > > start with something like Data.List because it is one of the modules I > end > > up using the most. I think a few examples for each function would help > users > > understand them better. I find myself referring back to books like learn > you > > a haskell because I don't remember exactly what I'm supposed to do with a > > function. > > > > Doing one module seems like a good start and hopefully we can have some > > other people begin to add their own examples too. > > > > Is this a worthwhile contribution? 
I haven't contributed before and so I > > think it's prudent to ask before I add something no one wants. > > > > Are there any examples of modules with good code examples that I should > use > > as a reference? I want to include both the code and output of the > example as > > if the user was running ghci. Are there any guidelines for contributing > > documentation? > > > > When I say Data.List, I really mean Data.Foldable and Data.Traversable > since > > that is where the functions are actually implemented. > > > > I noticed the GitHub repo said that Pull Requests were okay for easy to > > review documentation changes. Can I open a pull request there or should I > > follow another process? > > > > Please let me know when you can. I don't have an exact timeline for when > > this will be done, but hopefully I'll have something in the next few > weeks. > > I don't anticipate that it will take long once I sit down to do it. > > > > I've always complained about a lack of examples and never done anything > > about it. Hopefully I can practice what I preach and contribute some in > > order to make the documentation a little better for everyone. > > > > Thanks for helping to make this language so great! > > Sunjay > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lonetiger at gmail.com Thu Jan 5 01:34:40 2017 From: lonetiger at gmail.com (Phyx) Date: Thu, 05 Jan 2017 01:34:40 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <1483464029-sup-2265@sabre> Message-ID: In response to no. 2, I was looking at this myself since I think separate fields would definitely be useful. It seems you can customize the form design on Phabricator. I think it would definitely be worth it to make a new form layout resembling the layout of Trac. I don't know how it stores it, but presumably this would also improve the searching, allowing you to filter on specific fields instead of just tags. Have you looked into this already, Matt? On Tue, 3 Jan 2017, 23:33 Matthew Pickering, wrote: > Good comments Edward. I think the answers to all three of your points > will be insightful. > > 1. As for why it appears some information from the ticket is missing. > > The version field is not very useful as there are only two active > branches at once. Instead we want a project which marks tickets which > apply to the 8.0.2 branch for example. Tickets which refer to ancient > versions of GHC should be closed if they can't be reproduced with > HEAD. It is also not used very often, only updated on about 600 > tickets. > > A tag for the architecture field is only added if the architecture is > set to something other than the default. The ticket you linked had the > default value and in fact, architecture was irrelevant to the ticket > as it was about the type checker so it could be argued including this > option is confusing to the reporter. > > I'm also skeptical about how useful the component field is. I've never > personally used it and it is not accurate for the majority of tickets. > If someone disagrees then please say but it is really not used very > much. > > Finally, the related tickets look slightly off. > This is a recurring problem with Trac, there is no validation for > any of the text fields.
The parser I wrote assumed that tickets > started with a # but in this example the first related ticket > initially was hashless. There are many examples like this which I have > come across. In this case I think I can relax it to search for any > contiguous set of numbers and recover the correct information. > > The dump of the database which I had is about two months old which > explains why the most recent changes are not included. > > Here is how often each field has been updated. > > field | count > --------------+------- > _comment0 | 1839 > _comment1 | 364 > _comment10 | 1 > _comment11 | 1 > _comment12 | 1 > _comment13 | 1 > _comment14 | 1 > _comment15 | 1 > _comment16 | 1 > _comment2 | 123 > _comment3 | 53 > _comment4 | 23 > _comment5 | 13 > _comment6 | 4 > _comment7 | 4 > _comment8 | 3 > _comment9 | 1 > architecture | 2025 > blockedby | 1103 > blocking | 1112 > cc | 5358 > comment | 75967 > component | 1217 > description | 1919 > differential | 2410 > difficulty | 4968 > failure | 1427 > keywords | 833 > milestone | 13695 > os | 1964 > owner | 4870 > patch | 26 > priority | 2495 > related | 1869 > reporter | 7 > resolution | 10446 > severity | 83 > status | 14612 > summary | 827 > testcase | 2386 > type | 811 > version | 687 > wikipage | 873 > > 2. This is a legitimate point. I will say in response that the > majority of triaging is done by those who regularly use the bug > tracker. These people will know the correct tags to use. There are > occasionally people who submit tickets and ask questions about what to > fill in these fields or fill them in incorrectly. Removing them for a > uniform system is advantageous in my opinion. > > 3. Here is how the search works. It is context sensitive, if you > search from the home page then you search everything. If you search > after clicking on "maniphest" to enter the maniphest application it > will only search tickets. 
> > There is a similar advanced search for Maniphest - > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/advanced/ > Using this you can sort by priority and then by ticket number. Here is > an example searching the tickets with the PatternSynonyms tag - > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/query/WjCq6bD27idM/#R > > I agree with you about the default layout. I would also like a more > compact view but I feel this style is the prevailing modern web dev > trend. > > Thanks for your comments. > > Matt > > > On Tue, Jan 3, 2017 at 5:21 PM, Edward Z. Yang wrote: > > Hi Matthew, > > > > Thanks for doing the work for setting up this prototype, it definitely > > helps in making an informed decision about the switch. > > > > Some comments: > > > > 1. In your original email, you stated that many of the custom fields > > were going to be replaced with Phabricator "projects" (their > > equivalent of tags). First, I noticed a little trouble where > > some fields were just completely lost. Compare: > > > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T8095 > > https://ghc.haskell.org/trac/ghc/ticket/8095 > > > > AFAICT, we lost version the bug was found in, > > architecture, component. I hope we can keep these details > > in the real migration! Also related tickets doesn't seem > > to be working (see example above); migration problem? > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kazu at iij.ad.jp Thu Jan 5 01:51:34 2017 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 05 Jan 2017 10:51:34 +0900 (JST) Subject: Contributing Examples to the Documentation In-Reply-To: References: <20170105.073905.1610806035760344071.kazu@iij.ad.jp> Message-ID: <20170105.105134.1780648794399898422.kazu@iij.ad.jp> Hi Sunjay, > If I use the doctest style, do I use ">>>" for the prompt? That's more a > Python thing. Maybe ">" would be more appropriate for Haskell? You should use ">>>". If you don't know doctest for Haskell, please read: https://github.com/sol/doctest#readme The following document would also help: https://github.com/kazu-yamamoto/unit-test-example/blob/master/markdown/en/tutorial.md --Kazu From oleg.grenrus at iki.fi Thu Jan 5 08:06:43 2017 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Thu, 5 Jan 2017 10:06:43 +0200 Subject: Contributing Examples to the Documentation In-Reply-To: <874m1epjrd.fsf@ben-laptop.smart-cactus.org> References: <874m1epjrd.fsf@ben-laptop.smart-cactus.org> Message-ID: On 05.01.2017 00:01, Ben Gamari wrote: > Sunjay Varma writes: > >> Hi, >> I'm considering contributing examples to the documentation. I wanted to >> start with something like Data.List because it is one of the modules I end >> up using the most. I think a few examples for each function would help >> users understand them better. I find myself referring back to books like >> learn you a haskell because I don't remember exactly what I'm supposed to >> do with a function. >> >> Doing one module seems like a good start and hopefully we can have some >> other people begin to add their own examples too. >> >> Is this a worthwhile contribution? I haven't contributed before and so I >> think it's prudent to ask before I add something no one wants. >> > This would be amazing! 
Many people have remarked that GHC's library > documentation is in need of examples; we just need someone to step up > and start contributing patches. > >> Are there any examples of modules with good code examples that I should use >> as a reference? I want to include both the code and output of the example >> as if the user was running ghci. Are there any guidelines for contributing >> documentation? >> > I think Edward's lens library is probably a good example here. I know > of few good examples in GHC itself. And `lens` also uses `doctest` to actually verify the examples. Let's keep in mind that GHC might want to doctest examples too at some point. If any have ideas how that can happen (also having QuickCheck tests and doctests would be great too), I can try to implement that. >> When I say Data.List, I really mean Data.Foldable and Data.Traversable >> since that is where the functions are actually implemented. >> > These are great places to start. > >> I noticed the GitHub repo said that Pull Requests were okay for easy to >> review documentation changes. Can I open a pull request there or should I >> follow another process? >> > We prefer to take patches on Phabricator. However, to lower the barrier > to small patches like this I have suggested in the past that we accept > GitHub pull requests for small changes. > >> Please let me know when you can. I don't have an exact timeline for when >> this will be done, but hopefully I'll have something in the next few weeks. >> I don't anticipate that it will take long once I sit down to do it. >> > Great. Let us know if you encounter friction. > >> I've always complained about a lack of examples and never done anything >> about it. Hopefully I can practice what I preach and contribute some in >> order to make the documentation a little better for everyone. >> >> Thanks for helping to make this language so great! > And thanks to you for helping as well! 
> > Cheers, > > - Ben > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From oleg.grenrus at iki.fi Thu Jan 5 11:22:56 2017 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Thu, 5 Jan 2017 13:22:56 +0200 Subject: Contributing Examples to the Documentation In-Reply-To: <20170105.105134.1780648794399898422.kazu@iij.ad.jp> References: <20170105.073905.1610806035760344071.kazu@iij.ad.jp> <20170105.105134.1780648794399898422.kazu@iij.ad.jp> Message-ID: <1321efa8-b705-9c82-3eaa-d9a1324290dd@iki.fi> See e.g. https://github.com/ghc/ghc/blob/baf9ebe55a51827c0511b3a670e60b9bb3617ab5/libraries/base/Data/Maybe.hs#L84-L101 for an example already in `base`. Relevant ticket: https://ghc.haskell.org/trac/ghc/ticket/11551 - Oleg On 05.01.2017 03:51, Kazu Yamamoto (山本和彦) wrote: > Hi Sunjay, > >> If I use the doctest style, do I use ">>>" for the prompt? That's more a >> Python thing. Maybe ">" would be more appropriate for Haskell? > You should use ">>>". > > If you don't know doctest for Haskell, please read: > > https://github.com/sol/doctest#readme > > The following document would also help: > > https://github.com/kazu-yamamoto/unit-test-example/blob/master/markdown/en/tutorial.md > > --Kazu > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From rwbarton at gmail.com Thu Jan 5 15:26:35 2017 From: rwbarton at gmail.com (Reid Barton) Date: Thu, 5 Jan 2017 10:26:35 -0500 Subject: Large tuple strategy Message-ID: Hi all, https://phabricator.haskell.org/D2899 proposes adding Generic instances for large tuples (up to size 62). Currently GHC only provides Generic instances for tuples of size up to 7. There's been some concern about the effect that all these instances will have on compilation time for anyone who uses Generics, even if they don't actually use the new instances. There was a suggestion to move these new instances to a separate module, but as these instances would then be orphans, I believe GHC would have to read the interface file for that module anyways once Generic comes into scope, which would defeat the purpose of the split. It occurred to me that rather than moving just these instances to a new module, we could move the large tuples themselves to a new module Data.LargeTuple and put the instances there. The Prelude would reexport the large tuples, so there would be no user-visible change. According to my experiments, GHC should never have to read the Data.LargeTuple interface file unless a program actually mentions a large tuple type, which is presumably rare. We could then also extend the existing instances for Eq, Show, etc., which are currently only provided through 15-tuples. A nontrivial aspect of this change is that tuples are wired-in types, and they currently all live in the ghc-prim package. I'm actually not sure why they need to be wired-in rather than ordinary types with a funny-looking name. In any case I need to look into this further, but the difficulties here don't seem to be insurmountable. Does this seem like a reasonable plan? Anything important I have missed? 
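(A toy illustration of the non-orphan layout sketched above; `BigPair` is a hypothetical stand-in for a large tuple, not anything proposed here. Because the type and its instances are defined in the same module, none of the instances are orphans, so an importer only pays for that module's interface file if it actually mentions the type. The wired-in status of real tuples is the complication this sketch glosses over.)

```haskell
{-# LANGUAGE DeriveGeneric #-}

import GHC.Generics (Generic)

-- Imagine this module as Data.LargeTuple: the type and every
-- instance for it live together, so the instances are not orphans
-- and GHC need not eagerly read this interface elsewhere.
data BigPair a b = BigPair a b
  deriving (Show, Eq, Generic)
```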
Regards, Reid Barton From simonpj at microsoft.com Thu Jan 5 15:28:26 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 5 Jan 2017 15:28:26 +0000 Subject: Large tuple strategy In-Reply-To: References: Message-ID: | It occurred to me that rather than moving just these instances to a new | module, we could move the large tuples themselves to a new module | Data.LargeTuple and put the instances there. Yes, that's what I intended to suggest. Good plan. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Reid | Barton | Sent: 05 January 2017 15:27 | To: ghc-devs at haskell.org | Subject: Large tuple strategy | | Hi all, | | https://phabricator.haskell.org/D2899 proposes adding Generic instances for | large tuples (up to size 62). Currently GHC only provides Generic instances | for tuples of size up to 7. There's been some concern about the effect that | all these instances will have on compilation time for anyone who uses | Generics, even if they don't actually use the new instances. | | There was a suggestion to move these new instances to a separate module, but | as these instances would then be orphans, I believe GHC would have to read | the interface file for that module anyways once Generic comes into scope, | which would defeat the purpose of the split. | | It occurred to me that rather than moving just these instances to a new | module, we could move the large tuples themselves to a new module | Data.LargeTuple and put the instances there. The Prelude would reexport the | large tuples, so there would be no user-visible change. | According to my experiments, GHC should never have to read the | Data.LargeTuple interface file unless a program actually mentions a large | tuple type, which is presumably rare. We could then also extend the existing | instances for Eq, Show, etc., which are currently only provided through 15- | tuples. 
| | A nontrivial aspect of this change is that tuples are wired-in types, and | they currently all live in the ghc-prim package. I'm actually not sure why | they need to be wired-in rather than ordinary types with a funny-looking | name. In any case I need to look into this further, but the difficulties | here don't seem to be insurmountable. | | Does this seem like a reasonable plan? Anything important I have missed? | | Regards, | Reid Barton | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd3262b1204df407f65ce08d4357f4b | d8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636192268229980651&sdata=NV% | 2BaCgD7Xo5EIwuL5cZaTrCBHihNxAiZvT0VCNQl6Z8%3D&reserved=0 From rwbarton at gmail.com Thu Jan 5 15:44:12 2017 From: rwbarton at gmail.com (Reid Barton) Date: Thu, 5 Jan 2017 10:44:12 -0500 Subject: Large tuple strategy In-Reply-To: References: Message-ID: OK, I filed https://ghc.haskell.org/trac/ghc/ticket/13072 for this. Regards, Reid Barton On Thu, Jan 5, 2017 at 10:28 AM, Simon Peyton Jones wrote: > | It occurred to me that rather than moving just these instances to a new > | module, we could move the large tuples themselves to a new module > | Data.LargeTuple and put the instances there. > > Yes, that's what I intended to suggest. Good plan. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Reid > | Barton > | Sent: 05 January 2017 15:27 > | To: ghc-devs at haskell.org > | Subject: Large tuple strategy > | > | Hi all, > | > | https://phabricator.haskell.org/D2899 proposes adding Generic instances for > | large tuples (up to size 62). Currently GHC only provides Generic instances > | for tuples of size up to 7. 
There's been some concern about the effect that > | all these instances will have on compilation time for anyone who uses > | Generics, even if they don't actually use the new instances. > | > | There was a suggestion to move these new instances to a separate module, but > | as these instances would then be orphans, I believe GHC would have to read > | the interface file for that module anyways once Generic comes into scope, > | which would defeat the purpose of the split. > | > | It occurred to me that rather than moving just these instances to a new > | module, we could move the large tuples themselves to a new module > | Data.LargeTuple and put the instances there. The Prelude would reexport the > | large tuples, so there would be no user-visible change. > | According to my experiments, GHC should never have to read the > | Data.LargeTuple interface file unless a program actually mentions a large > | tuple type, which is presumably rare. We could then also extend the existing > | instances for Eq, Show, etc., which are currently only provided through 15- > | tuples. > | > | A nontrivial aspect of this change is that tuples are wired-in types, and > | they currently all live in the ghc-prim package. I'm actually not sure why > | they need to be wired-in rather than ordinary types with a funny-looking > | name. In any case I need to look into this further, but the difficulties > | here don't seem to be insurmountable. > | > | Does this seem like a reasonable plan? Anything important I have missed? 
> | > | Regards, > | Reid Barton > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell > | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cd3262b1204df407f65ce08d4357f4b > | d8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636192268229980651&sdata=NV% > | 2BaCgD7Xo5EIwuL5cZaTrCBHihNxAiZvT0VCNQl6Z8%3D&reserved=0 From ben at smart-cactus.org Thu Jan 5 16:18:42 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 05 Jan 2017 11:18:42 -0500 Subject: Large tuple strategy In-Reply-To: References: Message-ID: <87shoxo4yl.fsf@ben-laptop.smart-cactus.org> Reid Barton writes: > Hi all, > > https://phabricator.haskell.org/D2899 proposes adding Generic > instances for large tuples (up to size 62). Currently GHC only > provides Generic instances for tuples of size up to 7. There's been > some concern about the effect that all these instances will have on > compilation time for anyone who uses Generics, even if they don't > actually use the new instances. > > There was a suggestion to move these new instances to a separate > module, but as these instances would then be orphans, I believe GHC > would have to read the interface file for that module anyways once > Generic comes into scope, which would defeat the purpose of the split. > > It occurred to me that rather than moving just these instances to a > new module, we could move the large tuples themselves to a new module > Data.LargeTuple and put the instances there. The Prelude would > reexport the large tuples, so there would be no user-visible change. > According to my experiments, GHC should never have to read the > Data.LargeTuple interface file unless a program actually mentions a > large tuple type, which is presumably rare. 
We could then also extend > the existing instances for Eq, Show, etc., which are currently only > provided through 15-tuples. > Good catch Reid! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Jan 5 16:44:40 2017 From: ben at well-typed.com (Ben Gamari) Date: Thu, 05 Jan 2017 11:44:40 -0500 Subject: Backpack status for 8.2 Message-ID: <87r34ho3rb.fsf@ben-laptop.smart-cactus.org> Hi Edward, I am currently trying to get an idea of what work remains outstanding for the 8.2.1. How is Backpack coming along? Would a freeze around the end of January be enough time to get most of the larger items ticked off? It would be great if there were a list (perhaps a tracking ticket on Trac?) where we could collect and track the progress of the remaining tasks. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Jan 5 18:17:23 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 05 Jan 2017 13:17:23 -0500 Subject: OpenSearch with GHC manual In-Reply-To: References: Message-ID: <87d1g1nzgs.fsf@ben-laptop.smart-cactus.org> Sylvain Henry writes: > Hi, > > Search engines often reference old versions of the GHC user guide. For > instance with Google and the request "ghc unboxed tuples" I get the > manual for 7.0.3, 5.04.1 and 6.8.2 as first results. With DuckDuckGo I > get 6.12.3 and then "latest" versions of the manual. > > So I have made a custom search engine for the latest manual (using > OpenSearch spec). You can install it from the following page: > http://haskus.fr/ghc/index.html like any other search engine. 
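(For context, the Sphinx side of this comes down to a single setting in the manual's conf.py; the value below is illustrative, not the real deployment URL.)

```python
# conf.py (illustrative): when this is set, "make html" also emits an
# opensearch.xml description that points browser searches at the
# published copy of the docs.
html_use_opensearch = 'https://downloads.haskell.org/~ghc/latest/docs/html/users_guide'
```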
> > Sphinx supports automatic generation of OpenSearch spec: > http://www.sphinx-doc.org/en/1.4.8/config.html#confval-html_use_opensearch > Maybe we should use this to make the search engine easier to find and use. > I've opened a differential (D2921) which enables opensearch support for the users guide. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Jan 5 19:48:22 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 05 Jan 2017 14:48:22 -0500 Subject: Github repos for boot libraries In-Reply-To: References: <20170103054503.4561c40ab79dc07a109c366e@mega-nerd.com> Message-ID: <87a8b5nv95.fsf@ben-laptop.smart-cactus.org> Edward Kmett writes: > For reference, the master repository for transformers is at > > http://hub.darcs.net/ross/transformers > > We should probably edit the 'website' link for that github repository to at > least point there. > > I don't have access to do so, however. > > Subtly pinging Herbert, by adding him here. =) > I have fixed the GitHub repo URL. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Fri Jan 6 17:47:36 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 6 Jan 2017 17:47:36 +0000 Subject: [commit: ghc] master: Actually add the right file for T13035 stderr (54227a4) In-Reply-To: <20170106171732.67D7C3A300@ghc.haskell.org> References: <20170106171732.67D7C3A300@ghc.haskell.org> Message-ID: Thanks. 
Sorry about failing to add that file simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 06 January 2017 17:18 | To: ghc-commits at haskell.org | Subject: [commit: ghc] master: Actually add the right file for T13035 stderr | (54227a4) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : master | Link : | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.haskell. | org%2Ftrac%2Fghc%2Fchangeset%2F54227a45352903e951b81153f798162264f02ad9%2Fgh | c&data=02%7C01%7Csimonpj%40microsoft.com%7C86bd120363964482265708d43657ea8d% | 7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636193198616484576&sdata=rsqX9O | 5K9ga2Aym%2BF%2BsWCA1vpaD7yJRDoWDfxPbCvVk%3D&reserved=0 | | >--------------------------------------------------------------- | | commit 54227a45352903e951b81153f798162264f02ad9 | Author: Matthew Pickering | Date: Fri Jan 6 17:15:35 2017 +0000 | | Actually add the right file for T13035 stderr | | | >--------------------------------------------------------------- | | 54227a45352903e951b81153f798162264f02ad9 | testsuite/tests/perf/compiler/T13035.stderr | 5 ++++- | 1 file changed, 4 insertions(+), 1 deletion(-) | | diff --git a/testsuite/tests/perf/compiler/T13035.stderr | b/testsuite/tests/perf/compiler/T13035.stderr | index ae02c1f..52836d7 100644 | --- a/testsuite/tests/perf/compiler/T13035.stderr | +++ b/testsuite/tests/perf/compiler/T13035.stderr | @@ -1 +1,4 @@ | -compilation IS NOT required | + | +T13035.hs:141:28: warning: [-Wpartial-type-signatures (in -Wdefault)] | + • Found type wildcard ‘_’ standing for ‘'['Author]’ | + • In the type signature: g :: MyRec RecipeFormatter _ | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | 
commits&data=02%7C01%7Csimonpj%40microsoft.com%7C86bd120363964482265708d4365 | 7ea8d%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636193198616484576&sdata= | zDIxHu4g5rU8jgEedgGvPWHoM8jRG%2FaQCKrHrAgvIcs%3D&reserved=0 From lonetiger at gmail.com Fri Jan 6 22:27:54 2017 From: lonetiger at gmail.com (Phyx) Date: Fri, 06 Jan 2017 22:27:54 +0000 Subject: [GHC] #13035: GHC enters a loop when partial type signatures and advanced type level code mix In-Reply-To: <058.09a4b53d566c6b1bc354273f68166c88@haskell.org> References: <043.73bcb566c7aa5daa0f1f227af5f74322@haskell.org> <058.09a4b53d566c6b1bc354273f68166c88@haskell.org> Message-ID: Ah, great thanks! On Fri, 6 Jan 2017, 20:32 GHC, wrote: > #13035: GHC enters a loop when partial type signatures and advanced type > level code > mix > -------------------------------------+------------------------------------- > Reporter: xcmw | Owner: > Type: bug | Status: merge > Priority: normal | Milestone: 8.0.3 > Component: Compiler | Version: 8.0.1 > Resolution: | Keywords: > Operating System: Unknown/Multiple | Architecture: > | Unknown/Multiple > Type of failure: None/Unknown | Test Case: > | perf/compiler/T13035 > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): > Wiki Page: | > -------------------------------------+------------------------------------- > > Comment (by RyanGlScott): > > I believe mpickering fixed in in f3c7cf9b89cad7f326682b23d9f3908ebf0f8f9d > and 54227a45352903e951b81153f798162264f02ad9. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentm at themactionfaction.com Sat Jan 7 03:00:49 2017 From: agentm at themactionfaction.com (A.M.) 
Date: Fri, 6 Jan 2017 22:00:49 -0500
Subject: GHC API 7.10 -> 8.0.1 (unusable due to missing or recursive dependencies)
Message-ID:

Hello,

I am migrating a use of the GHC API from GHC 7.10 to GHC 8.0.1 and have hit an issue whereby importing pre-compiled libraries spits out:

package testhsscript-0.1-CQQKGzp5pWwBmTnAldv1Hk is unusable due to missing or recursive dependencies: Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu base-4.9.0.0

though these packages are indeed in the sandbox and listed with ExposePackage in the packageFlags. To be clear, a variant of this code worked as expected under GHC 7.10.

I have created a simplified test case to demonstrate the issue:

https://github.com/agentm/testhsscript

Here is the meat of the problem:

* lib/Test.hs - contains a module importing Glob with some functions of no consequence https://github.com/agentm/testhsscript/blob/master/lib/Test.hs

* test.hs includes the GHC API call to load the precompiled Test module from the cabal sandbox https://github.com/agentm/testhsscript/blob/master/test.hs

* cabal file has two targets: Library and executable "test" - the executable uses the GHC API to load the package "testhsscript" https://github.com/agentm/testhsscript/blob/master/testhsscript.cabal

For reference, here is the GHC API invocation which works under GHC 7.10:

https://github.com/agentm/project-m36/blob/master/src/lib/ProjectM36/AtomFunctionBody.hs#L31

I suspect I am missing something basic that has changed in GHC 8 and would very much appreciate any tips.
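The cascade in an error like the one above can be pictured with a small model: a package is usable only when it is visible in some database and every one of its dependencies is itself usable, so a single unresolvable dependency (here `base`) marks everything that transitively depends on it unusable. The sketch below is only an illustration of that closure check, not GHC's actual code; the package names echo the error message and the dependency lists are invented for the example.

```haskell
import qualified Data.Map as M

-- Toy model of the "unusable due to missing or recursive dependencies"
-- check: a package is usable only if it is visible and every dependency
-- is itself usable. (Illustration only; simplified dependency lists.)
type Visible = M.Map String [String]

usable :: Visible -> String -> Bool
usable visible = go []
  where
    go seen pkg
      | pkg `elem` seen = False                  -- recursive dependency
      | otherwise = case M.lookup pkg visible of
          Nothing   -> False                     -- missing dependency
          Just deps -> all (go (pkg : seen)) deps

main :: IO ()
main = do
  let broken = M.fromList [ ("testhsscript", ["Glob", "base"])
                          , ("Glob",         ["base"]) ]  -- "base" not visible
      fixed  = M.insert "base" [] broken
  print (usable broken "testhsscript")  -- False: base cannot be resolved
  print (usable fixed  "testhsscript")  -- True once base is visible
```

Note how one invisible package (`base`) is enough to make both `Glob` and `testhsscript` unusable, which matches the shape of the error output.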
Here is the failing result of attempting to load the testhsscript package: Setting up HscEnv Using binary package database: /home/agentm/Dev/testhsscript/.cabal-sandbox/x86_64-linux-ghc-8.0.1-packages.conf.d/package.cache Using binary package database: /opt/ghc/8.0.1/lib/ghc-8.0.1/package.conf.d/package.cache loading package database /home/agentm/Dev/testhsscript/.cabal-sandbox/x86_64-linux-ghc-8.0.1-packages.conf.d package testhsscript-0.1-CQQKGzp5pWwBmTnAldv1Hk is unusable due to missing or recursive dependencies: Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu base-4.9.0.0 package Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu is unusable due to missing or recursive dependencies: base-4.9.0.0 containers-0.5.7.1 directory-1.2.6.2 dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv filepath-1.4.1.0 transformers-0.5.2.0 transformers-compat-0.5.1.4-G5tKvPrwhggJRvSwXNMs1N package ghc-paths-0.1.0.9-GIOnKzk0HmEBZ77Q1HsThK is unusable due to missing or recursive dependencies: base-4.9.0.0 package mtl-2.2.1-6qsR1PHUy5lL47Hpoa4jCM is unusable due to missing or recursive dependencies: base-4.9.0.0 transformers-0.5.2.0 package dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv is unusable due to missing or recursive dependencies: base-4.9.0.0 deepseq-1.4.2.0 package transformers-compat-0.5.1.4-G5tKvPrwhggJRvSwXNMs1N is unusable due to missing or recursive dependencies: base-4.9.0.0 ghc-prim-0.5.0.0 transformers-0.5.2.0 loading package database /opt/ghc/8.0.1/lib/ghc-8.0.1/package.conf.d test: : cannot satisfy dlist: dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv is unusable due to missing or recursive dependencies: base-4.9.0.0 deepseq-1.4.2.0 (use -v for more information) Cheers, M -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From ezyang at mit.edu Sat Jan 7 03:37:35 2017 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 06 Jan 2017 22:37:35 -0500 Subject: GHC API 7.10 -> 8.0.1 (unusable due to missing or recursive dependencies) In-Reply-To: References: Message-ID: <1483760207-sup-1296@sabre> Hello A.M., In 8.0.1 package databases must be specified in the correct order, whereas in 7.10 they could be done in any order. This problem was fixed in 8.0.2, give it a try. Edward Excerpts from A.M.'s message of 2017-01-06 22:00:49 -0500: > Hello, > > I am migrating a use of the GHC API from GHC 7.10 to GHC 8.0.1 and have > hit an issue whereby importing pre-compiled libraries spits out: > > package testhsscript-0.1-CQQKGzp5pWwBmTnAldv1Hk is unusable due to > missing or recursive dependencies: > Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu base-4.9.0.0 > > though these packages are indeed in the sandbox and listed with > ExposePackage in the packageFlags. To be clear, a variant of this code > worked as expected under GHC 7.10. > > I have created a simplified test case to demonstrate the issue: > > https://github.com/agentm/testhsscript > > Here is the meat of the problem: > > * lib/Test.hs - contains a module importing Glob with some functions of > no consequence > https://github.com/agentm/testhsscript/blob/master/lib/Test.hs > > * test.hs includes the GHC API call to load the precompiled Test module > from the cabal sandbox > https://github.com/agentm/testhsscript/blob/master/test.hs > > * cabal file has two targets: Library and executable "test"- the > executable uses the GHC API to load the pacakge "testhsscript" > https://github.com/agentm/testhsscript/blob/master/testhsscript.cabal > > For reference, here is the GHC API invocation which works under GHC 7.10: > > https://github.com/agentm/project-m36/blob/master/src/lib/ProjectM36/AtomFunctionBody.hs#L31 > > I suspect I am missing something basic that has changed in GHC 8 and > would very much appreciate any tips. 
> > Here is the failing result of attempting to load the testhsscript package: > > Setting up HscEnv > Using binary package database: > /home/agentm/Dev/testhsscript/.cabal-sandbox/x86_64-linux-ghc-8.0.1-packages.conf.d/package.cache > Using binary package database: > /opt/ghc/8.0.1/lib/ghc-8.0.1/package.conf.d/package.cache > loading package database > /home/agentm/Dev/testhsscript/.cabal-sandbox/x86_64-linux-ghc-8.0.1-packages.conf.d > package testhsscript-0.1-CQQKGzp5pWwBmTnAldv1Hk is unusable due to > missing or recursive dependencies: > Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu base-4.9.0.0 > package Glob-0.7.13-2PN6d9dpHzz7DHotD0T0wu is unusable due to missing or > recursive dependencies: > base-4.9.0.0 containers-0.5.7.1 directory-1.2.6.2 > dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv filepath-1.4.1.0 > transformers-0.5.2.0 transformers-compat-0.5.1.4-G5tKvPrwhggJRvSwXNMs1N > package ghc-paths-0.1.0.9-GIOnKzk0HmEBZ77Q1HsThK is unusable due to > missing or recursive dependencies: > base-4.9.0.0 > package mtl-2.2.1-6qsR1PHUy5lL47Hpoa4jCM is unusable due to missing or > recursive dependencies: > base-4.9.0.0 transformers-0.5.2.0 > package dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv is unusable due to missing > or recursive dependencies: > base-4.9.0.0 deepseq-1.4.2.0 > package transformers-compat-0.5.1.4-G5tKvPrwhggJRvSwXNMs1N is unusable > due to missing or recursive dependencies: > base-4.9.0.0 ghc-prim-0.5.0.0 transformers-0.5.2.0 > loading package database /opt/ghc/8.0.1/lib/ghc-8.0.1/package.conf.d > test: : cannot satisfy dlist: > dlist-0.8.0.2-GWAMmbX9rLg3tqrbOizHGv is unusable due to missing or > recursive dependencies: > base-4.9.0.0 deepseq-1.4.2.0 > (use -v for more information) > > > > Cheers, > M From agentm at themactionfaction.com Sat Jan 7 15:54:56 2017 From: agentm at themactionfaction.com (A.M.) 
Date: Sat, 7 Jan 2017 10:54:56 -0500
Subject: GHC API 7.10 -> 8.0.1 (unusable due to missing or recursive dependencies)
In-Reply-To: <1483760207-sup-1296@sabre>
References: <1483760207-sup-1296@sabre>
Message-ID:

On 01/06/2017 10:37 PM, Edward Z. Yang wrote:
> Hello A.M.,
>
> In 8.0.1 package databases must be specified in the correct order,
> whereas in 7.10 they could be done in any order. This problem
> was fixed in 8.0.2, give it a try.

Thanks for the tip, Edward. That was it!

Following https://ghc.haskell.org/trac/ghc/ticket/12485 to https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/packages.html#package-databases, I read:

"By default, the stack contains just the global and the user’s package databases, in that order."

However, the order that works for me is:

extraPkgConfs = const (localPkgPaths ++ [GlobalPkgConf])

so is the description flipped, or is the stack head at the end of the list? In any case, it's rather confusing, though I recognize that the above sentence is not referring to the GHC API in particular.

I now see that the wrong ordering caused me to be unable to load *any* packages from my sandbox.

Thanks for the fix!

Cheers,
M

-------------- next part --------------
A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL:

From ezyang at mit.edu Sat Jan 7 16:38:08 2017
From: ezyang at mit.edu (Edward Z. Yang)
Date: Sat, 07 Jan 2017 11:38:08 -0500
Subject: GHC API 7.10 -> 8.0.1 (unusable due to missing or recursive dependencies)
In-Reply-To:
References: <1483760207-sup-1296@sabre>
Message-ID: <1483807023-sup-6678@sabre>

Hi A.M.,

It's very possible that the list in DynFlags is accumulated in reverse order to the flags that were provided. The manual is just talking about user-provided flags.
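A common reason such a list comes out reversed relative to the order in which the items were written is that each parsed item is consed onto an accumulator. The sketch below only illustrates that general pattern — it is not GHC's actual flag parser — but it would explain why an API caller ends up supplying the databases in the opposite order from the user-facing description.

```haskell
-- If each item is consed onto an accumulator as it is parsed (a common
-- pattern in flag parsers), the final list is in reverse order of how
-- the items were supplied. Illustration only, not GHC's parser.
accumulate :: [String] -> [String]
accumulate = foldl (flip (:)) []

main :: IO ()
main =
  -- databases written "global first, then the local ones", as the
  -- manual describes the default stack
  print (accumulate ["global", "user", "sandbox"])
  -- prints ["sandbox","user","global"]: the head of the final list is
  -- whichever database was added last
```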
You may also be interested in https://downloads.haskell.org/~ghc/master/users-guide/extending_ghc.html#frontend-plugins Edward Excerpts from A.M.'s message of 2017-01-07 10:54:56 -0500: > On 01/06/2017 10:37 PM, Edward Z. Yang wrote: > > Hello A.M., > > > > In 8.0.1 package databases must be specified in the correct order, > > whereas in 7.10 they could be done in any order. This problem > > was fixed in 8.0.2, give it a try. > > Thanks for the tip, Edward. That was it! > > Following https://ghc.haskell.org/trac/ghc/ticket/12485 to > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/packages.html#package-databases, > I read: > > "By default, the stack contains just the global and the user’s package > databases, in that order." > > However, the order that works for me is: > > extraPkgConfs = const (localPkgPaths ++ [GlobalPkgConf]) > > so is the description flipped or is the stack head at the end of the > list? In any case, it's rather confusing, though I recognize that the > above sentence is not referring to the GHC API in particular. > > I now see that the wrong ordering caused me to be unable to load *any* > packages from my sandbox. > > Thanks for the fix! > > Cheers, > M From ben at well-typed.com Sat Jan 7 17:07:03 2017 From: ben at well-typed.com (Ben Gamari) Date: Sat, 07 Jan 2017 12:07:03 -0500 Subject: Phabricator upgrade tomorrow Message-ID: <87eg0en6iw.fsf@ben-laptop.smart-cactus.org> Hello everyone! Currently our Phabricator installation is quite old, based on a commit from last July. Given that I have a bit of breathing room now between 8.0.2 and 8.2.1, I'd like to take this opportunity to do an upgrade tomorrow if no one objects. Given that this is the first time I have attempted an upgrade, I expect this may take a few hours in the middle of the day EST. I would expect that the GHC Phabricator instance would be down for much of this time. Let me know if this will cause you undue burden. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From harendra.kumar at gmail.com Sat Jan 7 22:22:47 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 8 Jan 2017 03:52:47 +0530 Subject: Lexical error in string continuation Message-ID: Hi devs, I am making a change in runghc on the ghc master branch. When compiling the following code (edited/new code in utils/runghc): 208 splitGhcNonGhcArgs :: [String] -> IO ([String], [String]) 209 splitGhcNonGhcArgs args = do 210 let (ghc, other) = break notAFlag args 211 when (hasUnescapedGhcArgs ghc) $ 212 hPutStrLn stderr "yy\ 213 \ xx" I get an error because of the string continuation at line 212. If I put the backslashes on the same line I do not get any error. I have more string continuations in the same file and they all work fine. This snippet works fine with ghc-7.10.3/ghc-8.0.1 when compiled separately. Here is the error message that I get: utils/runghc/Main.hs:212:56: error: lexical error in string/character literal at character 'x' | 212 | hPutStrLn stderr "yy\ | ^ utils/runghc/ghc.mk:30: recipe for target 'utils/runghc/dist-install/build/Main.dyn_o' failed make[2]: *** [utils/runghc/dist-install/build/Main.dyn_o] Error 1 Makefile:122: recipe for target 'all_utils/runghc' failed make[1]: *** [all_utils/runghc] Error 2 make[1]: Leaving directory '/vol/hosts/cueball/workspace/play/ghc' ../../mk/sub-makefile.mk:50: recipe for target 'all' failed make: *** [all] Error 2 Any help will be appreciated. I can send the modified file if anyone wants to reproduce/debug. -harendra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From harendra.kumar at gmail.com Sat Jan 7 22:45:52 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 8 Jan 2017 04:15:52 +0530 Subject: Lexical error in string continuation In-Reply-To: References: Message-ID: Ah, it looks like it is because of the pre-processor. This file has CPP enabled: {-# LANGUAGE CPP #-} -harendra On 8 January 2017 at 03:52, Harendra Kumar wrote: > Hi devs, > > I am making a change in runghc on the ghc master branch. When compiling > the following code (edited/new code in utils/runghc): > > 208 splitGhcNonGhcArgs :: [String] -> IO ([String], [String]) > 209 splitGhcNonGhcArgs args = do > 210 let (ghc, other) = break notAFlag args > 211 when (hasUnescapedGhcArgs ghc) $ > 212 hPutStrLn stderr "yy\ > 213 \ xx" > > I get an error because of the string continuation at line 212. If I put > the backslashes on the same line I do not get any error. I have more string > continuations in the same file and they all work fine. This snippet works > fine with ghc-7.10.3/ghc-8.0.1 when compiled separately. Here is the error > message that I get: > > utils/runghc/Main.hs:212:56: error: > > lexical error in string/character literal at character 'x' > > | > > 212 | hPutStrLn stderr "yy\ > > | ^ > > utils/runghc/ghc.mk:30: recipe for target 'utils/runghc/dist-install/build/Main.dyn_o' > failed > > make[2]: *** [utils/runghc/dist-install/build/Main.dyn_o] Error 1 > > Makefile:122: recipe for target 'all_utils/runghc' failed > > make[1]: *** [all_utils/runghc] Error 2 > > make[1]: Leaving directory '/vol/hosts/cueball/workspace/play/ghc' > > ../../mk/sub-makefile.mk:50: recipe for target 'all' failed > > make: *** [all] Error 2 > > > Any help will be appreciated. I can send the modified file if anyone wants > to reproduce/debug. > > -harendra > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Sun Jan 8 05:40:02 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 08 Jan 2017 00:40:02 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Matthew Pickering writes: > Dear devs, > Hi Matthew and Dan, First, thanks for your work on this; it is an impressive effort. Reconstructing a decade of tickets with broken markup, tricky syntax, and a strange data model is no easy task. Good work so far! On the whole I am pleasantly surprised by how nicely Maniphest seems to hang together. I have pasted my notes from my own reflection on the pros and cons of both systems below. On re-reading them, it seems clear that Trac does leave us with a number of issues which Phabricator may resolve. As I've expressed in the past, I think we should consider preservation of ticket numbers to be an absolute requirement of any migration. To do otherwise imposes a small cost on a large number of people for a very long time with little benefit. GHC infrastructure exists to support GHC, not the other way around. However, ticket numbers notwithstanding I think I would be fine moving in this direction if the community agrees that this is the direction we want to go in. There are a few questions that remain outstanding, however: What do others think of this? ============================= Does Phabricator address the concerns that others, including those outside of the usual GHC development community, have raised about our issue tracking in the past? It would be interesting to hear some outside voices. How do we handle the stable branch? =================================== One important role that the issue tracker plays is ensuring that patches are backported to the stable branch when appropriate. 
In Trac we handle this with a "merge" state (indicating that the ticket has been fixed in `master`, but the fix needs to be backported) and the milestone field (to indicate which release we want to backport to). A significant amount of GHC's maintenance load is spent backporting and updating tickets, so it's important that this process works smoothly and reliably.

I think this may be an area where Phabricator could improve the status quo, since the workflow currently looks something like this:

1. Someone merges a patch to `master` fixing an issue; if the commit message mentions the ticket number then our infrastructure automatically leaves a comment referencing the commit on the ticket.

2. Someone (usually the committer) places the ticket in `merge` state and sets the milestone appropriately

3. I merge the patch to the stable branch

4. I close the ticket and manually leave a comment mentioning the SHA of the backported commit.

In particular (4) is more work than it needs to be; ideally comment generation would be automated as it is for commits to `master`, but Trac's comment-on-commit functionality is a bit limited, so this is currently not an option.

I'm not sure what Phabricator's analogous workflow to the above might look like. It seems that Phabricator's Releeph module may be in part intended for this use-case, but it seems to have some unfortunate limitations (e.g. the inflexibility in branch naming) that make it hard to imagine it being usable in our case. Setting aside Releeph, perhaps the best solution would be to continue with our current workflow: we would retain the "status" state and milestone projects would take the place of the current "milestone" field. If I'm not mistaken, Phabricator can be configured to mention commits on stable branches in the ticket history, so this should help with point (4).

Which fields should be preserved?
=================================

Our Trac instance associates a lot of structured metadata with each ticket.
I generally think that this is a good thing, especially compared to the everything-is-a-tag model, which I have found can quickly become unmaintainable. Unfortunately, Trac makes our users pay for these fields with a confusing ticket form. It appears that Phabricator's Transactions [1] module may allow us to have our cake and eat it too: we can define one form for users to create tickets and another, more complete form for developer use. In light of this I don't see why we would need to fall back to the everything-is-a-tag model. Matthew, what did you feel was less-than-satisfactory about the proper-fields approach?

I fear that relevant metadata like GHC version, operating system and architecture simply won't be provided unless the user is explicitly prompted; I personally find the cue to be quite helpful. Presumably contributors can set Herald rules for notification on these fields if they so desire.

In the particular case of the "Component" field, I personally try to set this correctly when possible and have certainly found it useful as a search criterion. However, I suspect it would be fine as a tag. I also know that Simon PJ is quite fond of the test case field (although few others are as diligent in keeping this one up to date).

How would we migrate and what will become of Trac?
==================================================

The mechanics of migration will take time and effort to work out. If we decide this is the right direction I think we should be cautious in setting timelines; we should take as much time as we need to do it correctly. Regardless, we should gather a consensus on the general direction before we start hashing this out.

Thanks again for your effort on this, Matthew and Dan, and sorry it took me so long to finally get these notes out.

Cheers,
- Ben

[1] https://secure.phabricator.com/book/phabricator/article/forms/

Notes
=====

These were largely for my sake to keep track of the pros and cons of the two options.
I've nevertheless included them here for completeness.

What does Maniphest do well?
----------------------------

* Actively developed: Phabricator will continue to improve in the future.

* Metadata: Custom fields are supported.

* Flexible user interface: Custom fields can be hidden from the new ticket form to prevent user confusion.

* Familiarity: Many users may feel more at home in Phabricator's interface; reMarkup's similarity to Markdown also helps.

* Integration: Having Phabricator handle code review, release management, and issue tracking will hopefully reduce maintenance workload.

* Notifications: Herald's rule-based notifications are quite handy.

What does Maniphest do poorly?
------------------------------

* Flexibility of search: The search options feel a bit more limiting than Trac's; in particular, the ability to show arbitrary columns in search results seems conspicuously missing.

* Legibility: This is admittedly to some extent a matter of aesthetics, but the search results list feels very busy and is quite difficult to quickly scan. This is exacerbated by the fact that some aspects of the color scheme are quite low contrast (e.g. grey against white for closed tickets). This hurts quite a bit since a number of contributors spend a significant amount of time looking through lists of tickets. Perhaps we could convince the Phacility people to provide a more legible, compact presentation option.

What does Trac do well?
-----------------------

* Convenient cross-referencing: while the syntax is a bit odd, once you acclimate it is quite liberating to be able to precisely cross-reference tickets, wiki documents, and comments without copying links around.

* Automation of ticket lifecycle: Trac tickets progress through their lifecycle (e.g. from "new" to "patch" to "merge" to "closed" statuses) through predefined actions.
This means that moving a ticket through its lifecycle typically only requires one click and the right thing happens with no additional effort. I think this is a great model, although in practice it's not clear how much we benefit from it compared to a typical Maniphest workflow.

* Rich metadata: Tickets can have a variety of metadata fields which can be queried in sophisticated ways.

What does Trac do poorly?
-------------------------

* Familiarity: Many users feel rather lost in Trac.

* Active development: Trac is largely a stagnant project.

* Spam management: Keeping spam at bay has been a challenge. We seem to have it under control at the moment, but I wonder how long this will last.

* Safety: I have personally lost probably a half-dozen hours of my life to Trac eating comments.

* Integration with code review workflow: We use Phabricator for CI and code review; the only thing that remains on Trac are our tickets and the Wiki. Keeping these two resources in sync is time-consuming and error-prone.

* Full text search: Trac's full text search capabilities are generally terrible. While I've tried working around this using PostgreSQL's native full text search functionality, the result is poorly integrated into Trac and consequently hard to use.

* Customizability of the ticket form: While the rich metadata that Trac supports can be very helpful to developers, it can also be confusing to users. Ideally we would hide these fields, but Trac does not give us the ability to do so.

* Relations between tickets: Trac has essentially no first-class notion of ticket relatedness. Even duplicate tickets need to be manually annotated in both directions.

* Keywords are hard to discover and apply

* Fine-grained notification support is nearly non-existent

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL:

From rae at cs.brynmawr.edu Sun Jan 8 14:32:56 2017
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Sun, 8 Jan 2017 09:32:56 -0500
Subject: Trac to Phabricator (Maniphest) migration prototype
In-Reply-To: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org>
References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org>
Message-ID:

> On Jan 8, 2017, at 12:40 AM, Ben Gamari wrote:
>
> * Metadata: Custom fields are supported.

In agreement with your comments above, I'm glad to see this. Trac's metadata currently is suboptimal, but I don't think this means we should throw out the ability to have structured metadata in its entirety.

>
> * Flexible user interface: Custom fields can be hidden from the new
> ticket form to prevent user confusion.

Yay.

>
> * Familiarity: Many users may feel more at home in Phabricator's interface;
> reMarkup's similarity to Markdown also helps.

Nitpick: Do we have any control over this? (I doubt it.) I find switching between GitHub Markdown, RST, and reMarkup to be a low-grade but constant annoyance.

>
> * Legibility:

I seem to recall that Phab originally used gray-on-white in Diffs, but we were able to fix it with some CSS. (Or did it require upstream intervention?) Perhaps we can do something similar here. I find the modern trend to use lower-contrast interfaces utterly maddening.

>
> What does Trac do well?
> -----------------------
>
> * Convenient cross-referencing: while the syntax is a bit odd, once you
> acclimate it is quite liberating to be able to precisely
> cross-reference tickets, wiki documents, and comments without copying
> links around.

Yesyesyes.

>
> * Automation of ticket lifecycle: Trac tickets progress through their
> lifecycle (e.g. from "new" to "patch" to "merge" to "closed"
> statuses) through predefined actions.
This means that moving a ticket > through its lifecycle typically only requires one click and the right > thing happens with no additional effort. I think this is a great model, > although in practice it's not clear how much we benefit from it > compared to a typical Maniphest workflow. I'm less convinced about this benefit of Trac. For instance, if I close a ticket in error, I lose ownership. Then I have to reopen but cannot set an owner at the same time. Maybe it's just a configuration issue, but I imagine we have experienced devs doing ticket-state management and don't need strict controls here. > > > What does Trac do poorly? > ------------------------- > > * Active development: Trac is largely a stagnant project. I was unaware of this. This is a significant downside in my opinion. > * Safety: I have personally lost probably a half-dozen hours of my life > to Trac eating comments. That's odd. I have not had this experience. > > * Relations between tickets: Trac has essentially no first-class notion > of ticket relatedness. Even duplicate tickets need to be manually > annotated in both directions. Yes. Frustrating. > > * Keywords are hard to discover and apply Yes. With discoverable keywords, we might be able to get more reporter buy-in. One more issue that we should consider: email notification. Perhaps I'm stodgy, but I'm a big fan of email. Trac emails notifications are not quite ideal (I wish the new content came above the metadata), but they're very functional. How does Phab compare? Can we see a sample notification it might create? Thanks! Richard From michal.terepeta at gmail.com Sun Jan 8 17:48:28 2017 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Sun, 08 Jan 2017 17:48:28 +0000 Subject: nofib on Shake Message-ID: Hi all, While looking at nofib, I've found a blog post from Neil Mitchell [1], which describes a Shake build system for nofib. The comments mentioned that this should get merged, but it seems that nothing actually happened? 
Is there some fundamental reason for that? If not, I'd be interested in picking this up - the current make-based system is pretty confusing for me and `runstdtest` looks simply terrifying ;-)

We could also create cabal and stack files for `nofib-analyse` (making it possible to use some libraries for it).

Thanks,
Michal

[1] http://neilmitchell.blogspot.ch/2013/02/a-nofib-build-system-using-shake.html

-------------- next part --------------
An HTML attachment was scrubbed... URL:

From ben at well-typed.com Sun Jan 8 18:45:16 2017
From: ben at well-typed.com (Ben Gamari)
Date: Sun, 08 Jan 2017 13:45:16 -0500
Subject: nofib on Shake
In-Reply-To:
References:
Message-ID: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org>

Michal Terepeta writes:

> Hi all,
>
> While looking at nofib, I've found a blog post from Neil Mitchell [1],
> which describes a Shake build system for nofib. The comments mentioned
> that this should get merged, but it seems that nothing actually happened?
> Is there some fundamental reason for that?
>

Indeed there is no fundamental reason and I think it would be great to make nofib a bit easier to run and modify. However, I think we should be careful to maintain some degree of compatibility. One of the nice properties of nofib is that it can be run against a wide range of compiler versions. It would be a shame if, for instance, Joachim's gipeda had to do different things to extract performance metrics from logs produced by pre- and post-Shake nofibs.

> We could also create a cabal and stack files for `nofib-analyse` (making
> it possible to use some libraries for it).
>

This would be great. This would allow me to drop a submodule from my own performance monitoring tool.

Cheers,
- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Sun Jan 8 21:56:06 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 08 Jan 2017 16:56:06 -0500 Subject: nofib on Shake In-Reply-To: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> References: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> Message-ID: <1483912566.4901.1.camel@joachim-breitner.de> Hi, Am Sonntag, den 08.01.2017, 13:45 -0500 schrieb Ben Gamari: > > We could also create a cabal and stack files for `nofib-analyse` (making > > it possible to use some libraries for it). > > > This would be great. This would allow me to drop a submodule from my own > performance monitoring tool. Exists since last April: http://hackage.haskell.org/package/nofib-analyse Only the binary so far, though, but good enough for "cabal install nofib-analyse". Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at well-typed.com Mon Jan 9 01:24:35 2017 From: ben at well-typed.com (Ben Gamari) Date: Sun, 08 Jan 2017 20:24:35 -0500 Subject: Phabricator upgrade underway Message-ID: <87mvf1koto.fsf@ben-laptop.smart-cactus.org> Hello everyone, I'll be bringing down Phabricator for an upgrade in a few minutes. I'll let you know when things are back up. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL:

From tjakway at nyu.edu Mon Jan 9 01:33:14 2017
From: tjakway at nyu.edu (Thomas Jakway)
Date: Sun, 8 Jan 2017 17:33:14 -0800
Subject: Debugging GHC with GHCi
Message-ID: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu>

I want to be able to load certain GHC modules in interpreted mode in ghci so I can set breakpoints in them. I have tests in the testsuite that are compiled by inplace/bin/ghc-stage2 with -package ghc. I can load the tests with ghc-stage2 --interactive -package ghc, but since ghc is compiled I can only set breakpoints in the tests themselves. Loading the relevant files by passing them as absolute paths to :l loads them, but ghci doesn't stop at the breakpoints placed in them (I'm guessing because ghci doesn't realize that the module I just loaded is meant to replace the compiled version in -package ghc).

So if I use

inplace/bin/ghc-stage2 --interactive -package ghc mytest.hs

then

:l abs/path/to/AsmCodeGen.hs

and set breakpoints, nothing happens.

Ideally I'd only have to load the module I'm debugging (and its dependencies?) but if that isn't possible, how would I get ghci to load all of GHC in interpreted mode (instead of using -package ghc)?

Currently I'm using trace & friends to do printf-style debugging but it's definitely not ideal.

-Thomas Jakway

From rae at cs.brynmawr.edu Mon Jan 9 02:04:32 2017
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Sun, 8 Jan 2017 21:04:32 -0500
Subject: Debugging GHC with GHCi
In-Reply-To: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu>
References: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu>
Message-ID: <0AF5BC2D-6A00-43FC-AF1F-2BB9498D0B90@cs.brynmawr.edu>

> On Jan 8, 2017, at 8:33 PM, Thomas Jakway wrote:
>
> Currently I'm using trace & friends to do printf-style debugging but it's definitely not ideal.

I don't have an answer to your question, but I can tell you that this is exactly what I do. It's not ideal at all.
If you figure out how to do this, tell us! Richard From ben at smart-cactus.org Mon Jan 9 04:51:02 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 08 Jan 2017 23:51:02 -0500 Subject: Debugging GHC with GHCi In-Reply-To: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu> References: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu> Message-ID: <87k2a4ltu1.fsf@ben-laptop.smart-cactus.org> Thomas Jakway writes: > I want to be able to load certain GHC modules in interpreted mode in > ghci so I can set breakpoints in them. I have tests in the testsuite > that are compiled by inplace/bin/ghc-stage2 with -package ghc. I can > load the tests with ghc-stage2 --interactive -package ghc but since ghc > is compiled I can only set breakpoints in the tests themselves. Loading > the relevant files by passing them as absolute paths to :l loads them > but ghci doesn't stop at the breakpoints placed in them (I'm guessing > because ghci doesn't realize that the module I just loaded is meant to > replace the compiled version in -package ghc). > > So if I use > > inplace/bin/ghc-stage2 --interactive -package ghc mytest.hs > then > :l abs/path/to/AsmCodeGen.hs > > and set breakpoints, nothing happens. > Many of us would love to be able to load GHC into GHCi. Unfortunately, we aren't currently in a position where this is possible. The only thing standing in our way is the inability of GHC's interpreter to run modules which use unboxed tuples. While there are a few modules within GHC which use unboxed tuples, none of them are particularly interesting for debugging purposes, so compiling them with -fobject-code should be fine. In principle this could be accomplished by, {-# OPTIONS_GHC -fobject-code #-} However, as explained in #10965, GHC sadly does not allow this. I spent a bit of time tonight trying to see if I could convince GHC to first manually build object code for the needed modules, and then load bytecode for the rest. 
Unfortunately recompilation checking fought me at every turn. The current state of my attempt can be found here [1]. It would be great if someone could pick it up. This will involve,

* Working out how to convince GHC to use the object code for utils/Encoding.o instead of recompiling
* Identifying all of the modules which can't be byte-code compiled and adding them to $obj_modules
* Chasing down whatever other issues might pop up along the way

I also wouldn't be surprised if you would need this GHC patch [2]. Cheers, - Ben [1] https://gist.github.com/bgamari/bd53e4fd6f3323599387ffc7b11d1a1e [2] http://git.haskell.org/ghc.git/commit/326931db9cdc26f2d47657c1f084b9903fd46246 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Mon Jan 9 11:41:55 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 9 Jan 2017 11:41:55 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: To first reply to the one specific recurring point about custom fields. The problem with 'os' and 'architecture' is a philosophical one: in what way are they any different to any other metadata for a ticket? I am of the opinion that we should only include information when it is relevant, and a lot of the time these two fields are not. Including the fields as a special case is in fact worse, as users feel obliged to complete them even when they are irrelevant. Users who are interested in these platforms are interested in problems which are specific to those platforms. These fields have been structured for over 10 years now; some options have barely been used but still clutter all interfaces.
Nearly 2000 tickets are tagged as relevant to `x86`, which massively dwarfs the other options -- there is an assumption that unless otherwise stated the problem manifests on x86, as that is the default use case. What's more, just by browsing tickets categorised with this metadata, it is often evident from the title that the problem is on a non-default operating system. The assumption in this case is some Debian derivative; users reporting issues on other operating systems include the operating system prominently, as they know it is not standard. (For example - https://ghc.haskell.org/trac/ghc/query?os=MacOS+X&order=priority)

Stats for architecture - https://phabricator.haskell.org/P133
Stats for operating system - https://phabricator.haskell.org/P134

I modified my local install of phab to add custom fields: http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/M1/7/ The results were better than I remembered, but I am unsatisfied that they behave differently to projects and clutter up the description of each ticket when they are "unset". On the other hand, I think custom fields are suitable for things like test case, wiki page etc. It is easy to add a field to carry over the test case, but I always found it a bit redundant, as by looking at the commit information you could work out which test is relevant to which commit. So to summarise, I think:

Component -> Projects
OS -> (Sub)Projects
Arch -> (Sub)Projects
Keywords -> Projects
Version -> Remove (It is a proxy for date reported)
Milestone -> Project (Milestone)

Matt On Sun, Jan 8, 2017 at 2:32 PM, Richard Eisenberg wrote: > >> On Jan 8, 2017, at 12:40 AM, Ben Gamari wrote: >> >> * Metadata: Custom fields are supported. > > In agreement with your comments above, I'm glad to see this. Trac's metadata currently is suboptimal, but I don't think this means we should throw out the ability to have structured metadata in its entirety.
> >> >> * Flexible user interface: Custom fields can be hidden from the new >> ticket form to prevent user confusion. > > Yay. > >> >> * Familiarity: Many users may feel more at home in Phabricator's interface; >> reMarkup's similarity to Markdown also helps. > > Nitpick: Do we have any control over this? (I doubt it.) I find switching between GitHub Markdown, RST, and reMarkup to be a low-grade but constant annoyance. >> >> * Legibility: > > I seem to recall that Phab originally used gray-on-white in Diffs, but we were able to fix with some CSS. (Or did it require upstream intervention?) Perhaps we can do something similar here. I find the modern trend to use lower-contrast interfaces utterly maddening. > >> >> What does Trac do well? >> ----------------------- >> >> * Convenient cross-referencing: while the syntax is a bit odd, once you >> acclimate it is quite liberating to be able to precisely >> cross-reference tickets, wiki documents, and comments without copying >> links around. > > Yesyesyes. > >> >> * Automation of ticket lifecycle: Trac tickets progress through their >> lifecycle (e.g. from "new" to "patch" to "merge" to "closed" >> statuses) through predefined actions. This means that moving a ticket >> through its lifecycle typically only requires one click and the right >> thing happens with no additional effort. I think this is a great model, >> although in practice it's not clear how much we benefit from it >> compared to a typical Maniphest workflow. > > I'm less convinced about this benefit of Trac. For instance, if I close a ticket in error, I lose ownership. Then I have to reopen but cannot set an owner at the same time. Maybe it's just a configuration issue, but I imagine we have experienced devs doing ticket-state management and don't need strict controls here. >> >> >> What does Trac do poorly? >> ------------------------- >> >> * Active development: Trac is largely a stagnant project. > > I was unaware of this. 
This is a significant downside in my opinion. > >> * Safety: I have personally lost probably a half-dozen hours of my life >> to Trac eating comments. > > That's odd. I have not had this experience. > >> >> * Relations between tickets: Trac has essentially no first-class notion >> of ticket relatedness. Even duplicate tickets need to be manually >> annotated in both directions. > > Yes. Frustrating. > >> >> * Keywords are hard to discover and apply > > Yes. With discoverable keywords, we might be able to get more reporter buy-in. > > > One more issue that we should consider: email notification. Perhaps I'm stodgy, but I'm a big fan of email. Trac email notifications are not quite ideal (I wish the new content came above the metadata), but they're very functional. How does Phab compare? Can we see a sample notification it might create? > > Thanks! > Richard > > From matthewtpickering at gmail.com Mon Jan 9 12:01:14 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 9 Jan 2017 12:01:14 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: With regard to the other two points that Ben made: I solicited the opinion of a few people when I first made the prototype and the reaction was that it didn't matter to them. We should really make this decision based on the opinions of people who are high utilisers of the tracker. The experience should be better for occasional contributors, as there are many more options for authentication and clearer control over notifications. The interaction with the stable branch was something I had not considered yet, so thank you for bringing this up. I spent yesterday looking at the situation and it doesn't look like there is anything immediate which could help with release management.
People in #phabricator told me that we shouldn't use Releeph as there were plans to change it significantly and it wasn't a finished product. They pointed me to https://secure.phabricator.com/T9530 and https://secure.phabricator.com/D16981 which describe the future of the feature. In particular, the "Facebook-Style Cherry-Picks / Phabricator-Style Stable / Backporting" workflow looks close to current practices. This being said, it isn't clear at all when they plan to introduce these features. Custom forms are also a good idea. We could even modify the URLs sometimes produced by the compiler for panics to prefill certain fields of the form. (For example: http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/maniphest/task/edit/?projects=Inlining&title=Simplified%20Ticks%20Exhausted) Matt On Sun, Jan 8, 2017 at 5:40 AM, Ben Gamari wrote: > Matthew Pickering writes: >> Dear devs, >> > Hi Matthew and Dan, > > First, thanks for your work on this; it is an impressive effort. > Reconstructing a decade of tickets with broken markup, tricky syntax, > and a strange data model is no easy task. Good work so far! > > On the whole I am pleasantly surprised by how nicely Maniphest seems to > hang together. I have pasted my notes from my own reflection on the pros > and cons of both systems below. On re-reading them, it seems clear that > Trac does leave us with a number of issues which Phabricator may > resolve. > > As I've expressed in the past, I think we should consider preservation > of ticket numbers to be an absolute requirement of any migration. To do > otherwise imposes a small cost on a large number of people for a > very long time with little benefit. GHC infrastructure exists to > support GHC, not the other way around. > > However, ticket numbers notwithstanding I think I would be fine moving > in this direction if the community agrees that this is the direction we > want to go in.
> > There are a few questions that remain outstanding, however: > > > What do others think of this? > ============================= > > Does Phabricator address the concerns that others, including those > outside of the usual GHC development community, have raised about our > issue tracking in the past? It would be interesting to hear some outside > voices. > > > How do we handle the stable branch? > =================================== > > One important role that the issue tracker plays is ensuring that > patches are backported to the stable branch when appropriate. In Trac we > handle this with a "merge" state (indicating that the ticket has been > fixed in `master`, but the fix needs to be backported) and the milestone > field (to indicate which release we want to backport to). > > A significant amount of GHC's maintenance load is spent backporting and > updating tickets, so it's important that this process works smoothly and > reliably. I think this may be an area where Phabricator could > improve the status quo since the workflow currently looks something like > this, > > 1. Someone merges a patch to `master` fixing an issue; if the commit > message mentions the ticket number then our infrastructure > automatically leaves a comment referencing the commit on the ticket. > > 2. Someone (usually the committer) places the ticket in `merge` state > and sets the milestone appropriately > > 3. I merge the patch to the stable branch > > 4. I close the ticket and manually leave a comment mentioning the SHA > of the backported commit. > > In particular (4) is more work than it needs to be; ideally comment > generation would be automated as it is for commits to `master` but > Trac's comment-on-commit functionality is a bit limited, so this is > currently not an option. > > I'm not sure what Phabricator's analogous workflow to the above might > look like.
It seems that Phabricator's Releeph module may be in part > intended for this use-case, but it seems to have some unfortunate > limitations (e.g. the inflexibility in branch naming) that make it hard > to imagine it being usable in our case. > > Setting aside Releeph, perhaps the best solution would be to continue > with our current workflow: we would retain the "status" state and > milestone projects would take the place of the current "milestone" > field. If I'm not mistaken Phabricator can be configured to mention > commits on stable branches in the ticket history, so this should help > with point (4). > > > Which fields should be preserved? > ================================= > > Our Trac instance associates a lot of structured metadata with each > ticket. I generally think that this is a good thing, especially compared > to the everything-is-a-tag model which I have found can quickly become > unmaintainable. Unfortunately, Trac makes our users pay for these fields > with a confusing ticket form. > > It appears that Phabricator's Transactions [1] module may allow us to have > our cake and eat it too: we can define one form for users to create > tickets and another, more complete form for developer use. In light of > this I don't see why we would need to fall back to the > everything-is-a-tag model. Matthew, what did you feel was > less-than-satisfactory about the proper-fields approach? I fear that > relevant metadata like GHC version, operating system and architecture > simply won't be provided unless the user is explicitly prompted; I > personally find the cue to be quite helpful. Presumably contributors can > set Herald rules for notification on these fields if they so desire. > > In the particular case of the "Component" field, I personally try to > set this correctly when possible and have certainly found it useful as a > search criterion. However, I suspect it would be fine as a tag.
I also > know that Simon PJ is quite fond of the test case field (although > few others are as diligent in keeping this one up to date). > > > How would we migrate and what will become of Trac? > ================================================== > > The mechanics of migration will take time and effort to work out. If we > decide this is the right direction I think we should be cautious in > setting timelines; we should take as much time as we need to do it > correctly. Regardless, we should gather a consensus on the general > direction before we start hashing this out. > > > Thanks again for your effort on this, Matthew and Dan, and sorry it took > me so long to finally get these notes out. > > Cheers, > > - Ben > > > [1] https://secure.phabricator.com/book/phabricator/article/forms/ > > > > Notes > ===== > > These were largely for my sake to keep track of the pros and cons of the > two options. I've nevertheless included them here for completeness. > > What does Maniphest do well? > ---------------------------- > > * Actively developed: Phabricator will continue to improve in the > future. > > * Metadata: Custom fields are supported. > > * Flexible user interface: Custom fields can be hidden from the new > ticket form to prevent user confusion. > > * Familiarity: Many users may feel more at home in Phabricator's interface; > reMarkup's similarity to Markdown also helps. > > * Integration: Having Phabricator handle code review, release > management, and issue tracking will hopefully reduce maintenance > workload. > > * Notifications: Herald's rule-based notifications are quite handy. > > > What does Maniphest do poorly? > ------------------------------ > > * Flexibility of search: The search options feel a bit more limiting > than Trac; in particular the ability to show arbitrary columns in > search results seems conspicuously missing.
> > * Legibility: This is admittedly to some extent a matter of aesthetics > but the search results list feels very busy and is quite difficult to > quickly scan. This is exacerbated by the fact that some aspects > of the color scheme are quite low contrast (e.g. grey against > white for closed tickets). This hurts quite a bit since a number of > contributors spend a significant amount of time looking through lists > of tickets. Perhaps we could convince the Phacility people to provide > a more legible, compact presentation option. > > > What does Trac do well? > ----------------------- > > * Convenient cross-referencing: while the syntax is a bit odd, once you > acclimate it is quite liberating to be able to precisely > cross-reference tickets, wiki documents, and comments without copying > links around. > > * Automation of ticket lifecycle: Trac tickets progress through their > lifecycle (e.g. from "new" to "patch" to "merge" to "closed" > statuses) through predefined actions. This means that moving a ticket > through its lifecycle typically only requires one click and the right > thing happens with no additional effort. I think this is a great model, > although in practice it's not clear how much we benefit from it > compared to a typical Maniphest workflow. > > * Rich metadata: Tickets can have a variety of metadata fields > which can be queried in sophisticated ways. > > > What does Trac do poorly? > ------------------------- > > * Familiarity: Many users feel rather lost in Trac > > * Active development: Trac is largely a stagnant project. > > * Spam management: Keeping spam at bay has been a challenge. We seem to > have it under control at the moment, but I wonder how long this will > last for. > > * Safety: I have personally lost probably a half-dozen hours of my life > to Trac eating comments.
> > * Integration with code review workflow: We use Phabricator for CI and > code review; the only things that remain on Trac are our tickets and > the Wiki. Keeping these two resources in sync is time-consuming and > error-prone. > > * Full text search: Trac's full text search capabilities are generally > terrible. While I've tried working around this using PostgreSQL's > native full text search functionality, the result is poorly > integrated into Trac and consequently hard to use. > > * Customizability of the ticket form: While the rich metadata that Trac > supports can be very helpful to developers, it can also be confusing to > users. Ideally we would hide these fields but Trac does not give us > the ability to do so. > > * Relations between tickets: Trac has essentially no first-class notion > of ticket relatedness. Even duplicate tickets need to be manually > annotated in both directions. > > * Keywords are hard to discover and apply > > * Fine-grained notification support is nearly non-existent > From simonpj at microsoft.com Mon Jan 9 13:55:51 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 9 Jan 2017 13:55:51 +0000 Subject: Navigating GHC proposals Message-ID: Once I am looking at the rendered form of a GHC proposal, eg https://github.com/ghc-proposals/ghc-proposals/blob/rae/constraint-vs-type/proposals/0000-constraint-vs-type.rst how can I find my way to the “conversation” for that proposal, so I can comment on it? https://github.com/ghc-proposals/ghc-proposals/pull/32 Once more, I am lost in a maze of twisty little Github passages. I clearly have not yet internalised an accurate model of what Github is thinking Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at well-typed.com Mon Jan 9 15:05:56 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 09 Jan 2017 10:05:56 -0500 Subject: Phabricator upgrade underway In-Reply-To: <87mvf1koto.fsf@ben-laptop.smart-cactus.org> References: <87mvf1koto.fsf@ben-laptop.smart-cactus.org> Message-ID: <87h958l1d7.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Hello everyone, > > I'll be bringing down Phabricator for an upgrade in a few minutes. I'll > let you know when things are back up. > Hello everyone, The upgrade should now be complete. Feel free to resume your typical Phabrication. I've done some testing but let me know if you encounter any trouble. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From marlowsd at gmail.com Mon Jan 9 16:03:22 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 9 Jan 2017 16:03:22 +0000 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: I don't think there is a way to go from the rendered proposal to the pull request, other than the "back" button in your browser. The constraint-vs-type proposal seems a little bit weird in that it actually has a branch in the ghc-proposals repository itself, rather than being a pull request from a fork in @goldfire's account. Richard, was that intentional? On 9 January 2017 at 13:55, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Once I am looking at the rendered form of a GHC proposal, eg > > https://github.com/ghc-proposals/ghc-proposals/blob/ > rae/constraint-vs-type/proposals/0000-constraint-vs-type.rst > > how can I find my way to the “conversation” for that proposal, so I can > comment on it? > > https://github.com/ghc-proposals/ghc-proposals/pull/32 > > > > Once more, I am lost in a maze of twisty little Github passages.
I > clearly have not yet internalised an accurate model of what Github is > thinking > > Thanks > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 9 16:05:09 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 9 Jan 2017 16:05:09 +0000 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: I don't think there is a way to go from the rendered proposal to the pull request, other than the "back" button in your browser. Seriously? But the rendered proposal is the useful link to send to people. There _must_ be a way, even if it's indirect. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 09 January 2017 16:03 To: Simon Peyton Jones Cc: ghc-devs at haskell.org; Richard Eisenberg Subject: Re: Navigating GHC proposals I don't think there is a way to go from the rendered proposal to the pull request, other than the "back" button in your browser. The constraint-vs-type proposal seems a little bit weird in that it actually has a branch in the ghc-proposals repository itself, rather than being a pull request from a fork in @goldfire's account. Richard, was that intentional? On 9 January 2017 at 13:55, Simon Peyton Jones via ghc-devs > wrote: Once I am looking at the rendered form of a GHC proposal, eg https://github.com/ghc-proposals/ghc-proposals/blob/rae/constraint-vs-type/proposals/0000-constraint-vs-type.rst how can I find my way to the “conversation” for that proposal, so I can comment on it? https://github.com/ghc-proposals/ghc-proposals/pull/32 Once more, I am lost in a maze of twisty little Github passages.
I clearly have not yet internalised an accurate model of what Github is thinking Thanks Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Mon Jan 9 16:12:15 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 9 Jan 2017 16:12:15 +0000 Subject: Debugging GHC with GHCi In-Reply-To: <87k2a4ltu1.fsf@ben-laptop.smart-cactus.org> References: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu> <87k2a4ltu1.fsf@ben-laptop.smart-cactus.org> Message-ID: On 9 January 2017 at 04:51, Ben Gamari wrote: > Thomas Jakway writes: > > > I want to be able to load certain GHC modules in interpreted mode in > > ghci so I can set breakpoints in them. I have tests in the testsuite > > that are compiled by inplace/bin/ghc-stage2 with -package ghc. I can > > load the tests with ghc-stage2 --interactive -package ghc but since ghc > > is compiled I can only set breakpoints in the tests themselves. Loading > > the relevant files by passing them as absolute paths to :l loads them > > but ghci doesn't stop at the breakpoints placed in them (I'm guessing > > because ghci doesn't realize that the module I just loaded is meant to > > replace the compiled version in -package ghc). > > > > So if I use > > > > inplace/bin/ghc-stage2 --interactive -package ghc mytest.hs > > then > > :l abs/path/to/AsmCodeGen.hs > > > > and set breakpoints, nothing happens. > > > Many of us would love to be able to load GHC into GHCi. Unfortunately, > we aren't currently in a position where this is possible. The only thing > standing in our way is the inability of GHC's interpreter to run modules > which use unboxed tuples. 
While there are a few modules within GHC which > use unboxed tuples, none of them are particularly interesting for > debugging purposes, so compiling them with -fobject-code should be fine. > In principle this could be accomplished by, > > {-# OPTIONS_GHC -fobject-code #-} > > However, as explained in #10965, GHC sadly does not allow this. I spent > a bit of time tonight trying to see if I could convince GHC to first > manually build object code for the needed modules, and then load > bytecode for the rest. Unfortunately recompilation checking fought me at > every turn. > > The current state of my attempt can be found here [1]. It would be great > if someone could pick it up. This will involve, > > * Working out how to convince GHC to use the object code for > utils/Encoding.o instead of recompiling > > * Identifying all of the modules which can't be byte-code compiled and > adding them to $obj_modules > > * Chasing down whatever other issues might pop up along the way > > I also wouldn't be surprised if you would need this GHC patch [2]. >

I would have thought that something like

    :set -fobject-code
    :load Main -- or whatever
    -- modify some source file
    :set -fbyte-code
    :load Main

should do the right thing, loading object code when it can, up to the first module that has been modified more recently. Of course you can't have any object code modules that depend on byte-code modules, so if you modify something too low down in the dependency graph then you'll have a lot of interpreted modules, and you may end up trying to interpret something that can't be interpreted because it has an unboxed tuple. But for simple tests it ought to work. (I haven't tried this so I'm probably forgetting something...)
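Laid out as an interactive session, the approach Simon sketches would look roughly like the following. This is an untested sketch (per his own caveat), and `MyModule`/`myFunction` are hypothetical stand-ins for whichever module you actually want to stop in:

```
ghci> :set -fobject-code
ghci> :load Main              -- every module is compiled to object code
-- now edit (or touch) the module you want breakpoints in --
ghci> :set -fbyte-code
ghci> :load Main              -- the modified module, and everything that
                              -- depends on it, is re-loaded as byte code
ghci> :break MyModule.myFunction
ghci> :main                   -- execution stops at the breakpoint
```

As Simon notes, object-code modules cannot depend on byte-code ones, so touching a module low in the dependency graph forces everything above it to be interpreted, which may pull in an unboxed-tuple module that the interpreter cannot handle.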
Cheers Simon > > Cheers, > > - Ben > > > [1] https://gist.github.com/bgamari/bd53e4fd6f3323599387ffc7b11d1a1e > [2] http://git.haskell.org/ghc.git/commit/326931db9cdc26f2d47657c1f084b9 > 903fd46246 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Mon Jan 9 16:55:06 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 9 Jan 2017 16:55:06 +0000 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: Well, you can go to the history of the file, and from there to the first commit ("Rename proposal file"), and from there you'll see a link to the pull request in the blue box next to the name of the branch (the link looks like "#32" in this case). But really, I wouldn't recommend sending the rendered link to someone; send the link to the pull request. On 9 January 2017 at 16:05, Simon Peyton Jones wrote: > I don't think there is a way to go from the rendered proposal to the pull > request, other than the "back" button in your browser. > > > > Seriously? But the rendered proposal is the useful link to send to > people. There _must_ be a way, even if it's indirect. > > > > Simon > > > > *From:* Simon Marlow [mailto:marlowsd at gmail.com] > *Sent:* 09 January 2017 16:03 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org; Richard Eisenberg > *Subject:* Re: Navigating GHC proposals > > > > I don't think there is a way to go from the rendered proposal to the pull > request, other than the "back" button in your browser. > > > > The constraint-vs-type proposal seems a little bit weird in that it > actually has a branch in the ghc-proposals repository itself, rather than > being a pull request from a fork in @goldfire's account. Richard, was that > intentional?
> > > On 9 January 2017 at 13:55, Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > > Once I am looking at the rendered form of a GHC proposal, eg > > https://github.com/ghc-proposals/ghc-proposals/blob/ > rae/constraint-vs-type/proposals/0000-constraint-vs-type.rst > > > how can I find my way to the “conversation” for that proposal, so I can > comment on it? > > https://github.com/ghc-proposals/ghc-proposals/pull/32 > > > > > Once more, I am lost in a maze of twisty little Github passages. I > clearly have not yet internalised an accurate model of what Github is > thinking > > Thanks > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon Jan 9 18:41:38 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 9 Jan 2017 13:41:38 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: > On Jan 9, 2017, at 6:41 AM, Matthew Pickering wrote: > > Component -> Projects > OS -> (Sub)Projects > Arch -> (Sub)Projects What is a (Sub)Project? I've been operating under the assumption that a Project is just a tag. Do these tags have structure? My best guess from your discussion is that when you choose one Project, you are then forced to choose one of a set of others. That seems like a good plan. > Keywords -> Projects > Version -> Remove (It is a proxy for date reported) No no no no. I don't think either of us will convince the other on this point, but we should be clear that we need input from others to decide on this one. > Milestone -> Project (Milestone) What does this mean? What is "Project (Milestone)"? > I have some example emails. https://phabricator.haskell.org/M2 Looks good. Thanks for posting this!
Richard From michal.terepeta at gmail.com Mon Jan 9 19:44:20 2017 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 09 Jan 2017 19:44:20 +0000 Subject: nofib on Shake In-Reply-To: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> References: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> Message-ID: > On Sun, Jan 8, 2017 at 7:45 PM Ben Gamari wrote: > Michal Terepeta writes: > > > Hi all, > > > > While looking at nofib, I've found a blog post from Neil Mitchell [1], > > which describes a Shake build system for nofib. The comments mentioned > > that this should get merged, but it seems that nothing actually happened? > > Is there some fundamental reason for that? > > > Indeed there is no fundamental reason and I think it would be great to > make nofib a bit easier to run and modify. Ok, cool. I'll have a look at using Neil's code and see if it needs any updating or if something is missing. > However, I think we should be careful to maintain some degree of > compatibility. One of the nice properties of nofib is that it can be run > against a wide range of compiler versions. It would be ashame if, for > instance, Joachim's gipeda had to do different things to extract > performance metrics from logs produced by logs pre- and post-Shake > nofibs. Thanks for mentioning this! I don't have any concrete plans to change that at the moment, but I was thinking that in the future it'd be nice if the results were, e.g., a simple csv file, instead of a log containing all the stdout/stderr (i.e., it currently contains the results, any warnings from GHC, output from `Debug.Trace.trace`, etc.) Anyway, that's probably further down the road, so before doing anything, I'll likely send an email to ghc-devs so that we can discuss this. Cheers, Michal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.terepeta at gmail.com Mon Jan 9 19:48:22 2017 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 09 Jan 2017 19:48:22 +0000 Subject: nofib on Shake In-Reply-To: <1483912566.4901.1.camel@joachim-breitner.de> References: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> <1483912566.4901.1.camel@joachim-breitner.de> Message-ID: On Sun, Jan 8, 2017 at 10:56 PM Joachim Breitner wrote: > Hi, > > On Sunday, 08.01.2017, 13:45 -0500, Ben Gamari wrote: > > > We could also create cabal and stack files for `nofib-analyse` (making > > > it possible to use some libraries for it). > > > > > This would be great. This would allow me to drop a submodule from my own > > performance monitoring tool. > > Exists since last April: > http://hackage.haskell.org/package/nofib-analyse > > Only the binary so far, though, but good enough for > "cabal install nofib-analyse". Oh, interesting! But now I'm a bit confused - what's the relationship between https://github.com/nomeata/nofib-analyse and https://git.haskell.org/nofib.git, e.g., is the github repo the upstream for nofib-analyse and the haskell.org one for the other parts of nofib? Or is the github one just a mirror and all patches should go to the haskell.org repo? Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mail at joachim-breitner.de Mon Jan 9 21:05:49 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 09 Jan 2017 16:05:49 -0500 Subject: nofib on Shake In-Reply-To: References: <87tw99l7b7.fsf@ben-laptop.smart-cactus.org> <1483912566.4901.1.camel@joachim-breitner.de> Message-ID: <1483995949.3211.3.camel@joachim-breitner.de> Hi, On Monday, 09.01.2017, 19:48 +0000, Michal Terepeta wrote: > On Sun, Jan 8, 2017 at 10:56 PM Joachim Breitner wrote: > > Hi, > > > > On Sunday, 08.01.2017, 13:45 -0500, Ben Gamari wrote: > > > > We could also create cabal and stack files for `nofib-analyse` (making > > > > it possible to use some libraries for it). > > > > > > > This would be great. This would allow me to drop a submodule from my own > > > performance monitoring tool. > > > > Exists since last April: > > http://hackage.haskell.org/package/nofib-analyse > > > > Only the binary so far, though, but good enough for > > "cabal install nofib-analyse". > > Oh, interesting! But now I'm a bit confused - what's the relationship > > between https://github.com/nomeata/nofib-analyse and > https://git.haskell.org/nofib.git, e.g., is the github repo the > upstream for nofib-analyse and the haskell.org one for the other parts > of nofib? Or is the github one just a mirror and all patches should go > to the haskell.org repo? my repo occasionally pulls in the nofib-analyse directory from the haskell.org nofib repo; see for example this commit (especially its message): https://github.com/nomeata/nofib-analyse/commit/8225e0dd84c3c31cd156d10df75ea47ea29eda87 So yes, patches go to the haskell.org nofib repo (or Phab or whatever). Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Mon Jan 9 21:09:26 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 09 Jan 2017 16:09:26 -0500 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: <1483996166.3211.4.camel@joachim-breitner.de> Hi, On Monday, 09.01.2017, 16:03 +0000, Simon Marlow wrote: > I don't think there is a way to go from the rendered proposal to the > pull request, other than the "back" button in your browser. Nothing stops the author from adding a link to the discussion to the file, as I did in my proposal (first line): https://github.com/nomeata/ghc-proposals/blob/patch-1/proposals/0000-forced-class-instantiation.rst Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Mon Jan 9 21:18:03 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 9 Jan 2017 21:18:03 +0000 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: That is amazingly indirect. Oh well. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 09 January 2017 16:55 To: Simon Peyton Jones Cc: ghc-devs at haskell.org; Richard Eisenberg Subject: Re: Navigating GHC proposals Well, you can go to the history of the file, and from there to the first commit ("Rename proposal file"), and from there you'll see a link to the pull request in the blue box next to the name of the branch (the link looks like "#32" in this case).
But really, I wouldn't recommend sending the rendered link to someone; send the link to the pull request. On 9 January 2017 at 16:05, Simon Peyton Jones > wrote: I don't think there is a way to go from the rendered proposal to the pull request, other than the "back" button in your browser. Seriously? But the rendered proposal is the useful link to send to people. There _must_ be a way, even if it's indirect. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 09 January 2017 16:03 To: Simon Peyton Jones > Cc: ghc-devs at haskell.org; Richard Eisenberg > Subject: Re: Navigating GHC proposals I don't think there is a way to go from the rendered proposal to the pull request, other than the "back" button in your browser. The constraint-vs-type proposal seems a little bit weird in that it actually has a branch in the ghc-proposals repository itself, rather than being a pull request from a fork in @goldfire's account. Richard, was that intentional? On 9 January 2017 at 13:55, Simon Peyton Jones via ghc-devs > wrote: Once I am looking at the rendered form of a GHC proposal, eg https://github.com/ghc-proposals/ghc-proposals/blob/rae/constraint-vs-type/proposals/0000-constraint-vs-type.rst how can I find my way to the “conversation” for that proposal, so I can comment on it? https://github.com/ghc-proposals/ghc-proposals/pull/32 Once more, I am lost in a maze of twisty little GitHub passages. I clearly have not yet internalised an accurate model of what GitHub is thinking. Thanks Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rae at cs.brynmawr.edu Mon Jan 9 21:46:04 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 9 Jan 2017 16:46:04 -0500 Subject: Navigating GHC proposals In-Reply-To: References: Message-ID: > On Jan 9, 2017, at 11:03 AM, Simon Marlow wrote: > > The constraint-vs-type proposal seems a little bit weird in that it actually has a branch in the ghc-proposals repository itself, rather than being a pull request from a fork in @goldfire's account. Richard, was that intentional? I like to use GitHub's edit-in-place feature. Previously, I could write my proposal and then GitHub would prepare the pull request by automatically forking the repository (if I hadn't already), making a branch in my own fork, and then posting the PR. Now, however, because I have commit access to ghc-proposals, it only allows me to automatically create a branch at ghc-proposals/ghc-proposals, not goldfirere/ghc-proposals. Was this choice intentional? Not quite -- I didn't originally want to do it this way. But I did know what I was doing when I clicked "go". In retrospect, if all the committee members wrote proposals the way I did, it would clutter ghc-proposals. I'll avoid this in the future, now that I know GitHub won't automatically do what I want when I have commit access. And I've added a link from the rendered version back to the PR. I do think this is a good practice but it must be manually done. Richard From ben at well-typed.com Tue Jan 10 01:51:21 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 09 Jan 2017 20:51:21 -0500 Subject: GHC 8.2.1 tree freeze timing Message-ID: <8737grlm1y.fsf@ben-laptop.smart-cactus.org> Hello everyone, GHC 8.2.1 is quickly approaching. Our plan is to do a release candidate in February, so we would like to have the tree sorted out by the end of this month. Consequently, I would like to set a general feature freeze for 30 January 2017.
If you are concerned that your work isn't positioned to meet this freeze date then please speak to me so we can make appropriate arrangements. Happy hacking! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Tue Jan 10 01:54:04 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 09 Jan 2017 20:54:04 -0500 Subject: GHC 8.2.1 tree freeze timing Message-ID: <871swbllxf.fsf@ben-laptop.smart-cactus.org> tl;dr. The feature freeze for GHC 8.2 will happen on 30 January 2017. Get any patches you'd like to see in 8.2 up on Phabricator soon! Hello everyone, GHC 8.2.1 is quickly approaching. Our plan is to do a release candidate in February, so we would like to have the tree sorted out by the end of this month. Consequently, I would like to set a general feature freeze for 30 January 2017. After the freeze I will be much less likely to accept larger patches; ideally all work post-freeze should be in the name of preparing the tree for cutting the ghc-8.2 branch. If you are concerned that your work isn't positioned to meet this freeze date then please speak to me so we can make appropriate arrangements. Happy hacking! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From gracjanpolak at gmail.com Tue Jan 10 10:34:26 2017 From: gracjanpolak at gmail.com (Gracjan Polak) Date: Tue, 10 Jan 2017 11:34:26 +0100 Subject: nofib on Shake In-Reply-To: References: Message-ID: I was looking nearby recently and you might want to take into account my discoveries described in https://ghc.haskell.org/trac/ghc/ticket/11501 2017-01-08 18:48 GMT+01:00 Michal Terepeta : > Hi all, > > While looking at nofib, I've found a blog post from Neil Mitchell [1], > which describes a Shake build system for nofib.
The comments mentioned > that this should get merged, but it seems that nothing actually happened? > Is there some fundamental reason for that? > > If not, I'd be interested in picking this up - the current make-based > system is pretty confusing for me and `runstdtest` looks simply > terrifying ;-) > We could also create cabal and stack files for `nofib-analyse` (making > it possible to use some libraries for it). > > Thanks, > Michal > > [1] http://neilmitchell.blogspot.ch/2013/02/a-nofib-build-system-using-shake.html > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jan 10 11:59:10 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 10 Jan 2017 11:59:10 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: Subprojects and Milestones are both special kinds of projects. Subprojects are like projects but are associated with a parent project. Unfortunately this doesn't really show up anywhere on the UI.
Matt On Mon, Jan 9, 2017 at 6:41 PM, Richard Eisenberg wrote: > >> On Jan 9, 2017, at 6:41 AM, Matthew Pickering wrote: >> >> Component -> Projects >> OS -> (Sub)Projects >> Arch -> (Sub)Projects > > What is a (Sub)Project? I've been operating under the assumption that a Project is just a tag. Do these tags have structure? My best guess from your discussion is that when you choose on Project, you are then forced to choose one of a set of others. That seems like a good plan. > >> Keywords -> Projects >> Version -> Remove (It is a proxy for date reported) > > No no no no. I don't think either of us will convince the other on this point, but we should be clear that we need input from others to decide on this one. > >> Milestone -> Project (Milestone) > > What does this mean? What is "Project (Milestone)"? > >> I have some example emails. https://phabricator.haskell.org/M2 > > Looks good. Thanks for posting this! > > Richard From simonpj at microsoft.com Tue Jan 10 12:00:46 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 10 Jan 2017 12:00:46 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: On this Phab question, does Phab have an equivalent to Trac's wiki? That's quite important. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew | Pickering | Sent: 10 January 2017 11:59 | To: Richard Eisenberg | Cc: GHC developers | Subject: Re: Trac to Phabricator (Maniphest) migration prototype | | Subprojects and Milestones are both special kinds of projects. | | Subprojects are like projects but are associated with a parent project. | Unfortunately this doesn't really show up anywhere on the UI. 
| There is quite a long discussion about this on the upstream issue tracker - | https://secure.phabricator.com/T10349 | | Milestones are projects which are meant for tracking releases. A project can | only have one milestone at a time and importantly the parent project is | shown on the UI with milestones. | | I took some screenshots to show how they show up in the UI. | | https://phabricator.haskell.org/M3 | | If you're interested in projects then the best place to read is | https://secure.phabricator.com/book/phabricator/article/projects/ | | They are not very mature and I expect their usage will be refined in the | next iteration. The UI for selecting projects could certainly be improved | rather than presenting a list of amorphous labels. | | Matt | | On Mon, Jan 9, 2017 at 6:41 PM, Richard Eisenberg | wrote: | > | >> On Jan 9, 2017, at 6:41 AM, Matthew Pickering | wrote: | >> | >> Component -> Projects | >> OS -> (Sub)Projects | >> Arch -> (Sub)Projects | > | > What is a (Sub)Project? I've been operating under the assumption that a | Project is just a tag. Do these tags have structure? My best guess from your | discussion is that when you choose one Project, you are then forced to choose | one of a set of others. That seems like a good plan. | > | >> Keywords -> Projects | >> Version -> Remove (It is a proxy for date reported) | > | > No no no no.
I don't think either of us will convince the other on this | point, but we should be clear that we need input from others to decide on | this one. | > | >> Milestone -> Project (Milestone) | > | > What does this mean? What is "Project (Milestone)"? | > | >> I have some example emails. https://phabricator.haskell.org/M2 | > | > Looks good. Thanks for posting this! | > | > Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Tue Jan 10 12:09:58 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 10 Jan 2017 12:09:58 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <87y3ymkt3h.fsf@ben-laptop.smart-cactus.org> Message-ID: There is an equivalent, it is called Phriction - https://phabricator.haskell.org/w/ I am not proposing at this stage that we migrate the wiki contents as well. That would certainly be a logical next step but I think trac's wiki is quite a bit better. Matt On Tue, Jan 10, 2017 at 12:00 PM, Simon Peyton Jones wrote: > On this Phab question, does Phab have an equivalent to Trac's wiki? That's quite important. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew > | Pickering > | Sent: 10 January 2017 11:59 > | To: Richard Eisenberg > | Cc: GHC developers > | Subject: Re: Trac to Phabricator (Maniphest) migration prototype > | > | Subprojects and Milestones are both special kinds of projects. > | > | Subprojects are like projects but are associated with a parent project.
> | Unfortunately this doesn't really show up anywhere on the UI. > | There is quite a long discussion about this on the upstream issue tracker - > | https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsecure.phab > | ricator.com%2FT10349&data=02%7C01%7Csimonpj%40microsoft.com%7Cfe21d0babfb843 > | 78ec2a08d439502937%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636196463840 > | 067524&sdata=7KEVArFhp3bIBclu2Drh8hSn8d47Q1aXrhGRQVKp%2BwM%3D&reserved=0 > | > | Milestones are projects which are meant for tracking releases. A project can > | only have one milestone at a time and importantly the parent project is > | shown on the UI with milestones. > | > | I took some screenshots to show how they show up in UI. > | > | https://phabricator.haskell.org/M3 > | > | If you're interested about projects then the best place to read is > | https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fsecure.phab > | ricator.com%2Fbook%2Fphabricator%2Farticle%2Fprojects%2F&data=02%7C01%7Csimo > | npj%40microsoft.com%7Cfe21d0babfb84378ec2a08d439502937%7C72f988bf86f141af91a > | b2d7cd011db47%7C1%7C0%7C636196463840067524&sdata=MfqbX%2BCzMtqf4wIRhe0MUqSN4 > | l8l7oRiDO3IUIqb8ag%3D&reserved=0 > | > | They are not very mature and I expect their usage will be refined in the > | next iteration. The UI for selecting projects could certainly be improved > | rather than presenting a list of amorphous labels. > | > | Matt > | > | On Mon, Jan 9, 2017 at 6:41 PM, Richard Eisenberg > | wrote: > | > > | >> On Jan 9, 2017, at 6:41 AM, Matthew Pickering > | wrote: > | >> > | >> Component -> Projects > | >> OS -> (Sub)Projects > | >> Arch -> (Sub)Projects > | > > | > What is a (Sub)Project? I've been operating under the assumption that a > | Project is just a tag. Do these tags have structure? My best guess from your > | discussion is that when you choose on Project, you are then forced to choose > | one of a set of others. That seems like a good plan. 
> | > > | >> Keywords -> Projects > | >> Version -> Remove (It is a proxy for date reported) > | > > | > No no no no. I don't think either of us will convince the other on this > | point, but we should be clear that we need input from others to decide on > | this one. > | > > | >> Milestone -> Project (Milestone) > | > > | > What does this mean? What is "Project (Milestone)"? > | > > | >> I have some example emails. https://phabricator.haskell.org/M2 > | > > | > Looks good. Thanks for posting this! > | > > | > Richard > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell > | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cfe21d0babfb84378ec2a08d4395029 > | 37%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636196463840067524&sdata=mRz > | Phhzd1Xjqfbwz%2BTtfESWg4gbdSWx5gC2HpDUtSuM%3D&reserved=0 From simonpj at microsoft.com Tue Jan 10 16:05:34 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 10 Jan 2017 16:05:34 +0000 Subject: Exhaustiveness checking for pattern synonyms In-Reply-To: References: Message-ID: Questions * What if there are multiple COMPLETE pragmas e.g. {-# COMPLETE A, B, C #-} {-# COMPLETE A, X, Y, Z #-} Is that ok? I guess it should be! Will the pattern-match exhaustiveness check then succeed if a function uses either set? What happens if you use a mixture of constructors in a match (e.g. A, X, C, Z)? Presumably all bets are off? * Note that COMPLETE pragmas could be a new source of orphan modules module M where import N( pattern P, pattern Q ) {-# COMPLETE P, Q #-} where neither P nor Q is defined in M. Then every module that is transitively "above" M would need to read M.hi just in case its COMPLETE pragmas was relevant. * Point out in the spec that COMPLETE pragmas are entirely unchecked. It's up to the programmer to get it right. * Typing. 
What does it mean for the types to "agree" with each other. E.g A :: a -> [(a, Int)] B :: b -> [(Int, b)] Is this ok? Please say explicitly with examples. * I didn't really didn't understand the "Error messages" section. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew | Pickering | Sent: 22 November 2016 10:43 | To: GHC developers | Subject: Exhaustiveness checking for pattern synonyms | | Hello devs, | | I have implemented exhaustiveness checking for pattern synonyms. The idea is | very simple, you specify a set of pattern synonyms (or data | constructors) which are regarded as a complete match. | The pattern match checker then uses this information in order to check | whether a function covers all possibilities. | | Specification: | | https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs | | https://phabricator.haskell.org/D2669 | https://phabricator.haskell.org/D2725 | | https://ghc.haskell.org/trac/ghc/ticket/8779 | | Matt | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C155eb2786cb040d8052908d412c453 | b5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636154081815249356&sdata=MkQ | FpwJWaTU%2BdEQSYEBjXLt80BrXLkBp9V8twdKB6BI%3D&reserved=0 From tjakway at nyu.edu Tue Jan 10 18:09:39 2017 From: tjakway at nyu.edu (Thomas Jakway) Date: Tue, 10 Jan 2017 10:09:39 -0800 Subject: Debugging GHC with GHCi In-Reply-To: References: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu> <87k2a4ltu1.fsf@ben-laptop.smart-cactus.org> Message-ID: Thanks very much, I'll give that a shot. 
On 01/09/2017 08:12 AM, Simon Marlow wrote: > On 9 January 2017 at 04:51, Ben Gamari > wrote: > > Thomas Jakway > writes: > > > I want to be able to load certain GHC modules in interpreted mode in > > ghci so I can set breakpoints in them. I have tests in the > testsuite > > that are compiled by inplace/bin/ghc-stage2 with -package ghc. > I can > > load the tests with ghc-stage2 --interactive -package ghc but > since ghc > > is compiled I can only set breakpoints in the tests themselves. > Loading > > the relevant files by passing them as absolute paths to :l loads > them > > but ghci doesn't stop at the breakpoints placed in them (I'm > guessing > > because ghci doesn't realize that the module I just loaded is > meant to > > replace the compiled version in -package ghc). > > > > So if I use > > > > inplace/bin/ghc-stage2 --interactive -package ghc mytest.hs > > then > > :l abs/path/to/AsmCodeGen.hs > > > > and set breakpoints, nothing happens. > > > Many of us would love to be able to load GHC into GHCi. Unfortunately, > we aren't currently in a position where this is possible. The only > thing > standing in our way is the inability of GHC's interpreter to run > modules > which use unboxed tuples. While there are a few modules within GHC > which > use unboxed tuples, none of them are particularly interesting for > debugging purposes, so compiling them with -fobject-code should be > fine. > In principle this could be accomplished by, > > {-# OPTIONS_GHC -fobject-code #-} > > However, as explained in #10965, GHC sadly does not allow this. I > spent > a bit of time tonight trying to see if I could convince GHC to first > manually build object code for the needed modules, and then load > bytecode for the rest. Unfortunately recompilation checking fought > me at > every turn. > > The current state of my attempt can be found here [1]. It would be > great > if someone could pick it up. 
This will involve, > > * Working out how to convince GHC to use the object code for > utils/Encoding.o instead of recompiling > > * Identifying all of the modules which can't be byte-code > compiled and > add them to $obj_modules > > * Chasing down whatever other issues might pop up along the way > > I also wouldn't be surprised if you would need this GHC patch [2]. > > > I would have thought that something like > > :set -fobject-code > :load Main -- or whatever > -- modify some source file > :set -fbyte-code > :load Main > > should do the right thing, loading object code when it can, up to the > first module that has been modified more recently. Of course you > can't have any object code modules that depend on byte-code modules, > so if you modify something too low down in the dependency graph then > you'll have a lot of interpreted modules, and you may end up trying to > interpret something that can't be interpreted because it has an > unboxed tuple. But for simple tests it ought to work. (I haven't > tried this so I'm probably forgetting something...) > > Cheers > Simon > > > Cheers, > > - Ben > > > [1] > https://gist.github.com/bgamari/bd53e4fd6f3323599387ffc7b11d1a1e > > [2] > http://git.haskell.org/ghc.git/commit/326931db9cdc26f2d47657c1f084b9903fd46246 > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Jan 10 18:44:48 2017 From: ben at well-typed.com (Ben Gamari) Date: Tue, 10 Jan 2017 13:44:48 -0500 Subject: Debugging GHC with GHCi In-Reply-To: References: <2a49fd43-9896-bcc1-26be-763f17e3fb83@nyu.edu> <87k2a4ltu1.fsf@ben-laptop.smart-cactus.org> Message-ID: <87wpe2kb4v.fsf@ben-laptop.smart-cactus.org> Thomas Jakway writes: > Thanks very much, I'll give that a shot.
> I have opened #13101 to track this since it has come up a number of times on the mailing list and elsewhere. Be sure to add a comment if you make any progress. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Jan 10 18:51:53 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 10 Jan 2017 13:51:53 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: <87o9zekat2.fsf@ben-laptop.smart-cactus.org> Matthew Pickering writes: > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps. I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > We discussed this a bit in this week's GHC call and the general feeling was that it would be nice to have a comprehensive list of the pros and cons somewhere. I pasted my notes on the Wiki [1] as a starting point. Matthew, would you like to add your thoughts there as well? Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From michal.terepeta at gmail.com Tue Jan 10 19:43:01 2017 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Tue, 10 Jan 2017 19:43:01 +0000 Subject: nofib on Shake In-Reply-To: References: Message-ID: On Tue, Jan 10, 2017 at 11:35 AM Gracjan Polak wrote: > I was looking nearby recently and you might want to take into account my > discoveries described in https://ghc.haskell.org/trac/ghc/ticket/11501 Thanks a lot for mentioning it! (I didn't see this ticket/discussion) I don't want to get in your way - did you already start working on something? Do you have some concrete plans wrt. nofib? From my side, I was recently mostly interested in using nofib to measure the performance of GHC itself. Nofib already tries to do that, but it's super flaky (it only compiles things once and most modules are small). So I was thinking of improving this, but when I started to look into it a bit closer, I decided that it might be better to start with the build system ;) And then add options to compile things more than once, add some compile-time only benchmarks, etc. Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From gracjanpolak at gmail.com Tue Jan 10 20:22:45 2017 From: gracjanpolak at gmail.com (Gracjan Polak) Date: Tue, 10 Jan 2017 20:22:45 +0000 Subject: nofib on Shake In-Reply-To: References: Message-ID: My time for this ticket ran out and for the foreseeable future I won't be able to do much. We can discuss ideas if you have some. The part of nofib that is called fibon should be replaced with the latest version of packages and permanently connected to the test suite. Or removed. This is the only sure conclusion I came to. Everything else is up for debate.
On Tue, 10.01.2017 at 20:43, Michal Terepeta wrote: > On Tue, Jan 10, 2017 at 11:35 AM Gracjan Polak > wrote: > > I was looking nearby recently and you might want to take into account my > > discoveries described in https://ghc.haskell.org/trac/ghc/ticket/11501 > > Thanks a lot for mentioning it! (I didn't see this ticket/discussion) > > I don't want to get in your way - did you already start working on > something? Do you have some concrete plans wrt. nofib? > > From my side, I was recently mostly interested in using nofib to > measure the performance of GHC itself. Nofib already tries to do that, > but it's super flaky (it only compiles things once and most modules > are small). So I was thinking of improving this, but when I started > to look into it a bit closer, I decided that it might be better to > start with the build system ;) And then add options to compile things > more than once, add some compile-time only benchmarks, etc. > > Thanks, > Michal > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Wed Jan 11 04:04:59 2017 From: david at well-typed.com (David Feuer) Date: Tue, 10 Jan 2017 23:04:59 -0500 Subject: Pattern checker status Message-ID: <1927002.zmK5YSgWrp@squirrel> Could you possibly give us an update about the current status of your pattern checker work? I know the number of tickets may seem a bit overwhelming; please reach out to the ghc-devs list, or individual developers, to get whatever help you need. Knocking out some of these tickets is a priority for 8.2, and the freeze is coming up fast. In my mind, the top priorities for GHC 8.2 should probably be #10746 and #12949. We really want to get those squashed. I think #11195 is also a fairly high priority (if it's still an issue!). #10746 is a serious correctness issue, and Simon's suggested fix sounds straightforward. Have you run into trouble? #12949 is also a serious correctness issue.
You're right that desugaring doesn't happen till after type checking, but I believe the type checker should have already worked out what it needs to be able to desugar the overloading. It would be worth asking how you might access that information. #11195 looks like a practically important and serious performance problem. I know you spent some time investigating it months ago; do you have any more recent progress to report? Do you know if the problem is still there? David Feuer From ben at well-typed.com Wed Jan 11 18:40:45 2017 From: ben at well-typed.com (Ben Gamari) Date: Wed, 11 Jan 2017 13:40:45 -0500 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.0.2 is available! Message-ID: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> =============================================== The Glasgow Haskell Compiler -- version 8.0.2 =============================================== The GHC team is happy to at last announce the 8.0.2 release of the Glasgow Haskell Compiler. Source and binary distributions are available at http://downloads.haskell.org/~ghc/8.0.2/ This is the second release of the 8.0 series and fixes nearly two-hundred bugs. These include, * Interface file build determinism (#4012). * Compatibility with macOS Sierra and GCC compilers which compile position-independent executables by default * Compatibility with systems which use the gold linker * Runtime linker fixes on Windows (see #12797) * A compiler bug which resulted in undefined reference errors while compiling some packages (see #12076) * A number of memory consistency bugs in the runtime system * A number of efficiency issues in the threaded runtime which manifest on larger core counts and large numbers of bound threads. * A typechecker bug which caused some programs using -XDefaultSignatures to be incorrectly accepted. * More than two-hundred other bugs. See Trac [1] for a complete listing. * #12757, which led to broken runtime behavior and even crashes in the presence of primitive strings.
* #12844, a type inference issue affecting partial type signatures. * A bump of the `directory` library, fixing buggy path canonicalization behavior (#12894). Unfortunately this required a major version bump in `directory` and minor bumps in several other libraries. * #12912, where use of the `select` system call would lead to runtime system failures with large numbers of open file handles. * #10635, wherein -Wredundant-constraints was included in the -Wall warning set A more detailed list of the changes included in this release can be found in the release notes, https://downloads.haskell.org/~ghc/8.0.2/docs/html/users_guide/8.0.2-notes.html Please note that this release breaks with our usual tendency to avoid major version bumps of core libraries in minor GHC releases by including an upgrade of the `directory` library to 1.3.0.0. Also note that, due to a rather serious bug (#13100) affecting Windows noticed late in the release cycle, the Windows binary distributions were produced using a slightly patched [2] source tree. Users compiling from source for Windows should be certain to include this patch in their build. This release is the result of six months of effort by the GHC development community. We'd like to thank everyone who has contributed code, bug reports, and feedback to this release. It's only due to their efforts that GHC remains a vibrant and exciting project. [1] https://ghc.haskell.org/trac/ghc/query?status=closed&milestone=8.0.2&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority [2] http://downloads.haskell.org/~ghc/8.0.2/0001-SysTools-Revert-linker-flags-change.patch How to get it ~~~~~~~~~~~~~ Both the source tarball and binary distributions for a wide variety of platforms are available at, http://www.haskell.org/ghc/ Background ~~~~~~~~~~ Haskell is a standardized lazy functional programming language. The Glasgow Haskell Compiler (GHC) is a state-of-the-art programming suite for Haskell. 
Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license. Supported Platforms ~~~~~~~~~~~~~~~~~~~ The list of platforms we support, and the people responsible for them, can be found on the GHC wiki http://ghc.haskell.org/trac/ghc/wiki/Platforms Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform: http://ghc.haskell.org/trac/ghc/wiki/Building Developers ~~~~~~~~~~ We welcome new contributors. Instructions on getting started with hacking on GHC are available from GHC's developer site, http://ghc.haskell.org/trac/ghc/ Community Resources ~~~~~~~~~~~~~~~~~~~ There are mailing lists for GHC users, developers, and monitoring bug tracker activity; to subscribe, use the web interfaces at http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-tickets There are several other Haskell and GHC-related mailing lists on www.haskell.org; for the full list, see https://mail.haskell.org/cgi-bin/mailman/listinfo Some GHC developers hang out on the #ghc and #haskell channels of the Freenode IRC network, too: http://www.haskell.org/haskellwiki/IRC_channel Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here: http://www.haskell.org/ghc/reportabug -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Wed Jan 11 20:47:29 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 11 Jan 2017 20:47:29 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: <87o9zekat2.fsf@ben-laptop.smart-cactus.org> References: <87o9zekat2.fsf@ben-laptop.smart-cactus.org> Message-ID: I experimented a bit more with subprojects and I was able to add a +6 line patch to make them behave a bit better. Specifically, the parent project now appears in the UI and auto complete works as expected. https://phabricator.haskell.org/M3/6/ Matt On Tue, Jan 10, 2017 at 6:51 PM, Ben Gamari wrote: > Matthew Pickering writes: > >> Dear devs, >> >> I have completed writing a migration which moves tickets from trac to >> phabricator. The conversion is essentially lossless. The trac >> transaction history is replayed which means all events are transferred >> with their original authors and timestamps. I welcome comments on the >> work I have done so far, especially bugs as I have definitely not >> looked at all 12000 tickets. >> > We discussed this a bit in this week's GHC call and the general feeling > was that it would be nice to have a comprehensive list of the pros and > cons somewhere. I pasted my notes on the Wiki [1] as a starting point. > > Matthew, would you like to add your thoughts there as well? > > Cheers, > > - Ben > > [1] https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest From timmcgil at gmail.com Wed Jan 11 21:01:47 2017 From: timmcgil at gmail.com (Tim McGilchrist) Date: Thu, 12 Jan 2017 08:01:47 +1100 Subject: Inlining Wiki Page In-Reply-To: References: Message-ID: Hi Matt, I noted this down last year as something I wanted to work on for this year. Just letting you know that I'm starting to look at some of the easier tickets in that page. 
Is there a good person or place to ask questions if I get stuck on anything? Cheers, Tim On Thursday, 4 August 2016, Matthew Pickering wrote: > Dear Devs, > > I've spent the last day looking at the inliner. In doing so I updated > the wiki page about inlining to be a lot more useful to other people > wanting to understand the intricacies and problems. > > https://ghc.haskell.org/trac/ghc/wiki/Inlining > > This looks like the perfect place for a newcomer to start working on > GHC. The inliner is quite well contained, there are lots of open > tickets with well-specified aims and lots of investigatory work to be > done. > > So the purpose of this email is: > > 1. Please tag any tickets relevant to inlining/specialisation with > "Inlining" > 2. Any newcomers keen to get involved should read the wiki page and > see if they can tackle one of the tickets there. > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Jan 11 21:53:37 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 11 Jan 2017 21:53:37 +0000 Subject: Inlining Wiki Page In-Reply-To: References: Message-ID: Very good Tim! There are always people more knowledgable than me in #ghc on freenode. I apologise if it is harder than I anticipated! Matt On Wed, Jan 11, 2017 at 9:01 PM, Tim McGilchrist wrote: > Hi Matt, > > I noted this down last year as something I wanted to work on for this year. > Just letting you know that I'm starting to look at some of the easier > tickets in that page. > > Is there a good person or place to ask questions if I get stuck on anything? > > Cheers, > Tim > > > On Thursday, 4 August 2016, Matthew Pickering > wrote: >> >> Dear Devs, >> >> I've spent the last day looking at the inliner. 
In doing so I updated >> the wiki page about inlining to be a lot more useful to other people >> wanting to understand the intricacies and problems. >> >> https://ghc.haskell.org/trac/ghc/wiki/Inlining >> >> This looks like the perfect place for a newcomer to start working on >> GHC. The inliner is quite well contained, there are lots of open >> tickets with well-specified aims and lots of investigatory work to be >> done. >> >> So the purpose of this email is: >> >> 1. Please tag any tickets relevant to inlining/specialisation with >> "Inlining" >> 2. Any newcomers keen to get involved should read the wiki page and >> see if they can tackle one of the tickets there. >> >> Matt >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From dedgrant at gmail.com Thu Jan 12 00:30:27 2017 From: dedgrant at gmail.com (Darren Grant) Date: Wed, 11 Jan 2017 16:30:27 -0800 Subject: [Haskell-cafe] [ANNOUNCE] Glasgow Haskell Compiler 8.0.2 is available! In-Reply-To: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> References: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> Message-ID: Wow. Congrats on the release all and thank you for the fixes! Cheers, Darren On Jan 11, 2017 10:42 AM, "Ben Gamari" wrote: =============================================== The Glasgow Haskell Compiler -- version 8.0.2 =============================================== The GHC team is happy to at last announce the 8.0.2 release of the Glasgow Haskell Compiler. Source and binary distributions are available at http://downloads.haskell.org/~ghc/8.0.2/ This is the second release of the 8.0 series and fixes nearly two-hundred bugs. These include, * Interface file build determinism (#4012). 
* Compatibility with macOS Sierra and GCC compilers which compile position-independent executables by default * Compatibility with systems which use the gold linker * Runtime linker fixes on Windows (see #12797) * A compiler bug which resulted in undefined reference errors while compiling some packages (see #12076) * A number of memory consistency bugs in the runtime system * A number of efficiency issues in the threaded runtime which manifest on larger core counts and large numbers of bound threads. * A typechecker bug which caused some programs using -XDefaultSignatures to be incorrectly accepted. * More than two-hundred other bugs. See Trac [1] for a complete listing. * #12757, which led to broken runtime behavior and even crashes in the presence of primitive strings. * #12844, a type inference issue affecting partial type signatures. * A bump of the `directory` library, fixing buggy path canonicalization behavior (#12894). Unfortunately this required a major version bump in `directory` and minor bumps in several other libraries. * #12912, where use of the `select` system call would lead to runtime system failures with large numbers of open file handles. * #10635, wherein -Wredundant-constraints was included in the -Wall warning set A more detailed list of the changes included in this release can be found in the release notes, https://downloads.haskell.org/~ghc/8.0.2/docs/html/users_guide/8.0.2-notes.html Please note that this release breaks with our usual tendency to avoid major version bumps of core libraries in minor GHC releases by including an upgrade of the `directory` library to 1.3.0.0. Also note that, due to a rather serious bug (#13100) affecting Windows noticed late in the release cycle, the Windows binary distributions were produced using a slightly patched [2] source tree. Users compiling from source for Windows should be certain to include this patch in their build.
This release is the result of six months of effort by the GHC development community. We'd like to thank everyone who has contributed code, bug reports, and feedback to this release. It's only due to their efforts that GHC remains a vibrant and exciting project. [1] https://ghc.haskell.org/trac/ghc/query?status=closed&milestone=8.0.2&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority [2] http://downloads.haskell.org/~ghc/8.0.2/0001-SysTools-Revert-linker-flags-change.patch How to get it ~~~~~~~~~~~~~ Both the source tarball and binary distributions for a wide variety of platforms are available at, http://www.haskell.org/ghc/ Background ~~~~~~~~~~ Haskell is a standardized lazy functional programming language. The Glasgow Haskell Compiler (GHC) is a state-of-the-art programming suite for Haskell. Included is an optimising compiler generating efficient code for a variety of platforms, together with an interactive system for convenient, quick development. The distribution includes space and time profiling facilities, a large collection of libraries, and support for various language extensions, including concurrency, exceptions, and foreign language interfaces. GHC is distributed under a BSD-style open source license. Supported Platforms ~~~~~~~~~~~~~~~~~~~ The list of platforms we support, and the people responsible for them, can be found on the GHC wiki http://ghc.haskell.org/trac/ghc/wiki/Platforms Ports to other platforms are possible with varying degrees of difficulty. The Building Guide describes how to go about porting to a new platform: http://ghc.haskell.org/trac/ghc/wiki/Building Developers ~~~~~~~~~~ We welcome new contributors.
Instructions on getting started with hacking on GHC are available from GHC's developer site, http://ghc.haskell.org/trac/ghc/ Community Resources ~~~~~~~~~~~~~~~~~~~ There are mailing lists for GHC users, developers, and monitoring bug tracker activity; to subscribe, use the web interfaces at http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-tickets There are several other Haskell and GHC-related mailing lists on www.haskell.org; for the full list, see https://mail.haskell.org/cgi-bin/mailman/listinfo Some GHC developers hang out on the #ghc and #haskell channels of the Freenode IRC network, too: http://www.haskell.org/haskellwiki/IRC_channel Please report bugs using our bug tracking system. Instructions on reporting bugs can be found here: http://www.haskell.org/ghc/reportabug _______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Jan 12 06:04:43 2017 From: ben at well-typed.com (Ben Gamari) Date: Thu, 12 Jan 2017 01:04:43 -0500 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.0.2 is available! In-Reply-To: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> References: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> Message-ID: <87ziiwstj8.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > =============================================== > The Glasgow Haskell Compiler -- version 8.0.2 > =============================================== > > The GHC team is happy to at last announce the 8.0.2 release of the > Glasgow Haskell Compiler.
Source and binary distributions are available > at > I'm sorry to say that the Windows tarballs were built without profiling libraries and will need to be reissued. To prevent confusion I have removed the bad tarballs until I have a chance to rebuild them. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From alain.odea at gmail.com Thu Jan 12 13:43:51 2017 From: alain.odea at gmail.com (Alain O'Dea) Date: Thu, 12 Jan 2017 13:43:51 +0000 Subject: many haskell's mails are detected as spam on gmail In-Reply-To: <03ba2967-2a3e-caf9-7937-9ebf9bf315e2@nh2.me> References: <03ba2967-2a3e-caf9-7937-9ebf9bf315e2@nh2.me> Message-ID: My late reply on this is related to losing masses of ghc-devs emails to Spam on Google Inbox. I've been manually selecting and moving them to my Haskell label. Apparently this is how Google Inbox does the "not spam" interaction. This is pretty annoying. They also label lots of shibboleth-users email as spam. On Tue, Dec 27, 2016 at 6:42 AM Niklas Hambüchen wrote: > Despite Google's public claims to the contrary, I have found the Gmail > spam filter not to work too reliably; I've had cases where it blocked > important emails like "OK, here's my invoice (PDF attached)" in the > middle of long email threads, of which messages were otherwise let > through without problem.
> > > > Does anyone know why? > > Do you know the workaround? > > > > Regards, > > Takenobu > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Thu Jan 12 16:36:03 2017 From: ggreif at gmail.com (Gabor Greif) Date: Thu, 12 Jan 2017 17:36:03 +0100 Subject: Inlining Wiki Page In-Reply-To: References: Message-ID: Hello Tim! I had a pet inlining ticket, which exposes some frivolous blowup: https://ghc.haskell.org/trac/ghc/ticket/8901 It has been closed because nobody really knows how to proceed. Anyway, I have just got the latest stats (appending to the ticket too) $ ls -l ./libraries/time/dist-install/build/Data/Time/Format.*o -rw-r--r-- 1 ggreif lb40 482568 Jan 12 16:24 ./libraries/time/dist-install/build/Data/Time/Format.dyn_o -rw-r--r-- 1 ggreif lb40 514776 Jan 12 16:24 ./libraries/time/dist-install/build/Data/Time/Format.o $ wc -l ./libraries/time/lib/Data/Time/Format.hs 254 ./libraries/time/lib/Data/Time/Format.hs $ strip ./libraries/time/dist-install/build/Data/Time/Format.*o $ ls -l ./libraries/time/dist-install/build/Data/Time/Format.*o -rw-r--r-- 1 ggreif lb40 201512 Jan 12 17:26 ./libraries/time/dist-install/build/Data/Time/Format.dyn_o -rw-r--r-- 1 ggreif lb40 187712 Jan 12 17:26 ./libraries/time/dist-install/build/Data/Time/Format.o $ ghc -e "187712/254" 739.0236220472441 As you can see a single line of Format.hs gets compiled to 739 stripped bytes. Maybe you are inclined to put this ticket on the Wiki list too? Cheers, Gabor On 1/11/17, Tim McGilchrist wrote: > Hi Matt, > > I noted this down last year as something I wanted to work on for this year. 
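Gabor's bytes-per-source-line arithmetic above generalises to a small script for flagging other modules with suspicious code blowup. The paths are placeholders, and the `strip` step from his shell session is assumed to have been run already:

```python
import os

def object_bytes_per_line(obj_path, src_path):
    """Stripped object-file size divided by source line count for one module.

    Both paths are illustrative; point them at a real .o file and its .hs source."""
    with open(src_path) as src:
        lines = sum(1 for _ in src)
    return os.path.getsize(obj_path) / lines

def ratio(obj_bytes, src_lines):
    # The pure arithmetic, so known figures can be checked without any files.
    return obj_bytes / src_lines

# Gabor's numbers for Data/Time/Format.hs reproduce his result:
print(round(ratio(187712, 254), 4))  # → 739.0236
```

Run over a whole build tree, sorting modules by this ratio would surface the worst inlining blowups first.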
> Just letting you know that I'm starting to look at some of the easier > tickets in that page. > > Is there a good person or place to ask questions if I get stuck on > anything? > > Cheers, > Tim > > On Thursday, 4 August 2016, Matthew Pickering > wrote: > >> Dear Devs, >> >> I've spent the last day looking at the inliner. In doing so I updated >> the wiki page about inlining to be a lot more useful to other people >> wanting to understand the intricacies and problems. >> >> https://ghc.haskell.org/trac/ghc/wiki/Inlining >> >> This looks like the perfect place for a newcomer to start working on >> GHC. The inliner is quite well contained, there are lots of open >> tickets with well-specified aims and lots of investigatory work to be >> done. >> >> So the purpose of this email is: >> >> 1. Please tag any tickets relevant to inlining/specialisation with >> "Inlining" >> 2. Any newcomers keen to get involved should read the wiki page and >> see if they can tackle one of the tickets there. >> >> Matt >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > From iavor.diatchki at gmail.com Thu Jan 12 18:36:12 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 12 Jan 2017 10:36:12 -0800 Subject: ANN: `dump-core` a prettier GHC core viewer Message-ID: Hello, Over the holidays I wrote a small GHC plugin to help me do some low-level optimizations of Haskell code. I thought it might be of use to other people too, so please try it out! When enabled, the plugin will save the Core generated by GHC in JSON format, and also render it in HTML for human inspection. The plugin is available on Hackage: http://hackage.haskell.org/package/dump-core The instructions on how to use it are in the README file. 
You may also read about it at the github page: http://hackage.haskell.org/package/dump-core There are many things that could probably be improved, just let me know. Also, if you are good at design, I could use some help making things look prettier :) Happy hacking, -Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Jan 12 18:51:16 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 12 Jan 2017 13:51:16 -0500 Subject: [Haskell-cafe] ANN: `dump-core` a prettier GHC core viewer In-Reply-To: References: Message-ID: <1D00A7A7-16B9-4083-B491-F9A395C5373F@smart-cactus.org> On January 12, 2017 1:36:12 PM EST, Iavor Diatchki wrote: >Hello, > >Over the holidays I wrote a small GHC plugin to help me do some >low-level >optimizations of Haskell code. I thought it might be of use to other >people too, so please try it out! > >When enabled, the plugin will save the Core generated by GHC in JSON >format, and also render it in HTML for human inspection. > >The plugin is available on Hackage: >http://hackage.haskell.org/package/dump-core > >The instructions on how to use it are in the README file. >You may also read about it at the github page: >http://hackage.haskell.org/package/dump-core > >There are many things that could probably be improved, just let me >know. >Also, if you are good at design, I could use some help making things >look >prettier :) > >Happy hacking, >-Iavor > > >------------------------------------------------------------------------ > >_______________________________________________ >Haskell-Cafe mailing list >To (un)subscribe, modify options or view archives go to: >http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >Only members subscribed via the mailman list are allowed to post. This looks fantastic, Iavor. I have often wanted something like this. 
It would be nice if the package would also expose a library providing the types along with FromJSON instances so one can load a dump into ghci for further inspection. I've also long wanted a tool to easily fuzzily compare pairs of core dumps. This could be a great tool for enabling this. Cheers, - Ben From eric at seidel.io Thu Jan 12 18:51:32 2017 From: eric at seidel.io (Eric Seidel) Date: Thu, 12 Jan 2017 10:51:32 -0800 Subject: ANN: `dump-core` a prettier GHC core viewer In-Reply-To: References: Message-ID: <1484247092.375955.845843760.72C86BA8@webmail.messagingengine.com> Hi Iavor, This sounds like a great idea, but it's not clear from the package description *how* the output is improved over -ddump-simpl. An example of the html output would be a great addition! Thanks! Eric On Thu, Jan 12, 2017, at 10:36, Iavor Diatchki wrote: > Hello, > > Over the holidays I wrote a small GHC plugin to help me do some low-level > optimizations of Haskell code. I thought it might be of use to other > people too, so please try it out! > > When enabled, the plugin will save the Core generated by GHC in JSON > format, and also render it in HTML for human inspection. > > The plugin is available on Hackage: > http://hackage.haskell.org/package/dump-core > > The instructions on how to use it are in the README file. > You may also read about it at the github page: > http://hackage.haskell.org/package/dump-core > > There are many things that could probably be improved, just let me know.
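Ben's wish for a tool that fuzzily compares pairs of core dumps could start from the plugin's JSON output. The field names below (`binds`, `binder`, `name`) are guesses at a schema for the sake of the sketch, not taken from dump-core's actual format:

```python
import json

def top_level_binders(dump):
    """Names of top-level binders in a parsed dump (assumed schema)."""
    return {b["binder"]["name"] for b in dump.get("binds", [])}

def compare_dumps(dump_a, dump_b):
    """A first, very coarse comparison: which binders appear on one side only."""
    a, b = top_level_binders(dump_a), top_level_binders(dump_b)
    return {"only_left": sorted(a - b), "only_right": sorted(b - a)}

# Two tiny in-memory dumps in the assumed schema:
before = json.loads('{"binds": [{"binder": {"name": "foo"}}, {"binder": {"name": "bar"}}]}')
after = json.loads('{"binds": [{"binder": {"name": "foo"}}, {"binder": {"name": "baz"}}]}')
print(compare_dumps(before, after))  # → {'only_left': ['bar'], 'only_right': ['baz']}
```

A real version would descend into the expressions and tolerate renamed uniques, which is where the "fuzzy" part comes in.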
> Also, if you are good at design, I could use some help making things look > prettier :) > > Happy hacking, > -Iavor > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Jan 12 21:23:33 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 12 Jan 2017 21:23:33 +0000 Subject: `dump-core` a prettier GHC core viewer In-Reply-To: References: Message-ID: Iavor Sounds good…but there are no instructions on what it does, screen shots, why one might want it, how to install, how to use… Both URLs below are the same Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Iavor Diatchki Sent: 12 January 2017 18:36 To: Haskell Cafe ; ghc-devs at haskell.org Subject: ANN: `dump-core` a prettier GHC core viewer Hello, Over the holidays I wrote a small GHC plugin to help me do some low-level optimizations of Haskell code. I thought it might be of use to other people too, so please try it out! When enabled, the plugin will save the Core generated by GHC in JSON format, and also render it in HTML for human inspection. The plugin is available on Hackage: http://hackage.haskell.org/package/dump-core The instructions on how to use it are in the README file. You may also read about it at the github page: http://hackage.haskell.org/package/dump-core There are many things that could probably be improved, just let me know. Also, if you are good at design, I could use some help making things look prettier :) Happy hacking, -Iavor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iavor.diatchki at gmail.com Thu Jan 12 22:18:32 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 12 Jan 2017 14:18:32 -0800 Subject: `dump-core` a prettier GHC core viewer In-Reply-To: References: Message-ID: Hello, sorry about the link mix-up, the second one was supposed to be a link to the GitHub: https://github.com/yav/dump-core/ The README.md, which is rendered on GitHub, has instructions on how to use the plugin, and I just updated it with some more information on how to use the rendered HTML. You can have a look at a sample rendered module here: http://yav.github.io/dump-core/example-output/Galua.OpcodeInterpreter.html The most striking thing everyone seems to notice first is that there are many variables that have the same name---this is because I am only rendering the string part of the names, without the uniques. The hovering behavior does use the uniques though, so that's what I usually use to disambiguate the variables. It would be easy enough to show the numbers if people would like that. I am sure that there are many other things that can be improved---if you have ideas / suggestions, please file an issue on github. Cheers, -Iavor On Thu, Jan 12, 2017 at 1:23 PM, Simon Peyton Jones wrote: > Iavor > > > > Sounds good…but there are no instructions on what it does, screen shots, > why one might want it, how to install, how to use… Both URLs below are > the same > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Iavor > Diatchki > *Sent:* 12 January 2017 18:36 > *To:* Haskell Cafe ; ghc-devs at haskell.org > *Subject:* ANN: `dump-core` a prettier GHC core viewer > > > > Hello, > > > > Over the holidays I wrote a small GHC plugin to help me do some low-level > optimizations of Haskell code. I thought it might be of use to other > people too, so please try it out! 
> > When enabled, the plugin will save the Core generated by GHC in JSON > format, and also render it in HTML for human inspection. > > > > The plugin is available on Hackage: > > http://hackage.haskell.org/package/dump-core > > > > > The instructions on how to use it are in the README file. > > You may also read about it at the github page: > > http://hackage.haskell.org/package/dump-core > > > > > There are many things that could probably be improved, just let me know. > Also, if you are good at design, I could use some help making things look > prettier :) > > > > Happy hacking, > > -Iavor > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Jan 12 22:58:03 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 12 Jan 2017 17:58:03 -0500 Subject: `dump-core` a prettier GHC core viewer In-Reply-To: References: Message-ID: <1484261883.27912.1.camel@joachim-breitner.de> Hi, On Thursday, 12.01.2017 at 14:18 -0800, Iavor Diatchki wrote: > http://yav.github.io/dump-core/example-output/Galua.OpcodeInterpreter > .html this is amazing! It should in no way sound diminishing if I say that I always wanted to create something like that (and I am sure I am not the only one who will say that :-)). Can your tool step forward and backward between dumps from different phases, correlating the corresponding entries? Thanks, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From iavor.diatchki at gmail.com Thu Jan 12 23:30:27 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 12 Jan 2017 15:30:27 -0800 Subject: `dump-core` a prettier GHC core viewer In-Reply-To: <1484261883.27912.1.camel@joachim-breitner.de> References: <1484261883.27912.1.camel@joachim-breitner.de> Message-ID: Hello, not really, the plugin does not do anything clever---it simply walks over the GHC core and renders whatever it deems necessary to JSON. The only extra bits it does is to make the unique names globally unique (I thought GHC already did that, but apparently not, perhaps that happens during tidying?). I was thinking of trying to do something like this across compilations (i.e., where you keep a history of all the files to compare how your changes to the source affected the core), but it hadn't occurred to me to try to do it for each phase. Please file a ticket, or even better if you have the time please feel free to hack on it. I was just finding myself staring at a lot of core, and wanted something a little easier to read, but with all/most of the information still available. It would be awesome to have a more clever tool that helps further with these sorts of low level optimizations---at present I find it to be a rather unpleasant task and so avoid it when I can :-) -Iavor On Thu, Jan 12, 2017 at 2:58 PM, Joachim Breitner wrote: > Hi, > > Am Donnerstag, den 12.01.2017, 14:18 -0800 schrieb Iavor Diatchki: > > http://yav.github.io/dump-core/example-output/Galua.OpcodeInterpreter > > .html > > this is amazing! It should in no way sound diminishing if I say that I > always wanted to create something like that (and I am sure I am not the > online one who will say that :-)). > > Can your tool step forward and backward between dumps from different > phases, correlating the corresponding entries? 
> > Thanks, > Joachim > > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Fri Jan 13 01:01:58 2017 From: sylvain at haskus.fr (Sylvain Henry) Date: Fri, 13 Jan 2017 02:01:58 +0100 Subject: `dump-core` a prettier GHC core viewer In-Reply-To: References: <1484261883.27912.1.camel@joachim-breitner.de> Message-ID: <780a5c99-c4a7-46eb-cff7-9d1cf41e53cf@haskus.fr> Hi, > It would be awesome to have a more clever tool that helps further with > these sorts of low level optimizations---at present I find it to be a > rather unpleasant task and so avoid it when I can :-) A few weeks ago I worked on a similar tool. I have just uploaded a demo: https://www.youtube.com/watch?v=sPu5UOYPKUw (it still needs a lot of work). It would be great to have a better core renderer like you did at some point (currently it just highlights it). Sylvain On 13/01/2017 00:30, Iavor Diatchki wrote: > Hello, > > not really, the plugin does not do anything clever---it simply walks > over the GHC core and renders whatever it deems necessary to JSON. > The only extra bits it does is to make the unique names globally > unique (I thought GHC already did that, but apparently not, perhaps > that happens during tidying?). > > I was thinking of trying to do something like this across compilations > (i.e., where you keep a history of all the files to compare how your > changes to the source affected the core), but it hadn't occurred to me > to try to do it for each phase. Please file a ticket, or even better > if you have the time please feel free to hack on it. 
I was just > finding myself staring at a lot of core, and wanted something a little > easier to read, but with all/most of the information still available. > > It would be awesome to have a more clever tool that helps further with > these sorts of low level optimizations---at present I find it to be a > rather unpleasant task and so avoid it when I can :-) > > -Iavor > > > > > > > > > > > On Thu, Jan 12, 2017 at 2:58 PM, Joachim Breitner > > wrote: > > Hi, > > Am Donnerstag, den 12.01.2017, 14:18 -0800 schrieb Iavor Diatchki: > > > http://yav.github.io/dump-core/example-output/Galua.OpcodeInterpreter > > > .html > > this is amazing! It should in no way sound diminishing if I say that I > always wanted to create something like that (and I am sure I am > not the > online one who will say that :-)). > > Can your tool step forward and backward between dumps from different > phases, correlating the corresponding entries? > > Thanks, > Joachim > > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • > https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de > • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 13 08:41:18 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 13 Jan 2017 08:41:18 +0000 Subject: Build failures Message-ID: recomp001 is failing on Harbormaster on OSX (only). Does anyone know why? See for example https://phabricator.haskell.org/harbormaster/build/18293/ Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Fri Jan 13 09:13:35 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 13 Jan 2017 09:13:35 +0000 Subject: Build failures In-Reply-To: References: Message-ID: Ben has a patch to fix it on phab - https://phabricator.haskell.org/D2964 Matt On Fri, Jan 13, 2017 at 8:41 AM, Simon Peyton Jones via ghc-devs wrote: > recomp001 is failing on Harbormaster on OSX (only). Does anyone know why? > See for example > > https://phabricator.haskell.org/harbormaster/build/18293/ > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From takenobu.hs at gmail.com Fri Jan 13 13:01:24 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Fri, 13 Jan 2017 22:01:24 +0900 Subject: Kind page of trac wiki Message-ID: Dear devs, May I update Kind page of trac wiki [1] as following ? - "#" is the kind of unboxed values. Things like Int# have kind #. + "#" is the kind of unlifted values. Things like Int# have kind #. Is this correct? (These pages [2][3] are explained as "unlifted values".) [1]: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Kinds [2]: https://ghc.haskell.org/trac/ghc/wiki/UnliftedDataTypes [3]: https://ghc.haskell.org/trac/ghc/wiki/NoSubKinds Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 13 13:28:28 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 13 Jan 2017 13:28:28 +0000 Subject: Kind page of trac wiki In-Reply-To: References: Message-ID: Yes, that looks right. But much is in flux with levity polymorphism! https://ghc.haskell.org/trac/ghc/wiki/LevityPolymorphism Thanks. 
We need people to keep improving the wiki; it tends to get out of date Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Takenobu Tani Sent: 13 January 2017 13:01 To: ghc-devs at haskell.org Subject: Kind page of trac wiki Dear devs, May I update Kind page of trac wiki [1] as following ? - "#" is the kind of unboxed values. Things like Int# have kind #. + "#" is the kind of unlifted values. Things like Int# have kind #. Is this correct? (These pages [2][3] are explained as "unlifted values".) [1]: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Kinds [2]: https://ghc.haskell.org/trac/ghc/wiki/UnliftedDataTypes [3]: https://ghc.haskell.org/trac/ghc/wiki/NoSubKinds Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Fri Jan 13 13:44:23 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Fri, 13 Jan 2017 22:44:23 +0900 Subject: Kind page of trac wiki In-Reply-To: References: Message-ID: Hi Simon, Thanks for the reply and explanation. I'll update the kind page and add explanation to refer to page of "LevityPolymorphism". Regards, Takenobu 2017-01-13 22:28 GMT+09:00 Simon Peyton Jones : > Yes, that looks right. But much is in flux with levity polymorphism! > > https://ghc.haskell.org/trac/ghc/wiki/LevityPolymorphism > > > > Thanks. We need people to keep improving the wiki; it tends to get out of > date > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Takenobu > Tani > *Sent:* 13 January 2017 13:01 > *To:* ghc-devs at haskell.org > *Subject:* Kind page of trac wiki > > > > Dear devs, > > > > May I update Kind page of trac wiki [1] as following ? > > > > - "#" is the kind of unboxed values. Things like Int# have kind #. > > + "#" is the kind of unlifted values. Things like Int# have kind #. > > > > Is this correct? > > > > > > (These pages [2][3] are explained as "unlifted values".) 
> > > > [1]: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Kinds > > [2]: https://ghc.haskell.org/trac/ghc/wiki/UnliftedDataTypes > > [3]: https://ghc.haskell.org/trac/ghc/wiki/NoSubKinds > > > > Regards, > > Takenobu > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Jan 13 13:58:08 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 13 Jan 2017 13:58:08 +0000 Subject: Travis Builds are broken Message-ID: A recent commit has caused the Travis builds to stop working. I think it was caused by https://phabricator.haskell.org/rGHCc2bd62ed62d2fae126819136d428989a7b4ddc79 Example failure Wrong exit code for plugins01()(expected 0 , actual 2 ) Stderr ( plugins01 ): ghc-stage2: /home/travis/build/ghc/ghc/libraries/ghci/dist-install/build/HSghci-8.1.o: unknown symbol `purgeObj' ghc-stage2: unable to load package `ghci-8.1' make[2]: *** [plugins01] Error 1 *** unexpected failure for plugins01(normal) Full log: https://s3.amazonaws.com/archive.travis-ci.org/jobs/190738798/log.txt Jon, could you perhaps take a look to see if the fix is obvious? Matt From matthewtpickering at gmail.com Fri Jan 13 15:21:46 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 13 Jan 2017 15:21:46 +0000 Subject: Exhaustiveness checking for pattern synonyms In-Reply-To: References: Message-ID: On Tue, Jan 10, 2017 at 4:05 PM, Simon Peyton Jones wrote: > Questions > > * What if there are multiple COMPLETE pragmas e.g. > {-# COMPLETE A, B, C #-} > {-# COMPLETE A, X, Y, Z #-} > Is that ok? I guess it should be! > > Will the pattern-match exhaustiveness check then succeed > if a function uses either set? > > What happens if you use a mixture of constructors in a match > (e.g. A, X, C, Z)? Presumably all bets are off? Yes this is fine. 
In the case you ask about, neither COMPLETE pragma matches, so the "best" one, as described in the error messages section, will be chosen. > > * Note that COMPLETE pragmas could be a new source of orphan modules > module M where > import N( pattern P, pattern Q ) > {-# COMPLETE P, Q #-} > where neither P nor Q is defined in M. Then every module that is > transitively "above" M would need to read M.hi just in case its > COMPLETE pragmas was relevant. > > * Point out in the spec that COMPLETE pragmas are entirely unchecked. > It's up to the programmer to get it right. > > * Typing. What does it mean for the types to "agree" with each other. > E.g A :: a -> [(a, Int)] > B :: b -> [(Int, b)] > Is this ok? Please say explicitly with examples. This would be ok as the type constructor of both result types is []. There are, I now see, examples which could never be used together but are currently accepted, e.g. P :: Int -> [Int] Q :: Int -> [Char] could be specified together in a COMPLETE pragma, but then the actual type checker will reject any usages of `P` and `Q` together for obvious reasons. I am not too worried about this, as I don't want to reimplement the type checker for pattern matches poorly -- a simple sanity check is the reason why there is any type checking at all for these pragmas. > * I didn't really didn't understand the "Error messages" section. > > I can't really help unless I know what you don't understand. The idea is simply that all the different sets of patterns are tried and that the results are prioritised by 1. Fewest uncovered clauses 2. Fewest redundant clauses 3. Fewest inaccessible clauses 4. Whether the match comes from a COMPLETE pragma or the built-in set of data constructors for a type constructor.
> Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew > | Pickering > | Sent: 22 November 2016 10:43 > | To: GHC developers > | Subject: Exhaustiveness checking for pattern synonyms > | > | Hello devs, > | > | I have implemented exhaustiveness checking for pattern synonyms. The idea is > | very simple, you specify a set of pattern synonyms (or data > | constructors) which are regarded as a complete match. > | The pattern match checker then uses this information in order to check > | whether a function covers all possibilities. > | > | Specification: > | > | https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs > | > | https://phabricator.haskell.org/D2669 > | https://phabricator.haskell.org/D2725 > | > | https://ghc.haskell.org/trac/ghc/ticket/8779 > | > | Matt > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell > | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C155eb2786cb040d8052908d412c453 > | b5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636154081815249356&sdata=MkQ > | FpwJWaTU%2BdEQSYEBjXLt80BrXLkBp9V8twdKB6BI%3D&reserved=0 I updated the wiki page quite a bit. Thanks Simon for the comments. Matt From simonpj at microsoft.com Fri Jan 13 15:49:47 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 13 Jan 2017 15:49:47 +0000 Subject: Exhaustiveness checking for pattern synonyms In-Reply-To: References: Message-ID: Thanks. | > * What if there are multiple COMPLETE pragmas e.g. | > {-# COMPLETE A, B, C #-} | > {-# COMPLETE A, X, Y, Z #-} | > Is that ok? I guess it should be! | > | > Will the pattern-match exhaustiveness check then succeed | > if a function uses either set? | > | > What happens if you use a mixture of constructors in a match | > (e.g. A, X, C, Z)? 
Presumably all bets are off? | | Yes this is fine. In the case you ask about then as neither COMPLETE pragma | will match then the "best" one as described in the error messages section | will be chosen. The wiki spec doesn't say this (presumably under "semantics"). Could it? | > * Note that COMPLETE pragmas could be a new source of orphan modules | > module M where | > import N( pattern P, pattern Q ) | > {-# COMPLETE P, Q #-} | > where neither P nor Q is defined in M. Then every module that is | > transitively "above" M would need to read M.hi just in case its | > COMPLETE pragmas was relevant. Can you say this in the spec? | > | > * Point out in the spec that COMPLETE pragmas are entirely unchecked. | > It's up to the programmer to get it right. Can you say this in the spec? Ah -- it's in "Discussion"... put it under "Semantics". | > | > * Typing. What does it mean for the types to "agree" with each other. | > E.g A :: a -> [(a, Int)] | > B :: b -> [(Int, b)] | > Is this ok? Please say explicitly with examples. | | This would be ok as the type constructor of both result types is []. Can you say this in the spec? | | > * I didn't really didn't understand the "Error messages" section. | > | > | | I can't really help unless I know what you don't understand. "The pattern match checker checks each set of patterns individually" Given a program, what are the "sets of patterns", precisely? | The idea is simply that all the different sets of patterns are tried and | that the results are prioritised by | | 1. Fewest uncovered clauses | 2. Fewest redundant clauses | 3. Fewest inaccessible clauses | 4. Whether the match comes from a COMPLETE pragma or the build in set of | data constructors for a type constructor. Some examples would be a big help. 
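For concreteness, the sort of example being asked for might look like this (hypothetical code in the proposed syntax; the pragma is deliberately unchecked, so completeness is the author's responsibility):

```haskell
{-# LANGUAGE PatternSynonyms #-}
module CompleteExample where

data T = MkT Int Bool

pattern P :: Int -> T
pattern P n <- MkT n _

pattern Q :: Bool -> T
pattern Q b <- MkT _ b

-- Two alternative complete sets of patterns for T.  Note that P and Q
-- mention different argument types; only the result type constructor
-- (here T) has to agree.
{-# COMPLETE P #-}
{-# COMPLETE Q #-}

-- Exhaustive under the first pragma, so no warning is expected:
f :: T -> Int
f (P n) = n

-- A match mixing patterns from different sets (or with MkT itself)
-- matches no single COMPLETE set; the checker would then report
-- against whichever set yields the fewest uncovered and redundant
-- clauses.
```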
Simon | | | | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | > | Matthew Pickering | > | Sent: 22 November 2016 10:43 | > | To: GHC developers | > | Subject: Exhaustiveness checking for pattern synonyms | > | | > | Hello devs, | > | | > | I have implemented exhaustiveness checking for pattern synonyms. | > | The idea is very simple, you specify a set of pattern synonyms (or | > | data | > | constructors) which are regarded as a complete match. | > | The pattern match checker then uses this information in order to | > | check whether a function covers all possibilities. | > | | > | Specification: | > | | > | https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs | > | | > | https://phabricator.haskell.org/D2669 | > | https://phabricator.haskell.org/D2725 | > | | > | https://ghc.haskell.org/trac/ghc/ticket/8779 | > | | > | Matt | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | | > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail | > | .haskell | > | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | > | | > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C155eb2786cb040d8052908 | > | d412c453 | > | b5%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636154081815249356&s | > | data=MkQ | > | FpwJWaTU%2BdEQSYEBjXLt80BrXLkBp9V8twdKB6BI%3D&reserved=0 | | I updated the wiki page quite a bit. Thanks Simon for the comments. | | Matt From simonpj at microsoft.com Sat Jan 14 00:33:46 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 14 Jan 2017 00:33:46 +0000 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: David, Edward Here’s my take on this thread about reflection. I’ll ignore Tagged and the ‘s’ parameter, and the proxy arguments, since they are incidental. I can finally see a reasonable path; I think there’s a potential GHC proposal here. 
Simon First thing: PLEASE let's give a Core rendering of whatever is proposed. If it's expressible in Core that's reassuring. If it requires an extension to Core, that's a whole different thing. Second. For any particular class, I think it's easy to express reify in Core. Example (in Core): reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable k = k |> co where co is a coercion that witnesses co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b) Third. This does not depend, and should not depend, on the fact that single-method classes are represented with a newtype. E.g. if we changed Typeable to be represented with a data type thus (in Core) data Typeable a = MkTypeable (TypeRep a) using data rather than newtype, then we could still write reifyTypable. reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). f (MkTypeable r) The efficiency of newtype is nice, but it’s not essential. Fourth. As you point out, reify# is far too polymorphic. Clearly you need reify# to be a class method! Something like this class Reifiable a where type RC a :: Constraint -- Short for Reified Constraint reify# :: forall r. (RC a => r) -> a -> r Now (in Core at least) we can make instances instance Reifiable (TypeRep a) where type RC (TypeRep a) = Typeable a reify# k = k |> co -- For a suitable co Now, we can’t write those instances in Haskell, but we could make the ‘deriving’ mechanism deal with it, thus: deriving instance Reifiable (Typeable a) You can supply a ‘where’ part if you like, but if you don’t GHC will fill in the implementation for you. It’ll check that Typeable is a single-method class; produce a suitable implementation (in Core, as above) for reify#, and a suitable instance for RC. Pretty simple. Now the solver can use those instances. 
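At a call site, the intended use would look something like the following sketch (purely illustrative: neither reify# nor Reifiable exists yet, and withTypeable is a made-up wrapper name, not a real function):

```haskell
-- Given  instance Reifiable (TypeRep a)  with  RC (TypeRep a) = Typeable a,
-- a runtime TypeRep would discharge a Typeable constraint directly,
-- with no unsafeCoerce in user code:
withTypeable :: TypeRep a -> (Typeable a => r) -> r
withTypeable rep k = reify# k rep
```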
There are lots of details · I’ve used a single parameter class and a type function, because the call site of reify# will provide no information about the ‘c’ in (c => r) argument. · What if some other class has the same method type? E.g. if someone wrote class MyTR a where op :: TypeRep a would that mess up the use of reify# for Typeable? Well it would if they also did deriving instance Reifiable (MyTR a) And there really is an ambiguity: what should (reify# k (tr :: TypeRep Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be entirely legitimate. · I suppose that another formulation might be to abstract over the constraint, rather than the method type, and use explicit type application at calls of reify#. So class Reifiable c where type RL c :: * reify# :: (c => r) -> RL c -> r Now all calls of reify# would have to look like reify# @(Typeable Int) k tr Maybe that’s acceptable. But it doesn’t seem as nice to me. · One could use functional dependencies and a 2-parameter type class, but I don’t think it would change anything much. If type functions work, they are more robust than fundeps. · One could abstract over the type constructor rather than the type. I see no advantage and some disadvantages class Reifiable t where type RC t :: * -> Constraint -- Short for Reified Constraint reify# :: forall r. (RC t a => r) -> t a -> r | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David | Feuer | Sent: 11 December 2016 05:01 | To: ghc-devs >; Edward Kmett > | Subject: Magical function to support reflection | | The following proposal (with fancier formatting and some improved | wording) can be viewed at | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport | | Using the Data.Reflection has some runtime costs. Notably, there can be no | inlining or unboxing of reified values. I think it would be nice to add a | GHC special to support it. 
I'll get right to the point of what I want, and | then give a bit of background about why. | | === What I want | | I propose the following absurdly over-general lie: | | reify# :: (forall s . c s a => t s r) -> a -> r | | `c` is assumed to be a single-method class with no superclasses whose | dictionary representation is exactly the same as the representation of `a`, | and `t s r` is assumed to be a newtype wrapper around `r`. In desugaring, | reify# f would be compiled to f at S, where S is a fresh type. I believe it's | necessary to use a fresh type to prevent specialization from mixing up | different reified values. | | === Background | | Let me set up a few pieces. These pieces are slightly modified from what the | package actually does to make things cleaner under the hood, but the | differences are fairly shallow. | | newtype Tagged s a = Tagged { unTagged :: a } | | unproxy :: (Proxy s -> a) -> Tagged s a | unproxy f = Tagged (f Proxy) | | class Reifies s a | s -> a where | reflect' :: Tagged s a | | -- For convenience | reflect :: forall s a proxy . Reifies s a => proxy s -> a reflect _ = | unTagged (reflect' :: Tagged s a) | | -- The key function--see below regarding implementation reify' :: (forall s | . Reifies s a => Tagged s r) -> a -> r | | -- For convenience | reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = | reify' (unproxy f) a | | The key idea of reify' is that something of type | | forall s . Reifies s a => Tagged s r | | is represented in memory exactly the same as a function of type | | a -> r | | So we can currently use unsafeCoerce to interpret one as the other. | Following the general approach of the library, we can do this as such: | | newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) reify' :: | (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = unsafeCoerce | (Magic f) | | This certainly works. The trouble is that any knowledge about what is | reflected is totally lost. 
For instance, if I write | | reify 12 $ \p -> reflect p + 3 | | then GHC will not see, at compile time, that the result is 15. If I write | | reify (+1) $ \p -> reflect p x | | then GHC will never inline the application of (+1). Etc. | | I'd like to replace reify' with reify# to avoid this problem. | | Thanks, | David Feuer | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C488bf00986e34ac0833208d42182c4 | 7a%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636170292905032831&sdata=quv | Cny8vD%2Fw%2BjIIypEtungW3OWbVmCQxFAK4%2FXrX%2Bb8%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sat Jan 14 00:49:53 2017 From: david.feuer at gmail.com (David Feuer) Date: Fri, 13 Jan 2017 19:49:53 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: I need to look through a bit more of this, but explicit type application certainly can be avoided using Tagged. Once we get the necessary magic, libraries will be able to come up with whatever interfaces they like. My main concern about the generality of reify# :: forall r. (RC a => r) -> a -> r (as with the primop type Edward came up with) is that it lacks the `forall s` safety mechanism of the reflection library. Along with its key role in ensuring class coherence[*], that mechanism also makes it clear what specialization is and is not allowed to do with reified values. Again, I'm not sure it can mess up the simpler/more general form you and Edward propose, but it makes me nervous. [*] Coherence: as long as an instance of Reifies S A exists for some concrete S::K, users can't incoherently write a polymorphic Reifies instance for s::K. 
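Spelled out as a complete compilable program, the library encoding under discussion is the following (the same unsafeCoerce trick as in the proposal; it relies on the dictionary of the single-method class Reifies having the same representation as a value of type a, and the names follow the thread rather than the released reflection package):

```haskell
{-# LANGUAGE RankNTypes, MultiParamTypeClasses, FunctionalDependencies,
             ScopedTypeVariables #-}
module Main where

import Data.Proxy (Proxy (..))
import Unsafe.Coerce (unsafeCoerce)

newtype Tagged s a = Tagged { unTagged :: a }

class Reifies s a | s -> a where
  reflect' :: Tagged s a

reflect :: forall s a proxy. Reifies s a => proxy s -> a
reflect _ = unTagged (reflect' :: Tagged s a)

unproxy :: (Proxy s -> a) -> Tagged s a
unproxy f = Tagged (f Proxy)

-- A value of type (forall s. Reifies s a => Tagged s r) is represented
-- in memory as a function from the dictionary (i.e. from a) to r, so
-- we may coerce one into the other.
newtype Magic a r = Magic (forall s. Reifies s a => Tagged s r)

reify' :: (forall s. Reifies s a => Tagged s r) -> a -> r
reify' f = unsafeCoerce (Magic f)

reify :: a -> (forall s. Reifies s a => Proxy s -> r) -> r
reify a f = reify' (unproxy f) a

main :: IO ()
main = print (reify (12 :: Int) (\p -> reflect p + 3))
-- prints 15 at runtime; the point of the thread is that GHC cannot
-- see that result at compile time
```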
On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" wrote: David, Edward Here’s my take on this thread about reflection. I’ll ignore Tagged and the ‘s’ parameter, and the proxy arguments, since they are incidental. I can finally see a reasonable path; I think there’s a potential GHC proposal here. Simon *First thing*: PLEASE let's give a Core rendering of whatever is proposed. If it's expressible in Core that's reassuring. If it requires an extension to Core, that's a whole different thing. *Second*. For any *particular* class, I think it's easy to express reify in Core. Example (in Core): reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable k = k |> co where co is a coercion that witnesses co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b) *Third. *This does not depend, and should not depend, on the fact that single-method classes are represented with a newtype. E.g. if we changed Typeable to be represented with a data type thus (in Core) data Typeable a = MkTypeable (TypeRep a) using data rather than newtype, then we could still write reifyTypable. reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). f (MkTypeable r) The efficiency of newtype is nice, but it’s not essential. *Fourth*. As you point out, reify# is far too polymorphic. *Clearly you need reify# to be a class method!* Something like this class Reifiable a where type RC a :: Constraint -- Short for Reified Constraint reify# :: forall r. (RC a => r) -> a -> r Now (in Core at least) we can make instances instance Reifiable (TypeRep a) where type RC (TypeRep a) = Typeable a reify# k = k |> co -- For a suitable co Now, we can’t write those instances in Haskell, but we could make the ‘deriving’ mechanism deal with it, thus: deriving instance Reifiable (Typeable a) You can supply a ‘where’ part if you like, but if you don’t GHC will fill in the implementation for you. 
It’ll check that Typeable is a single-method class; produce a suitable implementation (in Core, as above) for reify#, and a suitable instance for RC. Pretty simple. Now the solver can use those instances. There are lots of details · I’ve used a single parameter class and a type function, because the call site of reify# will provide no information about the ‘c’ in (c => r) argument. · What if some other class has the same method type? E.g. if someone wrote class MyTR a where op :: TypeRep a would that mess up the use of reify# for Typeable? Well it would if they also did deriving instance Reifiable (MyTR a) And there really is an ambiguity: what should (reify# k (tr :: TypeRep Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be entirely legitimate. · I suppose that another formulation might be to abstract over the constraint, rather than the method type, and use explicit type application at calls of reify#. So class Reifiable c where type RL c :: * reify# :: (c => r) -> RL c -> r Now all calls of reify# would have to look like reify# @(Typeable Int) k tr Maybe that’s acceptable. But it doesn’t seem as nice to me. · One could use functional dependencies and a 2-parameter type class, but I don’t think it would change anything much. If type functions work, they are more robust than fundeps. · One could abstract over the type constructor rather than the type. I see no advantage and some disadvantages class Reifiable t where type RC t :: * -> Constraint -- Short for Reified Constraint reify# :: forall r. 
(RC t a => r) -> t a -> r | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org ] On Behalf Of David | Feuer | Sent: 11 December 2016 05:01 | To: ghc-devs ; Edward Kmett | Subject: Magical function to support reflection | | The following proposal (with fancier formatting and some improved | wording) can be viewed at | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport | | Using the Data.Reflection has some runtime costs. Notably, there can be no | inlining or unboxing of reified values. I think it would be nice to add a | GHC special to support it. I'll get right to the point of what I want, and | then give a bit of background about why. | | === What I want | | I propose the following absurdly over-general lie: | | reify# :: (forall s . c s a => t s r) -> a -> r | | `c` is assumed to be a single-method class with no superclasses whose | dictionary representation is exactly the same as the representation of `a`, | and `t s r` is assumed to be a newtype wrapper around `r`. In desugaring, | reify# f would be compiled to f at S, where S is a fresh type. I believe it's | necessary to use a fresh type to prevent specialization from mixing up | different reified values. | | === Background | | Let me set up a few pieces. These pieces are slightly modified from what the | package actually does to make things cleaner under the hood, but the | differences are fairly shallow. | | newtype Tagged s a = Tagged { unTagged :: a } | | unproxy :: (Proxy s -> a) -> Tagged s a | unproxy f = Tagged (f Proxy) | | class Reifies s a | s -> a where | reflect' :: Tagged s a | | -- For convenience | reflect :: forall s a proxy . Reifies s a => proxy s -> a reflect _ = | unTagged (reflect' :: Tagged s a) | | -- The key function--see below regarding implementation reify' :: (forall s | . Reifies s a => Tagged s r) -> a -> r | | -- For convenience | reify :: a -> (forall s . 
Reifies s a => Proxy s -> r) -> r reify a f = | reify' (unproxy f) a | | The key idea of reify' is that something of type | | forall s . Reifies s a => Tagged s r | | is represented in memory exactly the same as a function of type | | a -> r | | So we can currently use unsafeCoerce to interpret one as the other. | Following the general approach of the library, we can do this as such: | | newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) reify' :: | (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = unsafeCoerce | (Magic f) | | This certainly works. The trouble is that any knowledge about what is | reflected is totally lost. For instance, if I write | | reify 12 $ \p -> reflect p + 3 | | then GHC will not see, at compile time, that the result is 15. If I write | | reify (+1) $ \p -> reflect p x | | then GHC will never inline the application of (+1). Etc. | | I'd like to replace reify' with reify# to avoid this problem. | | Thanks, | David Feuer | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url= http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com% 7C488bf00986e34ac0833208d42182c4 | 7a%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0% 7C636170292905032831&sdata=quv | Cny8vD%2Fw%2BjIIypEtungW3OWbVmCQxFAK4%2FXrX%2Bb8%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From benno.fuenfstueck at gmail.com Sat Jan 14 10:25:38 2017 From: benno.fuenfstueck at gmail.com (=?UTF-8?B?QmVubm8gRsO8bmZzdMO8Y2s=?=) Date: Sat, 14 Jan 2017 10:25:38 +0000 Subject: Exhaustiveness checking for pattern synonyms In-Reply-To: References: Message-ID: An idea: could we perhaps use the same syntax as for MINIMAL pragmas also for COMPLETE pragmas, since those two feel very similar to me? 
So instead of multiple pragmas, let's have {-# COMPLETE (Pat1, Pat2) | (Pat3, Pat4) #-}, just like with {-# MINIMAL #-}? Regards, Benno Simon Peyton Jones via ghc-devs wrote on Fri, 13 Jan 2017 at 16:50: > Thanks. > > | > * What if there are multiple COMPLETE pragmas e.g. > | > {-# COMPLETE A, B, C #-} > | > {-# COMPLETE A, X, Y, Z #-} > | > Is that ok? I guess it should be! > | > > | > Will the pattern-match exhaustiveness check then succeed > | > if a function uses either set? > | > > | > What happens if you use a mixture of constructors in a match > | > (e.g. A, X, C, Z)? Presumably all bets are off? > | > | Yes this is fine. In the case you ask about, as neither COMPLETE pragma > | will match, the "best" one as described in the error messages section > | will be chosen. > > The wiki spec doesn't say this (presumably under "semantics"). Could it? > > | > * Note that COMPLETE pragmas could be a new source of orphan modules > | > module M where > | > import N( pattern P, pattern Q ) > | > {-# COMPLETE P, Q #-} > | > where neither P nor Q is defined in M. Then every module that is > | > transitively "above" M would need to read M.hi just in case its > | > COMPLETE pragmas were relevant. > > Can you say this in the spec? > > | > > | > * Point out in the spec that COMPLETE pragmas are entirely unchecked. > | > It's up to the programmer to get it right. > > Can you say this in the spec? Ah -- it's in "Discussion"... put it under > "Semantics". > > | > > | > * Typing. What does it mean for the types to "agree" with each other. > | > E.g. A :: a -> [(a, Int)] > | > B :: b -> [(Int, b)] > | > Is this ok? Please say explicitly with examples. > | > | This would be ok as the type constructor of both result types is []. > > Can you say this in the spec? > > | > | > * I really didn't understand the "Error messages" section. > | > > | > > | > > | I can't really help unless I know what you don't understand.
> > "The pattern match checker checks each set of patterns individually" > > Given a program, what are the "sets of patterns", precisely? > > | The idea is simply that all the different sets of patterns are tried and > | that the results are prioritised by > | > | 1. Fewest uncovered clauses > | 2. Fewest redundant clauses > | 3. Fewest inaccessible clauses > | 4. Whether the match comes from a COMPLETE pragma or the built-in set of > | data constructors for a type constructor. > > Some examples would be a big help. > > Simon > > | > | > | > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | > | Matthew Pickering > | > | Sent: 22 November 2016 10:43 > | > | To: GHC developers > | > | Subject: Exhaustiveness checking for pattern synonyms > | > | > | > | Hello devs, > | > | > | > | I have implemented exhaustiveness checking for pattern synonyms. > | > | The idea is very simple: you specify a set of pattern synonyms (or > | > | data > | > | constructors) which are regarded as a complete match. > | > | The pattern match checker then uses this information in order to > | > | check whether a function covers all possibilities.
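To make the discussion above concrete, here is a small self-contained example of the feature being specified. The `Nat`, `Zero`, and `Succ` names are invented for illustration; the pragma follows the syntax from the wiki spec, and with -Wincomplete-patterns enabled the match in `toInt` would otherwise warn that `MkNat _` is not covered:

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

newtype Nat = MkNat Int

unSucc :: Nat -> Maybe Nat
unSucc (MkNat i)
  | i > 0     = Just (MkNat (i - 1))
  | otherwise = Nothing

pattern Zero :: Nat
pattern Zero = MkNat 0

pattern Succ :: Nat -> Nat
pattern Succ n <- (unSucc -> Just n)
  where
    Succ (MkNat m) = MkNat (m + 1)

-- Declare that matching on Zero and Succ together is exhaustive,
-- so the checker does not ask for a MkNat catch-all case.
{-# COMPLETE Zero, Succ #-}

toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n
```

Several COMPLETE pragmas may be attached to the same type, one per complete set, which is where the question about multiple pragmas comes in.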
> | > | > | > | Specification: > | > | > | > | https://ghc.haskell.org/trac/ghc/wiki/PatternSynonyms/CompleteSigs > | > | > | > | https://phabricator.haskell.org/D2669 > | > | https://phabricator.haskell.org/D2725 > | > | > | > | https://ghc.haskell.org/trac/ghc/ticket/8779 > | > | > | > | Matt > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > | > | I updated the wiki page quite a bit. Thanks Simon for the comments. > | > | Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shea at shealevy.com Sat Jan 14 15:34:50 2017 From: shea at shealevy.com (Shea Levy) Date: Sat, 14 Jan 2017 10:34:50 -0500 Subject: Including remote-iserv upstream? Message-ID: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Hi Simon, devs, As part of my work to get TH working when cross-compiling to iOS, I've developed remote-iserv [1] (not yet on hackage), a set of libraries for letting GHC communicate with an external interpreter that may be on another machine. So far, there are only three additions of note on top of what the ghci library offers: 1. The remote-iserv protocol has facilities for the host sending archives and object files the target doesn't have (dynlibs not yet implemented but there's no reason they can't be).
This works by having the server send back a Bool after a loadObj or loadArchive indicating whether it needs the object sent, and then just reading it off the Pipe. 2. The remote-iserv lib abstracts over how the Pipe it communicates over is obtained. One could imagine e.g. an ssh-based implementation that just uses stdin and stdout* for the communication, the implementation I've actually tested on is a TCP server advertised over bonjour. 3. There is a protocol version included to address forwards compatibility concerns. As the library currently stands, it works for my use case. However, there would be a number of benefits if it were included with ghc (and used for local iserv as well): 1. Reduced code duplication (the server side copies iserv/src/Main.hs pretty heavily) 2. Reduced overhead keeping up to date with iserv protocol changes 3. No need for an extra client-side process, GHC can just open the Pipe itself 4. Proper library distribution in the cross-compiling case. The client needs to be linked with the ghci lib built by the stage0 compiler, as it runs on the build machine, while the server needs to be linked with the ghci lib built by the stage1 compiler. With a distribution created by 'make install', we only get the ghci lib for the target. Currently, I'm working around this by just using the ghci lib of the bootstrap compiler, which in my case is built from the same source checkout, but of course this isn't correct. If these libs were upstream, we'd only need one version of the client lib exposed and one version of the server lib exposed and could have them be for the build machine and the target, respectively 5. Better haskell hackers than I invested in the code ;) Thoughts on this? Would this be welcome upstream in some form? Thanks, Shea * Note that, in the general case, having the server process's stdio be the same as the compiler's (as we have in the local-iserv case) is not possible. 
Future work includes adding something to the protocol to allow forwarding stdio over the protocol pipe, to make GHCi usable without watching the *server*'s console. [1]: https://github.com/obsidiansystems/remote-iserv -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From alan.zimm at gmail.com Sat Jan 14 15:50:57 2017 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 14 Jan 2017 17:50:57 +0200 Subject: Including remote-iserv upstream? In-Reply-To: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> References: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: As a matter of interest, are you making use of `createIservProcessHook` which allows you to set the FDs to be used when communicating with the iserv process? -------------- next part -------------- An HTML attachment was scrubbed... URL: From shea at shealevy.com Sat Jan 14 18:10:55 2017 From: shea at shealevy.com (Shea Levy) Date: Sat, 14 Jan 2017 13:10:55 -0500 Subject: Including remote-iserv upstream? In-Reply-To: References: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <87a8atms0g.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> No, as it currently exists it has to create a ProcessHandle, and I have to layer some stuff on top of the iserv protocol anyway, so it wouldn't really help much. I just use -pgmi to point to the client executable. ~Shea Alan & Kim Zimmerman writes: > As a matter of interest, are you making use of `createIservProcessHook` > which allows you to set the FDs to be used when communicating with the > iserv process?
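The loadObj handshake from point 1 of Shea's original message can be modelled roughly as follows. This is purely illustrative: the names are invented, a Map stands in for the target's state, and the real remote-iserv code exchanges the Bool reply and the object bytes over the Pipe rather than through shared data:

```haskell
import qualified Data.Map as M

-- Objects the target already has, keyed by path (a String stands in
-- for the object's bytes in this sketch).
type ObjCache = M.Map FilePath String

-- Server side: reply True when the client must stream the object over.
needsUpload :: ObjCache -> FilePath -> Bool
needsUpload cache path = M.notMember path cache

-- Client side: after issuing loadObj, consult the server's Bool reply
-- and ship the bytes only when asked; the returned cache models the
-- target's state after the exchange.
loadObj :: ObjCache -> FilePath -> String -> ObjCache
loadObj cache path bytes
  | needsUpload cache path = M.insert path bytes cache
  | otherwise              = cache
```

The point of the Bool reply is that an object already present on the target (for example, shipped on a previous load) is never re-sent.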
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From marlowsd at gmail.com Mon Jan 16 09:05:01 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 16 Jan 2017 09:05:01 +0000 Subject: Including remote-iserv upstream? In-Reply-To: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> References: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: Absolutely, let's get this code upstream. Just put it up on Phabricator and I'll be happy to review. I recall that we wanted to split up the ghci lib into modules that are compiled with stage0 (the client) and modules compiled with stage1 (the server). Is that a part of your plans? I think it would be a good cleanup. Cheers Simon On 14 January 2017 at 15:34, Shea Levy wrote: > Hi Simon, devs, > > As part of my work to get TH working when cross-compiling to iOS, I've > developed remote-iserv [1] (not yet on hackage), a set of libraries for > letting GHC communicate with an external interpreter that may be on > another machine. So far, there are only three additions of note on top > of what the ghci library offers: > > 1. The remote-iserv protocol has facilities for the host sending > archives and object files the target doesn't have (dynlibs not yet > implemented but there's no reason they can't be). This works by > having the server send back a Bool after a loadObj or loadArchive > indicating whether it needs the object sent, and then just reading it > off the Pipe. > 2. The remote-iserv lib abstracts over how the Pipe it communicates over > is obtained. One could imagine e.g. 
an ssh-based implementation that > just uses stdin and stdout* for the communication, the implementation > I've actually tested on is a TCP server advertised over bonjour. > 3. There is a protocol version included to address forwards > compatibility concerns. > > As the library currently stands, it works for my use case. However, > there would be a number of benefits if it were included with ghc (and > used for local iserv as well): > > 1. Reduced code duplication (the server side copies iserv/src/Main.hs > pretty heavily) > 2. Reduced overhead keeping up to date with iserv protocol changes > 3. No need for an extra client-side process, GHC can just open the Pipe > itself > 4. Proper library distribution in the cross-compiling case. The client > needs to be linked with the ghci lib built by the stage0 compiler, as > it runs on the build machine, while the server needs to be linked > with the ghci lib built by the stage1 compiler. With a distribution > created by 'make install', we only get the ghci lib for the > target. Currently, I'm working around this by just using the ghci lib > of the bootstrap compiler, which in my case is built from the same > source checkout, but of course this isn't correct. If these libs were > upstream, we'd only need one version of the client lib exposed and > one version of the server lib exposed and could have them be for the > build machine and the target, respectively > 5. Better haskell hackers than I invested in the code ;) > > Thoughts on this? Would this be welcome upstream in some form? > > Thanks, > Shea > > * Note that, in the general case, having the server process's stdio be > the same as the compiler's (as we have in the local-iserv case) is not > possible. Future work includes adding something to the protocol to > allow forwarding stdio over the protocol pipe, to make GHCi usable > without watching the *server*'s console. 
> > [1]: https://github.com/obsidiansystems/remote-iserv > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shea at shealevy.com Mon Jan 16 15:31:32 2017 From: shea at shealevy.com (Shea Levy) Date: Mon, 16 Jan 2017 10:31:32 -0500 Subject: Including remote-iserv upstream? In-Reply-To: References: <87d1fpmz8l.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <87h94zqawb.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> OK, will do, thanks! Simon Marlow writes: > Absolutely, let's get this code upstream. Just put it up on Phabricator > and I'll be happy to review. > > I recall that we wanted to split up the ghci lib into modules that are > compiled with stage0 (the client) and modules compiled with stage1 (the > server). Is that a part of your plans? I think it would be a good cleanup. > > Cheers > Simon > > On 14 January 2017 at 15:34, Shea Levy wrote: > >> Hi Simon, devs, >> >> As part of my work to get TH working when cross-compiling to iOS, I've >> developed remote-iserv [1] (not yet on hackage), a set of libraries for >> letting GHC communicate with an external interpreter that may be on >> another machine. So far, there are only three additions of note on top >> of what the ghci library offers: >> >> 1. The remote-iserv protocol has facilities for the host sending >> archives and object files the target doesn't have (dynlibs not yet >> implemented but there's no reason they can't be). This works by >> having the server send back a Bool after a loadObj or loadArchive >> indicating whether it needs the object sent, and then just reading it >> off the Pipe. >> 2. The remote-iserv lib abstracts over how the Pipe it communicates over >> is obtained. One could imagine e.g. an ssh-based implementation that >> just uses stdin and stdout* for the communication, the implementation >> I've actually tested on is a TCP server advertised over bonjour. >> 3. 
There is a protocol version included to address forwards >> compatibility concerns. >> >> As the library currently stands, it works for my use case. However, >> there would be a number of benefits if it were included with ghc (and >> used for local iserv as well): >> >> 1. Reduced code duplication (the server side copies iserv/src/Main.hs >> pretty heavily) >> 2. Reduced overhead keeping up to date with iserv protocol changes >> 3. No need for an extra client-side process, GHC can just open the Pipe >> itself >> 4. Proper library distribution in the cross-compiling case. The client >> needs to be linked with the ghci lib built by the stage0 compiler, as >> it runs on the build machine, while the server needs to be linked >> with the ghci lib built by the stage1 compiler. With a distribution >> created by 'make install', we only get the ghci lib for the >> target. Currently, I'm working around this by just using the ghci lib >> of the bootstrap compiler, which in my case is built from the same >> source checkout, but of course this isn't correct. If these libs were >> upstream, we'd only need one version of the client lib exposed and >> one version of the server lib exposed and could have them be for the >> build machine and the target, respectively >> 5. Better haskell hackers than I invested in the code ;) >> >> Thoughts on this? Would this be welcome upstream in some form? >> >> Thanks, >> Shea >> >> * Note that, in the general case, having the server process's stdio be >> the same as the compiler's (as we have in the local-iserv case) is not >> possible. Future work includes adding something to the protocol to >> allow forwarding stdio over the protocol pipe, to make GHCi usable >> without watching the *server*'s console. >> >> [1]: https://github.com/obsidiansystems/remote-iserv >> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From simonpj at microsoft.com Mon Jan 16 17:00:17 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 16 Jan 2017 17:00:17 +0000 Subject: [Diffusion] [Build Failed] rGHC5a9a1738023a: Refine exprOkForSpeculation In-Reply-To: <20170116165212.27184.76068.0BE9A3FE@phabricator.haskell.org> References: <20170116165212.27184.76068.0BE9A3FE@phabricator.haskell.org> Message-ID: I'm sorry but I cannot make head or tail of the offending build log here. I validated from scratch at my end. Could it be a Harbormaster error? Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 16 January 2017 16:52 To: Simon Peyton Jones Subject: [Diffusion] [Build Failed] rGHC5a9a1738023a: Refine exprOkForSpeculation Harbormaster failed to build B13227: rGHC5a9a1738023a: Refine exprOkForSpeculation! BRANCHES master USERS simonpj (Author) O12 (Auditor) O20 (Auditor) COMMIT https://phabricator.haskell.org/rGHC5a9a1738023a EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj, Harbormaster -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Mon Jan 16 17:05:09 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 16 Jan 2017 17:05:09 +0000 Subject: [Diffusion] [Build Failed] rGHC5a9a1738023a: Refine exprOkForSpeculation In-Reply-To: References: <20170116165212.27184.76068.0BE9A3FE@phabricator.haskell.org> Message-ID: I restarted the build to see if it is a transient failure. I haven't seen errors like that before. On Mon, Jan 16, 2017 at 5:00 PM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I’m sorry but I cannot make head or tail of the offending build log here. > I validated from scratch at my end. Could it be a Harbormaster error? 
> > > > Simon > > > > *From:* noreply at phabricator.haskell.org [mailto:noreply at phabricator. > haskell.org] > *Sent:* 16 January 2017 16:52 > *To:* Simon Peyton Jones > *Subject:* [Diffusion] [Build Failed] rGHC5a9a1738023a: Refine > exprOkForSpeculation > > > > Harbormaster failed to build B13227: rGHC5a9a1738023a: Refine > exprOkForSpeculation! > > > > *BRANCHES* > > master > > > > *USERS* > > simonpj (Author) > O12 (Auditor) > O20 (Auditor) > > > > *COMMIT* > > https://phabricator.haskell.org/rGHC5a9a1738023a > > > > *EMAIL PREFERENCES* > > https://phabricator.haskell.org/settings/panel/emailpreferences/ > > > > *To: *simonpj, Harbormaster > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 17 08:13:34 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 17 Jan 2017 08:13:34 +0000 Subject: Windows build Message-ID: Hi Tamar The current state of a clean Windows build has improved - but still has multiple failures. See below Thanks for your work on this. 
I can send more details if that'd be helpful Simon

Unexpected failures:
   plugins/plugins07.run  plugins07 [bad exit code] (normal)
   plugins/T10420.run     T10420 [bad exit code] (normal)
   plugins/T10294a.run    T10294a [bad exit code] (normal)
   plugins/T11244.run     T11244 [bad stderr] (normal)

Framework failures:
   backpack/cabal/bkpcabal03/bkpcabal03.run   bkpcabal03 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'autogen')
   plugins/plugins07.run                      plugins07 [normal] (pre_cmd failed: 2)
   plugins/T10420.run                         T10420 [normal] (pre_cmd failed: 2)
   plugins/T10294a.run                        T10294a [normal] (pre_cmd failed: 2)
   plugins/plugins07.run                      plugins07 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'rule-defining-plugin-0.1')
   plugins/T11244.run                         T11244 [normal] (pre_cmd failed: 2)
   safeHaskell/check/pkg01/ImpSafeOnly03.run  ImpSafeOnly03 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'safePkg01-1.0-COK8JB4prM8EuKYKd3XaX3')

-------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 17 15:45:58 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 17 Jan 2017 15:45:58 +0000 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: David says that this paper is relevant http://okmij.org/ftp/Haskell/tr-15-04.pdf Simon From: David Feuer [mailto:david.feuer at gmail.com] Sent: 14 January 2017 00:50 To: Simon Peyton Jones Cc: ghc-devs ; Edward Kmett Subject: RE: Magical function to support reflection I need to look through a bit more of this, but explicit type application certainly can be avoided using Tagged. Once we get the necessary magic, libraries will be able to come up with whatever interfaces they like. My main concern about the generality of reify# :: forall r. (RC a => r) -> a -> r (as with the primop type Edward came up with) is that it lacks the `forall s` safety mechanism of the reflection library.
Along with its key role in ensuring class coherence[*], that mechanism also makes it clear what specialization is and is not allowed to do with reified values. Again, I'm not sure it can mess up the simpler/more general form you and Edward propose, but it makes me nervous. [*] Coherence: as long as an instance of Reifies S A exists for some concrete S::K, users can't incoherently write a polymorphic Reifies instance for s::K. On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" wrote: David, Edward Here’s my take on this thread about reflection. I’ll ignore Tagged and the ‘s’ parameter, and the proxy arguments, since they are incidental. I can finally see a reasonable path; I think there’s a potential GHC proposal here. Simon First thing: PLEASE let's give a Core rendering of whatever is proposed. If it's expressible in Core that's reassuring. If it requires an extension to Core, that's a whole different thing. Second. For any particular class, I think it's easy to express reify in Core. Example (in Core):

reifyTypeable :: (Typeable a => b) -> TypeRep a -> b
reifyTypeable k = k |> co

where co is a coercion that witnesses

co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b)

Third. This does not depend, and should not depend, on the fact that single-method classes are represented with a newtype. E.g. if we changed Typeable to be represented with a data type thus (in Core)

data Typeable a = MkTypeable (TypeRep a)

using data rather than newtype, then we could still write reifyTypeable.

reifyTypeable :: (Typeable a => b) -> TypeRep a -> b
reifyTypeable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). f (MkTypeable r)

The efficiency of newtype is nice, but it’s not essential. Fourth. As you point out, reify# is far too polymorphic. Clearly you need reify# to be a class method! Something like this

class Reifiable a where
  type RC a :: Constraint  -- Short for Reified Constraint
  reify# :: forall r.
(RC a => r) -> a -> r Now (in Core at least) we can make instances instance Reifiable (TypeRep a) where type RC (TypeRep a) = Typeable a reify# k = k |> co -- For a suitable co Now, we can’t write those instances in Haskell, but we could make the ‘deriving’ mechanism deal with it, thus: deriving instance Reifiable (Typeable a) You can supply a ‘where’ part if you like, but if you don’t GHC will fill in the implementation for you. It’ll check that Typeable is a single-method class; produce a suitable implementation (in Core, as above) for reify#, and a suitable instance for RC. Pretty simple. Now the solver can use those instances. There are lots of details • I’ve used a single parameter class and a type function, because the call site of reify# will provide no information about the ‘c’ in (c => r) argument. • What if some other class has the same method type? E.g. if someone wrote class MyTR a where op :: TypeRep a would that mess up the use of reify# for Typeable? Well it would if they also did deriving instance Reifiable (MyTR a) And there really is an ambiguity: what should (reify# k (tr :: TypeRep Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be entirely legitimate. • I suppose that another formulation might be to abstract over the constraint, rather than the method type, and use explicit type application at calls of reify#. So class Reifiable c where type RL c :: * reify# :: (c => r) -> RL c -> r Now all calls of reify# would have to look like reify# @(Typeable Int) k tr Maybe that’s acceptable. But it doesn’t seem as nice to me. • One could use functional dependencies and a 2-parameter type class, but I don’t think it would change anything much. If type functions work, they are more robust than fundeps. • One could abstract over the type constructor rather than the type. I see no advantage and some disadvantages class Reifiable t where type RC t :: * -> Constraint -- Short for Reified Constraint reify# :: forall r. 
(RC t a => r) -> t a -> r | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David | Feuer | Sent: 11 December 2016 05:01 | To: ghc-devs ; Edward Kmett | Subject: Magical function to support reflection | | The following proposal (with fancier formatting and some improved | wording) can be viewed at | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport | | Using Data.Reflection has some runtime costs. Notably, there can be no | inlining or unboxing of reified values. I think it would be nice to add a | GHC special to support it. I'll get right to the point of what I want, and | then give a bit of background about why. | | === What I want | | I propose the following absurdly over-general lie: | | reify# :: (forall s . c s a => t s r) -> a -> r | | `c` is assumed to be a single-method class with no superclasses whose | dictionary representation is exactly the same as the representation of `a`, | and `t s r` is assumed to be a newtype wrapper around `r`. In desugaring, | reify# f would be compiled to f @S, where S is a fresh type. I believe it's | necessary to use a fresh type to prevent specialization from mixing up | different reified values. | | === Background | | Let me set up a few pieces. These pieces are slightly modified from what the | package actually does to make things cleaner under the hood, but the | differences are fairly shallow. | | newtype Tagged s a = Tagged { unTagged :: a } | | unproxy :: (Proxy s -> a) -> Tagged s a | unproxy f = Tagged (f Proxy) | | class Reifies s a | s -> a where | reflect' :: Tagged s a | | -- For convenience
| reflect :: forall s a proxy . Reifies s a => proxy s -> a
| reflect _ = unTagged (reflect' :: Tagged s a) | | -- The key function--see below regarding implementation
| reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r | | -- For convenience | reify :: a -> (forall s .
Reifies s a => Proxy s -> r) -> r
| reify a f = reify' (unproxy f) a | | The key idea of reify' is that something of type | | forall s . Reifies s a => Tagged s r | | is represented in memory exactly the same as a function of type | | a -> r | | So we can currently use unsafeCoerce to interpret one as the other. | Following the general approach of the library, we can do this as such: | | newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r)
| reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r
| reify' f = unsafeCoerce (Magic f) | | This certainly works. The trouble is that any knowledge about what is | reflected is totally lost. For instance, if I write | | reify 12 $ \p -> reflect p + 3 | | then GHC will not see, at compile time, that the result is 15. If I write | | reify (+1) $ \p -> reflect p x | | then GHC will never inline the application of (+1). Etc. | | I'd like to replace reify' with reify# to avoid this problem. | | Thanks, | David Feuer | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maurerl at cs.uoregon.edu Tue Jan 17 16:29:31 2017 From: maurerl at cs.uoregon.edu (Luke Maurer) Date: Tue, 17 Jan 2017 08:29:31 -0800 Subject: recomp001 failing on OSX for Phab:D2853 Message-ID: <48bfbcae-5f61-3cb5-b77d-94f448117dd0@cs.uoregon.edu> Getting an odd test failure from Phab: @@ -1,2 +0,0 @@ - -C.hs:3:11: Module ‘B’ does not export ‘foo’ Actual stderr output differs from expected: *** unexpected failure for recomp001(normal) https://phabricator.haskell.org/harbormaster/build/18499/?l=0 I can't explain how my changes would break module exports *at all,* much less only on OSX. Found a Mac (running Mavericks and GHC 8.0.1) but couldn't reproduce it there. Thoughts? I'd love to nail this as soon as possible so I can get join points in before the feature freeze ... - Luke Maurer University of Oregon maurerl at cs.uoregon.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Jan 17 18:12:52 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 17 Jan 2017 13:12:52 -0500 Subject: recomp001 failing on OSX for Phab:D2853 In-Reply-To: <48bfbcae-5f61-3cb5-b77d-94f448117dd0@cs.uoregon.edu> References: <48bfbcae-5f61-3cb5-b77d-94f448117dd0@cs.uoregon.edu> Message-ID: <871sw1sggr.fsf@ben-laptop.smart-cactus.org> Luke Maurer writes: > Getting an odd test failure from Phab: > > @@ -1,2 +0,0 @@ > - > -C.hs:3:11: Module ‘B’ does not export ‘foo’ > Actual stderr output differs from expected: > *** unexpected failure for recomp001(normal) > > https://phabricator.haskell.org/harbormaster/build/18499/?l=0 > > I can't explain how my changes would break module exports *at all,* much > less only on OSX. Found a Mac (running Mavericks and GHC 8.0.1) but > couldn't reproduce it there. Thoughts? I'd love to nail this as soon as > possible so I can get join points in before the feature freeze ... > Fear not! This is indeed a known issue (with a patch even, see D2964). 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Jan 17 18:29:03 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 17 Jan 2017 13:29:03 -0500 Subject: [GHC Proposal] Propose overhaul of implicit quantification in Template Haskell quoting References: <87insssz4a.fsf@ben-laptop.smart-cactus.org> Message-ID: <87wpdtr15c.fsf@ben-laptop.smart-cactus.org> Hello everyone, Ryan Scott just opened Pull Request #36 [1] against the ghc-proposals repository. This is a proposal to change the behavior of Template Haskell quoting with respect to implicitly quantified type variables. The overall effect is that implicitly quantified type variables will be separated from the explicitly quantified type variables, and instead put into their own AST nodes. Please feel free to read and discuss the proposal on its pull request [1]. Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals/pull/36 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From maurerl at cs.uoregon.edu Tue Jan 17 18:53:10 2017 From: maurerl at cs.uoregon.edu (Luke Maurer) Date: Tue, 17 Jan 2017 10:53:10 -0800 Subject: recomp001 failing on OSX for Phab:D2853 In-Reply-To: <871sw1sggr.fsf@ben-laptop.smart-cactus.org> References: <48bfbcae-5f61-3cb5-b77d-94f448117dd0@cs.uoregon.edu> <871sw1sggr.fsf@ben-laptop.smart-cactus.org> Message-ID: <25ECEB53-3573-4E2E-B757-AD8C0F035877@cs.uoregon.edu> Whew! Glad to hear it. Thanks! 
- Luke Maurer University of Oregon maurerl at uoregon.edu > On Jan 17, 2017, at 10:12 AM, Ben Gamari wrote: > > Luke Maurer writes: > >> Getting an odd test failure from Phab: >> >> @@ -1,2 +0,0 @@ >> - >> -C.hs:3:11: Module ‘B’ does not export ‘foo’ >> Actual stderr output differs from expected: >> *** unexpected failure for recomp001(normal) >> >> https://phabricator.haskell.org/harbormaster/build/18499/?l=0 >> >> I can't explain how my changes would break module exports *at all,* much >> less only on OSX. Found a Mac (running Mavericks and GHC 8.0.1) but >> couldn't reproduce it there. Thoughts? I'd love to nail this as soon as >> possible so I can get join points in before the feature freeze ... >> > Fear not! This is indeed a known issue (with a patch even, see > D2964). > > Cheers, > > - Ben From lonetiger at gmail.com Tue Jan 17 19:17:08 2017 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Tue, 17 Jan 2017 19:17:08 +0000 Subject: Windows build In-Reply-To: References: Message-ID: <587e6db4.8a9a1c0a.46f5d.f7c4@mx.google.com> Hi Simon, Thanks for the update, I can reproduce a similar failure on the build bot and one of my own machines. I’m looking into what might be causing it; I have something I want to try but am not sure yet that it will solve it. I don’t need more information atm as I can reproduce the race condition by doing enough I/O. Thanks, Tamar From: Simon Peyton Jones Sent: Tuesday, January 17, 2017 08:13 To: Phyx Cc: ghc-devs Subject: Windows build Hi Tamar The current state of a clean Windows build has improved – but still has multiple failures.  See below Thanks for your work on this.
I can send more details if that’d be helpful Simon Unexpected failures:    plugins/plugins07.run  plugins07 [bad exit code] (normal)    plugins/T10420.run     T10420 [bad exit code] (normal)    plugins/T10294a.run    T10294a [bad exit code] (normal)    plugins/T11244.run     T11244 [bad stderr] (normal) Framework failures:    backpack/cabal/bkpcabal03/bkpcabal03.run   bkpcabal03 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'autogen')    plugins/plugins07.run                      plugins07 [normal] (pre_cmd failed: 2)    plugins/T10420.run                         T10420 [normal] (pre_cmd failed: 2)    plugins/T10294a.run                        T10294a [normal] (pre_cmd failed: 2)    plugins/plugins07.run                      plugins07 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'rule-defining-plugin-0.1')    plugins/T11244.run                         T11244 [normal] (pre_cmd failed: 2)    safeHaskell/check/pkg01/ImpSafeOnly03.run  ImpSafeOnly03 [runTest] (Unhandled exception: [Errno 90] Directory not empty: 'safePkg01-1.0-COK8JB4prM8EuKYKd3XaX3') -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue Jan 17 19:42:35 2017 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 17 Jan 2017 14:42:35 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: That is the paper the reflection library API is based on. However, doing it the way mentioned in that paper (after modifying it to work around changes with the inliner for modern GHC) is about 3 orders of magnitude slower. We keep it around in reflection as the 'slow' path for portability to non-GHC compilers, and because that variant can make a form of Typeable reflection which is needed for some Exception gimmicks folks use. The current approach, and the sort of variant that David is pushing above, is basically free, as it costs a single unsafeCoerce. 
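For context on the "single unsafeCoerce" Edward describes, the fast path can be sketched in a few lines. This is a simplified rendering of what the reflection package does internally, not its exact source, and it relies on an assumption about GHC internals: a single-method class dictionary is represented by the bare method itself, so the `Magic` newtype lets one `unsafeCoerce` convert a dictionary-expecting function into one that takes the value directly.

```haskell
{-# LANGUAGE RankNTypes, ScopedTypeVariables #-}
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

import Data.Proxy (Proxy (..))
import Unsafe.Coerce (unsafeCoerce)

class Reifies s a | s -> a where
  reflect :: proxy s -> a

-- A function of type (forall s. Reifies s a => Proxy s -> r) is
-- represented at runtime like (forall s proxy. (proxy s -> a) -> Proxy s -> r),
-- because the Reifies dictionary is just its single method.
newtype Magic a r = Magic (forall s. Reifies s a => Proxy s -> r)

reify :: forall a r. a -> (forall s. Reifies s a => Proxy s -> r) -> r
reify a k = unsafeCoerce (Magic k :: Magic a r) (const a) Proxy
```

With this sketch, reify (12 :: Int) (\p -> reflect p + 3) evaluates to 15 at runtime; the point of the thread is that GHC cannot discover that 15 at compile time, because the reified value flows through unsafeCoerce.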
To make the reflection library work in a fully type-safe manner would take 1-3 additional wired ins that would consist of well-typed core. The stuff David is proposing above would be more general but less safe. -Edward On Tue, Jan 17, 2017 at 10:45 AM, Simon Peyton Jones wrote: > David says that this paper is relevant > > http://okmij.org/ftp/Haskell/tr-15-04.pdf > > > > Simon > > > > *From:* David Feuer [mailto:david.feuer at gmail.com] > *Sent:* 14 January 2017 00:50 > *To:* Simon Peyton Jones > *Cc:* ghc-devs ; Edward Kmett > *Subject:* RE: Magical function to support reflection > > > > I need to look through a bit more of this, but explicit type application > certainly can be avoided using Tagged. Once we get the necessary magic, > libraries will be able to come up with whatever interfaces they like. My > main concern about the generality of > > > > reify# :: forall r. (RC a => r) -> a -> r > > > > (as with the primop type Edward came up with) is that it lacks the `forall > s` safety mechanism of the reflection library. Along with its key role in > ensuring class coherence[*], that mechanism also makes it clear what > specialization is and is not allowed to do with reified values. Again, I'm > not sure it can mess up the simpler/more general form you and Edward > propose, but it makes me nervous. > > > > [*] Coherence: as long as an instance of Reifies S A exists for some > concrete S::K, users can't incoherently write a polymorphic Reifies > instance for s::K. > > > > On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" > wrote: > > David, Edward > > Here’s my take on this thread about reflection. I’ll ignore Tagged and > the ‘s’ parameter, and the proxy arguments, since they are incidental. > > I can finally see a reasonable path; I think there’s a potential GHC > proposal here. > > Simon > > > > *First thing*: PLEASE let's give a Core rendering of whatever is > proposed. If it's expressible in Core that's reassuring. 
If it requires an > extension to Core, that's a whole different thing. > > > > *Second*. For any *particular* class, I think it's easy to express reify > in Core. Example (in Core): > > reifyTypeable :: (Typeable a => b) -> TypeRep a -> b > > reifyTypable k = k |> co > > where co is a coercion that witnesses > > co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b) > > > > *Third. *This does not depend, and should not depend, on the fact that > single-method classes are represented with a newtype. E.g. if we changed > Typeable to be represented with a data type thus (in Core) > > data Typeable a = MkTypeable (TypeRep a) > > using data rather than newtype, then we could still write reifyTypable. > > reifyTypeable :: (Typeable a => b) -> TypeRep a -> b > > reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). > > f (MkTypeable r) > > The efficiency of newtype is nice, but it’s not essential. > > > > *Fourth*. As you point out, reify# is far too polymorphic. *Clearly you > need reify# to be a class method!* Something like this > > class Reifiable a where > > type RC a :: Constraint -- Short for Reified Constraint > > reify# :: forall r. (RC a => r) -> a -> r > > Now (in Core at least) we can make instances > > instance Reifiable (TypeRep a) where > > type RC (TypeRep a) = Typeable a > > reify# k = k |> co -- For a suitable co > > Now, we can’t write those instances in Haskell, but we could make the > ‘deriving’ mechanism deal with it, thus: > > deriving instance Reifiable (Typeable a) > > You can supply a ‘where’ part if you like, but if you don’t GHC will fill > in the implementation for you. It’ll check that Typeable is a > single-method class; produce a suitable implementation (in Core, as above) > for reify#, and a suitable instance for RC. Pretty simple. Now the solver > can use those instances. 
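Gathered in one place, the interface sketched above would look roughly like this at the source level. Reifiable, RC, and reify# are the names from Simon's message; the class declaration itself is ordinary (if exotic) Haskell, while the instance is exactly the part that today's GHC cannot accept and the proposal's deriving mechanism would generate as a Core-level coercion.

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, RankNTypes, MagicHash #-}

import Data.Kind (Constraint)

class Reifiable a where
  type RC a :: Constraint           -- short for "Reified Constraint"
  reify# :: forall r. (RC a => r) -> a -> r

-- What the proposed deriving would produce, written in source syntax
-- for illustration only; the real method body would be a coercion:
--
--   instance Reifiable (TypeRep a) where
--     type RC (TypeRep a) = Typeable a
--     reify# k = ...  -- k, coerced to take a TypeRep a directly
```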
> > There are lots of details > > · I’ve used a single parameter class and a type function, because > the call site of reify# will provide no information about the ‘c’ in (c => > r) argument. > > · What if some other class has the same method type? E.g. if > someone wrote > > class MyTR a where op :: TypeRep a > > would that mess up the use of reify# for Typeable? Well it would if they > also did > > deriving instance Reifiable (MyTR a) > > And there really is an ambiguity: what should (reify# k (tr :: TypeRep > Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be > entirely legitimate. > > · I suppose that another formulation might be to abstract over the > constraint, rather than the method type, and use explicit type application > at calls of reify#. So > > class Reifiable c where > > type RL c :: * > > reify# :: (c => r) -> RL c -> r > > Now all calls of reify# would have to look like > > reify# @(Typeable Int) k tr > > Maybe that’s acceptable. But it doesn’t seem as nice to me. > > · One could use functional dependencies and a 2-parameter type > class, but I don’t think it would change anything much. If type functions > work, they are more robust than fundeps. > > · One could abstract over the type constructor rather than the > type. I see no advantage and some disadvantages > > class Reifiable t where > > type RC t :: * -> Constraint -- Short for Reified Constraint > > reify# :: forall r. (RC t a => r) -> t a -> r > > > > > > | -----Original Message----- > > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] On Behalf Of David > > | Feuer > > | Sent: 11 December 2016 05:01 > > | To: ghc-devs ; Edward Kmett > > | Subject: Magical function to support reflection > > | > > | The following proposal (with fancier formatting and some improved > > | wording) can be viewed at > > | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport > > | > > | Using the Data.Reflection has some runtime costs. 
Notably, there can be > no > > | inlining or unboxing of reified values. I think it would be nice to add > a > > | GHC special to support it. I'll get right to the point of what I want, > and > > | then give a bit of background about why. > > | > > | === What I want > > | > > | I propose the following absurdly over-general lie: > > | > > | reify# :: (forall s . c s a => t s r) -> a -> r > > | > > | `c` is assumed to be a single-method class with no superclasses whose > > | dictionary representation is exactly the same as the representation of > `a`, > > | and `t s r` is assumed to be a newtype wrapper around `r`. In > desugaring, > > | reify# f would be compiled to f at S, where S is a fresh type. I believe > it's > > | necessary to use a fresh type to prevent specialization from mixing up > > | different reified values. > > | > > | === Background > > | > > | Let me set up a few pieces. These pieces are slightly modified from > what the > > | package actually does to make things cleaner under the hood, but the > > | differences are fairly shallow. > > | > > | newtype Tagged s a = Tagged { unTagged :: a } > > | > > | unproxy :: (Proxy s -> a) -> Tagged s a > > | unproxy f = Tagged (f Proxy) > > | > > | class Reifies s a | s -> a where > > | reflect' :: Tagged s a > > | > > | -- For convenience > > | reflect :: forall s a proxy . Reifies s a => proxy s -> a reflect _ = > > | unTagged (reflect' :: Tagged s a) > > | > > | -- The key function--see below regarding implementation reify' :: > (forall s > > | . Reifies s a => Tagged s r) -> a -> r > > | > > | -- For convenience > > | reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = > > | reify' (unproxy f) a > > | > > | The key idea of reify' is that something of type > > | > > | forall s . Reifies s a => Tagged s r > > | > > | is represented in memory exactly the same as a function of type > > | > > | a -> r > > | > > | So we can currently use unsafeCoerce to interpret one as the other. 
> | Following the general approach of the library, we can do this as such: > | > | newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) reify' :: > | (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = unsafeCoerce > | (Magic f) > | > | This certainly works. The trouble is that any knowledge about what is > | reflected is totally lost. For instance, if I write > | > | reify 12 $ \p -> reflect p + 3 > | > | then GHC will not see, at compile time, that the result is 15. If I write > | > | reify (+1) $ \p -> reflect p x > | > | then GHC will never inline the application of (+1). Etc. > | > | I'd like to replace reify' with reify# to avoid this problem. > | > | Thanks, > | David Feuer > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Tue Jan 17 19:49:38 2017 From: david.feuer at gmail.com (David Feuer) Date: Tue, 17 Jan 2017 14:49:38 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: Simon has an idea for making it safer. I suspect it's only properly safe with the "forall s", but there may be a way to at least make it specialization-safe (if not conditionally coherence-safe) without that. On Jan 17, 2017 2:42 PM, "Edward Kmett" wrote: > That is the paper the reflection library API is based on.
> > However, doing it the way mentioned in that paper (after modifying it to > work around changes with the inliner for modern GHC) is about 3 orders of > magnitude slower. We keep it around in reflection as the 'slow' path for > portability to non-GHC compilers, and because that variant can make a form > of Typeable reflection which is needed for some Exception gimmicks folks > use. > > The current approach, and the sort of variant that David is pushing above, > is basically free, as it costs a single unsafeCoerce. To make the > reflection library work in a fully type-safe manner would take 1-3 > additional wired ins that would consist of well-typed core. The stuff David > is proposing above would be more general but less safe. > > -Edward > > On Tue, Jan 17, 2017 at 10:45 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> David says that this paper is relevant >> >> http://okmij.org/ftp/Haskell/tr-15-04.pdf >> >> >> >> Simon >> >> >> >> *From:* David Feuer [mailto:david.feuer at gmail.com] >> *Sent:* 14 January 2017 00:50 >> *To:* Simon Peyton Jones >> *Cc:* ghc-devs ; Edward Kmett >> *Subject:* RE: Magical function to support reflection >> >> >> >> I need to look through a bit more of this, but explicit type application >> certainly can be avoided using Tagged. Once we get the necessary magic, >> libraries will be able to come up with whatever interfaces they like. My >> main concern about the generality of >> >> >> >> reify# :: forall r. (RC a => r) -> a -> r >> >> >> >> (as with the primop type Edward came up with) is that it lacks the >> `forall s` safety mechanism of the reflection library. Along with its key >> role in ensuring class coherence[*], that mechanism also makes it clear >> what specialization is and is not allowed to do with reified values. Again, >> I'm not sure it can mess up the simpler/more general form you and Edward >> propose, but it makes me nervous. 
>> >> >> >> [*] Coherence: as long as an instance of Reifies S A exists for some >> concrete S::K, users can't incoherently write a polymorphic Reifies >> instance for s::K. >> >> >> >> On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" >> wrote: >> >> David, Edward >> >> Here’s my take on this thread about reflection. I’ll ignore Tagged and >> the ‘s’ parameter, and the proxy arguments, since they are incidental. >> >> I can finally see a reasonable path; I think there’s a potential GHC >> proposal here. >> >> Simon >> >> >> >> *First thing*: PLEASE let's give a Core rendering of whatever is >> proposed. If it's expressible in Core that's reassuring. If it requires an >> extension to Core, that's a whole different thing. >> >> >> >> *Second*. For any *particular* class, I think it's easy to express >> reify in Core. Example (in Core): >> >> reifyTypeable :: (Typeable a => b) -> TypeRep a -> b >> >> reifyTypable k = k |> co >> >> where co is a coercion that witnesses >> >> co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b) >> >> >> >> *Third. *This does not depend, and should not depend, on the fact that >> single-method classes are represented with a newtype. E.g. if we changed >> Typeable to be represented with a data type thus (in Core) >> >> data Typeable a = MkTypeable (TypeRep a) >> >> using data rather than newtype, then we could still write reifyTypable. >> >> reifyTypeable :: (Typeable a => b) -> TypeRep a -> b >> >> reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). >> >> f (MkTypeable r) >> >> The efficiency of newtype is nice, but it’s not essential. >> >> >> >> *Fourth*. As you point out, reify# is far too polymorphic. *Clearly >> you need reify# to be a class method!* Something like this >> >> class Reifiable a where >> >> type RC a :: Constraint -- Short for Reified Constraint >> >> reify# :: forall r. 
(RC a => r) -> a -> r >> >> Now (in Core at least) we can make instances >> >> instance Reifiable (TypeRep a) where >> >> type RC (TypeRep a) = Typeable a >> >> reify# k = k |> co -- For a suitable co >> >> Now, we can’t write those instances in Haskell, but we could make the >> ‘deriving’ mechanism deal with it, thus: >> >> deriving instance Reifiable (Typeable a) >> >> You can supply a ‘where’ part if you like, but if you don’t GHC will fill >> in the implementation for you. It’ll check that Typeable is a >> single-method class; produce a suitable implementation (in Core, as above) >> for reify#, and a suitable instance for RC. Pretty simple. Now the solver >> can use those instances. >> >> There are lots of details >> >> · I’ve used a single parameter class and a type function, because >> the call site of reify# will provide no information about the ‘c’ in (c => >> r) argument. >> >> · What if some other class has the same method type? E.g. if >> someone wrote >> >> class MyTR a where op :: TypeRep a >> >> would that mess up the use of reify# for Typeable? Well it would if >> they also did >> >> deriving instance Reifiable (MyTR a) >> >> And there really is an ambiguity: what should (reify# k (tr :: TypeRep >> Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be >> entirely legitimate. >> >> · I suppose that another formulation might be to abstract over >> the constraint, rather than the method type, and use explicit type >> application at calls of reify#. So >> >> class Reifiable c where >> >> type RL c :: * >> >> reify# :: (c => r) -> RL c -> r >> >> Now all calls of reify# would have to look like >> >> reify# @(Typeable Int) k tr >> >> Maybe that’s acceptable. But it doesn’t seem as nice to me. >> >> · One could use functional dependencies and a 2-parameter type >> class, but I don’t think it would change anything much. If type functions >> work, they are more robust than fundeps. 
>> >> · One could abstract over the type constructor rather than the >> type. I see no advantage and some disadvantages >> >> class Reifiable t where >> >> type RC t :: * -> Constraint -- Short for Reified Constraint >> >> reify# :: forall r. (RC t a => r) -> t a -> r >> >> >> >> >> >> | -----Original Message----- >> >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org >> ] On Behalf Of David >> >> | Feuer >> >> | Sent: 11 December 2016 05:01 >> >> | To: ghc-devs ; Edward Kmett >> >> | Subject: Magical function to support reflection >> >> | >> >> | The following proposal (with fancier formatting and some improved >> >> | wording) can be viewed at >> >> | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport >> >> | >> >> | Using the Data.Reflection has some runtime costs. Notably, there can >> be no >> >> | inlining or unboxing of reified values. I think it would be nice to >> add a >> >> | GHC special to support it. I'll get right to the point of what I want, >> and >> >> | then give a bit of background about why. >> >> | >> >> | === What I want >> >> | >> >> | I propose the following absurdly over-general lie: >> >> | >> >> | reify# :: (forall s . c s a => t s r) -> a -> r >> >> | >> >> | `c` is assumed to be a single-method class with no superclasses whose >> >> | dictionary representation is exactly the same as the representation of >> `a`, >> >> | and `t s r` is assumed to be a newtype wrapper around `r`. In >> desugaring, >> >> | reify# f would be compiled to f at S, where S is a fresh type. I believe >> it's >> >> | necessary to use a fresh type to prevent specialization from mixing up >> >> | different reified values. >> >> | >> >> | === Background >> >> | >> >> | Let me set up a few pieces. These pieces are slightly modified from >> what the >> >> | package actually does to make things cleaner under the hood, but the >> >> | differences are fairly shallow. 
>> >> | >> >> | newtype Tagged s a = Tagged { unTagged :: a } >> >> | >> >> | unproxy :: (Proxy s -> a) -> Tagged s a >> >> | unproxy f = Tagged (f Proxy) >> >> | >> >> | class Reifies s a | s -> a where >> >> | reflect' :: Tagged s a >> >> | >> >> | -- For convenience >> >> | reflect :: forall s a proxy . Reifies s a => proxy s -> a reflect _ = >> >> | unTagged (reflect' :: Tagged s a) >> >> | >> >> | -- The key function--see below regarding implementation reify' :: >> (forall s >> >> | . Reifies s a => Tagged s r) -> a -> r >> >> | >> >> | -- For convenience >> >> | reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = >> >> | reify' (unproxy f) a >> >> | >> >> | The key idea of reify' is that something of type >> >> | >> >> | forall s . Reifies s a => Tagged s r >> >> | >> >> | is represented in memory exactly the same as a function of type >> >> | >> >> | a -> r >> >> | >> >> | So we can currently use unsafeCoerce to interpret one as the other. >> >> | Following the general approach of the library, we can do this as such: >> >> | >> >> | newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) >> reify' :: >> >> | (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = >> unsafeCoerce >> >> | (Magic f) >> >> | >> >> | This certainly works. The trouble is that any knowledge about what is >> >> | reflected is totally lost. For instance, if I write >> >> | >> >> | reify 12 $ \p -> reflect p + 3 >> >> | >> >> | then GHC will not see, at compile time, that the result is 15. If I >> write >> >> | >> >> | reify (+1) $ \p -> reflect p x >> >> | >> >> | then GHC will never inline the application of (+1). Etc. >> >> | >> >> | I'd like to replace reify' with reify# to avoid this problem. 
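The "forall s" safety mechanism discussed in this thread can be seen in miniature with the released reflection package (this snippet needs the reflection package from Hackage; reify and reflect are its actual exports). Each call to reify introduces its own rigid type variable s, so distinct reified values can never be confused with one another.

```haskell
import Data.Reflection (reify, reflect)

-- Fine: each proxy carries its own phantom s, and each reflect
-- consults the dictionary belonging to that s.
ok :: Int
ok = reify (1 :: Int) $ \p1 ->
       reify (2 :: Int) $ \p2 ->
         reflect p1 + reflect p2

-- Rejected by the type checker: p1 :: Proxy s1 and p2 :: Proxy s2 for
-- two distinct rigid variables, so they cannot inhabit one list.
--
-- bad = reify (1 :: Int) $ \p1 ->
--         reify (2 :: Int) $ \p2 ->
--           [p1, p2]
```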
>> >> | >> >> | Thanks, >> >> | David Feuer >> >> | _______________________________________________ >> >> | ghc-devs mailing list >> >> | ghc-devs at haskell.org >> >> | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From maurerl at cs.uoregon.edu Tue Jan 17 20:24:11 2017 From: maurerl at cs.uoregon.edu (Luke Maurer) Date: Tue, 17 Jan 2017 12:24:11 -0800 Subject: Join points Message-ID: <34FEB985-E845-4021-AC54-BA1B8BF8E2D5@cs.uoregon.edu> Hello all! For your consideration, I present patch D2853: Join points. https://phabricator.haskell.org/D2853 It validates* and it's ready for wider reviewing, so please help me get it to land before the freeze! It's ... sizable. There's not really any way to split it, since there are many interdependencies. I've put up a "tour" as part of the SequentCore page on the wiki: https://ghc.haskell.org/trac/ghc/wiki/SequentCore This should help with where to start. Also included on the wiki are benchmarks and lots of general information. *Modulo a few long URLs and a known bug (D2964).
The stuff David is proposing above would be more general but less safe. The approach I proposed below looks general, safe, and performant. Or not? To make progress it’d be good to update the wiki page, both in the light of the recent discussion, and with pointers to related packages, motivation, papers, to set the context Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 17 January 2017 19:43 To: Simon Peyton Jones Cc: David Feuer ; ghc-devs Subject: Re: Magical function to support reflection That is the paper the reflection library API is based on. However, doing it the way mentioned in that paper (after modifying it to work around changes with the inliner for modern GHC) is about 3 orders of magnitude slower. We keep it around in reflection as the 'slow' path for portability to non-GHC compilers, and because that variant can make a form of Typeable reflection which is needed for some Exception gimmicks folks use. The current approach, and the sort of variant that David is pushing above, is basically free, as it costs a single unsafeCoerce. To make the reflection library work in a fully type-safe manner would take 1-3 additional wired ins that would consist of well-typed core. The stuff David is proposing above would be more general but less safe. -Edward On Tue, Jan 17, 2017 at 10:45 AM, Simon Peyton Jones > wrote: David says that this paper is relevant http://okmij.org/ftp/Haskell/tr-15-04.pdf Simon From: David Feuer [mailto:david.feuer at gmail.com] Sent: 14 January 2017 00:50 To: Simon Peyton Jones > Cc: ghc-devs >; Edward Kmett > Subject: RE: Magical function to support reflection I need to look through a bit more of this, but explicit type application certainly can be avoided using Tagged. Once we get the necessary magic, libraries will be able to come up with whatever interfaces they like. My main concern about the generality of reify# :: forall r. 
(RC a => r) -> a -> r (as with the primop type Edward came up with) is that it lacks the `forall s` safety mechanism of the reflection library. Along with its key role in ensuring class coherence[*], that mechanism also makes it clear what specialization is and is not allowed to do with reified values. Again, I'm not sure it can mess up the simpler/more general form you and Edward propose, but it makes me nervous. [*] Coherence: as long as an instance of Reifies S A exists for some concrete S::K, users can't incoherently write a polymorphic Reifies instance for s::K. On Jan 13, 2017 7:33 PM, "Simon Peyton Jones" > wrote: David, Edward Here’s my take on this thread about reflection. I’ll ignore Tagged and the ‘s’ parameter, and the proxy arguments, since they are incidental. I can finally see a reasonable path; I think there’s a potential GHC proposal here. Simon First thing: PLEASE let's give a Core rendering of whatever is proposed. If it's expressible in Core that's reassuring. If it requires an extension to Core, that's a whole different thing. Second. For any particular class, I think it's easy to express reify in Core. Example (in Core): reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable k = k |> co where co is a coercion that witnesses co :: (forall a b. Typeable a => b) ~ forall a b. (TypeRep a -> b) Third. This does not depend, and should not depend, on the fact that single-method classes are represented with a newtype. E.g. if we changed Typeable to be represented with a data type thus (in Core) data Typeable a = MkTypeable (TypeRep a) using data rather than newtype, then we could still write reifyTypable. reifyTypeable :: (Typeable a => b) -> TypeRep a -> b reifyTypable = /\ab. \(f :: Typeable a => b). \(r :: TypeRep a). f (MkTypeable r) The efficiency of newtype is nice, but it’s not essential. Fourth. As you point out, reify# is far too polymorphic. Clearly you need reify# to be a class method! 
Something like this class Reifiable a where type RC a :: Constraint -- Short for Reified Constraint reify# :: forall r. (RC a => r) -> a -> r Now (in Core at least) we can make instances instance Reifiable (TypeRep a) where type RC (TypeRep a) = Typeable a reify# k = k |> co -- For a suitable co Now, we can’t write those instances in Haskell, but we could make the ‘deriving’ mechanism deal with it, thus: deriving instance Reifiable (Typeable a) You can supply a ‘where’ part if you like, but if you don’t GHC will fill in the implementation for you. It’ll check that Typeable is a single-method class; produce a suitable implementation (in Core, as above) for reify#, and a suitable instance for RC. Pretty simple. Now the solver can use those instances. There are lots of details • I’ve used a single parameter class and a type function, because the call site of reify# will provide no information about the ‘c’ in (c => r) argument. • What if some other class has the same method type? E.g. if someone wrote class MyTR a where op :: TypeRep a would that mess up the use of reify# for Typeable? Well it would if they also did deriving instance Reifiable (MyTR a) And there really is an ambiguity: what should (reify# k (tr :: TypeRep Int)) do? Apply k to a TypeRep or to a MyTR? So a complaint here would be entirely legitimate. • I suppose that another formulation might be to abstract over the constraint, rather than the method type, and use explicit type application at calls of reify#. So class Reifiable c where type RL c :: * reify# :: (c => r) -> RL c -> r Now all calls of reify# would have to look like reify# @(Typeable Int) k tr Maybe that’s acceptable. But it doesn’t seem as nice to me. • One could use functional dependencies and a 2-parameter type class, but I don’t think it would change anything much. If type functions work, they are more robust than fundeps. • One could abstract over the type constructor rather than the type. 
I see no advantage and some disadvantages

    class Reifiable t where
      type RC t :: * -> Constraint   -- Short for Reified Constraint
      reify# :: forall r. (RC t a => r) -> t a -> r

| -----Original Message-----
| From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David
| Feuer
| Sent: 11 December 2016 05:01
| To: ghc-devs; Edward Kmett
| Subject: Magical function to support reflection
|
| The following proposal (with fancier formatting and some improved
| wording) can be viewed at
| https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport
|
| Using the Data.Reflection has some runtime costs. Notably, there can be no
| inlining or unboxing of reified values. I think it would be nice to add a
| GHC special to support it. I'll get right to the point of what I want, and
| then give a bit of background about why.
|
| === What I want
|
| I propose the following absurdly over-general lie:
|
| reify# :: (forall s . c s a => t s r) -> a -> r
|
| `c` is assumed to be a single-method class with no superclasses whose
| dictionary representation is exactly the same as the representation of `a`,
| and `t s r` is assumed to be a newtype wrapper around `r`. In desugaring,
| reify# f would be compiled to f @S, where S is a fresh type. I believe it's
| necessary to use a fresh type to prevent specialization from mixing up
| different reified values.
|
| === Background
|
| Let me set up a few pieces. These pieces are slightly modified from what the
| package actually does to make things cleaner under the hood, but the
| differences are fairly shallow.
|
| newtype Tagged s a = Tagged { unTagged :: a }
|
| unproxy :: (Proxy s -> a) -> Tagged s a
| unproxy f = Tagged (f Proxy)
|
| class Reifies s a | s -> a where
|   reflect' :: Tagged s a
|
| -- For convenience
| reflect :: forall s a proxy . Reifies s a => proxy s -> a
| reflect _ = unTagged (reflect' :: Tagged s a)
|
| -- The key function--see below regarding implementation
| reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r
|
| -- For convenience
| reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r
| reify a f = reify' (unproxy f) a
|
| The key idea of reify' is that something of type
|
| forall s . Reifies s a => Tagged s r
|
| is represented in memory exactly the same as a function of type
|
| a -> r
|
| So we can currently use unsafeCoerce to interpret one as the other.
| Following the general approach of the library, we can do this as such:
|
| newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r)
|
| reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r
| reify' f = unsafeCoerce (Magic f)
|
| This certainly works. The trouble is that any knowledge about what is
| reflected is totally lost. For instance, if I write
|
| reify 12 $ \p -> reflect p + 3
|
| then GHC will not see, at compile time, that the result is 15. If I write
|
| reify (+1) $ \p -> reflect p x
|
| then GHC will never inline the application of (+1). Etc.
|
| I'd like to replace reify' with reify# to avoid this problem.
|
| Thanks,
| David Feuer
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Tue Jan 17 21:55:13 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 17 Jan 2017 21:55:13 +0000
Subject: Join points
In-Reply-To: <34FEB985-E845-4021-AC54-BA1B8BF8E2D5 at cs.uoregon.edu>
References: <34FEB985-E845-4021-AC54-BA1B8BF8E2D5 at cs.uoregon.edu>
Message-ID:

All,

Yes, do take a look at this.
It’s a fairly direct implementation of the paper “Compiling without continuations”. You should definitely read the paper first; it’ll set the scene. I am really hoping to get this into 8.2. It’s a significant step forward in GHC’s optimisation pipeline. I’ve reviewed quite a bit of the code with Luke, but you should feel free to say “couldn’t this be simpler?” where you get confused or stuck. Unlike, say, type-in-type (!) it’s not that hard, and there should not be anything really obscure. I’m sure there are ways to make it simpler and better. (That’s why I am endlessly refactoring GHC.) So review away. Thank you Luke! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Luke Maurer Sent: 17 January 2017 20:24 To: ghc-devs Subject: Join points Hello all! For your consideration, I present patch D2853: Join points. https://phabricator.haskell.org/D2853 It validates* and it's ready for wider reviewing, so please help me get it to land before the freeze! It's ... sizable. There's not really any way to split it, since there are many interdependencies. I've put up a "tour" as part of the SequentCore page on the wiki: https://ghc.haskell.org/trac/ghc/wiki/SequentCore This should help with where to start. Also included on the wiki are benchmarks and lots of general information. *Modulo a few long URLs and a known bug (D2964). - Luke Maurer University of Oregon maurerl at cs.uoregon.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Wed Jan 18 19:45:04 2017 From: david at well-typed.com (David Feuer) Date: Wed, 18 Jan 2017 14:45:04 -0500 Subject: Floating lazy primops Message-ID: <2454822.8oMLGhvfTC@squirrel> I opened up https://phabricator.haskell.org/D2987 to mark reallyUnsafePtrEquality# can_fail, but in the process I realized a couple things. 1. 
Part of https://phabricator.haskell.org/rGHC5a9a1738023a may actually not have been such a hot idea after all (although it certainly sounded good at the time). Are there actually any primops with lifted arguments where we *want* speculation? Perhaps the most important primop to consider is seq#, which is (mysteriously?) marked neither can_fail nor has_side_effects, but another to look at is unpackClosure#, which seems likely to give different results before and after forcing. Most other primops with lifted arguments are marked has_side_effects, and therefore won't be floated out anyway. 2. If dataToTag# is marked can_fail (an aspect of https:// phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to end up being applied to an unevaluated argument? If not, perhaps the CorePrep specials can be removed altogether. David Feuer From david at well-typed.com Wed Jan 18 20:12:04 2017 From: david at well-typed.com (David Feuer) Date: Wed, 18 Jan 2017 15:12:04 -0500 Subject: Floating lazy primops In-Reply-To: <2454822.8oMLGhvfTC@squirrel> References: <2454822.8oMLGhvfTC@squirrel> Message-ID: <2078410.E5jd5AtOMq@squirrel> One more question: do you think it's *better* to let dataToTag# float and then fix it up later, or better to mark it can_fail? Unlike reallyUnsafePtrEquality#, dataToTag# is used all over the place, so it is important that it interacts as well as possible with the optimizer, whatever that entails. On Wednesday, January 18, 2017 2:45:04 PM EST David Feuer wrote: > I opened up https://phabricator.haskell.org/D2987 to mark > reallyUnsafePtrEquality# can_fail, but in the process I realized a couple > things. > > 1. Part of https://phabricator.haskell.org/rGHC5a9a1738023a may actually not > have been such a hot idea after all (although it certainly sounded good at > the time). Are there actually any primops with lifted arguments where we > *want* speculation? Perhaps the most important primop to consider is seq#, > which is (mysteriously?) 
marked neither can_fail nor has_side_effects, but > another to look at is unpackClosure#, which seems likely to give different > results before and after forcing. Most other primops with lifted arguments > are marked has_side_effects, and therefore won't be floated out anyway. > > 2. If dataToTag# is marked can_fail (an aspect of https:// > phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to > end up being applied to an unevaluated argument? If not, perhaps the > CorePrep specials can be removed altogether. > > David Feuer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From juhpetersen at gmail.com Thu Jan 19 00:37:54 2017 From: juhpetersen at gmail.com (Jens Petersen) Date: Thu, 19 Jan 2017 09:37:54 +0900 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.0.2 is available! In-Reply-To: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> References: <87wpe1ignm.fsf@ben-laptop.smart-cactus.org> Message-ID: On 12 January 2017 at 03:40, Ben Gamari wrote: > The GHC team is happy to at last announce the 8.0.2 release of the > Glasgow Haskell Compiler. Source and binary distributions are available > Thank you Fedora 24+ and RHEL 7 et al users can install it from my Fedora Copr repo: https://copr.fedorainfracloud.org/coprs/petersen/ghc-8.0.2/ The Fedora builds have been there for some time now but I just added the EPEL7 build yesterday. Jens -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Thu Jan 19 02:58:31 2017 From: david at well-typed.com (David Feuer) Date: Wed, 18 Jan 2017 21:58:31 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: <9284675.CcDGa0mOpJ@squirrel> I've updated https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport to reflect both Simon's thoughts on the matter and my own reactions to them. I hope you'll give it a peek. 
David Feuer

From ghc-devs at mlists.thewrittenword.com Thu Jan 19 02:53:49 2017
From: ghc-devs at mlists.thewrittenword.com (Albert Chin)
Date: Wed, 18 Jan 2017 20:53:49 -0600
Subject: Building RUNPATH into ghc binaries for 3rd-party libraries
Message-ID: <20170119025348.GA5449 at thewrittenword.com>

Hi. I am building ghc-8.0.2 on RHEL 6 and trying to get the runtime path
for 3rd-party libraries outside of the system search path embedded into
the ghc binaries. Specifically, this is for libffi and libgmp. I am using
the attached patch for this and it seems to work for the most part. It
does not work for ghc-iserv though:

  $ cd /usr/local/ghc80/lib/ghc-8.0.2/bin
  $ ldd ghc-iserv
        linux-vdso.so.1 =>  (0x00007fffba3ff000)
        librt.so.1 => /lib64/librt.so.1 (0x00000033e0e00000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00000033f0200000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00000033e0600000)
        libgmp.so.10 => not found
        libm.so.6 => /lib64/libm.so.6 (0x00000033e0200000)
        libffi.so.6 => not found
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00000033e0a00000)
        libc.so.6 => /lib64/libc.so.6 (0x00000033dfe00000)
        /lib64/ld-linux-x86-64.so.2 (0x00000033dfa00000)

I presume I need to add something like the following to iserv/ghc.mk
somewhere?

  -optl-Wl,-rpath,$(FFILibDir) \
  -optl-Wl,-rpath,$(GMP_LIB_DIRS) \

--
albert chin (china at thewrittenword.com)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: ghc.patch
Type: text/x-diff
Size: 1055 bytes
Desc: not available
URL:

From simonpj at microsoft.com Thu Jan 19 10:02:21 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Thu, 19 Jan 2017 10:02:21 +0000
Subject: Type family bug?
Message-ID:

Richard

This works

    type family F (a :: k)
    type instance F Maybe = Char

But this does not. Surely it should?
    type family F (a :: k) where   -- = r | r -> a where
      F Maybe = Char

The latter is rejected with

    Foo.hs:6:5: error:
        * Expecting one more argument to `Maybe'
          Expected kind `k', but `Maybe' has kind `* -> *'
        * In the first argument of `F', namely `Maybe'
          In the type family declaration for `F'

If you agree I'll open a ticket.

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rae at cs.brynmawr.edu Thu Jan 19 13:03:44 2017
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Thu, 19 Jan 2017 08:03:44 -0500
Subject: Type family bug?
In-Reply-To:
References:
Message-ID: <52724012-DE71-4655-AC06-97222D52DC92 at cs.brynmawr.edu>

This is correct behavior. The former has a CUSK, as all open type families
have CUSKs with un-annotated kinds defaulting to Type. The latter does not
have a CUSK, because the result kind is unknown. You therefore cannot
specialize the k variable in the definition of the latter.

There is a ticket (#10141) about improving the error message here to
educate the user about CUSKs, but there's no progress on it.

Richard

> On Jan 19, 2017, at 5:02 AM, Simon Peyton Jones wrote:
>
> Richard
>
> This works
>
>     type family F (a :: k)
>     type instance F Maybe = Char
>
> But this does not. Surely it should?
>
>     type family F (a :: k) where   -- = r | r -> a where
>       F Maybe = Char
>
> The latter is rejected with
>
>     Foo.hs:6:5: error:
>         * Expecting one more argument to `Maybe'
>           Expected kind `k', but `Maybe' has kind `* -> *'
>         * In the first argument of `F', namely `Maybe'
>           In the type family declaration for `F'
>
> If you agree I’ll open a ticket.
>
> Simon
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: From simonpj at microsoft.com Thu Jan 19 23:02:55 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 19 Jan 2017 23:02:55 +0000 Subject: Magical function to support reflection In-Reply-To: <9284675.CcDGa0mOpJ@squirrel> References: <9284675.CcDGa0mOpJ@squirrel> Message-ID: I've added some comments. I like ReflectableDF a lot. Simon | -----Original Message----- | From: David Feuer [mailto:david at well-typed.com] | Sent: 19 January 2017 02:59 | To: ghc-devs at haskell.org; Simon Peyton Jones | Cc: Edward Kmett | Subject: Re: Magical function to support reflection | | I've updated | https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport to reflect | both Simon's thoughts on the matter and my own reactions to them. I hope | you'll give it a peek. | | David Feuer From matthewtpickering at gmail.com Fri Jan 20 14:14:16 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 20 Jan 2017 15:14:16 +0100 Subject: Magical function to support reflection In-Reply-To: <9284675.CcDGa0mOpJ@squirrel> References: <9284675.CcDGa0mOpJ@squirrel> Message-ID: I modified the example on the wiki to compile but I seem to have missed something, could you perhaps point out what I missed? https://gist.github.com/mpickering/da6d7852af2f6c8f59f80ce726baa864 ``` *Main> test1 2 123 441212 441335 ``` On Thu, Jan 19, 2017 at 3:58 AM, David Feuer wrote: > I've updated https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport to > reflect both Simon's thoughts on the matter and my own reactions to them. I > hope you'll give it a peek. 
> > David Feuer
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From kiss.csongor.kiss at gmail.com Fri Jan 20 14:54:03 2017
From: kiss.csongor.kiss at gmail.com (Kiss Csongor)
Date: Fri, 20 Jan 2017 14:54:03 +0000
Subject: Magical function to support reflection
In-Reply-To:
References: <9284675.CcDGa0mOpJ at squirrel>
Message-ID: <85BDF1D7-83D4-43E1-81C9-A2A708D69538 at gmail.com>

The problem is in the reify function:

```
reify :: forall a r. a -> (forall (s :: *). Reifies s a => Proxy s -> r) -> r
reify a k = unsafeCoerce (Magic k :: Magic a r) (const a) Proxy
```

here, unsafeCoerce coerces `const a` to type `a`, in the concrete case, to Int.

```
*Main> unsafeCoerce (const 5) :: Int
1099511628032
```

this is indeed what seems to be the issue:

```
*Main> reify 5 reflect
1099511628032
```

which is why test1 then shows the wrong result.

Also, in the Magic newtype, there’s a `Proxy s`, which afaik doesn’t have
the expected runtime representation `a -> r`. (there’s a proxy in the
middle, `a -> Proxy -> r`).

Changing Magic to

```
newtype Magic a r = Magic (forall (s :: *) . Reifies s a => Tagged s r)
```

now has the correct runtime rep, and the reification can be done by
coercing the Magic in to `a -> r`, as such

```
reify' :: a -> (forall (s :: *) . Reifies s a => Tagged s r) -> r
reify' a f = unsafeCoerce (Magic f) a
```

the Proxy version is just a convenience, wrapped around the magic one:

```
reify :: forall r a. a -> (forall (s :: *) . Reifies s a => Proxy s -> r) -> r
reify a f = reify' a (unproxy f)
```

Here’s the complete file, with the changes that compile and now work:
https://gist.github.com/kcsongor/b2f829b2b60022505b7e48b1360d2679

— Csongor

> On 20 Jan 2017, at 14:14, Matthew Pickering wrote:
>
> I modified the example on the wiki to compile but I seem to have
> missed something, could you perhaps point out what I missed?
> > https://gist.github.com/mpickering/da6d7852af2f6c8f59f80ce726baa864
>
> ```
> *Main> test1 2 123 441212
> 441335
> ```
>
> On Thu, Jan 19, 2017 at 3:58 AM, David Feuer wrote:
>> I've updated https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport to
>> reflect both Simon's thoughts on the matter and my own reactions to them. I
>> hope you'll give it a peek.
>>
>> David Feuer
>> _______________________________________________
>> ghc-devs mailing list
>> ghc-devs at haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Fri Jan 20 15:50:57 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 20 Jan 2017 15:50:57 +0000
Subject: Arc doesn't work
Message-ID:

I can't use arc. At the end of 'arc diff' it says

Exception
Some linters failed:

  - CommandException: Command failed with error #1!
    COMMAND
    python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs'

    STDOUT
    (empty)

    STDERR
      File ".arc-linters/check-cpp.py", line 28
        r = re.compile(rb'ASSERT\s+\(')
                       ^
    SyntaxError: invalid syntax

(Run with `--trace` for a full exception trace.)

simonpj at cam-05-unx:~/code/HEAD-3$ python3 --version
python3 --version
Python 3.2.3

Alas.

Simon
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From simonpj at microsoft.com Fri Jan 20 17:11:02 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 20 Jan 2017 17:11:02 +0000
Subject: Arc doesn't work
References:
Message-ID:

Have you tried to 'arc upgrade'? Yes, I did that.
It did upgrade successfully, but had no effect on this. S From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 20 January 2017 16:18 To: Simon Peyton Jones > Subject: Re: Arc doesn't work Have you tried to 'arc upgrade'? On 20 January 2017 at 17:50, Simon Peyton Jones via ghc-devs > wrote: I can’t use arc. At the end of ‘arc diff’ it says Exception Some linters failed: - CommandException: Command failed with error #1! COMMAND python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs' STDOUT (empty) STDERR File ".arc-linters/check-cpp.py", line 28 r = re.compile(rb'ASSERT\s+\(') ^ SyntaxError: invalid syntax (Run with `--trace` for a full exception trace.) simonpj at cam-05-unx:~/code/HEAD-3$ python3 --version python3 --version Python 3.2.3 Alas. Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Fri Jan 20 17:22:57 2017 From: rwbarton at gmail.com (Reid Barton) Date: Fri, 20 Jan 2017 12:22:57 -0500 Subject: Arc doesn't work In-Reply-To: References: Message-ID: >From the python 3 reference: New in version 3.3: The 'rb' prefix of raw bytes literals has been added as a synonym of 'br'. Simon, can you try replacing that occurrent of rb by br and see whether that fixes it? Just the one on the line it complained about. Regards, Reid Barton On Fri, Jan 20, 2017 at 10:50 AM, Simon Peyton Jones via ghc-devs wrote: > I can’t use arc. At the end of ‘arc diff’ it says > > Exception > > Some linters failed: > > - CommandException: Command failed with error #1! > > COMMAND > > python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs' > > > > STDOUT > > (empty) > > > > STDERR > > File ".arc-linters/check-cpp.py", line 28 > > r = re.compile(rb'ASSERT\s+\(') > > ^ > > SyntaxError: invalid syntax > > > > (Run with `--trace` for a full exception trace.) 
> > > > simonpj at cam-05-unx:~/code/HEAD-3$ python3 --version > > python3 --version > > Python 3.2.3 > > Alas. > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Fri Jan 20 17:45:50 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 20 Jan 2017 17:45:50 +0000 Subject: [Diffusion] [Build Failed] rGHCb78fa759bfb4: Simplify and improve CSE In-Reply-To: <20170120171247.130262.84991.B0F1C356@phabricator.haskell.org> References: <20170120171247.130262.84991.B0F1C356@phabricator.haskell.org> Message-ID: I'm sorry about all these Phab failures. I'm struggling with arc, and so committing directly. It all validates fine on my machine. They seem to be all about T1969. It validates fine on my machine. But only just! On my machine I get peak_megabytes = 64; but on Harbormaster it seems to be 65 or 66. But I think 64 is *just* within the 20% margin from 55, and 65 is just outside. The increase might well be peak-memory sampling noise. Anyway it looks as if the 20% increase all occurred some time ago. Maybe we should look at when and why? But meanwhile, Ben/Reid/David, if you agree that the increase is some time ago, would you like to re-centre the number and (maybe) open a ticket to find where the increase actually happened. I have to go home now... apologies. Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 20 January 2017 17:13 To: Simon Peyton Jones Subject: [Diffusion] [Build Failed] rGHCb78fa759bfb4: Simplify and improve CSE Harbormaster failed to build B13285: rGHCb78fa759bfb4: Simplify and improve CSE! 
BRANCHES master USERS simonpj (Author) O7 (Auditor) O12 (Auditor) COMMIT https://phabricator.haskell.org/rGHCb78fa759bfb4 EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj, Harbormaster -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 20 17:50:23 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 20 Jan 2017 17:50:23 +0000 Subject: Arc doesn't work In-Reply-To: References: Message-ID: Yes that worked! THanks https://phabricator.haskell.org/D2995 Will you make that change? S | -----Original Message----- | From: Reid Barton [mailto:rwbarton at gmail.com] | Sent: 20 January 2017 17:23 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Arc doesn't work | | From the python 3 reference: | | New in version 3.3: The 'rb' prefix of raw bytes literals has been added as | a synonym of 'br'. | | Simon, can you try replacing that occurrent of rb by br and see whether that | fixes it? Just the one on the line it complained about. | | Regards, | Reid Barton | | On Fri, Jan 20, 2017 at 10:50 AM, Simon Peyton Jones via ghc-devs wrote: | > I can’t use arc. At the end of ‘arc diff’ it says | > | > Exception | > | > Some linters failed: | > | > - CommandException: Command failed with error #1! | > | > COMMAND | > | > python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs' | > | > | > | > STDOUT | > | > (empty) | > | > | > | > STDERR | > | > File ".arc-linters/check-cpp.py", line 28 | > | > r = re.compile(rb'ASSERT\s+\(') | > | > ^ | > | > SyntaxError: invalid syntax | > | > | > | > (Run with `--trace` for a full exception trace.) | > | > | > | > simonpj at cam-05-unx:~/code/HEAD-3$ python3 --version | > | > python3 --version | > | > Python 3.2.3 | > | > Alas. 
| > | > Simon
| > | >
| > | >
| > _______________________________________________
| > ghc-devs mailing list
| > ghc-devs at haskell.org
| > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
| >

From ben at well-typed.com Fri Jan 20 18:02:19 2017
From: ben at well-typed.com (Ben Gamari)
Date: Fri, 20 Jan 2017 13:02:19 -0500
Subject: T13156
Message-ID: <87ziilpq38.fsf at ben-laptop.smart-cactus.org>

Hi Simon,

I'm seeing some rather peculiar validation issues with your most recent
patch. Namely,

=====> T13156(normal) 1 of 1 [0, 0, 0]
cd "./simplCore/should_compile/T13156.run" && $MAKE -s --no-print-directory T13156
Actual stdout output differs from expected:
diff -uw "./simplCore/should_compile/T13156.run/T13156.stdout.normalised" "./simplCore/should_compile/T13156.run/T13156.run.stdout.normalised"
--- ./simplCore/should_compile/T13156.run/T13156.stdout.normalised 2017-01-20 13:00:46.620654541 -0500
+++ ./simplCore/should_compile/T13156.run/T13156.run.stdout.normalised 2017-01-20 13:00:46.620654541 -0500
@@ -1,5 +1,2 @@
 case GHC.List.reverse @ a x of sat { __DEFAULT ->
- case \ (@ a) ->
- case g x of {
- case r @ GHC.Types.Any of { __DEFAULT ->
- case r @ GHC.Types.Any of { __DEFAULT -> GHC.Types.True }
+ case case g x of {

Oddly enough Harbormaster isn't reproducing this, but it seems to be
quite reproducible locally. Do you have any idea what might be going on
here?

Cheers,

- Ben
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rwbarton at gmail.com Fri Jan 20 18:07:32 2017 From: rwbarton at gmail.com (Reid Barton) Date: Fri, 20 Jan 2017 13:07:32 -0500 Subject: Arc doesn't work In-Reply-To: References: Message-ID: On Fri, Jan 20, 2017 at 12:50 PM, Simon Peyton Jones wrote: > Yes that worked! THanks > https://phabricator.haskell.org/D2995 > > Will you make that change? I have done so, in commit 5ff812c14594f507c48121f16be4752eee6e3c88. Regards, Reid Barton > S > > | -----Original Message----- > | From: Reid Barton [mailto:rwbarton at gmail.com] > | Sent: 20 January 2017 17:23 > | To: Simon Peyton Jones > | Cc: ghc-devs at haskell.org > | Subject: Re: Arc doesn't work > | > | From the python 3 reference: > | > | New in version 3.3: The 'rb' prefix of raw bytes literals has been added as > | a synonym of 'br'. > | > | Simon, can you try replacing that occurrent of rb by br and see whether that > | fixes it? Just the one on the line it complained about. > | > | Regards, > | Reid Barton > | > | On Fri, Jan 20, 2017 at 10:50 AM, Simon Peyton Jones via ghc-devs | devs at haskell.org> wrote: > | > I can’t use arc. At the end of ‘arc diff’ it says > | > > | > Exception > | > > | > Some linters failed: > | > > | > - CommandException: Command failed with error #1! > | > > | > COMMAND > | > > | > python3 .arc-linters/check-cpp.py 'compiler/basicTypes/Id.hs' > | > > | > > | > > | > STDOUT > | > > | > (empty) > | > > | > > | > > | > STDERR > | > > | > File ".arc-linters/check-cpp.py", line 28 > | > > | > r = re.compile(rb'ASSERT\s+\(') > | > > | > ^ > | > > | > SyntaxError: invalid syntax > | > > | > > | > > | > (Run with `--trace` for a full exception trace.) > | > > | > > | > > | > simonpj at cam-05-unx:~/code/HEAD-3$ python3 --version > | > > | > python3 --version > | > > | > Python 3.2.3 > | > > | > Alas. 
> | > > | > Simon > | > > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h > | > askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cff94b190c13e4417c34808d44158fa > | c2%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636205297797264731&sdata=xYh > | IGsBacpdYRuWbYB%2BYTc8Uh%2B0KfufpQbXM7gXfI4Q%3D&reserved=0 > | > From ben at well-typed.com Fri Jan 20 18:14:08 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 20 Jan 2017 13:14:08 -0500 Subject: T13156 In-Reply-To: <87ziilpq38.fsf@ben-laptop.smart-cactus.org> References: <87ziilpq38.fsf@ben-laptop.smart-cactus.org> Message-ID: <87wpdpppjj.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Hi Simon, > Hi Simon, Please ignore this; my working tree was dirty. Unfortunately this means that my version of akio's top-level strings patch regresses, but I can work this out on my own. Sorry for the noise. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Fri Jan 20 23:36:21 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 20 Jan 2017 23:36:21 +0000 Subject: [Diffusion] [Build Failed] rGHCb78fa759bfb4: Simplify and improve CSE In-Reply-To: References: <20170120171247.130262.84991.B0F1C356@phabricator.haskell.org> Message-ID: I should have made it clearer that my "just below" measurement was *before* my patch. So the situation before was just on the borderline, on my machine anyway. (And the situation afterwards is still below the borderline on my machine, just not on harbourmaster.) 
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones via ghc-devs Sent: 20 January 2017 17:46 To: ghc-devs at haskell.org Subject: RE: [Diffusion] [Build Failed] rGHCb78fa759bfb4: Simplify and improve CSE I'm sorry about all these Phab failures. I'm struggling with arc, and so committing directly. It all validates fine on my machine. They seem to be all about T1969. It validates fine on my machine. But only just! On my machine I get peak_megabytes = 64; but on Harbormaster it seems to be 65 or 66. But I think 64 is *just* within the 20% margin from 55, and 65 is just outside. The increase might well be peak-memory sampling noise. Anyway it looks as if the 20% increase all occurred some time ago. Maybe we should look at when and why? But meanwhile, Ben/Reid/David, if you agree that the increase is some time ago, would you like to re-centre the number and (maybe) open a ticket to find where the increase actually happened. I have to go home now... apologies. Simon From: noreply at phabricator.haskell.org [mailto:noreply at phabricator.haskell.org] Sent: 20 January 2017 17:13 To: Simon Peyton Jones > Subject: [Diffusion] [Build Failed] rGHCb78fa759bfb4: Simplify and improve CSE Harbormaster failed to build B13285: rGHCb78fa759bfb4: Simplify and improve CSE! BRANCHES master USERS simonpj (Author) O7 (Auditor) O12 (Auditor) COMMIT https://phabricator.haskell.org/rGHCb78fa759bfb4 EMAIL PREFERENCES https://phabricator.haskell.org/settings/panel/emailpreferences/ To: simonpj, Harbormaster -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sat Jan 21 22:21:36 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 21 Jan 2017 22:21:36 +0000 Subject: Next steps of the trac-to-maniphest migration? Message-ID: Hello devs, Thanks to everyone so far who has looked at and commented on the prototype. 
It seems that the response is generally positive so I would like to drive
the process forwards.

In order for that to happen, someone needs to decide whether we as a
community think it is a good idea. It seems to make sense if those who use
the tracker most make this decision so I propose that Simon and Ben should
ultimately be the ones to do this.

Therefore, I propose this timeline

1. Before 11th Feb (3 weeks from today) we decide whether we want to
migrate the issue tracker.
2. A working group is established who will work through the details of
the migration with the minimum of a final prototype built from a clone
of the actual installation.
3. Migration would happen before the end of March.

I think Ben summarised the discussions quite well on the wiki page -
https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest

And the prototype continues to exist here.
http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/

As always, any comments welcome.

Matt

From johannes.waldmann at htwk-leipzig.de Sun Jan 22 16:09:16 2017
From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann)
Date: Sun, 22 Jan 2017 17:09:16 +0100
Subject: StablePtr / StableName ?
Message-ID: <7a68ffc9-8bad-cf39-5894-dc3f86a52e64 at htwk-leipzig.de>

Dear ghc devs,

would the StablePtr performance issue (slow hash table)
https://ghc.haskell.org/trac/ghc/ticket/13165
also affect StableNames?
Cf. https://github.com/ekmett/ersatz/issues/30

Could making 10^5 stable names, and accessing each just once,
take noticeable time? Where would this show up in a profile?

I guess there's no easy way to change the ersatz library
(StableName is the fundamental mechanism for detecting sharing)
but if these issues are related, then ersatz provides a
performance test case.

Thanks, Johannes.
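The sharing-detection mechanism Johannes refers to can be illustrated with a
minimal sketch using base's System.Mem.StableName (this shows only the
equality behaviour, not the hash-table timing question he asks about):

```haskell
import Control.Exception (evaluate)
import System.Mem.StableName (makeStableName)

main :: IO ()
main = do
  -- Force to WHNF first: a thunk and its evaluated result may
  -- receive different stable names, so name it after evaluation.
  xs <- evaluate [1 .. 5 :: Int]
  s1 <- makeStableName xs
  s2 <- makeStableName xs
  print (s1 == s2)          -- same heap object, so True
  -- Build a distinct object (a fresh cons cell on top of xs).
  ys <- evaluate (0 : xs)
  s3 <- makeStableName ys
  print (s1 == s3)          -- distinct objects get distinct names: False
```

Equal stable names imply the same heap object, which is what lets a library
like ersatz observe sharing; the caveat that an object can change its name
across evaluation is why the sketch forces to WHNF before naming.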
From simonpj at microsoft.com Mon Jan 23 21:47:00 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 23 Jan 2017 21:47:00 +0000
Subject: Floating lazy primops
In-Reply-To: <2454822.8oMLGhvfTC at squirrel>
References: <2454822.8oMLGhvfTC at squirrel>
Message-ID:

We should have this conversation on a ticket, perhaps #13027.

| good at the time). Are there actually any primops with lifted arguments
| where we *want* speculation? Perhaps the most important primop to
| consider is seq#, which is

I don't understand this question. comment:23 of #13027 specifically says to
skip the ok-for-spec test for lifted args. So you ask "are there any"
whereas comment:23 says "all".

| 2. If dataToTag# is marked can_fail (an aspect of https://
| phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to
| end up being applied to an unevaluated argument? If not, perhaps the
| CorePrep specials can be removed altogether.

That may be true, but it's not easy to GUARANTEE in the way that Lint
guarantees types. So I'm happier leaving in the CorePrep stuff.. but please
do add a comment there that points to the Note in primops.txt.pp and says
that it seems unlikely this will ever occur.

| One more question: do you think it's *better* to let dataToTag# float and
| then fix it up later, or better to mark it can_fail? Unlike
| reallyUnsafePtrEquality#, dataToTag# is used all over the place, so it is
| important that it interacts as well as possible with the optimizer,
| whatever that entails.

I think better to do as now. That way the simplifier has the opportunity to
common-up multiple evals into one. If we add them later that's harder. By
all means add Notes to explain this.

Thanks!
Simon | -----Original Message----- | From: David Feuer [mailto:david at well-typed.com] | Sent: 18 January 2017 19:45 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Floating lazy primops | | I opened up https://phabricator.haskell.org/D2987 to mark | reallyUnsafePtrEquality# can_fail, but in the process I realized a couple | things. | | 1. Part of https://phabricator.haskell.org/rGHC5a9a1738023a may actually | not have been such a hot idea after all (although it certainly sounded | good at the time). Are there actually any primops with lifted arguments | where we *want* speculation? Perhaps the most important primop to | consider is seq#, which is | (mysteriously?) marked neither can_fail nor has_side_effects, but another | to look at is unpackClosure#, which seems likely to give different | results before and after forcing. Most other primops with lifted | arguments are marked has_side_effects, and therefore won't be floated out | anyway. | | 2. If dataToTag# is marked can_fail (an aspect of https:// | phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to | end up being applied to an unevaluated argument? If not, perhaps the | CorePrep specials can be removed altogether. | | David Feuer From marlowsd at gmail.com Tue Jan 24 09:38:16 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 24 Jan 2017 09:38:16 +0000 Subject: StablePtr / StableName ? In-Reply-To: <7a68ffc9-8bad-cf39-5894-dc3f86a52e64@htwk-leipzig.de> References: <7a68ffc9-8bad-cf39-5894-dc3f86a52e64@htwk-leipzig.de> Message-ID: StableNames do use the RTS hash table implementation, but StablePtr does *not*, the ticket is incorrect. But to be clear, nothing has changed - StableName has always used this hash table implementation. No doubt it could be faster if we used a better hash table, but whether it matters to you or not depends on what else your application is doing - is StableName in the inner loop? You'd have to measure it. 
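For instance, a rough micro-benchmark in the spirit of Johannes's 10^5 figure (only indicative, since GC and timer granularity dominate at this scale; the function name is mine):

```haskell
import Control.Monad (forM)
import System.CPUTime (getCPUTime)
import System.Mem.StableName (makeStableName, hashStableName)

-- Make n stable names, touch each one once, and report CPU time in
-- microseconds.
timeStableNames :: Int -> IO Integer
timeStableNames n = do
  t0    <- getCPUTime                        -- picoseconds of CPU time
  names <- forM [1 .. n] makeStableName
  let h = sum (map hashStableName names)     -- access each name once
  t1    <- h `seq` getCPUTime
  return ((t1 - t0) `div` 1000000)
```

Running `timeStableNames 100000 >>= print` gives a first-order answer to whether the RTS hash table shows up at all before reaching for the profiler.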
Cheers Simon On 22 January 2017 at 16:09, Johannes Waldmann < johannes.waldmann at htwk-leipzig.de> wrote: > Dear ghc devs, > > would the StablePtr performance issue (slow hash table) > https://ghc.haskell.org/trac/ghc/ticket/13165 > also affect StableNames? > Cf. https://github.com/ekmett/ersatz/issues/30 > > Could making 10^5 stable names, and accessing each just once, > take noticeable time? Where would this show up in a profile? > > I guess there's no easy way to change the ersatz library > (StableName this is the fundamental mechanism for detecting sharing) > but if these issues are related, then ersatz provides a > performance test case. > > Thanks, Johannes. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jan 24 09:41:48 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 24 Jan 2017 09:41:48 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: References: Message-ID: On 21 January 2017 at 22:21, Matthew Pickering wrote: > Hello devs, > > Thanks to everyone so far who has looked at and commented on the > prototype. It seems that the response is generally positive so I would > like to drive the process forwards. > > In order for that to happen, someone needs to decide whether we as a > community think it is a good idea. It seems to make sense if those who > use the tracker most make this decision so I propose that Simon and > Ben should ultimately be the ones to do this. > > Therefore, I propose this timeline > > 1. Before 11th Feb (3 weeks from today) we decide whether we want to > migrate the issue tracker. > 2. A working group is established who will work through the details of > the migration with the minimum of a final prototype built from a clone > of the actual installation. > 3. 
Migration would happen before the end of March. > Sounds good to me. I personally have only glanced at it so far, but I'll give it some attention. I'm pretty attached to Trac's ability to do complex queries on tickets and the ability to embed ticket queries into wiki pages, so the gains would have to be compelling to outweigh the losses for me. But I'll give it a closer look. Cheers Simon > I think Ben summarised the discussions quite well on the wiki page - > https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest > > And the prototype continues to exist here. > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ > > As always, any comments welcome. > > Matt > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 24 10:26:34 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 24 Jan 2017 10:26:34 +0000 Subject: StablePtr / StableName ? In-Reply-To: References: <7a68ffc9-8bad-cf39-5894-dc3f86a52e64@htwk-leipzig.de> Message-ID: Add this conversation to #13165? From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Marlow Sent: 24 January 2017 09:38 To: Johannes Waldmann Cc: ghc-devs at haskell.org Subject: Re: StablePtr / StableName ? StableNames do use the RTS hash table implementation, but StablePtr does *not*, the ticket is incorrect. But to be clear, nothing has changed - StableName has always used this hash table implementation. No doubt it could be faster if we used a better hash table, but whether it matters to you or not depends on what else your application is doing - is StableName in the inner loop? You'd have to measure it. 
Cheers Simon On 22 January 2017 at 16:09, Johannes Waldmann > wrote: Dear ghc devs, would the StablePtr performance issue (slow hash table) https://ghc.haskell.org/trac/ghc/ticket/13165 also affect StableNames? Cf. https://github.com/ekmett/ersatz/issues/30 Could making 10^5 stable names, and accessing each just once, take noticeable time? Where would this show up in a profile? I guess there's no easy way to change the ersatz library (StableName this is the fundamental mechanism for detecting sharing) but if these issues are related, then ersatz provides a performance test case. Thanks, Johannes. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jan 24 10:37:36 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 24 Jan 2017 10:37:36 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: References: Message-ID: Thank you Simon. If you have any example queries that you run often or queries which you have embedded into wikipages then it would be useful to share them so I can investigate. With regards to the last point. This is possible in a more structured way. You can create a dashboard with a single query embedded and then embed this using standard remarkup syntax. For example on a project page, I embedded a query which matched tickets with "PatternSynonyms" and "newcomer". http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/project/profile/165/ You can embed this panel anywhere where remarkup is accepted. 
For example, in a wiki page - http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/w/ or tickets http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T12548 It is a bit more heavyweight to setup but much easier to get right due to the structured editing interface which trac doesn't provide for these kinds of queries. Matt On Tue, Jan 24, 2017 at 9:41 AM, Simon Marlow wrote: > On 21 January 2017 at 22:21, Matthew Pickering > wrote: >> >> Hello devs, >> >> Thanks to everyone so far who has looked at and commented on the >> prototype. It seems that the response is generally positive so I would >> like to drive the process forwards. >> >> In order for that to happen, someone needs to decide whether we as a >> community think it is a good idea. It seems to make sense if those who >> use the tracker most make this decision so I propose that Simon and >> Ben should ultimately be the ones to do this. >> >> Therefore, I propose this timeline >> >> 1. Before 11th Feb (3 weeks from today) we decide whether we want to >> migrate the issue tracker. >> 2. A working group is established who will work through the details of >> the migration with the minimum of a final prototype built from a clone >> of the actual installation. >> 3. Migration would happen before the end of March. > > > Sounds good to me. I personally have only glanced at it so far, but I'll > give it some attention. I'm pretty attached to Trac's ability to do complex > queries on tickets and the ability to embed ticket queries into wiki pages, > so the gains would have to be compelling to outweigh the losses for me. But > I'll give it a closer look. > > Cheers > Simon > >> >> I think Ben summarised the discussions quite well on the wiki page - >> https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest >> >> And the prototype continues to exist here. >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ >> >> As always, any comments welcome. 
>> >> Matt >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From johannes.waldmann at htwk-leipzig.de Tue Jan 24 11:25:54 2017 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Tue, 24 Jan 2017 12:25:54 +0100 Subject: StablePtr / StableName ? In-Reply-To: References: <7a68ffc9-8bad-cf39-5894-dc3f86a52e64@htwk-leipzig.de> Message-ID: <26c68a51-9eb3-0d9e-efb2-aa23c47a1b75@htwk-leipzig.de> Dear Simon, thanks for looking into this. > is StableName in the inner loop? Yes. This application's inner loop uses a HashMap (StableName Expression) Int for memoization. This is the Tseitin transform: for each node, build a literal. Each node is stable-named. I guess the RTS's hashmap performance comes into play only when pointers are moved (in GC). The application's hashmap cost will dominate, because it's used more often. > You'd have to measure it. I did. It seems we're good on StableNames, and time goes elsewhere. https://github.com/ekmett/ersatz/issues/30#issuecomment-274775792 - Johannes. From marlowsd at gmail.com Tue Jan 24 13:26:23 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 24 Jan 2017 13:26:23 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: References: Message-ID: On 24 January 2017 at 10:37, Matthew Pickering wrote: > Thank you Simon. > > If you have any example queries that you run often or queries which > you have embedded into wikipages then it would be useful to share them > so I can investigate. > The 8.2.1 status page has queries embedded: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.2.1 Personally I do queries like "all open bugs where Component = RuntimeSystem ordered by priority". It looks like we can probably do that with Maniphest. I couldn't take a look at the interface for creating a ticket because I have to create an account, and it says my account is pending approval. 
Does Maniphest have a concept of ticket dependencies? i.e. ticket X is blocked by Y. Can we have custom fields with Maniphest? I like the rich metadata we have with OS / Architecture / Component / Failure types. It's true that we don't use it consistently, but at least when we do use it there's an obvious and standard way to do it. When I search for RTS bugs I know that at least all the bugs I'm seeing are RTS bugs, even if I'm not seeing all the RTS bugs. People responsible for particular architectures can keep their metadata up to date to make it easier to manage their ticket lists. With regards to the last point. This is possible in a more structured > way. You can create a dashboard with a single query embedded and then > embed this using standard remarkup syntax. > > For example on a project page, I embedded a query which matched > tickets with "PatternSynonyms" and "newcomer". > > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ > project/profile/165/ > > You can embed this panel anywhere where remarkup is accepted. For > example, in a wiki page - > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/w/ or > tickets http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T12548 > Ok, it's good to know that Phabricator can embed queries, but we're not planning to move the wiki, correct? > It is a bit more heavyweight to setup but much easier to get right due > to the structured editing interface which trac doesn't provide for > these kinds of queries. > > Matt > > On Tue, Jan 24, 2017 at 9:41 AM, Simon Marlow wrote: > > On 21 January 2017 at 22:21, Matthew Pickering < > matthewtpickering at gmail.com> > > wrote: > >> > >> Hello devs, > >> > >> Thanks to everyone so far who has looked at and commented on the > >> prototype. It seems that the response is generally positive so I would > >> like to drive the process forwards. > >> > >> In order for that to happen, someone needs to decide whether we as a > >> community think it is a good idea. 
It seems to make sense if those who > >> use the tracker most make this decision so I propose that Simon and > >> Ben should ultimately be the ones to do this. > >> > >> Therefore, I propose this timeline > >> > >> 1. Before 11th Feb (3 weeks from today) we decide whether we want to > >> migrate the issue tracker. > >> 2. A working group is established who will work through the details of > >> the migration with the minimum of a final prototype built from a clone > >> of the actual installation. > >> 3. Migration would happen before the end of March. > > > > > > Sounds good to me. I personally have only glanced at it so far, but I'll > > give it some attention. I'm pretty attached to Trac's ability to do > complex > > queries on tickets and the ability to embed ticket queries into wiki > pages, > > so the gains would have to be compelling to outweigh the losses for me. > But > > I'll give it a closer look. > > > > Cheers > > Simon > > > >> > >> I think Ben summarised the discussions quite well on the wiki page - > >> https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest > >> > >> And the prototype continues to exist here. > >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ > >> > >> As always, any comments welcome. > >> > >> Matt > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jan 24 14:09:07 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 24 Jan 2017 14:09:07 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: References: Message-ID: On Tue, Jan 24, 2017 at 1:26 PM, Simon Marlow wrote: > On 24 January 2017 at 10:37, Matthew Pickering > wrote: >> >> Thank you Simon. 
>> >> If you have any example queries that you run often or queries which >> you have embedded into wikipages then it would be useful to share them >> so I can investigate. > > > The 8.2.1 status page has queries embedded: > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.2.1 > > Personally I do queries like "all open bugs where Component = RuntimeSystem > ordered by priority". It looks like we can probably do that with Maniphest. > > I couldn't take a look at the interface for creating a ticket because I have > to create an account, and it says my account is pending approval. > > Does Maniphest have a concept of ticket dependencies? i.e. ticket X is > blocked by Y. Yes, see for example - http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T7724 > > Can we have custom fields with Maniphest? I like the rich metadata we have > with OS / Architecture / Component / Failure types. It's true that we don't > use it consistently, but at least when we do use it there's an obvious and > standard way to do it. When I search for RTS bugs I know that at least all > the bugs I'm seeing are RTS bugs, even if I'm not seeing all the RTS bugs. > People responsible for particular architectures can keep their metadata up > to date to make it easier to manage their ticket lists. There was a long discussion about this on the original thread with people echoing this sentiment. I am of the opinion that projects would be a better fit as 1. They integrate better with the rest of phabricator 2. They are not relevant to every ticket. There are tickets about infrastructure matters for which the concept of OS is irrelevant for example. I like to think of projects as structured unstructured metadata. The structure is that you can group different project tags together as subprojects of a parent project but adding projects to a ticket is unstructured. 
This is how "architecture" is implemented currently - http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/project/view/101/ On trac, keywords are not very useful as they are completely unstructured and not discoverable. I think projects greatly improve on this. I also posted some images about the different work flows of projects, subprojects and custom fields. https://phabricator.haskell.org/M3/3/ http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/M1 > >> With regards to the last point. This is possible in a more structured >> way. You can create a dashboard with a single query embedded and then >> embed this using standard remarkup syntax. >> >> For example on a project page, I embedded a query which matched >> tickets with "PatternSynonyms" and "newcomer". >> >> >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/project/profile/165/ >> >> You can embed this panel anywhere where remarkup is accepted. For >> example, in a wiki page - >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/w/ or >> tickets http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/T12548 > > > Ok, it's good to know that Phabricator can embed queries, but we're not > planning to move the wiki, correct? I am not proposing that but if there is interest it could be investigated. The migration would be trickier due to the complicated ways that people use markup in wiki pages. > >> >> It is a bit more heavyweight to setup but much easier to get right due >> to the structured editing interface which trac doesn't provide for >> these kinds of queries. >> >> Matt >> >> On Tue, Jan 24, 2017 at 9:41 AM, Simon Marlow wrote: >> > On 21 January 2017 at 22:21, Matthew Pickering >> > >> > wrote: >> >> >> >> Hello devs, >> >> >> >> Thanks to everyone so far who has looked at and commented on the >> >> prototype. It seems that the response is generally positive so I would >> >> like to drive the process forwards. 
>> >> >> >> In order for that to happen, someone needs to decide whether we as a >> >> community think it is a good idea. It seems to make sense if those who >> >> use the tracker most make this decision so I propose that Simon and >> >> Ben should ultimately be the ones to do this. >> >> >> >> Therefore, I propose this timeline >> >> >> >> 1. Before 11th Feb (3 weeks from today) we decide whether we want to >> >> migrate the issue tracker. >> >> 2. A working group is established who will work through the details of >> >> the migration with the minimum of a final prototype built from a clone >> >> of the actual installation. >> >> 3. Migration would happen before the end of March. >> > >> > >> > Sounds good to me. I personally have only glanced at it so far, but >> > I'll >> > give it some attention. I'm pretty attached to Trac's ability to do >> > complex >> > queries on tickets and the ability to embed ticket queries into wiki >> > pages, >> > so the gains would have to be compelling to outweigh the losses for me. >> > But >> > I'll give it a closer look. >> > >> > Cheers >> > Simon >> > >> >> >> >> I think Ben summarised the discussions quite well on the wiki page - >> >> https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Maniphest >> >> >> >> And the prototype continues to exist here. >> >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/ >> >> >> >> As always, any comments welcome. >> >> >> >> Matt >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > > Hope that is useful. Matt From mail at joachim-breitner.de Tue Jan 24 16:00:41 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 24 Jan 2017 11:00:41 -0500 Subject: Next steps of the trac-to-maniphest migration? 
In-Reply-To: References: Message-ID: <1485273641.6235.17.camel@joachim-breitner.de> Hi, On Tuesday, 24.01.2017 at 10:37 +0000, Matthew Pickering wrote: > If you have any example queries that you run often or queries which > you have embedded into wikipages then it would be useful to share > them so I can investigate. The embedded query on https://ghc.haskell.org/trac/ghc/wiki/Newcomers is pretty useful. But as you point out, that is still possible (if we migrate the wiki as well…) Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org From matthewtpickering at gmail.com Tue Jan 24 16:16:33 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 24 Jan 2017 16:16:33 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: <1485273641.6235.17.camel@joachim-breitner.de> References: <1485273641.6235.17.camel@joachim-breitner.de> Message-ID: Thinking about this, it would probably make sense to at least transfer the Newcomers page, since if we migrated the tracker most of its instructions would reference things on Phab rather than Trac. On Tue, Jan 24, 2017 at 4:00 PM, Joachim Breitner wrote: > Hi, > > On Tuesday, 24.01.2017 at 10:37 +0000, Matthew Pickering wrote: >> If you have any example queries that you run often or queries which >> you have embedded into wikipages then it would be useful to share >> them so I can investigate. > > the embedded query on > https://ghc.haskell.org/trac/ghc/wiki/Newcomers > is pretty useful. 
But as you point out, that is still possible (if we > migrate the wiki as well…) > > Greetings, > Joachim > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Tue Jan 24 18:12:15 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 24 Jan 2017 18:12:15 +0000 Subject: Next steps of the trac-to-maniphest migration? In-Reply-To: References: Message-ID: On 24 January 2017 at 14:09, Matthew Pickering wrote: > On Tue, Jan 24, 2017 at 1:26 PM, Simon Marlow wrote: > > > Can we have custom fields with Maniphest? I like the rich metadata we > have > > with OS / Architecture / Component / Failure types. It's true that we > don't > > use it consistently, but at least when we do use it there's an obvious > and > > standard way to do it. When I search for RTS bugs I know that at least > all > > the bugs I'm seeing are RTS bugs, even if I'm not seeing all the RTS > bugs. > > People responsible for particular architectures can keep their metadata > up > > to date to make it easier to manage their ticket lists. > > There was a long discussion about this on the original thread with > people echoing this sentiment. I am of the opinion that projects would > be a better fit as > > 1. They integrate better with the rest of phabricator > 2. They are not relevant to every ticket. There are tickets about > infrastructure matters for which the concept of OS is irrelevant for > example. > > I like to think of projects as structured unstructured metadata. > The structure is that you > can group different project tags together as subprojects of a parent > project but adding projects to a ticket is unstructured. 
> This is how "architecture" is implemented currently - > http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/pr > oject/view/101/ > On trac, keywords are not very useful as they are completely > unstructured and not discoverable. I think projects greatly improve on > this. > I think the problem here is that it's not obvious which projects should be added to tickets. As a ticket submitter, if I have metadata I'm not likely to add it, and as developers we'll probably forget which fields we could add. Yes, Trac keywords are even more useless. But we don't generally use keywords; the point here is about the other metadata fields (OS, Architecture, etc.). Just having some text on the ticket creation page to suggest adding OS / Architecture would help a lot. Cheers Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Tue Jan 24 18:19:06 2017 From: david at well-typed.com (David Feuer) Date: Tue, 24 Jan 2017 13:19:06 -0500 Subject: Floating lazy primops Message-ID: <1nrrxj97wi29e9aew8xjcmd8.1485281730276@email.android.com> I've opened #13182 to explore one possible approach to dataToTag# that strikes me as likely to be simpler and to have fewer potential gotchas. But there could be critical points I'm missing about why we do it as we do now. -------- Original message --------From: Simon Peyton Jones Date: 1/23/17 4:47 PM (GMT-05:00) To: David Feuer Cc: ghc-devs at haskell.org Subject: RE: Floating lazy primops We should have this conversation on a ticket, perhaps #13027. | good at the time). Are there actually any primops with lifted arguments | where we *want* speculation? Perhaps the most important primop to | consider is seq#, which is I don't understand this question.  comment:23 of #13027 specifically says to skip the ok-for-spec test for lifted args. So you as "are the any" whereas comment:23 says "all". | 2. 
If dataToTag# is marked can_fail (an aspect of https:// | phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to | end up being applied to an unevaluated argument? If not, perhaps the | CorePrep specials can be removed altogether. That may be true, but it's not easy to GUARANTEE in the way that Lint guarantees types.   So I'm happier leaving in the CorePrep stuff.. but please do add a comment there that points to the Note in primops.txt.pp and says that it seems unlikely this will ever occur. | One more question: do you think it's *better* to let dataToTag# float and | then fix it up later, or better to mark it can_fail? Unlike | reallyUnsafePtrEquality#, dataToTag# is used all over the place, so it is | important that it interacts as well as possible with the optimizer, | whatever that entails. I think better to do as now. That way the simplifier has the opportunity to common-up multiple evals into one.  If we add them later that's harder.  By all means add Notes to explain this. Thanks! Simon | -----Original Message----- | From: David Feuer [mailto:david at well-typed.com] | Sent: 18 January 2017 19:45 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Floating lazy primops | | I opened up https://phabricator.haskell.org/D2987 to mark | reallyUnsafePtrEquality# can_fail, but in the process I realized a couple | things. | | 1. Part of https://phabricator.haskell.org/rGHC5a9a1738023a may actually | not have been such a hot idea after all (although it certainly sounded | good at the time). Are there actually any primops with lifted arguments | where we *want* speculation? Perhaps the most important primop to | consider is seq#, which is | (mysteriously?) marked neither can_fail nor has_side_effects, but another | to look at is unpackClosure#, which seems likely to give different | results before and after forcing. Most other primops with lifted | arguments are marked has_side_effects, and therefore won't be floated out | anyway. 
| | 2. If dataToTag# is marked can_fail (an aspect of https:// | phabricator.haskell.org/rGHC5a9a1738023a), is it still possible for it to | end up being applied to an unevaluated argument? If not, perhaps the | CorePrep specials can be removed altogether. | | David Feuer From david at well-typed.com Tue Jan 24 21:10:46 2017 From: david at well-typed.com (David Feuer) Date: Tue, 24 Jan 2017 16:10:46 -0500 Subject: Intended meaning of nubBy use in GHC's compiler/cmm/Debug.hs Message-ID: <2456853.ugXtPeQiLE@squirrel> The meaning of nubBy when applied to functions other than equivalence relations changed around the time you created compiler/cmm/Debug.hs. This makes it extra tricky to figure out exactly what that nubBy is expected to do. Do you think you could explain? Furthermore, it would be helpful to know what (if anything) is known about the relationships between the source spans in the argument to that nubBy. This code is proving to be a bit of a hot spot in #11095, and the better we understand what's going on, the better our chances of coming up with good solutions. Thanks, David Feuer From simonpj at microsoft.com Wed Jan 25 10:11:19 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 25 Jan 2017 10:11:19 +0000 Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free variables (fd9608e) In-Reply-To: <20170124172007.A50AC3A300@ghc.haskell.org> References: <20170124172007.A50AC3A300@ghc.haskell.org> Message-ID: Alex Interesting. Care to give us any background on what you are working on? I've often thought about discounting for free vars. Do you have some compelling examples? (Also fine if you just want to noodle privately for now.) 
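One sketch of the kind of example presumably in mind (all names invented): a local function that scrutinises a variable free in it, where inlining at a site that fixes that variable lets the case disappear.

```haskell
-- `step` scrutinises `b`, which is free in it (b is h's argument, not
-- step's).  Inlining `step` by itself does not help, but inlining `h`
-- where `b` is a known constructor -- as in `incAll` -- lets the
-- simplifier resolve the case entirely.  A size discount for scrutinised
-- free variables makes such inlinings look correspondingly cheaper.
h :: Bool -> [Int] -> [Int]
h b = map step
  where
    step x = case b of
      True  -> x + 1
      False -> x - 1

incAll :: [Int] -> [Int]
incAll = h True   -- after inlining and case-elimination: map (+ 1)
```

This is only an illustration of the motivation, not a claim about what the branch's heuristic does on this exact program.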
Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 24 January 2017 17:20 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free | variables (fd9608e) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/discount-fv | Link : | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.haske | ll.org%2Ftrac%2Fghc%2Fchangeset%2Ffd9608ea93fc2389907b82c3fe540805d986c28 | e%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& | sdata=3%2F1y5zQjDsa5j1%2FhTEjnKc4mg0qNtCD8WyqMaNUq5mA%3D&reserved=0 | | >--------------------------------------------------------------- | | commit fd9608ea93fc2389907b82c3fe540805d986c28e | Author: alexbiehl | Date: Mon Jan 23 20:34:20 2017 +0100 | | Discount scrutinized free variables | | | >--------------------------------------------------------------- | | fd9608ea93fc2389907b82c3fe540805d986c28e | compiler/coreSyn/CoreUnfold.hs | 95 +++++++++++++++++++++++++----------- | ------ | 1 file changed, 56 insertions(+), 39 deletions(-) | | diff --git a/compiler/coreSyn/CoreUnfold.hs | b/compiler/coreSyn/CoreUnfold.hs index 574d841..36ea382 100644 | --- a/compiler/coreSyn/CoreUnfold.hs | +++ b/compiler/coreSyn/CoreUnfold.hs | @@ -62,8 +62,11 @@ import Bag | import Util | import Outputable | import ForeignCall | +import VarEnv | | +import Control.Applicative ((<|>)) | import qualified Data.ByteString as BS | +import Debug.Trace | | {- | ************************************************************************ | @@ -501,43 +504,51 @@ sizeExpr :: DynFlags | -- Note [Computing the size of an expression] | | sizeExpr dflags bOMB_OUT_SIZE top_args expr | - = size_up expr | + = size_up emptyInScopeSet expr | where | - size_up (Cast e _) = size_up e | - size_up (Tick _ e) = size_up e | - size_up (Type 
_) = sizeZero -- Types cost nothing | - size_up (Coercion _) = sizeZero | - size_up (Lit lit) = sizeN (litSize lit) | - size_up (Var f) | isRealWorldId f = sizeZero | + size_up :: InScopeSet -> CoreExpr -> ExprSize | + size_up is (Cast e _) = size_up is e | + size_up is (Tick _ e) = size_up is e | + size_up _ (Type _) = sizeZero -- Types cost nothing | + size_up _ (Coercion _) = sizeZero | + size_up _ (Lit lit) = sizeN (litSize lit) | + size_up _ (Var f) | isRealWorldId f = sizeZero | -- Make sure we get constructor discounts even | -- on nullary constructors | - | otherwise = size_up_call f [] 0 | - | - size_up (App fun arg) | - | isTyCoArg arg = size_up fun | - | otherwise = size_up arg `addSizeNSD` | - size_up_app fun [arg] (if isRealWorldExpr arg | then 1 else 0) | - | - size_up (Lam b e) | - | isId b && not (isRealWorldId b) = lamScrutDiscount dflags | (size_up e `addSizeN` 10) | - | otherwise = size_up e | - | - size_up (Let (NonRec binder rhs) body) | - = size_up rhs `addSizeNSD` | - size_up body `addSizeN` | + | otherwise = size_up_call f [] 0 | + | + size_up is (App fun arg) | + | isTyCoArg arg = size_up is fun | + | otherwise = size_up is arg `addSizeNSD` | + size_up_app is fun [arg] (if isRealWorldExpr | + arg then 1 else 0) | + | + size_up is (Lam b e) | + | isId b && not (isRealWorldId b) = lamScrutDiscount dflags | (size_up is e `addSizeN` 10) | + | otherwise = size_up is e | + | + size_up is (Let (NonRec binder rhs) body) | + = let | + is' = extendInScopeSet is binder | + in | + size_up is rhs `addSizeNSD` | + size_up is' body `addSizeN` | (if isUnliftedType (idType binder) then 0 else 10) | -- For the allocation | -- If the binder has an unlifted type there is no | allocation | | - size_up (Let (Rec pairs) body) | - = foldr (addSizeNSD . size_up . 
snd) | - (size_up body `addSizeN` (10 * length pairs)) -- | (length pairs) for the allocation | + size_up is (Let (Rec pairs) body) | + = let | + is' = extendInScopeSetList is (map fst pairs) | + in | + foldr (addSizeNSD . size_up is' . snd) | + (size_up is' body | + `addSizeN` (10 * length pairs)) -- (length pairs) | for the allocation | pairs | | - size_up (Case e _ _ alts) | - | Just v <- is_top_arg e -- We are scrutinising an argument | variable | + size_up is (Case e _ _ alts) | + | Just v <- is_top_arg e <|> is_free_var e -- We are | + scrutinising an argument variable or a free variable | = let | - alt_sizes = map size_up_alt alts | + alt_sizes = map (size_up_alt is) alts | | -- alts_size tries to compute a good discount for | -- the case when we are scrutinising an argument | variable @@ -569,9 +580,12 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | is_top_arg (Cast e _) = is_top_arg e | is_top_arg _ = Nothing | | + is_free_var (Var v) | not (v `elemInScopeSet` is) = Just v | + is_free_var (Cast e _) = is_free_var e | + is_free_var _ = Nothing | | - size_up (Case e _ _ alts) = size_up e `addSizeNSD` | - foldr (addAltSize . size_up_alt) | case_size alts | + size_up is (Case e _ _ alts) = size_up is e `addSizeNSD` | + foldr (addAltSize . 
size_up_alt is) | + case_size alts | where | case_size | | is_inline_scrut e, not (lengthExceeds alts 1) = sizeN (- | 10) @@ -608,15 +622,15 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | | ------------ | -- size_up_app is used when there's ONE OR MORE value args | - size_up_app (App fun arg) args voids | - | isTyCoArg arg = size_up_app fun args voids | - | isRealWorldExpr arg = size_up_app fun (arg:args) | (voids + 1) | - | otherwise = size_up arg `addSizeNSD` | - size_up_app fun (arg:args) | voids | - size_up_app (Var fun) args voids = size_up_call fun args voids | - size_up_app (Tick _ expr) args voids = size_up_app expr args voids | - size_up_app (Cast expr _) args voids = size_up_app expr args voids | - size_up_app other args voids = size_up other `addSizeN` | + size_up_app is (App fun arg) args voids | + | isTyCoArg arg = size_up_app is fun args voids | + | isRealWorldExpr arg = size_up_app is fun (arg:args) | (voids + 1) | + | otherwise = size_up is arg `addSizeNSD` | + size_up_app is fun (arg:args) | voids | + size_up_app _ (Var fun) args voids = size_up_call fun args | voids | + size_up_app is (Tick _ expr) args voids = size_up_app is expr args | voids | + size_up_app is (Cast expr _) args voids = size_up_app is expr args | voids | + size_up_app is other args voids = size_up is other | `addSizeN` | callSize (length args) voids | -- if the lhs is not an App or a Var, or an invisible thing like | a | -- Tick or Cast, then we should charge for a complete call plus | the @@ -633,7 +647,10 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | _ -> funSize dflags top_args fun (length | val_args) voids | | ------------ | - size_up_alt (_con, _bndrs, rhs) = size_up rhs `addSizeN` 10 | + size_up_alt :: InScopeSet -> Alt Var -> ExprSize | + size_up_alt is (_con, bndrs, rhs) = size_up is' rhs `addSizeN` 10 | + where is' = extendInScopeSetList is bndrs | + | -- Don't charge for args, so that wrappers look cheap | -- (See comments about wrappers with Case) | -- | | 
_______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | commits&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& | sdata=rGeUVlgqjfwCl%2FEdTX3%2BX0mQGX5UcS7bY9qadLT%2FSE4%3D&reserved=0 From matthewtpickering at gmail.com Wed Jan 25 10:38:30 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 25 Jan 2017 10:38:30 +0000 Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free variables (fd9608e) In-Reply-To: References: <20170124172007.A50AC3A300@ghc.haskell.org> Message-ID: I think the motivation was your suggestion in #4960. Matt On Wed, Jan 25, 2017 at 10:11 AM, Simon Peyton Jones via ghc-devs wrote: > Alex > > Interesting. Care to give us any background on what you are working on? > > I've often thought about discounting for free vars. Do you have some compelling examples? > > (Also fine if you just want to noodle privately for now.) 
> > Simon > > | -----Original Message----- > | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of > | git at git.haskell.org > | Sent: 24 January 2017 17:20 > | To: ghc-commits at haskell.org > | Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free > | variables (fd9608e) > | > | Repository : ssh://git at git.haskell.org/ghc > | > | On branch : wip/discount-fv > | Link : > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.haske > | ll.org%2Ftrac%2Fghc%2Fchangeset%2Ffd9608ea93fc2389907b82c3fe540805d986c28 > | e%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 > | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& > | sdata=3%2F1y5zQjDsa5j1%2FhTEjnKc4mg0qNtCD8WyqMaNUq5mA%3D&reserved=0 > | > | >--------------------------------------------------------------- > | > | commit fd9608ea93fc2389907b82c3fe540805d986c28e > | Author: alexbiehl > | Date: Mon Jan 23 20:34:20 2017 +0100 > | > | Discount scrutinized free variables > | > | > | >--------------------------------------------------------------- > | > | fd9608ea93fc2389907b82c3fe540805d986c28e > | compiler/coreSyn/CoreUnfold.hs | 95 +++++++++++++++++++++++++----------- > | ------ > | 1 file changed, 56 insertions(+), 39 deletions(-) > | > | diff --git a/compiler/coreSyn/CoreUnfold.hs > | b/compiler/coreSyn/CoreUnfold.hs index 574d841..36ea382 100644 > | --- a/compiler/coreSyn/CoreUnfold.hs > | +++ b/compiler/coreSyn/CoreUnfold.hs > | @@ -62,8 +62,11 @@ import Bag > | import Util > | import Outputable > | import ForeignCall > | +import VarEnv > | > | +import Control.Applicative ((<|>)) > | import qualified Data.ByteString as BS > | +import Debug.Trace > | > | {- > | ************************************************************************ > | @@ -501,43 +504,51 @@ sizeExpr :: DynFlags > | -- Note [Computing the size of an expression] > | > | sizeExpr dflags bOMB_OUT_SIZE top_args expr > | - = size_up expr > | + = size_up 
emptyInScopeSet expr > | where > | - size_up (Cast e _) = size_up e > | - size_up (Tick _ e) = size_up e > | - size_up (Type _) = sizeZero -- Types cost nothing > | - size_up (Coercion _) = sizeZero > | - size_up (Lit lit) = sizeN (litSize lit) > | - size_up (Var f) | isRealWorldId f = sizeZero > | + size_up :: InScopeSet -> CoreExpr -> ExprSize > | + size_up is (Cast e _) = size_up is e > | + size_up is (Tick _ e) = size_up is e > | + size_up _ (Type _) = sizeZero -- Types cost nothing > | + size_up _ (Coercion _) = sizeZero > | + size_up _ (Lit lit) = sizeN (litSize lit) > | + size_up _ (Var f) | isRealWorldId f = sizeZero > | -- Make sure we get constructor discounts even > | -- on nullary constructors > | - | otherwise = size_up_call f [] 0 > | - > | - size_up (App fun arg) > | - | isTyCoArg arg = size_up fun > | - | otherwise = size_up arg `addSizeNSD` > | - size_up_app fun [arg] (if isRealWorldExpr arg > | then 1 else 0) > | - > | - size_up (Lam b e) > | - | isId b && not (isRealWorldId b) = lamScrutDiscount dflags > | (size_up e `addSizeN` 10) > | - | otherwise = size_up e > | - > | - size_up (Let (NonRec binder rhs) body) > | - = size_up rhs `addSizeNSD` > | - size_up body `addSizeN` > | + | otherwise = size_up_call f [] 0 > | + > | + size_up is (App fun arg) > | + | isTyCoArg arg = size_up is fun > | + | otherwise = size_up is arg `addSizeNSD` > | + size_up_app is fun [arg] (if isRealWorldExpr > | + arg then 1 else 0) > | + > | + size_up is (Lam b e) > | + | isId b && not (isRealWorldId b) = lamScrutDiscount dflags > | (size_up is e `addSizeN` 10) > | + | otherwise = size_up is e > | + > | + size_up is (Let (NonRec binder rhs) body) > | + = let > | + is' = extendInScopeSet is binder > | + in > | + size_up is rhs `addSizeNSD` > | + size_up is' body `addSizeN` > | (if isUnliftedType (idType binder) then 0 else 10) > | -- For the allocation > | -- If the binder has an unlifted type there is no > | allocation > | > | - size_up (Let (Rec pairs) body) > | - = 
foldr (addSizeNSD . size_up . snd) > | - (size_up body `addSizeN` (10 * length pairs)) -- > | (length pairs) for the allocation > | + size_up is (Let (Rec pairs) body) > | + = let > | + is' = extendInScopeSetList is (map fst pairs) > | + in > | + foldr (addSizeNSD . size_up is' . snd) > | + (size_up is' body > | + `addSizeN` (10 * length pairs)) -- (length pairs) > | for the allocation > | pairs > | > | - size_up (Case e _ _ alts) > | - | Just v <- is_top_arg e -- We are scrutinising an argument > | variable > | + size_up is (Case e _ _ alts) > | + | Just v <- is_top_arg e <|> is_free_var e -- We are > | + scrutinising an argument variable or a free variable > | = let > | - alt_sizes = map size_up_alt alts > | + alt_sizes = map (size_up_alt is) alts > | > | -- alts_size tries to compute a good discount for > | -- the case when we are scrutinising an argument > | variable @@ -569,9 +580,12 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr > | is_top_arg (Cast e _) = is_top_arg e > | is_top_arg _ = Nothing > | > | + is_free_var (Var v) | not (v `elemInScopeSet` is) = Just v > | + is_free_var (Cast e _) = is_free_var e > | + is_free_var _ = Nothing > | > | - size_up (Case e _ _ alts) = size_up e `addSizeNSD` > | - foldr (addAltSize . size_up_alt) > | case_size alts > | + size_up is (Case e _ _ alts) = size_up is e `addSizeNSD` > | + foldr (addAltSize . 
size_up_alt is) > | + case_size alts > | where > | case_size > | | is_inline_scrut e, not (lengthExceeds alts 1) = sizeN (- > | 10) @@ -608,15 +622,15 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr > | > | ------------ > | -- size_up_app is used when there's ONE OR MORE value args > | - size_up_app (App fun arg) args voids > | - | isTyCoArg arg = size_up_app fun args voids > | - | isRealWorldExpr arg = size_up_app fun (arg:args) > | (voids + 1) > | - | otherwise = size_up arg `addSizeNSD` > | - size_up_app fun (arg:args) > | voids > | - size_up_app (Var fun) args voids = size_up_call fun args voids > | - size_up_app (Tick _ expr) args voids = size_up_app expr args voids > | - size_up_app (Cast expr _) args voids = size_up_app expr args voids > | - size_up_app other args voids = size_up other `addSizeN` > | + size_up_app is (App fun arg) args voids > | + | isTyCoArg arg = size_up_app is fun args voids > | + | isRealWorldExpr arg = size_up_app is fun (arg:args) > | (voids + 1) > | + | otherwise = size_up is arg `addSizeNSD` > | + size_up_app is fun (arg:args) > | voids > | + size_up_app _ (Var fun) args voids = size_up_call fun args > | voids > | + size_up_app is (Tick _ expr) args voids = size_up_app is expr args > | voids > | + size_up_app is (Cast expr _) args voids = size_up_app is expr args > | voids > | + size_up_app is other args voids = size_up is other > | `addSizeN` > | callSize (length args) voids > | -- if the lhs is not an App or a Var, or an invisible thing like > | a > | -- Tick or Cast, then we should charge for a complete call plus > | the @@ -633,7 +647,10 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr > | _ -> funSize dflags top_args fun (length > | val_args) voids > | > | ------------ > | - size_up_alt (_con, _bndrs, rhs) = size_up rhs `addSizeN` 10 > | + size_up_alt :: InScopeSet -> Alt Var -> ExprSize > | + size_up_alt is (_con, bndrs, rhs) = size_up is' rhs `addSizeN` 10 > | + where is' = extendInScopeSetList is bndrs > | + > | -- Don't charge 
for args, so that wrappers look cheap > | -- (See comments about wrappers with Case) > | -- > | > | _______________________________________________ > | ghc-commits mailing list > | ghc-commits at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask > | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | commits&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 > | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& > | sdata=rGeUVlgqjfwCl%2FEdTX3%2BX0mQGX5UcS7bY9qadLT%2FSE4%3D&reserved=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Jan 25 23:40:11 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 25 Jan 2017 23:40:11 +0000 Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free variables (fd9608e) In-Reply-To: References: <20170124172007.A50AC3A300@ghc.haskell.org> Message-ID: Long story short: learning and experimenting with how GHC works and eventually contributing my findings (if any). OK great! Let us know if you need help. Simon From: Alex Biehl [mailto:alex.biehl at gmail.com] Sent: 25 January 2017 10:32 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: [commit: ghc] wip/discount-fv: Discount scrutinized free variables (fd9608e) I believe it was a false alarm. Unfortunately I could reproduce the reduced allocations even without my patch (I hadn't run `validate` before, so I didn't know at that time). Ben was kind enough to push it to a branch so gipedia could pick it up, but it had no effect either. What leaves me wondering, though, is why the allocations are reduced so drastically for some tests (by ~30% for haddock.cabal and haddock.base, and even ~57% for T9203; cf. https://ghc.haskell.org/trac/ghc/ticket/4960#comment:14) and not for others. I am using `./validate --testsuite-only --fast` (with a perf build GHC).
The reason I did this was that I thought if I reduced `dupAppSize` in `CoreUtils` I could reduce code duplication in `case` expressions where GHC currently duplicates lots of alternatives (I only realized later that `dupAppSize` does not account for `case` expressions at all, so it's probably some case-of-case stuff or something) in some of my code, and I wanted to confirm whether that is actually a good thing. I noticed the ticket and thought that before tackling it I would try my hand at the discount stuff. Long story short: learning and experimenting with how GHC works and eventually contributing my findings (if any). Simon Peyton Jones > wrote on Wed, 25 Jan 2017 at 11:11: Alex Interesting. Care to give us any background on what you are working on? I've often thought about discounting for free vars. Do you have some compelling examples? (Also fine if you just want to noodle privately for now.) Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 24 January 2017 17:20 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/discount-fv: Discount scrutinized free | variables (fd9608e) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/discount-fv | Link : | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fghc.haske | ll.org%2Ftrac%2Fghc%2Fchangeset%2Ffd9608ea93fc2389907b82c3fe540805d986c28 | e%2Fghc&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& | sdata=3%2F1y5zQjDsa5j1%2FhTEjnKc4mg0qNtCD8WyqMaNUq5mA%3D&reserved=0 | | >--------------------------------------------------------------- | | commit fd9608ea93fc2389907b82c3fe540805d986c28e | Author: alexbiehl > | Date: Mon Jan 23 20:34:20 2017 +0100 | | Discount scrutinized free variables | | | >--------------------------------------------------------------- | | fd9608ea93fc2389907b82c3fe540805d986c28e |
compiler/coreSyn/CoreUnfold.hs | 95 +++++++++++++++++++++++++----------- | ------ | 1 file changed, 56 insertions(+), 39 deletions(-) | | diff --git a/compiler/coreSyn/CoreUnfold.hs | b/compiler/coreSyn/CoreUnfold.hs index 574d841..36ea382 100644 | --- a/compiler/coreSyn/CoreUnfold.hs | +++ b/compiler/coreSyn/CoreUnfold.hs | @@ -62,8 +62,11 @@ import Bag | import Util | import Outputable | import ForeignCall | +import VarEnv | | +import Control.Applicative ((<|>)) | import qualified Data.ByteString as BS | +import Debug.Trace | | {- | ************************************************************************ | @@ -501,43 +504,51 @@ sizeExpr :: DynFlags | -- Note [Computing the size of an expression] | | sizeExpr dflags bOMB_OUT_SIZE top_args expr | - = size_up expr | + = size_up emptyInScopeSet expr | where | - size_up (Cast e _) = size_up e | - size_up (Tick _ e) = size_up e | - size_up (Type _) = sizeZero -- Types cost nothing | - size_up (Coercion _) = sizeZero | - size_up (Lit lit) = sizeN (litSize lit) | - size_up (Var f) | isRealWorldId f = sizeZero | + size_up :: InScopeSet -> CoreExpr -> ExprSize | + size_up is (Cast e _) = size_up is e | + size_up is (Tick _ e) = size_up is e | + size_up _ (Type _) = sizeZero -- Types cost nothing | + size_up _ (Coercion _) = sizeZero | + size_up _ (Lit lit) = sizeN (litSize lit) | + size_up _ (Var f) | isRealWorldId f = sizeZero | -- Make sure we get constructor discounts even | -- on nullary constructors | - | otherwise = size_up_call f [] 0 | - | - size_up (App fun arg) | - | isTyCoArg arg = size_up fun | - | otherwise = size_up arg `addSizeNSD` | - size_up_app fun [arg] (if isRealWorldExpr arg | then 1 else 0) | - | - size_up (Lam b e) | - | isId b && not (isRealWorldId b) = lamScrutDiscount dflags | (size_up e `addSizeN` 10) | - | otherwise = size_up e | - | - size_up (Let (NonRec binder rhs) body) | - = size_up rhs `addSizeNSD` | - size_up body `addSizeN` | + | otherwise = size_up_call f [] 0 | + | + size_up is (App 
fun arg) | + | isTyCoArg arg = size_up is fun | + | otherwise = size_up is arg `addSizeNSD` | + size_up_app is fun [arg] (if isRealWorldExpr | + arg then 1 else 0) | + | + size_up is (Lam b e) | + | isId b && not (isRealWorldId b) = lamScrutDiscount dflags | (size_up is e `addSizeN` 10) | + | otherwise = size_up is e | + | + size_up is (Let (NonRec binder rhs) body) | + = let | + is' = extendInScopeSet is binder | + in | + size_up is rhs `addSizeNSD` | + size_up is' body `addSizeN` | (if isUnliftedType (idType binder) then 0 else 10) | -- For the allocation | -- If the binder has an unlifted type there is no | allocation | | - size_up (Let (Rec pairs) body) | - = foldr (addSizeNSD . size_up . snd) | - (size_up body `addSizeN` (10 * length pairs)) -- | (length pairs) for the allocation | + size_up is (Let (Rec pairs) body) | + = let | + is' = extendInScopeSetList is (map fst pairs) | + in | + foldr (addSizeNSD . size_up is' . snd) | + (size_up is' body | + `addSizeN` (10 * length pairs)) -- (length pairs) | for the allocation | pairs | | - size_up (Case e _ _ alts) | - | Just v <- is_top_arg e -- We are scrutinising an argument | variable | + size_up is (Case e _ _ alts) | + | Just v <- is_top_arg e <|> is_free_var e -- We are | + scrutinising an argument variable or a free variable | = let | - alt_sizes = map size_up_alt alts | + alt_sizes = map (size_up_alt is) alts | | -- alts_size tries to compute a good discount for | -- the case when we are scrutinising an argument | variable @@ -569,9 +580,12 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | is_top_arg (Cast e _) = is_top_arg e | is_top_arg _ = Nothing | | + is_free_var (Var v) | not (v `elemInScopeSet` is) = Just v | + is_free_var (Cast e _) = is_free_var e | + is_free_var _ = Nothing | | - size_up (Case e _ _ alts) = size_up e `addSizeNSD` | - foldr (addAltSize . size_up_alt) | case_size alts | + size_up is (Case e _ _ alts) = size_up is e `addSizeNSD` | + foldr (addAltSize . 
size_up_alt is) | + case_size alts | where | case_size | | is_inline_scrut e, not (lengthExceeds alts 1) = sizeN (- | 10) @@ -608,15 +622,15 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | | ------------ | -- size_up_app is used when there's ONE OR MORE value args | - size_up_app (App fun arg) args voids | - | isTyCoArg arg = size_up_app fun args voids | - | isRealWorldExpr arg = size_up_app fun (arg:args) | (voids + 1) | - | otherwise = size_up arg `addSizeNSD` | - size_up_app fun (arg:args) | voids | - size_up_app (Var fun) args voids = size_up_call fun args voids | - size_up_app (Tick _ expr) args voids = size_up_app expr args voids | - size_up_app (Cast expr _) args voids = size_up_app expr args voids | - size_up_app other args voids = size_up other `addSizeN` | + size_up_app is (App fun arg) args voids | + | isTyCoArg arg = size_up_app is fun args voids | + | isRealWorldExpr arg = size_up_app is fun (arg:args) | (voids + 1) | + | otherwise = size_up is arg `addSizeNSD` | + size_up_app is fun (arg:args) | voids | + size_up_app _ (Var fun) args voids = size_up_call fun args | voids | + size_up_app is (Tick _ expr) args voids = size_up_app is expr args | voids | + size_up_app is (Cast expr _) args voids = size_up_app is expr args | voids | + size_up_app is other args voids = size_up is other | `addSizeN` | callSize (length args) voids | -- if the lhs is not an App or a Var, or an invisible thing like | a | -- Tick or Cast, then we should charge for a complete call plus | the @@ -633,7 +647,10 @@ sizeExpr dflags bOMB_OUT_SIZE top_args expr | _ -> funSize dflags top_args fun (length | val_args) voids | | ------------ | - size_up_alt (_con, _bndrs, rhs) = size_up rhs `addSizeN` 10 | + size_up_alt :: InScopeSet -> Alt Var -> ExprSize | + size_up_alt is (_con, bndrs, rhs) = size_up is' rhs `addSizeN` 10 | + where is' = extendInScopeSetList is bndrs | + | -- Don't charge for args, so that wrappers look cheap | -- (See comments about wrappers with Case) | -- | | 
_______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | commits&data=02%7C01%7Csimonpj%40microsoft.com%7C6b18dd9581bc459c203b08d4 | 447d482c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636208752257772884& | sdata=rGeUVlgqjfwCl%2FEdTX3%2BX0mQGX5UcS7bY9qadLT%2FSE4%3D&reserved=0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mhitza at gmail.com Thu Jan 26 10:22:26 2017 From: mhitza at gmail.com (Marius Ghita) Date: Thu, 26 Jan 2017 12:22:26 +0200 Subject: Cannot build GHC using the Newcomers guide Message-ID: Following is a list of steps that I ran and their output linked: - clone repo https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b - build.mk configuration https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f - configure https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 - make FAILS https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c I'm using a 'stack-ghc' executable, that's only a shell wrapper to run ghc from stack (since I don't have a globally installed ghc) (source https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), and I also have 'ghc-pkg' wrapped in the same way with a stack-ghc-pkg script (source https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) -- Google+: https://plus.google.com/111881868112036203454 -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Jan 26 11:05:24 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 26 Jan 2017 11:05:24 +0000 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: Can you try with the "--no-ghc-package-path" option in your wrapper?
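For concreteness, the suggested wrapper change might look like the sketch below. The original gist's contents aren't reproduced in this thread, so the script body and the placement of the flag are assumptions; only the option name comes from the suggestion above.

```shell
#!/bin/sh
# Hypothetical stack-ghc wrapper with --no-ghc-package-path applied.
# The flag asks stack not to export GHC_PACKAGE_PATH into the child
# process's environment, which is what trips up the GHC build.
exec stack --no-ghc-package-path exec -- ghc "$@"
```

An equivalent stack-ghc-pkg wrapper would do the same with ghc-pkg in place of ghc.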
Matt On Thu, Jan 26, 2017 at 10:22 AM, Marius Ghita wrote: > Following is a list of steps that I ran and their output linked: > > - clone repo > https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b > - build.mk configuration > https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 > - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f > - configure https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 > - make FAILS > https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c > > I'm using a 'stack-ghc' executable, that's only a shell wrapper to run ghc > from stack (since I don't have a globally installed ghc) (source > https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), and I > also have 'ghc-pkg' wrapped in the same way with > a stack-ghc-pkg script (source > https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) > > -- > Google+: https://plus.google.com/111881868112036203454 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From harendra.kumar at gmail.com Thu Jan 26 14:21:17 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Thu, 26 Jan 2017 19:51:17 +0530 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: I use "export PATH=`stack path --bin-path`" to make the stack installed ghc available in the PATH before building ghc. And that's all. Setting the PATH works better because we do not get any extra env variables set by stack in the environment and we do not go through the stack wrapper, so it may be a little bit faster as well. The GHC_PACKAGE_PATH variable set by the stack command is especially troublesome in some cases. You can try "stack exec env" to check all vars that stack puts in your environment. 
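As a minimal sketch, the setup described above amounts to the following (it assumes a stack-managed GHC is already installed; the bin path printed by stack differs per machine):

```shell
# Put the stack-managed toolchain directly on PATH once, instead of
# wrapping each tool in a script that goes through `stack exec`:
export PATH=`stack path --bin-path`

# Unlike `stack exec`, this leaves GHC_PACKAGE_PATH unset, so ghc and
# ghc-pkg behave as a plain installation would. To see what `stack exec`
# would have injected into the environment:
stack exec env | grep GHC_PACKAGE_PATH
```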
-harendra On 26 January 2017 at 15:52, Marius Ghita wrote: > Following is a list of steps that I ran and their output linked: > > - clone repo https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad6 > 20b > - build.mk configuration https://gist.github.com/mhitza > /2d979c64a646bdd3e097f65fd650c675 > - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f > - configure https://gist.github.com/mhitza/88c09179be3bb82024192bf6181ae > f13 > - make FAILS https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266 > a2c > > I'm using a 'stack-ghc' executable, that's only a shell wrapper to run ghc > from stack (since I don't have a globally installed ghc) (source > https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), and I > also have 'ghc-pkg' wrapped in the same way with > a stack-ghc-pkg script (source https://gist.github.com/mhitza > /6c2b1978ef802707161041abe1d2699e ) > > -- > Google+: https://plus.google.com/111881868112036203454 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Jan 26 17:18:14 2017 From: lonetiger at gmail.com (Phyx) Date: Thu, 26 Jan 2017 17:18:14 +0000 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: Can I ask a silly question. I can't seem to find where stack is recommended for ghc development on the newcomers page, but why is it? I don't want to start another flame war but I can't imagine any scenario where this is useful. As far as I understand the whole benefit of stack is the curated packages. Which are moot here since almost everything you need is in the tree aside from Happy and Alex. Seems to me this is just overcomplicating a very simple process. 
Not to mention, if you have to go through stack exec and the like for all interactions with the build artifacts, it would get old quickly. Also it doesn't seem reliable, especially if stack is modifying the environment and/or flags passed to the compiler. Let me reiterate, I have nothing against stack, I just don't see the benefits here. Ideally you'd want your environment as simple and vanilla as possible and *totally* in your control IMHO. What am I missing here? Thanks, Tamar On Thu, 26 Jan 2017, 14:21 Harendra Kumar, wrote: > I use "export PATH=`stack path --bin-path`" to make the stack installed > ghc available in the PATH before building ghc. And that's all. > > Setting the PATH works better because we do not get any extra env > variables set by stack in the environment and we do not go through the > stack wrapper, so it may be a little bit faster as well. The > GHC_PACKAGE_PATH variable set by the stack command is especially > troublesome in some cases. You can try "stack exec env" to check all vars > that stack puts in your environment.
> > -harendra > > On 26 January 2017 at 15:52, Marius Ghita wrote: > > Following is a list of steps that I ran and their output linked: > > - clone repo > https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b > - build.mk configuration > https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 > - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f > - configure > https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 > - make FAILS > https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c > > I'm using a 'stack-ghc' executable, that's only a shell wrapper to run ghc > from stack (since I don't have a globally installed ghc) (source > https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), and I > also have 'ghc-pkg' wrapped in the same way with > a stack-ghc-pkg script (source > https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) > > -- > Google+: https://plus.google.com/111881868112036203454 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Jan 26 17:28:52 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 26 Jan 2017 17:28:52 +0000 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: I think the intention is that if you are already using stack to manage your GHC installations then it is desirable to use their managed version of GHC rather than have to install another version. It seems to me that the solution that Harendra suggests is the easiest for anyone with this setup. 
Fwiw, the easiest way I found to set up a clean development environment for GHC was to use the nix ghcHEAD derivation: nix-shell '<nixpkgs>' -A haskell.compiler.ghcHEAD but of course, this only works if you are using nix! Matt On Thu, Jan 26, 2017 at 5:18 PM, Phyx wrote: > Can I ask a silly question. I can't seem to find where stack is recommended > for ghc development on the newcomers page, but why is it? I don't want to > start another flame war but I can't imagine any scenario where this is > useful. As far as I understand the whole benefit of stack is the curated > packages. > > Which are moot here since almost everything you need is in the tree aside > from Happy and Alex. Seems to me this is just overcomplicating a very simple > process. > > Not to mention if you have to go through stack - - exec etc all for > interactions with the build artifacts it would get old quickly. Also it > doesn't seem reliable especially if stack is modifying the environment and > or flags passed to the compiler. > > Let me reiterate, I have nothing against stack, I just don't see the > benefits here. Ideally you'd want your environment as simple and vanilla as > possible and *totally* in your control IMHO. > > What am I missing here? > > Thanks, > Tamar > > > On Thu, 26 Jan 2017, 14:21 Harendra Kumar, > wrote: >> >> I use "export PATH=`stack path --bin-path`" to make the stack installed >> ghc available in the PATH before building ghc. And that's all. >> >> Setting the PATH works better because we do not get any extra env >> variables set by stack in the environment and we do not go through the stack >> wrapper, so it may be a little bit faster as well. The GHC_PACKAGE_PATH >> variable set by the stack command is especially troublesome in some cases. >> You can try "stack exec env" to check all vars that stack puts in your >> environment.
>> >> -harendra >> >> On 26 January 2017 at 15:52, Marius Ghita wrote: >>> >>> Following is a list of steps that I ran and their output linked: >>> >>> - clone repo >>> https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b >>> - build.mk configuration >>> https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 >>> - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f >>> - configure >>> https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 >>> - make FAILS >>> https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c >>> >>> I'm using a 'stack-ghc' executable, that's only a shell wrapper to run >>> ghc from stack (since I don't have a globally installed ghc) (source >>> https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), and I >>> also have 'ghc-pkg' wrapped in the same way with >>> a stack-ghc-pkg script (source >>> https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) >>> >>> -- >>> Google+: https://plus.google.com/111881868112036203454 >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From lonetiger at gmail.com Thu Jan 26 17:38:39 2017 From: lonetiger at gmail.com (Phyx) Date: Thu, 26 Jan 2017 17:38:39 +0000 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: But do you really want to do this? It seems to me you don't want to keep using your stage0 while working on ghc. As you don't want to break it and spend hours wondering why your build failed. Fair enough, if people want to do this. 
So long as it's not the defacto method. On Thu, 26 Jan 2017, 17:28 Matthew Pickering, wrote: > I think the intention is that if you are already using stack to manage > your GHC installations then it is desirable to use their managed > version of GHC rather than have to install another version. > > It seems to me that the solution that Harendra suggests is the easiest > for anyone with this setup. > > Fwiw, the easiest way I found to setup a clean development environment > for GHC was to use the nix ghcHEAD derviation. > > nix-shell '' -A haskell.compiler.ghcHEAD > > but of course, this only works if you are using nix! > > Matt > > On Thu, Jan 26, 2017 at 5:18 PM, Phyx wrote: > > Can I ask a silly question. I can't seem to find where stack is > recommended > > for ghc development on the newcomers page, but why is it? I don't want to > > start another flame war but I can't imagine any scenario where this is > > useful. As far as I understand the whole benefit of stack is the curated > > packages. > > > > Which are moot here since almost everything you need is in the tree aside > > from Happy and Alex. Seems to me this is just overcomplicating a very > simple > > process. > > > > Not to mention if you have to go through stack - - exec etc all for > > interactions with the build artifacts it would get old quickly. Also it > > doesn't seem reliable especially if stack is modifying the environment > and > > or flags passed to the compiler. > > > > Let me reiterate, I have nothing against stack, I just don't see the > > benefits here. Ideally you'd want your environment as simple and vanilla > as > > possible and *totally* in your control IMHO. > > > > What am I missing here? > > > > Thanks, > > Tamar > > > > > > On Thu, 26 Jan 2017, 14:21 Harendra Kumar, > wrote: > >> > >> I use "export PATH=`stack path --bin-path`" to make the stack installed > >> ghc available in the PATH before building ghc. And that's all. 
> >> > >> Setting the PATH works better because we do not get any extra env > >> variables set by stack in the environment and we do not go through the > stack > >> wrapper, so it may be a little bit faster as well. The GHC_PACKAGE_PATH > >> variable set by the stack command is especially troublesome in some > cases. > >> You can try "stack exec env" to check all vars that stack puts in your > >> environment. > >> > >> -harendra > >> > >> On 26 January 2017 at 15:52, Marius Ghita wrote: > >>> > >>> Following is a list of steps that I ran and their output linked: > >>> > >>> - clone repo > >>> https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b > >>> - build.mk configuration > >>> https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 > >>> - boot > https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf3f > >>> - configure > >>> https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 > >>> - make FAILS > >>> https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c > >>> > >>> I'm using a 'stack-ghc' executable, that's only a shell wrapper to run > >>> ghc from stack (since I don't have a globally installed ghc) (source > >>> https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), > and I > >>> also have 'ghc-pkg' wrapped in the same way with > >>> a stack-ghc-pkg script (source > >>> https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) > >>> > >>> -- > >>> Google+: https://plus.google.com/111881868112036203454 > >>> > >>> _______________________________________________ > >>> ghc-devs mailing list > >>> ghc-devs at haskell.org > >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >>> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > 
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Jan 26 18:05:30 2017 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 26 Jan 2017 20:05:30 +0200 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: FWIW, I use the docker image, as per https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Linux#Docker, where I have the invocation in a one-line script Alan On 26 January 2017 at 19:38, Phyx wrote: > But do you really want to do this? > > It seems to me you don't want to keep using your stage0 while working on > ghc. As you don't want to break it and spend hours wondering why your build > failed. > > Fair enough, if people want to do this. So long as it's not the defacto > method. > > On Thu, 26 Jan 2017, 17:28 Matthew Pickering, > wrote: > >> I think the intention is that if you are already using stack to manage >> your GHC installations then it is desirable to use their managed >> version of GHC rather than have to install another version. >> >> It seems to me that the solution that Harendra suggests is the easiest >> for anyone with this setup. >> >> Fwiw, the easiest way I found to setup a clean development environment >> for GHC was to use the nix ghcHEAD derviation. >> >> nix-shell '' -A haskell.compiler.ghcHEAD >> >> but of course, this only works if you are using nix! >> >> Matt >> >> On Thu, Jan 26, 2017 at 5:18 PM, Phyx wrote: >> > Can I ask a silly question. I can't seem to find where stack is >> recommended >> > for ghc development on the newcomers page, but why is it? I don't want >> to >> > start another flame war but I can't imagine any scenario where this is >> > useful. As far as I understand the whole benefit of stack is the curated >> > packages. >> > >> > Which are moot here since almost everything you need is in the tree >> aside >> > from Happy and Alex. 
Seems to me this is just overcomplicating a very >> simple >> > process. >> > >> > Not to mention if you have to go through stack - - exec etc all for >> > interactions with the build artifacts it would get old quickly. Also it >> > doesn't seem reliable especially if stack is modifying the environment >> and >> > or flags passed to the compiler. >> > >> > Let me reiterate, I have nothing against stack, I just don't see the >> > benefits here. Ideally you'd want your environment as simple and >> vanilla as >> > possible and *totally* in your control IMHO. >> > >> > What am I missing here? >> > >> > Thanks, >> > Tamar >> > >> > >> > On Thu, 26 Jan 2017, 14:21 Harendra Kumar, >> wrote: >> >> >> >> I use "export PATH=`stack path --bin-path`" to make the stack installed >> >> ghc available in the PATH before building ghc. And that's all. >> >> >> >> Setting the PATH works better because we do not get any extra env >> >> variables set by stack in the environment and we do not go through the >> stack >> >> wrapper, so it may be a little bit faster as well. The GHC_PACKAGE_PATH >> >> variable set by the stack command is especially troublesome in some >> cases. >> >> You can try "stack exec env" to check all vars that stack puts in your >> >> environment. 
>> >> >> >> -harendra >> >> >> >> On 26 January 2017 at 15:52, Marius Ghita wrote: >> >>> >> >>> Following is a list of steps that I ran and their output linked: >> >>> >> >>> - clone repo >> >>> https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b >> >>> - build.mk configuration >> >>> https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 >> >>> - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165bf >> 3f >> >>> - configure >> >>> https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 >> >>> - make FAILS >> >>> https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c >> >>> >> >>> I'm using a 'stack-ghc' executable, that's only a shell wrapper to run >> >>> ghc from stack (since I don't have a globally installed ghc) (source >> >>> https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), >> and I >> >>> also have 'ghc-pkg' wrapped in the same way with >> >>> a stack-ghc-pkg script (source >> >>> https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) >> >>> >> >>> -- >> >>> Google+: https://plus.google.com/111881868112036203454 >> >>> >> >>> _______________________________________________ >> >>> ghc-devs mailing list >> >>> ghc-devs at haskell.org >> >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >>> >> >> >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iavor.diatchki at gmail.com Thu Jan 26 21:35:39 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 26 Jan 2017 13:35:39 -0800 Subject: Problems with `arc` and `phabricator` Message-ID: Hello, I just fixed a small bug in GHC (#11406) and I'm trying to push the change to `phabricator` for review, but the command is failing. Here is my output, could you please advise on what to do? > arc diff Linting... LINT OKAY No lint problems. Running unit tests... No unit test engine is configured for this project. PUSH STAGING Pushing changes to staging area... Permission denied (publickey,keyboard-interactive). fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. STAGING FAILED Unable to push changes to the staging area. Usage Exception: Failed to push changes to staging area. Correct the issue, or use --skip-staging to skip this step. Clearly, there is some sort of permission issue... -Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Thu Jan 26 21:39:52 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Thu, 26 Jan 2017 21:39:52 +0000 Subject: Problems with `arc` and `phabricator` In-Reply-To: References: Message-ID: Hello Iavor, You need to upload your ssh key to phabricator. You can do it here, https://phabricator.haskell.org/settings/user/yav/page/ssh/ If you need any more help with phab then feel free to ping me on IRC or by email. Matt On Thu, Jan 26, 2017 at 9:35 PM, Iavor Diatchki wrote: > Hello, > > I just fixed a small bug in GHC (#11406) and I'm trying to push the change > to `phabricator` for review, but the command is failing. Here is my output, > could you please advise on what to do? > >> arc diff > Linting... > LINT OKAY No lint problems. > Running unit tests... > No unit test engine is configured for this project. > PUSH STAGING Pushing changes to staging area...
> Permission denied (publickey,keyboard-interactive). > fatal: Could not read from remote repository. > > Please make sure you have the correct access rights > and the repository exists. > STAGING FAILED Unable to push changes to the staging area. > Usage Exception: Failed to push changes to staging area. Correct the issue, > or use --skip-staging to skip this step. > > Clearly, there is some sort of permission issue... > > -Iavor > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From iavor.diatchki at gmail.com Thu Jan 26 21:42:45 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 26 Jan 2017 13:42:45 -0800 Subject: Problems with `arc` and `phabricator` In-Reply-To: References: Message-ID: Ah good, that worked. I've pushed before, I wonder what happened to my key? On Thu, Jan 26, 2017 at 1:39 PM, Matthew Pickering < matthewtpickering at gmail.com> wrote: > Hello Iavor, > > You need to upload you ssh key to phabricator. > > You can do it here, > > https://phabricator.haskell.org/settings/user/yav/page/ssh/ > > If you need any more help with phab then feel free to ping me on IRC > or by email. > > Matt > > > On Thu, Jan 26, 2017 at 9:35 PM, Iavor Diatchki > wrote: > > Hello, > > > > I just fixed a small bug in GHC (#11406) and I'm trying to push the > change > > to `phabricator` for review, but the command is failing. Here is my > output, > > could you please advise on what to do? > > > >> arc diff > > Linting... > > LINT OKAY No lint problems. > > Running unit tests... > > No unit test engine is configured for this project. > > PUSH STAGING Pushing changes to staging area... > > Permission denied (publickey,keyboard-interactive). > > fatal: Could not read from remote repository. > > > > Please make sure you have the correct access rights > > and the repository exists. > > STAGING FAILED Unable to push changes to the staging area. 
> > Usage Exception: Failed to push changes to staging area. Correct the > issue, > > or use --skip-staging to skip this step. > > > > Clearly, there is some sort of permission issue... > > > > -Iavor > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Jan 26 22:29:34 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 26 Jan 2017 17:29:34 -0500 Subject: Problems with `arc` and `phabricator` In-Reply-To: References: Message-ID: <877f5ho3ox.fsf@ben-laptop.smart-cactus.org> Iavor Diatchki writes: > Ah good, that worked. I've pushed before, I wonder what happened to my key? > It's quite possible you never had one. A key has only been mandatory for uploads for a few months now. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mhitza at gmail.com Fri Jan 27 10:11:04 2017 From: mhitza at gmail.com (Marius Ghita) Date: Fri, 27 Jan 2017 12:11:04 +0200 Subject: Cannot build GHC using the Newcomers guide In-Reply-To: References: Message-ID: Thank you all for the feedback, I went with Harendra's solution and that worked fine (and removed any shell wrapping I had to do). To answer Phyx, using stack has become more convenient for managing ghc than using the packages my distro offers (which always falls behind the latest version of GHC by a couple of months), or the alternative of just downloading binaries and managing that process manually. 
There might also be a misunderstanding given your first post, because I only use stack to provide those ghc (ghc-pkg, alex, happy) binaries, and I'm not managing the ghc source as a stack package; and what that means is that I don't have to run `stack exec --` to interact with build artifacts. On Thu, Jan 26, 2017 at 8:05 PM, Alan & Kim Zimmerman wrote: > FWIW, I use the docker image, as per https://ghc.haskell.org/trac/ > ghc/wiki/Building/Preparation/Linux#Docker, where I have the invocation > in a one-line script > > Alan > > On 26 January 2017 at 19:38, Phyx wrote: > >> But do you really want to do this? >> >> It seems to me you don't want to keep using your stage0 while working on >> ghc. As you don't want to break it and spend hours wondering why your build >> failed. >> >> Fair enough, if people want to do this. So long as it's not the defacto >> method. >> >> On Thu, 26 Jan 2017, 17:28 Matthew Pickering, < >> matthewtpickering at gmail.com> wrote: >> >>> I think the intention is that if you are already using stack to manage >>> your GHC installations then it is desirable to use their managed >>> version of GHC rather than have to install another version. >>> >>> It seems to me that the solution that Harendra suggests is the easiest >>> for anyone with this setup. >>> >>> Fwiw, the easiest way I found to setup a clean development environment >>> for GHC was to use the nix ghcHEAD derviation. >>> >>> nix-shell '' -A haskell.compiler.ghcHEAD >>> >>> but of course, this only works if you are using nix! >>> >>> Matt >>> >>> On Thu, Jan 26, 2017 at 5:18 PM, Phyx wrote: >>> > Can I ask a silly question. I can't seem to find where stack is >>> recommended >>> > for ghc development on the newcomers page, but why is it? I don't want >>> to >>> > start another flame war but I can't imagine any scenario where this is >>> > useful. As far as I understand the whole benefit of stack is the >>> curated >>> > packages. 
>>> > >>> > Which are moot here since almost everything you need is in the tree >>> aside >>> > from Happy and Alex. Seems to me this is just overcomplicating a very >>> simple >>> > process. >>> > >>> > Not to mention if you have to go through stack - - exec etc all for >>> > interactions with the build artifacts it would get old quickly. Also it >>> > doesn't seem reliable especially if stack is modifying the environment >>> and >>> > or flags passed to the compiler. >>> > >>> > Let me reiterate, I have nothing against stack, I just don't see the >>> > benefits here. Ideally you'd want your environment as simple and >>> vanilla as >>> > possible and *totally* in your control IMHO. >>> > >>> > What am I missing here? >>> > >>> > Thanks, >>> > Tamar >>> > >>> > >>> > On Thu, 26 Jan 2017, 14:21 Harendra Kumar, >>> wrote: >>> >> >>> >> I use "export PATH=`stack path --bin-path`" to make the stack >>> installed >>> >> ghc available in the PATH before building ghc. And that's all. >>> >> >>> >> Setting the PATH works better because we do not get any extra env >>> >> variables set by stack in the environment and we do not go through >>> the stack >>> >> wrapper, so it may be a little bit faster as well. The >>> GHC_PACKAGE_PATH >>> >> variable set by the stack command is especially troublesome in some >>> cases. >>> >> You can try "stack exec env" to check all vars that stack puts in your >>> >> environment. 
>>> >> >>> >> -harendra >>> >> >>> >> On 26 January 2017 at 15:52, Marius Ghita wrote: >>> >>> >>> >>> Following is a list of steps that I ran and their output linked: >>> >>> >>> >>> - clone repo >>> >>> https://gist.github.com/mhitza/f5d4516b6c8386fe8e064f95b5ad620b >>> >>> - build.mk configuration >>> >>> https://gist.github.com/mhitza/2d979c64a646bdd3e097f65fd650c675 >>> >>> - boot https://gist.github.com/mhitza/e23df8b9ed2aac5b1b8881c70165b >>> f3f >>> >>> - configure >>> >>> https://gist.github.com/mhitza/88c09179be3bb82024192bf6181aef13 >>> >>> - make FAILS >>> >>> https://gist.github.com/mhitza/95738bf49c8c87ce46c9319b4c266a2c >>> >>> >>> >>> I'm using a 'stack-ghc' executable, that's only a shell wrapper to >>> run >>> >>> ghc from stack (since I don't have a globally installed ghc) (source >>> >>> https://gist.github.com/mhitza/38fe96fb440daab28e57a50de47863d5 ), >>> and I >>> >>> also have 'ghc-pkg' wrapped in the same way with >>> >>> a stack-ghc-pkg script (source >>> >>> https://gist.github.com/mhitza/6c2b1978ef802707161041abe1d2699e ) >>> >>> >>> >>> -- >>> >>> Google+: https://plus.google.com/111881868112036203454 >>> >>> >>> >>> _______________________________________________ >>> >>> ghc-devs mailing list >>> >>> ghc-devs at haskell.org >>> >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >>> >> >>> >> _______________________________________________ >>> >> ghc-devs mailing list >>> >> ghc-devs at haskell.org >>> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > >>> > >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at 
haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- Google+: https://plus.google.com/111881868112036203454 -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Fri Jan 27 16:42:16 2017 From: david.feuer at gmail.com (David Feuer) Date: Fri, 27 Jan 2017 11:42:16 -0500 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: References: Message-ID: GHC's inliner has a notion of "interesting argument" it uses to encourage inlining of functions called with (I think) dictionary arguments. I think another class of argument is very interesting, by being very boring. Any argument that looks like either `\ _ ... (Con _ ... x ... _ ) ... _ -> coerce x` or `\ _ ... _ -> k` has a pretty good chance of doing a lot of good when inlined, perhaps plugging a space leak. Would it make sense to try to identify such functions and consider them interesting for inlining? -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sat Jan 28 12:24:22 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 28 Jan 2017 21:24:22 +0900 Subject: simple pictures about GHC development flow In-Reply-To: References: <1477660775.1146.5.camel@joachim-breitner.de> <87r3702zxs.fsf@ben-laptop.smart-cactus.org> <1477763970.1364.3.camel@joachim-breitner.de> <87h97on49z.fsf@ben-laptop.smart-cactus.org> Message-ID: Dear devs, I updated diagrams about "Github PR", especially page 11: GHC development flow http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf https://github.com/takenobu-hs/ghc-development-flow Please teach me if I have misunderstood. Regards, Takenobu 2016-11-04 7:36 GMT+09:00 Takenobu Tani : > Hi Ben, > > Thank you so much :) > After I remove "DRAFT" watermark, I'll introduce this PDF to cafe.
> > Regards, > Takenobu > > > 2016-11-03 21:20 GMT+09:00 Ben Gamari : > >> Takenobu Tani writes: >> >> > Hi devs, >> > >> > 2016-10-31 20:02 GMT+09:00 Takenobu Tani : >> > >> >> > Also it might be good to add something about the process of fixing >> doc >> >> "bugs" and improving the doc. >> >> > >> >> > I think these are areas where less experienced Haskell developers can >> >> add value and contribute to the >> >> > ghc community. >> >> >> >> Indeed. It's good :) >> >> Update of documents is easy to contribute by new contributors. >> >> I'll understand the document process, then I'll try to draw the >> diagram. >> >> >> > >> > >> > I updated the following: >> > * page 11: add a link for test case >> > * page 12-15: add document flow >> > >> > Here is Rev.2016-Nov-03: >> > GHC development flow >> > http://takenobu-hs.github.io/downloads/ghc_development_flow.pdf >> > https://github.com/takenobu-hs/ghc-development-flow >> > >> > Please teach me if I have misunderstood, especially page 12-15. >> > >> Thanks, this looks great Takenobu! >> >> Cheers, >> >> - Ben >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sat Jan 28 12:58:48 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 28 Jan 2017 21:58:48 +0900 Subject: quick building the user's guide Message-ID: Dear devs, Can We build the user's guide without building the ghc binary? If so, new contributors can easily check the generated html. When I execute the following command, a binary build is always executed. # vi mk/build.mk (BuildFlavour = quick; BUILD_SPHINX_HTML = YES) # ./boot # ./configure # cd docs/users_guide/ # make html Is there a way to skip binary builds? Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Sat Jan 28 23:56:49 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 28 Jan 2017 18:56:49 -0500 Subject: quick building the user's guide In-Reply-To: References: Message-ID: <87y3xulovy.fsf@ben-laptop.smart-cactus.org> Takenobu Tani writes: > Dear devs, > > Can We build the user's guide without building the ghc binary? > If so, new contributors can easily check the generated html. > Sadly no. I would like to make this possible, but currently we rely on the stage1 ghc library to extract information about flags. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From takenobu.hs at gmail.com Sun Jan 29 02:41:11 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 29 Jan 2017 11:41:11 +0900 Subject: quick building the user's guide In-Reply-To: <87y3xulovy.fsf@ben-laptop.smart-cactus.org> References: <87y3xulovy.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, I see, it is necessary to generate flags. (docs/users_guide/flags-*.rst) Thank you very much for the explanation. Regards, Takenobu 2017-01-29 8:56 GMT+09:00 Ben Gamari : > Takenobu Tani writes: > > > Dear devs, > > > > Can We build the user's guide without building the ghc binary? > > If so, new contributors can easily check the generated html. > > > Sadly no. I would like to make this possible, but currently we rely on > the stage1 ghc library to extract information about flags. > > Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sun Jan 29 23:35:57 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 29 Jan 2017 23:35:57 +0000 Subject: Next steps of the trac-to-maniphest migration? 
In-Reply-To: References: Message-ID: I discovered today that it is now possible to create custom dashboards for a project and make it the default view. This could be useful for projects which have a lot of associated queries or we want some more fine grained control about how the page looks. Matt On Tue, Jan 24, 2017 at 6:12 PM, Simon Marlow wrote: > On 24 January 2017 at 14:09, Matthew Pickering > wrote: >> >> On Tue, Jan 24, 2017 at 1:26 PM, Simon Marlow wrote: >> >> > Can we have custom fields with Maniphest? I like the rich metadata we >> > have >> > with OS / Architecture / Component / Failure types. It's true that we >> > don't >> > use it consistently, but at least when we do use it there's an obvious >> > and >> > standard way to do it. When I search for RTS bugs I know that at least >> > all >> > the bugs I'm seeing are RTS bugs, even if I'm not seeing all the RTS >> > bugs. >> > People responsible for particular architectures can keep their metadata >> > up >> > to date to make it easier to manage their ticket lists. >> >> There was a long discussion about this on the original thread with >> people echoing this sentiment. I am of the opinion that projects would >> be a better fit as >> >> 1. They integrate better with the rest of phabricator >> 2. They are not relevant to every ticket. There are tickets about >> infrastructure matters for which the concept of OS is irrelevant for >> example. >> >> I like to think of projects as structured unstructured metadata. >> The structure is that you >> can group different project tags together as subprojects of a parent >> project but adding projects to a ticket is unstructured. >> This is how "architecture" is implemented currently - >> >> http://ec2-52-214-147-146.eu-west-1.compute.amazonaws.com/project/view/101/ >> On trac, keywords are not very useful as they are completely >> unstructured and not discoverable. I think projects greatly improve on >> this. 
> > > I think the problem here is that it's not obvious which projects should be > added to tickets. As a ticket submitter, if I have metadata I'm not likely > to add it, and as developers we'll probably forget which fields we could > add. > > Yes, Trac keywords are even more useless. But we don't generally use > keywords; the point here is about the other metadata fields (OS, > Architecture, etc.). Just having some text on the ticket creation page to > suggest adding OS / Architecture would help a lot. > > Cheers > Simon > From david at well-typed.com Mon Jan 30 16:18:29 2017 From: david at well-typed.com (David Feuer) Date: Mon, 30 Jan 2017 11:18:29 -0500 Subject: Lazy ST vs concurrency Message-ID: <1721492.EdgZeEtm4b@squirrel> I forgot to CC ghc-devs the first time, so here's another copy. I was working on #11760 this weekend, which has to do with concurrency breaking lazy ST. I came up with what I thought was a pretty decent solution ( https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite unhappy about the idea of sticking this weird unsafePerformIO-like code (noDup, which I originally implemented as (unsafePerformIO . evaluate), but which he finds ugly regardless of the details) into fmap and (>>=). He's also concerned that the noDuplicate# applications will kill performance in the multi-threaded case, and suggests he would rather leave lazy ST broken, or even remove it altogether, than use a fix that will make it slow sometimes, particularly since there haven't been a lot of reports of problems in the wild. My view is that leaving it broken, even if it only causes trouble occasionally, is simply not an option. If users can't rely on it to always give correct answers, then it's effectively useless. And for the sake of backwards compatibility, I think it's a lot better to keep it around, even if it runs slowly multithreaded, than to remove it altogether. Note to Simon PJ: Yes, it's ugly to stick that noDup in there. 
But lazy ST has always been a bit of deep magic. You can't *really* carry a moment of time around in your pocket and make its history happen only if necessary. We can make it work in GHC because its execution model is entirely based around graph reduction, so evaluation is capable of driving execution. Whereas lazy IO is extremely tricky because it causes effects observable in the real world, lazy ST is only *moderately* tricky, causing effects that we have to make sure don't lead to weird interactions between threads. I don't think it's terribly surprising that it needs to do a few more weird things to work properly. David From adam at well-typed.com Mon Jan 30 16:54:06 2017 From: adam at well-typed.com (Adam Gundry) Date: Mon, 30 Jan 2017 16:54:06 +0000 Subject: Overloaded record fields for 8.2? Message-ID: Hi Ben, devs, I'd like to propose that we merge my latest ORF patch (https://phabricator.haskell.org/D2708) for 8.2 (modulo any code review improvements, of course). The corresponding proposal discussion (https://github.com/ghc-proposals/ghc-proposals/pull/6) seems to have reached a broad consensus. But it's not clear to me if some kind of final approval from the GHC committee is needed, or how to obtain that. This patch makes breaking changes to the experimental OverloadedLabels extension, and I think it would be good to make those quickly rather than having another release with the "old" version. Moreover, the work has been in progress for a long time, so it would be great to see it finally make it into a GHC release. Any objections? Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ben at well-typed.com Mon Jan 30 17:09:39 2017 From: ben at well-typed.com (Ben Gamari) Date: Mon, 30 Jan 2017 12:09:39 -0500 Subject: Overloaded record fields for 8.2? 
In-Reply-To: References: Message-ID: <87lgtslbjg.fsf@ben-laptop.smart-cactus.org> Adam Gundry writes: > Hi Ben, devs, > > I'd like to propose that we merge my latest ORF patch > (https://phabricator.haskell.org/D2708) for 8.2 (modulo any code review > improvements, of course). The corresponding proposal discussion > (https://github.com/ghc-proposals/ghc-proposals/pull/6) seems to have > reached a broad consensus. But it's not clear to me if some kind of > final approval from the GHC committee is needed, or how to obtain that. > The committee is indeed looking at it. > This patch makes breaking changes to the experimental OverloadedLabels > extension, and I think it would be good to make those quickly rather > than having another release with the "old" version. Moreover, the work > has been in progress for a long time, so it would be great to see it > finally make it into a GHC release. > Agreed, it would be great to have this finally see the light of day. > Any objections? > Pending committee review this sounds good to me. There are still a few other large patches that need to be merged for 8.2, so we have a little bit of time. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rwbarton at gmail.com Mon Jan 30 18:50:29 2017 From: rwbarton at gmail.com (Reid Barton) Date: Mon, 30 Jan 2017 13:50:29 -0500 Subject: Lazy ST vs concurrency In-Reply-To: <1721492.EdgZeEtm4b@squirrel> References: <1721492.EdgZeEtm4b@squirrel> Message-ID: I wrote a lazy ST microbenchmark (http://lpaste.net/351799) that uses nothing but lazy ST monad operations in the inner loop. With various caveats, it took around 3 times as long to run under +RTS -N2 after applying https://phabricator.haskell.org/D3038. 
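(The lpaste link above may no longer resolve. Purely for later readers, a benchmark of the kind described — an inner loop that does nothing but lazy ST binds and returns, so every list cell forced enters one more lazy ST thunk — might look like the sketch below. This is a hypothetical reconstruction, not Reid's actual code.)

```haskell
import Control.Monad.ST.Lazy (ST, runST)
import Data.List (foldl')

-- An infinite list produced entirely by lazy ST (>>=) and return:
-- forcing each cons cell enters one more lazy ST thunk, so summing a
-- prefix measures little beyond the per-thunk overhead under discussion.
nats :: [Int]
nats = runST (go 0)
  where
    go :: Int -> ST s [Int]
    go i = do
      rest <- go (i + 1)  -- lazy bind: the recursion runs only on demand
      return (i : rest)

main :: IO ()
main = print (foldl' (+) 0 (take 1000000 nats))
```

Compiled with and without a noDuplicate#-style fix in (>>=), the difference in runtime of a loop like this isolates the cost being debated.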
The biggest caveat is that the cost of the `threadPaused` in `noDuplicate#` seems to be potentially proportional to the thread's stack depth, and I'm not sure how representative my microbenchmark is in that regard. I'm actually surprised the `noDuplicate#` version isn't an order of magnitude or so slower than that. Still, a 3x factor is a large price to pay. I don't yet understand what's going on here clearly enough to be sure that the `noDuplicate#` is necessary, or that we can't implement `noDuplicate#` more cheaply in the common case of no contention. My feeling is that if it turns out that we can implement the correct behavior cheaply, then it will be better to have left it broken for a little while than to first have a correct but slow implementation and then later replaced it with a correct and fast implementation. The latter is disruptive to two groups of people, those who are affected by the bug and also those who cannot afford to have their lazy ST code run 3 times slower; of which the former group is affected already, and we can advertise the existence of the bug until we have a workable solution. So I'm reluctant to go down this `noDuplicate#` path until we have exhausted our other options. In an ideal world with no users, it would be better to start with correct but slow, of course. Regards, Reid Barton On Mon, Jan 30, 2017 at 11:18 AM, David Feuer wrote: > I forgot to CC ghc-devs the first time, so here's another copy. > > I was working on #11760 this weekend, which has to do with concurrency > breaking lazy ST. I came up with what I thought was a pretty decent solution ( > https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite > unhappy about the idea of sticking this weird unsafePerformIO-like code > (noDup, which I originally implemented as (unsafePerformIO . evaluate), but > which he finds ugly regardless of the details) into fmap and (>>=). 
He's also > concerned that the noDuplicate# applications will kill performance in the > multi-threaded case, and suggests he would rather leave lazy ST broken, or > even remove it altogether, than use a fix that will make it slow sometimes, > particularly since there haven't been a lot of reports of problems in the > wild. > > My view is that leaving it broken, even if it only causes trouble > occasionally, is simply not an option. If users can't rely on it to always > give correct answers, then it's effectively useless. And for the sake of > backwards compatibility, I think it's a lot better to keep it around, even if > it runs slowly multithreaded, than to remove it altogether. > > Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has > always been a bit of deep magic. You can't *really* carry a moment of time > around in your pocket and make its history happen only if necessary. We can > make it work in GHC because its execution model is entirely based around graph > reduction, so evaluation is capable of driving execution. Whereas lazy IO is > extremely tricky because it causes effects observable in the real world, lazy > ST is only *moderately* tricky, causing effects that we have to make sure > don't lead to weird interactions between threads. I don't think it's terribly > surprising that it needs to do a few more weird things to work properly. 
> > David > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From david at well-typed.com Mon Jan 30 19:29:10 2017 From: david at well-typed.com (David Feuer) Date: Mon, 30 Jan 2017 14:29:10 -0500 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: <3129834.sKcbqlbDNz@squirrel> On Monday, January 30, 2017 1:50:29 PM EST Reid Barton wrote: > I wrote a lazy ST microbenchmark (http://lpaste.net/351799) that uses > nothing but lazy ST monad operations in the inner loop. This benchmark doesn't really look like code I'd expect people to use in practice. Normally, they're using lazy ST because they actually need to use STRefs or STArrays in the loop! I suspect your test case is likely about the worst possible slowdown for the fix. And 3x slowdown in lazy ST doesn't necessarily translate to a 3x slowdown in an application using it. David From simonpj at microsoft.com Mon Jan 30 21:01:34 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 30 Jan 2017 21:01:34 +0000 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: References: Message-ID: Functions whose body is no bigger (by the inliner’s metrics) than the call are always inlined vigorously. So (\.....-> k) replaces a call by a single variable. GHC will do that a lot. These ideas are best backed by use-cases where something good is not happening. Do you have some? Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David Feuer Sent: 27 January 2017 16:42 To: ghc-devs Subject: Constant functions and selectors make for interesting arguments GHC's inliner has a notion of "interesting argument" it uses to encourage inlining of functions called with (I think) dictionary arguments. I think another class of argument is very interesting, by being very boring. Any argument that looks like either \ _ ... 
(Con _ ... x ... _ ) ... _ -> coerce x or \ _ ... _ -> k Has a pretty good chance of doing a lot of good when inlined, perhaps plugging a space leak. Would it make sense to try to identify such functions and consider them interesting for inlining? -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Mon Jan 30 21:41:31 2017 From: david at well-typed.com (David Feuer) Date: Mon, 30 Jan 2017 16:41:31 -0500 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: References: Message-ID: <2136613.hrvAvMUfJO@squirrel> Here's an example: data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor {-# NOINLINE replace #-} replace :: a -> Tree b -> Tree a replace x t = x <$ t When I compile this with -O2, I get Rec { -- RHS size: {terms: 18, types: 21, coercions: 0} $fFunctorTree_$cfmap :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3 $fFunctorTree_$cfmap = \ (@ a_aGb) (@ b_aGc) (f_aFH :: a_aGb -> b_aGc) (ds_dGN :: Tree a_aGb) -> case ds_dGN of _ { Bin a1_aFI a2_aFJ a3_aFK -> Bin ($fFunctorTree_$cfmap f_aFH a1_aFI) (f_aFH a2_aFJ) ($fFunctorTree_$cfmap f_aFH a3_aFK); Tip -> Tip } end Rec } $fFunctorTree_$c<$ :: forall a_ar4 b_ar5. a_ar4 -> Tree b_ar5 -> Tree a_ar4 $fFunctorTree_$c<$ = \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) -> $fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1 replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt replace = $fFunctorTree_$c<$ This is no good at all, because replacing the values in the same tree over and over will build up a giant chain of thunks in each node carrying all the previous values. I suppose that inlining per se may not be quite enough to fix this problem, but I suspect there's some way to fix it. Fixing it in Functor deriving would be a start (I can look into that), but fixing it in user code would be quite good too. 
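(To make the leak concrete, here is a hypothetical usage pattern — illustrative code, not taken from any ticket — that keeps replacing the values in one tree. With the Core above, each node of the result retains a chain of (\ _ -> x) applications, one per iteration, until the tree is forced.)

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Tree a = Bin (Tree a) a (Tree a) | Tip
  deriving Functor

{-# NOINLINE replace #-}
replace :: a -> Tree b -> Tree a
replace x t = x <$ t

-- Each call to 'replace' wraps every node in a thunk applying
-- (\ _ -> x_new) to the previous tree, so after n iterations each node
-- retains all n earlier arguments until the result is forced.
leaky :: Int -> Tree Int
leaky n = foldl (flip replace) (Bin Tip 0 Tip) [1 .. n]

main :: IO ()
main = case leaky 100000 of
  Bin _ x _ -> print x        -- forcing one node collapses its thunk chain
  Tip       -> putStrLn "empty"
```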
On Monday, January 30, 2017 9:01:34 PM EST Simon Peyton Jones via ghc-devs wrote: > Functions whose body is no bigger (by the inliner’s metrics) than the call > are always inlined vigorously. So (\.....-> k) replaces a call by a > single variable. GHC will do that a lot. > These ideas are best backed by use-cases where something good is not > happening. Do you have some? > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David > Feuer Sent: 27 January 2017 16:42 > To: ghc-devs > Subject: Constant functions and selectors make for interesting arguments > > GHC's inliner has a notion of "interesting argument" it uses to encourage > inlining of functions called with (I think) dictionary arguments. I think > another class of argument is very interesting, by being very boring. Any > argument that looks like either > \ _ ... (Con _ ... x ... _ ) ... _ -> coerce x > > or > > \ _ ... _ -> k > > Has a pretty good chance of doing a lot of good when inlined, perhaps > plugging a space leak. Would it make sense to try to identify such > functions and consider them interesting for inlining? From marlowsd at gmail.com Mon Jan 30 21:50:56 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 30 Jan 2017 21:50:56 +0000 Subject: Lazy ST vs concurrency In-Reply-To: <1721492.EdgZeEtm4b@squirrel> References: <1721492.EdgZeEtm4b@squirrel> Message-ID: On 30 January 2017 at 16:18, David Feuer wrote: > I forgot to CC ghc-devs the first time, so here's another copy. > > I was working on #11760 this weekend, which has to do with concurrency > breaking lazy ST. I came up with what I thought was a pretty decent > solution ( > https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is > quite > unhappy about the idea of sticking this weird unsafePerformIO-like code > (noDup, which I originally implemented as (unsafePerformIO . evaluate), but > which he finds ugly regardless of the details) into fmap and (>>=). 
He's > also > concerned that the noDuplicate# applications will kill performance in the > multi-threaded case, and suggests he would rather leave lazy ST broken, or > even remove it altogether, than use a fix that will make it slow sometimes, > particularly since there haven't been a lot of reports of problems in the > wild. > In a nutshell, I think we have to fix this despite the cost - the implementation is incorrect and unsafe. Unfortunately the mechanisms we have right now to fix it aren't ideal - noDuplicate# is a bigger hammer than we need. All we really need is some way to make a thunk atomic; it would require some special entry code to the thunk which did atomic eager-blackholing. Hmm, now that I think about it, perhaps we could just have a flag, -fatomic-eager-blackholing. We already do this for CAFs, incidentally. The idea is to compare-and-swap the blackhole info pointer into the thunk, and if we didn't win the race, just re-enter the thunk (which is now a blackhole). We already have the cmpxchg MachOp, so it shouldn't be more than a few lines in the code generator to implement it. It would be too expensive to do by default, but doing it just for Control.Monad.ST.Lazy should be ok and would fix the unsafety.
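(The compare-and-swap protocol described here can be modelled in ordinary Haskell, with an IORef standing in for the thunk's info pointer and an MVar for the blackhole. This is only an illustrative sketch of the protocol, not how the RTS would implement it.)

```haskell
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
import Data.IORef (IORef, atomicModifyIORef', newIORef, readIORef, writeIORef)

data ThunkState a = Unevaluated (IO a) | BlackHole (MVar a) | Value a

-- Atomic eager blackholing, modelled: the first thread to swing the
-- "info pointer" from Unevaluated to BlackHole runs the computation;
-- a thread that loses the race finds the BlackHole and blocks on it,
-- so the underlying computation is never duplicated.
enter :: IORef (ThunkState a) -> IO a
enter ref = do
  hole <- newEmptyMVar
  won <- atomicModifyIORef' ref $ \st -> case st of
    Unevaluated act -> (BlackHole hole, Just act)
    other           -> (other, Nothing)
  case won of
    Just act -> do
      v <- act                  -- we won the race: evaluate exactly once
      writeIORef ref (Value v)  -- update the thunk with its value
      putMVar hole v            -- wake any racers blocked on the hole
      return v
    Nothing -> do
      st <- readIORef ref
      case st of
        Value v       -> return v
        BlackHole h   -> readMVar h  -- "re-enter": wait for the winner
        Unevaluated _ -> enter ref   -- unreachable here, but retry safely

main :: IO ()
main = do
  t <- newIORef (Unevaluated (putStrLn "evaluated once" >> return (6 * 7 :: Int)))
  a <- enter t
  b <- enter t  -- second entry finds Value; no re-evaluation
  print (a, b)
```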
We can > make it work in GHC because its execution model is entirely based around > graph > reduction, so evaluation is capable of driving execution. Whereas lazy IO > is > extremely tricky because it causes effects observable in the real world, > lazy > ST is only *moderately* tricky, causing effects that we have to make sure > don't lead to weird interactions between threads. I don't think it's > terribly > surprising that it needs to do a few more weird things to work properly. > > David > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon Jan 30 21:52:06 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 30 Jan 2017 16:52:06 -0500 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: <2136613.hrvAvMUfJO@squirrel> References: <2136613.hrvAvMUfJO@squirrel> Message-ID: <1485813126.10188.15.camel@joachim-breitner.de> Hi, Am Montag, den 30.01.2017, 16:41 -0500 schrieb David Feuer: > Here's an example: > > data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor > > {-# NOINLINE replace #-} > replace :: a -> Tree b -> Tree a > replace x t = x <$ t > > When I compile this with -O2, I get > > Rec { > -- RHS size: {terms: 18, types: 21, coercions: 0} > $fFunctorTree_$cfmap >   :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3 > $fFunctorTree_$cfmap = >   \ (@ a_aGb) >     (@ b_aGc) >     (f_aFH :: a_aGb -> b_aGc) >     (ds_dGN :: Tree a_aGb) -> >     case ds_dGN of _ { >       Bin a1_aFI a2_aFJ a3_aFK -> >         Bin >           ($fFunctorTree_$cfmap f_aFH a1_aFI) >           (f_aFH a2_aFJ) >           ($fFunctorTree_$cfmap f_aFH a3_aFK); >       Tip -> Tip >     } > end Rec } > > $fFunctorTree_$c<$ >   :: forall a_ar4 b_ar5. 
a_ar4 -> Tree b_ar5 -> Tree a_ar4 > $fFunctorTree_$c<$ = >   \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) -> >     $fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1 > > replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt > replace = $fFunctorTree_$c<$ > > This is no good at all, because replacing the values in the same tree over and > over will build up a giant chain of thunks in each node carrying all the > previous values. I suppose that inlining per se may not be quite enough to fix > this problem, but I suspect there's some way to fix it. Fixing it in Functor > deriving would be a start (I can look into that), but fixing it in user code > would be quite good too. As far as I can tell, this would require * static argument transformation, which would make $fFunctorTree_$cfmap non-recursive (with a recursive local function which would *not* take f_aFH as an argument). * Then you’d have to inline that into $fFunctorTree_$c<$. This would duplicate the local recursive function, so I'm not sure how eagerly GHC would do that. * And then finally inline (\ _ -> eta_aGS) into the duplicated local recursive function, which will then yield great code. https://ghc.haskell.org/trac/ghc/ticket/9374 is relevant. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From david at well-typed.com Mon Jan 30 22:25:17 2017 From: david at well-typed.com (David Feuer) Date: Mon, 30 Jan 2017 17:25:17 -0500 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: <1554400.bk4NQVUtyo@squirrel> On Monday, January 30, 2017 9:50:56 PM EST Simon Marlow wrote: > Unfortunately the mechanisms we have right now to fix it aren't ideal - > noDuplicate# is a bigger hammer than we need. Do you think you could explain this a bit more? What aspect of noDuplicate# is overkill? What does it guard against that can't happen here? > All we really need is some > way to make a thunk atomic, it would require some special entry code to the > thunk which did atomic eager-blackholing. Hmm, now that I think about it, > perhaps we could just have a flag, -fatomic-eager-blackholing. If it's possible to use a primop to do this "locally", I think it would be very nice to get that as well as a global flag. If it affects code generation in an inherently global fashion, then of course we'll just have to live with that, and lots of NOINLINE. David From simonpj at microsoft.com Mon Jan 30 22:50:43 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 30 Jan 2017 22:50:43 +0000 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: <2136613.hrvAvMUfJO@squirrel> References: <2136613.hrvAvMUfJO@squirrel> Message-ID: What code would you like to get? I think you are talking about specialising a recursive function ($fFunctorTree_$cfmap in this case) for a particular value of its function argument. That's a bit like SpecConstr (SpecFun perhaps) and nothing at all like inlining. Unless I'm missing something. I think there's a ticket somewhere about extending SpecConstr to work on function arguments, but it's tricky to do.
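(For illustration only — hypothetical code, not GHC output — this is what such a specialisation would produce if carried out by hand: first the static-argument-transformed fmap, whose local worker no longer passes f around, and then the residue of specialising that worker for f = (\ _ -> x).)

```haskell
data Tree a = Bin (Tree a) a (Tree a) | Tip
  deriving (Eq, Show)

-- Static argument transformation of the derived fmap: f is constant
-- over the recursion, so the local worker can be specialised whenever
-- fmapTree is inlined at a call site with a known f.
fmapTree :: (a -> b) -> Tree a -> Tree b
fmapTree f = go
  where
    go (Bin l x r) = Bin (go l) (f x) (go r)
    go Tip         = Tip

-- What specialising 'go' for f = (\ _ -> x) leaves behind: x is shared
-- directly by every node, with no chain of (\ _ -> x) thunks.
replaceTree :: a -> Tree b -> Tree a
replaceTree x = go
  where
    go (Bin l _ r) = Bin (go l) x (go r)
    go Tip         = Tip

main :: IO ()
main = print (replaceTree 9 (Bin (Bin Tip 1 Tip) 2 Tip))
```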
Simon | -----Original Message----- | From: David Feuer [mailto:david at well-typed.com] | Sent: 30 January 2017 21:42 | To: ghc-devs at haskell.org; Simon Peyton Jones | Cc: David Feuer | Subject: Re: Constant functions and selectors make for interesting | arguments | | Here's an example: | | data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor | | {-# NOINLINE replace #-} | replace :: a -> Tree b -> Tree a | replace x t = x <$ t | | When I compile this with -O2, I get | | Rec { | -- RHS size: {terms: 18, types: 21, coercions: 0} $fFunctorTree_$cfmap | :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3 | $fFunctorTree_$cfmap = | \ (@ a_aGb) | (@ b_aGc) | (f_aFH :: a_aGb -> b_aGc) | (ds_dGN :: Tree a_aGb) -> | case ds_dGN of _ { | Bin a1_aFI a2_aFJ a3_aFK -> | Bin | ($fFunctorTree_$cfmap f_aFH a1_aFI) | (f_aFH a2_aFJ) | ($fFunctorTree_$cfmap f_aFH a3_aFK); | Tip -> Tip | } | end Rec } | | $fFunctorTree_$c<$ | :: forall a_ar4 b_ar5. a_ar4 -> Tree b_ar5 -> Tree a_ar4 | $fFunctorTree_$c<$ = | \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) -> | $fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1 | | replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt replace | = $fFunctorTree_$c<$ | | This is no good at all, because replacing the values in the same tree | over and over will build up a giant chain of thunks in each node carrying | all the previous values. I suppose that inlining per se may not be quite | enough to fix this problem, but I suspect there's some way to fix it. | Fixing it in Functor deriving would be a start (I can look into that), | but fixing it in user code would be quite good too. | | On Monday, January 30, 2017 9:01:34 PM EST Simon Peyton Jones via ghc- | devs | wrote: | > Functions whose body is no bigger (by the inliner’s metrics) than the | call | > are always inlined vigorously. So (\.....-> k) replaces a call by a | > single variable. GHC will do that a lot. 
| | > These ideas are best backed by use-cases where something good is not | > happening. Do you have some? | | > Simon | > | > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | > David Feuer | Sent: 27 January 2017 16:42 | > To: ghc-devs | > Subject: Constant functions and selectors make for interesting | > arguments | > | > GHC's inliner has a notion of "interesting argument" it uses to | > encourage inlining of functions called with (I think) dictionary | > arguments. I think another class of argument is very interesting, by | > being very boring. Any argument that looks like either | | > \ _ ... (Con _ ... x ... _ ) ... _ -> coerce x | > | > or | > | > \ _ ... _ -> k | > | > Has a pretty good chance of doing a lot of good when inlined, perhaps | > plugging a space leak. Would it make sense to try to identify such | > functions and consider them interesting for inlining? | From simonpj at microsoft.com Mon Jan 30 22:56:31 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 30 Jan 2017 22:56:31 +0000 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: We don’t want to do this on a per-module basis do we, as -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? Which thunks, precisely? I think perhaps precisely thunks one of whose free variables has type (Sttate# s) for some s. These are thunks that consume a state token, and must do so no more than once. If entering such thunks was atomic, could we kill off noDuplicate#? I still don’t understand exactly what noDuplicate# does, what problem it solves, and how the problem it solves relates to this LazyST problem. We need some kind of fix for 8.2. Simon what do you suggest? 
Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 30 January 2017 21:51 To: David Feuer Cc: Simon Peyton Jones ; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 16:18, David Feuer > wrote: I forgot to CC ghc-devs the first time, so here's another copy. I was working on #11760 this weekend, which has to do with concurrency breaking lazy ST. I came up with what I thought was a pretty decent solution ( https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite unhappy about the idea of sticking this weird unsafePerformIO-like code (noDup, which I originally implemented as (unsafePerformIO . evaluate), but which he finds ugly regardless of the details) into fmap and (>>=). He's also concerned that the noDuplicate# applications will kill performance in the multi-threaded case, and suggests he would rather leave lazy ST broken, or even remove it altogether, than use a fix that will make it slow sometimes, particularly since there haven't been a lot of reports of problems in the wild. In a nutshell, I think we have to fix this despite the cost - the implementation is incorrect and unsafe. Unfortunately the mechanisms we have right now to fix it aren't ideal - noDuplicate# is a bigger hammer than we need. All we really need is some way to make a thunk atomic, it would require some special entry code to the thunk which did atomic eager-blackholing. Hmm, now that I think about it, perhaps we could just have a flag, -fatomic-eager-blackholing. We already do this for CAFs, incidentally. The idea is to compare-and-swap the blackhole info pointer into the thunk, and if we didn't win the race, just re-enter the thunk (which is now a blackhole). We already have the cmpxchg MachOp, so It shouldn't be more than a few lines in the code generator to implement it. It would be too expensive to do by default, but doing it just for Control.Monad.ST.Lazy should be ok and would fix the unsafety. 
(I haven't really thought this through, just an idea off the top of my head, so there could well be something I'm overlooking here...) Cheers Simon My view is that leaving it broken, even if it only causes trouble occasionally, is simply not an option. If users can't rely on it to always give correct answers, then it's effectively useless. And for the sake of backwards compatibility, I think it's a lot better to keep it around, even if it runs slowly multithreaded, than to remove it altogether. Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has always been a bit of deep magic. You can't *really* carry a moment of time around in your pocket and make its history happen only if necessary. We can make it work in GHC because its execution model is entirely based around graph reduction, so evaluation is capable of driving execution. Whereas lazy IO is extremely tricky because it causes effects observable in the real world, lazy ST is only *moderately* tricky, causing effects that we have to make sure don't lead to weird interactions between threads. I don't think it's terribly surprising that it needs to do a few more weird things to work properly. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jan 30 23:06:28 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 30 Jan 2017 18:06:28 -0500 Subject: Lazy ST vs concurrency In-Reply-To: <1554400.bk4NQVUtyo@squirrel> References: <1721492.EdgZeEtm4b@squirrel> <1554400.bk4NQVUtyo@squirrel> Message-ID: <87inowkv0r.fsf@ben-laptop.smart-cactus.org> David Feuer writes: > On Monday, January 30, 2017 9:50:56 PM EST Simon Marlow wrote: > >> Unfortunately the mechanisms we have right now to fix it aren't ideal - >> noDuplicate# is a bigger hammer than we need. > > Do you think you could explain this a bit more? What aspect of nuDuplicate# is > overkill? What does it guard against that can't happen here? 
> I suspect Simon is referring to the fact that noDuplicate# actually needs to call back into the RTS to claim ownership over all thunks on the stack before it can proceed. >> All we really need is some >> way to make a thunk atomic, it would require some special entry code to the >> thunk which did atomic eager-blackholing. Hmm, now that I think about it, >> perhaps we could just have a flag, -fatomic-eager-blackholing. > Indeed this sounds quite reasonable. > If it's possible to use a primop to do this "locally", I think it would be > very nice to get that as well as a global flag. If it affects code generation > in an inherently global fashion, then of course we'll just have to live with > that, and lots of NOINLINE. > I guess something like, eagerlyBlackhole :: a -> a Which would be lowered to a bit of code which would eagerly blackhole the thunk before entering it? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From david.feuer at gmail.com Tue Jan 31 03:43:31 2017 From: david.feuer at gmail.com (David Feuer) Date: Mon, 30 Jan 2017 22:43:31 -0500 Subject: Constant functions and selectors make for interesting arguments In-Reply-To: References: <2136613.hrvAvMUfJO@squirrel> Message-ID: Yes, I clearly haven't thought this through enough. For the purpose of Functor deriving, it looks like we can just derive better. I've been discussing it with Ryan and Reid, and it looks like we can get that taken care of. David On Mon, Jan 30, 2017 at 5:50 PM, Simon Peyton Jones wrote: > What code would you like to get? > > I think you are talking about specialising a recursive function ($fFunctorTree_$cfmap in this case) for a particular value of its function argument. That's a bit like SpecConstr (SpecFun perhaps) and nothing at all like inlining. > > Unless I'm missing something.
> > I think there's a ticket somewhere about extending SpecConstr to work on function arugments, but it's tricky to do. > > Simon > > | -----Original Message----- > | From: David Feuer [mailto:david at well-typed.com] > | Sent: 30 January 2017 21:42 > | To: ghc-devs at haskell.org; Simon Peyton Jones > | Cc: David Feuer > | Subject: Re: Constant functions and selectors make for interesting > | arguments > | > | Here's an example: > | > | data Tree a = Bin (Tree a) a (Tree a) | Tip deriving Functor > | > | {-# NOINLINE replace #-} > | replace :: a -> Tree b -> Tree a > | replace x t = x <$ t > | > | When I compile this with -O2, I get > | > | Rec { > | -- RHS size: {terms: 18, types: 21, coercions: 0} $fFunctorTree_$cfmap > | :: forall a_ar2 b_ar3. (a_ar2 -> b_ar3) -> Tree a_ar2 -> Tree b_ar3 > | $fFunctorTree_$cfmap = > | \ (@ a_aGb) > | (@ b_aGc) > | (f_aFH :: a_aGb -> b_aGc) > | (ds_dGN :: Tree a_aGb) -> > | case ds_dGN of _ { > | Bin a1_aFI a2_aFJ a3_aFK -> > | Bin > | ($fFunctorTree_$cfmap f_aFH a1_aFI) > | (f_aFH a2_aFJ) > | ($fFunctorTree_$cfmap f_aFH a3_aFK); > | Tip -> Tip > | } > | end Rec } > | > | $fFunctorTree_$c<$ > | :: forall a_ar4 b_ar5. a_ar4 -> Tree b_ar5 -> Tree a_ar4 > | $fFunctorTree_$c<$ = > | \ (@ a_aGQ) (@ b_aGR) (eta_aGS :: a_aGQ) (eta1_B1 :: Tree b_aGR) -> > | $fFunctorTree_$cfmap (\ _ -> eta_aGS) eta1_B1 > | > | replace :: forall a_aqt b_aqu. a_aqt -> Tree b_aqu -> Tree a_aqt replace > | = $fFunctorTree_$c<$ > | > | This is no good at all, because replacing the values in the same tree > | over and over will build up a giant chain of thunks in each node carrying > | all the previous values. I suppose that inlining per se may not be quite > | enough to fix this problem, but I suspect there's some way to fix it. > | Fixing it in Functor deriving would be a start (I can look into that), > | but fixing it in user code would be quite good too. 
> | > | On Monday, January 30, 2017 9:01:34 PM EST Simon Peyton Jones via ghc- > | devs > | wrote: > | > Functions whose body is no bigger (by the inliner’s metrics) than the > | call > | > are always inlined vigorously. So (\.....-> k) replaces a call by a > | > single variable. GHC will do that a lot. > | > | > These ideas are best backed by use-cases where something good is not > | > happening. Do you have some? > | > | > Simon > | > > | > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | > David Feuer > | Sent: 27 January 2017 16:42 > | > To: ghc-devs > | > Subject: Constant functions and selectors make for interesting > | > arguments > | > > | > GHC's inliner has a notion of "interesting argument" it uses to > | > encourage inlining of functions called with (I think) dictionary > | > arguments. I think another class of argument is very interesting, by > | > being very boring. Any argument that looks like either > | > | > \ _ ... (Con _ ... x ... _ ) ... _ -> coerce x > | > > | > or > | > > | > \ _ ... _ -> k > | > > | > Has a pretty good chance of doing a lot of good when inlined, perhaps > | > plugging a space leak. Would it make sense to try to identify such > | > functions and consider them interesting for inlining? > | > From marlowsd at gmail.com Tue Jan 31 08:59:01 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 31 Jan 2017 08:59:01 +0000 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: On 30 January 2017 at 22:56, Simon Peyton Jones wrote: > We don’t want to do this on a per-module basis do we, as > -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? > Which thunks, precisely? I think perhaps *precisely thunks one of whose > free variables has type (Sttate# s) for some s.* These are thunks that > consume a state token, and must do so no more than once. 
> If we could identify exactly the thunks we wanted to be atomic, then yes, that would be better than a whole-module solution. However I'm not sure how to do that - doing it on the basis of a free variable with State# type doesn't work if the State# is buried in a data structure or a function closure, for instance. > If entering such thunks was atomic, could we kill off noDuplicate#? > > > > I still don’t understand exactly what noDuplicate# does, what problem it > solves, and how the problem it solves relates to this LazyST problem. > > Back in our "Haskell on a Shared Memory Multiprocessor" paper ( http://simonmar.github.io/bib/papers/multiproc.pdf) we described a scheme to try to avoid duplication of work when multiple cores evaluate the same thunk. This is normally applied lazily, because it involves walking the stack and atomically black-holing thunks pointed to by update frames. The noDuplicate# primop just invokes the stack walk immediately; the idea is to try to prevent multiple threads from evaluating a thunk containing unsafePerformIO. It's expensive. It's also not foolproof, because if you already happened to create two copies of the unsafePerformIO thunk then noDuplicate# can't help. I've never really liked it for these reasons, but I don't know a better way. We have unsafeDupablePerformIO that doesn't call noDuplicate#, and the programmer can use it when the unsafePerformIO can safely be executed multiple times. > > > We need some kind of fix for 8.2. Simon what do you suggest? > David's current fix would be OK (along with a clear notice in the release notes etc. to note that the implementation got slower). I think -fatomic-eager-blackholing might "fix" it with less overhead, though. Ben's suggestion: > eagerlyBlackhole :: a -> a is likely to be unreliable I think. We lack the control in the source language to tie it to a particular thunk.
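To make the distinction concrete (an illustrative sketch, not GHC source; `pureSquare` and `sharedCounter` are invented examples):

```haskell
import System.IO.Unsafe (unsafeDupablePerformIO, unsafePerformIO)
import Data.IORef

-- Harmless to duplicate: if two threads race and both run the action,
-- each just recomputes the same pure result, so the cheaper
-- unsafeDupablePerformIO (no noDuplicate# stack walk) is fine.
pureSquare :: Int -> Int
pureSquare n = unsafeDupablePerformIO (return (n * n))

-- NOT harmless to duplicate: running this twice would allocate two
-- distinct refs, so it needs the noDuplicate#-protected version.
sharedCounter :: IORef Int
sharedCounter = unsafePerformIO (newIORef 0)
{-# NOINLINE sharedCounter #-}
```

The `NOINLINE` on `sharedCounter` matters for the usual unsafePerformIO reasons: if the compiler duplicated the thunk, even `noDuplicate#` could not merge the two copies — exactly the "not foolproof" case described above.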
Cheers Simon > > Simon > > > > *From:* Simon Marlow [mailto:marlowsd at gmail.com] > *Sent:* 30 January 2017 21:51 > *To:* David Feuer > *Cc:* Simon Peyton Jones ; ghc-devs at haskell.org > *Subject:* Re: Lazy ST vs concurrency > > > > On 30 January 2017 at 16:18, David Feuer wrote: > > I forgot to CC ghc-devs the first time, so here's another copy. > > > I was working on #11760 this weekend, which has to do with concurrency > breaking lazy ST. I came up with what I thought was a pretty decent > solution ( > https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is > quite > unhappy about the idea of sticking this weird unsafePerformIO-like code > (noDup, which I originally implemented as (unsafePerformIO . evaluate), but > which he finds ugly regardless of the details) into fmap and (>>=). He's > also > concerned that the noDuplicate# applications will kill performance in the > multi-threaded case, and suggests he would rather leave lazy ST broken, or > even remove it altogether, than use a fix that will make it slow sometimes, > particularly since there haven't been a lot of reports of problems in the > wild. > > > > In a nutshell, I think we have to fix this despite the cost - the > implementation is incorrect and unsafe. > > > > Unfortunately the mechanisms we have right now to fix it aren't ideal - > noDuplicate# is a bigger hammer than we need. All we really need is some > way to make a thunk atomic, it would require some special entry code to the > thunk which did atomic eager-blackholing. Hmm, now that I think about it, > perhaps we could just have a flag, -fatomic-eager-blackholing. We already > do this for CAFs, incidentally. The idea is to compare-and-swap the > blackhole info pointer into the thunk, and if we didn't win the race, just > re-enter the thunk (which is now a blackhole). We already have the cmpxchg > MachOp, so It shouldn't be more than a few lines in the code generator to > implement it. 
It would be too expensive to do by default, but doing it > just for Control.Monad.ST.Lazy should be ok and would fix the unsafety. > > > > (I haven't really thought this through, just an idea off the top of my > head, so there could well be something I'm overlooking here...) > > > > Cheers > > Simon > > > > > > My view is that leaving it broken, even if it only causes trouble > occasionally, is simply not an option. If users can't rely on it to always > give correct answers, then it's effectively useless. And for the sake of > backwards compatibility, I think it's a lot better to keep it around, even > if > it runs slowly multithreaded, than to remove it altogether. > > Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST > has > always been a bit of deep magic. You can't *really* carry a moment of time > around in your pocket and make its history happen only if necessary. We can > make it work in GHC because its execution model is entirely based around > graph > reduction, so evaluation is capable of driving execution. Whereas lazy IO > is > extremely tricky because it causes effects observable in the real world, > lazy > ST is only *moderately* tricky, causing effects that we have to make sure > don't lead to weird interactions between threads. I don't think it's > terribly > surprising that it needs to do a few more weird things to work properly. > > David > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 31 09:11:19 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 31 Jan 2017 09:11:19 +0000 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: If we could identify exactly the thunks we wanted to be atomic, then yes, that would be better than a whole-module solution. 
However I'm not sure how to do that - doing it on the basis of a free variable with State# type doesn't work if the State# is buried in a data structure or a function closure, for instance. I disagree. Having a free State# variable is precisely necessary and sufficient, I claim. Can you provide a counter-example? Informal proof: · The model is that a value of type (State# t) is a linear value that we mutate in-place. We must not consume it twice. · Evaluating a thunk that has a free (State# t) variable is precisely “consuming” it. So we should only do that once. I think -fatomic-eager-blackholing might "fix" it with less overhead, though. But precisely where would you have to use that flag? Inlining could mean that the code appears anywhere! Once we have the ability to atomically-blackhole a thunk, we can just use my criterion above, I claim. Stopgap story for 8.2. I am far from convinced that putting unsafePerformIO in the impl of (>>=) for the ST monad will be correct; but if you tell me it is, and if it is surrounded with huge banners saying that this is the wrong solution, and pointing to a new ticket to fix it, then OK. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 31 January 2017 08:59 To: Simon Peyton Jones Cc: David Feuer ; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 22:56, Simon Peyton Jones > wrote: We don’t want to do this on a per-module basis do we, as -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? Which thunks, precisely? I think perhaps precisely thunks one of whose free variables has type (State# s) for some s. These are thunks that consume a state token, and must do so no more than once. If we could identify exactly the thunks we wanted to be atomic, then yes, that would be better than a whole-module solution.
However I'm not sure how to do that - doing it on the basis of a free variable with State# type doesn't work if the State# is buried in a data structure or a function closure, for instance. If entering such thunks was atomic, could we kill off noDuplicate#? I still don’t understand exactly what noDuplicate# does, what problem it solves, and how the problem it solves relates to this LazyST problem. Back in our "Haskell on a Shared Memory Multiprocessor" paper (http://simonmar.github.io/bib/papers/multiproc.pdf) we described a scheme to try to avoid duplication of work when multiple cores evaluate the same thunk. This is normally applied lazily, because it involves walking the stack and atomically black-holing thunks pointed to by update frames. The noDuplicate# primop just invokes the stack walk immediately; the idea is to try to prevent multiple threads from evaluating a thunk containing unsafePerformIO. It's expensive. It's also not foolproof, because if you already happened to create two copies of the unsafePerformIO thunk then noDuplicate# can't help. I've never really liked it for these reasons, but I don't know a better way. We have unsafeDupablePerformIO that doesn't call noDuplicate#, and the programmer can use when the unsafePerformIO can safely be executed multiple times. We need some kind of fix for 8.2. Simon what do you suggest? David's current fix would be OK (along with a clear notice in the release notes etc. to note that the implementation got slower). I think -fatomic-eager-blackholing might "fix" it with less overhead, though. Ben's suggestion: > eagerlyBlackhole :: a -> a is likely to be unreliable I think. We lack the control in the source language to tie it to a particular thunk. 
Cheers Simon Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 30 January 2017 21:51 To: David Feuer > Cc: Simon Peyton Jones >; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 16:18, David Feuer > wrote: I forgot to CC ghc-devs the first time, so here's another copy. I was working on #11760 this weekend, which has to do with concurrency breaking lazy ST. I came up with what I thought was a pretty decent solution ( https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite unhappy about the idea of sticking this weird unsafePerformIO-like code (noDup, which I originally implemented as (unsafePerformIO . evaluate), but which he finds ugly regardless of the details) into fmap and (>>=). He's also concerned that the noDuplicate# applications will kill performance in the multi-threaded case, and suggests he would rather leave lazy ST broken, or even remove it altogether, than use a fix that will make it slow sometimes, particularly since there haven't been a lot of reports of problems in the wild. In a nutshell, I think we have to fix this despite the cost - the implementation is incorrect and unsafe. Unfortunately the mechanisms we have right now to fix it aren't ideal - noDuplicate# is a bigger hammer than we need. All we really need is some way to make a thunk atomic, it would require some special entry code to the thunk which did atomic eager-blackholing. Hmm, now that I think about it, perhaps we could just have a flag, -fatomic-eager-blackholing. We already do this for CAFs, incidentally. The idea is to compare-and-swap the blackhole info pointer into the thunk, and if we didn't win the race, just re-enter the thunk (which is now a blackhole). We already have the cmpxchg MachOp, so It shouldn't be more than a few lines in the code generator to implement it. It would be too expensive to do by default, but doing it just for Control.Monad.ST.Lazy should be ok and would fix the unsafety. 
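The compare-and-swap idea can be modelled in Haskell itself (a model only: the real scheme would live in the thunk entry code generated by GHC, and `Slot`/`demand` are invented names):

```haskell
import Control.Concurrent.MVar
import Data.IORef

-- A "thunk" is either unevaluated work or a blackhole that losers of
-- the race can block on.
data Slot a = Thunk (IO a) | BlackHole (MVar a)

-- The first demander atomically swaps in a blackhole and runs the
-- work; anyone arriving later finds the blackhole and waits for the
-- winner's result instead of duplicating the computation.
demand :: IORef (Slot a) -> IO a
demand ref = do
  hole <- newEmptyMVar
  claimed <- atomicModifyIORef' ref $ \slot -> case slot of
    Thunk work   -> (BlackHole hole, Left work)
    BlackHole mv -> (slot, Right mv)
  case claimed of
    Left work -> do
      x <- work
      putMVar hole x
      return x
    Right mv -> readMVar mv
```

Here `atomicModifyIORef'` plays the role of the cmpxchg on the info pointer: the winner installs the blackhole and evaluates, while losers block on the winner's `MVar` rather than re-entering the work.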
(I haven't really thought this through, just an idea off the top of my head, so there could well be something I'm overlooking here...) Cheers Simon My view is that leaving it broken, even if it only causes trouble occasionally, is simply not an option. If users can't rely on it to always give correct answers, then it's effectively useless. And for the sake of backwards compatibility, I think it's a lot better to keep it around, even if it runs slowly multithreaded, than to remove it altogether. Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has always been a bit of deep magic. You can't *really* carry a moment of time around in your pocket and make its history happen only if necessary. We can make it work in GHC because its execution model is entirely based around graph reduction, so evaluation is capable of driving execution. Whereas lazy IO is extremely tricky because it causes effects observable in the real world, lazy ST is only *moderately* tricky, causing effects that we have to make sure don't lead to weird interactions between threads. I don't think it's terribly surprising that it needs to do a few more weird things to work properly. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jan 31 09:25:23 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 31 Jan 2017 09:25:23 +0000 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: On 31 January 2017 at 09:11, Simon Peyton Jones wrote: > If we could identify exactly the thunks we wanted to be atomic, then yes, > that would be better than a whole-module solution. However I'm not sure > how to do that - doing it on the basis of a free variable with State# type > doesn't work if the State# is buried in a data structure or a function > closure, for instance. > > > > I disagree. Having a free State# variable is precisely necessary and > sufficient, I claim. 
Can you provide a counter-example? > > Sure, what I had in mind is something like this, defining a local unsafePerformIO: \(s :: State# s) -> let unsafePerformIO = \g -> g s thunk = unsafePerformIO (\s -> ... ) in ... and "thunk" doesn't have a free variable of type State#. Cheers Simon > > Informal proof: > > · The model is that a value of type (State# t) is a linear value > that we mutate in-place. We must not consume it twice. > > · Evaluating a thunk that has a free (State# t) variable is > precisely “consuming” it. So we should only do that once > > > > > I think -fatomic-eager-blackholing might "fix" it with less overhead, > though > > > > But precisely where would you have to use that flag? Inlining could meant > that the code appears anywhere! Once we have the ability to > atomically-blackhole a thunk, we can just use my criterion above, I claim. > > > > Stopgap story for 8.2. I am far from convinced that putting > unsafePerformIO in the impl of (>>=) for the ST monad will be correct; but > if you tell me it is, and if it is surrounded with huge banners saying that > this is the wrong solution, and pointing to a new ticket to fix it, then OK. > Arguably this isn't all that urgent, given that it's been broken for 8 years or so. > > > Simon > > > > *From:* Simon Marlow [mailto:marlowsd at gmail.com] > *Sent:* 31 January 2017 08:59 > *To:* Simon Peyton Jones > *Cc:* David Feuer ; ghc-devs at haskell.org > > *Subject:* Re: Lazy ST vs concurrency > > > > On 30 January 2017 at 22:56, Simon Peyton Jones > wrote: > > We don’t want to do this on a per-module basis do we, as > -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? > Which thunks, precisely? I think perhaps *precisely thunks one of whose > free variables has type (Sttate# s) for some s.* These are thunks that > consume a state token, and must do so no more than once. 
> > > > If we could identify exactly the thunks we wanted to be atomic, then yes, > that would be better than a whole-module solution. However I'm not sure > how to do that - doing it on the basis of a free variable with State# type > doesn't work if the State# is buried in a data structure or a function > closure, for instance. > > > > If entering such thunks was atomic, could we kill off noDuplicate#? > > > > I still don’t understand exactly what noDuplicate# does, what problem it > solves, and how the problem it solves relates to this LazyST problem. > > > > Back in our "Haskell on a Shared Memory Multiprocessor" paper ( > http://simonmar.github.io/bib/papers/multiproc.pdf > ) > we described a scheme to try to avoid duplication of work when multiple > cores evaluate the same thunk. This is normally applied lazily, because it > involves walking the stack and atomically black-holing thunks pointed to by > update frames. The noDuplicate# primop just invokes the stack walk > immediately; the idea is to try to prevent multiple threads from evaluating > a thunk containing unsafePerformIO. > > > > It's expensive. It's also not foolproof, because if you already happened > to create two copies of the unsafePerformIO thunk then noDuplicate# can't > help. I've never really liked it for these reasons, but I don't know a > better way. We have unsafeDupablePerformIO that doesn't call noDuplicate#, > and the programmer can use when the unsafePerformIO can safely be executed > multiple times. > > > > > > We need some kind of fix for 8.2. Simon what do you suggest? > > > > David's current fix would be OK (along with a clear notice in the release > notes etc. to note that the implementation got slower). I think > -fatomic-eager-blackholing might "fix" it with less overhead, though. > > > > Ben's suggestion: > > > > > eagerlyBlackhole :: a -> a > > > > is likely to be unreliable I think. We lack the control in the source > language to tie it to a particular thunk. 
> > > > Cheers > > Simon > > > > > > Simon > > > > *From:* Simon Marlow [mailto:marlowsd at gmail.com] > *Sent:* 30 January 2017 21:51 > *To:* David Feuer > *Cc:* Simon Peyton Jones ; ghc-devs at haskell.org > *Subject:* Re: Lazy ST vs concurrency > > > > On 30 January 2017 at 16:18, David Feuer wrote: > > I forgot to CC ghc-devs the first time, so here's another copy. > > > I was working on #11760 this weekend, which has to do with concurrency > breaking lazy ST. I came up with what I thought was a pretty decent > solution ( > https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is > quite > unhappy about the idea of sticking this weird unsafePerformIO-like code > (noDup, which I originally implemented as (unsafePerformIO . evaluate), but > which he finds ugly regardless of the details) into fmap and (>>=). He's > also > concerned that the noDuplicate# applications will kill performance in the > multi-threaded case, and suggests he would rather leave lazy ST broken, or > even remove it altogether, than use a fix that will make it slow sometimes, > particularly since there haven't been a lot of reports of problems in the > wild. > > > > In a nutshell, I think we have to fix this despite the cost - the > implementation is incorrect and unsafe. > > > > Unfortunately the mechanisms we have right now to fix it aren't ideal - > noDuplicate# is a bigger hammer than we need. All we really need is some > way to make a thunk atomic, it would require some special entry code to the > thunk which did atomic eager-blackholing. Hmm, now that I think about it, > perhaps we could just have a flag, -fatomic-eager-blackholing. We already > do this for CAFs, incidentally. The idea is to compare-and-swap the > blackhole info pointer into the thunk, and if we didn't win the race, just > re-enter the thunk (which is now a blackhole). We already have the cmpxchg > MachOp, so It shouldn't be more than a few lines in the code generator to > implement it. 
It would be too expensive to do by default, but doing it > just for Control.Monad.ST.Lazy should be ok and would fix the unsafety. > > > > (I haven't really thought this through, just an idea off the top of my > head, so there could well be something I'm overlooking here...) > > > > Cheers > > Simon > > > > > > My view is that leaving it broken, even if it only causes trouble > occasionally, is simply not an option. If users can't rely on it to always > give correct answers, then it's effectively useless. And for the sake of > backwards compatibility, I think it's a lot better to keep it around, even > if > it runs slowly multithreaded, than to remove it altogether. > > Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST > has > always been a bit of deep magic. You can't *really* carry a moment of time > around in your pocket and make its history happen only if necessary. We can > make it work in GHC because its execution model is entirely based around > graph > reduction, so evaluation is capable of driving execution. Whereas lazy IO > is > extremely tricky because it causes effects observable in the real world, > lazy > ST is only *moderately* tricky, causing effects that we have to make sure > don't lead to weird interactions between threads. I don't think it's > terribly > surprising that it needs to do a few more weird things to work properly. > > David > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 31 10:02:33 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 31 Jan 2017 10:02:33 +0000 Subject: Lazy ST vs concurrency In-Reply-To: References: <1721492.EdgZeEtm4b@squirrel> Message-ID: Huh. You are right. That’s horrible. OK, here’s another idea. Provide, applyOnce# :: (a->b) -> a -> b which behaves like applyOnce f x = f x but guarantees that any thunk (applyOnce# f x) will be evaluated with atomic eager black-holing. 
\(s :: State# s) -> let unsafePerformIO = \g -> g s thunk = applyOnce# unsafePerformIO (\s -> ... ) in ... Of course this does not guarantee safety. But I think it’d give a per-thunk way to specify it. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 31 January 2017 09:25 To: Simon Peyton Jones Cc: David Feuer ; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 31 January 2017 at 09:11, Simon Peyton Jones > wrote: If we could identify exactly the thunks we wanted to be atomic, then yes, that would be better than a whole-module solution. However I'm not sure how to do that - doing it on the basis of a free variable with State# type doesn't work if the State# is buried in a data structure or a function closure, for instance. I disagree. Having a free State# variable is precisely necessary and sufficient, I claim. Can you provide a counter-example? Sure, what I had in mind is something like this, defining a local unsafePerformIO: \(s :: State# s) -> let unsafePerformIO = \g -> g s thunk = unsafePerformIO (\s -> ... ) in ... and "thunk" doesn't have a free variable of type State#. Cheers Simon Informal proof: • The model is that a value of type (State# t) is a linear value that we mutate in-place. We must not consume it twice. • Evaluating a thunk that has a free (State# t) variable is precisely “consuming” it. So we should only do that once I think -fatomic-eager-blackholing might "fix" it with less overhead, though But precisely where would you have to use that flag? Inlining could meant that the code appears anywhere! Once we have the ability to atomically-blackhole a thunk, we can just use my criterion above, I claim. Stopgap story for 8.2. I am far from convinced that putting unsafePerformIO in the impl of (>>=) for the ST monad will be correct; but if you tell me it is, and if it is surrounded with huge banners saying that this is the wrong solution, and pointing to a new ticket to fix it, then OK. 
Arguably this isn't all that urgent, given that it's been broken for 8 years or so. Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 31 January 2017 08:59 To: Simon Peyton Jones > Cc: David Feuer >; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 22:56, Simon Peyton Jones > wrote: We don’t want to do this on a per-module basis do we, as -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? Which thunks, precisely? I think perhaps precisely thunks one of whose free variables has type (Sttate# s) for some s. These are thunks that consume a state token, and must do so no more than once. If we could identify exactly the thunks we wanted to be atomic, then yes, that would be better than a whole-module solution. However I'm not sure how to do that - doing it on the basis of a free variable with State# type doesn't work if the State# is buried in a data structure or a function closure, for instance. If entering such thunks was atomic, could we kill off noDuplicate#? I still don’t understand exactly what noDuplicate# does, what problem it solves, and how the problem it solves relates to this LazyST problem. Back in our "Haskell on a Shared Memory Multiprocessor" paper (http://simonmar.github.io/bib/papers/multiproc.pdf) we described a scheme to try to avoid duplication of work when multiple cores evaluate the same thunk. This is normally applied lazily, because it involves walking the stack and atomically black-holing thunks pointed to by update frames. The noDuplicate# primop just invokes the stack walk immediately; the idea is to try to prevent multiple threads from evaluating a thunk containing unsafePerformIO. It's expensive. It's also not foolproof, because if you already happened to create two copies of the unsafePerformIO thunk then noDuplicate# can't help. I've never really liked it for these reasons, but I don't know a better way. 
We have unsafeDupablePerformIO that doesn't call noDuplicate#, and the programmer can use when the unsafePerformIO can safely be executed multiple times. We need some kind of fix for 8.2. Simon what do you suggest? David's current fix would be OK (along with a clear notice in the release notes etc. to note that the implementation got slower). I think -fatomic-eager-blackholing might "fix" it with less overhead, though. Ben's suggestion: > eagerlyBlackhole :: a -> a is likely to be unreliable I think. We lack the control in the source language to tie it to a particular thunk. Cheers Simon Simon From: Simon Marlow [mailto:marlowsd at gmail.com] Sent: 30 January 2017 21:51 To: David Feuer > Cc: Simon Peyton Jones >; ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 16:18, David Feuer > wrote: I forgot to CC ghc-devs the first time, so here's another copy. I was working on #11760 this weekend, which has to do with concurrency breaking lazy ST. I came up with what I thought was a pretty decent solution ( https://phabricator.haskell.org/D3038 ). Simon Peyton Jones, however, is quite unhappy about the idea of sticking this weird unsafePerformIO-like code (noDup, which I originally implemented as (unsafePerformIO . evaluate), but which he finds ugly regardless of the details) into fmap and (>>=). He's also concerned that the noDuplicate# applications will kill performance in the multi-threaded case, and suggests he would rather leave lazy ST broken, or even remove it altogether, than use a fix that will make it slow sometimes, particularly since there haven't been a lot of reports of problems in the wild. In a nutshell, I think we have to fix this despite the cost - the implementation is incorrect and unsafe. Unfortunately the mechanisms we have right now to fix it aren't ideal - noDuplicate# is a bigger hammer than we need. 
All we really need is some way to make a thunk atomic, it would require some special entry code to the thunk which did atomic eager-blackholing. Hmm, now that I think about it, perhaps we could just have a flag, -fatomic-eager-blackholing. We already do this for CAFs, incidentally. The idea is to compare-and-swap the blackhole info pointer into the thunk, and if we didn't win the race, just re-enter the thunk (which is now a blackhole). We already have the cmpxchg MachOp, so It shouldn't be more than a few lines in the code generator to implement it. It would be too expensive to do by default, but doing it just for Control.Monad.ST.Lazy should be ok and would fix the unsafety. (I haven't really thought this through, just an idea off the top of my head, so there could well be something I'm overlooking here...) Cheers Simon My view is that leaving it broken, even if it only causes trouble occasionally, is simply not an option. If users can't rely on it to always give correct answers, then it's effectively useless. And for the sake of backwards compatibility, I think it's a lot better to keep it around, even if it runs slowly multithreaded, than to remove it altogether. Note to Simon PJ: Yes, it's ugly to stick that noDup in there. But lazy ST has always been a bit of deep magic. You can't *really* carry a moment of time around in your pocket and make its history happen only if necessary. We can make it work in GHC because its execution model is entirely based around graph reduction, so evaluation is capable of driving execution. Whereas lazy IO is extremely tricky because it causes effects observable in the real world, lazy ST is only *moderately* tricky, causing effects that we have to make sure don't lead to weird interactions between threads. I don't think it's terribly surprising that it needs to do a few more weird things to work properly. David -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david at well-typed.com Tue Jan 31 17:06:07 2017 From: david at well-typed.com (David Feuer) Date: Tue, 31 Jan 2017 12:06:07 -0500 Subject: Lazy ST vs concurrency Message-ID: <1548008.FWhzW4XKsu@squirrel> I think Ben's eagerlyBlackhole is what I called noDup. And it is indeed a bit tricky to use correctly. It needs the same sort of care around inlining that unsafePerformIO does. A bit of summary: 1. It's okay to duplicate a runST thunk or to enter it twice, because each copy will have its own set of references and arrays. This is important, because we have absolutely no control over what the user will do with such a thunk. 2. With the exception of a runST thunk, we must never duplicate or double-enter a thunk if it performs or suspends ST work. Based on my tests and the intuition I've developed about what's going on here, (2) breaks down into two pieces: 2a. Any time we perform or suspend ST work, we must use NOINLINE to avoid duplication. 2b. Any time we suspend ST work, we must set up the thunk involved with noDuplicate# or similar. For example, the code I wrote yesterday for the Applicative instance looks like this: fm <*> xm = ST $ \ s -> let {-# NOINLINE res1 #-} !res1 = unST fm s !(f, s') = res1 {-# NOINLINE res2 #-} res2 = noDup (unST xm s') (x, s'') = res2 in (f x, s'') I NOINLINE res1. If it were to inline (could that happen?), we'd get let res2 = noDup (unST xm (snd (unST fm s))) (x, s'') = res2 in (fst (unST fm s) x, s') and that would run the fm computation twice. But I don't noDup res1, because we force it immediately on creation; no one else ever handles it. I NOINLINE res2 for a similar reason, but I also use noDup on it. The res2 thunk escapes into the wild via x and s'' in the result; we need to make sure that it is not entered twice. I believe can use a few rewrite rules to reduce costs substantially in some situations. I will add those to the differential. 
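For reference, the `noDup` David describes — originally `(unsafePerformIO . evaluate)` — amounts to roughly the following (a sketch reconstructed from his description; the actual code in D3038 may differ):

```haskell
import Control.Exception (evaluate)
import System.IO.Unsafe (unsafePerformIO)

-- Force the argument inside IO; the noDuplicate# performed by
-- unsafePerformIO then stops another thread from evaluating the same
-- thunk concurrently. NOINLINE keeps the simplifier from exposing the
-- protected thunk to duplication.
noDup :: a -> a
noDup a = unsafePerformIO (evaluate a)
{-# NOINLINE noDup #-}
```

The `NOINLINE` mirrors the care David describes for `res1`/`res2`: if `noDup` were inlined and the wrapped thunk duplicated at compile time, the run-time `noDuplicate#` could no longer help.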
-------- Original message -------- From: Simon Marlow Date: 1/31/17 3:59 AM (GMT-05:00) To: Simon Peyton Jones Cc: David Feuer , ghc-devs at haskell.org Subject: Re: Lazy ST vs concurrency On 30 January 2017 at 22:56, Simon Peyton Jones wrote: We don’t want to do this on a per-module basis do we, as -fatomic-eager-blackholing would suggest. Rather, on per-thunk basis, no? Which thunks, precisely? I think perhaps *precisely thunks one of whose free variables has type (State# s) for some s.* These are thunks that consume a state token, and must do so no more than once. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Jan 31 18:02:25 2017 From: ben at well-typed.com (Ben Gamari) Date: Tue, 31 Jan 2017 13:02:25 -0500 Subject: Freeze update Message-ID: <87a8a7kszy.fsf@ben-laptop.smart-cactus.org> Hello everyone, As you likely know, the feature freeze date for 8.2.1 was yesterday. Unfortunately, it looks like that date may have been a bit optimistic as there are still several major patches outstanding that we would like to have for 8.2.1. These are namely, * Join points rework (D2853) * Overloaded Record Fields (D2708) * Separation of the Constraint and Type types within GHC (D3023) * Generalization of the kind of (->) (D2038) * Type-indexed Typeable (D2010) * Use top-level instance to solve superclasses where possible (D2714) * A variety of DWARF patches While many of these patches are very close to completion, in sum they represent a significant amount of work. While I would love to be able to offer a concrete timeline for how we will proceed, at the moment I can only say that things are still rather up in the air. In particular, there are some thorny, late-rising design questions in D2038 and D3023 which are currently under discussion. If there are no obviously easy paths forward on these questions then we may be forced to retarget one or more of these patches to 8.4.
I believe within the next few days we'll have a better idea of where we
stand on these.

Finally, note that unless you have spoken to me previously about merging
your work for 8.2 we will not be considering any additional features at
this point.

Cheers,

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From simonpj at microsoft.com  Tue Jan 31 22:41:57 2017
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 31 Jan 2017 22:41:57 +0000
Subject: D2038: [WIP] TysPrim: Generalize kind of (->)
In-Reply-To: <20170131125046.126142.13535.3FCE5C50@phabricator.haskell.org>
References: <20170131125046.126142.13535.3FCE5C50@phabricator.haskell.org>
Message-ID: 

Replying by email because I'm on a train.

Simon

Huh. Put otherwise, your point is this. Suppose we have the following kind
for `(->)`:

    (->) :: forall v1 v2 r1 r2. TYPE v1 r1 -> TYPE v2 r2 -> Type

To coerce from (C a -> Int) to (a -> Int) we'd have to cough up a coercion
`g`:

    g : (->) Vanilla Vanilla Ptr Ptr (C a) Int
     ~R (->) Constraint Vanilla Ptr Ptr a Int

And now (Nth 1 g :: Vanilla ~R Constraint). Nothing about `KindCo` there;
it's just that `(->)` takes some kind arguments.

But that can only happen if `(->)` has suitable roles. What if it doesn't?

What if we just had an axiom

    axArrow v :: (->) Vanilla v ~R (->) Constraint v

or something like that. Then we get

    [W] g : (->) Vanilla Vanilla Ptr Ptr (C a) Int
         ~R (->) Constraint Vanilla Ptr Ptr a Int

We decompose partly and solve thus

    g = (axArrow Vanilla) axC

Simon

From: noreply at phabricator.haskell.org
[mailto:noreply at phabricator.haskell.org]
Sent: 31 January 2017 12:51
To: Simon Peyton Jones
Subject: [Differential] [Commented On] D2038: [WIP] TysPrim: Generalize
kind of (->)

goldfire added a comment.
View Revision

In D2038#89360, @simonpj wrote:

    To avoid being able to extract ConstraintRep ~R LiftedPtrRep we decided
    to weaken one of the coercion constructors, the one that gets a kind
    coercion from a type coercion. We don't need it, and it's awkward here.

The problem is that we need it with this patch. I was able to weaken this
coercion constructor (KindCo) in my patch D3023, but this patch uses it in
a fundamental way that we can't get around. To wit:

    class C a where
      meth :: a

    axC :: (C a :: Constraint) ~R (a :: Type)

Now, we wish to cast C a -> a to a -> a. This cast will look like
(->) ?? axC. What goes in the ??? It's got to be something involving
KindCo axC, which is disallowed as per our earlier decision. Therein lies
the problem.

As for reify: Yes, I'm agreed with that email. But is that implemented yet?
Is a design settled on? I don't see a ghc-proposal. Are we willing to take
a dependency on that work in order to get this done?

To be clear, my chief worry isn't that these problems cannot be solved by
any means -- I'm just worried about the timing of this all and our desire
to get 8.2 out the door.

REPOSITORY
  rGHC Glasgow Haskell Compiler

REVISION DETAIL
  https://phabricator.haskell.org/D2038

EMAIL PREFERENCES
  https://phabricator.haskell.org/settings/panel/emailpreferences/

To: bgamari, goldfire, austin
Cc: simonpj, RyanGlScott, thomie

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rae at cs.brynmawr.edu  Tue Jan 31 22:56:47 2017
From: rae at cs.brynmawr.edu (Richard Eisenberg)
Date: Tue, 31 Jan 2017 17:56:47 -0500
Subject: D2038: [WIP] TysPrim: Generalize kind of (->)
In-Reply-To: 
References: <20170131125046.126142.13535.3FCE5C50@phabricator.haskell.org>
Message-ID: <50EB9209-1A0F-4DA8-BA06-586DFD3C68DE@cs.brynmawr.edu>

> On Jan 31, 2017, at 5:41 PM, Simon Peyton Jones wrote:
>
> But that can only happen if `(->)` has suitable roles.
> What if it doesn’t?
The “correct” roles for (->) of the kind you gave are `nominal nominal
nominal nominal representational representational`. That is, the dependent
arguments are nominal, and the others are representational. This is because
all kind-level coercions are nominal.

You seem to be suggesting giving (->) different roles. I honestly don’t
know what that would mean -- normally, GHC prevents you from specifying a
weaker role than it would infer. It smells pretty foul to me, but I can’t
quite put my finger on what would go wrong at the moment.

> What if we just had an axiom
>
>     axArrow v :: (->) Vanilla v ~R (->) Constraint v

I think we’d also need one for results... but maybe not.

> or something like that. Then we get
>
>     [W] g : (->) Vanilla Vanilla Ptr Ptr (C a) Int
>          ~R (->) Constraint Vanilla Ptr Ptr a Int
>
> We decompose partly and solve thus
>
>     g = (axArrow Vanilla) axC

And this works only if we weaken (->)’s roles. This whole road just feels
like the wrong way, as soon as we started contemplating heterogeneous
axioms, which are ruled out in the literature, even when we have kind
equalities.

I think the Right Answer is to get rid of newtype-classes & fix reify, and
I’m worried that anything short of that will fail catastrophically at some
point. Otherwise, it’s patches on top of patches.

I don’t think there is disagreement here, but there is the question about
what to do for 8.2.... and unless we’re ready to roll out the new reify, I
think the best course of action is to delay the new Typeable and all this
Constraint v Type stuff until 8.4. (The new levity polymorphism stuff
already committed is hunky-dory.)

Richard

> Simon
>
> From: noreply at phabricator.haskell.org
> [mailto:noreply at phabricator.haskell.org]
> Sent: 31 January 2017 12:51
> To: Simon Peyton Jones
> Subject: [Differential] [Commented On] D2038: [WIP] TysPrim: Generalize
> kind of (->)
>
> goldfire added a comment.
> View Revision
>
> In D2038#89360, @simonpj wrote:
>
>     To avoid being able to extract ConstraintRep ~R LiftedPtrRep we
>     decided to weaken one of the coercion constructors, the one that
>     gets a kind coercion from a type coercion. We don't need it, and
>     it's awkward here.
>
> The problem is that we need it with this patch. I was able to weaken
> this coercion constructor (KindCo) in my patch D3023, but this patch
> uses it in a fundamental way that we can't get around. To wit:
>
>     class C a where
>       meth :: a
>
>     axC :: (C a :: Constraint) ~R (a :: Type)
>
> Now, we wish to cast C a -> a to a -> a. This cast will look like
> (->) ?? axC. What goes in the ??? It's got to be something involving
> KindCo axC, which is disallowed as per our earlier decision. Therein
> lies the problem.
>
> As for reify: Yes, I'm agreed with that email. But is that implemented
> yet? Is a design settled on? I don't see a ghc-proposal. Are we willing
> to take a dependency on that work in order to get this done?
>
> To be clear, my chief worry isn't that these problems cannot be solved
> by any means -- I'm just worried about the timing of this all and our
> desire to get 8.2 out the door.
>
> REPOSITORY
>   rGHC Glasgow Haskell Compiler
>
> REVISION DETAIL
>   https://phabricator.haskell.org/D2038
>
> EMAIL PREFERENCES
>   https://phabricator.haskell.org/settings/panel/emailpreferences/
>
> To: bgamari, goldfire, austin
> Cc: simonpj, RyanGlScott, thomie

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ben at smart-cactus.org  Tue Jan 31 23:40:16 2017
From: ben at smart-cactus.org (Ben Gamari)
Date: Tue, 31 Jan 2017 18:40:16 -0500
Subject: D2038: [WIP] TysPrim: Generalize kind of (->)
In-Reply-To: <50EB9209-1A0F-4DA8-BA06-586DFD3C68DE@cs.brynmawr.edu>
References: <20170131125046.126142.13535.3FCE5C50@phabricator.haskell.org>
 <50EB9209-1A0F-4DA8-BA06-586DFD3C68DE@cs.brynmawr.edu>
Message-ID: <87wpdakdcv.fsf@ben-laptop.smart-cactus.org>

Richard Eisenberg writes:

snip

> I think the Right Answer is to get rid of newtype-classes & fix reify,
> and I’m worried that anything short of that will fail catastrophically
> at some point. Otherwise, it’s patches on top of patches.
>
> I don’t think there is disagreement here, but there is the question
> about what to do for 8.2.... and unless we’re ready to roll out the new
> reify, I think the best course of action is to delay the new Typeable
> and all this Constraint v Type stuff until 8.4. (The new levity
> polymorphism stuff already committed is hunky-dory.)

I am going to let you and Simon decide on this. While I would certainly
like to get the Typeable stuff off my plate (it's not terribly easy to
rebase), I am also acutely aware of the pressure to keep the release cycle
moving. In particular, I would like to avoid another drawn-out release
like 8.0 if at all possible.

Of course, I would be happy to help with the implementation of whatever
plan we decide upon.

Cheers,

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL: 

From maurerl at cs.uoregon.edu  Tue Jan 31 23:48:25 2017
From: maurerl at cs.uoregon.edu (Luke Maurer)
Date: Tue, 31 Jan 2017 15:48:25 -0800
Subject: Join points revised
Message-ID: <2ce83529-5c80-ffe8-9b6f-76fad0070ef9@cs.uoregon.edu>

Revised version of the join points patch is up:
https://phabricator.haskell.org/D2853

Hoping to commit ASAP. Thanks all!
- Luke Maurer
  University of Oregon
  maurerl at cs.uoregon.edu