From matthewtpickering at gmail.com Tue Jan 2 10:52:22 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 2 Jan 2018 10:52:22 +0000 Subject: Nested CPR patch review In-Reply-To: References: Message-ID: I don't think anyone has run nofib on the rebased branch yet. The Akio2017 subpage is a more accurate summary. Sebastian has also been adding notes to explain the more intricate parts. Matt On Fri, Dec 22, 2017 at 5:27 PM, Simon Peyton Jones wrote: > Terrific! > > What are the nofib results? > > Can we have a couple of artificial benchmarks in cpranal/should_run that show substantial perf improvements because the nested CPR wins in some inner loop? > > Is https://ghc.haskell.org/trac/ghc/wiki/NestedCPR still an accurate summary of the idea? And the Akio2017 sub-page? It would be easier to review the code if the design documentation accurately described it. > > I'll look in the new year. Thanks! > > Simon > > | -----Original Message----- > | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] > | Sent: 22 December 2017 17:09 > | To: GHC developers ; Simon Peyton Jones > | ; Joachim Breitner ; > | tkn.akio at gmail.com; Sebastian Graf > | Subject: Nested CPR patch review > | > | Hi all, > | > | I recently resurrected akio's nested cpr branch and put it on phabricator > | for review. > | > | https://phabricator.haskell.org/D4244 > | > | Sebastian has kindly been going over it and ironed out a few kinks in the > | last few days. He says now that he believes the patch is correct. > | > | Is there anything else which needs to be done before merging this patch? > | > | Simon, would you perhaps be able to give the patch a look over? > | > | Cheers, > | > | Matt From niteria at gmail.com Wed Jan 3 23:36:23 2018 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 4 Jan 2018 00:36:23 +0100 Subject: Can't push to staging area? Message-ID: I'm trying to update a diff and I run into this: $ arc diff Linting... LINT OKAY No lint problems. 
Running unit tests... No unit test engine is configured for this project. PUSH STAGING Pushing changes to staging area... sudo: a password is required fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. STAGING FAILED Unable to push changes to the staging area. Usage Exception: Failed to push changes to staging area. Correct the issue, or use --skip-staging to skip this step. I believe that it worked for me before with my setup, and I seem to be in compliance with https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Startingoff:Fixingabugsubmittingareview Any ideas? Thanks, Bartosz From allbery.b at gmail.com Thu Jan 4 01:05:19 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 3 Jan 2018 20:05:19 -0500 Subject: Can't push to staging area? In-Reply-To: References: Message-ID: "sudo: a password is required" Can't even tell if that's local (beware e.g. Ubuntu defaults) or remote (which would be a configuration problem on the remote end, not to mention seeming like a bad idea). On Wed, Jan 3, 2018 at 6:36 PM, Bartosz Nitka wrote: > I'm trying to update a diff and I run into this: > > $ arc diff > Linting... > LINT OKAY No lint problems. > Running unit tests... > No unit test engine is configured for this project. > PUSH STAGING Pushing changes to staging area... > sudo: a password is required > fatal: Could not read from remote repository. > > Please make sure you have the correct access rights > and the repository exists. > STAGING FAILED Unable to push changes to the staging area. > Usage Exception: Failed to push changes to staging area. Correct the > issue, or use --skip-staging to skip this step. > > > I believe that it worked for me before with my setup, and I seem to be > in compliance with > https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Startingoff: > Fixingabugsubmittingareview > > Any ideas? 
> > Thanks, > Bartosz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Jan 4 02:22:15 2018 From: lonetiger at gmail.com (Phyx) Date: Thu, 04 Jan 2018 02:22:15 +0000 Subject: Can't push to staging area? In-Reply-To: References: Message-ID: This is a local git configuration issue. Your pack scripts (git-upload-pack etc) are on a path that requires sudo or your sudoers configuration is wrong. See https://stackoverflow.com/questions/24059597/phabricator-git-ssh-clone-fails-with-password-required-error On Thu, Jan 4, 2018, 01:06 Brandon Allbery wrote: > "sudo: a password is required" > > Can't even tell if that's local (beware e.g. Ubuntu defaults) or remote > (which would be a configuration problem on the remote end, not to mention > seeming like a bad idea). > > On Wed, Jan 3, 2018 at 6:36 PM, Bartosz Nitka wrote: > >> I'm trying to update a diff and I run into this: >> >> $ arc diff >> Linting... >> LINT OKAY No lint problems. >> Running unit tests... >> No unit test engine is configured for this project. >> PUSH STAGING Pushing changes to staging area... >> sudo: a password is required >> fatal: Could not read from remote repository. >> >> Please make sure you have the correct access rights >> and the repository exists. >> STAGING FAILED Unable to push changes to the staging area. >> Usage Exception: Failed to push changes to staging area. Correct the >> issue, or use --skip-staging to skip this step. 
>> >> >> I believe that it worked for me before with my setup, and I seem to be >> in compliance with >> >> https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Startingoff:Fixingabugsubmittingareview >> >> Any ideas? >> >> Thanks, >> Bartosz >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From niteria at gmail.com Thu Jan 4 03:32:24 2018 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 4 Jan 2018 04:32:24 +0100 Subject: Can't push to staging area? In-Reply-To: References: Message-ID: Thanks for the pointers, I dug a bit deeper and I've also found https://secure.phabricator.com/book/phabricator/article/diffusion_hosting/#troubleshooting-ssh. That prompted me to try: $ ssh -v -T -p 2222 git at phabricator-origin.haskell.org git-receive-pack /diffusion/GHCDIFF/ The result: Authenticated to phabricator-origin.haskell.org ([23.253.149.35]:2222). debug1: channel 0: new [client-session] debug1: Requesting no-more-sessions at openssh.com debug1: Entering interactive session. debug1: pledge: network debug1: Remote: Forced command. debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Remote: Forced command. debug1: Remote: Port forwarding disabled. debug1: Remote: X11 forwarding disabled. debug1: Remote: Agent forwarding disabled. debug1: Remote: Pty allocation disabled. debug1: Sending environment. 
debug1: Sending env LC_MEASUREMENT = pl_PL.UTF-8 debug1: Sending env LC_PAPER = pl_PL.UTF-8 debug1: Sending env LC_MONETARY = pl_PL.UTF-8 debug1: Sending env LANG = en_GB.UTF-8 debug1: Sending env LC_NAME = pl_PL.UTF-8 debug1: Sending env LC_ADDRESS = pl_PL.UTF-8 debug1: Sending env LC_NUMERIC = pl_PL.UTF-8 debug1: Sending env LC_TELEPHONE = pl_PL.UTF-8 debug1: Sending env LC_IDENTIFICATION = pl_PL.UTF-8 debug1: Sending env LC_TIME = pl_PL.UTF-8 debug1: Sending command: git-receive-pack /diffusion/GHCDIFF/ sudo: a password is required debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 debug1: client_input_channel_req: channel 0 rtype eow at openssh.com reply 0 debug1: channel 0: free: client-session, nchannels 1 Transferred: sent 3292, received 3000 bytes, in 0.6 seconds Bytes per second: sent 5170.1, received 4711.6 debug1: Exit status 1 I believe this means that it's failing remotely, not locally. git-receive-pack runs normally locally. 2018-01-04 3:22 GMT+01:00 Phyx : > This is a local git configuration issue. Your pack scripts (git-upload-pack > etc) are on a path that requires sudo or your sudoers configuration is > wrong. See > https://stackoverflow.com/questions/24059597/phabricator-git-ssh-clone-fails-with-password-required-error > > > On Thu, Jan 4, 2018, 01:06 Brandon Allbery wrote: >> >> "sudo: a password is required" >> >> Can't even tell if that's local (beware e.g. Ubuntu defaults) or remote >> (which would be a configuration problem on the remote end, not to mention >> seeming like a bad idea). >> >> On Wed, Jan 3, 2018 at 6:36 PM, Bartosz Nitka wrote: >>> >>> I'm trying to update a diff and I run into this: >>> >>> $ arc diff >>> Linting... >>> LINT OKAY No lint problems. >>> Running unit tests... >>> No unit test engine is configured for this project. >>> PUSH STAGING Pushing changes to staging area... >>> sudo: a password is required >>> fatal: Could not read from remote repository. 
>>> >>> Please make sure you have the correct access rights >>> and the repository exists. >>> STAGING FAILED Unable to push changes to the staging area. >>> Usage Exception: Failed to push changes to staging area. Correct the >>> issue, or use --skip-staging to skip this step. >>> >>> >>> I believe that it worked for me before with my setup, and I seem to be >>> in compliance with >>> >>> https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Startingoff:Fixingabugsubmittingareview >>> >>> Any ideas? >>> >>> Thanks, >>> Bartosz >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> >> >> -- >> brandon s allbery kf8nh sine nomine >> associates >> allbery.b at gmail.com >> ballbery at sinenomine.net >> unix, openafs, kerberos, infrastructure, xmonad >> http://sinenomine.net >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From demiobenour at gmail.com Thu Jan 4 14:07:33 2018 From: demiobenour at gmail.com (Demi Obenour) Date: Thu, 4 Jan 2018 09:07:33 -0500 Subject: Spectre mitigation In-Reply-To: References: Message-ID: The recent “Spectre” bug requires that speculative execution of indirect branches be disabled. For GHC, this will require passing a flag to LLVM and fixing the NCG to emit suitable calling sequences. This will be a disaster for the STG execution model, because it disables CPU branch prediction for indirect calls and jumps. This is a big argument in favor of doing a CPS→SSA conversion in the backend. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Thu Jan 4 14:36:07 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 04 Jan 2018 14:36:07 +0000 Subject: Spectre mitigation In-Reply-To: References: Message-ID: The only impacted code is the code which should already be engineered to be side channel resistant... which already needs to be written in a way that has constant control flow and memory lookup. This is just a new and very powerful side channel attack. It would be interesting and possibly useful to explore facilities that enable marked pieces of code to be compiled in ways that improve side channel resistance. But there are so many different approaches that it’d be difficult to protect against all of them at once for general programs. I could be totally wrong, and I should read the spectre paper :) I guess I just mean that vulnerable data should be hardened, but only when the cost makes sense. Every security issue has some finite cost. The sum of those security events’ costs must be weighed against the sum of the costs of preventing them. On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour wrote: > The recent “Spectre” bug requires that speculative execution of indirect > branches be disabled. For GHC, this will require passing a flag to LLVM > and fixing the NCG to emit suitable calling sequences. > > This will be a disaster for the STG execution model, because it disables > CPU branch prediction for indirect calls and jumps. This is a big argument > in favor of doing a CPS→SSA conversion in the backend. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Thu Jan 4 14:45:55 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 04 Jan 2018 14:45:55 +0000 Subject: Spectre mitigation References: Message-ID: https://spectreattack.com/spectre.pdf has a full exposition for the curious. It does seem that compiler level mitigations are one of several proposed approaches, but the performance cost would be great!!! That said, adding a “hardened” mode flag would be a viable approach if someone wanted to explore that. Would be a lot of work I think. Plus I suspect GHC's current architecture wouldn’t be the best starting point rts and compilation model wise ?? On Thu, Jan 4, 2018 at 9:36 AM Carter Schonwald wrote: > The only impacted code is the code which should already be engineered to > be side channel resistant... which already need to be written in a way > that has constant control flow and memory lookup. > > This is just a new and very powerful side channel attack. It would be > interesting and possibly useful to explore fascilities that enable marked > pieces of code to be compiled in ways that improve side channel > resistance. But there’s so many different approaches that it’d be > difficult to protect against all of them at once for general programs. > > I could be totally wrong, and I should read the spectre paper :) > > I guess I just mean that vulnerable Data should be hardened, but only when > the cost makes sense. Every security issue has some finite cost. The sum > of those security events cost must be weighed against the sum of the costs > of preventing them > > On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour wrote: > >> The recent “Spectre” bug requires that speculative execution of indirect >> branches be disabled. For GHC, this will require passing a flag to LLVM >> and fixing the NCG to emit suitable calling sequences. >> >> This will be a disaster for the STG execution model, because it disables >> CPU branch prediction for indirect calls and jumps. 
This is a big argument >> in favor of doing a CPS→SSA conversion in the backend. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjakway at nyu.edu Thu Jan 4 16:51:33 2018 From: tjakway at nyu.edu (Thomas Jakway) Date: Thu, 4 Jan 2018 08:51:33 -0800 Subject: Spectre mitigation In-Reply-To: References: Message-ID: I'm gonna start reading through the spectre paper in a few minutes but... is this really the death knell for speculative execution on x86/64...? If so, GHC getting patched is going to be pretty low on everyone's list of priorities. On Jan 4, 2018 6:36 AM, "Carter Schonwald" wrote: > The only impacted code is the code which should already be engineered to > be side channel resistant... which already need to be written in a way > that has constant control flow and memory lookup. > > This is just a new and very powerful side channel attack. It would be > interesting and possibly useful to explore fascilities that enable marked > pieces of code to be compiled in ways that improve side channel > resistance. But there’s so many different approaches that it’d be > difficult to protect against all of them at once for general programs. > > I could be totally wrong, and I should read the spectre paper :) > > I guess I just mean that vulnerable Data should be hardened, but only when > the cost makes sense. Every security issue has some finite cost. The sum > of those security events cost must be weighed against the sum of the costs > of preventing them > > On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour wrote: > >> The recent “Spectre” bug requires that speculative execution of indirect >> branches be disabled. For GHC, this will require passing a flag to LLVM >> and fixing the NCG to emit suitable calling sequences. 
>> >> This will be a disaster for the STG execution model, because it disables >> CPU branch prediction for indirect calls and jumps. This is a big argument >> in favor of doing a CPS→SSA conversion in the backend. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Jan 4 18:41:26 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 04 Jan 2018 13:41:26 -0500 Subject: Can't push to staging area? In-Reply-To: References: Message-ID: <87incha2lr.fsf@ben-laptop.smart-cactus.org> Bartosz Nitka writes: > Thanks for the pointers, I dug a bit deeper and I've also found > https://secure.phabricator.com/book/phabricator/article/diffusion_hosting/#troubleshooting-ssh. > > That prompted me to try: > > $ ssh -v -T -p 2222 git at phabricator-origin.haskell.org > git-receive-pack /diffusion/GHCDIFF/ > > The result: > Authenticated to phabricator-origin.haskell.org ([23.253.149.35]:2222). > debug1: channel 0: new [client-session] > debug1: Requesting no-more-sessions at openssh.com > debug1: Entering interactive session. > debug1: pledge: network > debug1: Remote: Forced command. > debug1: Remote: Port forwarding disabled. > debug1: Remote: X11 forwarding disabled. > debug1: Remote: Agent forwarding disabled. > debug1: Remote: Pty allocation disabled. > debug1: Remote: Forced command. > debug1: Remote: Port forwarding disabled. > debug1: Remote: X11 forwarding disabled. > debug1: Remote: Agent forwarding disabled. > debug1: Remote: Pty allocation disabled. > debug1: Sending environment. 
> debug1: Sending env LC_MEASUREMENT = pl_PL.UTF-8 > debug1: Sending env LC_PAPER = pl_PL.UTF-8 > debug1: Sending env LC_MONETARY = pl_PL.UTF-8 > debug1: Sending env LANG = en_GB.UTF-8 > debug1: Sending env LC_NAME = pl_PL.UTF-8 > debug1: Sending env LC_ADDRESS = pl_PL.UTF-8 > debug1: Sending env LC_NUMERIC = pl_PL.UTF-8 > debug1: Sending env LC_TELEPHONE = pl_PL.UTF-8 > debug1: Sending env LC_IDENTIFICATION = pl_PL.UTF-8 > debug1: Sending env LC_TIME = pl_PL.UTF-8 > debug1: Sending command: git-receive-pack /diffusion/GHCDIFF/ > sudo: a password is required > debug1: client_input_channel_req: channel 0 rtype exit-status reply 0 > debug1: client_input_channel_req: channel 0 rtype eow at openssh.com reply 0 > debug1: channel 0: free: client-session, nchannels 1 > Transferred: sent 3292, received 3000 bytes, in 0.6 seconds > Bytes per second: sent 5170.1, received 4711.6 > debug1: Exit status 1 > Hmm, yes indeed this looks remote. I'll investigate. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From carter.schonwald at gmail.com Thu Jan 4 18:55:09 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 04 Jan 2018 18:55:09 +0000 Subject: Spectre mitigation In-Reply-To: References: Message-ID: With the caveat of that I maybe have no clue what I’m talking about ;) : It’s a pretty epic attack/ side channel, but it still requires code execution. The kernel side channel more of an issue for vm providers And the spectre one probably will most heavily impact security conscious organizations that might be considering using tools like moby/ docker / Linux containers / kubernetes / mesos/ etc which depend on OS level process isolation etc for security. My fuzzy understanding is that one fix would be hardware support for per process isolation of memory even in the context users / processes ... which isn’t in any kit afaik. 
I do like my code not being slow. So it’s a dilemma :/ On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: > I'm gonna start reading through the spectre paper in a few minutes but... > is this really the death knell for speculative execution on x86/64...? If > so, GHC getting patched is going to be pretty low on everyone's list of > priorities. > > On Jan 4, 2018 6:36 AM, "Carter Schonwald" > wrote: > >> The only impacted code is the code which should already be engineered to >> be side channel resistant... which already need to be written in a way >> that has constant control flow and memory lookup. >> >> This is just a new and very powerful side channel attack. It would be >> interesting and possibly useful to explore fascilities that enable marked >> pieces of code to be compiled in ways that improve side channel >> resistance. But there’s so many different approaches that it’d be >> difficult to protect against all of them at once for general programs. >> >> I could be totally wrong, and I should read the spectre paper :) >> >> I guess I just mean that vulnerable Data should be hardened, but only >> when the cost makes sense. Every security issue has some finite cost. The >> sum of those security events cost must be weighed against the sum of the >> costs of preventing them >> >> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >> wrote: >> >>> The recent “Spectre” bug requires that speculative execution of indirect >>> branches be disabled. For GHC, this will require passing a flag to LLVM >>> and fixing the NCG to emit suitable calling sequences. >>> >>> This will be a disaster for the STG execution model, because it disables >>> CPU branch prediction for indirect calls and jumps. This is a big argument >>> in favor of doing a CPS→SSA conversion in the backend. 
>>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From eacameron at gmail.com Thu Jan 4 21:24:41 2018 From: eacameron at gmail.com (Elliot Cameron) Date: Thu, 4 Jan 2018 16:24:41 -0500 Subject: Spectre mitigation In-Reply-To: References: Message-ID: This may be relevant: https://support.google.com/faqs/answer/7625886 Note that both GCC and LLVM will be learning this Ratpoline technique. On Thu, Jan 4, 2018 at 1:55 PM, Carter Schonwald wrote: > With the caveat of that I maybe have no clue what I’m talking about ;) : > > It’s a pretty epic attack/ side channel, but it still requires code > execution. > > The kernel side channel more of an issue for vm providers > > And the spectre one probably will most heavily impact security conscious > organizations that might be considering using tools like moby/ docker / > Linux containers / kubernetes / mesos/ etc which depend on OS level process > isolation etc for security. > > My fuzzy understanding is that one fix would be hardware support for per > process isolation of memory even in the context users / processes ... which > isn’t in any kit afaik. > > I do like my code not being slow. So it’s a dilemma :/ > > On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: > >> I'm gonna start reading through the spectre paper in a few minutes but... >> is this really the death knell for speculative execution on x86/64...? If >> so, GHC getting patched is going to be pretty low on everyone's list of >> priorities. 
>> >> On Jan 4, 2018 6:36 AM, "Carter Schonwald" >> wrote: >> >>> The only impacted code is the code which should already be engineered to >>> be side channel resistant... which already need to be written in a way >>> that has constant control flow and memory lookup. >>> >>> This is just a new and very powerful side channel attack. It would be >>> interesting and possibly useful to explore fascilities that enable marked >>> pieces of code to be compiled in ways that improve side channel >>> resistance. But there’s so many different approaches that it’d be >>> difficult to protect against all of them at once for general programs. >>> >>> I could be totally wrong, and I should read the spectre paper :) >>> >>> I guess I just mean that vulnerable Data should be hardened, but only >>> when the cost makes sense. Every security issue has some finite cost. The >>> sum of those security events cost must be weighed against the sum of the >>> costs of preventing them >>> >>> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >>> wrote: >>> >>>> The recent “Spectre” bug requires that speculative execution of >>>> indirect branches be disabled. For GHC, this will require passing a flag >>>> to LLVM and fixing the NCG to emit suitable calling sequences. >>>> >>>> This will be a disaster for the STG execution model, because it >>>> disables CPU branch prediction for indirect calls and jumps. This is a big >>>> argument in favor of doing a CPS→SSA conversion in the backend. 
>>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 4 23:06:56 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 04 Jan 2018 23:06:56 +0000 Subject: Spectre mitigation In-Reply-To: References: Message-ID: Indeed. It’s worth noting that the discussed cases where you can recover the perf benefits of branch / jump prediction only work in the context of a first order and or whole program compilation approach. The ghc rts and design is not compatible with those approaches today. I suspect you could get them to work in a whole program optimizing compiler like MLTON, or a hypothetical compiler for Haskell that has a different rts rep On Thu, Jan 4, 2018 at 4:25 PM Elliot Cameron wrote: > This may be relevant: https://support.google.com/faqs/answer/7625886 > > Note that both GCC and LLVM will be learning this Ratpoline technique. > > On Thu, Jan 4, 2018 at 1:55 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> With the caveat of that I maybe have no clue what I’m talking about ;) : >> >> It’s a pretty epic attack/ side channel, but it still requires code >> execution. 
>> >> The kernel side channel more of an issue for vm providers >> >> And the spectre one probably will most heavily impact security conscious >> organizations that might be considering using tools like moby/ docker / >> Linux containers / kubernetes / mesos/ etc which depend on OS level process >> isolation etc for security. >> >> My fuzzy understanding is that one fix would be hardware support for per >> process isolation of memory even in the context users / processes ... which >> isn’t in any kit afaik. >> >> I do like my code not being slow. So it’s a dilemma :/ >> >> On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: >> >>> I'm gonna start reading through the spectre paper in a few minutes >>> but... is this really the death knell for speculative execution on >>> x86/64...? If so, GHC getting patched is going to be pretty low on >>> everyone's list of priorities. >>> >>> On Jan 4, 2018 6:36 AM, "Carter Schonwald" >>> wrote: >>> >>>> The only impacted code is the code which should already be engineered >>>> to be side channel resistant... which already need to be written in a way >>>> that has constant control flow and memory lookup. >>>> >>>> This is just a new and very powerful side channel attack. It would be >>>> interesting and possibly useful to explore fascilities that enable marked >>>> pieces of code to be compiled in ways that improve side channel >>>> resistance. But there’s so many different approaches that it’d be >>>> difficult to protect against all of them at once for general programs. >>>> >>>> I could be totally wrong, and I should read the spectre paper :) >>>> >>>> I guess I just mean that vulnerable Data should be hardened, but only >>>> when the cost makes sense. Every security issue has some finite cost. 
The >>>> sum of those security events cost must be weighed against the sum of the >>>> costs of preventing them >>>> >>>> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >>>> wrote: >>>> >>>>> The recent “Spectre” bug requires that speculative execution of >>>>> indirect branches be disabled. For GHC, this will require passing a flag >>>>> to LLVM and fixing the NCG to emit suitable calling sequences. >>>>> >>>>> This will be a disaster for the STG execution model, because it >>>>> disables CPU branch prediction for indirect calls and jumps. This is a big >>>>> argument in favor of doing a CPS→SSA conversion in the backend. >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Fri Jan 5 13:47:49 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 5 Jan 2018 08:47:49 -0500 Subject: pattern signatures Message-ID: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> Hi devs, Is a pattern signature a) something you put after `pattern P ::` ? b) something you put after `::` in a pattern, as in `foo (Proxy :: Proxy a)` ? I've seen the term "pattern signature" apply to both, and I've been tripped up by this. Does anyone have terminology that unambiguously separates these two constructs that we can all adopt? Thanks! 
Richard From simonpj at microsoft.com Fri Jan 5 14:41:16 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 5 Jan 2018 14:41:16 +0000 Subject: pattern signatures In-Reply-To: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> Message-ID: Ah yes. I think we started with "pattern synonym signature" for (b) but have since degenerated to "pattern signature" which is quite confusing. User advice would be good! S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Richard Eisenberg | Sent: 05 January 2018 13:48 | To: GHC | Subject: pattern signatures | | Hi devs, | | Is a pattern signature | | a) something you put after `pattern P ::` ? | b) something you put after `::` in a pattern, as in `foo (Proxy :: | Proxy a)` ? | | I've seen the term "pattern signature" apply to both, and I've been | tripped up by this. Does anyone have terminology that unambiguously | separates these two constructs that we can all adopt? | | Thanks! | Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From niteria at gmail.com Fri Jan 5 17:10:40 2018 From: niteria at gmail.com (Bartosz Nitka) Date: Fri, 5 Jan 2018 18:10:40 +0100 Subject: Can't push to staging area? In-Reply-To: <87incha2lr.fsf@ben-laptop.smart-cactus.org> References: <87incha2lr.fsf@ben-laptop.smart-cactus.org> Message-ID: I just want to report that it works for me now. 
Thanks, Bartosz From allbery.b at gmail.com Fri Jan 5 18:42:10 2018 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 5 Jan 2018 13:42:10 -0500 Subject: pattern signatures In-Reply-To: References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> Message-ID: Further complicated by the fact that that form used to be called a "pattern signature" with accompanying extension, until that was folded into ScopedTypeVariables extension. On Fri, Jan 5, 2018 at 9:41 AM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Ah yes. I think we started with "pattern synonym signature" for (b) but > have since denenerated to "pattern signature" which is quite confusing. > > User advice would be good! > > S > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 05 January 2018 13:48 > | To: GHC > | Subject: pattern signatures > | > | Hi devs, > | > | Is a pattern signature > | > | a) something you put after `pattern P ::` ? > | b) something you put after `::` in a pattern, as in `foo (Proxy :: > | Proxy a)` ? > | > | I've seen the term "pattern signature" apply to both, and I've been > | tripped up by this. Does anyone have terminology that unambiguously > | separates these two constructs that we can all adopt? > | > | Thanks! 
> | Richard > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h > | askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7Cbc86346cc90f4a9516d108d5 > | 5442f5a0%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6365075689386605 > | 89&sdata=gvjnHyGAojz982UEV1u0hZPKH%2B%2F3UjiDlQm10%2BRZ7r8%3D&reserved > | =0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Fri Jan 5 20:50:49 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 05 Jan 2018 15:50:49 -0500 Subject: Can't push to staging area? In-Reply-To: References: <87incha2lr.fsf@ben-laptop.smart-cactus.org> Message-ID: <877esw9gij.fsf@ben-laptop.smart-cactus.org> Bartosz Nitka writes: > I just want to report that it works for me now. > Indeed; I should have mentioned that I fixed it yesterday. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Fri Jan 5 21:12:18 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 05 Jan 2018 22:12:18 +0100 Subject: pattern signatures In-Reply-To: References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> Message-ID: <1515186738.3425.27.camel@joachim-breitner.de> Hi, On Friday, 05.01.2018 at 13:42 -0500, Brandon Allbery wrote: > Further complicated by the fact that that form used to be called a > "pattern signature" with accompanying extension, until that was > folded into ScopedTypeVariables extension. which I find super confusing, because sometimes I want a signature on a pattern and it is counter-intuitive to me why I should no longer use the obviously named PatternSignatures extension but rather the at first glance unrelated ScopedTypeVariables extension. But I am derailing the discussion a bit. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From benno.fuenfstueck at gmail.com Fri Jan 5 21:49:03 2018 From: benno.fuenfstueck at gmail.com (Benno Fünfstück) Date: Fri, 05 Jan 2018 21:49:03 +0000 Subject: Spectre mitigation In-Reply-To: References: Message-ID: > The only impacted code is the code which should already be engineered to be side channel resistant... which already need to be written in a way that has constant control flow and memory lookup. As far as I understand, that's not really true.
If you have a process which has secrets that you do not want to leak to arbitrary other code running on the same CPU, then not only do you need to avoid indirect branches in your side-channel-resistant part (as is the case today), but the *rest* of the program also should not contain indirect branches (assuming the presence of gadgets which make memory leaking possible). So even if your crypto library uses no indirect branches and is side-channel resistant, that is no longer enough: if you link it into a program where other parts of the program have indirect branches, then an attacker can use those branches to potentially leak the crypto keys. So in general, you need to apply mitigations for this attack if you, at any time, store secrets in the process memory that you do not want to be leaked. (And this being a hardware bug, leaking means that they can, potentially, be leaked to arbitrary users. Privilege separation provided by the OS does not really matter here, so in theory it may be possible to leak them from JavaScript running in a browser sandbox, for example.) Carter Schonwald wrote on Fri, 5 Jan 2018 at 00:07: > Indeed. It’s worth noting that the discussed cases where you can recover > the perf benefits of branch / jump prediction only work in the context of a > first order and or whole program compilation approach. The ghc rts and > design is not compatible with those approaches today. > > I suspect you could get them to work in a whole program optimizing > compiler like MLTON, or a hypothetical compiler for Haskell that has a > different rts rep > > On Thu, Jan 4, 2018 at 4:25 PM Elliot Cameron wrote: > >> This may be relevant: https://support.google.com/faqs/answer/7625886 >> >> Note that both GCC and LLVM will be learning this retpoline technique.
>> >> On Thu, Jan 4, 2018 at 1:55 PM, Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> With the caveat of that I maybe have no clue what I’m talking about ;) : >>> >>> It’s a pretty epic attack/ side channel, but it still requires code >>> execution. >>> >>> The kernel side channel more of an issue for vm providers >>> >>> And the spectre one probably will most heavily impact security conscious >>> organizations that might be considering using tools like moby/ docker / >>> Linux containers / kubernetes / mesos/ etc which depend on OS level process >>> isolation etc for security. >>> >>> My fuzzy understanding is that one fix would be hardware support for >>> per process isolation of memory even in the context users / processes ... >>> which isn’t in any kit afaik. >>> >>> I do like my code not being slow. So it’s a dilemma :/ >>> >>> On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: >>> >>>> I'm gonna start reading through the spectre paper in a few minutes >>>> but... is this really the death knell for speculative execution on >>>> x86/64...? If so, GHC getting patched is going to be pretty low on >>>> everyone's list of priorities. >>>> >>>> On Jan 4, 2018 6:36 AM, "Carter Schonwald" >>>> wrote: >>>> >>>>> The only impacted code is the code which should already be engineered >>>>> to be side channel resistant... which already need to be written in a way >>>>> that has constant control flow and memory lookup. >>>>> >>>>> This is just a new and very powerful side channel attack. It would be >>>>> interesting and possibly useful to explore fascilities that enable marked >>>>> pieces of code to be compiled in ways that improve side channel >>>>> resistance. But there’s so many different approaches that it’d be >>>>> difficult to protect against all of them at once for general programs. 
>>>>> >>>>> I could be totally wrong, and I should read the spectre paper :) >>>>> >>>>> I guess I just mean that vulnerable Data should be hardened, but only >>>>> when the cost makes sense. Every security issue has some finite cost. The >>>>> sum of those security events cost must be weighed against the sum of the >>>>> costs of preventing them >>>>> >>>>> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >>>>> wrote: >>>>> >>>>>> The recent “Spectre” bug requires that speculative execution of >>>>>> indirect branches be disabled. For GHC, this will require passing a flag >>>>>> to LLVM and fixing the NCG to emit suitable calling sequences. >>>>>> >>>>>> This will be a disaster for the STG execution model, because it >>>>>> disables CPU branch prediction for indirect calls and jumps. This is a big >>>>>> argument in favor of doing a CPS→SSA conversion in the backend. >>>>>> _______________________________________________ >>>>>> ghc-devs mailing list >>>>>> ghc-devs at haskell.org >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>> >>>>> >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> >>>>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iavor.diatchki at gmail.com Fri Jan 5 22:23:40 2018 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Fri, 05 Jan 2018 22:23:40 +0000 Subject: pattern signatures In-Reply-To: <1515186738.3425.27.camel@joachim-breitner.de> References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> <1515186738.3425.27.camel@joachim-breitner.de> Message-ID: Well, as you say, "pattern signature" makes sense for both, so I would expect to use context to disambiguate. If I wanted to be explicit about which one I meant, I'd use: a) "Pattern synonym signature" b) "Signature on a pattern" -Iavor On Fri, Jan 5, 2018 at 1:12 PM Joachim Breitner wrote: > Hi, > > Am Freitag, den 05.01.2018, 13:42 -0500 schrieb Brandon Allbery: > > Further complicated by the fact that that form used to be called a > > "pattern signature" with accompanying extension, until that was > > folded into ScopedTypeVariables extension. > > which I find super confusing, because sometimes I want a signature on a > pattern and it is counter-intuitive to me why I should not longer use > the obviously named PatternSignatures extension but rather the at first > glance unrelated ScopedTypeVariable extension. > > But I am derailing the discussion a bit. > > Cheers, > Joachim > > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Jan 6 15:19:36 2018 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 06 Jan 2018 15:19:36 +0000 Subject: Spectre mitigation In-Reply-To: References: Message-ID: Perhaps you are correct. That said: the retpoline style mitigation can only recover performance of normal pipelining / branch prediction if you statically know the common jump targets. 
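[Editor's illustration: why indirect branches are so pervasive in GHC output. Any unknown (higher-order) call compiles to an indirect jump through the closure's code pointer; a minimal, hypothetical example of such a call, with a made-up function name:]

```haskell
-- `applyTwice` receives an unknown function: at the machine level the
-- two calls to `f` are indirect jumps, the kind of branch whose
-- prediction a retpoline-style mitigation would defeat. NOINLINE keeps
-- GHC from specializing it into a known call at use sites.
applyTwice :: (Int -> Int) -> Int -> Int
applyTwice f x = f (f x)
{-# NOINLINE applyTwice #-}
```

This is what the remarks about defunctionalization are getting at: a whole-program compiler can often turn such unknown calls into known, direct ones.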
Which brings you quickly into doing whole compilation strategies like type-directed defunctionalization. Either way: 1) the attacks require remote code execution; 2) the data exfiltration risk only matters if there’s both remote code execution and a communication channel to exfiltrate with. On consumer-facing desktops / laptops, the best immediate mitigation is to make sure you’re using Firefox 57.0.4 (already out) and/or Chrome >= 64 (due out later this month). JavaScript in browsers is a remote code execution environment by design! There is a very simple mitigation in the case of JavaScript: e.g. Firefox is reducing the resolution of its high-precision JS timer to 20 microseconds, which is afaict a tad too coarse for the applicable timing side channels. On the server end of things: don’t allow unauthorized code execution / remote code execution! The usual rules still apply: don’t allow code injections, buggy C parsers, or return-oriented buffer code injection hijinks. Security is about depth. This new class of attacks just means that remote code execution where the attacker knows how to interpret the memory layout of the target process is game over. I guess this attack does increase the value proposition of systems configuration tools that whitelist the collection of processes a system is expected to run. Will we see attacks that masquerade as systems benchmarking / microbenchmarking tools? Point being: yes, it’s a new, very powerful attack. But that does not mean separate compilation and good performance for higher-order programming languages is now disallowed. It just means there’s more science and engineering to be done! On Fri, Jan 5, 2018 at 4:49 PM Benno Fünfstück wrote: > > The only impacted code is the code which should already be engineered to > be side channel resistant... which already need to be written in a way > that has constant control flow and memory lookup. > > As far as I understand, that's not really true.
If you have a process, > which has secrets that you do not want to leak to arbitrary other code > running on the same CPU, then not only do you need to avoid indirect > branches in your side-channel resistent part (as is the case today) but the > *rest* of the program also should not contain indirect branches (assuming > the presence of gadgets which make memory leaking possible). So even if > your crypto library uses no indirect branches and is side-channel > resistant, that is no longer enough: if you link it into a program where > other parts of the program have indirect branches, then you can use those > branches to potentially leak the crypto keys. > > So in general, you need to apply mitigations for this attack if you, at > any time, store secrets in the process memory that you do not want to be > leaked (and being a hardware bug, leaking means that they can, potentially, > be leaked to arbitrary users. Privilege-separation provided by the OS does > not really matter here, so in theory it may be possible to leak it from > JavaScript running in a browser sandbox for example.). > > Carter Schonwald schrieb am Fr., 5. Jan. > 2018 um 00:07 Uhr: > >> Indeed. It’s worth noting that the discussed cases where you can recover >> the perf benefits of branch / jump prediction only work in the context of a >> first order and or whole program compilation approach. The ghc rts and >> design is not compatible with those approaches today. >> >> I suspect you could get them to work in a whole program optimizing >> compiler like MLTON, or a hypothetical compiler for Haskell that has a >> different rts rep >> >> On Thu, Jan 4, 2018 at 4:25 PM Elliot Cameron >> wrote: >> >>> This may be relevant: https://support.google.com/faqs/answer/7625886 >>> >>> Note that both GCC and LLVM will be learning this Ratpoline technique. 
>>> >>> On Thu, Jan 4, 2018 at 1:55 PM, Carter Schonwald < >>> carter.schonwald at gmail.com> wrote: >>> >>>> With the caveat of that I maybe have no clue what I’m talking about ;) >>>> : >>>> >>>> It’s a pretty epic attack/ side channel, but it still requires code >>>> execution. >>>> >>>> The kernel side channel more of an issue for vm providers >>>> >>>> And the spectre one probably will most heavily impact security >>>> conscious organizations that might be considering using tools like moby/ >>>> docker / Linux containers / kubernetes / mesos/ etc which depend on OS >>>> level process isolation etc for security. >>>> >>>> My fuzzy understanding is that one fix would be hardware support for >>>> per process isolation of memory even in the context users / processes ... >>>> which isn’t in any kit afaik. >>>> >>>> I do like my code not being slow. So it’s a dilemma :/ >>>> >>>> On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: >>>> >>>>> I'm gonna start reading through the spectre paper in a few minutes >>>>> but... is this really the death knell for speculative execution on >>>>> x86/64...? If so, GHC getting patched is going to be pretty low on >>>>> everyone's list of priorities. >>>>> >>>>> On Jan 4, 2018 6:36 AM, "Carter Schonwald" >>>>> wrote: >>>>> >>>>>> The only impacted code is the code which should already be engineered >>>>>> to be side channel resistant... which already need to be written in a way >>>>>> that has constant control flow and memory lookup. >>>>>> >>>>>> This is just a new and very powerful side channel attack. It would >>>>>> be interesting and possibly useful to explore fascilities that enable >>>>>> marked pieces of code to be compiled in ways that improve side channel >>>>>> resistance. But there’s so many different approaches that it’d be >>>>>> difficult to protect against all of them at once for general programs. 
>>>>>> >>>>>> I could be totally wrong, and I should read the spectre paper :) >>>>>> >>>>>> I guess I just mean that vulnerable Data should be hardened, but only >>>>>> when the cost makes sense. Every security issue has some finite cost. The >>>>>> sum of those security events cost must be weighed against the sum of the >>>>>> costs of preventing them >>>>>> >>>>>> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >>>>>> wrote: >>>>>> >>>>>>> The recent “Spectre” bug requires that speculative execution of >>>>>>> indirect branches be disabled. For GHC, this will require passing a flag >>>>>>> to LLVM and fixing the NCG to emit suitable calling sequences. >>>>>>> >>>>>>> This will be a disaster for the STG execution model, because it >>>>>>> disables CPU branch prediction for indirect calls and jumps. This is a big >>>>>>> argument in favor of doing a CPS→SSA conversion in the backend. >>>>>>> _______________________________________________ >>>>>>> ghc-devs mailing list >>>>>>> ghc-devs at haskell.org >>>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> ghc-devs mailing list >>>>>> ghc-devs at haskell.org >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>> >>>>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>>> >>> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From demiobenour at gmail.com Sat Jan 6 16:40:56 2018 From: demiobenour at gmail.com (Demi Obenour) Date: Sat, 6 Jan 2018 11:40:56 -0500 Subject: Spectre mitigation In-Reply-To: References: Message-ID: I think a Haskellton (whole program optimizing Haskell compiler) is a VERY good idea. If we could compile GHC with it and the resulting compiler supported Template Haskell, this could help reduce compile times of normal GHC. That said, I don't think it is as bad as I thought. The reason is that the OS kernel and hypervisor can stop this, by invalidating branch prediction information at context switches. This prevents one process from altering branches in another. On Jan 6, 2018 10:19 AM, "Carter Schonwald" wrote: > Perhaps you are correct. > > That said: the retpoline style mitigation can only recover performance of > normal pipelining / branch prediction if you statically know the common > jump targets. Which brings you quickly into doing whole compilation > strategies like type directed defunctionalization. > > Either way > > 1) the attacks require remote code execution. > > 2) the Data exfiltration risk only matters if there’s both remote code > execution and a communication channel to exfiltrate with. > > On consumer facing desktop / laptops, the best immediate mitigation is to > make sure you’re using Firefox 57.04 (already out )and or chrome >= 64 > (due out later this month ). JavaScript in browsers being a remote code > execution environment by design! There is a very simple mitigation in the > case of java script, eg firefox is reducing the resolution of its high > precision js timer to 20 microseconds. Which is afaict a tad too course > for the applicable timing side channels > > On server end of things: > Don’t allow unauthorized code executions / remote code executions ! The > usual don’t allow code injections or buggy c parsers or return oriented > buffer code injection hijinks still apply > > Security is about depth. 
This new class of attacks just means that remote > code execution where the attacker knows how to interpret the memory layout > of the target process is a game over. > > I guess this attack does increase the value proposition of systems > configuration tools that whitelist the collection of processes a system is > expected to run. > > Will we see attacks that masquerade as systems benchmarking/ > microbrnchmarking tools? > > Point being: yes it’s a new very powerful attack. But that does not mean > separate compilation and good performance for higher Order programming > languages is now disallowed. It just means there’s more science and > engineering to be done! > > > On Fri, Jan 5, 2018 at 4:49 PM Benno Fünfstück < > benno.fuenfstueck at gmail.com> wrote: > >> > The only impacted code is the code which should already be engineered >> to be side channel resistant... which already need to be written in a way >> that has constant control flow and memory lookup. >> >> As far as I understand, that's not really true. If you have a process, >> which has secrets that you do not want to leak to arbitrary other code >> running on the same CPU, then not only do you need to avoid indirect >> branches in your side-channel resistent part (as is the case today) but the >> *rest* of the program also should not contain indirect branches (assuming >> the presence of gadgets which make memory leaking possible). So even if >> your crypto library uses no indirect branches and is side-channel >> resistant, that is no longer enough: if you link it into a program where >> other parts of the program have indirect branches, then you can use those >> branches to potentially leak the crypto keys. >> >> So in general, you need to apply mitigations for this attack if you, at >> any time, store secrets in the process memory that you do not want to be >> leaked (and being a hardware bug, leaking means that they can, potentially, >> be leaked to arbitrary users. 
Privilege-separation provided by the OS does >> not really matter here, so in theory it may be possible to leak it from >> JavaScript running in a browser sandbox for example.). >> >> Carter Schonwald schrieb am Fr., 5. Jan. >> 2018 um 00:07 Uhr: >> >>> Indeed. It’s worth noting that the discussed cases where you can >>> recover the perf benefits of branch / jump prediction only work in the >>> context of a first order and or whole program compilation approach. The ghc >>> rts and design is not compatible with those approaches today. >>> >>> I suspect you could get them to work in a whole program optimizing >>> compiler like MLTON, or a hypothetical compiler for Haskell that has a >>> different rts rep >>> >>> On Thu, Jan 4, 2018 at 4:25 PM Elliot Cameron >>> wrote: >>> >>>> This may be relevant: https://support.google.com/faqs/answer/7625886 >>>> >>>> Note that both GCC and LLVM will be learning this Ratpoline technique. >>>> >>>> On Thu, Jan 4, 2018 at 1:55 PM, Carter Schonwald < >>>> carter.schonwald at gmail.com> wrote: >>>> >>>>> With the caveat of that I maybe have no clue what I’m talking about ;) >>>>> : >>>>> >>>>> It’s a pretty epic attack/ side channel, but it still requires code >>>>> execution. >>>>> >>>>> The kernel side channel more of an issue for vm providers >>>>> >>>>> And the spectre one probably will most heavily impact security >>>>> conscious organizations that might be considering using tools like moby/ >>>>> docker / Linux containers / kubernetes / mesos/ etc which depend on OS >>>>> level process isolation etc for security. >>>>> >>>>> My fuzzy understanding is that one fix would be hardware support for >>>>> per process isolation of memory even in the context users / processes ... >>>>> which isn’t in any kit afaik. >>>>> >>>>> I do like my code not being slow. So it’s a dilemma :/ >>>>> >>>>> On Thu, Jan 4, 2018 at 11:51 AM Thomas Jakway wrote: >>>>> >>>>>> I'm gonna start reading through the spectre paper in a few minutes >>>>>> but... 
is this really the death knell for speculative execution on >>>>>> x86/64...? If so, GHC getting patched is going to be pretty low on >>>>>> everyone's list of priorities. >>>>>> >>>>>> On Jan 4, 2018 6:36 AM, "Carter Schonwald" < >>>>>> carter.schonwald at gmail.com> wrote: >>>>>> >>>>>>> The only impacted code is the code which should already be >>>>>>> engineered to be side channel resistant... which already need to be >>>>>>> written in a way that has constant control flow and memory lookup. >>>>>>> >>>>>>> This is just a new and very powerful side channel attack. It would >>>>>>> be interesting and possibly useful to explore fascilities that enable >>>>>>> marked pieces of code to be compiled in ways that improve side channel >>>>>>> resistance. But there’s so many different approaches that it’d be >>>>>>> difficult to protect against all of them at once for general programs. >>>>>>> >>>>>>> I could be totally wrong, and I should read the spectre paper :) >>>>>>> >>>>>>> I guess I just mean that vulnerable Data should be hardened, but >>>>>>> only when the cost makes sense. Every security issue has some finite cost. >>>>>>> The sum of those security events cost must be weighed against the sum of >>>>>>> the costs of preventing them >>>>>>> >>>>>>> On Thu, Jan 4, 2018 at 9:08 AM Demi Obenour >>>>>>> wrote: >>>>>>> >>>>>>>> The recent “Spectre” bug requires that speculative execution of >>>>>>>> indirect branches be disabled. For GHC, this will require passing a flag >>>>>>>> to LLVM and fixing the NCG to emit suitable calling sequences. >>>>>>>> >>>>>>>> This will be a disaster for the STG execution model, because it >>>>>>>> disables CPU branch prediction for indirect calls and jumps. This is a big >>>>>>>> argument in favor of doing a CPS→SSA conversion in the backend. 
>>>>>>>> _______________________________________________ >>>>>>>> ghc-devs mailing list >>>>>>>> ghc-devs at haskell.org >>>>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> ghc-devs mailing list >>>>>>> ghc-devs at haskell.org >>>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>>>> >>>>>>> >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> >>>>> >>>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Sun Jan 7 00:01:50 2018 From: mail at nh2.me (Niklas Hambüchen) Date: Sun, 7 Jan 2018 01:01:50 +0100 Subject: Backing up and downloading Trac contents? Message-ID: <3d21df83-88f8-7c51-daca-a3a1ad33eee6@nh2.me> Working on something today, it came to my mind how much useful information is stored in Trac and how much time would get lost if it went down or its contents got corrupted or lost. With the source code there's no such issue: with git being a DVCS, everybody has the full history. But not so with Trac. Are there backups of Trac? Are there some publicly available ones that one could download and thus conveniently use offline? Thanks! From sgraf1337 at gmail.com Sun Jan 7 10:11:37 2018 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Sun, 7 Jan 2018 11:11:37 +0100 Subject: Nested CPR patch review In-Reply-To: References: Message-ID: I've since run NoFib. You can find the results here: https://phabricator.haskell.org/D4244#119697 I wonder if you feel that any more notes are needed?
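[Editor's illustration: a hedged sketch of the kind of cpranal/should_run micro-benchmark being discussed, with a made-up function name. The inner loop returns a *nested* constructed result: plain CPR can return the outer pair unboxed, while nested CPR could additionally unbox the inner pair, keeping the loop free of tuple allocation.]

```haskell
{-# LANGUAGE BangPatterns #-}

-- Hypothetical micro-benchmark worker. Strict accumulators plus a
-- nested-pair result make the inner pair a candidate for nested CPR.
sumAndSquares :: Int -> (Int, (Int, Int))
sumAndSquares n = go n 0 0
  where
    go :: Int -> Int -> Int -> (Int, (Int, Int))
    go 0 !s !sq = (s, (sq, s + sq))
    go i !s !sq = go (i - 1) (s + i) (sq + i * i)
```

Without nested CPR, each loop exit allocates the inner `(sq, s + sq)` pair even when the caller immediately scrutinizes it.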
The general idea of CPR remained the same, it's just the extension of the DmdResult lattice that needs some rationale as to why and when these new values are needed. I also wonder what the impact of "Slightly strengthen the strictness analysis" ( https://ghc.haskell.org/trac/ghc/wiki/NestedCPR/Akio2017#Changestothedemandanalyzer) would be if regarded in isolation. On Tue, Jan 2, 2018 at 11:52 AM, Matthew Pickering < matthewtpickering at gmail.com> wrote: > I don't think anyone has run nofib on the rebased branch yet. > > The Akio2017 subpage is a more accurate summary. Sebastian has also > been adding notes to explain the more intricate parts. > > Matt > > On Fri, Dec 22, 2017 at 5:27 PM, Simon Peyton Jones > wrote: > > Terrific! > > > > What are the nofib results? > > > > Can we have a couple of artificial benchmarks in cpranal/should_run that > show substantial perf improvements because the nested CPR wins in some > inner loop? > > > > Is https://ghc.haskell.org/trac/ghc/wiki/NestedCPR still an accurate > summary of the idea? And the Akio2017 sub-page? It would be easier to > review the code if the design documentation accurately described it. > > > > I'll look in the new year. Thanks! > > > > Simon > > > > | -----Original Message----- > > | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] > > | Sent: 22 December 2017 17:09 > > | To: GHC developers ; Simon Peyton Jones > > | ; Joachim Breitner ; > > | tkn.akio at gmail.com; Sebastian Graf > > | Subject: Nested CPR patch review > > | > > | Hi all, > > | > > | I recently resurrected akio's nested cpr branch and put it on > phabricator > > | for review. > > | > > | https://phabricator.haskell.org/D4244 > > | > > | Sebastian has kindly been going over it and ironed out a few kinks in > the > > | last few days. He says now that he believes the patch is correct. > > | > > | Is there anything else which needs to be done before merging this > patch? 
> > | > > | Simon, would you perhaps be able to give the patch a look over? > > | > > | Cheers, > > | > > | Matt > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Sun Jan 7 18:19:24 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 07 Jan 2018 13:19:24 -0500 Subject: Backing up and downloading Trac contents? In-Reply-To: <3d21df83-88f8-7c51-daca-a3a1ad33eee6@nh2.me> References: <3d21df83-88f8-7c51-daca-a3a1ad33eee6@nh2.me> Message-ID: <87373ha5w9.fsf@ben-laptop.smart-cactus.org> Niklas Hambüchen writes: > Working on something today, it came to my mind how much useful > information is stored in Trac and how much time would get lost if it > went down, corrupted or missing. > > With the source code there's no such issue as with git being a DVCS, > everybody has the full history. But not so with Trac. > > Are there backups of Trac? I periodically pull down database dumps for archival. I'm a bit hesitant to expose the entire dump but most tables can be revealed without any trouble. I'll try to put something up shortly. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From jweakly at pdx.edu Sun Jan 7 18:57:06 2018 From: jweakly at pdx.edu (Jared Weakly) Date: Sun, 7 Jan 2018 10:57:06 -0800 Subject: Backing up and downloading Trac contents? In-Reply-To: <87373ha5w9.fsf@ben-laptop.smart-cactus.org> References: <3d21df83-88f8-7c51-daca-a3a1ad33eee6@nh2.me> <87373ha5w9.fsf@ben-laptop.smart-cactus.org> Message-ID: Once we get our devops stuff a little closer to completion, it would be a great idea to have a nightly backup script running somewhere that can be accessed by members with certain permissions. Shouldn't be terribly hard to set that up, I think? 
Granular permissions would be the tricky bit but even if it's a whitelist, Haskell isn't quite big enough for that to be painful to manage yet. On Jan 7, 2018 10:19 AM, "Ben Gamari" wrote: Niklas Hambüchen writes: > Working on something today, it came to my mind how much useful > information is stored in Trac and how much time would get lost if it > went down, corrupted or missing. > > With the source code there's no such issue as with git being a DVCS, > everybody has the full history. But not so with Trac. > > Are there backups of Trac? I periodically pull down database dumps for archival. I'm a bit hesitant to expose the entire dump but most tables can be revealed without any trouble. I'll try to put something up shortly. Cheers, - Ben _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.spiwack at tweag.io Mon Jan 8 10:11:09 2018 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Mon, 8 Jan 2018 11:11:09 +0100 Subject: pattern signatures In-Reply-To: References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> <1515186738.3425.27.camel@joachim-breitner.de> Message-ID: In my eyes, signatures are something which goes with a definition. So (a) is a pattern (synonym) signature, while (b) is merely a type annotation on a pattern. On Fri, Jan 5, 2018 at 11:23 PM, Iavor Diatchki wrote: > Well, as you say, "pattern signature" makes sense for both, so I would > expect to use context to disambiguate. 
If I wanted to be explicit about > which one I meant, I'd use: > > a) "Pattern synonym signature" > b) "Signature on a pattern" > > -Iavor > > > > > On Fri, Jan 5, 2018 at 1:12 PM Joachim Breitner > wrote: > >> Hi, >> >> Am Freitag, den 05.01.2018, 13:42 -0500 schrieb Brandon Allbery: >> > Further complicated by the fact that that form used to be called a >> > "pattern signature" with accompanying extension, until that was >> > folded into ScopedTypeVariables extension. >> >> which I find super confusing, because sometimes I want a signature on a >> pattern and it is counter-intuitive to me why I should not longer use >> the obviously named PatternSignatures extension but rather the at first >> glance unrelated ScopedTypeVariable extension. >> >> But I am derailing the discussion a bit. >> >> Cheers, >> Joachim >> >> -- >> Joachim Breitner >> mail at joachim-breitner.de >> http://www.joachim-breitner.de/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 8 12:59:50 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 8 Jan 2018 12:59:50 +0000 Subject: pattern signatures In-Reply-To: References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> <1515186738.3425.27.camel@joachim-breitner.de> Message-ID: I like the idea of distinguishing “signatures” from “annotations”. But then what is currently a “pattern signature” with extension -XPatternSignatures, becomes “type annotation in a pattern” or perhaps “pattern type-annotation” which is a bit clumsy. Possibly “type specification” instead of “type annotation”. Thus “pattern type-spec” which is snappier. 
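To make the two senses being named in this thread concrete (an illustrative snippet, not taken from any of the mails above):

```haskell
{-# LANGUAGE PatternSynonyms, ScopedTypeVariables #-}

-- (a) A pattern synonym signature: it accompanies the definition of a
--     pattern synonym, just as a type signature accompanies a binding.
pattern Single :: a -> [a]
pattern Single x = [x]

-- (b) A type annotation on a pattern (what the old PatternSignatures
--     extension enabled, now folded into ScopedTypeVariables):
headOrZero :: [Int] -> Int
headOrZero ((x :: Int) : _) = x
headOrZero _                = 0
```

Sense (a) is the per-definition "signature"; sense (b) is the inline "annotation"/"type-spec" the naming discussion is about.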
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Spiwack, Arnaud Sent: 08 January 2018 10:11 Cc: Joachim Breitner ; ghc-devs at haskell.org Subject: Re: pattern signatures In my eyes, signatures are something which goes with a definition. So (a) is a pattern (synonym) signature, while (b) is merely a type annotation on a pattern. On Fri, Jan 5, 2018 at 11:23 PM, Iavor Diatchki > wrote: Well, as you say, "pattern signature" makes sense for both, so I would expect to use context to disambiguate. If I wanted to be explicit about which one I meant, I'd use: a) "Pattern synonym signature" b) "Signature on a pattern" -Iavor On Fri, Jan 5, 2018 at 1:12 PM Joachim Breitner > wrote: Hi, Am Freitag, den 05.01.2018, 13:42 -0500 schrieb Brandon Allbery: > Further complicated by the fact that that form used to be called a > "pattern signature" with accompanying extension, until that was > folded into ScopedTypeVariables extension. which I find super confusing, because sometimes I want a signature on a pattern and it is counter-intuitive to me why I should not longer use the obviously named PatternSignatures extension but rather the at first glance unrelated ScopedTypeVariable extension. But I am derailing the discussion a bit. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sylvain at haskus.fr Wed Jan 10 21:02:28 2018 From: sylvain at haskus.fr (Sylvain Henry) Date: Wed, 10 Jan 2018 22:02:28 +0100 Subject: pattern signatures In-Reply-To: References: <9F06F4BE-5A5B-451E-8AB7-539D4347B0CA@cs.brynmawr.edu> <1515186738.3425.27.camel@joachim-breitner.de> Message-ID: <59780cb8-ae17-df06-db61-ecb3b7029fba@haskus.fr> Or maybe "pattern ascription"? "type-ascription" is implied as "ascription" isn't commonly used for something else (AFAIK). Sylvain On 08/01/2018 13:59, Simon Peyton Jones via ghc-devs wrote: > > I like the idea of distinguishing “signatures” from “annotations”. > > But then what is currently a “pattern signature” with extension > -XPatternSignatures, becomes “type annotation in a pattern” or perhaps > “pattern type-annotation” which is a bit clumsy. > > Possibly “type specification” instead of “type annotation”.  Thus > “pattern type-spec” which is snappier. > > Simon > > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > *Spiwack, Arnaud > *Sent:* 08 January 2018 10:11 > *Cc:* Joachim Breitner ; ghc-devs at haskell.org > *Subject:* Re: pattern signatures > > In my eyes, signatures are something which goes with a definition. > > So (a) is a pattern (synonym) signature, while (b) is merely a type > annotation on a pattern. > > On Fri, Jan 5, 2018 at 11:23 PM, Iavor Diatchki > > wrote: > > Well, as you say, "pattern signature" makes sense for both, so I > would expect to use context to disambiguate.  If I wanted to be > explicit about which one I meant, I'd use: > > a) "Pattern synonym signature" > > b) "Signature on a pattern" > > -Iavor > > On Fri, Jan 5, 2018 at 1:12 PM Joachim Breitner > > wrote: > > Hi, > > Am Freitag, den 05.01.2018, 13:42 -0500 schrieb Brandon Allbery: > > Further complicated by the fact that that form used to be > called a > > "pattern signature" with accompanying extension, until that was > > folded into ScopedTypeVariables extension. 
> > which I find super confusing, because sometimes I want a > signature on a > pattern and it is counter-intuitive to me why I should not > longer use > the obviously named PatternSignatures extension but rather the > at first > glance unrelated ScopedTypeVariable extension. > > But I am derailing the discussion a bit. > > Cheers, > Joachim > > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Mon Jan 15 19:10:53 2018 From: david at well-typed.com (David Feuer) Date: Mon, 15 Jan 2018 14:10:53 -0500 Subject: Implementing pattern synonym constructor signatures Message-ID: <20180115183838.08709BCD9C@haskell.org> Over the past week I've started digging into the code that implements pattern synonyms with an eye toward implementing the pattern synonym construction function signature proposal. I think I understand a decent amount of what's going on most places. However, I don't understand enough about type checking to have any idea about what needs to change where or how. There are several things that need to be addressed: 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? 1. When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? 
2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. 3. A pattern synonym without a constructor signature needs to be treated as it is today, so the machinery for matching things up needs to remain available. David FeuerWell-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Mon Jan 15 20:27:13 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 15 Jan 2018 20:27:13 +0000 Subject: Implementing pattern synonym constructor signatures In-Reply-To: <20180115183838.08709BCD9C@haskell.org> References: <20180115183838.08709BCD9C@haskell.org> Message-ID: What is a constructor signature? Where is this specified? On Mon, Jan 15, 2018 at 7:10 PM, David Feuer wrote: > Over the past week I've started digging into the code that implements > pattern synonyms with an eye toward implementing the pattern synonym > construction function signature proposal. I think I understand a decent > amount of what's going on most places. However, I don't understand enough > about type checking to have any idea about what needs to change where or > how. There are several things that need to be addressed: > > 0. Parsing. I wasn't actually able to find the code that parses pattern > synonyms. Can someone point me in the right direction? > > 1. When there is a constructor signature, it needs to be used for the > construction function instead of the pattern signature. Can someone give > point me in the right direction about how to do this? > > 2. When there is a constructor signature but no pattern signature, what > should we do? I think "give up" sounds okay for now. > > 3. A pattern synonym without a constructor signature needs to be treated as > it is today, so the machinery for matching things up needs to remain > available. 
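For readers of the thread, the shape described by the accepted proposal can be sketched as follows. This is the *proposed* syntax — the separate signature on the builder inside the `where` clause is not accepted by GHC as of this discussion, and `Head` is just a made-up example:

```haskell
{-# LANGUAGE PatternSynonyms #-}

-- Today a single signature serves both matcher and builder.  Under the
-- proposal, the builder in the `where` clause may carry its own
-- signature (typically to give it different constraints):
pattern Head :: a -> [a]        -- pattern (matcher) signature
pattern Head x <- (x : _)
  where
    Head :: a -> [a]            -- proposed constructor signature
    Head x = [x]
```

Point 2 above is then the case where the inner `Head :: …` signature is present but the outer `pattern Head :: …` one is missing.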
> > David Feuer > Well-Typed, LLP > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From david at well-typed.com Mon Jan 15 20:45:03 2018 From: david at well-typed.com (David Feuer) Date: Mon, 15 Jan 2018 15:45:03 -0500 Subject: Implementing pattern synonym constructor signatures In-Reply-To: Message-ID: <20180115201246.0E900BCDC5@haskell.org> See the accepted proposal:  https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0005-bidir-constr-sigs.rst David FeuerWell-Typed, LLP -------- Original message --------From: Matthew Pickering Date: 1/15/18 3:27 PM (GMT-05:00) To: David Feuer Cc: Simon Peyton Jones , GHC developers Subject: Re: Implementing pattern synonym constructor signatures What is a constructor signature? Where is this specified? On Mon, Jan 15, 2018 at 7:10 PM, David Feuer wrote: > Over the past week I've started digging into the code that implements > pattern synonyms with an eye toward implementing the pattern synonym > construction function signature proposal. I think I understand a decent > amount of what's going on most places. However, I don't understand enough > about type checking to have any idea about what needs to change where or > how. There are several things that need to be addressed: > > 0. Parsing. I wasn't actually able to find the code that parses pattern > synonyms. Can someone point me in the right direction? > > 1. When there is a constructor signature, it needs to be used for the > construction function instead of the pattern signature. Can someone give > point me in the right direction about how to do this? > > 2. When there is a constructor signature but no pattern signature, what > should we do? I think "give up" sounds okay for now. > > 3. A pattern synonym without a constructor signature needs to be treated as > it is today, so the machinery for matching things up needs to remain > available. 
> > David Feuer > Well-Typed, LLP > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 15 23:36:23 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 15 Jan 2018 23:36:23 +0000 Subject: Implementing pattern synonym constructor signatures In-Reply-To: <84681626-474c-47f6-926c-136d917833b9@DM3NAM06FT012.Eop-nam06.prod.protection.outlook.com> References: <84681626-474c-47f6-926c-136d917833b9@DM3NAM06FT012.Eop-nam06.prod.protection.outlook.com> Message-ID: 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? Parser.y line 1356, production ‘patteron_synonym_decl’ looks plausible. Currently we have data HsPatSynDir id = Unidirectional | ImplicitBidirectional | ExplicitBidirectional (MatchGroup id (LHsExpr id)) so in the bidirectional case all we have a MatchGroup, built with mkPatSynMatchGroup. To serve the proposal we need an optional signature in there too. 1. When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? TcPatSyn.tcPatSynBuilderBind is a good place to start. 2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. I don’t understand. Can you give an example? Simon From: David Feuer [mailto:david at well-typed.com] Sent: 15 January 2018 19:11 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Implementing pattern synonym constructor signatures Over the past week I've started digging into the code that implements pattern synonyms with an eye toward implementing the pattern synonym construction function signature proposal. 
I think I understand a decent amount of what's going on most places. However, I don't understand enough about type checking to have any idea about what needs to change where or how. There are several things that need to be addressed: 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? 1. When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? 2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. 3. A pattern synonym without a constructor signature needs to be treated as it is today, so the machinery for matching things up needs to remain available. David Feuer Well-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Tue Jan 16 00:04:20 2018 From: david at well-typed.com (David Feuer) Date: Mon, 15 Jan 2018 19:04:20 -0500 Subject: Implementing pattern synonym constructor signatures In-Reply-To: <20180115183838.08709BCD9C@haskell.org> Message-ID: <20180115233159.40E88BCDC2@haskell.org> Never mind about parsing. It looks like the parser is already doing what it needs to do and I need to look to RdrHsSyn.hs. David FeuerWell-Typed, LLP -------- Original message --------From: David Feuer Date: 1/15/18 2:10 PM (GMT-05:00) To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Implementing pattern synonym constructor signatures Over the past week I've started digging into the code that implements pattern synonyms with an eye toward implementing the pattern synonym construction function signature proposal. I think I understand a decent amount of what's going on most places. However, I don't understand enough about type checking to have any idea about what needs to change where or how. 
There are several things that need to be addressed: 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? 1. When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? 2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. 3. A pattern synonym without a constructor signature needs to be treated as it is today, so the machinery for matching things up needs to remain available. David FeuerWell-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Tue Jan 16 05:21:13 2018 From: david at well-typed.com (David Feuer) Date: Tue, 16 Jan 2018 00:21:13 -0500 Subject: Implementing pattern synonym constructor signatures In-Reply-To: Message-ID: <20180116044850.73CCABCAAF@haskell.org> 3. Someone could write pattern P x <- ... where  P :: ...  P x = ... The pattern signature has to be the same as the constructor signature except for constraints, so it doesn't necessarily sound trivial to infer. David FeuerWell-Typed, LLP -------- Original message --------From: Simon Peyton Jones Date: 1/15/18 6:36 PM (GMT-05:00) To: David Feuer Cc: ghc-devs at haskell.org Subject: RE: Implementing pattern synonym constructor signatures 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? Parser.y line 1356, production ‘patteron_synonym_decl’ looks plausible. Currently we have data HsPatSynDir id   = Unidirectional   | ImplicitBidirectional   | ExplicitBidirectional (MatchGroup id (LHsExpr id)) so in the bidirectional case all we have a MatchGroup, built with mkPatSynMatchGroup.    To serve the proposal we need an optional signature in there too. 1. 
When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? TcPatSyn.tcPatSynBuilderBind is a good place to start. 2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. I don’t understand.  Can you give an example? Simon From: David Feuer [mailto:david at well-typed.com] Sent: 15 January 2018 19:11 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Implementing pattern synonym constructor signatures Over the past week I've started digging into the code that implements pattern synonyms with an eye toward implementing the pattern synonym construction function signature proposal. I think I understand a decent amount of what's going on most places. However, I don't understand enough about type checking to have any idea about what needs to change where or how. There are several things that need to be addressed: 0. Parsing. I wasn't actually able to find the code that parses pattern synonyms. Can someone point me in the right direction? 1. When there is a constructor signature, it needs to be used for the construction function instead of the pattern signature. Can someone give point me in the right direction about how to do this? 2. When there is a constructor signature but no pattern signature, what should we do? I think "give up" sounds okay for now. 3. A pattern synonym without a constructor signature needs to be treated as it is today, so the machinery for matching things up needs to remain available. David Feuer Well-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Tue Jan 16 11:15:44 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Tue, 16 Jan 2018 11:15:44 +0000 Subject: Are join points inlined differently from normal bindings? 
Message-ID: I have quite a complicated program which relies on the optimiser inlining very aggressively. In 8.0.2, it works fine and produces the core I am expecting. In 8.2.2, one of the bindings is identified as a join point and then not inlined when doing so would lead to the same code. Bumping the unfolding-use-threshold to 10 (from 8) causes it to be inlined and produces the right core. Here is the core for reference - https://gist.github.com/mpickering/be30105b97fa7e4149c9fa935d72cd1c I haven't dug into which exact part of my program introduces this join point but this seems like a regression from 8.0.2. However, I am making a mailing list post as I am unsure how the inliner is expected to treat join points. Questions. 1. Does the inliner treat join point bindings differently to normal bindings? 2. How can I stop the compiler introducing this join point which seems to get in the way of the inliner? Cheers, Matt From simonpj at microsoft.com Tue Jan 16 14:21:53 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 16 Jan 2018 14:21:53 +0000 Subject: Are join points inlined differently from normal bindings? In-Reply-To: References: Message-ID: | 1. Does the inliner treat join point bindings differently to normal | bindings? I don't think so. Use -dverbose-core2core -ddump-inlinings -ddump-occur-anal to see exactly what is getting inlined and why. | 2. How can I stop the compiler introducing this join point which seems to | get in the way of the inliner? I don't think you can. I don't see why they should get in the way ... quite the reverse actually. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew | Pickering | Sent: 16 January 2018 11:16 | To: GHC developers | Subject: Are join points inlined differently from normal bindings? | | I have quite a complicated program which relies on the optimiser inlining | very aggressively. | | In 8.0.2, it works fine and produces the core I am expecting.
| | In 8.2.2, one of the bindings is identified as a join point and then not | inlined when doing so would lead to the same code. Bumping the unfolding- | use-threshold to 10 (from 8) causes it to be inlined and produces the | right core. | | Here is the core for reference - | https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgist.git | hub.com%2Fmpickering%2Fbe30105b97fa7e4149c9fa935d72cd1c&data=02%7C01%7Csi | monpj%40microsoft.com%7C4bdf68e2ca8e480b1ba808d55cd2a9db%7C72f988bf86f141 | af91ab2d7cd011db47%7C1%7C0%7C636516982263961282&sdata=W59V18L00glmNB7XCKB | B90hTEU3QrgiT9rhGgO2ezec%3D&reserved=0 | | I haven't dug into which exact part of my program introduces this join | point but this seems like a regression from 8.0.2. | | However, I make a mailing list post as I unsure how to expect the inliner | to treat join points. | | Questions. | | 1. Does the inliner treat join point bindings differently to normal | bindings? | 2. How can I stop the compiler introducing this join point which seems to | get in the way of the inliner? | | Cheers, | | Matt | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C4bdf68e2ca8e480b1ba808d55cd | 2a9db%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636516982263961282&sda | ta=iNkZMaLfXCcP2ulWv2AlfVTvVq3LvsSadB6BytF6NUg%3D&reserved=0 From ben at well-typed.com Tue Jan 16 18:35:18 2018 From: ben at well-typed.com (Ben Gamari) Date: Tue, 16 Jan 2018 13:35:18 -0500 Subject: Versioning of libraries bundled with GHC pre-releases Message-ID: <87o9ltzm78.fsf@smart-cactus.org> TL;DR. We propose to start following the PVP for core libraries shipped with GHC alpha release. Let us know what you think. 
Hello everyone, GHC has recently been reworking its release policy, increasing the release cadence to two releases per year. We hope that this change facilitates earlier and more thorough testing of GHC. Of course, a compiler is worth little if no real-world packages can be built with it. Historically library maintainers have been reluctant to offer releases claiming compatibility with pre-release GHCs due to the lax versioning guarantees offered by such pre-releases. Specifically, changes to libraries shipped with GHC pre-releases have historically not had proper distinct version numbers, causing unnecessary breakage for released code (e.g. [1]). To make maintainers feel more at ease with releasing libraries compatible with GHC alpha releases, we propose to start using the Package Versioning Policy (PVP) [2] to version GHC's core libraries with each alpha release. That is, libraries which are not source-identical will get at very least a minor bump with each alpha release. By "core libraries" we mean the set of: * base * template-haskell * integer-gmp * integer-simple * hpc * ghci * ghc-compact * all GHC dependencies not maintained by GHC HQ * ghc-prim * ghc-boot * ghc-boot-th Following the PVP will allow maintainers to safely release libraries to Hackage without fear that they will break when the final GHC 8.4.1 release is made, easing the testing process for everyone. If you have an opinion one way or another on this matter please do share it on this list. Cheers, - Ben [1] https://github.com/tibbe/hashable/issues/143 [2] https://pvp.haskell.org/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From oleg.grenrus at iki.fi Tue Jan 16 20:14:22 2018 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Tue, 16 Jan 2018 22:14:22 +0200 Subject: [Haskell-cafe] Versioning of libraries bundled with GHC pre-releases In-Reply-To: <87o9ltzm78.fsf@smart-cactus.org> References: <87o9ltzm78.fsf@smart-cactus.org> Message-ID: <21676f54-f7da-6eb9-1958-2159f30a284f@iki.fi> Hi Ben, Note that PVP dictates to do _major_ bump every time a breaking changes is introduced: 1. Breaking change. If any entity was removed, or the types of any entities or the definitions of datatypes or classes were changed, or orphan instances were added or any instances were removed, then the new A.B MUST be greater than the previous A.B. This means that first alpha-release for e.g. GHC-8.4.1/base-4.11.0.0 or GHC-8.6.1/base-4.12.0.0 will force to freeze both GHC and base. For example "Make the Div and Mod type families `infixl 7`" commit https://github.com/ghc/ghc/commit/fdfaa56b04b2cefb86e4dc557b1d163fd2e062dc is a breaking change. OTOH it's pity not to fix new feature before it's officially released. I cannot judge how much ghc-the-lib public API changes. TL;DR first alpha release is too early to do "PVP dictated freeze". I think that we need *staging* (mutable) package repository, where package authors can upload packages using lighter release procedure. Let's keep Hackage to high standards, and test in a staging environment, not the production one. - Oleg On 16.01.2018 20:35, Ben Gamari wrote: > TL;DR. We propose to start following the PVP for core libraries shipped > with GHC alpha release. Let us know what you think. > > > Hello everyone, > > GHC has recently been reworking its release policy, increasing the > release cadence to two releases per year. We hope that this change > facilitates earlier and more thorough testing of GHC. 
Of course, > a compiler is worth little if no real-world packages can be built with > it. > > Historically library maintainers have been reluctant to offer releases > claiming compatibility with pre-release GHCs due to the lax versioning > guarantees offered by such pre-releases. Specifically, changes to > libraries shipped with GHC pre-releases have historically not had > proper distinct version numbers, causing unnecessary breakage for > released code (e.g. [1]). > > To make maintainers feel more at ease with releasing libraries > compatible with GHC alpha releases, we propose to start using the > Package Versioning Policy (PVP) [2] to version GHC's core libraries with > each alpha release. That is, libraries which are not source-identical > will get at very least a minor bump with each alpha release. > > By "core libraries" we mean the set of: > > * base > * template-haskell > * integer-gmp > * integer-simple > * hpc > * ghci > * ghc-compact > * all GHC dependencies not maintained by GHC HQ > * ghc-prim > * ghc-boot > * ghc-boot-th > > Following the PVP will allow maintainers to safely release libraries to > Hackage without fear that they will break when the final GHC 8.4.1 > release is made, easing the testing process for everyone. > > If you have an opinion one way or another on this matter please do share > it on this list. > > Cheers, > > - Ben > > > [1] https://github.com/tibbe/hashable/issues/143 > [2] https://pvp.haskell.org/ > > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From kili at outback.escape.de Tue Jan 16 20:13:02 2018 From: kili at outback.escape.de (Matthias Kilian) Date: Tue, 16 Jan 2018 21:13:02 +0100 Subject: ghc, OpenBSD and stack pointer checking Message-ID: <20180116201302.GB3762@nutty.outback.escape.de> Hi, while working on a ghc update for OpenBSD (to ghc-8.2.2), I tested a diff for OpenBSD which enforces a special mmap(2) option, MAP_STACK for the system stack and, if the check fails, just aborts the process.[1] (Please note that this differs from the meaning of MAP_STACK on some other operating systems) At first, everything looked fine, but later during the build, *sometimes* ghc (to be specific, inplace/lib/bin/ghc-stage2) got aborted after *many* successful runs of it (for example, while compiling the bundled haddock and after already a couple of haddock sources had been successfully compiled). So, if the stack pointer checking diff to OpenBSD is correct, and if I'm not running into a completely unrelated problem: does ghc and/or the runtime library sometimes move the system stack pointer to newly allocated/mapped memory? If so, where in the code? Please note: the check happens on traps and system calls, so the abort doesn't happen when the stack pointer is changed to newly allocated/mapped memory, but after the next trap or system call. Unfortunately, I was stupid enough to drop my recent build logs and back traces, but they weren't very enlightening, anyway ;-) I may get some more information tomorrow or on Thursday. I'd appreciate any help on this. After all it's probably just a matter of changing one call to mmap(2), shielded by an #ifdef.
Ciao and thanks in advance for any help, Kili [1]: https://marc.info/?l=openbsd-tech&m=151572838911297&w=2 From simonpj at microsoft.com Wed Jan 17 10:15:05 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 17 Jan 2018 10:15:05 +0000 Subject: [GHC] #5889: -fno-prof-count-entries leads to linking errors In-Reply-To: <058.0ec3d2b791b1062af1996f81a16b902e@haskell.org> References: <043.c51b50cea9fd3755f487a8490fe8400e@haskell.org> <058.0ec3d2b791b1062af1996f81a16b902e@haskell.org> Message-ID: | Simon, we were wrong about CorePrep not dropping unfoldings for exported | ids, it really drops all unfoldings. I'd forgotten that. Notes are so useful! | Are there any other reasons for not doing this in CorePrep? Can we maybe | implement a pass before CorePrep just for cost center collection? Currently CorePrep is just in a unique-supply monad. But I guess you could add a writer monad to allow us to collect cost centres. That should not be hard. Alternatively, a separate pass (run only if we have profiling). I don’t feel strongly. A separate pass run always seems overkill somehow. 
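Concretely, the writer-monad idea above could look roughly like this toy sketch. All names here are hypothetical and the types are stand-ins — this is not GHC's actual CorePrep code, just an illustration of threading a collector alongside a unique-supply state so that cost centres fall out of the existing pass as a side output:

```haskell
-- Toy sketch (hypothetical names; ordinary State in place of GHC's
-- unique-supply monad) of collecting cost centres during an existing pass.
import Control.Monad.State

type Unique     = Int
type CostCentre = String  -- stand-in for GHC's CostCentre type

-- The state carries the unique supply plus an accumulator of collected
-- cost centres (a writer folded into the state for simplicity).
type CorePrepM = State (Unique, [CostCentre])

newUnique :: CorePrepM Unique
newUnique = do
  (u, ccs) <- get
  put (u + 1, ccs)
  pure u

collectCC :: CostCentre -> CorePrepM ()
collectCC cc = modify (\(u, ccs) -> (u, cc : ccs))

-- A stand-in "pass" that freshens binders and records cost centres it sees.
prepBinder :: String -> CorePrepM String
prepBinder name = do
  collectCC ("scc:" ++ name)
  u <- newUnique
  pure (name ++ "_" ++ show u)

runPass :: [String] -> ([String], [CostCentre])
runPass names =
  let (renamed, (_, ccs)) = runState (mapM prepBinder names) (0, [])
  in (renamed, reverse ccs)
```

The point of the sketch is that the existing traversal is unchanged; the collected cost centres simply come back as a second result of `runPass`.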
Simon | -----Original Message----- | From: ghc-tickets [mailto:ghc-tickets-bounces at haskell.org] On Behalf Of | GHC | Sent: 17 January 2018 06:54 | Subject: Re: [GHC] #5889: -fno-prof-count-entries leads to linking errors | | #5889: -fno-prof-count-entries leads to linking errors | -------------------------------------+---------------------------------- | -------------------------------------+--- | Reporter: akio | Owner: bgamari | Type: bug | Status: new | Priority: highest | Milestone: 8.4.1 | Component: Compiler | Version: 8.3 | Resolution: | Keywords: | Operating System: Linux | Architecture: x86_64 | | (amd64) | Type of failure: GHC rejects | Test Case: | valid program | profiling/should_compile/T5889 | Blocked By: | Blocking: | Related Tickets: | Differential Rev(s): | Wiki Page: | | -------------------------------------+---------------------------------- | -------------------------------------+--- | | Comment (by osa1): | | Simon, we were wrong about CorePrep not dropping unfoldings for exported | ids, it really drops all unfoldings. There's a note | | {{{ | {- Note [Drop unfoldings and rules] | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | We want to drop the unfolding/rules on every Id: | | - We are now past interface-file generation, and in the | codegen pipeline, so we really don't need full unfoldings/rules | | - The unfolding/rule may be keeping stuff alive that we'd like | to discard. 
See Note [Dead code in CorePrep] | | - Getting rid of unnecessary unfoldings reduces heap usage | | - We are changing uniques, so if we didn't discard unfoldings/rules | we'd have to substitute in them | | HOWEVER, we want to preserve evaluated-ness; see Note [Preserve | evaluated-ness in CorePrep] | | Note [Preserve evaluated-ness in CorePrep] | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | We want to preserve the evaluated-ness of each binder (via | evaldUnfolding) for two reasons | | * In the code generator if we have | case x of y { Red -> e1; DEFAULT -> y } | we can return 'y' rather than entering it, if we know | it is evaluated (Trac #14626) | | * In the DataToTag magic (in CorePrep itself) we rely on | evaluated-ness. See Note Note [dataToTag magic]. | -} | }}} | | I can also clearly see that in the code we don't distinguish exported | from non-exported, we just zap all unfoldings (see `cpCloneBndrs` and | `zapUnfolding`). | | So it seems to me like we may have to collect cost centers before or | during CorePrep. I vaguely remember discussing this in the meeting and | one of the arguments against this was that `CorePrep` is already complex | enough so if possible it'd be nice to avoid making it even more complex. | Are there any other reasons for not doing this in CorePrep? Can we maybe | implement a pass before CorePrep just for cost center collection? 
| | -- | Ticket URL: | | GHC | | The Glasgow Haskell Compiler From ben at well-typed.com Wed Jan 17 19:25:55 2018 From: ben at well-typed.com (Ben Gamari) Date: Wed, 17 Jan 2018 14:25:55 -0500 Subject: [Haskell-cafe] Versioning of libraries bundled with GHC pre-releases In-Reply-To: <21676f54-f7da-6eb9-1958-2159f30a284f@iki.fi> References: <87o9ltzm78.fsf@smart-cactus.org> <21676f54-f7da-6eb9-1958-2159f30a284f@iki.fi> Message-ID: <87efmoz3r6.fsf@smart-cactus.org> Oleg Grenrus writes: > Hi Ben, > > Note that the PVP dictates a _major_ bump every time a breaking change > is introduced: > Right; this is what I was trying to imply when I said "at least a minor bump" in the initial email. > 1. Breaking change. If any entity was removed, or the types of any > entities or the definitions of datatypes or classes were changed, or > orphan instances were added or any instances were removed, then the > new A.B MUST be greater than the previous A.B. > > This means that the first alpha-release for e.g. GHC-8.4.1/base-4.11.0.0 or > GHC-8.6.1/base-4.12.0.0 will force a freeze of both GHC and base. > > For example the "Make the Div and Mod type families `infixl 7`" commit > https://github.com/ghc/ghc/commit/fdfaa56b04b2cefb86e4dc557b1d163fd2e062dc > is a breaking change. OTOH it's a pity not to fix a new feature before it's > officially released. > Yes, the fact that this sort of thing would require a decision between a major bump or punting until the next release is terribly unfortunate. In an ideal world we would simply "be careful" and make sure that major interface decisions are made by the time of the first alpha but unfortunately, as the above commit illustrates, mistakes are bound to happen. I don't know the right compromise here. > I cannot judge how much ghc-the-lib public API changes. > > TL;DR first alpha release is too early to do "PVP dictated freeze". > This may well be so. Hopefully this thread will help us determine the costs and benefits of freezing during the alpha phase.
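The PVP clause quoted above is mechanical enough to write down. A toy sketch of just that one rule (simplified — the real PVP has further clauses constraining the later components for non-breaking changes):

```haskell
-- Toy model of the quoted PVP rule: the "major version" is the first two
-- components A.B, and a breaking change MUST make the new A.B greater.
majorOf :: [Int] -> [Int]
majorOf = take 2

-- Is moving from `old` to `new` an acceptable bump for a breaking change?
validBreakingBump :: [Int] -> [Int] -> Bool
validBreakingBump old new = majorOf new > majorOf old
```

On this reading, the `infixl 7` change to Div and Mod would oblige base-4.11.0.0 to become at least base-4.12.0.0 rather than 4.11.1.0 — which is exactly why a strict PVP freeze at the first alpha is uncomfortable.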
> I think that we need *staging* (mutable) package repository, where > package authors can upload packages using lighter release procedure. > Let's keep Hackage to high standards, and test in a staging environment, > not the production one. > That is reasonable; however, I am a bit worried that our current tooling isn't quite up to the task. Herbert's head.hackage effort is a great start, but I fear that the friction of maintaining and using the patchset may hamper adoption. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Jan 18 18:21:27 2018 From: ben at well-typed.com (Ben Gamari) Date: Thu, 18 Jan 2018 13:21:27 -0500 Subject: ghc, OpenBSD and stack pointer checking In-Reply-To: <20180116201302.GB3762@nutty.outback.escape.de> References: <20180116201302.GB3762@nutty.outback.escape.de> Message-ID: <878tcvyqn2.fsf@smart-cactus.org> Matthias Kilian writes: > Hi, > > while working on an ghc update for OpenBSD (to ghc-8.2.2), I tested > a diff for OpenBSD which enforces a special mmap(2) option, MAP_STACK > for the system stack and, if the check fails, just aborts the > process.[1] (Please note that this differs from the meaning of > MAP_STACK on some other operating systems) > > At first, everything looked fine, but later during the build, > *sometimes* ghc (to be specific, inplace/lib/bin/ghc-stage2) got > aborted after *many* succesfull runs of it (for example, while > compiling the bundled haddock and after already a couple of haddock > sources had been successfully compiled). > > So, if the stack pointer checking diff to OpenBSD is correct, and > if I'm not running into a completely unrelated problem: does ghc > and/or the runtime library sometimes move the system stack pointer > to newly allocated/mapped memory? If so, where in the code? 
> As far as I know GHC shouldn't touch x86-64's $rsp at all; we specifically avoid using it for the STG stack to ease FFI. It would be interesting to know what is touching it. Unfortunately, without a tool like rr this may be hard to find. Cheers, - Ben [1] http://rr-project.org/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nboldi at elte.hu Fri Jan 19 02:48:13 2018 From: nboldi at elte.hu (=?UTF-8?B?TsOpbWV0aCBCb2xkaXpzw6Fy?=) Date: Fri, 19 Jan 2018 11:48:13 +0900 Subject: compiling GHC: undefined references Message-ID: <53a912e3-11b9-acc5-c843-7fb43d4b4131@elte.hu> Dear GHC Devs, This is my first time trying to compile GHC on my machine and I need a little help to go on. I ran into a problem, I get a lot of undefined reference errors when linking. A sample of these: ghc\stage2\build\Main.o:fake:(.text+0x1f3): undefined reference to `ghczmprim_GHCziTypes_ZC_con_info' ghc\stage2\build\Main.o:fake:(.text+0x340): undefined reference to `ghczmprim_GHCziTypes_ZMZN_closure' ghc\stage2\build\Main.o:fake:(.text+0x498): undefined reference to `ghczmprim_GHCziTypes_False_closure' ghc\stage2\build\Main.o:fake:(.text+0x4c0): undefined reference to `ghczmprim_GHCziTypes_True_closure' Before the errors I have a warning: Warning: -rtsopts and -with-rtsopts have no effect with -no-hs-main.     Call hs_init_ghc() from your main() function to set these options. The only change I made is fixing some simple type errors in utils/ghc-cabal/Main.hs that stopped compilation. 
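(As an aside for readers puzzling over those link errors: the mangled symbols are z-encoded Haskell names. A toy decoder covering only the sequences that appear above — GHC's real scheme has many more cases:)

```haskell
-- Decode the subset of GHC's z-encoding seen in the errors above:
--   zm -> '-'   zi -> '.'   ZC -> ':'   ZM -> '['   ZN -> ']'
-- e.g. ghczmprim_GHCziTypes_ZMZN_closure refers to [] from GHC.Types
-- in package ghc-prim.
zDecode :: String -> String
zDecode ('z':'m':rest) = '-' : zDecode rest
zDecode ('z':'i':rest) = '.' : zDecode rest
zDecode ('Z':'C':rest) = ':' : zDecode rest
zDecode ('Z':'M':rest) = '[' : zDecode rest
zDecode ('Z':'N':rest) = ']' : zDecode rest
zDecode (c:rest)       = c   : zDecode rest
zDecode []             = []
```

So the undefined references are all to closures and info tables from ghc-prim's GHC.Types, which points at a broken or missing ghc-prim build rather than a source problem.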
Commit ID is: cf2c029ccdb967441c85ffb66073974fbdb20c20 Best Regards, Boldizsár From douglas.wilson at gmail.com Fri Jan 19 04:23:42 2018 From: douglas.wilson at gmail.com (Douglas Wilson) Date: Fri, 19 Jan 2018 17:23:42 +1300 Subject: Fwd: compiling GHC: undefined references In-Reply-To: References: <53a912e3-11b9-acc5-c843-7fb43d4b4131@elte.hu> Message-ID: ---------- Forwarded message ---------- From: Douglas Wilson Date: Fri, Jan 19, 2018 at 5:23 PM Subject: Re: compiling GHC: undefined references To: Németh Boldizsár Hi Boldizsár, I infer from the path names that you are on Windows? It's likely that you have not correctly configured your machine, since this commit passed validation on Windows here: https://phabricator.haskell.org/B19134 Can you double check that you have followed the instructions here: https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows > Before the errors I have a warning: Warning: -rtsopts and -with-rtsopts have no effect with -no-hs-main. > Call hs_init_ghc() from your main() function to set these options. > This is normal. > The only change I made is fixing some simple type errors in > utils/ghc-cabal/Main.hs that stopped compilation. > > It is very unusual for HEAD to fail to compile, and CI indicates that it does compile on the test systems in this case. Good luck! Regards, Doug Wilson -------------- next part -------------- An HTML attachment was scrubbed... URL: From nboldi at elte.hu Fri Jan 19 09:35:29 2018 From: nboldi at elte.hu (Németh Boldizsár) Date: Fri, 19 Jan 2018 18:35:29 +0900 Subject: Extracting representation from GHC Message-ID: Dear GHC Developers, I would like to ask your opinion on my ideas to make it easier to use development tools with GHC. In the past when working on a Haskell refactoring tool I relied on using the GHC API for parsing and type checking Haskell sources. I extracted the representation and performed analysis and transformation on it as it was needed.
However, using the refactorer would be easier if it could work with build tools. To do this, my idea is to instruct GHC with a compilation flag to give out its internal representation of the source code. Most build tools let the user configure the GHC flags, so the refactoring tool would be usable in any build infrastructure. I'm thinking of using the pre-existing plugin architecture and adding two new fields to the Plugin datastructure. One would be called with the parsed representation (HsParsedModule) when parsing succeeds, another with the result of the type checking (TcGblEnv) when type checking is finished. What do you think about this solution? Boldizsár (ps: My first idea was using frontend plugins, but I could not access the representation from there and the --frontend flag changed GHC compilation mode.) From matthewtpickering at gmail.com Fri Jan 19 09:41:06 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 19 Jan 2018 09:41:06 +0000 Subject: Extracting representation from GHC In-Reply-To: References: Message-ID: I too have wanted this in the past and made a post to a similar effect on the mailing list 6 months ago. https://mail.haskell.org/pipermail/ghc-devs/2017-July/014427.html It references this proposal for a similar feature. https://ghc.haskell.org/trac/ghc/wiki/FrontendPluginsProposal#no1 If you would be glad to implement it then feel free to add me as a reviewer. Cheers, Matt On Fri, Jan 19, 2018 at 9:35 AM, Németh Boldizsár wrote: > Dear GHC Developers, > > I would like to ask your opinion on my ideas to make it easier to use > development tools with GHC. > > In the past when working on a Haskell refactoring tool I relied on using the > GHC API for parsing and type checking Haskell sources. I extracted the > representation and performed analysis and transformation on it as it was > needed. However using the refactorer would be easier if it could work with > build tools.
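The two hooks proposed in this thread might be sketched as follows. The field names are made up for illustration, the GHC types are replaced by String stand-ins, and IO stands in for GHC's plugin monads — a sketch of the shape of the proposal, not an existing GHC API:

```haskell
-- Sketch of the proposal: two extra fields on Plugin, one run on the
-- parse result and one on the type-checker result.
type HsParsedModule = String  -- stand-in for GHC's HsParsedModule
type TcGblEnv       = String  -- stand-in for GHC's TcGblEnv

data Plugin = Plugin
  { parsedResultAction    :: HsParsedModule -> IO HsParsedModule
  , typeCheckResultAction :: TcGblEnv -> IO TcGblEnv
  }

-- A do-nothing plugin: both hooks pass the representation through.
defaultPlugin :: Plugin
defaultPlugin = Plugin
  { parsedResultAction    = pure
  , typeCheckResultAction = pure
  }

-- A tool such as a refactorer would override a hook to inspect (or dump)
-- the representation as a side effect and return it unchanged.
dumpPlugin :: Plugin
dumpPlugin = defaultPlugin
  { parsedResultAction = \m -> putStrLn ("parsed: " ++ m) >> pure m }
```

Because the hooks return the representation, the same mechanism would also permit plugins that transform the module, not just observe it.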
> > To do this, my idea is to instruct GHC with a compilation flag to give out > its internal representation of the source code. Most build tools let the > user to configure the GHC flags so the refactoring tool would be usable in > any build infrastructure. I'm thinking of using the pre-existing plugin > architecture and adding two new fields to the Plugin datastructure. One > would be called with the parsed representation (HsParsedModule) when parsing > succeeds, another with the result of the type checking (TcGblEnv) when type > checking is finished. > > What do you think about this solution? > > Boldizsár > > (ps: My first idea was using frontend plugins, but I could not access the > representation from there and --frontend flag changed GHC compilation mode.) > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From astrohavoc at gmail.com Fri Jan 19 12:03:04 2018 From: astrohavoc at gmail.com (Shao Cheng) Date: Fri, 19 Jan 2018 12:03:04 +0000 Subject: Extracting representation from GHC In-Reply-To: References: Message-ID: Hi, IIRC you can already use hscFrontendHook in the DynFlags hooks to retrieve TcGblEnv, and with a little bit of work, also HsParsedModule. Regards, Shao Cheng On Fri, Jan 19, 2018, 5:41 PM Matthew Pickering wrote: > I have too wanted this in the past and made a post to a similar effect > on the mailing list 6 months ago. > > https://mail.haskell.org/pipermail/ghc-devs/2017-July/014427.html > > It references this proposal for a similar feature. > > https://ghc.haskell.org/trac/ghc/wiki/FrontendPluginsProposal#no1 > > If you would be glad to implement it then feel free to add me as a > reviewer. > > Cheers, > > Matt > > On Fri, Jan 19, 2018 at 9:35 AM, Németh Boldizsár wrote: > > Dear GHC Developers, > > > > I would like to ask your opinion on my ideas to make it easier to use > > development tools with GHC. 
> > > > In the past when working on a Haskell refactoring tool I relied on using > the > > GHC API for parsing and type checking Haskell sources. I extracted the > > representation and performed analysis and transformation on it as it was > > needed. However using the refactorer would be easier if it could work > with > > build tools. > > > > To do this, my idea is to instruct GHC with a compilation flag to give > out > > its internal representation of the source code. Most build tools let the > > user to configure the GHC flags so the refactoring tool would be usable > in > > any build infrastructure. I'm thinking of using the pre-existing plugin > > architecture and adding two new fields to the Plugin datastructure. One > > would be called with the parsed representation (HsParsedModule) when > parsing > > succeeds, another with the result of the type checking (TcGblEnv) when > type > > checking is finished. > > > > What do you think about this solution? > > > > Boldizsár > > > > (ps: My first idea was using frontend plugins, but I could not access the > > representation from there and --frontend flag changed GHC compilation > mode.) > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 19 17:05:06 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 19 Jan 2018 17:05:06 +0000 Subject: Why is EvTerm limited? In-Reply-To: <1516370471.3059.4.camel@joachim-breitner.de> References: <1516370471.3059.4.camel@joachim-breitner.de> Message-ID: | What would break if we had | | | EvExpr CoreExpr | | as an additional constructor there? This has come up before. 
I think that'd be a solid win. In fact, eliminate all the existing evidence constructors with "smart constructors" that produce an EvExpr. That'd mean moving stuff from the desugarer into these smart constructors, but that's ok. I /think/ I didn't do that initially only because there were very few forms and it meant that there was no CoreExpr stuff in the type checker. But as we add more forms that decision looks less and less good. You'd need to add zonkCoreExpr in place of zonkEvTerm. evVarsOfTerm is called quite a bit; you might want to cache the result in the EvExpr constructor. Make a ticket and execute? Simon | -----Original Message----- | From: Glasgow-haskell-users [mailto:glasgow-haskell-users- | bounces at haskell.org] On Behalf Of Joachim Breitner | Sent: 19 January 2018 14:01 | To: Glasgow-Haskell-Users users | Subject: Why is EvTerm limited? | | Hi, | | I had some funky idea where a type checker plugin would have to | synthesize code for a custom-solved instance on the fly. But it seems | that does not work because EvTerm is less expressive than Core | (especially, no lambdas): | https://downloads.haskell.org/~ghc/8.2.2/docs/html/libraries/ghc-8.2.2/TcEvidence.html#t:EvTerm | | What would break if we had | | | EvExpr CoreExpr | | as an additional constructor there?
| | Cheers, | Joachim | | -- | Joachim “nomeata” Breitner | mail at joachim-breitner.de | | https://na01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.j | oachim- | breitner.de%2F&data=02%7C01%7Csimonpj%40microsoft.com%7C513ff7ae839149 | 13225008d55f452dec%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636519 | 673089385423&sdata=Vh4BvbeEVUBIntKcf3XEseOzwUTx2RHPuANTY328dpM%3D&rese | rved=0 From simonpj at microsoft.com Fri Jan 19 17:27:44 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 19 Jan 2018 17:27:44 +0000 Subject: Extracting representation from GHC In-Reply-To: References: Message-ID: | To do this, my idea is to instruct GHC with a compilation flag to give | out its internal representation of the source code. Why can't you just use GHC as a library, and ask it to parse and typecheck the module and then look at what it gives you. Others are more used to the GHC API than me, though. S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Németh Boldizsár | Sent: 19 January 2018 09:35 | To: ghc-devs at haskell.org | Subject: Extracting representation from GHC | | Dear GHC Developers, | | I would like to ask your opinion on my ideas to make it easier to use | development tools with GHC. | | In the past when working on a Haskell refactoring tool I relied on | using the GHC API for parsing and type checking Haskell sources. I | extracted the representation and performed analysis and transformation | on it as it was needed. However using the refactorer would be easier | if it could work with build tools. | | To do this, my idea is to instruct GHC with a compilation flag to give | out its internal representation of the source code. Most build tools | let the user to configure the GHC flags so the refactoring tool would | be usable in any build infrastructure. I'm thinking of using the pre- | existing plugin architecture and adding two new fields to the Plugin | datastructure. 
One would be called with the parsed representation | (HsParsedModule) when parsing succeeds, another with the result of the | type checking (TcGblEnv) when type checking is finished. | | What do you think about this solution? | | Boldizsár | | (ps: My first idea was using frontend plugins, but I could not access | the representation from there and --frontend flag changed GHC | compilation mode.) | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From palotai.robin at gmail.com Fri Jan 19 18:46:17 2018 From: palotai.robin at gmail.com (Robin Palotai) Date: Fri, 19 Jan 2018 19:46:17 +0100 Subject: Extracting representation from GHC In-Reply-To: References: Message-ID: See some additions inline. BR Robin 2018-01-19 18:27 GMT+01:00 Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org>: > | To do this, my idea is to instruct GHC with a compilation flag to give > | out its internal representation of the source code. > > Why can't you just use GHC as a library, and ask it to parse and typecheck > the module and then look at what it gives you. > > Last time I checked (GHC 8.2, for haskell-indexer), using the library is > not equivalent to using GHC's Main. GHC's Main does a tremendous amount of > magic with flag parsing and state setup, and doesn't expose all the > functionality for libraries to do the same.
AFAIR we saw two possible ways to get the AST out from a complicated setup (FFI, objects, packages, ...): 1) invoke GHC and use Frontend plugin (but Frontend plugin is/was more limited at the time - the gist in the below trac entry mentions that even the Frontend plugin didn't do everything Main does). 2) Refactor GHC Main and expose all the functionality to GHC API. I filed https://ghc.haskell.org/trac/ghc/ticket/14018 a while ago that's slightly related. By the way, you can click around http://stuff.codereview.me/ghc/#ghc/ghc/Main.hs?corpus=ghc-8.2.1-rc2&signature in the 'main' function to see all the magic. Others are more used to the GHC API than me, though. > > S > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Németh Boldizsár > | Sent: 19 January 2018 09:35 > | To: ghc-devs at haskell.org > | Subject: Extracting representation from GHC > | > | Dear GHC Developers, > | > | I would like to ask your opinion on my ideas to make it easier to use > | development tools with GHC. > | > | In the past when working on a Haskell refactoring tool I relied on > | using the GHC API for parsing and type checking Haskell sources. I > | extracted the representation and performed analysis and transformation > | on it as it was needed. However using the refactorer would be easier > | if it could work with build tools. > | > | To do this, my idea is to instruct GHC with a compilation flag to give > | out its internal representation of the source code. Most build tools > | let the user to configure the GHC flags so the refactoring tool would > | be usable in any build infrastructure. I'm thinking of using the pre- > | existing plugin architecture and adding two new fields to the Plugin > | datastructure. One would be called with the parsed representation > | (HsParsedModule) when parsing succeeds, another with the result of the > | type checking (TcGblEnv) when type checking is finished. 
> | > | What do you think about this solution? > | > | Boldizsár > | > | (ps: My first idea was using frontend plugins, but I could not access > | the representation from there and --frontend flag changed GHC > | compilation mode.) > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h > | askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- > | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C9d78fd2d16994ade4d9008d5 > | 5f2007c3%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6365195135360471 > | 87&sdata=voUEz%2BKTp0p3CtwP1Hx6xA3cXN0qoYONLPd9T7xRve8%3D&reserved=0 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From palotai.robin at gmail.com Fri Jan 19 18:50:45 2018 From: palotai.robin at gmail.com (Robin Palotai) Date: Fri, 19 Jan 2018 19:50:45 +0100 Subject: Extracting representation from GHC In-Reply-To: References: Message-ID: See also https://github.com/google/haskell-indexer/blob/master/haskell-indexer-backend-ghc/src/Language/Haskell/Indexer/Backend/GhcApiSupport.hs for an as-complete GHC API based setup as I could get. The comments indicate possible deficiencies. The test https://github.com/google/haskell-indexer/blob/master/haskell-indexer-backend-ghc/tests/Language/Haskell/Indexer/Backend/Ghc/Test/BasicTestBase.hs#L179 shows some cases that are covered (for example testTemplateHaskellCodeExecFFI ). In practice one can run AST extraction with HscNoLink + HscInterpreted for most targets, but for hairy ones (FFI invoked from TemplateHaskell in certain ways, for examples) that will fail. 2018-01-19 19:46 GMT+01:00 Robin Palotai : > See some additions inline. 
> BR > Robin > > 2018-01-19 18:27 GMT+01:00 Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org>: > >> | To do this, my idea is to instruct GHC with a compilation flag to give >> | out its internal representation of the source code. >> >> Why can't you just use GHC as a library, and ask it to parse and >> typecheck the module and then look at what it gives you. >> >> Last time I checked (GHC 8.2, for haskell-indexer), using the library is > not equivalent to using GHC's Main. GHC's Main does tremendous amount of > magic with flag parsing and state setup, and doesn't expose all the > functionality for libraries to do the same. > > AFAIR we saw two possible ways to get the AST out from a complicated setup > (FFI, objects, packages, ...): > 1) invoke GHC and use Frontend plugin (but Frontend plugin is/was more > limited at the time - the gist in the below trac entry mentions that even > the Frontend plugin didn't do everything Main does). > 2) Refactor GHC Main and expose all the functionality to GHC API. > > I filed https://ghc.haskell.org/trac/ghc/ticket/14018 a while ago that's > slightly related. > > By the way, you can click around http://stuff.codereview.me/ghc/#ghc/ghc/ > Main.hs?corpus=ghc-8.2.1-rc2&signature in the 'main' function to see all > the magic. > > Others are more used to the GHC API than me, though. >> >> S >> >> | -----Original Message----- >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> | Németh Boldizsár >> | Sent: 19 January 2018 09:35 >> | To: ghc-devs at haskell.org >> | Subject: Extracting representation from GHC >> | >> | Dear GHC Developers, >> | >> | I would like to ask your opinion on my ideas to make it easier to use >> | development tools with GHC. >> | >> | In the past when working on a Haskell refactoring tool I relied on >> | using the GHC API for parsing and type checking Haskell sources. I >> | extracted the representation and performed analysis and transformation >> | on it as it was needed. 
However using the refactorer would be easier >> | if it could work with build tools. >> | >> | To do this, my idea is to instruct GHC with a compilation flag to give >> | out its internal representation of the source code. Most build tools >> | let the user to configure the GHC flags so the refactoring tool would >> | be usable in any build infrastructure. I'm thinking of using the pre- >> | existing plugin architecture and adding two new fields to the Plugin >> | datastructure. One would be called with the parsed representation >> | (HsParsedModule) when parsing succeeds, another with the result of the >> | type checking (TcGblEnv) when type checking is finished. >> | >> | What do you think about this solution? >> | >> | Boldizsár >> | >> | (ps: My first idea was using frontend plugins, but I could not access >> | the representation from there and --frontend flag changed GHC >> | compilation mode.) >> | >> | _______________________________________________ >> | ghc-devs mailing list >> | ghc-devs at haskell.org >> | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.h >> | askell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- >> | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C9d78fd2d16994ade4d9008d5 >> | 5f2007c3%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C6365195135360471 >> <(513)%20536-0471> >> | 87&sdata=voUEz%2BKTp0p3CtwP1Hx6xA3cXN0qoYONLPd9T7xRve8%3D&reserved=0 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Sun Jan 21 20:14:46 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 21 Jan 2018 15:14:46 -0500 Subject: [ANNOUNCE] GHC 8.4.1-alpha2 available References: <87bmitcejz.fsf@ben-laptop.smart-cactus.org> Message-ID: <87shazx94g.fsf@smart-cactus.org> The GHC development team is pleased to announce the second alpha release of the 8.4.1 release. The usual release artifacts are available from https://downloads.haskell.org/~ghc/8.4.1-alpha2 Note that this alpha, like alpha1, is unfortunately afflicted by #14678. We will try to get an alpha3 out as soon as this issue has been resolved. However, as this alpha has a number of fixes since alpha1, we have decided it would be best not to delay it any further. Also, due to user demand we now offer a binary distribution for 64-bit Fedora 27; this distribution links against ncurses6. This is in contrast to the Debian 8 distribution, which links against ncurses5. Users of newer distributions (Fedora 27, Debian sid) should use this distribution. Note that this release drops compatibility with GCC 4.6 and earlier. While we generally try to place as few constraints on the system toolchain as possible, this release depends upon the __atomic__ builtins provided by GCC 4.7 and later (see #14244). === Notes on release scheduling === The 8.4.1 release marks the first release where GHC will be adhering to its new, higher-cadence release schedule [1]. Under this new scheme, major releases will be made in 6-month intervals with interstitial minor releases as necessary. In order to minimize the likelihood of schedule slippage and to ensure adequate testing, each major release will be preceded by a number of regular alpha releases. We will begin issuing these releases roughly three months before the final date of the major release and will issue roughly one every two weeks during this period.
This high release cadence will allow us to quickly get fixes into users' hands and allow better feedback on the status of the release. GHC 8.4 is slated to be released in mid-February but, due to technical constraints, we are starting the alpha-release cycle a bit later than planned under the above schedule. For this reason, it would be greatly appreciated if users could put this alpha through its paces to make up for lost time. As always, do let us know if you encounter any trouble in the course of testing. Thanks for your help! Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/blog/2017-release-schedule From kili at outback.escape.de Mon Jan 22 20:52:01 2018 From: kili at outback.escape.de (Matthias Kilian) Date: Mon, 22 Jan 2018 21:52:01 +0100 Subject: ghc, OpenBSD and stack pointer checking In-Reply-To: <878tcvyqn2.fsf@smart-cactus.org> References: <20180116201302.GB3762@nutty.outback.escape.de> <878tcvyqn2.fsf@smart-cactus.org> Message-ID: <20180122205200.GA26895@nutty.outback.escape.de> Hi, On Thu, Jan 18, 2018 at 01:21:27PM -0500, Ben Gamari wrote: > > So, if the stack pointer checking diff to OpenBSD is correct, and > > if I'm not running into a completely unrelated problem: does ghc > > and/or the runtime library sometimes move the system stack pointer > > to newly allocated/mapped memory? If so, where in the code? > > > As far as I know GHC shouldn't touch x86-64's $rsp at all; we > specifically avoid using it for the STG stack to ease FFI. Thanks for the information. > It would be > interesting to know what is touching it. Unfortunately, without a tool > like rr this may be hard to find.
I doubt it's easy to get rr ported to OpenBSD, so all I can think of at the moment is ktracing every single invocation of ghc during a build (should be relatively easy by patching the ghc wrapper script) and then -- after an abort has happened -- looking at the trace to see whether the current stack had been mmapped late during the process. At the moment, I'm busy updating all the Haskell libraries and tools officially available as OpenBSD packages, but I hope to get back to debugging/tracing in a few days. Ciao, Kili From george.colpitts at gmail.com Mon Jan 22 21:19:27 2018 From: george.colpitts at gmail.com (George Colpitts) Date: Mon, 22 Jan 2018 21:19:27 +0000 Subject: [ANNOUNCE] GHC 8.4.1-alpha2 available In-Reply-To: <87shazx94g.fsf@smart-cactus.org> References: <87bmitcejz.fsf@ben-laptop.smart-cactus.org> <87shazx94g.fsf@smart-cactus.org> Message-ID: Installed fine on my Mac:
- primitive can now be compiled
- unordered-containers does not compile, even with --allow-newer; this has been reported by Neil Mitchell
- haskell-src-exts does not compile; it is not clear where to report this
On Sun, Jan 21, 2018 at 4:15 PM Ben Gamari wrote: > > The GHC development team is pleased to announce the second alpha release > of the 8.4.1 release. The usual release artifacts are available from > > https://downloads.haskell.org/~ghc/8.4.1-alpha2 > > Note that this alpha, like alpha1, is unfortunately afflicted by #14678. > We will try to get an alpha3 out as soon as this issue has been resolved. > However, as this alpha has a number of fixes since alpha1, we have > decided it would be best not to delay it any further. > > Also, due to user demand we now offer a binary distribution for 64-bit > Fedora 27; this distribution links against ncurses6. This is in contrast > to the Debian 8 distribution, which links against ncurses5. Users of > newer distributions (Fedora 27, Debian sid) should use this distribution. > > Note that this release drops compatibility with GCC 4.6 and earlier.
> While we generally try to place as few constraints on system toolchain > as possible, this release depends upon the __atomic__ builtins provided > by GCC 4.7 and later (see #14244). > > > === Notes on release scheduling === > > The 8.4.1 release marks the first release where GHC will be adhering to > its new, higher-cadence release schedule [1]. Under this new scheme, > major releases will be made in 6-month intervals with interstitial minor > releases as necessary. > > In order to minimize the likelihood of schedule slippage and to ensure > adequate testing, each major release will be preceeded by a number of > regular alpha releases. We will begin issuing these releases roughly > three months before the final date of the major release and will issue > roughly one every two weeks during this period. This high release > cadence will allow us to quickly get fixes in to users hands and allow > better feedback on the status of the release. > > GHC 8.4 is slated to be released in mid-February but, due to technical > constraints, we are starting the alpha-release cycle a bit later than > planned under the above schedule. For this reason, it would be greatly > appreciated if users could put this alpha through its paces to make up > for lost time. > > As always, do let us know if you encounter any trouble in the course of > testing. Thanks for your help! > > Cheers, > > - Ben > > > [1] https://ghc.haskell.org/trac/ghc/blog/2017-release-schedule > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
From ryan.gl.scott at gmail.com Thu Jan 25 14:09:30 2018 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Thu, 25 Jan 2018 09:09:30 -0500 Subject: [ANNOUNCE] GHC 8.4.1-alpha2 available Message-ID: Forgive me if I'm stating the obvious here, but the status of libraries like primitive, unordered-containers, and haskell-src-exts doesn't really have anything to do with GHC, since they're independent libraries that aren't shipped with GHC in any way. If you're impatient to use the Haskell library ecosystem with GHC 8.4.1, I highly encourage you to use a solution like head.hackage [1], as many library authors are not such early adopters as we are :) Ryan S. ----- [1] https://github.com/hvr/head.hackage From david at well-typed.com Sat Jan 27 02:49:52 2018 From: david at well-typed.com (David Feuer) Date: Fri, 26 Jan 2018 21:49:52 -0500 Subject: GHC builds are broken Message-ID: <3119474.1dq8aADM4E@squirrel> The Linux build has been failing with a segfault. It looks to me as though this started with 0e022e56b130ab9d277965b794e70d8d3fb29533: Turn EvTerm (almost) into CoreExpr. From mail at joachim-breitner.de Sat Jan 27 03:34:22 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 26 Jan 2018 22:34:22 -0500 Subject: GHC builds are broken In-Reply-To: <3119474.1dq8aADM4E@squirrel> References: <3119474.1dq8aADM4E@squirrel> Message-ID: <1517024062.17652.2.camel@joachim-breitner.de> Hi, On Friday, 26 Jan 2018 at 21:49 -0500, David Feuer wrote: > The Linux build has been failing with a segfault. It looks to me as > though this started with 0e022e56b130ab9d277965b794e70d8d3fb29533: > Turn EvTerm (almost) into CoreExpr. I have seen that, but it seemed to be intermittent. I saw a segfault in the differential revision for that patch, but restarting the build made it go away.
Given that this patch is purely a refactoring in the type checker, I believe the problem is somewhere else. Did anyone observe the segfaults locally? Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ From mail at joachim-breitner.de Sat Jan 27 03:46:47 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 26 Jan 2018 22:46:47 -0500 Subject: GHC builds are broken In-Reply-To: <1517024062.17652.2.camel@joachim-breitner.de> References: <3119474.1dq8aADM4E@squirrel> <1517024062.17652.2.camel@joachim-breitner.de> Message-ID: <1517024807.17652.4.camel@joachim-breitner.de> Hi, On Friday, 26 Jan 2018 at 22:34 -0500, Joachim Breitner wrote: > Did anyone observe the segfaults locally? JFTR, none of perf.haskell.org, Travis CI, or CircleCI observes the segfaults. Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ From rae at cs.brynmawr.edu Sat Jan 27 03:53:00 2018 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 26 Jan 2018 22:53:00 -0500 Subject: testsuite failures in DEBUG Message-ID: Hi devs, It seems our CI infrastructure has become too good. My workflow normally involves working with a DEBUG compiler, and then occasionally running the whole testsuite. I don't validate from scratch mid-patch, but it's good to know if I'm failing tests. With HEAD, I'm getting quite a few testsuite failures, and they all seem to be coming from ASSERTs, and they're all problems that I'm pretty sure I didn't cause.
My full list of failures (I'm on Mac) is below, but after looking into 5-6 of them and finding only stuff that isn't my fault, I've given up. So: would it be possible to have our CI infrastructure validate DEBUG mode as well? Then it would be easy to spot where these failures are from.

backpack/cabal/bkpcabal02/bkpcabal02.run bkpcabal02 [bad stderr] (normal)
backpack/cabal/bkpcabal04/bkpcabal04.run bkpcabal04 [bad stderr] (normal)
backpack/cabal/bkpcabal03/bkpcabal03.run bkpcabal03 [bad stderr] (normal)
backpack/cabal/bkpcabal05/bkpcabal05.run bkpcabal05 [bad stderr] (normal)
backpack/cabal/T14304/T14304.run T14304 [bad stderr] (normal)
backpack/cabal/bkpcabal06/bkpcabal06.run bkpcabal06 [bad stderr] (normal)
backpack/cabal/bkpcabal07/bkpcabal07.run bkpcabal07 [bad stderr] (normal)
backpack/cabal/bkpcabal01/bkpcabal01.run bkpcabal01 [bad stderr] (normal)
cabal/cabal04/cabal04.run cabal04 [bad stderr] (normal)
cabal/T12733/T12733.run T12733 [bad stderr] (normal)
cabal/cabal01/cabal01.run cabal01 [bad stderr] (normal)
cabal/cabal03/cabal03.run cabal03 [bad stderr] (normal)
cabal/cabal09/cabal09.run cabal09 [bad stderr] (normal)
cabal/cabal08/cabal08.run cabal08 [bad stderr] (normal)
cabal/cabal05/cabal05.run cabal05 [bad stderr] (normal)
cabal/cabal06/cabal06.run cabal06 [bad stderr] (normal)
dependent/should_compile/T12442.run T12442 [exit code non-0] (normal)
dependent/should_compile/T12176.run T12176 [exit code non-0] (normal)
driver/T3007/T3007.run T3007 [bad stderr] (normal)
gadt/T12087.run T12087 [stderr mismatch] (normal)
ghci/scripts/ghci063.run ghci063 [bad stderr] (ghci)
indexed-types/should_fail/T13877.run T13877 [stderr mismatch] (normal)
parser/should_fail/NumericUnderscoresFail0.run NumericUnderscoresFail0 [stderr mismatch] (normal)
parser/should_fail/NumericUnderscoresFail1.run NumericUnderscoresFail1 [stderr mismatch] (normal)
patsyn/should_compile/T13350/T13350.run T13350 [bad stderr] (normal)
pmcheck/should_compile/T11195.run T11195 [exit code non-0] (normal)
polykinds/T14174a.run T14174a [exit code non-0] (normal)
printer/T13050p.run T13050p [bad exit code] (normal)
safeHaskell/check/pkg01/safePkg01.run safePkg01 [bad stderr] (normal)
simplCore/should_compile/T13025.run T13025 [bad stderr] (normal)
simplCore/should_compile/T13410.run T13410 [exit code non-0] (normal)
typecheck/T13168/T13168.run T13168 [bad stderr] (normal)
typecheck/bug1465/bug1465.run bug1465 [bad stderr] (normal)
typecheck/should_compile/holes2.run holes2 [exit code non-0] (normal)
typecheck/should_compile/valid_substitutions.run valid_substitutions [exit code non-0] (normal)
typecheck/should_compile/holes3.run holes3 [stderr mismatch] (normal)
typecheck/should_compile/holes.run holes [stderr mismatch] (normal)
typecheck/should_compile/T13050.run T13050 [exit code non-0] (normal)
typecheck/should_compile/T13822.run T13822 [exit code non-0] (normal)
typecheck/should_compile/T14590.run T14590 [exit code non-0] (normal)
typecheck/should_compile/T13032.run T13032 [stderr mismatch] (normal)
typecheck/should_fail/T7175.run T7175 [stderr mismatch] (normal)

Thanks! Richard From mail at joachim-breitner.de Sat Jan 27 04:23:13 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 26 Jan 2018 23:23:13 -0500 Subject: testsuite failures in DEBUG In-Reply-To: References: Message-ID: <1517026993.17652.6.camel@joachim-breitner.de> Hi, On Friday, 26 Jan 2018 at 22:53 -0500, Richard Eisenberg wrote: > So: would it be possible to have our CI infrastructure validate DEBUG mode as well? Then it would be easy to spot where these failures are from. Unhelpful comment: Travis used to validate with DEBUG, but it no longer runs the test suite for build-time reasons. So plausibly we could add a -DEBUG variant to CircleCI?
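[Editor's note: for background on the failures Richard describes, the ASSERTs in question are conditional checks in GHC's own sources that are compiled in only when the compiler itself is built with -DDEBUG, which is why an ordinary (non-DEBUG) CI build never sees them fire. The `assert` function from base behaves analogously at the Haskell source level; the sketch below is only an analogy, not GHC's actual internal ASSERT macro.]

```haskell
import Control.Exception (assert)

-- base's assert is checked only when assertions are enabled; it is
-- compiled away under -fignore-asserts (implied by -O). This mirrors
-- how GHC's internal ASSERTs are active only in a -DDEBUG build of
-- the compiler, so only DEBUG builds catch these failures.
checkedDiv :: Int -> Int -> Int
checkedDiv x y = assert (y /= 0) (x `div` y)

main :: IO ()
main = print (checkedDiv 10 2)  -- prints 5
```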
Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ From ben at smart-cactus.org Sat Jan 27 18:36:27 2018 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 27 Jan 2018 13:36:27 -0500 Subject: GHC builds are broken In-Reply-To: <3119474.1dq8aADM4E@squirrel> References: <3119474.1dq8aADM4E@squirrel> Message-ID: <87mv0z9mje.fsf@smart-cactus.org> David Feuer writes: > The Linux build has been failing with a segfault. It looks to me as > though this started with 0e022e56b130ab9d277965b794e70d8d3fb29533: > Turn EvTerm (almost) into CoreExpr. Yes, I know. I believe this is just due to a long-standing crash when the RTS is unable to allocate heap. Unfortunately I had trouble reproducing it earlier in the week. I'll try to have another look today. Cheers, - Ben From mail at joachim-breitner.de Sun Jan 28 15:59:16 2018 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 28 Jan 2018 10:59:16 -0500 Subject: https://downloads.haskell.org/~ghc/master/ Message-ID: <1517155156.1655.7.camel@joachim-breitner.de> Hi, just curious: What is the status of https://downloads.haskell.org/~ghc/master/ where I would expect the documentation of GHC HEAD? It seems to have last been updated in October, and it is also missing the GHC API docs. Cheers, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/
From ben at well-typed.com Sun Jan 28 22:34:07 2018 From: ben at well-typed.com (Ben Gamari) Date: Sun, 28 Jan 2018 17:34:07 -0500 Subject: https://downloads.haskell.org/~ghc/master/ In-Reply-To: <1517155156.1655.7.camel@joachim-breitner.de> References: <1517155156.1655.7.camel@joachim-breitner.de> Message-ID: <87fu6paa07.fsf@smart-cactus.org> Joachim Breitner writes: > Hi, > > just curious: What is the status of > https://downloads.haskell.org/~ghc/master/ > where I would expect the documentation of GHC HEAD? It seems to be last > updated in October, and also be missing the GHC API docs. > It is in principle updated on a nightly basis by a cron job running on a server in my living room. Unfortunately this arrangement has had a terrible history of breaking (this time it broke because `git remote update` was failing due to a changed key). I've fixed it and kicked off a manual run, so it should hopefully be up-to-date in an hour or so. However, I have for a long time planned to move this over to CI. Perhaps I will try to make this transition this week. Cheers, - Ben From matthewtpickering at gmail.com Mon Jan 29 09:35:20 2018 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 29 Jan 2018 09:35:20 +0000 Subject: Nested CPR patch review In-Reply-To: References: Message-ID: Would one of you have time to review this patch now? Is there something more needed? Matt On Sun, Jan 7, 2018 at 10:11 AM, Sebastian Graf wrote: > I've since run NoFib. You can find the results here: > https://phabricator.haskell.org/D4244#119697 > > I wonder if you feel that any more notes are needed?
The general idea of CPR > remained the same, it's just the extension of the DmdResult lattice that > needs some rationale as to why and when these new values are needed. > > I also wonder what the impact of "Slightly strengthen the strictness > analysis" > (https://ghc.haskell.org/trac/ghc/wiki/NestedCPR/Akio2017#Changestothedemandanalyzer) > would be if regarded in isolation. > > On Tue, Jan 2, 2018 at 11:52 AM, Matthew Pickering > wrote: >> >> I don't think anyone has run nofib on the rebased branch yet. >> >> The Akio2017 subpage is a more accurate summary. Sebastian has also >> been adding notes to explain the more intricate parts. >> >> Matt >> >> On Fri, Dec 22, 2017 at 5:27 PM, Simon Peyton Jones >> wrote: >> > Terrific! >> > >> > What are the nofib results? >> > >> > Can we have a couple of artificial benchmarks in cpranal/should_run that >> > show substantial perf improvements because the nested CPR wins in some inner >> > loop? >> > >> > Is https://ghc.haskell.org/trac/ghc/wiki/NestedCPR still an accurate >> > summary of the idea? And the Akio2017 sub-page? It would be easier to >> > review the code if the design documentation accurately described it. >> > >> > I'll look in the new year. Thanks! >> > >> > Simon >> > >> > | -----Original Message----- >> > | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] >> > | Sent: 22 December 2017 17:09 >> > | To: GHC developers ; Simon Peyton Jones >> > | ; Joachim Breitner ; >> > | tkn.akio at gmail.com; Sebastian Graf >> > | Subject: Nested CPR patch review >> > | >> > | Hi all, >> > | >> > | I recently resurrected akio's nested cpr branch and put it on >> > phabricator >> > | for review. >> > | >> > | https://phabricator.haskell.org/D4244 >> > | >> > | Sebastian has kindly been going over it and ironed out a few kinks in >> > the >> > | last few days. He says now that he believes the patch is correct. >> > | >> > | Is there anything else which needs to be done before merging this >> > patch? 
>> > | >> > | Simon, would you perhaps be able to give the patch a look over? >> > | >> > | Cheers, >> > | >> > | Matt > > From simonpj at microsoft.com Mon Jan 29 16:57:10 2018 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 29 Jan 2018 16:57:10 +0000 Subject: Nested CPR patch review In-Reply-To: References: Message-ID: Yes, I apologise for being tardy. It's on my stack. Just swamped. Simon | -----Original Message----- | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] | Sent: 29 January 2018 09:35 | To: Sebastian Graf | Cc: Simon Peyton Jones ; GHC developers ; Joachim Breitner ; | tkn.akio at gmail.com | Subject: Re: Nested CPR patch review | | Would one of you have time to review this patch now? | | Is there something more needed? | | Matt | | On Sun, Jan 7, 2018 at 10:11 AM, Sebastian Graf | wrote: | > I've since run NoFib. You can find the results here: | > https://phabricator.haskell.org/D4244#119697 | > | > I wonder if you feel that any more notes are needed? The general | idea | > of CPR remained the same, it's just the extension of the DmdResult | > lattice that needs some rationale as to why and when these new | values are needed. | > | > I also wonder what the impact of "Slightly strengthen the strictness | > analysis" | > | (https://ghc.haskell.org/trac/ghc/wiki/NestedCPR/Akio2017#Changestothe | > demandanalyzer) | > would be if regarded in isolation. | > | > On Tue, Jan 2, 2018 at 11:52 AM, Matthew Pickering | > wrote: | >> | >> I don't think anyone has run nofib on the rebased branch yet. | >> | >> The Akio2017 subpage is a more accurate summary. Sebastian has also | >> been adding notes to explain the more intricate parts. | >> | >> Matt | >> | >> On Fri, Dec 22, 2017 at 5:27 PM, Simon Peyton Jones | >> wrote: | >> > Terrific! | >> > | >> > What are the nofib results? 
| >> > | >> > Can we have a couple of artificial benchmarks in | cpranal/should_run | >> > that show substantial perf improvements because the nested CPR | wins | >> > in some inner loop? | >> > | >> > Is https://ghc.haskell.org/trac/ghc/wiki/NestedCPR still an | accurate | >> > summary of the idea? And the Akio2017 sub-page? It would be | easier to | >> > review the code if the design documentation accurately described | it. | >> > | >> > I'll look in the new year. Thanks! | >> > | >> > Simon | >> > | >> > | -----Original Message----- | >> > | From: Matthew Pickering [mailto:matthewtpickering at gmail.com] | >> > | Sent: 22 December 2017 17:09 | >> > | To: GHC developers ; Simon Peyton Jones | >> > | ; Joachim Breitner | >> > | ; tkn.akio at gmail.com; Sebastian Graf | >> > | | >> > | Subject: Nested CPR patch review | >> > | | >> > | Hi all, | >> > | | >> > | I recently resurrected akio's nested cpr branch and put it on | >> > phabricator | >> > | for review. | >> > | | >> > | https://phabricator.haskell.org/D4244 | >> > | | >> > | Sebastian has kindly been going over it and ironed out a few | >> > | kinks in | >> > the | >> > | last few days. He says now that he believes the patch is | correct. | >> > | | >> > | Is there anything else which needs to be done before merging | >> > | this | >> > patch? | >> > | | >> > | Simon, would you perhaps be able to give the patch a look | over? | >> > | | >> > | Cheers, | >> > | | >> > | Matt | > | >
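[Editor's note: Simon's request above is for a small benchmark in cpranal/should_run where nested CPR wins in an inner loop. The program below is only an illustrative sketch of the shape such a benchmark could take, not code from the patch. Plain CPR lets the worker of `f` return the pair itself unboxed, roughly as (# Int, Int #); because each recursion level rebuilds the pair from strictly-used Ints, nested CPR could go further and return (# Int#, Int# #), avoiding re-boxing the fields at every level.]

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

-- Non-tail recursion that deconstructs and reconstructs a pair at
-- every level. The bang patterns make the pair's fields strictly
-- demanded, which is the situation where unboxing the fields
-- (nested CPR) and not just the pair constructor (plain CPR)
-- should pay off.
f :: Int -> (Int, Int)
f 0 = (0, 0)
f n = case f (n - 1) of
        (a, b) -> let !s = a + n
                      !c = b + 1
                  in (s, c)

main :: IO ()
main = print (f 1000000)  -- prints (500000500000,1000000)
```

Compiling with -O2 and inspecting -ddump-simpl output with and without the patch should show whether the worker's result fields are passed as unboxed Int#.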