From omeragacan at gmail.com Wed Jan 1 09:34:33 2020
From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=)
Date: Wed, 1 Jan 2020 12:34:33 +0300
Subject: Code generation/SRT question
Message-ID: 

Hi Simon,

In Cmm, if I have a recursive group of functions f and g, and I'm using f's
closure as the SRT for this group, should f's entry block's info table have
f_closure as its SRT? In Cmm syntax:

    f_entry() {
        { info_tbls: [... (c1vn, label: ... rep: ... srt: ??????]
          stack_info: ...
        }
        {offset
          c1vn: ...
        }
    }

Here should I have `f_closure` in the srt field? I'd expect yes, but looking
at the current SRT code, in CmmBuildInfoTables.updInfoSRTs, we have this:

    (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of
      Nothing ->
        -- if we don't add SRT entries to this closure, then we
        -- want to set the srt field in its info table as usual
        (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, [])
      Just srtEntries ->
        srtTrace "maybeStaticFun" (ppr res)
          (info_tbl { cit_rep = new_rep }, res)
        where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ]

Here we only update the SRT field of the block if we're not adding SRT
entries to the function's closure, so in the example above, because we're
using the function as the SRT (and adding SRT entries to its closure), the
SRT field of c1vn won't be updated. Am I missing anything?

Thanks,

Ömer

From juhpetersen at gmail.com Thu Jan 2 02:54:53 2020
From: juhpetersen at gmail.com (Jens Petersen)
Date: Thu, 2 Jan 2020 10:54:53 +0800
Subject: [ANNOUNCE] GHC 8.8.2-rc1 is now available
In-Reply-To: <87lfrh719c.fsf@smart-cactus.org>
References: <87lfrh719c.fsf@smart-cactus.org>
Message-ID: 

On Fri, 13 Dec 2019 at 04:11, Ben Gamari wrote:
> https://downloads.haskell.org/~ghc/8.8.2-rc1

Thanks!

I finally got round to doing some Fedora test builds.
LGTM so far, though only tested lightly.

I will be pushing this to the Fedora ghc:8.8 module testing stream soon.

Jens
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juhpetersen at gmail.com Thu Jan 2 15:47:44 2020
From: juhpetersen at gmail.com (Jens Petersen)
Date: Thu, 2 Jan 2020 23:47:44 +0800
Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-alpha2 released
In-Reply-To: <8736dq90pr.fsf@smart-cactus.org>
References: <8736dq90pr.fsf@smart-cactus.org>
Message-ID: 

On Thu, 12 Dec 2019 at 02:28, Ben Gamari wrote:
> https://downloads.haskell.org/ghc/8.10.1-alpha2/

I also built this for Fedora in 2 test builds (the latter for ARM).

Jens

PS: The reason it took so long is that I made some significant packaging
changes earlier for Fedora 31 (added subpackages for prof), which had made
it difficult to build Haskell packages for both F30 and F31.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ben at smart-cactus.org Thu Jan 2 16:33:20 2020
From: ben at smart-cactus.org (Ben Gamari)
Date: Thu, 02 Jan 2020 11:33:20 -0500
Subject: [ANNOUNCE] GHC 8.8.2-rc1 is now available
In-Reply-To: 
References: <87lfrh719c.fsf@smart-cactus.org>
Message-ID: 

On January 1, 2020 9:54:53 PM EST, Jens Petersen wrote:
>On Fri, 13 Dec 2019 at 04:11, Ben Gamari wrote:
>
>> https://downloads.haskell.org/~ghc/8.8.2-rc1
>>
>
>Thanks!
>
>I finally got round to doing some Fedora test builds.
>LGTM so far, though only tested lightly.
>
>I will be pushing this to the Fedora ghc:8.8 module testing stream
>soon.
>
>Jens

As always, thanks, Jens!

From ben at well-typed.com Sun Jan 5 00:37:00 2020
From: ben at well-typed.com (Ben Gamari)
Date: Sat, 04 Jan 2020 19:37:00 -0500
Subject: A small but useful tool for performance characterisation
Message-ID: <87blri68ns.fsf@smart-cactus.org>

Hi everyone,

I have recently been doing a fair amount of performance characterisation
and have long wanted a convenient means of collecting GHC runtime
statistics for later analysis. For this I quickly developed a small
wrapper utility [1].

To see what it does, let's consider an example.
Say we made a change to GHC which we believe might affect the runtime
performance of Program.hs. We could quickly check this by running,

    $ ghc-before/_build/stage1/bin/ghc -O Program.hs
    $ ghc_perf.py -o before.json ./Program
    $ ghc-after/_build/stage1/bin/ghc -O Program.hs
    $ ghc_perf.py -o after.json ./Program

This will produce two files, before.json and after.json, which contain
the various runtime statistics emitted by +RTS -s --machine-readable.
These files are in the same format as is used by my nofib branch [2] and
therefore can be compared using `nofib-compare` from that branch.

In addition to being able to collect runtime metrics, ghc_perf is also
able to collect performance counters (on Linux only) using perf. For
instance,

    $ ghc_perf.py -o program.json \
        -e instructions,cycles,cache-misses ./Program

will produce program.json containing not only RTS statistics but also
event counts from the perf instructions, cycles, and cache-misses
events. Alternatively, passing simply `ghc_perf.py --perf` enables a
reasonable default set of events (namely instructions, cycles,
cache-misses, branches, and branch-misses).

Finally, ghc_perf can also handle repeated runs. For instance,

    $ ghc_perf.py -o program.json -r 5 --summarize \
        -e instructions,cycles,cache-misses ./Program

will run Program 5 times, emit all of the collected samples to
program.json, and produce a (very basic) statistical summary of what it
collected on stdout.

Note that there are a few possible TODOs that I've been considering:

 * I chose JSON as the output format to accommodate structured data
   (e.g. capture experimental parameters in a structured way). However,
   in practice this choice has led to significantly more inconvenience
   than I would like, especially given that so far I've only used the
   format to capture basic key/value pairs. Perhaps reverting to CSV
   would be preferable.

 * It might be nice to also add support for cachegrind.

Anyways, I hope that others find this as useful as I have.
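For the basic key/value case, post-processing the collected samples by hand
is also easy. Below is a rough Python sketch of the kind of per-metric
summary and before/after comparison described above. Note that the metric
names and the in-memory sample layout here are invented for illustration
(the real JSON emitted by ghc_perf.py may be shaped differently), and
nofib-compare remains the proper comparison tool:

```python
import statistics

# Hypothetical flat key/value samples, one dict per run, as a repeated-run
# collection might yield after json.load(). The metric names are made up.
before_runs = [{"bytes_allocated": 1_000_000, "mutator_cpu_seconds": 1.00},
               {"bytes_allocated": 1_000_000, "mutator_cpu_seconds": 1.10}]
after_runs  = [{"bytes_allocated":   900_000, "mutator_cpu_seconds": 0.95},
               {"bytes_allocated":   900_000, "mutator_cpu_seconds": 0.85}]

def summarize(runs):
    """Per-metric (mean, population stddev), like a very basic --summarize."""
    return {k: (statistics.mean(r[k] for r in runs),
                statistics.pstdev(r[k] for r in runs))
            for k in runs[0]}

before, after = summarize(before_runs), summarize(after_runs)
for metric in before:
    change = after[metric][0] / before[metric][0] - 1
    print(f"{metric}: {change:+.1%} vs. baseline")
```

The point is just that flat key/value data is trivial to consume; anything
more deeply structured is where the JSON-vs-CSV trade-off mentioned above
starts to bite.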
Cheers, - Ben [1] https://gitlab.haskell.org/bgamari/ghc-utils/blob/master/ghc_perf.py [2] https://gitlab.haskell.org/ghc/nofib/merge_requests/24 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Sun Jan 5 01:51:07 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Sat, 4 Jan 2020 20:51:07 -0500 Subject: A small but useful tool for performance characterisation In-Reply-To: <87blri68ns.fsf@smart-cactus.org> References: <87blri68ns.fsf@smart-cactus.org> Message-ID: Hi Ben, This sounds great. Is there a place on the wiki to catalog tools like this? Thanks for telling us about it! Richard > On Jan 4, 2020, at 7:37 PM, Ben Gamari wrote: > > Hi everyone, > > I have recently been doing a fair amount of performance characterisation > and have long wanted a convenient means of collecting GHC runtime > statistics for later analysis. For this I quickly developed a small > wrapper utility [1]. > > To see what it does, let's consider an example. Say we made a change to > GHC which we believe might affect the runtime performance of Program.hs. > We could quickly check this by running, > > $ ghc-before/_build/stage1/bin/ghc -O Program.hs > $ ghc_perf.py -o before.json ./Program > $ ghc-before/_build/stage1/bin/ghc -O Program.hs > $ ghc_perf.py -o after.json ./Program > > This will produce two files, before.json and after.json, which contain > the various runtime statistics emitted by +RTS -s --machine-readable. > These files are in the same format as is used by my nofib branch [2] and > therefore can be compared using `nofib-compare` from that branch. > > In addition to being able to collect runtime metrics, ghc_perf is also > able to collect performance counters (on Linux only) using perf. 
For > instance, > > $ ghc_perf.py -o program.json \ > -e instructions,cycles,cache-misses ./Program > > will produce program.json containing not only RTS statistics but also > event counts from the perf instructions, cycles, and cache-misses > events. Alternatively, passing simply `ghc_perf.py --perf` enables a > reasonable default set of events (namely instructions, cycles, > cache-misses, branches, and branch-misses). > > Finally, ghc_perf can also handle repeated runs. For instance, > > $ ghc_perf.py -o program.json -r 5 --summarize \ > -e instructions,cycles,cache-misses ./Program > > will run Program 5 times, emit all of the collected samples to > program.json, and produce a (very basic) statistical summary of what it > collected on stdout. > > Note that there are a few possible TODOs that I've been considering: > > * I chose JSON as the output format to accomodate structured data (e.g. > capture experimental parameters in a structured way). However, in > practice this choice has lead to significantly more inconvenience > than I would like, especially given that so far I've only used the > format to capture basic key/value pairs. Perhaps reverting to CSV > would be preferable. > > * It might be nice to also add support for cachegrind. > > Anyways, I hope that others find this as useful as I have. > > Cheers, > > - Ben > > > [1] https://gitlab.haskell.org/bgamari/ghc-utils/blob/master/ghc_perf.py > [2] https://gitlab.haskell.org/ghc/nofib/merge_requests/24 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Sun Jan 5 02:38:00 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Sat, 4 Jan 2020 21:38:00 -0500 Subject: Superclasses of type families returning constraints? 
In-Reply-To: <3A25348F-837E-4B2D-B4B6-E6A8981C5F96@gmail.com> References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <1E0156EB-6241-4AF8-B509-041EC8DD0239@richarde.dev> <3A25348F-837E-4B2D-B4B6-E6A8981C5F96@gmail.com> Message-ID: <5EB80769-06EB-45F7-B4C0-F261F3B87842@richarde.dev> > On Dec 30, 2019, at 11:16 AM, Alexis King wrote: > > I don’t know if there is a way to encode those using the constraint language available in source Haskell today. I gave it a good try, but failed. The trick I was hoping to leverage was *partial improvement*. Here is an example of partial improvement: > class FD1 a b | a -> b > class FD2 a b | a -> b > > instance FD1 a b => FD2 (Maybe a) (Maybe b) > > f :: FD2 a b => a -> b > f = undefined > > x = f (Just True) This program will require solving [W] FD2 (Maybe Bool) beta, where the [W] indicates that we have a "wanted" constraint (a goal we are trying to satisfy) and beta is a unification variable. GHC can figure out from the fundep on FD2 that the only FD2 instance that can apply is the one we see above. It thus knows that beta must be Maybe beta2 (for some fresh beta2). This is the essence of partial improvement, when we solve one unification variable with an expression involving another (but is more informative somehow). I thought that, maybe, we could use partial improvement to give you what you want, by replacing the RHSs of the type family with expressions involving classes with fundeps. (Injective type families work very analogously, but it's not worthwhile showing the translation of this idea to that domain here.) I couldn't quite get it to work out, though. Maybe you can, armed with this description... but I doubt it. The problem is that partial improvement must always be a shadow of some "total" improvement process. In the example above, note that I haven't written down an instance of FD1. Clearly, we'll need to satisfy the FD1 constraint in order to satisfy the FD2 constraint. 
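To make the improvement step concrete, here is a runnable variant of the
example. Everything beyond the two class heads is invented for illustration:
the conv1/conv2 methods, the FD1 Bool Int instance, and main are not part of
the original example, which deliberately left FD1 uninstantiated. With these
additions, the program only type-checks because fundep improvement first
refines the result type to Maybe beta2 and then fixes beta2 := Int:

```haskell
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}

-- The classes from the example above, given methods so we can run them.
class FD1 a b | a -> b where
  conv1 :: a -> b

class FD2 a b | a -> b where
  conv2 :: a -> b

-- Invented instance, purely so that FD1 Bool beta2 can be discharged.
instance FD1 Bool Int where
  conv1 b = if b then 1 else 0

instance FD1 a b => FD2 (Maybe a) (Maybe b) where
  conv2 = fmap conv1

-- Solving [W] FD2 (Maybe Bool) beta improves beta to Maybe beta2 via the
-- FD2 instance, and the FD1 fundep then improves beta2 to Int, so the
-- ambient Show constraint is solved without any type annotation.
main :: IO ()
main = print (conv2 (Just True))  -- prints: Just 1
```

This is exactly the "total" improvement the partial step must shadow: the
wanted is solved by filling in metavariables, which is why it cannot help
once the result type is rigid.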
The problem is that, in your example, we really only want partial improvement... and I don't think there's a way to state that. What's tantalizing is that GHC already does all this partial improvement stuff -- it's just that we can't seem to write the instance declarations the way we want to satisfy your use case. I don't know if this really moves us forward at all, but maybe a description of my failure can lead to someone's success. Or perhaps it will lead to a language improvement that will allow us to express what you want. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sun Jan 5 03:34:07 2020 From: ben at well-typed.com (Ben Gamari) Date: Sat, 04 Jan 2020 22:34:07 -0500 Subject: A small but useful tool for performance characterisation In-Reply-To: References: <87blri68ns.fsf@smart-cactus.org> Message-ID: There is the "useful tools" page [1] which has mentioned the ghc-utils repository where the aforementioned script lives for a few years now. That being said, I get the impression that not many people have found it via this page. Everyone who I know of who has used anything in ghc-utils has discovered it via word of mouth. I'm not sure what to do about this. The page isn't *that* buried: from the wiki home page one arrives at it via the link path Working Conventions/Various tools. Cheers, - Ben On January 4, 2020 8:51:07 PM EST, Richard Eisenberg wrote: >Hi Ben, > >This sounds great. Is there a place on the wiki to catalog tools like >this? > >Thanks for telling us about it! >Richard > >> On Jan 4, 2020, at 7:37 PM, Ben Gamari wrote: >> >> Hi everyone, >> >> I have recently been doing a fair amount of performance >characterisation >> and have long wanted a convenient means of collecting GHC runtime >> statistics for later analysis. For this I quickly developed a small >> wrapper utility [1]. >> >> To see what it does, let's consider an example. 
Say we made a change >to >> GHC which we believe might affect the runtime performance of >Program.hs. >> We could quickly check this by running, >> >> $ ghc-before/_build/stage1/bin/ghc -O Program.hs >> $ ghc_perf.py -o before.json ./Program >> $ ghc-before/_build/stage1/bin/ghc -O Program.hs >> $ ghc_perf.py -o after.json ./Program >> >> This will produce two files, before.json and after.json, which >contain >> the various runtime statistics emitted by +RTS -s --machine-readable. >> These files are in the same format as is used by my nofib branch [2] >and >> therefore can be compared using `nofib-compare` from that branch. >> >> In addition to being able to collect runtime metrics, ghc_perf is >also >> able to collect performance counters (on Linux only) using perf. For >> instance, >> >> $ ghc_perf.py -o program.json \ >> -e instructions,cycles,cache-misses ./Program >> >> will produce program.json containing not only RTS statistics but also >> event counts from the perf instructions, cycles, and cache-misses >> events. Alternatively, passing simply `ghc_perf.py --perf` enables a >> reasonable default set of events (namely instructions, cycles, >> cache-misses, branches, and branch-misses). >> >> Finally, ghc_perf can also handle repeated runs. For instance, >> >> $ ghc_perf.py -o program.json -r 5 --summarize \ >> -e instructions,cycles,cache-misses ./Program >> >> will run Program 5 times, emit all of the collected samples to >> program.json, and produce a (very basic) statistical summary of what >it >> collected on stdout. >> >> Note that there are a few possible TODOs that I've been considering: >> >> * I chose JSON as the output format to accomodate structured data >(e.g. >> capture experimental parameters in a structured way). However, in >> practice this choice has lead to significantly more inconvenience >> than I would like, especially given that so far I've only used the >> format to capture basic key/value pairs. 
Perhaps reverting to CSV >> would be preferable. >> >> * It might be nice to also add support for cachegrind. >> >> Anyways, I hope that others find this as useful as I have. >> >> Cheers, >> >> - Ben >> >> >> [1] >https://gitlab.haskell.org/bgamari/ghc-utils/blob/master/ghc_perf.py >> [2] https://gitlab.haskell.org/ghc/nofib/merge_requests/24 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... URL: From derek at lam.io Sun Jan 5 06:33:53 2020 From: derek at lam.io (Derek Lam) Date: Sat, 04 Jan 2020 22:33:53 -0800 Subject: How to specify import paths for a frontend plugin Message-ID: Hi ghc-devs, I’m making a first attempt to make a frontend plugin, to resolve cabal packages in the GHC API. However I’m running into troubles with module resolution in the GHC API, because I can’t control where it will search for modules at all. I've attached a minimal example, with a frontend plugin definition that can’t find modules (Plugin.hs), and an equivalent standalone program that does (Main.hs). Specifically, I'm following a solution Edward Yang published in 2017 (http://blog.ezyang.com/2017/02/how-to-integrate-ghc-api-programs-with-cabal/), where the frontend plugin is called through a helper script that passes flags forwarded from `cabal repl`. 
To test the plugin directly with GHC, I collected the args through the helper script and filtered them to the minimal set that made the plugin run:   ghc --frontend Plugin -itarget -package-db dist-newstyle/packagedb/ghc-8.6.5 Plugin -plugin-package sandbox -hide-all-packages This, as well as the full argument set, would complain that it can't find the target module under `./target/A.hs`:   : error: module ‘A’ cannot be found locally It does when the import path arg `-itarget` is absolute. Still, its `importPaths` are what I expect: [".", "target"], and the standalone program finds the target module with the same `importPaths`. I've tested this in GHC 8.6.5, 8.4.2 and 8.2.2, making me sure I'm just missing something, but I haven’t found help in the docs yet. I really appreciate some help to draw my hours over this to a close! Thanks, Derek PS the full invocation of the plugin (with absolute paths swapped with relative ones, except `-itarget` which was relative to begin with) is: ghc --frontend Plugin -plugin-package sandbox -fbuilding-cabal-package -O0 -outputdir dist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -odir dist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -hidir dist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -stubdir dist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -i -idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -i. 
-itarget -idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build/autogen -idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build/global-autogen -Idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build/autogen -Idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build/global-autogen -Idist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build -optP-include -optPdist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/build/autogen/cabal_macros.h -this-unit-id sandbox-0.1.0.0-inplace -hide-all-packages -Wmissing-home-modules -no-user-package-db -package-db /Users/derek-lam/.cabal/store/ghc-8.6.5/package.db -package-db dist-newstyle/packagedb/ghc-8.6.5 -package-db dist-newstyle/build/x86_64-osx/ghc-8.6.5/sandbox-0.1.0.0/package.conf.inplace -package-id base-4.12.0.0 -package-id ghc-8.6.5 -XHaskell2010 Plugin -hide-all-packages -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: plugin.tar.gz Type: application/x-gzip Size: 1098 bytes Desc: not available URL: From lexi.lambda at gmail.com Sun Jan 5 18:02:27 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Sun, 5 Jan 2020 12:02:27 -0600 Subject: Superclasses of type families returning constraints? In-Reply-To: <5EB80769-06EB-45F7-B4C0-F261F3B87842@richarde.dev> References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <1E0156EB-6241-4AF8-B509-041EC8DD0239@richarde.dev> <3A25348F-837E-4B2D-B4B6-E6A8981C5F96@gmail.com> <5EB80769-06EB-45F7-B4C0-F261F3B87842@richarde.dev> Message-ID: > On Jan 4, 2020, at 20:38, Richard Eisenberg wrote: > > I thought that, maybe, we could use partial improvement to give you what you want I think that improvement alone cannot possibly be enough here, as improvement by its nature does not provide evidence. 
Improvement allows us to take a set of constraints like [G] FD2 a Bool [W] FD2 a b and derive [WD] b ~ Bool, but importantly this does not produce a new given! This only works if b is a metavariable, since we can solve the new wanted by simply taking b := Bool, but if b is rigid, we are just as stuck as before. In other words, improvement only helps resolve ambiguities, not derive any new information. That’s why I think the “superclass” characterization is more useful. If instead we express your FD2 class as class b ~ B a => FD2 a b where type B a then if we have [G] FD2 a Bool, we can actually derive [G] B a ~ Bool, which is much stronger than what we were able to derive using improvement. I imagine you are aware of all of the above already, but it’s not immediately clear to me from your description why you need functional dependencies (and therefore improvement) rather than this kind of approximation using superclasses and type families. Would modeling things with that approximation help at all? If not, why not? I think that would help me understand what you’re saying a little better. Thanks, Alexis From ben at well-typed.com Sun Jan 5 18:30:34 2020 From: ben at well-typed.com (Ben Gamari) Date: Sun, 05 Jan 2020 13:30:34 -0500 Subject: hadrian-util: An experiment in a more usable hadrian UX Message-ID: <877e2569iy.fsf@smart-cactus.org> Hi everyone, For the past few months I have been using Hadrian for the majority of my GHC builds. 
In due course I have encountered a few papercuts: * hadrian/cabal.build.sh is quite wordy (#16250); moreover, you need to be in the source root to invoke it (#16667) * editing hadrian.settings is quite difficult due to the lack of availability of tab-completion in vim * maintaining multiple build roots is quite error-prone since you must remember which build flavour you used for each (#16481, #16638) * there is no equivalent to setting `stage=2` in `mk/build.mk` to make the stage-1-freeze persistent To address these I cobbled together a small wrapper, hadrian-util. I have this installed in my home-manager environment with a shell alias, `hu`, meaning that building GHC is as easy as typing `hu run` anywhere in the tree. As discussed in the README, `hadrian-util` supports multiple build roots, has a moderately convenient interface for manipulating hadrian.settings (with completion!) and has enough persistent state to eliminate most of the error-prone boilerplate from Hadrian invocations without being confusing. There is the question of what the long-term future of hadrian-util should be. Arguably it is merely a hack papering over some of the shortcomings of Hadrian's current UX; perhaps eventually these will be fixed. However, in the meantime, I've found that hadrian-util makes hadrian quite pleasant to use. I hope others also find this useful. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Sun Jan 5 19:37:33 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 05 Jan 2020 14:37:33 -0500 Subject: Fix for ticket 8095 in ghc-8.10.1 In-Reply-To: References: Message-ID: <874kx966ff.fsf@smart-cactus.org> Rene writes: > Hello, > > I wondering if the fix for https://gitlab.haskell.org/ghc/ghc/issues/8095 is going to make it into ghc-8.10.1 ? > Sadly it is not. I'm aiming for 8.12 at this point. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Sun Jan 5 19:39:39 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 05 Jan 2020 14:39:39 -0500 Subject: .hie files for pre-installed GHC libraries In-Reply-To: References: Message-ID: <871rsd66br.fsf@smart-cactus.org> Csaba Hruska writes: > Hi, > > Is it planned to include the .hie files of the base and other libraries in > the GHC binary download package? > Not for 8.10. This is tracked as #16901 and there is an open MR (!1337). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From carter.schonwald at gmail.com Mon Jan 6 03:11:30 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 5 Jan 2020 22:11:30 -0500 Subject: help understanding a ghc assert failure wrt coercions Message-ID: Hey all, I've been spending a little bit of time hacking out my own little "erase coercions" branch, and with asserts enabled i'm seeing a very strange panic that seems to be invalid? namely: its stating that the types have changed, BUT the error output indicates that the types are the same on both sides! ghc: panic! (the 'impossible' happened) (GHC version 8.11.0.20200106: ASSERT failed! optCoercion changed types! 
in_co: Univ(representational Erased :: IO a_a2Pd, State# RealWorld -> (# State# RealWorld, a_a2Pd #)) in_ty1: IO a_a2Pd in_ty2: State# RealWorld -> (# State# RealWorld, a_a2Pd #) out_co: Univ(representational Erased :: IO a_a2Pd, State# RealWorld -> (# State# RealWorld, a_a2Pd #)) out_ty1: IO a_a2Pd out_ty2: State# RealWorld -> (# State# RealWorld, a_a2Pd #) subst: [TCvSubst In scope: InScope {wild_00 augment ++ build foldr eqString bindIO returnIO otherwise assert thenIO breakpoint breakpointCond map $ join f_a19W x_a19X x_a1hI s_a1hZ $c==_a2rD $c/=_a2rM $cp1Ord_a2rY $ccompare_a2s0 $c<_a2se $c<=_a2ss $c>_a2sy $c>=_a2sE $cmax_a2sL $cmin_a2sS $cp1Monoid_a2t4 $cmappend_a2tf $cmconcat_a2tl $c>>=_a2tD $c>>_a2tS $creturn_a2u3 $cpure_a2uo $c<*>_a2uA $cliftA2_a2uN $c*>_a2v0 $c<*_a2vb $c>>=_a2vw $c>>_a2vJ $creturn_a2vU $cpure_a2wd $c<*>_a2wn $cliftA2_a2wy $c*>_a2wL $c<*_a2wW $c>>=_a2xf $c>>_a2xq $creturn_a2xB $c<*>_a2y0 $cliftA2_a2y9 $c*>_a2yj $c<*_a2yu $cmappend_a2yS $cmconcat_a2yY $cmappend_a2ze $cmconcat_a2zk $cp1Monoid_a2zE $cmempty_a2zG $cmappend_a2zU $cmconcat_a2A0 $cp1Monoid_a2Ai $cmempty_a2Ak $cmappend_a2Aw $cmconcat_a2AC $cp1Monoid_a2AS $cmempty_a2AU $cmappend_a2B4 $cmconcat_a2Ba $cp1Monoid_a2Bo $cmempty_a2Bq $cmappend_a2By $cmconcat_a2BE $cmappend_a2BU $cmconcat_a2C0 $cp1Monoid_a2Cb $cmappend_a2Cj $cmconcat_a2Cp $cmappend_a2CH $cmconcat_a2CN $c<>_a2CX $csconcat_a2D8 $cstimes_a2De $c<>_a2Dt $csconcat_a2DA $c<>_a2DV $csconcat_a2DZ $cstimes_a2E5 $c<>_a2Eu $csconcat_a2EI $cstimes_a2EO $c<>_a2Fp $csconcat_a2FB $cstimes_a2FH $c<>_a2Gc $csconcat_a2Gm $cstimes_a2Gs $c<>_a2GR $csconcat_a2GZ $cstimes_a2H5 $c<>_a2Hm $csconcat_a2Hq $cstimes_a2Hu $c<>_a2HG $csconcat_a2HM $cstimes_a2HS $c<>_a2I6 $csconcat_a2Id $cstimes_a2Ij $csconcat_a2IC $cstimes_a2II $c>>=_a2J0 $c>>_a2JO $creturn_a2JZ $cpure_a2Ke $c<*>_a2Kl $cliftA2_a2Kw $c*>_a2KJ $c<*_a2KU $cfmap_a2L7 $c<$_a2Lj $cmzero_a2LF $cmplus_a2LO $cmzero_a2M7 $cmplus_a2Mg $cmzero_a2Mz $cmplus_a2MI $csome_a2Na $cmany_a2Nj $csome_a2NK 
$cmany_a2NT $c<|>_a2Oe $csome_a2Oj $cmany_a2Os $c>>_a2OP $creturn_a2P0 $cfmap_a2Pb $c<$_a2Ps a_a2Pu b_a2Pv $c>>=_a2PJ $creturn_a2Q2 $c>>=_a2Qh $creturn_a2Qz $c>>=_a2QP $c>>_a2QV $creturn_a2R6 $c<*>_a2Rr $cliftA2_a2RC $c<*_a2RX $cpure_a2Se $c<*>_a2Sj $cliftA2_a2Sr $c*>_a2SA $c<*_a2SI $c<*>_a2T5 $cliftA2_a2Th $c*>_a2Tq $c<*_a2Tx $c<*>_a2TW $cliftA2_a2U2 $c*>_a2U9 $c<*_a2Uk $c<$_a2UF $cfmap_a2US $c<$_a2V0 $cfmap_a2Ve $c<$_a2Vk $c<$_a2VH $cfmap_a2VX $c<$_a2W3 $cfmap_a2Wi $c<$_a2Wo $krep_a37a $krep_a37b $krep_a37c $krep_a37d $krep_a37e $krep_a37f $krep_a37g $krep_a37h $krep_a37i $krep_a37j $krep_a37k $krep_a37l $krep_a37m $krep_a37n $krep_a37o $krep_a37p $krep_a37q $krep_a37r $krep_a37s $sap_d3dm $sap_d3dn $sliftM5_d3ds $sliftM5_d3dt $sliftM4_d3dy $sliftM4_d3dz $sliftM3_d3dE $sliftM3_d3dF $sliftM2_d3dK $sliftM2_d3dL $sliftM_d3dQ $sliftM_d3dR $swhen_d3e3 $swhen_d3e4 $s=<<_d3e9 $sliftA3_d3eh $sliftA3_d3ei $sliftA_d3em $sliftA_d3en $tcMonoid $tcSemigroup $trModule <**> liftA liftA3 =<< when sequence mapM liftM liftM2 liftM3 liftM4 liftM5 ap mapFB unsafeChr ord minInt maxInt id const . flip $! 
until asTypeOf failIO unIO getTag quotInt remInt divInt modInt quotRemInt divModInt divModInt# shiftL# shiftRL# iShiftL# iShiftRA# iShiftRL# $tcFunctor $dm<$ $fFunctor[] $fFunctorMaybe $fFunctor(,) $fFunctor-> $fFunctor(,,,) $fFunctor(,,) $tcApplicative $dm<*> $dmliftA2 $dm*> $dm<* $fApplicativeIO $fApplicative[] $fApplicativeMaybe $fApplicative-> $tcMonad $dm>> $dmreturn $fMonadIO $fFunctorIO $fMonad[] $fMonadMaybe $fMonad-> $tcAlternative $dmsome $dmmany $fAlternativeIO $fAlternative[] $fAlternativeMaybe $tcMonadPlus $dmmzero $dmmplus $fMonadPlusIO $fMonadPlus[] $fMonadPlusMaybe $tc':| $tcNonEmpty $fMonadNonEmpty $fApplicativeNonEmpty $fFunctorNonEmpty $dmsconcat $dmstimes $fSemigroupIO $fSemigroupMaybe $fSemigroupOrdering $fSemigroup(,,,,) $fSemigroup(,,,) $fSemigroup(,,) $fSemigroup(,) $fSemigroup() $fSemigroup-> $fSemigroupNonEmpty $fSemigroup[] $dmmappend $dmmconcat $tc'C:Monoid $fMonoidIO $fMonad(,,,) $fApplicative(,,,) $fMonad(,,) $fApplicative(,,) $fMonad(,) $fApplicative(,) $fMonoidMaybe $fMonoidOrdering $fMonoid(,,,,) $fMonoid(,,,) $fMonoid(,,) $fMonoid(,) $fMonoid() $fMonoid-> $fMonoid[] $tc'O $tcOpaque $fEqNonEmpty $fOrdNonEmpty returnIO_s3A8 failIO_s3A9 $cempty_s3Ab unIO_s3Ac thenIO_s3Ah bindIO_s3Am $trModule_s3AL $trModule_s3AM $trModule_s3AN $trModule_s3AO $krep_s3AP $tcFunctor_s3AQ $tcFunctor_s3AR $tcApplicative_s3AS $tcApplicative_s3AT $tcMonad_s3AU $tcMonad_s3AV $tcAlternative_s3AW $tcAlternative_s3AX $tcMonadPlus_s3AY $tcMonadPlus_s3AZ $tcNonEmpty_s3B0 $tcNonEmpty_s3B1 $krep_s3B2 $tc':|_s3B3 $tc':|_s3B4 $tcSemigroup_s3B5 $tcSemigroup_s3B6 $krep_s3B7 $tcMonoid_s3B8 $tcMonoid_s3B9 $krep_s3Ba $tc'C:Monoid_s3Bb $tc'C:Monoid_s3Bc $tcOpaque_s3Bd $tcOpaque_s3Be $tc'O_s3Bf $tc'O_s3Bg $cliftA2_s3BL $cfmap_s3BP $c<*>_s3BT $sap_s3BW $sliftM2_s3BZ $sliftM5_s3Cc $sliftM4_s3Ch $sliftM3_s3Cl $sliftM_s3Co f_s3FS} Type env: [a2Pd :-> b_a2Pv, a2Pe :-> a_a2Pu] Co env: []] Call stack: CallStack (from HasCallStack): callStackDoc, called at 
compiler/utils/Outputable.hs:1173:37 in ghc:Outputable pprPanic, called at compiler/utils/Outputable.hs:1243:5 in ghc:Outputable assertPprPanic, called at compiler/types/OptCoercion.hs:122:41 in ghc:OptCoercion -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jan 6 04:44:13 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 05 Jan 2020 23:44:13 -0500 Subject: help understanding a ghc assert failure wrt coercions In-Reply-To: References: Message-ID: <220A8356-299E-46DF-9698-4D76B57028B6@smart-cactus.org> On January 5, 2020 10:11:30 PM EST, Carter Schonwald wrote: >Hey all, >I've been spending a little bit of time hacking out my own little >"erase >coercions" >branch, and with asserts enabled i'm seeing a very strange panic that >seems >to be invalid? >namely: its stating that the types have changed, > >BUT the error output indicates that the types are the same on both >sides! > > > >ghc: panic! (the 'impossible' happened) > (GHC version 8.11.0.20200106: >ASSERT failed! > optCoercion changed types! 
> in_co: > Univ(representational Erased > :: IO a_a2Pd, State# RealWorld -> (# State# RealWorld, a_a2Pd #)) > in_ty1: IO a_a2Pd > in_ty2: State# RealWorld -> (# State# RealWorld, a_a2Pd #) > out_co: > Univ(representational Erased > :: IO a_a2Pd, State# RealWorld -> (# State# RealWorld, a_a2Pd #)) > out_ty1: IO a_a2Pd > out_ty2: State# RealWorld -> (# State# RealWorld, a_a2Pd #) > subst: > [TCvSubst > In scope: InScope {wild_00 augment ++ build foldr eqString bindIO > returnIO otherwise assert thenIO breakpoint >breakpointCond map $ > join f_a19W x_a19X x_a1hI s_a1hZ $c==_a2rD >$c/=_a2rM $cp1Ord_a2rY > $ccompare_a2s0 $c<_a2se $c<=_a2ss $c>_a2sy >$c>=_a2sE $cmax_a2sL > $cmin_a2sS $cp1Monoid_a2t4 $cmappend_a2tf >$cmconcat_a2tl $c>>=_a2tD > $c>>_a2tS $creturn_a2u3 $cpure_a2uo $c<*>_a2uA >$cliftA2_a2uN > $c*>_a2v0 $c<*_a2vb $c>>=_a2vw $c>>_a2vJ >$creturn_a2vU $cpure_a2wd > $c<*>_a2wn $cliftA2_a2wy $c*>_a2wL $c<*_a2wW >$c>>=_a2xf $c>>_a2xq > $creturn_a2xB $c<*>_a2y0 $cliftA2_a2y9 $c*>_a2yj >$c<*_a2yu > $cmappend_a2yS $cmconcat_a2yY $cmappend_a2ze >$cmconcat_a2zk > $cp1Monoid_a2zE $cmempty_a2zG $cmappend_a2zU >$cmconcat_a2A0 > $cp1Monoid_a2Ai $cmempty_a2Ak $cmappend_a2Aw >$cmconcat_a2AC > $cp1Monoid_a2AS $cmempty_a2AU $cmappend_a2B4 >$cmconcat_a2Ba > $cp1Monoid_a2Bo $cmempty_a2Bq $cmappend_a2By >$cmconcat_a2BE > $cmappend_a2BU $cmconcat_a2C0 $cp1Monoid_a2Cb >$cmappend_a2Cj > $cmconcat_a2Cp $cmappend_a2CH $cmconcat_a2CN >$c<>_a2CX > $csconcat_a2D8 $cstimes_a2De $c<>_a2Dt >$csconcat_a2DA $c<>_a2DV > $csconcat_a2DZ $cstimes_a2E5 $c<>_a2Eu >$csconcat_a2EI $cstimes_a2EO > $c<>_a2Fp $csconcat_a2FB $cstimes_a2FH $c<>_a2Gc >$csconcat_a2Gm > $cstimes_a2Gs $c<>_a2GR $csconcat_a2GZ >$cstimes_a2H5 $c<>_a2Hm > $csconcat_a2Hq $cstimes_a2Hu $c<>_a2HG >$csconcat_a2HM $cstimes_a2HS > $c<>_a2I6 $csconcat_a2Id $cstimes_a2Ij >$csconcat_a2IC $cstimes_a2II > $c>>=_a2J0 $c>>_a2JO $creturn_a2JZ $cpure_a2Ke >$c<*>_a2Kl > $cliftA2_a2Kw $c*>_a2KJ $c<*_a2KU $cfmap_a2L7 >$c<$_a2Lj > $cmzero_a2LF 
$cmplus_a2LO $cmzero_a2M7 >$cmplus_a2Mg $cmzero_a2Mz > $cmplus_a2MI $csome_a2Na $cmany_a2Nj $csome_a2NK >$cmany_a2NT > $c<|>_a2Oe $csome_a2Oj $cmany_a2Os $c>>_a2OP >$creturn_a2P0 > $cfmap_a2Pb $c<$_a2Ps a_a2Pu b_a2Pv $c>>=_a2PJ >$creturn_a2Q2 > $c>>=_a2Qh $creturn_a2Qz $c>>=_a2QP $c>>_a2QV >$creturn_a2R6 > $c<*>_a2Rr $cliftA2_a2RC $c<*_a2RX $cpure_a2Se >$c<*>_a2Sj > $cliftA2_a2Sr $c*>_a2SA $c<*_a2SI $c<*>_a2T5 >$cliftA2_a2Th > $c*>_a2Tq $c<*_a2Tx $c<*>_a2TW $cliftA2_a2U2 >$c*>_a2U9 $c<*_a2Uk > $c<$_a2UF $cfmap_a2US $c<$_a2V0 $cfmap_a2Ve >$c<$_a2Vk $c<$_a2VH > $cfmap_a2VX $c<$_a2W3 $cfmap_a2Wi $c<$_a2Wo >$krep_a37a $krep_a37b > $krep_a37c $krep_a37d $krep_a37e $krep_a37f >$krep_a37g $krep_a37h > $krep_a37i $krep_a37j $krep_a37k $krep_a37l >$krep_a37m $krep_a37n > $krep_a37o $krep_a37p $krep_a37q $krep_a37r >$krep_a37s $sap_d3dm > $sap_d3dn $sliftM5_d3ds $sliftM5_d3dt >$sliftM4_d3dy $sliftM4_d3dz > $sliftM3_d3dE $sliftM3_d3dF $sliftM2_d3dK >$sliftM2_d3dL > $sliftM_d3dQ $sliftM_d3dR $swhen_d3e3 $swhen_d3e4 >$s=<<_d3e9 > $sliftA3_d3eh $sliftA3_d3ei $sliftA_d3em >$sliftA_d3en $tcMonoid > $tcSemigroup $trModule <**> liftA liftA3 =<< when >sequence mapM > liftM liftM2 liftM3 liftM4 liftM5 ap mapFB >unsafeChr ord minInt > maxInt id const . flip $! 
until asTypeOf failIO >unIO getTag quotInt > remInt divInt modInt quotRemInt divModInt >divModInt# shiftL# > shiftRL# iShiftL# iShiftRA# iShiftRL# $tcFunctor >$dm<$ $fFunctor[] > $fFunctorMaybe $fFunctor(,) $fFunctor-> >$fFunctor(,,,) > $fFunctor(,,) $tcApplicative $dm<*> $dmliftA2 >$dm*> $dm<* > $fApplicativeIO $fApplicative[] >$fApplicativeMaybe $fApplicative-> > $tcMonad $dm>> $dmreturn $fMonadIO $fFunctorIO >$fMonad[] > $fMonadMaybe $fMonad-> $tcAlternative $dmsome >$dmmany > $fAlternativeIO $fAlternative[] >$fAlternativeMaybe $tcMonadPlus > $dmmzero $dmmplus $fMonadPlusIO $fMonadPlus[] >$fMonadPlusMaybe > $tc':| $tcNonEmpty $fMonadNonEmpty >$fApplicativeNonEmpty > $fFunctorNonEmpty $dmsconcat $dmstimes >$fSemigroupIO > $fSemigroupMaybe $fSemigroupOrdering >$fSemigroup(,,,,) > $fSemigroup(,,,) $fSemigroup(,,) $fSemigroup(,) >$fSemigroup() > $fSemigroup-> $fSemigroupNonEmpty $fSemigroup[] >$dmmappend > $dmmconcat $tc'C:Monoid $fMonoidIO $fMonad(,,,) >$fApplicative(,,,) > $fMonad(,,) $fApplicative(,,) $fMonad(,) >$fApplicative(,) > $fMonoidMaybe $fMonoidOrdering $fMonoid(,,,,) >$fMonoid(,,,) > $fMonoid(,,) $fMonoid(,) $fMonoid() $fMonoid-> >$fMonoid[] $tc'O > $tcOpaque $fEqNonEmpty $fOrdNonEmpty >returnIO_s3A8 failIO_s3A9 > $cempty_s3Ab unIO_s3Ac thenIO_s3Ah bindIO_s3Am >$trModule_s3AL > $trModule_s3AM $trModule_s3AN $trModule_s3AO >$krep_s3AP > $tcFunctor_s3AQ $tcFunctor_s3AR >$tcApplicative_s3AS > $tcApplicative_s3AT $tcMonad_s3AU $tcMonad_s3AV >$tcAlternative_s3AW > $tcAlternative_s3AX $tcMonadPlus_s3AY >$tcMonadPlus_s3AZ > $tcNonEmpty_s3B0 $tcNonEmpty_s3B1 $krep_s3B2 >$tc':|_s3B3 > $tc':|_s3B4 $tcSemigroup_s3B5 $tcSemigroup_s3B6 >$krep_s3B7 > $tcMonoid_s3B8 $tcMonoid_s3B9 $krep_s3Ba >$tc'C:Monoid_s3Bb > $tc'C:Monoid_s3Bc $tcOpaque_s3Bd $tcOpaque_s3Be >$tc'O_s3Bf > $tc'O_s3Bg $cliftA2_s3BL $cfmap_s3BP $c<*>_s3BT >$sap_s3BW > $sliftM2_s3BZ $sliftM5_s3Cc $sliftM4_s3Ch >$sliftM3_s3Cl > $sliftM_s3Co f_s3FS} > Type env: [a2Pd :-> b_a2Pv, a2Pe :-> a_a2Pu] > Co env: 
[]] > Call stack: > CallStack (from HasCallStack): > callStackDoc, called at compiler/utils/Outputable.hs:1173:37 in >ghc:Outputable > pprPanic, called at compiler/utils/Outputable.hs:1243:5 in >ghc:Outputable > assertPprPanic, called at compiler/types/OptCoercion.hs:122:41 in >ghc:OptCoercion Try adding -dppr-debug to your command line. This may reveal the difference. Cheers, - Ben From marlowsd at gmail.com Mon Jan 6 08:17:26 2020 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 6 Jan 2020 08:17:26 +0000 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: There's no need to set the srt field of f_info if f_closure is the SRT, since any reference to f_info in the code will give rise to a reference to f_closure in the SRT corresponding to that code fragment. Does that make sense? The use of a closure as an SRT is really quite a nice optimisation actually. Cheers Simon On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan wrote: > Hi Simon, > > In Cmm if I have a recursive group of functions f and g, and I'm using f's > closure as the SRT for this group, should f's entry block's info table have > f_closure as its SRT? > > In Cmm syntax > > f_entry() { > { info_tbls: [... > (c1vn, > label: ... > rep: ... > srt: ??????] > stack_info: ... > } > {offset > c1vn: > ... > } > } > > Here should I have `f_closure` in the srt field? 
> > I'd expect yes, but looking at the current SRT code, in > CmmBuildInfoTables.updInfoSRTs, we have this: > > (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of > > Nothing -> > -- if we don't add SRT entries to this closure, then we > -- want to set the srt field in its info table as usual > (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) > > Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) > (info_tbl { cit_rep = new_rep }, res) > where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] > > Here we only update SRT field of the block if we're not adding SRT entries > to > the function's closure, so in the example above, because we're using the > function as SRT (and adding SRT entries to its closure) SRT field of c1vn > won't > be updated. > > Am I missing anything? > > Thanks, > > Ömer > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alp at well-typed.com Mon Jan 6 09:06:11 2020 From: alp at well-typed.com (Alp Mestanogullari) Date: Mon, 6 Jan 2020 10:06:11 +0100 Subject: hadrian-util: An experiment in a more usable hadrian UX In-Reply-To: <877e2569iy.fsf@smart-cactus.org> References: <877e2569iy.fsf@smart-cactus.org> Message-ID: <02a8a954-d566-c95c-e1db-7b5b72cf686d@well-typed.com> For reference, hadrian-util lives at: https://gitlab.haskell.org/bgamari/hadrian-util I quite like the idea of trying out improvements to the UX via an external wrapper. For wider adoption, we'd have to distribute it in a nicer way I suppose, but taking this one step at a time sounds good. On 05/01/2020 19:30, Ben Gamari wrote: > Hi everyone, > > For the past few months I have been using Hadrian for the majority of my > GHC builds. 
In due course I have encountered a few papercuts: > > * hadrian/cabal.build.sh is quite wordy (#16250); moreover, you need to > be in the source root to invoke it (#16667) > > * editing hadrian.settings is quite difficult due to the lack of > availability of tab-completion in vim > > * maintaining multiple build roots is quite error-prone since you must > remember which build flavour you used for each (#16481, #16638) > > * there is no equivalent to setting `stage=2` in `mk/build.mk` to make > the stage-1-freeze persistent > > To address these I cobbled together a small wrapper, hadrian-util. I > have this installed in my home-manager environment with a shell alias, > `hu`, meaning that building GHC is as easy as typing `hu run` anywhere > in the tree. > > As discussed in the README, `hadrian-util` supports multiple build > roots, has a moderately convenient interface for manipulating > hadrian.settings (with completion!) and has enough persistent state to > eliminate most of the error-prone boilerplate from Hadrian invocations > without being confusing. > > There is the question of what the long-term future of hadrian-util > should be. Arguably it is merely a hack papering over some of the > shortcomings of Hadrian's current UX; perhaps eventually these will be > fixed. However, in the meantime, I've found that hadrian-util makes > hadrian quite pleasant to use. > > I hope others also find this useful. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Alp Mestanogullari, Haskell Consultant Well-Typed LLP, https://www.well-typed.com/ Registered in England and Wales, OC335890 118 Wymering Mansions, Wymering Road, London, W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at richarde.dev Mon Jan 6 11:15:24 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 6 Jan 2020 11:15:24 +0000 Subject: hadrian-util: An experiment in a more usable hadrian UX In-Reply-To: <877e2569iy.fsf@smart-cactus.org> References: <877e2569iy.fsf@smart-cactus.org> Message-ID: This looks really useful, and might bridge the gap between the UX I'm used to and Hadrian, allowing me to adopt Hadrian sooner than I otherwise would. However, I would be disappointed to see hadrian-util still in active use when `make` is removed (#17527). I'm glad to see Ben's %Make removal milestone, which gives me more confidence that we'll wait until hadrian is really ready for prime-time before removing the old build system. Thanks! Richard > On Jan 5, 2020, at 6:30 PM, Ben Gamari wrote: > > Hi everyone, > > For the past few months I have been using Hadrian for the majority of my > GHC builds. In due course I have encountered a few papercuts: > > * hadrian/cabal.build.sh is quite wordy (#16250); moreover, you need to > be in the source root to invoke it (#16667) > > * editing hadrian.settings is quite difficult due to the lack of > availability of tab-completion in vim > > * maintaining multiple build roots is quite error-prone since you must > remember which build flavour you used for each (#16481, #16638) > > * there is no equivalent to setting `stage=2` in `mk/build.mk` to make > the stage-1-freeze persistent > > To address these I cobbled together a small wrapper, hadrian-util. I > have this installed in my home-manager environment with a shell alias, > `hu`, meaning that building GHC is as easy as typing `hu run` anywhere > in the tree. > > As discussed in the README, `hadrian-util` supports multiple build > roots, has a moderately convenient interface for manipulating > hadrian.settings (with completion!) and has enough persistent state to > eliminate most of the error-prone boilerplate from Hadrian invocations > without being confusing. 
> > There is the question of what the long-term future of hadrian-util > should be. Arguably it is merely a hack papering over some of the > shortcomings of Hadrian's current UX; perhaps eventually these will be > fixed. However, in the meantime, I've found that hadrian-util makes > hadrian quite pleasant to use. > > I hope others also find this useful. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Mon Jan 6 11:19:59 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 6 Jan 2020 11:19:59 +0000 Subject: help understanding a ghc assert failure wrt coercions In-Reply-To: <220A8356-299E-46DF-9698-4D76B57028B6@smart-cactus.org> References: <220A8356-299E-46DF-9698-4D76B57028B6@smart-cactus.org> Message-ID: <71D81504-E2E7-4FD3-8D6E-D43D01EE411F@richarde.dev> > On Jan 6, 2020, at 4:44 AM, Ben Gamari wrote: > Try adding -dppr-debug to your command line. This may reveal the difference. Also, I recommend `-fprint-explicit-kinds -fprint-explicit-coercions -fprint-typechecker-elaboration -fprint-explicit-runtime-reps -fprint-explicit-foralls` just for extra confidence. Richard > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jan 6 11:29:46 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 6 Jan 2020 11:29:46 +0000 Subject: Superclasses of type families returning constraints? 
In-Reply-To: References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <1E0156EB-6241-4AF8-B509-041EC8DD0239@richarde.dev> <3A25348F-837E-4B2D-B4B6-E6A8981C5F96@gmail.com> <5EB80769-06EB-45F7-B4C0-F261F3B87842@richarde.dev> Message-ID: You're absolutely right that improvement doesn't solve your problem. But what I didn't say is that there is no real reason (I think) that we can't improve improvement to produce givens. This would likely require a change to the coercion language in Core (and thus not to be taken lightly), but if we can identify a class of programs that clearly benefits from that work, it is more likely to happen. The thoughts about improvement were just a very basic proof-of-concept. Sadly, the proof-of-concept failed, so I think a first step in this direction would be to somehow encode these partial improvements, and then work on changing Core. That road seems too long, though, so in the end, I think constrained type families (with superclass constraints) might be a more fruitful direction. Richard > On Jan 5, 2020, at 6:02 PM, Alexis King wrote: > >> On Jan 4, 2020, at 20:38, Richard Eisenberg wrote: >> >> I thought that, maybe, we could use partial improvement to give you what you want > > I think that improvement alone cannot possibly be enough here, as improvement by its nature does not provide evidence. Improvement allows us to take a set of constraints like > > [G] FD2 a Bool > [W] FD2 a b > > and derive [WD] b ~ Bool, but importantly this does not produce a new given! This only works if b is a metavariable, since we can solve the new wanted by simply taking b := Bool, but if b is rigid, we are just as stuck as before. In other words, improvement only helps resolve ambiguities, not derive any new information. > > That’s why I think the “superclass” characterization is more useful. 
If instead we express your FD2 class as > > class b ~ B a => FD2 a b where > type B a > > then if we have [G] FD2 a Bool, we can actually derive [G] B a ~ Bool, which is much stronger than what we were able to derive using improvement. > > I imagine you are aware of all of the above already, but it’s not immediately clear to me from your description why you need functional dependencies (and therefore improvement) rather than this kind of approximation using superclasses and type families. Would modeling things with that approximation help at all? If not, why not? I think that would help me understand what you’re saying a little better. > > Thanks, > Alexis From simonpj at microsoft.com Mon Jan 6 14:52:49 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 6 Jan 2020 14:52:49 +0000 Subject: Superclasses of type families returning constraints? In-Reply-To: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> Message-ID: I've read this thread, superficially so far. But I'm puzzled about the goal. | type family F a b :: Constraint where | F a a = () | | eq :: F a b => a :~: b | eq = Refl This is rejected, and it's not in the least bit puzzling! You have evidence for (F a b) and you need to prove (a~b) -- for any a and b. Obviously you can't. And in Core you could write (eq @Int @Bool (error "urk")) and you jolly well don’t want it to return Refl. So what's the goal? Simon | -----Original Message----- | From: ghc-devs On Behalf Of Alexis King | Sent: 28 December 2019 00:17 | To: ghc-devs | Subject: Superclasses of type families returning constraints? | | Hello all, | | I recently noticed that GHC rejects the following program: | | type family F a b :: Constraint where | F a a = () | | eq :: F a b => a :~: b | eq = Refl | | This is certainly not shocking, but it is a little unsatisfying: as far as | I can tell, accepting this program would be entirely sound. 
That is, `a ~ | b` is morally a “superclass” of `F a b`. In this example the type family is | admittedly rather pointless, as `a ~ b` could be used instead, but it is | possible to construct more sophisticated examples that cannot be so | straightforwardly expressed in other ways. | | I am therefore curious: has this kind of scenario ever been discussed | before? If yes, is there a paper/GitLab issue/email thread somewhere that | discusses it? And if no, is there any fundamental reason that GHC does not | propagate such information (i.e. it’s incompatible with some aspect of the | type system or constraint solver), or is it simply something that has not | been explored? (Maybe you think the above program is horrible and | *shouldn’t* be accepted even if it were possible, but that is a different | question entirely. :)) | | Thanks, | Alexis | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Mon Jan 6 15:14:01 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 06 Jan 2020 10:14:01 -0500 Subject: How to specify import paths for a frontend plugin In-Reply-To: References: Message-ID: <87sgks4nyk.fsf@smart-cactus.org> Derek Lam writes: > Hi ghc-devs, > Hi Derek, > I’m making a first attempt to make a frontend plugin, to resolve cabal > packages in the GHC API. However I’m running into troubles with module > resolution in the GHC API, because I can’t control where it will > search for modules at all.
I've attached a minimal example, with a > frontend plugin definition that can’t find modules (Plugin.hs), and an > equivalent standalone program that does (Main.hs). > > Specifically, I'm following a solution Edward Yang published in 2017 > (http://blog.ezyang.com/2017/02/how-to-integrate-ghc-api-programs-with-cabal/), > where the frontend plugin is called through a helper script that > passes flags forwarded from `cabal repl`. To test the plugin directly > with GHC, I collected the args through the helper script and filtered > them to the minimal set that made the plugin run: > >   ghc --frontend Plugin -itarget -package-db > dist-newstyle/packagedb/ghc-8.6.5 Plugin -plugin-package sandbox > -hide-all-packages > > This, as well as the full argument set, would complain that it can't > find the target module under `./target/A.hs`: > >   : error: module ‘A’ cannot be found locally > > It does when the import path arg `-itarget` is absolute. By "it does" do you mean "it still fails"? > Still, its `importPaths` are what I expect: [".", "target"], and the > standalone program finds the target module with the same > `importPaths`. I've tested this in GHC 8.6.5, 8.4.2 and 8.2.2, making > me sure I'm just missing something, but I haven’t found help in the > docs yet. I really appreciate some help to draw my hours over this to > a close! > Hmm, very interesting. If I recall correctly, the relevant codepath in GHC is Finder.findImportedModule which should find the module via Finder.findHomeModule. Unfortunately, in my cursory look I didn't see any obvious issues; it looks like this might require a build of GHC and a bit of debugging. If you can produce a minimal, concrete reproducer (e.g. your plugin and a set of specific instructions to reproduce the issue) it's possible I can have a look. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL:
From simonpj at microsoft.com Mon Jan 6 17:50:19 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Mon, 6 Jan 2020 17:50:19 +0000
Subject: Code generation/SRT question
In-Reply-To:
References:
Message-ID:

Omer,

I think I’m not understanding all the details, but I have a clear “big picture”. Simon can correct me if I’m wrong.

· The info table for any closure (top-level or otherwise) has a (possibly empty) Static Reference Table, SRT.
· The SRT for an info table identifies the static top-level closures that the code for that info table mentions. (In principle the garbage collector could parse the code! But it’s easier to find these references if they are in a dedicated table alongside the code.)
· A top-level closure is a CAF if it is born updatable.
· A top-level closure is CAFFY if it is a CAF, or mentions another CAFFY closure.
· An entry in the SRT can point
  o To a top-level updatable closure. This may now point into the dynamic heap, and is what we want to keep alive. If the closure hasn’t been updated, we should keep alive anything its SRT points to.
  o Directly to another SRT (or info table?) for a CAFFY top-level closure, which is a bit faster if we know the thing is non-updatable.
· If a function f calls a top-level function g, and g is CAFFY, then f’s SRT should point to g’s closure or (if g is not a CAF) directly to its SRT.
· If f is top level, and calls itself, there is no need to include a pointer to f’s closure in f’s own SRT.

I think this last point is the one you are asking about, but I’m not certain.

All this should be written down somewhere, and perhaps is. But where?
Simon From: ghc-devs On Behalf Of Simon Marlow Sent: 06 January 2020 08:17 To: Ömer Sinan Ağacan Cc: ghc-devs Subject: Re: Code generation/SRT question There's no need to set the srt field of f_info if f_closure is the SRT, since any reference to f_info in the code will give rise to a reference to f_closure in the SRT corresponding to that code fragment. Does that make sense? The use of a closure as an SRT is really quite a nice optimisation actually. Cheers Simon On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan > wrote: Hi Simon, In Cmm if I have a recursive group of functions f and g, and I'm using f's closure as the SRT for this group, should f's entry block's info table have f_closure as its SRT? In Cmm syntax f_entry() { { info_tbls: [... (c1vn, label: ... rep: ... srt: ??????] stack_info: ... } {offset c1vn: ... } } Here should I have `f_closure` in the srt field? I'd expect yes, but looking at the current SRT code, in CmmBuildInfoTables.updInfoSRTs, we have this: (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of Nothing -> -- if we don't add SRT entries to this closure, then we -- want to set the srt field in its info table as usual (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) (info_tbl { cit_rep = new_rep }, res) where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] Here we only update SRT field of the block if we're not adding SRT entries to the function's closure, so in the example above, because we're using the function as SRT (and adding SRT entries to its closure) SRT field of c1vn won't be updated. Am I missing anything? Thanks, Ömer -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jan 6 18:04:38 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 6 Jan 2020 18:04:38 +0000 Subject: Superclasses of type families returning constraints? 
In-Reply-To: References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> Message-ID: <9F380845-C65F-4BDB-B9F1-63559DDDC9D1@richarde.dev> > On Jan 6, 2020, at 2:52 PM, Simon Peyton Jones via ghc-devs wrote: > > | type family F a b :: Constraint where > | F a a = () > | > | eq :: F a b => a :~: b > | eq = Refl > > This is rejected, and it's not in the least bit puzzling! You have evidence for (F a b) and you need to prove (a~b) -- for any a and b. Obviously you can't. But how could you possibly have evidence for (F a b) without (a ~ b)? You can't. > And in Core you could write (eq @Int @Bool (error "urk")) and you jolly well don’t want it to return Refl. Why not? > eq = /\ a b. \ (d :: F a b). case d of Something (co :: a ~# b) -> Refl @a @b co This would obviously require extensions to Core and Haskell, but it's not a priori wrong to do so. Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Mon Jan 6 18:05:10 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Mon, 6 Jan 2020 12:05:10 -0600 Subject: Superclasses of type families returning constraints? In-Reply-To: References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> Message-ID: <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> > On Jan 6, 2020, at 08:52, Simon Peyton Jones wrote: > > This is rejected, and it's not in the least bit puzzling! You have evidence for (F a b) and you need to prove (a~b) -- for any a and b. Obviously you can't. And in Core you could write (eq @Int @Bool (error "urk")) and you jolly well don’t want it to return Refl. I’m not sure I totally understand your complaint. If I were to write eq :: a ~ b => a :~: b eq = Refl then in core I could still write `eq @Int @Bool (error "urk")`, and that would be entirely well-typed. It would not really return Refl at runtime, of course, but neither would the example I gave in my original email. 
Perhaps a better way to phrase my original example is to write it this way: type family F a b where F a b = () -- regular (), not (() :: Constraint) eq :: F a b ~ () => a :~: b eq = Refl In this case, in core, we receive an argument of type `F a b ~ ()`, and we can force that to obtain a coercion of type `F a b ~# ()`. For reasons similar to the mechanisms behind injective type families, that equality really does imply `a ~# b`. The fact that the type family returns an empty constraint tuple is really incidental, and perhaps unnecessarily misleading. Does that clear up your confusion? Or have I misunderstood your concern? Alexis From marlowsd at gmail.com Mon Jan 6 18:16:55 2020 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 6 Jan 2020 18:16:55 +0000 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: We have: * wiki: https://gitlab.haskell.org/ghc/ghc/wikis/commentary/rts/storage/gc/cafs * a huge Note in CmmBuildInfoTables: https://gitlab.haskell.org/ghc/ghc/blob/master/compiler%2Fcmm%2FCmmBuildInfoTables.hs#L42 Maybe we need links to these from other places? Omer's question is referring specifically to the [FUN] optimisation described in the Note. Cheers Simon On Mon, 6 Jan 2020 at 17:50, Simon Peyton Jones wrote: > Omer, > > > > I think I’m not understanding all the details, but I have a clear “big > picture”. Simon can correct me if I’m wrong. > > > > · The *info table* for any *closure* (top-level or otherwise) has > a (possibly empty) Static Reference Table, *SRT*. > > · The SRT for an info table identifies the static top level > closures that the *code* for that info table mentions. (In principle > the garbage collector could parse the code! But it’s easier to find these > references if they in a dedicated table alongside the code.) > > · A top level closure is a *CAF* if it is born updatable. > > · A top level closure is *CAFFY* if it is a CAF, or mentions > another CAFFY closure. 
> > · An entry in the SRT can point > > o To a top-level updatable closure. This may now point into the dynamic > heap, and is what we want to keep alive. If the closure hasn’t been > updated, we should keep alive anything its SRT points to. > > o Directly to another SRT (or info table?) for a CAFFY top-level > closure, which is a bit faster if we know the thing is non-updatable. > > · If a function f calls a top-level function g, and g is CAFFY, > then f’s SRT should point to g’s closure or (if g is not a CAF) directly to > its SRT. > > · If f is top level, and calls itself, there is no need to include > a pointer to f’s closure in f’s own SRT. > > I think this last point is the one you are asking, but I’m not certain. > > All this should be written down somewhere, and perhaps is. But where? > > Simon > > > > *From:* ghc-devs *On Behalf Of *Simon > Marlow > *Sent:* 06 January 2020 08:17 > *To:* Ömer Sinan Ağacan > *Cc:* ghc-devs > *Subject:* Re: Code generation/SRT question > > > > There's no need to set the srt field of f_info if f_closure is the SRT, > since any reference to f_info in the code will give rise to a reference to > f_closure in the SRT corresponding to that code fragment. Does that make > sense? > > > > The use of a closure as an SRT is really quite a nice optimisation > actually. > > > > Cheers > > Simon > > > > On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan > wrote: > > Hi Simon, > > In Cmm if I have a recursive group of functions f and g, and I'm using f's > closure as the SRT for this group, should f's entry block's info table have > f_closure as its SRT? > > In Cmm syntax > > f_entry() { > { info_tbls: [... > (c1vn, > label: ... > rep: ... > srt: ??????] > stack_info: ... > } > {offset > c1vn: > ... > } > } > > Here should I have `f_closure` in the srt field? 
> > I'd expect yes, but looking at the current SRT code, in > CmmBuildInfoTables.updInfoSRTs, we have this: > > (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of > > Nothing -> > -- if we don't add SRT entries to this closure, then we > -- want to set the srt field in its info table as usual > (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) > > Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) > (info_tbl { cit_rep = new_rep }, res) > where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] > > Here we only update SRT field of the block if we're not adding SRT entries > to > the function's closure, so in the example above, because we're using the > function as SRT (and adding SRT entries to its closure) SRT field of c1vn > won't > be updated. > > Am I missing anything? > > Thanks, > > Ömer > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lexi.lambda at gmail.com Mon Jan 6 18:20:00 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Mon, 6 Jan 2020 12:20:00 -0600 Subject: Superclasses of type families returning constraints? In-Reply-To: <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> Message-ID: > On Jan 6, 2020, at 12:05, Alexis King wrote: > > type family F a b where > F a b = () -- regular (), not (() :: Constraint) (Err, sorry, this should be `F a a = ()`. But I think you can understand what I’m getting at.) From lexi.lambda at gmail.com Mon Jan 6 19:39:44 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Mon, 6 Jan 2020 13:39:44 -0600 Subject: Superclasses of type families returning constraints? In-Reply-To: References: Message-ID: <9E28CEA2-1C31-4CC2-8787-DC3C4367009D@gmail.com> > On Jan 6, 2020, at 05:29, Richard Eisenberg wrote: > > You're absolutely right that improvement doesn't solve your problem. 
But what I didn't say is that there is no real reason (I think) that we can't improve improvement to produce givens. This would likely require a change to the coercion language in Core (and thus not to be taken lightly), but if we can identify a class of programs that clearly benefits from that work, it is more likely to happen. The thoughts about improvement were just a very basic proof-of-concept. Sadly, the proof-of-concept failed, so I think a first step in this direction would be to somehow encode these partial improvements, and then work on changing Core. That road seems too long, though, so in the end, I think constrained type families (with superclass constraints) might be a more fruitful direction. Thanks, this all makes sense to me. I actually went back and re-read the injective type families paper after responding to your previous email, and I discovered it actually alludes to the issue we’re discussing! At the end of the paper, in section 7.3, it provides the following example: > Could closed type families move beyond injectivity and functional dependencies by applying closed-world reasoning that derives solutions of arbitrary equalities, provided a unique solution exists? Consider this example: > > type family J a where > J Int = Char > J Bool = Char > J Double = Float > > One might reasonably expect that if we wish to prove (J a ∼ Float), we will simplify to (a ∼ Double). Yet GHC does not do this as neither injectivity nor functional dependencies can discover this solution. This is not quite the same as what we’re talking about here, but it’s clearly in the same ballpark. I think what you’re describing makes a lot of sense, and it would be interesting to explore what it would take to encode into core. After thinking a little more on the topic, I think that probably makes by far the most sense from the core side of things. 
But I agree it seems like a significant change, and I don’t know enough about the formalism to know how difficult it would be to prove soundness. (I haven’t done much formal proving of anything!) Alexis -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 6 21:10:58 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 6 Jan 2020 21:10:58 +0000 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: Aha, great. Well at least [Note SRTs] should point back to the wiki page. Omer's question is referring specifically to the [FUN] optimisation described in the Note. Hmm. So is he asking whether f’s SRT should have an entry for itself? No, that would be silly! It would not lead to any more CAFs being reachable. Omer, maybe we are misunderstanding. But if so, can you cast your question more precisely in terms of which lines of the wiki page or Note are you asking about? And let’s make sure that the appropriate bit gets updated when you’ve nailed the answer. Simon From: Simon Marlow Sent: 06 January 2020 18:17 To: Simon Peyton Jones Cc: Ömer Sinan Ağacan ; ghc-devs Subject: Re: Code generation/SRT question We have: * wiki: https://gitlab.haskell.org/ghc/ghc/wikis/commentary/rts/storage/gc/cafs * a huge Note in CmmBuildInfoTables: https://gitlab.haskell.org/ghc/ghc/blob/master/compiler%2Fcmm%2FCmmBuildInfoTables.hs#L42 Maybe we need links to these from other places? Omer's question is referring specifically to the [FUN] optimisation described in the Note. Cheers Simon On Mon, 6 Jan 2020 at 17:50, Simon Peyton Jones > wrote: Omer, I think I’m not understanding all the details, but I have a clear “big picture”. Simon can correct me if I’m wrong. · The info table for any closure (top-level or otherwise) has a (possibly empty) Static Reference Table, SRT. · The SRT for an info table identifies the static top level closures that the code for that info table mentions.
(In principle the garbage collector could parse the code! But it’s easier to find these references if they are in a dedicated table alongside the code.) · A top level closure is a CAF if it is born updatable. · A top level closure is CAFFY if it is a CAF, or mentions another CAFFY closure. · An entry in the SRT can point o To a top-level updatable closure. This may now point into the dynamic heap, and is what we want to keep alive. If the closure hasn’t been updated, we should keep alive anything its SRT points to. o Directly to another SRT (or info table?) for a CAFFY top-level closure, which is a bit faster if we know the thing is non-updatable. · If a function f calls a top-level function g, and g is CAFFY, then f’s SRT should point to g’s closure or (if g is not a CAF) directly to its SRT. · If f is top level, and calls itself, there is no need to include a pointer to f’s closure in f’s own SRT. I think this last point is the one you are asking, but I’m not certain. All this should be written down somewhere, and perhaps is. But where? Simon From: ghc-devs > On Behalf Of Simon Marlow Sent: 06 January 2020 08:17 To: Ömer Sinan Ağacan > Cc: ghc-devs > Subject: Re: Code generation/SRT question There's no need to set the srt field of f_info if f_closure is the SRT, since any reference to f_info in the code will give rise to a reference to f_closure in the SRT corresponding to that code fragment. Does that make sense? The use of a closure as an SRT is really quite a nice optimisation actually. Cheers Simon On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan > wrote: Hi Simon, In Cmm if I have a recursive group of functions f and g, and I'm using f's closure as the SRT for this group, should f's entry block's info table have f_closure as its SRT? In Cmm syntax f_entry() { { info_tbls: [... (c1vn, label: ... rep: ... srt: ??????] stack_info: ... } {offset c1vn: ... } } Here should I have `f_closure` in the srt field?
I'd expect yes, but looking at the current SRT code, in CmmBuildInfoTables.updInfoSRTs, we have this: (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of Nothing -> -- if we don't add SRT entries to this closure, then we -- want to set the srt field in its info table as usual (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) (info_tbl { cit_rep = new_rep }, res) where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] Here we only update SRT field of the block if we're not adding SRT entries to the function's closure, so in the example above, because we're using the function as SRT (and adding SRT entries to its closure) SRT field of c1vn won't be updated. Am I missing anything? Thanks, Ömer -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 6 21:41:15 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 6 Jan 2020 21:41:15 +0000 Subject: Superclasses of type families returning constraints? In-Reply-To: <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> Message-ID: | In this case, in core, we receive an argument of type `F a b ~ ()`, and we | can force that to obtain a coercion of type `F a b ~# ()`. For reasons | similar to the mechanisms behind injective type families, that equality | really does imply `a ~# b`. Ah, I see a bit better now. So you want a way to get from evidence that co1 :: F a b ~# () to evidence that co2 :: a ~# b So you'd need some coercion form like co2 = runBackwards co1 or something, where runBackwards is some kind of coercion form, like sym or trans, etc. I don't know what a design for this would look like. And even if we had it, would it pay its way, by adding substantial and useful new programming expressiveness? I now see better the connection with improvement. 
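For concreteness, both directions of this example can be written out in source Haskell. This is only a sketch assembled from the examples in the thread: `F` is Alexis's corrected family, `eq` is the rejected definition, and `runBackwards` remains a hypothetical coercion form. Only the forward direction typechecks with today's GHC:

```haskell
{-# LANGUAGE GADTs, TypeFamilies, TypeOperators #-}

import Data.Type.Equality ((:~:) (..))

-- Alexis's corrected example: F reduces to () only when its two
-- arguments coincide.
type family F a b where
  F a a = ()

-- The direction GHC supports today: matching Refl gives a ~ b, and
-- then the equation F a a = () discharges the goal by plain reduction.
forwards :: a :~: b -> F a b :~: ()
forwards Refl = Refl

-- The backwards direction, which would need the hypothetical
-- runBackwards coercion form in Core; GHC rejects this today:
--
--   eq :: F a b ~ () => a :~: b
--   eq = Refl
```

At a concrete instantiation, `forwards (Refl :: Int :~: Int)` produces evidence of `F Int Int :~: ()`; it is the inverse that neither injectivity nor improvement can currently justify as Core evidence.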
Currently functional dependencies and injectivity are (in GHC) used *only* to improve unification and type inference; they have zero impact on the forms of evidence. (That is a big difference between fundeps and type functions.) Maybe that could be changed, but again we'd lack a design. Simon | -----Original Message----- | From: Alexis King | Sent: 06 January 2020 18:05 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Superclasses of type families returning constraints? | | > On Jan 6, 2020, at 08:52, Simon Peyton Jones | wrote: | > | > This is rejected, and it's not in the least bit puzzling! You have | evidence for (F a b) and you need to prove (a~b) -- for any a and b. | Obviously you can't. And in Core you could write (eq @Int @Bool (error | "urk")) and you jolly well don’t want it to return Refl. | | I’m not sure I totally understand your complaint. If I were to write | | eq :: a ~ b => a :~: b | eq = Refl | | then in core I could still write `eq @Int @Bool (error "urk")`, and that | would be entirely well-typed. It would not really return Refl at runtime, | of course, but neither would the example I gave in my original email. | | Perhaps a better way to phrase my original example is to write it this way: | | type family F a b where | F a b = () -- regular (), not (() :: Constraint) | | eq :: F a b ~ () => a :~: b | eq = Refl | | In this case, in core, we receive an argument of type `F a b ~ ()`, and we | can force that to obtain a coercion of type `F a b ~# ()`. For reasons | similar to the mechanisms behind injective type families, that equality | really does imply `a ~# b`. The fact that the type family returns an empty | constraint tuple is really incidental, and perhaps unnecessarily | misleading. | | Does that clear up your confusion? Or have I misunderstood your concern? 
| | Alexis From derek at lam.io Tue Jan 7 04:22:19 2020 From: derek at lam.io (Derek Lam) Date: Mon, 06 Jan 2020 20:22:19 -0800 Subject: How to specify import paths for a frontend plugin In-Reply-To: <87sgks4nyk.fsf@smart-cactus.org> References: <87sgks4nyk.fsf@smart-cactus.org> Message-ID: <956A1ABB-26C4-4142-ACE0-BA72CC28502B@lam.io> Hi Ben, Thanks for the pointer. I looked for the error call and it does indeed look like the path from the error in GhcMake.downsweep feeds from GhcMake.summariseModule and then Finder.findImportedModule. A cursory look at Finder seems to suggest that the importPaths retrieved in findInstalledHomeModule go untouched to the doesFileExist through searchPathExts. The working directory, at least as seen in the plugin body, is the directory of execution as expected, and is the same as for the standalone program, but I could certainly be missing the whole picture in Finder. I've attached a package that demonstrates what I've tested so far, along with Edward Yang's original solution I was following. 1. The standalone executable runs with `cabal run standalone`. 2. The plugin runs via ghc with the same command from the last email: ghc --frontend Plugin -itarget -package-db dist-newstyle/packagedb/ghc-8.6.5 Plugin -plugin-package plugin-test -hide-all-packages 3. Running the plugin per Edward Yang's solution needs a `cabal build` of the wrapper and lib first, then invoking via `cabal repl target -w` with an absolute path to the plugin executable: cabal repl target -w "$(pwd)/dist-newstyle/build//ghc-/plugin-test-0.1.0.0/x/plugin-wrapper/build/plugin-wrapper/plugin-wrapper" ...at least that's the path on my distribution. (Unfortunately, resolving the plugin by name alone doesn't seem to work on GHC >= 8.4.2.) In case 1, it works. In cases 2 and 3, I get "error: module ‘A’ cannot be found locally". By the way, by "it does" I meant it does work when I use an absolute path. Hope this sheds some light!
Thanks so much, Derek On 2020-01-06, 07:32, "Ben Gamari" wrote: Derek Lam writes: > Hi ghc-devs, > Hi Derek, > I’m making a first attempt to make a frontend plugin, to resolve cabal > packages in the GHC API. However I’m running into troubles with module > resolution in the GHC API, because I can’t control where it will > search for modules at all. I've attached a minimal example, with a > frontend plugin definition that can’t find modules (Plugin.hs), and an > equivalent standalone program that does (Main.hs). > > Specifically, I'm following a solution Edward Yang published in 2017 > (http://blog.ezyang.com/2017/02/how-to-integrate-ghc-api-programs-with-cabal/), > where the frontend plugin is called through a helper script that > passes flags forwarded from `cabal repl`. To test the plugin directly > with GHC, I collected the args through the helper script and filtered > them to the minimal set that made the plugin run: > > ghc --frontend Plugin -itarget -package-db > dist-newstyle/packagedb/ghc-8.6.5 Plugin -plugin-package sandbox > -hide-all-packages > > This, as well as the full argument set, would complain that it can't > find the target module under `./target/A.hs`: > > : error: module ‘A’ cannot be found locally > > It does when the import path arg `-itarget` is absolute. By "it does" do you mean "it still fails"? > Still, its `importPaths` are what I expect: [".", "target"], and the > standalone program finds the target module with the same > `importPaths`. I've tested this in GHC 8.6.5, 8.4.2 and 8.2.2, making > me sure I'm just missing something, but I haven’t found help in the > docs yet. I really appreciate some help to draw my hours over this to > a close! > Hmm, very interesting. If I recall correctly, the relevant codepath in GHC is Finder.findImportedModule which should find the module via Finder.findHomeModule. Unfortunately, in my cursory look I didn't see any obvious issues; it looks like this might require a build of GHC and a bit of debugging. 
If you can produce a minimal, concrete reproducer (e.g. your plugin and a set of specific instructions to reproduce the issue) it's possible I can have a look. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: plugin-test.tar.gz Type: application/x-gzip Size: 1429 bytes Desc: not available URL: From omeragacan at gmail.com Tue Jan 7 05:58:31 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 7 Jan 2020 08:58:31 +0300 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: Hi all, > There's no need to set the srt field of f_info if f_closure is the SRT, since > any reference to f_info in the code will give rise to a reference to f_closure > in the SRT corresponding to that code fragment. Does that make sense? Makes sense, thanks. > The use of a closure as an SRT is really quite a nice optimisation actually. Agreed. > · If f is top level, and calls itself, there is no need to include a pointer > to f’s closure in f’s own SRT. > > I think this last point is the one you are asking, but I’m not certain. Close, I'm asking whether we should include a pointer to f in f's SRT (when f is recursive) when we're using f as the SRT (the [FUN] optimisation). I'll document the code I quoted in my original email with this info. Thanks, Ömer Simon Peyton Jones , 7 Oca 2020 Sal, 00:11 tarihinde şunu yazdı: > > Aha, great. Well at least [Note SRTs] should point back to the wiki page. > > > > Omer's question is referring specifically to the [FUN] optimisation described in the Note. > > Hmm. So is he asking whether f’s SRT should have an entry for itself? No, that’ would be silly! It would not lead to any more CAFs being reachable. > > > > Omer, maybe we are misunderstanding. But if so, can you cast your question more precisely in terms of which lines of the wiki page or Note are you asking about? 
And let’s make sure that the appropriate bit gets updated when you’ve nailed the answer > > > > Simon > > > > From: Simon Marlow > Sent: 06 January 2020 18:17 > To: Simon Peyton Jones > Cc: Ömer Sinan Ağacan ; ghc-devs > Subject: Re: Code generation/SRT question > > > > We have: > > * wiki: https://gitlab.haskell.org/ghc/ghc/wikis/commentary/rts/storage/gc/cafs > > * a huge Note in CmmBuildInfoTables: https://gitlab.haskell.org/ghc/ghc/blob/master/compiler%2Fcmm%2FCmmBuildInfoTables.hs#L42 > > > > Maybe we need links to these from other places? > > > > Omer's question is referring specifically to the [FUN] optimisation described in the Note. > > > > Cheers > > Simon > > > > On Mon, 6 Jan 2020 at 17:50, Simon Peyton Jones wrote: > > Omer, > > > > I think I’m not understanding all the details, but I have a clear “big picture”. Simon can correct me if I’m wrong. > > > > · The info table for any closure (top-level or otherwise) has a (possibly empty) Static Reference Table, SRT. > > · The SRT for an info table identifies the static top level closures that the code for that info table mentions. (In principle the garbage collector could parse the code! But it’s easier to find these references if they in a dedicated table alongside the code.) > > · A top level closure is a CAF if it is born updatable. > > · A top level closure is CAFFY if it is a CAF, or mentions another CAFFY closure. > > · An entry in the SRT can point > > o To a top-level updatable closure. This may now point into the dynamic heap, and is what we want to keep alive. If the closure hasn’t been updated, we should keep alive anything its SRT points to. > > o Directly to another SRT (or info table?) for a CAFFY top-level closure, which is a bit faster if we know the thing is non-updatable. > > · If a function f calls a top-level function g, and g is CAFFY, then f’s SRT should point to g’s closure or (if g is not a CAF) directly to its SRT. 
> > · If f is top level, and calls itself, there is no need to include a pointer to f’s closure in f’s own SRT. > > I think this last point is the one you are asking, but I’m not certain. > > All this should be written down somewhere, and perhaps is. But where? > > Simon > > > > From: ghc-devs On Behalf Of Simon Marlow > Sent: 06 January 2020 08:17 > To: Ömer Sinan Ağacan > Cc: ghc-devs > Subject: Re: Code generation/SRT question > > > > There's no need to set the srt field of f_info if f_closure is the SRT, since any reference to f_info in the code will give rise to a reference to f_closure in the SRT corresponding to that code fragment. Does that make sense? > > > > The use of a closure as an SRT is really quite a nice optimisation actually. > > > > Cheers > > Simon > > > > On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan wrote: > > Hi Simon, > > In Cmm if I have a recursive group of functions f and g, and I'm using f's > closure as the SRT for this group, should f's entry block's info table have > f_closure as its SRT? > > In Cmm syntax > > f_entry() { > { info_tbls: [... > (c1vn, > label: ... > rep: ... > srt: ??????] > stack_info: ... > } > {offset > c1vn: > ... > } > } > > Here should I have `f_closure` in the srt field? 
> > I'd expect yes, but looking at the current SRT code, in > CmmBuildInfoTables.updInfoSRTs, we have this: > > (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of > > Nothing -> > -- if we don't add SRT entries to this closure, then we > -- want to set the srt field in its info table as usual > (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) > > Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) > (info_tbl { cit_rep = new_rep }, res) > where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] > > Here we only update SRT field of the block if we're not adding SRT entries to > the function's closure, so in the example above, because we're using the > function as SRT (and adding SRT entries to its closure) SRT field of c1vn won't > be updated. > > Am I missing anything? > > Thanks, > > Ömer From simonpj at microsoft.com Tue Jan 7 08:29:22 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2020 08:29:22 +0000 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: | Close, I'm asking whether we should include a pointer to f in f's SRT | (when f is | recursive) when we're using f as the SRT (the [FUN] optimisation). Definitely not. Doing so would not change the set of reachable CAFs, would it? Simon | -----Original Message----- | From: Ömer Sinan Ağacan | Sent: 07 January 2020 05:59 | To: Simon Peyton Jones | Cc: Simon Marlow ; ghc-devs | Subject: Re: Code generation/SRT question | | Hi all, | | > There's no need to set the srt field of f_info if f_closure is the | SRT, since | > any reference to f_info in the code will give rise to a reference to | f_closure | > in the SRT corresponding to that code fragment. Does that make | sense? | | Makes sense, thanks. | | > The use of a closure as an SRT is really quite a nice optimisation | actually. | | Agreed. | | > · If f is top level, and calls itself, there is no need to include a | pointer | > to f’s closure in f’s own SRT. 
| > | > I think this last point is the one you are asking, but I’m not | certain. | | Close, I'm asking whether we should include a pointer to f in f's SRT | (when f is | recursive) when we're using f as the SRT (the [FUN] optimisation). | | I'll document the code I quoted in my original email with this info. | | Thanks, | | Ömer | | Simon Peyton Jones , 7 Oca 2020 Sal, 00:11 | tarihinde şunu yazdı: | > | > Aha, great. Well at least [Note SRTs] should point back to the wiki | page. | > | > | > | > Omer's question is referring specifically to the [FUN] optimisation | described in the Note. | > | > Hmm. So is he asking whether f’s SRT should have an entry for | itself? No, that’ would be silly! It would not lead to any more CAFs | being reachable. | > | > | > | > Omer, maybe we are misunderstanding. But if so, can you cast your | question more precisely in terms of which lines of the wiki page or | Note are you asking about? And let’s make sure that the appropriate | bit gets updated when you’ve nailed the answer | > | > | > | > Simon | > | > | > | > From: Simon Marlow | > Sent: 06 January 2020 18:17 | > To: Simon Peyton Jones | > Cc: Ömer Sinan Ağacan ; ghc-devs | > Subject: Re: Code generation/SRT question | > | > | > | > We have: | > | > * wiki: | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitl | ab.haskell.org%2Fghc%2Fghc%2Fwikis%2Fcommentary%2Frts%2Fstorage%2Fgc%2 | Fcafs&data=02%7C01%7Csimonpj%40microsoft.com%7Cba8ca780e8cb4803d53 | 008d79336b5ff%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C63713973549 | 8781862&sdata=tz51aoG0hxYD8IMV0CTubLxuDnSO22pl9IU%2Bh6KxrGg%3D& | ;reserved=0 | > | > * a huge Note in CmmBuildInfoTables: | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgitl | ab.haskell.org%2Fghc%2Fghc%2Fblob%2Fmaster%2Fcompiler%252Fcmm%252FCmmB | uildInfoTables.hs%23L42&data=02%7C01%7Csimonpj%40microsoft.com%7Cb | a8ca780e8cb4803d53008d79336b5ff%7C72f988bf86f141af91ab2d7cd011db47%7C1 | 
%7C0%7C637139735498781862&sdata=FymrjC9QgtuQM0AS0ZtV3yTec0hqFxYuKa | 55XNqihNs%3D&reserved=0 | > | > | > | > Maybe we need links to these from other places? | > | > | > | > Omer's question is referring specifically to the [FUN] optimisation | described in the Note. | > | > | > | > Cheers | > | > Simon | > | > | > | > On Mon, 6 Jan 2020 at 17:50, Simon Peyton Jones | wrote: | > | > Omer, | > | > | > | > I think I’m not understanding all the details, but I have a clear | “big picture”. Simon can correct me if I’m wrong. | > | > | > | > · The info table for any closure (top-level or otherwise) has | a (possibly empty) Static Reference Table, SRT. | > | > · The SRT for an info table identifies the static top level | closures that the code for that info table mentions. (In principle | the garbage collector could parse the code! But it’s easier to find | these references if they in a dedicated table alongside the code.) | > | > · A top level closure is a CAF if it is born updatable. | > | > · A top level closure is CAFFY if it is a CAF, or mentions | another CAFFY closure. | > | > · An entry in the SRT can point | > | > o To a top-level updatable closure. This may now point into the | dynamic heap, and is what we want to keep alive. If the closure | hasn’t been updated, we should keep alive anything its SRT points to. | > | > o Directly to another SRT (or info table?) for a CAFFY top-level | closure, which is a bit faster if we know the thing is non-updatable. | > | > · If a function f calls a top-level function g, and g is | CAFFY, then f’s SRT should point to g’s closure or (if g is not a CAF) | directly to its SRT. | > | > · If f is top level, and calls itself, there is no need to | include a pointer to f’s closure in f’s own SRT. | > | > I think this last point is the one you are asking, but I’m not | certain. | > | > All this should be written down somewhere, and perhaps is. But | where? 
| > | > Simon | > | > | > | > From: ghc-devs On Behalf Of Simon | Marlow | > Sent: 06 January 2020 08:17 | > To: Ömer Sinan Ağacan | > Cc: ghc-devs | > Subject: Re: Code generation/SRT question | > | > | > | > There's no need to set the srt field of f_info if f_closure is the | SRT, since any reference to f_info in the code will give rise to a | reference to f_closure in the SRT corresponding to that code fragment. | Does that make sense? | > | > | > | > The use of a closure as an SRT is really quite a nice optimisation | actually. | > | > | > | > Cheers | > | > Simon | > | > | > | > On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan | wrote: | > | > Hi Simon, | > | > In Cmm if I have a recursive group of functions f and g, and I'm | using f's | > closure as the SRT for this group, should f's entry block's info | table have | > f_closure as its SRT? | > | > In Cmm syntax | > | > f_entry() { | > { info_tbls: [... | > (c1vn, | > label: ... | > rep: ... | > srt: ??????] | > stack_info: ... | > } | > {offset | > c1vn: | > ... | > } | > } | > | > Here should I have `f_closure` in the srt field? | > | > I'd expect yes, but looking at the current SRT code, in | > CmmBuildInfoTables.updInfoSRTs, we have this: | > | > (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of | > | > Nothing -> | > -- if we don't add SRT entries to this closure, then we | > -- want to set the srt field in its info table as usual | > (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) | > | > Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) | > (info_tbl { cit_rep = new_rep }, res) | > where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] | > | > Here we only update SRT field of the block if we're not adding SRT | entries to | > the function's closure, so in the example above, because we're using | the | > function as SRT (and adding SRT entries to its closure) SRT field of | c1vn won't | > be updated. | > | > Am I missing anything? 
| > | > Thanks, | > | > Ömer From marlowsd at gmail.com Tue Jan 7 12:59:03 2020 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 7 Jan 2020 12:59:03 +0000 Subject: Code generation/SRT question In-Reply-To: References: Message-ID: On Tue, 7 Jan 2020 at 05:59, Ömer Sinan Ağacan wrote: > Hi all, > > > There's no need to set the srt field of f_info if f_closure is the SRT, > since > > any reference to f_info in the code will give rise to a reference to > f_closure > > in the SRT corresponding to that code fragment. Does that make sense? > > Makes sense, thanks. > > > The use of a closure as an SRT is really quite a nice optimisation > actually. > > Agreed. > > > · If f is top level, and calls itself, there is no need to include a > pointer > > to f’s closure in f’s own SRT. > > > > I think this last point is the one you are asking, but I’m not certain. > > Close, I'm asking whether we should include a pointer to f in f's SRT > (when f is > recursive) when we're using f as the SRT (the [FUN] optimisation). > I think your original question was slightly different, it was about f's info table: > should f's entry block's info table have f_closure as its SRT? anyway, the answer to both questions is "no." Cheers Simon > I'll document the code I quoted in my original email with this info. > > Thanks, > > Ömer > > Simon Peyton Jones , 7 Oca 2020 Sal, 00:11 > tarihinde şunu yazdı: > > > > Aha, great. Well at least [Note SRTs] should point back to the wiki > page. > > > > > > > > Omer's question is referring specifically to the [FUN] optimisation > described in the Note. > > > > Hmm. So is he asking whether f’s SRT should have an entry for itself? > No, that’ would be silly! It would not lead to any more CAFs being > reachable. > > > > > > > > Omer, maybe we are misunderstanding. But if so, can you cast your > question more precisely in terms of which lines of the wiki page or Note > are you asking about? 
And let’s make sure that the appropriate bit gets > updated when you’ve nailed the answer > > > > > > > > Simon > > > > > > > > From: Simon Marlow > > Sent: 06 January 2020 18:17 > > To: Simon Peyton Jones > > Cc: Ömer Sinan Ağacan ; ghc-devs < > ghc-devs at haskell.org> > > Subject: Re: Code generation/SRT question > > > > > > > > We have: > > > > * wiki: > https://gitlab.haskell.org/ghc/ghc/wikis/commentary/rts/storage/gc/cafs > > > > * a huge Note in CmmBuildInfoTables: > https://gitlab.haskell.org/ghc/ghc/blob/master/compiler%2Fcmm%2FCmmBuildInfoTables.hs#L42 > > > > > > > > Maybe we need links to these from other places? > > > > > > > > Omer's question is referring specifically to the [FUN] optimisation > described in the Note. > > > > > > > > Cheers > > > > Simon > > > > > > > > On Mon, 6 Jan 2020 at 17:50, Simon Peyton Jones > wrote: > > > > Omer, > > > > > > > > I think I’m not understanding all the details, but I have a clear “big > picture”. Simon can correct me if I’m wrong. > > > > > > > > · The info table for any closure (top-level or otherwise) has a > (possibly empty) Static Reference Table, SRT. > > > > · The SRT for an info table identifies the static top level > closures that the code for that info table mentions. (In principle the > garbage collector could parse the code! But it’s easier to find these > references if they in a dedicated table alongside the code.) > > > > · A top level closure is a CAF if it is born updatable. > > > > · A top level closure is CAFFY if it is a CAF, or mentions > another CAFFY closure. > > > > · An entry in the SRT can point > > > > o To a top-level updatable closure. This may now point into the > dynamic heap, and is what we want to keep alive. If the closure hasn’t > been updated, we should keep alive anything its SRT points to. > > > > o Directly to another SRT (or info table?) for a CAFFY top-level > closure, which is a bit faster if we know the thing is non-updatable. 
> > > > · If a function f calls a top-level function g, and g is CAFFY, > then f’s SRT should point to g’s closure or (if g is not a CAF) directly to > its SRT. > > > > · If f is top level, and calls itself, there is no need to > include a pointer to f’s closure in f’s own SRT. > > > > I think this last point is the one you are asking, but I’m not certain. > > > > All this should be written down somewhere, and perhaps is. But where? > > > > Simon > > > > > > > > From: ghc-devs On Behalf Of Simon Marlow > > Sent: 06 January 2020 08:17 > > To: Ömer Sinan Ağacan > > Cc: ghc-devs > > Subject: Re: Code generation/SRT question > > > > > > > > There's no need to set the srt field of f_info if f_closure is the SRT, > since any reference to f_info in the code will give rise to a reference to > f_closure in the SRT corresponding to that code fragment. Does that make > sense? > > > > > > > > The use of a closure as an SRT is really quite a nice optimisation > actually. > > > > > > > > Cheers > > > > Simon > > > > > > > > On Wed, 1 Jan 2020 at 09:35, Ömer Sinan Ağacan > wrote: > > > > Hi Simon, > > > > In Cmm if I have a recursive group of functions f and g, and I'm using > f's > > closure as the SRT for this group, should f's entry block's info table > have > > f_closure as its SRT? > > > > In Cmm syntax > > > > f_entry() { > > { info_tbls: [... > > (c1vn, > > label: ... > > rep: ... > > srt: ??????] > > stack_info: ... > > } > > {offset > > c1vn: > > ... > > } > > } > > > > Here should I have `f_closure` in the srt field? 
> > > > I'd expect yes, but looking at the current SRT code, in > > CmmBuildInfoTables.updInfoSRTs, we have this: > > > > (newInfo, srtEntries) = case mapLookup (g_entry g) funSRTEnv of > > > > Nothing -> > > -- if we don't add SRT entries to this closure, then we > > -- want to set the srt field in its info table as usual > > (info_tbl { cit_srt = mapLookup (g_entry g) srt_env }, []) > > > > Just srtEntries -> srtTrace "maybeStaticFun" (ppr res) > > (info_tbl { cit_rep = new_rep }, res) > > where res = [ CmmLabel lbl | SRTEntry lbl <- srtEntries ] > > > > Here we only update SRT field of the block if we're not adding SRT > entries to > > the function's closure, so in the example above, because we're using the > > function as SRT (and adding SRT entries to its closure) SRT field of > c1vn won't > > be updated. > > > > Am I missing anything? > > > > Thanks, > > > > Ömer > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Jan 8 22:38:25 2020 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 8 Jan 2020 22:38:25 +0000 Subject: Extension.hs ForallXXX constraints Message-ID: I have got back into some GHC dev, and noticed that the various ForallXXX type synonyms in Extension.hs are unused in GHC. In the early iteration, they were needed, but it seems not any more. Is there any reason they should not be removed? Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Thu Jan 9 09:32:20 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 9 Jan 2020 09:32:20 +0000 Subject: Extension.hs ForallXXX constraints In-Reply-To: References: Message-ID: <69982B92-01D0-48F4-8D6B-ABEFA707DE6B@richarde.dev> Somehow, I remember this conversation coming up recently, and I thought it was decided to remove these. Indeed, I'm surprised they're still around. Strike them down! 
Thanks, Richard > On Jan 8, 2020, at 10:38 PM, Alan & Kim Zimmerman wrote: > > I have got back into some GHC dev, and noticed that the various ForallXXX type synonyms in Extension.hs are unused in GHC. > > In the early iteration, they were needed, but it seems not any more. > > Is there any reason they should not be removed? > > Alan > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Thu Jan 9 13:01:55 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 9 Jan 2020 16:01:55 +0300 Subject: Linking stage 2 compiler with non-threaded RTS using Make? Message-ID: Anyone know how to link stage 2 with non-threaded RTS using Make build system? There's a variable GhcThreaded, but setting it "NO" makes no difference, stage 2 compiler is still threaded. So far the only way I could find is to redirect build system output to a file, find the step that linked ghc-stage2, repeat that command but without -threaded. It's really painful as I have to repeat this step after every rebuild. Any tips? Thanks, Ömer From allbery.b at gmail.com Thu Jan 9 15:09:49 2020 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 9 Jan 2020 10:09:49 -0500 Subject: Linking stage 2 compiler with non-threaded RTS using Make? In-Reply-To: References: Message-ID: There are some hidden dependencies, in particular ghci requires GhcThreaded last I checked (and ghci == ghc --interactive, not a separate program that could be linked threaded). You may also have to disable the entire bytecode backend, which would take TH and runghc with it as well as ghci. On Thu, Jan 9, 2020 at 8:02 AM Ömer Sinan Ağacan wrote: > Anyone know how to link stage 2 with non-threaded RTS using Make build > system? > There's a variable GhcThreaded, but setting it "NO" makes no difference, > stage 2 > compiler is still threaded. 
> > So far the only way I could find is to redirect build system output to a > file, > find the step that linked ghc-stage2, repeat that command but without > -threaded. > It's really painful as I have to repeat this step after every rebuild. > > Any tips? > > Thanks, > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Sat Jan 11 14:07:40 2020 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Sat, 11 Jan 2020 14:07:40 +0000 Subject: Marge bot review link Message-ID: <5e19d6ac.1c69fb81.4f04c.bf50@mx.google.com> Hi Ben, I’m wondering if it’s possible to get marge to amend the commit message before it merges it to include links to the review requests. I really miss that phab feature.. Thanks, Tamar -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Sat Jan 11 22:07:50 2020 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sat, 11 Jan 2020 17:07:50 -0500 Subject: gitlab.haskell.org spam issues Message-ID: There appears to be an account [1] that is submitting spam as GitLab issues. I've noticed the following spam issues so far: * https://gitlab.haskell.org/ghc/ghc/issues/17664 * https://gitlab.haskell.org/ghc/ghc/issues/17666 * https://gitlab.haskell.org/haskell/ghcup/issues/125 * https://gitlab.haskell.org/haskell/ghcup/issues/127 * https://gitlab.haskell.org/haskell/ghcup/issues/129 Best, Ryan S. ----- [1] https://gitlab.haskell.org/nunikonaza88 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From takenobu.hs at gmail.com Sun Jan 12 03:08:40 2020 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 12 Jan 2020 12:08:40 +0900 Subject: gitlab.haskell.org spam issues In-Reply-To: References: Message-ID: They are also in snippets: https://gitlab.haskell.org/explore/snippets Regards, Takenobu On Sun, Jan 12, 2020 at 7:08 AM Ryan Scott wrote: > > There appears to be an account [1] that is submitting spam as GitLab issues. I've noticed the following spam issues so far: > > * https://gitlab.haskell.org/ghc/ghc/issues/17664 > * https://gitlab.haskell.org/ghc/ghc/issues/17666 > * https://gitlab.haskell.org/haskell/ghcup/issues/125 > * https://gitlab.haskell.org/haskell/ghcup/issues/127 > * https://gitlab.haskell.org/haskell/ghcup/issues/129 > > Best, > Ryan S. > ----- > [1] https://gitlab.haskell.org/nunikonaza88 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lexi.lambda at gmail.com Sun Jan 12 03:54:32 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Sat, 11 Jan 2020 21:54:32 -0600 Subject: Superclasses of type families returning constraints? In-Reply-To: References: <461E582E-D366-46F8-8E2B-2B1429111803@gmail.com> <7E62CD56-ABF3-4F12-9986-9C659F202784@gmail.com> Message-ID: > On Jan 6, 2020, at 15:41, Simon Peyton Jones wrote: > > Ah, I see a bit better now. So you want a way to get from evidence that > co1 :: F a b ~# () > to evidence that > co2 :: a ~# b > > So you'd need some coercion form like > co2 = runBackwards co1 > > or something, where runBackwards is some kind of coercion form, like sym or trans, etc. Precisely. I’ve begun to feel there’s something markedly GADT-like going on here. 
Consider this similar but slightly different example: type family F1 a where F1 () = Bool Given this definition, this function is theoretically well-typed: f1 :: F1 a -> a f1 !_ = () Since we have access to closed-world reasoning, we know that matching on a term of type `F1 a` implies `a ~ ()`. But it gets more interesting with more complicated examples: type family F2 a b where F2 () () = Bool F2 (Maybe a) a = Int These equations theoretically permit the following definitions: f2a :: F2 () b -> b f2a !_ = () f2b :: F2 (Maybe ()) b -> b f2b !_ = () That is, matching on a term of type `F2 a b` gives rise to a set of *implications.* This is sort of interesting, since we can’t currently write implications in source Haskell. Normally we avoid this problem by using GADTs, so F2 would instead be written like this: data T2 a b where T2A :: Bool -> T2 () () T2B :: Int -> T2 (Maybe a) a But these have different representations: `T2` is tagged, so if we had a value of type `T2 a ()`, we could branch on it to find out if `a` is `()` or `Maybe ()` (and if we had a `Bool` or an `Int`, for that matter). `F2`, on the other hand, is just a synonym, so we cannot do that. In this case, arguably, the right answer is “just use a GADT.” In my case, however, I cannot, because I actually want to write something like type family F2' a b where F2' () () = Void# F2' (Maybe a) a = Void# so that the term has no runtime representation at all. Even if we had `UnliftedData` (and we don’t), and even if it supported GADTs (seems unlikely), this still couldn’t be encoded using it because the term would still have to be represented by an `Int#` for the same reason `(# Void# | Void# #)` is. On the other hand, if this worked as above, `F2'` would really just be a funny-looking way of encoding a proof of a set of implications in source Haskell. > I don't know what a design for this would look like. 
And even if we had it, would it pay its way, by adding substantial and useful new programming expressiveness? For the latter question, I definitely have no idea! In my time writing Haskell, I have never personally found myself wanting this until now, so it may be of limited practical use. I have no idea how to express my `F2'` term in Haskell today, and I’d very much like to be able to, but of course this is not the only mechanism by which it could be expressed. Still, I find the relationship interesting, and I wonder if this particular connection between type families and GADTs is well-known. If it isn’t, I wonder whether or not it’s useful more generally. Of course, it might not be... but then maybe someone else besides me will also find it interesting, at least. :) Alexis From ben at well-typed.com Sun Jan 12 09:10:48 2020 From: ben at well-typed.com (Ben Gamari) Date: Sun, 12 Jan 2020 04:10:48 -0500 Subject: Marge bot review link In-Reply-To: <5e19d6ac.1c69fb81.4f04c.bf50@mx.google.com> References: <5e19d6ac.1c69fb81.4f04c.bf50@mx.google.com> Message-ID: <526A1626-3A2E-4973-A948-799CAE9D596C@well-typed.com> It likely is possible. However, I have been a bit reluctant to touch Marge since it is supposed to be a temporary measure and changes have historically resulted in regressions. I do hope that merge train support will finally be usable in the next release of GitLab. Cheers, - Ben On January 11, 2020 9:07:40 AM EST, lonetiger at gmail.com wrote: >Hi Ben, > >I’m wondering if it’s possible to get marge to amend the commit message >before it merges it to include links to the review requests. > >I really miss that phab feature.. > >Thanks, >Tamar -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From omeragacan at gmail.com Sun Jan 12 10:00:43 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Sun, 12 Jan 2020 13:00:43 +0300 Subject: gitlab.haskell.org spam issues In-Reply-To: References: Message-ID: I just deleted the user with their messages. I don't see a delete link in snippets, so not sure how to delete those. Ömer Takenobu Tani , 12 Oca 2020 Paz, 06:09 tarihinde şunu yazdı: > > They are also in snippets: > > https://gitlab.haskell.org/explore/snippets > > Regards, > Takenobu > > On Sun, Jan 12, 2020 at 7:08 AM Ryan Scott wrote: > > > > There appears to be an account [1] that is submitting spam as GitLab issues. I've noticed the following spam issues so far: > > > > * https://gitlab.haskell.org/ghc/ghc/issues/17664 > > * https://gitlab.haskell.org/ghc/ghc/issues/17666 > > * https://gitlab.haskell.org/haskell/ghcup/issues/125 > > * https://gitlab.haskell.org/haskell/ghcup/issues/127 > > * https://gitlab.haskell.org/haskell/ghcup/issues/129 > > > > Best, > > Ryan S. > > ----- > > [1] https://gitlab.haskell.org/nunikonaza88 > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From matthewtpickering at gmail.com Sun Jan 12 12:06:56 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 12 Jan 2020 12:06:56 +0000 Subject: gitlab.haskell.org spam issues In-Reply-To: References: Message-ID: I just went through the "report abuse" queue and deleted users as reported by Takenobu. On Sun, Jan 12, 2020 at 10:01 AM Ömer Sinan Ağacan wrote: > > I just deleted the user with their messages. > > I don't see a delete link in snippets, so not sure how to delete those. 
> > Ömer > > Takenobu Tani , 12 Oca 2020 Paz, 06:09 > tarihinde şunu yazdı: > > > > They are also in snippets: > > > > https://gitlab.haskell.org/explore/snippets > > > > Regards, > > Takenobu > > > > On Sun, Jan 12, 2020 at 7:08 AM Ryan Scott wrote: > > > > > > There appears to be an account [1] that is submitting spam as GitLab issues. I've noticed the following spam issues so far: > > > > > > * https://gitlab.haskell.org/ghc/ghc/issues/17664 > > > * https://gitlab.haskell.org/ghc/ghc/issues/17666 > > > * https://gitlab.haskell.org/haskell/ghcup/issues/125 > > > * https://gitlab.haskell.org/haskell/ghcup/issues/127 > > > * https://gitlab.haskell.org/haskell/ghcup/issues/129 > > > > > > Best, > > > Ryan S. > > > ----- > > > [1] https://gitlab.haskell.org/nunikonaza88 > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Sun Jan 12 12:08:44 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 12 Jan 2020 07:08:44 -0500 Subject: gitlab.haskell.org spam issues In-Reply-To: References: Message-ID: <87y2uc3mif.fsf@smart-cactus.org> Takenobu Tani writes: > They are also in snippets: > > https://gitlab.haskell.org/explore/snippets > I have handled these. Thanks Takenobu! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From matthewtpickering at gmail.com Sun Jan 12 12:10:21 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sun, 12 Jan 2020 12:10:21 +0000 Subject: gitlab.haskell.org spam issues In-Reply-To: <87y2uc3mif.fsf@smart-cactus.org> References: <87y2uc3mif.fsf@smart-cactus.org> Message-ID: There are still *a lot* of spam users though, i'm not sure what we can do to tackle this. On Sun, Jan 12, 2020 at 12:09 PM Ben Gamari wrote: > > Takenobu Tani writes: > > > They are also in snippets: > > > > https://gitlab.haskell.org/explore/snippets > > > I have handled these. Thanks Takenobu! > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From takenobu.hs at gmail.com Sun Jan 12 12:55:19 2020 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 12 Jan 2020 21:55:19 +0900 Subject: gitlab.haskell.org spam issues In-Reply-To: References: <87y2uc3mif.fsf@smart-cactus.org> Message-ID: Thanks always, Ömer, Matthew, Ben :) Regards, Takenobu On Sun, Jan 12, 2020 at 9:10 PM Matthew Pickering wrote: > > There are still *a lot* of spam users though, i'm not sure what we can > do to tackle this. > > On Sun, Jan 12, 2020 at 12:09 PM Ben Gamari wrote: > > > > Takenobu Tani writes: > > > > > They are also in snippets: > > > > > > https://gitlab.haskell.org/explore/snippets > > > > > I have handled these. Thanks Takenobu! 
> > > > Cheers, > > > > - Ben > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Mon Jan 13 09:10:12 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 13 Jan 2020 12:10:12 +0300 Subject: Fixing type synonyms to Uniq(D)FM newtypes Message-ID: Hi, UniqFM and UniqDFM types are basically maps from Uniques to other stuff. Most of the time we don't actually map Uniques but instead map things like Vars or Names. For those we have types like VarEnv, NameEnv, FastStringEnv, ... which are defined as type synonyms to UniqFM or UniqDFM, and operations are defined like extendFsEnv = addToUFM extendNameEnv = addToUFM extendVarEnv = addToUFM This causes problems when I have multiple Uniquables in scope and use the wrong one to update an environment because the program type checks and does the wrong thing in runtime. An example is #17667 where I did delVarEnv env name where `name :: Name`. I shouldn't be able to remove a Name from a Var env but this currently type checks. Two solutions proposed: - Make these env types newtypes instead of type synonyms. - Add a phantom type parameter to UniqFM/UniqDFM. If you could share your opinion on how to fix this I'd like to fix this soon. Personally I prefer (1) because it looks simpler but I'd be happy with (2) as well. Ömer From rae at richarde.dev Mon Jan 13 22:55:02 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 13 Jan 2020 22:55:02 +0000 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: References: Message-ID: I'd be fine with making these newtypes, but I still don't quite understand the motivation. Note that the specialized functions for the different instances of UniqFM all have type signatures. For example, delVarEnv will only work with a Var, not a Name. Was there a different scenario that you want to avoid? 
Thanks, Richard > On Jan 13, 2020, at 9:10 AM, Ömer Sinan Ağacan wrote: > > Hi, > > UniqFM and UniqDFM types are basically maps from Uniques to other stuff. Most of > the time we don't actually map Uniques but instead map things like Vars or > Names. For those we have types like VarEnv, NameEnv, FastStringEnv, ... which > are defined as type synonyms to UniqFM or UniqDFM, and operations are defined > like > > extendFsEnv = addToUFM > extendNameEnv = addToUFM > extendVarEnv = addToUFM > > This causes problems when I have multiple Uniquables in scope and use the wrong > one to update an environment because the program type checks and does the wrong > thing in runtime. An example is #17667 where I did > > delVarEnv env name > > where `name :: Name`. I shouldn't be able to remove a Name from a Var env but > this currently type checks. > > Two solutions proposed: > > - Make these env types newtypes instead of type synonyms. > - Add a phantom type parameter to UniqFM/UniqDFM. > > If you could share your opinion on how to fix this I'd like to fix this soon. > > Personally I prefer (1) because it looks simpler but I'd be happy with > (2) as well. > > Ömer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From omeragacan at gmail.com Tue Jan 14 06:15:54 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Tue, 14 Jan 2020 09:15:54 +0300 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: References: Message-ID: > but I still don't quite understand the motivation I give a concrete example (something that happened to me that I had to debug in runtime) in the issue I linked in my original post. > For example, delVarEnv will only work with a Var, not a Name. delVarEnv will happily accept a NameEnv in its first argument, which is the problem I was trying to describe. 
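To make the mix-up concrete, here is a minimal, self-contained sketch of the two proposed fixes. It is illustrative only: it keys the maps by plain Int via Data.IntMap (standing in for GHC's Unique-keyed UniqFM), and names such as delUFM and getKey are hypothetical stand-ins rather than GHC's actual API.

```haskell
import qualified Data.IntMap.Strict as IM

-- Toy stand-ins for GHC's Var and Name.
data Var  = Var  Int
data Name = Name Int

-- Option (1): newtype wrappers. VarEnv and NameEnv become distinct
-- types, so passing a NameEnv where a VarEnv is expected no longer
-- type checks.
newtype VarEnv a  = VarEnv  (IM.IntMap a)
newtype NameEnv a = NameEnv (IM.IntMap a)

delVarEnv :: VarEnv a -> Var -> VarEnv a
delVarEnv (VarEnv m) (Var u) = VarEnv (IM.delete u m)

-- Option (2): a phantom parameter recording the key's domain. One
-- polymorphic delete works for every domain, but `UniqFM Var a` and
-- `UniqFM Name a` are still incompatible types.
newtype UniqFM dom a = UniqFM (IM.IntMap a)

class Uniquable k where
  getKey :: k -> Int

instance Uniquable Var  where getKey (Var u)  = u
instance Uniquable Name where getKey (Name u) = u

delUFM :: Uniquable dom => UniqFM dom a -> dom -> UniqFM dom a
delUFM (UniqFM m) k = UniqFM (IM.delete (getKey k) m)
```

Under either option, the call from #17667 (deleting with a Name from a Var-keyed env) becomes a compile-time error rather than a silent no-op at runtime.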
Ömer Richard Eisenberg , 14 Oca 2020 Sal, 01:55 tarihinde şunu yazdı: > > I'd be fine with making these newtypes, but I still don't quite understand the motivation. Note that the specialized functions for the different instances of UniqFM all have type signatures. For example, delVarEnv will only work with a Var, not a Name. > > Was there a different scenario that you want to avoid? > > Thanks, > Richard > > > On Jan 13, 2020, at 9:10 AM, Ömer Sinan Ağacan wrote: > > > > Hi, > > > > UniqFM and UniqDFM types are basically maps from Uniques to other stuff. Most of > > the time we don't actually map Uniques but instead map things like Vars or > > Names. For those we have types like VarEnv, NameEnv, FastStringEnv, ... which > > are defined as type synonyms to UniqFM or UniqDFM, and operations are defined > > like > > > > extendFsEnv = addToUFM > > extendNameEnv = addToUFM > > extendVarEnv = addToUFM > > > > This causes problems when I have multiple Uniquables in scope and use the wrong > > one to update an environment because the program type checks and does the wrong > > thing in runtime. An example is #17667 where I did > > > > delVarEnv env name > > > > where `name :: Name`. I shouldn't be able to remove a Name from a Var env but > > this currently type checks. > > > > Two solutions proposed: > > > > - Make these env types newtypes instead of type synonyms. > > - Add a phantom type parameter to UniqFM/UniqDFM. > > > > If you could share your opinion on how to fix this I'd like to fix this soon. > > > > Personally I prefer (1) because it looks simpler but I'd be happy with > > (2) as well. 
> > > > Ömer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From arnaud.spiwack at tweag.io Tue Jan 14 07:07:58 2020 From: arnaud.spiwack at tweag.io (Spiwack, Arnaud) Date: Tue, 14 Jan 2020 08:07:58 +0100 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: References: Message-ID: In a way, if these types need to exist at all, they probably should be newtypes. That being said, I'm pretty sure that the current APIs are incomplete, so turning these into newtypes may be, in fact, quite a bit of work. But if we are starting this discussion, I'd like to understand first why all these types exist at all? Why not use `UniqFM` everywhere? /Arnaud On Tue, Jan 14, 2020 at 7:16 AM Ömer Sinan Ağacan wrote: > > but I still don't quite understand the motivation > > I give a concrete example (something that happened to me that I had to > debug in > runtime) in the issue I linked in my original post. > > > For example, delVarEnv will only work with a Var, not a Name. > > delVarEnv will happily accept a NameEnv in its first argument, which is the > problem I was trying to describe. > > Ömer > > Richard Eisenberg , 14 Oca 2020 Sal, 01:55 tarihinde > şunu yazdı: > > > > I'd be fine with making these newtypes, but I still don't quite > understand the motivation. Note that the specialized functions for the > different instances of UniqFM all have type signatures. For example, > delVarEnv will only work with a Var, not a Name. > > > > Was there a different scenario that you want to avoid? > > > > Thanks, > > Richard > > > > > On Jan 13, 2020, at 9:10 AM, Ömer Sinan Ağacan > wrote: > > > > > > Hi, > > > > > > UniqFM and UniqDFM types are basically maps from Uniques to other > stuff. Most of > > > the time we don't actually map Uniques but instead map things like > Vars or > > > Names. For those we have types like VarEnv, NameEnv, FastStringEnv, > ... 
which > > > are defined as type synonyms to UniqFM or UniqDFM, and operations are > defined > > > like > > > > > > extendFsEnv = addToUFM > > > extendNameEnv = addToUFM > > > extendVarEnv = addToUFM > > > > > > This causes problems when I have multiple Uniquables in scope and use > the wrong > > > one to update an environment because the program type checks and does > the wrong > > > thing in runtime. An example is #17667 where I did > > > > > > delVarEnv env name > > > > > > where `name :: Name`. I shouldn't be able to remove a Name from a Var > env but > > > this currently type checks. > > > > > > Two solutions proposed: > > > > > > - Make these env types newtypes instead of type synonyms. > > > - Add a phantom type parameter to UniqFM/UniqDFM. > > > > > > If you could share your opinion on how to fix this I'd like to fix > this soon. > > > > > > Personally I prefer (1) because it looks simpler but I'd be happy with > > > (2) as well. > > > > > > Ömer > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Jan 14 10:02:17 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 14 Jan 2020 05:02:17 -0500 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: References: Message-ID: <87tv4y2w63.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > Hi, > > UniqFM and UniqDFM types are basically maps from Uniques to other stuff. Most of > the time we don't actually map Uniques but instead map things like Vars or > Names. For those we have types like VarEnv, NameEnv, FastStringEnv, ... 
which > are defined as type synonyms to UniqFM or UniqDFM, and operations are defined > like > > extendFsEnv = addToUFM > extendNameEnv = addToUFM > extendVarEnv = addToUFM > > This causes problems when I have multiple Uniquables in scope and use the wrong > one to update an environment because the program type checks and does the wrong > thing in runtime. An example is #17667 where I did > > delVarEnv env name > > where `name :: Name`. I shouldn't be able to remove a Name from a Var env but > this currently type checks. > At first I was a bit confused at how this could possibly typecheck. After all, delVarEnv has type, VarEnv a -> Var -> VarEnv a which seems quite reasonable and should correctly reject the application to a Name as given in Omer's example. However, the mistake in #17667 is actually that he wrote, delVarEnv env name instead of delNameEnv env (varName var) That is, because `VarEnv a ~ NameEnv a` one can easily mix up a NameEnv with a VarEnv and not get a compile-time error. I can see how this could be a nasty bug to track down. > > Two solutions proposed: > > - Make these env types newtypes instead of type synonyms. > - Add a phantom type parameter to UniqFM/UniqDFM. > IIRC this has been suggested before.
In my mind adding a phantom type parameter to `UniqFM` solves the issue entirely but will result in less code churn and follows the example from the existing map data types from containers. Cheers, Matt On Tue, Jan 14, 2020 at 10:02 AM Ben Gamari wrote: > > Ömer Sinan Ağacan writes: > > > Hi, > > > > UniqFM and UniqDFM types are basically maps from Uniques to other stuff. Most of > > the time we don't actually map Uniques but instead map things like Vars or > > Names. For those we have types like VarEnv, NameEnv, FastStringEnv, ... which > > are defined as type synonyms to UniqFM or UniqDFM, and operations are defined > > like > > > > extendFsEnv = addToUFM > > extendNameEnv = addToUFM > > extendVarEnv = addToUFM > > > > This causes problems when I have multiple Uniquables in scope and use the wrong > > one to update an environment because the program type checks and does the wrong > > thing in runtime. An example is #17667 where I did > > > > delVarEnv env name > > > > where `name :: Name`. I shouldn't be able to remove a Name from a Var env but > > this currently type checks. > > > At first I was a bit confused at how this could possibly typecheck. > Afterall, delVarEnv has type, > > VarEnv a -> Var -> VarEnv a > > which seems quite reasonable and should correctly reject the application > to a Name as given in Omer's example. However, the mistake in #17667 > is actually that he wrote, > > delVarEnv env name > > instead of > > delNameEnv env (varName var) > > That is, because `VarEnv a ~ NameEnv a` one can easily mix up a > NameEnv with a VarEnv and not get a compile-time error. I can see how > this could be a nasty bug to track down. > > > > Two solutions proposed: > > > > - Make these env types newtypes instead of type synonyms. > > - Add a phantom type parameter to UniqFM/UniqDFM. > > > IIRC this has been suggested before. 
I, for one, see the value in this > and certainly wouldn't be opposed to either of these options, although > would weakly favor the former over the latter. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Jan 14 10:31:12 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 14 Jan 2020 05:31:12 -0500 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: References: <87tv4y2w63.fsf@smart-cactus.org> Message-ID: <87r2022utw.fsf@smart-cactus.org> Matthew Pickering writes: > Can someone explain the benefit of the newtype wrappers over the > phantom type parameter approach? > > In my mind adding a phantom type parameter to `UniqFM` solves the > issue entirely but will result in less code churn and follows the > example from the existing map data types from containers. > I would say the same of newtype wrappers; after all, we already have a convention of using the "specialised" type synonyms and their functions instead of UniqFM directly where possible. Turning VarEnv, etc. into newtypes would likely touch little code outside of the modules where they are defined. Which approach is preferable is really a question of what degree of encapsulation we want. The advantage of making, e.g., VarEnv a newtype is that our use of Uniques remains an implementation detail (which it is, in my opinion). We are then in principle free to change the representation of VarEnv down the road. Of course, in practice it is hard to imagine GHC moving away from uniques for things like VarEnv. However, properly encapsulating them seems like good engineering practice and incurs very little cost (especially given our current conventions). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Tue Jan 14 11:29:47 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 14 Jan 2020 11:29:47 +0000 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: <87r2022utw.fsf@smart-cactus.org> References: <87tv4y2w63.fsf@smart-cactus.org> <87r2022utw.fsf@smart-cactus.org> Message-ID: <671ED356-CD95-49E6-BFA7-CEBDB50879A8@richarde.dev> One advantage of the phantom-parameter approach is that it allows for nice polymorphism. > lookupEnv :: Uniquable dom => UniqFM dom rng -> dom -> Maybe rng Now, we don't need lookupVarEnv separately from lookupNameEnv, but we get the type-checking for free. I agree with Ben about the fact that newtypes have their own advantages. I don't have much of an opinion, in the end. Richard > On Jan 14, 2020, at 10:31 AM, Ben Gamari wrote: > > Matthew Pickering writes: > >> Can someone explain the benefit of the newtype wrappers over the >> phantom type parameter approach? >> >> In my mind adding a phantom type parameter to `UniqFM` solves the >> issue entirely but will result in less code churn and follows the >> example from the existing map data types from containers. >> > I would say the same of newtype wrappers; afterall, we already have a > convention of using the "specialised" type synonyms and their functions > instead of UniqFM directly where possible. Turning VarEnv, etc. into > newtypes likely touch little code outside of the modules where they are > defined. > > Which approach is preferable is really a question of what degree of > encapsulation we want. The advantage of making, e.g., VarEnv a newtype > is that our use of Uniques remains an implementation detail (which it > is, in my opinion). We are then in principle free to change the > representation of VarEnv down the road. > > Of course, in practice it is hard to imagine GHC moving away from > uniques for things like VarEnv. 
However, properly encapsulating them > seems like good engineering practice and incurs very little cost > (especially given our current conventions). > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at smart-cactus.org Tue Jan 14 11:29:58 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 14 Jan 2020 12:29:58 +0100 Subject: Windows testsuite failures Message-ID: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Hi all, Currently Windows CI is a bit flaky due to some unfortunately rather elusive testsuite driver bugs. Progress in resolving this has been a bit slow due to travel over the last week but I will be back home tomorrow and should be able to resolve the issue soon thereafter. Cheers, - Ben From simonpj at microsoft.com Tue Jan 14 17:34:57 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 14 Jan 2020 17:34:57 +0000 Subject: GHC perf Message-ID: Ben, David I'm still baffled by how to reliably get GHC perf metrics on my local machine. The wiki page https://gitlab.haskell.org/ghc/ghc/wikis/building/running-tests/performance-tests helps, but not enough!
* There are two things going on:
  * CI perf measurements
  * Local machine perf measurements
  I think that they are somehow handled differently (why?) but they are all muddled up on the wiki page.
* My goal is this:
  * Start with a master commit, say from Dec 2019.
  * Implement some change, on a branch.
  * sh validate -legacy (or something else if you like)
  * Look at perf regressions.
* I believe I have first to utter the incantation
  $ git fetch https://gitlab.haskell.org/ghc/ghc-performance-notes.git refs/notes/perf:refs/notes/ci/perf
* But then:
  * How do I ensure that the baseline perf numbers I get relate to the master commit I started from, back in Dec 2019? I don't want numbers from Jan 2020.
  * If I rebase my branch on top of HEAD, say, how do I update the perf baseline numbers to be for HEAD?
  * Generally, how can I tell the commit to which the baseline numbers relate?
  * Also, in my tree I have a series of incremental changes; I want to see if any of them have perf regressions. How do I do that?

Thanks

Simon
-------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jan 15 11:11:56 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 15 Jan 2020 14:11:56 +0300 Subject: Linking stage 2 compiler with non-threaded RTS using Make? In-Reply-To: References: Message-ID: Btw I just realized that this also makes ticky-ticky profiling harder because, as far as I know, ticky profiling is not compatible with threaded runtime. I need to try ticky profiling for !1747 and I'm currently painfully manually linking the stage 2 executable using the method I described in my original email. Ömer Brandon Allbery , 9 Oca 2020 Per, 18:10 tarihinde şunu yazdı: > > There are some hidden dependencies, in particular ghci requires GhcThreaded last I checked (and ghci == ghc --interactive, not a separate program that could be linked threaded). You may also have to disable the entire bytecode backend, which would take TH and runghc with it as well as ghci. > > On Thu, Jan 9, 2020 at 8:02 AM Ömer Sinan Ağacan wrote: >> >> Anyone know how to link stage 2 with non-threaded RTS using Make build system? >> There's a variable GhcThreaded, but setting it "NO" makes no difference, stage 2 >> compiler is still threaded. >> >> So far the only way I could find is to redirect build system output to a file, >> find the step that linked ghc-stage2, repeat that command but without -threaded. >> It's really painful as I have to repeat this step after every rebuild. >> >> Any tips?
>> >> Thanks, >> >> Ömer >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > -- > brandon s allbery kf8nh > allbery.b at gmail.com From omeragacan at gmail.com Wed Jan 15 11:16:57 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 15 Jan 2020 14:16:57 +0300 Subject: Linking stage 2 compiler with non-threaded RTS using Make? In-Reply-To: References: Message-ID: I just realized that the wiki page for ticky profiling [1] suggests using `GhcThreaded = NO` becuase ticky profiling is not compatible with threaded runtime. I think this suggests that `GhcThreaded = NO` used to work. It doesn't work anymore. Ömer [1]: https://gitlab.haskell.org/ghc/ghc/wikis/debugging/ticky-ticky Ömer Sinan Ağacan , 15 Oca 2020 Çar, 14:11 tarihinde şunu yazdı: > > Btw I just realized that this also makes ticky-ticky profiling harder becuase as > far as I know ticky profiling not compatible with threaded runtime. I need to > try ticky profiling for !1747 and I'm currently painfully manually linking the > stage 2 executable using the method I described in my original email. > > Ömer > > Brandon Allbery , 9 Oca 2020 Per, 18:10 tarihinde > şunu yazdı: > > > > There are some hidden dependencies, in particular ghci requires GhcThreaded last I checked (and ghci == ghc --interactive, not a separate program that could be linked threaded). You may also have to disable the entire bytecode backend, which would take TH and runghc with it as well as ghci. > > > > On Thu, Jan 9, 2020 at 8:02 AM Ömer Sinan Ağacan wrote: > >> > >> Anyone know how to link stage 2 with non-threaded RTS using Make build system? > >> There's a variable GhcThreaded, but setting it "NO" makes no difference, stage 2 > >> compiler is still threaded. 
> >> > >> So far the only way I could find is to redirect build system output to a file, > >> find the step that linked ghc-stage2, repeat that command but without -threaded. > >> It's really painful as I have to repeat this step after every rebuild. > >> > >> Any tips? > >> > >> Thanks, > >> > >> Ömer > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > -- > > brandon s allbery kf8nh > > allbery.b at gmail.com From pho at cielonegro.org Wed Jan 15 12:05:51 2020 From: pho at cielonegro.org (PHO) Date: Wed, 15 Jan 2020 21:05:51 +0900 Subject: Linking stage 2 compiler with non-threaded RTS using Make? In-Reply-To: References: Message-ID: <4ce834db-00d3-5684-a306-cf4b94dee548@cielonegro.org> I'm a maintainer of a GHC package in pkgsrc - a package collection mainly used by NetBSD. Our package has the following patch to ghc/ghc.mk to work around the problem. The issue is that, while ghc/ghc-bin.cabal.in has a flag to link against non-threaded RTS, the build system doesn't propagate GhcThreaded to it: > --- ghc/ghc.mk.orig 2019-08-25 12:03:36.000000000 +0000 > +++ ghc/ghc.mk > @@ -61,7 +61,13 @@ ifeq "$(GhcThreaded)" "YES" > # Use threaded RTS with GHCi, so threads don't get blocked at the prompt. > ghc_stage2_MORE_HC_OPTS += -threaded > ghc_stage3_MORE_HC_OPTS += -threaded > +else > +# Opt out from threaded GHC. See ghc-bin.cabal.in > +ghc_stage2_CONFIGURE_OPTS += -f-threaded > +ghc_stage3_CONFIGURE_OPTS += -f-threaded > endif > +# Stage-0 compiler isn't guaranteed to have a threaded RTS. > +ghc_stage1_CONFIGURE_OPTS += -f-threaded > > ifeq "$(GhcProfiled)" "YES" > ghc_stage2_PROGRAM_WAY = p On 2020-01-15 20:16, Ömer Sinan Ağacan wrote: > I just realized that the wiki page for ticky profiling [1] suggests using > `GhcThreaded = NO` becuase ticky profiling is not compatible with threaded > runtime. 
> > I think this suggests that `GhcThreaded = NO` used to work. It doesn't work > anymore. > > Ömer > > [1]: https://gitlab.haskell.org/ghc/ghc/wikis/debugging/ticky-ticky > > Ömer Sinan Ağacan , 15 Oca 2020 Çar, 14:11 > tarihinde şunu yazdı: >> >> Btw I just realized that this also makes ticky-ticky profiling harder becuase as >> far as I know ticky profiling not compatible with threaded runtime. I need to >> try ticky profiling for !1747 and I'm currently painfully manually linking the >> stage 2 executable using the method I described in my original email. >> >> Ömer >> >> Brandon Allbery , 9 Oca 2020 Per, 18:10 tarihinde >> şunu yazdı: >>> >>> There are some hidden dependencies, in particular ghci requires GhcThreaded last I checked (and ghci == ghc --interactive, not a separate program that could be linked threaded). You may also have to disable the entire bytecode backend, which would take TH and runghc with it as well as ghci. >>> >>> On Thu, Jan 9, 2020 at 8:02 AM Ömer Sinan Ağacan wrote: >>>> >>>> Anyone know how to link stage 2 with non-threaded RTS using Make build system? >>>> There's a variable GhcThreaded, but setting it "NO" makes no difference, stage 2 >>>> compiler is still threaded. >>>> >>>> So far the only way I could find is to redirect build system output to a file, >>>> find the step that linked ghc-stage2, repeat that command but without -threaded. >>>> It's really painful as I have to repeat this step after every rebuild. >>>> >>>> Any tips? 
>>>> >>>> Thanks, >>>> >>>> Ömer >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >>> >>> -- >>> brandon s allbery kf8nh >>> allbery.b at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at well-typed.com Wed Jan 15 20:20:55 2020 From: ben at well-typed.com (Ben Gamari) Date: Wed, 15 Jan 2020 15:20:55 -0500 Subject: Linking stage 2 compiler with non-threaded RTS using Make? In-Reply-To: <4ce834db-00d3-5684-a306-cf4b94dee548@cielonegro.org> References: <4ce834db-00d3-5684-a306-cf4b94dee548@cielonegro.org> Message-ID: <87imlc31zw.fsf@smart-cactus.org> PHO writes: > I'm a maintainer of a GHC package in pkgsrc - a package collection > mainly used by NetBSD. Our package has the following patch to ghc/ghc.mk > to work around the problem. The issue is that, while > ghc/ghc-bin.cabal.in has a flag to link against non-threaded RTS, the > build system doesn't propagate GhcThreaded to it: > Thanks PHO! I have opened !2474 to merge this patch. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From trupill at gmail.com Thu Jan 16 16:19:09 2020 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Thu, 16 Jan 2020 17:19:09 +0100 Subject: Find constraints over type Message-ID: Dear GHC devs, I am trying to figure out a way to obtain the constraints that hold over a type. Let me give you an example: suppose that I write the following function: f :: Eq a => [a] -> Bool f xs = xs == [] If I ask for the type of the Var ' xs', I get back '[a]'. This is correct, but I am missing the crucial information that '[a]' must be Eq. Is there an easy way to get it? 
It seems that 'varType' doesn't give me enough information. Regards, Alejandro -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Jan 16 17:35:05 2020 From: ben at well-typed.com (Ben Gamari) Date: Thu, 16 Jan 2020 12:35:05 -0500 Subject: [ANNOUNCE] GHC 8.8.2 is now available Message-ID: <87r1zz1f09.fsf@smart-cactus.org> Hello everyone, The GHC team is proud to announce the release of GHC 8.8.2. The source distribution, binary distributions, and documentation are available at https://downloads.haskell.org/~ghc/8.8.2 Release notes are also available [1]. This release fixes a handful of issues affecting 8.8.1: - A bug (#17088) in the compacting garbage collector resulting in segmentation faults under specific circumstances has been fixed. Note that this may affect user programs even if they did not explicitly request the compacting GC (using the -c RTS flag) since GHC may fall back to compacting collection during times of high memory pressure. - A code generation bug (#17334) resulting in GHC panics has been fixed. - A bug in the `process` library causing builds using `hsc2hs` to fail non-deterministically on Windows has been fixed (Trac #17480). - A typechecker bug (#12088) resulting in programs being unexpectedly rejected has been fixed. - A bug in the implementation of compact normal forms resulting in segmentation faults in some uses (#17044) has been fixed. - A bug causing GHC to incorrectly complain about incompatible LLVM versions when using LLVM 7.0.1 has been fixed (#16990). As always, if anything looks amiss do let us know. Happy compiling! Cheers, - Ben [1] https://downloads.haskell.org/ghc/8.8.2/docs/html/users_guide/8.8.2-notes.html -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Jan 16 18:06:16 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 16 Jan 2020 13:06:16 -0500 Subject: Find constraints over type In-Reply-To: References: Message-ID: <87o8v31dke.fsf@smart-cactus.org> Alejandro Serrano Mena writes: > Dear GHC devs, > I am trying to figure out a way to obtain the constraints that hold over a > type. Let me give you an example: suppose that I write the following > function: > > f :: Eq a => [a] -> Bool > f xs = xs == [] > > If I ask for the type of the Var ' xs', I get back '[a]'. This is correct, > but I am missing the crucial information that '[a]' must be Eq. > > Is there an easy way to get it? It seems that 'varType' doesn't give me > enough information. > Indeed `varType` of `xs` will only give you the type `a`. How to get back to the constraints that mention `xs` is a bit tricky and will depend upon where in the compiler you are. As far as I can tell, getting constraints after desugaring will be quite difficult since they will have been lowered to dictionaries at that point. If you have access to hsSyn then you can of course easily compute the constraints explicitly provided by the signature for `xs` (assuming there is one). However, this will of course miss any inferred constraints. During typechecking it may be possible to compute some sort of constraint set by looking at the typechecking environment, although I'll admit this sounds quite dodgy. Can you provide more details on what you are trying to do? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Thu Jan 16 18:47:36 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 16 Jan 2020 18:47:36 +0000 Subject: Find constraints over type In-Reply-To: References: Message-ID: There is definitely no pure way to get from ‘a’ to its constraints. * It is far from clear what “its constraints” are. Is (C a b) such a constraint? C [a] b? What about superclasses? * Constraints vary depending on where you are. GADT matches can bring into scope extra constraints on existing type variables. So as Ben says, to give a sensible response you’ll need to explain more about your goal Simon From: ghc-devs On Behalf Of Alejandro Serrano Mena Sent: 16 January 2020 16:19 To: GHC developers Subject: Find constraints over type Dear GHC devs, I am trying to figure out a way to obtain the constraints that hold over a type. Let me give you an example: suppose that I write the following function: f :: Eq a => [a] -> Bool f xs = xs == [] If I ask for the type of the Var ' xs', I get back '[a]'. This is correct, but I am missing the crucial information that '[a]' must be Eq. Is there an easy way to get it? It seems that 'varType' doesn't give me enough information. Regards, Alejandro -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Fri Jan 17 06:02:46 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 17 Jan 2020 09:02:46 +0300 Subject: Windows testsuite failures In-Reply-To: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: Hi Ben, Can we please disable Windows CI? I've spent more time fighting the CI than doing useful work this week, it's really frustrating. 
Since we have no idea how to fix it maybe we should test Windows only before a release, manually (and use bisect in case of regressions). Ömer Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde şunu yazdı: > > Hi all, > > Currently Windows CI is a bit flaky due to some unfortunately rather elusive testsuite driver bugs. Progress in resolving this has been a bit slow due to travel over the last week but I will be back home tomorrow and should be able to resolve the issue soon thereafter. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lonetiger at gmail.com Fri Jan 17 06:10:17 2020 From: lonetiger at gmail.com (Phyx) Date: Fri, 17 Jan 2020 06:10:17 +0000 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: Sure because only testing once every 6 months is a very very good idea... Sent from my Mobile On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan wrote: > Hi Ben, > > Can we please disable Windows CI? I've spent more time fighting the CI than > doing useful work this week, it's really frustrating. > > Since we have no idea how to fix it maybe we should test Windows only > before a > release, manually (and use bisect in case of regressions). > > Ömer > > Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde şunu > yazdı: > > > > Hi all, > > > > Currently Windows CI is a bit flaky due to some unfortunately rather > elusive testsuite driver bugs. Progress in resolving this has been a bit > slow due to travel over the last week but I will be back home tomorrow and > should be able to resolve the issue soon thereafter. 
> > > > Cheers, > > > > - Ben > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Fri Jan 17 06:16:57 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 17 Jan 2020 09:16:57 +0300 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: We release more often than once in 6 months. We clearly have no idea how to test on Windows. If you know how to do it then feel free to submit a MR. Otherwise blocking every MR indefinitely is worse than testing Windows less frequently. Ömer Phyx , 17 Oca 2020 Cum, 09:10 tarihinde şunu yazdı: > > Sure because only testing once every 6 months is a very very good idea... > > Sent from my Mobile > > On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan wrote: >> >> Hi Ben, >> >> Can we please disable Windows CI? I've spent more time fighting the CI than >> doing useful work this week, it's really frustrating. >> >> Since we have no idea how to fix it maybe we should test Windows only before a >> release, manually (and use bisect in case of regressions). >> >> Ömer >> >> Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde şunu yazdı: >> > >> > Hi all, >> > >> > Currently Windows CI is a bit flaky due to some unfortunately rather elusive testsuite driver bugs. Progress in resolving this has been a bit slow due to travel over the last week but I will be back home tomorrow and should be able to resolve the issue soon thereafter. 
>> > >> > Cheers, >> > >> > - Ben >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lonetiger at gmail.com Fri Jan 17 06:49:05 2020 From: lonetiger at gmail.com (Phyx) Date: Fri, 17 Jan 2020 06:49:05 +0000 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: Oh I spent a non-insignificant amount of time back in the phabricator days to make the CI stable. Now because people were committing to master directly without going through CI it was always a cat and mouse game and I gave up eventually. Now we have rewritten the CI and it's pointing out actual issues in the compiler. And your suggestion is well let's just ignore it. How about you use some of that energy to help I stead of taking the easy way? And I bet you're going to say you don't care about Windows to which I would say I don't care about the non-threaded runtime and wish we would get rid of it. But can't always get what you want. And to say we'll actually fix anything before release doesn't align with what I've seen so far, which had me scrambling last minute to ensure we can release Windows instead of making releases without it. Quite frankly I don't need you to tell me to submit MRs to fix it since that's what I spent again a lot of time doing. Or maybe you would like to pay my paycheck so I can spend more than a considerable amount of my free time on it. Kind regards, Tamar Sent from my Mobile On Fri, Jan 17, 2020, 06:17 Ömer Sinan Ağacan wrote: > We release more often than once in 6 months. > > We clearly have no idea how to test on Windows. If you know how to do it > then > feel free to submit a MR. 
Otherwise blocking every MR indefinitely is > worse than > testing Windows less frequently. > > Ömer > > Phyx , 17 Oca 2020 Cum, 09:10 tarihinde şunu yazdı: > > > > Sure because only testing once every 6 months is a very very good idea... > > > > Sent from my Mobile > > > > On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan > wrote: > >> > >> Hi Ben, > >> > >> Can we please disable Windows CI? I've spent more time fighting the CI > than > >> doing useful work this week, it's really frustrating. > >> > >> Since we have no idea how to fix it maybe we should test Windows only > before a > >> release, manually (and use bisect in case of regressions). > >> > >> Ömer > >> > >> Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde > şunu yazdı: > >> > > >> > Hi all, > >> > > >> > Currently Windows CI is a bit flaky due to some unfortunately rather > elusive testsuite driver bugs. Progress in resolving this has been a bit > slow due to travel over the last week but I will be back home tomorrow and > should be able to resolve the issue soon thereafter. > >> > > >> > Cheers, > >> > > >> > - Ben > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Fri Jan 17 07:02:08 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Fri, 17 Jan 2020 10:02:08 +0300 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: > Now we have rewritten the CI and it's pointing out actual issues in the > compiler. And your suggestion is well let's just ignore it. 
When is the last time Windows CI caught an actual bug? All I see is random system failures [1, 2, 3]. It must be catching *some* bugs, but that's a rare event in my experience. Sure, I don't write Windows-specific code (e.g. IO manager, or library code), but then why am I fighting the Windows CI literally every day, it makes no sense. Give an option to skip Windows CI for my patches. > How about you use some of that energy to help I stead of taking the easy way? > And I bet you're going to say you don't care about Windows to which I would > say I don't care about the non-threaded runtime and wish we would get rid of > it. But can't always get what you want. I'm not suggesting we release buggy GHCs for Windows or stop Windows support. > And to say we'll actually fix anything before release doesn't align with what > I've seen so far, which had me scrambling last minute to ensure we can release > Windows instead of making releases without it. Are you saying we skip a platform we support when it's buggy? That makes no sense. I don't know when did Windows become a first-tier platform but since it is now we should be releasing Windows binaries similar to Linux and OSX binaries. It's not uncommon to do some testing for every patch, and do more comprehensive testing before releases. We did this many times in other projects in the past and I know some other compilers do this today. > Quite frankly I don't need you to tell me to submit MRs to fix it since that's > what I spent again a lot of time doing. Or maybe you would like to pay my > paycheck so I can spend more than a considerable amount of my free time on it. I wish someone paid me for the time I wasted because I'm only paid by the time I spend productively. I'd be happier waiting for the CI then. 
Ömer [1]: https://gitlab.haskell.org/ghc/ghc/-/jobs/237457 [2]: https://gitlab.haskell.org/osa1/ghc/-/jobs/238236 [3]: https://gitlab.haskell.org/osa1/ghc/-/jobs/237279 Phyx , 17 Oca 2020 Cum, 09:49 tarihinde şunu yazdı: > > Oh I spent a non-insignificant amount of time back in the phabricator days to make the CI stable. Now because people were committing to master directly without going through CI it was always a cat and mouse game and I gave up eventually. > > Now we have rewritten the CI and it's pointing out actual issues in the compiler. And your suggestion is well let's just ignore it. > > How about you use some of that energy to help I stead of taking the easy way? And I bet you're going to say you don't care about Windows to which I would say I don't care about the non-threaded runtime and wish we would get rid of it. But can't always get what you want. > > And to say we'll actually fix anything before release doesn't align with what I've seen so far, which had me scrambling last minute to ensure we can release Windows instead of making releases without it. > > Quite frankly I don't need you to tell me to submit MRs to fix it since that's what I spent again a lot of time doing. Or maybe you would like to pay my paycheck so I can spend more than a considerable amount of my free time on it. > > Kind regards, > Tamar > > > Sent from my Mobile > > On Fri, Jan 17, 2020, 06:17 Ömer Sinan Ağacan wrote: >> >> We release more often than once in 6 months. >> >> We clearly have no idea how to test on Windows. If you know how to do it then >> feel free to submit a MR. Otherwise blocking every MR indefinitely is worse than >> testing Windows less frequently. >> >> Ömer >> >> Phyx , 17 Oca 2020 Cum, 09:10 tarihinde şunu yazdı: >> > >> > Sure because only testing once every 6 months is a very very good idea... >> > >> > Sent from my Mobile >> > >> > On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan wrote: >> >> >> >> Hi Ben, >> >> >> >> Can we please disable Windows CI? 
I've spent more time fighting the CI than >> >> doing useful work this week, it's really frustrating. >> >> >> >> Since we have no idea how to fix it maybe we should test Windows only before a >> >> release, manually (and use bisect in case of regressions). >> >> >> >> Ömer >> >> >> >> Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde şunu yazdı: >> >> > >> >> > Hi all, >> >> > >> >> > Currently Windows CI is a bit flaky due to some unfortunately rather elusive testsuite driver bugs. Progress in resolving this has been a bit slow due to travel over the last week but I will be back home tomorrow and should be able to resolve the issue soon thereafter. >> >> > >> >> > Cheers, >> >> > >> >> > - Ben >> >> > _______________________________________________ >> >> > ghc-devs mailing list >> >> > ghc-devs at haskell.org >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From lonetiger at gmail.com Fri Jan 17 07:38:52 2020 From: lonetiger at gmail.com (Phyx) Date: Fri, 17 Jan 2020 07:38:52 +0000 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: On Fri, Jan 17, 2020 at 7:02 AM Ömer Sinan Ağacan wrote: > > Now we have rewritten the CI and it's pointing out actual issues in the > > compiler. And your suggestion is well let's just ignore it. > > When is the last time Windows CI caught an actual bug? All I see is random > system failures [1, 2, 3]. > [1]: Symbolic link privileges are missing from the CI user, something has gone wrong with the permissions on that slave. There's code in the testsuite to symlink or copy. Should fix the permissions, or add permission detection to the python code or switch to copy. [2]: git checkout error, disk probably full. 
Testsuite runs tend to create a lot of temp files which aren't cleaned up. Over time the disk fills and you get errors such as these. There's a cron job to periodically clean these, but of course that is prone to a race condition. This can be made more reliable by using OS event triggers instead of a cron job. i.e. monitor disk 80% full events and run the cleanup. [3]: It's either trying to execute a non-executable file or something it executed loaded a shared library for a different architecture. Hard to tell which one by just that output. Will need more logs. Now to answer your question, [4] and [5] are issues the CI caught that were quite important. [4] https://gitlab.haskell.org/ghc/ghc/issues/17480 [5] https://gitlab.haskell.org/ghc/ghc/issues/17691 just to name two, I can go on, plugin test failures, which pointed out someone submitted a patch tested only on ELF that broke loading on plugins as non-shared objects, etc. The list is quite long. > > It must be catching *some* bugs, but that's a rare event in my experience. > Sure if you go "ahh it's just Windows that's broken" and don't look at the underlying issues. > Sure, I don't write Windows-specific code (e.g. IO manager, or library > code), > but then why am I fighting the Windows CI literally every day, it makes no > sense. Give an option to skip Windows CI for my patches. > > > How about you use some of that energy to help instead of taking the easy > way? > > And I bet you're going to say you don't care about Windows to which I > would > say I don't care about the non-threaded runtime and wish we would get > rid of > it. But can't always get what you want. > > I'm not suggesting we release buggy GHCs for Windows or stop Windows > support. > I'm sorry, how is disabling the Windows CI not exactly that? Disabling the CI just means you test it even less. Testing it even less means that by the time you get to testing it the issues are too many to fix.
Over time you just stop trying and stop releasing it. So sorry, how *exactly* is your suggestion not exactly that. > > > And to say we'll actually fix anything before release doesn't align with > what > > I've seen so far, which had me scrambling last minute to ensure we can > release > > Windows instead of making releases without it. > > Are you saying we skip a platform we support when it's buggy? That makes no > sense. I don't know when did Windows become a first-tier platform but > since it > is now we should be releasing Windows binaries similar to Linux and OSX > binaries. > It's *always* been a tier one platform as far as I can tell. It's certainly been for the past 6 years. > It's not uncommon to do some testing for every patch, and do more > comprehensive > testing before releases. We did this many times in other projects in the > past > and I know some other compilers do this today. > Yes, but a project that doesn't test a tier one platform during development, which is what you want to do, means it's not tier one. Which means you won't fix it for release. > > > Quite frankly I don't need you to tell me to submit MRs to fix it since > that's > > what I spent again a lot of time doing. Or maybe you would like to pay my > > paycheck so I can spend more than a considerable amount of my free time > on it. > > I wish someone paid me for the time I wasted because I'm only paid by the > time I > spend productively. I'd be happier waiting for the CI then. > Yeah, not waiting for CI is how we got in this mess in the first place. Tamar. > Ömer > > [1]: https://gitlab.haskell.org/ghc/ghc/-/jobs/237457 > [2]: https://gitlab.haskell.org/osa1/ghc/-/jobs/238236 > [3]: https://gitlab.haskell.org/osa1/ghc/-/jobs/237279 > > Phyx , 17 Oca 2020 Cum, 09:49 tarihinde şunu yazdı: > > > > Oh I spent a non-insignificant amount of time back in the phabricator
Now because people were committing to master > directly without going through CI it was always a cat and mouse game and I > gave up eventually. > > > > Now we have rewritten the CI and it's pointing out actual issues in the > compiler. And your suggestion is well let's just ignore it. > > > > How about you use some of that energy to help I stead of taking the easy > way? And I bet you're going to say you don't care about Windows to which I > would say I don't care about the non-threaded runtime and wish we would get > rid of it. But can't always get what you want. > > > > And to say we'll actually fix anything before release doesn't align with > what I've seen so far, which had me scrambling last minute to ensure we can > release Windows instead of making releases without it. > > > > Quite frankly I don't need you to tell me to submit MRs to fix it since > that's what I spent again a lot of time doing. Or maybe you would like to > pay my paycheck so I can spend more than a considerable amount of my free > time on it. > > > > Kind regards, > > Tamar > > > > > > Sent from my Mobile > > > > On Fri, Jan 17, 2020, 06:17 Ömer Sinan Ağacan > wrote: > >> > >> We release more often than once in 6 months. > >> > >> We clearly have no idea how to test on Windows. If you know how to do > it then > >> feel free to submit a MR. Otherwise blocking every MR indefinitely is > worse than > >> testing Windows less frequently. > >> > >> Ömer > >> > >> Phyx , 17 Oca 2020 Cum, 09:10 tarihinde şunu > yazdı: > >> > > >> > Sure because only testing once every 6 months is a very very good > idea... > >> > > >> > Sent from my Mobile > >> > > >> > On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan > wrote: > >> >> > >> >> Hi Ben, > >> >> > >> >> Can we please disable Windows CI? I've spent more time fighting the > CI than > >> >> doing useful work this week, it's really frustrating. 
> >> >> > >> >> Since we have no idea how to fix it maybe we should test Windows > only before a > >> >> release, manually (and use bisect in case of regressions). > >> >> > >> >> Ömer > >> >> > >> >> Ben Gamari , 14 Oca 2020 Sal, 14:30 tarihinde > şunu yazdı: > >> >> > > >> >> > Hi all, > >> >> > > >> >> > Currently Windows CI is a bit flaky due to some unfortunately > rather elusive testsuite driver bugs. Progress in resolving this has been a > bit slow due to travel over the last week but I will be back home tomorrow > and should be able to resolve the issue soon thereafter. > >> >> > > >> >> > Cheers, > >> >> > > >> >> > - Ben > >> >> > _______________________________________________ > >> >> > ghc-devs mailing list > >> >> > ghc-devs at haskell.org > >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> >> _______________________________________________ > >> >> ghc-devs mailing list > >> >> ghc-devs at haskell.org > >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trupill at gmail.com Fri Jan 17 08:25:43 2020 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Fri, 17 Jan 2020 09:25:43 +0100 Subject: Find constraints over type In-Reply-To: References: Message-ID: My goal is to add type information on hover within `ghcide`. Right now, when you select a variable, we give back the type as reported in the corresponding Var. However, when this type involved a type variable, it misses a lot of important information about the typing context in which we are working. So the goal is to report back some of this typing context. Going back to my original example: f :: Eq a => [a] -> Bool f xs = xs == [] It would be great if by hovering over the 'xs', one would get '[a] where Eq a', or some other representation of the known constraints. 
Since this is intended to be a help for the programmer, it doesn't really matter whether we get "too many" constraints (for example, if we had "Ord a" it's OK to get "Eq a" too, since that's interesting constraint information). Right now I am working with TypecheckModules most of the time. Regards, Alejandro El jue., 16 ene. 2020 a las 19:47, Simon Peyton Jones (< simonpj at microsoft.com>) escribió: > There is definitely no pure way to get from ‘a’ to its constraints. > > - It is far from clear what “its constraints” are. Is (C a b) such a > constraint? C [a] b? What about superclasses? > - Constraints vary depending on where you are. GADT matches can > bring into scope extra constraints on existing type variables. > > > > So as Ben says, to give a sensible response you’ll need to explain more > about your goal > > > > Simon > > > > *From:* ghc-devs *On Behalf Of *Alejandro > Serrano Mena > *Sent:* 16 January 2020 16:19 > *To:* GHC developers > *Subject:* Find constraints over type > > > > Dear GHC devs, > > I am trying to figure out a way to obtain the constraints that hold over a > type. Let me give you an example: suppose that I write the following > function: > > > > f :: Eq a => [a] -> Bool > > f xs = xs == [] > > > > If I ask for the type of the Var ' xs', I get back '[a]'. This is correct, > but I am missing the crucial information that '[a]' must be Eq. > > > > Is there an easy way to get it? It seems that 'varType' doesn't give me > enough information. > > > > Regards, > > Alejandro > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 17 08:46:12 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 17 Jan 2020 08:46:12 +0000 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: Both Tamar and Omer are right. * Not doing CI on Windows is bad. 
It means that bugs get introduced, and not discovered until later. This is Tamar’s point, and it is valid. * Holding up MRs because of a failure in Windows CI that is unrelated to the MR is also bad. It is a frustrating waste of time, and discourages all authors. (In contrast, holding up an MR because it introduces a bug on Windows is fine, indeed desirable.) This is Omer’s point, and it is valid. The obvious solution is: let’s fix Windows CI, so that it doesn’t fail except when the MR genuinely introduces a bug. How hard would it be to do that? Do we even know what the problem is? Simon From: ghc-devs On Behalf Of Phyx Sent: 17 January 2020 06:49 To: Ömer Sinan Ağacan Cc: ghc-devs Subject: Re: Windows testsuite failures Oh I spent a non-insignificant amount of time back in the phabricator days to make the CI stable. Now because people were committing to master directly without going through CI it was always a cat and mouse game and I gave up eventually. Now we have rewritten the CI and it's pointing out actual issues in the compiler. And your suggestion is well let's just ignore it. How about you use some of that energy to help instead of taking the easy way? And I bet you're going to say you don't care about Windows to which I would say I don't care about the non-threaded runtime and wish we would get rid of it. But can't always get what you want. And to say we'll actually fix anything before release doesn't align with what I've seen so far, which had me scrambling last minute to ensure we can release Windows instead of making releases without it. Quite frankly I don't need you to tell me to submit MRs to fix it since that's what I spent again a lot of time doing. Or maybe you would like to pay my paycheck so I can spend more than a considerable amount of my free time on it. Kind regards, Tamar Sent from my Mobile On Fri, Jan 17, 2020, 06:17 Ömer Sinan Ağacan > wrote: We release more often than once in 6 months. We clearly have no idea how to test on Windows.
If you know how to do it then feel free to submit a MR. Otherwise blocking every MR indefinitely is worse than testing Windows less frequently. Ömer Phyx >, 17 Oca 2020 Cum, 09:10 tarihinde şunu yazdı: > > Sure because only testing once every 6 months is a very very good idea... > > Sent from my Mobile > > On Fri, Jan 17, 2020, 06:03 Ömer Sinan Ağacan > wrote: >> >> Hi Ben, >> >> Can we please disable Windows CI? I've spent more time fighting the CI than >> doing useful work this week, it's really frustrating. >> >> Since we have no idea how to fix it maybe we should test Windows only before a >> release, manually (and use bisect in case of regressions). >> >> Ömer >> >> Ben Gamari >, 14 Oca 2020 Sal, 14:30 tarihinde şunu yazdı: >> > >> > Hi all, >> > >> > Currently Windows CI is a bit flaky due to some unfortunately rather elusive testsuite driver bugs. Progress in resolving this has been a bit slow due to travel over the last week but I will be back home tomorrow and should be able to resolve the issue soon thereafter. >> > >> > Cheers, >> > >> > - Ben >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 17 08:53:43 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 17 Jan 2020 08:53:43 +0000 Subject: Find constraints over type In-Reply-To: References: Message-ID: Starting from any point in the typechecked syntax tree, you can look outwards to find binding sites for enclosing constraints, and then use whatever method you like to decide if they are relevant. 
Binding sites are: * ConPatOut; binds dictionaries from GADT matches * AbsBinds: abs_ev_vars binds dictionaries from let-bindings Simon From: Alejandro Serrano Mena Sent: 17 January 2020 08:26 To: Simon Peyton Jones Cc: GHC developers Subject: Re: Find constraints over type My goal is to add type information on hover within `ghcide`. Right now, when you select a variable, we give back the type as reported in the corresponding Var. However, when this type involves a type variable, it misses a lot of important information about the typing context in which we are working. So the goal is to report back some of this typing context. Going back to my original example: f :: Eq a => [a] -> Bool f xs = xs == [] It would be great if by hovering over the 'xs', one would get '[a] where Eq a', or some other representation of the known constraints. Since this is intended to be a help for the programmer, it doesn't really matter whether we get "too many" constraints (for example, if we had "Ord a" it's OK to get "Eq a" too, since that's interesting constraint information). Right now I am working with TypecheckModules most of the time. Regards, Alejandro El jue., 16 ene. 2020 a las 19:47, Simon Peyton Jones (>) escribió: There is definitely no pure way to get from ‘a’ to its constraints. * It is far from clear what “its constraints” are. Is (C a b) such a constraint? C [a] b? What about superclasses? * Constraints vary depending on where you are. GADT matches can bring into scope extra constraints on existing type variables. So as Ben says, to give a sensible response you’ll need to explain more about your goal Simon From: ghc-devs > On Behalf Of Alejandro Serrano Mena Sent: 16 January 2020 16:19 To: GHC developers > Subject: Find constraints over type Dear GHC devs, I am trying to figure out a way to obtain the constraints that hold over a type.
Let me give you an example: suppose that I write the following function: f :: Eq a => [a] -> Bool f xs = xs == [] If I ask for the type of the Var ' xs', I get back '[a]'. This is correct, but I am missing the crucial information that '[a]' must be Eq. Is there an easy way to get it? It seems that 'varType' doesn't give me enough information. Regards, Alejandro -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Fri Jan 17 10:30:46 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Fri, 17 Jan 2020 10:30:46 +0000 Subject: submodule instructions? Message-ID: Hi devs, Are there up-to-date instructions on how to deal with submodules in GHC? https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules does give nice information. It refers to https://gitlab.haskell.org/ghc/ghc/wikis/repositories to learn about specific repositories. But is that second page correct? It refers to e.g. GitHub as the upstream for Haddock. Will our new GitLab-based CI pull from GitHub's Haddock? And what if a contributor doesn't have commit access? Maybe it's all there, but I don't quite see how a contributor can ensure that CI has access to patches on submodules. Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhpetersen at gmail.com Fri Jan 17 12:24:09 2020 From: juhpetersen at gmail.com (Jens Petersen) Date: Fri, 17 Jan 2020 19:24:09 +0700 Subject: [ANNOUNCE] GHC 8.8.2 is now available In-Reply-To: <87r1zz1f09.fsf@smart-cactus.org> References: <87r1zz1f09.fsf@smart-cactus.org> Message-ID: On Fri, 17 Jan 2020, 00:35 Ben Gamari, wrote: > The GHC team is proud to announce the release of GHC 8.8.2. > Congrats I have built it already for the Fedora ghc:8.8 module stream: it should go into updates-testing-modular soon for F30, F31, and Rawhide. Thanks, Jens -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at well-typed.com Fri Jan 17 21:15:08 2020 From: ben at well-typed.com (Ben Gamari) Date: Fri, 17 Jan 2020 16:15:08 -0500 Subject: submodule instructions? In-Reply-To: References: Message-ID: <87blr123an.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > Are there up-to-date instructions on how to deal with submodules in GHC? > > https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules > > does give nice information. It refers to > https://gitlab.haskell.org/ghc/ghc/wikis/repositories > to learn about > specific repositories. But is that second page correct? It refers to > e.g. GitHub as the upstream for Haddock. Will our new GitLab-based CI > pull from GitHub's Haddock? And what if a contributor doesn't have > commit access? > Indeed GitHub is still the upstream for Haddock (although I think we should talk to the Haddock maintainers about this). > Maybe it's all there, but I don't quite see how a contributor can > ensure that CI has access to patches on submodules. I have added a bit of language to [submodules] explaining how to accomplish this. Do let me know if this is still unclear. Cheers, - Ben [submodules]: https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Jan 17 21:26:26 2020 From: ben at well-typed.com (Ben Gamari) Date: Fri, 17 Jan 2020 16:26:26 -0500 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: <877e1p22rm.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: > Hi Ben, > > Can we please disable Windows CI? I've spent more time fighting the CI than > doing useful work this week, it's really frustrating. > Yes, this recent spate of issues indeed took too long to solve.
Unfortunately this particular issue took quite a while to isolate since I had trouble reproducing it as it depended upon which CI builder the job ran on. However, I eventually found and pushed a patch to fix the root cause this morning. > Since we have no idea how to fix it maybe we should test Windows only before a > release, manually (and use bisect in case of regressions). > As pointed out by Tamar, this really is not a viable option. In truth, the pain we are experiencing now is precisely because we neglected Windows support for so long. I am now making a concerted effort to fix this and, while it has been painful, we are gradually approaching a half-way decent story for Windows. x86-64 is, as of this morning, believed to be mostly stable; I'm now working on i386. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Jan 17 21:32:44 2020 From: ben at well-typed.com (Ben Gamari) Date: Fri, 17 Jan 2020 16:32:44 -0500 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: <874kwt22h2.fsf@smart-cactus.org> Ömer Sinan Ağacan writes: >> Now we have rewritten the CI and it's pointing out actual issues in the >> compiler. And your suggestion is well let's just ignore it. > > When is the last time Windows CI caught an actual bug? All I see is random > system failures [1, 2, 3]. > > It must be catching *some* bugs, but that's a rare event in my experience. > It's unfortunately not nearly as rare as you would hope. However, it is likely that these cases aren't widely seen as I have been working on marking broken tests as expect_broken over the last few months. Only recently (for roughly a month now, IIRC) has Windows been a mandatory-green platform and it took a significant amount of effort and several false-starts to get to this point. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Jan 17 21:47:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Fri, 17 Jan 2020 16:47:49 -0500 Subject: Windows testsuite failures In-Reply-To: References: <1EB60CA7-A986-409C-8870-E44803975EC7@smart-cactus.org> Message-ID: <871rrx21rz.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Both Tamar and Omer are right. > > * Not doing CI on Windows is bad. It means that bugs get introduced, > and not discovered until later. This is Tamar’s point, and it is > valid. > * Holding up MRs because of a failure in Windows CI that is unrelated > to the MR is also bad. It is a frustrating waste of time, and > discourages all authors. (In contrast, holding up an MR because it > introduces a bug on Windows is fine, indeed desirable.) This is > Omer’s point, and it is valid. > > The obvious solution is: let’s fix Windows CI, so that it doesn’t fail > except when the MR genuinely introduces a bug. > > How hard would it be to do that? Do we even know what the problem is? > This latest issue was quite tricky since the root cause was in an unexpected place (it seems that some of the Windows GitLab runners somehow no longer had symlink permission, perhaps due to an operating system update; I had expected the problem to be in the testsuite driver due to previous issues in that area [1]). Given the relative scarcity of Windows CI capacity and the difficulty of hitting the issue to begin with, it took quite a while to realize the problem. However, this morning I identified the issue and, as a workaround, temporarily disabled forced usage of symlinks on Windows CI. I have also opened #17706, which should allow us to use symlinks without fear of this potential breakage.
Cheers, - Ben [1] https://gitlab.haskell.org/ghc/ghc/commit/e35fe8d58f18bd179efdc848c617dc9eddf4478b -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Fri Jan 17 22:38:14 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 17 Jan 2020 17:38:14 -0500 Subject: Fixing type synonyms to Uniq(D)FM newtypes In-Reply-To: <671ED356-CD95-49E6-BFA7-CEBDB50879A8@richarde.dev> References: <87tv4y2w63.fsf@smart-cactus.org> <87r2022utw.fsf@smart-cactus.org> <671ED356-CD95-49E6-BFA7-CEBDB50879A8@richarde.dev> Message-ID: <87o8v1zp2l.fsf@smart-cactus.org> Richard Eisenberg writes: > One advantage of the phantom-parameter approach is that it allows for nice polymorphism. > >> lookupEnv :: Uniquable dom => UniqFM dom rng -> dom -> Maybe rng > > Now, we don't need lookupVarEnv separately from lookupNameEnv, but we > get the type-checking for free. > This is true but some consider the fact that the function name captures the environment type to be a good thing. I don't have a strong opinion one way or the other on this. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From vamchale at gmail.com Mon Jan 20 01:49:02 2020 From: vamchale at gmail.com (Vanessa McHale) Date: Sun, 19 Jan 2020 19:49:02 -0600 Subject: Using lzip instead of xz for distributed tarballs Message-ID: <3a28d74e-57d5-4e84-fedb-bf26f8b1f67d@gmail.com> Hello all, GHC is distributed as .tar.xz tarballs; I assume this is because it produces small tarballs. However, xz is ill-suited for archiving due to its lack of error recovery. Moreover, lzip produces smaller tarballs with GHC (I tested with ghc-8.8.2-x86_64-deb9-linux.tar) and decompression takes about the same amount of time.
There's more information on the project page: https://www.nongnu.org/lzip/lzip.html. Cheers, Vanessa McHale -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 659 bytes Desc: OpenPGP digital signature URL: From lonetiger at gmail.com Mon Jan 20 09:29:59 2020 From: lonetiger at gmail.com (Phyx) Date: Mon, 20 Jan 2020 09:29:59 +0000 Subject: Github mirror Message-ID: Hi Ben, It looks like the github mirror for ghc hasn't updated in a month. Kind regards, Tamar Sent from my Mobile -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jan 20 09:39:24 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 20 Jan 2020 09:39:24 +0000 Subject: submodule instructions? In-Reply-To: <87blr123an.fsf@smart-cactus.org> References: <87blr123an.fsf@smart-cactus.org> Message-ID: <1D1A9DD0-BD0B-4B51-8EDB-D61FAE39D089@richarde.dev> > On Jan 17, 2020, at 9:15 PM, Ben Gamari wrote: > > I have added a bit of language to [submodules] explaining how to > accomplish this. Very helpful -- thanks. A few questions (preferably answered directly on the wiki page): - Who is the `ghc` group and how does one join this group (is there a link to the current membership)? Or: is there a way to get CI working for those outside this group? - I don't see haddock listed under ghc/packages. I do see in the example a way to access haddock just at ghc/. Which one is correct? Are there submodules other than haddock that are not listed in ghc/packages? Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Mon Jan 20 09:45:12 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Mon, 20 Jan 2020 10:45:12 +0100 Subject: is Unlifted Type == Primitive Type? Message-ID: Hello, According to GHC Wiki it seems that only primitive types can be unlifted. Is this true in general? (i.e. 
no user type can be unlifted) [image: image.png] Does the Stg to Cmm codegen support compilation for a variable of user defined ADT as unlifted? i.e. some analysis proved that it is always a constructor and never a thunk. Thanks, Csaba -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 42890 bytes Desc: not available URL: From rae at richarde.dev Mon Jan 20 09:58:44 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 20 Jan 2020 09:58:44 +0000 Subject: is Unlifted Type == Primitive Type? In-Reply-To: References: Message-ID: The recent addition of -XUnliftedNewtypes means that user-defined newtypes (https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0098-unlifted-newtypes.rst ) can indeed be unlifted and unboxed. There is also a proposal for more general unlifted data (https://github.com/ghc-proposals/ghc-proposals/pull/265 ). If the wiki is out of date, do you think you could update it? Thanks! Richard > On Jan 20, 2020, at 9:45 AM, Csaba Hruska wrote: > > Hello, > > According to GHC Wiki it seems that only primitive types can be unlifted. > Is this true in general? (i.e. no user type can be unlifted) > > Does the Stg to Cmm codegen support compilation for a variable of user defined ADT as unlifted? > i.e. some analysis proved that it is always a constructor and never a thunk. > > Thanks, > Csaba > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Mon Jan 20 10:13:25 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Mon, 20 Jan 2020 11:13:25 +0100 Subject: is Unlifted Type == Primitive Type? 
In-Reply-To: References: Message-ID: I'm also interested if Boxed Unlifted non Primitive types are supported by the codegen? Sorry, but I'm not confident enough in the topic to update the wiki. On Mon, Jan 20, 2020 at 10:58 AM Richard Eisenberg wrote: > The recent addition of -XUnliftedNewtypes means that user-defined newtypes > ( > https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0098-unlifted-newtypes.rst) > can indeed be unlifted and unboxed. There is also a proposal for more > general unlifted data ( > https://github.com/ghc-proposals/ghc-proposals/pull/265). > > If the wiki is out of date, do you think you could update it? > > Thanks! > Richard > > On Jan 20, 2020, at 9:45 AM, Csaba Hruska wrote: > > Hello, > > According to GHC Wiki > > it seems that only primitive types can be unlifted. > Is this true in general? (i.e. no user type can be unlifted) > > Does the Stg to Cmm codegen support compilation for a variable of user > defined ADT as unlifted? > i.e. some analysis proved that it is always a constructor and never a > thunk. > > Thanks, > Csaba > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgraf1337 at gmail.com Mon Jan 20 10:18:58 2020 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Mon, 20 Jan 2020 11:18:58 +0100 Subject: is Unlifted Type == Primitive Type? In-Reply-To: References: Message-ID: Hi Csaba, Yes, boxed unlifted ADTs are supported by code-gen, or at least the fix for codegen to deal with it is [rather simple]( https://gitlab.haskell.org/ghc/ghc/commit/fc4e2a03ebb40e2268ec0deb9833ec82bd2d7bee ). Hope that helps. Sebastian Am Mo., 20. Jan. 2020 um 11:13 Uhr schrieb Csaba Hruska < csaba.hruska at gmail.com>: > I'm also interested if Boxed Unlifted non Primitive types are supported by > the codegen? 
> Sorry, but I'm not confident enough in the topic to update the wiki. > > > On Mon, Jan 20, 2020 at 10:58 AM Richard Eisenberg > wrote: > >> The recent addition of -XUnliftedNewtypes means that user-defined >> newtypes ( >> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0098-unlifted-newtypes.rst) >> can indeed be unlifted and unboxed. There is also a proposal for more >> general unlifted data ( >> https://github.com/ghc-proposals/ghc-proposals/pull/265). >> >> If the wiki is out of date, do you think you could update it? >> >> Thanks! >> Richard >> >> On Jan 20, 2020, at 9:45 AM, Csaba Hruska wrote: >> >> Hello, >> >> According to GHC Wiki >> >> it seems that only primitive types can be unlifted. >> Is this true in general? (i.e. no user type can be unlifted) >> >> Does the Stg to Cmm codegen support compilation for a variable of user >> defined ADT as unlifted? >> i.e. some analysis proved that it is always a constructor and never a >> thunk. >> >> Thanks, >> Csaba >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davide at well-typed.com Mon Jan 20 10:37:15 2020 From: davide at well-typed.com (David Eichmann) Date: Mon, 20 Jan 2020 10:37:15 +0000 Subject: GHC perf In-Reply-To: References: Message-ID: Hi Simon, > > * There are two things going on: > > 1. CI perf measurements > 2. Local machine perf measurements > > I think that they are somehow handled differently (why?) but they are > all muddled up on the wiki page. > They are handled differently because we do not want to compare local metrics with CI metrics.
The exception is when local metrics don't exist, then we fall back to CI metrics as a baseline (see How baseline metrics are calculated ). > * My goal is this: > > o Start with a master commit, say from Dec 2019. > o Implement some change, on a branch. > o sh validate --legacy (or something else if you like) > o Look at perf regressions. > Getting to the *raw data* should be easy: 1. Check out the commit. 2. Use `git status` to double check git sees a clean working tree. 3. Run the performance tests. 4. Check out your branch. 5. Use `git status` to double check git sees a clean working tree (else commit any changes). 6. Run the performance tests. 7. Compare metrics (filtering for `local` metrics and outputting a chart): `python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local` see `python3 testsuite/driver/perf_notes.py --help` for more filtering options. This doesn't detect regressions automatically, it only shows you the raw data. Ideally we'd add an option to the testrunner to let you specify a baseline commit manually. I suspect that would be close to what you're looking for. > > * I believe I have first to utter the incantation > > $ git fetch https://gitlab.haskell.org/ghc/ghc-performance-notes.git > refs/notes/perf:refs/notes/ci/perf > Yes, this fetches the latest CI metrics into your git notes. > > * But then: > o How do I ensure that the baseline perf numbers I get relate to > the master commit I started from, back in Dec 2019?  I don’t > want numbers from Jan 2020. > see above. > > o If I rebase my branch on top of HEAD, say, how do I update the > perf baseline numbers to be for HEAD > The test runner should use HEAD's metrics automatically (see How baseline metrics are calculated ), though you will need to fetch CI metrics or run the perf tests locally on HEAD to get the relevant metrics. > > o Generally, how can I tell the commit to which the baseline > numbers relate?
> The test runner will output (per test) which baseline commit is used e.g. "... from local baseline @ HEAD~2" says the baseline was a local run from 2 commits ago. > > * Also, in my tree I have a series of incremental changes; I want to > see if any of them have perf regressions.    How do I do that? > You can run the perf tests on each commit *in commit order*, and the previous commit will always be used as the baseline. You can also then chart the results: `python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local ..` Sorry if this is a bit suboptimal, but I hope that helps - David E -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Mon Jan 20 10:40:41 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Mon, 20 Jan 2020 11:40:41 +0100 Subject: Unboxed Tuple in a single STG variable Message-ID: Hello, Can an STG variable (Id) store a whole unboxed tuple value? Or is it required to decompose a returned unboxed value immediately by using an StgCase expression? If so, then is it correct that the STG variables can store values only from the following types: Addr, Float, Double, Int, Word, Ptr (for boxed values)? (and no multi-value values) Thanks, Csaba -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Mon Jan 20 11:10:00 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Mon, 20 Jan 2020 14:10:00 +0300 Subject: Unboxed Tuple in a single STG variable In-Reply-To: References: Message-ID: Binders in STG can be bound to multi-values (stuff represented by multiple values, e.g. unboxed tuples with more than one non-void argument) initially but before codegen those need to be eliminated.
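In source terms the multi-value situation looks like this (a small illustrative program, not actual STG or compiler output):

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
module Main where

import GHC.Exts

-- addMul returns a multi-value: two Int#s at once, with no heap box.
addMul :: Int# -> Int# -> (# Int#, Int# #)
addMul x y = (# x +# y, x *# y #)

main :: IO ()
main =
  case addMul 3# 4# of
    -- Conceptually the result here is a single binder of type
    -- (# Int#, Int# #); the pass described below rewrites it away so
    -- that only the one-value binders s and p reach code generation.
    (# s, p #) -> print (I# s, I# p)  -- prints (7,12)
```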
The pass that does this is "unarise", see GHC.Stg.Unarise module which has lots of notes explaining everything in detail. So CoreToStg generates STG with multi-value binders. Unarise eliminates those binders, making sure every binder holds one value. Ömer Csaba Hruska , 20 Oca 2020 Pzt, 13:41 tarihinde şunu yazdı: > > Hello, > > Can an STG variable (Id) store a whole unboxed tuple value? > Or is it required to decompose a returned unboxed value immediately by using an StgCase expression? > If so, then is it correct that the STG variables can store values only from the following types: Addr, Float, Double, Int, Word, Ptr (for boxed values)? (and no multi-value values) > > Thanks, > Csaba > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Mon Jan 20 15:49:08 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 20 Jan 2020 10:49:08 -0500 Subject: GitLab restart Message-ID: <87ftgayvpt.fsf@smart-cactus.org> Hi everyone, In around 30 minutes I'll be migrating some data on gitlab.haskell.org to a larger storage volume. This will take around 10 minutes, during which time GitLab will be down. I'll send another email before I begin the migration. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Jan 20 16:21:24 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 20 Jan 2020 11:21:24 -0500 Subject: GitLab restart In-Reply-To: <87ftgayvpt.fsf@smart-cactus.org> References: <87ftgayvpt.fsf@smart-cactus.org> Message-ID: <875zh6yu7x.fsf@smart-cactus.org> Ben Gamari writes: > Hi everyone, > > In around 30 minutes I'll be migrating some data on gitlab.haskell.org > to a larger storage volume. This will take around 10 minutes, during > which time GitLab will be down.
I'll send another email before I begin the > migration. > Starting migration momentarily. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Jan 20 16:26:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 20 Jan 2020 11:26:49 -0500 Subject: GitLab restart In-Reply-To: <875zh6yu7x.fsf@smart-cactus.org> References: <87ftgayvpt.fsf@smart-cactus.org> <875zh6yu7x.fsf@smart-cactus.org> Message-ID: <8736caytyw.fsf@smart-cactus.org> Ben Gamari writes: > Ben Gamari writes: > >> Hi everyone, >> >> In around 30 minutes I'll be migrating some data on gitlab.haskell.org >> to a larger storage volume. This will take around 10 minutes, during >> which time GitLab will be down. I'll send another email before I begin the >> migration. >> > Starting migration momentarily. > The migration is now complete. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Jan 20 16:34:49 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 20 Jan 2020 11:34:49 -0500 Subject: submodule instructions? In-Reply-To: <1D1A9DD0-BD0B-4B51-8EDB-D61FAE39D089@richarde.dev> References: <87blr123an.fsf@smart-cactus.org> <1D1A9DD0-BD0B-4B51-8EDB-D61FAE39D089@richarde.dev> Message-ID: <87zheixf15.fsf@smart-cactus.org> Richard Eisenberg writes: >> On Jan 17, 2020, at 9:15 PM, Ben Gamari wrote: >> >> I have added a bit of language to [submodules] explaining how to >> accomplish this. > > Very helpful -- thanks. A few questions (preferably answered directly on the wiki page): > > - Who is the `ghc` group and how does one join this group (is there a > link to the current membership)? Or: is there a way to get CI working > for those outside this group?
> I've clarified this on the wiki but to summarize: The "ghc group" is found here [1] and pretty much anyone who is interested in contributing can request to be a member (with "developer" role). This gives you the ability to push wip/ branches to any project in the ghc/ namespace. [1] https://gitlab.haskell.org/ghc > - I don't see haddock listed under ghc/packages. I do see in the > example a way to access haddock just at ghc/. Which one is correct? > Are there submodules other than haddock that are not listed in > ghc/packages? > For historical reasons haddock is located under ghc/haddock, not ghc/packages/haddock. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Mon Jan 20 22:15:00 2020 From: ben at well-typed.com (Ben Gamari) Date: Mon, 20 Jan 2020 17:15:00 -0500 Subject: Using lzip instead of xz for distributed tarballs In-Reply-To: <3a28d74e-57d5-4e84-fedb-bf26f8b1f67d@gmail.com> References: <3a28d74e-57d5-4e84-fedb-bf26f8b1f67d@gmail.com> Message-ID: <87wo9lyduo.fsf@smart-cactus.org> Vanessa McHale writes: > Hello all, > > > GHC is distributed as .tar.xz tarballs; I assume this is because it > produces small tarballs. However, xz is ill-suited for archiving due to > its lack of error recovery. Moreover, lzip produces smaller tarballs > with GHC (I tested with ghc-8.8.2-x86_64-deb9-linux.tar) and > decompression takes about the same amount of time. > Indeed I recall seeing the "Why xz is not suitable for archival purposes" blog post quite a while ago and considered moving away from xz at the time but wasn't entirely convinced that the benefits would justify the churn, especially since xz tends to be pretty ubiquitous at this point while lzip is a fair bit less so. 
I'd be happy to hear further reasons why we should switch but I'll admit that I still don't quite see what switching would buy us; we do have a few backups spread across the planet, so the probability of us having to rely on the compressor for error recovery is pretty small. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Tue Jan 21 10:28:53 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Tue, 21 Jan 2020 10:28:53 +0000 Subject: submodule instructions? In-Reply-To: <87zheixf15.fsf@smart-cactus.org> References: <87blr123an.fsf@smart-cactus.org> <1D1A9DD0-BD0B-4B51-8EDB-D61FAE39D089@richarde.dev> <87zheixf15.fsf@smart-cactus.org> Message-ID: <9ACF4980-6BA1-4BCE-B2B5-759CDF2CFDCB@richarde.dev> Great -- thanks. I think it's all clarified now. Richard > On Jan 20, 2020, at 4:34 PM, Ben Gamari wrote: > > Richard Eisenberg writes: > >>> On Jan 17, 2020, at 9:15 PM, Ben Gamari wrote: >>> >>> I have added a bit of language to [submodules] explaining how to >>> accomplish this. >> >> Very helpful -- thanks. A few questions (preferably answered directly on the wiki page): >> >> - Who is the `ghc` group and how does one join this group (is there a >> link to the current membership)? Or: is there a way to get CI working >> for those outside this group? >> > I've clarified this on the wiki but to summarize: The "ghc group" is > found here [1] and pretty much anyone who is interested in contributing > can request to be a member (with "developer" role). This gives you the > ability to push wip/ branches to any project in the ghc/ namespace. > > [1] https://gitlab.haskell.org/ghc > >> - I don't see haddock listed under ghc/packages. I do see in the >> example a way to access haddock just at ghc/. Which one is correct? >> Are there submodules other than haddock that are not listed in >> ghc/packages?
>> > For historical reasons haddock is located under ghc/haddock, not > ghc/packages/haddock. From carter.schonwald at gmail.com Tue Jan 21 16:25:28 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 21 Jan 2020 11:25:28 -0500 Subject: lots of spam repos on gitlab Message-ID: theres a lot of spam repos on the gitlab :( 11:07 AM https://gitlab.haskell.org/wandaibson/oberton-paris https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=keto&sort=latest_activity_desc lots of keto 11:10 AM two CBD 11:10 AM https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=CBD&sort=latest_activity_desc 11:11 AM https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pep&sort=latest_activity_desc 11:13 AM https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pill&sort=latest_activity_desc 11:17 AM https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=vitality&sort=latest_activity_desc 11:17 AM alp: ? 11:23 AM ⇐ zeta_0 quit (~zeta at h78.48.155.207.dynamic.ip.windstream.net) Quit: doing a rebuild switch 11:24 AM https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=suppl&sort=latest_activity_desc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ben at smart-cactus.org Tue Jan 21 16:40:14 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 21 Jan 2020 11:40:14 -0500 Subject: lots of spam repos on gitlab In-Reply-To: References: Message-ID: On January 21, 2020 11:25:28 AM EST, Carter Schonwald wrote: >theres a lot of spam repos on the gitlab :( > >11:07 AM > https://gitlab.haskell.org/wandaibson/oberton-paris > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=keto&sort=latest_activity_desc >lots >of keto >11:10 AM > two CBD >11:10 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=CBD&sort=latest_activity_desc >11:11 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pep&sort=latest_activity_desc >11:13 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pill&sort=latest_activity_desc >11:17 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=vitality&sort=latest_activity_desc >11:17 AM > alp: ? >11:23 AM ⇐ zeta_0 quit (~zeta at h78.48.155.207.dynamic.ip.windstream.net) >Quit: doing a rebuild switch >11:24 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=suppl&sort=latest_activity_desc Sigh. Indeed our spam detection service could be better. I'm working on handling these. From vamchale at gmail.com Tue Jan 21 16:44:15 2020 From: vamchale at gmail.com (Vanessa McHale) Date: Tue, 21 Jan 2020 10:44:15 -0600 Subject: Using lzip instead of xz for distributed tarballs In-Reply-To: <87wo9lyduo.fsf@smart-cactus.org> References: <3a28d74e-57d5-4e84-fedb-bf26f8b1f67d@gmail.com> <87wo9lyduo.fsf@smart-cactus.org> Message-ID: Would it be plausible to distribute both? That way users would not have to install lzip. Cheers, Vanessa McHale > On Jan 20, 2020, at 4:15 PM, Ben Gamari wrote: > > Vanessa McHale writes: > >> Hello all, >> >> >> GHC is distributed as .tar.xz tarballs; I assume this is because it >> produces small tarballs. 
However, xz is ill-suited for archiving due to >> its lack of error recovery. Moreover, lzip produces smaller tarballs >> with GHC (I tested with ghc-8.8.2-x86_64-deb9-linux.tar) and >> decompression takes about the same amount of time. >> > Indeed I recall seeing the "Why xz is not suitable for archival > purposes" blog post quite a while ago and considered moving away from xz > at the time but wasn't entirely convinced that the benefits would > justify the churn, especially since xz tends to be pretty ubiquitous at > this point while lzip is a fair bit less so. > > I'd be happy to hear further reasons why we should switch but I'll admit > that I still don't quite see what switching would buy us; we do have > a few backups spread across the planet so the probability of us having > to rely on the compressor for error recovery pretty small. > > Cheers, > > - Ben From ben at smart-cactus.org Tue Jan 21 16:56:18 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 21 Jan 2020 11:56:18 -0500 Subject: lots of spam repos on gitlab In-Reply-To: References: Message-ID: <0AA005D6-CEDD-4691-99D2-1117417529CE@smart-cactus.org> On January 21, 2020 11:25:28 AM EST, Carter Schonwald wrote: >theres a lot of spam repos on the gitlab :( > >11:07 AM > https://gitlab.haskell.org/wandaibson/oberton-paris > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=keto&sort=latest_activity_desc >lots >of keto >11:10 AM > two CBD >11:10 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=CBD&sort=latest_activity_desc >11:11 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pep&sort=latest_activity_desc >11:13 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=pill&sort=latest_activity_desc >11:17 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=vitality&sort=latest_activity_desc >11:17 AM > alp: ? 
>11:23 AM ⇐ zeta_0 quit (~zeta at h78.48.155.207.dynamic.ip.windstream.net) >Quit: doing a rebuild switch >11:24 AM > >https://gitlab.haskell.org/explore/projects?utf8=%E2%9C%93&name=suppl&sort=latest_activity_desc For future reference, if you see cases like this it would be helpful if you could report it by navigating to the user's page and using the "Report Abuse" button in the top right corner (the icon of which is a little exclamation mark in a circle). Cheers, - Ben From ben at smart-cactus.org Tue Jan 21 17:50:27 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 21 Jan 2020 12:50:27 -0500 Subject: Using lzip instead of xz for distributed tarballs In-Reply-To: References: <3a28d74e-57d5-4e84-fedb-bf26f8b1f67d@gmail.com> <87wo9lyduo.fsf@smart-cactus.org> Message-ID: <48C3BD51-B044-4CEB-9F15-27AFC9C6647C@smart-cactus.org> On January 21, 2020 11:44:15 AM EST, Vanessa McHale wrote: >Would it be plausible to distribute both? That way users would not have >to install lzip. > >Cheers, >Vanessa McHale > >> On Jan 20, 2020, at 4:15 PM, Ben Gamari wrote: >> >> Vanessa McHale writes: >> >>> Hello all, >>> >>> >>> GHC is distributed as .tar.xz tarballs; I assume this is because it >>> produces small tarballs. However, xz is ill-suited for archiving due >to >>> its lack of error recovery. Moreover, lzip produces smaller tarballs >>> with GHC (I tested with ghc-8.8.2-x86_64-deb9-linux.tar) and >>> decompression takes about the same amount of time. >>> >> Indeed I recall seeing the "Why xz is not suitable for archival >> purposes" blog post quite a while ago and considered moving away from >xz >> at the time but wasn't entirely convinced that the benefits would >> justify the churn, especially since xz tends to be pretty ubiquitous >at >> this point while lzip is a fair bit less so. 
>> >> I'd be happy to hear further reasons why we should switch but I'll >admit >> that I still don't quite see what switching would buy us; we do have >> a few backups spread across the planet so the probability of us >having >> to rely on the compressor for error recovery is pretty small. >> >> Cheers, >> >> - Ben > >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs There is indeed precedent for this. IIRC, we distributed both bzip2 and xz tarballs for several years. I'm not opposed to offering both; the biggest cost is the storage, and that is relatively minor. I have opened #17726 to track this. Cheers, - Ben From liuyiyun at terpmail.umd.edu Tue Jan 21 19:21:38 2020 From: liuyiyun at terpmail.umd.edu (Yiyun Liu) Date: Tue, 21 Jan 2020 14:21:38 -0500 Subject: How to turn LHExpr GhcPs into CoreExpr Message-ID: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> Hi ghc-devs, I've been trying to implement a function with the signature: elaborateExpr :: GhcMonad m => String -> m CoreExpr It should take a string of the form: "\x y -> x <> y :: Semigroup a => a -> a -> a" and give back a core expression with the dictionaries filled in: \ ($dSem :: Semigroup a) (x :: a) (y :: a) -> (<>) $dSem x y The goal is to use the function to add elaboration support for liquidhaskell. I looked into the implementation of exprType and defined my own elaborateExpr similarly by calling desugarExpr on the expression (which has type LHsExpr GhcTcId) returned by tcInferSigma. GhcTcId is a synonym of GhcTc, so the program I wrote typechecks, but it's definitely not right. The elaborateExpr function I defined would return something even when the expression doesn't typecheck, or occasionally give a runtime exception: ghc-elaboration-test: panic! (the 'impossible' happened) (GHC version 8.6.5 for x86_64-unknown-linux): dsEvBinds I must have broken some invariants somehow.
What is the correct way of defining such a function (takes a string and returns a CoreExpr)? It appears to me that I should convert LHsExpr GhcPs into LHsExpr GhcTc first before calling deSugarExpr, but I don't know how. Thank you, - Yiyun -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Wed Jan 22 08:54:36 2020 From: sylvain at haskus.fr (Sylvain Henry) Date: Wed, 22 Jan 2020 09:54:36 +0100 Subject: Marge bot review link In-Reply-To: <526A1626-3A2E-4973-A948-799CAE9D596C@well-typed.com> References: <5e19d6ac.1c69fb81.4f04c.bf50@mx.google.com> <526A1626-3A2E-4973-A948-799CAE9D596C@well-typed.com> Message-ID: It seems that we just have to add `add-part-of: true` in marge bot config file according to https://github.com/smarkets/marge-bot Cheers Sylvain On 12/01/2020 10:10, Ben Gamari wrote: > It likely is possible. However, I have been a bit reluctant to touch > Marge since it is supposed to be a temporary measure and changes have > historically resulted in regressions. I do hope that merge train > support will finally be usable in the next release of GitLab. > > Cheers, > > - Ben > > On January 11, 2020 9:07:40 AM EST, lonetiger at gmail.com wrote: > > Hi Ben, > > I’m wondering if it’s possible to get marge to amend the commit > message before it merges it to include links to the review requests. > > I really miss that phab feature.. > > Thanks, > > Tamar > > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Wed Jan 22 09:04:17 2020 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 22 Jan 2020 09:04:17 +0000 Subject: Marge bot review link In-Reply-To: References: <5e19d6ac.1c69fb81.4f04c.bf50@mx.google.com> <526A1626-3A2E-4973-A948-799CAE9D596C@well-typed.com> Message-ID: I wouldn't assume that will work without reading the source code, for two reasons. 1. The batch mode does not implement all the flags of the single-shot mode. 2. I modified the code quite a bit for our instance to get it to work reliably. Matt On Wed, Jan 22, 2020 at 9:00 AM Sylvain Henry wrote: > > It seems that we just have to add `add-part-of: true` in marge bot config file according to https://github.com/smarkets/marge-bot > > Cheers > Sylvain > > > On 12/01/2020 10:10, Ben Gamari wrote: > > It likely is possible. However, I have been a bit reluctant to touch Marge since it is supposed to be a temporary measure and changes have historically resulted in regressions. I do hope that merge train support will finally be usable in the next release of GitLab. > > Cheers, > > - Ben > > On January 11, 2020 9:07:40 AM EST, lonetiger at gmail.com wrote: >> >> Hi Ben, >> >> >> >> I’m wondering if it’s possible to get marge to amend the commit message before it merges it to include links to the review requests. >> >> >> >> I really miss that phab feature.. >> >> >> >> Thanks, >> >> Tamar > > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From rae at richarde.dev Wed Jan 22 09:36:29 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Wed, 22 Jan 2020 09:36:29 +0000 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> Message-ID: <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> You'll need to run the expression through the whole pipeline. 1. Parsing 2. Renaming 3. Type-checking 3a. Constraint generation 3b. Constraint solving 3c. Zonking 4. Desugaring I don't have the exact calls you should use for these steps, but I can give you some pointers. 1. parseExpression 2. rnLExpr 3a. tcInferSigma 3b. simplifyInfer 3c. zonkTopLExpr 4. dsLExpr You may want to examine tcRnExpr for a template on how to do steps 2 and 3. Note that this function drops the expression and continues only with the type, but the setup around constraint solving is likely what you want. Another place to look for inspiration is in the pipeline that GHCi uses to process user-written expressions. hscParsedStmt and its caller, hscStmtWithLocation may also be helpful. These functions also compile the statement (which you don't want to do), but you can see where the desugared statement comes out. I hope this helps move you in the right direction! 
Richard > On Jan 21, 2020, at 7:21 PM, Yiyun Liu wrote: > > Hi ghc-devs, > > I've been trying to implement a function with the signature: > > elaborateExpr :: GhcMonad m => String -> m CoreExpr > > It should take a string of the form: > > "\x y -> x <> y :: Semigroup a => a -> a -> a" > > and give back a core expression with the dictionaries filled in: > \ ($dSem :: Semigroup a) (x :: a) (y :: a) -> (<>) $dSem x y > > The goal is to use the function to add elaboration support for liquidhaskell. > > I looked into the implementation of exprType and defined my own elaborateExpr similarly by calling desugarExpr on the expression (which has type LHsExpr GhcTcId) returned by tcInferSigma. > > GhcTcId is a synonym of GhcTc, so the program I wrote typechecks, but it's definitely not right. The elaborateExpr function I defined would return something even when the expression doesn't typecheck, or occasionally give a runtime exception: > > ghc-elaboration-test: panic! (the 'impossible' happened) > (GHC version 8.6.5 for x86_64-unknown-linux): > dsEvBinds > > I must have broken some invariants somehow. > > What is the correct way of defining such a function (takes a string and returns a CoreExpr)? It appears to me that I should convert LHsExpr GhcPs into LHsExpr GhcTc first before calling deSugarExpr, but I don't know how. > > Thank you, > > - Yiyun > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 22 10:54:50 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 22 Jan 2020 10:54:50 +0000 Subject: GHC perf In-Reply-To: References: Message-ID: David Thanks. Concerning this: 1. Check out the commit. 2. Use `git status` to double check git sees a clean working tree. 3. Run the performance tests. 4. Check out your branch. 5.
Use `git status` to double check git sees a clean working tree (else commit any changes) 6. Run the performance tests. 7. Compare metrics (filtering for `local` metrics and outputting a chart): python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local I believe that * This compares two local builds * It does not require fetching CI perf data; in fact it is 100% independent of the CI system * It does require two separate build trees (that is fine) Is that right? If so, two questions: * In that Python command line (step 7) is "" the path to the root of the baseline tree, or to some file within that tree? * Is this process (and what it does) written up on some wiki page somewhere? Where? Rather than replying to me individually, it'd be better to use this conversation to produce better guidance for everyone. Thanks Simon From: David Eichmann Sent: 20 January 2020 10:37 To: Simon Peyton Jones ; Ben Gamari Cc: ghc-devs Subject: Re: GHC perf Hi Simon, * There are two things going on: * CI perf measurements * Local machine perf measurements I think that they are somehow handled differently (why?) but they are all muddled up on the wiki page. They are handled differently because we do not want to compare local metrics with CI metrics. The exception is when local metrics don't exist, in which case we fall back to CI metrics as a baseline (see How baseline metrics are calculated). * My goal is this: * Start with a master commit, say from Dec 2019. * Implement some change, on a branch. * sh validate -legacy (or something else if you like) * Look at perf regressions. Getting to the *raw data* should be easy: 1. Check out the commit. 2. Use `git status` to double check git sees a clean working tree. 3. Run the performance tests. 4. Check out your branch. 5. Use `git status` to double check git sees a clean working tree (else commit any changes) 6. Run the performance tests. 7.
Compare metrics (filtering for `local` metrics and outputting a chart): python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local see `python3 testsuite/driver/perf_notes.py --help` for more filtering options. This doesn't detect regressions automatically, it only shows you the raw data. Ideally we'd add an option to the testrunner to let you specify a baseline commit manually. I suspect that would be close to what you're looking for. * I believe I have first to utter the incantation $ git fetch https://gitlab.haskell.org/ghc/ghc-performance-notes.git refs/notes/perf:refs/notes/ci/perf Yes, this fetches the latest CI metrics into your git notes. * But then: * How do I ensure that the baseline perf numbers I get relate to the master commit I started from, back in Dec 2019? I don't want numbers from Jan 2020. see above. * If I rebase my branch on top of HEAD, say, how do I update the perf baseline numbers to be for HEAD The test runner should use HEAD's metrics automatically (see How baseline metrics are calculated), though you will need to fetch CI metrics or run the perf tests locally on HEAD to get the relevant metrics. * Generally, how can I tell the commit to which the baseline numbers relate? The test runner will output (per test) which baseline commit is used e.g. "... from local baseline @ HEAD~2" says the baseline was a local run from 2 commits ago. * Also, in my tree I have a series of incremental changes; I want to see if any of them have perf regressions. How do I do that? You can run the perf tests on each commit *in commit order*, and the previous commit will always be used as the baseline. You can also then chart the results: python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local .. 
Sorry if this is a bit suboptimal, but I hope that helps - David E -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Wed Jan 22 11:56:01 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Wed, 22 Jan 2020 12:56:01 +0100 Subject: FloatRep and DoubleRep ADT Argument in STG Message-ID: Hello, I'm trying to use the GHC backend via STG. For that reason I build a small STG program AST manually. So far the generated programs have worked fine (compile/link/run). However, I run into problems when a lifted ADT has a FloatRep argument. Interestingly, it works for DoubleRep. I'm using GHC 8.6.1 64 bit to generate the code for the constructed STG AST. I wonder if I break an invariant or if this is actually a bug? Is it valid to use a FloatRep argument in a boxed ADT on 64 bit? I made a gist with the Haskell source; the generated Cmm code is also included. https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 Working program (DoubleRep): https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 Wrong program (FloatRep): https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 Thanks, Csaba -------------- next part -------------- An HTML attachment was scrubbed...
URL: From klebinger.andreas at gmx.at Wed Jan 22 12:09:28 2020 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 22 Jan 2020 13:09:28 +0100 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> Message-ID: <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> I tried this for fun a while ago and ran into the issue of needing to provide a type environment containing Prelude and so on. I gave up on that when some of the calls failed because I must have missed setting up some implicit state properly. I didn't have an actual use case (only curiosity) so I didn't look further into it. If you do find a way, please let me know. I would also support adding any missing functions to GHC-the-library to make this possible if any turn out to be required. As an alternative you could also use the GHCi approach of using a fake Module. This would allow you to copy whatever GHCi is doing. But I expect that to be slower if you expect to process many such strings. Richard Eisenberg wrote on 22.01.2020 at 10:36: > You'll need to run the expression through the whole pipeline. > > 1. Parsing > 2. Renaming > 3. Type-checking > 3a. Constraint generation > 3b. Constraint solving > 3c. Zonking > 4. Desugaring -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Jan 22 14:20:29 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 22 Jan 2020 17:20:29 +0300 Subject: FloatRep and DoubleRep ADT Argument in STG In-Reply-To: References: Message-ID: What is the problem you're having? What do you mean by "run into problems"? What's going wrong? It'd be helpful if you could show us your program in STG syntax. > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? It should be valid, yes.
I'd also try with `-dstg-lint -dcmm-lint`. Ömer Csaba Hruska , 22 Oca 2020 Çar, 14:56 tarihinde şunu yazdı: > > Hello, > > I try to use GHC backend via STG. For that reason I build small STG program AST maually. So far the generated programs worked fine (compile/link/run). > However I run into problems when a lifted ADT has a FloatRep argument. > Interestingly it works for DoubleRep. > I'm using GHC 8.6.1 64 bit to generate the code for the constructed STG AST. > I wonder if I break an invariant or is this actually a bug? > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? > I made a gist with the Haskell source and the generated Cmm code is also included. > > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 > > Working program (DoubleRep): > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 > > Wrong program (FloatRep): > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 > > Thanks, > Csaba > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From csaba.hruska at gmail.com Wed Jan 22 15:21:49 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Wed, 22 Jan 2020 16:21:49 +0100 Subject: FloatRep and DoubleRep ADT Argument in STG In-Reply-To: References: Message-ID: Sorry, I should have noted that the gist has a description comment at the bottom. https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#gistcomment-3148797 On Wed, Jan 22, 2020 at 3:21 PM Ömer Sinan Ağacan wrote: > What is the problem you're having? What do you mean by "run into problems"? > What's going wrong? > > It'd be helpful if you could show us your program in STG syntax. > > > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? > > It should be valid, yes. > > I'd also try with `-dstg-lint -dcmm-lint`. 
> > Ömer > > Csaba Hruska , 22 Oca 2020 Çar, 14:56 > tarihinde şunu yazdı: > > > > Hello, > > > > I try to use GHC backend via STG. For that reason I build small STG > program AST maually. So far the generated programs worked fine > (compile/link/run). > > However I run into problems when a lifted ADT has a FloatRep argument. > > Interestingly it works for DoubleRep. > > I'm using GHC 8.6.1 64 bit to generate the code for the constructed STG > AST. > > I wonder if I break an invariant or is this actually a bug? > > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? > > I made a gist with the Haskell source and the generated Cmm code is also > included. > > > > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 > > > > Working program (DoubleRep): > > > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 > > > > Wrong program (FloatRep): > > > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 > > > > Thanks, > > Csaba > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From csaba.hruska at gmail.com Wed Jan 22 15:40:42 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Wed, 22 Jan 2020 16:40:42 +0100 Subject: FloatRep and DoubleRep ADT Argument in STG In-Reply-To: References: Message-ID: Here is the pretty-printed STG in GHC syntax:

WORKING (DoubleRep): prints 3.14

x0 :: Any
[GblId] = "Value: MyConA %d %d\n"#;

x1 :: Any
[GblId] = "Value: MyConB %lf\n"#;

main :: Any
[GblId] =
    [] \u [void_0E]
        case MyConB [3.14##] of x100 {
          __DEFAULT ->
              case x100 of x101 {
                MyConA x200 x202 -> __pkg_ccall [x0 x200 x202];
                MyConB x203 -> __pkg_ccall [x1 x203];
              };
        };

WRONG (FloatRep): prints 0.00 instead of 3.14

x0 :: Any
[GblId] = "Value: MyConA %d %d\n"#;

x1 :: Any
[GblId] = "Value: MyConB %f\n"#;

main :: Any
[GblId] =
    [] \u [void_0E]
        case MyConB [3.14#] of x100 {
          __DEFAULT ->
              case x100 of x101 {
                MyConA x200 x202 -> __pkg_ccall [x0 x200 x202];
                MyConB x203 -> __pkg_ccall [x1 x203];
              };
        };

Thanks, Csaba On Wed, Jan 22, 2020 at 4:21 PM Csaba Hruska wrote: > Sorry, I should have noted that the gist has a description comment at the bottom. > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#gistcomment-3148797 > > On Wed, Jan 22, 2020 at 3:21 PM Ömer Sinan Ağacan wrote: >> What is the problem you're having? What do you mean by "run into problems"? >> What's going wrong? >> >> It'd be helpful if you could show us your program in STG syntax. >> >> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >> >> It should be valid, yes. >> >> I'd also try with `-dstg-lint -dcmm-lint`. >> >> Ömer >> >> On Wed, 22 Jan 2020 at 14:56, Csaba Hruska wrote: >> > >> > Hello, >> > >> > I'm trying to use the GHC backend via STG. For that reason I build a small STG >> program AST manually. So far the generated programs have worked fine >> (compile/link/run). >> > However, I run into problems when a lifted ADT has a FloatRep argument. >> > Interestingly, it works for DoubleRep.
>> > I'm using GHC 8.6.1 64 bit to generate the code for the constructed STG >> AST. >> > I wonder if I break an invariant or is this actually a bug? >> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >> > I made a gist with the Haskell source and the generated Cmm code is >> also included. >> > >> > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 >> > >> > Working program (DoubleRep): >> > >> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 >> > >> > Wrong program (FloatRep): >> > >> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 >> > >> > Thanks, >> > Csaba >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Wed Jan 22 21:27:27 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Wed, 22 Jan 2020 22:27:27 +0100 Subject: FloatRep and DoubleRep ADT Argument in STG In-Reply-To: References: Message-ID: I added Stg and Cmm linter to my custom pipeline and they report no errors. 
On Wed, Jan 22, 2020 at 4:40 PM Csaba Hruska wrote: > Here are the pretty printed STG in GHC syntax: > > *WORKING (DoubleRep): prints 3.14* > > > > > > > > > > > > > > > > > > *x0 :: Any[GblId] = "Value: MyConA %d %d\n"#;x1 :: Any[GblId] = > "Value: MyConB %lf\n"#;main :: Any[GblId] = [] \u [void_0E] case > MyConB [3.14##] of x100 { __DEFAULT -> case x100 of > x101 { MyConA x200 x202 -> __pkg_ccall [x0 x200 x202]; > MyConB x203 -> __pkg_ccall [x1 x203]; }; };* > > > *WRONG (FloatRep) : prints 0.00 instead of 3.14* > > > > > > > > > > > > > > > > > > *x0 :: Any[GblId] = "Value: MyConA %d %d\n"#;x1 :: Any[GblId] = > "Value: MyConB %f\n"#;main :: Any[GblId] = [] \u [void_0E] case > MyConB [3.14#] of x100 { __DEFAULT -> case x100 of > x101 { MyConA x200 x202 -> __pkg_ccall [x0 x200 x202]; > MyConB x203 -> __pkg_ccall [x1 x203]; }; };* > Thanks, > Csaba > > On Wed, Jan 22, 2020 at 4:21 PM Csaba Hruska > wrote: > >> Sorry, I should have noted that the gist has a description comment at the >> bottom. >> >> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#gistcomment-3148797 >> >> >> On Wed, Jan 22, 2020 at 3:21 PM Ömer Sinan Ağacan >> wrote: >> >>> What is the problem you're having? What do you mean by "run into >>> problems"? >>> What's going wrong? >>> >>> It'd be helpful if you could show us your program in STG syntax. >>> >>> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >>> >>> It should be valid, yes. >>> >>> I'd also try with `-dstg-lint -dcmm-lint`. >>> >>> Ömer >>> >>> Csaba Hruska , 22 Oca 2020 Çar, 14:56 >>> tarihinde şunu yazdı: >>> > >>> > Hello, >>> > >>> > I try to use GHC backend via STG. For that reason I build small STG >>> program AST maually. So far the generated programs worked fine >>> (compile/link/run). >>> > However I run into problems when a lifted ADT has a FloatRep argument. >>> > Interestingly it works for DoubleRep. >>> > I'm using GHC 8.6.1 64 bit to generate the code for the constructed >>> STG AST. 
>>> > I wonder if I break an invariant or is this actually a bug? >>> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >>> > I made a gist with the Haskell source and the generated Cmm code is >>> also included. >>> > >>> > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 >>> > >>> > Working program (DoubleRep): >>> > >>> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 >>> > >>> > Wrong program (FloatRep): >>> > >>> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 >>> > >>> > Thanks, >>> > Csaba >>> > >>> > >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From csaba.hruska at gmail.com Wed Jan 22 23:07:02 2020 From: csaba.hruska at gmail.com (Csaba Hruska) Date: Thu, 23 Jan 2020 00:07:02 +0100 Subject: FloatRep and DoubleRep ADT Argument in STG In-Reply-To: References: Message-ID: Hello, I have to apologize because I've fooled myself. Everything works fine on the Haskell side. The problem was that I tried to pass a float value to printf, which is a variadic C function. According to the C standard / Stack Overflow: > because printf and its friends are variadic functions, so a float > parameter undergoes automatic conversion to double as part of the default > argument promotions (see section 6.5.2.2 of the C99 standard). > Sorry for the confusion. Thanks, Csaba On Wed, Jan 22, 2020 at 10:27 PM Csaba Hruska wrote: > I added Stg and Cmm linter to my custom pipeline and they report no errors.
> > On Wed, Jan 22, 2020 at 4:40 PM Csaba Hruska > wrote: > >> Here are the pretty printed STG in GHC syntax: >> >> *WORKING (DoubleRep): prints 3.14* >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *x0 :: Any[GblId] = "Value: MyConA %d %d\n"#;x1 :: Any[GblId] = >> "Value: MyConB %lf\n"#;main :: Any[GblId] = [] \u [void_0E] case >> MyConB [3.14##] of x100 { __DEFAULT -> case x100 of >> x101 { MyConA x200 x202 -> __pkg_ccall [x0 x200 x202]; >> MyConB x203 -> __pkg_ccall [x1 x203]; }; };* >> >> >> *WRONG (FloatRep) : prints 0.00 instead of 3.14* >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> *x0 :: Any[GblId] = "Value: MyConA %d %d\n"#;x1 :: Any[GblId] = >> "Value: MyConB %f\n"#;main :: Any[GblId] = [] \u [void_0E] case >> MyConB [3.14#] of x100 { __DEFAULT -> case x100 of >> x101 { MyConA x200 x202 -> __pkg_ccall [x0 x200 x202]; >> MyConB x203 -> __pkg_ccall [x1 x203]; }; };* >> Thanks, >> Csaba >> >> On Wed, Jan 22, 2020 at 4:21 PM Csaba Hruska >> wrote: >> >>> Sorry, I should have noted that the gist has a description comment at >>> the bottom. >>> >>> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#gistcomment-3148797 >>> >>> >>> On Wed, Jan 22, 2020 at 3:21 PM Ömer Sinan Ağacan >>> wrote: >>> >>>> What is the problem you're having? What do you mean by "run into >>>> problems"? >>>> What's going wrong? >>>> >>>> It'd be helpful if you could show us your program in STG syntax. >>>> >>>> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >>>> >>>> It should be valid, yes. >>>> >>>> I'd also try with `-dstg-lint -dcmm-lint`. >>>> >>>> Ömer >>>> >>>> Csaba Hruska , 22 Oca 2020 Çar, 14:56 >>>> tarihinde şunu yazdı: >>>> > >>>> > Hello, >>>> > >>>> > I try to use GHC backend via STG. For that reason I build small STG >>>> program AST maually. So far the generated programs worked fine >>>> (compile/link/run). >>>> > However I run into problems when a lifted ADT has a FloatRep argument. 
>>>> > Interestingly it works for DoubleRep. >>>> > I'm using GHC 8.6.1 64 bit to generate the code for the constructed >>>> STG AST. >>>> > I wonder if I break an invariant or is this actually a bug? >>>> > Is it valid to use FloatRep argument in a boxed ADT on 64 bit? >>>> > I made a gist with the Haskell source and the generated Cmm code is >>>> also included. >>>> > >>>> > https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1 >>>> > >>>> > Working program (DoubleRep): >>>> > >>>> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L134-L198 >>>> > >>>> > Wrong program (FloatRep): >>>> > >>>> https://gist.github.com/csabahruska/e9e143390c863f7b10b0298a7ae80ac1#file-stgsample-hs-L64-L132 >>>> > >>>> > Thanks, >>>> > Csaba >>>> > >>>> > >>>> > _______________________________________________ >>>> > ghc-devs mailing list >>>> > ghc-devs at haskell.org >>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuyiyun at terpmail.umd.edu Thu Jan 23 04:52:00 2020 From: liuyiyun at terpmail.umd.edu (Yiyun Liu) Date: Wed, 22 Jan 2020 22:52:00 -0600 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> Message-ID: Thank you all for your help! It turns out that I was missing the constraint solving and zonking step by desugaring the result of tcInferSigma directly. I have the implementation of the function here . Not sure if it's 100% correct but at least it works for all the examples I can come up with so far. - Yiyun On 1/22/20 7:09 AM, Andreas Klebinger wrote: > I tried this for fun a while ago and ran into the issue of needing to > provide a type environment containing Prelude and so on. 
> I gave up on that when some of the calls failed because I must have > missed to set up some implicit state properly. > I didn't have an actual use case (only curiosity) so I didn't look > further into it. If you do find a way please let me know. > > I would also support adding any missing functions to GHC-the-library > to make this possible if any turn out to be required. > > As an alternative you could also use the GHCi approach of using a fake > Module. This would allow you to copy whatever GHCi is doing. > But I expect that to be slower if you expect to process many such > strings, > > Richard Eisenberg schrieb am 22.01.2020 um 10:36: >> You'll need to run the expression through the whole pipeline. >> >> 1. Parsing >> 2. Renaming >> 3. Type-checking >> 3a. Constraint generation >> 3b. Constraint solving >> 3c. Zonking >> 4. Desugaring > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Thu Jan 23 06:54:04 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 23 Jan 2020 09:54:04 +0300 Subject: Bug in SRT generation for procs in .cmm files? Message-ID: Hi Simon, Currently CmmParse only generates CmmLabels for procs, and those are considered non-CAFFY by hasCAF (and thus CmmBuildInfoTables). As a result if I have two procs in a .cmm file: - p1, refers to a CAF in base - p2, refers to p1 I *think* (haven't checked) we don't consider p1 as CAFFY, and even if we do, we don't consider p2 as CAFFY because the reference from p2 to p1 won't be considered CAFFY by hasCAF. So we currently can't define a CAFFY Cmm proc in .cmm files as the SRT algorithm will never build SRTs for procs in .cmm files. Is this intentional? I'd expect this to be possible, because there's nothing preventing me from referring to a CAFFY definition in a library (e.g. base) in a .cmm file, but doing this would be a bug at runtime.
Thanks, Ömer From rae at richarde.dev Thu Jan 23 09:04:03 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Thu, 23 Jan 2020 09:04:03 +0000 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> Message-ID: <2BF53EB5-E9F8-4F48-AE52-860602CCF623@richarde.dev> I don't know the exact semantics of the interactive context, etc., but that looks plausible. It won't give the *wrong* answer. :) Thanks for sharing! Richard > On Jan 23, 2020, at 4:52 AM, Yiyun Liu wrote: > > Thank you all for your help! It turns out that I was missing the constraint solving and zonking step by desugaring the result of tcInferSigma directly. > I have the implementation of the function here . Not sure if it's 100% correct but at least it works for all the examples I can come up with so far. > - Yiyun > On 1/22/20 7:09 AM, Andreas Klebinger wrote: >> I tried this for fun a while ago and ran into the issue of needing to provide a type environment containing Prelude and so on. >> I gave up on that when some of the calls failed because I must have missed to set up some implicit state properly. >> I didn't have an actual use case (only curiosity) so I didn't look further into it. If you do find a way please let me know. >> >> I would also support adding any missing functions to GHC-the-library to make this possible if any turn out to be required. >> >> As an alternative you could also use the GHCi approach of using a fake Module. This would allow you to copy whatever GHCi is doing. >> But I expect that to be slower if you expect to process many such strings, >> >> Richard Eisenberg schrieb am 22.01.2020 um 10:36: >>> You'll need to run the expression through the whole pipeline. >>> >>> 1. Parsing >>> 2. Renaming >>> 3. Type-checking >>> 3a. Constraint generation >>> 3b. Constraint solving >>> 3c. Zonking >>> 4. 
Desugaring >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Jan 23 12:15:53 2020 From: ben at well-typed.com (Ben Gamari) Date: Thu, 23 Jan 2020 07:15:53 -0500 Subject: Bug in SRT generation for procs in .cmm files? In-Reply-To: References: Message-ID: While it's true that in principle one could imagine a case where you would want a CAFfy Cmm proc, I can't think of any such cases in the RTS today. Consequently it wouldn't surprise me if this was broken. Frankly, I wouldn't worry too much about this if it's nontrivial to fix. Cheers, - Ben On January 23, 2020 1:54:04 AM EST, "Ömer Sinan Ağacan" wrote: >Hi Simon, > >Currently CmmParse only generates CmmLabels for procs, and those are >considered >non-CAFFY by hasCAF (and thus CmmBuildInfoTables). > >As a result if I have two procs in a .cmm file: > >- p1, refers to a CAF in base >- p2, refers to p1 > >I *think* (haven't checked) we don't consider p1 as CAFFY, and even if >we do, we >don't consider p2 as CAFFY because the reference from p2 to p1 won't be >considered CAFFY by hasCAF. > >So we currently can't define a CAFFY Cmm proc in .cmm files as the SRT >algorithm >will never build SRTs for procs in .cmm files. > >Is this intentional? I'd expect this to be possible, because there's >nothing >preventing me from referring to a CAFFY definition in a library (e.g. >base) in a >.cmm file, but doing this would be a bug at runtime. > >Thanks, > >Ömer >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simonpj at microsoft.com Thu Jan 23 12:31:19 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 23 Jan 2020 12:31:19 +0000 Subject: GHC perf In-Reply-To: <4a47e27d-0c2a-17af-df44-501a60c3e526@well-typed.com> References: <4a47e27d-0c2a-17af-df44-501a60c3e526@well-typed.com> Message-ID: Thanks This information is a bit spread out over the wiki page. Which wiki page? Yes, it'd be fantastic to write this out clearly. Thanks! $ git checkout a12b34c56 && git submodule update --init $ ./hadrian/build.sh test --only-perf $ git checkout x98y76z54 && git submodule update --init $ ./hadrian/build.sh test --only-perf $ python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local a12b34c56 x98y76z54 $ firefox chart.html Ah. Now I'm lost. Somehow the second and fourth line must be recording info, locally in my tree, but two distinct batches of information. Perhaps kept distinct by the current commit? Where is the info actually stored? OK, suppose I start from commit XX, and make some local changes. Then I do the -only-perf thing. presumably that'll be recorded tagged with XX. That's fine; just want it to be clear. Worth adding this info to the wiki page, so we have a clear mental model. Thanks Simon From: David Eichmann Sent: 23 January 2020 11:19 To: Simon Peyton Jones Subject: Re: GHC perf Simon * This compares two local builds Yes * It does not require fetching CI perf data; in fact it 100% independent of the CI system Yes * It does require two separate build trees (that is fine) No, this does not require different build trees, and are git commits (or similar e.g. branch name). 
The actual process might look like: $ git checkout a12b34c56 && git submodule update --init $ ./hadrian/build.sh test --only-perf $ git checkout x98y76z54 && git submodule update --init $ ./hadrian/build.sh test --only-perf $ python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local a12b34c56 x98y76z54 $ firefox chart.html This information is a bit spread out over the wiki page. Perhaps a "quick start" section describing this use case would be helpful. On 1/22/20 10:54 AM, Simon Peyton Jones wrote: David Thanks. Concerning this: 1. Checkout an the commit. 2. Use `git status` to double check git sees a clean working tree. 3. Run the performance tests. 4. Check out your branch. 5. Use `git status` to double check git sees a clean working tree (else commit any changes) 6. Run the performance tests. 7. Compare metrics (filtering for `local` metrics and outputting a chart): python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local I believe that * This compares two local builds * It does not require fetching CI perf data; in fact it 100% independent of the CI system * It does require two separate build trees (that is fine) Is that right? If so, two questions * In that Python command line (step 7) is "" the path to the root of the baseline tree, or to some file within that tree? * Is this process (and what it does) written up on some wiki page somewhere? Where? Rather than replying to me individually, it'd be better to use this conversation to produce better guidance for everyone. Thanks Simon From: David Eichmann Sent: 20 January 2020 10:37 To: Simon Peyton Jones ; Ben Gamari Cc: ghc-devs Subject: Re: GHC perf Hi Simon, * There are two things going on: * CI perf measurements * Local machine perf measurements I think that they are somehow handled differently (why?) but they are all muddled up on the wiki page. They are handled differently because we do not want to compare local metrics with CI metrics. 
The exception is when local metrics don't exist, then we fall back to CI metrics as a baseline (see How baseline metrics are calculated). * My goal is this: * Start with a master commit, say from Dec 2019. * Implement some change, on a branch. * sh validate -legacy (or something else if you like) * Look at perf regressions. Getting to the *raw data* should be easy: 1. Checkout an the commit. 2. Use `git status` to double check git sees a clean working tree. 3. Run the performance tests. 4. Check out your branch. 5. Use `git status` to double check git sees a clean working tree (else commit any changes) 6. Run the performance tests. 7. Compare metrics (filtering for `local` metrics and outputting a chart): python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local see `python3 testsuite/driver/perf_notes.py --help` for more filtering options. This doesn't detect regressions automatically, it only shows you the raw data. Ideally we'd add an option to the testrunner to let you specify a baseline commit manually. I suspect that would be close to what you're looking for. * I believe I have first to utter the incantation $ git fetch https://gitlab.haskell.org/ghc/ghc-performance-notes.git refs/notes/perf:refs/notes/ci/perf Yes, this fetches the latest CI metrics into your git notes. * But then: * How do I ensure that the baseline perf numbers I get relate to the master commit I started from, back in Dec 2019? I don't want numbers from Jan 2020. see above. * If I rebase my branch on top of HEAD, say, how do I update the perf baseline numbers to be for HEAD The test runner should use HEAD's metrics automatically (see How baseline metrics are calculated), though you will need to fetch CI metrics or run the perf tests locally on HEAD to get the relevant metrics. * Generally, how can I tell the commit to which the baseline numbers relate? The test runner will output (per test) which baseline commit is used e.g. "... 
"from local baseline @ HEAD~2" says the baseline was a local run from 2 commits ago. * Also, in my tree I have a series of incremental changes; I want to see if any of them have perf regressions. How do I do that? You can run the perf tests on each commit *in commit order*, and the previous commit will always be used as the baseline. You can also then chart the results: python3 testsuite/driver/perf_notes.py --chart chart.html --test-env local .. Sorry if this is a bit suboptimal, but I hope that helps - David E -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Jan 23 12:53:48 2020 From: ben at well-typed.com (Ben Gamari) Date: Thu, 23 Jan 2020 07:53:48 -0500 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: <2BF53EB5-E9F8-4F48-AE52-860602CCF623@richarde.dev> References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> <2BF53EB5-E9F8-4F48-AE52-860602CCF623@richarde.dev> Message-ID: It is slightly disheartening that this relatively simple use-case requires reaching so deeply into the typechecker. If there really exists no easier interface then perhaps we should consider adopting your elaborateExpr as part of the GHC API. Cheers, - Ben On January 23, 2020 4:04:03 AM EST, Richard Eisenberg wrote: >I don't know the exact semantics of the interactive context, etc., but >that looks plausible. It won't give the *wrong* answer. :) > >Thanks for sharing!
>Richard > >> On Jan 23, 2020, at 4:52 AM, Yiyun Liu >wrote: >> >> Thank you all for your help! It turns out that I was missing the >constraint solving and zonking step by desugaring the result of >tcInferSigma directly. >> I have the implementation of the function here >. >Not sure if it's 100% correct but at least it works for all the >examples I can come up with so far. >> - Yiyun >> On 1/22/20 7:09 AM, Andreas Klebinger wrote: >>> I tried this for fun a while ago and ran into the issue of needing >to provide a type environment containing Prelude and so on. >>> I gave up on that when some of the calls failed because I must have >missed to set up some implicit state properly. >>> I didn't have an actual use case (only curiosity) so I didn't look >further into it. If you do find a way please let me know. >>> >>> I would also support adding any missing functions to GHC-the-library >to make this possible if any turn out to be required. >>> >>> As an alternative you could also use the GHCi approach of using a >fake Module. This would allow you to copy whatever GHCi is doing. >>> But I expect that to be slower if you expect to process many such >strings, >>> >>> Richard Eisenberg schrieb am 22.01.2020 um 10:36: >>>> You'll need to run the expression through the whole pipeline. >>>> >>>> 1. Parsing >>>> 2. Renaming >>>> 3. Type-checking >>>> 3a. Constraint generation >>>> 3b. Constraint solving >>>> 3c. Zonking >>>> 4. Desugaring >>> -- Sent from my Android device with K-9 Mail. Please excuse my brevity. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cheng.shao at tweag.io Thu Jan 23 13:21:25 2020 From: cheng.shao at tweag.io (Shao, Cheng) Date: Thu, 23 Jan 2020 14:21:25 +0100 Subject: How to turn LHExpr GhcPs into CoreExpr In-Reply-To: References: <4e780d90-d551-8a15-1561-c8148ff67bee@terpmail.umd.edu> <07D49BF1-AC80-4E91-B14C-7F8BEB82F47C@richarde.dev> <6c1ee9c9-a759-e246-aa9b-2e129b3ecfa8@gmx.at> <2BF53EB5-E9F8-4F48-AE52-860602CCF623@richarde.dev> Message-ID: How about using `hscCompileCoreExprHook` to intercept the `CoreExpr` from the ghci pipeline? There exists a GHC API to evaluate a String to a ForeignHValue, IIRC; we are not interested in the final ForeignHValue in this case, we just want the CoreExpr, and the logic of generating and linking BCO can be discarded. Cheers, Cheng On Thu, Jan 23, 2020 at 1:55 PM Ben Gamari wrote: > > It is slightly disheartening that this relatively simple use-case requires reaching so deeply into the typechecker. > > If there really exists no easier interface then perhaps we should consider adopting your elaborateExpr as part of the GHC API. > > Cheers, > > - Ben > > On January 23, 2020 4:04:03 AM EST, Richard Eisenberg wrote: >> >> I don't know the exact semantics of the interactive context, etc., but that looks plausible. It won't give the *wrong* answer. :) >> >> Thanks for sharing! >> Richard >> >> On Jan 23, 2020, at 4:52 AM, Yiyun Liu wrote: >> >> Thank you all for your help! It turns out that I was missing the constraint solving and zonking step by desugaring the result of tcInferSigma directly. >> >> I have the implementation of the function here. Not sure if it's 100% correct but at least it works for all the examples I can come up with so far. >> >> - Yiyun >> >> On 1/22/20 7:09 AM, Andreas Klebinger wrote: >> >> I tried this for fun a while ago and ran into the issue of needing to provide a type environment containing Prelude and so on. >> I gave up on that when some of the calls failed because I must have missed to set up some implicit state properly.
>> I didn't have an actual use case (only curiosity) so I didn't look further into it. If you do find a way please let me know. >> >> I would also support adding any missing functions to GHC-the-library to make this possible if any turn out to be required. >> >> As an alternative you could also use the GHCi approach of using a fake Module. This would allow you to copy whatever GHCi is doing. >> But I expect that to be slower if you expect to process many such strings, >> >> Richard Eisenberg schrieb am 22.01.2020 um 10:36: >> >> You'll need to run the expression through the whole pipeline. >> >> 1. Parsing >> 2. Renaming >> 3. Type-checking >> 3a. Constraint generation >> 3b. Constraint solving >> 3c. Zonking >> 4. Desugaring >> >> >> > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Jan 23 14:57:05 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 23 Jan 2020 14:57:05 +0000 Subject: GHC perf In-Reply-To: <206cb659-6161-3399-cca7-064202c510f5@well-typed.com> References: <4a47e27d-0c2a-17af-df44-501a60c3e526@well-typed.com> <206cb659-6161-3399-cca7-064202c510f5@well-typed.com> Message-ID: We store the metrics in git notes *per-commit*. All metrics for commit XX are stored on the git note for commit XX. You can even view the raw data with this command (where XX is the commit hash): OK. But the master repo *already* has perf notes for that commit (I assume). Do mine somehow overwrite the master copy? So suppose, on my local machine, I do $ git checkout a12b34c56 && git submodule update --init $ ./hadrian/build.sh test --only-perf Now you say that I'm going to create git notes for a12b34c56. But those are purely for my local machine! Maybe my compiler is built with -DDEBUG.
I don't want them to accidentally land in the main repo as the canonical perf figures for a12b34c56. How do I avoid accidentally pushing them? I should stress one caveat: we do not save metrics if you have uncommitted changes. Oh wow. Put that in MASSIVE BOLD CAPITALS. You mean that the entire exercise will (silently) be bogus if I have any uncommitted changes? That's a bit of a pain if I make a change, run some perf tests, make another change, run again. But I can live with it if I know. Simon From: David Eichmann Sent: 23 January 2020 14:48 To: Simon Peyton Jones Subject: Re: GHC perf Which wiki page? https://gitlab.haskell.org/ghc/ghc/wikis/building/running-tests/performance-tests Ah. Now I'm lost. Somehow the second and fourth line must be recording info, locally in my tree, but two distinct batches of information. Perhaps kept distinct by the current commit? Where is the info actually stored? All metric results are stored in git notes. This is a feature of git that lets you attach arbitrary text to a commit (without affecting the commit's hash). It's mentioned here. Whenever you run a performance test, the raw metrics will be appended to the git note for the current commit in a simple tab separated value (tsv) format. OK, suppose I start from commit XX, and make some local changes. Then I do the -only-perf thing. presumably that'll be recorded tagged with XX. That's fine; just want it to be clear. Worth adding this info to the wiki page, so we have a clear mental model. We store the metrics in git notes *per-commit*. All metrics for commit XX are stored on the git note for commit XX. You can even view the raw data with this command (where XX is the commit hash): $ git notes --ref perf show XX NOTE `--only-perf` is optional. It limits the test runner to only run performance tests but the performance metrics will be stored regardless of this option. 
So, if you've ever run performance tests locally, chances are the metrics will have been recorded without you even knowing. I should stress one caveat: we do not save metrics if you have uncommitted changes. -- David Eichmann, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 118 Wymering Mansions, Wymering Road, London W9 2NF, England -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Thu Jan 23 14:57:06 2020 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 23 Jan 2020 17:57:06 +0300 Subject: Bug in SRT generation for procs in .cmm files? In-Reply-To: References: Message-ID: The main problem I'm trying to solve is explained in my comment [1]. Basically when building .cmm files the new SRT algorithm re-orders definitions in a way that breaks dependency ordering, which in turn breaks the C backend, because in C we should declare before using. (see my comment for why we don't have this problem when building Haskell modules) If we don't allow defining CAFFY things in Cmm files then I can simply not do SRT analysis on Cmm files and avoid the problem. Ömer [1]: https://gitlab.haskell.org/ghc/ghc/merge_requests/1304#note_248547 Ben Gamari , 23 Oca 2020 Per, 15:17 tarihinde şunu yazdı: > > While it's true that in principle one could imagine a case where you would want a CAFfy Cmm proc, I can't think of any such cases in the RTS today. Consequently it wouldn't surprise me if this was broken. > > Frankly, I wouldn't worry too much about this if it's nontrivial to fix. > > Cheers, > > - Ben > > On January 23, 2020 1:54:04 AM EST, "Ömer Sinan Ağacan" wrote: >> >> Hi Simon, >> >> Currently CmmParse only generates CmmLabels for procs, and those are considered >> non-CAFFY by hasCAF (and thus CmmBuildInfoTables).
>> >> As a result if I have two procs in a .cmm file: >> >> - p1, refers to a CAF in base >> - p2, refers to p1 >> >> I *think* (haven't checked) we don't consider p1 as CAFFY, and even if we do, we >> don't consider p2 as CAFFY because the reference from p2 to p1 won't be >> considered CAFFY by hasCAF. >> >> So we currently can't define a CAFFY Cmm proc in .cmm files as the SRT algorithm >> will never build SRTs for procs in .cmm files. >> >> Is this intentional? I'd expect this to be possible, because there's nothing >> preventing me from referring to a CAFFY definition in a library (e.g. base) in a >> .cmm file, but doing this would be a bug at runtime. >> >> Thanks, >> >> Ömer >> ________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -- > Sent from my Android device with K-9 Mail. Please excuse my brevity. From sgraf1337 at gmail.com Thu Jan 23 16:00:12 2020 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Thu, 23 Jan 2020 17:00:12 +0100 Subject: Residency profiles In-Reply-To: References: Message-ID: This recently came up again. It seems that `+RTS -h -i0` will just turn every minor collection into a major one: https://gitlab.haskell.org/ghc/ghc/issues/17387#note_248705 `-i0` seems significantly different from `-i0.001`, say, in that it just turns minor GCs into major ones and doesn't introduce non-determinism otherwise. Sampling rate can be controlled with `-A`, much like `-F1` (but it's still faster for some reason). Am Mo., 10. Dez. 2018 um 09:11 Uhr schrieb Simon Marlow : > https://phabricator.haskell.org/D5428 > > > On Sun, 9 Dec 2018 at 10:12, Sebastian Graf wrote: > >> Ah, I was only looking at `+RTS --help`, not the users guide. Silly me. >> >> Am Do., 6. Dez. 2018 um 20:53 Uhr schrieb Simon Marlow < >> marlowsd at gmail.com>: >> >>> It is documented!
>>> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag--F%20%E2%9F%A8factor%E2%9F%A9 >>> >>> On Thu, 6 Dec 2018 at 16:21, Sebastian Graf wrote: >>> >>>> Hey, >>>> >>>> thanks, all! Measuring with `-A1M -F1` delivers much more reliable >>>> residency numbers. >>>> `-F` doesn't seem to be documented. From reading `rts/RtsFlags.c` and >>>> `rts/sm/GC.c` I gather that it's the factor by which to multiply the number >>>> of live bytes to get the new old gen size? >>>> So effectively, the old gen will 'overflow' on every minor GC, neat! >>>> >>>> Greetings >>>> Sebastian >>>> >>>> On Thu, 6 Dec 2018 at 12:52, Simon Peyton Jones via >>>> ghc-devs wrote: >>>> >>>>> | Right. A parameter for fixing the nursery size would be easy to >>>>> implement, >>>>> | I think. Just a new flag, then in GC.c:resize_nursery() use the >>>>> flag as the >>>>> | nursery size. >>>>> >>>>> Super! That would be v useful. >>>>> >>>>> | "Max. residency" is really hard to measure (need to do very >>>>> frequent GCs), >>>>> | perhaps a better question to ask is "residency when the program is >>>>> in state >>>>> | S". >>>>> >>>>> Actually, Sebastian simply wants to see an accurate, reproducible >>>>> residency profile, and doing frequent GCs might well be an acceptable >>>>> cost. >>>>> >>>>> Simon >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Jan 24 08:05:01 2020 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 24 Jan 2020 08:05:01 +0000 Subject: Bug in SRT generation for procs in .cmm files? In-Reply-To: References: Message-ID: Yes, I think my assumption was that we wouldn't be referring to any CAFs from .cmm source code so we didn't need to track the CAFyness of labels. 
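[An aside for readers following the CAF discussion in this thread — the sketch below is mine, not from the original messages, and the names are invented. A CAF (constant applicative form) is a top-level binding that takes no arguments; any code that can reach one is "CAFfy" and its info table needs an SRT entry so the garbage collector keeps the CAF alive:

```haskell
-- A CAF (constant applicative form) is a top-level binding with no
-- arguments, like bigCaf below. It is allocated as a static thunk and
-- updated in place the first time it is forced, so the garbage
-- collector must be able to find every piece of code that can reach it.
bigCaf :: Integer
bigCaf = product [1 .. 10000]

-- Any definition that (transitively) refers to a CAF is "CAFfy": its
-- info table needs an SRT (static reference table) entry pointing at
-- bigCaf, so that bigCaf's value stays alive while this code is
-- reachable.
usesCaf :: Integer -> Integer
usesCaf n = bigCaf + n

main :: IO ()
main = print (usesCaf 0)
```

The question in the thread is what happens when the code referring to a CAF is a hand-written Cmm proc rather than Haskell: hasCAF treats raw Cmm labels as non-CAFfy, so no SRT entries are generated for such procs.]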
It would be quite a pain to support this I think - in .cmm you can refer to anything by its raw label, so we would have to either declare whether something is CAFy or reverse engineer the original entity name and load the interface file etc. Furthermore we would need to tell the compiler about the CAFyness of RTS labels somehow so that they could be added to SRTs where necessary. I'm fine with not running the SRT analysis on .cmm code. Cheers Simon On Thu, 23 Jan 2020 at 14:57, Ömer Sinan Ağacan wrote: > The main problem I'm trying to solve is explained in my comment [1]. > Basically > when building .cmm files the new SRT algorithm re-orders definitions in a > way > that breaks dependency ordering, which in turn breaks the C backend, because > in C we > should declare before using. (see my comment for why we don't have this > problem > when building Haskell modules) > > If we don't allow defining CAFFY things in Cmm files then I can simply not > do > SRT analysis on Cmm files and avoid the problem. > > Ömer > > [1]: https://gitlab.haskell.org/ghc/ghc/merge_requests/1304#note_248547 > > Ben Gamari , on Thu, 23 Jan 2020 at 15:17 > wrote: > > > > While it's true that in principle one could imagine a case where you > would want a CAFfy Cmm proc, I can't think of any such cases in the RTS > today. Consequently it wouldn't surprise me if this was broken. > > > > Frankly, I wouldn't worry too much about this if it's nontrivial to fix. > > > > Cheers, > > > > - Ben > > > > On January 23, 2020 1:54:04 AM EST, "Ömer Sinan Ağacan" wrote: > >> > >> Hi Simon, > >> > >> Currently CmmParse only generates CmmLabels for procs, and those are > considered > >> non-CAFFY by hasCAF (and thus CmmBuildInfoTables). 
> >> > >> As a result if I have two procs in a .cmm file: > >> > >> - p1, refers to a CAF in base > >> - p2, refers to p1 > >> > >> I *think* (haven't checked) we don't consider p1 as CAFFY, and even if > we do, we > >> don't consider p2 as CAFFY because the reference from p2 to p1 won't be > >> considered CAFFY by hasCAF. > >> > >> So we currently can't define a CAFFY Cmm proc in .cmm files as the SRT > algorithm > >> will never build SRTs for procs in .cmm files. > >> > >> Is this intentional? I'd expect this to be possible, because there's > nothing > >> preventing me from referring to a CAFFY definition in a library (e.g. > base) in a > >> .cmm file, but doing this would be a bug at runtime. > >> > >> Thanks, > >> > >> Ömer > >> ________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > -- > > Sent from my Android device with K-9 Mail. Please excuse my brevity. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Jan 24 15:31:51 2020 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 24 Jan 2020 16:31:51 +0100 Subject: [ghc-steering-committee] The GHC Committee welcomes its new members Message-ID: Dear Haskell community, the GHC Steering committee welcomes three new members, Alejandro Serrano Mena, Cale Gibbard and Tom Harding. We are happy to see that there is continued interest in our work, and are looking forward to having fresh insights and energy on the committee. They take the seat of Sandy Maguire, whom we thank for his productive work during his tenure. As you might notice, we picked more candidates than we filled slots. This is not because Sandy's productivity needs three people to fill. The idea is that there is no reason to turn away a motivated contributor just to stick to the arbitrary 10 members. 
Instead, we will aim at “roughly 10”, and probably not call for new members until we have dropped below that number again. On behalf of the committee, Joachim Breitner PS: Alejandro, Cale and Tom, please send me your github handles and subscribe to the mailing list at https://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-steering-committee -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ From ben at well-typed.com Sat Jan 25 04:58:02 2020 From: ben at well-typed.com (Ben Gamari) Date: Fri, 24 Jan 2020 23:58:02 -0500 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-rc1 released Message-ID: <87muacxhd6.fsf@smart-cactus.org> Hello all, The GHC team is happy to announce the availability of the first release candidate of GHC 8.10.1. Source and binary distributions are available at the usual place: https://downloads.haskell.org/ghc/8.10.1-rc1/ GHC 8.10.1 will bring a number of new features including: * The new UnliftedNewtypes extension allowing newtypes around unlifted types. * The new StandaloneKindSignatures extension allows users to give top-level kind signatures to type, type family, and class declarations. * A new warning, -Wderiving-defaults, to draw attention to ambiguous deriving clauses * A number of improvements in code generation, including changes * A new GHCi command, :instances, for listing the class instances available for a type. * An upgraded Windows toolchain lifting the MAX_PATH limitation * A new, low-latency garbage collector. * Improved support for profiling, including support for sending profiler samples to the eventlog, allowing correlation between the profile and other program events This is the first and likely final release candidate. For a variety of reasons, it comes a few weeks later than the originally scheduled release in late December. However, besides a few core-libraries book-keeping issues, this candidate is believed to be in good condition for the final release. 
As such, the final 8.10.1 release will likely come in two weeks. Note that at the moment we still require that macOS Catalina users exempt the binary distribution from the notarization requirement by running `xattr -cr .` on the unpacked tree before running `make install`. In addition, we are still looking for Alpine Linux users to help diagnose the correctness issues in the Alpine binary distribution [1]. If you use Alpine, any help you can offer here would be greatly appreciated. Please do test this release and let us know if you encounter any other issues. Cheers, - Ben [1] https://gitlab.haskell.org/ghc/ghc/issues/17508 [2] https://gitlab.haskell.org/ghc/ghc/issues/17418 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Jan 25 06:33:54 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Jan 2020 01:33:54 -0500 Subject: Anyone able to build current master on OS X with GCC? Message-ID: I'm finding myself only able to successfully run a full validate build if I set the build configuration to use clang rather than gcc. Otherwise I get a cpp error that makes it look like some #defines are missing in the Gmp binding codes. Anyone able to do a gcc-flavored Mac build that survives validate recently? -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Jan 25 12:39:05 2020 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 25 Jan 2020 07:39:05 -0500 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-rc1 released In-Reply-To: <87muacxhd6.fsf@smart-cactus.org> References: <87muacxhd6.fsf@smart-cactus.org> Message-ID: On Fri, Jan 24, 2020 at 11:58 PM Ben Gamari wrote: > * A number of improvements in code generation, including changes > This seems like it's missing some detail. 
-- brandon s allbery kf8nh allbery.b at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Jan 25 14:15:51 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Jan 2020 09:15:51 -0500 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-rc1 released In-Reply-To: References: <87muacxhd6.fsf@smart-cactus.org> Message-ID: Yeah, like how we removed x87 support from code gen except one tiny piece in the abi, so 32bit x86 code gen is always -msse2 flavor so rounding now acts sane :) (A tiny step in a long running make floating point great effort of mine) On Sat, Jan 25, 2020 at 7:39 AM Brandon Allbery wrote: > > > On Fri, Jan 24, 2020 at 11:58 PM Ben Gamari wrote: > >> * A number of improvements in code generation, including changes >> > > This seems like it's missing some detail. > > -- > brandon s allbery kf8nh > allbery.b at gmail.com > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sat Jan 25 20:26:29 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 25 Jan 2020 20:26:29 +0000 Subject: stage2 build fails Message-ID: I'm getting this with "sh validate -legacy" compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, -Werror=overlapping-patterns] Pattern match is redundant In an equation for 'settings': settings s | otherwise = ... | 1344 | | otherwise = panic $ "Invalid cfg parameters." ++ exampleString | ^^^^^^^^^ This is when compiling the stage-2 compiler. There's an ifdef in DynFlags thus #if __GLASGOW_HASKELL__ <= 810 | otherwise = panic $ "Invalid cfg parameters." ++ exampleString #endif but somehow it's not triggering for the stage2 compiler. Any ideas? It's blocking a full build. 
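[For readers unfamiliar with the guard above, a brief aside — my own sketch, not part of Simon's message: GHC defines the CPP macro `__GLASGOW_HASKELL__` to the version of the compiler doing the compiling, encoded as major * 100 + minor, so GHC 8.6.x defines 806 and GHC 8.10.x defines 810. A minimal self-contained illustration of such a version guard:

```haskell
{-# LANGUAGE CPP #-}
module Main where

-- __GLASGOW_HASKELL__ encodes the version of the compiler building
-- this module as major * 100 + minor: 806 for GHC 8.6.x, 810 for
-- GHC 8.10.x, 811 for a GHC 8.11 development snapshot, and so on.
main :: IO ()
#if __GLASGOW_HASKELL__ <= 810
-- This equation exists only when built by GHC 8.10 or older.
main = putStrLn ("built by GHC <= 8.10: " ++ show (__GLASGOW_HASKELL__ :: Int))
#else
-- Anything newer, e.g. a freshly built stage-1 compiler, sees this one.
main = putStrLn ("built by GHC > 8.10: " ++ show (__GLASGOW_HASKELL__ :: Int))
#endif
```

So a guard like the one in DynFlags is expected to drop its guarded alternative as soon as the compiler building that module is newer than 8.10.]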
This #ifdef was added in 8038cbd96f4, when GHC became better at reporting redundant code. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Sun Jan 26 14:58:31 2020 From: george.colpitts at gmail.com (George Colpitts) Date: Sun, 26 Jan 2020 10:58:31 -0400 Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-rc1 released In-Reply-To: <87muacxhd6.fsf@smart-cactus.org> References: <87muacxhd6.fsf@smart-cactus.org> Message-ID: Thanks Ben, installed fine on my Mac running 10.14.6. For the release notes I suggest we document https://gitlab.haskell.org/ghc/ghc/issues/17341 which is associated with https://github.com/haskell/cabal/issues/6262 and https://github.com/haskell/cabal/issues/6104 I realize this will be controversial and may be rejected but I think the current status is very confusing to new / inexperienced users so I felt I should suggest it. As I mentioned in 17341, can we add a milestone and priority to this feature request? Thanks again for everybody's work on GHC Cheers George On Sat, Jan 25, 2020 at 12:58 AM Ben Gamari wrote: > > Hello all, > > The GHC team is happy to announce the availability of the first release > candidate of GHC 8.10.1. Source and binary distributions are > available at the usual place: > > https://downloads.haskell.org/ghc/8.10.1-rc1/ > > GHC 8.10.1 will bring a number of new features including: > > * The new UnliftedNewtypes extension allowing newtypes around unlifted > types. > > * The new StandaloneKindSignatures extension allows users to give > top-level kind signatures to type, type family, and class > declarations. > > * A new warning, -Wderiving-defaults, to draw attention to ambiguous > deriving clauses > > * A number of improvements in code generation, including changes > > * A new GHCi command, :instances, for listing the class instances > available for a type. 
> > * An upgraded Windows toolchain lifting the MAX_PATH limitation > > * A new, low-latency garbage collector. > > * Improved support for profiling, including support for sending profiler > samples to the eventlog, allowing correlation between the profile and > other program events > > This is the first and likely final release candidate. For a variety of > reasons, it comes a few weeks later than the originally scheduled > release in late December. However, besides a few core-libraries > book-keeping issues, this candidate is believed to be in good condition > for the final release. As such, the final 8.10.1 release will likely > come in two weeks. > > Note that at the moment we still require that macOS Catalina users > exempt the binary distribution from the notarization requirement by > running `xattr -cr .` on the unpacked tree before running `make install`. > > In addition, we are still looking for Alpine Linux users to help diagnose > the correctness issues in the Alpine binary distribution [1]. If you use > Alpine, any help you can offer here would be greatly appreciated. > > Please do test this release and let us know if you encounter any other > issues. > > Cheers, > > - Ben > > > [1] https://gitlab.haskell.org/ghc/ghc/issues/17508 > [2] https://gitlab.haskell.org/ghc/ghc/issues/17418 > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at richarde.dev Mon Jan 27 09:38:17 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 27 Jan 2020 09:38:17 +0000 Subject: more submodule questions Message-ID: Hi devs, I recently found this text at the end of https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules : --- The CI pipeline of ghc/ghc> includes a linting step to ensure that all submodules refer only to "persistent" commits of the upstream repositories (e.g. not wip/ branches, which may disappear in the future). Specifically, the linter checks that any submodules refer to commits that are reachable by at least one branch that doesn't begin with the prefix wip/. Consequently, you must ensure that any submodule changes introduced in a ghc/ghc> merge request are merged upstream before the merge request is added to the merge queue. --- I don't understand what this means. - By citing "ghc/ghc>", does this mean that the linter only checks for this on branches of the ghc/ghc repo? If I have a fork (e.g. rae/ghc), are these checks disabled? - Does this linter stop CI from progressing to, say, running the testsuite? If so, then how can we run the testsuite via CI if we have any submodule changes? We want to run the testsuite while the work is still in progress. - By "you must ensure ... before the merge request is added to the merge queue": this makes me wonder whether the linter is just a warning or an error. That is, if I must ensure it, then it suggests that CI is not ensuring it. Sorry to be dense here! Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jan 27 09:54:07 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 27 Jan 2020 09:54:07 +0000 Subject: gaining access to the `ghc` group Message-ID: Hi devs, I'm onboarding a new contributor (Gert-Jan Bottu), whose patch (!2465) makes commensurate changes in Haddock. 
In order to use CI, then, he needs to be able to push a wip/ branch to our fork of Haddock. In order to do that, he needs to be in the `ghc` group. (I'm assuming -- but have not checked -- that just forking Haddock on the gitlab.haskell.org instance is not enough. The wiki page on submodules (https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules ) suggests it is not.) That same wiki page says that one need merely ask to join the `ghc` group. Good. So I follow the "ask" link to get to https://gitlab.haskell.org/ghc/ghc/wikis/mailing-lists-and-irc#mailing-lists-and-irc . That page helpfully describes where we can be reached, but it's not all that helpful for someone who wants to join the `ghc` group. According to that page, newcomers have to post publicly in either ghc-devs (I have never seen such a request there) or in #ghc. If I'm new in town, I wouldn't feel all that happy doing so, likely wanting to wait until I actually had a patch accepted before requesting rights.... but of course I need access to CI in order to get a patch accepted. Instead, would it be possible to have some sort of ghc-admin list, perhaps? While (I think) I know the individuals to contact for a request like this, an official mailing list would make this more transparent and, in my opinion, easier to onboard new contributors. I understand if folks don't want yet another mailing list, but then is there some other approach that doesn't require posting in public? Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 27 11:42:20 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 27 Jan 2020 11:42:20 +0000 Subject: stage2 build fails In-Reply-To: References: Message-ID: It would be good to know how to fix this. It's blocking my builds. 
For some reason it doesn't seem to kill CI Simon From: Simon Peyton Jones Sent: 25 January 2020 20:26 To: ghc-devs Subject: stage2 build fails I'm getting this with "sh validate -legacy" compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, -Werror=overlapping-patterns] Pattern match is redundant In an equation for 'settings': settings s | otherwise = ... | 1344 | | otherwise = panic $ "Invalid cfg parameters." ++ exampleString | ^^^^^^^^^ This is when compiling the stage-2 compiler. There's an ifdef in DynFlags thus #if __GLASGOW_HASKELL__ <= 810 | otherwise = panic $ "Invalid cfg parameters." ++ exampleString #endif but somehow it's not triggering for the stage2 compiler. Any ideas? It's blocking a full build. This #ifdef was added in 8038cbd96f4, when GHC became better at reporting redundant code. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain at haskus.fr Mon Jan 27 13:13:43 2020 From: sylvain at haskus.fr (Sylvain Henry) Date: Mon, 27 Jan 2020 14:13:43 +0100 Subject: stage2 build fails In-Reply-To: References: Message-ID: Which stage 0 compiler are you using? It seems to be <= 8.10 and still has 8038cbd96f4 merged, which seems contradictory. Anyway, the alternative seems to be redundant from the beginning and should have been removed IMO. I have opened https://gitlab.haskell.org/ghc/ghc/merge_requests/2564 to fix this. Does it work after applying this patch? Sylvain On 27/01/2020 12:42, Simon Peyton Jones via ghc-devs wrote: > > It would be good to know how to fix this.  It's blocking my builds. 
> > For some reason it doesn't seem to kill CI > > Simon > > *From:*Simon Peyton Jones > *Sent:* 25 January 2020 20:26 > *To:* ghc-devs > *Subject:* stage2 build fails > > I'm getting this with "sh validate -legacy" > > compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, > -Werror=overlapping-patterns] > >     Pattern match is redundant > >     In an equation for 'settings': settings s | otherwise = ... > >      | > > 1344 |             | otherwise = panic $ "Invalid cfg parameters." ++ > exampleString > >      |               ^^^^^^^^^ > > This is when compiling the stage-2 compiler.  There's an ifdef in > DynFlags thus > > #if __GLASGOW_HASKELL__ <= 810 > >             | otherwise = panic $ "Invalid cfg parameters." ++ > exampleString > > #endif > > but somehow it's not triggering for the stage2 compiler. > > Any ideas?  It's blocking a full build. > > This #ifdef was added in 8038cbd96f4, when GHC became better at > reporting redundant code. > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Mon Jan 27 14:50:42 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 27 Jan 2020 09:50:42 -0500 Subject: stage2 build fails In-Reply-To: References: Message-ID: <87ftg1x8ao.fsf@smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I'm getting this with "sh validate -legacy" > > compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, -Werror=overlapping-patterns] > > Pattern match is redundant > > In an equation for 'settings': settings s | otherwise = ... > > | > > 1344 | | otherwise = panic $ "Invalid cfg parameters." ++ exampleString > > | ^^^^^^^^^ > This is when compiling the stage-2 compiler. 
There's an ifdef in DynFlags thus > > #if __GLASGOW_HASKELL__ <= 810 > > | otherwise = panic $ "Invalid cfg parameters." ++ exampleString > > #endif > but somehow it's not triggering for the stage2 compiler. > Any ideas? It's blocking a full build. > This #ifdef was added in 8038cbd96f4, when GHC became better at reporting redundant code. > Simon Indeed it would be nice to know which compiler you are using to bootstrap. I suspect Sylvain is correct that the alternative can be removed but first I would like to understand why this is arising only now. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Mon Jan 27 14:55:06 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 27 Jan 2020 14:55:06 +0000 Subject: stage2 build fails In-Reply-To: <87ftg1x8ao.fsf@smart-cactus.org> References: <87ftg1x8ao.fsf@smart-cactus.org> Message-ID: | Indeed it would be nice to know which compiler you are using to | bootstrap. I suspect Sylvain is correct that the alternative can be | removed but first I would like to understand why this is arising only | now. I'm using 8.6.4 as my bootstrap compiler. But this message occurs only when compiling the *stage2* compiler with the *stage1* compiler. At that moment the bootstrap compiler is irrelevant. It's as if the build system doesn't know that the stage1 compiler is >= 8.10 Simon | -----Original Message----- | From: Ben Gamari | Sent: 27 January 2020 14:51 | To: Simon Peyton Jones ; ghc-devs | Subject: Re: stage2 build fails | | Simon Peyton Jones via ghc-devs writes: | | > I'm getting this with "sh validate -legacy" | > | > compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, - | Werror=overlapping-patterns] | > | > Pattern match is redundant | > | > In an equation for 'settings': settings s | otherwise = ... 
| > | > | | > | > 1344 | | otherwise = panic $ "Invalid cfg parameters." ++ | exampleString | > | > | ^^^^^^^^^ | > This is when compiling the stage-2 compiler. There's an ifdef in | DynFlags thus | > | > #if __GLASGOW_HASKELL__ <= 810 | > | > | otherwise = panic $ "Invalid cfg parameters." ++ | exampleString | > | > #endif | > but somehow it's not triggering for the stage2 compiler. | > Any ideas? It's blocking a full build. | > This #ifdef was added in 8038cbd96f4, when GHC became better at | reporting redundant code. | > Simon | | Indeed it would be nice to know which compiler you are using to | bootstrap. I suspect Sylvain is correct that the alternative can be | removed but first I would like to understand why this is arising only | now. | | Cheers, | | - Ben From ben at smart-cactus.org Mon Jan 27 18:58:36 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 27 Jan 2020 13:58:36 -0500 Subject: stage2 build fails In-Reply-To: References: <87ftg1x8ao.fsf@smart-cactus.org> Message-ID: <87a768ybdx.fsf@smart-cactus.org> Simon Peyton Jones writes: > | Indeed it would be nice to know which compiler you are using to > | bootstrap. I suspect Sylvain is correct that the alternative can be > | removed but first I would like to understand why this is arising only > | now. > > I'm using 8.6.4 as my bootstrap compiler. But this message occurs only when compiling the *stage2* compiler with the *stage1* compiler. At that moment the bootstrap compiler is irrelevant. > > It's as if the build system doesn't know that the stage1 compiler is >= 8.10 > Hmm, interesting. Has this tree been cleaned in the last month or two? I suspect that this may have something to do with the changes to the GHC_STAGE macro that were merged a while back (perhaps a stale header file?). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jan 27 19:46:28 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 27 Jan 2020 14:46:28 -0500 Subject: gaining access to the `ghc` group In-Reply-To: References: Message-ID: <875zgwy969.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > I'm onboarding a new contributor (Gert-Jan Bottu), whose patch (!2465) > makes commensurate changes in Haddock. In order to use CI, then, he > needs to be able to push a wip/ branch to our fork of Haddock. In > order to do that, he needs to be in the `ghc` group. (I'm assuming -- > but have not checked -- that just forking Haddock on the > gitlab.haskell.org instance is not > enough. The wiki page on submodules > (https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules > suggests it is not.) That same wiki page says that one need merely ask > to join the `ghc` group. Good. So I follow the "ask" link to get to > https://gitlab.haskell.org/ghc/ghc/wikis/mailing-lists-and-irc#mailing-lists-and-irc > > . That page helpfully describes where we can be reached, but it's not > all that helpful for someone who wants to join the `ghc` group. > According to that page, newcomers have to post publicly in either > ghc-devs (I have never seen such a request there) or in #ghc. If I'm > new in town, I wouldn't feel all that happy doing so, likely wanting > to wait until I actually had a patch accepted before requesting > rights.... but of course I need access to CI in order to get a patch > accepted. > > Instead, would it be possible to have some sort of ghc-admin list, > perhaps? While (I think) I know the individuals to contact for a > request like this, an official mailing list would make this more > transparent and, in my opinion, easier to onboard new contributors. 
I > understand if folks don't want yet another mailing list, but then is > there some other approach that doesn't require posting in public? > Hi Richard, Regarding requesting access to the `ghc` group, GitLab already provides a mechanism for this: there is a "Request Access" link right below the group title here [1]. This is admittedly a recent addition: the feature was disabled in the group configuration and I hadn't noticed (the link turns into a "Leave group" link if you are already a group member, so its absence was easy to miss). I have updated the instructions on the submodules wiki page to reflect this new feature. Does this look better? Cheers, - Ben [1] https://gitlab.haskell.org/ghc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Mon Jan 27 20:19:44 2020 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 27 Jan 2020 15:19:44 -0500 Subject: more submodule questions In-Reply-To: References: Message-ID: <8736c0y7ms.fsf@smart-cactus.org> Richard Eisenberg writes: > Hi devs, > > I recently found this text at the end of https://gitlab.haskell.org/ghc/ghc/wikis/working-conventions/git/submodules : > > --- > The CI pipeline of ghc/ghc> includes a linting step to ensure that all submodules refer only to "persistent" commits of the upstream repositories (e.g. not wip/ branches, which may disappear in the future). Specifically, the linter checks that any submodules refer to commits that are reachable by at least one branch that doesn't begin with the prefix wip/. Consequently, you must ensure that any submodule changes introduced in a ghc/ghc> merge request are merged upstream before the merge request is added to the merge queue. > --- > > I don't understand what this means. > I have amended the text, hopefully clearing things up. 
To summarize: > - By citing "ghc/ghc>", does this mean that the linter only checks for > this on branches of the ghc/ghc repo? If I have a fork (e.g. rae/ghc), > are these checks disabled? > Forks run the same CI configuration as ghc/ghc and are subject to the same linter. > - Does this linter stop CI from progressing to, say, running the > testsuite? If so, then how can we run the testsuite via CI if we have > any submodule changes? We want to run the testsuite while the work is > still in progress. > > - By "you must ensure ... before the merge request is added to the > merge queue": this makes me wonder whether the linter is just a > warning or an error. That is, if I must ensure it, then it suggests > that CI is not ensuring it. > The linter does not hold up builds for merge requests but will hold up a "pre-merge" validation job (e.g. a validation of an MR created by @marge-bot). This ensures that a patch containing a wip/ submodule reference will not be merged to master. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From rae at richarde.dev Mon Jan 27 22:12:47 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 27 Jan 2020 22:12:47 +0000 Subject: more submodule questions In-Reply-To: <8736c0y7ms.fsf@smart-cactus.org> References: <8736c0y7ms.fsf@smart-cactus.org> Message-ID: <3A7FD7F2-27AC-49E4-AD56-959A3C5E006A@richarde.dev> > On Jan 27, 2020, at 8:19 PM, Ben Gamari wrote: > > The linter does not hold up builds for merge requests but will hold up a > "pre-merge" validation job (e.g. a validation of an MR created by > @marge-bot). This ensures that a patch containing a wip/ submodule > reference will not be merged to master. Very interesting! Are there other such checks? I always assume that if an MR passes CI, then it is suitable for merging. 
Of course, what you describe makes perfect sense here -- we don't require upstream to have our commits during CI, but we do during merging. I'm just wondering if there are other such scenarios that are checked for. Thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at richarde.dev Mon Jan 27 22:13:59 2020 From: rae at richarde.dev (Richard Eisenberg) Date: Mon, 27 Jan 2020 22:13:59 +0000 Subject: gaining access to the `ghc` group In-Reply-To: <875zgwy969.fsf@smart-cactus.org> References: <875zgwy969.fsf@smart-cactus.org> Message-ID: > > I have updated the instructions on the submodules wiki page to reflect > this new feature. Does this look better? Perfect. Thank you! From carter.schonwald at gmail.com Tue Jan 28 03:37:20 2020 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 27 Jan 2020 22:37:20 -0500 Subject: Intree gmp builds using Hadrian use system gmp headers Message-ID: Hey all, For those using Hadrian for platforms where intree gmp config is important, some sleuthing I've done recently seems to indicate that currently it's not correct via Hadrian, but only via make. https://gitlab.haskell.org/ghc/ghc/issues/17756 I'm able to reproduce this on multiple OS X machines (though it took me a while to work out that it is Hadrian-specific.) Punchline: at the moment, Hadrian intree gmp builds succeed because gmp headers in the default include search paths of gcc/clang get picked up. In a properly isolated environment / one without the offending header leakage, I'm able at least on OS X to get Hadrian with intree gmp to consistently fail. -------------- next part -------------- An HTML attachment was scrubbed... 
From carter.schonwald at gmail.com  Tue Jan 28 03:46:51 2020
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Mon, 27 Jan 2020 22:46:51 -0500
Subject: Intree gmp builds using Hadrian use system gmp headers
In-Reply-To: 
References: 
Message-ID: 

If folks can reproduce this issue on platforms other than OS X, I'd
greatly appreciate it. If only to confirm that it's as reproducible as I
believe it is.

I should mention that at least on OS X, it's slightly easier to trigger
the issue / stumble across it with gcc rather than clang (on OS X, clang
was picking up the installed system headers but gcc was failing. Without
installed headers both fail for me.)

On Mon, Jan 27, 2020 at 10:37 PM Carter Schonwald <
carter.schonwald at gmail.com> wrote:

> Hey all,
> For those using Hadrian for platforms where intree gmp config is
> important, some sleuthing I've done over the past half seems to indicate
> that currently it's not correct via Hadrian, but only via make.
>
> https://gitlab.haskell.org/ghc/ghc/issues/17756
> I'm able to reproduce this on multiple OS X machines (though it took me a
> while to work out that it is Hadrian specific.)
>
> Punchline: at the moment, Hadrian intree gmp builds succeed because gmp
> headers in default include search paths of gcc/clang get picked up. In a
> properly isolated environment / one without the offending header leakage,
> I'm able at least on OS X to get Hadrian with intree gmp to consistently
> fail.

From lexi.lambda at gmail.com  Tue Jan 28 08:19:40 2020
From: lexi.lambda at gmail.com (Alexis King)
Date: Tue, 28 Jan 2020 02:19:40 -0600
Subject: Feasibility of native RTS support for continuations?
Message-ID: 

Hi all,

tl;dr: I want to try to implement native support for capturing slices of
the RTS stack as a personal experiment; please tell me the obstacles I
am likely to run into. Much more context follows.
--- I have been working on an implementation of an algebraic effect system that uses unsafe primops to be as performant as possible. However, the unavoidable need to CPS the entire program balloons heap allocation. Consider the following definition: f a b = g a >>= \c -> h (b + c) Assume `g` and `h` are not inlined. If the monad used is IO, this will be compiled efficiently: the result of `g` is returned on the stack, and no closure needs to be allocated for the lambda. However, if the monad supports capturing the continuation, the above definition must be CPS’d. After inlining, we end up with f a b = \k -> let lvl = \c -> h (b + c) k in g a lvl which must allocate a closure on the heap. This is frustrating, as it must happen for every call to a non-inlined monadic operation, even if that operation never captures the continuation. In an algebraic effect system, there are many shortcuts that avoid the need to capture the continuation, and under my implementation, large swaths of code never do so. I’ve managed to exploit that to get some savings, but I can’t escape the need to allocate all these closures. This motivates my question: how difficult would it be to allow capturing a portion of the RTS call stack directly? My requirements are fairly minimal, as continuations go: 1. Capturing a continuation is only legal from within a strict state thread (i.e. IO or strict ST). 2. The continuation is captured up to a prompt, which would be a new kind of RTS stack frame. Prompts are not tagged, so there is only ever exactly one prompt active at any time (which may be the root prompt). 3. Capturing a continuation is unsafe. The behavior of capturing a continuation is undefined if the current prompt was not created by the current state thread (and it is never legal to capture up to the root prompt). 4. Applying a continuation is unsafe. Captured continuations return `Any`, and type safety is the caller’s obligation. 5. 
Continuations are “functional,” which is to say applying them does not trigger any additional stack unwinding. This minimal support for primitive continuation capturing would be enough to support my efficient, safe delimited control implementation. In my ignorant mind, implementing this ought to be as simple as defining two new primops, reset# :: (State# s -> (# State# s, a #)) -> State# s -> (# State# s, a #) shift# :: ((a -> State# s -> (# State# s, Any #)) -> State# s -> (# State# s, Any #)) -> State# s -> (# State# s, a #) where reset# pushes a new prompt frame and shift# captures a slice of the RTS stack up to that frame and copies it into the heap. Restoring a continuation would copy all the captured frames onto the end of the current stack. Sounds simple enough! I would like to experiment with implementing something like this myself, just to see if it would really work, but somehow I doubt it is actually as simple as it sounds. Minor complications are fine, but what worries me are major obstacles I haven’t found in my limited attempts to learn about the RTS. So far, I’ve read the old “The New GHC/Hugs Runtime System” paper, which still seems mostly accurate from a high level, though I imagine many details have changed since then. I’ve also read the Commentary/RTS/Storage/Stack wiki page, and I’ve peeked at some of the RTS source code. I’ve also skimmed a handful of other papers and wiki pages, but it’s hard to know what I’m looking for. Can anyone point me to better resources or give me some insight into what will be hard? Thanks in advance, Alexis From simonpj at microsoft.com Tue Jan 28 10:09:20 2020 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 28 Jan 2020 10:09:20 +0000 Subject: Feasibility of native RTS support for continuations? In-Reply-To: References: Message-ID: Alexis I've thought about this quite a bit in the past, but got stalled for lack of cycles to think about it more. 
But there's a paper or two: https://www.microsoft.com/en-us/research/publication/composable-scheduler-activations-haskell/ On that link you can also see a link to an earlier, shorter, conference version (rejected 😊). Also this earlier (2007) work https://www.microsoft.com/en-us/research/publication/lightweight-concurrency-primitives-for-ghc/ On the effects front I think Daan Leijen is doing interesting stuff, although I'm not very up to date: https://www.microsoft.com/en-us/research/people/daan/publications/ One interesting dimension is whether or not the continuations you capture are one-shot. If so, particularly efficient implementations are possible. Also: much of the "capture stack chunk" stuff is *already* implemented, because it is (I think) what happens when a thread receives an asynchronous exception, and just abandon its evaluation of thunks that it has started work on. Simon | -----Original Message----- | From: ghc-devs On Behalf Of Alexis King | Sent: 28 January 2020 08:20 | To: ghc-devs | Subject: Feasibility of native RTS support for continuations? | | Hi all, | | tl;dr: I want to try to implement native support for capturing slices of | the RTS stack as a personal experiment; please tell me the obstacles I | am likely to run into. Much more context follows. | | --- | | I have been working on an implementation of an algebraic effect system | that uses unsafe primops to be as performant as possible. However, the | unavoidable need to CPS the entire program balloons heap allocation. | Consider the following definition: | | f a b = g a >>= \c -> h (b + c) | | Assume `g` and `h` are not inlined. If the monad used is IO, this will | be compiled efficiently: the result of `g` is returned on the stack, and | no closure needs to be allocated for the lambda. However, if the monad | supports capturing the continuation, the above definition must be CPS’d. 
| After inlining, we end up with | | f a b = \k -> let lvl = \c -> h (b + c) k in g a lvl | | which must allocate a closure on the heap. This is frustrating, as it | must happen for every call to a non-inlined monadic operation, even if | that operation never captures the continuation. In an algebraic effect | system, there are many shortcuts that avoid the need to capture the | continuation, and under my implementation, large swaths of code never do | so. I’ve managed to exploit that to get some savings, but I can’t escape | the need to allocate all these closures. | | This motivates my question: how difficult would it be to allow capturing | a portion of the RTS call stack directly? My requirements are fairly | minimal, as continuations go: | | 1. Capturing a continuation is only legal from within a strict state | thread (i.e. IO or strict ST). | | 2. The continuation is captured up to a prompt, which would be a new | kind of RTS stack frame. Prompts are not tagged, so there is only | ever exactly one prompt active at any time (which may be the root | prompt). | | 3. Capturing a continuation is unsafe. The behavior of capturing a | continuation is undefined if the current prompt was not created by | the current state thread (and it is never legal to capture up to | the root prompt). | | 4. Applying a continuation is unsafe. Captured continuations return | `Any`, and type safety is the caller’s obligation. | | 5. Continuations are “functional,” which is to say applying them does | not trigger any additional stack unwinding. | | This minimal support for primitive continuation capturing would be | enough to support my efficient, safe delimited control implementation. 
| In my ignorant mind, implementing this ought to be as simple as defining | two new primops, | | reset# :: (State# s -> (# State# s, a #)) | -> State# s -> (# State# s, a #) | | shift# :: ((a -> State# s -> (# State# s, Any #)) | -> State# s -> (# State# s, Any #)) | -> State# s -> (# State# s, a #) | | where reset# pushes a new prompt frame and shift# captures a slice of | the RTS stack up to that frame and copies it into the heap. Restoring a | continuation would copy all the captured frames onto the end of the | current stack. Sounds simple enough! | | I would like to experiment with implementing something like this myself, | just to see if it would really work, but somehow I doubt it is actually | as simple as it sounds. Minor complications are fine, but what worries | me are major obstacles I haven’t found in my limited attempts to learn | about the RTS. | | So far, I’ve read the old “The New GHC/Hugs Runtime System” paper, which | still seems mostly accurate from a high level, though I imagine many | details have changed since then. I’ve also read the | Commentary/RTS/Storage/Stack wiki page, and I’ve peeked at some of the | RTS source code. I’ve also skimmed a handful of other papers and wiki | pages, but it’s hard to know what I’m looking for. Can anyone point me | to better resources or give me some insight into what will be hard? 
|
| Thanks in advance,
| Alexis
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From simonpj at microsoft.com  Tue Jan 28 10:41:33 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 28 Jan 2020 10:41:33 +0000
Subject: stage2 build fails
In-Reply-To: <87a768ybdx.fsf@smart-cactus.org>
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org>
Message-ID: 

I did "sh validate --legacy" which uses "make maintainer-clean".

I'll try again to double check. Is there any particular file I should remove?

Simon

| -----Original Message-----
| From: Ben Gamari 
| Sent: 27 January 2020 18:59
| To: Simon Peyton Jones ; ghc-devs 
| Subject: RE: stage2 build fails
|
| Simon Peyton Jones writes:
|
| > | Indeed it would be nice to know which compiler you are using to
| > | bootstrap. I suspect Sylvain is correct that the alternative can be
| > | removed but first I would like to understand why this is arising only
| > | now.
| >
| > I'm using 8.6.4 as my bootstrap compiler. But this message occurs only
| when compiling the *stage2* compiler with the *stage1* compiler. At that
| moment the bootstrap compiler is irrelevant.
| >
| > It's as if the build system doesn't know that the stage1 compiler is >=
| 8.10
| >
| Hmm, interesting. Has this tree been cleaned in the last month or two?
| I suspect that this may have something to do with the changes to the
| GHC_STAGE macro that were merged a while back (perhaps a stale header
| file?).
| | Cheers, | | - Ben From ml at stefansf.de Tue Jan 28 11:32:16 2020 From: ml at stefansf.de (Stefan Schulze Frielinghaus) Date: Tue, 28 Jan 2020 12:32:16 +0100 Subject: stage2 build fails In-Reply-To: References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> Message-ID: <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com> I ran into a similar(?) problem a while ago. "make maintainer-clean" does not necessarily remove every generated file. See for example https://mail.haskell.org/pipermail/ghc-devs/2019-August/018049.html Since then I only use git clean -dfxq && git submodule foreach git clean -dfxq which worked reliably for me (beware that this removes any untracked file). Maybe this helps? Cheers, Stefan On Tue, Jan 28, 2020 at 10:41:33AM +0000, Simon Peyton Jones via ghc-devs wrote: > I did "sh validate --legacy" which uses "make maintainer-clean". > > I'll try again to double check. Is there any particular file I should remove? > > Simon > > | -----Original Message----- > | From: Ben Gamari > | Sent: 27 January 2020 18:59 > | To: Simon Peyton Jones ; ghc-devs | devs at haskell.org> > | Subject: RE: stage2 build fails > | > | Simon Peyton Jones writes: > | > | > | Indeed it would be nice to know which compiler you are using to > | > | bootstrap. I suspect Sylvain is correct that the alternative can be > | > | removed but first I would like to understand why this is arising only > | > | now. > | > > | > I'm using 8.6.4 as my bootstrap compiler. But this message occurs only > | when compiling the *stage2* compiler with the *stage1* compiler. At that > | moment the bootstrap compiler is irrelevant. > | > > | > It's as if the build system doesn't know that the stage1 compiler is >= > | 8.10 > | > > | Hmm, interesting. Has this tree been cleaned in the last month or two? > | I suspect that this may have something to do with the changes to the > | GHC_STAGE macro that were merged a while back (perhaps a stale header > | file?). 
> | > | Cheers,
> | > | - Ben

From simonpj at microsoft.com  Tue Jan 28 11:44:56 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 28 Jan 2020 11:44:56 +0000
Subject: stage2 build fails
In-Reply-To: <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com>
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com>
Message-ID: 

Thanks -- I'll try that.

Simon

| -----Original Message-----
| From: Stefan Schulze Frielinghaus 
| Sent: 28 January 2020 11:32
| To: Simon Peyton Jones 
| Cc: Ben Gamari ; ghc-devs 
| Subject: Re: stage2 build fails
|
| I ran into a similar(?) problem a while ago. "make maintainer-clean"
| does not necessarily remove every generated file. See for example
|
| https://mail.haskell.org/pipermail/ghc-devs/2019-August/018049.html
|
| Since then I only use
|
|   git clean -dfxq && git submodule foreach git clean -dfxq
|
| which worked reliably for me (beware that this removes any untracked
| file). Maybe this helps?
|
| Cheers,
| Stefan
|
| On Tue, Jan 28, 2020 at 10:41:33AM +0000, Simon Peyton Jones via ghc-devs
| wrote:
| > I did "sh validate --legacy" which uses "make maintainer-clean".
| >
| > I'll try again to double check. Is there any particular file I should
| remove?
| > | > Simon | > | > | -----Original Message----- | > | From: Ben Gamari | > | Sent: 27 January 2020 18:59 | > | To: Simon Peyton Jones ; ghc-devs | devs at haskell.org> | > | Subject: RE: stage2 build fails | > | | > | Simon Peyton Jones writes: | > | | > | > | Indeed it would be nice to know which compiler you are using to | > | > | bootstrap. I suspect Sylvain is correct that the alternative can | be | > | > | removed but first I would like to understand why this is arising | only | > | > | now. | > | > | > | > I'm using 8.6.4 as my bootstrap compiler. But this message occurs | only | > | when compiling the *stage2* compiler with the *stage1* compiler. At | that | > | moment the bootstrap compiler is irrelevant. | > | > | > | > It's as if the build system doesn't know that the stage1 compiler | is >= | > | 8.10 | > | > | > | Hmm, interesting. Has this tree been cleaned in the last month or | two? | > | I suspect that this may have something to do with the changes to the | > | GHC_STAGE macro that were merged a while back (perhaps a stale header | > | file?). 
| > |
| > | Cheers,
| > |
| > | - Ben

From simonpj at microsoft.com  Tue Jan 28 13:34:19 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 28 Jan 2020 13:34:19 +0000
Subject: stage2 build fails
In-Reply-To: <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com>
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com>
Message-ID: 

git clean -dfxq && git submodule foreach git clean -dfxq

That made it work!  Thank you.

(Of course I have no idea *what* file 'make maintainer-clean' isn't
removing, but maybe we should not worry over-much.)

Simon

| -----Original Message-----
| From: Stefan Schulze Frielinghaus 
| Sent: 28 January 2020 11:32
| To: Simon Peyton Jones 
| Cc: Ben Gamari ; ghc-devs 
| Subject: Re: stage2 build fails
|
| I ran into a similar(?) problem a while ago. "make maintainer-clean"
| does not necessarily remove every generated file. See for example
|
| https://mail.haskell.org/pipermail/ghc-devs/2019-August/018049.html
|
| Since then I only use
|
|   git clean -dfxq && git submodule foreach git clean -dfxq
|
| which worked reliably for me (beware that this removes any untracked
| file). Maybe this helps?
| | Cheers, | Stefan | | On Tue, Jan 28, 2020 at 10:41:33AM +0000, Simon Peyton Jones via ghc-devs | wrote: | > I did "sh validate --legacy" which uses "make maintainer-clean". | > | > I'll try again to double check. Is there any particular file I should | remove? | > | > Simon | > | > | -----Original Message----- | > | From: Ben Gamari | > | Sent: 27 January 2020 18:59 | > | To: Simon Peyton Jones ; ghc-devs | devs at haskell.org> | > | Subject: RE: stage2 build fails | > | | > | Simon Peyton Jones writes: | > | | > | > | Indeed it would be nice to know which compiler you are using to | > | > | bootstrap. I suspect Sylvain is correct that the alternative can | be | > | > | removed but first I would like to understand why this is arising | only | > | > | now. | > | > | > | > I'm using 8.6.4 as my bootstrap compiler. But this message occurs | only | > | when compiling the *stage2* compiler with the *stage1* compiler. At | that | > | moment the bootstrap compiler is irrelevant. | > | > | > | > It's as if the build system doesn't know that the stage1 compiler | is >= | > | 8.10 | > | > | > | Hmm, interesting. Has this tree been cleaned in the last month or | two? | > | I suspect that this may have something to do with the changes to the | > | GHC_STAGE macro that were merged a while back (perhaps a stale header | > | file?). 
| > |
| > | Cheers,
| > |
| > | - Ben

From ben at smart-cactus.org  Tue Jan 28 14:10:10 2020
From: ben at smart-cactus.org (Ben Gamari)
Date: Tue, 28 Jan 2020 09:10:10 -0500
Subject: stage2 build fails
In-Reply-To: 
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com>
Message-ID: <87y2trwu2q.fsf@smart-cactus.org>

Simon Peyton Jones writes:

> git clean -dfxq && git submodule foreach git clean -dfxq
>
> That made it work! Thank you.
>
> (Of course I have no idea *what* file 'make maintainer-clean' isn't
> removing, but maybe we should not worry over-much.)
>
Right, given that this only occurs with the make build system my
inclination has been to just live with it until we finally switch to
Hadrian.

Out of curiosity, is there a reason you are not using Hadrian in this
tree?

Cheers,

- Ben
From simonpj at microsoft.com  Tue Jan 28 14:27:02 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 28 Jan 2020 14:27:02 +0000
Subject: stage2 build fails
In-Reply-To: <87y2trwu2q.fsf@smart-cactus.org>
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com> <87y2trwu2q.fsf@smart-cactus.org>
Message-ID: 

| Out of curiosity, is there a reason you are not using Hadrian in this
| tree?

Yes... I tried again three weeks ago and ran into several obstacles that
I asked about on ghc-devs at the time. I'd be happy to re-state them when
it's a good time to do so. I think this thread is the most recent one:
https://mail.haskell.org/pipermail/ghc-devs/2019-December/018362.html

Simon

| -----Original Message-----
| From: Ben Gamari 
| Sent: 28 January 2020 14:10
| To: Simon Peyton Jones ; Stefan Schulze
| Frielinghaus 
| Cc: ghc-devs 
| Subject: RE: stage2 build fails
|
| Simon Peyton Jones writes:
|
| > git clean -dfxq && git submodule foreach git clean -dfxq
| >
| > That made it work! Thank you.
| >
| > (Of course I have no idea *what* file 'make maintainer-clean' isn't
| > removing, but maybe we should not worry over-much.)
| >
| Right, given that this only occurs with the make build system my
| inclination has been to just live with it until we finally switch to
| Hadrian.
|
| Out of curiosity, is there a reason you are not using Hadrian in this
| tree?
|
| Cheers,
|
| - Ben

From ben at smart-cactus.org  Tue Jan 28 15:33:48 2020
From: ben at smart-cactus.org (Ben Gamari)
Date: Tue, 28 Jan 2020 10:33:48 -0500
Subject: stage2 build fails
In-Reply-To: 
References: <87ftg1x8ao.fsf@smart-cactus.org> <87a768ybdx.fsf@smart-cactus.org> <20200128113216.GA28561@dyn-9-152-222-24.boeblingen.de.ibm.com> <87y2trwu2q.fsf@smart-cactus.org>
Message-ID: <87sgjzwq7c.fsf@smart-cactus.org>

Simon Peyton Jones writes:

> | Out of curiosity, is there a reason you are not using Hadrian in this
> | tree?
>
> Yes... I tried again three weeks ago and ran into several obstacles
> that I asked about on ghc-devs at the time. I'd be happy to re-state
> them when it's a good time to do so.
>
Ahh yes, #17534. I've been working around this when it occurs by removing
the errant dyn_o file. However, I agree that this gets rather tiresome.
We'll have to prioritize this.

Cheers,

- Ben

From lexi.lambda at gmail.com  Tue Jan 28 22:19:14 2020
From: lexi.lambda at gmail.com (Alexis King)
Date: Tue, 28 Jan 2020 16:19:14 -0600
Subject: Feasibility of native RTS support for continuations?
In-Reply-To: 
References: 
Message-ID: 

> On Jan 28, 2020, at 04:09, Simon Peyton Jones wrote:
>
> I've thought about this quite a bit in the past, but got stalled for lack of cycles to think about it more. But there's a paper or two

Many thanks! I'd stumbled upon the 2007 paper, but I hadn't seen the 2016
one. In the case of the former, I had thought it probably wasn't
enormously relevant, since the "continuations" appear to be fundamentally
one-shot. At first glance, that doesn't seem to have changed in the JFP
article, but I haven't really read it yet, so maybe I'm mistaken. I'll
take a closer look.
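As a self-contained illustration of the CPS encoding I described in my first message, here is the whole thing at the library level, in ordinary Haskell (this is just an example, not the proposed primops or their implementation). The point is the cost: every (>>=) allocates a closure for the rest of the computation, and a captured continuation can be applied more than once.

```haskell
-- A library-level CPS monad. Every (>>=) builds a heap-allocated closure
-- for the rest of the computation; that is exactly the overhead that
-- capturing slices of the RTS stack natively would avoid.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f (Cont m) = Cont (\k -> m (k . f))

instance Applicative (Cont r) where
  pure a = Cont ($ a)
  Cont mf <*> Cont ma = Cont (\k -> mf (\f -> ma (k . f)))

instance Monad (Cont r) where
  Cont m >>= f = Cont (\k -> m (\a -> runCont (f a) k))  -- closure per bind

-- Delimit the extent of captured continuations (the "prompt").
reset :: Cont a a -> Cont r a
reset (Cont m) = Cont (\k -> k (m id))

-- Capture the continuation up to the nearest enclosing reset.
shift :: ((a -> r) -> Cont r r) -> Cont r a
shift f = Cont (\k -> runCont (f k) id)

-- The captured continuation (1 +) is applied twice: (1+1) + (1+10) = 13.
example :: Int
example = runCont (reset (fmap (1 +) (shift (\k -> pure (k 1 + k 10))))) id

main :: IO ()
main = print example  -- prints 13
```

With native reset#/shift# primops, the same surface interface could run directly on the RTS stack, so code that never actually captures its continuation would pay nothing for these closures.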
> On the effects front I think Daan Leijen is doing interesting stuff, although I'm not very up to date: > https://www.microsoft.com/en-us/research/people/daan/publications/ Indeed, I’ve read a handful of his papers while working on this! I didn’t mention it in the original email, but I’ve also talked a little with Matthew Flatt about efficient implementation of delimited control, and he pointed me to a few papers a couple of months ago. One of those was “Final Shift for call/cc: a Direct Implementation of Shift and Reset” by Gasbichler and Sperber, which describes an approach closest to what I currently have in mind to try to implement in the RTS. > One interesting dimension is whether or not the continuations you capture are one-shot. If so, particularly efficient implementations are possible. Quite so. One thing I’ve considered is that it’s possible to obtain much of that efficiency even without requiring strict one-shot continuations if you have a separate operation for restoring a continuation that guarantees you won’t ever restore it again, sort of like the existing unsafeThaw/unsafeFreeze operations. That is, you can essentially convert a multi-shot continuation into a one-shot continuation and reap performance benefits, even if you’ve already applied the continuation. This is a micro-optimization, though, so I’m not worrying too much about it right now. > Also: much of the "capture stack chunk" stuff is *already* implemented, because it is (I think) what happens when a thread receives an asynchronous exception, and just abandon its evaluation of thunks that it has started work on. Now that is very interesting, and certainly not something I would have expected! Why would asynchronous exceptions need to capture any portion of the stack? Exceptions obviously trigger stack unwinding, so I assumed the “abort to the current prompt” part of my implementation would already exist, but not the “capture a slice of the stack” part. 
Could you say a
little more about this, or point me to some relevant code?

Thanks again,
Alexis

From simonpj at microsoft.com  Wed Jan 29 09:32:32 2020
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Wed, 29 Jan 2020 09:32:32 +0000
Subject: Feasibility of native RTS support for continuations?
In-Reply-To: 
References: 
Message-ID: 

| Now that is very interesting, and certainly not something I would have
| expected! Why would asynchronous exceptions need to capture any portion of
| the stack? Exceptions obviously trigger stack unwinding, so I assumed the
| "abort to the current prompt" part of my implementation would already
| exist, but not the "capture a slice of the stack" part. Could you say a
| little more about this, or point me to some relevant code?

Suppose a thread happens to be evaluating a pure thunk for (factorial
200). Then it gets an asynchronous exception from another thread. That
asynch exn is nothing to do with (factorial 200). So we could either

A. revert the thunk to (factorial 200), abandoning all the work done so
far, or

B. capture the stack and attach it to the thunk, so that if any other
thread enters that thunk, it'll just run that stack.

Now (A) means that every thunk has to be revertible, which means keeping
its original free variables, which leads to space leaks. And extra work
to avoid losing any info you need for reversion. Extra work is painful;
we want to put all of the extra work on the asynch exn. So we do (B).

See Section 8 of "Asynchronous exceptions in Haskell".
https://www.microsoft.com/en-us/research/publication/asynchronous-exceptions-haskell-3/

And "An implementation of resumable black holes" (Reid).
https://alastairreid.github.io/papers/IFL_98/

This stack-freezing stuff is definitely implemented. I'm not quite sure
where, but I'm cc'ing Simon Marlow who can point you at it.
If you capture the stack down to the prompt, you MUST overwrite T1 and T2 with a resumable continuation capturing their portion of the stack, in case some other, unrelated thread needs their value. But as I say, all this is implemented. --------------- Keep us posted. It'd be good to have a design that accommodated some of the applications in the 'composable scheduler activations' paper too. Simon | -----Original Message----- | From: Alexis King | Sent: 28 January 2020 22:19 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Feasibility of native RTS support for continuations? | | > On Jan 28, 2020, at 04:09, Simon Peyton Jones | wrote: | > | > I've thought about this quite a bit in the past, but got stalled for | lack of cycles to think about it more. But there's a paper or two | | Many thanks! I’d stumbled upon the 2007 paper, but I hadn’t seen the 2016 | one. In the case of the former, I had thought it probably wasn’t | enormously relevant, since the “continuations” appear to be fundamentally | one-shot. At first glance, that doesn’t seem to have changed in the JFP | article, but I haven’t really read it yet, so maybe I’m mistaken. I’ll | take a closer look. | | > On the effects front I think Daan Leijen is doing interesting stuff, | although I'm not very up to date: | > | https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.micr | osoft.com%2Fen- | us%2Fresearch%2Fpeople%2Fdaan%2Fpublications%2F&data=02%7C01%7Csimonpj | %40microsoft.com%7C1f6ac242e0334d662c8c08d7a4401d95%7C72f988bf86f141af91ab | 2d7cd011db47%7C1%7C0%7C637158467589648051&sdata=%2BPgJblk6y%2BjXRc5bA0 | gdzQEjrqgAQB6UYytdw7UtLQQ%3D&reserved=0 | | Indeed, I’ve read a handful of his papers while working on this! I didn’t | mention it in the original email, but I’ve also talked a little with | Matthew Flatt about efficient implementation of delimited control, and he | pointed me to a few papers a couple of months ago. 
One of those was “Final | Shift for call/cc: a Direct Implementation of Shift and Reset” by | Gasbichler and Sperber, which describes an approach closest to what I | currently have in mind to try to implement in the RTS. | | > One interesting dimension is whether or not the continuations you | capture are one-shot. If so, particularly efficient implementations are | possible. | | Quite so. One thing I’ve considered is that it’s possible to obtain much | of that efficiency even without requiring strict one-shot continuations if | you have a separate operation for restoring a continuation that guarantees | you won’t ever restore it again, sort of like the existing | unsafeThaw/unsafeFreeze operations. That is, you can essentially convert a | multi-shot continuation into a one-shot continuation and reap performance | benefits, even if you’ve already applied the continuation. | | This is a micro-optimization, though, so I’m not worrying too much about | it right now. | | > Also: much of the "capture stack chunk" stuff is *already* implemented, | because it is (I think) what happens when a thread receives an | asynchronous exception, and just abandon its evaluation of thunks that it | has started work on. | | Now that is very interesting, and certainly not something I would have | expected! Why would asynchronous exceptions need to capture any portion of | the stack? Exceptions obviously trigger stack unwinding, so I assumed the | “abort to the current prompt” part of my implementation would already | exist, but not the “capture a slice of the stack” part. Could you say a | little more about this, or point me to some relevant code? | | Thanks again, | Alexis From lexi.lambda at gmail.com Thu Jan 30 00:55:05 2020 From: lexi.lambda at gmail.com (Alexis King) Date: Wed, 29 Jan 2020 18:55:05 -0600 Subject: Feasibility of native RTS support for continuations? 
In-Reply-To:
References:
Message-ID: <72859DAE-06D5-473F-BA92-AD6A40543C97@gmail.com>

> On Jan 29, 2020, at 03:32, Simon Peyton Jones wrote:
>
> Suppose a thread happens to be evaluating a pure thunk for (factorial
> 200). […] This stack-freezing stuff is definitely implemented.

That’s fascinating! I had no idea, but your explanation makes sense (as do
the papers you linked). That is definitely promising, as it seems like many
of the tricky cases may already be accounted for? I’ll see if I can follow
the Cmm code well enough to hunt down how it’s implemented.

One other thing I have been thinking about: this is completely incompatible
with the state hack, isn’t it? That is not a showstopper, of course—I do
not intend to suggest that continuations be capturable in ordinary IO—but
it does mean I probably want a way to selectively opt out. (But I’ll worry
about that if I ever get that far.)

Alexis

From marlowsd at gmail.com  Thu Jan 30 08:35:58 2020
From: marlowsd at gmail.com (Simon Marlow)
Date: Thu, 30 Jan 2020 08:35:58 +0000
Subject: Feasibility of native RTS support for continuations?
In-Reply-To: <72859DAE-06D5-473F-BA92-AD6A40543C97@gmail.com>
References: <72859DAE-06D5-473F-BA92-AD6A40543C97@gmail.com>
Message-ID:

My guess is you can almost do what you want with asynchronous exceptions,
but some changes to the RTS would be needed. There's a bit of code in the
IO library that literally looks like this
(https://gitlab.haskell.org/ghc/ghc/blob/master/libraries/base/GHC/IO/Handle/Internals.hs#L175):

  t <- myThreadId
  throwTo t e
  ... carry on ...

that is, it throws an exception to the current thread using throwTo, and
then there is code to handle what happens if the enclosing thunk is
evaluated after the exception has been thrown. That is, throwing an
exception to the current thread is an IO operation that returns later!

This only works with throwTo, not with throwIO, because throwIO is a
*synchronous* exception that destructively tears down the stack.
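The visible semantics of that pattern can be sketched in a few lines. This
is only an illustration (the Wakeup exception type is made up for the
demonstration, and this is not the actual Handle code): throwTo to the
current thread raises the exception right at the call site, where it can be
caught, and execution carries on from the handler.

```haskell
import Control.Concurrent (myThreadId)
import Control.Exception (Exception, throwTo, try)

-- Hypothetical exception type, just for this demonstration.
data Wakeup = Wakeup deriving Show
instance Exception Wakeup

main :: IO ()
main = do
  t <- myThreadId
  -- Throwing to the current thread raises Wakeup here: the exception is
  -- asynchronous, but it is delivered immediately because exceptions are
  -- unmasked inside 'try', which catches it.
  r <- try (throwTo t Wakeup) :: IO (Either Wakeup ())
  print r
  putStrLn "carried on"
```

Running this prints "Left Wakeup" followed by "carried on": the throw is
caught and the thread resumes, rather than the stack being torn down for
good as with throwIO.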
I suppose if you want to pass a value to the thread after resumption you could do it via an IORef. But the issue with this is that you can only apply the continuation once: GHC treats the captured continuation like a thunk, which means that after evaluating it, it will be updated with its value. But for your purposes you need to be able to apply it at least twice - once because we want to continue after shift#, and again when we apply the continuation later. Somehow the thunks we build this way would need to be marked non-updatable. Perhaps this could be done with a new primitive `throwToNonUpdatable` (hopefully with a better name) that creates non-updatable thunks. Also you might want to optimise the implementation so that it doesn't actually tear down the stack as it copies it into the heap, so that you could avoid the need to copy it back from the heap again in shift#. So that's shift#. What about reset#? I expect it's something like `unsafeInterleaveIO`, that is it creates a thunk to name the continuation. You probably also want a `catch` in there, so that we don't tear down more of the stack than we need to. Hope this is helpful. Cheers Simon On Thu, 30 Jan 2020 at 00:55, Alexis King wrote: > > On Jan 29, 2020, at 03:32, Simon Peyton Jones > wrote: > > > > Suppose a thread happens to be evaluating a pure thunk for (factorial > 200). […] This stack-freezing stuff is definitely implemented. > > That’s fascinating! I had no idea, but your explanation makes sense (as do > the papers you linked). That is definitely promising, as it seems like many > of the tricky cases may already be accounted for? I’ll see if I can follow > the Cmm code well enough to hunt down how it’s implemented. > > One other thing I have been thinking about: this is completely > incompatible with the state hack, isn’t it? 
That is not a showstopper, of
> course—I do not intend to suggest that continuations be capturable in
> ordinary IO—but it does mean I probably want a way to selectively opt out.
> (But I’ll worry about that if I ever get that far.)
>
> Alexis

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ben at smart-cactus.org  Thu Jan 30 16:31:48 2020
From: ben at smart-cactus.org (Ben Gamari)
Date: Thu, 30 Jan 2020 11:31:48 -0500
Subject: Feasibility of native RTS support for continuations?
In-Reply-To:
References:
Message-ID: <87mua4x5w0.fsf@smart-cactus.org>

Simon Peyton Jones via ghc-devs writes:

> | Now that is very interesting, and certainly not something I would have
> | expected! Why would asynchronous exceptions need to capture any portion of
> | the stack? Exceptions obviously trigger stack unwinding, so I assumed the
> | “abort to the current prompt” part of my implementation would already
> | exist, but not the “capture a slice of the stack” part. Could you say a
> | little more about this, or point me to some relevant code?
>
> Suppose a thread happens to be evaluating a pure thunk for (factorial
> 200). Then it gets an asynchronous exception from another thread. That
> asynch exn is nothing to do with (factorial 200). So we could either
>
>   A. revert the thunk to (factorial 200), abandoning all
>      the work done so far, or
>   B. capture the stack and attach it to the thunk, so that if any other
>      thread enters that thunk, it'll just run that stack.
>
> Now (A) means that every thunk has to be revertible, which means keeping
> its original free variables, which leads to space leaks. And extra work
> to avoid losing any info you need for reversion. Extra work is painful;
> we want to put all of the extra work on the asynch exn.
>
> So we do (B).
>
> See Section 8 of "Asynchronous exceptions in Haskell".
> https://www.microsoft.com/en-us/research/publication/asynchronous-exceptions-haskell-3/
>
> And "An implementation of resumable black holes" (Reid).
> https://alastairreid.github.io/papers/IFL_98/
>
> This stack-freezing stuff is definitely implemented. I'm not quite
> sure where, but I'm cc'ing Simon Marlow who can point you at it.
>
For the record, the runtime system captures the stack state in an AP_STACK
closure. This is done in rts/RaiseAsync.c:raiseAsync and some of this is
described in the comment attached to that function.

As Simon PJ points out, this is all very tricky stuff, especially in a
concurrent context. If you make any changes in this area, do be sure to
keep in mind the considerations described in Note [AP_STACKs must be
eagerly blackholed], which arose out of the very nasty #13615.

Cheers and good luck!

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL:

From ben at smart-cactus.org  Thu Jan 30 16:49:59 2020
From: ben at smart-cactus.org (Ben Gamari)
Date: Thu, 30 Jan 2020 11:49:59 -0500
Subject: more submodule questions
In-Reply-To: <3A7FD7F2-27AC-49E4-AD56-959A3C5E006A@richarde.dev>
References: <8736c0y7ms.fsf@smart-cactus.org>
 <3A7FD7F2-27AC-49E4-AD56-959A3C5E006A@richarde.dev>
Message-ID: <87h80cx51p.fsf@smart-cactus.org>

Richard Eisenberg writes:

>> On Jan 27, 2020, at 8:19 PM, Ben Gamari wrote:
>>
>> The linter does not hold up builds for merge requests but will hold up a
>> "pre-merge" validation job (e.g. a validation of an MR created by
>> @marge-bot). This ensures that a patch containing a wip/ submodule
>> reference will not be merged to master.
>
> Very interesting! Are there other such checks? I always assume that if
> an MR passes CI, then it is suitable for merging. Of course, what you
> describe makes perfect sense here -- we don't require upstream to have
> our commits during CI, but we do during merging.
I'm just wondering if
> there are other such scenarios that are checked for.
>
Indeed we run a whole suite of linters in the `lint` stage of the CI
pipeline. See, for instance, [1]. These linters generally fall into a few
categories:

 * Some (e.g. the submodule and changelog linters) enforce policy that
   would otherwise be mere social convention. The former we already
   discussed. The latter checks that the string TBD doesn't appear in core
   library changelogs, ensuring that we take the necessary steps to
   finalize the changelogs before cutting a release.

 * Some (e.g. the makefile, CPP, and shellcheck linters) check for
   portability issues that we may not otherwise stumble upon in the set
   of platforms that we routinely test.

 * Some (e.g. the typecheck-testsuite and lint-testsuite linters) are
   simply cheaper checks for obvious mistakes that we would otherwise
   only stumble upon late in the build.

Cheers,

- Ben

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 487 bytes
Desc: not available
URL:

From juhpetersen at gmail.com  Fri Jan 31 04:02:37 2020
From: juhpetersen at gmail.com (Jens Petersen)
Date: Fri, 31 Jan 2020 12:02:37 +0800
Subject: [ANNOUNCE] Glasgow Haskell Compiler 8.10.1-rc1 released
In-Reply-To: <87muacxhd6.fsf@smart-cactus.org>
References: <87muacxhd6.fsf@smart-cactus.org>
Message-ID:

Thanks for the RC1 release!

When I try to do a test build without haddocks (on Fedora), a mysterious
thing happens: it still builds and installs documentation for the
containers library?!? I should file a bug anyway - though I guess it is
not a release blocker, just a minor annoyance.

Thanks, Jens

On Sat, 25 Jan 2020 at 12:58, Ben Gamari wrote:
>
> Hello all,
>
> The GHC team is happy to announce the availability of the first release
> candidate of GHC 8.10.1.
> Source and binary distributions
> are available at the usual place:
>
>   https://downloads.haskell.org/ghc/8.10.1-rc1/
>
> GHC 8.10.1 will bring a number of new features including:
>
>  * The new UnliftedNewtypes extension allowing newtypes around unlifted
>    types.
>
>  * The new StandaloneKindSignatures extension allows users to give
>    top-level kind signatures to type, type family, and class
>    declarations.
>
>  * A new warning, -Wderiving-defaults, to draw attention to ambiguous
>    deriving clauses
>
>  * A number of improvements in code generation, including changes
>
>  * A new GHCi command, :instances, for listing the class instances
>    available for a type.
>
>  * An upgraded Windows toolchain lifting the MAX_PATH limitation
>
>  * A new, low-latency garbage collector.
>
>  * Improved support for profiling, including support for sending profiler
>    samples to the eventlog, allowing correlation between the profile and
>    other program events
>
> This is the first and likely final release candidate. For a variety of
> reasons, it comes a few weeks later than the original schedule of a
> release in late December. However, besides a few core libraries
> book-keeping issues this candidate is believed to be in good condition
> for the final release. As such, the final 8.10.1 release will likely
> come in two weeks.
>
> Note that at the moment we still require that macOS Catalina users
> exempt the binary distribution from the notarization requirement by
> running `xattr -cr .` on the unpacked tree before running `make install`.
>
> In addition, we are still looking for any Alpine Linux users to help
> diagnose the correctness issues in the Alpine binary distribution [1].
> If you use Alpine, any help you can offer here would be greatly
> appreciated.
>
> Please do test this release and let us know if you encounter any other
> issues.
> > Cheers, > > - Ben > > > [1] https://gitlab.haskell.org/ghc/ghc/issues/17508 > [2] https://gitlab.haskell.org/ghc/ghc/issues/17418 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: