From jweakly at pdx.edu Fri Sep 1 04:00:35 2017
From: jweakly at pdx.edu (Jared Weakly)
Date: Thu, 31 Aug 2017 21:00:35 -0700
Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite)
Message-ID:

Hey y'all,

A quick ToC before I dive right in:

* What my HSOC project is on
* My progress so far
* Feedback welcome
* What I have left to do
* Theoretical potential improvements

-----------

My HSOC project was on bringing sanity to the GHC performance test-suite.
My blog post on this is here: https://jaredweakly.com/blog/haskell-summer-of-code/
The Trac ticket that corresponds to this is here: https://ghc.haskell.org/trac/ghc/ticket/12758
The Phabricator ticket for this patch: https://phabricator.haskell.org/D3758

The tl;dr of my HSOC project is that GHC's performance tests currently require the programmer to add in expected numbers manually, update them, handhold the testsuite, etc. This is a bit absurd, and my project's overall aim is to reduce the effort required of the programmer to as close to zero as possible while simultaneously increasing the testsuite's ability to catch regressions as much as possible.

------------

My progress so far:

- I have a few comparison tools in perf_notes.py. These allow people to compare performance numbers of tests across commits.
- All the performance numbers generated by running the tests are automatically stored in git notes and referenced by both the comparison tool and the testsuite.
- I have refactored the testsuite to use my new code that pulls expected numbers automatically from git notes (trivially passing if the note does not yet exist for that test); it then compares that expected number with the number obtained from running the testsuite on the latest commit. The comparison passes if it's within a certain deviation (20% by default, but this can be customized by the programmer).
- I have refactored all of the all.T files to use the new comparison functions for the performance tests and ensured that this doesn't break any existing tests.

------------

Anyone who wants to check out the wip/perf-testsuite branch and try this out is more than welcome. Feedback on anything is welcome; comments are appreciated; discussion is welcome, etc.

-------------

What I have left to do is:

1. Finish writing up the documentation
2. Update the wiki in all the relevant places concerning additions/modifications to the testsuite and test driver
3. Make sure everyone is happy with the change (and make small changes as necessary)

--------------

Possible features and improvements I am thinking about adding in:

* As a stopgap to full integration with performance tracking tools (such as Gipedia), optionally emitting a test warning with the test summary if there is any regression detected whatsoever (even if the number falls within the allowed deviation)
* Some tests, such as T7702, have a somewhat nonsensical regression percentage. Ideally the testsuite could handle those better. I could potentially build in multiple ways to determine a regression (percentage, 'above a certain value', 'taking longer than X amount of time', as potential examples)
* Currently some tests require installing some Haskell packages; they are skipped if the packages are not installed. I could try to build in a way to automatically attempt to install all necessary Haskell packages if someone attempts to run a test that requires them. (Perhaps using a command such as 'make test exhaustive')
* The performance metric 'peak_megabytes' is sometimes not accurate enough; I could see if adding something like `RTS -h -i0.01` automatically to tests that use 'peak_megabytes' would resolve that. Currently it is a manual debugging step.

Any thoughts? Comments? Questions?

Regards,
Jared Weakly
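To restate the acceptance rule described above in code form: the actual implementation lives in the Python test driver (perf_notes.py and the testsuite), so the Haskell sketch below is purely illustrative and its names are made up. A test with no git note yet passes trivially; otherwise the fresh measurement must stay within the allowed relative deviation of the expected value.

```
-- Illustrative sketch only; the real check is implemented in the Python
-- test driver, not in Haskell. 'tolerance' is the allowed relative
-- deviation (0.20 by default, i.e. 20%), 'expected' comes from the git
-- note recorded for a previous run, and 'actual' is the fresh measurement.
withinTolerance :: Double -> Double -> Double -> Bool
withinTolerance tolerance expected actual =
  abs (actual - expected) <= tolerance * expected

-- A test trivially passes when no expected number is recorded yet;
-- otherwise it passes iff the new number is within the deviation.
passes :: Maybe Double -> Double -> Bool
passes Nothing         _      = True
passes (Just expected) actual = withinTolerance 0.20 expected actual
```

With a 20% tolerance, an expected allocation figure of 1,000,000 would accept anything between 800,000 and 1,200,000.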
From oleg.grenrus at iki.fi Fri Sep 1 08:38:34 2017
From: oleg.grenrus at iki.fi (Oleg Grenrus)
Date: Fri, 1 Sep 2017 11:38:34 +0300
Subject: Type-level generics
In-Reply-To: <2360843.yIA53fhjkg@squirrel>
References: <8748542.10fH5JaPFX@squirrel> <2360843.yIA53fhjkg@squirrel>
Message-ID:

It seems that by making this a class you can "prove" it by requiring this isomorphism:

class (To r ~ v, From v ~ r)
      -- , To (From v :: Rep a x) ~ v)
   => TypeGeneric a (r :: Rep a x) (v :: a) where
  type To r :: a
  type From v :: Rep a x

See the attachment or [1] for the whole file.

Cheers, Oleg

[1]: https://gist.github.com/phadej/fab7c627efbca5cba16ba258c8f10337

On 31.08.2017 23:22, David Feuer wrote:
> One other thing I should add. We'd really, really like to have isomorphism
> evidence:
>
> toThenFrom :: pr p -> To (From x :: Rep a p) :~: (x :: a)
> fromThenTo :: pr1 a -> pr2 (r :: Rep a p) -> From (To r :: a) :~: (r :: Rep a p)
>
> I believe these would make the To and From families considerably more
> useful. Unfortunately, while I'm pretty sure those are completely legit for
> any Generic-derived types, I don't think there's ever any way to prove
> them in Haskell! Ugh.
>
> On Thursday, August 31, 2017 3:37:15 PM EDT David Feuer wrote:
>> I've been thinking for several weeks that it might be useful to offer
>> type-level generics. That is, along with
>>
>> to :: Rep a k -> a
>> from :: a -> Rep a
>>
>> perhaps we should also derive
>>
>> type family To (r :: Rep a x) :: a
>> type family From (v :: a) :: Rep a x
>>
>> This would allow us to use generic programming at the type level.
>> For example, we could write a generic ordering family:
>>
>> class OrdK (k :: Type) where
>>   type Compare (x :: k) (y :: k) :: Ordering
>>   type Compare (x :: k) (y :: k) = GenComp (Rep k ()) (From x) (From y)
>>
>> instance OrdK Nat where
>>   type Compare x y = CmpNat x y
>>
>> instance OrdK Symbol where
>>   type Compare x y = CmpSymbol x y
>>
>> instance OrdK [a] -- No implementation needed!
>>
>> type family GenComp k (x :: k) (y :: k) :: Ordering where
>>   GenComp (M1 i c f p) ('M1 x) ('M1 y) = GenComp (f p) x y
>>   GenComp (K1 i c p) ('K1 x) ('K1 y) = Compare x y
>>   GenComp ((x :+: y) p) ('L1 m) ('L1 n) = GenComp (x p) m n
>>   GenComp ((x :+: y) p) ('R1 m) ('R1 n) = GenComp (y p) m n
>>   GenComp ((x :+: y) p) ('L1 _) ('R1 _) = 'LT
>>   GenComp ((x :+: y) p) ('R1 _) ('L1 _) = 'GT
>>   GenComp ((x :*: y) p) (x1 ':*: y1) (x2 ':*: y2) =
>>     PComp (GenComp (x p) x1 x2) (y p) y1 y2
>>   GenComp (U1 p) _ _ = 'EQ
>>   GenComp (V1 p) _ _ = 'EQ
>>
>> type family PComp (c :: Ordering) k (x :: k) (y :: k) :: Ordering where
>>   PComp 'EQ k x y = GenComp k x y
>>   PComp x _ _ _ = x
>>
>> For people who want to play around with the idea, here are the definitions of To and From
>> for lists:
>>
>>   To ('M1 ('L1 ('M1 'U1))) = '[]
>>   To ('M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))) = x ': xs
>>   From '[] = 'M1 ('L1 ('M1 'U1))
>>   From (x ': xs) = 'M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))
>>
>> David
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

-------------- next part --------------
A non-text attachment was scrubbed...
Name: type-generics.hs
Type: text/x-haskell
Size: 3009 bytes
Desc: not available
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL:

From moritz.angermann at gmail.com Fri Sep 1 08:44:21 2017
From: moritz.angermann at gmail.com (Moritz Angermann)
Date: Fri, 1 Sep 2017 16:44:21 +0800
Subject: [broken HEAD] In which the strict state monad fails at basic arithmetic
Message-ID:

Hi *,

While working on some related code, I came across a rather peculiar behavior with GHC built from the current master branch at b2c2e3e8. After condensing the application quite a bit [1], the test case now produces 8 with ghc 8.2.1 and -6 with ghc 8.3 @ b2c2e3e8.

The sample application is essentially a strict `State Int a` monad that is advanced by 1 and then by another 7.

```
module Lib where

import Control.Monad.Trans.State.Strict

eval :: Int -> State Int a -> a
eval p = fst . flip runState p

advance :: Int -> State Int ()
advance = modify' . (+)

loc :: State Int Int
loc = get

emit1 :: State Int ()
emit1 = advance 1

emitN :: Int -> State Int ()
-- Adding in the 0 case breaks with HEAD; 8.2.1 is fine with it.
-- emitN 0 = advance 0
emitN 0 = pure ()
emitN n = advance n

align8 :: State Int ()
align8 = do
  bits <- (`mod` 8) <$> loc
  emitN (8 - bits)
```

with the test driver

```
module Main where

import Lib
import System.Exit

main :: IO ()
main = do
  let p = eval 0 (emit1 >> align8 >> loc)
  putStrLn $ show p
  if p == 8
    then putStrLn "OK" >> exitSuccess
    else putStrLn "FAIL" >> exitFailure
```

Compiling both with ghc will *NOT* exhibit the issue. Only when the `Lib` module is packaged and `Main` is linked against the package is the issue visible. A cabal file for this is contained in [1].

Using the following git bisect script (where [1] is in `../break` relative to ghc)

```
#!/bin/bash
git submodule update --init --recursive
make -s clean
make -s distclean
./boot > /dev/null
if ./configure --silent --disable-large-address-space && make -s -j9
then
  (cd ../break && rm -fR dist-newstyle && cabal new-run test -w ../ghc/inplace/bin/ghc-stage2)
  status=$?
else
  status=125
fi
exit $status
```

$ git bisect run $PWD/bisect.sh

yields:

```
193664d42dbceadaa1e4689dfa17ff1cf5a405a0 is the first bad commit
commit 193664d42dbceadaa1e4689dfa17ff1cf5a405a0
Author: Simon Peyton Jones
Date:   Wed Mar 8 10:26:47 2017 +0000

    Re-engineer caseRules to add tagToEnum/dataToTag

    See Note [Scrutinee Constant Folding] in SimplUtils

    * Add cases for tagToEnum and dataToTag. This is the main new bit.
      It allows the simplifier to remove the pervasive uses of
          case tagToEnum (a > b) of
            False -> e1
            True  -> e2
      and replace it by the simpler
          case a > b of
            DEFAULT -> e1
            1#      -> e2
      See Note [caseRules for tagToEnum] and Note [caseRules for dataToTag]
      in PrelRules.

    * This required some changes to the API of caseRules, and hence to
      code in SimplUtils. See Note [Scrutinee Constant Folding] in SimplUtils.

    * Avoid duplication of work in the (unusual) case of
          case BIG + 3# of b
            DEFAULT -> e1
            6#      -> e2
      Previously we got
          case BIG of
            DEFAULT -> let b = BIG + 3# in e1
            3#      -> let b = 6# in e2
      Now we get
          case BIG of b#
            DEFAULT -> let b = b' + 3# in e1
            3#      -> let b = 6# in e2

    * Avoid duplicated code in caseRules

    A knock-on refactoring:

    * Move Note [Word/Int underflow/overflow] to Literal, as documentation
      to accompany mkMachIntWrap etc; and get rid of PrelRules.intResult'
      in favour of mkMachIntWrap
```

I do not yet understand exactly where this goes wrong, but I hope someone else will be able to help out. I do find it curious, though, that this bug seems to have gone unnoticed (assuming the commit git bisect found is indeed the underlying issue) for almost half a year. And please, if my analysis is faulty at some point, don't hesitate to point that out!

Cheers,
Moritz

PS: Can we have a folder in ghc which contains cabal packages, where part of the validation is just iterating over all those packages with `cabal new-test -w /path/to/inplace/bin/ghc-stage2`? In that case, one could simply change the executable target in [1] into a testsuite, and drop the package into that folder?

—
[1]: https://gist.github.com/angerman/c6ee51e4892ce6efdbcabb8c5ab990fa

From ryan.gl.scott at gmail.com Fri Sep 1 13:12:26 2017
From: ryan.gl.scott at gmail.com (Ryan Scott)
Date: Fri, 1 Sep 2017 09:12:26 -0400
Subject: Type-level generics
Message-ID:

While we're on the topic, I'll mention that at one point I attempted to modify the singletons [1] library so that it would automatically generate promoted (i.e., type-level) and singled versions of Generic instances for any data type that derived Generic. I wasn't successful, since it turns out singletons are difficult to adapt to data types with higher-kinded types [2] and type classes with associated type families [3], but I did manage to write some examples on a very limited subset of GHC.Generics in this gist [4].

The promoted version of Generic (PGeneric) in that gist works pretty much identically to Oleg's version, but with one notable difference: I don't attempt to put the Generic laws as a superclass of PGeneric. Instead, I make the laws class methods of the singled version of Generic (SGeneric). I found it more convenient to do it this way since I needed to pattern-match on these proofs directly in a generic implementation of decidable equality, but I'm sure this isn't the only way it could be done.

Speaking of which, it astounds me that the Generic laws aren't documented in the Haddocks! We really should do that.

Ryan S.
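For reference, the Generic laws mentioned here are the usual isomorphism laws for to and from. Below is a minimal sketch of one direction, written as a property with the representation parameter instantiated to (); this is standard GHC.Generics usage and not the wording that would appear in the Haddocks.

```
{-# LANGUAGE ScopedTypeVariables #-}
import GHC.Generics (Generic, Rep, from, to)

-- The Generic laws are the round-trip laws of the isomorphism between a
-- type and its generic representation:
--
--   to . from = id   (on values of type a)
--   from . to = id   (on representations, i.e. values of type Rep a x)
--
-- One direction, stated as a QuickCheck-style property:
genericRoundTrip :: forall a. (Generic a, Eq a) => a -> Bool
genericRoundTrip x = to (from x :: Rep a ()) == x
```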
----- [1] http://hackage.haskell.org/package/singletons [2] See the extended discussion in https://github.com/goldfirere/singletons/issues/150 [3] https://github.com/goldfirere/singletons/issues/198 [4] https://gist.github.com/RyanGlScott/daeb63be7885244d9882dcbb1bbc10cc From ben at well-typed.com Fri Sep 1 13:24:51 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Sep 2017 09:24:51 -0400 Subject: [broken HEAD] In which the strict state monad fails at basic arithmetic In-Reply-To: References: Message-ID: <871snqy2jg.fsf@ben-laptop.smart-cactus.org> Moritz Angermann writes: > Hi *, > > while working on some related code. I came across a rather peculiar behavior > with GHC built from the current master branch at b2c2e3e8. > Indeed this sounds like a real bug. Can you open a ticket? Also, it looks like the gist has projected out directory structure; do you think you could push the testcase as a proper git repository? > > PS: can we have a folder in ghc, which contains cabal packages, > and part of the validation is just iterating over all those > packages with `cabal new-test -w /path/to/inplace/bin/ghc-stage2`? > In that case, one could simply change the executable target in > [1] into a testsuite, and drop the package into that folder? > The problem is that we don't have access to cabal-install. However, I think there is certainly room for this sort of testing as part of, for instance, the nightly test cycle. In this case we'd likely want to contain this infrastructure in a repository outside of ghc proper. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Fri Sep 1 13:53:12 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 01 Sep 2017 09:53:12 -0400 Subject: [broken HEAD] In which the strict state monad fails at basic arithmetic In-Reply-To: References: Message-ID: <87val2wmnr.fsf@ben-laptop.smart-cactus.org> Moritz Angermann writes: > Hi *, > > while working on some related code. I came across a rather peculiar behavior > with GHC built from the current master branch at b2c2e3e8. > The issue was a bug indeed introduced by the commit you cite below. The problem was a mistake in a change in constant folding which, frighteningly, the testsuite did not catch. See D3904 for a fix and a test is forthcoming. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Sep 1 18:26:40 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Sep 2017 14:26:40 -0400 Subject: Convenient URL alias for Trac tickets Message-ID: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> Hello everyone, Earlier today a contributor requested that we have an easier-to-remember URL for Trac tickets. Consequently, I've configured ghc.haskell.org to redirect URLs of the form, http://ghc.haskell.org/t/$n to the appropriate Trac ticket. For instance, https://ghc.haskell.org/t/14171 will bring you to the ticket for #14171. Hopefully others also will find this helpful. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From hvriedel at gmail.com Fri Sep 1 19:27:11 2017 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 1 Sep 2017 21:27:11 +0200 Subject: Convenient URL alias for Trac tickets In-Reply-To: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> References: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> Message-ID: Good idea! ...btw, note that a couple years ago, I set up the little known http://ghc.haskell.org/ticket/1234 alias... :-) On Fri, Sep 1, 2017 at 8:26 PM, Ben Gamari wrote: > Hello everyone, > > Earlier today a contributor requested that we have an easier-to-remember URL > for Trac tickets. Consequently, I've configured ghc.haskell.org to redirect > URLs of the form, > > http://ghc.haskell.org/t/$n > > to the appropriate Trac ticket. For instance, > https://ghc.haskell.org/t/14171 will bring you to the ticket for #14171. > Hopefully others also will find this helpful. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at well-typed.com Fri Sep 1 20:18:58 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 01 Sep 2017 16:18:58 -0400 Subject: Convenient URL alias for Trac tickets In-Reply-To: References: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> Message-ID: <87o9quw4st.fsf@ben-laptop.smart-cactus.org> Herbert Valerio Riedel writes: > Good idea! > > ...btw, note that a couple years ago, I set up the little known > > http://ghc.haskell.org/ticket/1234 > > alias... :-) > Indeed I noticed that and it almost deterred me from adding the new alias. However, I am sympathetic to mobile users who complain about long urls. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Fri Sep 1 20:37:57 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Sep 2017 20:37:57 +0000 Subject: [broken HEAD] In which the strict state monad fails at basic arithmetic In-Reply-To: <87val2wmnr.fsf@ben-laptop.smart-cactus.org> References: <87val2wmnr.fsf@ben-laptop.smart-cactus.org> Message-ID: Wow -- Fast work! Do add a test case Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Gamari | Sent: 01 September 2017 14:53 | To: Moritz Angermann ; GHC developers | Subject: Re: [broken HEAD] In which the strict state monad fails at basic | arithmetic | | Moritz Angermann writes: | | > Hi *, | > | > while working on some related code. I came across a rather peculiar | > behavior with GHC built from the current master branch at b2c2e3e8. | > | The issue was a bug indeed introduced by the commit you cite below. The | problem was a mistake in a change in constant folding which, | frighteningly, the testsuite did not catch. See D3904 for a fix and a | test is forthcoming. | | Cheers, | | - Ben From wolfgang-it at jeltsch.info Fri Sep 1 21:23:49 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Sat, 02 Sep 2017 00:23:49 +0300 Subject: Type-level generics In-Reply-To: <8748542.10fH5JaPFX@squirrel> References: <8748542.10fH5JaPFX@squirrel> Message-ID: <1504301029.21745.87.camel@jeltsch.info> Hi! Before starting with generics support at the type level, please first improve the generics support at the value level. 
When I looked at it the last time, there were some apparent leftovers in the form of types or type parameters never used. In addition, it seems awkward that you have to pass these p-parameters around when working with types of kind *, and that there is no possibility to work with types with more than one parameter. I think that GHC’s approach to generics is very good in general, but that the GHC.Generics module looks a bit unpolished and ad- hoc at the moment. Maybe it would be possible to solve the abovementioned problems using TypeInType. All the best, Wolfgang Am Donnerstag, den 31.08.2017, 15:37 -0400 schrieb David Feuer: > I've been thinking for several weeks that it might be useful to offer > type-level generics. That is, along with > > to :: Rep a k -> a > from :: a -> Rep a > > perhaps we should also derive > > type family To (r :: Rep a x) :: a > type family From (v :: a) :: Rep a x > > This would allow us to use generic programming at the type level > For example, we could write a generic ordering family: > > class OrdK (k :: Type) where >   type Compare (x :: k) (y :: k) :: Ordering >   type Compare (x :: k) (y :: k) = GenComp (Rep k ()) (From x) (From > y) > > instance OrdK Nat where >   type Compare x y = CmpNat x y > > instance OrdK Symbol where >   type Compare x y = CmpSymbol x y > > instance OrdK [a] -- No implementation needed! > > type family GenComp k (x :: k) (y :: k) :: Ordering where >   GenComp (M1 i c f p) ('M1 x) ('M1 y) = GenComp (f p) x y >   GenComp (K1 i c p) ('K1 x) ('K1 y) = Compare x y >   GenComp ((x :+: y) p) ('L1 m) ('L1 n) = GenComp (x p) m n >   GenComp ((x :+: y) p) ('R1 m) ('R1 n) = GenComp (y p) m n >   GenComp ((x :+: y) p) ('L1 _) ('R1 _) = 'LT >   GenComp ((x :+: y) p) ('R1 _) ('L1 _) = 'GT >   GenComp ((x :*: y) p) (x1 ':*: y1) (x2 ':*: y2) = >     PComp (GenComp (x p) x1 x2) (y p) y1 y2 >   GenComp (U1 p) _ _ = 'EQ >   GenComp (V1 p) _ _ = 'EQ > > type family PComp (c :: Ordering) k (x :: k) (y :: k) :: Ordering > where >   PComp 'EQ k x y = GenComp k x y >   PComp x _ _ _ = x > > For people who want to play around with the idea, here are the > definitions of To and From > for lists: > >   To ('M1 ('L1 ('M1 'U1))) = '[] >   To ('M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs))))) = x ': xs >   From '[] = 'M1 ('L1 ('M1 'U1)) >   From (x ': xs) = 'M1 ('R1 ('M1 ('M1 ('K1 x) ':*: 'M1 ('K1 xs)))) > > David > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From qdunkan at gmail.com Fri Sep 1 21:57:27 2017 From: qdunkan at gmail.com (Evan Laforge) Date: Fri, 1 Sep 2017 14:57:27 -0700 Subject: Type-level generics In-Reply-To: <1504301029.21745.87.camel@jeltsch.info> References: <8748542.10fH5JaPFX@squirrel> <1504301029.21745.87.camel@jeltsch.info> Message-ID: On Fri, Sep 1, 2017 at 2:23 PM, Wolfgang Jeltsch wrote: > Hi! > > Before starting with generics support at the type level, please first > improve the generics support at the value level. When I looked at it the > last time, there were some apparent leftovers in the form of types or > type parameters never used. In addition, it seems awkward that you have I was just about to complain about this myself, since every year or so I go fail to figure out GHC.Generics after tripping over lots of out of date documentation, confusing type aliases, and obsolete aliases, and wrong examples, but I just looked again and it seems like GHC.Generics got a major update in ghc 8. 
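As a concrete reference point for the `p` parameter discussed above and for the Generic1 machinery that comes up below, here is a small sketch. MyList and GCount are made-up examples for illustration; only the representation types (U1, K1, Par1, Rec1, M1, :+:, :*:) and the Generic/Generic1 classes come from GHC.Generics.

```
{-# LANGUAGE DeriveGeneric, TypeOperators, FlexibleContexts, UndecidableInstances #-}
module GenericsPDemo where

import GHC.Generics

data MyList a = Nil | Cons a (MyList a)
  deriving (Generic, Generic1)

-- Ignoring the M1 metadata wrappers, the derived representations are roughly:
--
--   Rep  (MyList a) p  ~  U1 p  :+:  (K1 R a p  :*:  K1 R (MyList a) p)
--   Rep1 MyList     p  ~  U1 p  :+:  (Par1 p    :*:  Rec1 MyList p)
--
-- In Rep the parameter p is threaded through but never inspected; in Rep1
-- it marks occurrences of the last type parameter (Par1) and recursive
-- occurrences applied to it (Rec1).

-- A tiny Generic1-based function (illustrative only): count how many times
-- the last type parameter occurs in a value.
class GCount f where
  gcount :: f p -> Int

instance GCount U1   where gcount _ = 0
instance GCount Par1 where gcount _ = 1
instance GCount (K1 i c) where gcount _ = 0
instance GCount f => GCount (M1 i c f) where gcount (M1 x) = gcount x
instance (GCount f, GCount g) => GCount (f :+: g) where
  gcount (L1 x) = gcount x
  gcount (R1 y) = gcount y
instance (GCount f, GCount g) => GCount (f :*: g) where
  gcount (x :*: y) = gcount x + gcount y
instance (Generic1 f, GCount (Rep1 f)) => GCount (Rec1 f) where
  gcount (Rec1 x) = gcount (from1 x)
-- (A Comp1/(:.:) instance would be needed for types that nest the
-- parameter under another functor; it is omitted here.)

countParams :: (Generic1 f, GCount (Rep1 f)) => f a -> Int
countParams = gcount . from1   -- countParams (Cons 'x' (Cons 'y' Nil)) == 2
```

Sharing one set of representation types between Generic and Generic1 is exactly why the unused `p` shows up even when only Generic is involved.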
It looks like there's still one confusing reference to Par0: "Note how Par0 and Rec0 both being mapped to K1 allows us to define a uniform instance here. " but at least it's not tangled up in the already very confusing examples and signatures. I think that sentence can be deleted entirely now? I have no idea what it's trying to express. So thanks to whoever did that. I'll give it another try. From jan.stolarek at p.lodz.pl Sat Sep 2 10:17:14 2017 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Sat, 2 Sep 2017 11:17:14 +0100 Subject: Convenient URL alias for Trac tickets In-Reply-To: <87o9quw4st.fsf@ben-laptop.smart-cactus.org> References: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> <87o9quw4st.fsf@ben-laptop.smart-cactus.org> Message-ID: <201709021117.14662.jan.stolarek@p.lodz.pl> One can also create bookmark with a keyword, as described on GHC wiki: https://ghc.haskell.org/trac/ghc/wiki/FirefoxTips Janek --- Politechnika Łódzka Lodz University of Technology Treść tej wiadomości zawiera informacje przeznaczone tylko dla adresata. Jeżeli nie jesteście Państwo jej adresatem, bądź otrzymaliście ją przez pomyłkę prosimy o powiadomienie o tym nadawcy oraz trwałe jej usunięcie. This email contains information intended solely for the use of the individual to whom it is addressed. If you are not the intended recipient or if you have received this message in error, please notify the sender and delete it from your system. From ben at well-typed.com Sat Sep 2 14:21:04 2017 From: ben at well-typed.com (Ben Gamari) Date: Sat, 02 Sep 2017 10:21:04 -0400 Subject: Convenient URL alias for Trac tickets In-Reply-To: <201709021117.14662.jan.stolarek@p.lodz.pl> References: <87r2vqw9zz.fsf@ben-laptop.smart-cactus.org> <87o9quw4st.fsf@ben-laptop.smart-cactus.org> <201709021117.14662.jan.stolarek@p.lodz.pl> Message-ID: <87h8wlw59r.fsf@ben-laptop.smart-cactus.org> Jan Stolarek writes: > One can also create bookmark with a keyword, as described on GHC wiki: > Indeed that is true; however, this won't work in most mobile browsers it seems. Isn't progress great? :) Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Sat Sep 2 14:37:30 2017 From: ben at well-typed.com (Ben Gamari) Date: Sat, 02 Sep 2017 10:37:30 -0400 Subject: Darwin Harbormaster builder down Message-ID: <87bmmtw4id.fsf@ben-laptop.smart-cactus.org> Hello everyone, It seems that the OS X Harbormaster builder has dropped off the face of the internet. I'm trying to get in touch with Futureice about this but in the meantime I've disabled the OS X build plan. This means that patches submitted won't be validated on OS X until this is resolved. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ryan.gl.scott at gmail.com Sat Sep 2 15:35:58 2017 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sat, 2 Sep 2017 11:35:58 -0400 Subject: Type-level generics Message-ID: Several good points were brought up. Let me go through them and try to make sense of what I can: > When I looked at it the last time, there were some apparent leftovers in the form of types or type parameters never used. Are you referring to the `p` type parameters that are found in most of the data types in GHC.Generics? 
If so, they are most definitely used—try deriving Generic1 to see this in action! It's true that in the context of Generic (without the 1 at the end) the `p` isn't used, but this is by design, as this allows us to share the same representation types across Generic and Generic1. > there is no possibility to work with types with more than one parameter. Quite true. But I posit that engineering GHC.Generics to work with more than one type parameter at a time is much harder than it sounds. After all, to profitably work with even a *single* type parameter (what Generic1 does), we must bring in three additional representation types: Par1, Rec1, and (:.:), depending on where in the datatype the last type parameter occurs. If we wanted to have, say, Generic2, we'd similarly need to be able to work with many more combinations of type parameter positions, such as: * data Foo1 a b = Foo1 a b * data Foo2 a b = Foo2 (Either a b) * data Foo3 a b = Foo3 (Either b a) * etc. A naïve approach would be to tack on another type parameter at the end of every representation type, and introduce more types to deal with all the combinations of the first and second type parameter that could arise. But this approach doesn't scale well—after all, at what number N do you stop introducing new representation types? So extending GHC.Generics to deal with more than one type parameter is far from obvious to me (let alone whether it could be made backwards compatible with the current API). > the GHC.Generics module looks a bit unpolished and ad-hoc at the moment. Yes, quite literally everything in GHC.Generics is one large, ad hoc hack. But it's also a darn useful one :) > It looks like there's still one confusing reference to Par0: "Note how Par0 and Rec0 both being mapped to K1 allows us to define a uniform instance here. " but at least it's not tangled up in the already very confusing examples and signatures. I think that sentence can be deleted entirely now? Indeed, an earlier part of the documentation in that module mentions that Par0 was deprecated (and removed, in fact), so we really shouldn't be mentioning it elsewhere. I'll remove that sentence. Ryan S. From wolfgang-it at jeltsch.info Sat Sep 2 23:43:19 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Sun, 03 Sep 2017 02:43:19 +0300 Subject: Type-level generics In-Reply-To: References: Message-ID: <1504395799.2713.33.camel@jeltsch.info> Am Samstag, den 02.09.2017, 11:35 -0400 schrieb Ryan Scott: > > When I looked at it the last time, there were some apparent > > leftovers in the form of types or type parameters never used. > > Are you referring to the `p` type parameters that are found in most of > the data types in GHC.Generics? No, there were really unused things. > If so, they are most definitely used—try deriving Generic1 to see this > in action! I know that they are needed for Generic1, but they are not needed for Generic. > It's true that in the context of Generic (without the 1 at the end) > the `p` isn't used, but this is by design, as this allows us to share > the same representation types across Generic and Generic1. It would be great if we could employ kind polymorphism or even type-in- type to have a single set of representation types, but still no unused parameters. > > there is no possibility to work with types with more than one > > parameter. > > Quite true. But I posit that engineering GHC.Generics to work with > more than one type parameter at a time is much harder than it sounds. 
> After all, to profitably work with even a *single* type parameter
> (what Generic1 does), we must bring in three additional representation
> types: Par1, Rec1, and (:.:), depending on where in the datatype the
> last type parameter occurs. If we wanted to have, say, Generic2, we'd
> similarly need to be able to work with many more combinations of type
> parameter positions, such as:
>
> * data Foo1 a b = Foo1 a b
> * data Foo2 a b = Foo2 (Either a b)
> * data Foo3 a b = Foo3 (Either b a)
> * etc.

Actually, I am looking for something even bigger: not just a Generic2 class, but a Generic class that can deal with types of any arity.

> A naïve approach would be to tack on another type parameter at the end
> of every representation type, and introduce more types to deal with
> all the combinations of the first and second type parameter that could
> arise. But this approach doesn't scale well—after all, at what number
> N do you stop introducing new representation types?

We should not stop at any particular number, but allow an arbitrary number of parameters. ☺ Maybe through striving for a Generic class that works with arbitrary arities, we will find some deeper pattern, which could relieve us from having ad-hoc types such as the Foo1, Foo2, and so on you mention above.

> So extending GHC.Generics to deal with more than one type parameter is
> far from obvious to me

It is also far from obvious to me. 😉 I actually think that making GHC.Generics more generic (making it work with types of arbitrary arity) is a nice research task, not something that can be done very easily.

> (let alone whether it could be made backwards compatible with the
> current API).

It should not be backwards compatible. If we insist on backwards compatibility, we can never arrive at a version that works with types of any arity.

> > the GHC.Generics module looks a bit unpolished and ad-hoc at the
> > moment.
>
> Yes, quite literally everything in GHC.Generics is one large, ad hoc
> hack. But it's also a darn useful one :)

I am worried that people will get so used to the current interface that it will be hard to change it for something better later. You already argued in favor of backwards compatibility. ☹

All the best,
Wolfgang

From ryan.gl.scott at gmail.com Sun Sep 3 02:36:29 2017
From: ryan.gl.scott at gmail.com (Ryan Scott)
Date: Sat, 2 Sep 2017 22:36:29 -0400
Subject: Type-level generics
Message-ID:

If you're willing to go a completely different route from GHC.Generics, then you might be interested in the paper Generic Programming with Multiple Parameters [1] (whose existence I just learned of—thanks to Pedro, the author, for pointing it out to me). It does present a single Generic class that is capable of working over any number of type parameters, although the interface presented is significantly more complex than the current GHC.Generics.

The only reason I mention backwards compatibility is that if we are going to introduce a GHC.Generics 2.0 some day, it'd be nice to have a way to subsume the old interface with the new one, and fortunately, the aforementioned paper includes an algorithm for doing so. My hope was that we'd be able to incorporate these ideas into a design that also grants the ability to write Generic instances for GADTs, but I don't think GHC has a fancy enough type system to do this satisfactorily at the moment.

Ryan S.
----- [1] http://dreixel.net/research/pdf/gpmp_colour.pdf From david at well-typed.com Sun Sep 3 04:33:21 2017 From: david at well-typed.com (David Feuer) Date: Sun, 03 Sep 2017 00:33:21 -0400 Subject: Type-level generics Message-ID: <20170903040626.6370DBCAD4@haskell.org> Ah, nice. I was actually exploring the vague general idea behind that approach earlier this evening. Magalhães (unsurprisingly) has developed it much much further. David FeuerWell-Typed, LLP -------- Original message --------From: Ryan Scott Date: 9/2/17 10:36 PM (GMT-05:00) To: ghc-devs at haskell.org Subject: Re: Type-level generics If you're willing to go a completely different route from GHC.Generics, then you might be interested in the paper Generic Programming with Multiple Parameters [1] (whose existence I just learned of—thanks to Pedro, the author, for pointing it out to me). It does present a single Generic class that is capable of working over any number of type parameters, although the interface presented is significantly more complex than the current GHC.Generics. The only reason I mention backwards compatibility is that if we are going to introduce a GHC.Generics 2.0 some day, it'd be nice to have a way to subsume the old interface with the new one, and fortunately, the aforementioned paper includes an algorithm for doing so. My hope was that we'd be able to incorporate these ideas into a design that also grants the ability to write Generic instances for GADTs, but I don't think GHC has a fancy enough type system to do this satisfactorily at the moment. Ryan S. ----- [1] http://dreixel.net/research/pdf/gpmp_colour.pdf _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun Sep 3 08:55:49 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 03 Sep 2017 09:55:49 +0100 Subject: Disabling Travis? In-Reply-To: <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> Message-ID: <1504428949.11527.2.camel@joachim-breitner.de> Hi Richard, I saw you tried to get Travis working again. I also gave it a quick shot (in wip/travis2), but hit time limits again. Did you have more luck? Greetings, Joachim Am Montag, den 21.08.2017, 23:17 -0400 schrieb Richard Eisenberg: > I have not been getting emails from Travis, despite using it for > other projects (and thus being registered). So perhaps something is > going wrong there. Regardless, it's useful for me, even without the > emails. > > Another nice thing about Travis: I can get a quick check of the > history of failures, even in DEBUG mode. > > Richard > > > On Aug 21, 2017, at 3:54 AM, Joachim Breitner > r.de> wrote: > > > > Hi, > > > > Am Sonntag, den 20.08.2017, 23:55 -0400 schrieb Richard Eisenberg: > > > The big minus to Travis, as I see it, are that only committers > > > can > > > use it. (A forked repo just doesn't work because of the way that > > > submodules are checked out, IIRC.) This minus doesn't affect me, > > > however. 
> > > > I think the biggest minus is that Travis can either be configured > > to > > send mails to one specific address (as it was initially, when I > > received all the mail and could check whether I want to notify the > > author), or to the author of the patch, but not both. Currently it > > is > > sent to notify the author, but people seem to simply ignore the > > mails.  > > > > (Or maybe even only authors who also registered on travis get > > them?) > > > > So if there is demand to keep using Travis, maybe we have to go > > back to > > sending notification mails to one person who oversees this service? > > > > Joachim > > > > > > > > --  > > Joachim Breitner > >  mail at joachim-breitner.de > >  http://www.joachim-breitner.de/ > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From rae at cs.brynmawr.edu Sun Sep 3 09:57:04 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 3 Sep 2017 10:57:04 +0100 Subject: Disabling Travis? In-Reply-To: <1504428949.11527.2.camel@joachim-breitner.de> References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> Message-ID: <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> No -- I hit the limits, too. :( One of my challenges in getting patches in is that I have no reliable way of validating. Validating locally is possible, but it gets false negatives (Mac validation usually isn't clean) and is inconvenient. I will be working with my school's sysadmin to get a Linux server working to do this, but not everyone has that possibility. Richard > On Sep 3, 2017, at 9:55 AM, Joachim Breitner wrote: > > Hi Richard, > > I saw you tried to get Travis working again. I also gave it a quick > shot (in wip/travis2), but hit time limits again. Did you have more > luck? > > Greetings, > Joachim > > Am Montag, den 21.08.2017, 23:17 -0400 schrieb Richard Eisenberg: >> I have not been getting emails from Travis, despite using it for >> other projects (and thus being registered). So perhaps something is >> going wrong there. Regardless, it's useful for me, even without the >> emails. >> >> Another nice thing about Travis: I can get a quick check of the >> history of failures, even in DEBUG mode. >> >> Richard >> >>> On Aug 21, 2017, at 3:54 AM, Joachim Breitner >> r.de> wrote: >>> >>> Hi, >>> >>> Am Sonntag, den 20.08.2017, 23:55 -0400 schrieb Richard Eisenberg: >>>> The big minus to Travis, as I see it, are that only committers >>>> can >>>> use it. (A forked repo just doesn't work because of the way that >>>> submodules are checked out, IIRC.) This minus doesn't affect me, >>>> however. 
>>> >>> I think the biggest minus is that Travis can either be configured >>> to >>> send mails to one specific address (as it was initially, when I >>> received all the mail and could check whether I want to notify the >>> author), or to the author of the patch, but not both. Currently it >>> is >>> sent to notify the author, but people seem to simply ignore the >>> mails. >>> >>> (Or maybe even only authors who also registered on travis get >>> them?) >>> >>> So if there is demand to keep using Travis, maybe we have to go >>> back to >>> sending notification mails to one person who oversees this service? >>> >>> Joachim >>> >>> >>> >>> -- >>> Joachim Breitner >>> mail at joachim-breitner.de >>> http://www.joachim-breitner.de/ >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de > https://www.joachim-breitner.de/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Sun Sep 3 13:15:36 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 3 Sep 2017 13:15:36 +0000 Subject: GHC staus Message-ID: Ben, Simon, and ghc-devs I have to write slides for the GHC status talk in the Haskell Implementor's meeting. Usually we have 1. Current status (current release) 2. What's cooking for the next release 3. GHC community comments As background we have * Our Apr 17 status page * Our 8.2 release notes * Our 8.4 status page What would you put under (1-3)? Anything you'd like to see highlighted? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: GHC Sept16.pptx Type: application/vnd.openxmlformats-officedocument.presentationml.presentation Size: 603712 bytes Desc: GHC Sept16.pptx URL: From ben at smart-cactus.org Sun Sep 3 14:05:27 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Sun, 03 Sep 2017 10:05:27 -0400 Subject: GHC staus In-Reply-To: References: Message-ID: <873783x4go.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones writes: > Ben, Simon, and ghc-devs > I have to write slides for the GHC status talk in the Haskell Implementor's meeting. > Usually we have > > 1. Current status (current release) I think there is no shortage of things to say about 8.2. Lots of features and numerous important bugfixes. > 2. What's cooking for the next release Currently it is looking like 8.4.1 will be another cleanup release. The only item I can think of beyond those listed on the status page is the possibility of progress on #8809. > 3. GHC community comments > For what it's worth I'll be saying a few words about some of the development infrastructure efforts that we've been undertaking. This includes changes in release timing, our CI infrastructure and the status of the proposal process. My time will be shared with Andrey, who will share some words about the Hadrian merge and remaining tasks therein. > As background we have > > * Our Apr 17 status page > * Our 8.2 release notes > * Our 8.4 status page > What would you put under (1-3)? Anything you'd like to see highlighted? 
> I'll respond with more as I think of them. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From george.colpitts at gmail.com Sun Sep 3 14:34:37 2017 From: george.colpitts at gmail.com (George Colpitts) Date: Sun, 03 Sep 2017 14:34:37 +0000 Subject: GHC staus In-Reply-To: <873783x4go.fsf@ben-laptop.smart-cactus.org> References: <873783x4go.fsf@ben-laptop.smart-cactus.org> Message-ID: inline On Sun, Sep 3, 2017 at 11:06 AM Ben Gamari wrote: > Simon Peyton Jones writes: > > > Ben, Simon, and ghc-devs > > I have to write slides for the GHC status talk in the Haskell > Implementor's meeting. > > Usually we have > > > > 1. Current status (current release) > > I think there is no shortage of things to say about 8.2. Lots of > features and numerous important bugfixes. > > > 2. What's cooking for the next release > > Currently it is looking like 8.4.1 will be another cleanup release. > The only item I can think of beyond those listed on the status page is > the possibility of progress on #8809. > I believe there was a post on the Tweag IO blog suggesting linear types work might make 8.4.1. If not true it might be worth saying this is coming post 8.4.1 but I guess that's true of lots of things. On the bug section of the status page it has "Implement Improved LLVM backend". It might be worth mentioning that in the highlights section if it is still on track. > > > 3. GHC community comments > > > For what it's worth I'll be saying a few words about some of the > development infrastructure efforts that we've been undertaking. This > includes changes in release timing, our CI infrastructure and the status > of the proposal process. My time will be shared with Andrey, who will > share some words about the Hadrian merge and remaining tasks therein. > > > As background we have > > > > * Our Apr 17 status page< > https://ghc.haskell.org/trac/ghc/wiki/Status/Apr17> > > * Our 8.2 release notes< > https://downloads.haskell.org/~ghc/8.2.1/docs/html/users_guide/8.2.1-notes.html > > > > * Our 8.4 status page< > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.4.1> > > What would you put under (1-3)? Anything you'd like to see highlighted? > > > I'll respond with more as I think of them. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Sun Sep 3 14:52:50 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Sun, 3 Sep 2017 16:52:50 +0200 Subject: GHC staus In-Reply-To: References: <873783x4go.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi George, >> Currently it is looking like 8.4.1 will be another cleanup release. >> The only item I can think of beyond those listed on the status page is >> the possibility of progress on #8809. > > > I believe there was a post on the Tweag IO blog suggesting linear types work > might make 8.4.1. If not true it might be worth saying this is coming post > 8.4.1 but I guess that's true of lots of things. That would depend on several things: 1. the target date for GHC 8.4. Major releases used to be spaced by about a year in between them. But there has been discussion of substantially shorter release cycles. Ben, has a target date for GHC 8.4 been set yet? 2. whether -XLinearTypes as proposed (it hasn't yet!) 
gets accepted by the GHC proposals committee. 3. whether, once the branch is ready, it passes review and the release manager deems the branch stable enough to be merged into the next release branch. Great to see interest regarding this in-development language extension! Best, -- Mathieu Boespflug Founder at http://tweag.io. From rae at cs.brynmawr.edu Sun Sep 3 20:24:22 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 3 Sep 2017 21:24:22 +0100 Subject: GHC staus In-Reply-To: References: <873783x4go.fsf@ben-laptop.smart-cactus.org> Message-ID: <42AA4D2B-1AB2-4048-B76F-E63E972421F8@cs.brynmawr.edu> Might implication constraints make it in time for 8.4? And I believe two of Stephanie's students (with some guidance from yours truly) are working on visible kind application and visible type patterns, respectively. But I have no idea about the timeline. I'm not personally planning anything new and shiny. More bugfixes. Finally fixing #8095 (removing coercions). I'm toying with the idea of getting underway at implementing a dependent Core language, but it certainly won't make it for 8.4. Richard > On Sep 3, 2017, at 3:52 PM, Boespflug, Mathieu wrote: > > Hi George, > >>> Currently it is looking like 8.4.1 will be another cleanup release. >>> The only item I can think of beyond those listed on the status page is >>> the possibility of progress on #8809. >> >> >> I believe there was a post on the Tweag IO blog suggesting linear types work >> might make 8.4.1. If not true it might be worth saying this is coming post >> 8.4.1 but I guess that's true of lots of things. > > That would depend on several things: > > 1. the target date for GHC 8.4. Major releases used to be spaced by > about a year in between them. But there has been discussion of > substantially shorter release cycles. Ben, has a target date for GHC > 8.4 been set yet? > 2. whether -XLinearTypes as proposed (it hasn't yet!) gets accepted by > the GHC proposals committee. > 3. whether, once the branch is ready, it passes review and the release > manager deems the branch stable enough to be merged into the next > release branch. > > Great to see interest regarding this in-development language extension! > > Best, > > -- > Mathieu Boespflug > Founder at http://tweag.io. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From wolfgang-it at jeltsch.info Sun Sep 3 20:47:56 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Sun, 03 Sep 2017 23:47:56 +0300 Subject: Type-level generics In-Reply-To: References: Message-ID: <1504471676.26271.5.camel@jeltsch.info> Am Samstag, den 02.09.2017, 22:36 -0400 schrieb Ryan Scott: > If you're willing to go a completely different route from > GHC.Generics, then you might be interested in the paper Generic > Programming with Multiple Parameters [1] (whose existence I just > learned of—thanks to Pedro, the author, for pointing it out to me). It > does present a single Generic class that is capable of working over > any number of type parameters, although the interface presented is > significantly more complex than the current GHC.Generics. Very interesting! I will go and read it. 
All the best, Wolfgang From rae at cs.brynmawr.edu Sun Sep 3 21:16:46 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 3 Sep 2017 22:16:46 +0100 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: Hi Jared, I have nothing in particular to offer in the way of feedback, other than that this sounds great. Thanks for doing this, and good luck finishing up the work! Richard > On Sep 1, 2017, at 5:00 AM, Jared Weakly wrote: > > Hey y'all, > > A quick ToC before I dive right in: > > * What my HSOC project is on > * My progress so far > * Feedback welcome > * What I have left to do > * Theoretical potential improvements > > ----------- > > My HSOC project was on bringing sanity to the GHC performance test-suite. > My blog post on this is here: > https://jaredweakly.com/blog/haskell-summer-of-code/ > The Trac ticket that corresponds to this is here: > https://ghc.haskell.org/trac/ghc/ticket/12758 > The Phabricator ticket for this patch: https://phabricator.haskell.org/D3758 > > The tl;dr of my HSOC project is that GHC's performance tests currently > require the programmer to add in expected numbers manually, updated > them, handhold the testsuite, etc. This is a bit absurd and my > project's overall aim is to reduce the effort required of the > programmer to as close to zero as possible while simultaneously > increasing the potential ability of the testsuite to catch regressions > as much as possible. > > ------------ > > My progress so far: > - I have a few comparison tools in perf_notes.py. These allow people > to compare performance numbers of tests across commits > - I have all the performance numbers generated by running the tests > automatically stored in git notes and referenced by both the > comparison tool and the testsuite > - I have refactored the testsuite to use my new code that pulls > expected numbers automatically from git notes (trivially passing if > the note does not yet exist for that test), then it compares that > expected number with the number that was gotten from running the > testsuite on the latest commit. The comparison passes if it's within a > certain deviation (20% by default, but can be customized by the > programmer). > - I have refactored all of the all.T files to use the new comparison > functions for the performance tests and ensured that this doesn't > break any existing tests. > > ------------ > > > Anyone who wants to checkout the wip/perf-testsuite and try this out > is more than welcome. Feedback on anything is welcome; comments are > appreciated; discussion is welcome, etc. > > ------------- > > > What I have left to do is: > > 1. Finish writing up the documentation > 2. Update the wiki in all the relevant places concerning > additions/modifications to the testsuite and test driver > 3. Make sure everyone is happy with the change (and make small changes > as necessary) > > -------------- > > Possible features and improvements I am thinking about adding in: > * As a stopgap to full integration with performance tracking tools > (such as Gipedia), optionally emitting a test warning with the test > summary if there is any regression detected whatsoever (even if the > number falls within the allowed deviation) > * Some tests, such as T7702, have a somewhat nonsensical regression > percentage. Ideally the testsuite could handle those better. 
I could > potentially build in multiple ways to determine a regression > (percentage, 'above a certain value', 'taking longer than X amount of > time', as potential examples) > * Currently some tests require installing some Haskell packages; they > are skipped if the packages are not installed. I could try to build in > a way to automatically attempt to install all necessary Haskell > packages if someone attempts to run a test that requires them. > (Perhaps using a command such as 'make test exhaustive') > * The performance metric 'peak_megabytes' is sometimes not accurate > enough; I could see if adding something like `RTS -h -i0.01` > automatically to tests that use 'peak_megabytes' would resolve that. > Currently it is a manual debugging step. > > Any thoughts? Comments? Questions? > > Regards, > Jared Weakly > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From oleg.grenrus at iki.fi Mon Sep 4 05:51:39 2017 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Mon, 4 Sep 2017 08:51:39 +0300 Subject: GHC staus In-Reply-To: References: Message-ID: <607a605d-a371-2ea9-06cf-c3a3699cb0a2@iki.fi> I don't know if it's worth mentioning, but GHC-8.4 will bundle text, mtl and parsec. - Oleg On 03.09.2017 16:15, Simon Peyton Jones via ghc-devs wrote: > > Ben, Simon, and ghc-devs > > I have to write slides for the GHC status talk in the Haskell > Implementor’s meeting. > > Usually we have > > 1. Current status (current release) > 2. What’s cooking for the next release > 3. GHC community comments > > As background we have > > * Our Apr 17 status page > > * Our 8.2 release notes > > * Our 8.4 status page > > > What would you put under (1-3)? Anything you’d like to see highlighted? > > Simon > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From iavor.diatchki at gmail.com Mon Sep 4 07:01:40 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 04 Sep 2017 07:01:40 +0000 Subject: GHC staus In-Reply-To: References: Message-ID: Hello, Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. The current state of the code is here: https://github.com/yav/ghc/tree/wip/mutable-fields I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. -Iavor [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > Ben, Simon, and ghc-devs > > I have to write slides for the GHC status talk in the Haskell > Implementor’s meeting. > > Usually we have > > 1. Current status (current release) > 2. What’s cooking for the next release > 3. GHC community comments > > As background we have > > - Our Apr 17 status page > > - Our 8.2 release notes > > - Our 8.4 status page > > > What would you put under (1-3)? Anything you’d like to see highlighted? 
> > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Mon Sep 4 08:37:12 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 4 Sep 2017 09:37:12 +0100 Subject: GHC staus In-Reply-To: References: Message-ID: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> Hi, not sure if this is noteworthy: The following is or will hopefully make(*) it into 8.4 as well - (1) iserv-remote (run iserv on a remote device over the network) - (2) arm / aarch64 linker for elf and mach-o - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin only): - (4*) `-llvmng` new llvm bitcode code gen - (5*) refactored llvm pipeline This essentially is all part of making GHC natively support cross compiling (including support for Template Haskell) to android/iOS/RaspberryPi. I hope to give a lighting talk around those, if I get a slot. Cheers, Moritz Sent from my iPhone > On 4 Sep 2017, at 8:01 AM, Iavor Diatchki wrote: > > Hello, > > Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. > > The current state of the code is here: > https://github.com/yav/ghc/tree/wip/mutable-fields > > I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. > > -Iavor > > [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst > > > >> On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs wrote: >> Ben, Simon, and ghc-devs >> >> I have to write slides for the GHC status talk in the Haskell Implementor’s meeting. >> >> Usually we have >> >> Current status (current release) >> What’s cooking for the next release >> GHC community comments >> As background we have >> >> Our Apr 17 status page >> Our 8.2 release notes >> Our 8.4 status page >> What would you put under (1-3)? Anything you’d like to see highlighted? >> >> >> Simon >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at justtesting.org Mon Sep 4 10:16:56 2017 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Mon, 4 Sep 2017 20:16:56 +1000 Subject: GHC staus In-Reply-To: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> References: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> Message-ID: <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> +1 for a lighting talk on that! 
(You can tell the organisers that ;) Also, we should make sure to meet and talk about cross-compilation and GHC for iOS :) Manuel > Moritz Angermann : > > Hi, > > not sure if this is noteworthy: > > The following is or will hopefully make(*) it > into 8.4 as well > > - (1) iserv-remote (run iserv on a remote device over the network) > - (2) arm / aarch64 linker for elf and mach-o > - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin only): > - (4*) `-llvmng` new llvm bitcode code gen > - (5*) refactored llvm pipeline > > This essentially is all part of making GHC natively > support cross compiling (including support for Template Haskell) to android/iOS/RaspberryPi. > > I hope to give a lighting talk around those, if I get a slot. > > Cheers, > Moritz > > Sent from my iPhone > > On 4 Sep 2017, at 8:01 AM, Iavor Diatchki > wrote: > >> Hello, >> >> Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. >> >> The current state of the code is here: >> https://github.com/yav/ghc/tree/wip/mutable-fields >> >> I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. >> >> -Iavor >> >> [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst >> >> >> >> On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs > wrote: >> Ben, Simon, and ghc-devs >> >> I have to write slides for the GHC status talk in the Haskell Implementor’s meeting. >> >> Usually we have >> >> Current status (current release) >> What’s cooking for the next release >> GHC community comments >> As background we have >> >> Our Apr 17 status page >> Our 8.2 release notes >> Our 8.4 status page >> What would you put under (1-3)? Anything you’d like to see highlighted? >> >> >> Simon >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Mon Sep 4 10:21:18 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Sep 2017 11:21:18 +0100 Subject: GHC staus In-Reply-To: <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> References: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> Message-ID: I have a vested interest in the cross-compilation story because it's very related to Remote GHCi, so I'd love to join in if you have a discussion about that :) On 4 September 2017 at 11:16, Manuel M T Chakravarty wrote: > +1 for a lighting talk on that! 
(You can tell the organisers that ;) > > Also, we should make sure to meet and talk about cross-compilation and GHC > for iOS :) > > Manuel > > Moritz Angermann : > > Hi, > > not sure if this is noteworthy: > > The following is or will hopefully make(*) it > into 8.4 as well > > - (1) iserv-remote (run iserv on a remote device over the network) > - (2) arm / aarch64 linker for elf and mach-o > - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin > only): > - (4*) `-llvmng` new llvm bitcode code gen > - (5*) refactored llvm pipeline > > This essentially is all part of making GHC natively > support cross compiling (including support for Template Haskell) to > android/iOS/RaspberryPi. > > I hope to give a lighting talk around those, if I get a slot. > > Cheers, > Moritz > > Sent from my iPhone > > On 4 Sep 2017, at 8:01 AM, Iavor Diatchki > wrote: > > Hello, > > Trevor Elliott and I have been slowly working on implementing Simon M's > "Mutable Constructor Fields" proposal [1]. > > The current state of the code is here: > https://github.com/yav/ghc/tree/wip/mutable-fields > > I am not sure if this would be ready in time for 8.4 as I don't know what > the time-line looks like, and also, the actual proposal is still in the > process of being reviewed by the GHC committee. > > -Iavor > > [1] https://github.com/simonmar/ghc-proposals/blob/ > mutable-fields/proposals/0000-mutable-fields.rst > > > > On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > >> Ben, Simon, and ghc-devs >> >> I have to write slides for the GHC status talk in the Haskell >> Implementor’s meeting. >> >> Usually we have >> >> 1. Current status (current release) >> 2. What’s cooking for the next release >> 3. GHC community comments >> >> As background we have >> >> - Our Apr 17 status page >> >> - Our 8.2 release notes >> >> - Our 8.4 status page >> >> >> What would you put under (1-3)? Anything you’d like to see highlighted? >> >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Mon Sep 4 10:22:35 2017 From: me at ara.io (Ara Adkins) Date: Mon, 4 Sep 2017 11:22:35 +0100 Subject: GHC staus In-Reply-To: <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> References: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> Message-ID: I’m definitely interested in hearing more about cross compilation for iOS. Might actually make an idea I’ve had brewing into something feasible! _ara > On 4 Sep 2017, at 11:16, Manuel M T Chakravarty wrote: > > +1 for a lighting talk on that! 
(You can tell the organisers that ;) > > Also, we should make sure to meet and talk about cross-compilation and GHC for iOS :) > > Manuel > >> Moritz Angermann : >> >> Hi, >> >> not sure if this is noteworthy: >> >> The following is or will hopefully make(*) it >> into 8.4 as well >> >> - (1) iserv-remote (run iserv on a remote device over the network) >> - (2) arm / aarch64 linker for elf and mach-o >> - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin only): >> - (4*) `-llvmng` new llvm bitcode code gen >> - (5*) refactored llvm pipeline >> >> This essentially is all part of making GHC natively >> support cross compiling (including support for Template Haskell) to android/iOS/RaspberryPi. >> >> I hope to give a lighting talk around those, if I get a slot. >> >> Cheers, >> Moritz >> >> Sent from my iPhone >> >>> On 4 Sep 2017, at 8:01 AM, Iavor Diatchki wrote: >>> >>> Hello, >>> >>> Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. >>> >>> The current state of the code is here: >>> https://github.com/yav/ghc/tree/wip/mutable-fields >>> >>> I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. >>> >>> -Iavor >>> >>> [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst >>> >>> >>> >>>> On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs wrote: >>>> Ben, Simon, and ghc-devs >>>> >>>> I have to write slides for the GHC status talk in the Haskell Implementor’s meeting. >>>> >>>> Usually we have >>>> >>>> Current status (current release) >>>> What’s cooking for the next release >>>> GHC community comments >>>> As background we have >>>> >>>> Our Apr 17 status page >>>> Our 8.2 release notes >>>> Our 8.4 status page >>>> What would you put under (1-3)? Anything you’d like to see highlighted? >>>> >>>> >>>> Simon >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Mon Sep 4 10:30:22 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 4 Sep 2017 11:30:22 +0100 Subject: GHC staus In-Reply-To: References: Message-ID: One thing for 8.4: there have been a series of performance improvements in ghc --make and GHCi, with some quite dramatic improvements on very large programs (1K+ modules). Bartosz can supply more details. On 3 September 2017 at 14:15, Simon Peyton Jones wrote: > Ben, Simon, and ghc-devs > > I have to write slides for the GHC status talk in the Haskell > Implementor’s meeting. > > Usually we have > > 1. Current status (current release) > 2. What’s cooking for the next release > 3. 
GHC community comments > > As background we have > > - Our Apr 17 status page > > - Our 8.2 release notes > > - Our 8.4 status page > > > What would you put under (1-3)? Anything you’d like to see highlighted? > > Simon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Mon Sep 4 10:39:00 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Mon, 4 Sep 2017 11:39:00 +0100 Subject: GHC staus In-Reply-To: References: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> Message-ID: <7A145B62-79E6-4B54-84C1-ECD5F4180D2D@gmail.com> I'm around all week. I actually do have remote ghci on my todo list ;-) Cheers, Moritz Sent from my iPhone > On 4 Sep 2017, at 11:21 AM, Simon Marlow wrote: > > I have a vested interest in the cross-compilation story because it's very related to Remote GHCi, so I'd love to join in if you have a discussion about that :) > >> On 4 September 2017 at 11:16, Manuel M T Chakravarty wrote: >> +1 for a lighting talk on that! (You can tell the organisers that ;) >> >> Also, we should make sure to meet and talk about cross-compilation and GHC for iOS :) >> >> Manuel >> >>> Moritz Angermann : >>> >>> Hi, >>> >>> not sure if this is noteworthy: >>> >>> The following is or will hopefully make(*) it >>> into 8.4 as well >>> >>> - (1) iserv-remote (run iserv on a remote device over the network) >>> - (2) arm / aarch64 linker for elf and mach-o >>> - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin only): >>> - (4*) `-llvmng` new llvm bitcode code gen >>> - (5*) refactored llvm pipeline >>> >>> This essentially is all part of making GHC natively >>> support cross compiling (including support for Template Haskell) to android/iOS/RaspberryPi. >>> >>> I hope to give a lighting talk around those, if I get a slot. >>> >>> Cheers, >>> Moritz >>> >>> Sent from my iPhone >>> >>>> On 4 Sep 2017, at 8:01 AM, Iavor Diatchki wrote: >>>> >>>> Hello, >>>> >>>> Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. >>>> >>>> The current state of the code is here: >>>> https://github.com/yav/ghc/tree/wip/mutable-fields >>>> >>>> I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. >>>> >>>> -Iavor >>>> >>>> [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst >>>> >>>> >>>> >>>>> On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs wrote: >>>>> Ben, Simon, and ghc-devs >>>>> >>>>> I have to write slides for the GHC status talk in the Haskell Implementor’s meeting. >>>>> >>>>> Usually we have >>>>> >>>>> Current status (current release) >>>>> What’s cooking for the next release >>>>> GHC community comments >>>>> As background we have >>>>> >>>>> Our Apr 17 status page >>>>> Our 8.2 release notes >>>>> Our 8.4 status page >>>>> What would you put under (1-3)? Anything you’d like to see highlighted? 
>>>>> >>>>> >>>>> Simon >>>>> >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at justtesting.org Mon Sep 4 11:04:58 2017 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Mon, 4 Sep 2017 21:04:58 +1000 Subject: GHC staus In-Reply-To: <7A145B62-79E6-4B54-84C1-ECD5F4180D2D@gmail.com> References: <386095DC-FB4B-44B5-B909-C501B6D1BC95@gmail.com> <1083864F-4E65-421D-9B52-4F5D3A408B6E@justtesting.org> <7A145B62-79E6-4B54-84C1-ECD5F4180D2D@gmail.com> Message-ID: I’ll only arrive Wednesday during the day, so maybe sometime second half of the week? > Moritz Angermann : > > I'm around all week. I actually do have remote ghci on my todo list ;-) > > Cheers, > Moritz > Sent from my iPhone > > On 4 Sep 2017, at 11:21 AM, Simon Marlow > wrote: > >> I have a vested interest in the cross-compilation story because it's very related to Remote GHCi, so I'd love to join in if you have a discussion about that :) >> >> On 4 September 2017 at 11:16, Manuel M T Chakravarty > wrote: >> +1 for a lighting talk on that! (You can tell the organisers that ;) >> >> Also, we should make sure to meet and talk about cross-compilation and GHC for iOS :) >> >> Manuel >> >>> Moritz Angermann >: >>> >>> Hi, >>> >>> not sure if this is noteworthy: >>> >>> The following is or will hopefully make(*) it >>> into 8.4 as well >>> >>> - (1) iserv-remote (run iserv on a remote device over the network) >>> - (2) arm / aarch64 linker for elf and mach-o >>> - (3*) `-staticlib` support for Linux and BSD derivatives (was darwin only): >>> - (4*) `-llvmng` new llvm bitcode code gen >>> - (5*) refactored llvm pipeline >>> >>> This essentially is all part of making GHC natively >>> support cross compiling (including support for Template Haskell) to android/iOS/RaspberryPi. >>> >>> I hope to give a lighting talk around those, if I get a slot. >>> >>> Cheers, >>> Moritz >>> >>> Sent from my iPhone >>> >>> On 4 Sep 2017, at 8:01 AM, Iavor Diatchki > wrote: >>> >>>> Hello, >>>> >>>> Trevor Elliott and I have been slowly working on implementing Simon M's "Mutable Constructor Fields" proposal [1]. >>>> >>>> The current state of the code is here: >>>> https://github.com/yav/ghc/tree/wip/mutable-fields >>>> >>>> I am not sure if this would be ready in time for 8.4 as I don't know what the time-line looks like, and also, the actual proposal is still in the process of being reviewed by the GHC committee. >>>> >>>> -Iavor >>>> >>>> [1] https://github.com/simonmar/ghc-proposals/blob/mutable-fields/proposals/0000-mutable-fields.rst >>>> >>>> >>>> >>>> On Sun, Sep 3, 2017 at 2:15 PM Simon Peyton Jones via ghc-devs > wrote: >>>> Ben, Simon, and ghc-devs >>>> >>>> I have to write slides for the GHC status talk in the Haskell Implementor’s meeting. 
>>>> >>>> Usually we have >>>> >>>> Current status (current release) >>>> What’s cooking for the next release >>>> GHC community comments >>>> As background we have >>>> >>>> Our Apr 17 status page >>>> Our 8.2 release notes >>>> Our 8.4 status page >>>> What would you put under (1-3)? Anything you’d like to see highlighted? >>>> >>>> >>>> Simon >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Tue Sep 5 05:02:49 2017 From: lonetiger at gmail.com (Phyx) Date: Tue, 05 Sep 2017 05:02:49 +0000 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: Hi Jared, First off, thanks for all the hard work on this. I checked out your branch and made a run, I noticed at the end it had Framework failures: . ./perf/compiler/all.T [] (unexpected indent (, line 298)) so I assume none of the perf tests were run? Though I do see a .git/refs/notes/perf, so I assume your ref is is perf? Doing a git notes --ref perf show I see somethings were collected at some point local T3924 normal bytes allocated 47064 local haddock.base normal bytes allocated 18427047160 local haddock.Cabal normal bytes allocated 15863910848 local haddock.compiler normal bytes allocated 50656428952 which brings me up to my first question, I'm guessing the number here is the number of bytes allocated for the test? Is there a way for me to see what the maximum deviation the test allows is and how far I am from it? Do I just get the information like before only when a test fails? How does that look like? Same as before? It's also not entirely clear to be what perf_notes.py can be used, is it just an infrastructure tool? or is it something you foresee as useful for a developer? lastly, how often do you update notes? It's probably too late for this now, but git, especially msys git can be especially slow, so I would have liked the notes to be updated in batches to not slow down the testsuite run on Windows. Which brings me to my next question, how resilient are you to failures updating git? some IDE/environments like vscode automatically issue git operations in the background. so git may be busy when you try to update and the operation would fail saying the repo is locked. Does your new system recover from such failures? Also how do you deal with platform discrepancies? We've had in the past tests that behave radically different on different platforms, so we've also historically had the ability to record a platform specific value. 
Thanks, Tamar On Fri, Sep 1, 2017, 05:01 Jared Weakly wrote: > Hey y'all, > > A quick ToC before I dive right in: > > * What my HSOC project is on > * My progress so far > * Feedback welcome > * What I have left to do > * Theoretical potential improvements > > ----------- > > My HSOC project was on bringing sanity to the GHC performance test-suite. > My blog post on this is here: > https://jaredweakly.com/blog/haskell-summer-of-code/ > The Trac ticket that corresponds to this is here: > https://ghc.haskell.org/trac/ghc/ticket/12758 > The Phabricator ticket for this patch: > https://phabricator.haskell.org/D3758 > > The tl;dr of my HSOC project is that GHC's performance tests currently > require the programmer to add in expected numbers manually, updated > them, handhold the testsuite, etc. This is a bit absurd and my > project's overall aim is to reduce the effort required of the > programmer to as close to zero as possible while simultaneously > increasing the potential ability of the testsuite to catch regressions > as much as possible. > > ------------ > > My progress so far: > - I have a few comparison tools in perf_notes.py. These allow people > to compare performance numbers of tests across commits > - I have all the performance numbers generated by running the tests > automatically stored in git notes and referenced by both the > comparison tool and the testsuite > - I have refactored the testsuite to use my new code that pulls > expected numbers automatically from git notes (trivially passing if > the note does not yet exist for that test), then it compares that > expected number with the number that was gotten from running the > testsuite on the latest commit. The comparison passes if it's within a > certain deviation (20% by default, but can be customized by the > programmer). > - I have refactored all of the all.T files to use the new comparison > functions for the performance tests and ensured that this doesn't > break any existing tests. > > ------------ > > > Anyone who wants to checkout the wip/perf-testsuite and try this out > is more than welcome. Feedback on anything is welcome; comments are > appreciated; discussion is welcome, etc. > > ------------- > > > What I have left to do is: > > 1. Finish writing up the documentation > 2. Update the wiki in all the relevant places concerning > additions/modifications to the testsuite and test driver > 3. Make sure everyone is happy with the change (and make small changes > as necessary) > > -------------- > > Possible features and improvements I am thinking about adding in: > * As a stopgap to full integration with performance tracking tools > (such as Gipedia), optionally emitting a test warning with the test > summary if there is any regression detected whatsoever (even if the > number falls within the allowed deviation) > * Some tests, such as T7702, have a somewhat nonsensical regression > percentage. Ideally the testsuite could handle those better. I could > potentially build in multiple ways to determine a regression > (percentage, 'above a certain value', 'taking longer than X amount of > time', as potential examples) > * Currently some tests require installing some Haskell packages; they > are skipped if the packages are not installed. I could try to build in > a way to automatically attempt to install all necessary Haskell > packages if someone attempts to run a test that requires them. 
> (Perhaps using a command such as 'make test exhaustive') > * The performance metric 'peak_megabytes' is sometimes not accurate > enough; I could see if adding something like `RTS -h -i0.01` > automatically to tests that use 'peak_megabytes' would resolve that. > Currently it is a manual debugging step. > > Any thoughts? Comments? Questions? > > Regards, > Jared Weakly > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From captaintrunky at gmail.com Tue Sep 5 12:42:28 2017 From: captaintrunky at gmail.com (Sergey Bykov) Date: Tue, 5 Sep 2017 20:42:28 +0800 Subject: A newcomer: how to start Message-ID: Hi, my name is Sergey and I'm recently joined to this mailing list. I would like to help GHC's community and start contributing to the GHC project. Could someone recommend a few tasks, which seems to be suitable for a beginner? About myself: I'm pretty experienced Research Scientist/Software Developer in such areas like discrete optimization and machine learning. Thanks in advance! From me at ara.io Tue Sep 5 12:46:38 2017 From: me at ara.io (Ara Adkins) Date: Tue, 5 Sep 2017 13:46:38 +0100 Subject: A newcomer: how to start In-Reply-To: References: Message-ID: Hey Sergey, I’d honestly head straight for the GHC newcomers guide [0]! Once you’re familiar with all of that just head for Trac and look for the low hanging fruit tasks! Most importantly, don’t forget to ask questions! Nobody here bites! Best, _ara [0] https://ghc.haskell.org/trac/ghc/wiki/Newcomers > On 5 Sep 2017, at 13:42, Sergey Bykov wrote: > > Hi, > > my name is Sergey and I'm recently joined to this mailing list. I would like to help GHC's community and start contributing to the GHC project. Could someone recommend a few tasks, which seems to be suitable for a beginner? > > About myself: > > I'm pretty experienced Research Scientist/Software Developer in such areas like discrete optimization and machine learning. > > > Thanks in advance! > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From jweakly at pdx.edu Tue Sep 5 17:39:25 2017 From: jweakly at pdx.edu (Jared Weakly) Date: Tue, 5 Sep 2017 10:39:25 -0700 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: Hi Tamar, That framework failure is due to a somewhat embarrassing error that I thought I had caught earlier; line 298 shouldn't have existed (it was a small mistake from converting the all.T file from using the old function to using the new collect_stats function. I have fixed this and it will be pushed by the time you read this email. That being said, the individual tests or units are very isolated and a framework failure simply means that unit didn't get run; in this case it means that entire all.T file didn't get run since the error was in reading the file, but the rest of the files should've had their performance tests recorded properly. The .git/refs/notes/perf is an implementation detail. 
Git notes have the concept of namespaces; so, in order to avoid cluttering up a global namespace in git notes with stuff only the performance tests will use, all the performance metrics are stored in the namespace 'perf'. This format of the git notes is mentioned in the code for the testsuite but I will make this more visible in the README and other documentation. The format of the git notes is: $test-env $test_name $test_way $metric_measured $value_collected (separated by tabs) The maximum deviation the test allows is inside the respective all.T's; additionally, if you set the verbosity level of the test-suite to a value >= 4, you will see the expected value, allowed deviation, lower bound, upper bound, the actual measured value, and (if the test fell outside the bounds) how much the actual value deviated from the expected value. This information will also print if the test falls outside of the allowed bounds. perf_notes.py exists as both an internal library and a measurement tool (hopefully to be useful to developers). You can give the tool several commits and it will give you a comparison of the union of all the tests in those commits, with an output very similar in style to noFib. I imagine this will be useful mostly to people who want to improve the performance of the compiler so they can see which tests have regressed the most over time (or which have improved the most over time); but as it works over commits, it can also be useful for a developer wanting to know if they've made a measurable difference with their patch. The notes are updated every time the testsuite is ran. However they are updated only at the very end of the execution of the testsuite in a single command (the information is collected in a python datastructure which is turned into a string and given to git notes). This behavior means that if the testsuite is ran more than once in-between commits that 'duplicate values' will exist in the git notes. I'm not quite sure how to deal with this yet; I am considering just grabbing the latest value if multiples exist. This also means you can test just one test and then run other tests and have those values added into the git notes without losing your older values which is why the behavior is kept this way. (I will make sure this is more prominent in the docs somewhere). The note update is done using python's subprocess library. I have no idea how resilient that is to git failure; I'd imagine that if it was busy it would just silently fail to update. Fortunately, the update process is as close to atomic as one can get. I'll see if I can figure out a way to force a repo lock to test this out. I'm open to suggestions as to how to deal with this better and I'll also google around and see if anyone has a good solution. Platform discrepancies are completely sidestepped because of the way git notes work. The performance metrics are entirely local and stay on your computer; they won't be pushed or shared with any other users. That means that the performance numbers are completely tailored to your platform so there is effectively an 0% margin of "OS-related" error that needs to be accounted for. The collect_stats function is very much designed to be declarative and "set it and forget it". As such, the need to even record values at all is obsoleted (one of the main motivators of this project in the first place). Hopefully this answers some questions; I'll make sure this sort of information is available somewhere so that later users can find these answers again. 
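To make the record format and the comparison rule concrete, here is a rough, self-contained sketch (this is only an illustration of the scheme described above, not the actual perf_notes.py or test-driver code; the helper names are invented for the example):

    # Illustrative sketch only -- not the real perf_notes.py / driver code.
    # It shows the shape of the scheme described above: one tab-separated
    # record per test in the 'perf' notes namespace, and a comparison that
    # passes when the new number is within the allowed deviation
    # (20% by default, overridable per test).
    import subprocess

    NOTES_REF = 'perf'

    def read_perf_notes(commit='HEAD'):
        """Parse '$test-env $test_name $test_way $metric $value' records
           (tab-separated) attached to the given commit."""
        out = subprocess.check_output(
            ['git', 'notes', '--ref', NOTES_REF, 'show', commit],
            universal_newlines=True)
        records = {}
        for line in out.splitlines():
            fields = line.split('\t')
            if len(fields) != 5:
                continue                      # ignore anything malformed
            env, name, way, metric, value = fields
            records[(env, name, way, metric)] = int(value)
        return records

    def within_deviation(expected, actual, allowed=0.2):
        """True when 'actual' falls inside expected +/- the allowed fraction."""
        return expected * (1 - allowed) <= actual <= expected * (1 + allowed)

    # e.g. compare a fresh measurement against the recorded baseline:
    #   baseline = read_perf_notes('HEAD~1')
    #   ok = within_deviation(
    #            baseline[('local', 'T3924', 'normal', 'bytes allocated')],
    #            new_value)

The real driver keeps these numbers in a Python structure while the tests run and only writes them out with a single git notes command at the very end, as described above.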
Thanks for your thoughts! They were very helpful. Regards, Jared On Mon, Sep 4, 2017 at 10:02 PM, Phyx wrote: > Hi Jared, > > First off, thanks for all the hard work on this. I checked out your branch > and made a run, I noticed at the end it had > > Framework failures: > . ./perf/compiler/all.T [] (unexpected indent (, line 298)) > > so I assume none of the perf tests were run? > > Though I do see a .git/refs/notes/perf, so I assume your ref is is perf? > > Doing a git notes --ref perf show I see somethings were collected at some > point > > local T3924 normal bytes allocated 47064 > local haddock.base normal bytes allocated 18427047160 > local haddock.Cabal normal bytes allocated 15863910848 > local haddock.compiler normal bytes allocated 50656428952 > > which brings me up to my first question, I'm guessing the number here is the > number of bytes allocated for the test? Is there a way for me to see > what the maximum deviation the test allows is and how far I am from it? Do I > just get the information like before only when a test fails? How does that > look like? Same as before? > > It's also not entirely clear to be what perf_notes.py can be used, is it > just an infrastructure tool? or is it something you foresee as useful for a > developer? > > lastly, how often do you update notes? It's probably too late for this now, > but git, especially msys git can be especially slow, so I would have liked > the notes to be updated in batches to not slow down the testsuite run on > Windows. > > Which brings me to my next question, how resilient are you to failures > updating git? some IDE/environments like vscode automatically issue git > operations in the background. so git may be busy when you try to update and > the operation would fail saying the repo is locked. Does your new system > recover from such failures? > > Also how do you deal with platform discrepancies? We've had in the past > tests that behave radically different on different platforms, so we've also > historically had the ability to record a platform specific value. > > Thanks, > Tamar > > On Fri, Sep 1, 2017, 05:01 Jared Weakly wrote: >> >> Hey y'all, >> >> A quick ToC before I dive right in: >> >> * What my HSOC project is on >> * My progress so far >> * Feedback welcome >> * What I have left to do >> * Theoretical potential improvements >> >> ----------- >> >> My HSOC project was on bringing sanity to the GHC performance test-suite. >> My blog post on this is here: >> https://jaredweakly.com/blog/haskell-summer-of-code/ >> The Trac ticket that corresponds to this is here: >> https://ghc.haskell.org/trac/ghc/ticket/12758 >> The Phabricator ticket for this patch: >> https://phabricator.haskell.org/D3758 >> >> The tl;dr of my HSOC project is that GHC's performance tests currently >> require the programmer to add in expected numbers manually, updated >> them, handhold the testsuite, etc. This is a bit absurd and my >> project's overall aim is to reduce the effort required of the >> programmer to as close to zero as possible while simultaneously >> increasing the potential ability of the testsuite to catch regressions >> as much as possible. >> >> ------------ >> >> My progress so far: >> - I have a few comparison tools in perf_notes.py. 
These allow people >> to compare performance numbers of tests across commits >> - I have all the performance numbers generated by running the tests >> automatically stored in git notes and referenced by both the >> comparison tool and the testsuite >> - I have refactored the testsuite to use my new code that pulls >> expected numbers automatically from git notes (trivially passing if >> the note does not yet exist for that test), then it compares that >> expected number with the number that was gotten from running the >> testsuite on the latest commit. The comparison passes if it's within a >> certain deviation (20% by default, but can be customized by the >> programmer). >> - I have refactored all of the all.T files to use the new comparison >> functions for the performance tests and ensured that this doesn't >> break any existing tests. >> >> ------------ >> >> >> Anyone who wants to checkout the wip/perf-testsuite and try this out >> is more than welcome. Feedback on anything is welcome; comments are >> appreciated; discussion is welcome, etc. >> >> ------------- >> >> >> What I have left to do is: >> >> 1. Finish writing up the documentation >> 2. Update the wiki in all the relevant places concerning >> additions/modifications to the testsuite and test driver >> 3. Make sure everyone is happy with the change (and make small changes >> as necessary) >> >> -------------- >> >> Possible features and improvements I am thinking about adding in: >> * As a stopgap to full integration with performance tracking tools >> (such as Gipedia), optionally emitting a test warning with the test >> summary if there is any regression detected whatsoever (even if the >> number falls within the allowed deviation) >> * Some tests, such as T7702, have a somewhat nonsensical regression >> percentage. Ideally the testsuite could handle those better. I could >> potentially build in multiple ways to determine a regression >> (percentage, 'above a certain value', 'taking longer than X amount of >> time', as potential examples) >> * Currently some tests require installing some Haskell packages; they >> are skipped if the packages are not installed. I could try to build in >> a way to automatically attempt to install all necessary Haskell >> packages if someone attempts to run a test that requires them. >> (Perhaps using a command such as 'make test exhaustive') >> * The performance metric 'peak_megabytes' is sometimes not accurate >> enough; I could see if adding something like `RTS -h -i0.01` >> automatically to tests that use 'peak_megabytes' would resolve that. >> Currently it is a manual debugging step. >> >> Any thoughts? Comments? Questions? >> >> Regards, >> Jared Weakly >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From davean at xkcd.com Thu Sep 7 02:13:09 2017 From: davean at xkcd.com (davean) Date: Wed, 6 Sep 2017 22:13:09 -0400 Subject: [GHC] #13716: Move CI to Jenkins In-Reply-To: <061.6517cbd2ef77768a965a148a910128cf@haskell.org> References: <046.48e2facaae8e965ca7d5990f8113aa95@haskell.org> <061.6517cbd2ef77768a965a148a910128cf@haskell.org> Message-ID: Thats actually how this project started. 
On Wed, Sep 6, 2017 at 7:05 PM, GHC wrote: > #13716: Move CI to Jenkins > -------------------------------------+---------------------- > --------------- > Reporter: bgamari | Owner: (none) > Type: task | Status: new > Priority: normal | Milestone: > Component: None | Version: 8.0.1 > Resolution: | Keywords: > Operating System: Unknown/Multiple | Architecture: > | Unknown/Multiple > Type of failure: None/Unknown | Test Case: > Blocked By: 13897 | Blocking: > Related Tickets: #11958 | Differential Rev(s): > Wiki Page: | > -------------------------------------+---------------------- > --------------- > > Comment (by refold): > > Is there any chance Cabal/cabal-install could piggyback on this effort? > We've outgrown free services like Travis and AppVeyor and could use > support for additional platforms (OpenBSD and maybe ARM) too. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Sep 7 06:44:24 2017 From: lonetiger at gmail.com (Phyx) Date: Thu, 07 Sep 2017 06:44:24 +0000 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: Hi Jared, On Tue, Sep 5, 2017, 19:39 Jared Weakly wrote: > Hi Tamar, > > That framework failure is due to a somewhat embarrassing error that I > thought I had caught earlier; line 298 shouldn't have existed (it was > a small mistake from converting the all.T file from using the old > function to using the new collect_stats function. I have fixed this > and it will be pushed by the time you read this email. That being > said, the individual tests or units are very isolated and a framework > failure simply means that unit didn't get run; in this case it means > that entire all.T file didn't get run since the error was in reading > the file, but the rest of the files should've had their performance > tests recorded properly. > > The .git/refs/notes/perf is an implementation detail. Git notes have > the concept of namespaces; so, in order to avoid cluttering up a > global namespace in git notes with stuff only the performance tests > will use, all the performance metrics are stored in the namespace > 'perf'. > > This format of the git notes is mentioned in the code for the > testsuite but I will make this more visible in the README and other > documentation. The format of the git notes is: > $test-env $test_name $test_way $metric_measured $value_collected > (separated by tabs) > > The maximum deviation the test allows is inside the respective > all.T's; additionally, if you set the verbosity level of the > test-suite to a value >= 4, you will see the expected value, allowed > deviation, lower bound, upper bound, the actual measured value, and > (if the test fell outside the bounds) how much the actual value > deviated from the expected value. This information will also print if > the test falls outside of the allowed bounds. > > perf_notes.py exists as both an internal library and a measurement > tool (hopefully to be useful to developers). You can give the tool > several commits and it will give you a comparison of the union of all > the tests in those commits, with an output very similar in style to > noFib. 
I imagine this will be useful mostly to people who want to > improve the performance of the compiler so they can see which tests > have regressed the most over time (or which have improved the most > over time); but as it works over commits, it can also be useful for a > developer wanting to know if they've made a measurable difference with > their patch. > > The notes are updated every time the testsuite is ran. However they > are updated only at the very end of the execution of the testsuite in > a single command (the information is collected in a python > datastructure which is turned into a string and given to git notes). > This behavior means that if the testsuite is ran more than once > in-between commits that 'duplicate values' will exist in the git > notes. I'm not quite sure how to deal with this yet; I am considering > just grabbing the latest value if multiples exist. This also means you > can test just one test and then run other tests and have those values > added into the git notes without losing your older values which is why > the behavior is kept this way. (I will make sure this is more > prominent in the docs somewhere). > > The note update is done using python's subprocess library. I have no > idea how resilient that is to git failure; I'd imagine that if it was > busy it would just silently fail to update. Fortunately, the update > process is as close to atomic as one can get. I'll see if I can figure > out a way to force a repo lock to test this out. I'm open to > suggestions as to how to deal with this better and I'll also google > around and see if anyone has a good solution. > I think you can just wait and retry and do so a specified max number of r times. That should be good enough for most cases. Unless the process on the other end has died but then the user needs to clean up the lock firsts. > Platform discrepancies are completely sidestepped because of the way > git notes work. The performance metrics are entirely local and stay on > your computer; they won't be pushed or shared with any other users. > That means that the performance numbers are completely tailored to > your platform so there is effectively an 0% margin of "OS-related" > error that needs to be accounted for. The collect_stats function is > very much designed to be declarative and "set it and forget it". As > such, the need to even record values at all is obsoleted (one of the > main motivators of this project in the first place). > This I don't quite understand. If I get this right. It means I will now always have to run the full performance benchmark suite for each change I have twice? Before and after locally? I had thought the notes would be pushed as I would find it useful to have the perfect history locally if I wanted to. Performance by its very nature is very platform specific, I feel that this change makes it harder for the platforms we don't have a CI for to run benchmarks. So basically only Linux. This would be unfortunate as it would mean we would effectively stop tracking performance on e.g. Windows and Mac OS since the current implementation doesn't allow for the data to live together in the same repo. > > Hopefully this answers some questions; I'll make sure this sort of > information is available somewhere so that later users can find these > answers again. Thanks for your thoughts! They were very helpful. > > Regards, > Jared > > On Mon, Sep 4, 2017 at 10:02 PM, Phyx wrote: > > Hi Jared, > > > > First off, thanks for all the hard work on this. 
I checked out your > branch > > and made a run, I noticed at the end it had > > > > Framework failures: > > . ./perf/compiler/all.T [] (unexpected indent (, line 298)) > > > > so I assume none of the perf tests were run? > > > > Though I do see a .git/refs/notes/perf, so I assume your ref is is perf? > > > > Doing a git notes --ref perf show I see somethings were collected at some > > point > > > > local T3924 normal bytes allocated 47064 > > local haddock.base normal bytes allocated 18427047160 > > local haddock.Cabal normal bytes allocated 15863910848 > > local haddock.compiler normal bytes allocated 50656428952 > > > > which brings me up to my first question, I'm guessing the number here is > the > > number of bytes allocated for the test? Is there a way for me to see > > what the maximum deviation the test allows is and how far I am from it? > Do I > > just get the information like before only when a test fails? How does > that > > look like? Same as before? > > > > It's also not entirely clear to be what perf_notes.py can be used, is it > > just an infrastructure tool? or is it something you foresee as useful > for a > > developer? > > > > lastly, how often do you update notes? It's probably too late for this > now, > > but git, especially msys git can be especially slow, so I would have > liked > > the notes to be updated in batches to not slow down the testsuite run on > > Windows. > > > > Which brings me to my next question, how resilient are you to failures > > updating git? some IDE/environments like vscode automatically issue git > > operations in the background. so git may be busy when you try to update > and > > the operation would fail saying the repo is locked. Does your new system > > recover from such failures? > > > > Also how do you deal with platform discrepancies? We've had in the past > > tests that behave radically different on different platforms, so we've > also > > historically had the ability to record a platform specific value. > > > > Thanks, > > Tamar > > > > On Fri, Sep 1, 2017, 05:01 Jared Weakly wrote: > >> > >> Hey y'all, > >> > >> A quick ToC before I dive right in: > >> > >> * What my HSOC project is on > >> * My progress so far > >> * Feedback welcome > >> * What I have left to do > >> * Theoretical potential improvements > >> > >> ----------- > >> > >> My HSOC project was on bringing sanity to the GHC performance > test-suite. > >> My blog post on this is here: > >> https://jaredweakly.com/blog/haskell-summer-of-code/ > >> The Trac ticket that corresponds to this is here: > >> https://ghc.haskell.org/trac/ghc/ticket/12758 > >> The Phabricator ticket for this patch: > >> https://phabricator.haskell.org/D3758 > >> > >> The tl;dr of my HSOC project is that GHC's performance tests currently > >> require the programmer to add in expected numbers manually, updated > >> them, handhold the testsuite, etc. This is a bit absurd and my > >> project's overall aim is to reduce the effort required of the > >> programmer to as close to zero as possible while simultaneously > >> increasing the potential ability of the testsuite to catch regressions > >> as much as possible. > >> > >> ------------ > >> > >> My progress so far: > >> - I have a few comparison tools in perf_notes.py. 
These allow people > >> to compare performance numbers of tests across commits > >> - I have all the performance numbers generated by running the tests > >> automatically stored in git notes and referenced by both the > >> comparison tool and the testsuite > >> - I have refactored the testsuite to use my new code that pulls > >> expected numbers automatically from git notes (trivially passing if > >> the note does not yet exist for that test), then it compares that > >> expected number with the number that was gotten from running the > >> testsuite on the latest commit. The comparison passes if it's within a > >> certain deviation (20% by default, but can be customized by the > >> programmer). > >> - I have refactored all of the all.T files to use the new comparison > >> functions for the performance tests and ensured that this doesn't > >> break any existing tests. > >> > >> ------------ > >> > >> > >> Anyone who wants to checkout the wip/perf-testsuite and try this out > >> is more than welcome. Feedback on anything is welcome; comments are > >> appreciated; discussion is welcome, etc. > >> > >> ------------- > >> > >> > >> What I have left to do is: > >> > >> 1. Finish writing up the documentation > >> 2. Update the wiki in all the relevant places concerning > >> additions/modifications to the testsuite and test driver > >> 3. Make sure everyone is happy with the change (and make small changes > >> as necessary) > >> > >> -------------- > >> > >> Possible features and improvements I am thinking about adding in: > >> * As a stopgap to full integration with performance tracking tools > >> (such as Gipedia), optionally emitting a test warning with the test > >> summary if there is any regression detected whatsoever (even if the > >> number falls within the allowed deviation) > >> * Some tests, such as T7702, have a somewhat nonsensical regression > >> percentage. Ideally the testsuite could handle those better. I could > >> potentially build in multiple ways to determine a regression > >> (percentage, 'above a certain value', 'taking longer than X amount of > >> time', as potential examples) > >> * Currently some tests require installing some Haskell packages; they > >> are skipped if the packages are not installed. I could try to build in > >> a way to automatically attempt to install all necessary Haskell > >> packages if someone attempts to run a test that requires them. > >> (Perhaps using a command such as 'make test exhaustive') > >> * The performance metric 'peak_megabytes' is sometimes not accurate > >> enough; I could see if adding something like `RTS -h -i0.01` > >> automatically to tests that use 'peak_megabytes' would resolve that. > >> Currently it is a manual debugging step. > >> > >> Any thoughts? Comments? Questions? > >> > >> Regards, > >> Jared Weakly > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.kjeldaas at gmail.com Thu Sep 7 07:24:34 2017 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Thu, 7 Sep 2017 09:24:34 +0200 Subject: [GHC] #13716: Move CI to Jenkins In-Reply-To: References: <046.48e2facaae8e965ca7d5990f8113aa95@haskell.org> <061.6517cbd2ef77768a965a148a910128cf@haskell.org> Message-ID: I find that using gitlab with a docker swarm is a good compromise. 
Then gitlab runs all the infrastructure for the UI part, access control ++, and all you have to do is to run the gitlab-runner docker image and register it in gitlab. This also brings other options such as having one run from scratch and other "quick" runs where a half-built docker image is updated. I use docker hub for syncing images between the registered docker runners. Alexander On Thu, Sep 7, 2017 at 4:13 AM, davean wrote: > Thats actually how this project started. > > On Wed, Sep 6, 2017 at 7:05 PM, GHC wrote: > >> #13716: Move CI to Jenkins >> -------------------------------------+---------------------- >> --------------- >> Reporter: bgamari | Owner: (none) >> Type: task | Status: new >> Priority: normal | Milestone: >> Component: None | Version: 8.0.1 >> Resolution: | Keywords: >> Operating System: Unknown/Multiple | Architecture: >> | Unknown/Multiple >> Type of failure: None/Unknown | Test Case: >> Blocked By: 13897 | Blocking: >> Related Tickets: #11958 | Differential Rev(s): >> Wiki Page: | >> -------------------------------------+---------------------- >> --------------- >> >> Comment (by refold): >> >> Is there any chance Cabal/cabal-install could piggyback on this effort? >> We've outgrown free services like Travis and AppVeyor and could use >> support for additional platforms (OpenBSD and maybe ARM) too. >> >> -- >> Ticket URL: >> GHC >> The Glasgow Haskell Compiler >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joehillen at gmail.com Thu Sep 7 19:20:05 2017 From: joehillen at gmail.com (Joe Hillenbrand) Date: Thu, 7 Sep 2017 12:20:05 -0700 Subject: [GHC] #13716: Move CI to Jenkins In-Reply-To: References: <046.48e2facaae8e965ca7d5990f8113aa95@haskell.org> <061.6517cbd2ef77768a965a148a910128cf@haskell.org> Message-ID: Docker is not a solution for multi-platform testing. On Thu, Sep 7, 2017 at 12:24 AM, Alexander Kjeldaas wrote: > I find that using gitlab with a docker swarm is a good compromise. > > Then gitlab runs all the infrastructure for the UI part, access control ++, > and all you have to do is to run the gitlab-runner docker image and register > it in gitlab. > > This also brings other options such as having one run from scratch and other > "quick" runs where a half-built docker image is updated. I use docker hub > for syncing images between the registered docker runners. > > Alexander > > On Thu, Sep 7, 2017 at 4:13 AM, davean wrote: >> >> Thats actually how this project started. >> >> On Wed, Sep 6, 2017 at 7:05 PM, GHC wrote: >>> >>> #13716: Move CI to Jenkins >>> >>> -------------------------------------+------------------------------------- >>> Reporter: bgamari | Owner: (none) >>> Type: task | Status: new >>> Priority: normal | Milestone: >>> Component: None | Version: 8.0.1 >>> Resolution: | Keywords: >>> Operating System: Unknown/Multiple | Architecture: >>> | Unknown/Multiple >>> Type of failure: None/Unknown | Test Case: >>> Blocked By: 13897 | Blocking: >>> Related Tickets: #11958 | Differential Rev(s): >>> Wiki Page: | >>> >>> -------------------------------------+------------------------------------- >>> >>> Comment (by refold): >>> >>> Is there any chance Cabal/cabal-install could piggyback on this effort? >>> We've outgrown free services like Travis and AppVeyor and could use >>> support for additional platforms (OpenBSD and maybe ARM) too. 
>>> >>> -- >>> Ticket URL: >>> GHC >>> The Glasgow Haskell Compiler >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Thu Sep 7 23:50:44 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Sep 2017 23:50:44 +0000 Subject: A newcomer: how to start In-Reply-To: References: Message-ID: Yes, welcome! Do join ghc-devs and watch the fun. There is a learning curve, but as Ara says, everyone is friendly. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ara Adkins Sent: 05 September 2017 13:47 To: Sergey Bykov Cc: ghc-devs at haskell.org Subject: Re: A newcomer: how to start Hey Sergey, I’d honestly head straight for the GHC newcomers guide [0]! Once you’re familiar with all of that just head for Trac and look for the low hanging fruit tasks! Most importantly, don’t forget to ask questions! Nobody here bites! Best, _ara [0] https://ghc.haskell.org/trac/ghc/wiki/Newcomers On 5 Sep 2017, at 13:42, Sergey Bykov > wrote: Hi, my name is Sergey and I'm recently joined to this mailing list. I would like to help GHC's community and start contributing to the GHC project. Could someone recommend a few tasks, which seems to be suitable for a beginner? About myself: I'm pretty experienced Research Scientist/Software Developer in such areas like discrete optimization and machine learning. Thanks in advance! _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Fri Sep 8 00:08:46 2017 From: david at well-typed.com (David Feuer) Date: Thu, 07 Sep 2017 20:08:46 -0400 Subject: Help with #14140 Message-ID: <20170907234141.5AB75BCCCF at haskell.org> Could you maybe point me toward where the constant folding is happening in this context? I'd like to take a glance and see if I can guess how to upgrade it to deal with what we know things *aren't*. Thanks in advance. David Feuer Well-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Fri Sep 8 09:21:09 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 8 Sep 2017 14:51:09 +0530 Subject: Performance degradation when factoring out common code Message-ID: Hi, I have this code snippet for the bind implementation of a Monad:

    AsyncT m >>= f = AsyncT $ \_ stp yld ->
        let run x = (runAsyncT x) Nothing stp yld
            yield a _ Nothing = run $ f a
            yield a _ (Just r) = run $ f a <> (r >>= f)
        in m Nothing stp yield

I want to have multiple versions of this implementation parameterized by a function, like this:

    bindWith k (AsyncT m) f = AsyncT $ \_ stp yld ->
        let run x = (runAsyncT x) Nothing stp yld
            yield a _ Nothing = run $ f a
            yield a _ (Just r) = run $ f a `k` (bindWith k r f)
        in m Nothing stp yield

And then the bind function becomes:

    (>>=) = bindWith (<>)

But this leads to a performance degradation of more than 10%. inlining does not help, I tried INLINE pragma as well as the "inline" GHC builtin. I thought this should be a more or less straightforward replacement making the second version equivalent to the first one.
But apparently there is something going on here that makes it perform worse. I did not look at the core, stg or asm yet. Hoping someone can quickly comment on it. Any ideas why is it so? Can this be worked around somehow? Thanks, Harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikolaj at well-typed.com Fri Sep 8 11:42:12 2017 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Fri, 8 Sep 2017 13:42:12 +0200 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: Hello, I've had a similar problem that's been fixed in 8.2.1: https://ghc.haskell.org/trac/ghc/ticket/12603 You can also use some extreme global flags, such as ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively to get most the GHC subtlety and shyness out of the way when experimenting. Good luck Mikolaj On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar wrote: > Hi, > > I have this code snippet for the bind implementation of a Monad: > > AsyncT m >>= f = AsyncT $ \_ stp yld -> > let run x = (runAsyncT x) Nothing stp yld > yield a _ Nothing = run $ f a > yield a _ (Just r) = run $ f a <> (r >>= f) > in m Nothing stp yield > > I want to have multiple versions of this implementation parameterized by a > function, like this: > > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > let run x = (runAsyncT x) Nothing stp yld > yield a _ Nothing = run $ f a > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > in m Nothing stp yield > > And then the bind function becomes: > > (>>=) = bindWith (<>) > > But this leads to a performance degradation of more than 10%. inlining does > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I > thought this should be a more or less straightforward replacement making the > second version equivalent to the first one. But apparently there is > something going on here that makes it perform worse. > > I did not look at the core, stg or asm yet. Hoping someone can quickly > comment on it. Any ideas why is it so? Can this be worked around somehow? > > Thanks, > Harendra > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From harendra.kumar at gmail.com Fri Sep 8 12:04:48 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 8 Sep 2017 17:34:48 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: Thanks Mikolaj! I have seen some surprising behavior quite a few times recently and I was wondering whether GHC should do better. In one case I had to use SPECIALIZE very aggressively, in another version of the same code it worked well without that. I have been doing a lot of trial and error with the INLINE/NOINLINE pragmas to figure out what the right combination is. Sometimes it just feels like black magic, because I cannot find a rationale to explain the behavior. I am not sure if there are any more such problems lurking in, perhaps this is an area where some improvement looks possible. -harendra On 8 September 2017 at 17:10, Mikolaj Konarski wrote: > Hello, > > I've had a similar problem that's been fixed in 8.2.1: > > https://ghc.haskell.org/trac/ghc/ticket/12603 > > You can also use some extreme global flags, such as > > ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively > > to get most the GHC subtlety and shyness out of the way > when experimenting. 
> > Good luck > Mikolaj > > > > On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar > wrote: > > Hi, > > > > I have this code snippet for the bind implementation of a Monad: > > > > AsyncT m >>= f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a <> (r >>= f) > > in m Nothing stp yield > > > > I want to have multiple versions of this implementation parameterized by > a > > function, like this: > > > > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > > in m Nothing stp yield > > > > And then the bind function becomes: > > > > (>>=) = bindWith (<>) > > > > But this leads to a performance degradation of more than 10%. inlining > does > > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I > > thought this should be a more or less straightforward replacement making > the > > second version equivalent to the first one. But apparently there is > > something going on here that makes it perform worse. > > > > I did not look at the core, stg or asm yet. Hoping someone can quickly > > comment on it. Any ideas why is it so? Can this be worked around somehow? > > > > Thanks, > > Harendra > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Fri Sep 8 12:49:48 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 8 Sep 2017 18:19:48 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: I should also point out that I saw performance improvements by manually factoring out and propagating some common expressions to outer loops in performance sensitive paths. Now I have made this a habit to do this manually. Not sure if something like this has also been fixed with that ticket or some other ticket. -harendra On 8 September 2017 at 17:34, Harendra Kumar wrote: > Thanks Mikolaj! I have seen some surprising behavior quite a few times > recently and I was wondering whether GHC should do better. In one case I > had to use SPECIALIZE very aggressively, in another version of the same > code it worked well without that. I have been doing a lot of trial and > error with the INLINE/NOINLINE pragmas to figure out what the right > combination is. Sometimes it just feels like black magic, because I cannot > find a rationale to explain the behavior. I am not sure if there are any > more such problems lurking in, perhaps this is an area where some > improvement looks possible. > > -harendra > > > On 8 September 2017 at 17:10, Mikolaj Konarski > wrote: > >> Hello, >> >> I've had a similar problem that's been fixed in 8.2.1: >> >> https://ghc.haskell.org/trac/ghc/ticket/12603 >> >> You can also use some extreme global flags, such as >> >> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively >> >> to get most the GHC subtlety and shyness out of the way >> when experimenting. 
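An illustration of the kind of manual hoisting meant here (illustration only, not code from the library):

    -- Before: the loop-invariant expression (n + 1) is written inside the loop body.
    scaleAll :: Int -> [Int] -> [Int]
    scaleAll n xs = map (\x -> x * (n + 1)) xs

    -- After: the common expression is factored out to the enclosing scope by hand.
    scaleAll' :: Int -> [Int] -> [Int]
    scaleAll' n xs = let m = n + 1 in map (* m) xs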
>> >> Good luck >> Mikolaj >> >> >> >> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar >> wrote: >> > Hi, >> > >> > I have this code snippet for the bind implementation of a Monad: >> > >> > AsyncT m >>= f = AsyncT $ \_ stp yld -> >> > let run x = (runAsyncT x) Nothing stp yld >> > yield a _ Nothing = run $ f a >> > yield a _ (Just r) = run $ f a <> (r >>= f) >> > in m Nothing stp yield >> > >> > I want to have multiple versions of this implementation parameterized >> by a >> > function, like this: >> > >> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> >> > let run x = (runAsyncT x) Nothing stp yld >> > yield a _ Nothing = run $ f a >> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) >> > in m Nothing stp yield >> > >> > And then the bind function becomes: >> > >> > (>>=) = bindWith (<>) >> > >> > But this leads to a performance degradation of more than 10%. inlining >> does >> > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I >> > thought this should be a more or less straightforward replacement >> making the >> > second version equivalent to the first one. But apparently there is >> > something going on here that makes it perform worse. >> > >> > I did not look at the core, stg or asm yet. Hoping someone can quickly >> > comment on it. Any ideas why is it so? Can this be worked around >> somehow? >> > >> > Thanks, >> > Harendra >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Fri Sep 8 13:02:42 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 8 Sep 2017 18:32:42 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: I tested my code on GHC 8.2.1, there is a good news and a bad news. The good news is that with 8.2.1 performance of my code has improved by a huge margin of around 20%, I had reported a similar perf improvement earlier in another one of my package with 8.2. The bad news is that the problem that I reported today is not fixed. I am still seeing a 15% difference between automatic and manual inlining. I guess I should raise a ticket. -harendra On 8 September 2017 at 17:10, Mikolaj Konarski wrote: > Hello, > > I've had a similar problem that's been fixed in 8.2.1: > > https://ghc.haskell.org/trac/ghc/ticket/12603 > > You can also use some extreme global flags, such as > > ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively > > to get most the GHC subtlety and shyness out of the way > when experimenting. 
> > Good luck > Mikolaj > > > > On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar > wrote: > > Hi, > > > > I have this code snippet for the bind implementation of a Monad: > > > > AsyncT m >>= f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a <> (r >>= f) > > in m Nothing stp yield > > > > I want to have multiple versions of this implementation parameterized by > a > > function, like this: > > > > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > > in m Nothing stp yield > > > > And then the bind function becomes: > > > > (>>=) = bindWith (<>) > > > > But this leads to a performance degradation of more than 10%. inlining > does > > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I > > thought this should be a more or less straightforward replacement making > the > > second version equivalent to the first one. But apparently there is > > something going on here that makes it perform worse. > > > > I did not look at the core, stg or asm yet. Hoping someone can quickly > > comment on it. Any ideas why is it so? Can this be worked around somehow? > > > > Thanks, > > Harendra > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Sep 8 13:05:35 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Sep 2017 13:05:35 +0000 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: I know that this is not an easy request, but can either of you produce a small example that demonstrates your problem? If so, please open a ticket. I don’t like hearing about people having to use trial and error with INLINE or SPECIALISE pragmas. But I can’t even begin to solve the problem unless I can reproduce it. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Harendra Kumar Sent: 08 September 2017 13:50 To: Mikolaj Konarski Cc: ghc-devs at haskell.org Subject: Re: Performance degradation when factoring out common code I should also point out that I saw performance improvements by manually factoring out and propagating some common expressions to outer loops in performance sensitive paths. Now I have made this a habit to do this manually. Not sure if something like this has also been fixed with that ticket or some other ticket. -harendra On 8 September 2017 at 17:34, Harendra Kumar > wrote: Thanks Mikolaj! I have seen some surprising behavior quite a few times recently and I was wondering whether GHC should do better. In one case I had to use SPECIALIZE very aggressively, in another version of the same code it worked well without that. I have been doing a lot of trial and error with the INLINE/NOINLINE pragmas to figure out what the right combination is. Sometimes it just feels like black magic, because I cannot find a rationale to explain the behavior. I am not sure if there are any more such problems lurking in, perhaps this is an area where some improvement looks possible. 
-harendra On 8 September 2017 at 17:10, Mikolaj Konarski > wrote: Hello, I've had a similar problem that's been fixed in 8.2.1: https://ghc.haskell.org/trac/ghc/ticket/12603 You can also use some extreme global flags, such as ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively to get most the GHC subtlety and shyness out of the way when experimenting. Good luck Mikolaj On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar > wrote: > Hi, > > I have this code snippet for the bind implementation of a Monad: > > AsyncT m >>= f = AsyncT $ \_ stp yld -> > let run x = (runAsyncT x) Nothing stp yld > yield a _ Nothing = run $ f a > yield a _ (Just r) = run $ f a <> (r >>= f) > in m Nothing stp yield > > I want to have multiple versions of this implementation parameterized by a > function, like this: > > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > let run x = (runAsyncT x) Nothing stp yld > yield a _ Nothing = run $ f a > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > in m Nothing stp yield > > And then the bind function becomes: > > (>>=) = bindWith (<>) > > But this leads to a performance degradation of more than 10%. inlining does > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I > thought this should be a more or less straightforward replacement making the > second version equivalent to the first one. But apparently there is > something going on here that makes it perform worse. > > I did not look at the core, stg or asm yet. Hoping someone can quickly > comment on it. Any ideas why is it so? Can this be worked around somehow? > > Thanks, > Harendra > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Fri Sep 8 13:19:33 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 8 Sep 2017 18:49:33 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: I will try creating a minimal example and open a ticket for the inlining problem, the one I am sure about. -harendra On 8 September 2017 at 18:35, Simon Peyton Jones wrote: > *I know that this is not an easy request*, but can either of you produce > a small example that demonstrates your problem? If so, please open a > ticket. > > > > I don’t like hearing about people having to use trial and error with > INLINE or SPECIALISE pragmas. But I can’t even begin to solve the problem > unless I can reproduce it. > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Harendra > Kumar > *Sent:* 08 September 2017 13:50 > *To:* Mikolaj Konarski > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Performance degradation when factoring out common code > > > > I should also point out that I saw performance improvements by manually > factoring out and propagating some common expressions to outer loops in > performance sensitive paths. Now I have made this a habit to do this > manually. Not sure if something like this has also been fixed with that > ticket or some other ticket. > > > > -harendra > > > > On 8 September 2017 at 17:34, Harendra Kumar > wrote: > > Thanks Mikolaj! I have seen some surprising behavior quite a few times > recently and I was wondering whether GHC should do better. 
In one case I > had to use SPECIALIZE very aggressively, in another version of the same > code it worked well without that. I have been doing a lot of trial and > error with the INLINE/NOINLINE pragmas to figure out what the right > combination is. Sometimes it just feels like black magic, because I cannot > find a rationale to explain the behavior. I am not sure if there are any > more such problems lurking in, perhaps this is an area where some > improvement looks possible. > > > > -harendra > > > > > > On 8 September 2017 at 17:10, Mikolaj Konarski > wrote: > > Hello, > > I've had a similar problem that's been fixed in 8.2.1: > > https://ghc.haskell.org/trac/ghc/ticket/12603 > > You can also use some extreme global flags, such as > > ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively > > to get most the GHC subtlety and shyness out of the way > when experimenting. > > Good luck > Mikolaj > > > > > On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar > wrote: > > Hi, > > > > I have this code snippet for the bind implementation of a Monad: > > > > AsyncT m >>= f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a <> (r >>= f) > > in m Nothing stp yield > > > > I want to have multiple versions of this implementation parameterized by > a > > function, like this: > > > > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > > let run x = (runAsyncT x) Nothing stp yld > > yield a _ Nothing = run $ f a > > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > > in m Nothing stp yield > > > > And then the bind function becomes: > > > > (>>=) = bindWith (<>) > > > > But this leads to a performance degradation of more than 10%. inlining > does > > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I > > thought this should be a more or less straightforward replacement making > the > > second version equivalent to the first one. But apparently there is > > something going on here that makes it perform worse. > > > > I did not look at the core, stg or asm yet. Hoping someone can quickly > > comment on it. Any ideas why is it so? Can this be worked around somehow? > > > > Thanks, > > Harendra > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Sat Sep 9 05:05:59 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sat, 9 Sep 2017 10:35:59 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: While trying to come up with a minimal example I discovered one more puzzling thing. runghc is fastest, ghc is slower, ghc with optimization is slowest. This is completely reverse of the expected order. ghc -O1 (-O2 is similar): time 15.23 ms (14.72 ms .. 15.73 ms) ghc -O0: time 3.612 ms (3.548 ms .. 3.728 ms) runghc: time 2.250 ms (2.156 ms .. 2.348 ms) I am grokking it further. Any pointers will be helpful. I understand that -O2 can sometimes be slower e.g. aggressive inlining can sometimes be counterproductive. But 4x variation is a lot and this is the case with -O1 as well which should be relatively safer than -O2 in general. Worst of all runghc is significantly faster than ghc. What's going on? 
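The timings above look like criterion output; for anyone who wants to reproduce just the shape of the measurement without the full repository, a stripped-down driver along these lines works. The body of 'action' below is a placeholder standing in for the asyncly-serial benchmark, not the real code being measured:

    module Main (main) where

    import Criterion.Main (bench, defaultMain, nfIO)

    -- Placeholder workload; the real benchmark lives in benchmark/Main.hs.
    action :: IO [Int]
    action = return (map (* 2) [1 .. 1000 :: Int])

    main :: IO ()
    main = defaultMain [ bench "asyncly-serial" (nfIO action) ]

The same file can then be compiled with -O0/-O1/-O2 or run under runghc to compare the three modes.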
-harendra On 8 September 2017 at 18:49, Harendra Kumar wrote: > I will try creating a minimal example and open a ticket for the inlining > problem, the one I am sure about. > > -harendra > > On 8 September 2017 at 18:35, Simon Peyton Jones > wrote: > >> *I know that this is not an easy request*, but can either of you produce >> a small example that demonstrates your problem? If so, please open a >> ticket. >> >> >> >> I don’t like hearing about people having to use trial and error with >> INLINE or SPECIALISE pragmas. But I can’t even begin to solve the problem >> unless I can reproduce it. >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Harendra >> Kumar >> *Sent:* 08 September 2017 13:50 >> *To:* Mikolaj Konarski >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Performance degradation when factoring out common code >> >> >> >> I should also point out that I saw performance improvements by manually >> factoring out and propagating some common expressions to outer loops in >> performance sensitive paths. Now I have made this a habit to do this >> manually. Not sure if something like this has also been fixed with that >> ticket or some other ticket. >> >> >> >> -harendra >> >> >> >> On 8 September 2017 at 17:34, Harendra Kumar >> wrote: >> >> Thanks Mikolaj! I have seen some surprising behavior quite a few times >> recently and I was wondering whether GHC should do better. In one case I >> had to use SPECIALIZE very aggressively, in another version of the same >> code it worked well without that. I have been doing a lot of trial and >> error with the INLINE/NOINLINE pragmas to figure out what the right >> combination is. Sometimes it just feels like black magic, because I cannot >> find a rationale to explain the behavior. I am not sure if there are any >> more such problems lurking in, perhaps this is an area where some >> improvement looks possible. >> >> >> >> -harendra >> >> >> >> >> >> On 8 September 2017 at 17:10, Mikolaj Konarski < >> mikolaj.konarski at gmail.com> wrote: >> >> Hello, >> >> I've had a similar problem that's been fixed in 8.2.1: >> >> https://ghc.haskell.org/trac/ghc/ticket/12603 >> >> You can also use some extreme global flags, such as >> >> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively >> >> to get most the GHC subtlety and shyness out of the way >> when experimenting. >> >> Good luck >> Mikolaj >> >> >> >> >> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar >> wrote: >> > Hi, >> > >> > I have this code snippet for the bind implementation of a Monad: >> > >> > AsyncT m >>= f = AsyncT $ \_ stp yld -> >> > let run x = (runAsyncT x) Nothing stp yld >> > yield a _ Nothing = run $ f a >> > yield a _ (Just r) = run $ f a <> (r >>= f) >> > in m Nothing stp yield >> > >> > I want to have multiple versions of this implementation parameterized >> by a >> > function, like this: >> > >> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> >> > let run x = (runAsyncT x) Nothing stp yld >> > yield a _ Nothing = run $ f a >> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) >> > in m Nothing stp yield >> > >> > And then the bind function becomes: >> > >> > (>>=) = bindWith (<>) >> > >> > But this leads to a performance degradation of more than 10%. inlining >> does >> > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I >> > thought this should be a more or less straightforward replacement >> making the >> > second version equivalent to the first one. 
But apparently there is >> > something going on here that makes it perform worse. >> > >> > I did not look at the core, stg or asm yet. Hoping someone can quickly >> > comment on it. Any ideas why is it so? Can this be worked around >> somehow? >> > >> > Thanks, >> > Harendra >> > >> >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > >> >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Sat Sep 9 08:00:02 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Sat, 9 Sep 2017 09:00:02 +0100 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: Do you have the code? On Sat, Sep 9, 2017 at 6:05 AM, Harendra Kumar wrote: > While trying to come up with a minimal example I discovered one more > puzzling thing. runghc is fastest, ghc is slower, ghc with optimization is > slowest. This is completely reverse of the expected order. > > ghc -O1 (-O2 is similar): > > time 15.23 ms (14.72 ms .. 15.73 ms) > > ghc -O0: > > time 3.612 ms (3.548 ms .. 3.728 ms) > > runghc: > > time 2.250 ms (2.156 ms .. 2.348 ms) > > > I am grokking it further. Any pointers will be helpful. I understand that > -O2 can sometimes be slower e.g. aggressive inlining can sometimes be > counterproductive. But 4x variation is a lot and this is the case with -O1 > as well which should be relatively safer than -O2 in general. Worst of all > runghc is significantly faster than ghc. What's going on? > > -harendra > > > On 8 September 2017 at 18:49, Harendra Kumar > wrote: >> >> I will try creating a minimal example and open a ticket for the inlining >> problem, the one I am sure about. >> >> -harendra >> >> On 8 September 2017 at 18:35, Simon Peyton Jones >> wrote: >>> >>> I know that this is not an easy request, but can either of you produce a >>> small example that demonstrates your problem? If so, please open a ticket. >>> >>> >>> >>> I don’t like hearing about people having to use trial and error with >>> INLINE or SPECIALISE pragmas. But I can’t even begin to solve the problem >>> unless I can reproduce it. >>> >>> >>> >>> Simon >>> >>> >>> >>> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >>> Harendra Kumar >>> Sent: 08 September 2017 13:50 >>> To: Mikolaj Konarski >>> Cc: ghc-devs at haskell.org >>> Subject: Re: Performance degradation when factoring out common code >>> >>> >>> >>> I should also point out that I saw performance improvements by manually >>> factoring out and propagating some common expressions to outer loops in >>> performance sensitive paths. Now I have made this a habit to do this >>> manually. Not sure if something like this has also been fixed with that >>> ticket or some other ticket. >>> >>> >>> >>> -harendra >>> >>> >>> >>> On 8 September 2017 at 17:34, Harendra Kumar >>> wrote: >>> >>> Thanks Mikolaj! I have seen some surprising behavior quite a few times >>> recently and I was wondering whether GHC should do better. In one case I had >>> to use SPECIALIZE very aggressively, in another version of the same code it >>> worked well without that. I have been doing a lot of trial and error with >>> the INLINE/NOINLINE pragmas to figure out what the right combination is. >>> Sometimes it just feels like black magic, because I cannot find a rationale >>> to explain the behavior. 
I am not sure if there are any more such problems >>> lurking in, perhaps this is an area where some improvement looks possible. >>> >>> >>> >>> -harendra >>> >>> >>> >>> >>> >>> On 8 September 2017 at 17:10, Mikolaj Konarski >>> wrote: >>> >>> Hello, >>> >>> I've had a similar problem that's been fixed in 8.2.1: >>> >>> https://ghc.haskell.org/trac/ghc/ticket/12603 >>> >>> You can also use some extreme global flags, such as >>> >>> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively >>> >>> to get most the GHC subtlety and shyness out of the way >>> when experimenting. >>> >>> Good luck >>> Mikolaj >>> >>> >>> >>> >>> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar >>> wrote: >>> > Hi, >>> > >>> > I have this code snippet for the bind implementation of a Monad: >>> > >>> > AsyncT m >>= f = AsyncT $ \_ stp yld -> >>> > let run x = (runAsyncT x) Nothing stp yld >>> > yield a _ Nothing = run $ f a >>> > yield a _ (Just r) = run $ f a <> (r >>= f) >>> > in m Nothing stp yield >>> > >>> > I want to have multiple versions of this implementation parameterized >>> > by a >>> > function, like this: >>> > >>> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> >>> > let run x = (runAsyncT x) Nothing stp yld >>> > yield a _ Nothing = run $ f a >>> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) >>> > in m Nothing stp yield >>> > >>> > And then the bind function becomes: >>> > >>> > (>>=) = bindWith (<>) >>> > >>> > But this leads to a performance degradation of more than 10%. inlining >>> > does >>> > not help, I tried INLINE pragma as well as the "inline" GHC builtin. I >>> > thought this should be a more or less straightforward replacement >>> > making the >>> > second version equivalent to the first one. But apparently there is >>> > something going on here that makes it perform worse. >>> > >>> > I did not look at the core, stg or asm yet. Hoping someone can quickly >>> > comment on it. Any ideas why is it so? Can this be worked around >>> > somehow? >>> > >>> > Thanks, >>> > Harendra >>> > >>> >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> > >>> >>> >>> >>> >> >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From harendra.kumar at gmail.com Sat Sep 9 08:38:38 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sat, 9 Sep 2017 14:08:38 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: The code is at: https://github.com/harendra-kumar/asyncly. The benchmark code is in "benchmark/Main.hs". The relevant function is "asyncly_basic". If you want to run it, you can use the following steps to reproduce the behavior I reported below: 1) Run "stack build" 2) Run "stack runghc benchmark/Main.hs" for runghc figures 3) Run "stack ghc benchmark/Main.hs && benchmark/Main" to compile and run normally 4) Run "stack ghc -- -O2 benchmark/Main.hs && benchmark/Main" to compile and run with -O2 flag Just look at the first benchmark (asyncly-serial), you can comment out all others if you want to. Note that the library gets compiled without any optimization flags (see the ghc options in the cabal file). So what we are seeing here is just the effect of -O2 on compiling benchmarks/Main.hs. I am also trying to isolate the problem to a minimal case. 
I tried removing all the INLINE pragmas in the library to make sure that I am not screwing it up by asking the compiler to inline aggressively, but that does not seem to make any difference to the situation. Let me know if you need any information from me or help in running it. There are three issues that I am trying to get answers for: 1) Why runghc is faster? It means that there is a possibility for the program to run as fast as runghc runs it. How do I get that performance or an explanation of it? 2) Why -O1/O2 degrades performance so much by 4-5x. 3) The third one is the original problem that I posted in this thread, compiler is unable to match manual inlining. It is possible that this is an issue only when -O1/O2 is used and not when -O0 is used. Thanks for the help. -harendra On 9 September 2017 at 13:30, Matthew Pickering wrote: > Do you have the code? > > On Sat, Sep 9, 2017 at 6:05 AM, Harendra Kumar > wrote: > > While trying to come up with a minimal example I discovered one more > > puzzling thing. runghc is fastest, ghc is slower, ghc with optimization > is > > slowest. This is completely reverse of the expected order. > > > > ghc -O1 (-O2 is similar): > > > > time 15.23 ms (14.72 ms .. 15.73 ms) > > > > ghc -O0: > > > > time 3.612 ms (3.548 ms .. 3.728 ms) > > > > runghc: > > > > time 2.250 ms (2.156 ms .. 2.348 ms) > > > > > > I am grokking it further. Any pointers will be helpful. I understand that > > -O2 can sometimes be slower e.g. aggressive inlining can sometimes be > > counterproductive. But 4x variation is a lot and this is the case with > -O1 > > as well which should be relatively safer than -O2 in general. Worst of > all > > runghc is significantly faster than ghc. What's going on? > > > > -harendra > > > > > > On 8 September 2017 at 18:49, Harendra Kumar > > wrote: > >> > >> I will try creating a minimal example and open a ticket for the inlining > >> problem, the one I am sure about. > >> > >> -harendra > >> > >> On 8 September 2017 at 18:35, Simon Peyton Jones > > >> wrote: > >>> > >>> I know that this is not an easy request, but can either of you produce > a > >>> small example that demonstrates your problem? If so, please open a > ticket. > >>> > >>> > >>> > >>> I don’t like hearing about people having to use trial and error with > >>> INLINE or SPECIALISE pragmas. But I can’t even begin to solve the > problem > >>> unless I can reproduce it. > >>> > >>> > >>> > >>> Simon > >>> > >>> > >>> > >>> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > >>> Harendra Kumar > >>> Sent: 08 September 2017 13:50 > >>> To: Mikolaj Konarski > >>> Cc: ghc-devs at haskell.org > >>> Subject: Re: Performance degradation when factoring out common code > >>> > >>> > >>> > >>> I should also point out that I saw performance improvements by manually > >>> factoring out and propagating some common expressions to outer loops in > >>> performance sensitive paths. Now I have made this a habit to do this > >>> manually. Not sure if something like this has also been fixed with that > >>> ticket or some other ticket. > >>> > >>> > >>> > >>> -harendra > >>> > >>> > >>> > >>> On 8 September 2017 at 17:34, Harendra Kumar > > >>> wrote: > >>> > >>> Thanks Mikolaj! I have seen some surprising behavior quite a few times > >>> recently and I was wondering whether GHC should do better. In one case > I had > >>> to use SPECIALIZE very aggressively, in another version of the same > code it > >>> worked well without that. 
I have been doing a lot of trial and error > with > >>> the INLINE/NOINLINE pragmas to figure out what the right combination > is. > >>> Sometimes it just feels like black magic, because I cannot find a > rationale > >>> to explain the behavior. I am not sure if there are any more such > problems > >>> lurking in, perhaps this is an area where some improvement looks > possible. > >>> > >>> > >>> > >>> -harendra > >>> > >>> > >>> > >>> > >>> > >>> On 8 September 2017 at 17:10, Mikolaj Konarski > >>> wrote: > >>> > >>> Hello, > >>> > >>> I've had a similar problem that's been fixed in 8.2.1: > >>> > >>> https://ghc.haskell.org/trac/ghc/ticket/12603 > >>> > >>> You can also use some extreme global flags, such as > >>> > >>> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively > >>> > >>> to get most the GHC subtlety and shyness out of the way > >>> when experimenting. > >>> > >>> Good luck > >>> Mikolaj > >>> > >>> > >>> > >>> > >>> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar > >>> wrote: > >>> > Hi, > >>> > > >>> > I have this code snippet for the bind implementation of a Monad: > >>> > > >>> > AsyncT m >>= f = AsyncT $ \_ stp yld -> > >>> > let run x = (runAsyncT x) Nothing stp yld > >>> > yield a _ Nothing = run $ f a > >>> > yield a _ (Just r) = run $ f a <> (r >>= f) > >>> > in m Nothing stp yield > >>> > > >>> > I want to have multiple versions of this implementation parameterized > >>> > by a > >>> > function, like this: > >>> > > >>> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> > >>> > let run x = (runAsyncT x) Nothing stp yld > >>> > yield a _ Nothing = run $ f a > >>> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) > >>> > in m Nothing stp yield > >>> > > >>> > And then the bind function becomes: > >>> > > >>> > (>>=) = bindWith (<>) > >>> > > >>> > But this leads to a performance degradation of more than 10%. > inlining > >>> > does > >>> > not help, I tried INLINE pragma as well as the "inline" GHC builtin. > I > >>> > thought this should be a more or less straightforward replacement > >>> > making the > >>> > second version equivalent to the first one. But apparently there is > >>> > something going on here that makes it perform worse. > >>> > > >>> > I did not look at the core, stg or asm yet. Hoping someone can > quickly > >>> > comment on it. Any ideas why is it so? Can this be worked around > >>> > somehow? > >>> > > >>> > Thanks, > >>> > Harendra > >>> > > >>> > >>> > _______________________________________________ > >>> > ghc-devs mailing list > >>> > ghc-devs at haskell.org > >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >>> > > >>> > >>> > >>> > >>> > >> > >> > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Sat Sep 9 13:23:29 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sat, 9 Sep 2017 18:53:29 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: I could pinpoint one part of the problem. Please see the ticket: https://ghc.haskell.org/trac/ghc/ticket/14208. Here is the description that I wrote in the ticket: In this particular case -O2 is 2x slower than -O0 and -O0 is 2x slower than runghc. Please see the github repo: ​ https://github.com/harendra-kumar/ghc-perf to reproduce the issue. 
Readme file in the repo has instructions to reproduce. The issue seems to occur when the code is placed in a different module. When all the code is in the same module the problem does not occur. In that case -O2 is faster than -O0. However, when the code is split into two modules the performance gets inverted. Also, it does not occur always, when I tried to change the code to make it simpler for repro the problem did not occur. -harendra On 9 September 2017 at 14:08, Harendra Kumar wrote: > The code is at: https://github.com/harendra-kumar/asyncly. The benchmark > code is in "benchmark/Main.hs". The relevant function is "asyncly_basic". > > If you want to run it, you can use the following steps to reproduce the > behavior I reported below: > > 1) Run "stack build" > 2) Run "stack runghc benchmark/Main.hs" for runghc figures > 3) Run "stack ghc benchmark/Main.hs && benchmark/Main" to compile and run > normally > 4) Run "stack ghc -- -O2 benchmark/Main.hs && benchmark/Main" to compile > and run with -O2 flag > > Just look at the first benchmark (asyncly-serial), you can comment out all > others if you want to. Note that the library gets compiled without any > optimization flags (see the ghc options in the cabal file). So what we are > seeing here is just the effect of -O2 on compiling benchmarks/Main.hs. > > I am also trying to isolate the problem to a minimal case. I tried > removing all the INLINE pragmas in the library to make sure that I am not > screwing it up by asking the compiler to inline aggressively, but that does > not seem to make any difference to the situation. Let me know if you need > any information from me or help in running it. > > There are three issues that I am trying to get answers for: > > 1) Why runghc is faster? It means that there is a possibility for the > program to run as fast as runghc runs it. How do I get that performance or > an explanation of it? > > 2) Why -O1/O2 degrades performance so much by 4-5x. > > 3) The third one is the original problem that I posted in this thread, > compiler is unable to match manual inlining. It is possible that this is an > issue only when -O1/O2 is used and not when -O0 is used. > > Thanks for the help. > > -harendra > > > On 9 September 2017 at 13:30, Matthew Pickering < > matthewtpickering at gmail.com> wrote: > >> Do you have the code? >> >> On Sat, Sep 9, 2017 at 6:05 AM, Harendra Kumar >> wrote: >> > While trying to come up with a minimal example I discovered one more >> > puzzling thing. runghc is fastest, ghc is slower, ghc with optimization >> is >> > slowest. This is completely reverse of the expected order. >> > >> > ghc -O1 (-O2 is similar): >> > >> > time 15.23 ms (14.72 ms .. 15.73 ms) >> > >> > ghc -O0: >> > >> > time 3.612 ms (3.548 ms .. 3.728 ms) >> > >> > runghc: >> > >> > time 2.250 ms (2.156 ms .. 2.348 ms) >> > >> > >> > I am grokking it further. Any pointers will be helpful. I understand >> that >> > -O2 can sometimes be slower e.g. aggressive inlining can sometimes be >> > counterproductive. But 4x variation is a lot and this is the case with >> -O1 >> > as well which should be relatively safer than -O2 in general. Worst of >> all >> > runghc is significantly faster than ghc. What's going on? >> > >> > -harendra >> > >> > >> > On 8 September 2017 at 18:49, Harendra Kumar >> > wrote: >> >> >> >> I will try creating a minimal example and open a ticket for the >> inlining >> >> problem, the one I am sure about. 
>> >> >> >> -harendra >> >> >> >> On 8 September 2017 at 18:35, Simon Peyton Jones < >> simonpj at microsoft.com> >> >> wrote: >> >>> >> >>> I know that this is not an easy request, but can either of you >> produce a >> >>> small example that demonstrates your problem? If so, please open a >> ticket. >> >>> >> >>> >> >>> >> >>> I don’t like hearing about people having to use trial and error with >> >>> INLINE or SPECIALISE pragmas. But I can’t even begin to solve the >> problem >> >>> unless I can reproduce it. >> >>> >> >>> >> >>> >> >>> Simon >> >>> >> >>> >> >>> >> >>> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> >>> Harendra Kumar >> >>> Sent: 08 September 2017 13:50 >> >>> To: Mikolaj Konarski >> >>> Cc: ghc-devs at haskell.org >> >>> Subject: Re: Performance degradation when factoring out common code >> >>> >> >>> >> >>> >> >>> I should also point out that I saw performance improvements by >> manually >> >>> factoring out and propagating some common expressions to outer loops >> in >> >>> performance sensitive paths. Now I have made this a habit to do this >> >>> manually. Not sure if something like this has also been fixed with >> that >> >>> ticket or some other ticket. >> >>> >> >>> >> >>> >> >>> -harendra >> >>> >> >>> >> >>> >> >>> On 8 September 2017 at 17:34, Harendra Kumar < >> harendra.kumar at gmail.com> >> >>> wrote: >> >>> >> >>> Thanks Mikolaj! I have seen some surprising behavior quite a few times >> >>> recently and I was wondering whether GHC should do better. In one >> case I had >> >>> to use SPECIALIZE very aggressively, in another version of the same >> code it >> >>> worked well without that. I have been doing a lot of trial and error >> with >> >>> the INLINE/NOINLINE pragmas to figure out what the right combination >> is. >> >>> Sometimes it just feels like black magic, because I cannot find a >> rationale >> >>> to explain the behavior. I am not sure if there are any more such >> problems >> >>> lurking in, perhaps this is an area where some improvement looks >> possible. >> >>> >> >>> >> >>> >> >>> -harendra >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> On 8 September 2017 at 17:10, Mikolaj Konarski >> >>> wrote: >> >>> >> >>> Hello, >> >>> >> >>> I've had a similar problem that's been fixed in 8.2.1: >> >>> >> >>> https://ghc.haskell.org/trac/ghc/ticket/12603 >> >>> >> >>> You can also use some extreme global flags, such as >> >>> >> >>> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively >> >>> >> >>> to get most the GHC subtlety and shyness out of the way >> >>> when experimenting. 
>> >>> >> >>> Good luck >> >>> Mikolaj >> >>> >> >>> >> >>> >> >>> >> >>> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar >> >>> wrote: >> >>> > Hi, >> >>> > >> >>> > I have this code snippet for the bind implementation of a Monad: >> >>> > >> >>> > AsyncT m >>= f = AsyncT $ \_ stp yld -> >> >>> > let run x = (runAsyncT x) Nothing stp yld >> >>> > yield a _ Nothing = run $ f a >> >>> > yield a _ (Just r) = run $ f a <> (r >>= f) >> >>> > in m Nothing stp yield >> >>> > >> >>> > I want to have multiple versions of this implementation >> parameterized >> >>> > by a >> >>> > function, like this: >> >>> > >> >>> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> >> >>> > let run x = (runAsyncT x) Nothing stp yld >> >>> > yield a _ Nothing = run $ f a >> >>> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) >> >>> > in m Nothing stp yield >> >>> > >> >>> > And then the bind function becomes: >> >>> > >> >>> > (>>=) = bindWith (<>) >> >>> > >> >>> > But this leads to a performance degradation of more than 10%. >> inlining >> >>> > does >> >>> > not help, I tried INLINE pragma as well as the "inline" GHC >> builtin. I >> >>> > thought this should be a more or less straightforward replacement >> >>> > making the >> >>> > second version equivalent to the first one. But apparently there is >> >>> > something going on here that makes it perform worse. >> >>> > >> >>> > I did not look at the core, stg or asm yet. Hoping someone can >> quickly >> >>> > comment on it. Any ideas why is it so? Can this be worked around >> >>> > somehow? >> >>> > >> >>> > Thanks, >> >>> > Harendra >> >>> > >> >>> >> >>> > _______________________________________________ >> >>> > ghc-devs mailing list >> >>> > ghc-devs at haskell.org >> >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >>> > >> >>> >> >>> >> >>> >> >>> >> >> >> >> >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Sun Sep 10 02:41:29 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 10 Sep 2017 08:11:29 +0530 Subject: Performance degradation when factoring out common code In-Reply-To: References: Message-ID: Ok, I filed a ticket for the inlining issue as well - https://ghc.haskell.org/trac/ghc/ticket/14211. The reproduction test case is in the same repo on the "inlining-issue" branch, here - https://github.com/harendra-kumar/ghc-perf/tree/inlining-issue . Performance with manually inlining a function is more than 10% faster compared to factoring out code and using INLINE pragma. stack bench for compiler inlined code time 46.71 ms (45.53 ms .. 47.79 ms) stack bench --flag ghc-perf:manual for manually inlined code time 39.46 ms (38.92 ms .. 39.94 ms) -harendra On 9 September 2017 at 14:08, Harendra Kumar wrote: > The code is at: https://github.com/harendra-kumar/asyncly. The benchmark > code is in "benchmark/Main.hs". The relevant function is "asyncly_basic". 
> > If you want to run it, you can use the following steps to reproduce the > behavior I reported below: > > 1) Run "stack build" > 2) Run "stack runghc benchmark/Main.hs" for runghc figures > 3) Run "stack ghc benchmark/Main.hs && benchmark/Main" to compile and run > normally > 4) Run "stack ghc -- -O2 benchmark/Main.hs && benchmark/Main" to compile > and run with -O2 flag > > Just look at the first benchmark (asyncly-serial), you can comment out all > others if you want to. Note that the library gets compiled without any > optimization flags (see the ghc options in the cabal file). So what we are > seeing here is just the effect of -O2 on compiling benchmarks/Main.hs. > > I am also trying to isolate the problem to a minimal case. I tried > removing all the INLINE pragmas in the library to make sure that I am not > screwing it up by asking the compiler to inline aggressively, but that does > not seem to make any difference to the situation. Let me know if you need > any information from me or help in running it. > > There are three issues that I am trying to get answers for: > > 1) Why runghc is faster? It means that there is a possibility for the > program to run as fast as runghc runs it. How do I get that performance or > an explanation of it? > > 2) Why -O1/O2 degrades performance so much by 4-5x. > > 3) The third one is the original problem that I posted in this thread, > compiler is unable to match manual inlining. It is possible that this is an > issue only when -O1/O2 is used and not when -O0 is used. > > Thanks for the help. > > -harendra > > > On 9 September 2017 at 13:30, Matthew Pickering < > matthewtpickering at gmail.com> wrote: > >> Do you have the code? >> >> On Sat, Sep 9, 2017 at 6:05 AM, Harendra Kumar >> wrote: >> > While trying to come up with a minimal example I discovered one more >> > puzzling thing. runghc is fastest, ghc is slower, ghc with optimization >> is >> > slowest. This is completely reverse of the expected order. >> > >> > ghc -O1 (-O2 is similar): >> > >> > time 15.23 ms (14.72 ms .. 15.73 ms) >> > >> > ghc -O0: >> > >> > time 3.612 ms (3.548 ms .. 3.728 ms) >> > >> > runghc: >> > >> > time 2.250 ms (2.156 ms .. 2.348 ms) >> > >> > >> > I am grokking it further. Any pointers will be helpful. I understand >> that >> > -O2 can sometimes be slower e.g. aggressive inlining can sometimes be >> > counterproductive. But 4x variation is a lot and this is the case with >> -O1 >> > as well which should be relatively safer than -O2 in general. Worst of >> all >> > runghc is significantly faster than ghc. What's going on? >> > >> > -harendra >> > >> > >> > On 8 September 2017 at 18:49, Harendra Kumar >> > wrote: >> >> >> >> I will try creating a minimal example and open a ticket for the >> inlining >> >> problem, the one I am sure about. >> >> >> >> -harendra >> >> >> >> On 8 September 2017 at 18:35, Simon Peyton Jones < >> simonpj at microsoft.com> >> >> wrote: >> >>> >> >>> I know that this is not an easy request, but can either of you >> produce a >> >>> small example that demonstrates your problem? If so, please open a >> ticket. >> >>> >> >>> >> >>> >> >>> I don’t like hearing about people having to use trial and error with >> >>> INLINE or SPECIALISE pragmas. But I can’t even begin to solve the >> problem >> >>> unless I can reproduce it. 
>> >>> >> >>> >> >>> >> >>> Simon >> >>> >> >>> >> >>> >> >>> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> >>> Harendra Kumar >> >>> Sent: 08 September 2017 13:50 >> >>> To: Mikolaj Konarski >> >>> Cc: ghc-devs at haskell.org >> >>> Subject: Re: Performance degradation when factoring out common code >> >>> >> >>> >> >>> >> >>> I should also point out that I saw performance improvements by >> manually >> >>> factoring out and propagating some common expressions to outer loops >> in >> >>> performance sensitive paths. Now I have made this a habit to do this >> >>> manually. Not sure if something like this has also been fixed with >> that >> >>> ticket or some other ticket. >> >>> >> >>> >> >>> >> >>> -harendra >> >>> >> >>> >> >>> >> >>> On 8 September 2017 at 17:34, Harendra Kumar < >> harendra.kumar at gmail.com> >> >>> wrote: >> >>> >> >>> Thanks Mikolaj! I have seen some surprising behavior quite a few times >> >>> recently and I was wondering whether GHC should do better. In one >> case I had >> >>> to use SPECIALIZE very aggressively, in another version of the same >> code it >> >>> worked well without that. I have been doing a lot of trial and error >> with >> >>> the INLINE/NOINLINE pragmas to figure out what the right combination >> is. >> >>> Sometimes it just feels like black magic, because I cannot find a >> rationale >> >>> to explain the behavior. I am not sure if there are any more such >> problems >> >>> lurking in, perhaps this is an area where some improvement looks >> possible. >> >>> >> >>> >> >>> >> >>> -harendra >> >>> >> >>> >> >>> >> >>> >> >>> >> >>> On 8 September 2017 at 17:10, Mikolaj Konarski >> >>> wrote: >> >>> >> >>> Hello, >> >>> >> >>> I've had a similar problem that's been fixed in 8.2.1: >> >>> >> >>> https://ghc.haskell.org/trac/ghc/ticket/12603 >> >>> >> >>> You can also use some extreme global flags, such as >> >>> >> >>> ghc-options: -fexpose-all-unfoldings -fspecialise-aggressively >> >>> >> >>> to get most the GHC subtlety and shyness out of the way >> >>> when experimenting. >> >>> >> >>> Good luck >> >>> Mikolaj >> >>> >> >>> >> >>> >> >>> >> >>> On Fri, Sep 8, 2017 at 11:21 AM, Harendra Kumar >> >>> wrote: >> >>> > Hi, >> >>> > >> >>> > I have this code snippet for the bind implementation of a Monad: >> >>> > >> >>> > AsyncT m >>= f = AsyncT $ \_ stp yld -> >> >>> > let run x = (runAsyncT x) Nothing stp yld >> >>> > yield a _ Nothing = run $ f a >> >>> > yield a _ (Just r) = run $ f a <> (r >>= f) >> >>> > in m Nothing stp yield >> >>> > >> >>> > I want to have multiple versions of this implementation >> parameterized >> >>> > by a >> >>> > function, like this: >> >>> > >> >>> > bindWith k (AsyncT m) f = AsyncT $ \_ stp yld -> >> >>> > let run x = (runAsyncT x) Nothing stp yld >> >>> > yield a _ Nothing = run $ f a >> >>> > yield a _ (Just r) = run $ f a `k` (bindWith k r f) >> >>> > in m Nothing stp yield >> >>> > >> >>> > And then the bind function becomes: >> >>> > >> >>> > (>>=) = bindWith (<>) >> >>> > >> >>> > But this leads to a performance degradation of more than 10%. >> inlining >> >>> > does >> >>> > not help, I tried INLINE pragma as well as the "inline" GHC >> builtin. I >> >>> > thought this should be a more or less straightforward replacement >> >>> > making the >> >>> > second version equivalent to the first one. But apparently there is >> >>> > something going on here that makes it perform worse. >> >>> > >> >>> > I did not look at the core, stg or asm yet. 
Hoping someone can >> quickly >> >>> > comment on it. Any ideas why is it so? Can this be worked around >> >>> > somehow? >> >>> > >> >>> > Thanks, >> >>> > Harendra >> >>> > >> >>> >> >>> > _______________________________________________ >> >>> > ghc-devs mailing list >> >>> > ghc-devs at haskell.org >> >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >>> > >> >>> >> >>> >> >>> >> >>> >> >> >> >> >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From manpacket at gmail.com Sun Sep 10 03:03:10 2017 From: manpacket at gmail.com (Michael Baikov) Date: Sun, 10 Sep 2017 11:03:10 +0800 Subject: GHC Threads affinity Message-ID: Greetings Currently GHC supports two kinds of threads - pinned to a specific capability (bound threads) and those it can migrate between any capabilities (unbound threads). For purposes of achieving lower latency in Haskell applications it would be nice to have something in between - threads GHC can migrate but within a certain subset of capabilities only. I'm developing a program that contains several kinds of threads - those that do little work and sensitive to latency and those that can spend more CPU time and less latency sensitive. I looked into several cases of increased latency in those sensitive threads (using GHC eventlog) and in all cases sensitive threads were waiting for non-sensitive threads to finish working. I was able to reduce worst case latency by factor of 10 by pinning all the threads in the program to specific capability but manually distributing threads (60+ of them) between capabilities (several different machines with different numbers of cores available) seems very fragile. World stopping GC is still a problem but at least in my case is much less frequently so. It would be nice to be able to allow GHC runtime to migrate a thread between a subset of capabilities using interface similar to this one: -- creates a thread that is allowed to migrate between capabilities according to following rule: ghc is allowed to run this thread on Nth capability if Nth `mod` size_of_word bit in mask is set. forkOn' :: Int -> IO () -> IO ThreadId forkOn' mask act = undefined This should allow to define up to 64 (32) distinct groups and allow user to break down their threads into bigger number of potentially intersecting groups by specifying things like capability 0 does latency sensitive things, caps 1..5 - less sensitive things, caps 6-7 bulk things. Anything obvious I'm missing? Any recommendations to how to implement this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Sep 10 06:10:32 2017 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 10 Sep 2017 06:10:32 +0000 Subject: New primitive types? In-Reply-To: References: Message-ID: I think it should be mostly fine... memory alignment is my main bugbear of a worry, but I guess that requires experimentation 😊😊 On Sun, Aug 27, 2017 at 6:50 PM Michal Terepeta wrote: > > On Thu, Aug 3, 2017 at 2:28 AM Sylvain Henry wrote: > > Hi, > > > > I also think we should do this but it has a lot of ramifications: > contant folding in Core, codegen, TH, etc. > > > > Also it will break codes that use primitive types directly, so maybe > it's worth a ghc proposal. > > Ok, a short proposal sounds reasonable. 
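For reference, the kind of user code in question -- pulling the raw Int# out of an Int8, which works today because GHC.Int currently defines Int8 as a wrapper around Int# -- looks like this (illustration only):

    {-# LANGUAGE MagicHash #-}
    import GHC.Int  (Int8 (I8#))
    import GHC.Exts (Int#)

    -- Code like this would need to change if Int8 were backed by a new Int8# primitive.
    unwrapInt8 :: Int8 -> Int#
    unwrapInt8 (I8# i) = i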
> > I don't think this would break a lot of code - based on a few searches > it seems that people don't really extract `Int#` from > `Int8/Int16/Int32` (similarly with words). > Or am I missing something? > > Thanks, > Michal > > PS. Sorry for slow reply - I was traveling. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Sun Sep 10 07:24:24 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 10 Sep 2017 12:54:24 +0530 Subject: Semigroup repeat (base package) Message-ID: Hi, I am sending this question here since base ships with ghc, let me know if this is not the right forum for this. I could not find a function that repeats a value using a semigroup append. I am looking for something like this: srepeat :: Semigroup a => a -> a srepeat x = xs where xs = x <> xs Is it already available somewhere? Does it make sense to add it to Data.Semigroup? Thanks, Harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sun Sep 10 08:39:37 2017 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 10 Sep 2017 10:39:37 +0200 Subject: Semigroup repeat (base package) In-Reply-To: References: Message-ID: Hi, On Sun, Sep 10, 2017 at 9:24 AM, Harendra Kumar wrote: > I could not find a function that repeats a value using a semigroup append. I > am looking for something like this: > > srepeat :: Semigroup a => a -> a > srepeat x = xs where xs = x <> xs > > Is it already available somewhere? Does it make sense to add it to > Data.Semigroup? What you seem to be searching for looks more like what we know as `cycle :: [a] -> [a]`, and in fact there is its generalisation at http://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Semigroup.html#v:cycle1 hth From harendra.kumar at gmail.com Sun Sep 10 09:34:08 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Sun, 10 Sep 2017 15:04:08 +0530 Subject: Semigroup repeat (base package) In-Reply-To: References: Message-ID: Indeed it is cycle1, sorry to have missed it. It will be easier to spot it if it close to the stimes* functions in the beginning of the docs. It is placed too far down below even after Monoid re-exports. -harendra On 10 September 2017 at 14:09, Herbert Valerio Riedel wrote: > Hi, > > On Sun, Sep 10, 2017 at 9:24 AM, Harendra Kumar > wrote: > > I could not find a function that repeats a value using a semigroup > append. I > > am looking for something like this: > > > > srepeat :: Semigroup a => a -> a > > srepeat x = xs where xs = x <> xs > > > > Is it already available somewhere? Does it make sense to add it to > > Data.Semigroup? > > What you seem to be searching for looks more like what we know as > `cycle :: [a] -> [a]`, and in fact there is its generalisation at > > http://hackage.haskell.org/package/base-4.10.0.0/docs/ > Data-Semigroup.html#v:cycle1 > > hth > -------------- next part -------------- An HTML attachment was scrubbed... 
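A quick check that cycle1 is exactly the srepeat asked for (its definition is cycle1 x = x <> cycle1 x), shown here with the list Semigroup:

    import Data.Semigroup (cycle1)

    repeated :: String
    repeated = take 7 (cycle1 "ab")   -- "abababa", since cycle1 "ab" = "ab" <> "ab" <> ...

As with srepeat, this only produces something useful for semigroups whose (<>) is sufficiently lazy in its second argument; for a strict semigroup such as Sum Int, cycle1 diverges.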
URL: From wolfgang-it at jeltsch.info Sun Sep 10 21:16:08 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Mon, 11 Sep 2017 00:16:08 +0300 Subject: Semigroup repeat (base package) In-Reply-To: References: Message-ID: <1505078168.13497.65.camel@jeltsch.info> Am Sonntag, den 10.09.2017, 10:39 +0200 schrieb Herbert Valerio Riedel: > What you seem to be searching for looks more like what we know as > `cycle :: [a] -> [a]`, and in fact there is its generalisation at > > http://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Semigroup.html#v:cycle1 Why is this function called cycle1, not cycle? What does the “1” stand for? All the best, Wolfgang From nicolas.frisby at gmail.com Sun Sep 10 22:24:44 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Sun, 10 Sep 2017 22:24:44 +0000 Subject: A type checker plugin for row types Message-ID: Hi all. I've been spending my free time for the last couple months on a type checker plugin for row types. The free time waxes and wanes; sending an email like this one was my primary goal for the past couple weeks. At the very least, I hoped this project would let me finally get some hands on experience with OutsideIn. And I definitely have. But I've also made more progress than I anticipated, and I think the plugin is starting to have legs! I haven't uploaded the code yet to github -- it's not quite ready to share. But I did do a write up on the dev wiki. https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain I would really appreciate and questions, comments, and --- boy, oh boy --- answers. I hope to upload within a week or so, and I'll update that wiki page and reply to this email when I do. Thanks very much. -Nick P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically interested in this (e.g. plugins, row types, etc). Please feel free to forward to others that come to mind; I know some inboxes abjectly can't afford default list traffic. P.P.S. -- One hold up for the upload is: which license? I intend to release under BSD3, mainly to match GHC since one ideal scenario would involve being packaged with/integrated into GHC. But my brief recent research suggests that the Apache license might be more conducive to eventual widespread adoption. If you'd be willing to advise or even just refer me to other write ups, please feel free to email me directly or to start a separate thread on a more appropriate distribution list (CC'ing me, please). Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Sun Sep 10 22:41:54 2017 From: me at ara.io (Ara Adkins) Date: Sun, 10 Sep 2017 23:41:54 +0100 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: <92AFB6E1-449E-446C-B8D8-7FF5094A03C5@ara.io> Just given this a read! It looks like you’ve put a fantastic amount of effort into this so far, and I can certainly see how it’s finding its legs! I’m very much looking forward to seeing this develop further. I can definitely foresee some uses for polykinded column types, and the possibility for named arguments is certainly interesting (though I have some concerns about performance — though none are relevant at such an early stage). Unfortunately I don’t think I can answer any of the questions that I spotted on my read-through. Again, I’m looking forward to seeing this develop, and the naming of `coxswain` and `sculls` gave me a giggle. _ara > On 10 Sep 2017, at 23:24, Nicolas Frisby wrote: > > Hi all. 
I've been spending my free time for the last couple months on a type checker plugin for row types. The free time waxes and wanes; sending an email like this one was my primary goal for the past couple weeks. > > At the very least, I hoped this project would let me finally get some hands on experience with OutsideIn. And I definitely have. But I've also made more progress than I anticipated, and I think the plugin is starting to have legs! > > I haven't uploaded the code yet to github -- it's not quite ready to share. But I did do a write up on the dev wiki. > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > I would really appreciate and questions, comments, and --- boy, oh boy --- answers. > > I hope to upload within a week or so, and I'll update that wiki page and reply to this email when I do. > > Thanks very much. -Nick > > P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically interested in this (e.g. plugins, row types, etc). Please feel free to forward to others that come to mind; I know some inboxes abjectly can't afford default list traffic. > > P.P.S. -- One hold up for the upload is: which license? I intend to release under BSD3, mainly to match GHC since one ideal scenario would involve being packaged with/integrated into GHC. But my brief recent research suggests that the Apache license might be more conducive to eventual widespread adoption. If you'd be willing to advise or even just refer me to other write ups, please feel free to email me directly or to start a separate thread on a more appropriate distribution list (CC'ing me, please). Thanks again. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Sun Sep 10 22:58:45 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Sun, 10 Sep 2017 22:58:45 +0000 Subject: A type checker plugin for row types In-Reply-To: <92AFB6E1-449E-446C-B8D8-7FF5094A03C5@ara.io> References: <92AFB6E1-449E-446C-B8D8-7FF5094A03C5@ara.io> Message-ID: Whoops! I forgot about that section of my draft. I added a little blurb ("Performance?") Thanks Ara! On Sun, Sep 10, 2017 at 3:41 PM Ara Adkins wrote: > Just given this a read! > > It looks like you’ve put a fantastic amount of effort into this so far, > and I can certainly see how it’s finding its legs! I’m very much looking > forward to seeing this develop further. I can definitely foresee some uses > for polykinded column types, and the possibility for named arguments is > certainly interesting (though I have some concerns about performance — > though none are relevant at such an early stage). > > Unfortunately I don’t think I can answer any of the questions that I > spotted on my read-through. > > Again, I’m looking forward to seeing this develop, and the naming of > `coxswain` and `sculls` gave me a giggle. > > _ara > > On 10 Sep 2017, at 23:24, Nicolas Frisby wrote: > > Hi all. I've been spending my free time for the last couple months on a > type checker plugin for row types. The free time waxes and wanes; sending > an email like this one was my primary goal for the past couple weeks. > > At the very least, I hoped this project would let me finally get some > hands on experience with OutsideIn. And I definitely have. But I've also > made more progress than I anticipated, and I think the plugin is starting > to have legs! 
> > I haven't uploaded the code yet to github -- it's not quite ready to > share. But I did do a write up on the dev wiki. > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > I would really appreciate and questions, comments, and --- boy, oh boy --- > answers. > > I hope to upload within a week or so, and I'll update that wiki page and > reply to this email when I do. > > Thanks very much. -Nick > > P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically > interested in this (e.g. plugins, row types, etc). Please feel free to > forward to others that come to mind; I know some inboxes abjectly can't > afford default list traffic. > > P.P.S. -- One hold up for the upload is: which license? I intend to > release under BSD3, mainly to match GHC since one ideal scenario would > involve being packaged with/integrated into GHC. But my brief recent > research suggests that the Apache license might be more conducive to > eventual widespread adoption. If you'd be willing to advise or even just > refer me to other write ups, please feel free to email me directly or to > start a separate thread on a more appropriate distribution list (CC'ing me, > please). Thanks again. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Sun Sep 10 23:09:18 2017 From: me at ara.io (Ara Adkins) Date: Mon, 11 Sep 2017 00:09:18 +0100 Subject: A type checker plugin for row types In-Reply-To: References: <92AFB6E1-449E-446C-B8D8-7FF5094A03C5@ara.io> Message-ID: <757508B5-8E8C-4D3C-A632-53F3C672B8E2@ara.io> Glad I could be of help! I just gave it a read and that generated core is much better than I expected. I’d still have some concerns regarding certain uses (e.g. named arguments) having more performance overhead than hoped, but at this stage it’s far better than I would’ve initially thought! Definitely a useful addition to the wiki page. _ara > On 10 Sep 2017, at 23:58, Nicolas Frisby wrote: > > Whoops! I forgot about that section of my draft. I added a little blurb ("Performance?") Thanks Ara! > > > >> On Sun, Sep 10, 2017 at 3:41 PM Ara Adkins wrote: >> Just given this a read! >> >> It looks like you’ve put a fantastic amount of effort into this so far, and I can certainly see how it’s finding its legs! I’m very much looking forward to seeing this develop further. I can definitely foresee some uses for polykinded column types, and the possibility for named arguments is certainly interesting (though I have some concerns about performance — though none are relevant at such an early stage). >> >> Unfortunately I don’t think I can answer any of the questions that I spotted on my read-through. >> >> Again, I’m looking forward to seeing this develop, and the naming of `coxswain` and `sculls` gave me a giggle. >> >> _ara >> >>> On 10 Sep 2017, at 23:24, Nicolas Frisby wrote: >>> >>> Hi all. I've been spending my free time for the last couple months on a type checker plugin for row types. The free time waxes and wanes; sending an email like this one was my primary goal for the past couple weeks. >>> >>> At the very least, I hoped this project would let me finally get some hands on experience with OutsideIn. And I definitely have. But I've also made more progress than I anticipated, and I think the plugin is starting to have legs! 
>>> >>> I haven't uploaded the code yet to github -- it's not quite ready to share. But I did do a write up on the dev wiki. >>> >>> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >>> >>> I would really appreciate and questions, comments, and --- boy, oh boy --- answers. >>> >>> I hope to upload within a week or so, and I'll update that wiki page and reply to this email when I do. >>> >>> Thanks very much. -Nick >>> >>> P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically interested in this (e.g. plugins, row types, etc). Please feel free to forward to others that come to mind; I know some inboxes abjectly can't afford default list traffic. >>> >>> P.P.S. -- One hold up for the upload is: which license? I intend to release under BSD3, mainly to match GHC since one ideal scenario would involve being packaged with/integrated into GHC. But my brief recent research suggests that the Apache license might be more conducive to eventual widespread adoption. If you'd be willing to advise or even just refer me to other write ups, please feel free to email me directly or to start a separate thread on a more appropriate distribution list (CC'ing me, please). Thanks again. >> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Mon Sep 11 01:25:38 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 11 Sep 2017 06:55:38 +0530 Subject: Semigroup repeat (base package) In-Reply-To: <1505078168.13497.65.camel@jeltsch.info> References: <1505078168.13497.65.camel@jeltsch.info> Message-ID: On 11 September 2017 at 02:46, Wolfgang Jeltsch wrote: > Am Sonntag, den 10.09.2017, 10:39 +0200 schrieb Herbert Valerio Riedel: > > What you seem to be searching for looks more like what we know as > > `cycle :: [a] -> [a]`, and in fact there is its generalisation at > > > > http://hackage.haskell.org/package/base-4.10.0.0/docs/ > Data-Semigroup.html#v:cycle1 > > Why is this function called cycle1, not cycle? What does the “1” stand > for? I guess this is not named "cycle" to avoid conflict with "Data.List.cycle". I was also wondering why it is "cycle1" instead of, say "scycle". It can be thought of as cycling just one value instead of cycling a list in case of "Data.List.cycle". I Just made up this explanation, original writers would know better. I would prefer "scycle" which is consistent with other functions in this package like "stimes", cycle1 sounds a bit random at first look. -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Mon Sep 11 05:43:57 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 11 Sep 2017 11:13:57 +0530 Subject: GHC 8.2.1 release commit Message-ID: Hi, In the GHC git repo how do I figure out which commit belongs to release 8.2.1. I cannot find 8.2.1 in "git tag" output. I tried "git log" and searching for 8.2.1 but there seems to be no definitive comment marking 8.2.1. -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From harendra.kumar at gmail.com Mon Sep 11 05:59:52 2017 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Mon, 11 Sep 2017 11:29:52 +0530 Subject: GHC 8.2.1 release commit In-Reply-To: References: Message-ID: It seems I had only the master branch in my tree. 
A git fetch showed all the branches including 8.2.1. -harendra On 11 September 2017 at 11:13, Harendra Kumar wrote: > Hi, > > In the GHC git repo how do I figure out which commit belongs to release > 8.2.1. I cannot find 8.2.1 in "git tag" output. I tried "git log" and > searching for 8.2.1 but there seems to be no definitive comment marking > 8.2.1. > > -harendra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Mon Sep 11 07:14:14 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 11 Sep 2017 07:14:14 +0000 Subject: A type checker plugin for row types In-Reply-To: <757508B5-8E8C-4D3C-A632-53F3C672B8E2@ara.io> References: <92AFB6E1-449E-446C-B8D8-7FF5094A03C5@ara.io> <757508B5-8E8C-4D3C-A632-53F3C672B8E2@ara.io> Message-ID: Hello Nick, very nice! Do you have any thoughts on how to use rows in class/type family declarations (i.e. how do we match on them)? For example, if I was to use the rows to make up some sort of record, system (i.e. declare `Rec :: Row -> Type`), how might I define the `Show` instance for `Rec`? -Iavor On Mon, Sep 11, 2017 at 12:10 AM Ara Adkins wrote: > Glad I could be of help! I just gave it a read and that generated core is > much better than I expected. I’d still have some concerns regarding certain > uses (e.g. named arguments) having more performance overhead than hoped, > but at this stage it’s far better than I would’ve initially thought! > > Definitely a useful addition to the wiki page. > > > _ara > > On 10 Sep 2017, at 23:58, Nicolas Frisby wrote: > > Whoops! I forgot about that section of my draft. I added a little blurb > ("Performance?") Thanks Ara! > > > > On Sun, Sep 10, 2017 at 3:41 PM Ara Adkins wrote: > >> Just given this a read! >> >> It looks like you’ve put a fantastic amount of effort into this so far, >> and I can certainly see how it’s finding its legs! I’m very much looking >> forward to seeing this develop further. I can definitely foresee some uses >> for polykinded column types, and the possibility for named arguments is >> certainly interesting (though I have some concerns about performance — >> though none are relevant at such an early stage). >> >> Unfortunately I don’t think I can answer any of the questions that I >> spotted on my read-through. >> >> Again, I’m looking forward to seeing this develop, and the naming of >> `coxswain` and `sculls` gave me a giggle. >> >> _ara >> >> On 10 Sep 2017, at 23:24, Nicolas Frisby >> wrote: >> >> Hi all. I've been spending my free time for the last couple months on a >> type checker plugin for row types. The free time waxes and wanes; sending >> an email like this one was my primary goal for the past couple weeks. >> >> At the very least, I hoped this project would let me finally get some >> hands on experience with OutsideIn. And I definitely have. But I've also >> made more progress than I anticipated, and I think the plugin is starting >> to have legs! >> >> I haven't uploaded the code yet to github -- it's not quite ready to >> share. But I did do a write up on the dev wiki. >> >> >> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >> >> I would really appreciate and questions, comments, and --- boy, oh boy >> --- answers. >> >> I hope to upload within a week or so, and I'll update that wiki page and >> reply to this email when I do. >> >> Thanks very much. -Nick >> >> P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically >> interested in this (e.g. 
plugins, row types, etc). Please feel free to >> forward to others that come to mind; I know some inboxes abjectly can't >> afford default list traffic. >> >> P.P.S. -- One hold up for the upload is: which license? I intend to >> release under BSD3, mainly to match GHC since one ideal scenario would >> involve being packaged with/integrated into GHC. But my brief recent >> research suggests that the Apache license might be more conducive to >> eventual widespread adoption. If you'd be willing to advise or even just >> refer me to other write ups, please feel free to email me directly or to >> start a separate thread on a more appropriate distribution list (CC'ing me, >> please). Thanks again. >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at well-typed.com Mon Sep 11 08:34:15 2017 From: adam at well-typed.com (Adam Gundry) Date: Mon, 11 Sep 2017 09:34:15 +0100 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> Hi Nick, This is great work, and I look forward to seeing the code once it is ready. I've had a quick glance over your wiki page, and thought I should send you some initial comments, though it deserves deeper attention which I will try to find time to give it. :-) I don't see a reference to Iavor's paper "Improving Haskell Types with SMT" (http://yav.github.io/publications/improving-smt-types.pdf). If you've not come across it, it might give a useful alternative perspective on how plugins work, especially with regard to derived constraints. The following is based on my faulty memory, so apologies if it is out of date or misleading... > When/where exactly do Derived constraints arise? Suppose I have a class with an equality superclass class a ~ b => C a b and a wanted constraint `C alpha Int`, for some touchable variable `alpha`. This leads to a derived constraint `alpha ~ Int` thanks to the superclass (derived means we don't actually need evidence for it in order to build the core term, but solving it might help fill in some touchable variables). Sorry if this is obvious and not exact enough! > When do touchables "naturally" arise in Given constraints? Do you mean "touchable" or "unification variable" here (and elsewhere?). A skolem is always untouchable, but the converse is not true. I think that unification variables can arise in Given constraints, but that they will always be untouchable. Suppose we have defined f :: forall a b . ((a ~ b) => a -> b) -> Int (never mind that it is ambiguous) and consider type-checking the call `f id`. We end up checking `id` against type `a -> b` with given `a ~ b` where `a` and `b` are unification variables. They must be untouchable, however, otherwise we might unify them, which would be wrong. Hope this helps, Adam On 10/09/17 23:24, Nicolas Frisby wrote: > Hi all. I've been spending my free time for the last couple months on a > type checker plugin for row types. The free time waxes and wanes; > sending an email like this one was my primary goal for the past couple > weeks. > > At the very least, I hoped this project would let me finally get some > hands on experience with OutsideIn. 
And I definitely have. But I've also > made more progress than I anticipated, and I think the plugin is > starting to have legs! > > I haven't uploaded the code yet to github -- it's not quite ready to > share. But I did do a write up on the dev wiki. > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > I would really appreciate and questions, comments, and --- boy, oh boy > --- answers. > > I hope to upload within a week or so, and I'll update that wiki page and > reply to this email when I do. > > Thanks very much. -Nick > > P.S. -- I've CC'd and BCC'd people who I anticipate would be > specifically interested in this (e.g. plugins, row types, etc). Please > feel free to forward to others that come to mind; I know some inboxes > abjectly can't afford default list traffic. > > P.P.S. -- One hold up for the upload is: which license? I intend to > release under BSD3, mainly to match GHC since one ideal scenario would > involve being packaged with/integrated into GHC. But my brief recent > research suggests that the Apache license might be more conducive to > eventual widespread adoption. If you'd be willing to advise or even just > refer me to other write ups, please feel free to email me directly or to > start a separate thread on a more appropriate distribution list (CC'ing > me, please). Thanks again. -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From iavor.diatchki at gmail.com Mon Sep 11 12:08:57 2017 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Mon, 11 Sep 2017 12:08:57 +0000 Subject: A type checker plugin for row types In-Reply-To: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> References: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> Message-ID: Ah, yes, derived constraints. The mapping to logic I have in my mind is as follows: Given = assumption, used in proofs Wanted = goal, needs proof Derived = implied by assumptions and goals A derived constraint may be used to instantiate touchable unification variables, and guarantees that in doing so we are not loosing any generally. Consider, for example, a wanted: `(x + 5) ~ 8`. From this we can generate a new derived constraint `x ~ 3`, which would allow GHC to instantiate `x` to 3. A derived constraint could also be used to detect that the current set of goals is inconsistent, for example if we got a derived constraint equivalent to False, we would immediately know that the current set of goals has no solution. -Iavor On Mon, Sep 11, 2017, 9:35 AM Adam Gundry wrote: > Hi Nick, > > This is great work, and I look forward to seeing the code once it is > ready. I've had a quick glance over your wiki page, and thought I should > send you some initial comments, though it deserves deeper attention > which I will try to find time to give it. :-) > > I don't see a reference to Iavor's paper "Improving Haskell Types with > SMT" (http://yav.github.io/publications/improving-smt-types.pdf). If > you've not come across it, it might give a useful alternative > perspective on how plugins work, especially with regard to derived > constraints. > > The following is based on my faulty memory, so apologies if it is out of > date or misleading... > > > When/where exactly do Derived constraints arise? > > Suppose I have a class with an equality superclass > > class a ~ b => C a b > > and a wanted constraint `C alpha Int`, for some touchable variable > `alpha`. 
This leads to a derived constraint `alpha ~ Int` thanks to the > superclass (derived means we don't actually need evidence for it in > order to build the core term, but solving it might help fill in some > touchable variables). Sorry if this is obvious and not exact enough! > > > When do touchables "naturally" arise in Given constraints? > > Do you mean "touchable" or "unification variable" here (and elsewhere?). > A skolem is always untouchable, but the converse is not true. > > I think that unification variables can arise in Given constraints, but > that they will always be untouchable. Suppose we have defined > > f :: forall a b . ((a ~ b) => a -> b) -> Int > > (never mind that it is ambiguous) and consider type-checking the call `f > id`. We end up checking `id` against type `a -> b` with given `a ~ b` > where `a` and `b` are unification variables. They must be untouchable, > however, otherwise we might unify them, which would be wrong. > > Hope this helps, > > Adam > > > On 10/09/17 23:24, Nicolas Frisby wrote: > > Hi all. I've been spending my free time for the last couple months on a > > type checker plugin for row types. The free time waxes and wanes; > > sending an email like this one was my primary goal for the past couple > > weeks. > > > > At the very least, I hoped this project would let me finally get some > > hands on experience with OutsideIn. And I definitely have. But I've also > > made more progress than I anticipated, and I think the plugin is > > starting to have legs! > > > > I haven't uploaded the code yet to github -- it's not quite ready to > > share. But I did do a write up on the dev wiki. > > > > > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > > > I would really appreciate and questions, comments, and --- boy, oh boy > > --- answers. > > > > I hope to upload within a week or so, and I'll update that wiki page and > > reply to this email when I do. > > > > Thanks very much. -Nick > > > > P.S. -- I've CC'd and BCC'd people who I anticipate would be > > specifically interested in this (e.g. plugins, row types, etc). Please > > feel free to forward to others that come to mind; I know some inboxes > > abjectly can't afford default list traffic. > > > > P.P.S. -- One hold up for the upload is: which license? I intend to > > release under BSD3, mainly to match GHC since one ideal scenario would > > involve being packaged with/integrated into GHC. But my brief recent > > research suggests that the Apache license might be more conducive to > > eventual widespread adoption. If you'd be willing to advise or even just > > refer me to other write ups, please feel free to email me directly or to > > start a separate thread on a more appropriate distribution list (CC'ing > > me, please). Thanks again. > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
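A small, compilable sketch of the exchange above, combining Adam's `class a ~ b => C a b` example with the Given/Wanted/Derived reading Iavor gives. This is illustrative code for GHC 8.x rather than anything posted in the thread; the module and function names are invented, and the last definition assumes that superclass expansion of Wanteds produces the Derived equality exactly as described.

    {-# LANGUAGE GADTs, MultiParamTypeClasses, FlexibleInstances #-}
    module DerivedDemo where

    -- Adam's class with an equality superclass.
    class a ~ b => C a b
    instance C Int Int

    -- Given direction: the Given `C a Int` brings its superclass
    -- `a ~ Int` into scope, so `id` is accepted at type `a -> Int`.
    useSuper :: C a Int => a -> Int
    useSuper = id

    -- Derived direction: the Wanted `C alpha Int` yields the Derived
    -- `alpha ~ Int`, letting GHC instantiate the literal's type to Int
    -- and then discharge `C Int Int` and `Num Int`.
    demo :: Int
    demo = useSuper 5
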
URL: From marlowsd at gmail.com Mon Sep 11 12:14:25 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 11 Sep 2017 13:14:25 +0100 Subject: GHC Threads affinity In-Reply-To: References: Message-ID: On 10 September 2017 at 04:03, Michael Baikov wrote: > Greetings > > > Currently GHC supports two kinds of threads - pinned to a specific > capability (bound threads) and those it can migrate between any > capabilities (unbound threads). For purposes of achieving lower latency in > Haskell applications it would be nice to have something in between - > threads GHC can migrate but within a certain subset of capabilities only. > That's not correct actually: a bound thread is associated with a particular OS thread, but it can migrate between capabilities just like unbound threads. > I'm developing a program that contains several kinds of threads - those > that do little work and sensitive to latency and those that can spend more > CPU time and less latency sensitive. I looked into several cases of > increased latency in those sensitive threads (using GHC eventlog) and in > all cases sensitive threads were waiting for non-sensitive threads to > finish working. I was able to reduce worst case latency by factor of 10 by > pinning all the threads in the program to specific capability but manually > distributing threads (60+ of them) between capabilities (several different > machines with different numbers of cores available) seems very fragile. > World stopping GC is still a problem but at least in my case is much less > frequently so. > If you have a fixed set of threads you might just want to use -N -qn, and then pin every thread to a different capability. This gives you 1:1 scheduling at the GHC level, delegating the scheduling job to the OS. You will also want to use nursery chunks with something like -n2m, so you don't waste too much nursery space on the idle capabilities. Even if your set of threads isn't fixed you might be able to use a hybrid scheme with -N -qn and pin the high-priority threads on their own capability, while putting all the low-priority threads on a single capability, or a few separate ones. It would be nice to be able to allow GHC runtime to migrate a thread > between a subset of capabilities using interface similar to this one: > > -- creates a thread that is allowed to migrate between capabilities > according to following rule: ghc is allowed to run this thread on Nth > capability if Nth `mod` size_of_word bit in mask is set. > forkOn' :: Int -> IO () -> IO ThreadId > forkOn' mask act = undefined > > This should allow to define up to 64 (32) distinct groups and allow user > to break down their threads into bigger number of potentially intersecting > groups by specifying things like capability 0 does latency sensitive > things, caps 1..5 - less sensitive things, caps 6-7 bulk things. > We could do this, but it would add some complexity to the scheduler and load balancer (which has already been quite hard to get right, I fixed a handful of bugs there recently). I'd be happy review a patch if you want to try it though. Cheers Simon Anything obvious I'm missing? Any recommendations to how to implement this? > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
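A rough sketch of the hybrid scheme Simon suggests above: one capability reserved for the latency-sensitive threads and the bulk threads spread round-robin over the rest, using only the existing Control.Concurrent API. The helper names are invented for illustration, and the particular RTS flag values are only an example.

    -- Build with: ghc -threaded Pinning.hs
    -- Run with something like: ./Pinning +RTS -N8 -qn2 -n2m
    module Pinning (forkSensitive, forkBulk) where

    import Control.Concurrent (ThreadId, forkOn, getNumCapabilities)

    -- Latency-sensitive threads all go to capability 0.
    forkSensitive :: IO () -> IO ThreadId
    forkSensitive = forkOn 0

    -- Bulk threads get capabilities 1..N-1, keeping capability 0 free;
    -- the caller supplies a running index to spread them out.
    forkBulk :: Int -> IO () -> IO ThreadId
    forkBulk i act = do
      n <- getNumCapabilities
      let others = max 1 (n - 1)   -- avoid `mod` 0 when run with -N1
      forkOn (1 + i `mod` others) act
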
URL: From manpacket at gmail.com Mon Sep 11 12:54:07 2017 From: manpacket at gmail.com (Michael Baikov) Date: Mon, 11 Sep 2017 20:54:07 +0800 Subject: GHC Threads affinity In-Reply-To: References: Message-ID: >> I'm developing a program that contains several kinds of threads - those that do little work and sensitive to latency and those that can spend more CPU time and less latency sensitive. I looked into several cases of increased latency in those sensitive threads (using GHC eventlog) and in all cases sensitive threads were waiting for non-sensitive threads to finish working. I was able to reduce worst case latency by factor of 10 by pinning all the threads in the program to specific capability but manually distributing threads (60+ of them) between capabilities (several different machines with different numbers of cores available) seems very fragile. World stopping GC is still a problem but at least in my case is much less frequently so. > > If you have a fixed set of threads you might just want to use -N -qn, and then pin every thread to a different capability. This gives you 1:1 scheduling at the GHC level, delegating the scheduling job to the OS. You will also want to use nursery chunks with something like -n2m, so you don't waste too much nursery space on the idle capabilities. > > Even if your set of threads isn't fixed you might be able to use a hybrid scheme with -N -qn and pin the high-priority threads on their own capability, while putting all the low-priority threads on a single capability, or a few separate ones. There's about 80 threads right now and some of them are very short lived. Most of them are low priority and require lots of CPU which means having to manually distribute them over several capabilities - this process I'd like to avoid. >> It would be nice to be able to allow GHC runtime to migrate a thread between a subset of capabilities using interface similar to this one: >> >> -- creates a thread that is allowed to migrate between capabilities according to following rule: ghc is allowed to run this thread on Nth capability if Nth `mod` size_of_word bit in mask is set. >> forkOn' :: Int -> IO () -> IO ThreadId >> forkOn' mask act = undefined >> >> This should allow to define up to 64 (32) distinct groups and allow user to break down their threads into bigger number of potentially intersecting groups by specifying things like capability 0 does latency sensitive things, caps 1..5 - less sensitive things, caps 6-7 bulk things. > > > We could do this, but it would add some complexity to the scheduler and load balancer (which has already been quite hard to get right, I fixed a handful of bugs there recently). I'd be happy review a patch if you want to try it though. I guess I'll start by studying the scheduler and load balancer in more details. Thank you for your input Simon! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jweakly at pdx.edu Mon Sep 11 16:43:38 2017 From: jweakly at pdx.edu (Jared Weakly) Date: Mon, 11 Sep 2017 09:43:38 -0700 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: Hi Phyx, Sorry for the late reply. I already implemented a wait and retry approach since it seemed the most sensible. Let me see if I can clear this up a bit. Suppose you want to check out a new branch and work on a feature. 
You check out that branch, run the testsuite once (using ONLY_PERFORMANCE_TESTS=YES if you just want to populate the performance test comparison data). Every commit you would ideally run the testsuite again, although you don't have to. Let's say you finish up your new feature in 10 commits and you ran the testsuite in each commit; you can use the comparison tool across all 10 commits to track performance changes over the development of the feature. But if you only ran the performance testsuite once before you started coding the feature, you could run the testsuite again after you're finished and then use the comparison tool using the last and first commit as data-points. If you forgot to run the performance testsuite at all you can always check out the first commit and run it on there and then check the branch back out and compare from the HEAD. While the notes will not be pushed, if you create a branch from your branch, the notes will copy over. They will also not be deleted ever (although they are tied to their respective commit hashes so a rebase will "remove them" as it changes the commit hashes; however you can manually copy over the notes from the "old hash" to the "new hash" if you want to--the data is not lost ever). As such, long time GHC developers will eventually build up a very long and respectable data-set that will be almost as comprehensive as the data-set that the CI builders will develop. Your local history of performance tests will be as perfect as you want it to be since it doesn't really matter for the CI story. Does that clarify things regarding the local story? -------------- Now, on the CI side of things... You are right that performance is very platform specific. In fact, due to the very platform specific nature of performance testing, it's just really ridiculously hard to come up with a "perfect" way to normalize scores and have some way to push numbers somewhere to be referenced. So, as far as I can see, the only really sensible approach is to only consider numbers gathered by "that" computer relative to each other (as performance is something that is inherently relative to some perspective). This means that, yes, if there is not a CI running tests for that OS then automated performance regression testing will suffer a bit; however, if there is not a CI running tests for that OS then that OS is not being tested *at all* and the lack of accurate performance regression testing is the least of GHC's concerns. The way I see it is that performance can only truthfully be tracked on platforms it's actually measured on. Further, *correctness* of the compiler can only truthfully be tracked on platforms that it's being tested on; if no CI or builder exists for OSX or Windows at all, then the entire concept of measuring performance is already moot; how can one evaluate a comparison that never happened? This change should make things much easier as regressions can be very tightly tracked on every platform that the testsuite is regularly ran on. Right now, Linux is the only platform that the tests are regularly run on; it is a lie to continue to pretend that the testsuite is fully functional on Windows (it is not) and on OSX (it is not)--in an environment where "clean test run" means "same amount of broken tests as there used to be", any performance metric would just be a lie. 
I'd even go so far as to say that if any builder is broken at all, the performance has ceased to be measured in any meaningful way and that all of the numbers that exist inside the current testsuite are basically meaningless for OSX and Windows and only *barely* useful for Linux. On the other hand, information gathered from Linux can now be used to tightly control performance regressions in GHC for Linux; since Linux is the only platform being tested regularly right now, it's an overall vast improvement since any tight regression control is better than none. Once a regular CI builder is constructed for OSX and one is constructed for Windows, the situation will improve by leaps and bounds--entirely for free. I look forward to your thoughts, Jared On Wed, Sep 6, 2017 at 11:44 PM, Phyx wrote: > Hi Jared, > > > On Tue, Sep 5, 2017, 19:39 Jared Weakly wrote: >> >> Hi Tamar, >> >> That framework failure is due to a somewhat embarrassing error that I >> thought I had caught earlier; line 298 shouldn't have existed (it was >> a small mistake from converting the all.T file from using the old >> function to using the new collect_stats function. I have fixed this >> and it will be pushed by the time you read this email. That being >> said, the individual tests or units are very isolated and a framework >> failure simply means that unit didn't get run; in this case it means >> that entire all.T file didn't get run since the error was in reading >> the file, but the rest of the files should've had their performance >> tests recorded properly. >> >> The .git/refs/notes/perf is an implementation detail. Git notes have >> the concept of namespaces; so, in order to avoid cluttering up a >> global namespace in git notes with stuff only the performance tests >> will use, all the performance metrics are stored in the namespace >> 'perf'. >> >> This format of the git notes is mentioned in the code for the >> testsuite but I will make this more visible in the README and other >> documentation. The format of the git notes is: >> $test-env $test_name $test_way $metric_measured $value_collected >> (separated by tabs) >> >> The maximum deviation the test allows is inside the respective >> all.T's; additionally, if you set the verbosity level of the >> test-suite to a value >= 4, you will see the expected value, allowed >> deviation, lower bound, upper bound, the actual measured value, and >> (if the test fell outside the bounds) how much the actual value >> deviated from the expected value. This information will also print if >> the test falls outside of the allowed bounds. >> >> perf_notes.py exists as both an internal library and a measurement >> tool (hopefully to be useful to developers). You can give the tool >> several commits and it will give you a comparison of the union of all >> the tests in those commits, with an output very similar in style to >> noFib. I imagine this will be useful mostly to people who want to >> improve the performance of the compiler so they can see which tests >> have regressed the most over time (or which have improved the most >> over time); but as it works over commits, it can also be useful for a >> developer wanting to know if they've made a measurable difference with >> their patch. >> >> The notes are updated every time the testsuite is ran. However they >> are updated only at the very end of the execution of the testsuite in >> a single command (the information is collected in a python >> datastructure which is turned into a string and given to git notes). 
>> This behavior means that if the testsuite is ran more than once >> in-between commits that 'duplicate values' will exist in the git >> notes. I'm not quite sure how to deal with this yet; I am considering >> just grabbing the latest value if multiples exist. This also means you >> can test just one test and then run other tests and have those values >> added into the git notes without losing your older values which is why >> the behavior is kept this way. (I will make sure this is more >> prominent in the docs somewhere). >> >> The note update is done using python's subprocess library. I have no >> idea how resilient that is to git failure; I'd imagine that if it was >> busy it would just silently fail to update. Fortunately, the update >> process is as close to atomic as one can get. I'll see if I can figure >> out a way to force a repo lock to test this out. I'm open to >> suggestions as to how to deal with this better and I'll also google >> around and see if anyone has a good solution. > > > I think you can just wait and retry and do so a specified max number of r > times. That should be good enough for most cases. Unless the process on the > other end has died but then the user needs to clean up the lock firsts. > >> >> Platform discrepancies are completely sidestepped because of the way >> git notes work. The performance metrics are entirely local and stay on >> your computer; they won't be pushed or shared with any other users. >> That means that the performance numbers are completely tailored to >> your platform so there is effectively an 0% margin of "OS-related" >> error that needs to be accounted for. The collect_stats function is >> very much designed to be declarative and "set it and forget it". As >> such, the need to even record values at all is obsoleted (one of the >> main motivators of this project in the first place). > > > This I don't quite understand. If I get this right. It means I will now > always have to run the full performance benchmark suite for each change I > have twice? Before and after locally? > > I had thought the notes would be pushed as I would find it useful to have > the perfect history locally if I wanted to. Performance by its very nature > is very platform specific, I feel that this change makes it harder for the > platforms we don't have a CI for to run benchmarks. So basically only Linux. > > This would be unfortunate as it would mean we would effectively stop > tracking performance on e.g. Windows and Mac OS since the current > implementation doesn't allow for the data to live together in the same repo. > >> >> >> Hopefully this answers some questions; I'll make sure this sort of >> information is available somewhere so that later users can find these >> answers again. Thanks for your thoughts! They were very helpful. >> >> Regards, >> Jared >> >> On Mon, Sep 4, 2017 at 10:02 PM, Phyx wrote: >> > Hi Jared, >> > >> > First off, thanks for all the hard work on this. I checked out your >> > branch >> > and made a run, I noticed at the end it had >> > >> > Framework failures: >> > . ./perf/compiler/all.T [] (unexpected indent (, line 298)) >> > >> > so I assume none of the perf tests were run? >> > >> > Though I do see a .git/refs/notes/perf, so I assume your ref is is perf? 
>> > >> > Doing a git notes --ref perf show I see somethings were collected at >> > some >> > point >> > >> > local T3924 normal bytes allocated 47064 >> > local haddock.base normal bytes allocated 18427047160 >> > local haddock.Cabal normal bytes allocated 15863910848 >> > local haddock.compiler normal bytes allocated 50656428952 >> > >> > which brings me up to my first question, I'm guessing the number here is >> > the >> > number of bytes allocated for the test? Is there a way for me to see >> > what the maximum deviation the test allows is and how far I am from it? >> > Do I >> > just get the information like before only when a test fails? How does >> > that >> > look like? Same as before? >> > >> > It's also not entirely clear to be what perf_notes.py can be used, is it >> > just an infrastructure tool? or is it something you foresee as useful >> > for a >> > developer? >> > >> > lastly, how often do you update notes? It's probably too late for this >> > now, >> > but git, especially msys git can be especially slow, so I would have >> > liked >> > the notes to be updated in batches to not slow down the testsuite run on >> > Windows. >> > >> > Which brings me to my next question, how resilient are you to failures >> > updating git? some IDE/environments like vscode automatically issue git >> > operations in the background. so git may be busy when you try to update >> > and >> > the operation would fail saying the repo is locked. Does your new system >> > recover from such failures? >> > >> > Also how do you deal with platform discrepancies? We've had in the past >> > tests that behave radically different on different platforms, so we've >> > also >> > historically had the ability to record a platform specific value. >> > >> > Thanks, >> > Tamar >> > >> > On Fri, Sep 1, 2017, 05:01 Jared Weakly wrote: >> >> >> >> Hey y'all, >> >> >> >> A quick ToC before I dive right in: >> >> >> >> * What my HSOC project is on >> >> * My progress so far >> >> * Feedback welcome >> >> * What I have left to do >> >> * Theoretical potential improvements >> >> >> >> ----------- >> >> >> >> My HSOC project was on bringing sanity to the GHC performance >> >> test-suite. >> >> My blog post on this is here: >> >> https://jaredweakly.com/blog/haskell-summer-of-code/ >> >> The Trac ticket that corresponds to this is here: >> >> https://ghc.haskell.org/trac/ghc/ticket/12758 >> >> The Phabricator ticket for this patch: >> >> https://phabricator.haskell.org/D3758 >> >> >> >> The tl;dr of my HSOC project is that GHC's performance tests currently >> >> require the programmer to add in expected numbers manually, updated >> >> them, handhold the testsuite, etc. This is a bit absurd and my >> >> project's overall aim is to reduce the effort required of the >> >> programmer to as close to zero as possible while simultaneously >> >> increasing the potential ability of the testsuite to catch regressions >> >> as much as possible. >> >> >> >> ------------ >> >> >> >> My progress so far: >> >> - I have a few comparison tools in perf_notes.py. 
These allow people >> >> to compare performance numbers of tests across commits >> >> - I have all the performance numbers generated by running the tests >> >> automatically stored in git notes and referenced by both the >> >> comparison tool and the testsuite >> >> - I have refactored the testsuite to use my new code that pulls >> >> expected numbers automatically from git notes (trivially passing if >> >> the note does not yet exist for that test), then it compares that >> >> expected number with the number that was gotten from running the >> >> testsuite on the latest commit. The comparison passes if it's within a >> >> certain deviation (20% by default, but can be customized by the >> >> programmer). >> >> - I have refactored all of the all.T files to use the new comparison >> >> functions for the performance tests and ensured that this doesn't >> >> break any existing tests. >> >> >> >> ------------ >> >> >> >> >> >> Anyone who wants to checkout the wip/perf-testsuite and try this out >> >> is more than welcome. Feedback on anything is welcome; comments are >> >> appreciated; discussion is welcome, etc. >> >> >> >> ------------- >> >> >> >> >> >> What I have left to do is: >> >> >> >> 1. Finish writing up the documentation >> >> 2. Update the wiki in all the relevant places concerning >> >> additions/modifications to the testsuite and test driver >> >> 3. Make sure everyone is happy with the change (and make small changes >> >> as necessary) >> >> >> >> -------------- >> >> >> >> Possible features and improvements I am thinking about adding in: >> >> * As a stopgap to full integration with performance tracking tools >> >> (such as Gipedia), optionally emitting a test warning with the test >> >> summary if there is any regression detected whatsoever (even if the >> >> number falls within the allowed deviation) >> >> * Some tests, such as T7702, have a somewhat nonsensical regression >> >> percentage. Ideally the testsuite could handle those better. I could >> >> potentially build in multiple ways to determine a regression >> >> (percentage, 'above a certain value', 'taking longer than X amount of >> >> time', as potential examples) >> >> * Currently some tests require installing some Haskell packages; they >> >> are skipped if the packages are not installed. I could try to build in >> >> a way to automatically attempt to install all necessary Haskell >> >> packages if someone attempts to run a test that requires them. >> >> (Perhaps using a command such as 'make test exhaustive') >> >> * The performance metric 'peak_megabytes' is sometimes not accurate >> >> enough; I could see if adding something like `RTS -h -i0.01` >> >> automatically to tests that use 'peak_megabytes' would resolve that. >> >> Currently it is a manual debugging step. >> >> >> >> Any thoughts? Comments? Questions? 
>> >> >> >> Regards, >> >> Jared Weakly >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From nicolas.frisby at gmail.com Mon Sep 11 19:07:25 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Mon, 11 Sep 2017 19:07:25 +0000 Subject: A type checker plugin for row types In-Reply-To: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> References: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> Message-ID: Adam, thanks for: 1) The reference to Iavor's paper --- it is a nice more-detailed description of the plugin API/semantics, and the Nelson-Oppen parallel is very illuminating! 2) Asking "Do you mean "touchable" or "unification variable" here and elsewhere?" That prompted me to finally dig deeper into something, and I've updated/simplified the wiki page accordingly. Basically, I was just using newFlexiTyVar (since it's pretty much the only option in the "official" TcPluginsM interface) without understanding if it's the touchability or the skolem-vs-unification status that was enabling the Given-Given interactions. I'm happy to report that touchability apparently has nothing to do with any of my test cases (including the record and variant library, etc). I'm relieved about that: touchability is a restriction on unification, and my general goal with my plugin architecture is to leave as many of the unification details to GHC's type equality solver as possible. Thanks. -Nick On Mon, Sep 11, 2017 at 1:34 AM Adam Gundry wrote: > Hi Nick, > > This is great work, and I look forward to seeing the code once it is > ready. I've had a quick glance over your wiki page, and thought I should > send you some initial comments, though it deserves deeper attention > which I will try to find time to give it. :-) > > I don't see a reference to Iavor's paper "Improving Haskell Types with > SMT" (http://yav.github.io/publications/improving-smt-types.pdf). If > you've not come across it, it might give a useful alternative > perspective on how plugins work, especially with regard to derived > constraints. > > The following is based on my faulty memory, so apologies if it is out of > date or misleading... > > > When/where exactly do Derived constraints arise? > > Suppose I have a class with an equality superclass > > class a ~ b => C a b > > and a wanted constraint `C alpha Int`, for some touchable variable > `alpha`. This leads to a derived constraint `alpha ~ Int` thanks to the > superclass (derived means we don't actually need evidence for it in > order to build the core term, but solving it might help fill in some > touchable variables). Sorry if this is obvious and not exact enough! > > > When do touchables "naturally" arise in Given constraints? > > Do you mean "touchable" or "unification variable" here (and elsewhere?). > A skolem is always untouchable, but the converse is not true. > > I think that unification variables can arise in Given constraints, but > that they will always be untouchable. Suppose we have defined > > f :: forall a b . ((a ~ b) => a -> b) -> Int > > (never mind that it is ambiguous) and consider type-checking the call `f > id`. We end up checking `id` against type `a -> b` with given `a ~ b` > where `a` and `b` are unification variables. They must be untouchable, > however, otherwise we might unify them, which would be wrong. > > Hope this helps, > > Adam > > > On 10/09/17 23:24, Nicolas Frisby wrote: > > Hi all. 
I've been spending my free time for the last couple months on a > > type checker plugin for row types. The free time waxes and wanes; > > sending an email like this one was my primary goal for the past couple > > weeks. > > > > At the very least, I hoped this project would let me finally get some > > hands on experience with OutsideIn. And I definitely have. But I've also > > made more progress than I anticipated, and I think the plugin is > > starting to have legs! > > > > I haven't uploaded the code yet to github -- it's not quite ready to > > share. But I did do a write up on the dev wiki. > > > > > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > > > I would really appreciate and questions, comments, and --- boy, oh boy > > --- answers. > > > > I hope to upload within a week or so, and I'll update that wiki page and > > reply to this email when I do. > > > > Thanks very much. -Nick > > > > P.S. -- I've CC'd and BCC'd people who I anticipate would be > > specifically interested in this (e.g. plugins, row types, etc). Please > > feel free to forward to others that come to mind; I know some inboxes > > abjectly can't afford default list traffic. > > > > P.P.S. -- One hold up for the upload is: which license? I intend to > > release under BSD3, mainly to match GHC since one ideal scenario would > > involve being packaged with/integrated into GHC. But my brief recent > > research suggests that the Apache license might be more conducive to > > eventual widespread adoption. If you'd be willing to advise or even just > > refer me to other write ups, please feel free to email me directly or to > > start a separate thread on a more appropriate distribution list (CC'ing > > me, please). Thanks again. > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wolfgang-it at jeltsch.info Mon Sep 11 20:12:04 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Mon, 11 Sep 2017 23:12:04 +0300 Subject: Semigroup repeat (base package) In-Reply-To: References: <1505078168.13497.65.camel@jeltsch.info> Message-ID: <1505160724.13497.218.camel@jeltsch.info> Am Montag, den 11.09.2017, 06:55 +0530 schrieb Harendra Kumar: > On 11 September 2017 at 02:46, Wolfgang Jeltsch wrote: > > Am Sonntag, den 10.09.2017, 10:39 +0200 schrieb Herbert Valerio > > Riedel: > > > What you seem to be searching for looks more like what we know as > > > `cycle :: [a] -> [a]`, and in fact there is its generalisation at > > > > > > http://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Semigroup.html#v:cycle1 > > > > Why is this function called cycle1, not cycle? What does the “1” > > stand for? > > I guess this is not named "cycle" to avoid conflict with > "Data.List.cycle". Why? We have qualified imports. It seems very wrong to add single characters to identifiers to denote name spaces. > I was also wondering why it is "cycle1" instead of, say "scycle". It > can be thought of as cycling just one value instead of cycling a list > in case of "Data.List.cycle". Also Data.List.cycle cycles only one value. It is just that this single value happens to be a list. If you specialize cycle1 to the list monoid, you get exactly Data.List.cycle. 
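A quick, self-contained check of that claim, using nothing beyond base; the comment restates cycle1's definition, which is essentially the srepeat recursion quoted earlier in this thread:

    import Data.Semigroup (cycle1)
    import qualified Data.List as List

    -- In base, cycle1 x = x' where x' = x <> x'
    main :: IO ()
    main = do
      print (take 7 (cycle1     [1, 2, 3 :: Int]))  -- [1,2,3,1,2,3,1]
      print (take 7 (List.cycle [1, 2, 3 :: Int]))  -- the same list
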
All the best, Wolfgang From ben at smart-cactus.org Mon Sep 11 13:48:12 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 11 Sep 2017 14:48:12 +0100 Subject: A type checker plugin for row types In-Reply-To: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> References: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> Message-ID: <382B9451-CF68-417E-BF53-127B92A1D081@smart-cactus.org> On September 11, 2017 9:34:15 AM GMT+01:00, Adam Gundry wrote: >Hi Nick, > >This is great work, and I look forward to seeing the code once it is >ready. I've had a quick glance over your wiki page, and thought I >should >send you some initial comments, though it deserves deeper attention >which I will try to find time to give it. :-) > >I don't see a reference to Iavor's paper "Improving Haskell Types with >SMT" (http://yav.github.io/publications/improving-smt-types.pdf). If >you've not come across it, it might give a useful alternative >perspective on how plugins work, especially with regard to derived >constraints. > >The following is based on my faulty memory, so apologies if it is out >of >date or misleading... > >> When/where exactly do Derived constraints arise? > >Suppose I have a class with an equality superclass > > class a ~ b => C a b > >and a wanted constraint `C alpha Int`, for some touchable variable >`alpha`. This leads to a derived constraint `alpha ~ Int` thanks to the >superclass (derived means we don't actually need evidence for it in >order to build the core term, but solving it might help fill in some >touchable variables). Sorry if this is obvious and not exact enough! > >> When do touchables "naturally" arise in Given constraints? > >Do you mean "touchable" or "unification variable" here (and >elsewhere?). >A skolem is always untouchable, but the converse is not true. > >I think that unification variables can arise in Given constraints, but >that they will always be untouchable. Suppose we have defined > > f :: forall a b . ((a ~ b) => a -> b) -> Int > >(never mind that it is ambiguous) and consider type-checking the call >`f >id`. We end up checking `id` against type `a -> b` with given `a ~ b` >where `a` and `b` are unification variables. They must be untouchable, >however, otherwise we might unify them, which would be wrong. > >Hope this helps, > >Adam > > >On 10/09/17 23:24, Nicolas Frisby wrote: >> Hi all. I've been spending my free time for the last couple months on >a >> type checker plugin for row types. The free time waxes and wanes; >> sending an email like this one was my primary goal for the past >couple >> weeks. >> >> At the very least, I hoped this project would let me finally get some >> hands on experience with OutsideIn. And I definitely have. But I've >also >> made more progress than I anticipated, and I think the plugin is >> starting to have legs! >> >> I haven't uploaded the code yet to github -- it's not quite ready to >> share. But I did do a write up on the dev wiki. >> >> >> >https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >> >> I would really appreciate and questions, comments, and --- boy, oh >boy >> --- answers. >> >> I hope to upload within a week or so, and I'll update that wiki page >and >> reply to this email when I do. >> >> Thanks very much. -Nick >> >> P.S. -- I've CC'd and BCC'd people who I anticipate would be >> specifically interested in this (e.g. plugins, row types, etc). >Please >> feel free to forward to others that come to mind; I know some inboxes >> abjectly can't afford default list traffic. 
>> >> P.P.S. -- One hold up for the upload is: which license? I intend to >> release under BSD3, mainly to match GHC since one ideal scenario >would >> involve being packaged with/integrated into GHC. But my brief recent >> research suggests that the Apache license might be more conducive to >> eventual widespread adoption. If you'd be willing to advise or even >just >> refer me to other write ups, please feel free to email me directly or >to >> start a separate thread on a more appropriate distribution list >(CC'ing >> me, please). Thanks again. > > >-- >Adam Gundry, Haskell Consultant >Well-Typed LLP, http://www.well-typed.com/ >_______________________________________________ >ghc-devs mailing list >ghc-devs at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs It would be great if someone could extract the conclusion of this thread into a Note. Clearly there is a hole in the current state of our source documentation. Cheers, - Ben From sgraf1337 at gmail.com Tue Sep 12 07:19:48 2017 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Tue, 12 Sep 2017 09:19:48 +0200 Subject: Semigroup repeat (base package) In-Reply-To: <1505160724.13497.218.camel@jeltsch.info> References: <1505078168.13497.65.camel@jeltsch.info> <1505160724.13497.218.camel@jeltsch.info> Message-ID: It's the same convention as with other Semigroup-like functions, such as `foldl1`, `scanl1`, etc. Doesn't really makes sense to distinguish between `cycle` and `cycle1` in this case, but that's just bike shedding. Also, at some point in the future, `cycle` can go in `Data.OldList` and be replaced by `cycle1`, renamed accordingly. On Mon, Sep 11, 2017 at 10:12 PM, Wolfgang Jeltsch wrote: > Am Montag, den 11.09.2017, 06:55 +0530 schrieb Harendra Kumar: > > On 11 September 2017 at 02:46, Wolfgang Jeltsch wrote: > > > Am Sonntag, den 10.09.2017, 10:39 +0200 schrieb Herbert Valerio > > > Riedel: > > > > What you seem to be searching for looks more like what we know as > > > > `cycle :: [a] -> [a]`, and in fact there is its generalisation at > > > > > > > > http://hackage.haskell.org/package/base-4.10.0.0/docs/ > Data-Semigroup.html#v:cycle1 > > > > > > Why is this function called cycle1, not cycle? What does the “1” > > > stand for? > > > > I guess this is not named "cycle" to avoid conflict with > > "Data.List.cycle". > > Why? We have qualified imports. It seems very wrong to add single > characters to identifiers to denote name spaces. > > > I was also wondering why it is "cycle1" instead of, say "scycle". It > > can be thought of as cycling just one value instead of cycling a list > > in case of "Data.List.cycle". > > Also Data.List.cycle cycles only one value. It is just that this single > value happens to be a list. If you specialize cycle1 to the list monoid, > you get exactly Data.List.cycle. > > All the best, > Wolfgang > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
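For readers comparing the two conventions, a small runnable sketch (the example values are invented, not taken from this thread):

    import Data.Semigroup (cycle1)

    main :: IO ()
    main = do
      print (foldl  (+) 0 ([] :: [Int]))   -- 0: the seeded fold accepts empty input
      print (foldl1 (+) [1, 2, 3 :: Int])  -- 6: the "1" variant seeds with the first element
      print (scanl1 (+) [1, 2, 3 :: Int])  -- [1,3,6]
      print (take 5 (cycle1 "ab"))         -- "ababa": cycle1 needs only a Semigroup value, no list
      -- foldl1 (+) ([] :: [Int])          -- would fail at runtime: Prelude.foldl1: empty list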
URL: From takenobu.hs at gmail.com Tue Sep 12 12:15:05 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Tue, 12 Sep 2017 21:15:05 +0900 Subject: GHC Threads affinity In-Reply-To: References: Message-ID: Hi, Here is a simple diagram of forkIO, forkOn and forkOS: https://takenobu-hs.github.io/downloads/haskell_ghc_illustrated.pdf#page=69 Regards, Takenobu 2017-09-11 21:54 GMT+09:00 Michael Baikov : > > >> I'm developing a program that contains several kinds of threads - those > that do little work and sensitive to latency and those that can spend more > CPU time and less latency sensitive. I looked into several cases of > increased latency in those sensitive threads (using GHC eventlog) and in > all cases sensitive threads were waiting for non-sensitive threads to > finish working. I was able to reduce worst case latency by factor of 10 by > pinning all the threads in the program to specific capability but manually > distributing threads (60+ of them) between capabilities (several different > machines with different numbers of cores available) seems very fragile. > World stopping GC is still a problem but at least in my case is much less > frequently so. > > > > If you have a fixed set of threads you might just want to use > -N -qn, and then pin every thread to a different > capability. This gives you 1:1 scheduling at the GHC level, delegating the > scheduling job to the OS. You will also want to use nursery chunks with > something like -n2m, so you don't waste too much nursery space on the idle > capabilities. > > > > Even if your set of threads isn't fixed you might be able to use a > hybrid scheme with -N -qn and pin the high-priority threads > on their own capability, while putting all the low-priority threads on a > single capability, or a few separate ones. > > There's about 80 threads right now and some of them are very short lived. > Most of them are low priority and require lots of CPU which means having to > manually distribute them over several capabilities - this process I'd like > to avoid. > > >> It would be nice to be able to allow GHC runtime to migrate a thread > between a subset of capabilities using interface similar to this one: > >> > >> -- creates a thread that is allowed to migrate between capabilities > according to following rule: ghc is allowed to run this thread on Nth > capability if Nth `mod` size_of_word bit in mask is set. > >> forkOn' :: Int -> IO () -> IO ThreadId > >> forkOn' mask act = undefined > >> > >> This should allow to define up to 64 (32) distinct groups and allow > user to break down their threads into bigger number of potentially > intersecting groups by specifying things like capability 0 does latency > sensitive things, caps 1..5 - less sensitive things, caps 6-7 bulk things. > > > > > > We could do this, but it would add some complexity to the scheduler and > load balancer (which has already been quite hard to get right, I fixed a > handful of bugs there recently). I'd be happy review a patch if you want to > try it though. > > > I guess I'll start by studying the scheduler and load balancer in more > details. Thank you for your input Simon! > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
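A rough, untested sketch of the hybrid scheme described above (run with something like +RTS -N8 -qn2 -RTS; the worker bodies and thread counts are invented purely for illustration):

    import Control.Concurrent (forkOn, getNumCapabilities, threadDelay)
    import Control.Monad (forM_, forever)

    main :: IO ()
    main = do
      caps <- getNumCapabilities
      -- Reserve capability 0 for the latency-sensitive thread...
      _ <- forkOn 0 latencySensitiveWorker
      -- ...and spread the bulk workers over the remaining capabilities.
      forM_ [1 .. 19 :: Int] $ \i ->
        forkOn (1 + i `mod` max 1 (caps - 1)) bulkWorker
      threadDelay (10 * 1000 * 1000)

    latencySensitiveWorker :: IO ()
    latencySensitiveWorker = forever (threadDelay 1000)    -- placeholder body

    bulkWorker :: IO ()
    bulkWorker = forever (threadDelay 50000)               -- placeholder body

The -N/-qn combination is the one suggested in the quoted discussion; the modulo distribution is just one possible way to spread the bulk work, not the only one.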
URL: From wolfgang-it at jeltsch.info Tue Sep 12 13:03:17 2017 From: wolfgang-it at jeltsch.info (Wolfgang Jeltsch) Date: Tue, 12 Sep 2017 16:03:17 +0300 Subject: Semigroup repeat (base package) In-Reply-To: References: <1505078168.13497.65.camel@jeltsch.info> <1505160724.13497.218.camel@jeltsch.info> Message-ID: <1505221397.13497.243.camel@jeltsch.info> No. Functions like foldl1 are named such because they start building a value with the first (“1”) value of a list and consequently do not work with empty lists. They have counterparts without the “1” in their names, which receive the initial value as an extra argument. Things are completely different with cycle1, which does not even take a list. All the best, Wolfgang Am Dienstag, den 12.09.2017, 09:19 +0200 schrieb Sebastian Graf: > It's the same convention as with other Semigroup-like functions, such > as `foldl1`, `scanl1`, etc. > Doesn't really makes sense to distinguish between `cycle` and `cycle1` > in this case, but that's just bike shedding. > > Also, at some point in the future, `cycle` can go in `Data.OldList` > and be replaced by `cycle1`, renamed accordingly. > > On Mon, Sep 11, 2017 at 10:12 PM, Wolfgang Jeltsch > h.info> wrote: > > Am Montag, den 11.09.2017, 06:55 +0530 schrieb Harendra Kumar: > > > On 11 September 2017 at 02:46, Wolfgang Jeltsch wrote: > > > > Am Sonntag, den 10.09.2017, 10:39 +0200 schrieb Herbert Valerio > > > > Riedel: > > > > > What you seem to be searching for looks more like what we know > > as > > > > > `cycle :: [a] -> [a]`, and in fact there is its generalisation > > at > > > > > > > > > > http://hackage.haskell.org/package/base-4.10.0.0/docs/Data-Sem > > igroup.html#v:cycle1 > > > > > > > > Why is this function called cycle1, not cycle? What does the “1” > > > > stand for? > > > > > > I guess this is not named "cycle" to avoid conflict with > > > "Data.List.cycle". > > > > Why? We have qualified imports. It seems very wrong to add single > > characters to identifiers to denote name spaces. > > > > > I was also wondering why it is "cycle1" instead of, say "scycle". > > It > > > can be thought of as cycling just one value instead of cycling a > > list > > > in case of "Data.List.cycle". > > > > Also Data.List.cycle cycles only one value. It is just that this > > single > > value happens to be a list. If you specialize cycle1 to the list > > monoid, > > you get exactly Data.List.cycle. > > > > All the best, > > Wolfgang > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Sep 12 20:30:43 2017 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 12 Sep 2017 21:30:43 +0100 Subject: Phab: conditional approval In-Reply-To: <95D2EA45-B7A1-497E-B174-2D8037054C36@cs.brynmawr.edu> References: <95D2EA45-B7A1-497E-B174-2D8037054C36@cs.brynmawr.edu> Message-ID: On 19 August 2017 at 03:56, Richard Eisenberg wrote: > Hi devs, > > When reviewing a diff on Phab, I can "accept" or "request changes". > Sometimes, though, I want to do both: I suggest very minor (e.g., typo) > changes, but then when these changes are made, I accept. I'm leery of > making the suggestions and saying "accept", because then someone working > quickly may merge without noticing the typos. Does Phab have such an option? 
> "Accept with nits" is standard practice, but you're right it can go wrong when someone else is merging accepted diffs. We could adopt a standard comment keyword, e.g. "NITS" that indicates you'd like the nits to be fixed before committing, perhaps? Also, I don't think it's a good idea to merge commits when the author is a committer, they can land themselves. Cheers Simon > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From klebinger.andreas at gmx.at Wed Sep 13 10:23:04 2017 From: klebinger.andreas at gmx.at (Andreas Klebinger) Date: Wed, 13 Sep 2017 12:23:04 +0200 Subject: Feedback on applying implicit exceptions wanted Message-ID: <59B90708.5050108@gmx.at> I'm currently working out ideas for potential optimizations in pattern matching compilation for GHC. While I think the theory works out fine in regards to exceptions I would like feedback on the possible implications. Trac ticket with details is here: https://ghc.haskell.org/trac/ghc/ticket/14201#comment:4 As for the theory I worked based of the assumptions that error = bottom and pattern match failure = error. Otherwise we get different results from both cases even in theory. While I mostly worry about the practical implications I would also appreciate any other feedback. From facundo.dominguez at tweag.io Wed Sep 13 11:58:05 2017 From: facundo.dominguez at tweag.io (=?UTF-8?Q?Facundo_Dom=C3=ADnguez?=) Date: Wed, 13 Sep 2017 08:58:05 -0300 Subject: Does span run in bounded space? Message-ID: Dear devs, I used to think that span is a function which doesn't run in bounded space. But ghc defied this understanding. The program copied below runs in bounded space with "ghc-8.2.1 -O0 main.hs; ./main" but it does not if run with "runghc-8.2.1 -O0 main.hs". Does someone have any insights on what's the optimization that is making a difference in the memory footprint of these two methods of running? Or is there a bug in ghci, and span is supposed to run in bounded space after all? Thanks, Facundo import Prelude hiding (span) import Data.List (foldl') main = case span (>0) (replicate 100000000 1) of (xs, ys) -> do print (foldl' (+) 0 xs) print (foldl' (+) 0 ys) span :: (a -> Bool) -> [a] -> ([a],[a]) span _ xs@[] = (xs, xs) span p xs@(x:xs') | p x = let (ys, zs) = span p xs' in (x : ys, zs) | otherwise = ([],xs) From ben at smart-cactus.org Wed Sep 13 14:29:23 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 13 Sep 2017 10:29:23 -0400 Subject: Phab: conditional approval In-Reply-To: References: <95D2EA45-B7A1-497E-B174-2D8037054C36@cs.brynmawr.edu> Message-ID: <87poau1xks.fsf@ben-laptop.smart-cactus.org> Simon Marlow writes: > On 19 August 2017 at 03:56, Richard Eisenberg wrote: > >> Hi devs, >> >> When reviewing a diff on Phab, I can "accept" or "request changes". >> Sometimes, though, I want to do both: I suggest very minor (e.g., typo) >> changes, but then when these changes are made, I accept. I'm leery of >> making the suggestions and saying "accept", because then someone working >> quickly may merge without noticing the typos. Does Phab have such an option? >> > > "Accept with nits" is standard practice, but you're right it can go wrong > when someone else is merging accepted diffs. We could adopt a standard > comment keyword, e.g. "NITS" that indicates you'd like the nits to be fixed > before committing, perhaps? 
> Sounds reasonable to me. > Also, I don't think it's a good idea to merge commits when the author is a > committer, they can land themselves. > I would be quite happy to not have to merge such patches; I merely merge them currently since I thought it was generally expected. On the other hand, I generally do integration builds on the batches of patches that I merge which can sometimes catch validation issues. However, I expect this will be less of an issue with the test-before-merge support in the pipeline. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Wed Sep 13 15:53:45 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 13 Sep 2017 11:53:45 -0400 Subject: GHC 8.2.1 release commit In-Reply-To: References: Message-ID: <87mv5y1to6.fsf@ben-laptop.smart-cactus.org> Harendra Kumar writes: > Hi, > Sorry for the latency; just catching up on the post-ICFP mail queue. > In the GHC git repo how do I figure out which commit belongs to release > 8.2.1. I cannot find 8.2.1 in "git tag" output. I tried "git log" and > searching for 8.2.1 but there seems to be no definitive comment marking > 8.2.1. > Releases are tagged with the ghc-$VER-release naming convention. You should find a tag for every release since 7.2.1. For instance, $ git tag | grep ghc ghc-7.10.1-rc1 ghc-7.10.1-rc2 ghc-7.10.1-rc3 ghc-7.10.1-release ghc-7.10.2-rc1 ghc-7.10.2-rc2 ghc-7.10.2-release ghc-7.10.3-rc1 ghc-7.10.3-rc2 ghc-7.10.3-rc3 ghc-7.10.3-release ghc-7.10.3a-release ghc-7.11-start ghc-7.2.1-release ghc-7.2.2-release ghc-7.4.1-release ghc-7.4.2-release ghc-7.6.1-release ghc-7.6.2-release ghc-7.6.3-release ghc-7.8.1-release ghc-7.8.2-release ghc-7.8.3-release ghc-7.8.4-release ghc-7.9-start ghc-8.0.1-rc1 ghc-8.0.1-rc2 ghc-8.0.1-rc3 ghc-8.0.1-rc4 ghc-8.0.1-release ghc-8.0.2-rc1 ghc-8.0.2-rc2 ghc-8.0.2-release ghc-8.1-start ghc-8.2.1-rc1 ghc-8.2.1-rc2 ghc-8.2.1-rc3 ghc-8.2.1-release ghc-8.3-start Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Wed Sep 13 17:56:56 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 13 Sep 2017 13:56:56 -0400 Subject: nofib behind master? Message-ID: <1505325416.27041.1.camel@joachim-breitner.de> Hi, nofib currently does not build (and hence perf.haskell.org is broken), du to the Monoid-Semigroup-Change. I looked into it and found that nofib, as refernced by ghc HEAD, is 26 commits behind nofib master. Ben, you did some of those commits. Any good reason not to update the submodule in ghc? Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Wed Sep 13 19:50:16 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 13 Sep 2017 15:50:16 -0400 Subject: nofib behind master? 
In-Reply-To: <1505325416.27041.1.camel@joachim-breitner.de> References: <1505325416.27041.1.camel@joachim-breitner.de> Message-ID: <87a81y1ipz.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: > Hi, > > > nofib currently does not build (and hence perf.haskell.org is broken), > du to the Monoid-Semigroup-Change. I looked into it and found that > nofib, as refernced by ghc HEAD, is 26 commits behind nofib master. > > Ben, you did some of those commits. Any good reason not to update the > submodule in ghc? > Nope, not at all. I'll bump it now. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Thu Sep 14 12:42:38 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Sep 2017 12:42:38 +0000 Subject: Perf improvement Message-ID: I'm seeing this in validate bytes allocated value is too low: (If this is because you have improved GHC, please update the test so that GHC doesn't regress again) Expected T5837(normal) bytes allocated: 56782344 +/-7% Lower bound T5837(normal) bytes allocated: 52807579 Upper bound T5837(normal) bytes allocated: 60757109 Actual T5837(normal) bytes allocated: 52424864 Deviation T5837(normal) bytes allocated: -7.7 % *** unexpected stat test failure for T5837(normal) Does anyone else? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From niteria at gmail.com Thu Sep 14 12:57:22 2017 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 14 Sep 2017 13:57:22 +0100 Subject: ./validate --slow results Message-ID: Hi all, I happened to run ./validate --slow on my linux machine and I thought it would be useful to share the results. 
Results: Unexpected results from: TEST="EtaExpandLevPoly PatternSplice StrictPats T10508_api T11627b T12809 T12870a T12870b T12870c T12870d T12870e T12870f T12870g T12870h T12903 T12962 T13366 T13543 T13688 T13780c T13822 T13949 T14137 T2552 T2783 T3001-2 T4114c T4114d T4188 T4334 T5129 T5363 T5559 T5611 T680 T7411 T7837 T7944 T8025 T8089 T8542 TH_spliceE5_prof_ext UnsafeReenter compact_gc compact_share dsrun014 dynamic-paper haddock.Cabal hpc_fork prof-doc-fib prof-doc-last profinline001 read029 return_mem_to_os rn041 scc001 scc002 scc003 scc005 space_leak_001 tc165 tryReadMVar2" SUMMARY for test run started at Thu Sep 14 04:57:20 2017 PDT 0:12:27 spent to go through 6091 total tests, which gave rise to 24105 test cases, of which 4531 were skipped 143 had missing libraries 19120 expected passes 203 expected failures 0 caused framework failures 0 caused framework warnings 6 unexpected passes 97 unexpected failures 5 unexpected stat failures Unexpected passes: /tmp/ghctest-e17wqi8b/test spaces/./dependent/should_compile/dynamic-paper.run dynamic-paper [unexpected] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./indexed-types/should_compile/T7837.run T7837 [unexpected] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./parser/should_compile/read029.run read029 [unexpected] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./rename/should_compile/rn041.run rn041 [unexpected] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_fail/T7411.run T7411 [unexpected] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_compile/tc165.run tc165 [unexpected] (optasm) Unexpected failures: /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run T5129 [bad exit code] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run T5129 [bad exit code] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run T5129 [bad exit code] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run T5129 [bad exit code] (dyn) /tmp/ghctest-e17wqi8b/test spaces/./concurrent/should_run/T5611.run T5611 [bad stdout] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./concurrent/should_run/tryReadMVar2.run tryReadMVar2 [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./deSugar/should_run/dsrun014.run dsrun014 [bad stderr] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./deSugar/should_run/dsrun014.run dsrun014 [bad stderr] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./deSugar/should_run/dsrun014.run dsrun014 [bad stderr] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./deSugar/should_run/dsrun014.run dsrun014 [bad stderr] (dyn) /tmp/ghctest-e17wqi8b/test spaces/./dependent/should_fail/T13780c.run T13780c [stderr mismatch] (normal) /tmp/ghctest-e17wqi8b/test spaces/./dependent/should_compile/dynamic-paper.run dynamic-paper [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./driver/T4114d.run T4114d [bad exit code] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./driver/T4114c.run T4114c [bad exit code] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./ghc-api/T10508_api.run T10508_api [bad exit code] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./numeric/should_compile/T8542.run T8542 [stderr mismatch] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./ghc-api/T10508_api.run T10508_api [bad exit code] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./partial-sigs/should_compile/PatternSplice.run PatternSplice [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./patsyn/should_run/T13688.run T13688 [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test 
spaces/./patsyn/should_run/T13688.run T13688 [exit code non-0] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T3001-2.run T3001-2 [exit code non-0] (prof_hb) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc003.run scc003 [bad profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T12962.run T12962 [bad profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc003.run scc003 [bad profile] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T12962.run T12962 [bad profile] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc003.run scc003 [bad profile] (prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T12962.run T12962 [bad profile] (prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc001.run scc001 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc005.run scc005 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/profinline001.run profinline001 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T5559.run T5559 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (prof_hc_hb) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc003.run scc003 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T680.run T680 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/scc002.run scc002 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T12962.run T12962 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/prof-doc-fib.run prof-doc-fib [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (prof_hb) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T2552.run T2552 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/prof-doc-last.run prof-doc-last [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (prof_hd) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T5363.run T5363 [bad exit code] (ghci-ext-prof) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (prof_hy) /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T11627b.run T11627b [bad heap profile] (prof_hr) /tmp/ghctest-e17wqi8b/test spaces/./rts/T2783.run T2783 [bad exit code] (threaded1) /tmp/ghctest-e17wqi8b/test spaces/./rts/return_mem_to_os.run return_mem_to_os [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/T12903.run T12903 [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870a.run T12870a [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run T12870f [bad stdout] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870b.run T12870b [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870c.run T12870c [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run T12870e [bad stdout] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870d.run T12870d [bad stderr] (ghci) 
/tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870b.run T12870b [bad exit code] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run T12870f [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870c.run T12870c [bad exit code] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run T12870e [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870g.run T12870g [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870h.run T12870h [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870h.run T12870h [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run T12870f [bad stdout] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run T12870e [bad stdout] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run T12870f [bad stdout] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run T12870e [bad stdout] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_compile/T7944.run T7944 [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_compile/T13543.run T13543 [stderr mismatch] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_compile/T13543.run T13543 [stderr mismatch] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_compile/T13543.run T13543 [stderr mismatch] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./simplCore/should_compile/T14137.run T14137 [stderr mismatch] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./th/TH_spliceE5_prof_ext.run TH_spliceE5_prof_ext [bad exit code] (normal) /tmp/ghctest-e17wqi8b/test spaces/./th/T4188.run T4188 [exit code non-0] (normal) /tmp/ghctest-e17wqi8b/test spaces/./th/T4188.run T4188 [exit code non-0] (ext-interp) /tmp/ghctest-e17wqi8b/test spaces/./th/T13366.run T13366 [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./th/should_compile/T8025/T8025.run T8025 [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./th/should_compile/T13949/T13949.run T13949 [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_compile/T13822.run T13822 [exit code non-0] (normal) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_compile/T13822.run T13822 [exit code non-0] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_compile/T13822.run T13822 [exit code non-0] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_compile/T13822.run T13822 [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_run/EtaExpandLevPoly.run EtaExpandLevPoly [exit code non-0] (profasm) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_run/EtaExpandLevPoly.run EtaExpandLevPoly [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_run/T12809.run T12809 [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_run/StrictPats.run StrictPats [bad stderr] (ghci) /tmp/ghctest-e17wqi8b/test spaces/./typecheck/should_run/EtaExpandLevPoly.run EtaExpandLevPoly [exit code non-0] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/base/tests/T8089.run T8089 [bad exit code] (ghci) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/base/tests/T8089.run T8089 [bad exit code] (threaded1) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/base/tests/T8089.run T8089 [bad exit code] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/base/tests/T8089.run T8089 [bad exit code] (profthreaded) /tmp/ghctest-e17wqi8b/test 
spaces/../../libraries/hpc/tests/fork/hpc_fork.run hpc_fork [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/ghc-compact/tests/compact_share.run compact_share [bad stdout] (profasm) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/ghc-compact/tests/compact_share.run compact_share [bad stdout] (ghci) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/ghc-compact/tests/compact_gc.run compact_gc [bad heap profile] (profasm) /tmp/ghctest-e17wqi8b/test spaces/../../libraries/ghc-compact/tests/compact_share.run compact_share [bad stdout] (profthreaded) /tmp/ghctest-e17wqi8b/test spaces/./ffi/should_fail/UnsafeReenter.run UnsafeReenter [bad exit code] (threaded1) /tmp/ghctest-e17wqi8b/test spaces/./ffi/should_fail/UnsafeReenter.run UnsafeReenter [bad exit code] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./ffi/should_fail/UnsafeReenter.run UnsafeReenter [bad exit code] (profthreaded) Unexpected stat failures: /tmp/ghctest-e17wqi8b/test spaces/./perf/haddock/haddock.Cabal.run haddock.Cabal [stat not good enough] (normal) /tmp/ghctest-e17wqi8b/test spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] (hpc) /tmp/ghctest-e17wqi8b/test spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] (optasm) /tmp/ghctest-e17wqi8b/test spaces/./perf/space_leaks/T4334.run T4334 [stat not good enough] (threaded2) /tmp/ghctest-e17wqi8b/test spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] (dyn) >From my point of view, with this amount of failures it's difficult to tell if a new change introduces problems. Shall we mark them as broken and open tickets for follow ups? Cheers, Bartosz From simonpj at microsoft.com Thu Sep 14 13:19:30 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Sep 2017 13:19:30 +0000 Subject: ./validate --slow results In-Reply-To: References: Message-ID: Wow... 97 unexpected failures is bad. Ben/David/someone else: might you investigate/characterise them? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Bartosz | Nitka | Sent: 14 September 2017 13:57 | To: ghc-devs Devs | Subject: ./validate --slow results | | Hi all, | | I happened to run ./validate --slow on my linux machine and I thought it | would be useful to share the results. 
| | Results: | | Unexpected results from: | TEST="EtaExpandLevPoly PatternSplice StrictPats T10508_api T11627b | T12809 T12870a T12870b T12870c T12870d T12870e T12870f T12870g T12870h | T12903 T12962 T13366 T13543 T13688 T13780c T13822 T13949 T14137 T2552 | T2783 T3001-2 T4114c T4114d T4188 T4334 T5129 T5363 T5559 T5611 T680 | T7411 T7837 T7944 T8025 T8089 T8542 TH_spliceE5_prof_ext UnsafeReenter | compact_gc compact_share dsrun014 dynamic-paper haddock.Cabal hpc_fork prof- | doc-fib prof-doc-last profinline001 read029 return_mem_to_os | rn041 scc001 scc002 scc003 scc005 space_leak_001 tc165 tryReadMVar2" | | SUMMARY for test run started at Thu Sep 14 04:57:20 2017 PDT | 0:12:27 spent to go through | 6091 total tests, which gave rise to | 24105 test cases, of which | 4531 were skipped | | 143 had missing libraries | 19120 expected passes | 203 expected failures | | 0 caused framework failures | 0 caused framework warnings | 6 unexpected passes | 97 unexpected failures | 5 unexpected stat failures | | Unexpected passes: | /tmp/ghctest-e17wqi8b/test | spaces/./dependent/should_compile/dynamic-paper.run dynamic-paper | [unexpected] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./indexed-types/should_compile/T7837.run T7837 | [unexpected] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./parser/should_compile/read029.run read029 | [unexpected] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./rename/should_compile/rn041.run rn041 | [unexpected] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_fail/T7411.run T7411 | [unexpected] (hpc) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_compile/tc165.run tc165 | [unexpected] (optasm) | | Unexpected failures: | /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run | T5129 [bad exit code] (hpc) | /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run | T5129 [bad exit code] (optasm) | /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run | T5129 [bad exit code] (threaded2) | /tmp/ghctest-e17wqi8b/test spaces/./codeGen/should_run/T5129.run | T5129 [bad exit code] (dyn) | /tmp/ghctest-e17wqi8b/test | spaces/./concurrent/should_run/T5611.run T5611 [bad | stdout] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/./concurrent/should_run/tryReadMVar2.run | tryReadMVar2 [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./deSugar/should_run/dsrun014.run dsrun014 | [bad stderr] (hpc) | /tmp/ghctest-e17wqi8b/test | spaces/./deSugar/should_run/dsrun014.run dsrun014 | [bad stderr] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./deSugar/should_run/dsrun014.run dsrun014 | [bad stderr] (threaded2) | /tmp/ghctest-e17wqi8b/test | spaces/./deSugar/should_run/dsrun014.run dsrun014 | [bad stderr] (dyn) | /tmp/ghctest-e17wqi8b/test | spaces/./dependent/should_fail/T13780c.run T13780c | [stderr mismatch] (normal) | /tmp/ghctest-e17wqi8b/test | spaces/./dependent/should_compile/dynamic-paper.run | dynamic-paper [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./driver/T4114d.run | T4114d [bad exit code] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./driver/T4114c.run | T4114c [bad exit code] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./ghc-api/T10508_api.run | T10508_api [bad exit code] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./numeric/should_compile/T8542.run T8542 | [stderr mismatch] (hpc) | /tmp/ghctest-e17wqi8b/test spaces/./ghc-api/T10508_api.run | T10508_api [bad exit code] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./partial-sigs/should_compile/PatternSplice.run | 
PatternSplice [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./patsyn/should_run/T13688.run | T13688 [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./patsyn/should_run/T13688.run | T13688 [exit code non-0] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T3001-2.run T3001-2 | [exit code non-0] (prof_hb) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc003.run scc003 | [bad profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T12962.run T12962 | [bad profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc003.run scc003 | [bad profile] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T12962.run T12962 | [bad profile] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc003.run scc003 | [bad profile] (prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T12962.run T12962 | [bad profile] (prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc001.run scc001 | [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc005.run scc005 | [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/profinline001.run | profinline001 [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T5559.run T5559 [bad | exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (prof_hc_hb) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc003.run scc003 | [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test spaces/./profiling/should_run/T680.run | T680 [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/scc002.run scc002 | [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T12962.run T12962 | [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/prof-doc-fib.run | prof-doc-fib [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (prof_hb) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T2552.run T2552 [bad | exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/prof-doc-last.run | prof-doc-last [bad exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (prof_hd) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T5363.run T5363 [bad | exit code] (ghci-ext-prof) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (prof_hy) | /tmp/ghctest-e17wqi8b/test | spaces/./profiling/should_run/T11627b.run T11627b | [bad heap profile] (prof_hr) | /tmp/ghctest-e17wqi8b/test spaces/./rts/T2783.run | T2783 [bad exit code] (threaded1) | /tmp/ghctest-e17wqi8b/test spaces/./rts/return_mem_to_os.run | return_mem_to_os [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/T12903.run | T12903 [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870a.run | T12870a [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run | T12870f [bad stdout] (profasm) | 
/tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870b.run | T12870b [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870c.run | T12870c [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run | T12870e [bad stdout] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870d.run | T12870d [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870b.run | T12870b [bad exit code] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run | T12870f [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870c.run | T12870c [bad exit code] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run | T12870e [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870g.run | T12870g [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870h.run | T12870h [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870h.run | T12870h [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run | T12870f [bad stdout] (threaded2) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run | T12870e [bad stdout] (threaded2) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870f.run | T12870f [bad stdout] (profthreaded) | /tmp/ghctest-e17wqi8b/test spaces/./rts/flags/T12870e.run | T12870e [bad stdout] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_compile/T7944.run T7944 | [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_compile/T13543.run T13543 | [stderr mismatch] (hpc) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_compile/T13543.run T13543 | [stderr mismatch] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_compile/T13543.run T13543 | [stderr mismatch] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./simplCore/should_compile/T14137.run T14137 | [stderr mismatch] (profasm) | /tmp/ghctest-e17wqi8b/test spaces/./th/TH_spliceE5_prof_ext.run | TH_spliceE5_prof_ext [bad exit code] (normal) | /tmp/ghctest-e17wqi8b/test spaces/./th/T4188.run | T4188 [exit code non-0] (normal) | /tmp/ghctest-e17wqi8b/test spaces/./th/T4188.run | T4188 [exit code non-0] (ext-interp) | /tmp/ghctest-e17wqi8b/test spaces/./th/T13366.run | T13366 [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/./th/should_compile/T8025/T8025.run T8025 | [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./th/should_compile/T13949/T13949.run T13949 | [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_compile/T13822.run T13822 | [exit code non-0] (normal) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_compile/T13822.run T13822 | [exit code non-0] (hpc) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_compile/T13822.run T13822 | [exit code non-0] (optasm) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_compile/T13822.run T13822 | [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_run/EtaExpandLevPoly.run | EtaExpandLevPoly [exit code non-0] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_run/EtaExpandLevPoly.run | EtaExpandLevPoly [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_run/T12809.run T12809 | [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_run/StrictPats.run StrictPats | [bad stderr] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/./typecheck/should_run/EtaExpandLevPoly.run | 
EtaExpandLevPoly [exit code non-0] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/base/tests/T8089.run T8089 [bad | exit code] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/base/tests/T8089.run T8089 [bad | exit code] (threaded1) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/base/tests/T8089.run T8089 [bad | exit code] (threaded2) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/base/tests/T8089.run T8089 [bad | exit code] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/hpc/tests/fork/hpc_fork.run hpc_fork | [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/ghc-compact/tests/compact_share.run | compact_share [bad stdout] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/ghc-compact/tests/compact_share.run | compact_share [bad stdout] (ghci) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/ghc-compact/tests/compact_gc.run compact_gc | [bad heap profile] (profasm) | /tmp/ghctest-e17wqi8b/test | spaces/../../libraries/ghc-compact/tests/compact_share.run | compact_share [bad stdout] (profthreaded) | /tmp/ghctest-e17wqi8b/test | spaces/./ffi/should_fail/UnsafeReenter.run | UnsafeReenter [bad exit code] (threaded1) | /tmp/ghctest-e17wqi8b/test | spaces/./ffi/should_fail/UnsafeReenter.run | UnsafeReenter [bad exit code] (threaded2) | /tmp/ghctest-e17wqi8b/test | spaces/./ffi/should_fail/UnsafeReenter.run | UnsafeReenter [bad exit code] (profthreaded) | | Unexpected stat failures: | /tmp/ghctest-e17wqi8b/test | spaces/./perf/haddock/haddock.Cabal.run haddock.Cabal [stat not | good enough] (normal) | /tmp/ghctest-e17wqi8b/test | spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] | (hpc) | /tmp/ghctest-e17wqi8b/test | spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] | (optasm) | /tmp/ghctest-e17wqi8b/test spaces/./perf/space_leaks/T4334.run | T4334 [stat not good enough] (threaded2) | /tmp/ghctest-e17wqi8b/test | spaces/./perf/space_leaks/space_leak_001.run space_leak_001 [stat too good] | (dyn) | | | | From my point of view, with this amount of failures it's difficult to tell | if a new change introduces problems. Shall we mark them as broken and open | tickets for follow ups? 
| | Cheers, | Bartosz | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C7101e74b491e414c118f08d4fb7051 | 72%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636409907213307133&sdata=Jfy | 606rPp0S0wb7lwh5ttspRXSE7sB8KPkXPynCYGe4%3D&reserved=0 From ben at smart-cactus.org Thu Sep 14 15:01:30 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 14 Sep 2017 11:01:30 -0400 Subject: Perf improvement In-Reply-To: References: Message-ID: <8760cl1fzp.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I'm seeing this in validate > > bytes allocated value is too low: > > (If this is because you have improved GHC, please > > update the test so that GHC doesn't regress again) > > Expected T5837(normal) bytes allocated: 56782344 +/-7% > > Lower bound T5837(normal) bytes allocated: 52807579 > > Upper bound T5837(normal) bytes allocated: 60757109 > > Actual T5837(normal) bytes allocated: 52424864 > > Deviation T5837(normal) bytes allocated: -7.7 % > > *** unexpected stat test failure for T5837(normal) This test has historically been quite unstable. I'm running a validate as we speak; I'll let you know if I can reproduce this. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Thu Sep 14 15:54:03 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 14 Sep 2017 11:54:03 -0400 Subject: Perf improvement In-Reply-To: References: Message-ID: <87377p1dk4.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I'm seeing this in validate > > bytes allocated value is too low: > > (If this is because you have improved GHC, please > > update the test so that GHC doesn't regress again) > > Expected T5837(normal) bytes allocated: 56782344 +/-7% > > Lower bound T5837(normal) bytes allocated: 52807579 > > Upper bound T5837(normal) bytes allocated: 60757109 > > Actual T5837(normal) bytes allocated: 52424864 > > Deviation T5837(normal) bytes allocated: -7.7 % > > *** unexpected stat test failure for T5837(normal) > Does anyone else? Unfortunately I'm not seeing this locally. It sounds like yet another environmentally-sensitive issue. Perhaps we should just bump it. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Thu Sep 14 16:11:16 2017 From: ben at well-typed.com (Ben Gamari) Date: Thu, 14 Sep 2017 12:11:16 -0400 Subject: ./validate --slow results In-Reply-To: References: Message-ID: <87wp51z2e3.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Wow... 97 unexpected failures is bad. > > Ben/David/someone else: might you investigate/characterise them? > I periodically run --slow and have opened a number of tickets in response in the past (#11819 being the one I was able to easily find). Bartosz, perhaps you could add a comment to that ticket with your results? David, do you think you could take a few hours to go through the failing tests and classify them thematically? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From simonpj at microsoft.com Thu Sep 14 16:21:18 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 14 Sep 2017 16:21:18 +0000 Subject: Perf improvement In-Reply-To: <87377p1dk4.fsf@ben-laptop.smart-cactus.org> References: <87377p1dk4.fsf@ben-laptop.smart-cactus.org> Message-ID: Bump it down, maybe... it got /better/! | -----Original Message----- | From: Ben Gamari [mailto:ben at smart-cactus.org] | Sent: 14 September 2017 16:54 | To: Simon Peyton Jones ; ghc-devs at haskell.org | Subject: Re: Perf improvement | | Simon Peyton Jones via ghc-devs writes: | | > I'm seeing this in validate | > | > bytes allocated value is too low: | > | > (If this is because you have improved GHC, please | > | > update the test so that GHC doesn't regress again) | > | > Expected T5837(normal) bytes allocated: 56782344 +/-7% | > | > Lower bound T5837(normal) bytes allocated: 52807579 | > | > Upper bound T5837(normal) bytes allocated: 60757109 | > | > Actual T5837(normal) bytes allocated: 52424864 | > | > Deviation T5837(normal) bytes allocated: -7.7 % | > | > *** unexpected stat test failure for T5837(normal) Does anyone else? | | Unfortunately I'm not seeing this locally. It sounds like yet another | environmentally-sensitive issue. Perhaps we should just bump it. | | Cheers, | | - Ben From rae at cs.brynmawr.edu Thu Sep 14 16:37:12 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 14 Sep 2017 12:37:12 -0400 Subject: Phab: conditional approval In-Reply-To: <87poau1xks.fsf@ben-laptop.smart-cactus.org> References: <95D2EA45-B7A1-497E-B174-2D8037054C36@cs.brynmawr.edu> <87poau1xks.fsf@ben-laptop.smart-cactus.org> Message-ID: <6063F3FA-D77F-4C45-B82A-9EAAA02ADD5B@cs.brynmawr.edu> Yes, this works for me. As for merging, I'm always very grateful when Ben does it -- though I agree that it would make more sense for me to do it when I can test-then-merge. Thanks, Richard > On Sep 13, 2017, at 10:29 AM, Ben Gamari wrote: > > Simon Marlow writes: > >> On 19 August 2017 at 03:56, Richard Eisenberg wrote: >> >>> Hi devs, >>> >>> When reviewing a diff on Phab, I can "accept" or "request changes". >>> Sometimes, though, I want to do both: I suggest very minor (e.g., typo) >>> changes, but then when these changes are made, I accept. I'm leery of >>> making the suggestions and saying "accept", because then someone working >>> quickly may merge without noticing the typos. Does Phab have such an option? >>> >> >> "Accept with nits" is standard practice, but you're right it can go wrong >> when someone else is merging accepted diffs. We could adopt a standard >> comment keyword, e.g. "NITS" that indicates you'd like the nits to be fixed >> before committing, perhaps? >> > Sounds reasonable to me. > > >> Also, I don't think it's a good idea to merge commits when the author is a >> committer, they can land themselves. >> > I would be quite happy to not have to merge such patches; I merely merge > them currently since I thought it was generally expected. > > On the other hand, I generally do integration builds on the batches of > patches that I merge which can sometimes catch validation issues. > However, I expect this will be less of an issue with the > test-before-merge support in the pipeline. 
> > Cheers, > > - Ben > From alan.zimm at gmail.com Thu Sep 14 18:01:45 2017 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 14 Sep 2017 20:01:45 +0200 Subject: Phab: conditional approval In-Reply-To: <6063F3FA-D77F-4C45-B82A-9EAAA02ADD5B@cs.brynmawr.edu> References: <95D2EA45-B7A1-497E-B174-2D8037054C36@cs.brynmawr.edu> <87poau1xks.fsf@ben-laptop.smart-cactus.org> <6063F3FA-D77F-4C45-B82A-9EAAA02ADD5B@cs.brynmawr.edu> Message-ID: William Casarin recently tweeted a link to the bitcoincore devs ACK system[1], which are Concept ACK - Agree with the idea and overall direction, but haven't reviewed the code changes or tested them. utACK (untested ACK) - Reviewed and agree with the code changes but haven't actually tested them. Tested ACK - Reviewed the code changes and have verified the functionality or bug fix. ACK - A loose ACK can be confusing. It's best to avoid them unless it's a documentation/comment only change in which case there is nothing to test/verify; therefore the tested/untested distinction is not there. NACK - Disagree with the code changes/concept. Should be accompanied by an explanation. [1] https://github.com/bitcoin/bitcoin/issues/6100 Alan On 14 September 2017 at 18:37, Richard Eisenberg wrote: > Yes, this works for me. > > As for merging, I'm always very grateful when Ben does it -- though I > agree that it would make more sense for me to do it when I can > test-then-merge. > > Thanks, > Richard > > > > On Sep 13, 2017, at 10:29 AM, Ben Gamari wrote: > > > > Simon Marlow writes: > > > >> On 19 August 2017 at 03:56, Richard Eisenberg > wrote: > >> > >>> Hi devs, > >>> > >>> When reviewing a diff on Phab, I can "accept" or "request changes". > >>> Sometimes, though, I want to do both: I suggest very minor (e.g., typo) > >>> changes, but then when these changes are made, I accept. I'm leery of > >>> making the suggestions and saying "accept", because then someone > working > >>> quickly may merge without noticing the typos. Does Phab have such an > option? > >>> > >> > >> "Accept with nits" is standard practice, but you're right it can go > wrong > >> when someone else is merging accepted diffs. We could adopt a standard > >> comment keyword, e.g. "NITS" that indicates you'd like the nits to be > fixed > >> before committing, perhaps? > >> > > Sounds reasonable to me. > > > > > >> Also, I don't think it's a good idea to merge commits when the author > is a > >> committer, they can land themselves. > >> > > I would be quite happy to not have to merge such patches; I merely merge > > them currently since I thought it was generally expected. > > > > On the other hand, I generally do integration builds on the batches of > > patches that I merge which can sometimes catch validation issues. > > However, I expect this will be less of an issue with the > > test-before-merge support in the pipeline. > > > > Cheers, > > > > - Ben > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at cs.brynmawr.edu Thu Sep 14 18:12:39 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 14 Sep 2017 14:12:39 -0400 Subject: A type checker plugin for row types In-Reply-To: <382B9451-CF68-417E-BF53-127B92A1D081@smart-cactus.org> References: <225e0ebe-7c81-63fb-a348-8e1481a37aeb@well-typed.com> <382B9451-CF68-417E-BF53-127B92A1D081@smart-cactus.org> Message-ID: <9EB4122D-36ED-4E23-B324-3EB11613AAC7@cs.brynmawr.edu> Here are my stabs at answers to two of your questions. > • When/where exactly do Derived constraints arise? I'm not recognizing them in the OutsideIn paper. I agree with others' comments on this point, but perhaps I can expand. A Derived constraint is essentially a Wanted constraint, but one where there is no need for evidence. They arise for several reasons: - If we have Ord a as a Wanted, then we'll get Eq a (a superclass constraint) as a Derived. While GHC has no need for an Eq a dictionary, perhaps solving the Eq a constraint will help us solve the Ord a constraint. So the Eq a is Derived. - Functional dependencies and type family dependencies give rise to Derived constraints. That is, if we have class C a b | a -> b, instance C Int Bool, and [W] C Int a, then we'll get [D] a ~ Bool. - GHC says that Wanteds cannot rewrite Wanteds. That is, if we have [W] a ~ Bool and [W] a ~ Int, we *don't* want GHC to simplify to [W] Int ~ Bool and then report that. It's very confusing to users! However, sometimes, (I don't have an example) we need Wanteds to rewrite Wanteds in order to make progress. So GHC clones Wanteds into Deriveds, and says that Deriveds *can* rewrite Deriveds. Perhaps after a lot of rewriting, some variables will unify, and then GHC can make progress on the Wanteds. Of course, all this cloning would be non-performant, so GHC essentially does a copy-on-write implementation, where many constraints are [WD], both Wanted and Derived. While Deriveds *arise* for several reasons, they contribute back to the world in only one: unification. That is, the whole point of Deriveds is so that GHC will discover [D] a ~ Int and then set a := Int. > • GHC has various kinds of variable and skolem (e.g. signature skolem) that I'm not recognizing in the OutsideIn paper. Is there a comprehensive discussion of them all? I'm unaware of one, but here are some pointers. - A skolem tends to be a user-written type variable, unifiable with no other. If you have a declaration (id :: a -> b; id x = x), GHC will reject because it won't unify a with b. a and b are skolems. - GHC also has "super-skolems", which have a slightly different behavior w.r.t. instances. See Note [Binding when looking up instances] in InstEnv. I've never had to care about this distinction. - GHC has several different types of unification variables (= metavars). * A TauTv is your regular, run-of-the-mill unification variable. It can unify only with monotypes. * A SigTv is a metavar that can unify only with another variable. See Note [Signature skolems] in TcType for examples of why this is useful. (Editorial comment: I think the design around SigTvs is wrong, because generally there is a set of variables a given SigTv should not unify with. But GHC doesn't track this info.) * FlatMetaTvs and FlatSkolTvs are an implementation detail of how GHC deals with type families. They are documented in Note [The flattening story] in TcFlatten. I think that's it, but feel free to ask if you run into more. (In case you're wondering, I didn't really look over the design you're proposing. 
I leave that to others.) Good luck! Richard > On Sep 11, 2017, at 9:48 AM, Ben Gamari wrote: > > On September 11, 2017 9:34:15 AM GMT+01:00, Adam Gundry wrote: >> Hi Nick, >> >> This is great work, and I look forward to seeing the code once it is >> ready. I've had a quick glance over your wiki page, and thought I >> should >> send you some initial comments, though it deserves deeper attention >> which I will try to find time to give it. :-) >> >> I don't see a reference to Iavor's paper "Improving Haskell Types with >> SMT" (http://yav.github.io/publications/improving-smt-types.pdf). If >> you've not come across it, it might give a useful alternative >> perspective on how plugins work, especially with regard to derived >> constraints. >> >> The following is based on my faulty memory, so apologies if it is out >> of >> date or misleading... >> >>> When/where exactly do Derived constraints arise? >> >> Suppose I have a class with an equality superclass >> >> class a ~ b => C a b >> >> and a wanted constraint `C alpha Int`, for some touchable variable >> `alpha`. This leads to a derived constraint `alpha ~ Int` thanks to the >> superclass (derived means we don't actually need evidence for it in >> order to build the core term, but solving it might help fill in some >> touchable variables). Sorry if this is obvious and not exact enough! >> >>> When do touchables "naturally" arise in Given constraints? >> >> Do you mean "touchable" or "unification variable" here (and >> elsewhere?). >> A skolem is always untouchable, but the converse is not true. >> >> I think that unification variables can arise in Given constraints, but >> that they will always be untouchable. Suppose we have defined >> >> f :: forall a b . ((a ~ b) => a -> b) -> Int >> >> (never mind that it is ambiguous) and consider type-checking the call >> `f >> id`. We end up checking `id` against type `a -> b` with given `a ~ b` >> where `a` and `b` are unification variables. They must be untouchable, >> however, otherwise we might unify them, which would be wrong. >> >> Hope this helps, >> >> Adam >> >> >> On 10/09/17 23:24, Nicolas Frisby wrote: >>> Hi all. I've been spending my free time for the last couple months on >> a >>> type checker plugin for row types. The free time waxes and wanes; >>> sending an email like this one was my primary goal for the past >> couple >>> weeks. >>> >>> At the very least, I hoped this project would let me finally get some >>> hands on experience with OutsideIn. And I definitely have. But I've >> also >>> made more progress than I anticipated, and I think the plugin is >>> starting to have legs! >>> >>> I haven't uploaded the code yet to github -- it's not quite ready to >>> share. But I did do a write up on the dev wiki. >>> >>> >>> >> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >>> >>> I would really appreciate and questions, comments, and --- boy, oh >> boy >>> --- answers. >>> >>> I hope to upload within a week or so, and I'll update that wiki page >> and >>> reply to this email when I do. >>> >>> Thanks very much. -Nick >>> >>> P.S. -- I've CC'd and BCC'd people who I anticipate would be >>> specifically interested in this (e.g. plugins, row types, etc). >> Please >>> feel free to forward to others that come to mind; I know some inboxes >>> abjectly can't afford default list traffic. >>> >>> P.P.S. -- One hold up for the upload is: which license? 
I intend to >>> release under BSD3, mainly to match GHC since one ideal scenario >> would >>> involve being packaged with/integrated into GHC. But my brief recent >>> research suggests that the Apache license might be more conducive to >>> eventual widespread adoption. If you'd be willing to advise or even >> just >>> refer me to other write ups, please feel free to email me directly or >> to >>> start a separate thread on a more appropriate distribution list >> (CC'ing >>> me, please). Thanks again. >> >> >> -- >> Adam Gundry, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > It would be great if someone could extract the conclusion of this thread into a Note. Clearly there is a hole in the current state of our source documentation. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Fri Sep 15 14:55:22 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Sep 2017 14:55:22 +0000 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: Nick Good work! You ask some questions about the constraint solver – I hope that the answer from others have helped. If not, do re-ask. My main comment is: what does Core look like? I think your answer is “No change to Core, but there are lots of unsafe coerces littered around”. But even then I’m not sure. Even f :: Lacks r “f” => V (Row (r .& (“f” .= Int))) -> V (Row (r .& (“f” .= Int))) f n x = ??? Somehow in the ??? I have to update field n of a tuple x. How do I do that? And I’m also very uncomfortable having Core littered with unsafeCoerces. I like Core being statically typed. What is the simplest primitive(s) we could add to Core to make it possible to express this stuff type-safely? Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Nicolas Frisby Sent: 10 September 2017 23:25 To: ghc-devs at haskell.org Cc: Andres Löh ; Adam Gundry ; Richard Eisenberg Subject: A type checker plugin for row types Hi all. I've been spending my free time for the last couple months on a type checker plugin for row types. The free time waxes and wanes; sending an email like this one was my primary goal for the past couple weeks. At the very least, I hoped this project would let me finally get some hands on experience with OutsideIn. And I definitely have. But I've also made more progress than I anticipated, and I think the plugin is starting to have legs! I haven't uploaded the code yet to github -- it's not quite ready to share. But I did do a write up on the dev wiki. https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain I would really appreciate and questions, comments, and --- boy, oh boy --- answers. I hope to upload within a week or so, and I'll update that wiki page and reply to this email when I do. Thanks very much. -Nick P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically interested in this (e.g. plugins, row types, etc). Please feel free to forward to others that come to mind; I know some inboxes abjectly can't afford default list traffic. P.P.S. -- One hold up for the upload is: which license? 
I intend to release under BSD3, mainly to match GHC since one ideal scenario would involve being packaged with/integrated into GHC. But my brief recent research suggests that the Apache license might be more conducive to eventual widespread adoption. If you'd be willing to advise or even just refer me to other write ups, please feel free to email me directly or to start a separate thread on a more appropriate distribution list (CC'ing me, please). Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Fri Sep 15 15:46:17 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Fri, 15 Sep 2017 16:46:17 +0100 Subject: Overapproximation of loopbreakers due to unfoldings Message-ID: I have been puzzling over the example presented by Harendra in #14211 for the last two days. Ultimately it led me to discover that if a self-recursive definition is marked with an INLINE pragma then it will always be marked as a loopbreaker. This is undesirable in this case as the simplifier ends up with in the first pass.. bindWith = ... bindWith ... => bindWith = bindWith_abc |> co bindWith_abc = ... bindWith ... So we can then inline `bindWith` into the RHS of `bindWith_abc` and create a single self-recursive function rather than a mutually recursive block. Marking `bindWith` as `INLINE` means that both `bindWith` and `bindWith_abc` are marked as loopbreakers. Mutual recursive blocks are bad as they completely stop the static argument transformation from working. My question is, why is it necessary to mark "bindWith" as a loopbreaker in the current module? Are there any tickets or notes which discuss this problem? Matt From nicolas.frisby at gmail.com Fri Sep 15 15:52:32 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Fri, 15 Sep 2017 15:52:32 +0000 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: Hello Simon! Thanks for taking a look. I've attempted to address your Core question by adding a new section to the wiki page: https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain#SomeLightCoreSnorkeling Regarding the necessary EvTerms, I haven't thought through that enough to answer. I'll add it to the wiki page. Thanks. -Nick On Fri, Sep 15, 2017 at 7:55 AM Simon Peyton Jones wrote: > Nick > > > > Good work! > > > > You ask some questions about the constraint solver – I hope that the > answer from others have helped. If not, do re-ask. > > > > My main comment is: what does Core look like? I think your answer is “No > change to Core, but there are lots of unsafe coerces littered around”. > But even then I’m not sure. Even > > > > f :: Lacks r “f” => V (Row (r .& (“f” .= Int))) -> V (Row (r .& (“f” .= > Int))) > > f n x = ??? > > > > Somehow in the ??? I have to update field n of a tuple x. How do I do > that? > > > > And I’m also very uncomfortable having Core littered with unsafeCoerces. > I like Core being statically typed. What is the simplest primitive(s) we > could add to Core to make it possible to express this stuff type-safely? > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Nicolas > Frisby > *Sent:* 10 September 2017 23:25 > *To:* ghc-devs at haskell.org > *Cc:* Andres Löh ; Adam Gundry ; > Richard Eisenberg > *Subject:* A type checker plugin for row types > > > > Hi all. I've been spending my free time for the last couple months on a > type checker plugin for row types. 
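(For concreteness on Simon's Core question above: one way to picture the "unsafeCoerce-littered" situation is a deliberately naive row-record sketch in which every field lives in one homogeneous spine and every read or write goes through unsafeCoerce. This is only an illustration of the concern, not coxswain's actual encoding; all names here are made up.)

    import GHC.Exts (Any)
    import Unsafe.Coerce (unsafeCoerce)

    -- Hypothetical sketch, not coxswain's representation:
    -- a record stored as a homogeneous spine of fields.
    newtype Rec = Rec [Any]

    -- Reading a field forgets the spine type and trusts the caller.
    readField :: Int -> Rec -> a
    readField i (Rec xs) = unsafeCoerce (xs !! i)

    -- Writing a field coerces the new value into the spine.
    writeField :: Int -> a -> Rec -> Rec
    writeField i x (Rec xs) =
      case splitAt i xs of
        (before, _ : after) -> Rec (before ++ unsafeCoerce x : after)
        (before, [])        -> Rec (before ++ [unsafeCoerce x])

In a sketch like this, every use of readField and writeField surfaces in Core as an unsafe coercion, which is roughly the kind of thing a dedicated, type-safe primitive would be meant to replace.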
The free time waxes and wanes; sending > an email like this one was my primary goal for the past couple weeks. > > > > At the very least, I hoped this project would let me finally get some > hands on experience with OutsideIn. And I definitely have. But I've also > made more progress than I anticipated, and I think the plugin is starting > to have legs! > > > > I haven't uploaded the code yet to github -- it's not quite ready to > share. But I did do a write up on the dev wiki. > > > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > > > I would really appreciate and questions, comments, and --- boy, oh boy --- > answers. > > > > I hope to upload within a week or so, and I'll update that wiki page and > reply to this email when I do. > > > > Thanks very much. -Nick > > > > P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically > interested in this (e.g. plugins, row types, etc). Please feel free to > forward to others that come to mind; I know some inboxes abjectly can't > afford default list traffic. > > > > P.P.S. -- One hold up for the upload is: which license? I intend to > release under BSD3, mainly to match GHC since one ideal scenario would > involve being packaged with/integrated into GHC. But my brief recent > research suggests that the Apache license might be more conducive to > eventual widespread adoption. If you'd be willing to advise or even just > refer me to other write ups, please feel free to email me directly or to > start a separate thread on a more appropriate distribution list (CC'ing me, > please). Thanks again. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Sep 15 17:43:53 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Sep 2017 17:43:53 +0000 Subject: Overapproximation of loopbreakers due to unfoldings In-Reply-To: References: Message-ID: INLINE means "Inline what I wrote". So in your example we'd have ===> bindWith [INLINE = ] = bindWith_abc |> co bindWith_abc = If you see a call to bindWith, we will /not/ inline bindWith_abc |> co! We'll inline . That's what the programmer asked for. That's why self-recursive INLINE things are loop breakers. Does that help? Now, it's true that /in the rhs of bindWith_abc/ it might be better to inline (bindWith_abc |> co) in place of bindWith, rather than inlining the . But currently GHC can carry only one unfolding for an Id. We could review that, but it'd make things more complicated. I have not yet found time to dig into the mysteries of #14211. Thank you for investigating so diligently. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Matthew | Pickering | Sent: 15 September 2017 16:46 | To: GHC developers | Subject: Overapproximation of loopbreakers due to unfoldings | | I have been puzzling over the example presented by Harendra in #14211 for | the last two days. | | Ultimately it led me to discover that if a self-recursive definition is | marked with an INLINE pragma then it will always be marked as a | loopbreaker. | | This is undesirable in this case as the simplifier ends up with in the | first pass.. | | bindWith = ... bindWith ... | => | bindWith = bindWith_abc |> co | bindWith_abc = ... bindWith ... | | So we can then inline `bindWith` into the RHS of `bindWith_abc` and | create a single self-recursive function rather than a mutually recursive | block. 
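(To make the shape under discussion concrete, here is a minimal self-contained sketch, with made-up names rather than the actual code from #14211, of a self-recursive definition carrying an INLINE pragma, the situation in which the occurrence analyser will always pick the binder as a loop breaker:)

    -- Hypothetical example, not the code from #14211.
    -- The INLINE pragma attaches a stable unfolding that mentions
    -- bindWith itself, so bindWith must be chosen as a loop breaker.
    {-# INLINE bindWith #-}
    bindWith :: (a -> b -> b) -> b -> [a] -> b
    bindWith f z (x:xs) = f x (bindWith f z xs)
    bindWith _ z []     = z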
Marking `bindWith` as `INLINE` means that both `bindWith` and | `bindWith_abc` are marked as loopbreakers. | | Mutual recursive blocks are bad as they completely stop the static | argument transformation from working. | | My question is, why is it necessary to mark "bindWith" as a loopbreaker | in the current module? Are there any tickets or notes which discuss this | problem? | | Matt | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.hask | ell.org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C2a71eb929ce24b4dcefc08d4fc5 | 0f2db%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636410871981371365&sda | ta=RmtbOXJcB9bzKa%2FV95pICCIyTbTfzw1%2BooD52dI%2FzE0%3D&reserved=0 From ben at smart-cactus.org Fri Sep 15 21:06:24 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 15 Sep 2017 17:06:24 -0400 Subject: Feedback request regarding HSOC project (bringing sanity to the GHC performance test-suite) In-Reply-To: References: Message-ID: <87lglfzn73.fsf@ben-laptop.smart-cactus.org> Phyx writes: [snip] Hi Phyx, Sorry for the late reply here; Jared did a good job of summarizing the effort. I just want to make sure that we clearly put this particular concern to rest: > This would be unfortunate as it would mean we would effectively stop > tracking performance on e.g. Windows and Mac OS since the current > implementation doesn't allow for the data to live together in the same > repo. > The implementation as it stands does indeed allow data from multiple platforms to live side-by-side. Each measurement from a given testsuite run carries with it a "test environment" which is meant to allow us to distinguish between various operating system/architecture/hardware configurations. This defaults to "local" but can be overridden on the `make test` command line. The CI builders will do this to provide with unique, descriptive names, giving us a running history of the test characteristics on all of our CI platforms. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nicolas.frisby at gmail.com Sat Sep 16 21:27:12 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Sat, 16 Sep 2017 21:27:12 +0000 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: I've uploaded the code to GitHub. https://github.com/nfrisby/coxswain I went with a BSD3 licence. It's still very much a work in progress, so I only recommend using it for experimentation for now. Thanks. -Nick On Sun, Sep 10, 2017 at 3:24 PM Nicolas Frisby wrote: > Hi all. I've been spending my free time for the last couple months on a > type checker plugin for row types. The free time waxes and wanes; sending > an email like this one was my primary goal for the past couple weeks. > > At the very least, I hoped this project would let me finally get some > hands on experience with OutsideIn. And I definitely have. But I've also > made more progress than I anticipated, and I think the plugin is starting > to have legs! > > I haven't uploaded the code yet to github -- it's not quite ready to > share. But I did do a write up on the dev wiki. > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > > I would really appreciate and questions, comments, and --- boy, oh boy --- > answers. 
> > I hope to upload within a week or so, and I'll update that wiki page and > reply to this email when I do. > > Thanks very much. -Nick > > P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically > interested in this (e.g. plugins, row types, etc). Please feel free to > forward to others that come to mind; I know some inboxes abjectly can't > afford default list traffic. > > P.P.S. -- One hold up for the upload is: which license? I intend to > release under BSD3, mainly to match GHC since one ideal scenario would > involve being packaged with/integrated into GHC. But my brief recent > research suggests that the Apache license might be more conducive to > eventual widespread adoption. If you'd be willing to advise or even just > refer me to other write ups, please feel free to email me directly or to > start a separate thread on a more appropriate distribution list (CC'ing me, > please). Thanks again. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Sat Sep 16 21:32:21 2017 From: me at ara.io (Ara Adkins) Date: Sat, 16 Sep 2017 22:32:21 +0100 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: Excellent! I've very much been looking forward to having a look at the code for this! On Sat, Sep 16, 2017 at 10:27 PM, Nicolas Frisby wrote: > I've uploaded the code to GitHub. > > https://github.com/nfrisby/coxswain > > I went with a BSD3 licence. > > It's still very much a work in progress, so I only recommend using it for > experimentation for now. > > Thanks. -Nick > > > On Sun, Sep 10, 2017 at 3:24 PM Nicolas Frisby > wrote: > >> Hi all. I've been spending my free time for the last couple months on a >> type checker plugin for row types. The free time waxes and wanes; sending >> an email like this one was my primary goal for the past couple weeks. >> >> At the very least, I hoped this project would let me finally get some >> hands on experience with OutsideIn. And I definitely have. But I've also >> made more progress than I anticipated, and I think the plugin is starting >> to have legs! >> >> I haven't uploaded the code yet to github -- it's not quite ready to >> share. But I did do a write up on the dev wiki. >> >> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/ >> RowTypes/Coxswain >> >> I would really appreciate and questions, comments, and --- boy, oh boy >> --- answers. >> >> I hope to upload within a week or so, and I'll update that wiki page and >> reply to this email when I do. >> >> Thanks very much. -Nick >> >> P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically >> interested in this (e.g. plugins, row types, etc). Please feel free to >> forward to others that come to mind; I know some inboxes abjectly can't >> afford default list traffic. >> >> P.P.S. -- One hold up for the upload is: which license? I intend to >> release under BSD3, mainly to match GHC since one ideal scenario would >> involve being packaged with/integrated into GHC. But my brief recent >> research suggests that the Apache license might be more conducive to >> eventual widespread adoption. If you'd be willing to advise or even just >> refer me to other write ups, please feel free to email me directly or to >> start a separate thread on a more appropriate distribution list (CC'ing me, >> please). Thanks again. 
>> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Sat Sep 16 22:00:33 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Sat, 16 Sep 2017 22:00:33 +0000 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: If you'd like to see how exactly the plugin manipulates constraints, I suggest using the `summarize` and `trace` options that are discussed here: https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain#PluginDebugOptions Also, the `sculls/Examples.hs` and `sculls/Elm.hs` files contain my only test cases involving records and variants. Also see `coxswain/test/README`. I'm kind of spent from rushing towards this "release", so I might be a bit less productive for a while. But I'll be generally responsive about it if others are spending time on it. In particular: any advice for how to share the generated Haddock documentation without uploading to Hackage/Stackage? And thanks to all for responding to my questions. I'll have to think harder about the answers given. At least for Derived constraints, in particular, I still don't think we have all of the relevant information in one place. For example, I recalling thinking that I was seeing some Derived constraints that seemed to arise from the unifier "giving up" on a complicated equality and emitting a Given equality instead, and nobody mentioned that in their answers here. I'll try to suss out a repro of that exactly. Thanks. -Nick On Sat, Sep 16, 2017 at 2:27 PM Nicolas Frisby wrote: > I've uploaded the code to GitHub. > > https://github.com/nfrisby/coxswain > > I went with a BSD3 licence. > > It's still very much a work in progress, so I only recommend using it for > experimentation for now. > > Thanks. -Nick > > > On Sun, Sep 10, 2017 at 3:24 PM Nicolas Frisby > wrote: > >> Hi all. I've been spending my free time for the last couple months on a >> type checker plugin for row types. The free time waxes and wanes; sending >> an email like this one was my primary goal for the past couple weeks. >> >> At the very least, I hoped this project would let me finally get some >> hands on experience with OutsideIn. And I definitely have. But I've also >> made more progress than I anticipated, and I think the plugin is starting >> to have legs! >> >> I haven't uploaded the code yet to github -- it's not quite ready to >> share. But I did do a write up on the dev wiki. >> >> >> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >> >> I would really appreciate and questions, comments, and --- boy, oh boy >> --- answers. >> >> I hope to upload within a week or so, and I'll update that wiki page and >> reply to this email when I do. >> >> Thanks very much. -Nick >> >> P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically >> interested in this (e.g. plugins, row types, etc). Please feel free to >> forward to others that come to mind; I know some inboxes abjectly can't >> afford default list traffic. >> >> P.P.S. -- One hold up for the upload is: which license? I intend to >> release under BSD3, mainly to match GHC since one ideal scenario would >> involve being packaged with/integrated into GHC. But my brief recent >> research suggests that the Apache license might be more conducive to >> eventual widespread adoption. 
If you'd be willing to advise or even just >> refer me to other write ups, please feel free to email me directly or to >> start a separate thread on a more appropriate distribution list (CC'ing me, >> please). Thanks again. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Sun Sep 17 14:02:46 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sun, 17 Sep 2017 22:02:46 +0800 Subject: -0 rational? Message-ID: Hi *, I just ran into the following case: > fromRat (-0 :: Rational) :: Double 0.0 > -0 :: Rational 0 % 1 How do I encode negative zero as a rational? The question come up as `CmmFloat` is defined as `CmmFloat Rational Width` in compiler/cmm/CmmExpr.hs. And my code generator thus turned (Int, Rational) into a floating point expression, and when trying to turn it into a `double` value, I ended up getting bad negative zeros. This makes me believe I can’t actually encode `-0` as a Rational, and such `CmmFloat` will never contain `-0`? Cheers, Moritz From erkokl at gmail.com Mon Sep 18 00:48:44 2017 From: erkokl at gmail.com (Levent Erkok) Date: Sun, 17 Sep 2017 17:48:44 -0700 Subject: -0 rational? In-Reply-To: References: Message-ID: Moritz: Looks like you're raising the same concern as in here: https://ghc.haskell.org/trac/ghc/ticket/13124 Long story short: You cannot encode -0.0 directly. You have to represent it as an expression, multiplying `-1` with `fromRational 0`. Similar concerns apply to NaN and Infinity as well, though those cannot be written as literals to start with, so it isn't a big issue there. -Levent. On Sun, Sep 17, 2017 at 7:02 AM, Moritz Angermann < moritz.angermann at gmail.com> wrote: > Hi *, > > I just ran into the following case: > > > fromRat (-0 :: Rational) :: Double > 0.0 > > > -0 :: Rational > 0 % 1 > > How do I encode negative zero as a rational? The question come > up as `CmmFloat` is defined as `CmmFloat Rational Width` in > compiler/cmm/CmmExpr.hs. And my code generator thus turned > (Int, Rational) into a floating point expression, and when trying > to turn it into a `double` value, I ended up getting bad negative > zeros. > > This makes me believe I can’t actually encode `-0` as a Rational, > and such `CmmFloat` will never contain `-0`? > > Cheers, > Moritz > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Mon Sep 18 11:11:11 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Sep 2017 12:11:11 +0100 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: Hi Nicolas, I briefly tried out the plugin this morning and have a few comments. 1. The example file on the wiki page fails to compile. I needed to modify two of the constraints on `getName` and `getPos` to get it to work. 2. I was inspecting how well the compiler optimises your representation and came across from very strange core that I didn't understand. https://gist.github.com/mpickering/6ec5501400e103d64284701d7537f223 Notice how there is a case on a lambda which binds an unused kind variable. But I can't understand how this works at all as I can't see where the lambda is applied. 3. Running -dcore-lint on the example file quickly complains. 
I think making sure that core-lint passes can be quite an important sanity check, is a fundamental reason which makes core lint work poorly with type checker plugins? If there is then we should perhaps fix this. I also packaged up the project for nix if that is useful to anyone else. https://gist.github.com/2a4b2fa25351bd900052d955a00ace6a Matt On Sat, Sep 16, 2017 at 11:00 PM, Nicolas Frisby wrote: > If you'd like to see how exactly the plugin manipulates constraints, I > suggest using the `summarize` and `trace` options that are discussed here: > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain#PluginDebugOptions > > Also, the `sculls/Examples.hs` and `sculls/Elm.hs` files contain my only > test cases involving records and variants. Also see `coxswain/test/README`. > > I'm kind of spent from rushing towards this "release", so I might be a bit > less productive for a while. But I'll be generally responsive about it if > others are spending time on it. > > In particular: any advice for how to share the generated Haddock > documentation without uploading to Hackage/Stackage? > > And thanks to all for responding to my questions. I'll have to think harder > about the answers given. At least for Derived constraints, in particular, I > still don't think we have all of the relevant information in one place. For > example, I recalling thinking that I was seeing some Derived constraints > that seemed to arise from the unifier "giving up" on a complicated equality > and emitting a Given equality instead, and nobody mentioned that in their > answers here. I'll try to suss out a repro of that exactly. > > Thanks. -Nick > > On Sat, Sep 16, 2017 at 2:27 PM Nicolas Frisby > wrote: >> >> I've uploaded the code to GitHub. >> >> https://github.com/nfrisby/coxswain >> >> I went with a BSD3 licence. >> >> It's still very much a work in progress, so I only recommend using it for >> experimentation for now. >> >> Thanks. -Nick >> >> >> On Sun, Sep 10, 2017 at 3:24 PM Nicolas Frisby >> wrote: >>> >>> Hi all. I've been spending my free time for the last couple months on a >>> type checker plugin for row types. The free time waxes and wanes; sending an >>> email like this one was my primary goal for the past couple weeks. >>> >>> At the very least, I hoped this project would let me finally get some >>> hands on experience with OutsideIn. And I definitely have. But I've also >>> made more progress than I anticipated, and I think the plugin is starting to >>> have legs! >>> >>> I haven't uploaded the code yet to github -- it's not quite ready to >>> share. But I did do a write up on the dev wiki. >>> >>> >>> https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain >>> >>> I would really appreciate and questions, comments, and --- boy, oh boy >>> --- answers. >>> >>> I hope to upload within a week or so, and I'll update that wiki page and >>> reply to this email when I do. >>> >>> Thanks very much. -Nick >>> >>> P.S. -- I've CC'd and BCC'd people who I anticipate would be specifically >>> interested in this (e.g. plugins, row types, etc). Please feel free to >>> forward to others that come to mind; I know some inboxes abjectly can't >>> afford default list traffic. >>> >>> P.P.S. -- One hold up for the upload is: which license? I intend to >>> release under BSD3, mainly to match GHC since one ideal scenario would >>> involve being packaged with/integrated into GHC. 
But my brief recent >>> research suggests that the Apache license might be more conducive to >>> eventual widespread adoption. If you'd be willing to advise or even just >>> refer me to other write ups, please feel free to email me directly or to >>> start a separate thread on a more appropriate distribution list (CC'ing me, >>> please). Thanks again. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Mon Sep 18 12:04:17 2017 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 18 Sep 2017 14:04:17 +0200 Subject: ANN: Overlay Hackage Package Index for GHC HEAD Message-ID: <87mv5smcvy.fsf@gmail.com> Hi GHC devs, A long-standing common problem when testing/dogfooding GHC HEAD is that at some point during the development cycle more and more packages from Hackage will run into build failures. Obviously, constantly nagging upstream maintainers to release quickfixes for unreleased GHC HEAD snapshots which will likely break again a few weeks later (as things are generally in flux until freeze comes into effect) does not scale and only introduces a latency/coordination bottleneck, and on top of it ultimately results in spamming the primary Hackage Package index with releases (which has non-negligible negative impact/costs of its own on the Hackage infrastructure, and thus ought to be minimised). OTOH, we need the ability to easily test, debug, profile, and prototype changes to GHC HEAD while things are still in motion, and case in point, if you try to e.g. build `pandoc` with GHC HEAD today, you'll currently run into a dozen or so of packages not building with GHC HEAD. To this end, I've finally found time to work on a side-project (related to matrix.hackage.haskell.org) which implements a scheme tailored to `cabal new-build`, which is inspired by how Eta copes with a very related issue (they rely on it for stable versions of the compiler); i.e., they maintain a set of patches at https://github.com/typelead/eta-hackage/tree/master/patches which fix up existing Hackage packages to work with the Eta compiler. And this gave me the idea to use a similar scheme for GHC HEAD: https://github.com/hvr/head.hackage/tree/master/patches This folder already contains several of patches (which mostly originate from Ryan, Ben and myself) to packages which I needed to patch in order to build popular Hackage packages & tools. The main difference is how those patches are applied; Eta uses a modified `cabal` which they renamed to `etlas` which is checks availability of .patch & .cabal files in the GitHub repo linked above; Whereas for GHC HEAD with `cabal new-build` a different scheme makes more sense: we simply generate an add-on Hackage repo, and use the existing `cabal` facilities (e.g. multi-repo support or the nix-style package store which makes sure that unofficially patched packages don't contaminate "normal" install-plans, etc.) to opt into the opt-in Hackage repo containing fixed up packages. I've tried to describe how to use the HEAD.hackage add-on repo in the README at https://github.com/hvr/head.hackage#how-to-use And finally, here's a practical example of how you can use it to build e.g. the `pandoc` executable with GHC HEAD (can easily be adapted to build your project of choice; please refer to http://cabal.readthedocs.io/en/latest/nix-local-build-overview.html to learn more about how to describe your project via `cabal.project` files): 0.) 
This assumes you have a recent cabal 2.1 snapshot built from Git 1.) create & `cd` into a new work-folder 2.) invoke `head.hackage.sh update` to update the HEAD.hackage index cache 3.) invoke `head.hackage.sh init` to create an initial `cabal.project` configuration which locally activates the HEAD.hackage overlay repo 4.) If needed, edit the cabal.project file and change where GHC HEAD can be found (the script currently assumes GHC HEAD is installed from my Ubuntu PPA), e.g. with-compiler: /home/hvr/src/ghc-dev/inplace/bin/ghc-stage2 or you can add something like `optional-packages: deps/*/*.cabal` to have cabal pick up package source-trees unpacked in the deps/ folder, or you can inject ghc-options, relax upper bounds via `allow-newer: *:base` etc (please refer to the cabal user guide) 5.) Create a `dummy.cabal` file (in future we will have `cabal new-install` or other facilities, but for now we use this workaround): --8<---------------cut here---------------start------------->8--- name: dummy version: 0 build-type: Simple cabal-version: >=2.0 library default-language: Haskell2010 -- library components you want cabal to solve & build for -- and become accessible via .ghc.environment files and/or -- `cabal new-repl` build-depends: base, lens -- executable components you want cabal to build build-tool-depends: pandoc:pandoc --8<---------------cut here---------------end--------------->8--- 6.) invoke `cabal new-build` 7.) If everything works, you'll find the `pandoc:pandoc` executable somewhere in your ~/.cabal/store/ghc-8.3.*/ folder (you can use http://hackage.haskell.org/package/cabal-plan to conveniently list the location via `cabal-plan list-bins`) 8.) As for libraries, you can either use `cabal new-repl` or you can leverage GHC's package environment files: `cabal new-build` will have generated a file like .ghc.environment.x86_64-linux-8.3.20170913 which brings into scope all transitive dependencies of `build-depends: base, lens` Now all you need to do is simply call ghc-stage2 --make MyTestProg.hs to compile a program against those libs, or start up GHCi via ghc-stage2 --interactive and you'll be thrown into that package environment. I hope you find this useful Cheers, Herbert From matthewtpickering at gmail.com Mon Sep 18 12:31:53 2017 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 18 Sep 2017 13:31:53 +0100 Subject: ANN: Overlay Hackage Package Index for GHC HEAD In-Reply-To: <87mv5smcvy.fsf@gmail.com> References: <87mv5smcvy.fsf@gmail.com> Message-ID: Something like this is definitely useful for testing. When building GHC HEAD I override the ghcHEAD derivation on nixpkgs to the right commit I want to use and then can similarly specify which patches and versions of packages to use by modifying the 'configuration-ghc-head.nix' file. This is quite a bit more flexible than just patch files as I can point to specific commits in git repos etc. This repo will definitely be useful for me with this workflow as well. Thanks, Matt On Mon, Sep 18, 2017 at 1:04 PM, Herbert Valerio Riedel wrote: > Hi GHC devs, > > A long-standing common problem when testing/dogfooding GHC HEAD is that > at some point during the development cycle more and more packages from > Hackage will run into build failures. 
> > Obviously, constantly nagging upstream maintainers to release quickfixes > for unreleased GHC HEAD snapshots which will likely break again a few > weeks later (as things are generally in flux until freeze comes into > effect) does not scale and only introduces a latency/coordination > bottleneck, and on top of it ultimately results in spamming the primary > Hackage Package index with releases (which has non-negligible negative > impact/costs of its own on the Hackage infrastructure, and thus ought to > be minimised). > > OTOH, we need the ability to easily test, debug, profile, and prototype > changes to GHC HEAD while things are still in motion, and case in point, > if you try to e.g. build `pandoc` with GHC HEAD today, you'll currently > run into a dozen or so of packages not building with GHC HEAD. > > To this end, I've finally found time to work on a side-project (related > to matrix.hackage.haskell.org) which implements a scheme tailored to > `cabal new-build`, which is inspired by how Eta copes with a very > related issue (they rely on it for stable versions of the compiler); > i.e., they maintain a set of patches at > > https://github.com/typelead/eta-hackage/tree/master/patches > > which fix up existing Hackage packages to work with the Eta compiler. > > > And this gave me the idea to use a similar scheme for GHC HEAD: > > https://github.com/hvr/head.hackage/tree/master/patches > > This folder already contains several of patches (which mostly originate > from Ryan, Ben and myself) to packages which I needed to patch in order > to build popular Hackage packages & tools. > > The main difference is how those patches are applied; Eta uses a > modified `cabal` which they renamed to `etlas` which is checks > availability of .patch & .cabal files in the GitHub repo linked above; > > Whereas for GHC HEAD with `cabal new-build` a different scheme makes > more sense: we simply generate an add-on Hackage repo, and use the > existing `cabal` facilities (e.g. multi-repo support or the nix-style > package store which makes sure that unofficially patched packages don't > contaminate "normal" install-plans, etc.) to opt into the opt-in Hackage > repo containing fixed up packages. > > > > I've tried to describe how to use the HEAD.hackage add-on repo in the > README at > > https://github.com/hvr/head.hackage#how-to-use > > > And finally, here's a practical example of how you can use it to build > e.g. the `pandoc` executable with GHC HEAD (can easily be adapted to > build your project of choice; please refer to > > http://cabal.readthedocs.io/en/latest/nix-local-build-overview.html > > to learn more about how to describe your project via `cabal.project` > files): > > > 0.) This assumes you have a recent cabal 2.1 snapshot built from Git > > 1.) create & `cd` into a new work-folder > > 2.) invoke `head.hackage.sh update` to update the HEAD.hackage index cache > > 3.) invoke `head.hackage.sh init` to create an initial `cabal.project` > configuration which locally activates the HEAD.hackage overlay repo > > 4.) If needed, edit the cabal.project file and change where GHC > HEAD can be found (the script currently assumes GHC HEAD is > installed from my Ubuntu PPA), e.g. 
> > with-compiler: /home/hvr/src/ghc-dev/inplace/bin/ghc-stage2 > > or you can add something like `optional-packages: deps/*/*.cabal` > to have cabal pick up package source-trees unpacked in the deps/ > folder, or you can inject ghc-options, relax upper bounds via > `allow-newer: *:base` etc (please refer to the cabal user guide) > > 5.) Create a `dummy.cabal` file (in future we will have `cabal > new-install` or other facilities, but for now we use this > workaround): > > > --8<---------------cut here---------------start------------->8--- > name: dummy > version: 0 > build-type: Simple > cabal-version: >=2.0 > > library > default-language: Haskell2010 > > -- library components you want cabal to solve & build for > -- and become accessible via .ghc.environment files and/or > -- `cabal new-repl` > build-depends: base, lens > > -- executable components you want cabal to build > build-tool-depends: pandoc:pandoc > > --8<---------------cut here---------------end--------------->8--- > > 6.) invoke `cabal new-build` > > 7.) If everything works, you'll find the `pandoc:pandoc` executable > somewhere in your ~/.cabal/store/ghc-8.3.*/ folder > (you can use http://hackage.haskell.org/package/cabal-plan > to conveniently list the location via `cabal-plan list-bins`) > > 8.) As for libraries, you can either use `cabal new-repl` > or you can leverage GHC's package environment files: > > `cabal new-build` will have generated a file like > > .ghc.environment.x86_64-linux-8.3.20170913 > > which brings into scope all transitive dependencies of > `build-depends: base, lens` > > Now all you need to do is simply call > > ghc-stage2 --make MyTestProg.hs > > to compile a program against those libs, or start up GHCi via > > ghc-stage2 --interactive > > and you'll be thrown into that package environment. > > > > I hope you find this useful > > Cheers, > Herbert > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From palotai.robin at gmail.com Mon Sep 18 13:00:13 2017 From: palotai.robin at gmail.com (Robin Palotai) Date: Mon, 18 Sep 2017 15:00:13 +0200 Subject: ANN: Overlay Hackage Package Index for GHC HEAD In-Reply-To: References: <87mv5smcvy.fsf@gmail.com> Message-ID: Sounds amazing - could you add the procedure you described to the "stock" multi-GHC Travis example [1]? It already has a "HEAD" env (see "env: BUILD=cabal GHCVER=head"...), but breaks [2] due to the package problems you mentioned. [1]: https://raw.githubusercontent.com/commercialhaskell/stack/master/doc/travis-complex.yml [2]: for haskell-indexer: https://travis-ci.org/robinp/haskell-indexer/builds/276778630 2017-09-18 14:31 GMT+02:00 Matthew Pickering : > Something like this is definitely useful for testing. > > When building GHC HEAD I override the ghcHEAD derivation on nixpkgs to > the right commit I want to use and then can similarly specify which > patches and versions of packages to use by modifying the > 'configuration-ghc-head.nix' file. > This is quite a bit more flexible than just patch files as I can point > to specific commits in git repos etc. > > This repo will definitely be useful for me with this workflow as well. > > Thanks, > > Matt > > On Mon, Sep 18, 2017 at 1:04 PM, Herbert Valerio Riedel > wrote: > > Hi GHC devs, > > > > A long-standing common problem when testing/dogfooding GHC HEAD is that > > at some point during the development cycle more and more packages from > > Hackage will run into build failures. 
> > > > Obviously, constantly nagging upstream maintainers to release quickfixes > > for unreleased GHC HEAD snapshots which will likely break again a few > > weeks later (as things are generally in flux until freeze comes into > > effect) does not scale and only introduces a latency/coordination > > bottleneck, and on top of it ultimately results in spamming the primary > > Hackage Package index with releases (which has non-negligible negative > > impact/costs of its own on the Hackage infrastructure, and thus ought to > > be minimised). > > > > OTOH, we need the ability to easily test, debug, profile, and prototype > > changes to GHC HEAD while things are still in motion, and case in point, > > if you try to e.g. build `pandoc` with GHC HEAD today, you'll currently > > run into a dozen or so of packages not building with GHC HEAD. > > > > To this end, I've finally found time to work on a side-project (related > > to matrix.hackage.haskell.org) which implements a scheme tailored to > > `cabal new-build`, which is inspired by how Eta copes with a very > > related issue (they rely on it for stable versions of the compiler); > > i.e., they maintain a set of patches at > > > > https://github.com/typelead/eta-hackage/tree/master/patches > > > > which fix up existing Hackage packages to work with the Eta compiler. > > > > > > And this gave me the idea to use a similar scheme for GHC HEAD: > > > > https://github.com/hvr/head.hackage/tree/master/patches > > > > This folder already contains several of patches (which mostly originate > > from Ryan, Ben and myself) to packages which I needed to patch in order > > to build popular Hackage packages & tools. > > > > The main difference is how those patches are applied; Eta uses a > > modified `cabal` which they renamed to `etlas` which is checks > > availability of .patch & .cabal files in the GitHub repo linked above; > > > > Whereas for GHC HEAD with `cabal new-build` a different scheme makes > > more sense: we simply generate an add-on Hackage repo, and use the > > existing `cabal` facilities (e.g. multi-repo support or the nix-style > > package store which makes sure that unofficially patched packages don't > > contaminate "normal" install-plans, etc.) to opt into the opt-in Hackage > > repo containing fixed up packages. > > > > > > > > I've tried to describe how to use the HEAD.hackage add-on repo in the > > README at > > > > https://github.com/hvr/head.hackage#how-to-use > > > > > > And finally, here's a practical example of how you can use it to build > > e.g. the `pandoc` executable with GHC HEAD (can easily be adapted to > > build your project of choice; please refer to > > > > http://cabal.readthedocs.io/en/latest/nix-local-build-overview.html > > > > to learn more about how to describe your project via `cabal.project` > > files): > > > > > > 0.) This assumes you have a recent cabal 2.1 snapshot built from Git > > > > 1.) create & `cd` into a new work-folder > > > > 2.) invoke `head.hackage.sh update` to update the HEAD.hackage index > cache > > > > 3.) invoke `head.hackage.sh init` to create an initial `cabal.project` > > configuration which locally activates the HEAD.hackage overlay repo > > > > 4.) If needed, edit the cabal.project file and change where GHC > > HEAD can be found (the script currently assumes GHC HEAD is > > installed from my Ubuntu PPA), e.g. 
> > > > with-compiler: /home/hvr/src/ghc-dev/inplace/bin/ghc-stage2 > > > > or you can add something like `optional-packages: deps/*/*.cabal` > > to have cabal pick up package source-trees unpacked in the deps/ > > folder, or you can inject ghc-options, relax upper bounds via > > `allow-newer: *:base` etc (please refer to the cabal user guide) > > > > 5.) Create a `dummy.cabal` file (in future we will have `cabal > > new-install` or other facilities, but for now we use this > > workaround): > > > > > > --8<---------------cut here---------------start------------->8--- > > name: dummy > > version: 0 > > build-type: Simple > > cabal-version: >=2.0 > > > > library > > default-language: Haskell2010 > > > > -- library components you want cabal to solve & build for > > -- and become accessible via .ghc.environment files and/or > > -- `cabal new-repl` > > build-depends: base, lens > > > > -- executable components you want cabal to build > > build-tool-depends: pandoc:pandoc > > > > --8<---------------cut here---------------end--------------->8--- > > > > 6.) invoke `cabal new-build` > > > > 7.) If everything works, you'll find the `pandoc:pandoc` executable > > somewhere in your ~/.cabal/store/ghc-8.3.*/ folder > > (you can use http://hackage.haskell.org/package/cabal-plan > > to conveniently list the location via `cabal-plan list-bins`) > > > > 8.) As for libraries, you can either use `cabal new-repl` > > or you can leverage GHC's package environment files: > > > > `cabal new-build` will have generated a file like > > > > .ghc.environment.x86_64-linux-8.3.20170913 > > > > which brings into scope all transitive dependencies of > > `build-depends: base, lens` > > > > Now all you need to do is simply call > > > > ghc-stage2 --make MyTestProg.hs > > > > to compile a program against those libs, or start up GHCi via > > > > ghc-stage2 --interactive > > > > and you'll be thrown into that package environment. > > > > > > > > I hope you find this useful > > > > Cheers, > > Herbert > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Mon Sep 18 15:54:23 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Mon, 18 Sep 2017 15:54:23 +0000 Subject: A type checker plugin for row types In-Reply-To: References: Message-ID: Thank you Matthew! I very much appreciate this kind of feedback. 1) I've updated the (bulk of the) wiki page to match the github repo just now. In particular, the "Elm" example lives here https://github.com/nfrisby/coxswain/blob/master/sculls/Elm.hs, so I just copied its most recent version onto the wiki. (The delta was the "off-by-one" `Short` constraints, right?) 2) I absolutely agree about -dcore-lint. I wrongly had it in my head that I needed a DEBUG build. See https://github.com/nfrisby/coxswain/issues/3#issuecomment-330266479 3) Awesome! I'll add a link to the README. Thanks again. -Nick On Mon, Sep 18, 2017 at 4:11 AM Matthew Pickering < matthewtpickering at gmail.com> wrote: > Hi Nicolas, I briefly tried out the plugin this morning and have a few > comments. > > 1. The example file on the wiki page fails to compile. 
I needed to > modify two of the constraints on `getName` and `getPos` to get it to > work. > > 2. I was inspecting how well the compiler optimises your > representation and came across from very strange core that I didn't > understand. > > https://gist.github.com/mpickering/6ec5501400e103d64284701d7537f223 > > Notice how there is a case on a lambda which binds an unused kind > variable. But I can't understand how this works at all as I can't see > where the lambda is applied. > > 3. Running -dcore-lint on the example file quickly complains. I think > making sure that core-lint passes can be quite an important sanity > check, is a fundamental reason which makes core lint work poorly with > type checker plugins? If there is then we should perhaps fix this. > > I also packaged up the project for nix if that is useful to anyone else. > > https://gist.github.com/2a4b2fa25351bd900052d955a00ace6a > > Matt > > On Sat, Sep 16, 2017 at 11:00 PM, Nicolas Frisby > wrote: > > If you'd like to see how exactly the plugin manipulates constraints, I > > suggest using the `summarize` and `trace` options that are discussed > here: > > > > > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain#PluginDebugOptions > > > > Also, the `sculls/Examples.hs` and `sculls/Elm.hs` files contain my only > > test cases involving records and variants. Also see > `coxswain/test/README`. > > > > I'm kind of spent from rushing towards this "release", so I might be a > bit > > less productive for a while. But I'll be generally responsive about it if > > others are spending time on it. > > > > In particular: any advice for how to share the generated Haddock > > documentation without uploading to Hackage/Stackage? > > > > And thanks to all for responding to my questions. I'll have to think > harder > > about the answers given. At least for Derived constraints, in > particular, I > > still don't think we have all of the relevant information in one place. > For > > example, I recalling thinking that I was seeing some Derived constraints > > that seemed to arise from the unifier "giving up" on a complicated > equality > > and emitting a Given equality instead, and nobody mentioned that in their > > answers here. I'll try to suss out a repro of that exactly. > > > > Thanks. -Nick > > > > On Sat, Sep 16, 2017 at 2:27 PM Nicolas Frisby > > > wrote: > >> > >> I've uploaded the code to GitHub. > >> > >> https://github.com/nfrisby/coxswain > >> > >> I went with a BSD3 licence. > >> > >> It's still very much a work in progress, so I only recommend using it > for > >> experimentation for now. > >> > >> Thanks. -Nick > >> > >> > >> On Sun, Sep 10, 2017 at 3:24 PM Nicolas Frisby < > nicolas.frisby at gmail.com> > >> wrote: > >>> > >>> Hi all. I've been spending my free time for the last couple months on a > >>> type checker plugin for row types. The free time waxes and wanes; > sending an > >>> email like this one was my primary goal for the past couple weeks. > >>> > >>> At the very least, I hoped this project would let me finally get some > >>> hands on experience with OutsideIn. And I definitely have. But I've > also > >>> made more progress than I anticipated, and I think the plugin is > starting to > >>> have legs! > >>> > >>> I haven't uploaded the code yet to github -- it's not quite ready to > >>> share. But I did do a write up on the dev wiki. 
> >>> > >>> > >>> > https://ghc.haskell.org/trac/ghc/wiki/Plugins/TypeChecker/RowTypes/Coxswain > >>> > >>> I would really appreciate and questions, comments, and --- boy, oh boy > >>> --- answers. > >>> > >>> I hope to upload within a week or so, and I'll update that wiki page > and > >>> reply to this email when I do. > >>> > >>> Thanks very much. -Nick > >>> > >>> P.S. -- I've CC'd and BCC'd people who I anticipate would be > specifically > >>> interested in this (e.g. plugins, row types, etc). Please feel free to > >>> forward to others that come to mind; I know some inboxes abjectly can't > >>> afford default list traffic. > >>> > >>> P.P.S. -- One hold up for the upload is: which license? I intend to > >>> release under BSD3, mainly to match GHC since one ideal scenario would > >>> involve being packaged with/integrated into GHC. But my brief recent > >>> research suggests that the Apache license might be more conducive to > >>> eventual widespread adoption. If you'd be willing to advise or even > just > >>> refer me to other write ups, please feel free to email me directly or > to > >>> start a separate thread on a more appropriate distribution list > (CC'ing me, > >>> please). Thanks again. > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rahulmutt at gmail.com Mon Sep 18 16:37:08 2017 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Mon, 18 Sep 2017 12:37:08 -0400 Subject: ANN: Overlay Hackage Package Index for GHC HEAD In-Reply-To: <87mv5smcvy.fsf@gmail.com> References: <87mv5smcvy.fsf@gmail.com> Message-ID: Hi Herbert, This sounds like a great idea :) Glad to see ideas from Eta inspiring new ideas in GHC. "constantly nagging upstream maintainers to release quickfixes for unreleased GHC HEAD snapshots which will likely break again a few weeks later" ^ This is what motivated us to start eta-hackage in the first place. In our case, we didn't know whether the patches were completely correct when porting C FFI to Java FFI, at least in the corner cases, so we needed a way to quickly deploy updates to patches without having the end user wait a significant amount of time. Once they stabilise, we do plan on pushing the patches upstream to the respective packages - we've already done that for a few of them. Thanks, Rahul On Mon, Sep 18, 2017 at 8:04 AM, Herbert Valerio Riedel wrote: > Hi GHC devs, > > A long-standing common problem when testing/dogfooding GHC HEAD is that > at some point during the development cycle more and more packages from > Hackage will run into build failures. > > Obviously, constantly nagging upstream maintainers to release quickfixes > for unreleased GHC HEAD snapshots which will likely break again a few > weeks later (as things are generally in flux until freeze comes into > effect) does not scale and only introduces a latency/coordination > bottleneck, and on top of it ultimately results in spamming the primary > Hackage Package index with releases (which has non-negligible negative > impact/costs of its own on the Hackage infrastructure, and thus ought to > be minimised). > > OTOH, we need the ability to easily test, debug, profile, and prototype > changes to GHC HEAD while things are still in motion, and case in point, > if you try to e.g. 
build `pandoc` with GHC HEAD today, you'll currently > run into a dozen or so of packages not building with GHC HEAD. > > To this end, I've finally found time to work on a side-project (related > to matrix.hackage.haskell.org) which implements a scheme tailored to > `cabal new-build`, which is inspired by how Eta copes with a very > related issue (they rely on it for stable versions of the compiler); > i.e., they maintain a set of patches at > > https://github.com/typelead/eta-hackage/tree/master/patches > > which fix up existing Hackage packages to work with the Eta compiler. > > > And this gave me the idea to use a similar scheme for GHC HEAD: > > https://github.com/hvr/head.hackage/tree/master/patches > > This folder already contains several of patches (which mostly originate > from Ryan, Ben and myself) to packages which I needed to patch in order > to build popular Hackage packages & tools. > > The main difference is how those patches are applied; Eta uses a > modified `cabal` which they renamed to `etlas` which is checks > availability of .patch & .cabal files in the GitHub repo linked above; > > Whereas for GHC HEAD with `cabal new-build` a different scheme makes > more sense: we simply generate an add-on Hackage repo, and use the > existing `cabal` facilities (e.g. multi-repo support or the nix-style > package store which makes sure that unofficially patched packages don't > contaminate "normal" install-plans, etc.) to opt into the opt-in Hackage > repo containing fixed up packages. > > > > I've tried to describe how to use the HEAD.hackage add-on repo in the > README at > > https://github.com/hvr/head.hackage#how-to-use > > > And finally, here's a practical example of how you can use it to build > e.g. the `pandoc` executable with GHC HEAD (can easily be adapted to > build your project of choice; please refer to > > http://cabal.readthedocs.io/en/latest/nix-local-build-overview.html > > to learn more about how to describe your project via `cabal.project` > files): > > > 0.) This assumes you have a recent cabal 2.1 snapshot built from Git > > 1.) create & `cd` into a new work-folder > > 2.) invoke `head.hackage.sh update` to update the HEAD.hackage index cache > > 3.) invoke `head.hackage.sh init` to create an initial `cabal.project` > configuration which locally activates the HEAD.hackage overlay repo > > 4.) If needed, edit the cabal.project file and change where GHC > HEAD can be found (the script currently assumes GHC HEAD is > installed from my Ubuntu PPA), e.g. > > with-compiler: /home/hvr/src/ghc-dev/inplace/bin/ghc-stage2 > > or you can add something like `optional-packages: deps/*/*.cabal` > to have cabal pick up package source-trees unpacked in the deps/ > folder, or you can inject ghc-options, relax upper bounds via > `allow-newer: *:base` etc (please refer to the cabal user guide) > > 5.) Create a `dummy.cabal` file (in future we will have `cabal > new-install` or other facilities, but for now we use this > workaround): > > > --8<---------------cut here---------------start------------->8--- > name: dummy > version: 0 > build-type: Simple > cabal-version: >=2.0 > > library > default-language: Haskell2010 > > -- library components you want cabal to solve & build for > -- and become accessible via .ghc.environment files and/or > -- `cabal new-repl` > build-depends: base, lens > > -- executable components you want cabal to build > build-tool-depends: pandoc:pandoc > > --8<---------------cut here---------------end--------------->8--- > > 6.) invoke `cabal new-build` > > 7.) 
If everything works, you'll find the `pandoc:pandoc` executable > somewhere in your ~/.cabal/store/ghc-8.3.*/ folder > (you can use http://hackage.haskell.org/package/cabal-plan > to conveniently list the location via `cabal-plan list-bins`) > > 8.) As for libraries, you can either use `cabal new-repl` > or you can leverage GHC's package environment files: > > `cabal new-build` will have generated a file like > > .ghc.environment.x86_64-linux-8.3.20170913 > > which brings into scope all transitive dependencies of > `build-depends: base, lens` > > Now all you need to do is simply call > > ghc-stage2 --make MyTestProg.hs > > to compile a program against those libs, or start up GHCi via > > ghc-stage2 --interactive > > and you'll be thrown into that package environment. > > > > I hope you find this useful > > Cheers, > Herbert > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed... URL: From conal at conal.net Mon Sep 18 20:05:09 2017 From: conal at conal.net (Conal Elliott) Date: Mon, 18 Sep 2017 13:05:09 -0700 Subject: Spurious recompilations when using a compiler plugin Message-ID: It appears that use of GHC plugins causes client code to get needlessly recompiled. (See Trac issues 12567 and 7414 .) It’s becoming more of a problem for usability of the plugin I’ve been developing, and I’m wondering what can be done. Some questions: - Is there any work in progress on fixing this situation? - Are there serious obstacles to fixing it? - Do plugin writers or users have any workarounds? Other insights appreciated. – Conal -------------- next part -------------- An HTML attachment was scrubbed... URL: From palotai.robin at gmail.com Tue Sep 19 05:38:33 2017 From: palotai.robin at gmail.com (Robin Palotai) Date: Tue, 19 Sep 2017 07:38:33 +0200 Subject: Determine instance method from class method callsite Message-ID: Sorry, I messed up subject and mailing list. Copying to both list now after the mistake (wanted only ghc-devs for specificity). Thanks! 2017-09-19 7:36 GMT+02:00 Robin Palotai : > Hello GHC devs, > > Before inventing the wheel, want to check if there is a GHC API way to > look up the (fully) resolved instance method from a class method. > > For example, given a code > > data Foo Int deriving Show > > bar = show (Foo 3) > > when inspecting the Typechecked AST for bar's show call, I would like to > get to the Name / Id of 'show' of the 'Show' typeclass. > > I believe I could use splitHsSigmaTy on the HsType of the function call to > get the context, and then evaluate the HsWrapper somehow to find out what > instance dictionary is applied to the class restriction in the context, and > then look up the instance method from the dictionary.. > > Two questions: > > 1) Is there maybe functionality for this? > > 2) If not, is there any guarantee about the constraint order in the > context, at the method call? So I could more easily determine which > constraint's application to look for.. > > Any hints welcome && Thank you! > Robin > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From moritz.angermann at gmail.com Tue Sep 19 06:38:45 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Tue, 19 Sep 2017 14:38:45 +0800 Subject: ANN: preview of alternative llvm code gen Message-ID: <2D57A1E9-2E1E-4C1D-B43E-5CD82FBC4AFA@gmail.com> Hi, as presented during the HIW at ICFP 2016, and ICFP 2017, I’ve been spending a some time on an alternative llvm backend. GHC used LLVM as an external tool, and communicates with LLVM via LLVMs Intermediate Representation. For this the LLVM backend in GHC (via `-fllvm`) writes out text files containing the textual LLVM IR, and feeds those into LLVMs the `opt` and `llc` tools to produce the final machine code. The textual IR has been changing quite a bit for each release in the past. But seems to have been quite stable from LLVM3.9 through LLVM4 and now LLVM5. However LLVM also has a binary IR called LLVM Bitcode, which is a very stable format and the one they advise to work against. Sadly, the documentation on it is mostly contained in the BitcodeReader and BitcodeWriter C++ files from the LLVM project. As Cmm hands out only labels for function without function signatures, and the LLVM backend wants to be a good consumer, we assume functions to always be pointers to int8 (i8*), as LLVM expects them to be properly typed. Of course when defining functions we do have the full function signature and also need it to pass arguments. Yet to make i8* assumption work, we create aliases for each function we define, that are of type i8*. On macOS (and iOS) where the mach-o format is used, the linker uses a feature called `subsections_via_symbols` to strip dead code. This works, by assuming that all code between two symbols belongs to the first symbol. GHCs use of Tables Next To Code, and the `prefixdata` feature that was added to llvm just for this purpose however put data in front of symbols, and the linker will strip that data if it determines the previous symbol is not used. Thus we can not strip any code on macOS or iOS produced by GHC. The NCG inserts `$dsp` suffix symbols in front of the TNTC data, and marks those as used, to work around the dead-strip issue. The current llvm backend uses the LLVMManger to strip out the `.symbols_via_subsections` directive from the generated assembly prior to handing it off to the assembler. I therefore sat out to try and see if I could fix some of the issues we ran into. This has lead to building three libraries: - data-bitcode (github.com/angerman/data-bitcode) A bitcode reader/writer. In itself Bitcode is a rather simple encoding format. It is based on sequences of bits as opposed to bytes. And encoded so called blocks (with IDs), that can give meaning to records (think: structs) that are in those blocks. It also comes with a compression mechanism, where one can define so called abbreviated records (think: packed structs). - data-bitcode-llvm (github.com/angerman/data-bitcode-llvm) A package to model LLVM modules, and lower them into the bitcode AST - data-bitcode-edsl (github.com/angerman/data-bitcode-edsl) A Haskell EDSL, that allows to construct LLVM modules. E.g. testModule :: Module testModule = mod "undef" [ def "main" ([i32, ptr =<< i8ptr] --> i32) $ \[ argc, argv ] -> mdo block "entry" $ do mem <- undef =<< (arr 10 =<< i8) memG <- global (mutable . 
private) "mem" mem ptr <- gep memG =<< sequence [int32 0, int32 0] memset <- fun "llvm.memset.p0i8.i32" =<< [i8ptr, i8, i32, i32, i1] --> void ccall memset =<< (ptr:) <$> sequence [ int8 0, int32 10, int32 4, int 1 0 ] ret =<< int32 0 pure () ] More examples can be found in github.com/angerman/data-bitcode-edsl/blob/master/test/EDSLSpec.hs With this in hand, I went a head and ported the existing llvm code gen to use the EDSL instead of concatenating strings. After this introduction, I’m now pleased to inform you that the `llvmng` backend now passes fast and slow validation, with the exception of the peculiar case of T6084 (ghc.haskell.org/t/6084) where the callee and caller signatures do not match up, and this causes the `llvmng` backend to topple over. Another note is in order: the llvmng backend is currently quite a bit more memory hungry and time consuming than the current llvm backend for non trivial modules. We do get dead-strippable binaries through (for `main = putStrLn “hello world”`): 1.2M Main8.2-llvm 972k MainHEAD-llvmng The relevant code can be found in github.com/zw3rk/ghc/tree/llvm-ng Cheers, Moritz Validation Results Below: fast validation: Unexpected results from: TEST="MultiLayerModules T12707 T13379 T13701 T13719 T6084 T9630" SUMMARY for test run started at Tue Sep 19 12:53:34 2017 +08 0:08:06 spent to go through 6077 total tests, which gave rise to 23781 test cases, of which 16423 were skipped 17 had missing libraries 2392 expected passes 32 expected failures 0 caused framework failures 0 caused framework warnings 0 unexpected passes 1 unexpected failures 6 unexpected stat failures Unexpected failures: codeGen/should_run/T6084.run T6084 [exit code non-0] (llvmng) Unexpected stat failures: perf/compiler/T13379.run T13379 [stat not good enough] (llvmng) perf/compiler/T9630.run T9630 [stat not good enough] (llvmng) perf/compiler/T12707.run T12707 [stat not good enough] (llvmng) perf/compiler/MultiLayerModules.run MultiLayerModules [stat not good enough] (llvmng) perf/compiler/T13719.run T13719 [stat not good enough] (llvmng) perf/compiler/T13701.run T13701 [stat not good enough] (llvmng) slow validation: Unexpected results from: TEST="MultiLayerModules T12707 T13379 T13701 T13719 T6084 T9630" SUMMARY for test run started at Tue Sep 19 13:14:18 2017 +08 0:28:57 spent to go through 6077 total tests, which gave rise to 23908 test cases, of which 16302 were skipped 69 had missing libraries 6223 expected passes 72 expected failures 0 caused framework failures 0 caused framework warnings 0 unexpected passes 3 unexpected failures 12 unexpected stat failures Unexpected failures: codeGen/should_run/T6084.run T6084 [exit code non-0] (llvmng) codeGen/should_run/T6084.run T6084 [exit code non-0] (llvmng) codeGen/should_run/T6084.run T6084 [exit code non-0] (llvmng) Unexpected stat failures: perf/compiler/T13379.run T13379 [stat not good enough] (llvmng) perf/compiler/T13379.run T13379 [stat not good enough] (llvmng) perf/compiler/T9630.run T9630 [stat not good enough] (llvmng) perf/compiler/T12707.run T12707 [stat not good enough] (llvmng) perf/compiler/T9630.run T9630 [stat not good enough] (llvmng) perf/compiler/T12707.run T12707 [stat not good enough] (llvmng) perf/compiler/T13719.run T13719 [stat not good enough] (llvmng) perf/compiler/MultiLayerModules.run MultiLayerModules [stat not good enough] (llvmng) perf/compiler/T13701.run T13701 [stat not good enough] (llvmng) perf/compiler/T13719.run T13719 [stat not good enough] (llvmng) perf/compiler/MultiLayerModules.run 
MultiLayerModules [stat not good enough] (llvmng) perf/compiler/T13701.run T13701 [stat not good enough] (llvmng) From sgraf1337 at gmail.com Tue Sep 19 08:17:00 2017 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Tue, 19 Sep 2017 10:17:00 +0200 Subject: ANN: Overlay Hackage Package Index for GHC HEAD In-Reply-To: References: <87mv5smcvy.fsf@gmail.com> Message-ID: Hey, this sounds extremely related to stack's Local dependency packages . Basically, what you specify in your stack.yaml when overriding the containers version looks like this: packages: - location: . # The actual project - location: git: git at github.com:haskell/containers commit: 5879fe5cece8cd4493016b793015469046d421b2 extra-dep: true I find this extremely useful for playing around with stuff that's not yet published on hackage. I think this is not so useful when stuff gets actually published on hackage, because hackage can only reference packages on hackage IIRC, but this is exactly the use case we would have for GHC. So long, Sebastian On Mon, Sep 18, 2017 at 6:37 PM, Rahul Muttineni wrote: > Hi Herbert, > > This sounds like a great idea :) Glad to see ideas from Eta inspiring new > ideas in GHC. > > "constantly nagging upstream maintainers to release quickfixes > for unreleased GHC HEAD snapshots which will likely break again a few > weeks later" > > ^ This is what motivated us to start eta-hackage in the first place. In > our case, we didn't know whether the patches were completely correct when > porting C FFI to Java FFI, at least in the corner cases, so we needed a way > to quickly deploy updates to patches without having the end user wait a > significant amount of time. Once they stabilise, we do plan on pushing the > patches upstream to the respective packages - we've already done that for a > few of them. > > Thanks, > Rahul > > On Mon, Sep 18, 2017 at 8:04 AM, Herbert Valerio Riedel < > hvriedel at gmail.com> wrote: > >> Hi GHC devs, >> >> A long-standing common problem when testing/dogfooding GHC HEAD is that >> at some point during the development cycle more and more packages from >> Hackage will run into build failures. >> >> Obviously, constantly nagging upstream maintainers to release quickfixes >> for unreleased GHC HEAD snapshots which will likely break again a few >> weeks later (as things are generally in flux until freeze comes into >> effect) does not scale and only introduces a latency/coordination >> bottleneck, and on top of it ultimately results in spamming the primary >> Hackage Package index with releases (which has non-negligible negative >> impact/costs of its own on the Hackage infrastructure, and thus ought to >> be minimised). >> >> OTOH, we need the ability to easily test, debug, profile, and prototype >> changes to GHC HEAD while things are still in motion, and case in point, >> if you try to e.g. build `pandoc` with GHC HEAD today, you'll currently >> run into a dozen or so of packages not building with GHC HEAD. >> >> To this end, I've finally found time to work on a side-project (related >> to matrix.hackage.haskell.org) which implements a scheme tailored to >> `cabal new-build`, which is inspired by how Eta copes with a very >> related issue (they rely on it for stable versions of the compiler); >> i.e., they maintain a set of patches at >> >> https://github.com/typelead/eta-hackage/tree/master/patches >> >> which fix up existing Hackage packages to work with the Eta compiler. 
>> >> >> And this gave me the idea to use a similar scheme for GHC HEAD: >> >> https://github.com/hvr/head.hackage/tree/master/patches >> >> This folder already contains several of patches (which mostly originate >> from Ryan, Ben and myself) to packages which I needed to patch in order >> to build popular Hackage packages & tools. >> >> The main difference is how those patches are applied; Eta uses a >> modified `cabal` which they renamed to `etlas` which is checks >> availability of .patch & .cabal files in the GitHub repo linked above; >> >> Whereas for GHC HEAD with `cabal new-build` a different scheme makes >> more sense: we simply generate an add-on Hackage repo, and use the >> existing `cabal` facilities (e.g. multi-repo support or the nix-style >> package store which makes sure that unofficially patched packages don't >> contaminate "normal" install-plans, etc.) to opt into the opt-in Hackage >> repo containing fixed up packages. >> >> >> >> I've tried to describe how to use the HEAD.hackage add-on repo in the >> README at >> >> https://github.com/hvr/head.hackage#how-to-use >> >> >> And finally, here's a practical example of how you can use it to build >> e.g. the `pandoc` executable with GHC HEAD (can easily be adapted to >> build your project of choice; please refer to >> >> http://cabal.readthedocs.io/en/latest/nix-local-build-overview.html >> >> to learn more about how to describe your project via `cabal.project` >> files): >> >> >> 0.) This assumes you have a recent cabal 2.1 snapshot built from Git >> >> 1.) create & `cd` into a new work-folder >> >> 2.) invoke `head.hackage.sh update` to update the HEAD.hackage index >> cache >> >> 3.) invoke `head.hackage.sh init` to create an initial `cabal.project` >> configuration which locally activates the HEAD.hackage overlay repo >> >> 4.) If needed, edit the cabal.project file and change where GHC >> HEAD can be found (the script currently assumes GHC HEAD is >> installed from my Ubuntu PPA), e.g. >> >> with-compiler: /home/hvr/src/ghc-dev/inplace/bin/ghc-stage2 >> >> or you can add something like `optional-packages: deps/*/*.cabal` >> to have cabal pick up package source-trees unpacked in the deps/ >> folder, or you can inject ghc-options, relax upper bounds via >> `allow-newer: *:base` etc (please refer to the cabal user guide) >> >> 5.) Create a `dummy.cabal` file (in future we will have `cabal >> new-install` or other facilities, but for now we use this >> workaround): >> >> >> --8<---------------cut here---------------start------------->8--- >> name: dummy >> version: 0 >> build-type: Simple >> cabal-version: >=2.0 >> >> library >> default-language: Haskell2010 >> >> -- library components you want cabal to solve & build for >> -- and become accessible via .ghc.environment files and/or >> -- `cabal new-repl` >> build-depends: base, lens >> >> -- executable components you want cabal to build >> build-tool-depends: pandoc:pandoc >> >> --8<---------------cut here---------------end--------------->8--- >> >> 6.) invoke `cabal new-build` >> >> 7.) If everything works, you'll find the `pandoc:pandoc` executable >> somewhere in your ~/.cabal/store/ghc-8.3.*/ folder >> (you can use http://hackage.haskell.org/package/cabal-plan >> to conveniently list the location via `cabal-plan list-bins`) >> >> 8.) 
As for libraries, you can either use `cabal new-repl` >> or you can leverage GHC's package environment files: >> >> `cabal new-build` will have generated a file like >> >> .ghc.environment.x86_64-linux-8.3.20170913 >> >> which brings into scope all transitive dependencies of >> `build-depends: base, lens` >> >> Now all you need to do is simply call >> >> ghc-stage2 --make MyTestProg.hs >> >> to compile a program against those libs, or start up GHCi via >> >> ghc-stage2 --interactive >> >> and you'll be thrown into that package environment. >> >> >> >> I hope you find this useful >> >> Cheers, >> Herbert >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > > > -- > Rahul Muttineni > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Sep 19 12:59:54 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Sep 2017 08:59:54 -0400 Subject: Spurious recompilations when using a compiler plugin In-Reply-To: References: Message-ID: <87tvzyyhbp.fsf@ben-laptop.smart-cactus.org> Conal Elliott writes: > It appears that use of GHC plugins causes client code to get needlessly > recompiled. (See Trac issues 12567 > and 7414 > .) It’s becoming more of a > problem for usability of the plugin > I’ve been developing, and > I’m wondering what can be done. Some questions: > > - Is there any work in progress on fixing this situation? > - Are there serious obstacles to fixing it? > - Do plugin writers or users have any workarounds? > I think the real question is what sort of interface do plugin authors need? I suspect there are a few distinct tasks here, * compute and record module implementation hashes in interface files * to include plugin implementation hashes in the recompilation check * to provide an interface allowing a plugin to compute a hash of its arguments which can be included into the recompilation check. One way of realising this would be to add a field like the following to Plugin, pluginHash :: [CommandLineOption] -> Maybe Fingerprint -- Nothing would denote "always rebuild" Would this help in your case? This would allow us to fix the TH problem in #7277 and fix the plugins problem in #7414 and #12567 in a nearly optimal way (assuming the plugin author is able to precisely define a hash). None of this is terribly difficult and given Nick's recent work on his row types plugin, it seems like it's getting more urgent. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From george.colpitts at gmail.com Tue Sep 19 14:06:41 2017 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 19 Sep 2017 14:06:41 +0000 Subject: Spurious recompilations when using a compiler plugin In-Reply-To: <87tvzyyhbp.fsf@ben-laptop.smart-cactus.org> References: <87tvzyyhbp.fsf@ben-laptop.smart-cactus.org> Message-ID: is it possible that this is also connected to https://ghc.haskell.org/trac/ghc/ticket/13604 ? On Tue, Sep 19, 2017 at 10:00 AM Ben Gamari wrote: > Conal Elliott writes: > > > It appears that use of GHC plugins causes client code to get needlessly > > recompiled. (See Trac issues 12567 > > and 7414 > > .) 
It’s becoming more of a > > problem for usability of the plugin > > I’ve been developing, > and > > I’m wondering what can be done. Some questions: > > > > - Is there any work in progress on fixing this situation? > > - Are there serious obstacles to fixing it? > > - Do plugin writers or users have any workarounds? > > > I think the real question is what sort of interface do plugin authors > need? > > I suspect there are a few distinct tasks here, > > * compute and record module implementation hashes in interface files > > * to include plugin implementation hashes in the recompilation check > > * to provide an interface allowing a plugin to compute a hash of its > arguments which can be included into the recompilation check. One way > of realising this would be to add a field like the following to Plugin, > > pluginHash :: [CommandLineOption] -> Maybe Fingerprint > -- Nothing would denote "always rebuild" > > Would this help in your case? > > This would allow us to fix the TH problem in #7277 and fix the plugins > problem in #7414 and #12567 in a nearly optimal way (assuming the plugin > author is able to precisely define a hash). > > None of this is terribly difficult and given Nick's recent work on his > row types plugin, it seems like it's getting more urgent. > > Cheers, > > - Ben > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From raichoo at googlemail.com Tue Sep 19 14:26:57 2017 From: raichoo at googlemail.com (raichoo) Date: Tue, 19 Sep 2017 16:26:57 +0200 Subject: [PATCH] Enable DTrace support on FreeBSD for dynamically linked RTS Message-ID: Hi, I've played around a little bit and came up with a little proof-of-concept patch to enable DTrace support in FreeBSD. I've only managed to get it to work for the dynamically linked RTS. It would be great if someone could take a look at this so this can get possibly merged. :) Kind regards, raichoo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dtrace.patch Type: application/octet-stream Size: 12349 bytes Desc: not available URL: From ben at smart-cactus.org Tue Sep 19 15:25:50 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Sep 2017 11:25:50 -0400 Subject: [PATCH] Enable DTrace support on FreeBSD for dynamically linked RTS In-Reply-To: References: Message-ID: <87r2v2yakh.fsf@ben-laptop.smart-cactus.org> raichoo via ghc-devs writes: > Hi, > > I've played around a little bit and came up with a little proof-of-concept > patch to enable DTrace support in FreeBSD. I've only managed to get > it to work for the dynamically linked RTS. It would be great if someone > could > take a look at this so this can get possibly merged. :) > Do you have a git branch? If not, what release did you produce this patch against? If so perhaps a pointer to it would be helpful. I tried applying it to master but it was an explosion of conflicts. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at smart-cactus.org Tue Sep 19 15:34:17 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Sep 2017 11:34:17 -0400 Subject: Spurious recompilations when using a compiler plugin In-Reply-To: References: <87tvzyyhbp.fsf@ben-laptop.smart-cactus.org> Message-ID: <87lglaya6e.fsf@ben-laptop.smart-cactus.org> George Colpitts writes: > is it possible that this is also connected to > https://ghc.haskell.org/trac/ghc/ticket/13604 ? > In that it pertains to the recompilation checker, yes. I've tried to collect the recompilation checker bugs in play here under the RecompilationCheck keyword in Trac. It would be great to get some eyes on these issues as they are clearly causing real pain for some users. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From nicolas.frisby at gmail.com Tue Sep 19 15:50:31 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Tue, 19 Sep 2017 15:50:31 +0000 Subject: Invariants about UnivCo? Message-ID: [I summarize with some direct questions at the bottom of this email.] I spent time last night trying to eliminate -dcore-lint errors from my record and variant library using the coxswain row types plugin. I made some progress, but I'm currently stuck, as discussed on this github Issue. https://github.com/nfrisby/coxswain/issues/3#issuecomment-330577609 Here's the relevant bit: The latest unresolved -dcore-lint error is an out-of-scope cobox co var. I'm certainly not creating it *directly* (there are no U(plugin:coxswain,... in the Core Lint warning), but I have to wonder if my somewhat loose use of UnivCo is violating some assumptions somewhere that's causing GHC to drop the co var binding or overlook this occurrence of it on a renaming/subst pass. I checked UnivCo for source comments looking for anything it should *not* be used for, but I didn't find an obvious explanation along those lines. I haven't yet been able to effectively distill the test case. I'm doing this all at -O0. With `-ddump-tc-trace`, I can see the offending cobox (cobox_a67M) is present in an "implication evbinds" listing after a "solveImplication end }" delimiter, but that's the last obvious binding of it. [G] cobox_a67J = CO Sym cobox_a654, [G] cobox_a67M = cobox_a67J `cast` U(plugin:coxswain,...) cobox_a654 is introduced by a GADT pattern match. I'm also not seeing obvious occurrences of cobox_a67M, but I think the reason is that I'm seeing several (Sym cobox) with no uniques printed (even with `-dppr-debug`). Those are probably the cobox in question, but I can't confirm. Questions: 1) Is there a robust way to ensure that covar's uniques are always printed? (Is the pprIface reuse with a free cobox part of the issue here?) 2) Is my plugin asking for this kind of trouble by using UnivCo to cast coboxes? 3) If I spent the effort to create non-UnivCo coercions where possible, would that likely help? This is currently an "eventually" task, but I haven't seen an urgency for it yet. I could bump its priority if it might help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is reducing a type family application somewhere "deep" within the given's predtype. I could, with considerable effort, instead wrap a single, localized UnivCo within a bunch of non-UnivCo "lifting" coercion constructors. Would that likely help? 
3) Is there a usual suspect for this kind of situation where a cobox binding is seemingly dropped (by the typechecker) even though there's an occurrence of it? Thank you for your time. -Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Sep 19 16:52:07 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Sep 2017 12:52:07 -0400 Subject: ANN: Overlay Hackage Package Index for GHC HEAD In-Reply-To: References: <87mv5smcvy.fsf@gmail.com> Message-ID: <87bmm6y6ko.fsf@ben-laptop.smart-cactus.org> Matthew Pickering writes: > Something like this is definitely useful for testing. > > When building GHC HEAD I override the ghcHEAD derivation on nixpkgs to > the right commit I want to use and then can similarly specify which > patches and versions of packages to use by modifying the > 'configuration-ghc-head.nix' file. > This is quite a bit more flexible than just patch files as I can point > to specific commits in git repos etc. > This sounds amazing. I think you may have finally sold me on Nix. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From raichoo at googlemail.com Tue Sep 19 17:15:40 2017 From: raichoo at googlemail.com (raichoo) Date: Tue, 19 Sep 2017 19:15:40 +0200 Subject: [PATCH] Enable DTrace support on FreeBSD for dynamically linked RTS In-Reply-To: <87r2v2yakh.fsf@ben-laptop.smart-cactus.org> References: <87r2v2yakh.fsf@ben-laptop.smart-cactus.org> Message-ID: Hi Ben, I've just forked GHC on github, you can find the patch here: https://github.com/raichoo/ghc/commit/ebdec3f83fc25d75aec91ebedd1bd70f9c6f27ad I hope this is more helpful. Kind regards, raichoo On Tue, Sep 19, 2017 at 5:25 PM, Ben Gamari wrote: > raichoo via ghc-devs writes: > > > Hi, > > > > I've played around a little bit and came up with a little > proof-of-concept > > patch to enable DTrace support in FreeBSD. I've only managed to get > > it to work for the dynamically linked RTS. It would be great if someone > > could > > take a look at this so this can get possibly merged. :) > > > Do you have a git branch? If not, what release did you produce this > patch against? If so perhaps a pointer to it would be helpful. I tried > applying it to master but it was an explosion of conflicts. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Sep 19 17:31:07 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 19 Sep 2017 13:31:07 -0400 Subject: [PATCH] Enable DTrace support on FreeBSD for dynamically linked RTS In-Reply-To: References: <87r2v2yakh.fsf@ben-laptop.smart-cactus.org> Message-ID: <878thay4ro.fsf@ben-laptop.smart-cactus.org> raichoo writes: > Hi Ben, > > I've just forked GHC on github, you can find the patch here: > > https://github.com/raichoo/ghc/commit/ebdec3f83fc25d75aec91ebedd1bd70f9c6f27ad > > I hope this is more helpful. > I've opened https://phabricator.haskell.org/D3994 with the patch and will review in just a moment. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From moritz.angermann at gmail.com Wed Sep 20 09:44:33 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Wed, 20 Sep 2017 17:44:33 +0800 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM Message-ID: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> Hi *, TLDR: The LLVM backend might confuse floating registers in GHC. # Demo (Ticket #14251) Let Demo.hs be the following short program (a minor modification from T6084): ``` {-# LANGUAGE MagicHash, BangPatterns #-} module Main where import GHC.Exts {-# NOINLINE f #-} f :: (Int# -> Float# -> Double# -> Float# -> Double# -> String) -> String f g = g 3# 4.0# 5.0## 6.0# 6.9## ++ " World!" {-# NOINLINE q #-} q :: Int# -> Float# -> Double# -> Float# -> Double# -> String q i j k l m = "Hello " ++ show (F# l) ++ " " ++ show (D# m) main = putStrLn (f $ q) ``` What happens if we compile them with the NCG and LLVM? $ ghc -fasm -fforce-recomp Demo.hs -O2 -o Demo-ncg && ./Demo-ncg Hello 6.0 6.9 World! $ ghc -fllvm -fforce-recomp Demo.hs -O2 -o Demo-llvm && ./Demo-llvm Hello 4.0 5.0 World! # Discussion What is happening here? The LLVM backend passes the registers in arguments, which are then mapped to registers via the GHC calling convention we added to LLVM. As the LLVM backend takes off from Cmm, we produce function that always hold the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) and appends those registers that are live throughout the function call: in the case of `q` this is one Float and one Double register. Let’s assume these are F3 and D4. Thus the function signature we generate looks like: ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) And expect the passed arguments to represent the following registers: base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 as we found that f1 and d1 are not live. Yet, when we call `q` in the form of `g` in the body of `f`. We will pass it 14 arguments instead of 12. To make this “typecheck” in LLVM, we @q' = bitcast @q to (i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) and call @q’(base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f1, d2, f3, d4). at this point, we now assign f3 <- f1 and d4 <- d2; while silently ignoring the passed arguments f3 and d4. (This is where my llvmng backend fell over, as it does not bitcast function signatures but tries to unify them.) # Solution? Initially, Ben and I though we could simply always pass all registers as arguments in LLVM and call it a day with the downside of create more verbose but correct code. As I found out, that comes with a few complications. For some reason, all active stg registers for my machine give me Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim, F1, D1, F2, D2, F3, D3, XMM1,XMM2,XMM3,XMM4,XMM5,XMM6, YMM1,YMM2,YMM3,YMM4,YMM5,YMM6, ZMM1,ZMM2,ZMM3,ZMM4,ZMM5,ZMM6 I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor AVX512; that looks like only a patch away. However we try to optimize our register, such that we can pass up to six doubles or six floats or any combination of both if needed in registers, without having to allocate them on the stack, by assuming overlapping registers (See Note [Overlapping global registers]). And as such a full function signature in LLVM would as opposed to one that’s based on the “live” registers as we have right now, would consist of 12 float/double registers, and LLVM only maps 6. 
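To make the shape problem concrete, here is a purely illustrative Haskell sketch
(none of these names exist in GHC, and this is not a proposed implementation): an
argument list built from the *live* floating-point registers changes shape from call
site to call site, whereas a list padded out to a fixed set of slots always lines up
between caller and callee.

```
-- Illustration only, not GHC code.
data FloatReg = F1 | F2 | F3 | D1 | D2 | D3
  deriving (Eq, Show)

-- The fixed order in which a callee would expect its float/double slots.
allFloatRegs :: [FloatReg]
allFloatRegs = [F1, D1, F2, D2, F3, D3]

-- Status quo: only the live registers become LLVM arguments, so two
-- call sites with different liveness produce signatures of different
-- shapes, and the bitcast at the call site silently reshuffles them.
liveArgs :: [FloatReg] -> [FloatReg]
liveArgs live = [ r | r <- allFloatRegs, r `elem` live ]

-- "Always pass all registers": one slot per register, with Nothing
-- standing in for a dead register (an undef argument in the emitted
-- IR), so the shape of the signature never depends on liveness.
paddedArgs :: [FloatReg] -> [Maybe FloatReg]
paddedArgs live = [ if r `elem` live then Just r else Nothing | r <- allFloatRegs ]
```

The padded variant corresponds to the "always pass all registers" idea mentioned
above; its cost is the extra dead arguments threaded through every call, which is
exactly where the overlapping-register scheme gets in the way.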
My current idea is to, pass only the explicit F1,D1,…,F3,D3 and try to disable the register overlapping for LLVM. This would probably force more floating values to be stack allocated rather than passed via registers, but would likely guarantee that the registers match up. The other option I can think of is to define some viertual generic floating registers in the llvm code gen: V1,…,V6 and then perform something like F1 <- V1 as float D1 <- V1 as double in the body of the function, while trying to use the `live` information at the call site to decide which of F1 or D1 to pass as V1. Ideas? Cheers, Moritz From ben at well-typed.com Wed Sep 20 15:53:53 2017 From: ben at well-typed.com (Ben Gamari) Date: Wed, 20 Sep 2017 11:53:53 -0400 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> Message-ID: <87377hxt66.fsf@ben-laptop.smart-cactus.org> Moritz Angermann writes: [snip] > > I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor AVX512; > that looks like only a patch away. However we try to optimize our register, such > that we can pass up to six doubles or six floats or any combination of both if needed > in registers, without having to allocate them on the stack, by assuming overlapping > registers (See Note [Overlapping global registers]). > > And as such a full function signature in LLVM would as opposed to one that’s based on > the “live” registers as we have right now, would consist of 12 float/double registers, > and LLVM only maps 6. My current idea is to, pass only the explicit F1,D1,…,F3,D3 > and try to disable the register overlapping for LLVM. This would probably force more > floating values to be stack allocated rather than passed via registers, but would > likely guarantee that the registers match up. The other option I can think of is to > define some viertual generic floating registers in the llvm code gen: V1,…,V6 > and then perform something like > > F1 <- V1 as float > D1 <- V1 as double > > in the body of the function, while trying to use the `live` information at the call site > to decide which of F1 or D1 to pass as V1. > Arguably the fundamental problem here is the assumption that all STG entry-points have the same machine-level calling convention. As you point out, our calling conventions in fact change due to things like register overlap. Ideally the LLVM we produce would reflect this. One way to make this happen would be for C-- call nodes to carry information about the calling convention of the target (e.g. how many arguments of each type the function expects; in the same way identifiers in Core carry their type). Unfortunately a brief look at the code generator suggests that this may require a fair amount of plumbing. It's important to note though that this overlap problem is something that will need to be addressed eventually if we are are to have proper SIMD support (due to overlap between XMM, YMM, and ZMM). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Wed Sep 20 16:11:05 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 20 Sep 2017 12:11:05 -0400 Subject: RTS changes affect runtime when they =?UTF-8?Q?shouldn=E2=80=99t?= Message-ID: <1505923865.5913.7.camel@joachim-breitner.de> Hi, while keeping an eye on the performance numbers, I notice a pattern where basically any change to the rts makes some benchmarks go up or down by a significant percentage. Recent example: https://git.haskell.org/ghc.git/commitdiff/0aba999f60babe6878a1fd2cc8410139358cad16 which exposed an additional secure modular power function in integer (and should really not affect any of our test cases) causes these changes: Benchmark name prev change now nofib/time/FS 0.434 - 4.61% 0.414 seco nds nofib/time/VS 0.369 + 15.45% 0.426 seco nds nofib/time/scs 0.411 - 4.62% 0.392 sec onds https://perf.haskell.org/ghc/#revision/0aba999f60babe6878a1fd2cc8410139 358cad16 The new effBench benchmarks (FS, VS) are particularly often affected, but also old friends like scs, lambda, integer… In a case like this I can see that the effect is spurious, but it really limits our ability to properly evaluate changes to the compiler – in some cases it makes us cheer about improvements that are not really there, in other cases it makes us hunt for ghosts. Does anyone have a solid idea what is causing these differences? Are they specific to the builder for perf.haskell.org, or do you observe them as well? And what can we do here? For the measurements in my thesis I switched to measuring instruction counts (using valgrind) instead. These are much more stable, requires only a single NoFibRun, and the machine does not have to be otherwise quiet. Should I start using these on perf.haskell.org? Or would we lose too much by not tracking actual running times any more? Greetings, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Wed Sep 20 18:33:33 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Wed, 20 Sep 2017 14:33:33 -0400 Subject: RTS changes affect runtime when they =?utf-8?Q?shouldn?= =?utf-8?Q?=E2=80=99t?= In-Reply-To: <1505923865.5913.7.camel@joachim-breitner.de> References: <1505923865.5913.7.camel@joachim-breitner.de> Message-ID: <87vakdw77m.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: [snip] > > Does anyone have a solid idea what is causing these differences? Are > they specific to the builder for perf.haskell.org, or do you observe > them as well? And what can we do here? > There is certainly no shortage of possible causes: https://www.youtube.com/watch?v=IX16gcX4vDQ It would be interesting to take a few days to really try to build an understanding of a few of these performance jumps with perf. At the moment we can only speculate. > For the measurements in my thesis I switched to measuring instruction > counts (using valgrind) instead. These are much more stable, requires > only a single NoFibRun, and the machine does not have to be otherwise > quiet. Should I start using these on perf.haskell.org? Or would we lose > too much by not tracking actual running times any more? 
> Note that valgrind can also do cache modelling so I suspect it can give you a reasonably good picture of execution; certainly better than runtime. However, the trade-off is that (last I checked) it's incredibly slow. Don't you think I mean just a bit less peppy than usual. I mean soul-crushingly, mollasses-on-a-cold-December-morning slow. If we think that we can bear the cost of running valigrind then I think it would be a great improvement. As you point out, the current run times are essentially useless. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From mail at joachim-breitner.de Wed Sep 20 20:13:24 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 20 Sep 2017 16:13:24 -0400 Subject: RTS changes affect runtime when they =?UTF-8?Q?shouldn=E2=80=99t?= In-Reply-To: <87vakdw77m.fsf@ben-laptop.smart-cactus.org> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> Message-ID: <1505938404.1534.5.camel@joachim-breitner.de> Hi Am Mittwoch, den 20.09.2017, 14:33 -0400 schrieb Ben Gamari: > Note that valgrind can also do cache modelling so I suspect it can give > you a reasonably good picture of execution; certainly better than > runtime. However, the trade-off is that (last I checked) it's incredibly > slow. Don't you think I mean just a bit less peppy than usual. I mean > soul-crushingly, mollasses-on-a-cold-December-morning slow. > > If we think that we can bear the cost of running valigrind then I think > it would be a great improvement. As you point out, the current run times > are essentially useless. I did it for my thesis and I found it ok. I mean I always sent it off to some machine and looked at the results later, so I did not really care whether it took 30mins or 2h. I think I’ll try it in perf.haskell.org and see what happens. Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From sgraf1337 at gmail.com Wed Sep 20 22:34:55 2017 From: sgraf1337 at gmail.com (Sebastian Graf) Date: Thu, 21 Sep 2017 00:34:55 +0200 Subject: =?UTF-8?Q?Re=3A_RTS_changes_affect_runtime_when_they_shouldn=E2=80=99t?= In-Reply-To: <1505938404.1534.5.camel@joachim-breitner.de> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: Hi, I did it for my thesis and I found it ok. I mean I always sent it off > to some machine and looked at the results later, so I did not really > care whether it took 30mins or 2h. I did the same for my thesis (the setup of which basically was a rip-off of Joachim's) and it was really quite bearable. I think it was even faster than doing NoFibRuns=30 without counting instructions. The only real drawback I see is that instruction count might skew results, because AFAIK it doesn't properly take the architecture (pipeline, latencies, etc.) into account. It might be just OK for the average program, though. 
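For anyone who wants to try the instruction-counting approach locally, here is a
minimal sketch — not the perf.haskell.org or nofib code, with made-up module and
function names — of how one might run a benchmark binary under cachegrind and read
the instruction count ("I refs") from the summary valgrind prints on stderr; it
assumes `valgrind` is on the PATH:

```
module CountInstructions (countInstructions) where

import Data.Char (isDigit)
import Data.Maybe (listToMaybe, mapMaybe)
import System.Process (readProcessWithExitCode)

-- Run `valgrind --tool=cachegrind prog args` and extract the total
-- instruction count from the summary that cachegrind writes to stderr.
countInstructions :: FilePath -> [String] -> IO (Maybe Integer)
countInstructions prog args = do
  (_exit, _out, err) <-
    readProcessWithExitCode "valgrind" (["--tool=cachegrind", prog] ++ args) ""
  -- The summary contains a line like "==1234== I   refs:   12,345,678".
  let irefs l = case drop 1 (words l) of          -- drop the "==1234==" marker
                  ("I" : "refs:" : n : _) -> Just (read (filter isDigit n) :: Integer)
                  _                       -> Nothing
  pure (listToMaybe (mapMaybe irefs (lines err)))
```

Because the count is essentially deterministic, a single NoFibRun per benchmark is
enough, which offsets much of cachegrind's slowdown.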
On Wed, Sep 20, 2017 at 10:13 PM, Joachim Breitner wrote: > Hi > > Am Mittwoch, den 20.09.2017, 14:33 -0400 schrieb Ben Gamari: > > Note that valgrind can also do cache modelling so I suspect it can give > > you a reasonably good picture of execution; certainly better than > > runtime. However, the trade-off is that (last I checked) it's incredibly > > slow. Don't you think I mean just a bit less peppy than usual. I mean > > soul-crushingly, mollasses-on-a-cold-December-morning slow. > > > > If we think that we can bear the cost of running valigrind then I think > > it would be a great improvement. As you point out, the current run times > > are essentially useless. > > I did it for my thesis and I found it ok. I mean I always sent it off > to some machine and looked at the results later, so I did not really > care whether it took 30mins or 2h. > > I think I’ll try it in perf.haskell.org and see what happens. > > Joachim > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de > https://www.joachim-breitner.de/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Sep 21 02:38:16 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 20 Sep 2017 22:38:16 -0400 Subject: RTS changes affect runtime when they =?UTF-8?Q?shouldn=E2=80=99t?= In-Reply-To: References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: <1505961496.10873.0.camel@joachim-breitner.de> Hi, Am Donnerstag, den 21.09.2017, 00:34 +0200 schrieb Sebastian Graf: > Hi, > > > I did it for my thesis and I found it ok. I mean I always sent it off > > to some machine and looked at the results later, so I did not really > > care whether it took 30mins or 2h. > > I did the same for my thesis (the setup of which basically was a rip- > off of Joachim's) and it was really quite bearable. I think it was > even faster than doing NoFibRuns=30 without counting instructions. > > The only real drawback I see is that instruction count might skew > results, because AFAIK it doesn't properly take the architecture > (pipeline, latencies, etc.) into account. It might be just OK for the > average program, though. I’ll try that now, and see if I like the results better. It might take a few iterations to find the right settings, so perf.haskell.org might not update quickly for now. Greetings, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From rae at cs.brynmawr.edu Wed Sep 20 23:45:07 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 20 Sep 2017 17:45:07 -0600 Subject: Invariants about UnivCo? In-Reply-To: References: Message-ID: <15060AF2-104A-43F1-AD4A-0FD7FFC8665B@cs.brynmawr.edu> > On Sep 19, 2017, at 9:50 AM, Nicolas Frisby wrote: > > Questions: > > 1) Is there a robust way to ensure that covar's uniques are always printed? (Is the pprIface reuse with a free cobox part of the issue here?) Try rebasing. I ran into a similar issue not long ago and revised the code around printing coercions. Also, -fprint-explicit-kinds might help. 
An occurrence of a tyvar is also an implicit occurrence of all the free variables in its kinds. > > 2) Is my plugin asking for this kind of trouble by using UnivCo to cast coboxes? No. `UnivCo`s should work fine. You might potentially be asking for "shoot the gorillas" problems (it has been suggested that we refrain from "launching the rockets", as that event seems a mite too likely these days, unsafeCoerceIO or no; I propose this new formulation, inspired by an utterance by Simon PJ about how when you're tackling bugs, sometimes you shoot one gorilla only to find more behind it... but then he remarked that one, of course, would never want to actually shoot a gorilla.), but I think Core Lint should be OK. > > 3) If I spent the effort to create non-UnivCo coercions where possible, would that likely help? This is currently an "eventually" task, but I haven't seen an urgency for it yet. I could bump its priority if it might help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is reducing a type family application somewhere "deep" within the given's predtype. I could, with considerable effort, instead wrap a single, localized UnivCo within a bunch of non-UnivCo "lifting" coercion constructors. Would that likely help? I don't think this line of inquiry will be helpful. > > 3) Is there a usual suspect for this kind of situation where a cobox binding is seemingly dropped (by the typechecker) even though there's an occurrence of it? Not for me. I hope this helps! Richard > > Thank you for your time. -Nick > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From nicolas.frisby at gmail.com Thu Sep 21 02:52:15 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 21 Sep 2017 02:52:15 +0000 Subject: Invariants about UnivCo? In-Reply-To: <15060AF2-104A-43F1-AD4A-0FD7FFC8665B@cs.brynmawr.edu> References: <15060AF2-104A-43F1-AD4A-0FD7FFC8665B@cs.brynmawr.edu> Message-ID: Thanks for the reply Richard, I really appreciate it. Your email actually buzzed my phone just as I was opening my laptop to draft this email. I'm optimistic that I may have identified part of the issue. * Note [Coercion evidence terms] in TcEvidence.hs lists an INVARIANT that the evidence for t1 ~# t2 must be built with EvCoercion. I'm very confident that I am instead building it with a by-fiat EvCast: this is what I meant by "by using UnivCo to cast coboxes" in my previous email. I'm optimistic that fixing this up will help. Does it sound promising/ring any bells for you? Thanks. -Nick On Wed, Sep 20, 2017 at 7:47 PM Richard Eisenberg wrote: > > > On Sep 19, 2017, at 9:50 AM, Nicolas Frisby > wrote: > > > > Questions: > > > > 1) Is there a robust way to ensure that covar's uniques are always > printed? (Is the pprIface reuse with a free cobox part of the issue here?) > > Try rebasing. I ran into a similar issue not long ago and revised the code > around printing coercions. Also, -fprint-explicit-kinds might help. An > occurrence of a tyvar is also an implicit occurrence of all the free > variables in its kinds. > > > > > 2) Is my plugin asking for this kind of trouble by using UnivCo to cast > coboxes? > > No. `UnivCo`s should work fine. 
You might potentially be asking for "shoot > the gorillas" problems (it has been suggested that we refrain from > "launching the rockets", as that event seems a mite too likely these days, > unsafeCoerceIO or no; I propose this new formulation, inspired by an > utterance by Simon PJ about how when you're tackling bugs, sometimes you > shoot one gorilla only to find more behind it... but then he remarked that > one, of course, would never want to actually shoot a gorilla.), but I think > Core Lint should be OK. > > > > > 3) If I spent the effort to create non-UnivCo coercions where possible, > would that likely help? This is currently an "eventually" task, but I > haven't seen an urgency for it yet. I could bump its priority if it might > help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is > reducing a type family application somewhere "deep" within the given's > predtype. I could, with considerable effort, instead wrap a single, > localized UnivCo within a bunch of non-UnivCo "lifting" coercion > constructors. Would that likely help? > > I don't think this line of inquiry will be helpful. > > > > > 3) Is there a usual suspect for this kind of situation where a cobox > binding is seemingly dropped (by the typechecker) even though there's an > occurrence of it? > > Not for me. > > I hope this helps! > Richard > > > > > Thank you for your time. -Nick > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Thu Sep 21 02:59:50 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 20 Sep 2017 20:59:50 -0600 Subject: Invariants about UnivCo? In-Reply-To: References: <15060AF2-104A-43F1-AD4A-0FD7FFC8665B@cs.brynmawr.edu> Message-ID: <1ABECBCB-647E-4FF4-B95D-1B33F3635205@cs.brynmawr.edu> > On Sep 20, 2017, at 8:52 PM, Nicolas Frisby wrote: > > * Note [Coercion evidence terms] in TcEvidence.hs lists an INVARIANT that the evidence for t1 ~# t2 must be built with EvCoercion. I'm very confident that I am instead building it with a by-fiat EvCast: this is what I meant by "by using UnivCo to cast coboxes" in my previous email. Hm. If you break an INVARIANT, anything can happen. Perhaps you've hit it. I think instead of using EvCast, you should use Coercion.castCoercionKind, but I'm not sure without knowing more details. Richard > > I'm optimistic that fixing this up will help. Does it sound promising/ring any bells for you? > > Thanks. -Nick > > On Wed, Sep 20, 2017 at 7:47 PM Richard Eisenberg > wrote: > > > On Sep 19, 2017, at 9:50 AM, Nicolas Frisby > wrote: > > > > Questions: > > > > 1) Is there a robust way to ensure that covar's uniques are always printed? (Is the pprIface reuse with a free cobox part of the issue here?) > > Try rebasing. I ran into a similar issue not long ago and revised the code around printing coercions. Also, -fprint-explicit-kinds might help. An occurrence of a tyvar is also an implicit occurrence of all the free variables in its kinds. > > > > > 2) Is my plugin asking for this kind of trouble by using UnivCo to cast coboxes? > > No. `UnivCo`s should work fine. 
You might potentially be asking for "shoot the gorillas" problems (it has been suggested that we refrain from "launching the rockets", as that event seems a mite too likely these days, unsafeCoerceIO or no; I propose this new formulation, inspired by an utterance by Simon PJ about how when you're tackling bugs, sometimes you shoot one gorilla only to find more behind it... but then he remarked that one, of course, would never want to actually shoot a gorilla.), but I think Core Lint should be OK. > > > > > 3) If I spent the effort to create non-UnivCo coercions where possible, would that likely help? This is currently an "eventually" task, but I haven't seen an urgency for it yet. I could bump its priority if it might help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is reducing a type family application somewhere "deep" within the given's predtype. I could, with considerable effort, instead wrap a single, localized UnivCo within a bunch of non-UnivCo "lifting" coercion constructors. Would that likely help? > > I don't think this line of inquiry will be helpful. > > > > > 3) Is there a usual suspect for this kind of situation where a cobox binding is seemingly dropped (by the typechecker) even though there's an occurrence of it? > > Not for me. > > I hope this helps! > Richard > > > > > Thank you for your time. -Nick > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Thu Sep 21 03:08:32 2017 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 21 Sep 2017 03:08:32 +0000 Subject: Invariants about UnivCo? In-Reply-To: <1ABECBCB-647E-4FF4-B95D-1B33F3635205@cs.brynmawr.edu> References: <15060AF2-104A-43F1-AD4A-0FD7FFC8665B@cs.brynmawr.edu> <1ABECBCB-647E-4FF4-B95D-1B33F3635205@cs.brynmawr.edu> Message-ID: Bah. I misparsed the note; it doesn't rule out casting coboxes. However, it does refer to Note [Bind new Givens immediately] in TcRnTypes, which indicates that maybe I should be explicitly binding an evidence variable when adjusting given constraints of type t1 ~# t2... Thanks for the castCoercionKind tip On Wed, Sep 20, 2017 at 7:59 PM Richard Eisenberg wrote: > On Sep 20, 2017, at 8:52 PM, Nicolas Frisby > wrote: > > * Note [Coercion evidence terms] in TcEvidence.hs lists an INVARIANT that > the evidence for t1 ~# t2 must be built with EvCoercion. I'm very confident > that I am instead building it with a by-fiat EvCast: this is what I meant > by "by using UnivCo to cast coboxes" in my previous email. > > > Hm. If you break an INVARIANT, anything can happen. Perhaps you've hit it. > I think instead of using EvCast, you should use Coercion.castCoercionKind, > but I'm not sure without knowing more details. > > Richard > > > I'm optimistic that fixing this up will help. Does it sound promising/ring > any bells for you? > > Thanks. -Nick > > On Wed, Sep 20, 2017 at 7:47 PM Richard Eisenberg > wrote: > >> >> > On Sep 19, 2017, at 9:50 AM, Nicolas Frisby >> wrote: >> > >> > Questions: >> > >> > 1) Is there a robust way to ensure that covar's uniques are always >> printed? (Is the pprIface reuse with a free cobox part of the issue here?) >> >> Try rebasing. I ran into a similar issue not long ago and revised the >> code around printing coercions. Also, -fprint-explicit-kinds might help. 
An >> occurrence of a tyvar is also an implicit occurrence of all the free >> variables in its kinds. >> >> > >> > 2) Is my plugin asking for this kind of trouble by using UnivCo to cast >> coboxes? >> >> No. `UnivCo`s should work fine. You might potentially be asking for >> "shoot the gorillas" problems (it has been suggested that we refrain from >> "launching the rockets", as that event seems a mite too likely these days, >> unsafeCoerceIO or no; I propose this new formulation, inspired by an >> utterance by Simon PJ about how when you're tackling bugs, sometimes you >> shoot one gorilla only to find more behind it... but then he remarked that >> one, of course, would never want to actually shoot a gorilla.), but I think >> Core Lint should be OK. >> >> > >> > 3) If I spent the effort to create non-UnivCo coercions where possible, >> would that likely help? This is currently an "eventually" task, but I >> haven't seen an urgency for it yet. I could bump its priority if it might >> help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is >> reducing a type family application somewhere "deep" within the given's >> predtype. I could, with considerable effort, instead wrap a single, >> localized UnivCo within a bunch of non-UnivCo "lifting" coercion >> constructors. Would that likely help? >> >> I don't think this line of inquiry will be helpful. >> >> > >> > 3) Is there a usual suspect for this kind of situation where a cobox >> binding is seemingly dropped (by the typechecker) even though there's an >> occurrence of it? >> >> Not for me. >> >> I hope this helps! >> Richard >> >> > >> > Thank you for your time. -Nick >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Sep 21 07:40:57 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Sep 2017 07:40:57 +0000 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> Message-ID: Moritz Talk to Kavon. He was thinking about passing a struct instead of a huge list of registers, and only initialising the live fields of the struct. I don't know whether he ultimately discarded the idea, but it sounded promising. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Moritz | Angermann | Sent: 20 September 2017 10:45 | To: GHC developers | Subject: The Curious Case of T6084 -or- Register Confusion with LLVM | | Hi *, | | TLDR: The LLVM backend might confuse floating registers in GHC. | | # Demo (Ticket #14251) | | Let Demo.hs be the following short program (a minor modification from | T6084): | ``` | {-# LANGUAGE MagicHash, BangPatterns #-} module Main where | | import GHC.Exts | | {-# NOINLINE f #-} | f :: (Int# -> Float# -> Double# -> Float# -> Double# -> String) -> String f | g = g 3# 4.0# 5.0## 6.0# 6.9## ++ " World!" | | {-# NOINLINE q #-} | q :: Int# -> Float# -> Double# -> Float# -> Double# -> String q i j k l m = | "Hello " ++ show (F# l) ++ " " ++ show (D# m) | | main = putStrLn (f $ q) | ``` | | What happens if we compile them with the NCG and LLVM? | | $ ghc -fasm -fforce-recomp Demo.hs -O2 -o Demo-ncg && ./Demo-ncg Hello 6.0 | 6.9 World! 
| | $ ghc -fllvm -fforce-recomp Demo.hs -O2 -o Demo-llvm && ./Demo-llvm Hello | 4.0 5.0 World! | | # Discussion | | What is happening here? The LLVM backend passes the registers in arguments, | which are then mapped to registers via the GHC calling convention we added | to LLVM. | | As the LLVM backend takes off from Cmm, we produce function that always hold | the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, | R4, R5, R6, SpLim) and appends those registers that are live throughout the | function call: in the case of `q` this is one Float and one Double register. | Let’s assume these are | F3 and D4. Thus the function signature we generate looks like: | | ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, | double) | | And expect the passed arguments to represent the following registers: | | base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 | | as we found that f1 and d1 are not live. | | Yet, when we call `q` in the form of `g` in the body of `f`. We will pass it | 14 arguments instead of 12. To make this “typecheck” in LLVM, we | | @q' = bitcast @q to (i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, | float, double, float, double) | | and call @q’(base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f1, d2, f3, d4). | | at this point, we now assign f3 <- f1 and d4 <- d2; while silently ignoring | the passed arguments f3 and d4. | | (This is where my llvmng backend fell over, as it does not bitcast function | signatures but tries to unify them.) | | # Solution? | | Initially, Ben and I though we could simply always pass all registers as | arguments in LLVM and call it a day with the downside of create more verbose | but correct code. As I found out, that comes with a few complications. For | some reason, all active stg registers for my machine give me | | Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim, | F1, D1, F2, D2, F3, D3, | XMM1,XMM2,XMM3,XMM4,XMM5,XMM6, | YMM1,YMM2,YMM3,YMM4,YMM5,YMM6, | ZMM1,ZMM2,ZMM3,ZMM4,ZMM5,ZMM6 | | I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor | AVX512; that looks like only a patch away. However we try to optimize our | register, such that we can pass up to six doubles or six floats or any | combination of both if needed in registers, without having to allocate them | on the stack, by assuming overlapping registers (See Note [Overlapping | global registers]). | | And as such a full function signature in LLVM would as opposed to one that’s | based on the “live” registers as we have right now, would consist of 12 | float/double registers, and LLVM only maps 6. My current idea is to, pass | only the explicit F1,D1,…,F3,D3 and try to disable the register overlapping | for LLVM. This would probably force more floating values to be stack | allocated rather than passed via registers, but would likely guarantee that | the registers match up. The other option I can think of is to define some | viertual generic floating registers in the llvm code gen: V1,…,V6 and then | perform something like | | F1 <- V1 as float | D1 <- V1 as double | | in the body of the function, while trying to use the `live` information at | the call site to decide which of F1 or D1 to pass as V1. | | Ideas? 
| | Cheers, | Moritz | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Fmail.haskell | .org%2Fcgi-bin%2Fmailman%2Flistinfo%2Fghc- | devs&data=02%7C01%7Csimonpj%40microsoft.com%7C89f152a90a1b43caa39408d5000c4b | 8c%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C636414975169335863&sdata=Dda | hjmHVAKIaK3YVrmX7lS8s3OswoeLoP5sDRV060eE%3D&reserved=0 From simonpj at microsoft.com Thu Sep 21 07:44:57 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Sep 2017 07:44:57 +0000 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: <87377hxt66.fsf@ben-laptop.smart-cactus.org> References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> <87377hxt66.fsf@ben-laptop.smart-cactus.org> Message-ID: | One way to make this happen would be for C-- call nodes to carry information | about the calling convention of the target (e.g. how many arguments of each | type the function expects; in the same way identifiers in Core carry their | type). That's be entirely possible for "known" calls, where the target is known, but not for "unknown" (i.e higher order) ones where the target of the call varies. The "Making a fast curry" paper goes into this in some detail. I think we already have different entry points for these two cases. So maybe they could have different entry conventions... Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben Gamari | Sent: 20 September 2017 16:54 | To: Moritz Angermann ; GHC developers | Subject: Re: The Curious Case of T6084 -or- Register Confusion with LLVM | | Moritz Angermann writes: | | [snip] | > | > I should not have the YMM*, and ZMM* registers as I don’t have any AVX | > nor AVX512; that looks like only a patch away. However we try to | > optimize our register, such that we can pass up to six doubles or six | > floats or any combination of both if needed in registers, without | > having to allocate them on the stack, by assuming overlapping registers | (See Note [Overlapping global registers]). | > | > And as such a full function signature in LLVM would as opposed to one | > that’s based on the “live” registers as we have right now, would | > consist of 12 float/double registers, and LLVM only maps 6. My | > current idea is to, pass only the explicit F1,D1,…,F3,D3 and try to | > disable the register overlapping for LLVM. This would probably force | > more floating values to be stack allocated rather than passed via | > registers, but would likely guarantee that the registers match up. | > The other option I can think of is to define some viertual generic | > floating registers in the llvm code gen: V1,…,V6 and then perform | > something like | > | > F1 <- V1 as float | > D1 <- V1 as double | > | > in the body of the function, while trying to use the `live` | > information at the call site to decide which of F1 or D1 to pass as V1. | > | Arguably the fundamental problem here is the assumption that all STG entry- | points have the same machine-level calling convention. As you point out, our | calling conventions in fact change due to things like register overlap. | Ideally the LLVM we produce would reflect this. | | One way to make this happen would be for C-- call nodes to carry information | about the calling convention of the target (e.g. how many arguments of each | type the function expects; in the same way identifiers in Core carry their | type). 
Unfortunately a brief look at the code generator suggests that this | may require a fair amount of plumbing. | | It's important to note though that this overlap problem is something that | will need to be addressed eventually if we are are to have proper SIMD | support (due to overlap between XMM, YMM, and ZMM). | | Cheers, | | - Ben From simonpj at microsoft.com Thu Sep 21 09:59:49 2017 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 21 Sep 2017 09:59:49 +0000 Subject: Invariants about UnivCo? In-Reply-To: References: Message-ID: Some thoughts * Read Note [Coercion holes] in TyCoRep. * As you’ll see, generally we don’t create value-bindings for (unboxed) coercions of type t1 ~# t2. (yes for boxed ones t1 ~ t2). Reasons in the Note. Exception: for superclasses of Givens we do create (co :: a ~# b) = sc_sel1 d where d is some dictionary with a superclass of type (a ~# b). Side note: the use of “cobox” is wildly unhelpful. These Ids are specifically unboxed! I’m going to change it to just “co”. * You appear to have bindings like[G] cobox_a67J = CO Sym cobox_a654. That is suspicious. Who is creating them? It may not actually be wrong but it’s suspicious. The time it’d be outright wrong is if you dropped the ev-binds on the floor. Ha! runTcSEqualites makes up an ev_binds_var, and solves the equalities – but it should be the case that no value bindings end up in the ev_binds_var. (reason: we are solving equalities in a type signature, so there is no place to put the evidence bindigns) I suggest you add a DEBUG-only assertion to check this. * Do -ddump-tc -fprint-typechecker-elaboration; that should show you the evidence binds. Can I ask you a favour? Separately from your branch, can you start a branch of small patches to GHC that include * Extra assertions, such as that above * Notes that explain things you wish you’d known earlier, with references to those Notes from the places you were studying when you that information would have been useful Richard and I know too much! – your learning curve is very valuable and I don’t want to lose it. Keeping this separate from your branch is useful : you can commit (via Phab) these updates right away, so they aren’t predicated on adding row types to GHC. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Nicolas Frisby Sent: 19 September 2017 16:51 To: ghc-devs at haskell.org Subject: Invariants about UnivCo? [I summarize with some direct questions at the bottom of this email.] I spent time last night trying to eliminate -dcore-lint errors from my record and variant library using the coxswain row types plugin. I made some progress, but I'm currently stuck, as discussed on this github Issue. https://github.com/nfrisby/coxswain/issues/3#issuecomment-330577609 Here's the relevant bit: The latest unresolved -dcore-lint error is an out-of-scope cobox co var. I'm certainly not creating it directly (there are no U(plugin:coxswain,... in the Core Lint warning), but I have to wonder if my somewhat loose use of UnivCo is violating some assumptions somewhere that's causing GHC to drop the co var binding or overlook this occurrence of it on a renaming/subst pass. I checked UnivCo for source comments looking for anything it should not be used for, but I didn't find an obvious explanation along those lines. I haven't yet been able to effectively distill the test case. I'm doing this all at -O0. 
With `-ddump-tc-trace`, I can see the offending cobox (cobox_a67M) is present in an "implication evbinds" listing after a "solveImplication end }" delimiter, but that's the last obvious binding of it. [G] cobox_a67J = CO Sym cobox_a654, [G] cobox_a67M = cobox_a67J `cast` U(plugin:coxswain,...) cobox_a654 is introduced by a GADT pattern match. I'm also not seeing obvious occurrences of cobox_a67M, but I think the reason is that I'm seeing several (Sym cobox) with no uniques printed (even with `-dppr-debug`). Those are probably the cobox in question, but I can't confirm. Questions: 1) Is there a robust way to ensure that covar's uniques are always printed? (Is the pprIface reuse with a free cobox part of the issue here?) 2) Is my plugin asking for this kind of trouble by using UnivCo to cast coboxes? 3) If I spent the effort to create non-UnivCo coercions where possible, would that likely help? This is currently an "eventually" task, but I haven't seen an urgency for it yet. I could bump its priority if it might help. E.G. I'm using UnivCo to cast entire givens when all I'm doing is reducing a type family application somewhere "deep" within the given's predtype. I could, with considerable effort, instead wrap a single, localized UnivCo within a bunch of non-UnivCo "lifting" coercion constructors. Would that likely help? 3) Is there a usual suspect for this kind of situation where a cobox binding is seemingly dropped (by the typechecker) even though there's an occurrence of it? Thank you for your time. -Nick -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Thu Sep 21 12:25:33 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Thu, 21 Sep 2017 14:25:33 +0200 Subject: Disabling Travis? In-Reply-To: <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> Message-ID: It took me no more than a couple hours to get this working, but using CircleCI, for our fork of GHC. I started from Joachim's TravisCI script. https://circleci.com/gh/tweag/ghc/tree/circleci It would be trivial to activate this for github.com/ghc/ghc as well. A few notes: - It runs ./validate --fast in 40 minutes. - CircleCI has OS X support as well. I think we should just migrate to using CircleCI for OS X testing instead of the custom drones, one or all of which are currently down. - CircleCI graciously agreed to running on one of the beefy AWS node types, called c4.xlarge (8 cores). On the standard node type (2 cores), validate takes just over an hour to run. It would be great if ./validate could scale better to more cores. From mail at joachim-breitner.de Thu Sep 21 13:55:51 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 21 Sep 2017 09:55:51 -0400 Subject: CircleCI (Was: Disable Travis?) 
In-Reply-To: References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> Message-ID: <1506002151.973.17.camel@joachim-breitner.de> Hi, Am Donnerstag, den 21.09.2017, 14:25 +0200 schrieb Boespflug, Mathieu: > It took me no more than a couple hours to get this working, but using > CircleCI, for our fork of GHC. I started from Joachim's TravisCI > script. > > https://circleci.com/gh/tweag/ghc/tree/circleci > > It would be trivial to activate this for github.com/ghc/ghc as well. > > A few notes: > - It runs ./validate --fast in 40 minutes. > - CircleCI has OS X support as well. I think we should just migrate > to > using CircleCI for OS X testing instead of the custom drones, one or > all of which are currently down. > - CircleCI graciously agreed to running on one of the beefy AWS node > types, called c4.xlarge (8 cores). On the standard node type (2 > cores), validate takes just over an hour to run. It would be great if > ./validate could scale better to more cores. nice! Yes, let’s do this. More CI never hurts (if someone keeps an eye on it and fixes breakage that is not due to the code). Can you configure circleci to mail both the committeer and a specific person (e.g. you, or me) on every failed committ? I enabled it now for ghc/ghc, but it says > Configurable resource class is not enabled in your project. Please > contact your CSM person or our support team to whitelist your > project. Can you do that? Greetings, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From kavon at farvard.in Thu Sep 21 17:32:12 2017 From: kavon at farvard.in (Kavon Farvardin) Date: Thu, 21 Sep 2017 12:32:12 -0500 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> Message-ID: <21AC39DE-D786-4069-85B7-636884DBB8F7@farvard.in> Responses are inline below: > As the LLVM backend takes off from Cmm, we produce function that always hold > the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) > and appends those registers that are live throughout the function call: in the > case of `q` this is one Float and one Double register. To be more precise, we append only the live floating point or vector arguments to this always live list. We need to do this because of overlapping register usage in our calling convention on x86-64 (F1 and D1 are both put in XMM1). See Note [Overlapping global registers] for details. > Let’s assume these are F3 and D4. Thus the function signature we generate looks like: > > ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) > > And expect the passed arguments to represent the following registers: > > base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 > > as we found that f1 and d1 are not live. I think it's wrong to assume that `q` accepts its first two floating-point arguments in F3 and D4, because I'm pretty sure the standard Cmm calling convention assigns them to F1 and D2, respectively. 
Are we actually outputting `q` such that F3 and D4 are used? > (This is where my llvmng backend fell over, as it does not bitcast function > signatures but tries to unify them.) I think to solve this problem, we'll want to bitcast functions whenever we call them, because the type of an LLVM function is important for us to get the calling convention correct. ~kavon > On Sep 20, 2017, at 4:44 AM, Moritz Angermann wrote: > > Hi *, > > TLDR: The LLVM backend might confuse floating registers in GHC. > > # Demo (Ticket #14251) > > Let Demo.hs be the following short program (a minor modification from T6084): > ``` > {-# LANGUAGE MagicHash, BangPatterns #-} > module Main where > > import GHC.Exts > > {-# NOINLINE f #-} > f :: (Int# -> Float# -> Double# -> Float# -> Double# -> String) -> String > f g = g 3# 4.0# 5.0## 6.0# 6.9## ++ " World!" > > {-# NOINLINE q #-} > q :: Int# -> Float# -> Double# -> Float# -> Double# -> String > q i j k l m = "Hello " ++ show (F# l) ++ " " ++ show (D# m) > > main = putStrLn (f $ q) > ``` > > What happens if we compile them with the NCG and LLVM? > > $ ghc -fasm -fforce-recomp Demo.hs -O2 -o Demo-ncg && ./Demo-ncg > Hello 6.0 6.9 World! > > $ ghc -fllvm -fforce-recomp Demo.hs -O2 -o Demo-llvm && ./Demo-llvm > Hello 4.0 5.0 World! > > # Discussion > > What is happening here? The LLVM backend passes the registers in arguments, > which are then mapped to registers via the GHC calling convention we added > to LLVM. > > As the LLVM backend takes off from Cmm, we produce function that always hold > the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) > and appends those registers that are live throughout the function call: in the > case of `q` this is one Float and one Double register. Let’s assume these are > F3 and D4. Thus the function signature we generate looks like: > > ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) > > And expect the passed arguments to represent the following registers: > > base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 > > as we found that f1 and d1 are not live. > > Yet, when we call `q` in the form of `g` in the body of `f`. We will pass it 14 arguments > instead of 12. To make this “typecheck” in LLVM, we > > @q' = bitcast @q to (i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) > > and call @q’(base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f1, d2, f3, d4). > > at this point, we now assign f3 <- f1 and d4 <- d2; while silently ignoring > the passed arguments f3 and d4. > > (This is where my llvmng backend fell over, as it does not bitcast function > signatures but tries to unify them.) > > # Solution? > > Initially, Ben and I though we could simply always pass all registers as > arguments in LLVM and call it a day with the downside of create more verbose > but correct code. As I found out, that comes with a few complications. For > some reason, all active stg registers for my machine give me > > Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim, > F1, D1, F2, D2, F3, D3, > XMM1,XMM2,XMM3,XMM4,XMM5,XMM6, > YMM1,YMM2,YMM3,YMM4,YMM5,YMM6, > ZMM1,ZMM2,ZMM3,ZMM4,ZMM5,ZMM6 > > I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor AVX512; > that looks like only a patch away. 
However we try to optimize our register, such > that we can pass up to six doubles or six floats or any combination of both if needed > in registers, without having to allocate them on the stack, by assuming overlapping > registers (See Note [Overlapping global registers]). > > And as such a full function signature in LLVM would as opposed to one that’s based on > the “live” registers as we have right now, would consist of 12 float/double registers, > and LLVM only maps 6. My current idea is to, pass only the explicit F1,D1,…,F3,D3 > and try to disable the register overlapping for LLVM. This would probably force more > floating values to be stack allocated rather than passed via registers, but would > likely guarantee that the registers match up. The other option I can think of is to > define some viertual generic floating registers in the llvm code gen: V1,…,V6 > and then perform something like > > F1 <- V1 as float > D1 <- V1 as double > > in the body of the function, while trying to use the `live` information at the call site > to decide which of F1 or D1 to pass as V1. > > Ideas? > > Cheers, > Moritz > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From kavon at farvard.in Thu Sep 21 18:08:58 2017 From: kavon at farvard.in (Kavon Farvardin) Date: Thu, 21 Sep 2017 13:08:58 -0500 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: <21AC39DE-D786-4069-85B7-636884DBB8F7@farvard.in> References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> <21AC39DE-D786-4069-85B7-636884DBB8F7@farvard.in> Message-ID: Let me elaborate a bit more because I clearly missed some points you already made in your original message. Sorry about that: I don't think we need a heavyweight solution to this problem (the suggestions of: disabling overlapping registers for LLVM, or adding a new virtual register class Vx). Instead, let's first remember how the type of the called function pointer corresponds to its calling convention when it is lowered to assembly in LLVM. In our GHC calling convention in LLVM, we can specify that if type == float OR type == double, use: XMM1,XMM2,XMM3,XMM4,XMM5,XMM6 When a calling convention is being determined by LLVM for any function definition or call, it goes in order from left to right in the list of parameters, and assigns float or double arguments to the first currently available register in that XMM list. So, if `q` were indeed using F3 and D4 to accept its first two floating point arguments, the function signature we generate, ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) is wrong. The registers for the `float, double` arguments will be assigned to XMM1 and XMM2 by LLVM. Since F3 and D4 use XMM3 and XMM4, respectively, we should have padded out the type of `q` in LLVM to be: ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) where the first `float, double` parameters are now unused. We would also perform the same type of padding at every call site where the first two float arguments are F3 and D4, so that they end up in the right physical registers. We pass `undef` for the first two `float, double` arguments. 
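For concreteness, a minimal hand-written sketch of such a padded signature and call site might look like the following. This is illustrative only, not actual output of GHC's LLVM backend; the names @q, @caller and the %f1/%d2/%f3/%d4 parameters are made up for the example, and the ten leading i64*/i64 parameters stand for Base, Sp, Hp, R1–R6 and SpLim as in the signatures above:

```
; Illustrative only -- not code emitted by GHC's LLVM backend.
; The two dead floating-point slots are kept in the type so that the
; live values land in XMM3/XMM4, i.e. the STG registers F3 and D4.
declare ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64,
                      float, double,   ; dead padding, occupies XMM1/XMM2
                      float, double)   ; what q actually reads: F3 and D4

define ghccc void @caller(i64* %base, i64* %sp, i64* %hp,
                          i64 %r1, i64 %r2, i64 %r3, i64 %r4,
                          i64 %r5, i64 %r6, i64 %spLim,
                          float %f1, double %d2, float %f3, double %d4) {
entry:
  ; Pass undef for the dead XMM1/XMM2 slots; the values destined for
  ; F3 and D4 go in the padded (third and fourth) floating-point slots.
  tail call ghccc void @q(i64* %base, i64* %sp, i64* %hp,
                          i64 %r1, i64 %r2, i64 %r3, i64 %r4,
                          i64 %r5, i64 %r6, i64 %spLim,
                          float undef, double undef,
                          float %f3, double %d4)
  ret void
}
```

Because ghccc hands out XMM1–XMM6 to float/double parameters strictly from left to right, the undef padding is what keeps the values for F3 and D4 in the third and fourth XMM slots on both sides of the call.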
> On Sep 21, 2017, at 12:32 PM, Kavon Farvardin wrote: > > Responses are inline below: > >> As the LLVM backend takes off from Cmm, we produce function that always hold >> the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) >> and appends those registers that are live throughout the function call: in the >> case of `q` this is one Float and one Double register. > > To be more precise, we append only the live floating point or vector arguments to this always live list. We need to do this because of overlapping register usage in our calling convention on x86-64 (F1 and D1 are both put in XMM1). See Note [Overlapping global registers] for details. > > >> Let’s assume these are F3 and D4. Thus the function signature we generate looks like: >> >> ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) >> >> And expect the passed arguments to represent the following registers: >> >> base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 >> >> as we found that f1 and d1 are not live. > > > I think it's wrong to assume that `q` accepts its first two floating-point arguments in F3 and D4, because I'm pretty sure the standard Cmm calling convention assigns them to F1 and D2, respectively. Are we actually outputting `q` such that F3 and D4 are used? > > >> (This is where my llvmng backend fell over, as it does not bitcast function >> signatures but tries to unify them.) > > > I think to solve this problem, we'll want to bitcast functions whenever we call them, because the type of an LLVM function is important for us to get the calling convention correct. > > > ~kavon > > >> On Sep 20, 2017, at 4:44 AM, Moritz Angermann wrote: >> >> Hi *, >> >> TLDR: The LLVM backend might confuse floating registers in GHC. >> >> # Demo (Ticket #14251) >> >> Let Demo.hs be the following short program (a minor modification from T6084): >> ``` >> {-# LANGUAGE MagicHash, BangPatterns #-} >> module Main where >> >> import GHC.Exts >> >> {-# NOINLINE f #-} >> f :: (Int# -> Float# -> Double# -> Float# -> Double# -> String) -> String >> f g = g 3# 4.0# 5.0## 6.0# 6.9## ++ " World!" >> >> {-# NOINLINE q #-} >> q :: Int# -> Float# -> Double# -> Float# -> Double# -> String >> q i j k l m = "Hello " ++ show (F# l) ++ " " ++ show (D# m) >> >> main = putStrLn (f $ q) >> ``` >> >> What happens if we compile them with the NCG and LLVM? >> >> $ ghc -fasm -fforce-recomp Demo.hs -O2 -o Demo-ncg && ./Demo-ncg >> Hello 6.0 6.9 World! >> >> $ ghc -fllvm -fforce-recomp Demo.hs -O2 -o Demo-llvm && ./Demo-llvm >> Hello 4.0 5.0 World! >> >> # Discussion >> >> What is happening here? The LLVM backend passes the registers in arguments, >> which are then mapped to registers via the GHC calling convention we added >> to LLVM. >> >> As the LLVM backend takes off from Cmm, we produce function that always hold >> the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) >> and appends those registers that are live throughout the function call: in the >> case of `q` this is one Float and one Double register. Let’s assume these are >> F3 and D4. Thus the function signature we generate looks like: >> >> ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) >> >> And expect the passed arguments to represent the following registers: >> >> base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 >> >> as we found that f1 and d1 are not live. >> >> Yet, when we call `q` in the form of `g` in the body of `f`. 
We will pass it 14 arguments >> instead of 12. To make this “typecheck” in LLVM, we >> >> @q' = bitcast @q to (i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) >> >> and call @q’(base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f1, d2, f3, d4). >> >> at this point, we now assign f3 <- f1 and d4 <- d2; while silently ignoring >> the passed arguments f3 and d4. >> >> (This is where my llvmng backend fell over, as it does not bitcast function >> signatures but tries to unify them.) >> >> # Solution? >> >> Initially, Ben and I though we could simply always pass all registers as >> arguments in LLVM and call it a day with the downside of create more verbose >> but correct code. As I found out, that comes with a few complications. For >> some reason, all active stg registers for my machine give me >> >> Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim, >> F1, D1, F2, D2, F3, D3, >> XMM1,XMM2,XMM3,XMM4,XMM5,XMM6, >> YMM1,YMM2,YMM3,YMM4,YMM5,YMM6, >> ZMM1,ZMM2,ZMM3,ZMM4,ZMM5,ZMM6 >> >> I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor AVX512; >> that looks like only a patch away. However we try to optimize our register, such >> that we can pass up to six doubles or six floats or any combination of both if needed >> in registers, without having to allocate them on the stack, by assuming overlapping >> registers (See Note [Overlapping global registers]). >> >> And as such a full function signature in LLVM would as opposed to one that’s based on >> the “live” registers as we have right now, would consist of 12 float/double registers, >> and LLVM only maps 6. My current idea is to, pass only the explicit F1,D1,…,F3,D3 >> and try to disable the register overlapping for LLVM. This would probably force more >> floating values to be stack allocated rather than passed via registers, but would >> likely guarantee that the registers match up. The other option I can think of is to >> define some viertual generic floating registers in the llvm code gen: V1,…,V6 >> and then perform something like >> >> F1 <- V1 as float >> D1 <- V1 as double >> >> in the body of the function, while trying to use the `live` information at the call site >> to decide which of F1 or D1 to pass as V1. >> >> Ideas? >> >> Cheers, >> Moritz >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From palotai.robin at gmail.com Thu Sep 21 22:38:28 2017 From: palotai.robin at gmail.com (Robin Palotai) Date: Fri, 22 Sep 2017 00:38:28 +0200 Subject: Determine instance method from class method callsite In-Reply-To: References: Message-ID: My conclusion so far: there's no royal way. One can get the instance dictionary DFunId pretty easy, but then access to the Typechecked AST of the instance declaration is really needed to find all the method bindings $c... (that get applied when constructing the dictionary $d...). 2017-09-19 7:38 GMT+02:00 Robin Palotai : > Sorry, I messed up subject and mailing list. Copying to both list now > after the mistake (wanted only ghc-devs for specificity). > > Thanks! > > 2017-09-19 7:36 GMT+02:00 Robin Palotai : > >> Hello GHC devs, >> >> Before inventing the wheel, want to check if there is a GHC API way to >> look up the (fully) resolved instance method from a class method. 
>> >> For example, given a code >> >> data Foo Int deriving Show >> >> bar = show (Foo 3) >> >> when inspecting the Typechecked AST for bar's show call, I would like to >> get to the Name / Id of 'show' of the 'Show' typeclass. >> >> I believe I could use splitHsSigmaTy on the HsType of the function call >> to get the context, and then evaluate the HsWrapper somehow to find out >> what instance dictionary is applied to the class restriction in the >> context, and then look up the instance method from the dictionary.. >> >> Two questions: >> >> 1) Is there maybe functionality for this? >> >> 2) If not, is there any guarantee about the constraint order in the >> context, at the method call? So I could more easily determine which >> constraint's application to look for.. >> >> Any hints welcome && Thank you! >> Robin >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Thu Sep 21 23:10:16 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Fri, 22 Sep 2017 07:10:16 +0800 Subject: The Curious Case of T6084 -or- Register Confusion with LLVM In-Reply-To: References: <8BB224E1-7D9F-4200-B96F-8F1393BF7A2D@gmail.com> <21AC39DE-D786-4069-85B7-636884DBB8F7@farvard.in> Message-ID: The issue is at the function definition. In the price point splitting code we determine that the F1 and D2 registers are not actually used in the body of `q`. And as such optimize the set of live register tees from R1, F1, D2, F3, D4 to R1, F3, D4. Thus in https://phabricator.haskell.org/D4003 I simply retain the live registers of the top proc instead of updating them to the optimized set. As such we generate the correct function signature in the llvm backend. Sent from my iPhone > On 22 Sep 2017, at 2:08 AM, Kavon Farvardin wrote: > > Let me elaborate a bit more because I clearly missed some points you already made in your original message. Sorry about that: > > > I don't think we need a heavyweight solution to this problem (the suggestions of: disabling overlapping registers for LLVM, or adding a new virtual register class Vx). > > Instead, let's first remember how the type of the called function pointer corresponds to its calling convention when it is lowered to assembly in LLVM. In our GHC calling convention in LLVM, we can specify that > > if type == float OR type == double, use: > XMM1,XMM2,XMM3,XMM4,XMM5,XMM6 > > When a calling convention is being determined by LLVM for any function definition or call, it goes in order from left to right in the list of parameters, and assigns float or double arguments to the first currently available register in that XMM list. > > So, if `q` were indeed using F3 and D4 to accept its first two floating point arguments, the function signature we generate, > > ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) > > is wrong. The registers for the `float, double` arguments will be assigned to XMM1 and XMM2 by LLVM. Since F3 and D4 use XMM3 and XMM4, respectively, we should have padded out the type of `q` in LLVM to be: > > > ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) > > where the first `float, double` parameters are now unused. We would also perform the same type of padding at every call site where the first two float arguments are F3 and D4, so that they end up in the right physical registers. > We pass `undef` for the first two `float, double` arguments. 
> > > >> On Sep 21, 2017, at 12:32 PM, Kavon Farvardin wrote: >> >> Responses are inline below: >> >>> As the LLVM backend takes off from Cmm, we produce function that always hold >>> the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) >>> and appends those registers that are live throughout the function call: in the >>> case of `q` this is one Float and one Double register. >> >> To be more precise, we append only the live floating point or vector arguments to this always live list. We need to do this because of overlapping register usage in our calling convention on x86-64 (F1 and D1 are both put in XMM1). See Note [Overlapping global registers] for details. >> >> >>> Let’s assume these are F3 and D4. Thus the function signature we generate looks like: >>> >>> ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) >>> >>> And expect the passed arguments to represent the following registers: >>> >>> base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 >>> >>> as we found that f1 and d1 are not live. >> >> >> I think it's wrong to assume that `q` accepts its first two floating-point arguments in F3 and D4, because I'm pretty sure the standard Cmm calling convention assigns them to F1 and D2, respectively. Are we actually outputting `q` such that F3 and D4 are used? >> >> >>> (This is where my llvmng backend fell over, as it does not bitcast function >>> signatures but tries to unify them.) >> >> >> I think to solve this problem, we'll want to bitcast functions whenever we call them, because the type of an LLVM function is important for us to get the calling convention correct. >> >> >> ~kavon >> >> >>> On Sep 20, 2017, at 4:44 AM, Moritz Angermann wrote: >>> >>> Hi *, >>> >>> TLDR: The LLVM backend might confuse floating registers in GHC. >>> >>> # Demo (Ticket #14251) >>> >>> Let Demo.hs be the following short program (a minor modification from T6084): >>> ``` >>> {-# LANGUAGE MagicHash, BangPatterns #-} >>> module Main where >>> >>> import GHC.Exts >>> >>> {-# NOINLINE f #-} >>> f :: (Int# -> Float# -> Double# -> Float# -> Double# -> String) -> String >>> f g = g 3# 4.0# 5.0## 6.0# 6.9## ++ " World!" >>> >>> {-# NOINLINE q #-} >>> q :: Int# -> Float# -> Double# -> Float# -> Double# -> String >>> q i j k l m = "Hello " ++ show (F# l) ++ " " ++ show (D# m) >>> >>> main = putStrLn (f $ q) >>> ``` >>> >>> What happens if we compile them with the NCG and LLVM? >>> >>> $ ghc -fasm -fforce-recomp Demo.hs -O2 -o Demo-ncg && ./Demo-ncg >>> Hello 6.0 6.9 World! >>> >>> $ ghc -fllvm -fforce-recomp Demo.hs -O2 -o Demo-llvm && ./Demo-llvm >>> Hello 4.0 5.0 World! >>> >>> # Discussion >>> >>> What is happening here? The LLVM backend passes the registers in arguments, >>> which are then mapped to registers via the GHC calling convention we added >>> to LLVM. >>> >>> As the LLVM backend takes off from Cmm, we produce function that always hold >>> the always live registers (on x86_64 these are: Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim) >>> and appends those registers that are live throughout the function call: in the >>> case of `q` this is one Float and one Double register. Let’s assume these are >>> F3 and D4. Thus the function signature we generate looks like: >>> >>> ghccc void @q(i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double) >>> >>> And expect the passed arguments to represent the following registers: >>> >>> base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f3, d4 >>> >>> as we found that f1 and d1 are not live. 
>>> >>> Yet, when we call `q` in the form of `g` in the body of `f`. We will pass it 14 arguments >>> instead of 12. To make this “typecheck” in LLVM, we >>> >>> @q' = bitcast @q to (i64*, i64*, i64*, i64, i64, i64, i64, i64, i64, i64, float, double, float, double) >>> >>> and call @q’(base, sp, hp, r1, r2, r3, r4, r5, r6, spLim, f1, d2, f3, d4). >>> >>> at this point, we now assign f3 <- f1 and d4 <- d2; while silently ignoring >>> the passed arguments f3 and d4. >>> >>> (This is where my llvmng backend fell over, as it does not bitcast function >>> signatures but tries to unify them.) >>> >>> # Solution? >>> >>> Initially, Ben and I though we could simply always pass all registers as >>> arguments in LLVM and call it a day with the downside of create more verbose >>> but correct code. As I found out, that comes with a few complications. For >>> some reason, all active stg registers for my machine give me >>> >>> Base, Sp, Hp, R1, R2, R3, R4, R5, R6, SpLim, >>> F1, D1, F2, D2, F3, D3, >>> XMM1,XMM2,XMM3,XMM4,XMM5,XMM6, >>> YMM1,YMM2,YMM3,YMM4,YMM5,YMM6, >>> ZMM1,ZMM2,ZMM3,ZMM4,ZMM5,ZMM6 >>> >>> I should not have the YMM*, and ZMM* registers as I don’t have any AVX nor AVX512; >>> that looks like only a patch away. However we try to optimize our register, such >>> that we can pass up to six doubles or six floats or any combination of both if needed >>> in registers, without having to allocate them on the stack, by assuming overlapping >>> registers (See Note [Overlapping global registers]). >>> >>> And as such a full function signature in LLVM would as opposed to one that’s based on >>> the “live” registers as we have right now, would consist of 12 float/double registers, >>> and LLVM only maps 6. My current idea is to, pass only the explicit F1,D1,…,F3,D3 >>> and try to disable the register overlapping for LLVM. This would probably force more >>> floating values to be stack allocated rather than passed via registers, but would >>> likely guarantee that the registers match up. The other option I can think of is to >>> define some viertual generic floating registers in the llvm code gen: V1,…,V6 >>> and then perform something like >>> >>> F1 <- V1 as float >>> D1 <- V1 as double >>> >>> in the body of the function, while trying to use the `live` information at the call site >>> to decide which of F1 or D1 to pass as V1. >>> >>> Ideas? >>> >>> Cheers, >>> Moritz >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Fri Sep 22 06:32:53 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Fri, 22 Sep 2017 14:32:53 +0800 Subject: In C--: should CmmCall and CmmProc agree on their live registers? Message-ID: <38053934-5C91-4921-8B3D-A5F03CF86CA4@gmail.com> Hi, apologies for writing so many emails recently. This is a minor spinoff from the "The Curious Case of T6084" email. While digging into it, I keep asking myself if CmmProc’s live registers should match those of the CmmCall that is calling it? Is there any invariant we try to enforce or would want to enforce? - Can the CmmProcs live registers be a strict superset of the corresponding CmmCalls? From the source comments in `compiler/cmm/Cmm.hs`: > Registers live on entry. Note that the set of live > registers will be correct in generated C-- code, but > not in hand-written C-- code. 
However, > splitAtProcPoints calculates correct liveness > information for CmmProcs. I would assume that this is an invalid case? - Can the CmmProcs live registers be a strict subset of the corresponding CmmCalls? This case however seems to be valid case. However, this makes me wonder if we can, and should(?) propagate the live register info from the CmmProc to the CmmCall so that they match up, and the registers are not kept live at the origin of the CmmCall if they aren’t needed? And as such potentially compute anything to put into the registers the CmmCall considers live, but the CmmProc would ignore anyway? Cheers, Moritz From mail at joachim-breitner.de Fri Sep 22 13:06:14 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 22 Sep 2017 09:06:14 -0400 Subject: perf.haskell.org update: Now using cachegrind In-Reply-To: <1505961496.10873.0.camel@joachim-breitner.de> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> <1505961496.10873.0.camel@joachim-breitner.de> Message-ID: <1506085574.1005.4.camel@joachim-breitner.de> Hi, I have switched perf.haskell.org to run nofib with $ make -C nofib EXTRA_RUNTEST_OPTS=-cachegrind NoFibRuns=1 mode=slow -j8 Right now, a complete build with testsuite and nofib takes ~2½h, but that was before I added the "-j8" to the line above, so let’s see if that helps. Even with cachegrind, most nofib tests finish in under one minute, 7 take between one and 10 minutes, k-nucleotide needs 20 minutes and fannkuch-redux needs 27 minutes. All in all not too bad. I reset the perf.haskell.org database and started re-measuring commits from 055d73c6576bed2affaf96ef6a6b89aeb2cd2e9f on. It will take a while (a week maybe?) for it to catch up with master. I hope that this allows us to spot performance regressions with greater precision. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From svenpanne at gmail.com Sat Sep 23 18:45:36 2017 From: svenpanne at gmail.com (Sven Panne) Date: Sat, 23 Sep 2017 20:45:36 +0200 Subject: =?UTF-8?Q?Re=3A_RTS_changes_affect_runtime_when_they_shouldn=E2=80=99t?= In-Reply-To: References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: 2017-09-21 0:34 GMT+02:00 Sebastian Graf : > [...] The only real drawback I see is that instruction count might skew > results, because AFAIK it doesn't properly take the architecture (pipeline, > latencies, etc.) into account. It might be just OK for the average program, > though. > It really depends on what you're trying to measure: The raw instruction count is basically useless if you want to have a number which has any connection to the real time taken by the program. The average number of cycles per CPU instruction varies by 2 orders of magnitude on modern architectures, see e.g. the Skylake section in http://www.agner.org/optimize/instruction_tables.pdf (IMHO a must-read for anyone doing serious optimizations/measurements on the assembly level). And these numbers don't even include the effects of the caches, pipeline stalls, branch prediction, execution units/ports, etc. etc. which can easily add another 1 or 2 orders of magnitude. 
So what can one do? It basically boils down to a choice: * Use a stable number like the instruction count (the "Instructions Read" (Ir) events), which has no real connection to the speed of a program. * Use a relatively volatile number like real time and/or cycles used, which is what your users will care about. If you put a non-trivial amount of work into your compiler, you can make these numbers a bit more stable (e.g. by making the code layout/alignment more stable), but you will still get quite different numbers if you switch to another CPU generation/manufacturer. A bit tragic, but that's life in 2017... :-} -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sat Sep 23 19:06:32 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 23 Sep 2017 15:06:32 -0400 Subject: RTS changes affect runtime when they =?UTF-8?Q?shouldn=E2=80=99t?= In-Reply-To: References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: <1506193592.8947.6.camel@joachim-breitner.de> Hi, Am Samstag, den 23.09.2017, 20:45 +0200 schrieb Sven Panne: > 2017-09-21 0:34 GMT+02:00 Sebastian Graf : > > [...] The only real drawback I see is that instruction count might > > skew results, because AFAIK it doesn't properly take the > > architecture (pipeline, latencies, etc.) into account. It might be > > just OK for the average program, though. > > > > It really depends on what you're trying to measure: The raw > instruction count is basically useless if you want to have a number > which has any connection to the real time taken by the program. The > average number of cycles per CPU instruction varies by 2 orders of > magnitude on modern architectures, see e.g. the Skylake section in ht > tp://www.agner.org/optimize/instruction_tables.pdf (IMHO a must-read > for anyone doing serious optimizations/measurements on the assembly > level). And these numbers don't even include the effects of the > caches, pipeline stalls, branch prediction, execution units/ports, > etc. etc. which can easily add another 1 or 2 orders of magnitude. > > So what can one do? It basically boils down to a choice: > > * Use a stable number like the instruction count (the > "Instructions Read" (Ir) events), which has no real connection to the > speed of a program. > > * Use a relatively volatile number like real time and/or cycles > used, which is what your users will care about. If you put a non- > trivial amount of work into your compiler, you can make these numbers > a bit more stable (e.g. by making the code layout/alignment more > stable), but you will still get quite different numbers if you switch > to another CPU generation/manufacturer. > > A bit tragic, but that's life in 2017... :-} what I want to do is to reliably catch regressions. What are the odds that a change to the Haskell compiler (in particular to Core2Core transformations) will cause a significant increase in runtime without a significant increase in instruction count? (Honest question, not rhetoric). Greetings, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From spam at scientician.net Sat Sep 23 19:08:23 2017 From: spam at scientician.net (Bardur Arantsson) Date: Sat, 23 Sep 2017 21:08:23 +0200 Subject: =?UTF-8?Q?Re:_RTS_changes_affect_runtime_when_they_shouldn=e2=80=99?= =?UTF-8?Q?t?= In-Reply-To: References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: On 2017-09-23 20:45, Sven Panne wrote: > 2017-09-21 0:34 GMT+02:00 Sebastian Graf >: > > [...] The only real drawback I see is that instruction count might > skew results, because AFAIK it doesn't properly take the > architecture (pipeline, latencies, etc.) into account. It might be > just OK for the average program, though. > > > It really depends on what you're trying to measure: The raw instruction > count is basically useless if you want to have a number which has any > connection to the real time taken by the program. The average number of > cycles per CPU instruction varies by 2 orders of magnitude on modern > architectures, see e.g. the Skylake section > in http://www.agner.org/optimize/instruction_tables.pdf (IMHO a > must-read for anyone doing serious optimizations/measurements on the > assembly level). And these numbers don't even include the effects of the > caches, pipeline stalls, branch prediction, execution units/ports, etc. > etc. which can easily add another 1 or 2 orders of magnitude. > > So what can one do? It basically boils down to a choice: > >    * Use a stable number like the instruction count (the "Instructions > Read" (Ir) events), which has no real connection to the speed of a program. > >    * Use a relatively volatile number like real time and/or cycles used, > which is what your users will care about. If you put a non-trivial > amount of work into your compiler, you can make these numbers a bit more > stable (e.g. by making the code layout/alignment more stable), but you > will still get quite different numbers if you switch to another CPU > generation/manufacturer. > > A bit tragic, but that's life in 2017... :-} > > I may be missing something since I have only quickly skimmed the thread, but...: Why not track all of these things and correlate them with individual runs? The Linux 'perf' tool can retrieve a *lot* of interesting numbers, esp. around cache hit rates, branch predicition hit rates, etc. Regards, From moritz.angermann at gmail.com Sun Sep 24 05:44:39 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sun, 24 Sep 2017 13:44:39 +0800 Subject: Ignoring ANN Module "HLint: ..." Message-ID: <31BB30E1-02C5-4183-A71B-F977DEA2B07A@gmail.com> Hi *, can we detect annotations like `{-# ANN module "HLint: ignore Reduce duplication" #-}` easily? Right now this will result (without -fexternal-interpreter or a stage2 compiler) in Ignoring ANN annotation, because this is a stage-1 compiler without -fexternal-interpreter or doesn't support GHCi However, I do not see why this should require the external interpreter at all. This looks to me more like an informational comment? Cheers, Moritz From allbery.b at gmail.com Sun Sep 24 05:48:03 2017 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 24 Sep 2017 01:48:03 -0400 Subject: Ignoring ANN Module "HLint: ..." 
In-Reply-To: <31BB30E1-02C5-4183-A71B-F977DEA2B07A@gmail.com> References: <31BB30E1-02C5-4183-A71B-F977DEA2B07A@gmail.com> Message-ID: On Sun, Sep 24, 2017 at 1:44 AM, Moritz Angermann < moritz.angermann at gmail.com> wrote: > can we detect annotations like `{-# ANN module "HLint: ignore Reduce > duplication" #-}` easily? > Right now this will result (without -fexternal-interpreter or a stage2 > compiler) in > > Ignoring ANN annotation, because this is a stage-1 compiler without > -fexternal-interpreter or doesn't support GHCi > > However, I do not see why this should require the external interpreter at > all. This looks to me > more like an informational comment? > As I understand it, ANNotations are intended for use with ghc plugins; hlint's use of them is not *quite* an abuse, since it is relying on haskell-src-exts handling of them rather than ghc's. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz.angermann at gmail.com Sun Sep 24 05:51:22 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Sun, 24 Sep 2017 13:51:22 +0800 Subject: Ignoring ANN Module "HLint: ..." In-Reply-To: References: <31BB30E1-02C5-4183-A71B-F977DEA2B07A@gmail.com> Message-ID: <86C5DA92-FC1C-4EF9-82C9-B3214880E4C4@gmail.com> > On Sep 24, 2017, at 1:48 PM, Brandon Allbery wrote: > > On Sun, Sep 24, 2017 at 1:44 AM, Moritz Angermann wrote: > can we detect annotations like `{-# ANN module "HLint: ignore Reduce duplication" #-}` easily? > Right now this will result (without -fexternal-interpreter or a stage2 compiler) in > > Ignoring ANN annotation, because this is a stage-1 compiler without -fexternal-interpreter or doesn't support GHCi > > However, I do not see why this should require the external interpreter at all. This looks to me > more like an informational comment? > > As I understand it, ANNotations are intended for use with ghc plugins; hlint's use of them is not *quite* an abuse, since it is relying on haskell-src-exts handling of them rather than ghc's. > Hi Brandon, I did not mean to imply abuse here. Just that this kind of annotation should not trigger the external interpreter route (if given). While this might not have much of an impact with stage2 compilers, where the interpreter and the compiler are essentially the same. But in cases where this is not the case (read: external interpreter, possibly on a different machine). This results in unnecessary roundtrips, as far as I can see. As such I’m wondering if we could teach GHC to ignore those easily? Cheers, Moritz From allbery.b at gmail.com Sun Sep 24 05:54:17 2017 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 24 Sep 2017 01:54:17 -0400 Subject: Ignoring ANN Module "HLint: ..." In-Reply-To: <86C5DA92-FC1C-4EF9-82C9-B3214880E4C4@gmail.com> References: <31BB30E1-02C5-4183-A71B-F977DEA2B07A@gmail.com> <86C5DA92-FC1C-4EF9-82C9-B3214880E4C4@gmail.com> Message-ID: On Sun, Sep 24, 2017 at 1:51 AM, Moritz Angermann < moritz.angermann at gmail.com> wrote: > > > As I understand it, ANNotations are intended for use with ghc plugins; > hlint's use of them is not *quite* an abuse, since it is relying on > haskell-src-exts handling of them rather than ghc's. > > I did not mean to imply abuse here. 
Just that this kind of annotation > should not trigger the external interpreter > I didn't either, really; the main point is that ghc expects annotations to be related to plugins, which *would* require the interpreter. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From david at well-typed.com Sun Sep 24 11:45:55 2017 From: david at well-typed.com (David Feuer) Date: Sun, 24 Sep 2017 07:45:55 -0400 Subject: Why isn't this Typeable? Message-ID: <20170924111811.A6CBDBCAB7@haskell.org> data Foo :: (forall a. a -> Maybe a) -> Type Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? David Feuer Well-Typed, LLP -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan.gl.scott at gmail.com Sun Sep 24 14:08:49 2017 From: ryan.gl.scott at gmail.com (Ryan Scott) Date: Sun, 24 Sep 2017 10:08:49 -0400 Subject: Why isn't this Typeable? Message-ID: Trying to conclude Typeable Foo (or, if expanded with -fprint-explicit-kinds, Typeable ((forall a. a -> Maybe a) -> Type) Foo) is beyond GHC's capabilities at the moment, as that would require impredicative polymorphism. This problem has arisen in other contexts too—see Trac #13895 [1] for one example. I don't think you can conclude Typeable (Foo 'Just) either, since that requires concluding both Typeable Foo and Typeable 'Just, so you ultimately run into the same problem.
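To make that asymmetry concrete, here is a small editorial sketch (not code from the thread; it assumes a GHC of roughly this vintage, 8.2 or later, with Type.Reflection available, and reuses David's hypothetical Foo) contrasting the constraint GHC can solve with the two it cannot:

```haskell
{-# LANGUAGE DataKinds, PolyKinds, RankNTypes, TypeInType #-}
module TypeableSketch where

import Data.Kind       (Type)
import Type.Reflection (TypeRep, typeRep)

-- David's hypothetical type from the start of the thread.
data Foo :: (forall a. a -> Maybe a) -> Type

-- Accepted: 'Just is instantiated at the monotype Int, so the wanted
-- constraint is Typeable ('Just Int) at the ordinary kind (Maybe Type).
justIntRep :: TypeRep ('Just Int)
justIntRep = typeRep

-- Rejected: solving either of these would require Typeable at a kind
-- involving (forall a. a -> Maybe a), i.e. the impredicative
-- instantiation described above.
--
-- fooRep :: TypeRep Foo
-- fooRep = typeRep
--
-- fooJustRep :: TypeRep (Foo 'Just)
-- fooJustRep = typeRep
```

The accepted case matches the point made later in the thread that 'Just is Typeable only once it has been instantiated at a monotype.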
While there is an in-the-works plan to allow a limited form of impredicativity through explicit use of visible type application [2], my fear is that that wouldn't be enough to address the problem you've encountered, since there's no way to visibly apply @((forall a. a -> Maybe a) -> Type) to Typeable at the moment. To accomplish this, you would need visible kind application [3]. Ryan S. ----- [1] https://ghc.haskell.org/trac/ghc/ticket/13895 [2] https://ghc.haskell.org/trac/ghc/ticket/11319#comment:11 [3] https://ghc.haskell.org/trac/ghc/ticket/12045 From david at well-typed.com Sun Sep 24 17:07:22 2017 From: david at well-typed.com (David Feuer) Date: Sun, 24 Sep 2017 13:07:22 -0400 Subject: Why isn't this Typeable? Message-ID: <20170924163934.36BB2BCAB5@haskell.org> I don't see why Typeable (Foo 'Just) requires that. I'd expect to get back a TrTyCon, not a TrApp. Some modifications to the structure of TrTyCon might be required. David Feuer Well-Typed, LLP -------- Original message -------- From: Ryan Scott Date: 9/24/17 10:08 AM (GMT-05:00) To: ghc-devs at haskell.org Subject: Re: Why isn't this Typeable? Trying to conclude Typeable Foo (or, if expanded with -fprint-explicit-kinds, Typeable ((forall a. a -> Maybe a) -> Type) Foo) is beyond GHC's capabilities at the moment, as that would require impredicative polymorphism. This problem has arisen in other contexts too—see Trac #13895 [1] for one example. I don't think you can conclude Typeable (Foo 'Just) either, since that requires concluding both Typeable Foo and Typeable 'Just, so you ultimately run into the same problem.
URL: From david at well-typed.com Sun Sep 24 18:11:03 2017 From: david at well-typed.com (David Feuer) Date: Sun, 24 Sep 2017 14:11:03 -0400 Subject: =?UTF-8?Q?Re:_RTS_changes_affect_r?= =?UTF-8?Q?untime_when_they_shouldn=E2=80=99t?= Message-ID: <20170924174320.EDD34BCA6B@haskell.org> I think changes to the RTS, code generator, and general heap layout are exactly where we *do* want to worry about these very low-level details. Changes in type checking, desugaring, core-to-core, etc., probably are not, because it's just too hard to tease out the relationship between what they do and what instructions are emitted in the end. David FeuerWell-Typed, LLP -------- Original message --------From: Sven Panne Date: 9/24/17 2:00 PM (GMT-05:00) To: Joachim Breitner Cc: ghc-devs at haskell.org Subject: Re: RTS changes affect runtime when they shouldn’t 2017-09-23 21:06 GMT+02:00 Joachim Breitner : > what I want to do is to reliably catch regressions. The main question is: Which kind of regressions do you want to catch? Do you care about runtime as experienced by the user? Measure the runtime. Do you care abou code size? Measure the code size. etc. etc. Measuring things like the number of fetched instructions as an indicator for the experienced runtime is basically a useless exercise, unless you do this on ancient RISC processors, where each instruction takes a fixed number of cycles. > What are the odds that a change to the Haskell compiler (in particular to > Core2Core > transformations) will cause a significant increase in runtime without a >  significant increase in instruction count? > (Honest question, not rhetoric). > The odds are actually quite high, especially when you define "significant" as "changing a few percent" (which we do!). Just a few examples from current CPUs:    * If branch prediction has not enough information to do this better, it assumes that backward branches are taken (think: loops) and forward branches are not taken (so you should put "exceptional" code out of the common, straight-line code). If by some innocent looking change the code layout changes, you can easily get a very measurable difference in runtime even if the number of executed instructions stays exactly the same.    * Even if the number of instructions changes only a tiny bit, it could be the case that it is just enough to make caching much worse and/or make the loop stream detector fail to detect a loop. There are lots of other scenarios, so in a nutshell: Measure what you really care about, not something you think might be related to that. As already mentioned in another reply, "perf" can give you very detailed hints about how good your program uses the pipeline, caches, branch prediction etc. Perhaps the performance dashboard should really collect these, too, this would remove a lot of guesswork. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Sun Sep 24 19:16:37 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 24 Sep 2017 13:16:37 -0600 Subject: Why isn't this Typeable? In-Reply-To: <20170924111811.A6CBDBCAB7@haskell.org> References: <20170924111811.A6CBDBCAB7@haskell.org> Message-ID: <895C8A5A-83A0-4E77-9D17-6203909DCD36@cs.brynmawr.edu> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe a), making the full constraint Typable (forall a. a -> Maybe a) 'Just. And saying that would be impredicative. 
In other contexts, 'Just *can* be Typeable, but it's 'Just invisibly instantiated at some monotype for `a`. So I think that this boils down to impredicativity and that the implementation is doing the right thing here. Richard > On Sep 24, 2017, at 5:45 AM, David Feuer wrote: > > data Foo :: (forall a. a -> Maybe a) -> Type > > Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? > > David Feuer > Well-Typed, LLP > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From m at tweag.io Mon Sep 25 11:36:15 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 25 Sep 2017 13:36:15 +0200 Subject: CircleCI (Was: Disable Travis?) In-Reply-To: <1506002151.973.17.camel@joachim-breitner.de> References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> <1506002151.973.17.camel@joachim-breitner.de> Message-ID: Hi Joachim, great! > Can you configure circleci to mail both the committeer and a specific person (e.g. you, or me) on every failed committ? That's a good question. The way things work in CircleCI is that this is a user setting. You can go to https://circleci.com/account/notifications and "subscribe" to notifications on a per-organization or per-project basis. Regarding the resource_class setting, that's a feature CircleCI graciously enabled for tweag/ghc. I could do the same request for the ghc/ Github org. But better if it's an admin of the org. Best, -- Mathieu Boespflug Founder at http://tweag.io. On 21 September 2017 at 15:55, Joachim Breitner wrote: > Hi, > > Am Donnerstag, den 21.09.2017, 14:25 +0200 schrieb Boespflug, Mathieu: >> It took me no more than a couple hours to get this working, but using >> CircleCI, for our fork of GHC. I started from Joachim's TravisCI >> script. >> >> https://circleci.com/gh/tweag/ghc/tree/circleci >> >> It would be trivial to activate this for github.com/ghc/ghc as well. >> >> A few notes: >> - It runs ./validate --fast in 40 minutes. >> - CircleCI has OS X support as well. I think we should just migrate >> to >> using CircleCI for OS X testing instead of the custom drones, one or >> all of which are currently down. >> - CircleCI graciously agreed to running on one of the beefy AWS node >> types, called c4.xlarge (8 cores). On the standard node type (2 >> cores), validate takes just over an hour to run. It would be great if >> ./validate could scale better to more cores. > > nice! Yes, let’s do this. More CI never hurts (if someone keeps an eye > on it and fixes breakage that is not due to the code). > > Can you configure circleci to mail both the committeer and a specific > person (e.g. you, or me) on every failed committ? > > I enabled it now for ghc/ghc, but it says > >> Configurable resource class is not enabled in your project. Please >> contact your CSM person or our support team to whitelist your >> project. > > Can you do that? 
> > Greetings, > Joachim > > > -- > Joachim Breitner > mail at joachim-breitner.de > http://www.joachim-breitner.de/ From david at well-typed.com Mon Sep 25 18:28:58 2017 From: david at well-typed.com (David Feuer) Date: Mon, 25 Sep 2017 14:28:58 -0400 Subject: Why isn't this Typeable? In-Reply-To: <895C8A5A-83A0-4E77-9D17-6203909DCD36@cs.brynmawr.edu> References: <20170924111811.A6CBDBCAB7@haskell.org> <895C8A5A-83A0-4E77-9D17-6203909DCD36@cs.brynmawr.edu> Message-ID: <41854077.kjUZiYhXhN@squirrel> My example wasn't quite the one I intended (although I think it should work as well, and it's simpler). Here's the sort of example I really intended: data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) -> Type I would expect Bar :: Proxy ('Just Int) -> Type or, to abuse notation a bit, Bar @'Just @Int to be Typeable. What I'm really suggesting is that we should distinguish between things that are typeable and things that can be decomposed into typeable components. We already make a limited distinction here. For example, we have 'Just :: forall a. a -> Maybe a 'Just itself cannot be Typeable, but once it's applied to a kind variable, it is Typeable. 'Just @Int is Typeable even though that (kind) application cannot be broken with App. Similarly, I'd expect Foo 'Just to be Typeable even though that (type) application cannot be broken with App (or Fun). Putting things in terms of fingerprints, we can offer type-indexed fingerprints newtype Finger a = Finger Fingerprint for anything we can fingerprint. Is there any difficulty fingerprinting types like Foo 'Just and Bar @'Just @Int? Fingerprints are useful for lots of applications where decomposition isn't necessary. On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote: > The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe a), making the full constraint Typable (forall a. a -> Maybe a) 'Just. And saying that would be impredicative. In other contexts, 'Just *can* be Typeable, but it's 'Just invisibly instantiated at some monotype for `a`. > > So I think that this boils down to impredicativity and that the implementation is doing the right thing here. > > Richard > > > On Sep 24, 2017, at 5:45 AM, David Feuer wrote: > > > > data Foo :: (forall a. a -> Maybe a) -> Type > > > > Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? > > > > David Feuer > > Well-Typed, LLP > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From rae at cs.brynmawr.edu Mon Sep 25 18:42:12 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 25 Sep 2017 14:42:12 -0400 Subject: Why isn't this Typeable? In-Reply-To: <41854077.kjUZiYhXhN@squirrel> References: <20170924111811.A6CBDBCAB7@haskell.org> <895C8A5A-83A0-4E77-9D17-6203909DCD36@cs.brynmawr.edu> <41854077.kjUZiYhXhN@squirrel> Message-ID: <6A95040C-8118-44D0-BB98-EED94103BB00@cs.brynmawr.edu> I suppose this is conceivable, but it would complicate the representation and solver for TypeReps considerably. Do you have a real use case? 
Richard > On Sep 25, 2017, at 2:28 PM, David Feuer wrote: > > My example wasn't quite the one I intended (although I think it should > work as well, and it's simpler). Here's the sort of example I really intended: > > data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) -> Type > > I would expect > > Bar :: Proxy ('Just Int) -> Type > > or, to abuse notation a bit, > > Bar @'Just @Int > > to be Typeable. What I'm really suggesting is that we should distinguish between things that are typeable and > things that can be decomposed into typeable components. We already make a limited distinction > here. For example, we have > > 'Just :: forall a. a -> Maybe a > > 'Just itself cannot be Typeable, but once it's applied to a kind variable, it is Typeable. > 'Just @Int is Typeable even though that (kind) application cannot be broken with App. Similarly, I'd expect > Foo 'Just to be Typeable even though that (type) application cannot be broken with App (or Fun). > > Putting things in terms of fingerprints, we can offer type-indexed fingerprints > > newtype Finger a = Finger Fingerprint > > for anything we can fingerprint. Is there any difficulty fingerprinting types like Foo 'Just and > Bar @'Just @Int? Fingerprints are useful for lots of applications where decomposition isn't > necessary. > > On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote: >> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe a), making the full constraint Typable (forall a. a -> Maybe a) 'Just. And saying that would be impredicative. In other contexts, 'Just *can* be Typeable, but it's 'Just invisibly instantiated at some monotype for `a`. >> >> So I think that this boils down to impredicativity and that the implementation is doing the right thing here. >> >> Richard >> >>> On Sep 24, 2017, at 5:45 AM, David Feuer wrote: >>> >>> data Foo :: (forall a. a -> Maybe a) -> Type >>> >>> Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? >>> >>> David Feuer >>> Well-Typed, LLP >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > From david at well-typed.com Mon Sep 25 19:00:00 2017 From: david at well-typed.com (David Feuer) Date: Mon, 25 Sep 2017 15:00:00 -0400 Subject: Why isn't this Typeable? Message-ID: <20170925183208.7A6E4BC978@haskell.org> No. What led me down this path is that I was thinking about whether we could simplify the representation and reduce the TCB. The as-yet-incomplete ideas I had (largely based on the concept of using a constructor name as a singletons-style defunctionalization symbol) seem difficult to adapt to the generalization I describe, so I wanted to check first how much that matters. David FeuerWell-Typed, LLP -------- Original message --------From: Richard Eisenberg Date: 9/25/17 2:42 PM (GMT-05:00) To: David Feuer Cc: Ben Gamari , ghc-devs at haskell.org Subject: Re: Why isn't this Typeable? I suppose this is conceivable, but it would complicate the representation and solver for TypeReps considerably. Do you have a real use case? 
Richard > On Sep 25, 2017, at 2:28 PM, David Feuer wrote: > > My example wasn't quite the one I intended (although I think it should > work as well, and it's simpler). Here's the sort of example I really intended: > >    data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) ->  Type > > I would expect > >    Bar :: Proxy ('Just Int) -> Type > > or, to abuse notation a bit, > >    Bar @'Just @Int > > to be Typeable. What I'm really suggesting is that we should distinguish between things that are typeable and > things that can be decomposed into typeable components. We already make a limited distinction > here. For example, we have > >  'Just :: forall a. a -> Maybe a > > 'Just itself cannot be Typeable, but once it's applied to a kind variable, it is Typeable. > 'Just @Int is Typeable even though that (kind) application cannot be broken with App. Similarly, I'd expect > Foo 'Just to be Typeable even though that (type) application cannot be broken with App (or Fun). > > Putting things in terms of fingerprints, we can offer type-indexed fingerprints > > newtype Finger a = Finger Fingerprint > > for anything we can fingerprint. Is there any difficulty fingerprinting types like Foo 'Just and > Bar @'Just @Int? Fingerprints are useful for lots of applications where decomposition isn't > necessary. > > On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote: >> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe a), making the full constraint Typable (forall a. a -> Maybe a) 'Just. And saying that would be impredicative. In other contexts, 'Just *can* be Typeable, but it's 'Just invisibly instantiated at some monotype for `a`. >> >> So I think that this boils down to impredicativity and that the implementation is doing the right thing here. >> >> Richard >> >>> On Sep 24, 2017, at 5:45 AM, David Feuer wrote: >>> >>> data Foo :: (forall a. a -> Maybe a) -> Type >>> >>> Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? >>> >>> David Feuer >>> Well-Typed, LLP >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon Sep 25 19:05:28 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Mon, 25 Sep 2017 15:05:28 -0400 Subject: Why isn't this Typeable? Message-ID: I think we're a long way off from supporting Typeable for higher-kinded types, so I wouldn't worry about that dark, spider-ridden corner. Richard > On Sep 25, 2017, at 3:00 PM, David Feuer wrote: > > No. What led me down this path is that I was thinking about whether we could simplify the representation and reduce the TCB. The as-yet-incomplete ideas I had (largely based on the concept of using a constructor name as a singletons-style defunctionalization symbol) seem difficult to adapt to the generalization I describe, so I wanted to check first how much that matters. 
> > > David Feuer > Well-Typed, LLP > > -------- Original message -------- > From: Richard Eisenberg > Date: 9/25/17 2:42 PM (GMT-05:00) > To: David Feuer > Cc: Ben Gamari , ghc-devs at haskell.org > Subject: Re: Why isn't this Typeable? > > I suppose this is conceivable, but it would complicate the representation and solver for TypeReps considerably. Do you have a real use case? > > Richard > > > On Sep 25, 2017, at 2:28 PM, David Feuer wrote: > > > > My example wasn't quite the one I intended (although I think it should > > work as well, and it's simpler). Here's the sort of example I really intended: > > > > data Bar :: forall (j :: forall k. k -> Maybe k) (a :: Type). Proxy (j a) -> Type > > > > I would expect > > > > Bar :: Proxy ('Just Int) -> Type > > > > or, to abuse notation a bit, > > > > Bar @'Just @Int > > > > to be Typeable. What I'm really suggesting is that we should distinguish between things that are typeable and > > things that can be decomposed into typeable components. We already make a limited distinction > > here. For example, we have > > > > 'Just :: forall a. a -> Maybe a > > > > 'Just itself cannot be Typeable, but once it's applied to a kind variable, it is Typeable. > > 'Just @Int is Typeable even though that (kind) application cannot be broken with App. Similarly, I'd expect > > Foo 'Just to be Typeable even though that (type) application cannot be broken with App (or Fun). > > > > Putting things in terms of fingerprints, we can offer type-indexed fingerprints > > > > newtype Finger a = Finger Fingerprint > > > > for anything we can fingerprint. Is there any difficulty fingerprinting types like Foo 'Just and > > Bar @'Just @Int? Fingerprints are useful for lots of applications where decomposition isn't > > necessary. > > > > On Sunday, September 24, 2017 1:16:37 PM EDT Richard Eisenberg wrote: > >> The problem is that to get Typeable (Foo 'Just), we need Typeable 'Just. However, the kind parameter for Typeable 'Just would be (forall a. a -> Maybe a), making the full constraint Typable (forall a. a -> Maybe a) 'Just. And saying that would be impredicative. In other contexts, 'Just *can* be Typeable, but it's 'Just invisibly instantiated at some monotype for `a`. > >> > >> So I think that this boils down to impredicativity and that the implementation is doing the right thing here. > >> > >> Richard > >> > >>> On Sep 24, 2017, at 5:45 AM, David Feuer wrote: > >>> > >>> data Foo :: (forall a. a -> Maybe a) -> Type > >>> > >>> Neither Foo nor Foo 'Just is Typeable. There seems to be a certain sense to excluding Foo proper, because it can't be decomposed with Fun. But why not Foo 'Just? Is there a fundamental reason, or is that largely an implementation artifact? > >>> > >>> David Feuer > >>> Well-Typed, LLP > >>> _______________________________________________ > >>> ghc-devs mailing list > >>> ghc-devs at haskell.org > >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon Sep 25 19:30:22 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 25 Sep 2017 15:30:22 -0400 Subject: CircleCI (Was: Disable Travis?) 
In-Reply-To: References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> <1506002151.973.17.camel@joachim-breitner.de> Message-ID: <1506367822.1913.2.camel@joachim-breitner.de> Hi, Am Montag, den 25.09.2017, 13:36 +0200 schrieb Boespflug, Mathieu: > Can you configure circleci to mail both the committeer and a specific > person (e.g. you, or me) on every failed committ? > > That's a good question. The way things work in CircleCI is that this > is a user setting. You can go to > > https://circleci.com/account/notifications > > and "subscribe" to notifications on a per-organization or per-project basis. ok, so it seems that I can ensure I get mail. Do you know if there is a way to notify the committer, even if the committer is not a CircleCI user? > Regarding the resource_class setting, that's a feature CircleCI > graciously enabled for tweag/ghc. I could do the same request for the > ghc/ Github org. But better if it's an admin of the org. Even without this, it often runs in 40mins, and sometimes takes an hour. So for now, I’ll just let it run like this. Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From m at tweag.io Mon Sep 25 20:51:28 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 25 Sep 2017 22:51:28 +0200 Subject: CircleCI (Was: Disable Travis?) In-Reply-To: <1506367822.1913.2.camel@joachim-breitner.de> References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> <1506002151.973.17.camel@joachim-breitner.de> <1506367822.1913.2.camel@joachim-breitner.de> Message-ID: > Do you know if there is a way to notify the committer, even if the committer is not a CircleCI user? I don't know. The default might be to notify the committer if the build failed, even when not a CircleCI user. But I haven't tried. -- Mathieu Boespflug Founder at http://tweag.io. On 25 September 2017 at 21:30, Joachim Breitner wrote: > Hi, > > Am Montag, den 25.09.2017, 13:36 +0200 schrieb Boespflug, Mathieu: >> Can you configure circleci to mail both the committeer and a specific >> person (e.g. you, or me) on every failed committ? >> >> That's a good question. The way things work in CircleCI is that this >> is a user setting. You can go to >> >> https://circleci.com/account/notifications >> >> and "subscribe" to notifications on a per-organization or per-project basis. > > ok, so it seems that I can ensure I get mail. Do you know if there is a > way to notify the committer, even if the committer is not a CircleCI > user? > > >> Regarding the resource_class setting, that's a feature CircleCI >> graciously enabled for tweag/ghc. I could do the same request for the >> ghc/ Github org. But better if it's an admin of the org. > > > Even without this, it often runs in 40mins, and sometimes takes an > hour. So for now, I’ll just let it run like this. 
> > Joachim > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de > https://www.joachim-breitner.de/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From takenobu.hs at gmail.com Tue Sep 26 12:40:03 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Tue, 26 Sep 2017 21:40:03 +0900 Subject: Underscore in binary literals Message-ID: Dear devs, GHC's BinaryLiterals extension is useful. (For example, x = 0b110111000101) Is it difficult to include underscore(_) in the format like Verilog-HDL[1] ? (For example, x = 0b1101_1100_0101) [1]: https://inst.eecs.berkeley.edu/~cs150/fa06/Labs/verilog-ieee.pdf#page=20 Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From rahulmutt at gmail.com Tue Sep 26 12:58:45 2017 From: rahulmutt at gmail.com (Rahul Muttineni) Date: Tue, 26 Sep 2017 08:58:45 -0400 Subject: Underscore in binary literals In-Reply-To: References: Message-ID: Implementation-wise, it's no so difficult to include - the lexer needs to be tweaked. But it seems like a specialised use-case that will only affect a minority of users is probably not worthwhile as an extension to the language/compiler. Maybe you can try using OverloadedStrings and implement this as a library? ``` newtype Binary = Binary Integer deriving Num instance IsString Binary where fromString binaryLiteral = error "Code here to parse binary literal with underscores" binaryVal :: Binary binaryVal = "1101_1110_0101" ``` Hope that helps, Rahul On Tue, Sep 26, 2017 at 8:40 AM, Takenobu Tani wrote: > Dear devs, > > GHC's BinaryLiterals extension is useful. > (For example, x = 0b110111000101) > > Is it difficult to include underscore(_) in the format like Verilog-HDL[1] > ? > (For example, x = 0b1101_1100_0101) > > [1]: https://inst.eecs.berkeley.edu/~cs150/fa06/Labs/verilog- > ieee.pdf#page=20 > > Regards, > Takenobu > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -- Rahul Muttineni -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Tue Sep 26 13:19:11 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Tue, 26 Sep 2017 22:19:11 +0900 Subject: Underscore in binary literals In-Reply-To: References: Message-ID: Hi Rahul, Thanks for the explanation. Hmm, Is not there much need... Thank you code example. I will also look at lexer for my study [1]. [1]: https://phabricator.haskell.org/D22 Thank you :) , Takenobu 2017-09-26 21:58 GMT+09:00 Rahul Muttineni : > Implementation-wise, it's no so difficult to include - the lexer needs to > be tweaked. But it seems like a specialised use-case that will only affect > a minority of users is probably not worthwhile as an extension to the > language/compiler. > > Maybe you can try using OverloadedStrings and implement this as a library? > > ``` > newtype Binary = Binary Integer > deriving Num > > instance IsString Binary where > fromString binaryLiteral = error "Code here to parse binary literal with > underscores" > > binaryVal :: Binary > binaryVal = "1101_1110_0101" > ``` > > Hope that helps, > Rahul > > On Tue, Sep 26, 2017 at 8:40 AM, Takenobu Tani > wrote: > >> Dear devs, >> >> GHC's BinaryLiterals extension is useful. 
>> (For example, x = 0b110111000101) >> >> Is it difficult to include underscore(_) in the format like >> Verilog-HDL[1] ? >> (For example, x = 0b1101_1100_0101) >> >> [1]: https://inst.eecs.berkeley.edu/~cs150/fa06/Labs/verilog-ieee >> .pdf#page=20 >> >> Regards, >> Takenobu >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > > > -- > Rahul Muttineni > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Tue Sep 26 13:43:30 2017 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Tue, 26 Sep 2017 21:43:30 +0800 Subject: Underscore in binary literals In-Reply-To: References: Message-ID: Hi, I for one, would like to have this in more than just BinaryLiterals (which I find rather useful as well!) I’d like to see `_` in any numeric literal being simply ignored, as I find it makes writing and reading numbers in source code much easier. let i = 1_000_000 :: Int f = 1_123.456 :: Float b = 0b1100_0011 And grouping (via underscore) might be very much domain specific. (One might want to denote magnitudes or patterns.) I ended up writing a quite a bit of stupid boilerplate[1] to support readable binary notation[2]. Cheers, Moritz — [1]: https://github.com/angerman/data-bitcode/blob/c9818debd3dae774967c0507882b6b3bec7f0ee4/test/BitcodeSpec.hs#L22-L83 [2]: https://github.com/angerman/data-bitcode/blob/c9818debd3dae774967c0507882b6b3bec7f0ee4/test/BitcodeSpec.hs#L146-L150 > On Sep 26, 2017, at 9:19 PM, Takenobu Tani wrote: > > Hi Rahul, > > Thanks for the explanation. > Hmm, Is not there much need... > > Thank you code example. > I will also look at lexer for my study [1]. > > [1]: https://phabricator.haskell.org/D22 > > Thank you :) , > Takenobu > > > 2017-09-26 21:58 GMT+09:00 Rahul Muttineni : > Implementation-wise, it's no so difficult to include - the lexer needs to be tweaked. But it seems like a specialised use-case that will only affect a minority of users is probably not worthwhile as an extension to the language/compiler. > > Maybe you can try using OverloadedStrings and implement this as a library? > > ``` > newtype Binary = Binary Integer > deriving Num > > instance IsString Binary where > fromString binaryLiteral = error "Code here to parse binary literal with underscores" > > binaryVal :: Binary > binaryVal = "1101_1110_0101" > ``` > > Hope that helps, > Rahul > > On Tue, Sep 26, 2017 at 8:40 AM, Takenobu Tani wrote: > Dear devs, > > GHC's BinaryLiterals extension is useful. > (For example, x = 0b110111000101) > > Is it difficult to include underscore(_) in the format like Verilog-HDL[1] ? > (For example, x = 0b1101_1100_0101) > > [1]: https://inst.eecs.berkeley.edu/~cs150/fa06/Labs/verilog-ieee.pdf#page=20 > > Regards, > Takenobu > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > -- > Rahul Muttineni > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs ————————————————— Moritz Angermann +49 170 54 33 0 74 moritz at lichtzwerge.de lichtzwerge GmbH Raiffeisenstr. 8 93185 Michelsneukirchen Amtsgericht Regensburg HRB 14723 Geschäftsführung: Moritz Angermann, Ralf Sangl USt-Id: DE291948767 Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte Informationen. 
Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail ist nicht gestattet. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorized copying, disclosure or distribution of the material in this e-mail is strictly forbidden. From mail at nh2.me Tue Sep 26 13:43:48 2017 From: mail at nh2.me (=?UTF-8?Q?Niklas_Hamb=c3=bcchen?=) Date: Tue, 26 Sep 2017 15:43:48 +0200 Subject: Underscore in binary literals In-Reply-To: References: Message-ID: <4802ce60-2896-2971-714b-f20d6e621251@nh2.me> I'd find that quite useful for hex and binary. It's useful for distinguishing e.g. 0xffffffff and 0xfffffff which when confused accidentally and lead to big bugs. Rust has exactly this feature for all numeric literals: https://rustbyexample.com/primitives/literals.html On 26/09/17 14:40, Takenobu Tani wrote: > GHC's BinaryLiterals extension is useful. > (For example, x = 0b110111000101) > > Is it difficult to include underscore(_) in the format like Verilog-HDL[1] ? > (For example, x = 0b1101_1100_0101) From ben at smart-cactus.org Tue Sep 26 13:43:41 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 26 Sep 2017 09:43:41 -0400 Subject: Underscore in binary literals In-Reply-To: References: Message-ID: <87poadvalu.fsf@ben-laptop.smart-cactus.org> Takenobu Tani writes: > Hi Rahul, > > Thanks for the explanation. > Hmm, Is not there much need... > > Thank you code example. > I will also look at lexer for my study [1]. > If you do want to try pursuing a language change do feel free to submit a proposal [1]. I would also like the ability to break up large literals. However, why limit it to BinaryLiterals? I would like the same syntax for deciaml and hexadecimal literals as well. Cheers, - Ben [1] https://github.com/ghc-proposals/ghc-proposals -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From merijn at inconsistent.nl Tue Sep 26 13:56:43 2017 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Tue, 26 Sep 2017 15:56:43 +0200 Subject: Underscore in binary literals In-Reply-To: <87poadvalu.fsf@ben-laptop.smart-cactus.org> References: <87poadvalu.fsf@ben-laptop.smart-cactus.org> Message-ID: <5242587B-879E-4A72-9C0E-265C403D0956@inconsistent.nl> I, too, have wished for the ability to have a separator in large number literals. So a strong +1 from me. Cheers, Merijn > On 26 Sep 2017, at 15:43, Ben Gamari wrote: > > Takenobu Tani writes: > >> Hi Rahul, >> >> Thanks for the explanation. >> Hmm, Is not there much need... >> >> Thank you code example. >> I will also look at lexer for my study [1]. >> > If you do want to try pursuing a language change do feel free to submit > a proposal [1]. I would also like the ability to break up large > literals. However, why limit it to BinaryLiterals? I would like the same > syntax for deciaml and hexadecimal literals as well. 
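One editorial aside on Rahul's OverloadedStrings suggestion earlier in the thread: his sketch leaves the actual parsing as a stub. A self-contained way to fill it in (the parsing code and module name below are made up for illustration; only the Binary newtype and the literal come from Rahul's example) could be:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE OverloadedStrings          #-}
module BinarySketch where

import Data.List   (foldl')
import Data.String (IsString (..))

newtype Binary = Binary Integer
  deriving (Show, Eq, Num)

instance IsString Binary where
  -- Underscores are purely visual, so they are discarded before parsing.
  fromString = Binary . foldl' step 0 . filter (/= '_')
    where
      step acc '0' = 2 * acc
      step acc '1' = 2 * acc + 1
      step _   c   = error ("fromString: not a binary digit: " ++ [c])

binaryVal :: Binary
binaryVal = "1101_1110_0101"   -- i.e. Binary 3557
```

This needs no compiler change, but it only covers string literals at the term level and at one result type; the proposal route discussed next would make 0b1101_1100_0101 (and, per Moritz, 1_000_000) ordinary literals.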
> > Cheers, > > - Ben > > > [1] https://github.com/ghc-proposals/ghc-proposals > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 874 bytes Desc: Message signed with OpenPGP URL: From takenobu.hs at gmail.com Tue Sep 26 14:05:48 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Tue, 26 Sep 2017 23:05:48 +0900 Subject: Underscore in binary literals In-Reply-To: <5242587B-879E-4A72-9C0E-265C403D0956@inconsistent.nl> References: <87poadvalu.fsf@ben-laptop.smart-cactus.org> <5242587B-879E-4A72-9C0E-265C403D0956@inconsistent.nl> Message-ID: Dear all, Thank you very much for the response and kind explanation. After studying, I will submit to ghc-proposals :) Regards, Takenobu 2017-09-26 22:56 GMT+09:00 Merijn Verstraaten : > I, too, have wished for the ability to have a separator in large number > literals. > > So a strong +1 from me. > > Cheers, > Merijn > > > On 26 Sep 2017, at 15:43, Ben Gamari wrote: > > > > Takenobu Tani writes: > > > >> Hi Rahul, > >> > >> Thanks for the explanation. > >> Hmm, Is not there much need... > >> > >> Thank you code example. > >> I will also look at lexer for my study [1]. > >> > > If you do want to try pursuing a language change do feel free to submit > > a proposal [1]. I would also like the ability to break up large > > literals. However, why limit it to BinaryLiterals? I would like the same > > syntax for deciaml and hexadecimal literals as well. > > > > Cheers, > > > > - Ben > > > > > > [1] https://github.com/ghc-proposals/ghc-proposals > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Tue Sep 26 16:35:08 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 26 Sep 2017 12:35:08 -0400 Subject: RTS changes affect runtime when they =?utf-8?Q?shouldn?= =?utf-8?Q?=E2=80=99t?= In-Reply-To: References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> Message-ID: <87a81hv2o3.fsf@ben-laptop.smart-cactus.org> Bardur Arantsson writes: > I may be missing something since I have only quickly skimmed the thread, > but...: Why not track all of these things and correlate them with > individual runs? The Linux 'perf' tool can retrieve a *lot* of > interesting numbers, esp. around cache hit rates, branch predicition hit > rates, etc. > While it's not a bad idea, I think it's easy to drown in information. Of course, it's also fairly easy to hide information that we don't care about, so perhaps this is worth doing regardless. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From michal.terepeta at gmail.com Tue Sep 26 19:19:44 2017 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Tue, 26 Sep 2017 19:19:44 +0000 Subject: New primitive types? 
In-Reply-To: References: Message-ID: On Sun, Aug 27, 2017 at 7:49 PM Michal Terepeta wrote: > > On Thu, Aug 3, 2017 at 2:28 AM Sylvain Henry wrote: > > Hi, > > > > I also think we should do this but it has a lot of ramifications: > contant folding in Core, codegen, TH, etc. > > > > Also it will break codes that use primitive types directly, so maybe > it's worth a ghc proposal. > > Ok, a short proposal sounds reasonable. > Just FYI: I've opened: https://github.com/ghc-proposals/ghc-proposals/pull/74 Cheers, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Tue Sep 26 20:58:15 2017 From: ben at well-typed.com (Ben Gamari) Date: Tue, 26 Sep 2017 16:58:15 -0400 Subject: Review of CONTRIBUTORS.md Message-ID: <874lrpuqhk.fsf@ben-laptop.smart-cactus.org> Hello everyone, Today I sat down and wrote a CONTRIBUTORS.md document for the ghc repository. This is recommended by GitHub to ensure that contributors can easily discover the first steps to contributing to the project. Moreover, it provided a nice blank slate to summarize the existing contributor documentation. It would be greatly appreciated if you could give it [1] a read-through and leave your feedback. Cheers, - Ben [1] https://phabricator.haskell.org/D4037 https://github.com/bgamari/ghc/blob/contributing/CONTRIBUTING.md -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From me at ara.io Tue Sep 26 21:41:37 2017 From: me at ara.io (Ara Adkins) Date: Tue, 26 Sep 2017 22:41:37 +0100 Subject: Review of CONTRIBUTORS.md In-Reply-To: <874lrpuqhk.fsf@ben-laptop.smart-cactus.org> References: <874lrpuqhk.fsf@ben-laptop.smart-cactus.org> Message-ID: Hey Ben, Looks good to me for the most part! I think that you should include a link to the Newcomers page [1] as a whole, even though it has some overlap with the quick start guide and you’ve included a link to the ‘Finding a Ticket’ subsection therein. I also think that a few more pointers on what is hoped for in a review (under ‘Reviewing Patches’) could be helpful for those looking to get involved! Code review styles and expectations can vary greatly, so some more info here would be good. Best, _ara [1] https://ghc.haskell.org/trac/ghc/wiki/Newcomers > On 26 Sep 2017, at 21:58, Ben Gamari wrote: > > Hello everyone, > > Today I sat down and wrote a CONTRIBUTORS.md document for the ghc > repository. This is recommended by GitHub to ensure that contributors > can easily discover the first steps to contributing to the project. > Moreover, it provided a nice blank slate to summarize the existing > contributor documentation. > > It would be greatly appreciated if you could give it [1] a read-through > and leave your feedback. > > Cheers, > > - Ben > > > [1] https://phabricator.haskell.org/D4037 > https://github.com/bgamari/ghc/blob/contributing/CONTRIBUTING.md > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From manpacket at gmail.com Tue Sep 26 23:58:52 2017 From: manpacket at gmail.com (Michael Baikov) Date: Wed, 27 Sep 2017 07:58:52 +0800 Subject: ghc-devs Digest, Vol 169, Issue 57 In-Reply-To: References: Message-ID: > I, too, have wished for the ability to have a separator in large number literals. > > So a strong +1 from me. 
more +1 here From ben at smart-cactus.org Wed Sep 27 01:38:11 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 26 Sep 2017 21:38:11 -0400 Subject: Including libffi as a submodule Message-ID: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> Hello everyone, As you may know, GHC carries a dependency on the libffi library. Libffi is used on most platforms (notably not x86 and x86-64) for foreign function invocation (see rts/Adjustor.c for details). While libffi works well, it is unfortunately not particularly actively maintained. In fact, it has been nearly three years since the last official release. A lot can happen in three years and there is now at least two bugs [1,2] present in the current release which makes it impossible to build GHC in some configurations. These bugs have been fixed upstream but these fixes are unreleased. Numerous attempts have been made to get the libffi maintainers to cut a new release but sadly no progress has been made in over six months of trying. In light of this I propose that we begin treating libffi as a submodule until a release is made. In particular, adding libffi as a submodule will ease development on ARM and AArch64 (especially Apple) targets, which have seen quite a bit of developer attention recently. Note that under this scheme it will still be possible to link against the system's libffi installation using ./configure's --enable-system-libffi flag. Are there any strong objections to plan? If so please speak up. If there is no objection by Saturday we will move ahead. Cheers, - Ben [1] https://github.com/libffi/libffi/issues/191 [2] https://github.com/libffi/libffi/pull/263 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From johnw at newartisans.com Wed Sep 27 01:44:53 2017 From: johnw at newartisans.com (John Wiegley) Date: Tue, 26 Sep 2017 18:44:53 -0700 Subject: Including libffi as a submodule In-Reply-To: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> (Ben Gamari's message of "Tue, 26 Sep 2017 21:38:11 -0400") References: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> Message-ID: >>>>> "BG" == Ben Gamari writes: BG> Note that under this scheme it will still be possible to link against the BG> system's libffi installation using ./configure's --enable-system-libffi BG> flag. Will distributions of GHC be using a system libffi which still has the bugs you mentioned? -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From ben at smart-cactus.org Wed Sep 27 02:02:03 2017 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 26 Sep 2017 22:02:03 -0400 Subject: Including libffi as a submodule In-Reply-To: References: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> Message-ID: <87ing4ucf8.fsf@ben-laptop.smart-cactus.org> John Wiegley writes: >>>>>> "BG" == Ben Gamari writes: > > BG> Note that under this scheme it will still be possible to link against the > BG> system's libffi installation using ./configure's --enable-system-libffi > BG> flag. > > Will distributions of GHC be using a system libffi which still has the bugs > you mentioned? > My inclination would be to use the system libffi unless this option is precluded due to bugs. By this standard I believe the only GHC HQ binary distributions which would include an unreleased libffi would those for ARM and AArch64. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
From svenpanne at gmail.com Wed Sep 27 11:26:34 2017 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 27 Sep 2017 13:26:34 +0200 Subject: Re: RTS changes affect runtime when they shouldn’t In-Reply-To: <87a81hv2o3.fsf@ben-laptop.smart-cactus.org> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> <87a81hv2o3.fsf@ben-laptop.smart-cactus.org> Message-ID: 2017-09-26 18:35 GMT+02:00 Ben Gamari : > While it's not a bad idea, I think it's easy to drown in information. Of > course, it's also fairly easy to hide information that we don't care > about, so perhaps this is worth doing regardless. > The point is: You don't know in advance which of the many performance characteristics "perf" spits out is relevant. If e.g. you see a regression in runtime although you really didn't expect one (tiny RTS change etc.), a quick look at the diffs of all perf values can often give a hint (e.g. branch prediction was screwed up by different code layout etc.). So I think it's best to collect all data, but make the user-relevant data (runtime, code size) more prominent than the technical/internal data (cache hit ratio, branch prediction hit ratio, etc.), which is for analysis only. Although the latter is a cause for the former, from a compiler user's perspective it's irrelevant. So there is no actual risk in drowning in data, because you primarily care only for a small subset of it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Wed Sep 27 21:12:13 2017 From: ggreif at gmail.com (Gabor Greif) Date: Wed, 27 Sep 2017 23:12:13 +0200 Subject: [GHC] #14294: IndexError: pop from empty list In-Reply-To: <063.70a79f9f06c4e621cd9cd418fb57e730@haskell.org> References: <048.d023a95248a740b652c7c9c06b669e05@haskell.org> <063.70a79f9f06c4e621cd9cd418fb57e730@haskell.org> Message-ID: Right. On 9/27/17, GHC wrote: > #14294: IndexError: pop from empty list > -------------------------------------+------------------------------------- > Reporter: heisenbug | Owner: (none) > Type: bug | Status: new > Priority: normal | Milestone: > Component: Compiler | Version: 8.2.1 > Resolution: | Keywords: > Operating System: Unknown/Multiple | Architecture: > | Unknown/Multiple > Type of failure: None/Unknown | Test Case: > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): > Wiki Page: | > -------------------------------------+------------------------------------- > > Comment (by bgamari): > > Was the URL you were requesting > https://ghc.haskell.org/trac/ghc/attachment/ticket/14293/ ? > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > From ben at well-typed.com Thu Sep 28 00:33:30 2017 From: ben at well-typed.com (Ben Gamari) Date: Wed, 27 Sep 2017 20:33:30 -0400 Subject: Review of CONTRIBUTORS.md In-Reply-To: References: <874lrpuqhk.fsf@ben-laptop.smart-cactus.org> Message-ID: <874lrnu0f9.fsf@ben-laptop.smart-cactus.org> Ara Adkins writes: > Hey Ben, > > Looks good to me for the most part! I think that you should include a > link to the Newcomers page [1] as a whole, even though it has some > overlap with the quick start guide and you’ve included a link to the > ‘Finding a Ticket’ subsection therein.
> > I also think that a few more pointers on what is hoped for in a review > (under ‘Reviewing Patches’) could be helpful for those looking to get > involved! Code review styles and expectations can vary greatly, so > some more info here would be good. > Thanks Ara! I've tried to address both of these in the patch. Do let me know if you have further suggestions. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From david at well-typed.com Thu Sep 28 04:27:27 2017 From: david at well-typed.com (David Feuer) Date: Thu, 28 Sep 2017 00:27:27 -0400 Subject: Including libffi as a submodule In-Reply-To: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> References: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> Message-ID: <4950212.yqCxJzOxSi@squirrel> On Tuesday, September 26, 2017 9:38:11 PM EDT Ben Gamari wrote: > Numerous attempts have > been made to get the libffi maintainers to cut a new release but sadly > no progress has been made in over six months of trying. Has anyone followed the process described in the somewhat poorly named https://wiki.haskell.org/Taking_over_a_package to try to add one or more maintainers upstream? If the current maintainer(s) object to that, would it make sense to produce a proper fork on Hackage? David From m at tweag.io Thu Sep 28 07:56:38 2017 From: m at tweag.io (Boespflug, Mathieu) Date: Thu, 28 Sep 2017 09:56:38 +0200 Subject: Including libffi as a submodule In-Reply-To: <4950212.yqCxJzOxSi@squirrel> References: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> <4950212.yqCxJzOxSi@squirrel> Message-ID: Aren't we talking about the C project here? https://sourceware.org/libffi/ -- Mathieu Boespflug Founder at http://tweag.io. On 28 September 2017 at 06:27, David Feuer wrote: > On Tuesday, September 26, 2017 9:38:11 PM EDT Ben Gamari wrote: > >> Numerous attempts have >> been made to get the libffi maintainers to cut a new release but sadly >> no progress has been made in over six months of trying. > > Has anyone followed the process described in the somewhat poorly named > https://wiki.haskell.org/Taking_over_a_package to try to add one or more > maintainers upstream? If the current maintainer(s) object to that, would it make > sense to produce a proper fork on Hackage? > > David > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From moritz.angermann at gmail.com Thu Sep 28 08:57:18 2017 From: moritz.angermann at gmail.com (Moritz Angermann) Date: Thu, 28 Sep 2017 16:57:18 +0800 Subject: Including libffi as a submodule In-Reply-To: References: <87mv5gudj0.fsf@ben-laptop.smart-cactus.org> <4950212.yqCxJzOxSi@squirrel> Message-ID: <37946892-3D5E-449B-929F-7A07F47227F5@gmail.com> Yes. that’s the one. > On Sep 28, 2017, at 3:56 PM, Boespflug, Mathieu wrote: > > Aren't we talking about the C project here? https://sourceware.org/libffi/ > -- > Mathieu Boespflug > Founder at http://tweag.io. > > > On 28 September 2017 at 06:27, David Feuer wrote: >> On Tuesday, September 26, 2017 9:38:11 PM EDT Ben Gamari wrote: >> >>> Numerous attempts have >>> been made to get the libffi maintainers to cut a new release but sadly >>> no progress has been made in over six months of trying. 
>> >> Has anyone followed the process described in the somewhat poorly named >> https://wiki.haskell.org/Taking_over_a_package to try to add one or more >> maintainers upstream? If the current maintainer(s) object to that, would it make >> sense to produce a proper fork on Hackage? >> >> David >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From david at well-typed.com Thu Sep 28 09:06:09 2017 From: david at well-typed.com (David Feuer) Date: Thu, 28 Sep 2017 05:06:09 -0400 Subject: Including libffi as a submodule Message-ID: <20170928083810.811BCBC89B@haskell.org> Sorry; wasn't thinking straight! David FeuerWell-Typed, LLP -------- Original message --------From: "Boespflug, Mathieu" Date: 9/28/17 3:56 AM (GMT-05:00) To: David Feuer Cc: ghc-devs Subject: Re: Including libffi as a submodule Aren't we talking about the C project here? https://sourceware.org/libffi/ -- Mathieu Boespflug Founder at http://tweag.io. On 28 September 2017 at 06:27, David Feuer wrote: > On Tuesday, September 26, 2017 9:38:11 PM EDT Ben Gamari wrote: > >> Numerous attempts have >> been made to get the libffi maintainers to cut a new release but sadly >> no progress has been made in over six months of trying. > > Has anyone followed the process described in the somewhat poorly named > https://wiki.haskell.org/Taking_over_a_package to try to add one or more > maintainers upstream? If the current maintainer(s) object to that, would it make > sense to produce a proper fork on Hackage? > > David > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at ara.io Thu Sep 28 09:36:34 2017 From: me at ara.io (Ara Adkins) Date: Thu, 28 Sep 2017 10:36:34 +0100 Subject: Review of CONTRIBUTORS.md In-Reply-To: <874lrnu0f9.fsf@ben-laptop.smart-cactus.org> References: <874lrpuqhk.fsf@ben-laptop.smart-cactus.org> <874lrnu0f9.fsf@ben-laptop.smart-cactus.org> Message-ID: <98CC9392-7493-46E5-AF84-CAD07D5DD6CF@ara.io> Hi Ben, No further suggestions, but the link to the newcomers page is broken, and there is a typo towards the end of the ‘Writing Patches’ section: ‘relative unconversial’ -> ‘relatively uncontroversial’. Sorry for the lack of line numbers but I can’t work out how to view the raw file on my phone! _ara > On 28 Sep 2017, at 01:33, Ben Gamari wrote: > > Ara Adkins writes: > >> Hey Ben, >> >> Looks good to me for the most part! I think that you should include a >> link to the Newcomers page [1] as a whole, even though it has some >> overlap with the quick start guide and you’ve included a link to the >> ‘Finding a Ticket’ subsection therein. >> >> I also think that a few more pointers on what is hoped for in a review >> (under ‘Reviewing Patches’) could be helpful for those looking to get >> involved! Code review styles and expectations can vary greatly, so >> some more info here would be good. >> > Thanks Ara! I've tried to address both of these in the patch. Do let me > know if you have further suggestions. 
> > Cheers, > > - Ben > From niteria at gmail.com Thu Sep 28 10:01:59 2017 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 28 Sep 2017 11:01:59 +0100 Subject: cvs-ghc archives Message-ID: Hello, I was reading a comment that pointed to http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html. It's a dead link and I couldn't find any place where cvs-ghc is archived. I've tried webarchive. Looks like Simon had a simliar problem 4 years ago: http://ghc-devs.haskell.narkive.com/3dIDGsHy/cvs-ghc-archives Is the data forever lost? There aren't actually that many references in the source tree: ./libraries/base/configure.ac:# See http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html ./libraries/base/configure:# See http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html ./libraries/base/autom4te.cache/output.0:# See http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html ./rts/linker/Elf.c: * See thread http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html ./rts/Linker.c: // see http://www.haskell.org/pipermail/cvs-ghc/2007-September/038570.html ./testsuite/tests/typecheck/should_fail/tcfail201.hs:-- http://www.haskell.org/pipermail/cvs-ghc/2008-June/043173.html I'm happy to remove them if that's what we want to do. Coincidentally cvs-ghc at haskell.org appears 58 times in the GHC source tree. Do we want to keep using it? Thanks, Bartosz From niteria at gmail.com Thu Sep 28 10:07:43 2017 From: niteria at gmail.com (Bartosz Nitka) Date: Thu, 28 Sep 2017 11:07:43 +0100 Subject: cvs-ghc archives In-Reply-To: References: Message-ID: I think someone has archived the data for us here https://www.mail-archive.com/cvs-all at haskell.org/. I don't have a good way to map the old links to new ones, though. 2017-09-28 11:01 GMT+01:00 Bartosz Nitka : > Hello, > > I was reading a comment that pointed to > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html. > It's a dead link and I couldn't find any place where cvs-ghc is archived. > I've tried webarchive. > > Looks like Simon had a simliar problem 4 years ago: > http://ghc-devs.haskell.narkive.com/3dIDGsHy/cvs-ghc-archives > > Is the data forever lost? > > There aren't actually that many references in the source tree: > ./libraries/base/configure.ac:# See > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > ./libraries/base/configure:# See > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > ./libraries/base/autom4te.cache/output.0:# See > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > ./rts/linker/Elf.c: * See thread > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html > ./rts/Linker.c: // see > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038570.html > ./testsuite/tests/typecheck/should_fail/tcfail201.hs:-- > http://www.haskell.org/pipermail/cvs-ghc/2008-June/043173.html > > I'm happy to remove them if that's what we want to do. > > Coincidentally cvs-ghc at haskell.org appears 58 times in the GHC source > tree. Do we want to keep using it? > > Thanks, > Bartosz From gershomb at gmail.com Thu Sep 28 14:24:52 2017 From: gershomb at gmail.com (Gershom B) Date: Thu, 28 Sep 2017 10:24:52 -0400 Subject: cvs-ghc archives In-Reply-To: References: Message-ID: You can also get cvs-ghc specific mail at: https://www.mail-archive.com/cvs-ghc=40haskell.org/ Note: We still have the old list archives. 
It looks like the list itself was deleted on mailman when the old cvs- lists were decommissioned so as to prevent confusion. I think they’ve been inaccessible for years so it doesn’t seem like it's a pressing concern? That said, we can look into hosting the archives with a redirect on archives.haskell.org. Cheers, Gershom On September 28, 2017 at 8:32:26 AM, ghc-devs-request@haskell.org (ghc-devs-request@haskell.org) wrote: > Send ghc-devs mailing list submissions to > ghc-devs@haskell.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > or, via email, send a message with subject or body 'help' to > ghc-devs-request@haskell.org > > You can reach the person managing the list at > ghc-devs-owner@haskell.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of ghc-devs digest..." > > > Today's Topics: > > 1. Re: cvs-ghc archives (Bartosz Nitka) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Thu, 28 Sep 2017 11:07:43 +0100 > From: Bartosz Nitka > To: ghc-devs Devs > Subject: Re: cvs-ghc archives > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > I think someone has archived the data for us here > https://www.mail-archive.com/cvs-all@haskell.org/. > I don't have a good way to map the old links to new ones, though. > > 2017-09-28 11:01 GMT+01:00 Bartosz Nitka : > > Hello, > > > > I was reading a comment that pointed to > > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html. > > It's a dead link and I couldn't find any place where cvs-ghc is archived. > > I've tried webarchive. > > > > Looks like Simon had a similar problem 4 years ago: > > http://ghc-devs.haskell.narkive.com/3dIDGsHy/cvs-ghc-archives > > > > Is the data forever lost? > > > > There aren't actually that many references in the source tree: > > ./libraries/base/configure.ac:# See > > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > > ./libraries/base/configure:# See > > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > > ./libraries/base/autom4te.cache/output.0:# See > > http://www.haskell.org/pipermail/cvs-ghc/2011-September/065980.html > > ./rts/linker/Elf.c: * See thread > > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038458.html > > ./rts/Linker.c: // see > > http://www.haskell.org/pipermail/cvs-ghc/2007-September/038570.html > > ./testsuite/tests/typecheck/should_fail/tcfail201.hs:-- > > http://www.haskell.org/pipermail/cvs-ghc/2008-June/043173.html > > > > I'm happy to remove them if that's what we want to do. > > > > Coincidentally cvs-ghc@haskell.org appears 58 times in the GHC source > > tree. Do we want to keep using it? > > > > Thanks, > > Bartosz > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > ghc-devs mailing list > ghc-devs@haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > ------------------------------ > > End of ghc-devs Digest, Vol 169, Issue 60 > ***************************************** > -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 101 bytes Desc: Message signed with OpenPGP using AMPGpg URL: From rae at cs.brynmawr.edu Thu Sep 28 21:46:48 2017 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 28 Sep 2017 17:46:48 -0400 Subject: CircleCI (Was: Disable Travis?) In-Reply-To: References: <1503245493.4020.2.camel@joachim-breitner.de> <7920340D-2B0B-426E-8960-8A391AA46689@cs.brynmawr.edu> <1503302065.29178.1.camel@joachim-breitner.de> <2E2A20F3-7185-4ABA-8AD8-EDA2A4DE036B@cs.brynmawr.edu> <1504428949.11527.2.camel@joachim-breitner.de> <95F2B586-B24A-4E51-AF27-473AF66E3B3E@cs.brynmawr.edu> <1506002151.973.17.camel@joachim-breitner.de> <1506367822.1913.2.camel@joachim-breitner.de> Message-ID: I just tried to get on board this new train. But I can't seem to figure out how to "follow" ghc. I signed up through my GitHub account, but CircleCI doesn't seem to want me to follow an open-source project with which I have no formal association in GitHub. Joachim, it seemed you unlocked this capability. How? It'd be lovely to get notifications of builds on ghc's wip/rae branch, in particular. Thanks! Richard > On Sep 25, 2017, at 4:51 PM, Boespflug, Mathieu wrote: > >> Do you know if there is a way to notify the committer, even if the committer is not a CircleCI user? > > I don't know. The default might be to notify the committer if the > build failed, even when not a CircleCI user. But I haven't tried. > -- > Mathieu Boespflug > Founder at http://tweag.io. > > > On 25 September 2017 at 21:30, Joachim Breitner > wrote: >> Hi, >> >> On Monday, 25.09.2017, at 13:36 +0200, Boespflug, Mathieu wrote: >>> Can you configure circleci to mail both the committer and a specific >>> person (e.g. you, or me) on every failed commit? >>> >>> That's a good question. The way things work in CircleCI is that this >>> is a user setting. You can go to >>> >>> https://circleci.com/account/notifications >>> >>> and "subscribe" to notifications on a per-organization or per-project basis. >> >> ok, so it seems that I can ensure I get mail. Do you know if there is a >> way to notify the committer, even if the committer is not a CircleCI >> user? >> >> >>> Regarding the resource_class setting, that's a feature CircleCI >>> graciously enabled for tweag/ghc. I could do the same request for the >>> ghc/ Github org. But better if it's an admin of the org. >> >> >> Even without this, it often runs in 40mins, and sometimes takes an >> hour. So for now, I’ll just let it run like this. >> >> Joachim >> -- >> Joachim “nomeata” Breitner >> mail at joachim-breitner.de >> https://www.joachim-breitner.de/ >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Fri Sep 29 16:03:23 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 29 Sep 2017 12:03:23 -0400 Subject: Updating Phabricator Message-ID: <87lgkxsd9w.fsf@ben-laptop.smart-cactus.org> Hello everyone, It seems that Harbormaster is acting up so I'm going to take this opportunity to upgrade Phabricator while fixing it. I expect this will take about 30 minutes. I'll wait an hour before beginning to allow people to finish their work, however. Let me know if this would cause a great inconvenience.
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From ben at well-typed.com Fri Sep 29 18:51:02 2017 From: ben at well-typed.com (Ben Gamari) Date: Fri, 29 Sep 2017 14:51:02 -0400 Subject: Updating Phabricator In-Reply-To: <87lgkxsd9w.fsf@ben-laptop.smart-cactus.org> References: <87lgkxsd9w.fsf@ben-laptop.smart-cactus.org> Message-ID: <87ing1s5ih.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Hello everyone, > > It seems that Harbormaster is acting up so I'm going to take this > opportunity to upgrade Phabricator while fixing it. I expect this will > take about 30 minutes. I'll wait an hour before beginning to allow people > to finish their work, however. I finished the Phabricator upgrade quite a while ago. However, the amd64 Harbormaster situation still isn't healthy. I'll try to notify before making any changes that might affect existing sessions. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From takenobu.hs at gmail.com Sat Sep 30 06:46:37 2017 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 30 Sep 2017 15:46:37 +0900 Subject: Underscore in binary literals In-Reply-To: References: <87poadvalu.fsf@ben-laptop.smart-cactus.org> <5242587B-879E-4A72-9C0E-265C403D0956@inconsistent.nl> Message-ID: Dear devs, > Thank you very much for the response and kind explanation. > After studying, I will submit to ghc-proposals :) I submitted ghc-proposal #76 [1]. Please give feedback :) [1]: https://github.com/ghc-proposals/ghc-proposals/pull/76 Regards, Takenobu 2017-09-26 23:05 GMT+09:00 Takenobu Tani : > Dear all, > > Thank you very much for the response and kind explanation. > After studying, I will submit to ghc-proposals :) > > Regards, > Takenobu > > > > 2017-09-26 22:56 GMT+09:00 Merijn Verstraaten : > >> I, too, have wished for the ability to have a separator in large number >> literals. >> >> So a strong +1 from me. >> >> Cheers, >> Merijn >> >> > On 26 Sep 2017, at 15:43, Ben Gamari wrote: >> > >> > Takenobu Tani writes: >> > >> >> Hi Rahul, >> >> >> >> Thanks for the explanation. >> >> Hmm, Is not there much need... >> >> >> >> Thank you for the code example. >> >> I will also look at lexer for my study [1]. >> >> >> > If you do want to try pursuing a language change do feel free to submit >> > a proposal [1]. I would also like the ability to break up large >> > literals. However, why limit it to BinaryLiterals? I would like the same >> > syntax for decimal and hexadecimal literals as well. >> > >> > Cheers, >> > >> > - Ben >> > >> > >> > [1] https://github.com/ghc-proposals/ghc-proposals >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
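To make the idea in the proposal thread above concrete, here is a small, self-contained illustration of the kind of literal being asked for. The exact rules are whatever proposal #76 settles on; this sketch assumes the underscore-between-digits form that GHC later shipped as the NumericUnderscores extension:

    {-# LANGUAGE BinaryLiterals, NumericUnderscores #-}

    -- Underscores are purely visual separators; the values are unchanged.
    million :: Int
    million = 1_000_000       -- same value as 1000000

    flags :: Int
    flags = 0b1011_1110       -- grouping the bits of a binary literal

    rgba :: Int
    rgba = 0x00ff_ff00        -- grouping the bytes of a hexadecimal literal

    main :: IO ()
    main = print (million, flags, rgba)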
From mail at joachim-breitner.de Sat Sep 30 15:56:12 2017 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 30 Sep 2017 11:56:12 -0400 Subject: perf.haskell.org update: Now using cachegrind In-Reply-To: <1506085574.1005.4.camel@joachim-breitner.de> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> <1505961496.10873.0.camel@joachim-breitner.de> <1506085574.1005.4.camel@joachim-breitner.de> Message-ID: <1506786972.23302.4.camel@joachim-breitner.de> Hi, update on this: On Friday, 22.09.2017, at 09:06 -0400, Joachim Breitner wrote: > I have switched perf.haskell.org to run nofib with > $ make -C nofib EXTRA_RUNTEST_OPTS=-cachegrind NoFibRuns=1 mode=slow -j8 it looks like this will not work out with the current setup; perf.haskell.org has failed to catch up with the number of commits coming in. I'll see if there are some low-hanging fruit to speed this up, such as not building haddocks. I also wonder whether, when using cachegrind, the results from different machines are actually comparable. Maybe I'll try that some day. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de https://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From svenpanne at gmail.com Sat Sep 30 17:28:14 2017 From: svenpanne at gmail.com (Sven Panne) Date: Sat, 30 Sep 2017 19:28:14 +0200 Subject: perf.haskell.org update: Now using cachegrind In-Reply-To: <1506786972.23302.4.camel@joachim-breitner.de> References: <1505923865.5913.7.camel@joachim-breitner.de> <87vakdw77m.fsf@ben-laptop.smart-cactus.org> <1505938404.1534.5.camel@joachim-breitner.de> <1505961496.10873.0.camel@joachim-breitner.de> <1506085574.1005.4.camel@joachim-breitner.de> <1506786972.23302.4.camel@joachim-breitner.de> Message-ID: 2017-09-30 17:56 GMT+02:00 Joachim Breitner : > [...] I also wonder whether, when using cachegrind, the results from > different machines are actually comparable. [...] > In general, they are not really comparable: cachegrind doesn't collect *actual* cache statistics, it emulates a simplified version of the caching machinery, trying to auto-detect parameters, see e.g.: http://valgrind.org/docs/manual/cg-manual.html#cg-manual.overview This doesn't mean that the numbers are useless, but they are only (good) general hints, and in rare extreme cases, they can be totally off from the real numbers. For the "real stuff", one has to use "perf", but then you can only compare numbers from the same CPU model. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Sat Sep 30 23:51:30 2017 From: mail at nh2.me (Niklas Hambüchen) Date: Sun, 1 Oct 2017 01:51:30 +0200 Subject: GHC Threads affinity In-Reply-To: References: Message-ID: Hey Michael, greetings! Here's a little side issue that may also be of interest to you in case you've got HyperThreading on: https://ghc.haskell.org/trac/ghc/ticket/10229 Niklas
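To illustrate the trade-off discussed in the cachegrind thread above, a minimal sketch of the two kinds of measurement; the benchmark name and the event list are placeholders, not part of the perf.haskell.org setup:

    # Simulated cache behaviour: machine-independent, but based on an
    # idealised cache model rather than the real hardware.
    valgrind --tool=cachegrind ./SomeNofibBenchmark

    # Real hardware counters: accurate for this machine, but only
    # comparable between runs on the same CPU model.
    perf stat -e instructions,cache-references,cache-misses,branch-misses ./SomeNofibBenchmark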