From johan.tibell at gmail.com Tue Jul 1 05:49:47 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 1 Jul 2014 07:49:47 +0200 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF103EECD8@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Mon, Jun 30, 2014 at 11:22 PM, Dominick Samperi wrote: > Given the examples provided with this proposal it looks like this > change is targeted mostly at compiler hackers, and not at > library/package developers. Yes, this discussion is only about documenting the GHC modules. -- Johan From jan.stolarek at p.lodz.pl Tue Jul 1 07:16:57 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 1 Jul 2014 09:16:57 +0200 Subject: [commit: ghc] master: Overlapable pragmas for individual instances (#9242) (6290eea) In-Reply-To: <87vbrixg92.fsf@gmail.com> References: <20140630003752.20ED72406D@ghc.haskell.org> <87vbrixg92.fsf@gmail.com> Message-ID: <201407010916.57143.jan.stolarek@p.lodz.pl> > Sure, I've just added you to the set of users allowed to push to > GitHub's haskell/haddock repo. > > There's a ghc-devs team in the haskell github organization, > > https://github.com/orgs/haskell/teams/ghc-devs This link is broken. Is this the same as: https://github.com/orgs/haskell/members ? Janek From hvr at gnu.org Tue Jul 1 09:19:13 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Tue, 01 Jul 2014 11:19:13 +0200 Subject: DPH-setting in mk/build.mk (was: HEADS-UP: Git submodule conversion imminent) In-Reply-To: <201406300845.57744.jan.stolarek@p.lodz.pl> (Jan Stolarek's message of "Mon, 30 Jun 2014 08:45:57 +0200") References: <87ionpv3bb.fsf@gmail.com> <87k381w6xa.fsf@gmail.com> <201406300845.57744.jan.stolarek@p.lodz.pl> Message-ID: <87bnt9wi5q.fsf_-_@gnu.org> On 2014-06-30 at 08:45:57 +0200, Jan Stolarek wrote: > Herbert, all, > > I just pulled the new HEAD and have a question which I believe was not addressed so far. In my > work on the GHC tree I never pulled the dph subrepo because the only thing it adds for me is > extra build time (of course I pull it for my validation tree because I have no choice). Now it > seems that getting rid of dph is not that simple. If I `rm -df libraries/dph` then it gets > restored after `./sync-all pull`. Running `rm -df libraries/dph/*` seems to prevent that but I > imagine there will be problems if the dph submodule actually gets modified and I try to pull the > latest version. Moreover in both cases `git status` lists the submodule content as modified, > which I see as noise. So is there a good way of removing dph from the > source tree? You probably haven't seen http://git.haskell.org/ghc.git/commit/88d85aa65ea15d984bf207f82d99928eda0b6c26 yet, which now provides a way to disable DPH via mk/build.mk It may be worth considering setting BUILD_DPH=NO in some of the quick-build templates in mk/build.mk, but I didn't want to change anything w/o discussion here first. 
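A minimal sketch of the kind of mk/build.mk fragment being discussed here, assuming (as the commit above suggests) that the new knob is spelled BUILD_DPH and that mk/build.mk is seeded from mk/build.mk.sample in the usual way; treat it as illustrative rather than as the exact final syntax:

    # mk/build.mk -- personal build settings, usually copied from mk/build.mk.sample
    BuildFlavour = quick

    # Skip building the Data Parallel Haskell libraries entirely
    # (illustrative; the precise variable is whatever the commit above introduces)
    BUILD_DPH = NO

Whether this belongs at the top of the file or inside the individual quick-build flavour templates is exactly the open question raised above.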
From jan.stolarek at p.lodz.pl Tue Jul 1 09:32:53 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 1 Jul 2014 11:32:53 +0200 Subject: DPH-setting in mk/build.mk (was: HEADS-UP: Git submodule conversion imminent) In-Reply-To: <87bnt9wi5q.fsf_-_@gnu.org> References: <87ionpv3bb.fsf@gmail.com> <201406300845.57744.jan.stolarek@p.lodz.pl> <87bnt9wi5q.fsf_-_@gnu.org> Message-ID: <201407011132.53910.jan.stolarek@p.lodz.pl> > It may be worth considering setting BUILD_DPH=NO in some of the > quick-build templates in mk/build.mk, but I didn't want to change > anything w/o discussion here first. +1 from me. I see this change as being beginner-friendly - most newcomers probably don't need DPH. Janek From simonpj at microsoft.com Tue Jul 1 09:35:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 1 Jul 2014 09:35:45 +0000 Subject: DPH-setting in mk/build.mk (was: HEADS-UP: Git submodule conversion imminent) In-Reply-To: <87bnt9wi5q.fsf_-_@gnu.org> References: <87ionpv3bb.fsf@gmail.com> <87k381w6xa.fsf@gmail.com> <201406300845.57744.jan.stolarek@p.lodz.pl> <87bnt9wi5q.fsf_-_@gnu.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1040408A@DB3PRD3001MB020.064d.mgd.msft.net> Good to document this in our building guide somewhere. https://ghc.haskell.org/trac/ghc/wiki/Building/Using perhaps Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 01 July 2014 10:19 | To: Jan Stolarek | Cc: ghc-devs at haskell.org | Subject: DPH-setting in mk/build.mk (was: HEADS-UP: Git submodule | conversion imminent) | | On 2014-06-30 at 08:45:57 +0200, Jan Stolarek wrote: | > Herbert, all, | > | > I just pulled the new HEAD and have a question which I believe was not | > addressed so far. In my work on the GHC tree I never pulled the dph | > subrepo because the only thing it adds for me is extra build time (of | > course I pull it for my validation tree because I have no choice). Now | > it seems that getting rid of dph is not that simple. If I `rm -df | > libraries/dph` then it gets restored after `./sync-all pull`. Running | > `rm -df libraries/dph/*` seems to prevent that but I imagine there | > will be problems if the dph submodule actually gets modified and I try | > to pull the latest version. Moreover in both cases `git status` lists | the submodule content as modified, which I see as noise. So is there a | good way of removing dph from the source tree? | | You probably haven't seen | | | http://git.haskell.org/ghc.git/commit/88d85aa65ea15d984bf207f82d99928eda | 0b6c26 | | yet, which now provides a way to disable DPH via mk/build.mk | | | It may be worth considering setting BUILD_DPH=NO in some of the quick- | build templates in mk/build.mk, but I didn't want to change anything w/o | discussion here first. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From afarmer at ittc.ku.edu Tue Jul 1 14:19:58 2014 From: afarmer at ittc.ku.edu (Andrew Farmer) Date: Tue, 1 Jul 2014 09:19:58 -0500 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: <8738em2kol.fsf@gmail.com> References: <53B18555.5040900@gmail.com> <8738em2kol.fsf@gmail.com> Message-ID: At the risk of sounding redundantly redundant, I'd like to third this. My workflow for finding stuff in the GHC codebase is a mixture of grep and Hoogle. 
Searching Hoogle for "+ghc :: [TyVar] -> Type -> Type" is a huge timesaver, and Hoogle sends me to the generated haddock comments. Usually the haddocks themselves aren't there, but the "Source" link is a handy way to jump to the code and explore. So having actual haddock documentation would only help in this regard. The Notes are *great* for subtle issues with the implementation of some function, but it'd be nice to have some commentary on how to _use_ that function without having to understand how it works. On Mon, Jun 30, 2014 at 3:42 PM, Ben Gamari wrote: > David Luposchainsky writes: > >> Hey list, >> >> I am strongly in favour of the proposal. As a pedestrian-level GHC >> contributor, the *vast* majority of my time is spent trying to figure >> out what certain things do, when the answer could be found in a one- >> or two-line comment above a definition. >> > I'd like to second this. As an occassional contributor, I find myself > wading through a lot of code to deduce functions' purpose. While I'm > often pleasantly surprised by the quality of the notes scattered about > the code, per-definition Haddocks would fill in the many remaining gaps > and provide a nice overview of each module. > > I agree that enforcing the quality of the rendered Haddocks is > unnecessary. Once the language has been written there are many > contributors (such as myself) who can further clean up the formatting. > > Cheers, > > - Ben > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From mark.lentczner at gmail.com Tue Jul 1 14:26:05 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 1 Jul 2014 07:26:05 -0700 Subject: the case of the missing Cabal doc In-Reply-To: <20140630202503.GA29322@matrix.chaos.earth.li> References: <20140630202503.GA29322@matrix.chaos.earth.li> Message-ID: On Mon, Jun 30, 2014 at 1:25 PM, Ian Lynagh wrote: > It was converted to markdown, and GHC's build system doesn't know how to > build markdown docs. It's in libraries/Cabal/Cabal/doc/. > Okay then... I can see how the GHC build doesn't want to depend on pandoc! I'll probably remove this from the Mac HP distribution, and then have the installed "Start Here" do just have a link to the cabal doc on the web. ...Unless anyone has a good, short markdown->html script that runs in the browser.... :-) - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jul 1 17:51:29 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 01 Jul 2014 18:51:29 +0100 Subject: Help needed: parsing pattern synonym contexts In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF103F294F@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF103EB3F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF103F294F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53B2F521.90901@gmail.com> The s/r conflicts can also be a problem, depending on what you're trying to parse. It's generally a good idea to get rid of them if you can, but at the least you should understand why they exist (use happy --info) and document them in Parser.y.pp. Cheers, Simon On 30/06/2014 14:04, Simon Peyton Jones wrote: > Gergo > > It's just a question of chasing down the reduce/reduce errors. You can give a flag to Happy that makes it dump a file with all the info about it parsing states, and where the reduce/reduce errors come from, and you can go from there. 
> > If you don't know how to interpret that file, just commit your best try to your wip/ branch and maybe some other ghc devs will help you. Maybe me. But without being able to reproduce it, it's hard to help. > > Simon > > | -----Original Message----- > | From: Dr. ERDI Gergo [mailto:gergo at erdi.hu] > | Sent: 25 June 2014 14:20 > | To: Simon Peyton Jones > | Cc: GHC Devs > | Subject: RE: Help needed: parsing pattern synonym contexts > | > | On Tue, 24 Jun 2014, Simon Peyton Jones wrote: > | > | > In the latter case, what happened to the shift/reduce and > | > reduce/reduce errors reported by Happy? Esp the latter. If you are > | > getting more you need to track them down. > | > | I think I've figured out what might be causing the problem. > | > | First, a couple figures. With these rules: > | > | pattern_synonym_decl > | : 'pattern' con vars0 patsyn_token pat > | | 'pattern' varid conop varid patsyn_token pat > | > | pattern_synonym_sig > | : 'pattern' patsyn_stuff '::' ctype > | > | patsyn_stuff > | : constr_stuff > | > | I'm getting 112 new reduce/reduce conflicts. > | > | If I add the context like you recommended: > | > | pattern_synonym_sig > | : 'pattern' patsyn_context patsyn_stuff '::' ctype > | > | patsyn_context :: { LHsContext RdrName } > | : forall > | | forall context '=>' > | > | > | then I get 54 new shift/reduce conflicts and no (new) reduce/reduce > | conflicts. > | > | My feeling is the problem is that patterns don't need any special > | parentheses around type annotations, which means the following is a > | legal pattern synonym definition: > | > | pattern Single x = [x] :: [Int] > | > | and I think that the difference (the '=' or '<-') is too 'deep' between > | this and something like > | > | pattern Single a :: [a] > | > | Unfortunately, I still have no idea how to solve this problem... > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From omeragacan at gmail.com Tue Jul 1 18:27:08 2014 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Tue, 1 Jul 2014 21:27:08 +0300 Subject: Why we need CCS entries in apply functions (rts/Apply.cmm and rts/StgStdThunks.cmm) Message-ID: Hi all, Code for entering cost-centres before executing the function/thunk body is generated in `compiler/codeGen/StgCmmBind.hs`, by `thunkCode` and `closureCodeBody` functions. But we also have some `enterCostCentreThunk` and `enterCostCentreFun` calls in `rts/Apply.cmm` and `rts/StgStdThunks.cmm`. I'm wondering why those are necessary. Thunk/function bodies already have CCS entries generated by functions in StgCmmBind so I'm having trouble seeing the need for these additional entries. Can anyone explain those to me? Thanks, --- Ömer Sinan Ağacan http://osa1.net From jan.stolarek at p.lodz.pl Wed Jul 2 07:57:57 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 2 Jul 2014 09:57:57 +0200 Subject: Code redundancy in PrelNames? Message-ID: <201407020957.57571.jan.stolarek@p.lodz.pl> I'm looking at PrelNames.lhs and I believe there is some redundancy that can also lead to confusion. Consider these definitions: bindMName = methName gHC_BASE (fsLit ">>=") bindMClassOpKey arrAName = varQual aRROW (fsLit "arr") arrAIdKey The former is defined as "methName", while the latter is "varQual".
These are defined like this: varQual :: Module -> FastString -> Unique -> Name varQual = mk_known_key_name varName mk_known_key_name :: NameSpace -> Module -> FastString -> Unique -> Name mk_known_key_name space modu str unique = mkExternalName unique modu (mkOccNameFS space str) noSrcSpan methName :: Module -> FastString -> Unique -> Name methName modu occ unique = mkExternalName unique modu (mkVarOccFS occ) noSrcSpan Expanding the call to mk_known_key_name in varQual the RHS of varQual becomes : mkExternalName unique modu (mkOccNameFS varName str) noSrcSpan Now, the call to "mkVarOccFS occ" in methName resolves to "mkOccNameFS varName occ", making the call to varQual identical to methName. Is this redundancy a conscious choice or just an accident? If it's conscious then what is the purpose? I spent several minutes trying to understand why bindMName and arrAName are defined differently. Names suggest that bindMName is a method, while arrAName is a qualified variable. I find this confusing and I think it would be better to drop varQual in favour of methName. Thoughts? Janek From marlowsd at gmail.com Wed Jul 2 08:08:10 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 02 Jul 2014 09:08:10 +0100 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: Message-ID: <53B3BDEA.8060108@gmail.com> Agreed, let's do it. Thanks for the well-argued proposal. Next up: consistent style :-) Cheers, Simon On 27/06/2014 10:51, Johan Tibell wrote: > Hi! > > I found myself exploring new parts of the GHC code base the last few > weeks (exciting!), which again reminded me of my biggest frustration > when working on GHC: the lack of per-function/type (Haddock) comments. > > GHC code is sometimes commented with "notes", which are great but tend > to (1) mostly cover the exceptional cases and (2) talk about the > implementation of a function, not how a caller might use it or why. > > Lack of documentation, in GHC and other software projects, usually has > (at least) two causes: > > * Programmers comment code they think is "complex enough to warrant a > comment". The problem is that the author is usually a poor judge of > what's complex enough, because he/she is too familiar with the code > and tends to under-document code when following this principle. > * Documenting is boring and tends to have little benefit the person > writing to documentation. Given lack of incentives we tend to > document less than we ought to. > > I've only seen one successful way to combat the lack of documentation > that stems from the above: have the project's style guide mandate that > top-level functions and types (or at least those that are exported) have > documentation. This works well at Google. > > Anecdote: we have one code base inside Google that was until recently > exempt from this rule and documentation is almost completely absent in > that code base, even though hundreds of engineers work on and need to > understand it every day. This breeds institutional knowledge problems > i.e. if the author of a core piece of code leaves, lots of knowledge is > lost. > > *Proposal: *I propose that we require that new top-level functions and > types have Haddock comments, even if they start out as a single, humble > sentence. > > I've found that putting even that one sentence (1) helps new users and > (2) establishes a place for improvements to be made. 
There's a strong > "broken window" effect to lack of comments, in that lack of comments > breeds more lack of comments as developers follow established practices. > > We should add this requirement to the style guide. Having it as a > written down policy tends to prevent having to re-hash the whole > argument about documentation over and over again. This has also helped > us a lot at Google, because programmers can spend endless amount of time > arguing about comments, placement of curly braces, etc. and having a > written policy helps cut down on that. > > To give an idea of how to write good comments, here are two examples of > undocumented code I ran into in GHC and how better comments would have > helped. > > *First example* > In compiler/nativeGen/X86/Instr.hs there's a (local) function called > mkRUR, which is a helper function use when computing instruction > register usage. > > The first question that I asked upon seeing uses of that function was > "what does RUR stand for?" Given the context the function is in, I > guessed it stands for read-update-read, because R is used to mean "read" > in the enclosing function and "updating" is related to "reading" so that > must be what U stands for. It turns out that it stands for > RegUsageReadonly. Here's a comment that would have captured, in a single > sentence, what this function is for: > > -- | Create register usage info for instruction that only > -- reads registers. > mkRUR src = src' `seq` RU src' [] > where src' = filter (interesting platform) src > > That already a big improvement. A note about the register filtering, > which means that not all registers you pass to the function will be > recorded as being read in the end, could also be useful. > > Aside: providing a type signature, which would have made it clear that > the return type is RU, might also have helped in this particular case. > > *Second example* > In the same file there a function called x86_regUsageOfInstr. It's the > function that encloses the local function mkRUR above. > > I could figure out that this function has something to do with register > usage, of the instruction passed as an argument, and that register usage > is important for the register allocator. However, trying to understand > in more detail what that meant was more of challenge than it needed to > be. First, a comment more clearly explaining what computing register > usage means in practice would be helpful: > > -- | Returns which registers are read and written by this > -- instruction, as a (read, written) pair. This info is used > -- by the register allocator. > x86_regUsageOfInstr :: Platform -> Instr -> RegUsage > > The reason mentioning that the return value is essentially a (read, > written) pair is helpful is because the body of the function a big case > statement full of lines like this one: > > GCMP _ src1 src2 -> mkRUR [src1,src2] > ... > FDIV _ src dst -> usageRM src dst > > It's not immediately clear that all the various helper functions used > here just end up computing a pair of the above form. A top-level comment > lets you understand what's going on without understanding exactly what > all these helper functions are doing. > > Thoughts? 
> > -- Johan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Wed Jul 2 08:20:00 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 02 Jul 2014 09:20:00 +0100 Subject: Fixing Trac #9046 (Panic in GHCi when using :print). In-Reply-To: References: Message-ID: <53B3C0B0.6010700@gmail.com> I'm not sure of the correct way to fix this either - Simon, what do you think? This is a regression in 7.8 relative to 7.6 which unfortunately wasn't caught by the test suite. The commit that broke this fixed something else, so it's not a simple matter of reverting it. Vitaly, one concrete thing you can do right now is add a test case and mark it broken by #9046. Cheers, Simon On 22/06/2014 15:47, Vitaly Bragilevsky wrote: > Hello, > > I am trying to fix the bug #9046 (Panic in GHCi when using :print). > I've discovered that It was introduced by this SPJ's commit (25 Nov 2013): > https://ghc.haskell.org/trac/ghc/changeset/a8ac471d435214dbdc1fa70f938c63128993a1db/ghc > and especially by this change: > > -type QuantifiedType = ([TyVar], Type) -- Make the free type > variables explicit > +type QuantifiedType = ([TyVar], Type) > + -- Make the free type variables explicit > + -- The returned Type should have no top-level foralls (I believe) > > quantifyType :: Type -> QuantifiedType > --- Generalize the type: find all free tyvars and wrap in the appropiate ForAll. > -quantifyType ty = (varSetElems (tyVarsOfType ty), ty) > +-- Generalize the type: find all free and forall'd tyvars > +-- and return them, together with the type inside, which > +-- should not be a forall type. > +-- > +-- Thus (quantifyType (forall a. a->[b])) > +-- returns ([a,b], a -> [b]) > + > +quantifyType ty = (varSetElems (tyVarsOfType rho), rho) > + where > + (_tvs, rho) = tcSplitForAllTys ty > > While this change looks reasonable to me it breaks exploiting thunks > created by `:print` ghci's command > from polymorphic values. Before this change we could have: > > $ ghci -fprint-explicit-foralls > GHCi, version 7.6.3.20130421: http://www.haskell.org/ghc/ :? for help >> :print length > length = (_t1::forall a. [a] -> Int) >> _t1 [1,2,3] > 3 > > And now we have: > $ ghci -fprint-explicit-foralls > GHCi, version 7.8.2: http://www.haskell.org/ghc/ :? for help >> :print length > length = (_t1::[a] -> Int) >> _t1 [1,2,3] > ghc: panic! (the 'impossible' happened) > (GHC version 7.8.2 for x86_64-unknown-linux): > tcTyVarDetails a{tv arw} [tv] > > > This particular panic is caused by unification > (compiler/typecheck/TcUnify.lhs line 1042). > In case of `read` instead of `length` we get panic even earlier in > simplification (compiler/typecheck/TcSimplify line 85). > > The concrete reason for the panic is wrong constructor used for tyvar > (TyVar instead of TcTyVar). > > I see several ways to fix this: > 1) rollback to forall-types in `:print` (requires change in > cvObtainTerm, RtClosureInspect.hs); > 2) skolemise free tyvar to RuntimeUnk (it becomes TcTyVar then): > a) in cvObtainTerm; > b) somewhere before typechecking. > > I can't decide the correct way to fix this bug or maybe there are > other alternatives. > Anyway I am ready to work on this though I need some advice. 
> > With best regards, > Vitaly Bragilevsky > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From gergo at erdi.hu Wed Jul 2 11:45:29 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Wed, 2 Jul 2014 19:45:29 +0800 (SGT) Subject: Help needed: parsing pattern synonym contexts In-Reply-To: <53B2F521.90901@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF103EB3F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF103F294F@DB3PRD3001MB020.064d.mgd.msft.net> <53B2F521.90901@gmail.com> Message-ID: Hi, On Tue, 1 Jul 2014, Simon Marlow wrote: > The s/r conflicts can also be a problem, depending on what you're trying to > parse. It's generally a good idea to get rid of them if you can, but at the > least you should understand why they exist (use happy --info) and document > them in Parser.y.pp. OK I've uploaded the code to wip/T9023 and the happy --info report to http://unsafePerform.IO/files/Parser.y.happy-info.gz Loads of shift/reduce conflicts come from state 214. Looking at them, I would have thought that the solution would be to parse pattern synonym definitions like we parse data constructors as types: by parsing it as a single pattern and then splitting it. A quick shortcut to this should be to just disable the parsing rule for infix pattern synonyms; so I tried that. As expected, this cuts down on the number of shift/reduce conflicts in that state considerably. The rest then seems to be coming from state 570: patsyn_context -> forall . (rule 157) patsyn_context -> forall . context '=>' (rule 158) Unfortunately, I don't have a quick workaround for that one yet. Thanks, Gergo From ggreif at gmail.com Wed Jul 2 12:42:19 2014 From: ggreif at gmail.com (Gabor Greif) Date: Wed, 2 Jul 2014 14:42:19 +0200 Subject: GenericsPropositionalEquality [Was: Resolved+new Q: TypeLits question, how to build a Type Application with Symbol index] Message-ID: [added Pedro and Richard to "to:"] On 6/30/14, Simon Peyton Jones wrote: > | Yes I have a branch, and it works! A bunch of things is still missing > | (notably record selectors), but I have a proof-of-concept with a gdiff > | library hooked up to GHC.Generics, and by appealing to type-level > | reasoning I can obtain a difference tree from True to False (which > | looks good) by using the reflection (i.e. class Generic) only, no need > | for TH or hand-coding. Comparing bigger trees (and then 'patch'ing > | them) appears to be SMOP from here. > > Do you have a wiki page explaining what "it" is (the thing that works). Here is a wiki page https://ghc.haskell.org/trac/ghc/wiki/GenericsPropositionalEquality you all are invited to make suggestions, ask questions and generally beautify. > > | instance Datatype (Dat "MyModule" "Foo") ... > | > | I get an 'orphan instance' warning. I believe that these are harmless, > > The downside of orphan instances is that GHC must visit every .hi file that > has an orphan instance, just in case it contains a relevant instance decl. > That slows down *every* compilation, whether or not it uses the instance. > > The best way to get rid of it is to declare something local that is "from > this module". Something like > > data MyModule_Foo > instance DataType (Dat MyModule_Foo) where ... > > Now MyModule_Foo is a data type from the module currently being compiled. > That tells GHC which .hi file to look in, and means the instance isn't > orphan. 
Yes I see, I noted it in the discussion and came up with a conservative approach (to be implemented). Furter opinions? Please add to the wiki. Thanks and cheers, Gabor > > Simon > > | so is there a way to suppress them? Since I never insert tyvars in the > | instance head, there should never be any overlap too. > | > | Cheers, > | > | Gabor > | > | On 6/30/14, Simon Peyton Jones wrote: > | > You'll need to give a lot more info than this before I can help Gabor. > | > Currently I have only the vaguest idea about what you are trying to > | > accomplish. Is there a wiki page that describes the design (user's eye > | > view) in detail? > | > > | > I see you have a branch. If you are stuck, and give me repro > | instructions, > | > I can attempt to help. > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Gabor > | > | Greif > | > | Sent: 27 June 2014 17:51 > | > | To: ghc-devs > | > | Subject: Resolved+new Q: TypeLits question, how to build a Type > | > | Application with Symbol index > | > | > | > | I succeeded to solve all of them :-) > | > | > | > | But now I am blocked on on a panic > | > | > | > | "not in scope during type checking, but it passed the renamer". > | > | > | > | I suspect that while "deriving Generic" some instances are defined in > | > | some empty TcEnv, which does not contain my definition in context. > | > | > | > | Is there a way to inject some type constructor into the TcEnv? > | > | > | > | Thanks, > | > | > | > | Gabor > | > | > | > | On 6/27/14, Gabor Greif wrote: > | > | > Hello devs, > | > | > > | > | > I have > | > | > > | > | > {{{ > | > | > data D (n :: Symbol) > | > | > }}} > | > | > > | > | > in my module, and I want to obtain a type > | > | > > | > | > {{{ > | > | > D "YAY!" > | > | > }}} > | > | > > | > | > programmatically. Where can I find code that performs this (or > | > | > something similar)? > | > | > > | > | > 1) I have to look up |D| in the current TyEnv (what if it is in a > | > | > specific module?), > | > | > 2) I have to build the type index (of kind Symbol), this involves > | > | > FastString, looks non-trivial, > | > | > 3) Apply 1) on 2), this is easy. > | > | > > | > | > Any hints welcome! > | > | > > | > | > Thanks and cheers, > | > | > > | > | > Gabor > | > | > > | > | > > | > | > PS: some morsels I have so far: > | > | > > | > | > for 1) > | > | > compiler/prelude/PrelNames.lhs:gHC_GENERICS = mkBaseModule > | (fsLit > | > | > "GHC.Generics") > | > | > > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > | > > From ekmett at gmail.com Wed Jul 2 15:56:09 2014 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 2 Jul 2014 11:56:09 -0400 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: <53B3BDEA.8060108@gmail.com> References: <53B3BDEA.8060108@gmail.com> Message-ID: That has a high chance of backfiring and requiring everyone to use do { ...; ... } with explicit braces and semis. ;) -Edward On Wed, Jul 2, 2014 at 4:08 AM, Simon Marlow wrote: > Agreed, let's do it. Thanks for the well-argued proposal. > > Next up: consistent style :-) > > Cheers, > Simon > > > On 27/06/2014 10:51, Johan Tibell wrote: > >> Hi! 
>> >> I found myself exploring new parts of the GHC code base the last few >> weeks (exciting!), which again reminded me of my biggest frustration >> when working on GHC: the lack of per-function/type (Haddock) comments. >> >> GHC code is sometimes commented with "notes", which are great but tend >> to (1) mostly cover the exceptional cases and (2) talk about the >> implementation of a function, not how a caller might use it or why. >> >> Lack of documentation, in GHC and other software projects, usually has >> (at least) two causes: >> >> * Programmers comment code they think is "complex enough to warrant a >> >> comment". The problem is that the author is usually a poor judge of >> what's complex enough, because he/she is too familiar with the code >> and tends to under-document code when following this principle. >> * Documenting is boring and tends to have little benefit the person >> >> writing to documentation. Given lack of incentives we tend to >> document less than we ought to. >> >> I've only seen one successful way to combat the lack of documentation >> that stems from the above: have the project's style guide mandate that >> top-level functions and types (or at least those that are exported) have >> documentation. This works well at Google. >> >> Anecdote: we have one code base inside Google that was until recently >> exempt from this rule and documentation is almost completely absent in >> that code base, even though hundreds of engineers work on and need to >> understand it every day. This breeds institutional knowledge problems >> i.e. if the author of a core piece of code leaves, lots of knowledge is >> lost. >> >> *Proposal: *I propose that we require that new top-level functions and >> >> types have Haddock comments, even if they start out as a single, humble >> sentence. >> >> I've found that putting even that one sentence (1) helps new users and >> (2) establishes a place for improvements to be made. There's a strong >> "broken window" effect to lack of comments, in that lack of comments >> breeds more lack of comments as developers follow established practices. >> >> We should add this requirement to the style guide. Having it as a >> written down policy tends to prevent having to re-hash the whole >> argument about documentation over and over again. This has also helped >> us a lot at Google, because programmers can spend endless amount of time >> arguing about comments, placement of curly braces, etc. and having a >> written policy helps cut down on that. >> >> To give an idea of how to write good comments, here are two examples of >> undocumented code I ran into in GHC and how better comments would have >> helped. >> >> *First example* >> >> In compiler/nativeGen/X86/Instr.hs there's a (local) function called >> mkRUR, which is a helper function use when computing instruction >> register usage. >> >> The first question that I asked upon seeing uses of that function was >> "what does RUR stand for?" Given the context the function is in, I >> guessed it stands for read-update-read, because R is used to mean "read" >> in the enclosing function and "updating" is related to "reading" so that >> must be what U stands for. It turns out that it stands for >> RegUsageReadonly. Here's a comment that would have captured, in a single >> sentence, what this function is for: >> >> -- | Create register usage info for instruction that only >> -- reads registers. 
>> mkRUR src = src' `seq` RU src' [] >> where src' = filter (interesting platform) src >> >> That already a big improvement. A note about the register filtering, >> which means that not all registers you pass to the function will be >> recorded as being read in the end, could also be useful. >> >> Aside: providing a type signature, which would have made it clear that >> the return type is RU, might also have helped in this particular case. >> >> *Second example* >> >> In the same file there a function called x86_regUsageOfInstr. It's the >> function that encloses the local function mkRUR above. >> >> I could figure out that this function has something to do with register >> usage, of the instruction passed as an argument, and that register usage >> is important for the register allocator. However, trying to understand >> in more detail what that meant was more of challenge than it needed to >> be. First, a comment more clearly explaining what computing register >> usage means in practice would be helpful: >> >> -- | Returns which registers are read and written by this >> -- instruction, as a (read, written) pair. This info is used >> -- by the register allocator. >> x86_regUsageOfInstr :: Platform -> Instr -> RegUsage >> >> The reason mentioning that the return value is essentially a (read, >> written) pair is helpful is because the body of the function a big case >> statement full of lines like this one: >> >> GCMP _ src1 src2 -> mkRUR [src1,src2] >> ... >> FDIV _ src dst -> usageRM src dst >> >> It's not immediately clear that all the various helper functions used >> here just end up computing a pair of the above form. A top-level comment >> lets you understand what's going on without understanding exactly what >> all these helper functions are doing. >> >> Thoughts? >> >> -- Johan >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Wed Jul 2 16:04:07 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 2 Jul 2014 11:04:07 -0500 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: Message-ID: I also support this proposal (I actually tripped up on these exact functions as well!) As stated elsewhere, I think this should be the case for all top-level functions, not just exported ones. Of course, I'd also like it if this rule explicitly extended to top-level data types, type classes, etc as well. I believe that was the intention but I'm just making sure. :) (Finally, I actually would like some kind of mechanical enforcement of this, but I don't think it has to be a hard rule - we shouldn't reject things on that basis alone. I'm not sure how we would do that anyway, though.) On Fri, Jun 27, 2014 at 4:51 AM, Johan Tibell wrote: > Hi! > > I found myself exploring new parts of the GHC code base the last few weeks > (exciting!), which again reminded me of my biggest frustration when working > on GHC: the lack of per-function/type (Haddock) comments. 
> > GHC code is sometimes commented with "notes", which are great but tend to > (1) mostly cover the exceptional cases and (2) talk about the implementation > of a function, not how a caller might use it or why. > > Lack of documentation, in GHC and other software projects, usually has (at > least) two causes: > > Programmers comment code they think is "complex enough to warrant a > comment". The problem is that the author is usually a poor judge of what's > complex enough, because he/she is too familiar with the code and tends to > under-document code when following this principle. > Documenting is boring and tends to have little benefit the person writing to > documentation. Given lack of incentives we tend to document less than we > ought to. > > I've only seen one successful way to combat the lack of documentation that > stems from the above: have the project's style guide mandate that top-level > functions and types (or at least those that are exported) have > documentation. This works well at Google. > > Anecdote: we have one code base inside Google that was until recently exempt > from this rule and documentation is almost completely absent in that code > base, even though hundreds of engineers work on and need to understand it > every day. This breeds institutional knowledge problems i.e. if the author > of a core piece of code leaves, lots of knowledge is lost. > > Proposal: I propose that we require that new top-level functions and types > have Haddock comments, even if they start out as a single, humble sentence. > > I've found that putting even that one sentence (1) helps new users and (2) > establishes a place for improvements to be made. There's a strong "broken > window" effect to lack of comments, in that lack of comments breeds more > lack of comments as developers follow established practices. > > We should add this requirement to the style guide. Having it as a written > down policy tends to prevent having to re-hash the whole argument about > documentation over and over again. This has also helped us a lot at Google, > because programmers can spend endless amount of time arguing about comments, > placement of curly braces, etc. and having a written policy helps cut down > on that. > > To give an idea of how to write good comments, here are two examples of > undocumented code I ran into in GHC and how better comments would have > helped. > > First example > In compiler/nativeGen/X86/Instr.hs there's a (local) function called mkRUR, > which is a helper function use when computing instruction register usage. > > The first question that I asked upon seeing uses of that function was "what > does RUR stand for?" Given the context the function is in, I guessed it > stands for read-update-read, because R is used to mean "read" in the > enclosing function and "updating" is related to "reading" so that must be > what U stands for. It turns out that it stands for RegUsageReadonly. Here's > a comment that would have captured, in a single sentence, what this function > is for: > > -- | Create register usage info for instruction that only > -- reads registers. > mkRUR src = src' `seq` RU src' [] > where src' = filter (interesting platform) src > > That already a big improvement. A note about the register filtering, which > means that not all registers you pass to the function will be recorded as > being read in the end, could also be useful. 
> > Aside: providing a type signature, which would have made it clear that the > return type is RU, might also have helped in this particular case. > > Second example > In the same file there a function called x86_regUsageOfInstr. It's the > function that encloses the local function mkRUR above. > > I could figure out that this function has something to do with register > usage, of the instruction passed as an argument, and that register usage is > important for the register allocator. However, trying to understand in more > detail what that meant was more of challenge than it needed to be. First, a > comment more clearly explaining what computing register usage means in > practice would be helpful: > > -- | Returns which registers are read and written by this > -- instruction, as a (read, written) pair. This info is used > -- by the register allocator. > x86_regUsageOfInstr :: Platform -> Instr -> RegUsage > > The reason mentioning that the return value is essentially a (read, written) > pair is helpful is because the body of the function a big case statement > full of lines like this one: > > GCMP _ src1 src2 -> mkRUR [src1,src2] > ... > FDIV _ src dst -> usageRM src dst > > It's not immediately clear that all the various helper functions used here > just end up computing a pair of the above form. A top-level comment lets you > understand what's going on without understanding exactly what all these > helper functions are doing. > > Thoughts? > > -- Johan > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From stegeman at gmail.com Wed Jul 2 16:14:09 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Wed, 2 Jul 2014 18:14:09 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? Message-ID: hi all, I've added some code [1] [2] to GHCJS to make it run Template Haskell code on node.js, rather than using the GHC linker. GHCJS has supported TH for a long time now, but so far always relied on native (host) code for it. This is the main reason that GHCJS always builds native and JavaScript code for everything (another is that Cabal Setup.hs scripts need to be compiled to some host-runnable form, but that can also be JavaScript if you have node.js) Now besides the compiler having to do twice the work, this has some other disadvantages: - Our JavaScript code has the same dependencies (packages) as native code, which means packages like unix or Win32 show up somewhere, depending on the host environment. This also limits our options in choosing JS-specific packages. - The Template Haskell code runs on the host environment, which might be slightly different from the target, for example in integer size or operating system specific constants. Moreover, building native code made the GHCJS installation procedure more tricky, making end users think about libgmp or libiconv locations, since it basically required the same preparation as building GHC from source. This change will make installing much easier and more reliable (we still have to update the build scripts). 
How it works is pretty simple: - When any code needs to be run on the target (hscCompileCoreExpr, through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with the thrunner.js [3] script, - GHCJS sends its RTS and the Template Haskell server code [1] to node.js, the script starts a Haskell thread running the server, - for every splice, GHCJS compiles it to JavaScript and links it using its incremental linking functionality. The code for the splice, including dependencies that have not yet been sent to the runner (for earlier splices), is then sent in a RunTH [4] message, - the runner loads and runs the code in the Q monad, can send queries to GHCJS for reification, - the runner sends back the result as a serialized Template Haskell AST (using GHC.Generics for the Binary instances). All Template Haskell functionality is supported, including recent additions for reifying modules and annotations. I still need to clean up and push the patches for the directory and process packages, but after that, the TH code can read/write files, run processes and interact with them and make network connections, all through node.js. Now since this approach is in no way specific to JavaScript, I was wondering if there's any interest in getting this functionality into GHC 7.10 for general cross compilation. The runner would be a native (target) program with dynamic libraries (or object files) being sent over to the target machine (or emulator) for the splices. Thanks to Andras Slemmer from Prezi who helped build the initial proof of concept (without reification) at BudHac. cheers, Luite [1] https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs [2] https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs [3] https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js [4] https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 2 16:33:21 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 2 Jul 2014 18:33:21 +0200 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: Message-ID: On Wed, Jul 2, 2014 at 6:04 PM, Austin Seipp wrote: > Of course, I'd also like it if this rule explicitly extended to > top-level data types, type classes, etc as well. I believe that was > the intention but I'm just making sure. :) That was the intention. > (Finally, I actually would like some kind of mechanical enforcement of > this, but I don't think it has to be a hard rule - we shouldn't reject > things on that basis alone. I'm not sure how we would do that anyway, > though.) The way I suggest we do this, if we do this, is to add a linter to Phabricator that adds a note to the code review that the new code lacks the appropriate docs. That way we encourage users to add them, without e.g. making validate fail or something similar. This is what we do at Google (and FB too I presume). From austin at well-typed.com Wed Jul 2 16:45:33 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 2 Jul 2014 11:45:33 -0500 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: Message-ID: Yes, Phabricator/arcanist is probably the best way to do it, I agree. 
I was more wondering technically how we'd enforce it (just regex? parse/lex the code for top level definitions? etc). There are also other opportunities for linting here, so we should think about that a bit. Anyway, good stuff! On Wed, Jul 2, 2014 at 11:33 AM, Johan Tibell wrote: > On Wed, Jul 2, 2014 at 6:04 PM, Austin Seipp wrote: >> Of course, I'd also like it if this rule explicitly extended to >> top-level data types, type classes, etc as well. I believe that was >> the intention but I'm just making sure. :) > > That was the intention. > >> (Finally, I actually would like some kind of mechanical enforcement of >> this, but I don't think it has to be a hard rule - we shouldn't reject >> things on that basis alone. I'm not sure how we would do that anyway, >> though.) > > The way I suggest we do this, if we do this, is to add a linter to > Phabricator that adds a note to the code review that the new code > lacks the appropriate docs. That way we encourage users to add them, > without e.g. making validate fail or something similar. This is what > we do at Google (and FB too I presume). > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From carter.schonwald at gmail.com Wed Jul 2 18:20:53 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 2 Jul 2014 14:20:53 -0400 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: wow, this is great work! If theres a clear path to getting the generic tooling into 7.10, i'm all for it :) (and willing to help on concrete mechanical subtasks) On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman wrote: > hi all, > > I've added some code [1] [2] to GHCJS to make it run Template Haskell code > on node.js, rather than using the GHC linker. GHCJS has supported TH for a > long time now, but so far always relied on native (host) code for it. This > is the main reason that GHCJS always builds native and JavaScript code for > everything (another is that Cabal Setup.hs scripts need to be compiled to > some host-runnable form, but that can also be JavaScript if you have > node.js) > > Now besides the compiler having to do twice the work, this has some other > disadvantages: > > - Our JavaScript code has the same dependencies (packages) as native code, > which means packages like unix or Win32 show up somewhere, depending on the > host environment. This also limits our options in choosing JS-specific > packages. > - The Template Haskell code runs on the host environment, which might be > slightly different from the target, for example in integer size or > operating system specific constants. > > Moreover, building native code made the GHCJS installation procedure more > tricky, making end users think about libgmp or libiconv locations, since it > basically required the same preparation as building GHC from source. This > change will make installing much easier and more reliable (we still have to > update the build scripts). > > How it works is pretty simple: > > - When any code needs to be run on the target (hscCompileCoreExpr, through > the Hooks API new in GHC 7.8), GHCJS starts a node.js process with the > thrunner.js [3] script, > - GHCJS sends its RTS and the Template Haskell server code [1] to node.js, > the script starts a Haskell thread running the server, > - for every splice, GHCJS compiles it to JavaScript and links it using its > incremental linking functionality. 
The code for the splice, including > dependencies that have not yet been sent to the runner (for earlier > splices), is then sent in a RunTH [4] message, > - the runner loads and runs the code in the Q monad, can send queries to > GHCJS for reification, > - the runner sends back the result as a serialized Template Haskell AST > (using GHC.Generics for the Binary instances). > > All Template Haskell functionality is supported, including recent > additions for reifying modules and annotations. I still need to clean up > and push the patches for the directory and process packages, but after > that, the TH code can read/write files, run processes and interact with > them and make network connections, all through node.js. > > Now since this approach is in no way specific to JavaScript, I was > wondering if there's any interest in getting this functionality into GHC > 7.10 for general cross compilation. The runner would be a native (target) > program with dynamic libraries (or object files) being sent over to the > target machine (or emulator) for the splices. > > Thanks to Andras Slemmer from Prezi who helped build the initial proof of > concept (without reification) at BudHac. > > cheers, > > Luite > > [1] > https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs > [2] > https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs > [3] > https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js > [4] > https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jul 2 18:32:02 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 2 Jul 2014 18:32:02 +0000 Subject: Help needed: parsing pattern synonym contexts In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF103EB3F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF103F294F@DB3PRD3001MB020.064d.mgd.msft.net> <53B2F521.90901@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10420657@DB3PRD3001MB020.064d.mgd.msft.net> How can I reproduce? Is this on your wip/pattern-synonyms branch? Simon | -----Original Message----- | From: Dr. ERDI Gergo [mailto:gergo at erdi.hu] | Sent: 02 July 2014 12:45 | To: Simon Marlow | Cc: Simon Peyton Jones; GHC Devs | Subject: Re: Help needed: parsing pattern synonym contexts | | Hi, | | On Tue, 1 Jul 2014, Simon Marlow wrote: | | > The s/r conflicts can also be a problem, depending on what you're | trying to | > parse. It's generally a good idea to get rid of them if you can, but | at the | > least you should understand why they exist (use happy --info) and | document | > them in Parser.y.pp. | | OK I've uploaded the code to wip/T9023 and the happy --info report to | http://unsafePerform.IO/files/Parser.y.happy-info.gz | | Loads of shift/reduce conflicts come from state 214. Looking at them, I | would have thought that the solution would be to parse pattern synonym | definitions like we parse data constructors as types: by parsing it as a | single pattern and then splitting it. A quick shortcut to this should be | to just disable the parsing rule for infix pattern synonyms; so I tried | that. 
| | As expected, this cuts down on the number of shift/reduce conflicts in | that state considerably. The rest then seems to be coming from state 570: | | patsyn_context -> forall . (rule 157) | patsyn_context -> forall . context '=>' (rule 158) | | Unfortunately, I don't have a quick workaround for that one yet. | | Thanks, | Gergo From simonpj at microsoft.com Wed Jul 2 18:36:54 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 2 Jul 2014 18:36:54 +0000 Subject: Code redundancy in PrelNames? In-Reply-To: <201407020957.57571.jan.stolarek@p.lodz.pl> References: <201407020957.57571.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042069F@DB3PRD3001MB020.064d.mgd.msft.net> | Is this redundancy a conscious choice or just an accident? If it's It's an accident. Good catch. I suggest eliminating methName and using varQual instead. The reverse doesn't work; varQual is used a lot! thanks Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan | Stolarek | Sent: 02 July 2014 08:58 | To: ghc-devs at haskell.org | Subject: Code redundancy in PrelNames? | | I'm looking at PrelNames.lhs and I believe there is some redundancy that | can also lead to | confusion. Consider these definitions: | | bindMName = methName gHC_BASE (fsLit ">>=") bindMClassOpKey | arrAName = varQual aRROW (fsLit "arr") arrAIdKey | | Former is defined as "methName", while the latter is "varQual". These are | defined like this: | | varQual :: Module -> FastString -> Unique -> Name | varQual = mk_known_key_name varName | | mk_known_key_name :: NameSpace -> Module -> FastString -> Unique -> | Name | mk_known_key_name space modu str unique | = mkExternalName unique modu (mkOccNameFS space str) noSrcSpan | | methName :: Module -> FastString -> Unique -> Name | methName modu occ unique | = mkExternalName unique modu (mkVarOccFS occ) noSrcSpan | | Expanding the call to mk_known_key_name in varQual the RHS of varQual | becomes : | | mkExternalName unique modu (mkOccNameFS varName str) noSrcSpan | | Now, the call to "mkVarOccFS occ" in methName resolves to "mkOccNameFS | varName occ", making the | call to varQual identical to methName. | | Is this redundancy a conscious choice or just an accident? If it's | conscious then what is the | purpose? I spent several minutes trying to understand why bindMName and | arrAName are defined | differently. Names suggest that bindMName is a method, while arrAName is | a qualified variable. I | find this confusing and I think it would be better to drop varQual in | favour of methName. | Thoughts? | | Janek | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From austin at well-typed.com Wed Jul 2 19:59:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 2 Jul 2014 14:59:13 -0500 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding Message-ID: Hi *, First off, WARNING: BIKESHEDDING AHEAD. With that out of the way - today on IRC, there was some discussion about some stylistic/consistency issues in GHC, and being spurred by Johans recent proposal for top-level documentation, I figured perhaps we should beat the drum on this issue as well. 
The TL;DR is that GHC has a lot of inconsistent style issues, including things like: - Mixing literate haskell with non-literate haskell files - Legacy code with tabs and spaces intermixed - Related to the last one, trailing whitespace - Mixing styles of do notation in different parts of the compiler (braces vs no braces) - Probably things like indentation mismatches even in the same code - Probably many other things I've missed, obvious or not. These issues by themselves aren't too bad, but together they make the coding style for GHC very inconsistent, and this hurts maintainability a bit I feel. Furthermore, some of these issues block related improvements - for example, https://ghc.haskell.org/trac/ghc/ticket/9230 which is probably quite reasonable will likely be a bit annoying to implement until GHC itself is de-tabbed - we use -Werror during ./validate. This particular issue is what started the discussion. Also, with developers now using arcanist and phabricator, they have linting enabled for new patches, but they will often warn about surrounding issues, mostly tabs and trailing spaces. This is a bit annoying for submitters, and would be fixed by enforcing it. First attack plan ~~~~~~~~~~~~~~~ So, to start, I'd like to propose that we make some guidelines for these kinds of things, and also a plan to fix some of them. To start: #1) We should really consider going ahead and detabbing the remaining files that have them. We already enforce this on new commits with git hooks, but by doing this, we can make -fwarn-tabs a default flag and then validate with -Werror in the development process. #2) Similarly, we should kill all the trailing whitespace. (I think this is less controversial than #1) #3) We should most certainly move the remaining files from literate haskell to non-literate haskell. Most of the files in the compiler are already in this form, and the literate haskell documentation can't be used to generate PDFs or anything similar. I suggest we get rid of it. More Haskell users use non-literate files anyway. This is probably the least controversial. Merge issues ~~~~~~~~~~~~~~~~~ The reason we haven't done the above three things historically is that it makes merge conflicts nastier. A useful approximation suggested on IRC might be to detab and remove whitespace for files older than a certain date (say, 6 months). However, in general I'm thinking perhaps it's best to go ahead and bite the bullet. maybe. I'd like to know what other people think! If we have a vote and most people are in favor of doing this, maybe we should really do it. I'd especially like to hear about this if you have an outstanding branch. Some numbers on these issues ~~~~~~~~~~~~~~~~~~~~~~~~ Here are some quick numbers on where most of the tabs reside, as well as the breakdown of literate files vs non-literate files. NOTE: these tests occurred in the 'compiler' subdirectory of the GHC repository, which is where most of the relevant code is. LITERATE vs NON-LITERATE: $ find . -type f -iname '*.hs' | wc -l 206 $ find . -type f -iname '*.lhs' | wc -l 194 Non-literate wins by a slim margin! But having the compiler divided in half is really not a good thing IMO... NUMBER OF TABS PER SUBDIRECTORY: NOTE: this counts the number of lines which have tabs in them. It does not count the total number of tab occurrences. 
$ for x in `echo */`; do echo -n "$x:\t\t"; find $x -type f -regex '.*\.\(lhs\|hs\)' | xargs grep -P '\t' | wc -l; done
basicTypes/: 919
cbits/: 0
cmm/: 38
codeGen/: 0
coreSyn/: 843
deSugar/: 545
ghci/: 90
hsSyn/: 120
iface/: 213
llvmGen/: 0
main/: 8
nativeGen/: 1213
parser/: 19
prelude/: 182
profiling/: 39
rename/: 188
simplCore/: 754
simplStg/: 0
specialise/: 0
stgSyn/: 0
stranal/: 336
typecheck/: 1171
types/: 301
utils/: 220
vectorise/: 0

From these numbers, we can see a few useful things at least, primarily that there are definitely some places where removing tabs should be easy. For example, parser/, profiling/, main/, and cmm/ can all be de-tabbed without much of a problem, I think. nativeGen is very often not touched, so even though it has a *huge* amount of tabs, it can likely be de-tabbed as well with minimal impact. Other style issues ~~~~~~~~~~~~~~~~~ We should also discuss some related issues, like what general block-width to use for indentations, naming conventions, and other stuff. However, I leave this all to you, and perhaps it is best we split that part off into a separate thread. Some things I'd like you all to consider: - Block width for indentation - Naming conventions (we use camelCase and_underscores_sometimes which isReally_confusing) - Import/export styles (I think we have some sloppiness here too) - Other things worth arguing forever about. Thoughts on the above issues? -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From slyich at gmail.com Wed Jul 2 21:35:05 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Thu, 3 Jul 2014 00:35:05 +0300 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: Message-ID: <20140703003505.0f3f361c@sf> On Wed, 2 Jul 2014 14:59:13 -0500 Austin Seipp wrote: > Hi *, > > First off, WARNING: BIKESHEDDING AHEAD. > > With that out of the way - today on IRC, there was some discussion > about some stylistic/consistency issues in GHC, and being spurred by > Johans recent proposal for top-level documentation, I figured perhaps > we should beat the drum on this issue as well. > > The TL;DR is that GHC has a lot of inconsistent style issues, > including things like: > > - Mixing literate haskell with non-literate haskell files > - Legacy code with tabs and spaces intermixed > - Related to the last one, trailing whitespace > - Mixing styles of do notation in different parts of the compiler > (braces vs no braces) > - Probably things like indentation mismatches even in the same code > - Probably many other things I've missed, obvious or not. I'd vote for detabbing/un-.lhs-ing in one go and ASAP. For example, this weekend. There will always be people creating derivative work on top of current source tree. I, for example, didn't dare to detab rts bits to make patches readable. Detabbing commits are very easy to check in git: 'git show -w' (should output nothing). It's a pain for current patches, but I'd say it's bearable. The rest of changes are harder to achieve, but should be quite easy to maintain with hlint forcing one rule (everyone agree on) at a time. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: not available URL: From gergo at erdi.hu Wed Jul 2 23:56:49 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Thu, 3 Jul 2014 07:56:49 +0800 Subject: Help needed: parsing pattern synonym contexts In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10420657@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF103EB3F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF103F294F@DB3PRD3001MB020.064d.mgd.msft.net> <53B2F521.90901@gmail.com> <618BE556AADD624C9C918AA5D5911BEF10420657@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: >> OK I've uploaded the code to wip/T9023 It's pushed to wip/T9023 on the GHC git repo. On Jul 3, 2014 2:32 AM, "Simon Peyton Jones" wrote: > How can I reproduce? Is this on your wip/pattern-synonyms branch? > > Simon > > | -----Original Message----- > | From: Dr. ERDI Gergo [mailto:gergo at erdi.hu] > | Sent: 02 July 2014 12:45 > | To: Simon Marlow > | Cc: Simon Peyton Jones; GHC Devs > | Subject: Re: Help needed: parsing pattern synonym contexts > | > | Hi, > | > | On Tue, 1 Jul 2014, Simon Marlow wrote: > | > | > The s/r conflicts can also be a problem, depending on what you're > | trying to > | > parse. It's generally a good idea to get rid of them if you can, but > | at the > | > least you should understand why they exist (use happy --info) and > | document > | > them in Parser.y.pp. > | > | OK I've uploaded the code to wip/T9023 and the happy --info report to > | http://unsafePerform.IO/files/Parser.y.happy-info.gz > | > | Loads of shift/reduce conflicts come from state 214. Looking at them, I > | would have thought that the solution would be to parse pattern synonym > | definitions like we parse data constructors as types: by parsing it as a > | single pattern and then splitting it. A quick shortcut to this should be > | to just disable the parsing rule for infix pattern synonyms; so I tried > | that. > | > | As expected, this cuts down on the number of shift/reduce conflicts in > | that state considerably. The rest then seems to be coming from state 570: > | > | patsyn_context -> forall . (rule 157) > | patsyn_context -> forall . context '=>' (rule 158) > | > | Unfortunately, I don't have a quick workaround for that one yet. > | > | Thanks, > | Gergo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazu at iij.ad.jp Thu Jul 3 00:16:34 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 03 Jul 2014 09:16:34 +0900 (JST) Subject: GHC 7.8.3 release In-Reply-To: <53B09D9D.9020907@fuuzetsu.co.uk> References: <53B09D9D.9020907@fuuzetsu.co.uk> Message-ID: <20140703.091634.403313079140510076.kazu@iij.ad.jp> Hi, I would like to the status of GHC 7.8.3, too. Thanks. --Kazu > On 05/27/2014 10:06 AM, Austin Seipp wrote: >> Hello all, >> >> After a long week, I've finally gotten a little time to reply to >> emails, and I mainly have one question I'd like to ask. >> >> First, please direct your attention to this: >> >> https://ghc.haskell.org/trac/ghc/query?status=closed&status=merge&status=patch&milestone=7.8.3&group=resolution&col=id&col=summary&col=owner&col=type&col=priority&col=component&col=version&order=priority >> >> This is the 7.8.3 milestone, but it only considers things that are: >> >> - 1) Fixed >> - 2) Going to be merged >> - 3) Are a patch to be still merged. 
>> >> That is, it is a solid representation of the difference between 7.8.2 >> and the 7.8 branch tip. >> >> The question is: when should we do the release? There are several bugs >> there that seem quite problematic for users - #9045, #7097, #9001, >> #8768 and #9078 in particular. >> >> If these bugs are really problematic (and I sort of feel they are) >> then the release can happen soon. I can do it within a week from now, >> and we could punt more to a 7.8.4 release. >> >> I ask this because my time to dedicate to GHC is a bit thin right now, >> so you must help me decide what's important! So please let me know - >> just a general vote in favor of doing it within some X timeframe (even >> 'real soon' or 'a week would be great') would be nice. >> >> PS: I apologize for the lack of status updates and brief email - my >> time for GHC has been in very short order the past two weeks in >> particular, and I've finally just returned to a computer (not mine) >> for right now to ask this. >> >> PPS: This might also impact the 7.10 schedule, but last Simon and I >> talked, we thought perhaps shooting for ICFP this time (and actually >> hitting it) was a good plan. So I'd estimate on that a 7.8.4 might >> happen a few months from now, after summer. >> > > It has been a month since and the general opinion seems to have been > Real Soon Now, are there any big blockers? > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Thu Jul 3 00:32:51 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 2 Jul 2014 20:32:51 -0400 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: Message-ID: I have mixed feelings on all of this. First, a disclaimer: I have a significant (~10,000 lines of difference, perhaps) branch and would be hit hard by this change. (Branch is at github.com/goldfirere/ghc under the "nokinds" branch.) That said, if I'm careful as I'm merging, I could probably make this less painful by merging up to the commit right before the changeover, then merging just whitespace changes, then merging everything afterward. It probably wouldn't be much worse than merges are for me, anyway. (They're already terrible, but that's strictly my problem.) But, I'm not sure of the practical benefits of doing any of this. The aesthete in me really wants this change to happen -- these inconsistencies and the tabs are surely a blemish. At the same time, I don't think my understanding of the code has ever really been hindered by these problems. If anything, I think a rigid style guide would slow me down a bit, because the perfectionist in me would require making sure all my code conformed well. Now, I try to have my code match the surrounding context, but I can see that it's not critical I get it bang on. So, addressing advocates of this change: why do you want it? Is it just for the sake of beauty (not to diminish the importance of beauty)? Or do the problems outlines here properly trip you up? If the answers are mostly about beauty, that doesn't kill the proposal, but it allows us to evaluate the pros and cons a little more crisply. Richard On Jul 2, 2014, at 3:59 PM, Austin Seipp wrote: > Hi *, > > First off, WARNING: BIKESHEDDING AHEAD. 
> > With that out of the way - today on IRC, there was some discussion > about some stylistic/consistency issues in GHC, and being spurred by > Johans recent proposal for top-level documentation, I figured perhaps > we should beat the drum on this issue as well. > > The TL;DR is that GHC has a lot of inconsistent style issues, > including things like: > > - Mixing literate haskell with non-literate haskell files > - Legacy code with tabs and spaces intermixed > - Related to the last one, trailing whitespace > - Mixing styles of do notation in different parts of the compiler > (braces vs no braces) > - Probably things like indentation mismatches even in the same code > - Probably many other things I've missed, obvious or not. > > These issues by themselves aren't too bad, but together they make the > coding style for GHC very inconsistent, and this hurts maintainability > a bit I feel. Furthermore, some of these issues block related > improvements - for example, > https://ghc.haskell.org/trac/ghc/ticket/9230 which is probably quite > reasonable will likely be a bit annoying to implement until GHC itself > is de-tabbed - we use -Werror during ./validate. This particular issue > is what started the discussion. > > Also, with developers now using arcanist and phabricator, they have > linting enabled for new patches, but they will often warn about > surrounding issues, mostly tabs and trailing spaces. This is a bit > annoying for submitters, and would be fixed by enforcing it. > > First attack plan > ~~~~~~~~~~~~~~~ > > So, to start, I'd like to propose that we make some guidelines for > these kinds of things, and also a plan to fix some of them. To start: > > #1) We should really consider going ahead and detabbing the remaining > files that have them. We already enforce this on new commits with git > hooks, but by doing this, we can make -fwarn-tabs a default flag and > then validate with -Werror in the development process. > > #2) Similarly, we should kill all the trailing whitespace. (I think > this is less controversial than #1) > > #3) We should most certainly move the remaining files from literate > haskell to non-literate haskell. Most of the files in the compiler are > already in this form, and the literate haskell documentation can't be > used to generate PDFs or anything similar. I suggest we get rid of it. > More Haskell users use non-literate files anyway. This is probably the > least controversial. > > Merge issues > ~~~~~~~~~~~~~~~~~ > > The reason we haven't done the above three things historically is that > it makes merge conflicts nastier. A useful approximation suggested on > IRC might be to detab and remove whitespace for files older than a > certain date (say, 6 months). > > However, in general I'm thinking perhaps it's best to go ahead and > bite the bullet. maybe. I'd like to know what other people think! If > we have a vote and most people are in favor of doing this, maybe we > should really do it. > > I'd especially like to hear about this if you have an outstanding branch. > > Some numbers on these issues > ~~~~~~~~~~~~~~~~~~~~~~~~ > > Here are some quick numbers on where most of the tabs reside, as well > as the breakdown of literate files vs non-literate files. > > NOTE: these tests occurred in the 'compiler' subdirectory of the GHC > repository, which is where most of the relevant code is. > > LITERATE vs NON-LITERATE: > > $ find . -type f -iname '*.hs' | wc -l > 206 > > $ find . -type f -iname '*.lhs' | wc -l > 194 > > Non-literate wins by a slim margin! 
But having the compiler divided in > half is really not a good thing IMO... > > NUMBER OF TABS PER SUBDIRECTORY: > > NOTE: this counts the number of lines which have tabs in them. It does > not count the total number of tab occurrences. > > $ for x in `echo */`; do echo -n "$x:\t\t"; find $x -type f -regex > '.*\.\(lhs\|hs\)' | xargs grep -P '\t' | wc -l; done > basicTypes/: 919 > cbits/: 0 > cmm/: 38 > codeGen/: 0 > coreSyn/: 843 > deSugar/: 545 > ghci/: 90 > hsSyn/: 120 > iface/: 213 > llvmGen/: 0 > main/: 8 > nativeGen/: 1213 > parser/: 19 > prelude/: 182 > profiling/: 39 > rename/: 188 > simplCore/: 754 > simplStg/: 0 > specialise/: 0 > stgSyn/: 0 > stranal/: 336 > typecheck/: 1171 > types/: 301 > utils/: 220 > vectorise/: 0 > > From these numbers, we can see a few useful things at least, primarily > that there are definitely some places where removing tabs should be > easy. For example, parser/, profiling/, main/, and cmm/ can all be > de-tabbed without much of a problem, I think. > > nativeGen is very often not touched, so even though it has a *huge* > amount of tabs, it can likely be de-tabbed as well with minimal > impact. > > Other style issues > ~~~~~~~~~~~~~~~~~ > > We should also discuss some related issues, like what general > block-width to use for indentations, naming conventions, and other > stuff. However, I leave this all to you, and perhaps it is best we > split that part off into a separate thread. Some things I'd like you > all to consider: > > - Block width for indentation > - Naming conventions (we use camelCase and_underscores_sometimes > which isReally_confusing) > - Import/export styles (I think we have some sloppiness here too) > - Other things worth arguing forever about. > > Thoughts on the above issues? > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Thu Jul 3 00:54:20 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 2 Jul 2014 20:54:20 -0400 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: This would probably be a great boon for those trying to use haskell for Android and IOS right? how might the emulation setup work for those? On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald wrote: > wow, this is great work! > > If theres a clear path to getting the generic tooling into 7.10, i'm all > for it :) (and willing to help on concrete mechanical subtasks) > > > On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman > wrote: > >> hi all, >> >> I've added some code [1] [2] to GHCJS to make it run Template Haskell >> code on node.js, rather than using the GHC linker. GHCJS has supported TH >> for a long time now, but so far always relied on native (host) code for it. >> This is the main reason that GHCJS always builds native and JavaScript code >> for everything (another is that Cabal Setup.hs scripts need to be compiled >> to some host-runnable form, but that can also be JavaScript if you have >> node.js) >> >> Now besides the compiler having to do twice the work, this has some other >> disadvantages: >> >> - Our JavaScript code has the same dependencies (packages) as native >> code, which means packages like unix or Win32 show up somewhere, depending >> on the host environment. 
This also limits our options in choosing >> JS-specific packages. >> - The Template Haskell code runs on the host environment, which might be >> slightly different from the target, for example in integer size or >> operating system specific constants. >> >> Moreover, building native code made the GHCJS installation procedure more >> tricky, making end users think about libgmp or libiconv locations, since it >> basically required the same preparation as building GHC from source. This >> change will make installing much easier and more reliable (we still have to >> update the build scripts). >> >> How it works is pretty simple: >> >> - When any code needs to be run on the target (hscCompileCoreExpr, >> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >> the thrunner.js [3] script, >> - GHCJS sends its RTS and the Template Haskell server code [1] to >> node.js, the script starts a Haskell thread running the server, >> - for every splice, GHCJS compiles it to JavaScript and links it using >> its incremental linking functionality. The code for the splice, including >> dependencies that have not yet been sent to the runner (for earlier >> splices), is then sent in a RunTH [4] message, >> - the runner loads and runs the code in the Q monad, can send queries to >> GHCJS for reification, >> - the runner sends back the result as a serialized Template Haskell AST >> (using GHC.Generics for the Binary instances). >> >> All Template Haskell functionality is supported, including recent >> additions for reifying modules and annotations. I still need to clean up >> and push the patches for the directory and process packages, but after >> that, the TH code can read/write files, run processes and interact with >> them and make network connections, all through node.js. >> >> Now since this approach is in no way specific to JavaScript, I was >> wondering if there's any interest in getting this functionality into GHC >> 7.10 for general cross compilation. The runner would be a native (target) >> program with dynamic libraries (or object files) being sent over to the >> target machine (or emulator) for the splices. >> >> Thanks to Andras Slemmer from Prezi who helped build the initial proof of >> concept (without reification) at BudHac. >> >> cheers, >> >> Luite >> >> [1] >> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >> [2] >> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >> [3] >> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >> [4] >> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >> >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Thu Jul 3 02:43:17 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Wed, 2 Jul 2014 19:43:17 -0700 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: Message-ID: Hello, I also have somewhat mixed feelings about this: yes, it would be nice to get rid of tabs, trailing space, and perhaps even literal files but, honestly, I don't think that they are any kind of serious blocker to working on the compiler. 
Otoh, I'd have to do a bunch of mergining also, and I am really not looking forward to that... The only practical benefit for me would be, possibly, better syntax highlighting in `vim`, which is pretty terrible at handling the \begin\end style literate files. So, overall, I kind of like our current policy of fixing tabs and white-space when we have to modify the code anyway. Turning literate to non-literate files would make relatively few changes to the actual file (i.e. add {- -} here and there), but the names would be different. Does anyone have experience with git and this sort of change (i.e., would it be able to work out what happened, or would manual intervention be required)? Cheers, -Iavor On Wed, Jul 2, 2014 at 5:32 PM, Richard Eisenberg wrote: > I have mixed feelings on all of this. > > First, a disclaimer: I have a significant (~10,000 lines of difference, > perhaps) branch and would be hit hard by this change. (Branch is at > github.com/goldfirere/ghc under the "nokinds" branch.) That said, if I'm > careful as I'm merging, I could probably make this less painful by merging > up to the commit right before the changeover, then merging just whitespace > changes, then merging everything afterward. It probably wouldn't be much > worse than merges are for me, anyway. (They're already terrible, but that's > strictly my problem.) > > But, I'm not sure of the practical benefits of doing any of this. The > aesthete in me really wants this change to happen -- these inconsistencies > and the tabs are surely a blemish. At the same time, I don't think my > understanding of the code has ever really been hindered by these problems. > If anything, I think a rigid style guide would slow me down a bit, because > the perfectionist in me would require making sure all my code conformed > well. Now, I try to have my code match the surrounding context, but I can > see that it's not critical I get it bang on. So, addressing advocates of > this change: why do you want it? Is it just for the sake of beauty (not to > diminish the importance of beauty)? Or do the problems outlines here > properly trip you up? > > If the answers are mostly about beauty, that doesn't kill the proposal, > but it allows us to evaluate the pros and cons a little more crisply. > > Richard > > On Jul 2, 2014, at 3:59 PM, Austin Seipp wrote: > > > Hi *, > > > > First off, WARNING: BIKESHEDDING AHEAD. > > > > With that out of the way - today on IRC, there was some discussion > > about some stylistic/consistency issues in GHC, and being spurred by > > Johans recent proposal for top-level documentation, I figured perhaps > > we should beat the drum on this issue as well. > > > > The TL;DR is that GHC has a lot of inconsistent style issues, > > including things like: > > > > - Mixing literate haskell with non-literate haskell files > > - Legacy code with tabs and spaces intermixed > > - Related to the last one, trailing whitespace > > - Mixing styles of do notation in different parts of the compiler > > (braces vs no braces) > > - Probably things like indentation mismatches even in the same code > > - Probably many other things I've missed, obvious or not. > > > > These issues by themselves aren't too bad, but together they make the > > coding style for GHC very inconsistent, and this hurts maintainability > > a bit I feel. 
Furthermore, some of these issues block related > > improvements - for example, > > https://ghc.haskell.org/trac/ghc/ticket/9230 which is probably quite > > reasonable will likely be a bit annoying to implement until GHC itself > > is de-tabbed - we use -Werror during ./validate. This particular issue > > is what started the discussion. > > > > Also, with developers now using arcanist and phabricator, they have > > linting enabled for new patches, but they will often warn about > > surrounding issues, mostly tabs and trailing spaces. This is a bit > > annoying for submitters, and would be fixed by enforcing it. > > > > First attack plan > > ~~~~~~~~~~~~~~~ > > > > So, to start, I'd like to propose that we make some guidelines for > > these kinds of things, and also a plan to fix some of them. To start: > > > > #1) We should really consider going ahead and detabbing the remaining > > files that have them. We already enforce this on new commits with git > > hooks, but by doing this, we can make -fwarn-tabs a default flag and > > then validate with -Werror in the development process. > > > > #2) Similarly, we should kill all the trailing whitespace. (I think > > this is less controversial than #1) > > > > #3) We should most certainly move the remaining files from literate > > haskell to non-literate haskell. Most of the files in the compiler are > > already in this form, and the literate haskell documentation can't be > > used to generate PDFs or anything similar. I suggest we get rid of it. > > More Haskell users use non-literate files anyway. This is probably the > > least controversial. > > > > Merge issues > > ~~~~~~~~~~~~~~~~~ > > > > The reason we haven't done the above three things historically is that > > it makes merge conflicts nastier. A useful approximation suggested on > > IRC might be to detab and remove whitespace for files older than a > > certain date (say, 6 months). > > > > However, in general I'm thinking perhaps it's best to go ahead and > > bite the bullet. maybe. I'd like to know what other people think! If > > we have a vote and most people are in favor of doing this, maybe we > > should really do it. > > > > I'd especially like to hear about this if you have an outstanding branch. > > > > Some numbers on these issues > > ~~~~~~~~~~~~~~~~~~~~~~~~ > > > > Here are some quick numbers on where most of the tabs reside, as well > > as the breakdown of literate files vs non-literate files. > > > > NOTE: these tests occurred in the 'compiler' subdirectory of the GHC > > repository, which is where most of the relevant code is. > > > > LITERATE vs NON-LITERATE: > > > > $ find . -type f -iname '*.hs' | wc -l > > 206 > > > > $ find . -type f -iname '*.lhs' | wc -l > > 194 > > > > Non-literate wins by a slim margin! But having the compiler divided in > > half is really not a good thing IMO... > > > > NUMBER OF TABS PER SUBDIRECTORY: > > > > NOTE: this counts the number of lines which have tabs in them. It does > > not count the total number of tab occurrences. 
> > > > $ for x in `echo */`; do echo -n "$x:\t\t"; find $x -type f -regex > > '.*\.\(lhs\|hs\)' | xargs grep -P '\t' | wc -l; done > > basicTypes/: 919 > > cbits/: 0 > > cmm/: 38 > > codeGen/: 0 > > coreSyn/: 843 > > deSugar/: 545 > > ghci/: 90 > > hsSyn/: 120 > > iface/: 213 > > llvmGen/: 0 > > main/: 8 > > nativeGen/: 1213 > > parser/: 19 > > prelude/: 182 > > profiling/: 39 > > rename/: 188 > > simplCore/: 754 > > simplStg/: 0 > > specialise/: 0 > > stgSyn/: 0 > > stranal/: 336 > > typecheck/: 1171 > > types/: 301 > > utils/: 220 > > vectorise/: 0 > > > > From these numbers, we can see a few useful things at least, primarily > > that there are definitely some places where removing tabs should be > > easy. For example, parser/, profiling/, main/, and cmm/ can all be > > de-tabbed without much of a problem, I think. > > > > nativeGen is very often not touched, so even though it has a *huge* > > amount of tabs, it can likely be de-tabbed as well with minimal > > impact. > > > > Other style issues > > ~~~~~~~~~~~~~~~~~ > > > > We should also discuss some related issues, like what general > > block-width to use for indentations, naming conventions, and other > > stuff. However, I leave this all to you, and perhaps it is best we > > split that part off into a separate thread. Some things I'd like you > > all to consider: > > > > - Block width for indentation > > - Naming conventions (we use camelCase and_underscores_sometimes > > which isReally_confusing) > > - Import/export styles (I think we have some sloppiness here too) > > - Other things worth arguing forever about. > > > > Thoughts on the above issues? > > > > -- > > Regards, > > > > Austin Seipp, Haskell Consultant > > Well-Typed LLP, http://www.well-typed.com/ > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mp2e at archlinux.us Thu Jul 3 03:18:15 2014 From: mp2e at archlinux.us (member MP2E) Date: Wed, 2 Jul 2014 20:18:15 -0700 Subject: Fwd: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: I signed up for the mailing list to express my high level of interest in this project, I have been working with GHC 7.8 built as a cross compiler for Android and currently the only way to use libraries like lens, singletons, and anything else that uses Template Haskell is to add a patch to enable it in GHC during Stage1, and then to make drastic modifications to any cabal project source files that use TH and some hlint suggestions. Typically the current workflow for a cross-compiler is as follows: 1. Cross GHC 7.8 using the Stage1 TH patch and a few other patches available from Neurocyte. These need to be updated slightly as they are for GHC 7.6, I can post updated patches if anyone else is curious. I would be very interested in cleaning up and integrating the needed changes into GHC, if cross Template Haskell is to be implemented. 2. Create wrappers for hsc2hs and cabal for the cross compiler (I also have updated these) 3. Use a program called EvilSplicer on a log generated with 'cabal build --ghc-options=-ddump-splices 2>&1 | tee log' with the host compiler. 
This splices the host TH into the source files for use with the cross toolchain. You may notice this is cheating, and may not even work if the TH being generated is dependent on the target platform. 4. Remove most hlint hints used with the ANN pragma. For some reason these cause GHC to load shared objects which cause it to immediately crash(probably because of the whole host system/target system mismatch). I was told this is related to the TemplateHaskell issue Even getting lens to build is quite a chore, and I think with these changes, we could look forward to better iOS and Android support! I can't imagine the situation is much better for iOS devs at the moment. Again, I'd like to help get Android support polished, provided this change is ported and merged however I can :) On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald wrote: > This would probably be a great boon for those trying to use haskell for > Android and IOS right? how might the emulation setup work for those? > > > > > On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> wow, this is great work! >> >> If theres a clear path to getting the generic tooling into 7.10, i'm all >> for it :) (and willing to help on concrete mechanical subtasks) >> >> >> On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman >> wrote: >> >>> hi all, >>> >>> I've added some code [1] [2] to GHCJS to make it run Template Haskell >>> code on node.js, rather than using the GHC linker. GHCJS has supported TH >>> for a long time now, but so far always relied on native (host) code for it. >>> This is the main reason that GHCJS always builds native and JavaScript code >>> for everything (another is that Cabal Setup.hs scripts need to be compiled >>> to some host-runnable form, but that can also be JavaScript if you have >>> node.js) >>> >>> Now besides the compiler having to do twice the work, this has some >>> other disadvantages: >>> >>> - Our JavaScript code has the same dependencies (packages) as native >>> code, which means packages like unix or Win32 show up somewhere, depending >>> on the host environment. This also limits our options in choosing >>> JS-specific packages. >>> - The Template Haskell code runs on the host environment, which might be >>> slightly different from the target, for example in integer size or >>> operating system specific constants. >>> >>> Moreover, building native code made the GHCJS installation procedure >>> more tricky, making end users think about libgmp or libiconv locations, >>> since it basically required the same preparation as building GHC from >>> source. This change will make installing much easier and more reliable (we >>> still have to update the build scripts). >>> >>> How it works is pretty simple: >>> >>> - When any code needs to be run on the target (hscCompileCoreExpr, >>> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >>> the thrunner.js [3] script, >>> - GHCJS sends its RTS and the Template Haskell server code [1] to >>> node.js, the script starts a Haskell thread running the server, >>> - for every splice, GHCJS compiles it to JavaScript and links it using >>> its incremental linking functionality. 
The code for the splice, including >>> dependencies that have not yet been sent to the runner (for earlier >>> splices), is then sent in a RunTH [4] message, >>> - the runner loads and runs the code in the Q monad, can send queries to >>> GHCJS for reification, >>> - the runner sends back the result as a serialized Template Haskell AST >>> (using GHC.Generics for the Binary instances). >>> >>> All Template Haskell functionality is supported, including recent >>> additions for reifying modules and annotations. I still need to clean up >>> and push the patches for the directory and process packages, but after >>> that, the TH code can read/write files, run processes and interact with >>> them and make network connections, all through node.js. >>> >>> Now since this approach is in no way specific to JavaScript, I was >>> wondering if there's any interest in getting this functionality into GHC >>> 7.10 for general cross compilation. The runner would be a native (target) >>> program with dynamic libraries (or object files) being sent over to the >>> target machine (or emulator) for the splices. >>> >>> Thanks to Andras Slemmer from Prezi who helped build the initial proof >>> of concept (without reification) at BudHac. >>> >>> cheers, >>> >>> Luite >>> >>> [1] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >>> [2] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >>> [3] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >>> [4] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >>> >>> _______________________________________________ >>> Glasgow-haskell-users mailing list >>> Glasgow-haskell-users at haskell.org >>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>> >>> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at cse.unsw.edu.au Thu Jul 3 04:38:33 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Thu, 3 Jul 2014 14:38:33 +1000 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: <53B3BDEA.8060108@gmail.com> Message-ID: <30C04B72-0C17-4A0C-B41E-7CBF0694333E@cse.unsw.edu.au> Which makes a lot of GHC code more readable ? I?m serious! Manuel PS: I have resisted it for a while, but after slogging through GHC for extended periods, I?ve come to appreciate the additional clarity in large and tricky functions (e.g., in the type checker & renamer). Edward Kmett : > That has a high chance of backfiring and requiring everyone to use do { ...; ... } with explicit braces and semis. ;) > > -Edward > > > On Wed, Jul 2, 2014 at 4:08 AM, Simon Marlow wrote: > Agreed, let's do it. Thanks for the well-argued proposal. > > Next up: consistent style :-) > > Cheers, > Simon > > > On 27/06/2014 10:51, Johan Tibell wrote: > Hi! > > I found myself exploring new parts of the GHC code base the last few > weeks (exciting!), which again reminded me of my biggest frustration > when working on GHC: the lack of per-function/type (Haddock) comments. 
> > GHC code is sometimes commented with "notes", which are great but tend > to (1) mostly cover the exceptional cases and (2) talk about the > implementation of a function, not how a caller might use it or why. > > Lack of documentation, in GHC and other software projects, usually has > (at least) two causes: > > * Programmers comment code they think is "complex enough to warrant a > > comment". The problem is that the author is usually a poor judge of > what's complex enough, because he/she is too familiar with the code > and tends to under-document code when following this principle. > * Documenting is boring and tends to have little benefit the person > > writing to documentation. Given lack of incentives we tend to > document less than we ought to. > > I've only seen one successful way to combat the lack of documentation > that stems from the above: have the project's style guide mandate that > top-level functions and types (or at least those that are exported) have > documentation. This works well at Google. > > Anecdote: we have one code base inside Google that was until recently > exempt from this rule and documentation is almost completely absent in > that code base, even though hundreds of engineers work on and need to > understand it every day. This breeds institutional knowledge problems > i.e. if the author of a core piece of code leaves, lots of knowledge is > lost. > > *Proposal: *I propose that we require that new top-level functions and > > types have Haddock comments, even if they start out as a single, humble > sentence. > > I've found that putting even that one sentence (1) helps new users and > (2) establishes a place for improvements to be made. There's a strong > "broken window" effect to lack of comments, in that lack of comments > breeds more lack of comments as developers follow established practices. > > We should add this requirement to the style guide. Having it as a > written down policy tends to prevent having to re-hash the whole > argument about documentation over and over again. This has also helped > us a lot at Google, because programmers can spend endless amount of time > arguing about comments, placement of curly braces, etc. and having a > written policy helps cut down on that. > > To give an idea of how to write good comments, here are two examples of > undocumented code I ran into in GHC and how better comments would have > helped. > > *First example* > > In compiler/nativeGen/X86/Instr.hs there's a (local) function called > mkRUR, which is a helper function use when computing instruction > register usage. > > The first question that I asked upon seeing uses of that function was > "what does RUR stand for?" Given the context the function is in, I > guessed it stands for read-update-read, because R is used to mean "read" > in the enclosing function and "updating" is related to "reading" so that > must be what U stands for. It turns out that it stands for > RegUsageReadonly. Here's a comment that would have captured, in a single > sentence, what this function is for: > > -- | Create register usage info for instruction that only > -- reads registers. > mkRUR src = src' `seq` RU src' [] > where src' = filter (interesting platform) src > > That already a big improvement. A note about the register filtering, > which means that not all registers you pass to the function will be > recorded as being read in the end, could also be useful. 
> > Aside: providing a type signature, which would have made it clear that > the return type is RU, might also have helped in this particular case. > > *Second example* > > In the same file there a function called x86_regUsageOfInstr. It's the > function that encloses the local function mkRUR above. > > I could figure out that this function has something to do with register > usage, of the instruction passed as an argument, and that register usage > is important for the register allocator. However, trying to understand > in more detail what that meant was more of challenge than it needed to > be. First, a comment more clearly explaining what computing register > usage means in practice would be helpful: > > -- | Returns which registers are read and written by this > -- instruction, as a (read, written) pair. This info is used > -- by the register allocator. > x86_regUsageOfInstr :: Platform -> Instr -> RegUsage > > The reason mentioning that the return value is essentially a (read, > written) pair is helpful is because the body of the function a big case > statement full of lines like this one: > > GCMP _ src1 src2 -> mkRUR [src1,src2] > ... > FDIV _ src dst -> usageRM src dst > > It's not immediately clear that all the various helper functions used > here just end up computing a pair of the above form. A top-level comment > lets you understand what's going on without understanding exactly what > all these helper functions are doing. > > Thoughts? > > -- Johan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jul 3 05:57:24 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 3 Jul 2014 05:57:24 +0000 Subject: GHC 7.8.3 release In-Reply-To: <20140703.091634.403313079140510076.kazu@iij.ad.jp> References: <53B09D9D.9020907@fuuzetsu.co.uk> <20140703.091634.403313079140510076.kazu@iij.ad.jp> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10420C03@DB3PRD3001MB020.064d.mgd.msft.net> | I would like to the status of GHC 7.8.3, too. Thanks. Any day now. When I talked to Austin on Monday it was going to be Monday, then Tuesday. Maybe today. Austin: can you brief us? Thanks Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Kazu | Yamamoto | Sent: 03 July 2014 01:17 | To: ghc-devs at haskell.org | Subject: Re: GHC 7.8.3 release | | Hi, | | I would like to the status of GHC 7.8.3, too. Thanks. | | --Kazu | | > On 05/27/2014 10:06 AM, Austin Seipp wrote: | >> Hello all, | >> | >> After a long week, I've finally gotten a little time to reply to | >> emails, and I mainly have one question I'd like to ask. 
| >> | >> First, please direct your attention to this: | >> | >> | https://ghc.haskell.org/trac/ghc/query?status=closed&status=merge&sta | >> | tus=patch&milestone=7.8.3&group=resolution&col=id&col=summary&col=own | >> er&col=type&col=priority&col=component&col=version&order=priority | >> | >> This is the 7.8.3 milestone, but it only considers things that are: | >> | >> - 1) Fixed | >> - 2) Going to be merged | >> - 3) Are a patch to be still merged. | >> | >> That is, it is a solid representation of the difference between | 7.8.2 | >> and the 7.8 branch tip. | >> | >> The question is: when should we do the release? There are several | >> bugs there that seem quite problematic for users - #9045, #7097, | >> #9001, | >> #8768 and #9078 in particular. | >> | >> If these bugs are really problematic (and I sort of feel they are) | >> then the release can happen soon. I can do it within a week from | now, | >> and we could punt more to a 7.8.4 release. | >> | >> I ask this because my time to dedicate to GHC is a bit thin right | >> now, so you must help me decide what's important! So please let me | >> know - just a general vote in favor of doing it within some X | >> timeframe (even 'real soon' or 'a week would be great') would be | nice. | >> | >> PS: I apologize for the lack of status updates and brief email - my | >> time for GHC has been in very short order the past two weeks in | >> particular, and I've finally just returned to a computer (not mine) | >> for right now to ask this. | >> | >> PPS: This might also impact the 7.10 schedule, but last Simon and I | >> talked, we thought perhaps shooting for ICFP this time (and actually | >> hitting it) was a good plan. So I'd estimate on that a 7.8.4 might | >> happen a few months from now, after summer. | >> | > | > It has been a month since and the general opinion seems to have been | > Real Soon Now, are there any big blockers? | > | > -- | > Mateusz K. | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Thu Jul 3 06:44:11 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 3 Jul 2014 08:44:11 +0200 Subject: "No entry for "Haskell CPP command"" error with testsuite Message-ID: <201407030844.11869.jan.stolarek@p.lodz.pl> I'm trying to run the testsuite in my symlinked build tree, but keep getting this error: [killy at xerxes : /dane/projekty/ghc/build] make test make -C testsuite/tests CLEANUP=1 OUTPUT_SUMMARY=../../testsuite_summary.txt fast make[1]: Wej?cie do katalogu `/dane/projekty/ghc/build/testsuite/tests' ../mk/boilerplate.mk:168: ../mk/ghcconfig_dane_projekty_ghc_build_inplace_bin_ghc-stage2.mk: No such file or directory "/dane/projekty/ghc/build/inplace/bin/ghc-stage2" --make -o ../mk/ghc-config ../mk/ghc-config.hs No entry for "Haskell CPP command" in "/dane/projekty/ghc/build/inplace/lib/settings" make[1]: *** [../mk/ghc-config] Error 1 make[1]: Leaving directory `/dane/projekty/ghc/build/testsuite/tests' make: *** [test] Error 2 Help? 
Janek From jan.stolarek at p.lodz.pl Thu Jul 3 08:44:45 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 3 Jul 2014 10:44:45 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: Message-ID: <201407031044.46291.jan.stolarek@p.lodz.pl> I fully support Austin's proposal. My eyes hurt when I work on 5 files and each of them is written in a different style. Now, to address a few points that were raised. > Is it just for the sake of beauty (not to diminish the importance of beauty)? * I believe that trailing whitespaces are a practical issue: they are invisible to the human eye (unless you have (setq-default show-trailing-whitespace t) \n (setq-default indicate-empty-lines t) in your .emacs file) but carry a semantic meaning for git and other version control systems. This means that accidental removal of a trailing whitespace - which can and will happen if you don't highlight them - will lead to false changes in the diff. That said, simply removing existing trailing whitespaces is not enough - we would need a way to keep them from reappearing. Sadly, this idea was rejected: http://www.haskell.org/pipermail/ghc-devs/2013-August/002074.html Git has some cool tools (like git diff --check) that aid programmer in dealing with trailing whitespaces, but not everyone uses them. Here's a relatively recent example of a new trailing whitespace sneaking into the source code: https://github.com/ghc/ghc/commit/ce19d5079ea85d3190e837a1fc60000fbd82134d#diff-ababf87bf3da1f44484a901a8fbc0eb6R388 So without a way to enforce this policy removing trailing whitespaces doesn't seem to make sense. * Tabs will become a practical problem once #9230 is fixed. Also, tab width can be custom-set to whatever value in an editor, which can trip some people. I for one used to have tab width set to 2 (unlike the default of 4 or 8) but gave up, because too many tab-formatted things were displayed incorrectly. * I strongly support turning lhs files into hs files. This is practical and explained below. From my point of view other things are just pure aesthetics. ====== I'm not sure about the style of do-notation. When I first saw do { ... ; ... } I thought it was terrible but now, after some time, I see the merit of it. Consider this code snippet from StgCmmBind: \(_offset, node, arg_regs) -> do { (...) ; withSelfLoop (bndr, loop_header_id, arg_regs) $ do { (...) ; entryHeapCheck cl_info node' arity arg_regs $ do { (...) }}} I believe that without { ... ; ... } these nested dos would have to be indented, which wouldn't aid readability IMO. I agree that we should use one style of the do-notation but I'm not sure which one should that be. I'd favor the { ... ; ... } one. ===== > So, overall, I kind of like our current policy of fixing tabs and white-space when we have to modify the code anyway. I proposed to remove tabs from the source code 1,5 year ago: http://www.haskell.org/pipermail/ghc-devs/2013-January/000053.html Back then my conclusion was that most of the files that contain tabs were modified in the previous 6 months. This means that people are not following the policy to remove tabs when they modify the file (or at least they weren't following that policy back then). So this policy seems to be only theory. ====== One more style issue that was not mentioned by Austin are block comments: I strongly advocate removal of all block comments in favour of single-line ones. Why? Here's a reason: {- Note [Blah blah] ~~~~~~~~~~~~ ... 
Code snippet: foo :: Foo -> Bar foo = ... -} My primary method of finding things in the source code is grepping and I believe that many people do the same. With block comments it is impossible to tell whether the line found by grep is part of the comment. With line comments it is immediately obvious. Moreover, if all comments were line comments it would be really easy to grep for something only in the comments. Currently this is impossible. This is also true for lhs files and for that reason I also consider lhs files to be practical issue. Another reason (albeit minor) to have line comments instead of block comments is that with the former style each comment is independent and can be freely moved around. With block comments we might need to create extra comment blocks when moving comments around. ====== Now, I understand people who don't want such change because of merge conflicts. But the truth is there will never be a good moment to implement such changes because there is always some ongoing work and outstanding branches. So I believe we should think whether these changes move us in a good direction or not and if we decide that these changes are a Good Thing - which I believe they are - we should bite the bullet. Otherwise we will have mess in the source code forever. Since I'm advocating strongly for these I am of course willing to put my work into making this happen. So far I've assigned #9230 to myself and if we agree to detab all the source code I can be the person to do it. Janek PS. I am proud of myself, because I wrote a mail as long as Austin's, which I always thought was impossible :-) From jpm at cs.uu.nl Thu Jul 3 08:57:37 2014 From: jpm at cs.uu.nl (=?UTF-8?Q?Jos=C3=A9_Pedro_Magalh=C3=A3es?=) Date: Thu, 3 Jul 2014 09:57:37 +0100 Subject: GenericsPropositionalEquality [Was: Resolved+new Q: TypeLits question, how to build a Type Application with Symbol index] In-Reply-To: References: Message-ID: Gabor and all, Below you'll find my encoding of GHC.Generics with DataKinds. The most important part, for this discussion, is the treatment of meta-information. I don't think we need |sameDatatype|, in particular; why not just use |sameSymbol|? Cheers, Pedro {-# LANGUAGE TypeFamilies #-} {-# LANGUAGE GADTs #-} {-# LANGUAGE ScopedTypeVariables #-} {-# LANGUAGE DataKinds #-} {-# LANGUAGE PolyKinds #-} {-# LANGUAGE TypeOperators #-} {-# LANGUAGE DefaultSignatures #-} {-# LANGUAGE FlexibleContexts #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE OverlappingInstances #-} module Test where import GHC.TypeLits import Data.Proxy ( Proxy(..) 
)

--------------------------------------------------------------------------------
-- Universe encoding
--------------------------------------------------------------------------------

data Un s = -- s is to always be set to *
    -- Void (used for datatypes without constructors)
    VV
    -- Unit
  | UU
    -- Meta-data
  | MD MetaData (Un s)
  | MC MetaCons (Un s)
  | MS MetaSel (Un s)
    -- A parameter
  | PAR
    -- Constants (either other parameters or recursion into types of kind *)
  | KK PRU s
    -- Recursion into types of kind (* -> *)
  | REC SO (s -> s)
    -- Sum, product
  | Un s :+: Un s
  | Un s :**: Un s
    -- Composition
  | (s -> s) :.: Un s

--------------------------------------------------------------------------------
-- Meta-data
--------------------------------------------------------------------------------

-- Parameter, Recursive occurrence, or Unknown/other
data PRU = P | R SO | U

-- Self or Other
data SO = S | O

data MetaData = MetaData { dataName :: Symbol, dataModule :: Symbol }

data MetaCons = MetaCons { conName :: Symbol
                         , conFixity :: Fixity
                         , conIsRecord :: Bool }

data Fixity = Prefix | Infix Associativity Nat

data Associativity = LeftAssociative | RightAssociative | NotAssociative
  deriving (Eq, Show, Ord, Read)

data MetaSel = MetaSel { selName :: Maybe Symbol }

--------------------------------------------------------------------------------
-- Interpretation (as a GADT)
--------------------------------------------------------------------------------

data In (u :: Un *) (p :: *) :: * where
  -- No interpretation for VV, as it shouldn't map to any value
  -- Unit
  U1 :: In UU p
  -- Datatype meta-data
  D1 :: { unD1 :: In a p } -> In (MD md a) p
  -- Constructor meta-data
  C1 :: { unC1 :: In a p } -> In (MC mc a) p
  -- Selector meta-data
  S1 :: { unS1 :: In a p } -> In (MS ms a) p
  -- The parameter
  Par1 :: { unPar1 :: p } -> In PAR p
  -- Constants
  K1 :: { unK1 :: x} -> In (KK pru x) p
  -- Recursion
  Rec1 :: { unRec1 :: f p } -> In (REC i f) p
  -- Sum
  L1 :: In f p -> In (f :+: g) p
  R1 :: In g p -> In (f :+: g) p
  -- Product
  (:*:) :: In f p -> In g p -> In (f :**: g) p
  -- Composition
  Comp1 :: { unComp1 :: f (In g p) } -> In (f :.: g) p

--------------------------------------------------------------------------------
-- Conversions to/from user datatypes
--------------------------------------------------------------------------------

class Generic (a :: *) where
  type Rep a :: Un *
  from :: a -> In (Rep a) p
  to :: In (Rep a) p -> a

class Generic1 (f :: * -> *) where
  type Rep1 f :: Un *
  from1 :: f p -> In (Rep1 f) p
  to1 :: In (Rep1 f) p -> f p

--------------------------------------------------------------------------------
-- Example encoding: lists (with some twisted meta-data for example purposes)
--------------------------------------------------------------------------------

instance Generic [a] where
  type Rep [a] = MD ('MetaData "[]" "Prelude")
                    (MC ('MetaCons "[]" Prefix False) UU
                 :+: MC ('MetaCons ":" (Infix RightAssociative 5) False)
                       ( MS ('MetaSel (Just "el")) (KK P a)
                    :**: MS ('MetaSel Nothing) (KK (R S) [a])))
  from [] = D1 (L1 (C1 U1))
  from (h:t) = D1 (R1 (C1 (S1 (K1 h) :*: S1 (K1 t))))
  to (D1 (L1 (C1 U1))) = []
  to (D1 (R1 (C1 (S1 (K1 h) :*: S1 (K1 t))))) = h:t

-- Should have meta-information as well, but the one above is enough for now
instance Generic1 [] where
  type Rep1 [] = UU :+: (PAR :**: REC S [])
  from1 [] = L1 U1
  from1 (h:t) = R1 (Par1 h :*: Rec1 t)
  to1 (L1 U1) = []
  to1 (R1 (Par1 h :*: Rec1 t)) = h:t

--------------------------------------------------------------------------------
-- Show
--------------------------------------------------------------------------------

class GShow (r :: Un *) where
  gshow :: In r p -> String

instance (Show' a) => GShow (KK pru a) where
  gshow (K1 a) = show' a

instance GShow UU where
  gshow U1 = ""

instance (GShow r) => GShow (MD md r) where
  gshow (D1 x) = gshow x

-- We can now match meta-data properties at the type level
instance (KnownSymbol name) => GShow (MC ('MetaCons name Prefix isRec) UU) where
  gshow (C1 x) = symbolVal (Proxy :: Proxy name)

instance (KnownSymbol name, GShow r) => GShow (MC ('MetaCons name Prefix isRec) r) where
  gshow (C1 x) = "(" ++ symbolVal (Proxy :: Proxy name) ++ " " ++ gshow x ++ ")"

-- Note how we assume that the structure under an Infix MC must be a product.
-- This is not encoded in the universe, regrettably, and might lead to "missing
-- instance" errors if we're not careful.
instance (KnownSymbol name, GShow a, GShow b)
    => GShow (MC ('MetaCons name (Infix assoc fix) isRec) (a :**: b)) where
  gshow (C1 (a :*: b)) = "(" ++ gshow a ++ " " ++ symbolVal (Proxy :: Proxy name)
                             ++ " " ++ gshow b ++ ")"

instance (GShow r) => GShow (MS ('MetaSel Nothing) r) where
  gshow (S1 x) = gshow x

instance (KnownSymbol name, GShow r) => GShow (MS ('MetaSel (Just name)) r) where
  gshow (S1 x) = "{ " ++ symbolVal (Proxy :: Proxy name) ++ " = " ++ gshow x ++ " }"

instance (GShow a, GShow b) => GShow (a :+: b) where
  gshow (L1 a) = gshow a
  gshow (R1 b) = gshow b

instance (GShow a, GShow b) => GShow (a :**: b) where
  gshow (a :*: b) = gshow a ++ " " ++ gshow b

-- User-facing class
class Show' (a :: *) where
  show' :: a -> String
  default show' :: (Generic a, GShow (Rep a)) => a -> String
  show' = gshow . from

--------------------------------------------------------------------------------
-- Test
--------------------------------------------------------------------------------

instance Show' Int where show' = show
instance (Show' a) => Show' [a]

test1, test2 :: String
test1 = show' ([] :: [Int]) -- "[]"
test2 = show' [1,2::Int] -- "({ el = 1 } : ({ el = 2 } : []))"

On Wed, Jul 2, 2014 at 1:42 PM, Gabor Greif wrote: > [added Pedro and Richard to "to:"] > > On 6/30/14, Simon Peyton Jones wrote: > > | Yes I have a branch, and it works! A bunch of things is still missing > > | (notably record selectors), but I have a proof-of-concept with a gdiff > > | library hooked up to GHC.Generics, and by appealing to type-level > > | reasoning I can obtain a difference tree from True to False (which > > | looks good) by using the reflection (i.e. class Generic) only, no need > > | for TH or hand-coding. Comparing bigger trees (and then 'patch'ing > > | them) appears to be SMOP from here. > > > > Do you have a wiki page explaining what "it" is (the thing that works). > > Here is a wiki page > > https://ghc.haskell.org/trac/ghc/wiki/GenericsPropositionalEquality > > you all are invited to make suggestions, ask questions and generally > beautify. > > > > > | instance Datatype (Dat "MyModule" "Foo") ... > > | > > | I get an 'orphan instance' warning. I believe that these are harmless, > > > > The downside of orphan instances is that GHC must visit every .hi file > that > > has an orphan instance, just in case it contains a relevant instance > decl. > > That slows down *every* compilation, whether or not it uses the instance. > > > > The best way to get rid of it is to declare something local that is "from > > this module". Something like > >
> > > > Now MyModule_Foo is a data type from the module currently being compiled. > > That tells GHC which .hi file to look in, and means the instance isn't > > orphan. > > Yes I see, I noted it in the discussion and came up with a > conservative approach (to be implemented). > > Furter opinions? Please add to the wiki. > > Thanks and cheers, > > Gabor > > > > > Simon > > > > | so is there a way to suppress them? Since I never insert tyvars in the > > | instance head, there should never be any overlap too. > > | > > | Cheers, > > | > > | Gabor > > | > > | On 6/30/14, Simon Peyton Jones wrote: > > | > You'll need to give a lot more info than this before I can help > Gabor. > > | > Currently I have only the vaguest idea about what you are trying to > > | > accomplish. Is there a wiki page that describes the design (user's > eye > > | > view) in detail? > > | > > > | > I see you have a branch. If you are stuck, and give me repro > > | instructions, > > | > I can attempt to help. > > | > > > | > Simon > > | > > > | > | -----Original Message----- > > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > > | Gabor > > | > | Greif > > | > | Sent: 27 June 2014 17:51 > > | > | To: ghc-devs > > | > | Subject: Resolved+new Q: TypeLits question, how to build a Type > > | > | Application with Symbol index > > | > | > > | > | I succeeded to solve all of them :-) > > | > | > > | > | But now I am blocked on on a panic > > | > | > > | > | "not in scope during type checking, but it passed the renamer". > > | > | > > | > | I suspect that while "deriving Generic" some instances are defined > in > > | > | some empty TcEnv, which does not contain my definition in context. > > | > | > > | > | Is there a way to inject some type constructor into the TcEnv? > > | > | > > | > | Thanks, > > | > | > > | > | Gabor > > | > | > > | > | On 6/27/14, Gabor Greif wrote: > > | > | > Hello devs, > > | > | > > > | > | > I have > > | > | > > > | > | > {{{ > > | > | > data D (n :: Symbol) > > | > | > }}} > > | > | > > > | > | > in my module, and I want to obtain a type > > | > | > > > | > | > {{{ > > | > | > D "YAY!" > > | > | > }}} > > | > | > > > | > | > programmatically. Where can I find code that performs this (or > > | > | > something similar)? > > | > | > > > | > | > 1) I have to look up |D| in the current TyEnv (what if it is in a > > | > | > specific module?), > > | > | > 2) I have to build the type index (of kind Symbol), this involves > > | > | > FastString, looks non-trivial, > > | > | > 3) Apply 1) on 2), this is easy. > > | > | > > > | > | > Any hints welcome! > > | > | > > > | > | > Thanks and cheers, > > | > | > > > | > | > Gabor > > | > | > > > | > | > > > | > | > PS: some morsels I have so far: > > | > | > > > | > | > for 1) > > | > | > compiler/prelude/PrelNames.lhs:gHC_GENERICS = mkBaseModule > > | (fsLit > > | > | > "GHC.Generics") > > | > | > > > | > | _______________________________________________ > > | > | ghc-devs mailing list > > | > | ghc-devs at haskell.org > > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > > | > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Thu Jul 3 09:13:54 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 03 Jul 2014 11:13:54 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <201407031044.46291.jan.stolarek@p.lodz.pl> References: <201407031044.46291.jan.stolarek@p.lodz.pl> Message-ID: <1404378834.19001.3.camel@kirk> Hi, Am Donnerstag, den 03.07.2014, 10:44 +0200 schrieb Jan Stolarek: > Now, I understand people who don't want such change because of merge > conflicts. But the truth is there will never be a good moment to > implement such changes because there is always some ongoing > work and outstanding branches. when I first looked at GHC code I also thought ?ugh, ugly?. But I can cope, it does not actually hinder me while working on GHC. On the other hand, having a ?detab and rename? horizon where merging patches from before is much harder, and where "git log -L" and "git blame" fail to work properly would be a hindrance. Also, backporting patches from GHC HEAD to distribution releases would become annoying, for at least one release cycle. So my conclusion is that it?s ok to have the mess in the source code forever. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From cj at vdbonline.com Thu Jul 3 12:25:46 2014 From: cj at vdbonline.com (CJ van den Berg) Date: Thu, 03 Jul 2014 14:25:46 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: <53B54BCA.5060301@vdbonline.com> Yes! This would definitely be of great interest to users of the Android cross compilers. It should be quite feasible to drive a TH runner process on a development device or emulator. Having genuine TH support would be a huge improvement to the usefulness of GHC in a cross compiling scenario. I would love to start work on integrating TH runner support into ghc-android. On 2014-07-02 18:14, Luite Stegeman wrote: > Now since this approach is in no way specific to JavaScript, I was > wondering if there's any interest in getting this functionality into GHC > 7.10 for general cross compilation. The runner would be a native > (target) program with dynamic libraries (or object files) being sent > over to the target machine (or emulator) for the splices. 
> -- CJ van den Berg mailto:cj at vdbonline.com xmpp:neurocyte at gmail.com From adam at sandbergericsson.se Thu Jul 3 12:36:13 2014 From: adam at sandbergericsson.se (Adam Sandberg Ericsson) Date: Thu, 3 Jul 2014 14:36:13 +0200 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: <201407030844.11869.jan.stolarek@p.lodz.pl> References: <201407030844.11869.jan.stolarek@p.lodz.pl> Message-ID: The error is probably related to: https://phabricator.haskell.org/D26 Adam Sandberg Ericsson On Thu, Jul 3, 2014 at 8:44 AM, Jan Stolarek wrote: > I'm trying to run the testsuite in my symlinked build tree, but keep > getting this error: > > [killy at xerxes : /dane/projekty/ghc/build] make test > make -C testsuite/tests CLEANUP=1 > OUTPUT_SUMMARY=../../testsuite_summary.txt fast > make[1]: Wej?cie do katalogu `/dane/projekty/ghc/build/testsuite/tests' > ../mk/boilerplate.mk:168: ../mk/ > ghcconfig_dane_projekty_ghc_build_inplace_bin_ghc-stage2.mk: No > such file or directory > "/dane/projekty/ghc/build/inplace/bin/ghc-stage2" --make -o > ../mk/ghc-config ../mk/ghc-config.hs > No entry for "Haskell CPP command" in > "/dane/projekty/ghc/build/inplace/lib/settings" > make[1]: *** [../mk/ghc-config] Error 1 > make[1]: Leaving directory `/dane/projekty/ghc/build/testsuite/tests' > make: *** [test] Error 2 > > Help? > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cj at vdbonline.com Thu Jul 3 12:48:20 2014 From: cj at vdbonline.com (CJ van den Berg) Date: Thu, 03 Jul 2014 14:48:20 +0200 Subject: Fwd: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: <53B55114.4040906@vdbonline.com> That is great! Please send your patches to me. I would love to update ghc-android to support 7.8. A pull request on github would be ideal, but plain patches are ok too. On 2014-07-03 05:18, member MP2E wrote: > 1. Cross GHC 7.8 using the Stage1 TH patch and a few other patches > available from Neurocyte. These need to be updated slightly as they are > for GHC 7.6, I can post updated patches if anyone else is curious. I > would be very interested in cleaning up and integrating the needed > changes into GHC, if cross Template Haskell is to be implemented. > 2. Create wrappers for hsc2hs and cabal for the cross compiler (I also > have updated these) -- CJ van den Berg mailto:cj at vdbonline.com xmpp:neurocyte at gmail.com From stegeman at gmail.com Thu Jul 3 12:51:11 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Thu, 3 Jul 2014 14:51:11 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: I think GHC could use more or less the same communication method as GHCJS now does: Start some user-specifiied process and send messages through pipes (GHCJS uses stdin/stderr of the node process), with the difference that it would get dynamic libraries for the target rather than blobs of JS code. That user process is then responsible for setting up the actual communication with the runner on the emulator or development device. 
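To make the pipe idea concrete, here is a minimal sketch of the kind of length-prefixed framing such a user-specified proxy process could speak on its stdin/stdout. This is not the GHCJS or GHC code; the module name, the function names and the 8-byte header are all assumptions made purely for illustration:

module THPipe (sendMessage, recvMessage) where

import qualified Data.ByteString.Lazy as BL
import Data.Binary.Get (getWord64be, runGet)
import Data.Binary.Put (putWord64be, runPut)
import System.IO (Handle, hFlush)

-- Write one message: an 8-byte big-endian length followed by the payload.
-- The payload would be a serialised request (run this splice, answer this
-- reification query, ...) or its response.
sendMessage :: Handle -> BL.ByteString -> IO ()
sendMessage h payload = do
  BL.hPut h (runPut (putWord64be (fromIntegral (BL.length payload))))
  BL.hPut h payload
  hFlush h

-- Read one message by first reading the 8-byte length prefix.
recvMessage :: Handle -> IO BL.ByteString
recvMessage h = do
  header <- BL.hGet h 8
  BL.hGet h (fromIntegral (runGet getWord64be header))

The proxy would then relay these messages between the compiler and whatever transport actually reaches the device (for example a TCP connection forwarded to an emulator), which keeps all target-specific plumbing out of GHC itself.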
A requirement for complete TH support is that more code can be loaded at runtime, so that multiple splices can be run by the same runner (because of the persistent map, qGetQ / qPutQ), I'm not sure if this is problematic on iOS. On Thu, Jul 3, 2014 at 2:54 AM, Carter Schonwald wrote: > This would probably be a great boon for those trying to use haskell for > Android and IOS right? how might the emulation setup work for those? > > > > > On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> wow, this is great work! >> >> If theres a clear path to getting the generic tooling into 7.10, i'm all >> for it :) (and willing to help on concrete mechanical subtasks) >> >> >> On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman >> wrote: >> >>> hi all, >>> >>> I've added some code [1] [2] to GHCJS to make it run Template Haskell >>> code on node.js, rather than using the GHC linker. GHCJS has supported TH >>> for a long time now, but so far always relied on native (host) code for it. >>> This is the main reason that GHCJS always builds native and JavaScript code >>> for everything (another is that Cabal Setup.hs scripts need to be compiled >>> to some host-runnable form, but that can also be JavaScript if you have >>> node.js) >>> >>> Now besides the compiler having to do twice the work, this has some >>> other disadvantages: >>> >>> - Our JavaScript code has the same dependencies (packages) as native >>> code, which means packages like unix or Win32 show up somewhere, depending >>> on the host environment. This also limits our options in choosing >>> JS-specific packages. >>> - The Template Haskell code runs on the host environment, which might be >>> slightly different from the target, for example in integer size or >>> operating system specific constants. >>> >>> Moreover, building native code made the GHCJS installation procedure >>> more tricky, making end users think about libgmp or libiconv locations, >>> since it basically required the same preparation as building GHC from >>> source. This change will make installing much easier and more reliable (we >>> still have to update the build scripts). >>> >>> How it works is pretty simple: >>> >>> - When any code needs to be run on the target (hscCompileCoreExpr, >>> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >>> the thrunner.js [3] script, >>> - GHCJS sends its RTS and the Template Haskell server code [1] to >>> node.js, the script starts a Haskell thread running the server, >>> - for every splice, GHCJS compiles it to JavaScript and links it using >>> its incremental linking functionality. The code for the splice, including >>> dependencies that have not yet been sent to the runner (for earlier >>> splices), is then sent in a RunTH [4] message, >>> - the runner loads and runs the code in the Q monad, can send queries to >>> GHCJS for reification, >>> - the runner sends back the result as a serialized Template Haskell AST >>> (using GHC.Generics for the Binary instances). >>> >>> All Template Haskell functionality is supported, including recent >>> additions for reifying modules and annotations. I still need to clean up >>> and push the patches for the directory and process packages, but after >>> that, the TH code can read/write files, run processes and interact with >>> them and make network connections, all through node.js. 
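As an aside on the serialisation mentioned in the quoted announcement above: the "GHC.Generics for the Binary instances" part is just the generic deriving that binary >= 0.7 offers. A self-contained sketch, with made-up message types rather than the real definitions from GHCJS.Prim.TH.Types:

{-# LANGUAGE DeriveGeneric #-}
module MessageSketch where

import Data.Binary (Binary, decode, encode)
import qualified Data.ByteString.Lazy as BL
import GHC.Generics (Generic)

-- Hypothetical stand-ins for the real message types; only meant to show the
-- deriving pattern, not the actual protocol.
data Request
  = RunSplice BL.ByteString      -- linked code for one splice
  | QueryReply BL.ByteString     -- answer to a reification query
  deriving (Show, Generic)

data Response
  = SpliceResult BL.ByteString   -- serialised Template Haskell AST
  | Query BL.ByteString          -- reification query for the compiler
  deriving (Show, Generic)

-- binary >= 0.7 fills in put/get from the Generic representation,
-- so no hand-written serialisation code is needed.
instance Binary Request
instance Binary Response

roundTrip :: Request -> Request
roundTrip = decode . encode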
>>> >>> Now since this approach is in no way specific to JavaScript, I was >>> wondering if there's any interest in getting this functionality into GHC >>> 7.10 for general cross compilation. The runner would be a native (target) >>> program with dynamic libraries (or object files) being sent over to the >>> target machine (or emulator) for the splices. >>> >>> Thanks to Andras Slemmer from Prezi who helped build the initial proof >>> of concept (without reification) at BudHac. >>> >>> cheers, >>> >>> Luite >>> >>> [1] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >>> [2] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >>> [3] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >>> [4] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >>> >>> _______________________________________________ >>> Glasgow-haskell-users mailing list >>> Glasgow-haskell-users at haskell.org >>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jul 3 13:19:24 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 3 Jul 2014 09:19:24 -0400 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: <201407030844.11869.jan.stolarek@p.lodz.pl> References: <201407030844.11869.jan.stolarek@p.lodz.pl> Message-ID: Did you do a make maintainer-clean ; perl boot ; ./configure After you pulled the new changes? It patched the build system so you have to do the above. Can't reuse a previous build On Thursday, July 3, 2014, Jan Stolarek wrote: > I'm trying to run the testsuite in my symlinked build tree, but keep > getting this error: > > [killy at xerxes : /dane/projekty/ghc/build] make test > make -C testsuite/tests CLEANUP=1 > OUTPUT_SUMMARY=../../testsuite_summary.txt fast > make[1]: Wej?cie do katalogu `/dane/projekty/ghc/build/testsuite/tests' > ../mk/boilerplate.mk:168: ../mk/ > ghcconfig_dane_projekty_ghc_build_inplace_bin_ghc-stage2.mk: No > such file or directory > "/dane/projekty/ghc/build/inplace/bin/ghc-stage2" --make -o > ../mk/ghc-config ../mk/ghc-config.hs > No entry for "Haskell CPP command" in > "/dane/projekty/ghc/build/inplace/lib/settings" > make[1]: *** [../mk/ghc-config] Error 1 > make[1]: Leaving directory `/dane/projekty/ghc/build/testsuite/tests' > make: *** [test] Error 2 > > Help? > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander at plaimi.net Thu Jul 3 14:45:42 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 03 Jul 2014 16:45:42 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <1404378834.19001.3.camel@kirk> References: <201407031044.46291.jan.stolarek@p.lodz.pl> <1404378834.19001.3.camel@kirk> Message-ID: <53B56C96.4050208@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/07/14 11:13, Joachim Breitner wrote: > So my conclusion is that it?s ok to have the mess in the source > code forever. I mostly agree. 
The only way to handle this that I can presently identify is to agree on some guidelines, put them up on the Wiki, enforce them socially (maybe technologically as well, using e.g. Phabricator) with code review. I.e. make sure to not make the source *uglier* at least. Anything more drastic than this will get in everyone's way. Of course if something sufficiently drastic has to happen anyhow (AMP etc.), then this is an apt opportunity to beautifying the related code. And if you are changing 90% of a file for some reason, it probably doesn't hurt to detab it as well, etc. - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlO1bJYACgkQRtClrXBQc7Wp8wEAjdnJRecFseuhW5Vxd41N7Z6f E0+PqXXpU/T8oiYuzt4A/1+7imYJOt1vsaRhj+HvSCvM5SoRk7T1M8Aoxh3QEOsC =CeaG -----END PGP SIGNATURE----- From jan.stolarek at p.lodz.pl Thu Jul 3 15:38:56 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 3 Jul 2014 17:38:56 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <53B56C96.4050208@plaimi.net> References: <1404378834.19001.3.camel@kirk> <53B56C96.4050208@plaimi.net> Message-ID: <201407031738.57099.jan.stolarek@p.lodz.pl> > And if you are changing 90% of a file for some reason, it > probably doesn't hurt to detab it as well, etc. I think the reason we still have tabs in the source code is that people usually don't change 90% of a file, but 5% or something like that and they feel this is not enough to justify detabing of a whole file. Janek From alexander at plaimi.net Thu Jul 3 15:39:35 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 03 Jul 2014 17:39:35 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <201407031738.57099.jan.stolarek@p.lodz.pl> References: <1404378834.19001.3.camel@kirk> <53B56C96.4050208@plaimi.net> <201407031738.57099.jan.stolarek@p.lodz.pl> Message-ID: <53B57937.2010403@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 03/07/14 17:38, Jan Stolarek wrote: > I think the reason we still have tabs in the source code is that > people usually don't change 90% of a file, but 5% or something > like that and they feel this is not enough to justify detabing of a > whole file. Which is fine, in my opinion. - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.22 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlO1eTcACgkQRtClrXBQc7WqzwD+NdW5OqYSLpIlMgbOBI3grR5i EpKPA2+SWboRvdB1iAIBAKBn4lZ7CaofQ2YOnXpjX0eV3kauOxBZ27YpaQ2xQ11i =QDfM -----END PGP SIGNATURE----- From jan.stolarek at p.lodz.pl Thu Jul 3 15:39:59 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 3 Jul 2014 17:39:59 +0200 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: References: <201407030844.11869.jan.stolarek@p.lodz.pl> Message-ID: <201407031739.59515.jan.stolarek@p.lodz.pl> > Did you do a make maintainer-clean ; perl boot ; ./configure > After you pulled the new changes? It patched the build system so you have > to do the above. Can't reuse a previous build I guess that was it. Did a clean build and all is fine. 
Janek From afarmer at ittc.ku.edu Thu Jul 3 15:44:25 2014 From: afarmer at ittc.ku.edu (Andrew Farmer) Date: Thu, 3 Jul 2014 10:44:25 -0500 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <1404378834.19001.3.camel@kirk> References: <201407031044.46291.jan.stolarek@p.lodz.pl> <1404378834.19001.3.camel@kirk> Message-ID: On Thu, Jul 3, 2014 at 4:13 AM, Joachim Breitner wrote: > On the other hand, having a ?detab and rename? horizon where merging patches from > before is much harder, and where "git log -L" and "git blame" fail to > work properly would be a hindrance. Minor point, but you can use "git blame -w" to tell blame to ignore whitespace changes and show you the last commit that actually changed the code. From simonpj at microsoft.com Thu Jul 3 16:18:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 3 Jul 2014 16:18:33 +0000 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF10422771@DB3PRD3001MB020.064d.mgd.msft.net> Luite I lack the bandwidth to respond at any technical depth, but I?d like to make encouraging noises. If you can figure out a way to make GHC do these things without making the compiler terribly complicated and making maintaining it harder, then I?m open to your proposals. Several people seem to have said ?oh yes, that?d be interesting?. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Luite Stegeman Sent: 02 July 2014 17:14 To: ghc-devs; glasgow-haskell-users at haskell.org Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? hi all, I've added some code [1] [2] to GHCJS to make it run Template Haskell code on node.js, rather than using the GHC linker. GHCJS has supported TH for a long time now, but so far always relied on native (host) code for it. This is the main reason that GHCJS always builds native and JavaScript code for everything (another is that Cabal Setup.hs scripts need to be compiled to some host-runnable form, but that can also be JavaScript if you have node.js) Now besides the compiler having to do twice the work, this has some other disadvantages: - Our JavaScript code has the same dependencies (packages) as native code, which means packages like unix or Win32 show up somewhere, depending on the host environment. This also limits our options in choosing JS-specific packages. - The Template Haskell code runs on the host environment, which might be slightly different from the target, for example in integer size or operating system specific constants. Moreover, building native code made the GHCJS installation procedure more tricky, making end users think about libgmp or libiconv locations, since it basically required the same preparation as building GHC from source. This change will make installing much easier and more reliable (we still have to update the build scripts). How it works is pretty simple: - When any code needs to be run on the target (hscCompileCoreExpr, through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with the thrunner.js [3] script, - GHCJS sends its RTS and the Template Haskell server code [1] to node.js, the script starts a Haskell thread running the server, - for every splice, GHCJS compiles it to JavaScript and links it using its incremental linking functionality. 
The code for the splice, including dependencies that have not yet been sent to the runner (for earlier splices), is then sent in a RunTH [4] message, - the runner loads and runs the code in the Q monad, can send queries to GHCJS for reification, - the runner sends back the result as a serialized Template Haskell AST (using GHC.Generics for the Binary instances). All Template Haskell functionality is supported, including recent additions for reifying modules and annotations. I still need to clean up and push the patches for the directory and process packages, but after that, the TH code can read/write files, run processes and interact with them and make network connections, all through node.js. Now since this approach is in no way specific to JavaScript, I was wondering if there's any interest in getting this functionality into GHC 7.10 for general cross compilation. The runner would be a native (target) program with dynamic libraries (or object files) being sent over to the target machine (or emulator) for the splices. Thanks to Andras Slemmer from Prezi who helped build the initial proof of concept (without reification) at BudHac. cheers, Luite [1] https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs [2] https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs [3] https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js [4] https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jul 3 16:20:44 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 3 Jul 2014 12:20:44 -0400 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: <201407031739.59515.jan.stolarek@p.lodz.pl> References: <201407030844.11869.jan.stolarek@p.lodz.pl> <201407031739.59515.jan.stolarek@p.lodz.pl> Message-ID: yay! :) (i spent quite a bit of time working on that patch, so you scared me initially) On Thu, Jul 3, 2014 at 11:39 AM, Jan Stolarek wrote: > > Did you do a make maintainer-clean ; perl boot ; ./configure > > After you pulled the new changes? It patched the build system so you have > > to do the above. Can't reuse a previous build > I guess that was it. Did a clean build and all is fine. > > Janek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Jul 3 16:28:14 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 03 Jul 2014 17:28:14 +0100 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: References: <201407030844.11869.jan.stolarek@p.lodz.pl> <201407031739.59515.jan.stolarek@p.lodz.pl> Message-ID: <1404404864-sup-6449@sabre> I wonder if we should have some magic variable in the build system that induces a full rebuild, so if you push a patch which requires a rebuild you can bump the variable and let it know. Edward Excerpts from Carter Schonwald's message of 2014-07-03 17:20:44 +0100: > yay! :) > (i spent quite a bit of time working on that patch, so you scared me > initially) > > On Thu, Jul 3, 2014 at 11:39 AM, Jan Stolarek > wrote: > > > > Did you do a make maintainer-clean ; perl boot ; ./configure > > > After you pulled the new changes? It patched the build system so you have > > > to do the above. Can't reuse a previous build > > I guess that was it. 
Did a clean build and all is fine. > > > > Janek > > From simonpj at microsoft.com Thu Jul 3 16:29:40 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 3 Jul 2014 16:29:40 +0000 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <201407031044.46291.jan.stolarek@p.lodz.pl> References: <201407031044.46291.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> Just to say that * In general I don't have a strong opinion about these stylistic issues. Moreover I have little bikeshed time, and if I don't contribute to a debate I can't expect to influence it much. So I'm mostly happy to accept a consensus view. However, some thoughts * I don't think we should be over-prescriptive. Eg personally I like a do-notation style with semicolons at the beginning, eg do { blah ; x <- foo ; etc } but I'm disinclined to force everyone to do so, and I don't want to be forced to do something different myself. I can adapt to the style of the author here. * A *primary* form of consumption is the source code itself. I've found that Haddock-compliant comments can be rather less readable in source code. (Eg. CoreSyn.lhs where the #blah# notation coexists uneasily with Note [blah].) So I'd be nervous of mandating Haddock-compliance. * Insisting on line comments exclusively, carries a cost. The on-screen distraction of line comments, and the nuisance of writing them, is not trivial. Perhaps it is bearable, but it's non-zero. See example below. Simon With line comments -- Note [Extra dependencies from .hs-boot files] -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -- Consider the following case: -- -- module A where -- import B -- data A1 = A1 B1 -- -- module B where -- import {-# SOURCE #-} A -- type DisguisedA1 = A1 -- data B1 = B1 DisguisedA1 -- -- We do not follow type synonyms when building the dependencies for each datatype, -- so we will not find out that B1 really depends on A1 (which means it depends on -- itself). To handle this problem, at the moment we add dependencies to everything -- that comes from an .hs-boot file. But we don't add those dependencies to -- everything. Imagine module B above had another datatype declaration: -- -- data B2 = B2 Int -- -- Even though B2 has a dependency (on Int), all its dependencies are from things -- that live on other packages. Since we don't have mutual dependencies across -- packages, it is safe not to add the dependencies on the .hs-boot stuff to B2. -- -- See also Note [Grouping of type and class declarations] in TcTyClsDecls. With block comments Note [Extra dependencies from .hs-boot files] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider the following case: module A where import B data A1 = A1 B1 module B where import {-# SOURCE #-} A type DisguisedA1 = A1 data B1 = B1 DisguisedA1 We do not follow type synonyms when building the dependencies for each datatype, so we will not find out that B1 really depends on A1 (which means it depends on itself). To handle this problem, at the moment we add dependencies to everything that comes from an .hs-boot file. But we don't add those dependencies to everything. Imagine module B above had another datatype declaration: data B2 = B2 Int Even though B2 has a dependency (on Int), all its dependencies are from things that live on other packages. Since we don't have mutual dependencies across packages, it is safe not to add the dependencies on the .hs-boot stuff to B2. 
See also Note [Grouping of type and class declarations] in TcTyClsDecls. | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan | Stolarek | Sent: 03 July 2014 09:45 | To: ghc-devs at haskell.org | Subject: Re: RFC: style cleanup & guidelines for GHC, and related | bikeshedding | | I fully support Austin's proposal. My eyes hurt when I work on 5 files | and each of them is written | in a different style. | | Now, to address a few points that were raised. | | > Is it just for the sake of beauty (not to diminish the importance of | beauty)? | | * I believe that trailing whitespaces are a practical issue: they are | invisible to the human eye | (unless you have (setq-default show-trailing-whitespace t) \n (setq- | default indicate-empty-lines | t) in your .emacs file) but carry a semantic meaning for git and other | version control systems. | This means that accidental removal of a trailing whitespace - which can | and will happen if you | don't highlight them - will lead to false changes in the diff. | That said, simply removing existing trailing whitespaces is not enough - | we would need a way to | keep them from reappearing. Sadly, this idea was rejected: | http://www.haskell.org/pipermail/ghc-devs/2013-August/002074.html | Git has some cool tools (like git diff --check) that aid programmer in | dealing with trailing | whitespaces, but not everyone uses them. Here's a relatively recent | example of a new trailing | whitespace sneaking into the source code: | | https://github.com/ghc/ghc/commit/ce19d5079ea85d3190e837a1fc60000fbd82134 | d#diff-ababf87bf3da1f44484a901a8fbc0eb6R388 | | So without a way to enforce this policy removing trailing whitespaces | doesn't seem to make sense. | | * Tabs will become a practical problem once #9230 is fixed. Also, tab | width can be custom-set to | whatever value in an editor, which can trip some people. I for one used | to have tab width set to | 2 (unlike the default of 4 or 8) but gave up, because too many tab- | formatted things were | displayed incorrectly. | | * I strongly support turning lhs files into hs files. This is practical | and explained below. | | From my point of view other things are just pure aesthetics. | | ====== | | I'm not sure about the style of do-notation. When I first saw do { ... ; | ... } I thought it was | terrible but now, after some time, I see the merit of it. Consider this | code snippet from | StgCmmBind: | | \(_offset, node, arg_regs) -> do | { (...) | ; withSelfLoop (bndr, loop_header_id, arg_regs) $ do | { (...) | ; entryHeapCheck cl_info node' arity arg_regs $ do | { | (...) | }}} | | I believe that without { ... ; ... } these nested dos would have to be | indented, which wouldn't | aid readability IMO. I agree that we should use one style of the do- | notation but I'm not sure | which one should that be. I'd favor the { ... ; ... } one. | | ===== | | > So, overall, I kind of like our current policy of fixing tabs and | white-space when we have to | modify the code anyway. | I proposed to remove tabs from the source code 1,5 year ago: | | http://www.haskell.org/pipermail/ghc-devs/2013-January/000053.html | | Back then my conclusion was that most of the files that contain tabs were | modified in the previous | 6 months. This means that people are not following the policy to remove | tabs when they modify the | file (or at least they weren't following that policy back then). So this | policy seems to be only | theory. 
| | ====== | | One more style issue that was not mentioned by Austin are block comments: | I strongly advocate | removal of all block comments in favour of single-line ones. Why? Here's | a reason: | | {- | Note [Blah blah] | ~~~~~~~~~~~~ | ... Code snippet: | foo :: Foo -> Bar | foo = ... | -} | | My primary method of finding things in the source code is grepping and I | believe that many people | do the same. With block comments it is impossible to tell whether the | line found by grep is part | of the comment. With line comments it is immediately obvious. Moreover, | if all comments were line | comments it would be really easy to grep for something only in the | comments. Currently this is | impossible. This is also true for lhs files and for that reason I also | consider lhs files to be | practical issue. | Another reason (albeit minor) to have line comments instead of block | comments is that with the | former style each comment is independent and can be freely moved around. | With block comments we | might need to create extra comment blocks when moving comments around. | | ====== | | Now, I understand people who don't want such change because of merge | conflicts. But the truth is | there will never be a good moment to implement such changes because there | is always some ongoing | work and outstanding branches. So I believe we should think whether these | changes move us in a | good direction or not and if we decide that these changes are a Good | Thing - which I believe they | are - we should bite the bullet. Otherwise we will have mess in the | source code forever. | | Since I'm advocating strongly for these I am of course willing to put my | work into making this | happen. So far I've assigned #9230 to myself and if we agree to detab all | the source code I can | be the person to do it. | | Janek | | PS. I am proud of myself, because I wrote a mail as long as Austin's, | which I always thought was | impossible :-) | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Thu Jul 3 16:46:54 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 3 Jul 2014 18:46:54 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> References: <201407031044.46291.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Jul 3, 2014 at 6:29 PM, Simon Peyton Jones wrote: > > * Insisting on line comments exclusively, carries a cost. The on-screen > distraction > of line comments, and the nuisance of writing them, is not trivial. > Perhaps it > is bearable, but it's non-zero. See example below. I think you can use {-| Note [Extra dependencies from .hs-boot files] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider the following case: module A where import B data A1 = A1 B1 module B where import {-# SOURCE #-} A type DisguisedA1 = A1 data B1 = B1 DisguisedA1 We do not follow type synonyms when building the dependencies for each datatype, so we will not find out that B1 really depends on A1 (which means it depends on itself). To handle this problem, at the moment we add dependencies to everything that comes from an .hs-boot file. But we don't add those dependencies to everything. 
Imagine module B above had another datatype declaration: data B2 = B2 Int Even though B2 has a dependency (on Int), all its dependencies are from things that live on other packages. Since we don't have mutual dependencies across packages, it is safe not to add the dependencies on the .hs-boot stuff to B2. See also Note [Grouping of type and class declarations] in TcTyClsDecls. -} instead (skip the | after {- if you don't want it to be a Haddock comment). -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Thu Jul 3 18:57:46 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 3 Jul 2014 20:57:46 +0200 Subject: [commit: ghc] master: Re-add more primops for atomic ops on byte arrays (4ee4ab0) In-Reply-To: <20140630201339.3307A2406D@ghc.haskell.org> References: <20140630201339.3307A2406D@ghc.haskell.org> Message-ID: Hello Johan, 2014-06-30 22:13 GMT+02:00 : > commit 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49 > Author: Johan Tibell > Date: Fri Jun 27 13:48:24 2014 +0200 > > Re-add more primops for atomic ops on byte arrays It seems this change breaks the build on FreeBSD/i386 [1]: "inplace/bin/ghc-stage1" -o utils/dll-split/dist-install/build/tmp/dll-split -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC -dynamic -H32m -O -hide-all-packages -i -iutils/dll-split/. -iutils/dll-split/dist-install/build -iutils/dll-split/dist-install/build/autogen -Iutils/dll-split/dist-install/build -Iutils/dll-split/dist-install/build/autogen -optP-include -optPutils/dll-split/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package containers-0.5.5.1 -package filepath-1.3.0.2 -XHaskell2010 -no-user-package-db -rtsopts -odir utils/dll-split/dist-install/build -hidir utils/dll-split/dist-install/build -stubdir utils/dll-split/dist-install/build -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/filepath/dist-install/build' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/containers/dist-install/build' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/deepseq/dist-install/build' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/array/dist-install/build' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/base/dist-install/build' -optl-L'/usr/local/lib' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/integer-gmp/dist-install/build' -optl-L'/usr/local/lib' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build' -optl-L'/usr/home/ghc-builder/work/builder/tempbuild/build/rts/dist/build' -optl-liconv -optl-lcharset -optl-lgmp -optl-lm -optl-lrt -optl-lpthread -fPIC -dynamic -H32m -O -hide-all-packages -i -iutils/dll-split/. 
-iutils/dll-split/dist-install/build -iutils/dll-split/dist-install/build/autogen -Iutils/dll-split/dist-install/build -Iutils/dll-split/dist-install/build/autogen -optP-include -optPutils/dll-split/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package containers-0.5.5.1 -package filepath-1.3.0.2 -XHaskell2010 -no-user-package-db -rtsopts -fno-use-rpaths -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../filepath-1.3.0.2' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../containers-0.5.5.1' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../deepseq-1.3.0.2' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../array-0.5.0.0' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../base-4.7.1.0' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../integer-gmp-0.5.1.0' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../ghc-prim-0.3.1.0' -optl-Wl,-rpath -optl-Wl,'$ORIGIN/../rts-1.0' -optl-Wl,-zorigin utils/dll-split/dist-install/build/Main.dyn_o /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_xor_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_and_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_nand_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_val_compare_and_swap_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_sub_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_add_8' /usr/home/ghc-builder/work/builder/tempbuild/build/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.3.1.0-ghc7.9.20140703.so: undefined reference to `__sync_fetch_and_or_8' collect2: error: ld returned 1 exit status This probably due to the strict defaults of the FreeBSD of GCC, where march=i386 is assumed, which does not support the required CMPXCHG8B instruction. One would need at least an i586 to get that. It is not observed on GNU/Linux as it allegedly defaults to i586 code generation. The trivial patch below makes this explicit so now it could build: --- ghc-prim.cabal.orig 2014-07-03 03:06:16.000000000 +0200 +++ ghc-prim.cabal 2014-07-03 20:27:49.709087595 +0200 @@ -62,3 +62,6 @@ -- We need to set the package name to ghc-prim (without a version number) -- as it's magic. ghc-options: -package-name ghc-prim + -- Due to CMPXCHG8B, we have to ask for Pentium code generation. + cc-options: -march=i586 + However, my fellow developer, Konstantin Belousov pointed out that using CMPXCHG8B would officially require checking for the corresponding CPUID bit set, because that is what truly indicates the presence of that instruction. [1] http://haskell.inf.elte.hu/builders/freebsd-i386-head/304/10.html From john at repetae.net Thu Jul 3 20:47:51 2014 From: john at repetae.net (John Meacham) Date: Thu, 3 Jul 2014 13:47:51 -0700 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? 
In-Reply-To: References: Message-ID: In case anyone wanted to start writing haskell android code now, jhc fully supports android as a target. here is an app made with it https://play.google.com/store/apps/details?id=org.metasepi.ajhc.android.cube this was made with Kiwamu's ajhc branch but code has been merged back into the main tree. On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald wrote: > This would probably be a great boon for those trying to use haskell for > Android and IOS right? how might the emulation setup work for those? > > > > > On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald > wrote: >> >> wow, this is great work! >> >> If theres a clear path to getting the generic tooling into 7.10, i'm all >> for it :) (and willing to help on concrete mechanical subtasks) >> >> >> On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman >> wrote: >>> >>> hi all, >>> >>> I've added some code [1] [2] to GHCJS to make it run Template Haskell >>> code on node.js, rather than using the GHC linker. GHCJS has supported TH >>> for a long time now, but so far always relied on native (host) code for it. >>> This is the main reason that GHCJS always builds native and JavaScript code >>> for everything (another is that Cabal Setup.hs scripts need to be compiled >>> to some host-runnable form, but that can also be JavaScript if you have >>> node.js) >>> >>> Now besides the compiler having to do twice the work, this has some other >>> disadvantages: >>> >>> - Our JavaScript code has the same dependencies (packages) as native >>> code, which means packages like unix or Win32 show up somewhere, depending >>> on the host environment. This also limits our options in choosing >>> JS-specific packages. >>> - The Template Haskell code runs on the host environment, which might be >>> slightly different from the target, for example in integer size or operating >>> system specific constants. >>> >>> Moreover, building native code made the GHCJS installation procedure more >>> tricky, making end users think about libgmp or libiconv locations, since it >>> basically required the same preparation as building GHC from source. This >>> change will make installing much easier and more reliable (we still have to >>> update the build scripts). >>> >>> How it works is pretty simple: >>> >>> - When any code needs to be run on the target (hscCompileCoreExpr, >>> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >>> the thrunner.js [3] script, >>> - GHCJS sends its RTS and the Template Haskell server code [1] to >>> node.js, the script starts a Haskell thread running the server, >>> - for every splice, GHCJS compiles it to JavaScript and links it using >>> its incremental linking functionality. The code for the splice, including >>> dependencies that have not yet been sent to the runner (for earlier >>> splices), is then sent in a RunTH [4] message, >>> - the runner loads and runs the code in the Q monad, can send queries to >>> GHCJS for reification, >>> - the runner sends back the result as a serialized Template Haskell AST >>> (using GHC.Generics for the Binary instances). >>> >>> All Template Haskell functionality is supported, including recent >>> additions for reifying modules and annotations. I still need to clean up and >>> push the patches for the directory and process packages, but after that, the >>> TH code can read/write files, run processes and interact with them and make >>> network connections, all through node.js. 
>>> >>> Now since this approach is in no way specific to JavaScript, I was >>> wondering if there's any interest in getting this functionality into GHC >>> 7.10 for general cross compilation. The runner would be a native (target) >>> program with dynamic libraries (or object files) being sent over to the >>> target machine (or emulator) for the splices. >>> >>> Thanks to Andras Slemmer from Prezi who helped build the initial proof of >>> concept (without reification) at BudHac. >>> >>> cheers, >>> >>> Luite >>> >>> [1] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >>> [2] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >>> [3] >>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >>> [4] >>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >>> >>> _______________________________________________ >>> Glasgow-haskell-users mailing list >>> Glasgow-haskell-users at haskell.org >>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>> >> > > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > -- John Meacham - http://notanumber.net/ From mark.lentczner at gmail.com Thu Jul 3 22:35:27 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Thu, 3 Jul 2014 15:35:27 -0700 Subject: Head's up, yall: Cabal's dir layout changed Message-ID: I just noticed that the default directory layout that cabal uses (under .cabal) has changed between 1.16 and 1.18. In particular: In 1.16 -- libsubdir: $pkgid/$compiler -- datasubdir: $pkgid -- docdir: $datadir/doc/$pkgid in 1.18 -- libsubdir: $arch-$os-$compiler/$pkgid -- datasubdir: $arch-$os-$compiler/$pkgid -- docdir: $datadir/doc/$arch-$os-$compiler/$pkgid This is a rather big change: When the same package is built with multiple compilers, before they were stored all under the same dir, now the packages are grouped by compiler. This change is actually better from a system-installer (and uninstaller) point of view, and the Mac platfrom has been shipping with a customer cabal wrapper that sets up the user's default in a similar fashion for years. This change breaks the existing uninstaller, and will require some retrenching on the platform... I'm going to try to reconcile the existing "Mac way" with the "new way" since they are closer. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jul 3 23:18:30 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 3 Jul 2014 19:18:30 -0400 Subject: Head's up, yall: Cabal's dir layout changed In-Reply-To: References: Message-ID: I'll admit i've pushed a lot of haskell platform users to migrate to 1.20 style ~/.cabal/config's for a while :) On Thu, Jul 3, 2014 at 6:35 PM, Mark Lentczner wrote: > I just noticed that the default directory layout that cabal uses (under > .cabal) has changed between 1.16 and 1.18. 
In particular: > > In 1.16 > -- libsubdir: $pkgid/$compiler > -- datasubdir: $pkgid > -- docdir: $datadir/doc/$pkgid > > in 1.18 > -- libsubdir: $arch-$os-$compiler/$pkgid > -- datasubdir: $arch-$os-$compiler/$pkgid > -- docdir: $datadir/doc/$arch-$os-$compiler/$pkgid > > This is a rather big change: When the same package is built with multiple > compilers, before they were stored all under the same dir, now the packages > are grouped by compiler. > > This change is actually better from a system-installer (and uninstaller) > point of view, and the Mac platfrom has been shipping with a customer cabal > wrapper that sets up the user's default in a similar fashion for years. > > This change breaks the existing uninstaller, and will require some > retrenching on the platform... I'm going to try to reconcile the existing > "Mac way" with the "new way" since they are closer. > > - Mark > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From djsamperi at gmail.com Fri Jul 4 04:43:24 2014 From: djsamperi at gmail.com (Dominick Samperi) Date: Fri, 4 Jul 2014 00:43:24 -0400 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: Hello John, I tried to install the Haskell demo Cube on my Nexus 7 and got: Error: package file was not signed correctly. D On Thu, Jul 3, 2014 at 4:47 PM, John Meacham wrote: > In case anyone wanted to start writing haskell android code now, jhc > fully supports android as a target. here is an app made with it > > https://play.google.com/store/apps/details?id=org.metasepi.ajhc.android.cube > > this was made with Kiwamu's ajhc branch but code has been merged back > into the main tree. > > On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald > wrote: >> This would probably be a great boon for those trying to use haskell for >> Android and IOS right? how might the emulation setup work for those? >> >> >> >> >> On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald >> wrote: >>> >>> wow, this is great work! >>> >>> If theres a clear path to getting the generic tooling into 7.10, i'm all >>> for it :) (and willing to help on concrete mechanical subtasks) >>> >>> >>> On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman >>> wrote: >>>> >>>> hi all, >>>> >>>> I've added some code [1] [2] to GHCJS to make it run Template Haskell >>>> code on node.js, rather than using the GHC linker. GHCJS has supported TH >>>> for a long time now, but so far always relied on native (host) code for it. >>>> This is the main reason that GHCJS always builds native and JavaScript code >>>> for everything (another is that Cabal Setup.hs scripts need to be compiled >>>> to some host-runnable form, but that can also be JavaScript if you have >>>> node.js) >>>> >>>> Now besides the compiler having to do twice the work, this has some other >>>> disadvantages: >>>> >>>> - Our JavaScript code has the same dependencies (packages) as native >>>> code, which means packages like unix or Win32 show up somewhere, depending >>>> on the host environment. This also limits our options in choosing >>>> JS-specific packages. >>>> - The Template Haskell code runs on the host environment, which might be >>>> slightly different from the target, for example in integer size or operating >>>> system specific constants. 
>>>> >>>> Moreover, building native code made the GHCJS installation procedure more >>>> tricky, making end users think about libgmp or libiconv locations, since it >>>> basically required the same preparation as building GHC from source. This >>>> change will make installing much easier and more reliable (we still have to >>>> update the build scripts). >>>> >>>> How it works is pretty simple: >>>> >>>> - When any code needs to be run on the target (hscCompileCoreExpr, >>>> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >>>> the thrunner.js [3] script, >>>> - GHCJS sends its RTS and the Template Haskell server code [1] to >>>> node.js, the script starts a Haskell thread running the server, >>>> - for every splice, GHCJS compiles it to JavaScript and links it using >>>> its incremental linking functionality. The code for the splice, including >>>> dependencies that have not yet been sent to the runner (for earlier >>>> splices), is then sent in a RunTH [4] message, >>>> - the runner loads and runs the code in the Q monad, can send queries to >>>> GHCJS for reification, >>>> - the runner sends back the result as a serialized Template Haskell AST >>>> (using GHC.Generics for the Binary instances). >>>> >>>> All Template Haskell functionality is supported, including recent >>>> additions for reifying modules and annotations. I still need to clean up and >>>> push the patches for the directory and process packages, but after that, the >>>> TH code can read/write files, run processes and interact with them and make >>>> network connections, all through node.js. >>>> >>>> Now since this approach is in no way specific to JavaScript, I was >>>> wondering if there's any interest in getting this functionality into GHC >>>> 7.10 for general cross compilation. The runner would be a native (target) >>>> program with dynamic libraries (or object files) being sent over to the >>>> target machine (or emulator) for the splices. >>>> >>>> Thanks to Andras Slemmer from Prezi who helped build the initial proof of >>>> concept (without reification) at BudHac. 
>>>> >>>> cheers, >>>> >>>> Luite >>>> >>>> [1] >>>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >>>> [2] >>>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >>>> [3] >>>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >>>> [4] >>>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >>>> >>>> _______________________________________________ >>>> Glasgow-haskell-users mailing list >>>> Glasgow-haskell-users at haskell.org >>>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>>> >>> >> >> >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >> > > > > -- > John Meacham - http://notanumber.net/ > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users From jan.stolarek at p.lodz.pl Fri Jul 4 04:43:36 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 4 Jul 2014 06:43:36 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> References: <201407031044.46291.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201407040643.36176.jan.stolarek@p.lodz.pl> > The on-screen distraction of line comments, and the nuisance of writing them, is not > trivial. Well, I don't consider one-line comments to be distracting at all. To each his own, I guess., though your opinion on this is more important than mine. But I don't agree that writing one-line comments is a nuisance. Actually, it's very simple in Emacs: M-j will start a new line of comment. Maintains indentation. M-q will reformat a single paragraph of a comment to match the default line length (known as "fill-collumn" in Emacs). Use C-x f to set or add this to your Emacs file: (setq-default fill-column 80) Janek From stegeman at gmail.com Fri Jul 4 11:03:35 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Fri, 4 Jul 2014 13:03:35 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10422771@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10422771@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Jul 3, 2014 at 6:18 PM, Simon Peyton Jones wrote: > Luite > > > > I lack the bandwidth to respond at any technical depth, but I?d like to > make encouraging noises. If you can figure out a way to make GHC do these > things without making the compiler terribly complicated and making > maintaining it harder, then I?m open to your proposals. > > > I think most of the communication code could go into a separate executable that can be built by the user for the target-specific communication. The GHC API facing part of the implementation in GHCJS is under 300 lines and that includes some non-exported code duplicated from GHC, so I'm reasonably optimistic that it can be done without too much impact. 
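To make the shape of that communication concrete, here is a minimal sketch of the kind of serialisable message type such a compiler-to-runner protocol could be built around. The names below are invented for illustration and are not the actual GHCJS or GHC types (the real message type lives in the GHCJS/Prim/TH/Types.hs file linked above); it also assumes a binary version whose Binary class has GHC.Generics-based default methods.

{-# LANGUAGE DeriveGeneric #-}
-- Illustration only: a rough compiler <-> splice-runner protocol in the
-- spirit of the design described in this thread, not the real GHCJS API.
module THRunnerSketch where

import Data.Binary          (Binary, decode, encode)
import Data.ByteString.Lazy (ByteString)
import GHC.Generics         (Generic)

-- Messages the compiler sends to the runner process.
data ToRunner
  = RunTH SpliceKind ByteString  -- incrementally linked code for one splice
  | FinishSession                -- tell the runner to shut down
  deriving (Show, Generic)

-- Messages the runner sends back while evaluating a splice.
data FromRunner
  = QueryReify String            -- reification request, answered by the compiler
  | SpliceDone ByteString        -- serialised Template Haskell result
  | SpliceError String           -- the splice failed with this message
  deriving (Show, Generic)

data SpliceKind = ExprSplice | DeclSplice | TypeSplice
  deriving (Show, Generic)

-- binary's Generic-based defaults give a wire format without hand-written
-- instances.
instance Binary ToRunner
instance Binary FromRunner
instance Binary SpliceKind

-- Round trip, as the compiler side would do over a pipe or socket.
roundTrip :: ToRunner
roundTrip = decode (encode (RunTH ExprSplice mempty))

In the GHCJS setup described above the RunTH payload is the incrementally linked JavaScript for the splice and the reply carries a Binary-serialised Template Haskell AST; a native cross-compilation runner would ship object code or a dynamic library instead.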
Unfortunately I won't have much time in the near future, since a GHCJS release is well overdue (mostly because I keep adding features like this...) and I want to focus on that first, but getting it ready before 7.10 should be doable (especially if other people want to help!) luite -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jul 4 23:09:58 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 4 Jul 2014 23:09:58 +0000 Subject: Holiday Message-ID: <618BE556AADD624C9C918AA5D5911BEF10426293@DB3PRD3001MB020.064d.mgd.msft.net> Friends I'm on holiday all next week, back 14 July. I'm not taking my laptop! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at repetae.net Fri Jul 4 23:58:07 2014 From: john at repetae.net (John Meacham) Date: Fri, 4 Jul 2014 16:58:07 -0700 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: Message-ID: Hmm.. It works on my nexus 4. Kiwamu of the metasepi http://ajhc.metasepi.org/ is the one that uploaded the demo. Perhaps he needs to update the key or something. On Thu, Jul 3, 2014 at 9:43 PM, Dominick Samperi wrote: > Hello John, > I tried to install the Haskell demo Cube on my Nexus 7 > and got: Error: package file was not signed correctly. > D > > On Thu, Jul 3, 2014 at 4:47 PM, John Meacham wrote: >> In case anyone wanted to start writing haskell android code now, jhc >> fully supports android as a target. here is an app made with it >> >> https://play.google.com/store/apps/details?id=org.metasepi.ajhc.android.cube >> >> this was made with Kiwamu's ajhc branch but code has been merged back >> into the main tree. >> >> On Wed, Jul 2, 2014 at 5:54 PM, Carter Schonwald >> wrote: >>> This would probably be a great boon for those trying to use haskell for >>> Android and IOS right? how might the emulation setup work for those? >>> >>> >>> >>> >>> On Wed, Jul 2, 2014 at 2:20 PM, Carter Schonwald >>> wrote: >>>> >>>> wow, this is great work! >>>> >>>> If theres a clear path to getting the generic tooling into 7.10, i'm all >>>> for it :) (and willing to help on concrete mechanical subtasks) >>>> >>>> >>>> On Wed, Jul 2, 2014 at 12:14 PM, Luite Stegeman >>>> wrote: >>>>> >>>>> hi all, >>>>> >>>>> I've added some code [1] [2] to GHCJS to make it run Template Haskell >>>>> code on node.js, rather than using the GHC linker. GHCJS has supported TH >>>>> for a long time now, but so far always relied on native (host) code for it. >>>>> This is the main reason that GHCJS always builds native and JavaScript code >>>>> for everything (another is that Cabal Setup.hs scripts need to be compiled >>>>> to some host-runnable form, but that can also be JavaScript if you have >>>>> node.js) >>>>> >>>>> Now besides the compiler having to do twice the work, this has some other >>>>> disadvantages: >>>>> >>>>> - Our JavaScript code has the same dependencies (packages) as native >>>>> code, which means packages like unix or Win32 show up somewhere, depending >>>>> on the host environment. This also limits our options in choosing >>>>> JS-specific packages. >>>>> - The Template Haskell code runs on the host environment, which might be >>>>> slightly different from the target, for example in integer size or operating >>>>> system specific constants. 
>>>>> >>>>> Moreover, building native code made the GHCJS installation procedure more >>>>> tricky, making end users think about libgmp or libiconv locations, since it >>>>> basically required the same preparation as building GHC from source. This >>>>> change will make installing much easier and more reliable (we still have to >>>>> update the build scripts). >>>>> >>>>> How it works is pretty simple: >>>>> >>>>> - When any code needs to be run on the target (hscCompileCoreExpr, >>>>> through the Hooks API new in GHC 7.8), GHCJS starts a node.js process with >>>>> the thrunner.js [3] script, >>>>> - GHCJS sends its RTS and the Template Haskell server code [1] to >>>>> node.js, the script starts a Haskell thread running the server, >>>>> - for every splice, GHCJS compiles it to JavaScript and links it using >>>>> its incremental linking functionality. The code for the splice, including >>>>> dependencies that have not yet been sent to the runner (for earlier >>>>> splices), is then sent in a RunTH [4] message, >>>>> - the runner loads and runs the code in the Q monad, can send queries to >>>>> GHCJS for reification, >>>>> - the runner sends back the result as a serialized Template Haskell AST >>>>> (using GHC.Generics for the Binary instances). >>>>> >>>>> All Template Haskell functionality is supported, including recent >>>>> additions for reifying modules and annotations. I still need to clean up and >>>>> push the patches for the directory and process packages, but after that, the >>>>> TH code can read/write files, run processes and interact with them and make >>>>> network connections, all through node.js. >>>>> >>>>> Now since this approach is in no way specific to JavaScript, I was >>>>> wondering if there's any interest in getting this functionality into GHC >>>>> 7.10 for general cross compilation. The runner would be a native (target) >>>>> program with dynamic libraries (or object files) being sent over to the >>>>> target machine (or emulator) for the splices. >>>>> >>>>> Thanks to Andras Slemmer from Prezi who helped build the initial proof of >>>>> concept (without reification) at BudHac. 
>>>>> >>>>> cheers, >>>>> >>>>> Luite >>>>> >>>>> [1] >>>>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/src/Gen2/TH.hs >>>>> [2] >>>>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Eval.hs >>>>> [3] >>>>> https://github.com/ghcjs/ghcjs/blob/414eefb2bb8825b3c4c5cddfec4d79a142bc261a/lib/etc/thrunner.js >>>>> [4] >>>>> https://github.com/ghcjs/ghcjs-prim/blob/2dffdc2d732b044377037e1d6ebeac2812d4f9a4/GHCJS/Prim/TH/Types.hs#L29 >>>>> >>>>> _______________________________________________ >>>>> Glasgow-haskell-users mailing list >>>>> Glasgow-haskell-users at haskell.org >>>>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>>>> >>>> >>> >>> >>> _______________________________________________ >>> Glasgow-haskell-users mailing list >>> Glasgow-haskell-users at haskell.org >>> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users >>> >> >> >> >> -- >> John Meacham - http://notanumber.net/ >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users -- John Meacham - http://notanumber.net/ From ptrommler at me.com Sat Jul 5 10:34:03 2014 From: ptrommler at me.com (Peter Trommler) Date: Sat, 05 Jul 2014 12:34:03 +0200 Subject: "No entry for "Haskell CPP command"" error with testsuite In-Reply-To: <1404404864-sup-6449@sabre> References: <201407030844.11869.jan.stolarek@p.lodz.pl> <201407031739.59515.jan.stolarek@p.lodz.pl> <1404404864-sup-6449@sabre> Message-ID: <0B0A7109-6CAF-4E28-95DE-279A09675B63@me.com> I think this particular case can be solved by making configure depend on configure.ac and aclocal.m4. Peter On 03.07.2014, at 18:28, Edward Z. Yang wrote: > I wonder if we should have some magic variable in the build system > that induces a full rebuild, so if you push a patch which requires > a rebuild you can bump the variable and let it know. > > Edward > > Excerpts from Carter Schonwald's message of 2014-07-03 17:20:44 +0100: >> yay! :) >> (i spent quite a bit of time working on that patch, so you scared me >> initially) >> >> On Thu, Jul 3, 2014 at 11:39 AM, Jan Stolarek >> wrote: >> >>>> Did you do a make maintainer-clean ; perl boot ; ./configure >>>> After you pulled the new changes? It patched the build system so you have >>>> to do the above. Can't reuse a previous build >>> I guess that was it. Did a clean build and all is fine. >>> >>> Janek >>> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From stegeman at gmail.com Sat Jul 5 20:09:08 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Sat, 5 Jul 2014 22:09:08 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: <53B821A0.70508@pkturner.org> Message-ID: How would you do reification with that approach? On Sat, Jul 5, 2014 at 9:59 PM, John Meacham wrote: > Actually, I was looking into it a little, and template haskell could > effectively be implemented by a pre-processor and a portable library > that is compiler independent. If one could get ghc to spit out the > template haskell source after it expands it then that can be fed to > jhc as a quick first pass, but ideally the pre-processor TH would > create programs that can be run under the target compiler. 
that would > bring TH to every haskell compiler. > > John > > On Sat, Jul 5, 2014 at 10:38 AM, Brandon Allbery > wrote: > > On Sat, Jul 5, 2014 at 1:34 PM, Carter Schonwald > > wrote: > >> > >> does JHC support template haskell? > > > > > > Pretty sure TH is too closely tied to ghc. > > > > -- > > brandon s allbery kf8nh sine nomine > associates > > allbery.b at gmail.com > ballbery at sinenomine.net > > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > > > -- > John Meacham - http://notanumber.net/ > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at repetae.net Sat Jul 5 20:39:22 2014 From: john at repetae.net (John Meacham) Date: Sat, 5 Jul 2014 13:39:22 -0700 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? In-Reply-To: References: <53B821A0.70508@pkturner.org> Message-ID: The target compiler would have the TH libraries, which could be made to be portable. The external program would just extract the TH bits and turn them into a program that spits the TH expanded output to a new file to compile, and repeat the process til no TH expansions exist and finally that is the result you pass to the compiler. John On Sat, Jul 5, 2014 at 1:09 PM, Luite Stegeman wrote: > How would you do reification with that approach? > > > On Sat, Jul 5, 2014 at 9:59 PM, John Meacham wrote: >> >> Actually, I was looking into it a little, and template haskell could >> effectively be implemented by a pre-processor and a portable library >> that is compiler independent. If one could get ghc to spit out the >> template haskell source after it expands it then that can be fed to >> jhc as a quick first pass, but ideally the pre-processor TH would >> create programs that can be run under the target compiler. that would >> bring TH to every haskell compiler. >> >> John >> >> On Sat, Jul 5, 2014 at 10:38 AM, Brandon Allbery >> wrote: >> > On Sat, Jul 5, 2014 at 1:34 PM, Carter Schonwald >> > wrote: >> >> >> >> does JHC support template haskell? >> > >> > >> > Pretty sure TH is too closely tied to ghc. >> > >> > -- >> > brandon s allbery kf8nh sine nomine >> > associates >> > allbery.b at gmail.com >> > ballbery at sinenomine.net >> > unix, openafs, kerberos, infrastructure, xmonad >> > http://sinenomine.net >> >> >> >> -- >> John Meacham - http://notanumber.net/ >> _______________________________________________ >> Glasgow-haskell-users mailing list >> Glasgow-haskell-users at haskell.org >> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > > -- John Meacham - http://notanumber.net/ From stegeman at gmail.com Sat Jul 5 20:56:40 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Sat, 5 Jul 2014 22:56:40 +0200 Subject: GHCJS now runs Template Haskell on node.js - Any interest in out of process TH for general cross compilation? 
In-Reply-To: References: <53B821A0.70508@pkturner.org> Message-ID: I'm not sure I correctly understand your approach, but to have the template haskell reification work without any runtime communication with the compiler you'd have to include the entire typechecker state, at least for all names reachable from the splice (see http://hackage.haskell.org/package/template-haskell-2.9.0.0/docs/Language-Haskell-TH-Syntax.html , the Quasi class for the required functionality). This would mean serializing all names with types modules, annotations, instances. I briefly looked into this for GHCJS but decided that just querying the compiler would be better. On Sat, Jul 5, 2014 at 10:39 PM, John Meacham wrote: > The target compiler would have the TH libraries, which could be made > to be portable. The external program would just extract the TH bits > and turn them into a program that spits the TH expanded output to a > new file to compile, and repeat the process til no TH expansions exist > and finally that is the result you pass to the compiler. > > John > > On Sat, Jul 5, 2014 at 1:09 PM, Luite Stegeman wrote: > > How would you do reification with that approach? > > > > > > On Sat, Jul 5, 2014 at 9:59 PM, John Meacham wrote: > >> > >> Actually, I was looking into it a little, and template haskell could > >> effectively be implemented by a pre-processor and a portable library > >> that is compiler independent. If one could get ghc to spit out the > >> template haskell source after it expands it then that can be fed to > >> jhc as a quick first pass, but ideally the pre-processor TH would > >> create programs that can be run under the target compiler. that would > >> bring TH to every haskell compiler. > >> > >> John > >> > >> On Sat, Jul 5, 2014 at 10:38 AM, Brandon Allbery > >> wrote: > >> > On Sat, Jul 5, 2014 at 1:34 PM, Carter Schonwald > >> > wrote: > >> >> > >> >> does JHC support template haskell? > >> > > >> > > >> > Pretty sure TH is too closely tied to ghc. > >> > > >> > -- > >> > brandon s allbery kf8nh sine nomine > >> > associates > >> > allbery.b at gmail.com > >> > ballbery at sinenomine.net > >> > unix, openafs, kerberos, infrastructure, xmonad > >> > http://sinenomine.net > >> > >> > >> > >> -- > >> John Meacham - http://notanumber.net/ > >> _______________________________________________ > >> Glasgow-haskell-users mailing list > >> Glasgow-haskell-users at haskell.org > >> http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > > > > > > > > -- > John Meacham - http://notanumber.net/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pointlessmonad at gmail.com Sun Jul 6 19:25:13 2014 From: pointlessmonad at gmail.com (Christopher Rodrigues) Date: Sun, 6 Jul 2014 14:25:13 -0500 Subject: GHC silently turns off dynamic output, should this be an error? In-Reply-To: <20140624203609.GA12853@matrix.chaos.earth.li> References: <20140624203609.GA12853@matrix.chaos.earth.li> Message-ID: Investigating #9176 further, GHC does at least three different things with imported modules under -dynamic-too compilation. It probably should be changed to handle imports uniformly. It looks like dynamic-too support was bolted on to a system designed for one-pass compilation, and I'm not clear on how to restructure it. Guidance would be appreciated, as it would cut down the time I need to figure out how the driver works. Here are the three things that can happen: 1. When compiling in one-shot mode, with -c, GHC will do what Igloo described. 
If normal and dynamic interface hashes don't match, it will compile the normal way, then compile again the dynamic way. 2. When compiling with --make and importing from a package, if normal and dynamic interface hashes don't match, GHC will disable dynamic-too, but it will not restart the pipeline. Only the normal .hi file is produced. 3. When compiling with --make and importing from a non-package module, GHC will not check interface hashes. It will proceed as if normal and dynamic interface hashes match. -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From william.knop.nospam at gmail.com Sun Jul 6 23:26:40 2014 From: william.knop.nospam at gmail.com (William Knop) Date: Sun, 6 Jul 2014 19:26:40 -0400 Subject: Continuous Integration and Cross Compilation In-Reply-To: References: <11C296CC-10A6-4E76-B959-13FBB4C69651@gmail.com> <2A968885-C68E-4D51-A374-1CF063D34FEC@gmail.com> Message-ID: Hi Pali, Apologies for the delayed response. I treated cloud compilation as ?free? in the context of the buildbots. If we can cross-compile (on Amazon EC2 or the like) ghcs which run on each arch we have for buildbots, the buildbots themselves will have 1/5 the load. I came to that figure from the buildbot page, where it looked like the average compile time was around 80 minutes, and the average test suite run was around 20 minutes. I see your point about cloud cross compilation and buildbot testing not covering all cases of regressions. I think this is where the CI vs. nightly builds distinction applies well. Cloud compilation and buildbot testing may be fast enough to do CI on every patch set, while total regression coverage could be provided by nightly builds. Jenkins CI allows us to roll our own CI with our own machines, cloud compute services, and loads of other content/auditing/workflow services. That said, while I think it would be nice to have quick CI in addition to nightly builds, I don?t know if it?s sensible/desired for ghc. Since Jerkins CI is stable yet very actively developed, it seems at least it wouldn't incur too much maintenance on our part. Of course, the devil is in the details, so I?d be happy to set it up on a few of my machines to investigate. Will On Jun 20, 2014, at 6:15 AM, P?li G?bor J?nos wrote: > Hello William, > > 2014-06-20 0:50 GMT+02:00 William Knop : >> 1. We have a pretty good spread of buildbots, but as far as I know there aren?t >> very many of them. Running only the test suite would increase their utility by >> roughly 5x (from looking at the buildbot time breakdowns [1]). > > How would this increase their utility? I naively believe the purpose > of CI is to rebuild and test the source code after each changeset to > see if it was bringing regressions. Running the test suite only does > not seem to convey this. Many of the regressions could be observed > build-time, which means the most safe bet would be to rebuild and test > everything on the very same platform. > >> 2. Building ghc is time and resource intensive, which makes it hard for people >> to host buildbots. Even though my machines are relatively new, I can?t usually >> host one because it would interfere with my other work. I would be more >> tempted to if it was limited to just the test suite, and perhaps others would as >> well. > > My buildbots complete the steps (git clone, full build, testing) in > about 1 hour 40 minutes (with about 1 hour 15 minutes spent in the > compilation phase), while they run in parallel with a shift about an > hour. 
They run on the same machine, together with the coordination > server. This is just a 3.4-GHz 4-core Intel Core i5, with a couple of > GBs of RAM, I would not call it a high-end box, though. > > Note that it is on purpose that the builders do not use -j for builds, > meaning that they do not parallelize the invoked make(1)-subprocesses, > which automatically makes the builds longer. Perhaps it would be > worth experimenting with incremental builds and allowing for parallel > builds as they could cut down on the build times more efficiently. From william.knop.nospam at gmail.com Mon Jul 7 01:40:17 2014 From: william.knop.nospam at gmail.com (William Knop) Date: Sun, 6 Jul 2014 21:40:17 -0400 Subject: Continuous Integration and Cross Compilation In-Reply-To: References: <11C296CC-10A6-4E76-B959-13FBB4C69651@gmail.com> <2A968885-C68E-4D51-A374-1CF063D34FEC@gmail.com> Message-ID: <4D785CC1-8275-4F3F-BA72-D5E8B30607E8@gmail.com> Hi again, I think I may have been too brief in my reply. To recap previous discussion, it seems there are a few pieces which can be approached separately: 1) arbitrary/discretionary cross compilation 2) continuous integration for all patchsets 3) nightly builds The first, as has been pointed out, is a lot of nontrivial work. The second either requires the first and a cloud service, or a lot of hardware (though it was mentioned that the buildbots can work in a CI mode). The third, we already have, thanks to the buildbots and those who have set them up. I think using Jenkins may be a step in the right direction for a few reasons: ? there are hundreds of supported plugins [1] which cover notifications, code review [2], cloud computing services, and so on ? there is quite a lot of polish as far as generated reports go [3] ? it seems easy/nice to use out of the box (from a few minutes? fiddling on my part) Now, I don?t have much experience with buildbots, so I may be unfairly elevating Jenkins here. If buildbots can be easily extended to do exactly what we need, I?m all for it, and in that case I?d volunteer to help in that regard. Will [1] https://wiki.jenkins-ci.org/display/JENKINS/Plugins [2] http://www.dctrwatson.com/2013/01/jenkins-and-phabricator/ [3] https://ci.jenkins-ci.org On Jul 6, 2014, at 7:26 PM, William Knop wrote: > Hi Pali, > > Apologies for the delayed response. > > I treated cloud compilation as ?free? in the context of the buildbots. If we can cross-compile (on Amazon EC2 or the like) ghcs which run on each arch we have for buildbots, the buildbots themselves will have 1/5 the load. I came to that figure from the buildbot page, where it looked like the average compile time was around 80 minutes, and the average test suite run was around 20 minutes. > > I see your point about cloud cross compilation and buildbot testing not covering all cases of regressions. I think this is where the CI vs. nightly builds distinction applies well. Cloud compilation and buildbot testing may be fast enough to do CI on every patch set, while total regression coverage could be provided by nightly builds. Jenkins CI allows us to roll our own CI with our own machines, cloud compute services, and loads of other content/auditing/workflow services. > > That said, while I think it would be nice to have quick CI in addition to nightly builds, I don?t know if it?s sensible/desired for ghc. Since Jerkins CI is stable yet very actively developed, it seems at least it wouldn't incur too much maintenance on our part. 
Of course, the devil is in the details, so I?d be happy to set it up on a few of my machines to investigate. > > Will > > > On Jun 20, 2014, at 6:15 AM, P?li G?bor J?nos wrote: > >> Hello William, >> >> 2014-06-20 0:50 GMT+02:00 William Knop : >>> 1. We have a pretty good spread of buildbots, but as far as I know there aren?t >>> very many of them. Running only the test suite would increase their utility by >>> roughly 5x (from looking at the buildbot time breakdowns [1]). >> >> How would this increase their utility? I naively believe the purpose >> of CI is to rebuild and test the source code after each changeset to >> see if it was bringing regressions. Running the test suite only does >> not seem to convey this. Many of the regressions could be observed >> build-time, which means the most safe bet would be to rebuild and test >> everything on the very same platform. >> >>> 2. Building ghc is time and resource intensive, which makes it hard for people >>> to host buildbots. Even though my machines are relatively new, I can?t usually >>> host one because it would interfere with my other work. I would be more >>> tempted to if it was limited to just the test suite, and perhaps others would as >>> well. >> >> My buildbots complete the steps (git clone, full build, testing) in >> about 1 hour 40 minutes (with about 1 hour 15 minutes spent in the >> compilation phase), while they run in parallel with a shift about an >> hour. They run on the same machine, together with the coordination >> server. This is just a 3.4-GHz 4-core Intel Core i5, with a couple of >> GBs of RAM, I would not call it a high-end box, though. >> >> Note that it is on purpose that the builders do not use -j for builds, >> meaning that they do not parallelize the invoked make(1)-subprocesses, >> which automatically makes the builds longer. Perhaps it would be >> worth experimenting with incremental builds and allowing for parallel >> builds as they could cut down on the build times more efficiently. > From hvriedel at gmail.com Mon Jul 7 07:19:28 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 07 Jul 2014 09:19:28 +0200 Subject: Continuous Integration and Cross Compilation In-Reply-To: <4D785CC1-8275-4F3F-BA72-D5E8B30607E8@gmail.com> (William Knop's message of "Sun, 6 Jul 2014 21:40:17 -0400") References: <11C296CC-10A6-4E76-B959-13FBB4C69651@gmail.com> <2A968885-C68E-4D51-A374-1CF063D34FEC@gmail.com> <4D785CC1-8275-4F3F-BA72-D5E8B30607E8@gmail.com> Message-ID: <874myt7i0v.fsf@gmail.com> On 2014-07-07 at 03:40:17 +0200, William Knop wrote: [...] > I think using Jenkins may be a step in the right direction for a few reasons: > > ? there are hundreds of supported plugins [1] which cover notifications, code review [2], cloud computing services, and so on > ? there is quite a lot of polish as far as generated reports go [3] > ? it seems easy/nice to use out of the box (from a few minutes? fiddling on my part) > > Now, I don?t have much experience with buildbots, so I may be unfairly > elevating Jenkins here. If buildbots can be easily extended to do > exactly what we need, I?m all for it, and in that case I?d volunteer > to help in that regard. Btw, one feature I don't know how to achieve with Jenkins (yet): - Try to build/test every single commit, while - priorizing latest commits, - work its way back during idle-time (which is more or less what http://bitten.edgewall.org/ does) For GHC, since it's properly submoduled now, it would suffice to test each single commit in ghc.git. 
Having easily accessible metrics for each single commit (on various configurations) would be very useful. For instance, knowing the last-known-working commit is especially important if the latest commit fails to build (or exhibits some other significant metric regression). In some cases this can save developers time to git-bisect manually, as the answer is already in plain sight. Does anyone have any idea how so set something like that up with Jenkins? Cheers, hvr From johan.tibell at gmail.com Mon Jul 7 07:38:44 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 7 Jul 2014 09:38:44 +0200 Subject: ANN: New source documentation policy for GHC Message-ID: Hi all! After some discussion [1] we've decided to require Haddock comments for all top-level entities (i.e. functions, classes, and data types) in new [2] GHC code. If you're writing new code, please try to add at least a sentence of two documenting what the function does and why. When you're reviewing patches, please ask the author to add comments. This policy doesn't replace GHC's use of [Notes], which talk mostly about implementation details, but instead compliments it by making it clearer how to *use* the code. See the original thread [1] for some example comments. We will use Haddock for our comments, in order to make the generated Haddock docs more useful, but we don't require that you use any special markup in the comments themselves or required that you validate that your docs render nicely [3]. We're not adding any technical enforcement [4], to avoid extending the compile/validate cycle, instead rely on social enforcement; please encourage people to write comments and remind them during code review! Cheers, Johan 1. https://www.mail-archive.com/ghc-devs at haskell.org/msg05135.html 2. Documenting old code is of course also much appreciated! 3. This is a pragmatic trade-off to not add too much extra work for frequent contributions. It's easy for anyone to fix up bad markup, so allowing some bad markup to slip in temporarily isn't a big issue. 4. We might try to add lint warning to Phabricator, to serve as an extra reminder to patch authors. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mietek at bak.io Mon Jul 7 15:56:33 2014 From: mietek at bak.io (=?windows-1252?Q?Mi=EBtek_Bak?=) Date: Mon, 7 Jul 2014 16:56:33 +0100 Subject: GHC status report Message-ID: On 2014-05-01, at 23:46, Edward Kmett wrote: > With the old custom linker we weren't able to get our custom MPFR linked in > properly on all platforms for use in ghci. > > On Macs we ran into some rather interesting problems. We could get it to > work for actual executables, but ghci would segfault with stuff resolved to > clearly wrong addresses. If I recall correctly it may have been some kind > of MachO symbol type that wasn't being resolved properly by the custom GHC > linker, perhaps? We chased after it off and on for a long time to no avail. Did these segfaults look like the following GHCi 7.6.3 crash? https://ghc.haskell.org/trac/ghc/ticket/9278 It looks like Jason ran into something similar last year: https://gist.github.com/dagit/5986541 Unfortunately, while the new linker in GHCi 7.8.2 doesn?t suffer from this problem, there appears to be another issue: https://ghc.haskell.org/trac/ghc/ticket/9277 Any clues? 
Thanks, -- Mi?tek From lukexipd at gmail.com Tue Jul 8 06:06:12 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Mon, 7 Jul 2014 23:06:12 -0700 Subject: ARM64 Task Force Message-ID: Howdy all, Would anyone like to team up on getting ARM64 support into GHC? Cheers Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Tue Jul 8 09:57:22 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 8 Jul 2014 11:57:22 +0200 Subject: Continuous Integration and Cross Compilation In-Reply-To: <4D785CC1-8275-4F3F-BA72-D5E8B30607E8@gmail.com> References: <11C296CC-10A6-4E76-B959-13FBB4C69651@gmail.com> <2A968885-C68E-4D51-A374-1CF063D34FEC@gmail.com> <4D785CC1-8275-4F3F-BA72-D5E8B30607E8@gmail.com> Message-ID: 2014-07-07 3:40 GMT+02:00 William Knop : > I think using Jenkins may be a step in the right direction for a few reasons: [..] > Now, I don?t have much experience with buildbots, so I may be unfairly > elevating Jenkins here. If buildbots can be easily extended to do exactly what > we need, I?m all for it, and in that case I?d volunteer to help in that regard. I do not see any problem if you decide to go with Jenkins. I volunteered to maintain the buildbots because I felt it useful for maintaining the FreeBSD port, and because it did not require more than a working Haskell software stack which would the compilation require anyway. To be honest, I do not really want to fiddle with Jenkins and cloud services, and I would feel overkill to turn the buildbots into a fully-fledged CI service. This a home-brew solution and probably has no chance to compete with Jenkins, and it does not want to. I like it is implemented in the functional programming domain, that is all. For what it is worth, I am planning to extend the buildbots with more long-term testing instead. For example, it would be nice to add steps for building cabal-install and then build Stackage on every available platforms to provide some more real-world load. As a side-effect, we could also provide up-to-date snapshots for the users if they do not want to build the sources themselves, which may help with testing. I also want to add clang-based validators to see if everything works with Clang as well. And, of course, working out heuristics for spotting valid errors from the logs without much human intervention. Note that the aforementioned 80 minutes build time and 20 minutes build time is due to the single-threaded build and testing done from scratch. Obviously, with incremental builds and on more multiple threads, things would get quicker, but -- as I wrote previously -- they are disabled for clarity/correctness. Although, what we could do is launching builds for every commit, so they could preserve this invariant while utilizing the underlying hardware more. But that is where I feel this would be just in vain; Jenkins is probably a lot better solution, especially that it integrates nicely with the recently introduced Phabricator. Unfortunately, I cannot offer any experience in that regard. I see the diversity of testing implementations as an advantage, I naively believe different solutions can peacefully co-exist at the same time, helping each other. From gergo at erdi.hu Tue Jul 8 12:38:25 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Tue, 8 Jul 2014 20:38:25 +0800 (SGT) Subject: Explicitly bidirectional pattern synonyms for 7.8.3? 
Message-ID: Hi, I've spent the last couple evenings implementing explicitly-bidirectional pattern synonyms, partly while waiting for someone to help out with #9023. Explicitly-bidirectional pattern synonyms are described in #8581. My question is: now that I have this mostly working, is this something that could be added to 7.8.3, or is this too much of a new feature for a point release? I'd like to know because then I can prioritize finishing the work (it still needs some tests added and also someone to review the changes). Thanks, Gergo From marlowsd at gmail.com Tue Jul 8 14:14:31 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 08 Jul 2014 07:14:31 -0700 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> References: <201407031044.46291.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF10422828@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53BBFCC7.4040902@gmail.com> On 03/07/2014 09:29, Simon Peyton Jones wrote: > * A *primary* form of consumption is the source code itself. I've found that > Haddock-compliant comments can be rather less readable in source code. > (Eg. CoreSyn.lhs where the #blah# notation coexists uneasily with Note [blah].) > So I'd be nervous of mandating Haddock-compliance. I agree with this point, and it's why I always disliked literate source code: we only look at source code in a text editor, so using unwieldy markup compromises the most common use case. But I don't think there was a proposal to use Haddock markup everywhere (e.g. Notes), only in function documentation. Notes are for looking at in a text editor, so we can use whatever conventions we like. Cheers, Simon From marlowsd at gmail.com Tue Jul 8 14:31:58 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 08 Jul 2014 07:31:58 -0700 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: Message-ID: <53BC00DE.9050807@gmail.com> Austin didn't mention this, so I will: we have a wiki page for style https://ghc.haskell.org/trac/ghc/wiki/Commentary/CodingStyle It has a pretty clear set of guidelines for imports/exports, for example (that we don't follow as much as we should). I'd be in favour of changing .lhs files to .hs files, replacing all the \begin{code}...\end{code} with -}...{-. As I said in my reply to Simon, literate source files aren't providing any real benefit to us, and in the name of consistency this would be a positive step. I'm all in favour of gardening the code base to clean up things like this. However, the best time for a big stylistic sweep is a time that minimizes the number of merges we have to do across these commits. That would be just before we branch for a new major release; hopefully at that point most of the feature branches will be merged and we're not going to merge any further patches into the previous release branch. I'm less enthusiastic about fixing whitespace things. It's a tough call, but I'm guessing that fixing it would cause more pain than not fixing it. Opinions might differ, and I wouldn't mind at all if the consensus were to do a whitespace sweep too. One other thing I'd like to propose is an 80-column limit on new code. Personally I've always used an 80-column limit for various reasons. 
This is the biggest bikeshed ever and we could talk all day about it, but here are a couple of concrete points that I think are uncontroversial: - there has to be *some* limit, so that we know how wide to make our windows. The only valid discussion is what the limit should be. - Phabricator's side-by-side diffs are hard to read on a laptop screen when lines go beyond 80 columns. And I think 80 is a good enough number, especially for Haskell where you can pack a lot into an 80-column line. Phabricator is already flagging up >80 column lines in its linter, which is its default setting. Cheers, Simon On 02/07/2014 12:59, Austin Seipp wrote: > Hi *, > > First off, WARNING: BIKESHEDDING AHEAD. > > With that out of the way - today on IRC, there was some discussion > about some stylistic/consistency issues in GHC, and being spurred by > Johans recent proposal for top-level documentation, I figured perhaps > we should beat the drum on this issue as well. > > The TL;DR is that GHC has a lot of inconsistent style issues, > including things like: > > - Mixing literate haskell with non-literate haskell files > - Legacy code with tabs and spaces intermixed > - Related to the last one, trailing whitespace > - Mixing styles of do notation in different parts of the compiler > (braces vs no braces) > - Probably things like indentation mismatches even in the same code > - Probably many other things I've missed, obvious or not. > > These issues by themselves aren't too bad, but together they make the > coding style for GHC very inconsistent, and this hurts maintainability > a bit I feel. Furthermore, some of these issues block related > improvements - for example, > https://ghc.haskell.org/trac/ghc/ticket/9230 which is probably quite > reasonable will likely be a bit annoying to implement until GHC itself > is de-tabbed - we use -Werror during ./validate. This particular issue > is what started the discussion. > > Also, with developers now using arcanist and phabricator, they have > linting enabled for new patches, but they will often warn about > surrounding issues, mostly tabs and trailing spaces. This is a bit > annoying for submitters, and would be fixed by enforcing it. > > First attack plan > ~~~~~~~~~~~~~~~ > > So, to start, I'd like to propose that we make some guidelines for > these kinds of things, and also a plan to fix some of them. To start: > > #1) We should really consider going ahead and detabbing the remaining > files that have them. We already enforce this on new commits with git > hooks, but by doing this, we can make -fwarn-tabs a default flag and > then validate with -Werror in the development process. > > #2) Similarly, we should kill all the trailing whitespace. (I think > this is less controversial than #1) > > #3) We should most certainly move the remaining files from literate > haskell to non-literate haskell. Most of the files in the compiler are > already in this form, and the literate haskell documentation can't be > used to generate PDFs or anything similar. I suggest we get rid of it. > More Haskell users use non-literate files anyway. This is probably the > least controversial. > > Merge issues > ~~~~~~~~~~~~~~~~~ > > The reason we haven't done the above three things historically is that > it makes merge conflicts nastier. A useful approximation suggested on > IRC might be to detab and remove whitespace for files older than a > certain date (say, 6 months). > > However, in general I'm thinking perhaps it's best to go ahead and > bite the bullet. maybe. 
I'd like to know what other people think! If > we have a vote and most people are in favor of doing this, maybe we > should really do it. > > I'd especially like to hear about this if you have an outstanding branch. > > Some numbers on these issues > ~~~~~~~~~~~~~~~~~~~~~~~~ > > Here are some quick numbers on where most of the tabs reside, as well > as the breakdown of literate files vs non-literate files. > > NOTE: these tests occurred in the 'compiler' subdirectory of the GHC > repository, which is where most of the relevant code is. > > LITERATE vs NON-LITERATE: > > $ find . -type f -iname '*.hs' | wc -l > 206 > > $ find . -type f -iname '*.lhs' | wc -l > 194 > > Non-literate wins by a slim margin! But having the compiler divided in > half is really not a good thing IMO... > > NUMBER OF TABS PER SUBDIRECTORY: > > NOTE: this counts the number of lines which have tabs in them. It does > not count the total number of tab occurrences. > > $ for x in `echo */`; do echo -n "$x:\t\t"; find $x -type f -regex > '.*\.\(lhs\|hs\)' | xargs grep -P '\t' | wc -l; done > basicTypes/: 919 > cbits/: 0 > cmm/: 38 > codeGen/: 0 > coreSyn/: 843 > deSugar/: 545 > ghci/: 90 > hsSyn/: 120 > iface/: 213 > llvmGen/: 0 > main/: 8 > nativeGen/: 1213 > parser/: 19 > prelude/: 182 > profiling/: 39 > rename/: 188 > simplCore/: 754 > simplStg/: 0 > specialise/: 0 > stgSyn/: 0 > stranal/: 336 > typecheck/: 1171 > types/: 301 > utils/: 220 > vectorise/: 0 > > From these numbers, we can see a few useful things at least, primarily > that there are definitely some places where removing tabs should be > easy. For example, parser/, profiling/, main/, and cmm/ can all be > de-tabbed without much of a problem, I think. > > nativeGen is very often not touched, so even though it has a *huge* > amount of tabs, it can likely be de-tabbed as well with minimal > impact. > > Other style issues > ~~~~~~~~~~~~~~~~~ > > We should also discuss some related issues, like what general > block-width to use for indentations, naming conventions, and other > stuff. However, I leave this all to you, and perhaps it is best we > split that part off into a separate thread. Some things I'd like you > all to consider: > > - Block width for indentation > - Naming conventions (we use camelCase and_underscores_sometimes > which isReally_confusing) > - Import/export styles (I think we have some sloppiness here too) > - Other things worth arguing forever about. > > Thoughts on the above issues? > From carter.schonwald at gmail.com Tue Jul 8 14:39:31 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 8 Jul 2014 10:39:31 -0400 Subject: Explicitly bidirectional pattern synonyms for 7.8.3? In-Reply-To: References: Message-ID: 7.8.3 is a bug fix release, is this a bug fix? :-) On Tuesday, July 8, 2014, Dr. ERDI Gergo wrote: > Hi, > > I've spent the last couple evenings implementing explicitly-bidirectional > pattern synonyms, partly while waiting for someone to help out with #9023. > Explicitly-bidirectional pattern synonyms are described in #8581. > > My question is: now that I have this mostly working, is this something > that could be added to 7.8.3, or is this too much of a new feature for a > point release? I'd like to know because then I can prioritize finishing the > work (it still needs some tests added and also someone to review the > changes). 
> > Thanks, > Gergo > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jul 8 14:54:15 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 08 Jul 2014 07:54:15 -0700 Subject: GHC silently turns off dynamic output, should this be an error? In-Reply-To: References: <20140624203609.GA12853@matrix.chaos.earth.li> Message-ID: <53BC0617.5000809@gmail.com> On 06/07/2014 12:25, Christopher Rodrigues wrote: > Investigating #9176 further, GHC does at least three different things > with imported modules under -dynamic-too compilation. It probably > should be changed to handle imports uniformly. It looks like > dynamic-too support was bolted on to a system designed for one-pass > compilation, and I'm not clear on how to restructure it. Guidance would > be appreciated, as it would cut down the time I need to figure out how > the driver works. > > Here are the three things that can happen: > > 1. When compiling in one-shot mode, with -c, GHC will do what Igloo > described. If normal and dynamic interface hashes don't match, it will > compile the normal way, then compile again the dynamic way. > > 2. When compiling with --make and importing from a package, if normal > and dynamic interface hashes don't match, GHC will disable dynamic-too, > but it will not restart the pipeline. Only the normal .hi file is produced. This does sound wrong. > 3. When compiling with --make and importing from a non-package module, > GHC will not check interface hashes. It will proceed as if normal and > dynamic interface hashes match. This might be OK if we have already compiled all the modules in this compilation pass. If some were compiled previously, then it is possible that we have some non-matching .hi files, and in that case I think we should probably just consider the module out of date and recompile it, generating both dynamic and non-dynamic .hi files, which will be guaranteed to have the same hash. Cheers, Simon From gergo at erdi.hu Tue Jul 8 15:02:52 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Tue, 8 Jul 2014 23:02:52 +0800 Subject: Explicitly bidirectional pattern synonyms for 7.8.3? In-Reply-To: References: Message-ID: It... fixes the bug of... a missing feature? ... I'll take that as a "no". :) On Jul 8, 2014 10:40 PM, "Carter Schonwald" wrote: > 7.8.3 is a bug fix release, is this a bug fix? :-) > > On Tuesday, July 8, 2014, Dr. ERDI Gergo wrote: > >> Hi, >> >> I've spent the last couple evenings implementing explicitly-bidirectional >> pattern synonyms, partly while waiting for someone to help out with #9023. >> Explicitly-bidirectional pattern synonyms are described in #8581. >> >> My question is: now that I have this mostly working, is this something >> that could be added to 7.8.3, or is this too much of a new feature for a >> point release? I'd like to know because then I can prioritize finishing the >> work (it still needs some tests added and also someone to review the >> changes). >> >> Thanks, >> Gergo >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark.lentczner at gmail.com Wed Jul 9 15:01:49 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Wed, 9 Jul 2014 08:01:49 -0700 Subject: Status of Haskell Platform 2014.2.0.0 Message-ID: The status is: *Good-to-Go!* The new-build branch of Haskell Platform is in pretty great shape: - One consistent build system using Shake - Builds source tarball - Builds linux distribution tarball - Builds Mac installer - Builds in one command line from a GHC bindist to end-user installer! - We have a running travis-ci instance * (currently red as we await a 7.8.3 bindist)* As soon as GHC 7.8.3 is out, I'll be running an alpha (or is this rc1?) or Haskell Platform (well, after I build the bindists for GHC for Mac... so it'll be a few hours on my puny MacBook Air...) *Shout out to: Yitzchak Gale, Bob Ipoloito, and Randy Polen for lots of help with the new build; Carter Schonwald for Mac and GHC build issues; Robert Lefkowitz for wiki and issue converstion (now on github!); Neil Mitchell for consultations on Shake.* The platform is now in a place that we'll be able to turn it much more quickly. This means we can track GHC release more closely, handle important library fixes when needed. And more importantly, spend time improving the platform itself, rather than sapping our energies build it! What you need to do now: - Check the version list in Release2014.hs - If you see a problem with the version of your library, let us know - If you are a packager for one of the linux distros - please try out the build and see how it works, and how it meshes with your packaging process for the distro. Come talk to me if you need help or brainstorming w.r.t. to the new-build. - If you have a Mac, you can try building the platform. *Build note: The repo is in sync with head of GHC 7.8 and we've been working with bindists we build ourselves off of head. If you want to try out the HP build with a 7.8.2 bindist, you'll need to change the version of base back to 4.7.0.0.* What you can do to help There are few aspects in progress, but none of these are show stoppers: - Windows installer using the new build being written by Randy Polen. (Existing windows build from the source tarball is still possible) - contact him if you'd like to help. - Platform website is being revamped by Erin Depew in concert with the general Haskell redesign by Chris Done - Platform website is template generated... but the pages haven't been templated much. if you're interested, e-mail and I can supply more details and hook up interested parties and Erin. - The travis-ci instance is enabled for Mac builds, but isn't configured to do them right. - There is a TO DO small task list in BUILD-NEW in the repo - I'd like to see the As and Bs done. If you want to tackle one just do it and send me a merge request. - Many people have asked for a "server edition" of the platform w/o packages that don't make sense on a server (such as the OpenGL stuff). The new-build could easily be extended to do this now. - I'd love to see a version of Simon Hengel's Haskell Platform Versions Comparison Chart as part of the platform website. Now that it is templatized and the raw data is in Haskell, it should be possible. (I have prior year's data in the right format, just not checked in, ask me...) - It would be wonderful if we could get tests incorporated: Both running the tests that are part of the included libraries, and perhaps some "big integration tests" (is compiling Pandoc enough?). 
It would be great if these tests can be run "in-place" after the platform is built (for travis-ci), and if they could be run on the target machine (post-installation). ? Mark "is it July already?" Lentczner -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 9 15:17:48 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 9 Jul 2014 17:17:48 +0200 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: Thanks Mark! Notes on my packages: * hashable can be bumped to 1.2.2.0. * network can be bumped to 2.4.2.3 * unordered-containers can be bumped to 0.2.4.0 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dons00 at gmail.com Wed Jul 9 16:10:50 2014 From: dons00 at gmail.com (Don Stewart) Date: Wed, 9 Jul 2014 17:10:50 +0100 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: Well done! On Wednesday, 9 July 2014, Mark Lentczner wrote: > The status is: *Good-to-Go!* > > The new-build branch of Haskell Platform is in pretty great shape: > > - One consistent build system using Shake > - Builds source tarball > - Builds linux distribution tarball > - Builds Mac installer > - Builds in one command line from a GHC bindist to end-user installer! > - We have a running travis-ci instance > * (currently red as we > await a 7.8.3 bindist)* > > As soon as GHC 7.8.3 is out, I'll be running an alpha (or is this rc1?) or > Haskell Platform (well, after I build the bindists for GHC for Mac... so > it'll be a few hours on my puny MacBook Air...) > > *Shout out to: Yitzchak Gale, Bob Ipoloito, and Randy Polen for lots of > help with the new build; Carter Schonwald for Mac and GHC build issues; > Robert Lefkowitz for wiki and issue converstion (now on github!); Neil > Mitchell for consultations on Shake.* > > The platform is now in a place that we'll be able to turn it much more > quickly. This means we can track GHC release more closely, handle important > library fixes when needed. And more importantly, spend time improving the > platform itself, rather than sapping our energies build it! > > What you need to do now: > > - Check the version list in Release2014.hs > > - If you see a problem with the version of your library, let us know > - If you are a packager for one of the linux distros - please try out > the build and see how it works, and how it meshes with your packaging > process for the distro. Come talk to me if you need help or brainstorming > w.r.t. to the new-build. > - If you have a Mac, you can try building the platform. > > *Build note: The repo is in sync with head of GHC 7.8 and we've been > working with bindists we build ourselves off of head. If you want to try > out the HP build with a 7.8.2 bindist, you'll need to change the version of > base back to 4.7.0.0.* > > What you can do to help > There are few aspects in progress, but none of these are show stoppers: > > - Windows installer using the new build being written by Randy Polen. > (Existing windows build from the source tarball is still possible) - > contact him if you'd like to help. > - Platform website is being revamped by Erin Depew in concert with the > general Haskell redesign by Chris Done > - Platform website is template generated... but the pages haven't been > templated much. if you're interested, e-mail and I can supply more > details and hook up interested parties and Erin. 
> - The travis-ci instance is enabled for Mac builds, but isn't > configured to do them right. > - There is a TO DO small task list in BUILD-NEW > in > the repo - I'd like to see the As and Bs done. If you want to tackle one > just do it and send me a merge request. > - Many people have asked for a "server edition" of the platform w/o > packages that don't make sense on a server (such as the OpenGL stuff). The > new-build could easily be extended to do this now. > - I'd love to see a version of Simon Hengel's Haskell Platform > Versions Comparison Chart > as > part of the platform website. Now that it is templatized and the raw data > is in Haskell, it should be possible. (I have prior year's data in the > right format, just not checked in, ask me...) > - It would be wonderful if we could get tests incorporated: Both > running the tests that are part of the included libraries, and perhaps some > "big integration tests" (is compiling Pandoc enough?). It would be great if > these tests can be run "in-place" after the platform is built (for > travis-ci), and if they could be run on the target machine > (post-installation). > > ? Mark "is it July already?" Lentczner > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at gregorycollins.net Wed Jul 9 16:13:48 2014 From: greg at gregorycollins.net (Gregory Collins) Date: Wed, 9 Jul 2014 18:13:48 +0200 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: On Wed, Jul 9, 2014 at 5:01 PM, Mark Lentczner wrote: > The status is: *Good-to-Go!* > Great work guys, this is a fantastic cleanup. G -- Gregory Collins -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Jul 9 20:35:35 2014 From: alan.zimm at gmail.com (AlanKim Zimmerman) Date: Wed, 9 Jul 2014 22:35:35 +0200 Subject: HaRe and incremental type checking / type inference Message-ID: I have hit a problem in HaRe when lifting a declaration from e.g. a where clause of a function to the top level, where there is a type signature of any complexity. e.g lifting 'baz' from function 'foo' below -------------------------- foo a = baz where baz :: Int baz = xx 1 a xx :: (Num t) => t -> t -> t xx p1 p2 = p1 + p2 -------------------------------------- becomes --------------------------------------------- foo a = (baz xx a) where xx :: (Num t) => t -> t -> t xx p1 p2 = p1 + p2 -- baz:: (forall t. Num t => t -> t -> t) -> Int ->Int baz :: Num a => (a -> t1 -> t) -> t1 -> t baz xx a= xx 1 a ----------------------------------------------- For a very small subset this can be calculated easily, but for full generality it would be great to access the full power of the GHC type system. So before diving in too deeply, I thought I would test the waters as to the feasibility of doing something like this. I was hoping that perhaps the effort at an external constraint solver might be making the interfacing slightly simpler. Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Thu Jul 10 00:34:15 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 9 Jul 2014 19:34:15 -0500 Subject: 7.8.3 source tarball imminent Message-ID: Hello *, The 7.8.3 source tarball is imminent. I'll be creating it as soon as my build is done. I expect it will take an hour or so to finish. Note that I have not yet pushed the tag (which will mark the official release version), but everything should be just fine. 
I'm simply a bit paranoid and running everything through its steps again before handing it out. Gabor, Luke, Mark, Karel - you know what to do when it's here. As usual, I'll be taking care of Windows and the matrix of Linux builds. Mark has stepped up to build the OS X release, for which I'm quite grateful. Sorry for the wait everyone, but it's here, and it's looking good.... -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Jul 10 06:50:57 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 10 Jul 2014 01:50:57 -0500 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: Hi all, After all the paranoia and double-checking, the source tarballs are online: https://www.haskell.org/ghc/dist/7.8.3/ NOTE: if the others making binary distributions wouldn't mind, could you please also create .tar.xz's of your builds? In practice the last time I did this, but repackaging the .bz2's into .xz's on my side took a significant amount of time and CPU (several hours, actually, because xz is very CPU intensive at level 9 compression). As a result, if you all could do it, that would be nice, to help parallelize the workload a bit. You can create a .xz along with a .bz2 by saying: $ make binary-dist # create bz2 $ make binary-dist TAR_COMP=xz # create xz This will put both the .tar.bz2 and .tar.xz in the working directory of your build tree. If you can't or don't want to take the time, I can of course repackage them, but it would be nice. Thanks everyone, let's get this over with... On Wed, Jul 9, 2014 at 7:34 PM, Austin Seipp wrote: > Hello *, > > The 7.8.3 source tarball is imminent. I'll be creating it as soon as > my build is done. I expect it will take an hour or so to finish. > > Note that I have not yet pushed the tag (which will mark the official > release version), but everything should be just fine. I'm simply a bit > paranoid and running everything through its steps again before handing > it out. > > Gabor, Luke, Mark, Karel - you know what to do when it's here. > > As usual, I'll be taking care of Windows and the matrix of Linux > builds. Mark has stepped up to build the OS X release, for which I'm > quite grateful. > > Sorry for the wait everyone, but it's here, and it's looking good.... 
> > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From pali.gabor at gmail.com Thu Jul 10 08:59:51 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 10 Jul 2014 10:59:51 +0200 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: 2014-07-10 8:50 GMT+02:00 Austin Seipp : > After all the paranoia and double-checking, the source tarballs are online: > > https://www.haskell.org/ghc/dist/7.8.3/ So as the FreeBSD builds: http://haskell.inf.elte.hu/ghc/README.html http://haskell.inf.elte.hu/ghc/SHA256SUMS http://haskell.inf.elte.hu/ghc/ghc-7.8.3-i386-portbld-freebsd.tar.xz http://haskell.inf.elte.hu/ghc/ghc-7.8.3-i386-portbld-freebsd.tar.bz2 http://haskell.inf.elte.hu/ghc/ghc-7.8.3-x86_64-portbld-freebsd.tar.xz http://haskell.inf.elte.hu/ghc/ghc-7.8.3-x86_64-portbld-freebsd.tar.bz2 From mail at joachim-breitner.de Thu Jul 10 09:27:22 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 10 Jul 2014 11:27:22 +0200 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: <1404984442.2049.11.camel@kirk> [Resending this from an address allowed to post on ghc-devs] Hi, Am Donnerstag, den 10.07.2014, 01:50 -0500 schrieb Austin Seipp: > Hi all, > > After all the paranoia and double-checking, the source tarballs are online: > > https://www.haskell.org/ghc/dist/7.8.3/ Debian package on its way to the experimental suite. I don?t expect 7.8 to enter unstable soon. It still fails to build on a number of non-tier-1 architectures, so I expect the next Debian stable release to be conservative and stick to 7.6. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From hvriedel at gmail.com Thu Jul 10 11:14:04 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 10 Jul 2014 13:14:04 +0200 Subject: 7.8.3 source tarball imminent In-Reply-To: (Austin Seipp's message of "Thu, 10 Jul 2014 01:50:57 -0500") References: Message-ID: <87mwch5uv7.fsf@gmail.com> On 2014-07-10 at 08:50:57 +0200, Austin Seipp wrote: > Hi all, > > After all the paranoia and double-checking, the source tarballs are online: > > https://www.haskell.org/ghc/dist/7.8.3/ ...and here are the Ubuntu 12.04/12.10/13.04/13.10/14.04 compatible binary .deb packages for i386 and x86_64: https://launchpad.net/~hvr/+archive/ubuntu/ghc/+sourcepub/4084194/+listing-archive-extra As for Debian, GHC 7.8.3 .deb packages are expected to show up at http://deb.haskell.org/stable/ soon (courtesy of Joachim). Cheers, hvr From petersen at fedoraproject.org Thu Jul 10 13:11:21 2014 From: petersen at fedoraproject.org (Jens Petersen) Date: Thu, 10 Jul 2014 22:11:21 +0900 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: 2014/07/10 15:51 "Austin Seipp" : > https://www.haskell.org/ghc/dist/7.8.3/ Thank you! 
I have built it for Fedora 20 and 21 (still in development) in my Fedora copr repo: http://copr.fedoraproject.org/coprs/petersen/ghc-7.8/ It may be too late to get 7.8.3 into Fedora 21 since we just passed the feature freeze and branched from Rawhide this week. Cheers, Jens [1] http://fedoraproject.org/wiki/Releases/21/Schedule -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Thu Jul 10 13:51:25 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Thu, 10 Jul 2014 06:51:25 -0700 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: Morning all. I've just kicked off the Mac builds.... Since I have several to do, and they take awhile... and I have to go to work today... I expect these to be ready sometime late tonight PST.? - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Thu Jul 10 14:37:36 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 10 Jul 2014 16:37:36 +0200 Subject: Injective type families Message-ID: <201407101637.37175.jan.stolarek@p.lodz.pl> Hi all, I'd like to take a stab at implementing injective type families (#6018). My plan of attack looks like this: 1. Implement injective type families that are: a) injective in all arguments b) only admit RHS that is a concrete type or a type variable introduced by the LHS or a recursive call to self. 2. Lift restriction a) ie. allow type families injective only in some arguments 3. Lift restriction b) ie. allow injective type families to call other type families. I'm not sure if 3) really gets implemented as this seems more in lines of #4259, which I don't intend to approach. Now, let's discuss the most important of all matters - the syntax! :-) Note that syntax must: 1. allow to define injectivity only for some parameters 2. work for both open and closed type families Here's my proposal (based on Richard's idea to use functional-dependencies-like syntax): 1. Standard injective type family (all parameters uniquely determined by the RHS): injective type family F a b c | a b c type family instance F Int Char Bool = Bool type family instance F Char Bool Int = Int type family instance F Bool Int Char = Char 2. Type family injective only in some parameters (ie. only some parameters uniquely determined by the RHS): injective type family G a b c | a b type family instance G Int Char Bool = Bool type family instance G Int Char Int = Bool type family instance G Bool Int Int = Int Here knowing the RHS allows us to determine a and b, but not c. 3. Type families where knowing the RHS and some parameters on the LHS makes other parameters injective: Example 1: knowing the RHS and any single parameter uniquely determines other parameters injective type family H a b c | a -> b c, b -> a c, c -> a b type family instance H Int Char Double = Int type family instance H Bool Double Char = Int Example 2: knowing the RHS and either a or b allows to uniquely determine other parameters, but knowing the RHS and c gives us no information about a or b injective type family J a b c | a -> b c, b -> a c type family instance J Int Char Double = Int type family instance J Bool Double Double = Int For closed type families this notation would be identical: type family declaration would be followed by the "where" keyword. In comment 34 of #6018 Richard proposed a notation that uses a keyword "Result". This would allow for a more uniform notation. 
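(As an aside, here is a minimal sketch of the kind of definition and use such an annotation is meant to enable. The family name Inv, its instances, and the function uninv are made up purely for illustration, and the snippet uses the proposed notation from point 1 above, so no released GHC accepts it yet:)

    injective type family Inv a | a        -- the result determines 'a'
    type family instance Inv Int  = Bool
    type family instance Inv Bool = Int

    -- Without the injectivity annotation a signature like this is
    -- ambiguous, because 'a' occurs only under Inv; with it, the type
    -- checker could recover 'a' (and hence the result type) from the
    -- type of the argument.
    uninv :: Inv a -> a

That inference step (concluding a ~ b from Inv a ~ Inv b) is exactly what the annotation after the '|' is meant to license.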
Notice that in my proposal some families use ->, while others don't. The idea is that on the right side of the arrow we write what we can infer from the things on the left of the arrow. If the only thing left of the arrow is the right hand side of the type family equation (the result) then the arrow can be elided. In other words these would be equivalent: injective type family F a b c | a b c injective type family F a b c | result -> a b c injective type family G a b c | a b injective type family G a b c | result -> a b (note: it is also true that "injective type family G a b c | result c -> a b") injective type family H a b c | a -> b c, b -> a c, c -> a b injective type family H a b c | result a -> b c, result b -> a c, result c -> a b injective type family J a b c | a -> b c, b -> a c injective type family J a b c | result a -> b c, result b -> a c I find the notation explicitly using "result" to be easier to understand, but it is also more verbose. I also suspect that it might be a bit easier to parse. Which of the notations do you find better? What do you think about using the "injective" keyword? Are there any alternative proposals? Janek From eir at cis.upenn.edu Thu Jul 10 15:47:30 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 10 Jul 2014 11:47:30 -0400 Subject: Injective type families In-Reply-To: <201407101637.37175.jan.stolarek@p.lodz.pl> References: <201407101637.37175.jan.stolarek@p.lodz.pl> Message-ID: I'm not convinced I have very good taste where it comes to concrete syntax, but I will say that the `injective` keyword below is redundant -- it can be inferred from the presence of the `|`. Also, by restriction (b), I imagine you also allow things like `Maybe a`, where `a` is a variable bound on the LHS. That doesn't seem to be included in a tight reading of your proposal. Thanks for taking this on! Richard On Jul 10, 2014, at 10:37 AM, Jan Stolarek wrote: > Hi all, > > I'd like to take a stab at implementing injective type families (#6018). My plan of attack looks > like this: > > 1. Implement injective type families that are: > a) injective in all arguments > b) only admit RHS that is a concrete type or a type variable introduced by the LHS or a > recursive call to self. > 2. Lift restriction a) ie. allow type families injective only in some arguments > 3. Lift restriction b) ie. allow injective type families to call other type families. > > I'm not sure if 3) really gets implemented as this seems more in lines of #4259, which I don't > intend to approach. > > Now, let's discuss the most important of all matters - the syntax! :-) Note that syntax must: > 1. allow to define injectivity only for some parameters > 2. work for both open and closed type families > > Here's my proposal (based on Richard's idea to use functional-dependencies-like syntax): > > 1. Standard injective type family (all parameters uniquely determined by the RHS): > > injective type family F a b c | a b c > type family instance F Int Char Bool = Bool > type family instance F Char Bool Int = Int > type family instance F Bool Int Char = Char > > 2. Type family injective only in some parameters (ie. only some parameters uniquely determined by > the RHS): > > injective type family G a b c | a b > type family instance G Int Char Bool = Bool > type family instance G Int Char Int = Bool > type family instance G Bool Int Int = Int > > Here knowing the RHS allows us to determine a and b, but not c. > > 3. 
Type families where knowing the RHS and some parameters on the LHS makes other parameters > injective: > > Example 1: knowing the RHS and any single parameter uniquely determines other parameters > > injective type family H a b c | a -> b c, b -> a c, c -> a b > type family instance H Int Char Double = Int > type family instance H Bool Double Char = Int > > Example 2: knowing the RHS and either a or b allows to uniquely determine other parameters, but > knowing the RHS and c gives us no information about a or b > > injective type family J a b c | a -> b c, b -> a c > type family instance J Int Char Double = Int > type family instance J Bool Double Double = Int > > For closed type families this notation would be identical: type family declaration would be > followed by the "where" keyword. > > In comment 34 of #6018 Richard proposed a notation that uses a keyword "Result". This would allow > for a more uniform notation. Notice that in my proposal some families use ->, while others don't. > The idea is that on the right side of the arrow we write what we can infer from the things on the > left of the arrow. If the only thing left of the arrow is the right hand side of the type family > equation (the result) then the arrow can be elided. In other words these would be equivalent: > > injective type family F a b c | a b c > injective type family F a b c | result -> a b c > > injective type family G a b c | a b > injective type family G a b c | result -> a b > (note: it is also true that "injective type family G a b c | result c -> a b") > > injective type family H a b c | a -> b c, b -> a c, c -> a b > injective type family H a b c | result a -> b c, result b -> a c, result c -> a b > > injective type family J a b c | a -> b c, b -> a c > injective type family J a b c | result a -> b c, result b -> a c > > I find the notation explicitly using "result" to be easier to understand, but it is also more > verbose. I also suspect that it might be a bit easier to parse. Which of the notations do you > find better? What do you think about using the "injective" keyword? Are there any alternative > proposals? > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From ky3 at atamo.com Thu Jul 10 16:40:44 2014 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Thu, 10 Jul 2014 23:40:44 +0700 Subject: Injective type families In-Reply-To: <201407101637.37175.jan.stolarek@p.lodz.pl> References: <201407101637.37175.jan.stolarek@p.lodz.pl> Message-ID: On Thu, Jul 10, 2014 at 9:37 PM, Jan Stolarek wrote: > 1. Standard injective type family (all parameters uniquely determined by > the RHS): > injective type family F a b c | a b c > > > > 2. Type family injective only in some parameters (ie. only some > parameters uniquely determined by the RHS): > injective type family G a b c | a b > English is not my mother tongue. Perhaps you mean: 'the image under the type function is uniquely determined by the parameters on the right of the vertical bar'? As they stand, the sentences confuse which determines what. Such confusion has a way of squirreling into official documentation. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggreif at gmail.com Thu Jul 10 18:34:13 2014 From: ggreif at gmail.com (Gabor Greif) Date: Thu, 10 Jul 2014 20:34:13 +0200 Subject: Injective type families In-Reply-To: <201407101637.37175.jan.stolarek@p.lodz.pl> References: <201407101637.37175.jan.stolarek@p.lodz.pl> Message-ID: Jan, this is great! Thanks for attacking this issue. Regarding "result", I do not like the idea to introduce arbitrary words with special meanings. What if somebody writes > injective type family F a result c | result -> a result c it will be totally confusing. One could write like this: > injective type family F a b c | F a b c -> a b c -- (*) or even shorter: > injective type family F a b c | F -> a b c -- (**) in (*) the syntax is inconsistent because to the left of the "|" juxtaposition is not meaning application. Also (*) would permit "... | F x b c -> a b c" which is confusing and would require a naming rule. (**) can be read as "F's result uniquely determines all of a b and c". It sounds ok if you repeat it often enough :-) Regarding "injective" I go with Richard. It is unneeded noise. Cheers, Gabor On 7/10/14, Jan Stolarek wrote: > Hi all, > > I'd like to take a stab at implementing injective type families (#6018). My > plan of attack looks > like this: > > 1. Implement injective type families that are: > a) injective in all arguments > b) only admit RHS that is a concrete type or a type variable introduced > by the LHS or a > recursive call to self. > 2. Lift restriction a) ie. allow type families injective only in some > arguments > 3. Lift restriction b) ie. allow injective type families to call other type > families. > > I'm not sure if 3) really gets implemented as this seems more in lines of > #4259, which I don't > intend to approach. > > Now, let's discuss the most important of all matters - the syntax! :-) Note > that syntax must: > 1. allow to define injectivity only for some parameters > 2. work for both open and closed type families > > Here's my proposal (based on Richard's idea to use > functional-dependencies-like syntax): > > 1. Standard injective type family (all parameters uniquely determined by the > RHS): > > injective type family F a b c | a b c > type family instance F Int Char Bool = Bool > type family instance F Char Bool Int = Int > type family instance F Bool Int Char = Char > > 2. Type family injective only in some parameters (ie. only some parameters > uniquely determined by > the RHS): > > injective type family G a b c | a b > type family instance G Int Char Bool = Bool > type family instance G Int Char Int = Bool > type family instance G Bool Int Int = Int > > Here knowing the RHS allows us to determine a and b, but not c. > > 3. Type families where knowing the RHS and some parameters on the LHS makes > other parameters > injective: > > Example 1: knowing the RHS and any single parameter uniquely determines > other parameters > > injective type family H a b c | a -> b c, b -> a c, c -> a b > type family instance H Int Char Double = Int > type family instance H Bool Double Char = Int > > Example 2: knowing the RHS and either a or b allows to uniquely determine > other parameters, but > knowing the RHS and c gives us no information about a or b > > injective type family J a b c | a -> b c, b -> a c > type family instance J Int Char Double = Int > type family instance J Bool Double Double = Int > > For closed type families this notation would be identical: type family > declaration would be > followed by the "where" keyword. 
> > In comment 34 of #6018 Richard proposed a notation that uses a keyword > "Result". This would allow > for a more uniform notation. Notice that in my proposal some families use > ->, while others don't. > The idea is that on the right side of the arrow we write what we can infer > from the things on the > left of the arrow. If the only thing left of the arrow is the right hand > side of the type family > equation (the result) then the arrow can be elided. In other words these > would be equivalent: > > injective type family F a b c | a b c > injective type family F a b c | result -> a b c > > injective type family G a b c | a b > injective type family G a b c | result -> a b > (note: it is also true that "injective type family G a b c | result c -> > a b") > > injective type family H a b c | a -> b c, b -> a c, c -> a b > injective type family H a b c | result a -> b c, result b -> a c, result > c -> a b > > injective type family J a b c | a -> b c, b -> a c > injective type family J a b c | result a -> b c, result b -> a c > > I find the notation explicitly using "result" to be easier to understand, > but it is also more > verbose. I also suspect that it might be a bit easier to parse. Which of the > notations do you > find better? What do you think about using the "injective" keyword? Are > there any alternative > proposals? > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From roma at ro-che.info Thu Jul 10 20:27:26 2014 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 10 Jul 2014 23:27:26 +0300 Subject: Injective type families In-Reply-To: References: <201407101637.37175.jan.stolarek@p.lodz.pl> Message-ID: <20140710202726.GB13449@sniper> * Kim-Ee Yeoh [2014-07-10 23:40:44+0700] > On Thu, Jul 10, 2014 at 9:37 PM, Jan Stolarek > wrote: > > > 1. Standard injective type family (all parameters uniquely determined by > > the RHS): > > injective type family F a b c | a b c > > > > > > > > 2. Type family injective only in some parameters (ie. only some > > parameters uniquely determined by the RHS): > > injective type family G a b c | a b > > > > English is not my mother tongue. Perhaps you mean: 'the image under the > type function is uniquely determined by the parameters on the right of the > vertical bar'? The result of any function is always determined by that function's parameters. Injectivity means that the *parameters* are determined by the result. So I think Jan's definition is correct. > As they stand, the sentences confuse which determines what. > > Such confusion has a way of squirreling into official documentation. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: Digital signature URL: From ky3 at atamo.com Fri Jul 11 03:34:15 2014 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Fri, 11 Jul 2014 10:34:15 +0700 Subject: Injective type families In-Reply-To: <20140710202726.GB13449@sniper> References: <201407101637.37175.jan.stolarek@p.lodz.pl> <20140710202726.GB13449@sniper> Message-ID: On Fri, Jul 11, 2014 at 3:27 AM, Roman Cheplyaka wrote: > The result of any function is always determined by that function's > parameters. > > Injectivity means that the *parameters* are determined by the result. > > So I think Jan's definition is correct. > What's correct could still be confusing. Now that you brought it up, I no longer know what "RHS" in OP means! 
It could be (1) the result. Or (2) the right of the vertical bar. For easy reference, here are the statements in question: * "Standard injective type family (all parameters uniquely determined by the RHS)" * "Type family injective only in some parameters (ie. only some parameters uniquely determined by the RHS):" So when you say "arguments always determine result", one has to pause and think about what that means in the presence of multiple arguments as we have here. Because that doesn't hold when only some arguments are applied. Also, note the usage of the expression "uniquely determined" in OP. When a single-argument function is injective, we say that the argument 'uniquely determines' (as opposed to just determine) the result. But note how it appears flipped in OP! Finally, let's look at how the verb 'determine' is used. When defining a type family case-by-case, one works from arguments to the result determined by the arguments. But saying "Injectivity means that the *parameters* are determined by the result" raises the specter of the inverse function. Suddenly, the arrow of work gets flipped in reverse. Doesn't that only contribute to cognitive noise? Imprecise language not only trips up experts, it also sets up insurmountable barriers for outsiders. We can do much better here. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From ml at isaac.cedarswampstudios.org Fri Jul 11 03:54:36 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Thu, 10 Jul 2014 23:54:36 -0400 Subject: Injective type families In-Reply-To: References: <201407101637.37175.jan.stolarek@p.lodz.pl> Message-ID: <53BF5FFC.2000508@isaac.cedarswampstudios.org> On 07/10/2014 02:34 PM, Gabor Greif wrote: > Jan, this is great! Thanks for attacking this issue. > > Regarding "result", I do not like the idea to introduce arbitrary > words with special meanings. What if somebody writes > >> injective type family F a result c | result -> a result c > > it will be totally confusing. > > One could write like this: > >> injective type family F a b c | F a b c -> a b c -- (*) > > or even shorter: > >> injective type family F a b c | F -> a b c -- (**) > > in (*) the syntax is inconsistent because to the left of the "|" > juxtaposition is not meaning application. Also (*) would permit "... | > F x b c -> a b c" which is confusing and would require a naming rule. > > (**) can be read as "F's result uniquely determines all of a b and c". > It sounds ok if you repeat it often enough :-) At the risk of being too clever, one could use the keyword "type" to reference the result. If F is a family of types (a "type family"), then the result is conceptually a single type in this family. > type family F a b c | type -> a b c -Isaac From andrew.gibiansky at gmail.com Fri Jul 11 04:36:53 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Thu, 10 Jul 2014 21:36:53 -0700 Subject: Building GHC API Documentation? Message-ID: Hello, I am trying to create my first patch, for #9294, where I want to export some extra things from Parser along with a bit of documentation. However, I cannot figure out how to regenerate the documentation for the GHC API (not for the libraries). I tried running `make html stage=0 FAST=YES` in `./compiler`, and got the following helpful message: make[2]: *** No rule to make target `html_compiler'. Stop. make[1]: *** [html_compiler] Error 2 make: *** [html] Error 2 So I'm a bit stumped. How do I regenerate Haddock documentation for the GHC API? Thanks! 
Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Jul 11 07:56:27 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 11 Jul 2014 09:56:27 +0200 Subject: Injective type families In-Reply-To: <53BF5FFC.2000508@isaac.cedarswampstudios.org> References: <201407101637.37175.jan.stolarek@p.lodz.pl> <53BF5FFC.2000508@isaac.cedarswampstudios.org> Message-ID: <201407110956.28021.jan.stolarek@p.lodz.pl> To Richard: > the `injective` keyword below is redundant -- it can be inferred from the presence of the `|`. Yes, I was wondering if someone will point it out. I agree it can be omitted. Since Gabor also raised that concern I officially drop the idea of using "injective" keyword :-) > Also, by restriction (b), I imagine you also allow things like `Maybe a`, where `a` is a > variable bound on the LHS. That doesn't seem to be included in a tight reading of your > proposal. Isn't 'Maybe a' a "concrete type"? To Kim-Ee: Roman has this right, but let me attempt to clarify further: > I no longer know what "RHS" in OP means! It could be (1) the result. Or (2) the right of the > vertical bar. RHS means (1) the result. So in this example: ? ?injective type family F a b c | a b c ? ?type family instance F Int ?Char Bool = Bool ? ?type family instance F Char Bool Int ?= Int ? ?type family instance F Bool Int ?Char = Char Bool is the RHS of the first equation, Int is the RHS of the second equation and Char is the RHS of the third equation. Note: I sometime use terms "equation" and "clause" interchangeably. Now that you brought it up it would be a Good Thing to give "(2) the right of the vertical bar" a concrete name, so we can refer to it concisely in a discussion. How about "injectivity conditions"? > So when you say "arguments always determine result", one has to pause and think about what that > means in the presence of multiple arguments as we have here. Because that doesn't hold when > only some arguments are applied. Type families cannot be partially applied - you alwyas have all the parameters. > When a single-argument function is injective, we say that the argument 'uniquely determines' > (as opposed to just determine) the result. No, you have the definition backwards. When an argument of a relation uniquely determines the result then we call that relation a "function". When a function is injective it means that every element of the image is assigned to at most one element of the domain. Which means you can determine the argument from the result. > When defining a type family case-by-case, one works from arguments to the result determined by > the arguments. But saying "Injectivity means that the *parameters* are determined by the > result" raises the specter of the inverse function We want to define functions as injective so that GHC can infer (a ~ b) based on (F a ~ F b). So in a sense we want inverse functions here. Why do you say about a "specter of the inverse function" as if it was something unthinkable? To Gabor: > Jan, this is great! Thanks for attacking this issue. Thank me when it's implemented. > Regarding "result", I do not like the idea to introduce arbitrary > words with special meanings Well, if we followed that path then certainly "result" would become an "arbitrary word with a special meaning" a.k.a. "keyword". 
I'm not pushing strongly for this (in fact I'm not pushing in any direction with the syntax), but these are the alternatives I see: 1) use the "result" keyword: syntax is more coherent and easier to understand, but we pay the price of introducing a new keyword and more verbosity 2) don't use the "result" keyword: syntax is shorter, but a bit incoherent which may make it harder to understand (it does for me, even though I wrote down these examples). > What if somebody writes > > > ? ?injective type family F a result c | result -> a result c > > it will be totally confusing. This would become an error. Janek From mail at joachim-breitner.de Fri Jul 11 10:36:23 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 11 Jul 2014 12:36:23 +0200 Subject: Put GHC on travis for real Message-ID: <1405074983.2152.7.camel@kirk> Hi, with all packages as submodules, ghc-complete (which is basically a git repository tracking the ?fingerprint? of the main repository) is obsolete. So we could move the travis-checking of the main line to run on the ghc repository directly. This would require * adding a .travis.yaml based on the contents of https://github.com/nomeata/ghc-complete/blob/master/.travis.yml and https://github.com/nomeata/ghc-complete/blob/master/validate.sh to the repository. * enabling Travis for this repository. I can do the former, but the latter needs to be done by a member of the ?ghc? on GitHub. I can do the latter (and keep managing the travis instance) if someone adds me to the organization... Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gergo at erdi.hu Fri Jul 11 10:52:57 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Fri, 11 Jul 2014 18:52:57 +0800 (SGT) Subject: Explicitly bidirectional pattern synonyms In-Reply-To: References: Message-ID: On Tue, 8 Jul 2014, Dr. ERDI Gergo wrote: > I've spent the last couple evenings implementing explicitly-bidirectional > pattern synonyms, partly while waiting for someone to help out with #9023. > Explicitly-bidirectional pattern synonyms are described in #8581. Please note I've uploaded an implementation of that to wip/pattern-synonyms, and it's ready for merging into master after someone (maybe SPJ after he comes back?) reviews it. From hvriedel at gmail.com Fri Jul 11 11:39:52 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 11 Jul 2014 13:39:52 +0200 Subject: Put GHC on travis for real In-Reply-To: <1405074983.2152.7.camel@kirk> (Joachim Breitner's message of "Fri, 11 Jul 2014 12:36:23 +0200") References: <1405074983.2152.7.camel@kirk> Message-ID: <87ha2ojf93.fsf@gmail.com> Hello Joachim, On 2014-07-11 at 12:36:23 +0200, Joachim Breitner wrote: > with all packages as submodules, ghc-complete (which is basically a git > repository tracking the ?fingerprint? of the main repository) is > obsolete. So we could move the travis-checking of the main line to run > on the ghc repository directly. This would require > > * adding a .travis.yaml based on the contents of > https://github.com/nomeata/ghc-complete/blob/master/.travis.yml > and https://github.com/nomeata/ghc-complete/blob/master/validate.sh to > the repository. 
> * enabling Travis for this repository. Travis-CI is enabled for ghc/ghc with "Build only if .travis.yml is present" I'd suggest you create a self-contained .travis.yml in a wip/ branch ( as that should trigger travis-builds) and tweak it there until it's ready for master Btw, can we get a clang-based build-config in the matrix as well? Cheers, hvr From mail at joachim-breitner.de Fri Jul 11 12:12:14 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 11 Jul 2014 14:12:14 +0200 Subject: Put GHC on travis for real In-Reply-To: <87ha2ojf93.fsf@gmail.com> References: <1405074983.2152.7.camel@kirk> <87ha2ojf93.fsf@gmail.com> Message-ID: <1405080734.10764.9.camel@kirk> Hi, Am Freitag, den 11.07.2014, 13:39 +0200 schrieb Herbert Valerio Riedel: > Travis-CI is enabled for ghc/ghc with "Build only if .travis.yml is present" > > I'd suggest you create a self-contained .travis.yml in a wip/ branch ( > as that should trigger travis-builds) and tweak it there until it's > ready for master heh, fat chance. Travis is unable to check out the repository from github, because the submodule URL points to something invalid: Cloning into 'libraries/Cabal'... fatal: remote error: ghc/packages/Cabal is not a valid repository name Email support at github.com for help Clone of 'git://github.com/ghc/packages/Cabal.git' into submodule path 'libraries/Cabal' failed and clearly this happens before we get a chance to change them. Maybe it is finally time to re-think the directory layout on git.haskell.org to not use / and get rid of this problem for once and all. (Maybe with symlinks on git.haskell.org the old URLs can just continue to work.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From hvriedel at gmail.com Fri Jul 11 12:48:23 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 11 Jul 2014 14:48:23 +0200 Subject: Put GHC on travis for real In-Reply-To: <1405080734.10764.9.camel@kirk> (Joachim Breitner's message of "Fri, 11 Jul 2014 14:12:14 +0200") References: <1405074983.2152.7.camel@kirk> <87ha2ojf93.fsf@gmail.com> <1405080734.10764.9.camel@kirk> Message-ID: <878uo0jc2w.fsf@gmail.com> On 2014-07-11 at 14:12:14 +0200, Joachim Breitner wrote: [...] > heh, fat chance. Travis is unable to check out the repository from > github, because the submodule URL points to something invalid: > > Cloning into 'libraries/Cabal'... > fatal: remote error: > ghc/packages/Cabal is not a valid repository name > Email support at github.com for help > Clone of 'git://github.com/ghc/packages/Cabal.git' into submodule path > 'libraries/Cabal' failed > > and clearly this happens before we get a chance to change them. So travis does a recursive clone by default? So that we can't even easily inject https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules#AlternativeGitHubrewriterules ? If I get the docs right, you should be able to disable that and manually clone the submodules: http://docs.travis-ci.com/user/build-configuration/#Git-Submodules > Maybe it is finally time to re-think the directory layout on > git.haskell.org to not use / and get rid of this problem for once and > all. 
(Maybe with symlinks on git.haskell.org the old URLs can just > continue to work.) TBH, I don't like the idea of symlinks, as they would break the assumption in the server-side scripting that a repo is only accessible via a single path, which I'm sure would cause all sorts of subtle issues and confusing situations when you forget about that. Cheers, hvr From ggreif at gmail.com Fri Jul 11 12:51:17 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 11 Jul 2014 14:51:17 +0200 Subject: [commit: packages/base] ghc-7.8: Update changelog.md (d762c0c) In-Reply-To: <20140711115916.0D26C2406D@ghc.haskell.org> References: <20140711115916.0D26C2406D@ghc.haskell.org> Message-ID: See this comment on Reddit: http://www.reddit.com/r/haskell/comments/2af4jf/ghc783_is_out/ciufcfc Gabor On 7/11/14, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/base > > On branch : ghc-7.8 > Link : > http://ghc.haskell.org/trac/ghc/changeset/d762c0c11b3e9b3031c76caaa895202d1b81acdf/base > >>--------------------------------------------------------------- > > commit d762c0c11b3e9b3031c76caaa895202d1b81acdf > Author: Herbert Valerio Riedel > Date: Fri Jul 11 13:58:41 2014 +0200 > > Update changelog.md > > >>--------------------------------------------------------------- > > d762c0c11b3e9b3031c76caaa895202d1b81acdf > changelog.md | 4 +++- > 1 file changed, 3 insertions(+), 1 deletion(-) > > diff --git a/changelog.md b/changelog.md > index 5eef01f..bb42c1e 100644 > --- a/changelog.md > +++ b/changelog.md > @@ -1,6 +1,6 @@ > # Changelog for [`base` package](http://hackage.haskell.org/package/base) > > -## 4.7.0.1 *Jun 2014* > +## 4.7.0.1 *Jul 2014* > > * Bundled with GHC 7.8.3 > > @@ -11,6 +11,8 @@ > > * Fix regression in Data.Fixed Read instance (#9231) > > + * Fix `fdReady` to honor `FD_SETSIZE` (#9168) > + > ## 4.7.0.0 *Mar 2014* > > * Bundled with GHC 7.8.1 > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From mail at joachim-breitner.de Fri Jul 11 13:54:38 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 11 Jul 2014 15:54:38 +0200 Subject: Put GHC on travis for real In-Reply-To: <878uo0jc2w.fsf@gmail.com> References: <1405074983.2152.7.camel@kirk> <87ha2ojf93.fsf@gmail.com> <1405080734.10764.9.camel@kirk> <878uo0jc2w.fsf@gmail.com> Message-ID: <1405086878.10764.13.camel@kirk> Hi, Am Freitag, den 11.07.2014, 14:48 +0200 schrieb Herbert Valerio Riedel: > So travis does a recursive clone by default? So that we can't even > easily inject > > https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules#AlternativeGitHubrewriterules > > ? > > If I get the docs right, you should be able to disable that and manually > clone the submodules: > > http://docs.travis-ci.com/user/build-configuration/#Git-Submodules thanks, I?ll try that. > > Maybe it is finally time to re-think the directory layout on > > git.haskell.org to not use / and get rid of this problem for once and > > all. (Maybe with symlinks on git.haskell.org the old URLs can just > > continue to work.) > > TBH, I don't like the idea of symlinks, as they would break the > assumption in the server-side scripting that a repo is only accessible > via a single path, which I'm sure would cause all sorts of subtle issues > and confusing situations when you forget about that. And has a complete change or the URLs ever been considered? 
Now that people shouldn?t have to use ./sync-all any more, but can use standard tools to check out the repository, being able to clone from github directly and without such patching of paths might be desirable. (It wouldn?t help with cloning from any other fork, though. Seems to be an unavoidable issue with submodules.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From tuncer.ayaz at gmail.com Fri Jul 11 15:11:50 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Fri, 11 Jul 2014 17:11:50 +0200 Subject: Put GHC on travis for real In-Reply-To: <87ha2ojf93.fsf@gmail.com> References: <1405074983.2152.7.camel@kirk> <87ha2ojf93.fsf@gmail.com> Message-ID: On Fri, Jul 11, 2014 at 1:39 PM, Herbert Valerio Riedel wrote: > Btw, can we get a clang-based build-config in the matrix as well? Assuming tests are run, what about enabling -fsanitize switches? There are more switches, but the following are documented to work in both Clang and GCC: -fsanitize=address (2x slowdown) -fsanitize=undefined -fsanitize=thread (5x-15x slowdown) As the generated code will be slower, it may either complicate or limit its use, but -fsanitize is a useful tool to catch bugs. That said, it's probably easier to adopt on GHC builder slave nodes. http://clang.llvm.org/docs/UsersManual.html#controlling-code-generation http://gcc.gnu.org/onlinedocs/gcc/Debugging-Options.html From george.colpitts at gmail.com Fri Jul 11 22:11:49 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 11 Jul 2014 19:11:49 -0300 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: Not sure if you want QuickCheck 2.7.5 rather than 2.6 On Wed, Jul 9, 2014 at 12:01 PM, Mark Lentczner wrote: > The status is: *Good-to-Go!* > > The new-build branch of Haskell Platform is in pretty great shape: > > - One consistent build system using Shake > - Builds source tarball > - Builds linux distribution tarball > - Builds Mac installer > - Builds in one command line from a GHC bindist to end-user installer! > - We have a running travis-ci instance > * (currently red as we > await a 7.8.3 bindist)* > > As soon as GHC 7.8.3 is out, I'll be running an alpha (or is this rc1?) or > Haskell Platform (well, after I build the bindists for GHC for Mac... so > it'll be a few hours on my puny MacBook Air...) > > *Shout out to: Yitzchak Gale, Bob Ipoloito, and Randy Polen for lots of > help with the new build; Carter Schonwald for Mac and GHC build issues; > Robert Lefkowitz for wiki and issue converstion (now on github!); Neil > Mitchell for consultations on Shake.* > > The platform is now in a place that we'll be able to turn it much more > quickly. This means we can track GHC release more closely, handle important > library fixes when needed. And more importantly, spend time improving the > platform itself, rather than sapping our energies build it! > > What you need to do now: > > - Check the version list in Release2014.hs > > - If you see a problem with the version of your library, let us know > - If you are a packager for one of the linux distros - please try out > the build and see how it works, and how it meshes with your packaging > process for the distro. 
Come talk to me if you need help or brainstorming > w.r.t. to the new-build. > - If you have a Mac, you can try building the platform. > > *Build note: The repo is in sync with head of GHC 7.8 and we've been > working with bindists we build ourselves off of head. If you want to try > out the HP build with a 7.8.2 bindist, you'll need to change the version of > base back to 4.7.0.0.* > > What you can do to help > There are few aspects in progress, but none of these are show stoppers: > > - Windows installer using the new build being written by Randy Polen. > (Existing windows build from the source tarball is still possible) - > contact him if you'd like to help. > - Platform website is being revamped by Erin Depew in concert with the > general Haskell redesign by Chris Done > - Platform website is template generated... but the pages haven't been > templated much. if you're interested, e-mail and I can supply more > details and hook up interested parties and Erin. > - The travis-ci instance is enabled for Mac builds, but isn't > configured to do them right. > - There is a TO DO small task list in BUILD-NEW > in > the repo - I'd like to see the As and Bs done. If you want to tackle one > just do it and send me a merge request. > - Many people have asked for a "server edition" of the platform w/o > packages that don't make sense on a server (such as the OpenGL stuff). The > new-build could easily be extended to do this now. > - I'd love to see a version of Simon Hengel's Haskell Platform > Versions Comparison Chart > as > part of the platform website. Now that it is templatized and the raw data > is in Haskell, it should be possible. (I have prior year's data in the > right format, just not checked in, ask me...) > - It would be wonderful if we could get tests incorporated: Both > running the tests that are part of the included libraries, and perhaps some > "big integration tests" (is compiling Pandoc enough?). It would be great if > these tests can be run "in-place" after the platform is built (for > travis-ci), and if they could be run on the target machine > (post-installation). > > ? Mark "is it July already?" Lentczner > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukexipd at gmail.com Sat Jul 12 02:54:29 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Fri, 11 Jul 2014 19:54:29 -0700 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: Hi Austin/all, Here are the iOS builds! http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2 http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2.sha1 http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz.sha1 http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2 http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2.sha1 http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz.sha1 Best Luke On Thu, Jul 10, 2014 at 6:51 AM, Mark Lentczner wrote: > Morning all. I've just kicked off the Mac builds.... > > Since I have several to do, and they take awhile... and I have to go to > work today... I expect these to be ready sometime late tonight PST. 
> > - Mark > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sat Jul 12 07:51:54 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 12 Jul 2014 09:51:54 +0200 Subject: RFC: unsafeShrinkMutableByteArray# Message-ID: <87sim7815x.fsf@gmail.com> Hello Simon (et al.) While experimenting with refactoring/improving integer-gmp, I'd like to represent a GMP number just by a ByteArrays# (and thus save a redundant limb-count field). However, for that I'd need an efficient way to resize a MutableByteArray# for the result value in case its initial size over- (or under-)allocated. Right now I'd re-allocate via newByteArray# with the final size and copyMutableByteArray#, and now I was wondering if we couldn't simply have an unsafeShrinkMutableByteArray# :: MutableByteArray# s# -> Int# -> State# s -> State# s operation, which would allow for zero-copying. (the 'unsafe' denotes this wouldn't check if the new size is less-or-equal to the current size, and that one has to be careful when subsequently using sizeofMutableByteArray# which is currently a pure function) Is such an operation feasible, or is there something in the RTS/GC that would trip over when a ByteArray has suddenly a smaller byte-count than its originally newByteArray#'ed amount? PS: maybe unsafeShrinkMutableByteArray# could unsafe-freeze the ByteArray# while at it (thus be called something like unsafeShrinkAndFreezeMutableByteArray#), as once I know the final smaller size I would freeze it anyway right after shrinking. Cheers, hvr From austin at well-typed.com Sat Jul 12 08:08:03 2014 From: austin at well-typed.com (Austin Seipp) Date: Sat, 12 Jul 2014 03:08:03 -0500 Subject: Note: Phabricator can now build code reviews! Message-ID: Hi *, Quick notice: I spent some time hacking to get Phabricator to build code reviews you publish against the GHC repository. It luckily was not that hard to do so - hooray! Go here for an example: https://phabricator.haskell.org/D64 Look at the top set of information, and find 'Build Status'. This points to the current build being produced for this revision. Icon colors are - red is failed, blue is in progress, green means good. The build is driven by an application called 'Harbormaster'. Harbormaster makes an HTTP request when a new diff is posted, which triggers a build on a backend system that applies the diff and runs ./validate. Now, look at this specific link: https://phabricator.haskell.org/D64#16 I updated the diff, and the build failed and notified Harbormaster. This adds a notification to the page (just below the comment I linked), emails interested parties, and marks the build status at the top. After a build happens, a bot named `phaskell` will drop by and give some results. Validate logs are gzip'd, and uploaded to Phabricator as .txt.gz files. You can download them and look at the full results. If the build fails, the tests that failed are output in the comment on the revision automatically. Builds happen per diff. This means they happen every time you run 'arc diff'. This is all extremely new. Please be nice to it. :) However - if you have some cool changes, please post them! I might just post some to let it do ./validate - it's nice if the machines can just do the work and you can be lazy. 
:) In the future, Phabricator's Drydock and Harbormaster applications will expand to hopefully do this on their own, with no custom infrastructure from us. We can also utilize resources more effectively. Here's where you can look for the current build queue: https://phabricator.haskell.org/harbormaster/query/all/ The build queue is currently serial - that means the system only runs one build at a time on a 4 core dedicated machine. This simplifies the management but it does mean it could get slightly backed up. It's only a 30 minute wait to validate something though, and that can probably be improved further. Next up: Linking Phabricator to Trac. P.S. The way it works technically is Phabricator -> HTTP GET -> Build machine. This machine runs an HTTP Server - a Python Flask server that simply runs a build script, which reports back to phabricator.haskell.org with JSON requests. Phabricator requests are HTTPS secured. GET requests are securely tunneled over an encrypted pipe (spiped). This build script update Harbormaster's status, post comments, and uploads the build log. Source code: https://github.com/haskell-infra/phab-ghc-builder -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Sat Jul 12 15:40:07 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 12 Jul 2014 08:40:07 -0700 Subject: RFC: unsafeShrinkMutableByteArray# In-Reply-To: <87sim7815x.fsf@gmail.com> References: <87sim7815x.fsf@gmail.com> Message-ID: <53C156D7.9080008@gmail.com> Yes, this will cause problems in some modes, namely -debug and -prof that need to be able to scan the heap linearly. Usually we invoke the OVERWRITING_CLOSURE() macro which overwrites the original closure with zero words, but this won't work in your case because you want to keep the original contents. So you'll need a version of OVERWRITING_CLOSURE() that takes the size that you want to retain, and doesn't overwrite that part of the closure. This is probably a good idea anyway, because it might save some work in other places where we use OVERWRITING_CLOSURE(). I am worried about sizeofMutableByteArray# though. It wouldn't be safe to call sizeofMutableByteArray# on the original array, just in case it was evaluated after the shrink. You could make things slightly safer by having unsafeShrinkMutableByteArray# return the new array, so that you have a safe way to call sizeofMutableByteArray# after the shrink. This still doesn't seem very satisfactory to me though. Cheers, Simon On 12/07/2014 00:51, Herbert Valerio Riedel wrote: > Hello Simon (et al.) > > While experimenting with refactoring/improving integer-gmp, I'd like to > represent a GMP number just by a ByteArrays# (and thus save a redundant > limb-count field). However, for that I'd need an efficient way to resize > a MutableByteArray# for the result value in case its initial size over- > (or under-)allocated. > > Right now I'd re-allocate via newByteArray# with the final size and > copyMutableByteArray#, and now I was wondering if we couldn't simply > have an > > unsafeShrinkMutableByteArray# > :: MutableByteArray# s# -> Int# -> State# s -> State# s > > operation, which would allow for zero-copying. 
(the 'unsafe' denotes > this wouldn't check if the new size is less-or-equal to the current > size, and that one has to be careful when subsequently using > sizeofMutableByteArray# which is currently a pure function) > > Is such an operation feasible, or is there something in the RTS/GC that > would trip over when a ByteArray has suddenly a smaller byte-count than > its originally newByteArray#'ed amount? > > PS: maybe unsafeShrinkMutableByteArray# could unsafe-freeze the > ByteArray# while at it (thus be called something like > unsafeShrinkAndFreezeMutableByteArray#), as once I know the final > smaller size I would freeze it anyway right after shrinking. > > Cheers, > hvr > From mail at joachim-breitner.de Sat Jul 12 17:04:50 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 12 Jul 2014 19:04:50 +0200 Subject: Travis now tests ghc directly Message-ID: <1405184690.22017.6.camel@kirk> Hi, I just added a .travis.yml file to GHC master. This means that every push will be validated automatic and for free by travis; you can check the build status at https://travis-ci.org/ghc/ghc/builds It is not a full validation. In particular, * it skips DPH * it does not build dynamic libraries and no dynamic executables * does not build haddock or generate documentation * only runs fast tests, and no performance tests This way we stay under the 50 minuite limit. Failures are reported to me, and I?ll manually report relevant breakage to you. It is also configured to mail to ghc-builds at haskell.org, but I don?t see these messages there. Maybe some mailing list admin needs to whitelist mails from Travis CI ? I?ve also added a link to it from http://ghc.haskell.org/trac/ghc/, including a nice icon showing the current build status. If you have that .travis.yml file in your branch, travis will also test these. My unofficial ghc-complete repository is therefore obsolete, I stopped updating it. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ggreif at gmail.com Sat Jul 12 18:06:38 2014 From: ggreif at gmail.com (Gabor Greif) Date: Sat, 12 Jul 2014 20:06:38 +0200 Subject: Travis now tests ghc directly In-Reply-To: <1405184690.22017.6.camel@kirk> References: <1405184690.22017.6.camel@kirk> Message-ID: On 7/12/14, Joachim Breitner wrote: > Hi, > > I just added a .travis.yml file to GHC master. This means that every > push will be validated automatic and for free by travis; you can check > the build status at https://travis-ci.org/ghc/ghc/builds > > It is not a full validation. In particular, > * it skips DPH > * it does not build dynamic libraries and no dynamic executables > * does not build haddock or generate documentation > * only runs fast tests, and no performance tests > > This way we stay under the 50 minuite limit. > > Failures are reported to me, and I?ll manually report relevant breakage > to you. It is also configured to mail to ghc-builds at haskell.org, but I > don?t see these messages there. Maybe some mailing list admin needs to > whitelist mails from Travis CI ? > > I?ve also added a link to it from http://ghc.haskell.org/trac/ghc/, > including a nice icon showing the current build status. 
Heh, cool, just added that thingy to the GitHub front page. Thanks for this awesome feature, Joachim! Gabor > > If you have that .travis.yml file in your branch, travis will also test > these. > > My unofficial ghc-complete repository is therefore obsolete, I stopped > updating it. > > Greetings, > Joachim > > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > From mark.lentczner at gmail.com Sat Jul 12 19:27:56 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 12 Jul 2014 12:27:56 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X Message-ID: In building the OS X bindist for 7.8.3, I had to choose which of several ways to build it. In particular, I could build it with a newere Xcode, which uses clang, or an older Xcode which uses gcc. I decided to nofib benchmark the variations and see before I released. Here is what I found... I compared two candidate builds: - x86_64 architecture - targeted at OS X 10.7 and later - one built with Xcode 5.1 on 10.9, which uses clang - one built with Xcode 4.5 on 10.8, which uses gcc I installed both bindists, side-by-side on the same machine: a 10.9 machine, with Xcode 5.1, which uses clang. The machine is a MacMini, 2.5GHz Intel Core i5 (dual core, reports as 4 cpus). Summary: - clang build was always faster - non-threaded was -3.2% run-time - threaded was -7.3% run-time - clang's improvement in GC run-time was better than -10% - clang builds were significantly bigger You can find the details here: - analysis-Silver-10.9-gcc-vs-clang.html - analysis-Silver-10.9-gcc-vs-clang-threaded.html The only concern is that the binary sizes were significantly bigger: +230% - I haven't investigated more, but I'm wondering if nofib doesn't strip the binaries before measuring, and perhaps clang's debugging info is much greater? Next up... we are evaluating a bindist built with the HPC Mac OS X gcc compiler (based on gcc 4.9)... and preliminary results are looking even better! Stay tuned... - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jul 12 21:20:24 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 12 Jul 2014 23:20:24 +0200 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: I thought clang was slower than gcc because clang doesn't support thread local variables (in some form we need) and therefore GC performance suffered a lot on clang. On Sat, Jul 12, 2014 at 9:27 PM, Mark Lentczner wrote: > In building the OS X bindist for 7.8.3, I had to choose which of several > ways to build it. In particular, I could build it with a newere Xcode, > which uses clang, or an older Xcode which uses gcc. I decided to nofib > benchmark the variations and see before I released. Here is what I found... > > I compared two candidate builds: > > - x86_64 architecture > - targeted at OS X 10.7 and later > - one built with Xcode 5.1 on 10.9, which uses clang > - one built with Xcode 4.5 on 10.8, which uses gcc > > I installed both bindists, side-by-side on the same machine: a 10.9 > machine, with Xcode 5.1, which uses clang. The machine is a MacMini, 2.5GHz > Intel Core i5 (dual core, reports as 4 cpus). 
> > Summary: > > - clang build was always faster > - non-threaded was -3.2% run-time > - threaded was -7.3% run-time > - clang's improvement in GC run-time was better than -10% > - clang builds were significantly bigger > > You can find the details here: > > - analysis-Silver-10.9-gcc-vs-clang.html > > - analysis-Silver-10.9-gcc-vs-clang-threaded.html > > > The only concern is that the binary sizes were significantly bigger: +230% > - I haven't investigated more, but I'm wondering if nofib doesn't strip the > binaries before measuring, and perhaps clang's debugging info is much > greater? > > Next up... we are evaluating a bindist built with the HPC Mac OS X gcc > compiler (based on gcc 4.9)... and preliminary results are looking even > better! Stay tuned... > > - Mark > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jul 12 21:21:47 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 12 Jul 2014 23:21:47 +0200 Subject: Travis now tests ghc directly In-Reply-To: References: <1405184690.22017.6.camel@kirk> Message-ID: This is great! -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Sat Jul 12 21:54:34 2014 From: austin at well-typed.com (Austin Seipp) Date: Sat, 12 Jul 2014 16:54:34 -0500 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: Thank you Luke. The binaries are uploaded and on the webpage. On Fri, Jul 11, 2014 at 9:54 PM, Luke Iannini wrote: > Hi Austin/all, > Here are the iOS builds! > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2 > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2.sha1 > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz.sha1 > > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2 > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2.sha1 > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz.sha1 > > Best > Luke > > > > On Thu, Jul 10, 2014 at 6:51 AM, Mark Lentczner > wrote: >> >> Morning all. I've just kicked off the Mac builds.... >> >> Since I have several to do, and they take awhile... and I have to go to >> work today... I expect these to be ready sometime late tonight PST. >> >> - Mark >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From carter.schonwald at gmail.com Sat Jul 12 22:27:26 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 12 Jul 2014 18:27:26 -0400 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: Maybe it depends on the version of OS X being used? Maybe TLS works differently pre 10.8 or 10.9? On Saturday, July 12, 2014, Johan Tibell wrote: > I thought clang was slower than gcc because clang doesn't support thread > local variables (in some form we need) and therefore GC performance > suffered a lot on clang. > > > On Sat, Jul 12, 2014 at 9:27 PM, Mark Lentczner > wrote: > >> In building the OS X bindist for 7.8.3, I had to choose which of several >> ways to build it. 
In particular, I could build it with a newere Xcode, >> which uses clang, or an older Xcode which uses gcc. I decided to nofib >> benchmark the variations and see before I released. Here is what I found... >> >> I compared two candidate builds: >> >> - x86_64 architecture >> - targeted at OS X 10.7 and later >> - one built with Xcode 5.1 on 10.9, which uses clang >> - one built with Xcode 4.5 on 10.8, which uses gcc >> >> I installed both bindists, side-by-side on the same machine: a 10.9 >> machine, with Xcode 5.1, which uses clang. The machine is a MacMini, 2.5GHz >> Intel Core i5 (dual core, reports as 4 cpus). >> >> Summary: >> >> - clang build was always faster >> - non-threaded was -3.2% run-time >> - threaded was -7.3% run-time >> - clang's improvement in GC run-time was better than -10% >> - clang builds were significantly bigger >> >> You can find the details here: >> >> - analysis-Silver-10.9-gcc-vs-clang.html >> >> - analysis-Silver-10.9-gcc-vs-clang-threaded.html >> >> >> The only concern is that the binary sizes were significantly bigger: >> +230% - I haven't investigated more, but I'm wondering if nofib doesn't >> strip the binaries before measuring, and perhaps clang's debugging info is >> much greater? >> >> Next up... we are evaluating a bindist built with the HPC Mac OS X gcc >> compiler (based on gcc 4.9)... and preliminary results are looking even >> better! Stay tuned... >> >> - Mark >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Sun Jul 13 01:04:50 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 12 Jul 2014 18:04:50 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: I will try to measure on 10.7 later today. Preliminary numbers for gcc 4.9 are even better than clang - it saves 12% over gcc 4.2 builds. However, the gcc runtime isn't the same as the Apple standard... and we are so far at a loss how to package a ghc based on 4.9 that would work for Mac users without gcc 4.9. - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Jul 13 01:40:35 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 12 Jul 2014 21:40:35 -0400 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: why wouldn't it work? heres my 4.9 gcc build, I believe it should work on any >= 10.7 system that has xcode cli tools installed, please let me know if it fails! http://www.wellposed.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-x86_64-apple-darwin.tar.bz2 On Sat, Jul 12, 2014 at 9:04 PM, Mark Lentczner wrote: > I will try to measure on 10.7 later today. > > Preliminary numbers for gcc 4.9 are even better than clang - it saves 12% > over gcc 4.2 builds. However, the gcc runtime isn't the same as the Apple > standard... and we are so far at a loss how to package a ghc based on 4.9 > that would work for Mac users without gcc 4.9. > > - Mark > ? > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bos at serpentine.com Sun Jul 13 03:52:09 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Sat, 12 Jul 2014 20:52:09 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: Message-ID: On Fri, Jul 11, 2014 at 6:40 AM, Austin Seipp wrote: > The GHC Team is pleased to announce a new patchlevel release of GHC, 7.8.3. > haddock 2.14.3 that ships with it seems to be quite broken. Perhaps it's a bad interaction with cabal, it's hard to say from the outside, but here are some details. Here's the kind of error I get from haddock when I try to use cabal to install a package (primitive, in this case): module ?primitive-0.5.3.0:Main? is defined in multiple files: dist/build/tmp-92638/Control/Monad/Primitive.hs dist/build/tmp-92638/Data/Primitive/MachDeps.hs dist/build/tmp-92638/Data/Primitive/Types.hs dist/build/tmp-92638/Data/Primitive/Array.hs dist/build/tmp-92638/Data/Primitive/ByteArray.hs dist/build/tmp-92638/Data/Primitive/Internal/Compat.hs The result is that I have no haddock docs for any packages now :-( -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Sun Jul 13 04:07:30 2014 From: erkokl at gmail.com (Levent Erkok) Date: Sat, 12 Jul 2014 21:07:30 -0700 Subject: [GHC] #9304: Floating point woes; Different behavior on Mac vs Linux In-Reply-To: <060.771882ba3533f278283f5b4b5be80495@haskell.org> References: <045.d05a83ce5d196096a6368324dde68686@haskell.org> <060.771882ba3533f278283f5b4b5be80495@haskell.org> Message-ID: So, it appears that the one ending with 21 is the likely correct result; as opposed to 22. Is this an issue with some underlying library (glibc etc.); or an issue with GHC itself? On Sat, Jul 12, 2014 at 8:03 PM, GHC wrote: > #9304: Floating point woes; Different behavior on Mac vs Linux > -------------------------------------+------------------------------------ > Reporter: lerkok | Owner: > Type: bug | Status: new > Priority: high | Milestone: > Component: Compiler | Version: 7.8.3 > Resolution: | Keywords: floating point > Operating System: Unknown/Multiple | Architecture: Unknown/Multiple > Type of failure: None/Unknown | Difficulty: Unknown > Test Case: | Blocked By: 9276 > Blocking: | Related Tickets: > -------------------------------------+------------------------------------ > > Comment (by carter): > > I ran it on a 64 bit linux server i have using ghci > {{{ > Prelude> :set -XScopedTypeVariables > Prelude> let x :: Double = -4.4 > Prelude> let y :: Double = 2.4999999999999956 > Prelude> decodeFloat (x*y) > (-6192449487634421,-49) > > }}} > > so if anything, it looks like its 32bit vs 64bit > > could you try running the above snippet in GHCi on your 32bit machine? > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Sun Jul 13 06:36:41 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 12 Jul 2014 23:36:41 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: It won't work in our case because the gcc 4.9 build we have references it's own c rts lib, which is 4.9 specific, and is notably different than what is on a stock Mac Imagine if we were to ship a libHSrts.a, compiled against the gcc 4.9 libc (and it's includes). Now a user without gcc 4.9 on thier system, installs that bindist. And when they compile code that references libc, it'll compile against the system libc (and it's includes). 
If tha code is paired with Haskell code (or *is* Haskell code via the FFI), and is then linked with libHSrts.a from the bindist.... now we have an executable that has parts compiled against two different libc-s. This won't work unless the two libcs (and their includes) are ABI compatible.... which I don't know if it is between gcc's 4.9 libc, and the libc Apple ships for it's systems. On Sat, Jul 12, 2014 at 6:40 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > why wouldn't it work? > > heres my 4.9 gcc build, I believe it should work on any >= 10.7 system > that has xcode cli tools installed, > please let me know if it fails! > > > http://www.wellposed.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-x86_64-apple-darwin.tar.bz2 > > > > > On Sat, Jul 12, 2014 at 9:04 PM, Mark Lentczner > wrote: > >> I will try to measure on 10.7 later today. >> >> Preliminary numbers for gcc 4.9 are even better than clang - it saves 12% >> over gcc 4.2 builds. However, the gcc runtime isn't the same as the Apple >> standard... and we are so far at a loss how to package a ghc based on 4.9 >> that would work for Mac users without gcc 4.9. >> >> - Mark >> ? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Sun Jul 13 06:48:53 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 12 Jul 2014 23:48:53 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: The clang executable size mystery deepens: The sizes are indeed waaaay big: 7.4M test-files-clang/test* 4.5M test-files-clang/test-stripped* 1.4M test-files-gcc/test* 1.1M test-files-gcc/test-stripped* Looking at the load info from the stripped versions, it is all in the main text segment: test-files-clang/load test-files-gcc/load __TEXT.__text : 3,554,134 833,502 __TEXT.__stubs : 876 672 __TEXT.__stub_helper : 1,476 1,136 __TEXT.__const : 59,040 32,104 __TEXT.__cstring : 24,156 24,900 __TEXT.__dof_HaskellEv : 4,774 4,774 __TEXT.__eh_frame : 22,976 46,664 __DATA.__got : 1,264 880 __DATA.__nl_symbol_ptr : 16 16 __DATA.__la_symbol_ptr : 1,168 896 __DATA.__mod_init_func : 8 8 __DATA.__const : 130,048 79,744 __DATA.__data : 231,848 22,904 __DATA.__common : 45,924 46,092 __DATA.__bss : 856 840 TOTAL SIZE : 4,078,564 1,095,132 But, the compiled sizes are identical: 1.9K test-files-clang/Main.o 1.9K test-files-gcc/Main.o And, after dumping the link command, and looking up all the libs linked in (identical set in both cases), the clang libs are actually *smaller:* 13,004,272 ...clang.../lib/ghc-7.8.3/base-4.7.0.1/libHSbase-4.7.0.1.a 792,352 ...clang.../lib/ghc-7.8.3/ghc-prim-0.3.1.0/libHSghc-prim-0.3.1.0.a 1,010,824 ...clang.../lib/ghc-7.8.3/integer-gmp-0.5.1.0/libHSinteger-gmp-0.5.1.0.a 55,816 ...clang.../lib/ghc-7.8.3/rts-1.0/libCffi.a 565,112 ...clang.../lib/ghc-7.8.3/rts-1.0/libHSrts.a 24,378,416 ...gcc.../lib/ghc-7.8.3/base-4.7.0.1/libHSbase-4.7.0.1.a 1,253,176 ...gcc.../lib/ghc-7.8.3/ghc-prim-0.3.1.0/libHSghc-prim-0.3.1.0.a 1,014,280 ...gcc.../lib/ghc-7.8.3/integer-gmp-0.5.1.0/libHSinteger-gmp-0.5.1.0.a 57,984 ...gcc.../lib/ghc-7.8.3/rts-1.0/libCffi.a 556,432 ...gcc.../lib/ghc-7.8.3/rts-1.0/libHSrts.a So now I'm totally mystified! What is in that 3M of extra text segment?!?!? - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Sun Jul 13 06:53:23 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 12 Jul 2014 23:53:23 -0700 Subject: gcc vs. 
clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: AHA! The clang libs weren't/aren't being built split-obj! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Sun Jul 13 07:13:24 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 13 Jul 2014 00:13:24 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: Found the culprit!!!!!!! XCodeVersion=`xcodebuild -version | grep Xcode | sed "s/Xcode //"` This line in configure doesn't work on a system that just the Xcode command line tools installed! It also won't work on an OS X system that has some other tool chain (say, via brew) installed. On such systems, itsets XCodeVersion to "", which in tur The follow on code sets XCodeVersion1 and XCodeVersion2 to "0", and then this code runs, causing the problem: SplitObjsBroken=NO if test "$TargetOS_CPP" = "darwin" then # Split objects is broken (#4013) with XCode < 3.2 if test "$XCodeVersion1" -lt 3 then SplitObjsBroken=YES else if test "$XCodeVersion1" -eq 3 then if test "$XCodeVersion2" -lt 2 then SplitObjsBroken=YES fi fi fi fi ?Alas, it doesn't look like SplitObjsBroken has the logic to allow it to be overriden on the ./configure invocation (anyone know for sure? my autoconf is very rusty....) Too late here for me to think of a fix.... - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sun Jul 13 07:39:17 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 13 Jul 2014 09:39:17 +0200 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: (Bryan O'Sullivan's message of "Sat, 12 Jul 2014 20:52:09 -0700") References: Message-ID: <8738e5hfmi.fsf@gmail.com> On 2014-07-13 at 05:52:09 +0200, Bryan O'Sullivan wrote: [...] > haddock 2.14.3 that ships with it seems to be quite broken. Perhaps it's a > bad interaction with cabal, it's hard to say from the outside, but here are > some details. > > Here's the kind of error I get from haddock when I try to use cabal to > install a package (primitive, in this case): > > module ?primitive-0.5.3.0:Main? is defined in multiple files: [...] > The result is that I have no haddock docs for any packages now :-( I can't reproduce that here with a ghc-7.8.3/haddock-2.14.3 install on Linux. However, this sounds *alot* like a Clang/CPP issue, specifically https://github.com/haskell/cabal/issues/1740#issuecomment-39559026 From mietek at bak.io Sun Jul 13 11:06:27 2014 From: mietek at bak.io (=?windows-1252?Q?Mi=EBtek_Bak?=) Date: Sun, 13 Jul 2014 12:06:27 +0100 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: <8738e5hfmi.fsf@gmail.com> References: <8738e5hfmi.fsf@gmail.com> Message-ID: <838F2347-0966-42FF-B450-6A002013DBB3@bak.io> >> haddock 2.14.3 that ships with it seems to be quite broken. Perhaps it's a >> bad interaction with cabal, it's hard to say from the outside, but here are >> some details. >> >> Here's the kind of error I get from haddock when I try to use cabal to >> install a package (primitive, in this case): >> >> module ?primitive-0.5.3.0:Main? is defined in multiple files: > > I can't reproduce that here with a ghc-7.8.3/haddock-2.14.3 install on > Linux. However, this sounds *alot* like a Clang/CPP issue, specifically > > https://github.com/haskell/cabal/issues/1740#issuecomment-39559026 This is a Clang/CPP issue. A workaround is to pass `?ghc-options=-optP-P` to `cabal haddock`. 
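For context, a made-up module (not taken from the thread) showing the kind of source that is fed through the C preprocessor, and hence through clang's cpp on newer OS X toolchains. The -P flag that -optP-P forwards to the preprocessor suppresses its linemarker output, which is what the `cabal haddock --ghc-options=-optP-P` workaround above relies on; USE_ALT_PORT is a hypothetical flag for the example.

{-# LANGUAGE CPP #-}
module CppExample (defaultPort) where

-- With CPP enabled, this file is run through cpp before GHC and
-- Haddock ever see it.
defaultPort :: Int
#if defined(USE_ALT_PORT)
defaultPort = 8080
#else
defaultPort = 80
#endif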
The corresponding GHC issue was closed 5 weeks ago: https://ghc.haskell.org/trac/ghc/ticket/9174 -- Mi?tek From hvriedel at gmail.com Sun Jul 13 13:15:42 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 13 Jul 2014 15:15:42 +0200 Subject: RFC: unsafeShrinkMutableByteArray# In-Reply-To: <53C156D7.9080008@gmail.com> (Simon Marlow's message of "Sat, 12 Jul 2014 08:40:07 -0700") References: <87sim7815x.fsf@gmail.com> <53C156D7.9080008@gmail.com> Message-ID: <87tx6lflhd.fsf@gmail.com> On 2014-07-12 at 17:40:07 +0200, Simon Marlow wrote: > Yes, this will cause problems in some modes, namely -debug and -prof > that need to be able to scan the heap linearly. ...and I assume we don't want to fallback to a non-zerocopy mode for -debug & -prof in order avoid distorting the profiling measurements either? > Usually we invoke the > OVERWRITING_CLOSURE() macro which overwrites the original closure with > zero words, but this won't work in your case because you want to keep > the original contents. So you'll need a version of > OVERWRITING_CLOSURE() that takes the size that you want to retain, and > doesn't overwrite that part of the closure. This is probably a good > idea anyway, because it might save some work in other places where we > use OVERWRITING_CLOSURE(). I'm not sure I follow. What's the purpose of overwriting the original closure payload with zeros while in debug/profile mode? (and on what occasions that would be problematic for a MutableByteArray does it happen?) > I am worried about sizeofMutableByteArray# though. It wouldn't be > safe to call sizeofMutableByteArray# on the original array, just in > case it was evaluated after the shrink. You could make things > slightly safer by having unsafeShrinkMutableByteArray# return the new > array, so that you have a safe way to call sizeofMutableByteArray# > after the shrink. This still doesn't seem very satisfactory to me > though. ...as a somewhat drastic obvious measure, one could change the type-sig of sizeofMutableByteArray# to :: MutableByteArray# s a -> State# s -> (# State# s, Int# #) and fwiw, I could find only one use-site of sizeofMutableByteArray# inside ghc.git, so I'm wondering if that primitive is used much anyway. btw, is it currently safe to call/evaluate sizeofMutableByteArray# on the original MBA after a unsafeFreezeByteArray# was performed? Otoh, if we are to thread a MutableByteArray# through the call anyway, can't we just combine shrinking and freezing in one primop (as suggested below)? [...] >> PS: maybe unsafeShrinkMutableByteArray# could unsafe-freeze the >> ByteArray# while at it (thus be called something like >> unsafeShrinkAndFreezeMutableByteArray#), as once I know the final >> smaller size I would freeze it anyway right after shrinking. From fuuzetsu at fuuzetsu.co.uk Sun Jul 13 13:17:02 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 13 Jul 2014 15:17:02 +0200 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: <838F2347-0966-42FF-B450-6A002013DBB3@bak.io> References: <8738e5hfmi.fsf@gmail.com> <838F2347-0966-42FF-B450-6A002013DBB3@bak.io> Message-ID: <53C286CE.7080607@fuuzetsu.co.uk> On 07/13/2014 01:06 PM, Mi?tek Bak wrote: >>> haddock 2.14.3 that ships with it seems to be quite broken. Perhaps it's a >>> bad interaction with cabal, it's hard to say from the outside, but here are >>> some details. >>> >>> Here's the kind of error I get from haddock when I try to use cabal to >>> install a package (primitive, in this case): >>> >>> module ?primitive-0.5.3.0:Main? 
is defined in multiple files: >> >> I can't reproduce that here with a ghc-7.8.3/haddock-2.14.3 install on >> Linux. However, this sounds *alot* like a Clang/CPP issue, specifically >> >> https://github.com/haskell/cabal/issues/1740#issuecomment-39559026 > > This is a Clang/CPP issue. A workaround is to pass `?ghc-options=-optP-P` to `cabal haddock`. > > The corresponding GHC issue was closed 5 weeks ago: > https://ghc.haskell.org/trac/ghc/ticket/9174 > > Right, as pointed out, this has been hacked around in cabal. I was under the impression that the mentioned commit was going into whatever the next cabal version is and that it would be in the cabal distributed with 7.8.3. Is that not the case? -- Mateusz K. From mark.lentczner at gmail.com Sun Jul 13 14:35:19 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 13 Jul 2014 07:35:19 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: Message-ID: If this is happening on OS X and your computer is really clang (gcc -version will say clang somewhere).... ... then please try this simple fix: Edit the compiler settings file, find the line with "Haskell CPP Flags", and add -P to the list of flags. If that works, then the upcoming Haskell Platform release will take care of doing this, no need to hack this into cabal. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Sun Jul 13 14:44:37 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 13 Jul 2014 16:44:37 +0200 Subject: Proposal: require Haddock comment for every new top-level function and type in GHC source code In-Reply-To: References: Message-ID: <53C29B55.1030104@fuuzetsu.co.uk> On 06/30/2014 04:19 PM, Richard Eisenberg wrote: > Thanks, Johan, for starting this discussion. > > I mostly agree with the proposal. However, one (at times, serious) drawback to using Haddock is that it means that editing comments can cause parse failures. The way the GHC build works, these failures may not be detected until the end of a hacking session (if I'm using, say, `make 2`, as I tend to do) and then can be hard to diagnose. I've actually been bitten by this when working on GHC. You should never get a parse failure anymore, as of 2.14.x versions. At worst, the comment will render in an ugly way but it will no longer cause failures due to typos in comment syntax. What might cause failures is if you put a Haddock comment where GHC doesn't expect it to. This should be fixed in GHC parser. If you don't want to mess around with pretty comments, all you have to do is to turn your function comments from ?-- ? to ?-- |? and suddenly everyone else can benefit without source-diving. > So, I have to ask: why use Haddock? Do folks read the Haddock docs for GHC? (I don't, but perhaps that's because the docs aren't so good right now.) Would it be acceptable to change this proposal not to require Haddock docs? I read the generated Haddock docs. The advantage of Haddock over something else here is that you can look at class instances, have clickable links and so on. Having to navigate on Haddock pages and reading the function docs elsewhere would be cluttered. > Even if we decide to keep this proposal about Haddock docs specifically, I would strongly request that correct rendering of the Haddock docs not be scrutinized. 
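A tiny made-up example of the "-- |" point above (the names are invented): the first comment is invisible to Haddock, while the second, identical in content, appears in the generated documentation with no further markup.

module HaddockExample where

-- Plain comment: ignored by Haddock.
-- lookupOcc finds all occurrences of a name in a list of names.
lookupOcc :: String -> [String] -> [String]
lookupOcc occ = filter (== occ)

-- | Haddock comment: find all occurrences of a name in a list of names.
lookupOcc' :: String -> [String] -> [String]
lookupOcc' occ = filter (== occ)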
At the end of a hacking session (which is hard enough to find time for, as is), I don't want to be expected to look through the generated HTML to make sure that my typewriter font and italics are rendering correctly. This is something of a corollary to Simon's comment about wanting to refer to Notes from Haddock comments -- I would want the Haddock output to be quite secondary to the proper documentation in the source code. > > (Note that this "demotion" of the role of Haddock is certainly not my practice in released libraries! But, Haddock is much less useful in an application like GHC than in a library.) GHC is also a library, I think it's unreasonable to expect people to jump into source and manually tracking down all the comments (if any) when they want to use GHC API. > All that said, I do agree with the intent of this proposal and am happy to take on my part of the burden of documenting new (and perhaps some old) functions as I work. I have been very guilty of the "broken window" effect in not documenting new code. > > Thanks, > Richard > -- Mateusz K. From bob at redivi.com Sun Jul 13 15:20:39 2014 From: bob at redivi.com (Bob Ippolito) Date: Sun, 13 Jul 2014 08:20:39 -0700 Subject: gcc vs. clang builds of 7.8.3 on OS X In-Reply-To: References: Message-ID: On Sun, Jul 13, 2014 at 12:13 AM, Mark Lentczner wrote: > Found the culprit!!!!!!! > > XCodeVersion=`xcodebuild -version | grep Xcode | sed "s/Xcode //"` > > > This line in configure doesn't work on a system that just the Xcode > command line tools installed! It also won't work on an OS X system that has > some other tool chain (say, via brew) installed. On such systems, itsets > XCodeVersion to "", which in tur > > The follow on code sets XCodeVersion1 and XCodeVersion2 to "0", and then > this code runs, causing the problem: > > SplitObjsBroken=NO > if test "$TargetOS_CPP" = "darwin" > then > # Split objects is broken (#4013) with XCode < 3.2 > if test "$XCodeVersion1" -lt 3 > then > SplitObjsBroken=YES > else > if test "$XCodeVersion1" -eq 3 > then > if test "$XCodeVersion2" -lt 2 > then > SplitObjsBroken=YES > fi > fi > fi > fi > > > ?Alas, it doesn't look like SplitObjsBroken has the logic to allow it to > be overriden on the ./configure invocation (anyone know for sure? my > autoconf is very rusty....) > > Too late here for me to think of a fix.... > Would it be possible to simply stop supporting Xcode builds that old? #4013 is three years old and Xcode < 3.2 only applies to Mac OS X 10.5 and earlier. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Sun Jul 13 16:42:40 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Sun, 13 Jul 2014 09:42:40 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: Message-ID: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> It was exactly the clang cpp issue. Since I didn't know what I was doing, I reinstalled GHC telling it to use cpphs (at Carter's suggestion), and that worked. Thanks! > On Jul 13, 2014, at 7:35, Mark Lentczner wrote: > > If this is happening on OS X and your computer is really clang (gcc -version will say clang somewhere).... > > ... then please try this simple fix: Edit the compiler settings file, find the line with "Haskell CPP Flags", and add -P to the list of flags. > > If that works, then the upcoming Haskell Platform release will take care of doing this, no need to hack this into cabal. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark.lentczner at gmail.com Sun Jul 13 17:08:56 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 13 Jul 2014 10:08:56 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: Glad that worked for you... I'd like to be sure that my -P fix as part of the Platform fixes things. Can you point me to a small package that that haddock fails on? Then I can repro with the 7.8.3 bindist, and test the fix. - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Sun Jul 13 17:23:00 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Sun, 13 Jul 2014 10:23:00 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: On Sun, Jul 13, 2014 at 10:08 AM, Mark Lentczner wrote: > I'd like to be sure that my -P fix as part of the Platform fixes things. > Can you point me to a small package that that haddock fails on? Then I can > repro with the 7.8.3 bindist, and test the fix. > Haddock was failing on just about everything I tried - a suitably quick repro with no dependencies would be primitive. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Jul 13 17:30:10 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 13 Jul 2014 13:30:10 -0400 Subject: ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: I'm just surprised that this wasn't caught before.... Would have been easy to fix. On Sunday, July 13, 2014, Bryan O'Sullivan wrote: > > On Sun, Jul 13, 2014 at 10:08 AM, Mark Lentczner > wrote: > >> I'd like to be sure that my -P fix as part of the Platform fixes things. >> Can you point me to a small package that that haddock fails on? Then I can >> repro with the 7.8.3 bindist, and test the fix. >> > > Haddock was failing on just about everything I tried - a suitably quick > repro with no dependencies would be primitive. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sun Jul 13 17:31:11 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 13 Jul 2014 13:31:11 -0400 Subject: ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: I'm pretty sure I tested on clang only though On Sunday, July 13, 2014, Carter Schonwald wrote: > I'm just surprised that this wasn't caught before.... Would have been > easy to fix. > > On Sunday, July 13, 2014, Bryan O'Sullivan > wrote: > >> >> On Sun, Jul 13, 2014 at 10:08 AM, Mark Lentczner < >> mark.lentczner at gmail.com> wrote: >> >>> I'd like to be sure that my -P fix as part of the Platform fixes things. >>> Can you point me to a small package that that haddock fails on? Then I can >>> repro with the 7.8.3 bindist, and test the fix. >>> >> >> Haddock was failing on just about everything I tried - a suitably quick >> repro with no dependencies would be primitive. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Sun Jul 13 17:51:05 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 13 Jul 2014 13:51:05 -0400 Subject: ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: Bryan: what version of cabal-install do you have? I suspect it might be because you have an older version of cabal install? On Sunday, July 13, 2014, Carter Schonwald wrote: > I'm pretty sure I tested on clang only though > > On Sunday, July 13, 2014, Carter Schonwald > wrote: > >> I'm just surprised that this wasn't caught before.... Would have been >> easy to fix. >> >> On Sunday, July 13, 2014, Bryan O'Sullivan wrote: >> >>> >>> On Sun, Jul 13, 2014 at 10:08 AM, Mark Lentczner < >>> mark.lentczner at gmail.com> wrote: >>> >>>> I'd like to be sure that my -P fix as part of the Platform fixes >>>> things. Can you point me to a small package that that haddock fails on? >>>> Then I can repro with the 7.8.3 bindist, and test the fix. >>>> >>> >>> Haddock was failing on just about everything I tried - a suitably quick >>> repro with no dependencies would be primitive. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Sun Jul 13 19:33:39 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Sun, 13 Jul 2014 12:33:39 -0700 Subject: ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: On Sun, Jul 13, 2014 at 10:51 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Bryan: what version of cabal-install do you have? I suspect it might be > because you have an older version of cabal install? > I have the very latest version. I'll have to retry with clang's cpp, though, as cpphs prints annoying warnings during every compile :-( -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Sun Jul 13 19:37:32 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Sun, 13 Jul 2014 12:37:32 -0700 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: On Sun, Jul 13, 2014 at 10:08 AM, Mark Lentczner wrote: > I'd like to be sure that my -P fix as part of the Platform fixes things. > Can you point me to a small package that that haddock fails on? Then I can > repro with the 7.8.3 bindist, and test the fix. > Using -P seems to work for me - thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Sun Jul 13 20:21:22 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 13 Jul 2014 16:21:22 -0400 Subject: HaRe and incremental type checking / type inference In-Reply-To: References: Message-ID: <12B39F67-72DE-4F5F-B149-2B627F4EB513@cis.upenn.edu> Hello Alan, It's not clear what you're asking for here. What's keeping you from "accessing the full power of the GHC type system"? Richard On Jul 9, 2014, at 4:35 PM, AlanKim Zimmerman wrote: > I have hit a problem in HaRe when lifting a declaration from e.g. a where clause of a function to the top level, where there is a type signature of any complexity. 
> > e.g lifting 'baz' from function 'foo' below > > -------------------------- > foo a = baz > where > baz :: Int > baz = xx 1 a > > xx :: (Num t) => t -> t -> t > xx p1 p2 = p1 + p2 > -------------------------------------- > > becomes > > --------------------------------------------- > foo a = (baz xx a) > where > xx :: (Num t) => t -> t -> t > xx p1 p2 = p1 + p2 > > -- baz:: (forall t. Num t => t -> t -> t) -> Int ->Int > baz :: Num a => (a -> t1 -> t) -> t1 -> t > baz xx a= xx 1 a > ----------------------------------------------- > > For a very small subset this can be calculated easily, but for full generality it would be great to access the full power of the GHC type system. > > So before diving in too deeply, I thought I would test the waters as to the feasibility of doing something like this. I was hoping that perhaps the effort at an external constraint solver might be making the interfacing slightly simpler. > > Regards > Alan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From alan.zimm at gmail.com Sun Jul 13 20:31:16 2014 From: alan.zimm at gmail.com (AlanKim Zimmerman) Date: Sun, 13 Jul 2014 22:31:16 +0200 Subject: HaRe and incremental type checking / type inference In-Reply-To: <12B39F67-72DE-4F5F-B149-2B627F4EB513@cis.upenn.edu> References: <12B39F67-72DE-4F5F-B149-2B627F4EB513@cis.upenn.edu> Message-ID: Basically I want to be able to make a few small changes to an already type checked AST and then query the type system, similar to what is possible with typed holes. So in my example I would like to move baz to the top level, and then invoke the type checker to get the required signature, without having to convert the partial result back to source and re-run the entire compilation. Alan On Sun, Jul 13, 2014 at 10:21 PM, Richard Eisenberg wrote: > Hello Alan, > > It's not clear what you're asking for here. What's keeping you from > "accessing the full power of the GHC type system"? > > Richard > > On Jul 9, 2014, at 4:35 PM, AlanKim Zimmerman wrote: > > > I have hit a problem in HaRe when lifting a declaration from e.g. a > where clause of a function to the top level, where there is a type > signature of any complexity. > > > > e.g lifting 'baz' from function 'foo' below > > > > -------------------------- > > foo a = baz > > where > > baz :: Int > > baz = xx 1 a > > > > xx :: (Num t) => t -> t -> t > > xx p1 p2 = p1 + p2 > > -------------------------------------- > > > > becomes > > > > --------------------------------------------- > > foo a = (baz xx a) > > where > > xx :: (Num t) => t -> t -> t > > xx p1 p2 = p1 + p2 > > > > -- baz:: (forall t. Num t => t -> t -> t) -> Int ->Int > > baz :: Num a => (a -> t1 -> t) -> t1 -> t > > baz xx a= xx 1 a > > ----------------------------------------------- > > > > For a very small subset this can be calculated easily, but for full > generality it would be great to access the full power of the GHC type > system. > > > > So before diving in too deeply, I thought I would test the waters as to > the feasibility of doing something like this. I was hoping that perhaps the > effort at an external constraint solver might be making the interfacing > slightly simpler. > > > > Regards > > Alan > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eir at cis.upenn.edu Sun Jul 13 20:41:01 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 13 Jul 2014 16:41:01 -0400 Subject: HaRe and incremental type checking / type inference In-Reply-To: References: <12B39F67-72DE-4F5F-B149-2B627F4EB513@cis.upenn.edu> Message-ID: <4205A6B8-6AA9-4087-9EC9-F167155E5D41@cis.upenn.edu> Sorry -- I'm still not sure what you mean. What AST are you working with? HsSyn RdrName? HsSyn Name? HsSyn Id? Core? And, in your original example, there is a commented-out type signature and then an uncommented-out one. Which are you trying to generate? Would the recent partial type signature work help you? (I don't think it's in HEAD yet.) Perhaps give the type of a function in the GHC API that you would like to have... Richard On Jul 13, 2014, at 4:31 PM, AlanKim Zimmerman wrote: > Basically I want to be able to make a few small changes to an already type checked AST and then query the type system, similar to what is possible with typed holes. > > So in my example I would like to move baz to the top level, and then invoke the type checker to get the required signature, without having to convert the partial result back to source and re-run the entire compilation. > > Alan > > > > > > On Sun, Jul 13, 2014 at 10:21 PM, Richard Eisenberg wrote: > Hello Alan, > > It's not clear what you're asking for here. What's keeping you from "accessing the full power of the GHC type system"? > > Richard > > On Jul 9, 2014, at 4:35 PM, AlanKim Zimmerman wrote: > > > I have hit a problem in HaRe when lifting a declaration from e.g. a where clause of a function to the top level, where there is a type signature of any complexity. > > > > e.g lifting 'baz' from function 'foo' below > > > > -------------------------- > > foo a = baz > > where > > baz :: Int > > baz = xx 1 a > > > > xx :: (Num t) => t -> t -> t > > xx p1 p2 = p1 + p2 > > -------------------------------------- > > > > becomes > > > > --------------------------------------------- > > foo a = (baz xx a) > > where > > xx :: (Num t) => t -> t -> t > > xx p1 p2 = p1 + p2 > > > > -- baz:: (forall t. Num t => t -> t -> t) -> Int ->Int > > baz :: Num a => (a -> t1 -> t) -> t1 -> t > > baz xx a= xx 1 a > > ----------------------------------------------- > > > > For a very small subset this can be calculated easily, but for full generality it would be great to access the full power of the GHC type system. > > > > So before diving in too deeply, I thought I would test the waters as to the feasibility of doing something like this. I was hoping that perhaps the effort at an external constraint solver might be making the interfacing slightly simpler. > > > > Regards > > Alan > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Sun Jul 13 21:11:02 2014 From: alan.zimm at gmail.com (AlanKim Zimmerman) Date: Sun, 13 Jul 2014 23:11:02 +0200 Subject: HaRe and incremental type checking / type inference In-Reply-To: <4205A6B8-6AA9-4087-9EC9-F167155E5D41@cis.upenn.edu> References: <12B39F67-72DE-4F5F-B149-2B627F4EB513@cis.upenn.edu> <4205A6B8-6AA9-4087-9EC9-F167155E5D41@cis.upenn.edu> Message-ID: I am working with HsSyn Name, resulting from calling.typecheckModule from the GHC API. 
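For reference, a minimal sketch (the module name "Target" and the wrapper are hypothetical) of where typecheckModule sits in the GHC API pipeline; the reTypecheckModule described below would essentially re-run the final step on an edited ParsedModule without redoing everything before it.

module TypecheckOnce (typecheckOnce) where

import GHC
import GHC.Paths (libdir)

-- Parse and typecheck one module; the TypecheckedModule also carries
-- the renamed AST (HsSyn Name) that HaRe works over.
typecheckOnce :: FilePath -> IO TypecheckedModule
typecheckOnce file = runGhc (Just libdir) $ do
  dflags <- getSessionDynFlags
  _ <- setSessionDynFlags dflags
  target <- guessTarget file Nothing
  setTargets [target]
  _ <- load LoadAllTargets
  modSum <- getModSummary (mkModuleName "Target")
  parsed <- parseModule modSum      -- ParsedModule (HsSyn RdrName)
  typecheckModule parsed            -- TypecheckedModule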
This seems to give a workable balance between the ParsedSource which is lexically close to the original source but lacks grouping of related elements, and the TypecheckedSource which has had the type information reduced to the type checker output. I am not sure which of the two type signatures is the "right" one for a refactorer. The commented out one preserves that the original baz returns an Int, and splices in the xx signature, the uncommented one is as generated by GHC 7.6.3 when using the :i command in ghci. The API calls I would like to make would be something like reTypecheckModule :: GhcMonad m => ParsedModule -> m TypecheckedModule which would take a modified ParsedModule (or RenamedSource, or whatever context is required), restore the original type checking environment, and redo the type checking phase. This would then allow a query something like the existing getInfo call to retrieve the inferred type. Alternatively, explicitly putting a typed hole in for the required signature as part of the changes going in to the call to reTypecheckModule and then processing the returned warnings would work just as well. But I suspect I should poke around in the internals a bit more to clarify my question by trying to actually do it. Alan On Sun, Jul 13, 2014 at 10:41 PM, Richard Eisenberg wrote: > Sorry -- I'm still not sure what you mean. > > What AST are you working with? HsSyn RdrName? HsSyn Name? HsSyn Id? Core? > And, in your original example, there is a commented-out type signature and > then an uncommented-out one. Which are you trying to generate? Would the > recent partial type signature work help you? (I don't think it's in HEAD > yet.) > > Perhaps give the type of a function in the GHC API that you would like to > have... > > Richard > > On Jul 13, 2014, at 4:31 PM, AlanKim Zimmerman > wrote: > > Basically I want to be able to make a few small changes to an already type > checked AST and then query the type system, similar to what is possible > with typed holes. > > So in my example I would like to move baz to the top level, and then > invoke the type checker to get the required signature, without having to > convert the partial result back to source and re-run the entire compilation. > > Alan > > > > > > On Sun, Jul 13, 2014 at 10:21 PM, Richard Eisenberg > wrote: > >> Hello Alan, >> >> It's not clear what you're asking for here. What's keeping you from >> "accessing the full power of the GHC type system"? >> >> Richard >> >> On Jul 9, 2014, at 4:35 PM, AlanKim Zimmerman >> wrote: >> >> > I have hit a problem in HaRe when lifting a declaration from e.g. a >> where clause of a function to the top level, where there is a type >> signature of any complexity. >> > >> > e.g lifting 'baz' from function 'foo' below >> > >> > -------------------------- >> > foo a = baz >> > where >> > baz :: Int >> > baz = xx 1 a >> > >> > xx :: (Num t) => t -> t -> t >> > xx p1 p2 = p1 + p2 >> > -------------------------------------- >> > >> > becomes >> > >> > --------------------------------------------- >> > foo a = (baz xx a) >> > where >> > xx :: (Num t) => t -> t -> t >> > xx p1 p2 = p1 + p2 >> > >> > -- baz:: (forall t. Num t => t -> t -> t) -> Int ->Int >> > baz :: Num a => (a -> t1 -> t) -> t1 -> t >> > baz xx a= xx 1 a >> > ----------------------------------------------- >> > >> > For a very small subset this can be calculated easily, but for full >> generality it would be great to access the full power of the GHC type >> system. 
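A standalone illustration (not HaRe output) of the typed-hole route mentioned above: before lifting baz, replace its use site with a named hole; compiling the module then makes GHC report, in the resulting "Found hole '_baz'" message, the type the lifted top-level baz would need at that use site.

module LiftSketch where

-- Deliberately does not compile: the named hole _baz stands for the
-- binding about to be lifted, and GHC's hole diagnostic reports the
-- type it expects for it here.
foo :: Int -> Int
foo a = _baz xx a
  where
    xx :: Num t => t -> t -> t
    xx p1 p2 = p1 + p2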
>> > >> > So before diving in too deeply, I thought I would test the waters as to >> the feasibility of doing something like this. I was hoping that perhaps the >> effort at an external constraint solver might be making the interfacing >> slightly simpler. >> > >> > Regards >> > Alan >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Jul 14 06:24:48 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 14 Jul 2014 08:24:48 +0200 Subject: 64bit Solaris was: Re: 7.8.1 plan In-Reply-To: <53464A9C.8070807@dfki.de> References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> Message-ID: <53C377B0.4070008@centrum.cz> Folks, thanks to 64bit GHC binaries provided by Alain's SmartOS builder I've been able to hack GHC to compile well on/for x86_64-solaris platform. Austin already merged the support into GHC HEAD, see https://phabricator.haskell.org/rGHC6da603213b097a267418d8c14cbfaf0021ac2b2c It would be great if you also give it a try and test on your systems or with cross-compiling without a need to install SmartOS libraries. Thanks! Karel On 04/10/14 09:39 AM, Christian Maeder wrote: > Hi, > > I've tried to cross-compile > https://ghc.haskell.org/trac/ghc/ticket/8910 > but did not succeed, yet. > > There seem to be problems with Int64 and/or Word data types. > The proper start would be to make a careful comparison of the test-suite > results, but there are already failures for the 32bit version (under > Solaris 10). Unfortunately I don't have time to look into it further. > > Cheers Christian > > Am 09.04.2014 16:49, schrieb Pavel Ryzhov: >> Hi Karel, >> >> That is great! >> >> Do you know if there any plans for 64-bit build of GHC for Solaris 11? >> >> Regards, >> Pavel From kazu at iij.ad.jp Mon Jul 14 07:00:05 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Mon, 14 Jul 2014 16:00:05 +0900 (JST) Subject: Huge space leak of GHC API 7.8.x? Message-ID: <20140714.160005.306769439121664539.kazu@iij.ad.jp> Hi, Some guys reported to me that ghc-mod uses about 1G bytes on Mac and I can reproduce this. I tried to understand why ghc-mod uses such huge memory. I found that GHC API 7.8.x uses much more memory than GHC API 7.6.x. Attached two files demonstrate this: - A.hs -- Simple program using GHC API (copied from Wiki) - B.hs -- A target file, just hello world You can compile A.hs as follows: % ghc A.hs -package ghc -package ghc-paths The following is the result: Mac (64bit) Linux (64bit) GHC 7.6.3: 20MB 4MB GHC 7.8.3: 106MB 13MB So, I think GHC API 7.8.x has huge space leak. (And I'm wondering why Mac uses much more memory than Linux). I would like to hear opinions from you guys. 
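One way to make such comparisons reproducible from inside the program itself (a sketch, not part of the attached reproduction below): ask the RTS for its own statistics after the load. Note these are GC-heap figures rather than the OS-level process sizes quoted above, and the program must be run with "+RTS -T" for getGCStats to succeed.

module ReportMemory (reportMemory) where

import GHC.Stats (GCStats (..), getGCStats)

-- Print a few residency-related figures from the RTS.
reportMemory :: IO ()
reportMemory = do
  stats <- getGCStats
  putStrLn $ "max bytes used:           " ++ show (maxBytesUsed stats)
  putStrLn $ "current bytes used:       " ++ show (currentBytesUsed stats)
  putStrLn $ "peak megabytes allocated: " ++ show (peakMegabytesAllocated stats)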
--Kazu import Control.Concurrent import CoreMonad (liftIO) import DynFlags import GHC import GHC.Paths (libdir) main = defaultErrorHandler defaultFatalMessager defaultFlushOut $ do runGhc (Just libdir) $ do dflags <- getSessionDynFlags let dflags' = dflags {hscTarget = HscInterpreted ,ghcLink = LinkInMemory ,ghcMode = CompManager } setSessionDynFlags dflags' target <- guessTarget "B.hs" Nothing setTargets [target] load LoadAllTargets liftIO $ threadDelay 10000000 module B where main = print "Hello, World!" From simonpj at microsoft.com Mon Jul 14 07:34:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 07:34:07 +0000 Subject: Trac spam Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042B530@DB3PRD3001MB020.064d.mgd.msft.net> Dear GHC devs Is it possible to stop spam on GHC's Trac? Simon -----Original Message----- From: ghc-tickets [mailto:ghc-tickets-bounces at haskell.org] On Behalf Of GHC Sent: 14 July 2014 04:24 Cc: ghc-tickets at haskell.org Subject: [GHC] #9310: This Cleanse Ultima is very beneficial to regulate a good blood circulation #9310: This Cleanse Ultima is very beneficial to regulate a good blood circulation ------------------------------------+----------------------------------- ------------------------------------+-- Reporter: joerjpool | Owner: Type: bug | Status: new Priority: normal | Milestone: Component: Compiler | Version: 7.8.2 Keywords: Cleanse Ultima | Operating System: Unknown/Multiple Architecture: Unknown/Multiple | Type of failure: None/Unknown Difficulty: Unknown | Test Case: Blocked By: | Blocking: Related Tickets: | ------------------------------------+----------------------------------- ------------------------------------+-- He something that would punish if you feel something in the question is how are you actually going to do I liked it physically working mostly stress myself out our food Afghani book more than half my life when hapless seafood gumbo I have Lucy golf when Wane have you know some sushi or thought not downtown today know what more you deprive yourself the more you want be changed when you get sold week we're not moving why outtakes house days fish they still hadn't fully fused week you some two dozen times but hey it happens that I wanted to take a little bit it is not a little bit better which will find few lost five straight days %uh week yeah I want to our ERP user roughly everything you worked so hard for all anyway so if not before keep good food you know people who you are happy light snack crackers and stuff like that my skate since I people I illegal you know I was clapping her [http://cleanseultima-uk.com/ Cleanse Ultima] hands they know they're not going to be. and what any death of Natalie wasting money first off secondly it?s not Billy is not going to help you get through your weight loss process some things that hike to do was keep so I may even take don't use to the fact that I sweet potato chips actually sweet potato fries around Kasha pizza its peak but I'll be healthy so what's that me he likes me happy nasty old who was involved you don't mind any something around the time I sleep he on a healthy track remind mean my healthy lifestyle ha-ha sweet shocking video must if you see how check on the food eat some other schools that joy hey stop talking hospital all major install look back up debt but soon up today work 350 pounds on top it's not worth it if you have some see accident I just have work 18th dub step make use of your own you created movie didn't happen ask. 
http://cleanseultima-uk.com/ -- Ticket URL: GHC The Glasgow Haskell Compiler _______________________________________________ ghc-tickets mailing list ghc-tickets at haskell.org http://www.haskell.org/mailman/listinfo/ghc-tickets From karel.gardas at centrum.cz Mon Jul 14 07:38:11 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 14 Jul 2014 09:38:11 +0200 Subject: Huge space leak of GHC API 7.8.x? In-Reply-To: <20140714.160005.306769439121664539.kazu@iij.ad.jp> References: <20140714.160005.306769439121664539.kazu@iij.ad.jp> Message-ID: <53C388E3.6050308@centrum.cz> On 07/14/14 09:00 AM, Kazu Yamamoto (????) wrote: > Mac (64bit) Linux (64bit) > GHC 7.6.3: 20MB 4MB > GHC 7.8.3: 106MB 13MB On Solaris 11 i386 (32bit binary) I see: GHC 7.8.2: 91MB (size), 81MB (RSS) GHC 7.6.3 53MB (size), 44MB (RSS) Cheers, Karel From simonpj at microsoft.com Mon Jul 14 07:39:53 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 07:39:53 +0000 Subject: Huge space leak of GHC API 7.8.x? In-Reply-To: <20140714.160005.306769439121664539.kazu@iij.ad.jp> References: <20140714.160005.306769439121664539.kazu@iij.ad.jp> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042B5F5@DB3PRD3001MB020.064d.mgd.msft.net> Would you like to create a Trac ticket? Is anyone able to investigate? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Kazu | Yamamoto | Sent: 14 July 2014 08:00 | To: ghc-devs at haskell.org | Subject: Huge space leak of GHC API 7.8.x? | | Hi, | | Some guys reported to me that ghc-mod uses about 1G bytes on Mac and I | can reproduce this. I tried to understand why ghc-mod uses such huge | memory. | | I found that GHC API 7.8.x uses much more memory than GHC API 7.6.x. | Attached two files demonstrate this: | | - A.hs -- Simple program using GHC API (copied from Wiki) | - B.hs -- A target file, just hello world | | You can compile A.hs as follows: | | % ghc A.hs -package ghc -package ghc-paths | | The following is the result: | | Mac (64bit) Linux (64bit) | GHC 7.6.3: 20MB 4MB | GHC 7.8.3: 106MB 13MB | | So, I think GHC API 7.8.x has huge space leak. (And I'm wondering why | Mac uses much more memory than Linux). | | I would like to hear opinions from you guys. | | --Kazu | | | import Control.Concurrent | import CoreMonad (liftIO) | import DynFlags | import GHC | import GHC.Paths (libdir) | | main = defaultErrorHandler defaultFatalMessager defaultFlushOut $ do | runGhc (Just libdir) $ do | dflags <- getSessionDynFlags | let dflags' = dflags {hscTarget = HscInterpreted | ,ghcLink = LinkInMemory | ,ghcMode = CompManager | } | setSessionDynFlags dflags' | target <- guessTarget "B.hs" Nothing | setTargets [target] | load LoadAllTargets | liftIO $ threadDelay 10000000 | | | module B where | | main = print "Hello, World!" | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From hvriedel at gmail.com Mon Jul 14 07:43:49 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 14 Jul 2014 09:43:49 +0200 Subject: Trac spam In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042B530@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Mon, 14 Jul 2014 07:34:07 +0000") References: <618BE556AADD624C9C918AA5D5911BEF1042B530@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87ha2kbd1m.fsf@gmail.com> On 2014-07-14 at 09:34:07 +0200, Simon Peyton Jones wrote: > Is it possible to stop spam on GHC's Trac? 
A few days ago, bots/users registering via @yahoo.com addresses managed to break the (rather simple) text-based algebraic captcha. I guess we need to up the game and maybe switch to a (simple) Haskell-themed captcha... any suggestions of something easy to implement (in Python)? Cheers, hvr From mail at joachim-breitner.de Mon Jul 14 07:47:41 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 14 Jul 2014 09:47:41 +0200 Subject: Trac spam In-Reply-To: <87ha2kbd1m.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF1042B530@DB3PRD3001MB020.064d.mgd.msft.net> <87ha2kbd1m.fsf@gmail.com> Message-ID: <1405324061.2216.4.camel@kirk> Hi, Am Montag, den 14.07.2014, 09:43 +0200 schrieb Herbert Valerio Riedel: > On 2014-07-14 at 09:34:07 +0200, Simon Peyton Jones wrote: > > Is it possible to stop spam on GHC's Trac? > > A few days ago, bots/users registering via @yahoo.com addresses managed > to break the (rather simple) text-based algebraic captcha. I guess we > need to up the game and maybe switch to a (simple) Haskell-themed > captcha... any suggestions of something easy to implement (in Python)? So they start clicking links in registration mails? Sigh. In my experience, a simple static question is sufficient, as these bot don?t target our trac manually. Such as: GHC is a compiler for what language? Greeting, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From kazu at iij.ad.jp Mon Jul 14 07:51:28 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Mon, 14 Jul 2014 16:51:28 +0900 (JST) Subject: Huge space leak of GHC API 7.8.x? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042B5F5@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140714.160005.306769439121664539.kazu@iij.ad.jp> <618BE556AADD624C9C918AA5D5911BEF1042B5F5@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20140714.165128.96540364177872034.kazu@iij.ad.jp> Hi Simon, > Would you like to create a Trac ticket? Here: https://ghc.haskell.org/trac/ghc/ticket/9314 --Kazu From simonpj at microsoft.com Mon Jul 14 08:26:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 08:26:33 +0000 Subject: Put GHC on travis for real In-Reply-To: <1405074983.2152.7.camel@kirk> References: <1405074983.2152.7.camel@kirk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042B997@DB3PRD3001MB020.064d.mgd.msft.net> | I can do the former, but the latter needs to be done by a member of the | ?ghc? on GitHub. I can do the latter (and keep managing the travis | instance) if someone adds me to the organization... Johachim, you contribute a lot, thank you. You should be a member of the group! Who decides that? I know about GHC committers...is that the same as the ghc group on Github. Anyway I support adding Joachim! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 11 July 2014 11:36 | To: ghc-devs at haskell.org | Subject: Put GHC on travis for real | | Hi, | | with all packages as submodules, ghc-complete (which is basically a git | repository tracking the ?fingerprint? of the main repository) is | obsolete. 
So we could move the travis-checking of the main line to run | on the ghc repository directly. This would require | | * adding a .travis.yaml based on the contents of | https://github.com/nomeata/ghc-complete/blob/master/.travis.yml | and https://github.com/nomeata/ghc-complete/blob/master/validate.sh to | the repository. | * enabling Travis for this repository. | | I can do the former, but the latter needs to be done by a member of the | ?ghc? on GitHub. I can do the latter (and keep managing the travis | instance) if someone adds me to the organization... | | Greetings, | Joachim | | | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From simonpj at microsoft.com Mon Jul 14 11:13:08 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 11:13:08 +0000 Subject: [commit: ghc] master: Update various performance benchmarks (194107e) In-Reply-To: <20140714110319.AB6512406D@ghc.haskell.org> References: <20140714110319.AB6512406D@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042BC0A@DB3PRD3001MB020.064d.mgd.msft.net> | I started monitoring perfomance on a per-commit base. These seem to | be | off for a while now. Adjusting them, and from now I hope I can keep | closer tabs on them. Thank you. That's most helpful. One of the reasons that perf tests get worse is that each time the limit is reached, the author thinks "oh my bit probably only made it a tiny bit worse", so we miss big jumps. It's the big jumps we thereby miss, and nailing them to a particular commit would be great. I'ts important to monitor the actual bump, rather than the first time the limit is broken. (They aren't the same, of course.) Simon From mail at joachim-breitner.de Mon Jul 14 11:20:52 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 14 Jul 2014 13:20:52 +0200 Subject: [commit: ghc] master: Update various performance benchmarks (194107e) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042BC0A@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140714110319.AB6512406D@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF1042BC0A@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1405336852.2216.20.camel@kirk> Hi, Am Montag, den 14.07.2014, 11:13 +0000 schrieb Simon Peyton Jones: > I'ts important to monitor the actual bump, rather than the first time > the limit is broken. (They aren't the same, of course.) Quite right. I am in the process of setting up per-commit monitoring of _nofib_ results, and I just added ?number of failing tests? to the list of ?benchmark numbers?. But that doesn?t help with performance testcases Simple way forward: Always print the results from performance benchmarks (not only when they are out of the limits). Then, when they break the limit, one can go and grep through the old logs for changes. Next step after that: Include the numbers from all performance benchmarks in the same monitoring as the nofib numbers. Technically easy, but I?m not sure how well it scales for the large(?) number of perf tests we have. But one step at a time. Expect an announcement of the performance dashboard thingy soon. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From Christian.Maeder at dfki.de Mon Jul 14 12:53:22 2014 From: Christian.Maeder at dfki.de (Christian Maeder) Date: Mon, 14 Jul 2014 14:53:22 +0200 Subject: 64bit Solaris was: Re: 7.8.1 plan In-Reply-To: <53C377B0.4070008@centrum.cz> References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> <53C377B0.4070008@centrum.cz> Message-ID: <53C3D2C2.5080808@dfki.de> Hi Karel, usually I do not build HEAD. My attempt starting to do so failed as follows: git clone git://git.haskell.org/ghc.git cd ghc ./sync-all get autoreconf ./configure gmake ... ghc.mk:690: libraries/haskeline/ghc.mk: Datei oder Verzeichnis nicht gefunden libraries/dph/ghc.mk:134: *** dph_th_deps(v): libraries/dph/dph-base_dist-install_GHCI_LIB not defined!. Schluss. gmake: *** [all] Fehler 2 Is there somewhere a x86_64-solaris2 binary-dist (for Solaris 10) to try out first? Cheers Christian Am 14.07.2014 08:24, schrieb Karel Gardas: > > Folks, > > thanks to 64bit GHC binaries provided by Alain's SmartOS builder I've > been able to hack GHC to compile well on/for x86_64-solaris platform. > Austin already merged the support into GHC HEAD, see > https://phabricator.haskell.org/rGHC6da603213b097a267418d8c14cbfaf0021ac2b2c > > > It would be great if you also give it a try and test on your systems or > with cross-compiling without a need to install SmartOS libraries. > > Thanks! > Karel > > On 04/10/14 09:39 AM, Christian Maeder wrote: >> Hi, >> >> I've tried to cross-compile >> https://ghc.haskell.org/trac/ghc/ticket/8910 >> but did not succeed, yet. >> >> There seem to be problems with Int64 and/or Word data types. >> The proper start would be to make a careful comparison of the test-suite >> results, but there are already failures for the 32bit version (under >> Solaris 10). Unfortunately I don't have time to look into it further. >> >> Cheers Christian >> >> Am 09.04.2014 16:49, schrieb Pavel Ryzhov: >>> Hi Karel, >>> >>> That is great! >>> >>> Do you know if there any plans for 64-bit build of GHC for Solaris 11? >>> >>> Regards, >>> Pavel > From karel.gardas at centrum.cz Mon Jul 14 13:16:30 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 14 Jul 2014 15:16:30 +0200 Subject: 64bit Solaris was: Re: 7.8.1 plan In-Reply-To: <53C3D2C2.5080808@dfki.de> References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> <53C377B0.4070008@centrum.cz> <53C3D2C2.5080808@dfki.de> Message-ID: <53C3D82E.4040101@centrum.cz> On 07/14/14 02:53 PM, Christian Maeder wrote: > Hi Karel, > > usually I do not build HEAD. My attempt starting to do so failed as > follows: > > git clone git://git.haskell.org/ghc.git > cd ghc > ./sync-all get > autoreconf ^ I'm not sure, but shouldn't that be `perl boot' ? > ./configure ^ if you don't have x86_64-solaris ghc yet on your system, you will probably need to cross-compile with --target= param. > Is there somewhere a x86_64-solaris2 binary-dist (for Solaris 10) to try > out first? I haven't tried that yet as my primary target is Solaris 11. 
Karel From austin at well-typed.com Mon Jul 14 13:27:08 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 14 Jul 2014 08:27:08 -0500 Subject: Status updates Message-ID: Hello all, I've been lax on the status updates, but better late than never. :) Here are some things that I've been up to. - 7.8.3 is released! Hooray! - Phabricator can now build code reviews if you submit them. I sent an email about this earlier in the week[1] in case you missed it. Unfortunately it has a rather stupid bug that it doesn't catch stderr properly, but I'll fix that soon. :) - I also spent some time on trying to get Phabricator to post to Trac (when a code review for a ticket is posted), but to no avail yet. Hopefully I'll have this done in the next few days. - I spent a few hours yesterday and on Saturday on the AMP work still, because I missed a few things actually (broken deriving mechanism due to forgetting some changes). I also need to send one more patch to Cabal upstream I believe, but otherwise things look OK. I'll post a new code review today when I've finished. - This morning I've combed some tickets and remilestoning stuff, since there was a *lot* left around for 7.6.2 still! This should all be fixed. I'll keep doing this today and this morning probably. - I want to reorganize the wiki pages around the Git repository, and I have been meaning to do this, but not gotten around to it yet. :( Some other things on my mind: - When I released 7.8.3, I noticed we have a whole bunch of new upstream distribution packages now, but the download page for those is pretty out of date I think.[2] That should probably be updated. - There are patches in the Trac queue I need to get done with. If I've ignored your patch I'm sorry! I'll try to get this done quickly. - Johan reminded me about D4 this morning[3] which I still need to merge upstream after addressing the details in the review. [1]: https://www.haskell.org/pipermail/ghc-devs/2014-July/005509.html [2]: https://www.haskell.org/ghc/download_ghc_7_8_3#distros [3]: https://phabricator.haskell.org/D4 -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From mark.lentczner at gmail.com Mon Jul 14 14:45:51 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Mon, 14 Jul 2014 07:45:51 -0700 Subject: updated OS X 7.8.3 bindist Message-ID: The release 7.8.3 bindist for OS X was flawed: It was not build SplitObjs. That is my fault for not catching it before releasing it. You may demand a beer from me at ICFP as restitution.... I have built a new 7.8.3 bindist for OS X, and tested in 10.9, 10.7, and 10.6(!). It is SplitObjs, and still performs better than the gcc based build. Find it here: ghc-7.8.3-x86_64-apple-darwin-r2.tar.bz2 shasum -a 256: 7f056a48de5158a950e49f208d1e79c6721c2a908db55f7691f7695171c63306 Updated blurb for the release page: This is a bindist for Mac OS X, 10.6 or later. The package requires the command line tools package of Xcode 4 or XCode 5 to be installed. You can find Xcode at http://developer.apple.com/. If you are installing on a gcc based system (gcc --version doesn't not include the word "clang"), then then when installing this, use the command make install CC_CLANG_BACKEND=0). - Mark -------------- next part -------------- An HTML attachment was scrubbed... 
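The AMP work Austin mentions above is the Applicative-Monad Proposal: once it lands, Applicative becomes a superclass of Monad, so every Monad instance also needs Functor and Applicative instances. A minimal sketch of what AMP-ready user code looks like (the Identity type here is just an illustration, not something taken from the patch):

import Control.Applicative (Applicative(..))
import Control.Monad (ap, liftM)

newtype Identity a = Identity { runIdentity :: a }

-- Under AMP the Functor and Applicative instances can be written
-- mechanically in terms of the Monad operations:
instance Functor Identity where
    fmap = liftM

instance Applicative Identity where
    pure  = Identity
    (<*>) = ap

instance Monad Identity where
    return = pure
    Identity x >>= f = f x

Code in this style compiles both before and after the superclass change, which is why it is the usual way suggested for preparing libraries for the transition.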
URL: From simonpj at microsoft.com Mon Jul 14 16:28:04 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 16:28:04 +0000 Subject: Status updates In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042C46C@DB3PRD3001MB020.064d.mgd.msft.net> Cherished GHC devs, GHC has long since exceeded the modest capacities of GHC HQ, even before Simon M's move to Facebook. But in fact Simon's move simply made clearer something that was already true, namely that GHC is *mainly* reliant on the committed support of its developer community (i.e. you). I am hugely grateful for all that you do, as are all Haskell users -- thank you. I'm happy to say that Microsoft Research is continuing to fund Well Typed to offer front-line support for GHC, which currently takes the visible and energetic form of Austin. He and Simon and I have a weekly Skype call to coordinate what we are going to do. However, I feel that we have been less good at communicating with all of you, about what Austin is currently doing, and what we plan to do, when. This has been particular visible recently, when the 7.8 releases have been repeatedly delayed, often with good reason, but we haven't been very good about keeping you in the loop. So henceforth Austin is going to send you a weekly roundup, on Mondays, saying - what he has been up to - what he plans to do this week - what he'd like help with I'm hoping that this will help to keep you all better informed. Perhaps you can do the same when you are actively working on things. Keeping accurate, timely, and friendly communication going among GHC developers is very important. Please yell if it's going wrong, and suggest ways to make it better. Thanks again Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Austin Seipp | Sent: 14 July 2014 14:27 | To: ghc-devs at haskell.org | Subject: Status updates | | Hello all, | | I've been lax on the status updates, but better late than never. :) | Here are some things that I've been up to. | | - 7.8.3 is released! Hooray! | | - Phabricator can now build code reviews if you submit them. I sent an | email about this earlier in the week[1] in case you missed it. | Unfortunately it has a rather stupid bug that it doesn't catch stderr | properly, but I'll fix that soon. :) | | - I also spent some time on trying to get Phabricator to post to Trac | (when a code review for a ticket is posted), but to no avail yet. | Hopefully I'll have this done in the next few days. | | - I spent a few hours yesterday and on Saturday on the AMP work still, | because I missed a few things actually (broken deriving mechanism due | to forgetting some changes). I also need to send one more patch to | Cabal upstream I believe, but otherwise things look OK. | I'll post a new code review today when I've finished. | | - This morning I've combed some tickets and remilestoning stuff, since | there was a *lot* left around for 7.6.2 still! This should all be | fixed. I'll keep doing this today and this morning probably. | | - I want to reorganize the wiki pages around the Git repository, and I | have been meaning to do this, but not gotten around to it yet. :( | | Some other things on my mind: | | - When I released 7.8.3, I noticed we have a whole bunch of new | upstream distribution packages now, but the download page for those is | pretty out of date I think.[2] That should probably be updated. | | - There are patches in the Trac queue I need to get done with. 
If I've | ignored your patch I'm sorry! I'll try to get this done quickly. | | - Johan reminded me about D4 this morning[3] which I still need to | merge upstream after addressing the details in the review. | | [1]: https://www.haskell.org/pipermail/ghc-devs/2014-July/005509.html | | [2]: https://www.haskell.org/ghc/download_ghc_7_8_3#distros | | [3]: https://phabricator.haskell.org/D4 | | -- | Regards, | | Austin Seipp, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jul 14 16:43:12 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 16:43:12 +0000 Subject: T9203 Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042C539@DB3PRD3001MB020.064d.mgd.msft.net> Joachim In commit 194107ea9333c1d9d61abf307db2da6a699847af you reduced the allocation of T9203 by half. Butbut I get allocation of 95747304 on my 64 bit machine, so the test fails. Any idea why? Simon diff --git a/testsuite/tests/perf/should_run/all.T b/testsuite/tests/perf/should_run/all.T index 4fa77a5..e9e7ef3 100644 --- a/testsuite/tests/perf/should_run/all.T +++ b/testsuite/tests/perf/should_run/all.T @@ -371,7 +371,9 @@ test('InlineCloneArrayAlloc', test('T9203', [stats_num_field('bytes allocated', [ (wordsize(32), 50000000, 5) - , (wordsize(64), 95747304, 5) ]), + , (wordsize(64), 42946176, 5) ]), + # previously: 95747304 + # 2014-07-14: 42946176 (amd64/Linux) -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Mon Jul 14 18:41:28 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 14 Jul 2014 14:41:28 -0400 Subject: Phab vs. Trac Message-ID: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> Hi all, Sometimes, the progress on a particular issue is tracked both on Trac and on Phab. What posts should go where? I know Austin is trying to get Trac to be notified when a relevant post happens on Phab -- great. But, if I have a comment, where should I put it? Here is my proposed answer: The Phab reviews are a good place for code-specific commentary/feedback but Trac is better for design issues. A rule of thumb might be to pretend that Phab comments are all forgotten in a month or two, whereas Trac comments are expected to be around in 5 years. Another feature request around Phab/Trac: Is it possible to have a field *at the top of a Trac ticket* that links to the Differential page? Thanks! Richard From mail at joachim-breitner.de Mon Jul 14 18:47:53 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 14 Jul 2014 20:47:53 +0200 Subject: T9203 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042C539@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042C539@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1405363673.2216.23.camel@kirk> Hi, Am Montag, den 14.07.2014, 16:43 +0000 schrieb Simon Peyton Jones: > In commit 194107ea9333c1d9d61abf307db2da6a699847af you reduced the > allocation of T9203 by half. Butbut I get allocation of 95747304 on > my 64 bit machine, so the test fails. no, and and I wish I had. See (and continue on) https://ghc.haskell.org/trac/ghc/ticket/9315 for a discussion of that problem. 
Greetings, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Mon Jul 14 19:25:02 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 19:25:02 +0000 Subject: Phab vs. Trac In-Reply-To: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042C689@DB3PRD3001MB020.064d.mgd.msft.net> | Here is my proposed answer: The Phab reviews are a good place for code- | specific commentary/feedback but Trac is better for design issues. A rule | of thumb might be to pretend that Phab comments are all forgotten in a | month or two, whereas Trac comments are expected to be around in 5 years. Good criterion. I'm littering the Notes in GHC with references to Trac tickets, on the grounds that those tickets will be permanently available. Let's not commit to making the Phab stuff permanently available too. (Of course it may be.) Simon From simonpj at microsoft.com Mon Jul 14 19:28:29 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 19:28:29 +0000 Subject: Phab vs. Trac In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042C689@DB3PRD3001MB020.064d.mgd.msft.net> References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1042C689@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042C8D8@DB3PRD3001MB020.064d.mgd.msft.net> PS: and if we agree this convention can we document it in our Working Conventions pages? S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Peyton Jones | Sent: 14 July 2014 20:25 | To: Richard Eisenberg; ghc-devs at haskell.org | Subject: RE: Phab vs. Trac | | | Here is my proposed answer: The Phab reviews are a good place for code- | | specific commentary/feedback but Trac is better for design issues. A | rule | | of thumb might be to pretend that Phab comments are all forgotten in a | | month or two, whereas Trac comments are expected to be around in 5 | years. | | Good criterion. I'm littering the Notes in GHC with references to Trac | tickets, on the grounds that those tickets will be permanently available. | Let's not commit to making the Phab stuff permanently available too. (Of | course it may be.) | | Simon | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From austin at well-typed.com Mon Jul 14 20:07:22 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 14 Jul 2014 15:07:22 -0500 Subject: Phab vs. Trac In-Reply-To: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> Message-ID: On Mon, Jul 14, 2014 at 1:41 PM, Richard Eisenberg wrote: > Hi all, > > Sometimes, the progress on a particular issue is tracked both on Trac and on Phab. What posts should go where? I know Austin is trying to get Trac to be notified when a relevant post happens on Phab -- great. But, if I have a comment, where should I put it? 
> > Here is my proposed answer: The Phab reviews are a good place for code-specific commentary/feedback but Trac is better for design issues. A rule of thumb might be to pretend that Phab comments are all forgotten in a month or two, whereas Trac comments are expected to be around in 5 years. I think this is the right idea. We're only using Phabricator for code review, which means: - We should only use it to discuss problems pertinent to the implementation or patch someone posts. - Code reviews will always exist, but in practice they are not the most important piece of information. In practice anything requiring a design spec larger than a ticket probably deserves a wiki page and a once-over on the mailing list. > Another feature request around Phab/Trac: Is it possible to have a field *at the top of a Trac ticket* that links to the Differential page? Good idea! This is now done: https://ghc.haskell.org/trac/ghc/ticket/8634 Look at the top - there is a link to 'Phab:D69', which is the differential revision hyperlinked properly. If you use this syntax in Trac now, it will automatically hyperlink revisions. Perhaps in the future we can shorten it to just 'D69' for example. But this syntax works now to hyperlink directly to Phabricator. > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Jul 14 21:37:48 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 21:37:48 +0000 Subject: Phab vs. Trac In-Reply-To: References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042C9EB@DB3PRD3001MB020.064d.mgd.msft.net> | Look at the top - there is a link to 'Phab:D69', which is the | differential revision hyperlinked properly. | | If you use this syntax in Trac now, it will automatically hyperlink | revisions. Perhaps in the future we can shorten it to just 'D69' for | example. But this syntax works now to hyperlink directly to | Phabricator. Is this now part of GHC Trac wiki markup syntax generally? Ie can occur in any wiki text? If so, could it be documented? Do we have a place where we collect all GHC-trac-specific Wiki markup? If not, could we have one. Otherwise how would a new person ever find out? Simon From simonpj at microsoft.com Mon Jul 14 21:47:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 14 Jul 2014 21:47:50 +0000 Subject: how to get ghci to load compiled modules in 7.8? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042CA39@DB3PRD3001MB020.064d.mgd.msft.net> | One thing, how about an addition to the release notes in the | -dynamic-too section saying "By the way, this doesn't actually work, | see https://ghc.haskell.org/trac/ghc/ticket/8736." Not a bad idea. Austin can we conveniently do that retrospectively? From bos at serpentine.com Tue Jul 15 00:57:30 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Mon, 14 Jul 2014 17:57:30 -0700 Subject: Forcing apps to collect GC stats? Message-ID: I spent a bit of time over the weekend trying to figure out how to force the RTS to collect GC statistics, but was unable to do so. I'm currently working on enriching criterion's ability to gather data, among which I'd like to see GC statistics. 
If I try to obtain GC stats using criterion when I'm not running the benchmark app with +RTS -T, I get an exception. Is there a way to allow criterion to forcibly enable stats collection? My efforts to do so have gotten me nowhere. It would be unfortunate if I had to tell users of criterion that they should always run with +RTS -T or add a -rtsopts clause, as they'll simply forget. And while I'm asking, why does GHC not simply collect GC stats by default? Collecting them seems to have zero cost, from what I can see? Thanks! Bryan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.gibiansky at gmail.com Tue Jul 15 02:50:17 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Mon, 14 Jul 2014 19:50:17 -0700 Subject: Building GHC API Documentation? In-Reply-To: References: Message-ID: Any suggestions? I'm still stuck on this, and don't really know what to try next. Andrew On Thu, Jul 10, 2014 at 9:36 PM, Andrew Gibiansky < andrew.gibiansky at gmail.com> wrote: > Hello, > > I am trying to create my first patch, for #9294, where I want to export > some extra things from Parser along with a bit of documentation. However, I > cannot figure out how to regenerate the documentation for the GHC API (not > for the libraries). > > I tried running `make html stage=0 FAST=YES` in `./compiler`, and got the > following helpful message: > > make[2]: *** No rule to make target `html_compiler'. Stop. > make[1]: *** [html_compiler] Error 2 > make: *** [html] Error 2 > > So I'm a bit stumped. How do I regenerate Haddock documentation for the > GHC API? > > Thanks! > Andrew > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue Jul 15 03:07:39 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 14 Jul 2014 23:07:39 -0400 Subject: Building GHC API Documentation? In-Reply-To: References: Message-ID: if you wanna resuse the same source tree, the heavy hammer for doing a rebuild is make maintainer-clean ; make maintainer-clean wipes alll build artifacts so its a clean tree, so everything will be built from scratch there might be a better way, but that sledgehammer should work On Mon, Jul 14, 2014 at 10:50 PM, Andrew Gibiansky < andrew.gibiansky at gmail.com> wrote: > Any suggestions? I'm still stuck on this, and don't really know what to > try next. > > Andrew > > > On Thu, Jul 10, 2014 at 9:36 PM, Andrew Gibiansky < > andrew.gibiansky at gmail.com> wrote: > >> Hello, >> >> I am trying to create my first patch, for #9294, where I want to export >> some extra things from Parser along with a bit of documentation. However, I >> cannot figure out how to regenerate the documentation for the GHC API (not >> for the libraries). >> >> I tried running `make html stage=0 FAST=YES` in `./compiler`, and got the >> following helpful message: >> >> make[2]: *** No rule to make target `html_compiler'. Stop. >> make[1]: *** [html_compiler] Error 2 >> make: *** [html] Error 2 >> >> So I'm a bit stumped. How do I regenerate Haddock documentation for the >> GHC API? >> >> Thanks! >> Andrew >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
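Back on Bryan's question about GC statistics: the numbers come from GHC.Stats in base, and the RTS only fills them in when the program is started with +RTS -T (or linked with -with-rtsopts=-T); as far as I can see there is no supported way to switch collection on afterwards from inside the process. A guarded read avoids the exception he describes. This is only a sketch, not criterion code:

import qualified GHC.Stats as Stats
import System.Mem (performGC)

main :: IO ()
main = do
    enabled <- Stats.getGCStatsEnabled
    if not enabled
      then putStrLn "GC stats are off; re-run with +RTS -T"
      else do
        performGC                        -- make the residency figures current
        s <- Stats.getGCStats
        putStrLn $ "bytes allocated: " ++ show (Stats.bytesAllocated s)
        putStrLn $ "GCs performed:   " ++ show (Stats.numGcs s)
        putStrLn $ "max bytes used:  " ++ show (Stats.maxBytesUsed s)

criterion could use the same getGCStatsEnabled guard to degrade gracefully, reporting GC figures only when the benchmark binary happens to have been run with -T.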
URL: From carter.schonwald at gmail.com Tue Jul 15 03:08:49 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 14 Jul 2014 23:08:49 -0400 Subject: Building GHC API Documentation? In-Reply-To: References: Message-ID: wooops, would need to be make maintainer-clean ; perl boot ; ./configure ; make On Mon, Jul 14, 2014 at 11:07 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > if you wanna resuse the same source tree, the heavy hammer for doing a > rebuild is > > make maintainer-clean ; make > > maintainer-clean wipes alll build artifacts so its a clean tree, so > everything will be built from scratch > > there might be a better way, but that sledgehammer should work > > > On Mon, Jul 14, 2014 at 10:50 PM, Andrew Gibiansky < > andrew.gibiansky at gmail.com> wrote: > >> Any suggestions? I'm still stuck on this, and don't really know what to >> try next. >> >> Andrew >> >> >> On Thu, Jul 10, 2014 at 9:36 PM, Andrew Gibiansky < >> andrew.gibiansky at gmail.com> wrote: >> >>> Hello, >>> >>> I am trying to create my first patch, for #9294, where I want to export >>> some extra things from Parser along with a bit of documentation. However, I >>> cannot figure out how to regenerate the documentation for the GHC API (not >>> for the libraries). >>> >>> I tried running `make html stage=0 FAST=YES` in `./compiler`, and got >>> the following helpful message: >>> >>> make[2]: *** No rule to make target `html_compiler'. Stop. >>> make[1]: *** [html_compiler] Error 2 >>> make: *** [html] Error 2 >>> >>> So I'm a bit stumped. How do I regenerate Haddock documentation for the >>> GHC API? >>> >>> Thanks! >>> Andrew >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Jul 15 03:16:07 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 14 Jul 2014 22:16:07 -0500 Subject: Phab vs. Trac In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042C9EB@DB3PRD3001MB020.064d.mgd.msft.net> References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1042C9EB@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I had no idea about it either; Herbert made me aware of it. There is a mapping of the 'interwiki syntax' we can use here: https://ghc.haskell.org/trac/ghc/wiki/InterMapTxt The 'Phab' syntax is just one of many. I'll add it to the wiki page! On Mon, Jul 14, 2014 at 4:37 PM, Simon Peyton Jones wrote: > | Look at the top - there is a link to 'Phab:D69', which is the > | differential revision hyperlinked properly. > | > | If you use this syntax in Trac now, it will automatically hyperlink > | revisions. Perhaps in the future we can shorten it to just 'D69' for > | example. But this syntax works now to hyperlink directly to > | Phabricator. > > Is this now part of GHC Trac wiki markup syntax generally? Ie can occur in any wiki text? If so, could it be documented? Do we have a place where we collect all GHC-trac-specific Wiki markup? If not, could we have one. > > Otherwise how would a new person ever find out? 
> > Simon > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Tue Jul 15 07:01:24 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 07:01:24 +0000 Subject: Updating Haddock submodule Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> Herbert, Austin I've just made a change to GHC that has a (trivial) knock-on effect in Haddock, so I had to update the submodule. Here is what I did, after consulting Austin. Can I humbly implore you both (or someone) to write down the workflow so that git-na?ve people like me can do this with confidence, rather than (as now) in fear? Below is my draft of the workflow. It seems pretty complicated, and there are three places (in red) where I am unsure what to do. Please in writing the workflow, document every step as I have done below. Thanks! Simon 1. Starting point: ? all changes made in GHC and in utils/haddock. ? validate works 2. cd utils/haddock 3. git stash Keep my changes out of the way 4. git branch -av Keep the output 5. git checkout master Why 'master'? Because I know from talking to Austin that the ghc repo tracks Haddock's master branch. There is no way to get this information without talking to Someone Who Knows. There should be a wiki page that documents it. 6. Check that in the output of step 4, the current branch (which should be detatched-head) is the same commit as origin/master. I have no idea what to do if this isn't the case. 7. git pull Now Haddock's 'master' is up to date 8. git stash pop Apply the changes to the Haddock master branch 9. git add/commit to record and commit the patch as usual 10. git push Phew! At this stage we have updated Haddock. I think. 11. cd ../.. Back into main ghc repo 12. git add/commit to record patch as usual, but ? include utils/haddock in the "files" you add. This will update the submodule pointer in the ghc repo ? include the word "submodule" in the commit message 13. git push At this stage we have updated GHC too 14. ???? to get utils/haddock back into the detached-head state. How do I do this? git submodule update doesn't -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 15 07:08:04 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 07:08:04 +0000 Subject: Phab vs. Trac In-Reply-To: References: <0A4E6774-F42A-47F2-8171-83750DB297D5@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1042C9EB@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042CCB2@DB3PRD3001MB020.064d.mgd.msft.net> | There is a mapping of the 'interwiki syntax' we can use here: | | https://ghc.haskell.org/trac/ghc/wiki/InterMapTxt Interesting. That looks useful. How would one find that page? Is it linked from "Wiki notes" in the left margin box? I don't think so. As to the content * What does "This page is interpreted in a special way by Trac" mean?? And the following text. It's utterly opaque to me. * I believe that the main payload is this: look at the table. If you use Prefix:blah, where Prefix comes from the first column, then you get a link to Site/blah, where Site comes from the second column. Maybe. * What are the dollars? * Why do we need links to a cheese shop? * Are there other useful wiki-isms that are not mentioned here? 
Thanks Simon | -----Original Message----- | From: mad.one at gmail.com [mailto:mad.one at gmail.com] On Behalf Of Austin | Seipp | Sent: 15 July 2014 04:16 | To: Simon Peyton Jones | Cc: Austin Seipp; Richard Eisenberg; ghc-devs at haskell.org | Subject: Re: Phab vs. Trac | | I had no idea about it either; Herbert made me aware of it. | | There is a mapping of the 'interwiki syntax' we can use here: | | https://ghc.haskell.org/trac/ghc/wiki/InterMapTxt | | The 'Phab' syntax is just one of many. | | I'll add it to the wiki page! | | On Mon, Jul 14, 2014 at 4:37 PM, Simon Peyton Jones | wrote: | > | Look at the top - there is a link to 'Phab:D69', which is the | > | differential revision hyperlinked properly. | > | | > | If you use this syntax in Trac now, it will automatically hyperlink | > | revisions. Perhaps in the future we can shorten it to just 'D69' | for | > | example. But this syntax works now to hyperlink directly to | > | Phabricator. | > | > Is this now part of GHC Trac wiki markup syntax generally? Ie can | occur in any wiki text? If so, could it be documented? Do we have a | place where we collect all GHC-trac-specific Wiki markup? If not, could | we have one. | > | > Otherwise how would a new person ever find out? | > | > Simon | > | | | | -- | Regards, | | Austin Seipp, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ From johan.tibell at gmail.com Tue Jul 15 07:49:28 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 15 Jul 2014 09:49:28 +0200 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: This was discussed earlier. We need to stick with the Cabal that ships with the GHC version we're using and thus we need to stick with cabal-install-1.18. On Tue, Jul 15, 2014 at 9:33 AM, Alois Cochard wrote: > Why not directly to 1.20.x? > On Jul 15, 2014 7:59 AM, "Andres L?h" wrote: > >> Hi. >> >> > Check the version list in Release2014.hs >> >> Would it still be possible to bump cabal-install to 1.18.0.5? >> >> Cheers, >> Andres >> _______________________________________________ >> Libraries mailing list >> Libraries at haskell.org >> http://www.haskell.org/mailman/listinfo/libraries >> > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://www.haskell.org/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Tue Jul 15 07:53:25 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 15 Jul 2014 09:53:25 +0200 Subject: Updating Haddock submodule In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201407150953.25423.jan.stolarek@p.lodz.pl> Simon, I'm surprised that you used 'git stash'/'git stash pop'. Why not use a branch and then merge it into master (possibly with rebasing)? That's what I would do. But if for some reason it is necessary to use 'git stash' then it would be good to have that reason documented in the workflow as well. 
Janek From simonpj at microsoft.com Tue Jul 15 08:13:29 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 08:13:29 +0000 Subject: Updating Haddock submodule In-Reply-To: <201407150953.25423.jan.stolarek@p.lodz.pl> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> <201407150953.25423.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042CF34@DB3PRD3001MB020.064d.mgd.msft.net> I used stash to avoid creating and then deleting a branch. No deeper reason. Simon | -----Original Message----- | From: Jan Stolarek [mailto:jan.stolarek at p.lodz.pl] | Sent: 15 July 2014 08:53 | To: ghc-devs at haskell.org | Cc: Simon Peyton Jones; Herbert Valerio Riedel; 'Austin Seipp' | Subject: Re: Updating Haddock submodule | | Simon, | | I'm surprised that you used 'git stash'/'git stash pop'. Why not use a | branch and then merge it into master (possibly with rebasing)? That's | what I would do. But if for some reason it is necessary to use 'git | stash' then it would be good to have that reason documented in the | workflow as well. | | Janek From simonpj at microsoft.com Tue Jul 15 08:38:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 08:38:07 +0000 Subject: Travis now tests ghc directly In-Reply-To: <1405184690.22017.6.camel@kirk> References: <1405184690.22017.6.camel@kirk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net> This is all fantastic, thank you Joachim. But how would a new GHC dev find out this information? Could it please be documented on the wiki? Perhaps a page called Continuous integration and buildbots to cover both Travis and the buildbots? I'll call it the "CI page". How would one find the CI page? I looked at Working on GHC: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions An item in the first bulleted list, pointing to the CI page Status reports: https://ghc.haskell.org/trac/ghc/wiki/Status Has a section called "Nightly builds", which could point instead to the CI page Building guide: https://ghc.haskell.org/trac/ghc/wiki/Building Could have a link to the CI page. That "Nightly builds" heading points to http://haskell.inf.elte.hu/builders/, which is utterly opaque to me. It needs a wiki page explain what all this means. I'm sorry that I keep bleating (daily!) about writing up on the wiki, but it's really key to helping new people feel part of the story and able to contribute. The button on the home page is good too, but a) it only helps if you have some idea what Travis is. Otherwise it's just noise b) it only helps if you know how to interpret the page you get to when you click it, which I don't. c) I never look at the home page. I look at "status" or "building" or whatever from the LH column. The CI page could explain all that. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 12 July 2014 18:05 | To: ghc-devs | Subject: Travis now tests ghc directly | | Hi, | | I just added a .travis.yml file to GHC master. This means that every | push will be validated automatic and for free by travis; you can check | the build status at https://travis-ci.org/ghc/ghc/builds | | It is not a full validation. In particular, | * it skips DPH | * it does not build dynamic libraries and no dynamic executables | * does not build haddock or generate documentation | * only runs fast tests, and no performance tests | | This way we stay under the 50 minuite limit. 
| | Failures are reported to me, and I?ll manually report relevant breakage | to you. It is also configured to mail to ghc-builds at haskell.org, but I | don?t see these messages there. Maybe some mailing list admin needs to | whitelist mails from Travis CI ? | | I?ve also added a link to it from http://ghc.haskell.org/trac/ghc/, | including a nice icon showing the current build status. | | If you have that .travis.yml file in your branch, travis will also test | these. | | My unofficial ghc-complete repository is therefore obsolete, I stopped | updating it. | | Greetings, | Joachim | | | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From jan.stolarek at p.lodz.pl Tue Jul 15 09:11:02 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 15 Jul 2014 11:11:02 +0200 Subject: Core-to-core transformations: current state-of-the-art? Message-ID: <201407151111.02093.jan.stolarek@p.lodz.pl> Hi devs, I'm curious about the current state of core-to-core transformations inside GHC. I know about following papers that cover this subject: 1. "Compilation by Transformation in Non-Strict Functional Languages", a PhD dissertation by Santos (I assume this summarizes earlier papers by Santos and SPJ). 2. "A transformation-based optimiser for Haskell", paper by SPJ and Santos. 3. "Let-floating: moving bindings to give faster programs", paper by SPJ, Partain and Santos. 4. "Modular, Higher-Order Cardinality Analysis in Theory and Practice", paper by Sergey, Vytiniotis and SPJ. Is there any other work that I should be aware of? Are there transformations implemented in GHC that were not described in any paper? Janek PS. I'm aware of the wiki pages that list the GHC papers. From eir at cis.upenn.edu Tue Jul 15 11:36:00 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 15 Jul 2014 07:36:00 -0400 Subject: Building GHC API Documentation? In-Reply-To: References: Message-ID: I've discovered that passing options to `make` doesn't necessarily work all that well. Certain ones work great, but it's not as reliable as I might like. That said, have you tried running the command in the ./ghc subdirectory? (Not the top directory -- it's probably ...../ghc/ghc on your machine.) If you get that error on a clean checkout, you could perhaps post a bug report. Richard On Jul 14, 2014, at 10:50 PM, Andrew Gibiansky wrote: > Any suggestions? I'm still stuck on this, and don't really know what to try next. > > Andrew > > > On Thu, Jul 10, 2014 at 9:36 PM, Andrew Gibiansky wrote: > Hello, > > I am trying to create my first patch, for #9294, where I want to export some extra things from Parser along with a bit of documentation. However, I cannot figure out how to regenerate the documentation for the GHC API (not for the libraries). > > I tried running `make html stage=0 FAST=YES` in `./compiler`, and got the following helpful message: > > make[2]: *** No rule to make target `html_compiler'. Stop. > make[1]: *** [html_compiler] Error 2 > make: *** [html] Error 2 > > So I'm a bit stumped. How do I regenerate Haddock documentation for the GHC API? > > Thanks! > Andrew > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pali.gabor at gmail.com Tue Jul 15 12:02:37 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 15 Jul 2014 14:02:37 +0200 Subject: Travis now tests ghc directly In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net> References: <1405184690.22017.6.camel@kirk> <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 2014-07-15 10:38 GMT+02:00 Simon Peyton Jones : > That "Nightly builds" heading points to http://haskell.inf.elte.hu/builders/, which > is utterly opaque to me. It needs a wiki page explain what all this means. I have added a dedicated wiki page [1] to explain it and have linked it to the corresponding places. Hope it makes things a bit clearer now. > The CI page could explain all that. Feel free to add it to or merge with the CI page, if needed. [1] https://ghc.haskell.org/trac/ghc/wiki/BuilderSummary From simonpj at microsoft.com Tue Jul 15 12:09:55 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 12:09:55 +0000 Subject: Travis now tests ghc directly In-Reply-To: References: <1405184690.22017.6.camel@kirk> <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042D33F@DB3PRD3001MB020.064d.mgd.msft.net> Very excellent thank you! | -----Original Message----- | From: P?li G?bor J?nos [mailto:pali.gabor at gmail.com] | Sent: 15 July 2014 13:03 | To: Simon Peyton Jones | Cc: Joachim Breitner; ghc-devs | Subject: Re: Travis now tests ghc directly | | 2014-07-15 10:38 GMT+02:00 Simon Peyton Jones : | > That "Nightly builds" heading points to | > http://haskell.inf.elte.hu/builders/, which is utterly opaque to me. | It needs a wiki page explain what all this means. | | I have added a dedicated wiki page [1] to explain it and have linked it | to the corresponding places. Hope it makes things a bit clearer now. | | > The CI page could explain all that. | | Feel free to add it to or merge with the CI page, if needed. | | [1] https://ghc.haskell.org/trac/ghc/wiki/BuilderSummary From simonpj at microsoft.com Tue Jul 15 12:15:52 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 12:15:52 +0000 Subject: Core-to-core transformations: current state-of-the-art? In-Reply-To: <201407151111.02093.jan.stolarek@p.lodz.pl> References: <201407151111.02093.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042D371@DB3PRD3001MB020.064d.mgd.msft.net> SpecConstr: http://research.microsoft.com/en-us/um/people/simonpj/papers/spec-constr/index.htm CPR: http://research.microsoft.com/en-us/um/people/simonpj/papers/cpr/index.htm Inliner: http://research.microsoft.com/en-us/um/people/simonpj/papers/inlining/index.htm Short cut deforestation: http://research.microsoft.com/en-us/um/people/simonpj/papers/andy-thesis.ps.gz Unboxed values: http://research.microsoft.com/en-us/um/people/simonpj/papers/unboxed-values.ps.Z RULES: http://research.microsoft.com/en-us/um/people/simonpj/papers/rules.htm | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan | Stolarek | Sent: 15 July 2014 10:11 | To: ghc-devs at haskell.org | Subject: Core-to-core transformations: current state-of-the-art? | | Hi devs, | | I'm curious about the current state of core-to-core transformations | inside GHC. I know about following papers that cover this subject: | | 1. 
"Compilation by Transformation in Non-Strict Functional Languages", | a PhD dissertation by Santos (I assume this summarizes earlier papers | by Santos and SPJ). | 2. "A transformation-based optimiser for Haskell", paper by SPJ and | Santos. | 3. "Let-floating: moving bindings to give faster programs", paper by | SPJ, Partain and Santos. | 4. "Modular, Higher-Order Cardinality Analysis in Theory and Practice", | paper by Sergey, Vytiniotis and SPJ. | | Is there any other work that I should be aware of? Are there | transformations implemented in GHC that were not described in any | paper? | | Janek | | PS. I'm aware of the wiki pages that list the GHC papers. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From karel.gardas at centrum.cz Tue Jul 15 12:19:15 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 15 Jul 2014 14:19:15 +0200 Subject: HEAD build fails on compiling haddoc Message-ID: <53C51C43.2070901@centrum.cz> Hello, I'm trying to compile HEAD on amd64-solaris2 platform, but the build fails with: "inplace/bin/ghc-stage2" -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC -dynamic -H32m -O -hide-all-packages -i -iutils/haddock/driver -iutils/haddock/src -iutils/haddock/haddock-library/vendor/attoparsec-0.10.4.0 -iutils/haddock/haddock-library/src -iutils/haddock/dist/build -iutils/haddock/dist/build/autogen -Iutils/haddock/dist/build -Iutils/haddock/dist/build/autogen -optP-DIN_GHC_TREE -optP-include -optPutils/haddock/dist/build/autogen/cabal_macros.h -package Cabal-1.20.0.1 -package array-0.5.0.0 -package base-4.7.1.0 -package bytestring-0.10.4.0 -package containers-0.5.5.1 -package deepseq-1.3.0.2 -package directory-1.2.1.0 -package filepath-1.3.0.2 -package ghc-7.9.20140715 -package xhtml-3000.2.1 -funbox-strict-fields -Wall -fwarn-tabs -O2 -XHaskell2010 -no-user-package-db -rtsopts -odir utils/haddock/dist/build -hidir utils/haddock/dist/build -stubdir utils/haddock/dist/build -c utils/haddock/src/Haddock/GhcUtils.hs -o utils/haddock/dist/build/Haddock/GhcUtils.dyn_o utils/haddock/src/Haddock/GhcUtils.hs:105:21: Not in scope: data constructor ?TyFamInstEqn? Perhaps you meant ?TyFamInstD? (imported from GHC) utils/haddock/src/Haddock/GhcUtils.hs:105:36: ?tfie_rhs? is not a (visible) constructor field name gmake[1]: *** [utils/haddock/dist/build/Haddock/GhcUtils.dyn_o] Error 1 gmake: *** [all] Error 2 is this just issue on my system or general issue which waits for someone to resolve it? Since I built HEAD successfully just two days ago, this patch may looks suspiciously: commit 9b8ba62991ae22420a0c4486127a3b22ee7f22bd Author: Simon Peyton Jones Date: Tue Jul 15 07:43:55 2014 +0100 Entirely re-jig the handling of default type-family instances (fixes Trac #9063) In looking at Trac #9063 I decided to re-design the default instances for associated type synonyms. Previously it was all jolly complicated, to support generality that no one wanted, and was arguably undesirable. Specifically * The default instance for an associated type can have only type variables on the LHS. (Not type patterns.) * There can be at most one default instances declaration for each associated type. To achieve this I had to do a surprisingly large amount of refactoring of HsSyn, specifically to parameterise HsDecls.TyFamEqn over the type of the LHS patterns. That change in HsDecls has a (trivial) knock-on effect in Haddock, so this commit does a submodule update too. The net result is good though. 
The code is simpler; the language specification is simpler. Happy days. Trac #9263 and #9264 are thereby fixed as well. but I'm not expert so no offense! And if this is a mistake on my side, then I'm really sorry for false alarm. Thanks! Karel From simonpj at microsoft.com Tue Jul 15 12:28:17 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 12:28:17 +0000 Subject: HEAD build fails on compiling haddoc In-Reply-To: <53C51C43.2070901@centrum.cz> References: <53C51C43.2070901@centrum.cz> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042D3EF@DB3PRD3001MB020.064d.mgd.msft.net> I pushed a change to Haddock, and a change to GHC. They should match up. Did you do ./sync-all pull? I may have screwed up. I sent a long message to ghc-devs this morning explaining what I did. Maybe some guru can help. Simon | -----Original Message----- | From: Karel Gardas [mailto:karel.gardas at centrum.cz] | Sent: 15 July 2014 13:19 | To: ghc-devs; Simon Peyton Jones | Subject: HEAD build fails on compiling haddoc | | | Hello, | | I'm trying to compile HEAD on amd64-solaris2 platform, but the build | fails with: | | "inplace/bin/ghc-stage2" -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC | -dynamic -H32m -O -hide-all-packages -i -iutils/haddock/driver | -iutils/haddock/src | -iutils/haddock/haddock-library/vendor/attoparsec-0.10.4.0 | -iutils/haddock/haddock-library/src -iutils/haddock/dist/build - | iutils/haddock/dist/build/autogen -Iutils/haddock/dist/build | -Iutils/haddock/dist/build/autogen -optP-DIN_GHC_TREE -optP-include | -optPutils/haddock/dist/build/autogen/cabal_macros.h -package | Cabal-1.20.0.1 -package array-0.5.0.0 -package base-4.7.1.0 -package | bytestring-0.10.4.0 -package containers-0.5.5.1 -package deepseq- | 1.3.0.2 -package directory-1.2.1.0 -package filepath-1.3.0.2 -package | ghc-7.9.20140715 -package xhtml-3000.2.1 -funbox-strict-fields -Wall | -fwarn-tabs -O2 -XHaskell2010 -no-user-package-db -rtsopts -odir | utils/haddock/dist/build -hidir utils/haddock/dist/build -stubdir | utils/haddock/dist/build -c utils/haddock/src/Haddock/GhcUtils.hs -o | utils/haddock/dist/build/Haddock/GhcUtils.dyn_o | | utils/haddock/src/Haddock/GhcUtils.hs:105:21: | Not in scope: data constructor 'TyFamInstEqn' | Perhaps you meant 'TyFamInstD' (imported from GHC) | | utils/haddock/src/Haddock/GhcUtils.hs:105:36: | 'tfie_rhs' is not a (visible) constructor field name | gmake[1]: *** [utils/haddock/dist/build/Haddock/GhcUtils.dyn_o] Error 1 | gmake: *** [all] Error 2 | | | is this just issue on my system or general issue which waits for | someone to resolve it? | | Since I built HEAD successfully just two days ago, this patch may looks | suspiciously: | | commit 9b8ba62991ae22420a0c4486127a3b22ee7f22bd | Author: Simon Peyton Jones | Date: Tue Jul 15 07:43:55 2014 +0100 | | Entirely re-jig the handling of default type-family instances | (fixes Trac #9063) | | In looking at Trac #9063 I decided to re-design the default | instances for associated type synonyms. Previously it was all | jolly complicated, to support generality that no one wanted, and | was arguably undesirable. | | Specifically | | * The default instance for an associated type can have only | type variables on the LHS. (Not type patterns.) | | * There can be at most one default instances declaration for | each associated type. | | To achieve this I had to do a surprisingly large amount of | refactoring | of HsSyn, specifically to parameterise HsDecls.TyFamEqn over the | type | of the LHS patterns. 
| | That change in HsDecls has a (trivial) knock-on effect in Haddock, | so | this commit does a submodule update too. | | The net result is good though. The code is simpler; the language | specification is simpler. Happy days. | | Trac #9263 and #9264 are thereby fixed as well. | | | but I'm not expert so no offense! And if this is a mistake on my side, | then I'm really sorry for false alarm. | | Thanks! | Karel From eir at cis.upenn.edu Tue Jul 15 12:41:24 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 15 Jul 2014 08:41:24 -0400 Subject: Updating Haddock submodule In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <84A760A7-6373-4158-8EEE-1E0B653AF6A7@cis.upenn.edu> I'd like to humbly second this request... and specifically, that I more often use git in fear rather than confidence, especially when I'm working with ghc. This is perhaps inevitable -- a complicated system requires complicated tools. But, any help here is greatly appreciated. As a very minor point: On Jul 15, 2014, at 3:01 AM, Simon Peyton Jones wrote: > 12. git add/commit to record patch as usual, but > That line confused me. I thought there was a new command `add/commit`! It didn't take too long to unravel the meaning, but I'd like to request that we avoid *any* abbreviations, especially in typewriter font. Thanks to the git-wizards who help us git-gits! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Tue Jul 15 12:54:03 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 15 Jul 2014 14:54:03 +0200 Subject: HEAD build fails on compiling haddoc In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042D3EF@DB3PRD3001MB020.064d.mgd.msft.net> References: <53C51C43.2070901@centrum.cz> <618BE556AADD624C9C918AA5D5911BEF1042D3EF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53C5246B.60504@centrum.cz> Yes, I did both `get' and `pull' as it was really (months) old repo. Anyway, let's see if builders catch this over night. If not, it was just my poor error for which I apologize. Thanks, Karel On 07/15/14 02:28 PM, Simon Peyton Jones wrote: > I pushed a change to Haddock, and a change to GHC. They should match up. Did you do ./sync-all pull? > > I may have screwed up. I sent a long message to ghc-devs this morning explaining what I did. Maybe some guru can help. 
> > Simon > > | -----Original Message----- > | From: Karel Gardas [mailto:karel.gardas at centrum.cz] > | Sent: 15 July 2014 13:19 > | To: ghc-devs; Simon Peyton Jones > | Subject: HEAD build fails on compiling haddoc > | > | > | Hello, > | > | I'm trying to compile HEAD on amd64-solaris2 platform, but the build > | fails with: > | > | "inplace/bin/ghc-stage2" -hisuf dyn_hi -osuf dyn_o -hcsuf dyn_hc -fPIC > | -dynamic -H32m -O -hide-all-packages -i -iutils/haddock/driver > | -iutils/haddock/src > | -iutils/haddock/haddock-library/vendor/attoparsec-0.10.4.0 > | -iutils/haddock/haddock-library/src -iutils/haddock/dist/build - > | iutils/haddock/dist/build/autogen -Iutils/haddock/dist/build > | -Iutils/haddock/dist/build/autogen -optP-DIN_GHC_TREE -optP-include > | -optPutils/haddock/dist/build/autogen/cabal_macros.h -package > | Cabal-1.20.0.1 -package array-0.5.0.0 -package base-4.7.1.0 -package > | bytestring-0.10.4.0 -package containers-0.5.5.1 -package deepseq- > | 1.3.0.2 -package directory-1.2.1.0 -package filepath-1.3.0.2 -package > | ghc-7.9.20140715 -package xhtml-3000.2.1 -funbox-strict-fields -Wall > | -fwarn-tabs -O2 -XHaskell2010 -no-user-package-db -rtsopts -odir > | utils/haddock/dist/build -hidir utils/haddock/dist/build -stubdir > | utils/haddock/dist/build -c utils/haddock/src/Haddock/GhcUtils.hs -o > | utils/haddock/dist/build/Haddock/GhcUtils.dyn_o > | > | utils/haddock/src/Haddock/GhcUtils.hs:105:21: > | Not in scope: data constructor 'TyFamInstEqn' > | Perhaps you meant 'TyFamInstD' (imported from GHC) > | > | utils/haddock/src/Haddock/GhcUtils.hs:105:36: > | 'tfie_rhs' is not a (visible) constructor field name > | gmake[1]: *** [utils/haddock/dist/build/Haddock/GhcUtils.dyn_o] Error 1 > | gmake: *** [all] Error 2 > | > | > | is this just issue on my system or general issue which waits for > | someone to resolve it? > | > | Since I built HEAD successfully just two days ago, this patch may looks > | suspiciously: > | > | commit 9b8ba62991ae22420a0c4486127a3b22ee7f22bd > | Author: Simon Peyton Jones > | Date: Tue Jul 15 07:43:55 2014 +0100 > | > | Entirely re-jig the handling of default type-family instances > | (fixes Trac #9063) > | > | In looking at Trac #9063 I decided to re-design the default > | instances for associated type synonyms. Previously it was all > | jolly complicated, to support generality that no one wanted, and > | was arguably undesirable. > | > | Specifically > | > | * The default instance for an associated type can have only > | type variables on the LHS. (Not type patterns.) > | > | * There can be at most one default instances declaration for > | each associated type. > | > | To achieve this I had to do a surprisingly large amount of > | refactoring > | of HsSyn, specifically to parameterise HsDecls.TyFamEqn over the > | type > | of the LHS patterns. > | > | That change in HsDecls has a (trivial) knock-on effect in Haddock, > | so > | this commit does a submodule update too. > | > | The net result is good though. The code is simpler; the language > | specification is simpler. Happy days. > | > | Trac #9263 and #9264 are thereby fixed as well. > | > | > | but I'm not expert so no offense! And if this is a mistake on my side, > | then I'm really sorry for false alarm. > | > | Thanks! 
> | Karel > From mark.lentczner at gmail.com Tue Jul 15 13:44:45 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 15 Jul 2014 06:44:45 -0700 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: As per comments: Done: - hashable can be bumped to 1.2.2.0. - network can be bumped to 2.4.2.3 - unordered-containers can be bumped to 0.2.4.0 - happy bumped to 1.19.4 - bump cabal-install to 1.18.0.5 ?Unsure: - Not sure if you want QuickCheck 2.7.5 rather than 2.6 -- cc'ing QuickCheck devs: Would they like to weigh in? Not Changed: - attoparsec will remain at version 0.10.4.0, as newer attoparsec depends on the new package scientific - cabal-install will stay at the 1.18 level (not bump to 1.20.*) because it needs to remain in sync with 7.8.3's Cabal package, Johan has been great about back-porting fixes to 1.18 for this reason. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 15 14:40:58 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 14:40:58 +0000 Subject: New source documentation policy for GHC In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042D95F@DB3PRD3001MB020.064d.mgd.msft.net> Dear Johan Great. Could you update the Coding style page? https://ghc.haskell.org/trac/ghc/wiki/Commentary/CodingStyle Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Johan Tibell Sent: 07 July 2014 08:39 To: ghc-devs at haskell.org Subject: ANN: New source documentation policy for GHC Hi all! After some discussion [1] we've decided to require Haddock comments for all top-level entities (i.e. functions, classes, and data types) in new [2] GHC code. If you're writing new code, please try to add at least a sentence of two documenting what the function does and why. When you're reviewing patches, please ask the author to add comments. This policy doesn't replace GHC's use of [Notes], which talk mostly about implementation details, but instead compliments it by making it clearer how to *use* the code. See the original thread [1] for some example comments. We will use Haddock for our comments, in order to make the generated Haddock docs more useful, but we don't require that you use any special markup in the comments themselves or required that you validate that your docs render nicely [3]. We're not adding any technical enforcement [4], to avoid extending the compile/validate cycle, instead rely on social enforcement; please encourage people to write comments and remind them during code review! Cheers, Johan 1. https://www.mail-archive.com/ghc-devs at haskell.org/msg05135.html 2. Documenting old code is of course also much appreciated! 3. This is a pragmatic trade-off to not add too much extra work for frequent contributions. It's easy for anyone to fix up bad markup, so allowing some bad markup to slip in temporarily isn't a big issue. 4. We might try to add lint warning to Phabricator, to serve as an extra reminder to patch authors. -------------- next part -------------- An HTML attachment was scrubbed... 
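As an illustration of the kind of comment the new policy asks for, here is a minimal sketch; the module and function below are hypothetical and not taken from GHC's source:

    module WordCount (wordFrequency) where

    import qualified Data.Map.Strict as M

    -- | Count how often each word occurs in the input.
    --
    -- The input is split on white space only, so punctuation stays
    -- attached to the word it follows.
    wordFrequency :: String -> M.Map String Int
    wordFrequency = M.fromListWith (+) . map (\w -> (w, 1)) . words

A one- or two-sentence Haddock comment like the one above is all the policy requires; implementation details still belong in the usual Notes.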
URL: From Christian.Maeder at dfki.de Tue Jul 15 15:41:09 2014 From: Christian.Maeder at dfki.de (Christian Maeder) Date: Tue, 15 Jul 2014 17:41:09 +0200 Subject: 64bit Solaris was: Re: 7.8.1 plan In-Reply-To: <53C3D82E.4040101@centrum.cz> References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> <53C377B0.4070008@centrum.cz> <53C3D2C2.5080808@dfki.de> <53C3D82E.4040101@centrum.cz> Message-ID: <53C54B95.6040603@dfki.de> I've described my attempt in the ticket: https://ghc.haskell.org/trac/ghc/ticket/8910 In case someone wants to investigate the core dump further I've put the binary-dist of the initial cross-compiler here: http://www.informatik.uni-bremen.de/agbkb/forschung/formal_methods/CoFI/hets/pc-solaris/ghcs/ghc-7.9.20140715-x86_64-unknown-solaris2.tar.bz2 (will be removed after a while as it is not suited for normal use) C. Am 14.07.2014 15:16, schrieb Karel Gardas: > On 07/14/14 02:53 PM, Christian Maeder wrote: >> Hi Karel, >> >> usually I do not build HEAD. My attempt starting to do so failed as >> follows: >> >> git clone git://git.haskell.org/ghc.git >> cd ghc >> ./sync-all get >> autoreconf > > ^ I'm not sure, but shouldn't that be `perl boot' ? > >> ./configure > > ^ if you don't have x86_64-solaris ghc yet on your system, you will > probably need to cross-compile with --target= param. > >> Is there somewhere a x86_64-solaris2 binary-dist (for Solaris 10) to try >> out first? > > I haven't tried that yet as my primary target is Solaris 11. > > Karel > > > > From andrew.gibiansky at gmail.com Tue Jul 15 15:57:33 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Tue, 15 Jul 2014 08:57:33 -0700 Subject: Building GHC API Documentation? In-Reply-To: References: Message-ID: I did try that. I've found that if I run `make` in the toplevel, I can just look at the last command it executes (which is generating all the documentation) and copy/paste all of the huge command to rerun it. It's a terrible solution but works well for me... If you run it from the `ghc` subdirectory, you just get the same error message but with "No rule to make target `html_ghc'. Stop". Anyway, I've temporarily solved this by copying the huge raw command that toplevel `make` produces to build the docs, but I don't know what the right way to do this is. -- Andrew On Tue, Jul 15, 2014 at 4:36 AM, Richard Eisenberg wrote: > I've discovered that passing options to `make` doesn't necessarily work > all that well. Certain ones work great, but it's not as reliable as I might > like. > > That said, have you tried running the command in the ./ghc subdirectory? > (Not the top directory -- it's probably ...../ghc/ghc on your machine.) > > If you get that error on a clean checkout, you could perhaps post a bug > report. > > Richard > > On Jul 14, 2014, at 10:50 PM, Andrew Gibiansky > wrote: > > Any suggestions? I'm still stuck on this, and don't really know what to > try next. > > Andrew > > > On Thu, Jul 10, 2014 at 9:36 PM, Andrew Gibiansky < > andrew.gibiansky at gmail.com> wrote: > >> Hello, >> >> I am trying to create my first patch, for #9294, where I want to export >> some extra things from Parser along with a bit of documentation. However, I >> cannot figure out how to regenerate the documentation for the GHC API (not >> for the libraries). 
>> >> I tried running `make html stage=0 FAST=YES` in `./compiler`, and got the >> following helpful message: >> >> make[2]: *** No rule to make target `html_compiler'. Stop. >> make[1]: *** [html_compiler] Error 2 >> make: *** [html] Error 2 >> >> So I'm a bit stumped. How do I regenerate Haddock documentation for the >> GHC API? >> >> Thanks! >> Andrew >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Jul 15 16:43:09 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 15 Jul 2014 18:43:09 +0200 Subject: New source documentation policy for GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042D95F@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042D95F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Already done. See "Comments on top-level entities". On Tue, Jul 15, 2014 at 4:40 PM, Simon Peyton Jones wrote: > Dear Johan > > > Great. Could you update the Coding style page? > > https://ghc.haskell.org/trac/ghc/wiki/Commentary/CodingStyle > > > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Johan > Tibell > *Sent:* 07 July 2014 08:39 > *To:* ghc-devs at haskell.org > *Subject:* ANN: New source documentation policy for GHC > > > > Hi all! > > > > After some discussion [1] we've decided to require Haddock comments for > all top-level entities (i.e. functions, classes, and data types) in new [2] > GHC code. If you're writing new code, please try to add at least a sentence > of two documenting what the function does and why. When you're reviewing > patches, please ask the author to add comments. > > > > This policy doesn't replace GHC's use of [Notes], which talk mostly about > implementation details, but instead compliments it by making it clearer how > to *use* the code. See the original thread [1] for some example comments. > > > > We will use Haddock for our comments, in order to make the generated > Haddock docs more useful, but we don't require that you use any special > markup in the comments themselves or required that you validate that your > docs render nicely [3]. > > > > We're not adding any technical enforcement [4], to avoid extending the > compile/validate cycle, instead rely on social enforcement; please > encourage people to write comments and remind them during code review! > > > > Cheers, > > Johan > > > > 1. https://www.mail-archive.com/ghc-devs at haskell.org/msg05135.html > > 2. Documenting old code is of course also much appreciated! > > 3. This is a pragmatic trade-off to not add too much extra work for > frequent contributions. It's easy for anyone to fix up bad markup, so > allowing some bad markup to slip in temporarily isn't a big issue. > > 4. We might try to add lint warning to Phabricator, to serve as an extra > reminder to patch authors. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Tue Jul 15 17:56:24 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Tue, 15 Jul 2014 10:56:24 -0700 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 6:44 AM, Mark Lentczner wrote: > ?Unsure: > > - Not sure if you want QuickCheck 2.7.5 rather than 2.6 -- cc'ing > QuickCheck devs: Would they like to weigh in? 
> > I would really like to see 2.7.5 go in, as it has a number of both improvements and backwards incompatible API changes that make it impossible to interoperate with 2.6. Not Changed: > > - attoparsec will remain at version 0.10.4.0, as newer attoparsec > depends on the new package scientific > > attoparsec has some security fixes in recent releases that depend on the scientific package. It would be a very bad idea to continue with 0.10.4.0. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kili at outback.escape.de Tue Jul 15 19:25:43 2014 From: kili at outback.escape.de (Matthias Kilian) Date: Tue, 15 Jul 2014 21:25:43 +0200 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: <20140715192543.GA22908@nutty.outback.escape.de> Hi, On Tue, Jul 15, 2014 at 10:56:24AM -0700, Bryan O'Sullivan wrote: > Not Changed: > > > > - attoparsec will remain at version 0.10.4.0, as newer attoparsec > > depends on the new package scientific > > > attoparsec has some security fixes in recent releases that depend on the > scientific package. It would be a very bad idea to continue with 0.10.4.0. Apart from this, i guess future versions of attoparsec (and other libraries included in the HP) will probably depend on more libraries not yet in the HP. So what's the correct way to deal with this? - Use newer versions for libraries like attoparsec and add additional libraries they depend on? - Or reduce the number of libraries included in the HP? Ciao, Kili From mark.lentczner at gmail.com Tue Jul 15 20:43:29 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 15 Jul 2014 13:43:29 -0700 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 10:56 AM, Bryan O'Sullivan wrote: > attoparsec has some security fixes in recent releases that depend on the > scientific package. It would be a very bad idea to continue with 0.10.4.0. > This is rather late to hear this... given that I plan to Alpha this weekend or sooner. Can you quantify the security fixes? Do they only revolve around floats? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bos at serpentine.com Tue Jul 15 20:59:38 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Tue, 15 Jul 2014 13:59:38 -0700 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 1:43 PM, Mark Lentczner wrote: > This is rather late to hear this... given that I plan to Alpha this > weekend or sooner. > > Can you quantify the security fixes? Do they only revolve around floats? > Well, it was rather late to hear that you weren't going to upgrade attoparsec, too ;-) In brief, an attacker can DoS a user of attoparsec by handing them a floating point number with a sufficiently large exponent (e.g. 1e1000000000). This will cause it to try to create an Integer with the given number of digits, thus possibly OOMing a machine or crashing a process. -------------- next part -------------- An HTML attachment was scrubbed... 
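To make the failure mode concrete, here is a small sketch (not from the thread) of the input Bryan describes, using the ordinary Data.Attoparsec.Text API; on the affected versions the numeric parsers try to build the full Integer before returning:

    {-# LANGUAGE OverloadedStrings #-}
    module Main (main) where

    import Data.Attoparsec.Text (double, parseOnly)
    import Data.Text (Text)

    -- A tiny attacker-supplied string with an enormous exponent.
    hostile :: Text
    hostile = "1e1000000000"

    -- With a fixed attoparsec this returns promptly; with the affected
    -- numeric parsers it attempts to construct 10^1000000000 as an
    -- Integer, which can exhaust memory.
    main :: IO ()
    main = print (parseOnly double hostile)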
URL: 

From mail at joachim-breitner.de  Tue Jul 15 21:34:04 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Tue, 15 Jul 2014 23:34:04 +0200
Subject: Travis now tests ghc directly
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net>
References: <1405184690.22017.6.camel@kirk> <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <1405460044.2694.18.camel@kirk>

Hi,

Am Dienstag, den 15.07.2014, 08:38 +0000 schrieb Simon Peyton Jones:
> This is all fantastic, thank you Joachim. But how would a new GHC dev
> find out this information? Could it please be documented on the wiki?

Yes, I plan to do it once it settles; there are still a few things to
observe and to decide. (E.g. we'd like to make Travis send mails about
breakage to the committer directly, but for that it needs to be more
reliable.)

With other projects in the pipeline, I want to add a new wiki page,
linked from the sidebar, https://ghc.haskell.org/trac/ghc/wiki/Infrastructure
that gives an overview of all the tools that a developer should at least
know exist. It will mostly link to the corresponding wiki page, and
possibly list a responsible person. This will include:

 * git repository
 * trac itself (wiki + issue tracker)
 * mailing lists (note the s! I didn't know about ghc-builds for a long time)
 * Phab
 * Builders
 * Travis
 * The yet-to-be-announced performance dashboard

It could also host links to external tools such as
https://www.ohloh.net/p/ghc

Anyways, please keep bleating about such things!

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
  mail at joachim-breitner.de | http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de | GPG-Key: 0xF0FBF51F
  Debian Developer: nomeata at debian.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL: 

From mark.lentczner at gmail.com  Tue Jul 15 22:05:20 2014
From: mark.lentczner at gmail.com (Mark Lentczner)
Date: Tue, 15 Jul 2014 15:05:20 -0700
Subject: Status of Haskell Platform 2014.2.0.0
In-Reply-To: <20140715192543.GA22908@nutty.outback.escape.de>
References: <20140715192543.GA22908@nutty.outback.escape.de>
Message-ID: 

On Tue, Jul 15, 2014 at 12:25 PM, Matthias Kilian wrote:
> Apart from this, i guess future versions of attoparsec (and other
> libraries included in the HP) will probably depend on more libraries
> not yet in the HP. So what's the correct way to deal with this?
>
> - Use newer versions for libraries like attoparsec and add additional
>   libraries they depend on?
>
> - Or reduce the number of libraries included in the HP?

This is a fundamental tension with having a set of packages like the
platform: the point of a library being in the platform is that it forms
part of the stable base for all other development work. But then a
library in it has become a stable base. If a stable library pulls in a
new dependency on a library that isn't stable... then the depending
library is no longer stable. [Stable here refers to API and expectation
stability... not code quality.]

Libraries in the platform should, ideally, be very conservative about
adding new dependencies. In the past, we have set the bar that a package
will not move forward in version in the platform if it requires a
dependency that is not in the platform. If we relax this, you can see
where it leads: A is admitted into the platform. Later it depends on B,
not in the platform... so we bundle B into the platform as well (as A
needs it). Now, people start to come to expect that B is "there"... and
depend on it... but it hasn't ever signed up to the stability
commitment, and so if it changes radically... the platform, as
practically seen by users, is no longer stable.

We are a very fast moving bunch here in the Haskell world, in part
because both our language and ecosystem of tools have enabled us to
build on their safety. For example: in its short 9-month life
(introduced in Oct 2013), scientific has had 7 major API revisions in 16
releases. That is pretty unprecedented for packages that make up the
stable library sets of other ecosystems. It would be hard to include a
package with that much API motion in a set of stable packages whose API
you could code against... and expect it to be relatively unchanged in a
year.

Imagine if Data.Map's API changed that often.... I certainly don't want
to have to constantly fiddle with my project over a year to keep it
working with Data.Map. And even if I update my platform once every six
months (I know... I know... but *most* years...) then it is *still*
wasted time if I need to fiddle with my project to be sure it still
works with Data.Map. Of course, Data.Map is stable... and this doesn't
happen.

We should have this discussion again after the release. We should
re-evaluate what we expect from the libraries that are in the platform,
and we should set clearer stability metrics. Looking forward to lively
discussions on this over beer in Gothenburg!

- Mark

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail at joachim-breitner.de  Tue Jul 15 22:09:13 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Wed, 16 Jul 2014 00:09:13 +0200
Subject: GhcLibHcOpts: -O or -O2?
Message-ID: <1405462153.2694.26.camel@kirk>

Hi,

I found that some performance test cases fail in one setting and not in
another, and after a while I found the cause: the default settings
(i.e. no mk/build.mk) have GhcLibHcOpts=-O2, while most development
profiles (e.g. devel2) _and the validate settings_ have
GhcLibHcOpts=-O1.

It doesn't make a lot of sense to have performance test cases without
fixing this parameter. And I think we really should be validating the
settings that we actually use in the final build that we release. Hence
three questions:

1. Are the default settings really the settings used when producing the
   final build?
2. Do we want to use GhcLibHcOpts=-O2 or GhcLibHcOpts=-O for releases?
3. Once 2. is answered, may I change the validate settings to that
   value, and update the performance tests to expect the numbers
   produced with that value?

(This was first observed in
https://ghc.haskell.org/trac/ghc/ticket/9315#comment:5 but I wanted to
ask the wider audience of the mailing list about such a design
decision.)

Thanks,
Joachim

-- 
Joachim Breitner
  e-Mail: mail at joachim-breitner.de
  Homepage: http://www.joachim-breitner.de
  Jabber-ID: nomeata at joachim-breitner.de

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mark.lentczner at gmail.com Tue Jul 15 22:14:37 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 15 Jul 2014 15:14:37 -0700 Subject: Status of Haskell Platform 2014.2.0.0 In-Reply-To: References: Message-ID: On Tue, Jul 15, 2014 at 1:59 PM, Bryan O'Sullivan wrote: > Well, it was rather late to hear that you weren't going to upgrade > attoparsec, too ;-) > On Sun, Mar 30, 2014 at 1:06 PM, Mark Lentczner wrote: > SO, In anticipation of releasing a HP shortly (1 month?) after GHC 7.8... > I'd like to get going on nailing down package versions. > > , incLib "attoparsec" "0.10.4.0" > > In brief, an attacker can DoS a user of attoparsec by handing them a > floating point number with a sufficiently large exponent (e.g. > 1e1000000000). This will cause it to try to create an Integer with the > given number of digits, thus possibly OOMing a machine or crashing a > process. > But only if you use the Data.Atooparsec.Text parsers double, number, and rational parser, right? - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 15 22:29:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 15 Jul 2014 22:29:51 +0000 Subject: Windows breakage -- again Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Aargh! The Windows build has broken - again. I can't build GHC on my laptop any more. A clean 'sh validate' finishes as below. What on earth is `___sync_fetch_and_add_1'? Can anyone help? Thanks! Simon "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 -hide-all-packages -i -ilibraries/vector/. -ilibraries/vector/dist-install/build -ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/dist-install/build -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts -Wwarn -odir libraries/vector/dist-install/build -hidir libraries/vector/dist-install/build -stubdir libraries/vector/dist-install/build -c libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load package `ghc-prim' ghc-stage2.exe: C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: unknown symbol `___sync_fetch_and_add_1' libraries/vector/ghc.mk:5: recipe for target 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' failed make[1]: *** [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] Error 1 I -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Jul 15 22:55:22 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 16 Jul 2014 00:55:22 +0200 Subject: Building GHC under Wine? Message-ID: <1405464922.2694.29.camel@kirk> Hi, I feel sorry for Simon always repeatedly stuck with an unbuildable tree, and an idea crossed my mind: Can we build? GHC under Wine? 
If so, is it likely to catch the kind of problems that Simon is getting?
If so, maybe it runs fast enough to be also tested by Travis on every
commit?

(This mail is to find out if people have tried it before. If not, I'll
give it a quick shot.)

Greetings,
Joachim

¹ we surely can use it: http://www.haskell.org/haskellwiki/GHC_under_Wine

-- 
Joachim "nomeata" Breitner
  mail at joachim-breitner.de | http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de | GPG-Key: 0xF0FBF51F
  Debian Developer: nomeata at debian.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL: 

From bos at serpentine.com  Tue Jul 15 23:09:38 2014
From: bos at serpentine.com (Bryan O'Sullivan)
Date: Tue, 15 Jul 2014 16:09:38 -0700
Subject: Status of Haskell Platform 2014.2.0.0
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jul 15, 2014 at 3:14 PM, Mark Lentczner wrote:
> But only if you use the Data.Attoparsec.Text parsers double, number, and
> rational parser, right?
>
Well yes, but that's a rather important "if".

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From chak at cse.unsw.edu.au  Wed Jul 16 00:49:49 2014
From: chak at cse.unsw.edu.au (Manuel M T Chakravarty)
Date: Wed, 16 Jul 2014 10:49:49 +1000
Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding
In-Reply-To: <53BC00DE.9050807@gmail.com>
References: <53BC00DE.9050807@gmail.com>
Message-ID: <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au>

Simon Marlow :
> Austin didn't mention this, so I will: we have a wiki page for style
>
> https://ghc.haskell.org/trac/ghc/wiki/Commentary/CodingStyle
>
> It has a pretty clear set of guidelines for imports/exports, for example
> (that we don't follow as much as we should).
>
> I'd be in favour of changing .lhs files to .hs files, replacing all the
> \begin{code}...\end{code} with -}...{-. As I said in my reply to Simon,
> literate source files aren't providing any real benefit to us, and in
> the name of consistency this would be a positive step.

I'm all in favour of gardening the code base to clean up things like
this. Yes!

> However, the best time for a big stylistic sweep is a time that minimizes
> the number of merges we have to do across these commits. That would be
> just before we branch for a new major release; hopefully at that point
> most of the feature branches will be merged and we're not going to merge
> any further patches into the previous release branch.
>
> I'm less enthusiastic about fixing whitespace things. It's a tough call,
> but I'm guessing that fixing it would cause more pain than not fixing it.

Opinions might differ, and I wouldn't mind at all if the consensus were
to do a whitespace sweep too. I think the tabs should go; it's been
dragging on for a long time now. (The rest is less important.)
> > And I think 80 is a good enough number, especially for Haskell where you can pack a lot into an 80-column line. Phabricator is already flagging up >80 column lines in its linter, which is its default setting. I used to be a 80 column guy, but moved away from that the last years. But you are right, there must be an upper limit and, if >80 is a problem for code reviews, then it?s a reasonable choice. Cheers, Manuel > On 02/07/2014 12:59, Austin Seipp wrote: >> Hi *, >> >> First off, WARNING: BIKESHEDDING AHEAD. >> >> With that out of the way - today on IRC, there was some discussion >> about some stylistic/consistency issues in GHC, and being spurred by >> Johans recent proposal for top-level documentation, I figured perhaps >> we should beat the drum on this issue as well. >> >> The TL;DR is that GHC has a lot of inconsistent style issues, >> including things like: >> >> - Mixing literate haskell with non-literate haskell files >> - Legacy code with tabs and spaces intermixed >> - Related to the last one, trailing whitespace >> - Mixing styles of do notation in different parts of the compiler >> (braces vs no braces) >> - Probably things like indentation mismatches even in the same code >> - Probably many other things I've missed, obvious or not. >> >> These issues by themselves aren't too bad, but together they make the >> coding style for GHC very inconsistent, and this hurts maintainability >> a bit I feel. Furthermore, some of these issues block related >> improvements - for example, >> https://ghc.haskell.org/trac/ghc/ticket/9230 which is probably quite >> reasonable will likely be a bit annoying to implement until GHC itself >> is de-tabbed - we use -Werror during ./validate. This particular issue >> is what started the discussion. >> >> Also, with developers now using arcanist and phabricator, they have >> linting enabled for new patches, but they will often warn about >> surrounding issues, mostly tabs and trailing spaces. This is a bit >> annoying for submitters, and would be fixed by enforcing it. >> >> First attack plan >> ~~~~~~~~~~~~~~~ >> >> So, to start, I'd like to propose that we make some guidelines for >> these kinds of things, and also a plan to fix some of them. To start: >> >> #1) We should really consider going ahead and detabbing the remaining >> files that have them. We already enforce this on new commits with git >> hooks, but by doing this, we can make -fwarn-tabs a default flag and >> then validate with -Werror in the development process. >> >> #2) Similarly, we should kill all the trailing whitespace. (I think >> this is less controversial than #1) >> >> #3) We should most certainly move the remaining files from literate >> haskell to non-literate haskell. Most of the files in the compiler are >> already in this form, and the literate haskell documentation can't be >> used to generate PDFs or anything similar. I suggest we get rid of it. >> More Haskell users use non-literate files anyway. This is probably the >> least controversial. >> >> Merge issues >> ~~~~~~~~~~~~~~~~~ >> >> The reason we haven't done the above three things historically is that >> it makes merge conflicts nastier. A useful approximation suggested on >> IRC might be to detab and remove whitespace for files older than a >> certain date (say, 6 months). >> >> However, in general I'm thinking perhaps it's best to go ahead and >> bite the bullet. maybe. I'd like to know what other people think! If >> we have a vote and most people are in favor of doing this, maybe we >> should really do it. 
>> >> I'd especially like to hear about this if you have an outstanding branch. >> >> Some numbers on these issues >> ~~~~~~~~~~~~~~~~~~~~~~~~ >> >> Here are some quick numbers on where most of the tabs reside, as well >> as the breakdown of literate files vs non-literate files. >> >> NOTE: these tests occurred in the 'compiler' subdirectory of the GHC >> repository, which is where most of the relevant code is. >> >> LITERATE vs NON-LITERATE: >> >> $ find . -type f -iname '*.hs' | wc -l >> 206 >> >> $ find . -type f -iname '*.lhs' | wc -l >> 194 >> >> Non-literate wins by a slim margin! But having the compiler divided in >> half is really not a good thing IMO... >> >> NUMBER OF TABS PER SUBDIRECTORY: >> >> NOTE: this counts the number of lines which have tabs in them. It does >> not count the total number of tab occurrences. >> >> $ for x in `echo */`; do echo -n "$x:\t\t"; find $x -type f -regex >> '.*\.\(lhs\|hs\)' | xargs grep -P '\t' | wc -l; done >> basicTypes/: 919 >> cbits/: 0 >> cmm/: 38 >> codeGen/: 0 >> coreSyn/: 843 >> deSugar/: 545 >> ghci/: 90 >> hsSyn/: 120 >> iface/: 213 >> llvmGen/: 0 >> main/: 8 >> nativeGen/: 1213 >> parser/: 19 >> prelude/: 182 >> profiling/: 39 >> rename/: 188 >> simplCore/: 754 >> simplStg/: 0 >> specialise/: 0 >> stgSyn/: 0 >> stranal/: 336 >> typecheck/: 1171 >> types/: 301 >> utils/: 220 >> vectorise/: 0 >> >> From these numbers, we can see a few useful things at least, primarily >> that there are definitely some places where removing tabs should be >> easy. For example, parser/, profiling/, main/, and cmm/ can all be >> de-tabbed without much of a problem, I think. >> >> nativeGen is very often not touched, so even though it has a *huge* >> amount of tabs, it can likely be de-tabbed as well with minimal >> impact. >> >> Other style issues >> ~~~~~~~~~~~~~~~~~ >> >> We should also discuss some related issues, like what general >> block-width to use for indentations, naming conventions, and other >> stuff. However, I leave this all to you, and perhaps it is best we >> split that part off into a separate thread. Some things I'd like you >> all to consider: >> >> - Block width for indentation >> - Naming conventions (we use camelCase and_underscores_sometimes >> which isReally_confusing) >> - Import/export styles (I think we have some sloppiness here too) >> - Other things worth arguing forever about. >> >> Thoughts on the above issues? >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Wed Jul 16 03:12:02 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 15 Jul 2014 23:12:02 -0400 Subject: performance monitoring idea Message-ID: Hi all, I'm deep into working on #9233 right now, which may stem from various inefficiencies due to roles. I'm making good progress, and I feel badly about introducing these problems. When I was coding this originally, I was thinking "Premature optimization is the root of all evil" and didn't pay too much attention to performance, thinking that a perf test would show me up if I erred. I was wrong. This all got me thinking: could we track some cumulative timing numbers for running the whole testsuite? As a first draft, it could just be the sum of the MUT statistic from +RTS -t. As part of our growing CI support (thanks, Joachim and others!!), we could track this number over time. 
When a commit slows GHC down, we would hopefully see it reflected here and then try to catch these bugs earlier, in an effort to defy Lennart's expectations of GHC slowing down every release. (Sorry, can't find reference to quote at the moment.) This idea seems easy enough to implement. It may not be terribly reliable, but it might just be reliable enough to help catch these problems before releases and to help to discover where they came from. What do we think? Richard From johan.tibell at gmail.com Wed Jul 16 07:37:12 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 09:37:12 +0200 Subject: Windows breakage -- again In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I added some primops about a month ago (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] says: > " Not all operations are supported by all target processors. If a particular operation cannot be implemented on the target processor, a warning will be generated and a call an external function will be generated. The external function will carry the same name as the builtin, with an additional suffix `_n' where n is the size of the data type." I'm a bit surprised by this error for two reasons: * A call to that symbol should only be generated if the CPU doesn't support the atomic instructions. What CPU model does Windows report that you have? * gcc should define such a symbol. For me the following test program compiles: #include uint8_t test(uint8_t* ptr, uint8_t val) { return __sync_fetch_and_add_1(ptr, val); } int main(void) { uint8_t n; return test(&n, 1); } Does that compile for you? Which version of GCC do we end up using on Windows? The reported symbol (___sync_fetch_and_add_1) has three leading underscores, that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates a reference to that symbol? 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones wrote: > Aargh! The Windows build has broken ? again. I can?t build GHC on my > laptop any more. > > A clean ?sh validate? finishes as below. What on earth is > `___sync_fetch_and_add_1'? > > Can anyone help? Thanks! > > Simon > > "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m > -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 > -hide-all-packages -i -ilibraries/vector/. > -ilibraries/vector/dist-install/build > -ilibraries/vector/dist-install/build/autogen > -Ilibraries/vector/dist-install/build > -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include > -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include > -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package > base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package > primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O > -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts > -Wwarn -odir libraries/vector/dist-install/build -hidir > libraries/vector/dist-install/build -stubdir > libraries/vector/dist-install/build -c > libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o > libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o > > Loading package ghc-prim ... linking ... 
ghc-stage2.exe: unable to load > package `ghc-prim' > > ghc-stage2.exe: > C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: > unknown symbol `___sync_fetch_and_add_1' > > libraries/vector/ghc.mk:5: recipe for target > 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' > failed > > make[1]: *** > [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] > Error 1 > > > > I > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 16 07:56:40 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 09:56:40 +0200 Subject: Windows breakage -- again In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you wish. I will try to re-add the primop again after I figure out what's wrong. On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell wrote: > I added some primops about a month ago > (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, > a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual > [1] says: > > > " Not all operations are supported by all target processors. If a > particular operation cannot be implemented on the target processor, a > warning will be generated and a call an external function will be > generated. The external function will carry the same name as the builtin, > with an additional suffix `_n' where n is the size of the data type." > > I'm a bit surprised by this error for two reasons: > > * A call to that symbol should only be generated if the CPU doesn't > support the atomic instructions. What CPU model does Windows report that > you have? > > * gcc should define such a symbol. For me the following test program > compiles: > > #include > > uint8_t test(uint8_t* ptr, uint8_t val) { > return __sync_fetch_and_add_1(ptr, val); > } > > int main(void) { > uint8_t n; > return test(&n, 1); > } > > Does that compile for you? Which version of GCC do we end up using on > Windows? > > The reported symbol (___sync_fetch_and_add_1) has three leading > underscores, that looks weird. Can you compile just > libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates > a reference to that symbol? > > 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html > > > On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> Aargh! The Windows build has broken ? again. I can?t build GHC on my >> laptop any more. >> >> A clean ?sh validate? finishes as below. What on earth is >> `___sync_fetch_and_add_1'? >> >> Can anyone help? Thanks! >> >> Simon >> >> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >> -hide-all-packages -i -ilibraries/vector/. 
>> -ilibraries/vector/dist-install/build >> -ilibraries/vector/dist-install/build/autogen >> -Ilibraries/vector/dist-install/build >> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >> -Wwarn -odir libraries/vector/dist-install/build -hidir >> libraries/vector/dist-install/build -stubdir >> libraries/vector/dist-install/build -c >> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >> >> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >> package `ghc-prim' >> >> ghc-stage2.exe: >> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >> unknown symbol `___sync_fetch_and_add_1' >> >> libraries/vector/ghc.mk:5: recipe for target >> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >> failed >> >> make[1]: *** >> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >> Error 1 >> >> >> >> I >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Wed Jul 16 08:02:21 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 16 Jul 2014 10:02:21 +0200 Subject: Beta Performance dashboard Message-ID: <1405497741.2301.34.camel@kirk> Hi, I guess it?s time to talk about this, especially as Richard just brought it up again... I felt that we were seriously lacking in our grip on performance issues. We don?t even know whether 6.8.3 was better or worse than 6.8.3 or 7.6.4 in terms of nofib, not to speak of the effect of each single commit. I want to change that, so I set up a benchmark monitoring dashboard. You can currently reach it at: http://ghcspeed-nomeata.rhcloud.com/ What does it do? ~~~~~~~~~~~~~~~~ It monitors the repository (master branch only) and builds each commit, complete with the test suite and nofib. The log is saved and analyzed, and some numbers are extracted: * The build time * The test suite summary numbers * Runtime (if >1s), allocations and binary sizes of the nofib benchmarks These are uploaded to the website above, which is powered by codespeed, a general performance dashboard, implemented in Python using Django. Under _Changes_, it provides a report for each commit (changes wrt. to the previous version, and wrt. to 10 revisions earlier, the so-called ?trend?). A summary of these reports is visible on the front-page. The _Timeline_ is a graph for each individual performance number. If there are bumps, you can hopefully find them there! You can also compare to 7.8.3, which is available as a ?baseline?. _Comparison_ will be more useful if we have more tagged revision, or if were benchmarking various options (e.g. -fllvm): Here you can do bar-chart comparisons. Why codespeed? ~~~~~~~~~~~~~~ For a long time I searched for a suitable software product, and one criterion is that it should be open source, rather simple to set up and mostly decoupled from other tools, i.e. 
something that I throw numbers at and which then displays them nicely. While I don?t think codespeed is the best performance dashboard out there (I find http://goperfd.appspot.com/perf a bit better; I wonder how well codespeed scales to even larger numbers of benchmarks and I wish it were more git-aware), it was the easiest to get started with. And thanks to the loose coupling of (1) running the tests to acquire a log, (2) parsing the log to get numbers and (3) putting them on a server, we can hopefully replace it when we come along something better. I was hoping for the Phabricator guys to have something in their tool suite, but doesn?t look like it. How does it work (currently)? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ My office PC is underused (I work on my laptop), so its currently dedicated to it. I have a simple shell script that monitors the repo for new versions. It builds the newest revision and works itself back to the commit where everything was turned into submodules: https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/watch.sh It calls a script that does the actual building: https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/run-speed.sh This produces a log file which should contain all the required numbers somewhere. A second script extracts these numbers (with help of nofib-analyze) and converts them into codespeed compatible JSON files: https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/log2json.pl Finally, a simple invocation to curl uploads them to codespeed: https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/upload.sh So if you want additional benchmarks to be tracked, make sure they are present in the logs and adjust log2json.pl. codespeed will automatically pick up new benchmarks in these logs. Reimplementations in Haskell are also welcome :-) The testsuite is run with VERBOSE=4, so the performance numbers are also shown for failing test cases. So once a test case goes over the limit, you can grep through previous logs try to find the real culprit. I uploaded the logs (so far) to https://github.com/nomeata/ghc-speed-logs (but this is not automated yet, ping me if you need an update on this). What next? ~~~~~~~~~~ Clearly, the current setup is only good enough to evaluate the system. Eventually, I might want to use my office PC again, and the free hosting on openshift is not very powerful. So if we want to keep this setup and make it ?official?, we need find a permanent solution.? This involves: * A dedicated machine to run the benchmarks. This probably shouldn?t be a VM, if we want to keep the noise in the runtime down. * A machine to run the codespeed server. Can be a VM, or even run on any of the system that we have right now. Just needs a database (postgresql preferably) and a webserver supporting WSGI (i.e. any of them). * Maybe a better place to store the logs for public consumption. Also, there are way to improve the system: * As I said, I don?t think codespeed is the best. If we find something better, we can replace it. Since we have all the logs, we can easily fill the new system with the data, or even run both at the same time. * We might want to have more numbers. I am already putting lines-of-code and disk space usage numbers into the logs, but do not parse them yet. * In particular, we might want to put in each performance test case as a benchmark of its own, to easier find commits that degrade (or improve!) performance. I?m not sure how well the web page will handle that. 
* We might want to replace my rather simple watch.sh-script by something more serious. In particular, I imagine that our builder setup could manages this, with a dedicated builder doing the benchmark runs and the builder server scheduling a build for each commit. That?s it for now. Enjoy clicking around! Greetings, Joachim ? I guess that could be considered beta-reduction :-) -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From johan.tibell at gmail.com Wed Jul 16 09:42:02 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 11:42:02 +0200 Subject: Beta Performance dashboard In-Reply-To: <1405497741.2301.34.camel@kirk> References: <1405497741.2301.34.camel@kirk> Message-ID: This is great. I wanted this for a long time. Joachim, could you write a wiki page with step-by-step instructions for how to set this up, detailed enough that e.g. one of our infrastructure volunteers could set it up on another machine. Haskell infrastructure people, do we have a (e.g. Hetzner) machine that we can run this on? On Wed, Jul 16, 2014 at 10:02 AM, Joachim Breitner wrote: > Hi, > > I guess it?s time to talk about this, especially as Richard just brought > it up again... > > I felt that we were seriously lacking in our grip on performance issues. > We don?t even know whether 6.8.3 was better or worse than 6.8.3 or 7.6.4 > in terms of nofib, not to speak of the effect of each single commit. > > I want to change that, so I set up a benchmark monitoring dashboard. You > can currently reach it at: > > http://ghcspeed-nomeata.rhcloud.com/ > > What does it do? > ~~~~~~~~~~~~~~~~ > > It monitors the repository (master branch only) and builds each commit, > complete with the test suite and nofib. The log is saved and analyzed, > and some numbers are extracted: > * The build time > * The test suite summary numbers > * Runtime (if >1s), allocations and binary sizes of the nofib > benchmarks > > These are uploaded to the website above, which is powered by codespeed, > a general performance dashboard, implemented in Python using Django. > > Under _Changes_, it provides a report for each commit (changes wrt. to > the previous version, and wrt. to 10 revisions earlier, the so-called > ?trend?). A summary of these reports is visible on the front-page. > > The _Timeline_ is a graph for each individual performance number. If > there are bumps, you can hopefully find them there! You can also compare > to 7.8.3, which is available as a ?baseline?. > > _Comparison_ will be more useful if we have more tagged revision, or if > were benchmarking various options (e.g. -fllvm): Here you can do > bar-chart comparisons. > > Why codespeed? > ~~~~~~~~~~~~~~ > > For a long time I searched for a suitable software product, and one > criterion is that it should be open source, rather simple to set up and > mostly decoupled from other tools, i.e. something that I throw numbers > at and which then displays them nicely. While I don?t think codespeed is > the best performance dashboard out there (I find > http://goperfd.appspot.com/perf a bit better; I wonder how well > codespeed scales to even larger numbers of benchmarks and I wish it were > more git-aware), it was the easiest to get started with. 
And thanks to > the loose coupling of (1) running the tests to acquire a log, (2) > parsing the log to get numbers and (3) putting them on a server, we can > hopefully replace it when we come along something better. I was hoping > for the Phabricator guys to have something in their tool suite, but > doesn?t look like it. > > How does it work (currently)? > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > My office PC is underused (I work on my laptop), so its currently > dedicated to it. I have a simple shell script that monitors the repo for > new versions. It builds the newest revision and works itself back to the > commit where everything was turned into submodules: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/watch.sh > > It calls a script that does the actual building: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/run-speed.sh > This produces a log file which should contain all the required numbers > somewhere. > > A second script extracts these numbers (with help of nofib-analyze) and > converts them into codespeed compatible JSON files: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/log2json.pl > > Finally, a simple invocation to curl uploads them to codespeed: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/upload.sh > > So if you want additional benchmarks to be tracked, make sure they are > present in the logs and adjust log2json.pl. codespeed will automatically > pick up new benchmarks in these logs. Reimplementations in Haskell are > also welcome :-) > > The testsuite is run with VERBOSE=4, so the performance numbers are also > shown for failing test cases. So once a test case goes over the limit, > you can grep through previous logs try to find the real culprit. I > uploaded the logs (so far) to https://github.com/nomeata/ghc-speed-logs > (but this is not automated yet, ping me if you need an update on this). > > What next? > ~~~~~~~~~~ > > Clearly, the current setup is only good enough to evaluate the system. > Eventually, I might want to use my office PC again, and the free hosting > on openshift is not very powerful. > > So if we want to keep this setup and make it ?official?, we need find a > permanent solution.? This involves: > > * A dedicated machine to run the benchmarks. This probably shouldn?t be > a VM, if we want to keep the noise in the runtime down. > * A machine to run the codespeed server. Can be a VM, or even run on > any of the system that we have right now. Just needs a database > (postgresql preferably) and a webserver supporting WSGI (i.e. any > of them). > * Maybe a better place to store the logs for public consumption. > > Also, there are way to improve the system: > > * As I said, I don?t think codespeed is the best. If we find something > better, we can replace it. Since we have all the logs, we can easily > fill the new system with the data, or even run both at the same time. > * We might want to have more numbers. I am already putting > lines-of-code and disk space usage numbers into the logs, but do not > parse them yet. > * In particular, we might want to put in each performance test case as > a benchmark of its own, to easier find commits that degrade (or > improve!) performance. I?m not sure how well the web page will handle > that. > * We might want to replace my rather simple watch.sh-script by > something more serious. In particular, I imagine that our builder > setup could manages this, with a dedicated builder doing the > benchmark runs and the builder server scheduling a build for each > commit. 
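The upload step described above is just an HTTP POST of JSON records to the codespeed instance. A rough sketch of what an invocation like upload.sh boils down to is below; the endpoint path and field names follow codespeed's usual result-submission API but may differ between codespeed versions, and the commit id, benchmark name and environment shown here are made up for illustration:

    # Sketch only: push one result record to a codespeed instance.
    SPEED_URL="http://ghcspeed-nomeata.rhcloud.com"   # the dashboard above
    JSON='[{"commitid": "abc1234", "branch": "master", "project": "GHC",
            "executable": "ghc", "benchmark": "nofib/runtime/sample-benchmark",
            "environment": "office-pc", "result_value": 1.23}]'
    curl --fail --data-urlencode "json=$JSON" "$SPEED_URL/result/add/json/"

Anything log2json.pl can name and attach a number to can be pushed through the same channel, which is why adding a benchmark only requires touching the log parser.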
> > > That's it for now. > > Enjoy clicking around! > > Greetings, > Joachim > > ¹ I guess that could be considered beta-reduction :-) > > -- > Joachim Breitner > e-Mail: mail at joachim-breitner.de > Homepage: http://www.joachim-breitner.de > Jabber-ID: nomeata at joachim-breitner.de > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs

From fuuzetsu at fuuzetsu.co.uk  Wed Jul 16 11:04:29 2014
From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk)
Date: Wed, 16 Jul 2014 13:04:29 +0200
Subject: Updating Haddock submodule
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net>
References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <53C65C3D.7060803@fuuzetsu.co.uk>

On 07/15/2014 09:01 AM, Simon Peyton Jones wrote:
> Herbert, Austin
> I've just made a change to GHC that has a (trivial) knock-on effect in Haddock, so I had to update the submodule. Here is what I did, after consulting Austin.
> Can I humbly implore you both (or someone) to write down the workflow so that git-naïve people like me can do this with confidence, rather than (as now) in fear?
> Below is my draft of the workflow. It seems pretty complicated, and there are three places (in red) where I am unsure what to do.
> Please, in writing the workflow, document every step as I have done below.
> Thanks!
> Simon
>
> 1. Starting point:
>    - all changes made in GHC and in utils/haddock.
>    - validate works
> 2. cd utils/haddock
> 3. git stash
>    Keep my changes out of the way
> 4. git branch -av
>    Keep the output
> 5. git checkout master
>    Why 'master'? Because I know from talking to Austin that the ghc repo tracks Haddock's master branch. There is no way to get this information without talking to Someone Who Knows. There should be a wiki page that documents it.
> 6. Check that in the output of step 4, the current branch (which should be detached-head) is the same commit as origin/master.
>    I have no idea what to do if this isn't the case.
> 7. git pull
>    Now Haddock's 'master' is up to date
> 8. git stash pop
>    Apply the changes to the Haddock master branch
> 9. git add/commit to record and commit the patch as usual
> 10. git push
>    Phew! At this stage we have updated Haddock. I think.
> 11. cd ../..  Back into main ghc repo
> 12. git add/commit to record patch as usual, but
>    - include utils/haddock in the "files" you add. This will update the submodule pointer in the ghc repo
>    - include the word "submodule" in the commit message
> 13. git push
>    At this stage we have updated GHC too
> 14. ???? to get utils/haddock back into the detached-head state.
>    How do I do this? git submodule update doesn't
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>

Is https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules#MakingchangestoGHCsubmodules what you are looking for?

Basically in step 12, you do your GHC hacking. Git should also show you a one line change with a commit reference which is your updated Haddock. You should commit that as well. Not sure why you have step 14, it seems to me that you should be good after 13.
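For quick reference, the steps drafted above and the wiki page Mateusz links to boil down to roughly the following sketch. The commit messages are illustrative, and the stash/branch bookkeeping from the draft is left out, so treat this as a simplification rather than the official procedure:

    # Inside a GHC tree, after changing both GHC and utils/haddock:
    cd utils/haddock
    git checkout master        # ghc.git tracks Haddock's master branch
    git pull                   # bring master up to date
    git commit -am "Adjust to GHC change"
    git push                   # publish the Haddock commit first
    cd ../..
    git add utils/haddock      # stages the new submodule pointer
    git commit -m "Update Haddock submodule"
    git push

Pushing the Haddock commit before the GHC commit matters: otherwise ghc.git would point to a submodule commit that nobody else can fetch yet.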
At step 14 you will already be pointing to the appropriate commit, it will just happen to be the same as the master branch at that point too so I think you're done. -- Mateusz K. From johan.tibell at gmail.com Wed Jul 16 12:42:52 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 14:42:52 +0200 Subject: Windows breakage -- again In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I won't have time to fix this today and I will be out until Tuesday so I suggest you run git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49 and push the result to origin/master to unblock yourself (and any other GHC devs on Windows?) On Wed, Jul 16, 2014 at 9:56 AM, Johan Tibell wrote: > You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) > and push that to the repo if you wish. I will try to re-add the primop > again after I figure out what's wrong. > > > On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell > wrote: > >> I added some primops about a month ago >> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >> [1] says: >> >> > " Not all operations are supported by all target processors. If a >> particular operation cannot be implemented on the target processor, a >> warning will be generated and a call an external function will be >> generated. The external function will carry the same name as the builtin, >> with an additional suffix `_n' where n is the size of the data type." >> >> I'm a bit surprised by this error for two reasons: >> >> * A call to that symbol should only be generated if the CPU doesn't >> support the atomic instructions. What CPU model does Windows report that >> you have? >> >> * gcc should define such a symbol. For me the following test program >> compiles: >> >> #include >> >> uint8_t test(uint8_t* ptr, uint8_t val) { >> return __sync_fetch_and_add_1(ptr, val); >> } >> >> int main(void) { >> uint8_t n; >> return test(&n, 1); >> } >> >> Does that compile for you? Which version of GCC do we end up using on >> Windows? >> >> The reported symbol (___sync_fetch_and_add_1) has three leading >> underscores, that looks weird. Can you compile just >> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >> a reference to that symbol? >> >> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >> >> >> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> Aargh! The Windows build has broken ? again. I can?t build GHC on my >>> laptop any more. >>> >>> A clean ?sh validate? finishes as below. What on earth is >>> `___sync_fetch_and_add_1'? >>> >>> Can anyone help? Thanks! >>> >>> Simon >>> >>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >>> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>> -hide-all-packages -i -ilibraries/vector/. 
>>> -ilibraries/vector/dist-install/build >>> -ilibraries/vector/dist-install/build/autogen >>> -Ilibraries/vector/dist-install/build >>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>> libraries/vector/dist-install/build -stubdir >>> libraries/vector/dist-install/build -c >>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>> >>> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >>> package `ghc-prim' >>> >>> ghc-stage2.exe: >>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>> unknown symbol `___sync_fetch_and_add_1' >>> >>> libraries/vector/ghc.mk:5: recipe for target >>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>> failed >>> >>> make[1]: *** >>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>> Error 1 >>> >>> >>> >>> I >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Wed Jul 16 12:48:38 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Wed, 16 Jul 2014 14:48:38 +0200 Subject: SV: Windows breakage -- again Message-ID: <53c674b5.e2c8700a.7a69.ffffc657@mx.google.com> I have built ghc on windows after that was added with no issue. I can take a look this evening and see how HEAD works for me. The standard gcc in the tarballs is 4.6.3, which is getting long in the tooth, there is an issue on trac to upgrade it. -- Niklas ----- Ursprungligt meddelande ----- Fr?n: "Johan Tibell" Skickat: ?2014-?07-?16 09:57 Till: "Simon Peyton Jones" Kopia: "ghc-devs at haskell.org" ?mne: Re: Windows breakage -- again You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you wish. I will try to re-add the primop again after I figure out what's wrong. On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell wrote: I added some primops about a month ago (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] says: > " Not all operations are supported by all target processors. If a particular operation cannot be implemented on the target processor, a warning will be generated and a call an external function will be generated. The external function will carry the same name as the builtin, with an additional suffix `_n' where n is the size of the data type." I'm a bit surprised by this error for two reasons: * A call to that symbol should only be generated if the CPU doesn't support the atomic instructions. What CPU model does Windows report that you have? * gcc should define such a symbol. 
For me the following test program compiles:

    #include <stdint.h>

    uint8_t test(uint8_t* ptr, uint8_t val) {
        return __sync_fetch_and_add_1(ptr, val);
    }

    int main(void) {
        uint8_t n;
        return test(&n, 1);
    }

Does that compile for you? Which version of GCC do we end up using on Windows?

The reported symbol (___sync_fetch_and_add_1) has three leading underscores, that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates a reference to that symbol?

1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html

On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones wrote: Aargh! The Windows build has broken - again. I can't build GHC on my laptop any more. A clean "sh validate" finishes as below. What on earth is `___sync_fetch_and_add_1'? Can anyone help? Thanks! Simon

"inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 -hide-all-packages -i -ilibraries/vector/. -ilibraries/vector/dist-install/build -ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/dist-install/build -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts -Wwarn -odir libraries/vector/dist-install/build -hidir libraries/vector/dist-install/build -stubdir libraries/vector/dist-install/build -c libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o

Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load package `ghc-prim'
ghc-stage2.exe: C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: unknown symbol `___sync_fetch_and_add_1'
libraries/vector/ghc.mk:5: recipe for target 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' failed
make[1]: *** [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] Error 1

_______________________________________________
ghc-devs mailing list
ghc-devs at haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs

From eir at cis.upenn.edu  Wed Jul 16 13:54:31 2014
From: eir at cis.upenn.edu (Richard Eisenberg)
Date: Wed, 16 Jul 2014 09:54:31 -0400
Subject: a little phrustrated
Message-ID: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu>

Hi all,

I'm trying to use Phab for the first time this morning, and hitting a fair number of obstacles. I'm writing up my experiences here in order to figure out which of these are my fault, which can be fixed, and which are just things to live with; and also to help others who may go down the same path. If relevant, my diff is at https://phabricator.haskell.org/D73

1) I had some untracked files in a submodule repo. I couldn't find a way to get `arc diff` to ignore these, as they appeared to git to be a change in a tracked file (that is, a change to a submodule, which is considered tracked). `git stash` offered no help, so I had to delete the untracked files. This didn't cause real pain (the files were there in error), but it seems a weakness of the system if I can't make progress otherwise.

2) I develop and build in the same tree.
This means that I often have a few untracked files in the outer, ghc.git repo that someone hasn't yet added to .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to work. I will likely always need `--allow-untracked`, so I looked for a way to get this to be configured automatically. I found https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration , but the details there are sparse. Any advice? 3) The linter picks up and complains about tabs in any of my touched files. I can then write an excuse for every `arc diff` I do, or de-tab the files. In one case, I changed roughly one line in the file (MkCore.lhs) and didn't think it right to de-tab the whole file. Even if I did de-tab the whole file, then my eventual `arc land` would squash the whitespace commit in with my substantive commits, which we expressly don't want. I can imagine a fair amount of git fiddling which would push the whitespace commit to master and then rebase my substantive work on top so that the final, landed, squashed patch would avoid the whitespace changes, but this is painful. And advice on this? Just ignore the lint errors and write silly excuses? Or, is there a way Phab/arc can be smart enough to keep whitespace-only commits (perhaps tagged with the words "whitespace only" in the commit message) separate from other commits when squashing in `arc land`? 4) For better or worse, we don't currently require every file to be tab-free, just some of them. Could this be reflected in Phab's lint settings to avoid the problem in (3)? (Of course, a way to de-tab and keep the history nice would be much better!) 5) In writing my revision description, I had to add reviewers. I assumed these should be comma-separated. This worked and I have updated the Wiki. Please advise if I am wrong. 6) When I looked at my posted revision, it said that the revision was "closed"... and that I had done it! slyfox on IRC informed me that this was likely because I had pushed my commits to a wip/... branch. Is using wip branches with Phab not recommended? Or, can Phab be configured not to close revisions if the commit appears only in wip/... branches? 7) How can I "re-open" my revision? 8) Some time after posting, phaskell tells me that my build failed. OK. This is despite the fact that Travis was able to build the same commit (https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it failed, and am directed to build log F3870 (https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/). I can't view the file online, but instead have to download and then ungzip it. Is it possible to view this file directly? Or not have it be compressed? 9) When I do view the build log, I get no answers. The end of the file comes abruptly in the middle of some haddock output, and the closest thing that looks like an error is about a missing link in a haddock tag `$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the missing link has been there for some time, so I'm dubious that this is the real problem. Are these log files cut off? 10) More of a question than a phrustration: is there a way to link directly to Trac tickets and/or wiki pages from Phab comments? I like the Phab:D73 syntax from Trac to Phab, and thanks, Austin, for adding the field at the top of Trac tickets to Phab revisions. I did fully expect to hit a few bumps on my first use of this new tool, but it got to the point where I thought I should seek some advice before continuing to muddle through -- hence this email. 
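On point 8, until the logs are served uncompressed, the downloaded file can at least be read without unpacking it by hand; assuming the usual gzip-aware tools are installed, something like the following works (the file name here is only a placeholder for whatever Phabricator hands out for the log):

    # Page through the compressed build log directly
    zless F3870-build.log.gz
    # or hunt for the first real error without extracting a copy
    zgrep -n -i -E "error|failed" F3870-build.log.gz | head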
I do hope my tone is not overly negative: I'm *very* appreciative of the work that many of you do to support GHC's infrastructure, and I look forward to being able to get and provide source code feedback through Phab. We just need to work out some kinks, I think! (Any number of these kinks may be solely my fault, of course.) Many thanks, Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 16 14:59:52 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 16:59:52 +0200 Subject: Windows breakage -- again In-Reply-To: <53c674b5.e2c8700a.7a69.ffffc657@mx.google.com> References: <53c674b5.e2c8700a.7a69.ffffc657@mx.google.com> Message-ID: Simon, does a clean checkout of HEAD, with a default mk/build.mk (i.e. no funny linking options), lead to this failure for you? On Wed, Jul 16, 2014 at 2:48 PM, Niklas Larsson wrote: > I have built ghc on windows after that was added with no issue. > > I can take a look this evening and see how HEAD works for me. > > The standard gcc in the tarballs is 4.6.3, which is getting long in the > tooth, there is an issue on trac to upgrade it. > > -- Niklas > > ------------------------------ > Fr?n: Johan Tibell > Skickat: ?2014-?07-?16 09:57 > Till: Simon Peyton Jones > Kopia: ghc-devs at haskell.org > ?mne: Re: Windows breakage -- again > > You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) > and push that to the repo if you wish. I will try to re-add the primop > again after I figure out what's wrong. > > > On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell > wrote: > >> I added some primops about a month ago >> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >> [1] says: >> >> > " Not all operations are supported by all target processors. If a >> particular operation cannot be implemented on the target processor, a >> warning will be generated and a call an external function will be >> generated. The external function will carry the same name as the builtin, >> with an additional suffix `_n' where n is the size of the data type." >> >> I'm a bit surprised by this error for two reasons: >> >> * A call to that symbol should only be generated if the CPU doesn't >> support the atomic instructions. What CPU model does Windows report that >> you have? >> >> * gcc should define such a symbol. For me the following test program >> compiles: >> >> #include >> >> uint8_t test(uint8_t* ptr, uint8_t val) { >> return __sync_fetch_and_add_1(ptr, val); >> } >> >> int main(void) { >> uint8_t n; >> return test(&n, 1); >> } >> >> Does that compile for you? Which version of GCC do we end up using on >> Windows? >> >> The reported symbol (___sync_fetch_and_add_1) has three leading >> underscores, that looks weird. Can you compile just >> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >> a reference to that symbol? >> >> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >> >> >> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> Aargh! The Windows build has broken ? again. I can?t build GHC on >>> my laptop any more. >>> >>> A clean ?sh validate? finishes as below. What on earth is >>> `___sync_fetch_and_add_1'? >>> >>> Can anyone help? Thanks! 
>>> >>> Simon >>> >>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >>> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>> -hide-all-packages -i -ilibraries/vector/. >>> -ilibraries/vector/dist-install/build >>> -ilibraries/vector/dist-install/build/autogen >>> -Ilibraries/vector/dist-install/build >>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>> libraries/vector/dist-install/build -stubdir >>> libraries/vector/dist-install/build -c >>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>> >>> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >>> package `ghc-prim' >>> >>> ghc-stage2.exe: >>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>> unknown symbol `___sync_fetch_and_add_1' >>> >>> libraries/vector/ghc.mk:5: recipe for target >>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>> failed >>> >>> make[1]: *** >>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>> Error 1 >>> >>> >>> >>> I >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jul 16 15:58:12 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 16 Jul 2014 11:58:12 -0400 Subject: Windows breakage -- again In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Huh. We're not generating the atomics assembly directly ourselves? On Wednesday, July 16, 2014, Johan Tibell wrote: > I added some primops about a month ago > (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, > a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual > [1] says: > > > " Not all operations are supported by all target processors. If a > particular operation cannot be implemented on the target processor, a > warning will be generated and a call an external function will be > generated. The external function will carry the same name as the builtin, > with an additional suffix `_n' where n is the size of the data type." > > I'm a bit surprised by this error for two reasons: > > * A call to that symbol should only be generated if the CPU doesn't > support the atomic instructions. What CPU model does Windows report that > you have? > > * gcc should define such a symbol. For me the following test program > compiles: > > #include > > uint8_t test(uint8_t* ptr, uint8_t val) { > return __sync_fetch_and_add_1(ptr, val); > } > > int main(void) { > uint8_t n; > return test(&n, 1); > } > > Does that compile for you? Which version of GCC do we end up using on > Windows? > > The reported symbol (___sync_fetch_and_add_1) has three leading > underscores, that looks weird. 
Can you compile just > libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates > a reference to that symbol? > > 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html > > > On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < > simonpj at microsoft.com > > wrote: > >> Aargh! The Windows build has broken ? again. I can?t build GHC on my >> laptop any more. >> >> A clean ?sh validate? finishes as below. What on earth is >> `___sync_fetch_and_add_1'? >> >> Can anyone help? Thanks! >> >> Simon >> >> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >> -hide-all-packages -i -ilibraries/vector/. >> -ilibraries/vector/dist-install/build >> -ilibraries/vector/dist-install/build/autogen >> -Ilibraries/vector/dist-install/build >> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >> -Wwarn -odir libraries/vector/dist-install/build -hidir >> libraries/vector/dist-install/build -stubdir >> libraries/vector/dist-install/build -c >> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >> >> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >> package `ghc-prim' >> >> ghc-stage2.exe: >> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >> unknown symbol `___sync_fetch_and_add_1' >> >> libraries/vector/ghc.mk:5: recipe for target >> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >> failed >> >> make[1]: *** >> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >> Error 1 >> >> >> >> I >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Wed Jul 16 16:20:41 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 16 Jul 2014 12:20:41 -0400 Subject: another Phab question: Local commits Message-ID: <2D6E196A-3567-4600-B665-3CDA0B5BED91@cis.upenn.edu> Hi all, I wanted to make a comment in a Phab revision and refer to performance of a particular commit. So, I found the "Local commits" section of my page (https://phabricator.haskell.org/D73). But, those SHA's ("Commit" and "Tree") seem not to be related to others. The correct SHA, as I can see it, is the bit after "rGHC" in the "Description" field of "Diff 2". That SHA corresponds to the SHA I have on my own machine. What are the "Local commit" SHA's? Is there a way to refer to a particular commit in a Phab comment? And, while I'm at it, why do I have *2* Diffs here? I submitted only one git commit. One of the diffs (#2 a.k.a. #161 -- there seem to be two numbering schemes) is empty. Many thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johan.tibell at gmail.com Wed Jul 16 16:36:44 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 18:36:44 +0200 Subject: Windows breakage -- again In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: We are on x86(-64), but not on other archs. Simon, which arch are you on? On Wed, Jul 16, 2014 at 5:58 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Huh. We're not generating the atomics assembly directly ourselves? > > > On Wednesday, July 16, 2014, Johan Tibell wrote: > >> I added some primops about a month ago >> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >> [1] says: >> >> > " Not all operations are supported by all target processors. If a >> particular operation cannot be implemented on the target processor, a >> warning will be generated and a call an external function will be >> generated. The external function will carry the same name as the builtin, >> with an additional suffix `_n' where n is the size of the data type." >> >> I'm a bit surprised by this error for two reasons: >> >> * A call to that symbol should only be generated if the CPU doesn't >> support the atomic instructions. What CPU model does Windows report that >> you have? >> >> * gcc should define such a symbol. For me the following test program >> compiles: >> >> #include >> >> uint8_t test(uint8_t* ptr, uint8_t val) { >> return __sync_fetch_and_add_1(ptr, val); >> } >> >> int main(void) { >> uint8_t n; >> return test(&n, 1); >> } >> >> Does that compile for you? Which version of GCC do we end up using on >> Windows? >> >> The reported symbol (___sync_fetch_and_add_1) has three leading >> underscores, that looks weird. Can you compile just >> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >> a reference to that symbol? >> >> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >> >> >> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> Aargh! The Windows build has broken ? again. I can?t build GHC on my >>> laptop any more. >>> >>> A clean ?sh validate? finishes as below. What on earth is >>> `___sync_fetch_and_add_1'? >>> >>> Can anyone help? Thanks! >>> >>> Simon >>> >>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >>> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>> -hide-all-packages -i -ilibraries/vector/. >>> -ilibraries/vector/dist-install/build >>> -ilibraries/vector/dist-install/build/autogen >>> -Ilibraries/vector/dist-install/build >>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>> libraries/vector/dist-install/build -stubdir >>> libraries/vector/dist-install/build -c >>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>> >>> Loading package ghc-prim ... linking ... 
ghc-stage2.exe: unable to load >>> package `ghc-prim' >>> >>> ghc-stage2.exe: >>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>> unknown symbol `___sync_fetch_and_add_1' >>> >>> libraries/vector/ghc.mk:5: recipe for target >>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>> failed >>> >>> make[1]: *** >>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>> Error 1 >>> >>> >>> >>> I >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 16 16:42:19 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 18:42:19 +0200 Subject: Windows breakage -- again In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1042DFFC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Simon M, do I need to add atomic.c in any other place than ghc-prim.cabal? I for example found this file includes/stg/Prim.h that list some other primops. On Wed, Jul 16, 2014 at 6:36 PM, Johan Tibell wrote: > We are on x86(-64), but not on other archs. Simon, which arch are you on? > > > On Wed, Jul 16, 2014 at 5:58 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> Huh. We're not generating the atomics assembly directly ourselves? >> >> >> On Wednesday, July 16, 2014, Johan Tibell wrote: >> >>> I added some primops about a month ago >>> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >>> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >>> [1] says: >>> >>> > " Not all operations are supported by all target processors. If a >>> particular operation cannot be implemented on the target processor, a >>> warning will be generated and a call an external function will be >>> generated. The external function will carry the same name as the builtin, >>> with an additional suffix `_n' where n is the size of the data type." >>> >>> I'm a bit surprised by this error for two reasons: >>> >>> * A call to that symbol should only be generated if the CPU doesn't >>> support the atomic instructions. What CPU model does Windows report that >>> you have? >>> >>> * gcc should define such a symbol. For me the following test program >>> compiles: >>> >>> #include >>> >>> uint8_t test(uint8_t* ptr, uint8_t val) { >>> return __sync_fetch_and_add_1(ptr, val); >>> } >>> >>> int main(void) { >>> uint8_t n; >>> return test(&n, 1); >>> } >>> >>> Does that compile for you? Which version of GCC do we end up using on >>> Windows? >>> >>> The reported symbol (___sync_fetch_and_add_1) has three leading >>> underscores, that looks weird. Can you compile just >>> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >>> a reference to that symbol? >>> >>> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >>> >>> >>> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >>> simonpj at microsoft.com> wrote: >>> >>>> Aargh! The Windows build has broken ? again. I can?t build GHC on >>>> my laptop any more. >>>> >>>> A clean ?sh validate? finishes as below. What on earth is >>>> `___sync_fetch_and_add_1'? >>>> >>>> Can anyone help? Thanks! 
>>>> >>>> Simon >>>> >>>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static >>>> -H32m -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>>> -hide-all-packages -i -ilibraries/vector/. >>>> -ilibraries/vector/dist-install/build >>>> -ilibraries/vector/dist-install/build/autogen >>>> -Ilibraries/vector/dist-install/build >>>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>>> libraries/vector/dist-install/build -stubdir >>>> libraries/vector/dist-install/build -c >>>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>>> >>>> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >>>> package `ghc-prim' >>>> >>>> ghc-stage2.exe: >>>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>>> unknown symbol `___sync_fetch_and_add_1' >>>> >>>> libraries/vector/ghc.mk:5: recipe for target >>>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>>> failed >>>> >>>> make[1]: *** >>>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>>> Error 1 >>>> >>>> >>>> >>>> I >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Wed Jul 16 18:57:44 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Wed, 16 Jul 2014 20:57:44 +0200 Subject: Windows breakage -- again In-Reply-To: <2362520305436346764@unknownmsgid> References: <2362520305436346764@unknownmsgid> Message-ID: I get the same failure when I try to build HEAD. Turns out the error occurs on the 32-bit Windows build, and my successful build was a 64-bit build. My 64-bit build still succeeds. Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. Niklas 2014-07-16 14:48 GMT+02:00 Niklas Larsson : > I have built ghc on windows after that was added with no issue. > > I can take a look this evening and see how HEAD works for me. > > The standard gcc in the tarballs is 4.6.3, which is getting long in the > tooth, there is an issue on trac to upgrade it. > > -- Niklas > > ------------------------------ > Fr?n: Johan Tibell > Skickat: ?2014-?07-?16 09:57 > Till: Simon Peyton Jones > Kopia: ghc-devs at haskell.org > ?mne: Re: Windows breakage -- again > > You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) > and push that to the repo if you wish. I will try to re-add the primop > again after I figure out what's wrong. > > > On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell > wrote: > >> I added some primops about a month ago >> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >> [1] says: >> >> > " Not all operations are supported by all target processors. 
If a >> particular operation cannot be implemented on the target processor, a >> warning will be generated and a call an external function will be >> generated. The external function will carry the same name as the builtin, >> with an additional suffix `_n' where n is the size of the data type." >> >> I'm a bit surprised by this error for two reasons: >> >> * A call to that symbol should only be generated if the CPU doesn't >> support the atomic instructions. What CPU model does Windows report that >> you have? >> >> * gcc should define such a symbol. For me the following test program >> compiles: >> >> #include >> >> uint8_t test(uint8_t* ptr, uint8_t val) { >> return __sync_fetch_and_add_1(ptr, val); >> } >> >> int main(void) { >> uint8_t n; >> return test(&n, 1); >> } >> >> Does that compile for you? Which version of GCC do we end up using on >> Windows? >> >> The reported symbol (___sync_fetch_and_add_1) has three leading >> underscores, that looks weird. Can you compile just >> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >> a reference to that symbol? >> >> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >> >> >> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> Aargh! The Windows build has broken ? again. I can?t build GHC on >>> my laptop any more. >>> >>> A clean ?sh validate? finishes as below. What on earth is >>> `___sync_fetch_and_add_1'? >>> >>> Can anyone help? Thanks! >>> >>> Simon >>> >>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m >>> -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>> -hide-all-packages -i -ilibraries/vector/. >>> -ilibraries/vector/dist-install/build >>> -ilibraries/vector/dist-install/build/autogen >>> -Ilibraries/vector/dist-install/build >>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>> libraries/vector/dist-install/build -stubdir >>> libraries/vector/dist-install/build -c >>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>> >>> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load >>> package `ghc-prim' >>> >>> ghc-stage2.exe: >>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>> unknown symbol `___sync_fetch_and_add_1' >>> >>> libraries/vector/ghc.mk:5: recipe for target >>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>> failed >>> >>> make[1]: *** >>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>> Error 1 >>> >>> >>> >>> I >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Jul 16 19:02:57 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Wed, 16 Jul 2014 20:02:57 +0100 Subject: a little phrustrated In-Reply-To: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: <1405536989-sup-1770@sabre> Hello Richard, > 1) I had some untracked files in a submodule repo. I couldn't find a way to get `arc diff` to ignore these, as they appeared to git to be a change in a tracked file (that is, a change to a submodule, which is considered tracked). `git stash` offered no help, so I had to delete the untracked files. This didn't cause real pain (the files were there in error), but it seems a weakness of the system if I can't make progress otherwise. Yes, this was fairly painful for me as well. One way to make the pain go away and help others out is improve the .gitignore files so these files are not considered tracked. Here is another thread discussing this problem: http://comments.gmane.org/gmane.comp.version-control.git/238173 though I haven't read through it fully yet. > 2) I develop and build in the same tree. This means that I often have a few untracked files in the outer, ghc.git repo that someone hasn't yet added to .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to work. I will likely always need `--allow-untracked`, so I looked for a way to get this to be configured automatically. I found https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration , but the details there are sparse. Any advice? No, but I CR so infrequently manually Xing through the prompt is not bad (also, helps me remember if I *actually* forgot to add a file.) Also, see above :) > 3) The linter picks up and complains about tabs in any of my touched files. I can then write an excuse for every `arc diff` I do, or de-tab the files. In one case, I changed roughly one line in the file (MkCore.lhs) and didn't think it right to de-tab the whole file. Even if I did de-tab the whole file, then my eventual `arc land` would squash the whitespace commit in with my substantive commits, which we expressly don't want. I can imagine a fair amount of git fiddling which would push the whitespace commit to master and then rebase my substantive work on top so that the final, landed, squashed patch would avoid the whitespace changes, but this is painful. And advice on this? Just ignore the lint errors and write silly excuses? Or, is there a way Phab/arc can be smart enough to keep whitespace-only commits (perhaps tagged with the words "whitespace only" in the commit message) separate from other commits when squashing in `arc land`? As far as I know, we have a fair amount of latitude in modifying the linter, so maybe hvr can fix this :) As for the other questions, I'd love to know answers to them too. Cheers, Edward From johan.tibell at gmail.com Wed Jul 16 19:15:00 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 21:15:00 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: I'll be out until Tuesday. I would much appreciate if someone with a Windows setup could help me debug this. These symbols are supposed to be defined by GCC somewhere (as GCC is outputting these when it cannot convert __sync_fetch_and_add into assembly instructions.) On Wed, Jul 16, 2014 at 9:08 PM, Niklas Larsson wrote: > Oh, wait, it isn't the system linker that is complaining, it's the ghc rts > linker. Maybe we just have to tell it to leave those symbols alone. 
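One way to check where the ___sync_fetch_and_add_1 reference comes from, along the lines Johan asks for above, is to compile ghc-prim's atomic.c on the affected 32-bit Windows box and inspect its symbols. The exact include flags the real build passes are omitted here, so treat this as a sketch:

    # In a GHC tree, using the 32-bit mingw toolchain:
    gcc -O -Iincludes -c libraries/ghc-prim/cbits/atomic.c -o /tmp/atomic.o
    # An undefined (U) entry here means GCC emitted a libcall rather
    # than inline atomic instructions for that operation:
    nm /tmp/atomic.o | grep -i sync_fetch_and_add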
> > Niklas > > > 2014-07-16 20:57 GMT+02:00 Niklas Larsson : > > I get the same failure when I try to build HEAD. Turns out the error >> occurs on the 32-bit Windows build, and my successful build was a 64-bit >> build. My 64-bit build still succeeds. >> >> Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. >> >> Niklas >> >> >> >> 2014-07-16 14:48 GMT+02:00 Niklas Larsson : >> >> I have built ghc on windows after that was added with no issue. >>> >>> I can take a look this evening and see how HEAD works for me. >>> >>> The standard gcc in the tarballs is 4.6.3, which is getting long in the >>> tooth, there is an issue on trac to upgrade it. >>> >>> -- Niklas >>> >>> ------------------------------ >>> Fr?n: Johan Tibell >>> Skickat: ?2014-?07-?16 09:57 >>> Till: Simon Peyton Jones >>> Kopia: ghc-devs at haskell.org >>> ?mne: Re: Windows breakage -- again >>> >>> You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) >>> and push that to the repo if you wish. I will try to re-add the primop >>> again after I figure out what's wrong. >>> >>> >>> On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell >>> wrote: >>> >>>> I added some primops about a month ago >>>> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, >>>> a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual >>>> [1] says: >>>> >>>> > " Not all operations are supported by all target processors. If a >>>> particular operation cannot be implemented on the target processor, a >>>> warning will be generated and a call an external function will be >>>> generated. The external function will carry the same name as the builtin, >>>> with an additional suffix `_n' where n is the size of the data type." >>>> >>>> I'm a bit surprised by this error for two reasons: >>>> >>>> * A call to that symbol should only be generated if the CPU doesn't >>>> support the atomic instructions. What CPU model does Windows report that >>>> you have? >>>> >>>> * gcc should define such a symbol. For me the following test program >>>> compiles: >>>> >>>> #include >>>> >>>> uint8_t test(uint8_t* ptr, uint8_t val) { >>>> return __sync_fetch_and_add_1(ptr, val); >>>> } >>>> >>>> int main(void) { >>>> uint8_t n; >>>> return test(&n, 1); >>>> } >>>> >>>> Does that compile for you? Which version of GCC do we end up using on >>>> Windows? >>>> >>>> The reported symbol (___sync_fetch_and_add_1) has three leading >>>> underscores, that looks weird. Can you compile just >>>> libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates >>>> a reference to that symbol? >>>> >>>> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >>>> >>>> >>>> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones < >>>> simonpj at microsoft.com> wrote: >>>> >>>>> Aargh! The Windows build has broken ? again. I can?t build GHC on >>>>> my laptop any more. >>>>> >>>>> A clean ?sh validate? finishes as below. What on earth is >>>>> `___sync_fetch_and_add_1'? >>>>> >>>>> Can anyone help? Thanks! >>>>> >>>>> Simon >>>>> >>>>> "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static >>>>> -H32m -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 >>>>> -hide-all-packages -i -ilibraries/vector/. 
>>>>> -ilibraries/vector/dist-install/build >>>>> -ilibraries/vector/dist-install/build/autogen >>>>> -Ilibraries/vector/dist-install/build >>>>> -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include >>>>> -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include >>>>> -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package >>>>> base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package >>>>> primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O >>>>> -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts >>>>> -Wwarn -odir libraries/vector/dist-install/build -hidir >>>>> libraries/vector/dist-install/build -stubdir >>>>> libraries/vector/dist-install/build -c >>>>> libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o >>>>> libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o >>>>> >>>>> Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to >>>>> load package `ghc-prim' >>>>> >>>>> ghc-stage2.exe: >>>>> C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: >>>>> unknown symbol `___sync_fetch_and_add_1' >>>>> >>>>> libraries/vector/ghc.mk:5: recipe for target >>>>> 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' >>>>> failed >>>>> >>>>> make[1]: *** >>>>> [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] >>>>> Error 1 >>>>> >>>>> >>>>> >>>>> I >>>>> >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Wed Jul 16 19:32:20 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Wed, 16 Jul 2014 21:32:20 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: 2014-07-16 20:57 GMT+02:00 Niklas Larsson : > I get the same failure when I try to build HEAD. Turns out the error occurs > on the 32-bit Windows build, and my successful build was a 64-bit build. My > 64-bit build still succeeds. Perhaps you should specify -march=i586 for the C compiler? I have tried Johan's test program (posted in this thread earlier) with -march=i386 and it generates a function call for __sync_fetch_and_add_1(). (No cmpxchg?) In case of -march=i486, __sync_fetch_and_add_8() is generated (this cpu type does have a cmpxchg8b instruction). I faced this error earlier because the FreeBSD/i386 builder has been failing for a while [1] as it assumes -march=i486 by default. With -march=i586, everything builds just fine. On GNU/Linux, it is not a problem as it builds C code with -march=i686. 
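Páli's -march observation can be reproduced with the small test program quoted earlier in the thread (saved here as sync_test.c, with its #include <stdint.h>); the flags are illustrative and the exact output depends on the GCC version and on the platform's symbol prefix:

    # Old baseline CPU: GCC falls back to an out-of-line helper, so the
    # assembly contains a call to __sync_fetch_and_add_1 (with an extra
    # leading underscore on mingw):
    gcc -S -O -march=i386 sync_test.c -o - | grep sync_fetch_and_add
    # i586 or later: the operation is inlined (lock xadd), no call emitted:
    gcc -S -O -march=i586 sync_test.c -o - | grep sync_fetch_and_add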
[1] http://haskell.inf.elte.hu/builders/freebsd-i386-head/317/10.html

From alain.odea at gmail.com  Wed Jul 16 20:28:01 2014
From: alain.odea at gmail.com (Alain O'Dea)
Date: Wed, 16 Jul 2014 20:28:01 +0000
Subject: 64bit Solaris was: Re: 7.8.1 plan
In-Reply-To: <53C3D82E.4040101@centrum.cz>
References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> <53C377B0.4070008@centrum.cz> <53C3D2C2.5080808@dfki.de> <53C3D82E.4040101@centrum.cz>
Message-ID: <53C6E051.2010305@gmail.com>

On 07/14/2014 01:16 PM, Karel Gardas wrote:
> On 07/14/14 02:53 PM, Christian Maeder wrote:
>> Hi Karel,
>>
>> usually I do not build HEAD. My attempt starting to do so failed
>> as follows:
>>
>> git clone git://git.haskell.org/ghc.git
>> cd ghc
>> ./sync-all get
>> autoreconf
>
> ^ I'm not sure, but shouldn't that be `perl boot' ?
>
>> ./configure
>
> ^ if you don't have x86_64-solaris ghc yet on your system, you
> will probably need to cross-compile with --target= param.
>
>> Is there somewhere a x86_64-solaris2 binary-dist (for Solaris 10)
>> to try out first?
>
> I haven't tried that yet as my primary target is Solaris 11.

If you target Solaris 10 it should work on Solaris 11 as well. That's what the JRE and JDK do. That's only workable if there isn't some specific significant advantage in APIs/libraries unique to Solaris 11.

> Karel

From karel.gardas at centrum.cz  Wed Jul 16 20:33:45 2014
From: karel.gardas at centrum.cz (Karel Gardas)
Date: Wed, 16 Jul 2014 22:33:45 +0200
Subject: 64bit Solaris was: Re: 7.8.1 plan
In-Reply-To: <53C6E051.2010305@gmail.com>
References: <53311BD4.3030008@mail.ru> <533DAACF.3050801@fuuzetsu.co.uk> <53E2835B-4D02-466F-9F14-2E860DFB97F2@gmail.com> <534509A8.3070309@centrum.cz> <4A7CF78C-9146-4801-843F-CC67E2362056@gmail.com> <53464A9C.8070807@dfki.de> <53C377B0.4070008@centrum.cz> <53C3D2C2.5080808@dfki.de> <53C3D82E.4040101@centrum.cz> <53C6E051.2010305@gmail.com>
Message-ID: <53C6E1A9.3070903@centrum.cz>

On 07/16/14 10:28 PM, Alain O'Dea wrote:
>>> Is there somewhere a x86_64-solaris2 binary-dist (for Solaris 10)
>>> to try out first?
>>
>> I haven't tried that yet as my primary target is Solaris 11.
>
> If you target Solaris 10 it should work on Solaris 11 as well. That's
> what the JRE and JDK do. That's only workable if there isn't some
> specific significant advantage in APIs/libraries unique to Solaris 11.

Yes, Solaris 10 -> Solaris 11 works. Thanks to the nice work of the Sun folks, even Solaris 11 -> Solaris 10 sometimes works too.

Anyway, my priority now is to get the Solaris 11/AMD64 builder up'n'running. It looks like it's hit by an inconsistency in #ifdef GHCI symbols in TcSplice.lhs/lhs-boot files [1]. Anyway, this inconsistency shows that something is broken in the build process, which I'll need to track down.
Once this is done, I'll attempt to build Solaris 11 binary dist but with GHC provided static libffi/libgmp libraries. Such build worked on Solaris 10 well in the past and it may be the cheapest route to go that time... Karel [1]: http://haskell.inf.elte.hu/builders/solaris-amd64-head/5/10.html From mail at joachim-breitner.de Wed Jul 16 21:12:18 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 16 Jul 2014 23:12:18 +0200 Subject: Travis now tests ghc directly In-Reply-To: <1405460044.2694.18.camel@kirk> References: <1405184690.22017.6.camel@kirk> <618BE556AADD624C9C918AA5D5911BEF1042CFCD@DB3PRD3001MB020.064d.mgd.msft.net> <1405460044.2694.18.camel@kirk> Message-ID: <1405545138.32401.2.camel@kirk> Hi, Am Dienstag, den 15.07.2014, 23:34 +0200 schrieb Joachim Breitner: > Am Dienstag, den 15.07.2014, 08:38 +0000 schrieb Simon Peyton Jones: > > This is all fantastic, thank you Joachim. But how would a new GHC dev > > find out this information? Could it please be documented on the wiki? > > Yes, I plan to do it once it settles, there are still a view things to > observe and to decide. (E.g. we?d like to make travis send mails about > breakage to the commiter directly, but for that it needs to be more > reliable). Done: https://ghc.haskell.org/trac/ghc/wiki/Travis > With other projects in the pipeline, I want to add a new wiki page, > linked from the sidebar > https://ghc.haskell.org/trac/ghc/wiki/Infrastructure > that gives an overview of all tools that a developer should at least > know that they exist. Also done. It looks a bit scrawny, but that?s ok, it is just a quick overview of what?s there. Herbert, if you find this useful, can you link it from the sidebar? I guess somewhere under ?Documentation? would be good. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From jwlato at gmail.com Wed Jul 16 21:13:09 2014 From: jwlato at gmail.com (John Lato) Date: Wed, 16 Jul 2014 14:13:09 -0700 Subject: a little phrustrated In-Reply-To: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: Speaking more as a bystander than anything else, I'd recommend that the ghc dev team just go ahead and detab files. Yes, merging branches will be painful, but it's a one-time pain. Better that than the ongoing pain mixed tabs and spaces seem to be causing. And merging doesn't even have to be that painful. Just cherry-pick the detab commit into your wip branch, if there are any conflicts resolve them to your branch, detab again and commit. It could be completely automated. John L. On Jul 16, 2014 6:54 AM, "Richard Eisenberg" wrote: > > Hi all, > > I'm trying to use Phab for the first time this morning, and hitting a fair number of obstacles. I'm writing up my experiences here in order to figure out which of these are my fault, which can be fixed, and which are just things to live with; and also to help others who may go down the same path. If relevant, my diff is at https://phabricator.haskell.org/D73 > > 1) I had some untracked files in a submodule repo. 
I couldn't find a way to get `arc diff` to ignore these, as they appeared to git to be a change in a tracked file (that is, a change to a submodule, which is considered tracked). `git stash` offered no help, so I had to delete the untracked files. This didn't cause real pain (the files were there in error), but it seems a weakness of the system if I can't make progress otherwise. > > 2) I develop and build in the same tree. This means that I often have a few untracked files in the outer, ghc.git repo that someone hasn't yet added to .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to work. I will likely always need `--allow-untracked`, so I looked for a way to get this to be configured automatically. I found https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration , but the details there are sparse. Any advice? > > 3) The linter picks up and complains about tabs in any of my touched files. I can then write an excuse for every `arc diff` I do, or de-tab the files. In one case, I changed roughly one line in the file (MkCore.lhs) and didn't think it right to de-tab the whole file. Even if I did de-tab the whole file, then my eventual `arc land` would squash the whitespace commit in with my substantive commits, which we expressly don't want. I can imagine a fair amount of git fiddling which would push the whitespace commit to master and then rebase my substantive work on top so that the final, landed, squashed patch would avoid the whitespace changes, but this is painful. And advice on this? Just ignore the lint errors and write silly excuses? Or, is there a way Phab/arc can be smart enough to keep whitespace-only commits (perhaps tagged with the words "whitespace only" in the commit message) separate from other commits when squashing in `arc land`? > > 4) For better or worse, we don't currently require every file to be tab-free, just some of them. Could this be reflected in Phab's lint settings to avoid the problem in (3)? (Of course, a way to de-tab and keep the history nice would be much better!) > > 5) In writing my revision description, I had to add reviewers. I assumed these should be comma-separated. This worked and I have updated the Wiki. Please advise if I am wrong. > > 6) When I looked at my posted revision, it said that the revision was "closed"... and that I had done it! slyfox on IRC informed me that this was likely because I had pushed my commits to a wip/... branch. Is using wip branches with Phab not recommended? Or, can Phab be configured not to close revisions if the commit appears only in wip/... branches? > > 7) How can I "re-open" my revision? > > 8) Some time after posting, phaskell tells me that my build failed. OK. This is despite the fact that Travis was able to build the same commit ( https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it failed, and am directed to build log F3870 ( https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/). I can't view the file online, but instead have to download and then ungzip it. Is it possible to view this file directly? Or not have it be compressed? > > 9) When I do view the build log, I get no answers. The end of the file comes abruptly in the middle of some haddock output, and the closest thing that looks like an error is about a missing link in a haddock tag `$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the missing link has been there for some time, so I'm dubious that this is the real problem. Are these log files cut off? 
> > 10) More of a question than a phrustration: is there a way to link directly to Trac tickets and/or wiki pages from Phab comments? I like the Phab:D73 syntax from Trac to Phab, and thanks, Austin, for adding the field at the top of Trac tickets to Phab revisions. > > > I did fully expect to hit a few bumps on my first use of this new tool, but it got to the point where I thought I should seek some advice before continuing to muddle through -- hence this email. I do hope my tone is not overly negative: I'm *very* appreciative of the work that many of you do to support GHC's infrastructure, and I look forward to being able to get and provide source code feedback through Phab. We just need to work out some kinks, I think! (Any number of these kinks may be solely my fault, of course.) > > Many thanks, > Richard > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Wed Jul 16 21:43:43 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Wed, 16 Jul 2014 14:43:43 -0700 Subject: updated OS X 7.8.3 bindist In-Reply-To: References: Message-ID: Can we get this out? How can I help? - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 16 21:56:58 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 16 Jul 2014 23:56:58 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: This bug report might shed some light on this: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=47460 I wonder if I've misunderstood the GCC docs at https://gcc.gnu.org/onlinedocs/gcc-4.9.1/gcc/_005f_005fsync-Builtins.html#_005f_005fsync-Builtins. My reading of the docs was that if the platform doesn't support the needed instructions then GCC will generated a call to e.g. __sync_fetch_and_add_1, where that function *is provided by GCC* as a fallback. Now I'm wondering if I'm supposed to supply that fallback. On Wed, Jul 16, 2014 at 9:32 PM, P?li G?bor J?nos wrote: > 2014-07-16 20:57 GMT+02:00 Niklas Larsson : > > I get the same failure when I try to build HEAD. Turns out the error > occurs > > on the 32-bit Windows build, and my successful build was a 64-bit build. > My > > 64-bit build still succeeds. > > Perhaps you should specify -march=i586 for the C compiler? > > I have tried Johan's test program (posted in this thread earlier) with > -march=i386 and it generates a function call for > __sync_fetch_and_add_1(). (No cmpxchg?) In case of -march=i486, > __sync_fetch_and_add_8() is generated (this cpu type does have a > cmpxchg8b instruction). > > I faced this error earlier because the FreeBSD/i386 builder has been > failing for a while [1] as it assumes -march=i486 by default. With > -march=i586, everything builds just fine. On GNU/Linux, it is not a > problem as it builds C code with -march=i686. > > [1] http://haskell.inf.elte.hu/builders/freebsd-i386-head/317/10.html > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From iavor.diatchki at gmail.com Wed Jul 16 22:18:09 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Wed, 16 Jul 2014 15:18:09 -0700 Subject: Updating Haddock submodule In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hello, here is a bit of git theory that may be useful: - a git repository is a graph, where each node is a *commit* (it identifies the state of the entire repo), and the edges keep track of what happened after what. - a *branch* is a pointer to a specific commit, where we plan to extend the graph. When we add a new commit to some branch, we link the commit to wherever the branch points, and then we move the branch pointer to the new commit. Thus, the branches of a repository are the roots of the graph. - *HEAD* points to the "current" commit for a repo. Typically, HEAD points to the same place as one of the branches (we say that we are "on that branch"). When we checkout a branch, we just move HEAD to point to the same place as that branch. If HEAD does not point to the same place as any branch, we say that we are in a "detached HEAD state". Making a commit in a "detached HEAD state" is bad, because this results in a graph node that is not reachable from any of the branches (i.e., roots), and it would get lost if we were to GC the repo. - a *sub-module* remembers to a specific commit of a different repo. When we get a submodule, the different repo is downloaded and its HEAD is moved to point to the specific commit (the command "git submodule update" does this for all registered sub-modules). In this way, we know that we are always using the exact same version of the source code, no matter who was committing what to the different repo. Because of this, the submodule repo will often end up in a "detached HEAD" state. This is why before you commit to it, you have to make sure that you are on a branch (e.g., "master"). To move around the "sub-module" pointer: 1) move the HEAD of the sub-module repo (e.g., haddock) to the commit that you want, and 2) add/commit the sub-module change to the parent repo (e.g., GHC) -Iavor On Tue, Jul 15, 2014 at 12:01 AM, Simon Peyton Jones wrote: > Herbert, Austin > > I?ve just made a change to GHC that has a (trivial) knock-on effect in > Haddock, so I had to update the submodule. Here is what I did, after > consulting Austin. > > Can I humbly implore you both (or someone) to write down the workflow so > that git-na?ve people like me can do this with confidence, rather than (as > now) in fear? > > Below is my draft of the workflow. It seems pretty complicated, and there > are three places (in red) where I am unsure what to do. > > Please in writing the workflow, document every step as I have done below. > > Thanks! > > Simon > > > > 1. *Starting point*: > > ? all changes made in GHC and in utils/haddock. > > ? validate works > > 2. cd utils/haddock > > 3. git stash > Keep my changes out of the way > > 4. git branch ?av > Keep the output > > *5. *git checkout master > *Why ?master?? Because I know from talking to Austin that the ghc repo > tracks Haddock?s master branch. There is no way to get this information > without talking to Someone Who Knows. There should be a wiki page that > documents it.* > > 6. Check that in the output of step 4, the current branch (which > should be detatched-head) is the same commit as origin/master. 
> > *I have no idea what to do if this isn?t the case.* > > 7. git pull > Now Haddock?s ?master? is up to date > > 8. git stash pop > Apply the changes to the Haddock master branch > > 9. git add/commit to record and commit the patch as usual > > 10. git push > > *Phew! At this stage we have updated Haddock. I think.* > > 11. cd ../.. Back into main ghc repo > > 12. git add/commit to record patch as usual, but > > ? include utils/haddock in the ?files? you add. This will update > the submodule pointer in the ghc repo > > ? include the word ?submodule? in the commit message > > 13. git push > > *At this stage we have updated GHC too* > > 14. ???? to get utils/haddock back into the detached-head state. > > *How do I do this? git submodule update doesn?t* > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Wed Jul 16 22:23:58 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 00:23:58 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: 2014-07-16 23:56 GMT+02:00 Johan Tibell : > My reading of the docs was that if the platform doesn't support the needed > instructions then GCC will generated a call to e.g. __sync_fetch_and_add_1, > where that function *is provided by GCC* as a fallback. I guess GCC would expect that somebody else will implement the given functionality by an external function. > Now I'm wondering if I'm supposed to supply that fallback. I was told that you cannot reliably emulate those operations in user mode, without loss in performance. From lukexipd at gmail.com Wed Jul 16 22:32:18 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 16 Jul 2014 15:32:18 -0700 Subject: 7.8.3 source tarball imminent In-Reply-To: References: Message-ID: Hi Austin, Thanks! Should have mentioned that I updated the README and streamlined installation process for my class at BayHac. The new README is here if you'd like to pull it in: https://github.com/ghc-ios/ghc-ios-scripts/blob/master/README.md (raw: https://github.com/ghc-ios/ghc-ios-scripts/raw/master/README.md) Best Luke On Sat, Jul 12, 2014 at 2:54 PM, Austin Seipp wrote: > Thank you Luke. The binaries are uploaded and on the webpage. > > On Fri, Jul 11, 2014 at 9:54 PM, Luke Iannini wrote: > > Hi Austin/all, > > Here are the iOS builds! > > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2 > > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.bz2.sha1 > > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz > > http://tree.is/files/ghc-7.8.3-arm-apple-ios.tar.xz.sha1 > > > > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2 > > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.bz2.sha1 > > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz > > http://tree.is/files/ghc-7.8.3-i386-apple-ios.tar.xz.sha1 > > > > Best > > Luke > > > > > > > > On Thu, Jul 10, 2014 at 6:51 AM, Mark Lentczner < > mark.lentczner at gmail.com> > > wrote: > >> > >> Morning all. I've just kicked off the Mac builds.... > >> > >> Since I have several to do, and they take awhile... and I have to go to > >> work today... I expect these to be ready sometime late tonight PST. 
> >> > >> - Mark > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jul 16 22:34:38 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 17 Jul 2014 00:34:38 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: On Thu, Jul 17, 2014 at 12:23 AM, P?li G?bor J?nos wrote: > 2014-07-16 23:56 GMT+02:00 Johan Tibell : >> My reading of the docs was that if the platform doesn't support the needed >> instructions then GCC will generated a call to e.g. __sync_fetch_and_add_1, >> where that function *is provided by GCC* as a fallback. > > I guess GCC would expect that somebody else will implement the given > functionality by an external function. > >> Now I'm wondering if I'm supposed to supply that fallback. > > I was told that you cannot reliably emulate those operations in user > mode, without loss in performance. I think loss on performance is OK on i386, but not loss of correctness. In my mind we have 3 options: 1. Raise a compile time error saying that these operations aren't supported on your platform (somewhat hard to detect as it depends on GCC). 2. Raise a runtime error saying the same (easy to detect, just raise from the fallback functions). 3. Provide some fallback. I don't know how to do (3), but it seems like the best option if possible. From metaniklas at gmail.com Wed Jul 16 22:47:36 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 00:47:36 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: I think it all works ok if the object files goes the normal way, the gnu linker knows what to do with those symbols when it links with the gcc lib. But for some reason (template haskell?) the ghc linker gets involved when compiling the vector library, and it pukes on those symbols. I hope they can just be done away with at the source, that is to make gcc generate the assembly primitives. GHC should already be built with i686, but does that reach ghc-prim? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Wed Jul 16 22:51:46 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 00:51:46 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: 2014-07-17 0:47 GMT+02:00 Niklas Larsson : > I hope they can just be done away with at the source, that is to make gcc > generate the assembly primitives. GHC should already be built with i686, but > does that reach ghc-prim? This depends on GCC -- if no -march=XXX is explicitly set, I guess it will take its default, which may vary platform by platform. From pedro at onimail.net Thu Jul 17 00:45:48 2014 From: pedro at onimail.net (Pedro Rodrigues) Date: Thu, 17 Jul 2014 01:45:48 +0100 Subject: a little phrustrated In-Reply-To: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: <53C71CBC.5090401@onimail.net> Hi, > 1) I had some untracked files in a submodule repo. 
I couldn't find a way > to get `arc diff` to ignore these, as they appeared to git to be a > change in a tracked file (that is, a change to a submodule, which is > considered tracked). `git stash` offered no help, so I had to delete the > untracked files. This didn't cause real pain (the files were there in > error), but it seems a weakness of the system if I can't make progress > otherwise. Using the `-u` flag with `git stash` will also stash the untracked files. Unfortunately, as far as I'm aware, there's no built-in command to do it recursively for every submodule. > > 2) I develop and build in the same tree. This means that I often have a > few untracked files in the outer, ghc.git repo that someone hasn't yet > added to .gitignore. Thus, I need to say `--allow-untracked` to get `arc > diff` to work. I will likely always need `--allow-untracked`, so I > looked for a way to get this to be configured automatically. I found > https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration , > but the details there are sparse. Any advice? I'm assuming that you don't want to add those files to .gitignore yourself, so would adding them to ".git/info/exclude" solve your problem? (In case you don't know, this file works like .gitignore except that it's not version controlled). Cheers, Pedro Rodrigues
From carter.schonwald at gmail.com Thu Jul 17 00:46:08 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 16 Jul 2014 20:46:08 -0400 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: it looks like the atomics available differ in older gccs https://gcc.gnu.org/onlinedocs/gcc-4.5.4/gcc/Atomic-Builtins.html#Atomic-Builtins https://gcc.gnu.org/onlinedocs/gcc-4.2.4/gcc/Atomic-Builtins.html#Atomic-Builtins On Wed, Jul 16, 2014 at 6:51 PM, Páli Gábor János wrote: > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : > > I hope they can just be done away with at the source, that is to make gcc > > generate the assembly primitives. GHC should already be built with i686, > but > > does that reach ghc-prim? > > This depends on GCC -- if no -march=XXX is explicitly set, I guess it > will take its default, which may vary platform by platform. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From pali.gabor at gmail.com Thu Jul 17 06:21:36 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 08:21:36 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: 2014-07-17 0:51 GMT+02:00 Páli Gábor János : > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >> I hope they can just be done away with at the source, that is to make gcc >> generate the assembly primitives. GHC should already be built with i686, but >> does that reach ghc-prim? > > This depends on GCC -- if no -march=XXX is explicitly set, I guess it > will take its default, which may vary platform by platform. All right, I have finally got a Windows (x64) machine and installed the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as Niklas wrote earlier), where the default -march is i386. You should see this line when trying to compile Johan's test program with the -v flag set: COLLECT_GCC_OPTIONS= ...
'-v' '-mtune=i386' '-march=i386' With the -march=i586 flag explicitly set in the command line, no __sync_fetch_and_add_n() calls are generated. [1] https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 From metaniklas at gmail.com Thu Jul 17 06:33:04 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 08:33:04 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: I just found exactly the same thing! Well, I used i686 instead. Sounds like it's worthwhile to see if this is limited to ghc-prim or if there's more stuff that's built with i386. 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : > 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : > > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : > >> I hope they can just be done away with at the source, that is to make > gcc > >> generate the assembly primitives. GHC should already be built with > i686, but > >> does that reach ghc-prim? > > > > This depends on GCC -- if no -march=XXX is explicitly set, I guess it > > will take its default, which may vary platform by platform. > > All right, I have finally got a Windows (x64) machine and installed > the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as > Niklas wrote earlier), where the default -march is i386. You should > see this line when trying to compile Johan's test program with the -v > flag set: > > COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' > > With the -march=i586 flag explicitly set in the command line, no > __sync_fetch_and_add_n() calls are generated. > > [1] > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Jul 17 06:33:50 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 17 Jul 2014 08:33:50 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: A perhaps silly question, *should* ghc-prim be built with i386 or i686? On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson wrote: > I just found exactly the same thing! Well, I used i686 instead. > > Sounds like it's worthwhile to see if this is limited to ghc-prim or if > there's more stuff that's built with i386. > > > 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : > >> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >> > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >> >> I hope they can just be done away with at the source, that is to make >> >> gcc >> >> generate the assembly primitives. GHC should already be built with >> >> i686, but >> >> does that reach ghc-prim? >> > >> > This depends on GCC -- if no -march=XXX is explicitly set, I guess it >> > will take its default, which may vary platform by platform. >> >> All right, I have finally got a Windows (x64) machine and installed >> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >> Niklas wrote earlier), where the default -march is i386. You should >> see this line when trying to compile Johan's test program with the -v >> flag set: >> >> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >> >> With the -march=i586 flag explicitly set in the command line, no >> __sync_fetch_and_add_n() calls are generated. 
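(A sketch of how to check whether this is limited to ghc-prim or whether more objects were built for plain i386, and what the gcc in PATH defaults to. The ghc-prim object name is taken from the original linker error above; adjust paths to your own tree.)

    # Does the GHCi object for ghc-prim reference the libcall?
    nm libraries/ghc-prim/dist-install/build/HSghc-prim-0.3.1.0.o | grep sync_fetch_and_add
    # List every built object under libraries/ that does:
    for o in $(find libraries -name '*.o'); do
        nm "$o" 2>/dev/null | grep -q sync_fetch_and_add && echo "$o"
    done
    # What the gcc in PATH defaults to:
    gcc -Q --help=target | grep -- '-march='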
>> >> [1] >> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > > From mp2e at archlinux.us Thu Jul 17 06:36:55 2014 From: mp2e at archlinux.us (member MP2E) Date: Wed, 16 Jul 2014 23:36:55 -0700 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: i686 has been out for so long that worrying about i386 support is silly. MinGW-w64 uses i686 by default. Even i586 is *incredibly* conservative. On Jul 16, 2014 11:34 PM, "Johan Tibell" wrote: > A perhaps silly question, *should* ghc-prim be built with i386 or i686? > > On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson > wrote: > > I just found exactly the same thing! Well, I used i686 instead. > > > > Sounds like it's worthwhile to see if this is limited to ghc-prim or if > > there's more stuff that's built with i386. > > > > > > 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : > > > >> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : > >> > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : > >> >> I hope they can just be done away with at the source, that is to make > >> >> gcc > >> >> generate the assembly primitives. GHC should already be built with > >> >> i686, but > >> >> does that reach ghc-prim? > >> > > >> > This depends on GCC -- if no -march=XXX is explicitly set, I guess it > >> > will take its default, which may vary platform by platform. > >> > >> All right, I have finally got a Windows (x64) machine and installed > >> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as > >> Niklas wrote earlier), where the default -march is i386. You should > >> see this line when trying to compile Johan's test program with the -v > >> flag set: > >> > >> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' > >> > >> With the -march=i586 flag explicitly set in the command line, no > >> __sync_fetch_and_add_n() calls are generated. > >> > >> [1] > >> > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Thu Jul 17 06:37:45 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 08:37:45 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: It certainly shouldn't be built with i386, because that is generating code for a processor that lacks all these fancy atomic instructions. The first of them appears on the 486. i686 should be safe, it goes all the way back to Pentium Pro. 2014-07-17 8:33 GMT+02:00 Johan Tibell : > A perhaps silly question, *should* ghc-prim be built with i386 or i686? > > On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson > wrote: > > I just found exactly the same thing! Well, I used i686 instead. > > > > Sounds like it's worthwhile to see if this is limited to ghc-prim or if > > there's more stuff that's built with i386. > > > > > > 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : > > > >> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : > >> > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : > >> >> I hope they can just be done away with at the source, that is to make > >> >> gcc > >> >> generate the assembly primitives. GHC should already be built with > >> >> i686, but > >> >> does that reach ghc-prim? 
> >> > > >> > This depends on GCC -- if no -march=XXX is explicitly set, I guess it > >> > will take its default, which may vary platform by platform. > >> > >> All right, I have finally got a Windows (x64) machine and installed > >> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as > >> Niklas wrote earlier), where the default -march is i386. You should > >> see this line when trying to compile Johan's test program with the -v > >> flag set: > >> > >> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' > >> > >> With the -march=i586 flag explicitly set in the command line, no > >> __sync_fetch_and_add_n() calls are generated. > >> > >> [1] > >> > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Jul 17 06:39:57 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 17 Jul 2014 08:39:57 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: Alright, then which Make file do we need to fix to make sure GCC is called correctly? Also, I remember reading that some time during the 4.x GCC series GCC switched to auto-detecting the arch to be that of the machine being used. Could someone try to just switch GCC to a newer version and see if it automatically stops trying to use i386, leading to Simon's problem? On Thu, Jul 17, 2014 at 8:37 AM, Niklas Larsson wrote: > It certainly shouldn't be built with i386, because that is generating code > for a processor that lacks all these fancy atomic instructions. The first of > them appears on the 486. > > i686 should be safe, it goes all the way back to Pentium Pro. > > > 2014-07-17 8:33 GMT+02:00 Johan Tibell : > >> A perhaps silly question, *should* ghc-prim be built with i386 or i686? >> >> On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson >> wrote: >> > I just found exactly the same thing! Well, I used i686 instead. >> > >> > Sounds like it's worthwhile to see if this is limited to ghc-prim or if >> > there's more stuff that's built with i386. >> > >> > >> > 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : >> > >> >> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >> >> > 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >> >> >> I hope they can just be done away with at the source, that is to >> >> >> make >> >> >> gcc >> >> >> generate the assembly primitives. GHC should already be built with >> >> >> i686, but >> >> >> does that reach ghc-prim? >> >> > >> >> > This depends on GCC -- if no -march=XXX is explicitly set, I guess it >> >> > will take its default, which may vary platform by platform. >> >> >> >> All right, I have finally got a Windows (x64) machine and installed >> >> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >> >> Niklas wrote earlier), where the default -march is i386. You should >> >> see this line when trying to compile Johan's test program with the -v >> >> flag set: >> >> >> >> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >> >> >> >> With the -march=i586 flag explicitly set in the command line, no >> >> __sync_fetch_and_add_n() calls are generated. 
>> >> >> >> [1] >> >> >> >> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 >> > >> > > > From simonpj at microsoft.com Thu Jul 17 06:40:28 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 17 Jul 2014 06:40:28 +0000 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au> References: <53BC00DE.9050807@gmail.com> <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042F0AD@DB3PRD3001MB020.064d.mgd.msft.net> | I used to be a 80 column guy, but moved away from that the last years. | But you are right, there must be an upper limit and, if >80 is a | problem for code reviews, then it's a reasonable choice. As laptop screens have successively more horizontal pixels and fewer vertical pixels, longer lines use screen real estate better. 80 columns now seems a bit narrow to me. 100 would be better. But I'm not going to die for this Simon From simonpj at microsoft.com Thu Jul 17 06:54:32 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 17 Jul 2014 06:54:32 +0000 Subject: Updating Haddock submodule In-Reply-To: <53C65C3D.7060803@fuuzetsu.co.uk> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> <53C65C3D.7060803@fuuzetsu.co.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042F1BD@DB3PRD3001MB020.064d.mgd.msft.net> | Is | https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules | #MakingchangestoGHCsubmodules | what you are looking for? Yes, it's the right kind of thing. I failed to find that, apologies. But - the page is advertised as work in progress - it checks out 'master'. Is that always right? perhaps not (see my comments) - it assumes you have anticipated the need for change before you do them Much more likely is my situation in which I altered my tree and then thought "oh now I have to commit" - it's at the bottom of a long page, most of which is irrelevant if you use ./sync-all (I think??) More generally I think I just need a bit more hand-holding for this process. Examples of expected output at the various stages would be useful. (I didn't include those in my writeup, but I should have.) | Basically in step 12, you do your GHC hacking. Git should also show you | a one line change with a commit reference which is your updated | Haddock. | You should commit that as well. There's an example of the need for an example. How does it display that one line change? What command makes it do so. | Not sure why you have step 14, it seems to me that you should be good | after 13. At step 14 you will already be pointing to the appropriate | commit, it will just happen to be the same as the master branch at that | point too so I think you're done. OK. But the current page clearly states that submodules should be in a detatched-head state, and it plainly isn't at that moment. Perhaps that's fine, but an unequivaocal statement that it's fine would be super helpful. I'm *not* complaining -- just trying to articulate explicitly what would be helpful to me (or other ill-informed people) next time. Thanks SImon | | -- | Mateusz K. 
| _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Thu Jul 17 06:57:26 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 17 Jul 2014 08:57:26 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042F0AD@DB3PRD3001MB020.064d.mgd.msft.net> References: <53BC00DE.9050807@gmail.com> <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF1042F0AD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Jul 17, 2014 at 8:40 AM, Simon Peyton Jones wrote: > | I used to be a 80 column guy, but moved away from that the last years. > | But you are right, there must be an upper limit and, if >80 is a > | problem for code reviews, then it's a reasonable choice. > > As laptop screens have successively more horizontal pixels and fewer vertical pixels, longer lines use screen real estate better. 80 columns now seems a bit narrow to me. 100 would be better. > > But I'm not going to die for this Here we go! * Wider screens let you have several Emacs buffers next to each other. At 80 chars you can have about 2 buffers next to each other on a 13" screen. * The average line length is about 30-35 characters in Python. If it's anything similar in Haskell shorter line length are more efficient, looking how much of the lines times columns space is filled with characters. * The eye has trouble traveling back to the next line if lines get too long (at least when reading prose). Research says around 60-70 characters is optimal, if I recall correctly. From pali.gabor at gmail.com Thu Jul 17 07:05:43 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 09:05:43 +0200 Subject: The build is broken? 
(Haddock parse error) Message-ID: Hello there, I have just found the builds are failing due to this (see [1] for the full log): "/usr/home/ghc-builder/work/builder/tempbuild/build/inplace/bin/haddock" --odir="compiler/stage2/doc/html/ghc" --no-tmp-comp-dir --dump-interface=compiler/stage2/doc/html/ghc/ghc.haddock --html --hoogle --title="ghc-7.9.20140717: The GHC API" --prologue="compiler/stage2/haddock-prologue.txt" --read-interface=../Cabal-1.20.0.1,../Cabal-1.20.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/Cabal/Cabal/dist-install/doc/html/Cabal/Cabal.haddock --read-interface=../array-0.5.0.0,../array-0.5.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/array/dist-install/doc/html/array/array.haddock --read-interface=../base-4.7.1.0,../base-4.7.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/base/dist-install/doc/html/base/base.haddock --read-interface=../bin-package-db-0.0.0.0,../bin-package-db-0.0.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/bin-package-db/dist-install/doc/html/bin-package-db/bin-package-db.haddock --read-interface=../bytestring-0.10.4.0,../bytestring-0.10.4.0/src/%{MODULE/./-}.html\#%{NAME},libraries/bytestring/dist-install/doc/html/bytestring/bytestring.haddock --read-interface=../containers-0.5.5.1,../containers-0.5.5.1/src/%{MODULE/./-}.html\#%{NAME},libraries/containers/dist-install/doc/html/containers/containers.haddock --read-interface=../directory-1.2.1.0,../directory-1.2.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/directory/dist-install/doc/html/directory/directory.haddock --read-interface=../filepath-1.3.0.2,../filepath-1.3.0.2/src/%{MODULE/./-}.html\#%{NAME},libraries/filepath/dist-install/doc/html/filepath/filepath.haddock --read-interface=../hoopl-3.10.0.1,../hoopl-3.10.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/hoopl/dist-install/doc/html/hoopl/hoopl.haddock --read-interface=../hpc-0.6.0.1,../hpc-0.6.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/hpc/dist-install/doc/html/hpc/hpc.haddock --read-interface=../process-1.2.0.0,../process-1.2.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/process/dist-install/doc/html/process/process.haddock --read-interface=../template-haskell-2.10.0.0,../template-haskell-2.10.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/template-haskell/dist-install/doc/html/template-haskell/template-haskell.haddock --read-interface=../time-1.4.2,../time-1.4.2/src/%{MODULE/./-}.html\#%{NAME},libraries/time/dist-install/doc/html/time/time.haddock --read-interface=../transformers-0.4.1.0,../transformers-0.4.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/transformers/dist-install/doc/html/transformers/transformers.haddock --read-interface=../unix-2.7.0.2,../unix-2.7.0.2/src/%{MODULE/./-}.html\#%{NAME},libraries/unix/dist-install/doc/html/unix/unix.haddock --optghc=-hisuf --optghc=dyn_hi --optghc=-osuf --optghc=dyn_o --optghc=-hcsuf --optghc=dyn_hc --optghc=-fPIC --optghc=-dynamic --optghc=-H32m --optghc=-O --optghc=-package-name --optghc=ghc-7.9.20140717 --optghc=-hide-all-packages --optghc=-i --optghc=-icompiler/basicTypes --optghc=-icompiler/cmm --optghc=-icompiler/codeGen --optghc=-icompiler/coreSyn --optghc=-icompiler/deSugar --optghc=-icompiler/ghci --optghc=-icompiler/hsSyn --optghc=-icompiler/iface --optghc=-icompiler/llvmGen --optghc=-icompiler/main --optghc=-icompiler/nativeGen --optghc=-icompiler/parser --optghc=-icompiler/prelude --optghc=-icompiler/profiling --optghc=-icompiler/rename --optghc=-icompiler/simplCore --optghc=-icompiler/simplStg --optghc=-icompiler/specialise --optghc=-icompiler/stgSyn --optghc=-icompiler/stranal 
--optghc=-icompiler/typecheck --optghc=-icompiler/types --optghc=-icompiler/utils --optghc=-icompiler/vectorise --optghc=-icompiler/stage2/build --optghc=-icompiler/stage2/build/autogen --optghc=-Icompiler/stage2/build --optghc=-Icompiler/stage2/build/autogen --optghc=-Icompiler/. --optghc=-Icompiler/parser --optghc=-Icompiler/utils --optghc=-Icompiler/../rts/dist/build --optghc=-Icompiler/stage2 --optghc=-optP-DGHCI --optghc=-optP-include --optghc=-optPcompiler/stage2/build/autogen/cabal_macros.h --optghc=-package --optghc=Cabal-1.20.0.1 --optghc=-package --optghc=array-0.5.0.0 --optghc=-package --optghc=base-4.7.1.0 --optghc=-package --optghc=bin-package-db-0.0.0.0 --optghc=-package --optghc=bytestring-0.10.4.0 --optghc=-package --optghc=containers-0.5.5.1 --optghc=-package --optghc=directory-1.2.1.0 --optghc=-package --optghc=filepath-1.3.0.2 --optghc=-package --optghc=hoopl-3.10.0.1 --optghc=-package --optghc=hpc-0.6.0.1 --optghc=-package --optghc=process-1.2.0.0 --optghc=-package --optghc=template-haskell-2.10.0.0 --optghc=-package --optghc=time-1.4.2 --optghc=-package --optghc=transformers-0.4.1.0 --optghc=-package --optghc=unix-2.7.0.2 --optghc=-Wall --optghc=-fno-warn-name-shadowing --optghc=-XHaskell2010 --optghc=-optc-DTHREADED_RTS --optghc=-DGHCI_TABLES_NEXT_TO_CODE --optghc=-DSTAGE=2 --optghc=-Rghc-timing --optghc=-O2 --optghc=-no-user-package-db --optghc=-rtsopts --optghc=-odir --optghc=compiler/stage2/build --optghc=-hidir --optghc=compiler/stage2/build --optghc=-stubdir --optghc=compiler/stage2/build --source-module=src/%{MODULE/./-}.html --source-entity=src/%{MODULE/./-}.html#%{NAME} --optghc=-DSTAGE=2 compiler/basicTypes/Avail.hs compiler/basicTypes/BasicTypes.lhs compiler/basicTypes/ConLike.lhs compiler/basicTypes/DataCon.lhs compiler/basicTypes/PatSyn.lhs compiler/basicTypes/Demand.lhs compiler/utils/Exception.hs compiler/main/GhcMonad.hs compiler/main/Hooks.lhs compiler/basicTypes/Id.lhs compiler/basicTypes/IdInfo.lhs compiler/basicTypes/Literal.lhs compiler/llvmGen/Llvm.hs compiler/llvmGen/Llvm/AbsSyn.hs compiler/llvmGen/Llvm/MetaData.hs compiler/llvmGen/Llvm/PpLlvm.hs compiler/llvmGen/Llvm/Types.hs compiler/llvmGen/LlvmCodeGen.hs compiler/llvmGen/LlvmCodeGen/Base.hs compiler/llvmGen/LlvmCodeGen/CodeGen.hs compiler/llvmGen/LlvmCodeGen/Data.hs compiler/llvmGen/LlvmCodeGen/Ppr.hs compiler/llvmGen/LlvmCodeGen/Regs.hs compiler/llvmGen/LlvmMangler.hs compiler/basicTypes/MkId.lhs compiler/basicTypes/Module.lhs compiler/basicTypes/Name.lhs compiler/basicTypes/NameEnv.lhs compiler/basicTypes/NameSet.lhs compiler/basicTypes/OccName.lhs compiler/basicTypes/RdrName.lhs compiler/basicTypes/SrcLoc.lhs compiler/basicTypes/UniqSupply.lhs compiler/basicTypes/Unique.lhs compiler/basicTypes/Var.lhs compiler/basicTypes/VarEnv.lhs compiler/basicTypes/VarSet.lhs compiler/utils/UnVarGraph.hs compiler/cmm/BlockId.hs compiler/cmm/CLabel.hs compiler/cmm/Cmm.hs compiler/cmm/CmmBuildInfoTables.hs compiler/cmm/CmmPipeline.hs compiler/cmm/CmmCallConv.hs compiler/cmm/CmmCommonBlockElim.hs compiler/cmm/CmmContFlowOpt.hs compiler/cmm/CmmExpr.hs compiler/cmm/CmmInfo.hs compiler/stage2/build/CmmLex.hs compiler/cmm/CmmLint.hs compiler/cmm/CmmLive.hs compiler/cmm/CmmMachOp.hs compiler/cmm/CmmNode.hs compiler/cmm/CmmOpt.hs compiler/stage2/build/CmmParse.hs compiler/cmm/CmmProcPoint.hs compiler/cmm/CmmSink.hs compiler/cmm/CmmType.hs compiler/cmm/CmmUtils.hs compiler/cmm/CmmLayoutStack.hs compiler/cmm/MkGraph.hs compiler/nativeGen/PprBase.hs compiler/cmm/PprC.hs compiler/cmm/PprCmm.hs 
compiler/cmm/PprCmmDecl.hs compiler/cmm/PprCmmExpr.hs compiler/cmm/Bitmap.hs compiler/codeGen/CodeGen/Platform.hs compiler/codeGen/CodeGen/Platform/ARM.hs compiler/codeGen/CodeGen/Platform/NoRegs.hs compiler/codeGen/CodeGen/Platform/PPC.hs compiler/codeGen/CodeGen/Platform/PPC_Darwin.hs compiler/codeGen/CodeGen/Platform/SPARC.hs compiler/codeGen/CodeGen/Platform/X86.hs compiler/codeGen/CodeGen/Platform/X86_64.hs compiler/codeGen/CgUtils.hs compiler/codeGen/StgCmm.hs compiler/codeGen/StgCmmBind.hs compiler/codeGen/StgCmmClosure.hs compiler/codeGen/StgCmmCon.hs compiler/codeGen/StgCmmEnv.hs compiler/codeGen/StgCmmExpr.hs compiler/codeGen/StgCmmForeign.hs compiler/codeGen/StgCmmHeap.hs compiler/codeGen/StgCmmHpc.hs compiler/codeGen/StgCmmArgRep.hs compiler/codeGen/StgCmmLayout.hs compiler/codeGen/StgCmmMonad.hs compiler/codeGen/StgCmmPrim.hs compiler/codeGen/StgCmmProf.hs compiler/codeGen/StgCmmTicky.hs compiler/codeGen/StgCmmUtils.hs compiler/codeGen/StgCmmExtCode.hs compiler/cmm/SMRep.lhs compiler/coreSyn/CoreArity.lhs compiler/coreSyn/CoreFVs.lhs compiler/coreSyn/CoreLint.lhs compiler/coreSyn/CorePrep.lhs compiler/coreSyn/CoreSubst.lhs compiler/coreSyn/CoreSyn.lhs compiler/coreSyn/TrieMap.lhs compiler/coreSyn/CoreTidy.lhs compiler/coreSyn/CoreUnfold.lhs compiler/coreSyn/CoreUtils.lhs compiler/coreSyn/MkCore.lhs compiler/coreSyn/PprCore.lhs compiler/deSugar/Check.lhs compiler/deSugar/Coverage.lhs compiler/deSugar/Desugar.lhs compiler/deSugar/DsArrows.lhs compiler/deSugar/DsBinds.lhs compiler/deSugar/DsCCall.lhs compiler/deSugar/DsExpr.lhs compiler/deSugar/DsForeign.lhs compiler/deSugar/DsGRHSs.lhs compiler/deSugar/DsListComp.lhs compiler/deSugar/DsMonad.lhs compiler/deSugar/DsUtils.lhs compiler/deSugar/Match.lhs compiler/deSugar/MatchCon.lhs compiler/deSugar/MatchLit.lhs compiler/hsSyn/HsBinds.lhs compiler/hsSyn/HsDecls.lhs compiler/hsSyn/HsDoc.hs compiler/hsSyn/HsExpr.lhs compiler/hsSyn/HsImpExp.lhs compiler/hsSyn/HsLit.lhs compiler/hsSyn/HsPat.lhs compiler/hsSyn/HsSyn.lhs compiler/hsSyn/HsTypes.lhs compiler/hsSyn/HsUtils.lhs compiler/iface/BinIface.hs compiler/iface/BuildTyCl.lhs compiler/iface/IfaceEnv.lhs compiler/iface/IfaceSyn.lhs compiler/iface/IfaceType.lhs compiler/iface/LoadIface.lhs compiler/iface/MkIface.lhs compiler/iface/TcIface.lhs compiler/iface/FlagChecker.hs compiler/main/Annotations.hs compiler/main/BreakArray.hs compiler/main/CmdLineParser.hs compiler/main/CodeOutput.lhs compiler/stage2/build/Config.hs compiler/main/Constants.lhs compiler/main/DriverMkDepend.hs compiler/main/DriverPhases.hs compiler/main/PipelineMonad.hs compiler/main/DriverPipeline.hs compiler/main/DynFlags.hs compiler/main/ErrUtils.lhs compiler/main/Finder.lhs compiler/main/GHC.hs compiler/main/GhcMake.hs compiler/main/GhcPlugins.hs compiler/main/DynamicLoading.hs compiler/main/HeaderInfo.hs compiler/main/HscMain.hs compiler/main/HscStats.hs compiler/main/HscTypes.lhs compiler/main/InteractiveEval.hs compiler/main/InteractiveEvalTypes.hs compiler/main/PackageConfig.hs compiler/main/Packages.lhs compiler/main/PlatformConstants.hs compiler/main/PprTyThing.hs compiler/main/StaticFlags.hs compiler/main/SysTools.lhs compiler/main/TidyPgm.lhs compiler/parser/Ctype.lhs compiler/parser/HaddockUtils.hs compiler/stage2/build/Lexer.hs compiler/types/OptCoercion.lhs compiler/stage2/build/Parser.hs compiler/parser/RdrHsSyn.lhs compiler/prelude/ForeignCall.lhs compiler/prelude/PrelInfo.lhs compiler/prelude/PrelNames.lhs compiler/prelude/PrelRules.lhs compiler/prelude/PrimOp.lhs compiler/prelude/TysPrim.lhs 
compiler/prelude/TysWiredIn.lhs compiler/profiling/CostCentre.lhs compiler/profiling/ProfInit.hs compiler/profiling/SCCfinal.lhs compiler/rename/RnBinds.lhs compiler/rename/RnEnv.lhs compiler/rename/RnExpr.lhs compiler/rename/RnHsDoc.hs compiler/rename/RnNames.lhs compiler/rename/RnPat.lhs compiler/rename/RnSource.lhs compiler/rename/RnSplice.lhs compiler/rename/RnTypes.lhs compiler/simplCore/CoreMonad.lhs compiler/simplCore/CSE.lhs compiler/simplCore/FloatIn.lhs compiler/simplCore/FloatOut.lhs compiler/simplCore/LiberateCase.lhs compiler/simplCore/OccurAnal.lhs compiler/simplCore/SAT.lhs compiler/simplCore/SetLevels.lhs compiler/simplCore/SimplCore.lhs compiler/simplCore/SimplEnv.lhs compiler/simplCore/SimplMonad.lhs compiler/simplCore/SimplUtils.lhs compiler/simplCore/Simplify.lhs compiler/simplStg/SimplStg.lhs compiler/simplStg/StgStats.lhs compiler/simplStg/UnariseStg.lhs compiler/specialise/Rules.lhs compiler/specialise/SpecConstr.lhs compiler/specialise/Specialise.lhs compiler/stgSyn/CoreToStg.lhs compiler/stgSyn/StgLint.lhs compiler/stgSyn/StgSyn.lhs compiler/simplCore/CallArity.hs compiler/stranal/DmdAnal.lhs compiler/stranal/WorkWrap.lhs compiler/stranal/WwLib.lhs compiler/typecheck/FamInst.lhs compiler/typecheck/Inst.lhs compiler/typecheck/TcAnnotations.lhs compiler/typecheck/TcArrows.lhs compiler/typecheck/TcBinds.lhs compiler/typecheck/TcClassDcl.lhs compiler/typecheck/TcDefaults.lhs compiler/typecheck/TcDeriv.lhs compiler/typecheck/TcEnv.lhs compiler/typecheck/TcExpr.lhs compiler/typecheck/TcForeign.lhs compiler/typecheck/TcGenDeriv.lhs compiler/typecheck/TcGenGenerics.lhs compiler/typecheck/TcHsSyn.lhs compiler/typecheck/TcHsType.lhs compiler/typecheck/TcInstDcls.lhs compiler/typecheck/TcMType.lhs compiler/typecheck/TcValidity.lhs compiler/typecheck/TcMatches.lhs compiler/typecheck/TcPat.lhs compiler/typecheck/TcPatSyn.lhs compiler/typecheck/TcRnDriver.lhs compiler/typecheck/TcRnMonad.lhs compiler/typecheck/TcRnTypes.lhs compiler/typecheck/TcRules.lhs compiler/typecheck/TcSimplify.lhs compiler/typecheck/TcErrors.lhs compiler/typecheck/TcTyClsDecls.lhs compiler/typecheck/TcTyDecls.lhs compiler/typecheck/TcType.lhs compiler/typecheck/TcEvidence.lhs compiler/typecheck/TcUnify.lhs compiler/typecheck/TcInteract.lhs compiler/typecheck/TcCanonical.lhs compiler/typecheck/TcSMonad.lhs compiler/typecheck/TcTypeNats.hs compiler/typecheck/TcSplice.lhs compiler/types/Class.lhs compiler/types/Coercion.lhs compiler/types/FamInstEnv.lhs compiler/typecheck/FunDeps.lhs compiler/types/InstEnv.lhs compiler/types/TyCon.lhs compiler/types/CoAxiom.lhs compiler/types/Kind.lhs compiler/types/Type.lhs compiler/types/TypeRep.lhs compiler/types/Unify.lhs compiler/utils/Bag.lhs compiler/utils/Binary.hs compiler/utils/BooleanFormula.hs compiler/utils/BufWrite.hs compiler/utils/Digraph.lhs compiler/utils/Encoding.hs compiler/utils/FastBool.lhs compiler/utils/FastFunctions.lhs compiler/utils/FastMutInt.lhs compiler/utils/FastString.lhs compiler/utils/FastTypes.lhs compiler/stage2/build/Fingerprint.hs compiler/utils/FiniteMap.lhs compiler/utils/GraphBase.hs compiler/utils/GraphColor.hs compiler/utils/GraphOps.hs compiler/utils/GraphPpr.hs compiler/utils/IOEnv.hs compiler/utils/ListSetOps.lhs compiler/utils/Maybes.lhs compiler/utils/MonadUtils.hs compiler/utils/OrdList.lhs compiler/utils/Outputable.lhs compiler/utils/Pair.lhs compiler/utils/Panic.lhs compiler/utils/Pretty.lhs compiler/utils/Serialized.hs compiler/utils/State.hs compiler/utils/Stream.hs compiler/utils/StringBuffer.lhs compiler/utils/UniqFM.lhs 
compiler/utils/UniqSet.lhs compiler/utils/Util.lhs compiler/utils/ExtsCompat46.hs compiler/vectorise/Vectorise/Builtins/Base.hs compiler/vectorise/Vectorise/Builtins/Initialise.hs compiler/vectorise/Vectorise/Builtins.hs compiler/vectorise/Vectorise/Monad/Base.hs compiler/vectorise/Vectorise/Monad/Naming.hs compiler/vectorise/Vectorise/Monad/Local.hs compiler/vectorise/Vectorise/Monad/Global.hs compiler/vectorise/Vectorise/Monad/InstEnv.hs compiler/vectorise/Vectorise/Monad.hs compiler/vectorise/Vectorise/Utils/Base.hs compiler/vectorise/Vectorise/Utils/Closure.hs compiler/vectorise/Vectorise/Utils/Hoisting.hs compiler/vectorise/Vectorise/Utils/PADict.hs compiler/vectorise/Vectorise/Utils/Poly.hs compiler/vectorise/Vectorise/Utils.hs compiler/vectorise/Vectorise/Generic/Description.hs compiler/vectorise/Vectorise/Generic/PAMethods.hs compiler/vectorise/Vectorise/Generic/PADict.hs compiler/vectorise/Vectorise/Generic/PData.hs compiler/vectorise/Vectorise/Type/Env.hs compiler/vectorise/Vectorise/Type/Type.hs compiler/vectorise/Vectorise/Type/TyConDecl.hs compiler/vectorise/Vectorise/Type/Classify.hs compiler/vectorise/Vectorise/Convert.hs compiler/vectorise/Vectorise/Vect.hs compiler/vectorise/Vectorise/Var.hs compiler/vectorise/Vectorise/Env.hs compiler/vectorise/Vectorise/Exp.hs compiler/vectorise/Vectorise.hs compiler/cmm/Hoopl/Dataflow.hs compiler/cmm/Hoopl.hs compiler/nativeGen/AsmCodeGen.lhs compiler/nativeGen/TargetReg.hs compiler/nativeGen/NCGMonad.hs compiler/nativeGen/Instruction.hs compiler/nativeGen/Size.hs compiler/nativeGen/Reg.hs compiler/nativeGen/RegClass.hs compiler/nativeGen/PIC.hs compiler/utils/Platform.hs compiler/nativeGen/CPrim.hs compiler/nativeGen/X86/Regs.hs compiler/nativeGen/X86/RegInfo.hs compiler/nativeGen/X86/Instr.hs compiler/nativeGen/X86/Cond.hs compiler/nativeGen/X86/Ppr.hs compiler/nativeGen/X86/CodeGen.hs compiler/nativeGen/PPC/Regs.hs compiler/nativeGen/PPC/RegInfo.hs compiler/nativeGen/PPC/Instr.hs compiler/nativeGen/PPC/Cond.hs compiler/nativeGen/PPC/Ppr.hs compiler/nativeGen/PPC/CodeGen.hs compiler/nativeGen/SPARC/Base.hs compiler/nativeGen/SPARC/Regs.hs compiler/nativeGen/SPARC/Imm.hs compiler/nativeGen/SPARC/AddrMode.hs compiler/nativeGen/SPARC/Cond.hs compiler/nativeGen/SPARC/Instr.hs compiler/nativeGen/SPARC/Stack.hs compiler/nativeGen/SPARC/ShortcutJump.hs compiler/nativeGen/SPARC/Ppr.hs compiler/nativeGen/SPARC/CodeGen.hs compiler/nativeGen/SPARC/CodeGen/Amode.hs compiler/nativeGen/SPARC/CodeGen/Base.hs compiler/nativeGen/SPARC/CodeGen/CondCode.hs compiler/nativeGen/SPARC/CodeGen/Gen32.hs compiler/nativeGen/SPARC/CodeGen/Gen64.hs compiler/nativeGen/SPARC/CodeGen/Sanity.hs compiler/nativeGen/SPARC/CodeGen/Expand.hs compiler/nativeGen/RegAlloc/Liveness.hs compiler/nativeGen/RegAlloc/Graph/Main.hs compiler/nativeGen/RegAlloc/Graph/Stats.hs compiler/nativeGen/RegAlloc/Graph/ArchBase.hs compiler/nativeGen/RegAlloc/Graph/ArchX86.hs compiler/nativeGen/RegAlloc/Graph/Coalesce.hs compiler/nativeGen/RegAlloc/Graph/Spill.hs compiler/nativeGen/RegAlloc/Graph/SpillClean.hs compiler/nativeGen/RegAlloc/Graph/SpillCost.hs compiler/nativeGen/RegAlloc/Graph/TrivColorable.hs compiler/nativeGen/RegAlloc/Linear/Main.hs compiler/nativeGen/RegAlloc/Linear/JoinToTargets.hs compiler/nativeGen/RegAlloc/Linear/State.hs compiler/nativeGen/RegAlloc/Linear/Stats.hs compiler/nativeGen/RegAlloc/Linear/FreeRegs.hs compiler/nativeGen/RegAlloc/Linear/StackMap.hs compiler/nativeGen/RegAlloc/Linear/Base.hs compiler/nativeGen/RegAlloc/Linear/X86/FreeRegs.hs 
compiler/nativeGen/RegAlloc/Linear/X86_64/FreeRegs.hs compiler/nativeGen/RegAlloc/Linear/PPC/FreeRegs.hs compiler/nativeGen/RegAlloc/Linear/SPARC/FreeRegs.hs compiler/deSugar/DsMeta.hs compiler/hsSyn/Convert.lhs compiler/ghci/ByteCodeAsm.lhs compiler/ghci/ByteCodeGen.lhs compiler/ghci/ByteCodeInstr.lhs compiler/ghci/ByteCodeItbls.lhs compiler/ghci/ByteCodeLink.lhs compiler/ghci/Debugger.hs compiler/stage2/build/LibFFI.hs compiler/ghci/Linker.lhs compiler/ghci/ObjLink.lhs compiler/ghci/RtClosureInspect.hs compiler/ghci/DebuggerUtils.hs +RTS -tcompiler/stage2/doc/html/ghc/ghc.haddock.t --machine-readable compiler/types/Coercion.lhs:1892:3: parse error on input ?-- *kind* and role of its argument. Luckily, laziness should? [1] http://haskell.inf.elte.hu/builders/freebsd-amd64-head/322/10.html From jan.stolarek at p.lodz.pl Thu Jul 17 07:35:19 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 17 Jul 2014 09:35:19 +0200 Subject: a little phrustrated In-Reply-To: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: <201407170935.19256.jan.stolarek@p.lodz.pl> I've used Phabricator for the first time yesterday and have also experienced most of the problems mentioned by Richard. The most painful ones were: 1. Complaining about any untracked or uncommited changes in the source tree. This is mostly annoying. How can I tell arcanist to ignore such changes? Rant: I really don't like tools that try to be smarter than me and prohibit from doing what I want them to do. 2. Linters. I vote to disable the tab linter - we already have mechanisms to prevent introducing tabs in the source code. I'd also vote to disable whitespace linter unless we can tweak it to complain only about whitespaces introduced by commits in question. I've also run into issues with updating my revision. `arc diff` did not seem to work. Sadly, I don't remember the details - I'll write them down if it happens again. Janek Dnia ?roda, 16 lipca 2014, Richard Eisenberg napisa?: > Hi all, > > I'm trying to use Phab for the first time this morning, and hitting a fair > number of obstacles. I'm writing up my experiences here in order to figure > out which of these are my fault, which can be fixed, and which are just > things to live with; and also to help others who may go down the same path. > If relevant, my diff is at https://phabricator.haskell.org/D73 > > 1) I had some untracked files in a submodule repo. I couldn't find a way to > get `arc diff` to ignore these, as they appeared to git to be a change in a > tracked file (that is, a change to a submodule, which is considered > tracked). `git stash` offered no help, so I had to delete the untracked > files. This didn't cause real pain (the files were there in error), but it > seems a weakness of the system if I can't make progress otherwise. > > 2) I develop and build in the same tree. This means that I often have a few > untracked files in the outer, ghc.git repo that someone hasn't yet added to > .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to > work. I will likely always need `--allow-untracked`, so I looked for a way > to get this to be configured automatically. I found > https://secure.phabricator.com/book/phabricator/article/arcanist/#configura >tion , but the details there are sparse. Any advice? > > 3) The linter picks up and complains about tabs in any of my touched files. > I can then write an excuse for every `arc diff` I do, or de-tab the files. 
> In one case, I changed roughly one line in the file (MkCore.lhs) and didn't > think it right to de-tab the whole file. Even if I did de-tab the whole > file, then my eventual `arc land` would squash the whitespace commit in > with my substantive commits, which we expressly don't want. I can imagine a > fair amount of git fiddling which would push the whitespace commit to > master and then rebase my substantive work on top so that the final, > landed, squashed patch would avoid the whitespace changes, but this is > painful. And advice on this? Just ignore the lint errors and write silly > excuses? Or, is there a way Phab/arc can be smart enough to keep > whitespace-only commits (perhaps tagged with the words "whitespace only" in > the commit message) separate from other commits when squashing in `arc > land`? > > 4) For better or worse, we don't currently require every file to be > tab-free, just some of them. Could this be reflected in Phab's lint > settings to avoid the problem in (3)? (Of course, a way to de-tab and keep > the history nice would be much better!) > > 5) In writing my revision description, I had to add reviewers. I assumed > these should be comma-separated. This worked and I have updated the Wiki. > Please advise if I am wrong. > > 6) When I looked at my posted revision, it said that the revision was > "closed"... and that I had done it! slyfox on IRC informed me that this was > likely because I had pushed my commits to a wip/... branch. Is using wip > branches with Phab not recommended? Or, can Phab be configured not to close > revisions if the commit appears only in wip/... branches? > > 7) How can I "re-open" my revision? > > 8) Some time after posting, phaskell tells me that my build failed. OK. > This is despite the fact that Travis was able to build the same commit > (https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it > failed, and am directed to build log F3870 > (https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/) >. I can't view the file online, but instead have to download and then ungzip > it. Is it possible to view this file directly? Or not have it be > compressed? > > 9) When I do view the build log, I get no answers. The end of the file > comes abruptly in the middle of some haddock output, and the closest thing > that looks like an error is about a missing link in a haddock tag > `$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the > missing link has been there for some time, so I'm dubious that this is the > real problem. Are these log files cut off? > > 10) More of a question than a phrustration: is there a way to link directly > to Trac tickets and/or wiki pages from Phab comments? I like the Phab:D73 > syntax from Trac to Phab, and thanks, Austin, for adding the field at the > top of Trac tickets to Phab revisions. > > > I did fully expect to hit a few bumps on my first use of this new tool, but > it got to the point where I thought I should seek some advice before > continuing to muddle through -- hence this email. I do hope my tone is not > overly negative: I'm *very* appreciative of the work that many of you do to > support GHC's infrastructure, and I look forward to being able to get and > provide source code feedback through Phab. We just need to work out some > kinks, I think! (Any number of these kinks may be solely my fault, of > course.) 
> > Many thanks, > Richard From fuuzetsu at fuuzetsu.co.uk Thu Jul 17 08:05:46 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Thu, 17 Jul 2014 10:05:46 +0200 Subject: The build is broken? (Haddock parse error) In-Reply-To: References: Message-ID: <53C783DA.6020606@fuuzetsu.co.uk> On 07/17/2014 09:05 AM, P?li G?bor J?nos wrote: > Hello there, > > I have just found the builds are failing due to this (see [1] for the full log): > > "/usr/home/ghc-builder/work/builder/tempbuild/build/inplace/bin/haddock" > --odir="compiler/stage2/doc/html/ghc" --no-tmp-comp-dir > --dump-interface=compiler/stage2/doc/html/ghc/ghc.haddock --html > --hoogle --title="ghc-7.9.20140717: The GHC API" > --prologue="compiler/stage2/haddock-prologue.txt" > --read-interface=../Cabal-1.20.0.1,../Cabal-1.20.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/Cabal/Cabal/dist-install/doc/html/Cabal/Cabal.haddock > --read-interface=../array-0.5.0.0,../array-0.5.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/array/dist-install/doc/html/array/array.haddock > --read-interface=../base-4.7.1.0,../base-4.7.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/base/dist-install/doc/html/base/base.haddock > --read-interface=../bin-package-db-0.0.0.0,../bin-package-db-0.0.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/bin-package-db/dist-install/doc/html/bin-package-db/bin-package-db.haddock > --read-interface=../bytestring-0.10.4.0,../bytestring-0.10.4.0/src/%{MODULE/./-}.html\#%{NAME},libraries/bytestring/dist-install/doc/html/bytestring/bytestring.haddock > --read-interface=../containers-0.5.5.1,../containers-0.5.5.1/src/%{MODULE/./-}.html\#%{NAME},libraries/containers/dist-install/doc/html/containers/containers.haddock > --read-interface=../directory-1.2.1.0,../directory-1.2.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/directory/dist-install/doc/html/directory/directory.haddock > --read-interface=../filepath-1.3.0.2,../filepath-1.3.0.2/src/%{MODULE/./-}.html\#%{NAME},libraries/filepath/dist-install/doc/html/filepath/filepath.haddock > --read-interface=../hoopl-3.10.0.1,../hoopl-3.10.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/hoopl/dist-install/doc/html/hoopl/hoopl.haddock > --read-interface=../hpc-0.6.0.1,../hpc-0.6.0.1/src/%{MODULE/./-}.html\#%{NAME},libraries/hpc/dist-install/doc/html/hpc/hpc.haddock > --read-interface=../process-1.2.0.0,../process-1.2.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/process/dist-install/doc/html/process/process.haddock > --read-interface=../template-haskell-2.10.0.0,../template-haskell-2.10.0.0/src/%{MODULE/./-}.html\#%{NAME},libraries/template-haskell/dist-install/doc/html/template-haskell/template-haskell.haddock > --read-interface=../time-1.4.2,../time-1.4.2/src/%{MODULE/./-}.html\#%{NAME},libraries/time/dist-install/doc/html/time/time.haddock > --read-interface=../transformers-0.4.1.0,../transformers-0.4.1.0/src/%{MODULE/./-}.html\#%{NAME},libraries/transformers/dist-install/doc/html/transformers/transformers.haddock > --read-interface=../unix-2.7.0.2,../unix-2.7.0.2/src/%{MODULE/./-}.html\#%{NAME},libraries/unix/dist-install/doc/html/unix/unix.haddock > --optghc=-hisuf --optghc=dyn_hi --optghc=-osuf --optghc=dyn_o > --optghc=-hcsuf --optghc=dyn_hc --optghc=-fPIC --optghc=-dynamic > --optghc=-H32m --optghc=-O --optghc=-package-name > --optghc=ghc-7.9.20140717 --optghc=-hide-all-packages --optghc=-i > --optghc=-icompiler/basicTypes --optghc=-icompiler/cmm > --optghc=-icompiler/codeGen --optghc=-icompiler/coreSyn > --optghc=-icompiler/deSugar --optghc=-icompiler/ghci > 
> [...]
> compiler/types/Coercion.lhs:1892:3:
> parse error on input ‘-- *kind* and role of its argument. Luckily,
> laziness should’
>
> [1] http://haskell.inf.elte.hu/builders/freebsd-amd64-head/322/10.html

‘-- *’ is Haddock syntax for headers and because GHC doesn't expect a Haddock comment there, it chucks a parse error. I think just adding a leading space or turning it into {- -} style comment would work.

In all honesty, GHC should just warn and treat as regular comments at these cases rather than failing.

-- Mateusz K.
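For readers who have not hit this before, here is a minimal, hypothetical module (the module and function names are invented, not taken from the GHC tree) showing the failure mode and the two workarounds suggested above. It compiles as ordinary Haskell; the error only appears when GHC is run with the -haddock flag, and whether the extra space is enough depends on the Haddock lexer, so treat it as a sketch rather than a guarantee:

    module HaddockHeaderExample where

    -- The next comment is an ordinary remark, but under -haddock its
    -- leading "-- *" is lexed as a Haddock section header, which is not
    -- allowed in this position, so GHC can reject the module with
    -- "parse error on input '-- *kind* and role of its argument ...'".

    exampleFn :: Int -> Int
    exampleFn x = x + 1
    -- *kind* and role of its argument.

    -- Workaround 1 (the leading space suggested above), so the comment
    -- no longer starts with the literal "-- *" header marker:
    --  *kind* and role of its argument.

    -- Workaround 2 (also suggested above): a {- -} block comment.
    {- *kind* and role of its argument. -}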
From hvriedel at gmail.com Thu Jul 17 08:20:13 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 17 Jul 2014 10:20:13 +0200 Subject: Updating Haddock submodule In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042F1BD@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Thu, 17 Jul 2014 06:54:32 +0000") References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> <53C65C3D.7060803@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF1042F1BD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87oawobdmq.fsf@gmail.com>

On 2014-07-17 at 08:54:32 +0200, Simon Peyton Jones wrote: [...] > - it's at the bottom of a long page, most of which is irrelevant if you use ./sync-all (I think??) Fwiw, the page was written to be `./sync-all`-agnostic on purpose (in fact, ./sync-all is mentioned only once, for pre-submodule trees) [...] > I'm *not* complaining -- just trying to articulate explicitly what > would be helpful to me (or other ill-informed people) next time. Maybe we need two different presentations. One concise reference-like wiki page for Git-gnostic devs, and one for `./sync-all`-accustomed devs (or maybe even a rosetta-stone like translation between 'sync-all' invocations, and what the respective Git-only commands look like). Fwiw, I have started experimenting with a `runghc`-based ./sync-all replacement[1] (which uses only the packages bundled with GHC), but I don't have time to work on it for the next couple of weeks. Cheers, hvr [1]: Currently, it's more of a ghc.git advisor, checking the current state of your Git repo, and telling you what commands you should invoke next, but it's in its really earliest stages. If anyone wants to pick it up, and work on it, lemme know.

From simonpj at microsoft.com Thu Jul 17 08:21:43 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 17 Jul 2014 08:21:43 +0000 Subject: The build is broken? (Haddock parse error) In-Reply-To: <53C783DA.6020606@fuuzetsu.co.uk> References: <53C783DA.6020606@fuuzetsu.co.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042F53C@DB3PRD3001MB020.064d.mgd.msft.net>

| > parse error on input ‘-- *kind* and role of its argument. | Luckily, | > laziness should’ | > | > [1] http://haskell.inf.elte.hu/builders/freebsd-amd64- | head/322/10.html | > | | ‘-- *’ is Haddock syntax for headers and because GHC doesn't expect a | Haddock comment there, it chucks a parse error. I think just adding a | leading space or turning it into {- -} style comment would work. | | In all honesty, GHC should just warn and treat as regular comments at | these cases rather than failing. I agree. I've been bitten by this several times. But don't you mean "Haddock should just warn..."? S

From chak at cse.unsw.edu.au Thu Jul 17 08:24:54 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Thu, 17 Jul 2014 18:24:54 +1000 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: <53BC00DE.9050807@gmail.com> <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF1042F0AD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <7039D957-C1C4-4D9C-8757-3DA418C5E408@cse.unsw.edu.au>

Johan Tibell : > > On Thu, Jul 17, 2014 at 8:40 AM, Simon Peyton Jones > wrote: >> | I used to be a 80 column guy, but moved away from that the last years. >> | But you are right, there must be an upper limit and, if >80 is a >> | problem for code reviews, then it's a reasonable choice. >> >> As laptop screens have successively more horizontal pixels and fewer vertical pixels, longer lines use screen real estate better. 80 columns now seems a bit narrow to me. 100 would be better. >> >> But I'm not going to die for this > > Here we go! > > * Wider screens let you have several Emacs buffers next to each > other. At 80 chars you can have about 2 buffers next to each other on > a 13" screen. I think that was SimonM's premise for code reviews, that you want lines short enough to have two versions beside each other. > * The average line length is about 30-35 characters in Python. If > it's anything similar in Haskell shorter line length are more > efficient, looking how much of the lines times columns space is filled > with characters. The problem is that indentation and long identifiers push you towards longer lines. > * The eye has trouble traveling back to the next line if lines get > too long (at least when reading prose).
Research says around 60-70 > characters is optimal, if I recall correctly. I think we read code differently to prose (and prose is not much indented), so I don't think these numbers transfer. Manuel From simonpj at microsoft.com Thu Jul 17 08:24:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 17 Jul 2014 08:24:45 +0000 Subject: Updating Haddock submodule In-Reply-To: <87oawobdmq.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF1042CC47@DB3PRD3001MB020.064d.mgd.msft.net> <53C65C3D.7060803@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF1042F1BD@DB3PRD3001MB020.064d.mgd.msft.net> <87oawobdmq.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042F556@DB3PRD3001MB020.064d.mgd.msft.net> | Maybe we need different two different presentations. One concise | reference-like wiki page for Git-gnostic devs, and one for `./sync- | all`-accustomed devs (or maybe even a rosetta-stone like translation | between 'sync-all' invocations, and what the respective Git-only | commands look like) Maybe so. But we *certainly* need the one for people who are not Git-gnostic. WE provide sync-all, and advise its use for most folk, precisely because it automates a number of tricky corners. So our primary Git-un-gnostic documentation should be directed at that workflow. By all means there can be more implementation detail behind it, for Git gurus. I'm begging for it. Begging! Humbly! Simon | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 17 July 2014 09:20 | To: Simon Peyton Jones | Cc: Mateusz Kowalczyk; ghc-devs at haskell.org | Subject: Re: Updating Haddock submodule | | On 2014-07-17 at 08:54:32 +0200, Simon Peyton Jones wrote: | | [...] | | > - it's at the bottom of a long page, most of which is irrelevant if | > you use ./sync-all (I think??) | | Fwiw, the page was written to be a `./sync-all`-agnostic on purpose (in | fact, ./sync-all isn't mentioned only once for pre-submodule trees) | | [...] | | > I'm *not* complaining -- just trying to articulate explicitly what | > would be helpful to me (or other ill-informed people) next time. | | Maybe we need different two different presentations. One concise | reference-like wiki page for Git-gnostic devs, and one for `./sync- | all`-accustomed devs (or maybe even a rosetta-stone like translation | between 'sync-all' invocations, and what the respective Git-only | commands look like) | | Fwiw, I have started experimenting with a `runghc`-based ./sync-all | replacement[1] (which uses only the packages bundled with GHC), but I | don't have time to work on it for the next couple of weeks. | | | Cheers, | hvr | | [1]: Currently, it's more of a ghc.git advisor, checking the current | state of your Git repo, and telling you what commands you should | invoke next, but it's in its really earliest stages. If anyone | wants to pick it up, and work on it, lemme know. From sol at typeful.net Thu Jul 17 08:44:52 2014 From: sol at typeful.net (Simon Hengel) Date: Thu, 17 Jul 2014 16:44:52 +0800 Subject: The build is broken? (Haddock parse error) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1042F53C@DB3PRD3001MB020.064d.mgd.msft.net> References: <53C783DA.6020606@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF1042F53C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20140717084452.GC4066@x200> On Thu, Jul 17, 2014 at 08:21:43AM +0000, Simon Peyton Jones wrote: > > | > parse error on input ?-- *kind* and role of its argument. > | Luckily, > | > laziness should? 
> | > > | > [1] http://haskell.inf.elte.hu/builders/freebsd-amd64- > | head/322/10.html > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > > | > | ?-- *? is Haddock syntax for headers and because GHC doesn't expect a > | Haddock comment there, it chucks a parse error. I think just adding a > | leading space or turning it into {- -} style comment would work. > | > | In all honesty, GHC should just warn and treat as regular comments at > | these cases rather than failing. > > I agree. I've been bitten by this several times. But don't you mean "Haddock should just warn..."? The corresponding code is in GHC (only used with Opt_Haddock). So we need to change this in GHC. Cheers, Simon From alexander at plaimi.net Thu Jul 17 08:57:38 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 17 Jul 2014 10:57:38 +0200 Subject: RFC: style cleanup & guidelines for GHC, and related bikeshedding In-Reply-To: References: <53BC00DE.9050807@gmail.com> <47769D71-08FA-4CBB-B2AB-936CB15820A8@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF1042F0AD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53C79002.8030507@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 17/07/14 08:57, Johan Tibell wrote: > * Wider screens let you have several Emacs buffers next to each > other. At 80 chars you can have about 2 buffers next to each other > on a 13" screen. This is my main grief with 100 char lines (which is the Android standard, by the way). I like to have 6 or 8 files open side by side (including diffs and other meta-code). > * The eye has trouble traveling back to the next line if lines get > too long (at least when reading prose). Research says around 60-70 > characters is optimal, if I recall correctly. 66 as far as I remember, but that number is for prose and thus not *very* relevant, as Manuel points out. But I do think it's a problem in code too, regardless of the exact number. Being at 80+ is also often an indication that you're in dire need of refactoring. So while we're all chiming in, my preferences in order: 78 72 80 less than 72 more than 80 - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlPHkAIACgkQRtClrXBQc7VVpwD+K2Q8NYplnmJdNYTulHx4hQBP GVeBJjYQifYrr6MoQq8BAJNP3IUyq+pg+VsGqJg4tCkrv6nmfM1teExzE2avz0/u =76Tq -----END PGP SIGNATURE----- From simonpj at microsoft.com Thu Jul 17 11:21:30 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 17 Jul 2014 11:21:30 +0000 Subject: Beta Performance dashboard In-Reply-To: <1405497741.2301.34.camel@kirk> References: <1405497741.2301.34.camel@kirk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1042F97E@DB3PRD3001MB020.064d.mgd.msft.net> This is totally brilliant: thank you Joachim! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 16 July 2014 09:02 | To: ghc-devs at haskell.org | Subject: Beta Performance dashboard | | Hi, | | I guess it?s time to talk about this, especially as Richard just | brought it up again... | | I felt that we were seriously lacking in our grip on performance | issues. | We don?t even know whether 6.8.3 was better or worse than 6.8.3 or | 7.6.4 in terms of nofib, not to speak of the effect of each single | commit. 
| | I want to change that, so I set up a benchmark monitoring dashboard. | You can currently reach it at: | | http://ghcspeed-nomeata.rhcloud.com/ | | What does it do? | ~~~~~~~~~~~~~~~~ | | It monitors the repository (master branch only) and builds each commit, | complete with the test suite and nofib. The log is saved and analyzed, | and some numbers are extracted: | * The build time | * The test suite summary numbers | * Runtime (if >1s), allocations and binary sizes of the nofib | benchmarks | | These are uploaded to the website above, which is powered by codespeed, | a general performance dashboard, implemented in Python using Django. | | Under _Changes_, it provides a report for each commit (changes wrt. to | the previous version, and wrt. to 10 revisions earlier, the so-called | ?trend?). A summary of these reports is visible on the front-page. | | The _Timeline_ is a graph for each individual performance number. If | there are bumps, you can hopefully find them there! You can also | compare to 7.8.3, which is available as a ?baseline?. | | _Comparison_ will be more useful if we have more tagged revision, or if | were benchmarking various options (e.g. -fllvm): Here you can do bar- | chart comparisons. | | Why codespeed? | ~~~~~~~~~~~~~~ | | For a long time I searched for a suitable software product, and one | criterion is that it should be open source, rather simple to set up and | mostly decoupled from other tools, i.e. something that I throw numbers | at and which then displays them nicely. While I don?t think codespeed | is the best performance dashboard out there (I find | http://goperfd.appspot.com/perf a bit better; I wonder how well | codespeed scales to even larger numbers of benchmarks and I wish it | were more git-aware), it was the easiest to get started with. And | thanks to the loose coupling of (1) running the tests to acquire a log, | (2) parsing the log to get numbers and (3) putting them on a server, we | can hopefully replace it when we come along something better. I was | hoping for the Phabricator guys to have something in their tool suite, | but doesn?t look like it. | | How does it work (currently)? | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | | My office PC is underused (I work on my laptop), so its currently | dedicated to it. I have a simple shell script that monitors the repo | for new versions. It builds the newest revision and works itself back | to the commit where everything was turned into submodules: | https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/watch.sh | | It calls a script that does the actual building: | https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/run-speed.sh | This produces a log file which should contain all the required numbers | somewhere. | | A second script extracts these numbers (with help of nofib-analyze) and | converts them into codespeed compatible JSON files: | https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/log2json.pl | | Finally, a simple invocation to curl uploads them to codespeed: | https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/upload.sh | | So if you want additional benchmarks to be tracked, make sure they are | present in the logs and adjust log2json.pl. codespeed will | automatically pick up new benchmarks in these logs. Reimplementations | in Haskell are also welcome :-) | | The testsuite is run with VERBOSE=4, so the performance numbers are | also shown for failing test cases. So once a test case goes over the | limit, you can grep through previous logs try to find the real culprit. 
| I uploaded the logs (so far) to https://github.com/nomeata/ghc-speed- | logs | (but this is not automated yet, ping me if you need an update on this). | | What next? | ~~~~~~~~~~ | | Clearly, the current setup is only good enough to evaluate the system. | Eventually, I might want to use my office PC again, and the free | hosting on openshift is not very powerful. | | So if we want to keep this setup and make it ?official?, we need find a | permanent solution.? This involves: | | * A dedicated machine to run the benchmarks. This probably shouldn?t | be | a VM, if we want to keep the noise in the runtime down. | * A machine to run the codespeed server. Can be a VM, or even run on | any of the system that we have right now. Just needs a database | (postgresql preferably) and a webserver supporting WSGI (i.e. any | of them). | * Maybe a better place to store the logs for public consumption. | | Also, there are way to improve the system: | | * As I said, I don?t think codespeed is the best. If we find something | better, we can replace it. Since we have all the logs, we can easily | fill the new system with the data, or even run both at the same | time. | * We might want to have more numbers. I am already putting | lines-of-code and disk space usage numbers into the logs, but do not | parse them yet. | * In particular, we might want to put in each performance test case as | a benchmark of its own, to easier find commits that degrade (or | improve!) performance. I?m not sure how well the web page will | handle | that. | * We might want to replace my rather simple watch.sh-script by | something more serious. In particular, I imagine that our builder | setup could manages this, with a dedicated builder doing the | benchmark runs and the builder server scheduling a build for each | commit. | | | That?s it for now. Enjoy clicking around! | | Greetings, | Joachim | | ? I guess that could be considered beta-reduction :-) | | | | -- | Joachim Breitner | e-Mail: mail at joachim-breitner.de | Homepage: http://www.joachim-breitner.de | Jabber-ID: nomeata at joachim-breitner.de From eir at cis.upenn.edu Thu Jul 17 12:06:20 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 17 Jul 2014 08:06:20 -0400 Subject: The build is broken? (Haddock parse error) In-Reply-To: <20140717084452.GC4066@x200> References: <53C783DA.6020606@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF1042F53C@DB3PRD3001MB020.064d.mgd.msft.net> <20140717084452.GC4066@x200> Message-ID: <238628A4-9376-4BC2-A54D-E2EF3AC9D730@cis.upenn.edu> Oops. I was relying on Travis's validation, which skips haddocking. This is most likely why Phab's build failed. Thanks for fixing it! Richard On Jul 17, 2014, at 4:44 AM, Simon Hengel wrote: > On Thu, Jul 17, 2014 at 08:21:43AM +0000, Simon Peyton Jones wrote: >> >> | > parse error on input ?-- *kind* and role of its argument. >> | Luckily, >> | > laziness should? >> | > >> | > [1] http://haskell.inf.elte.hu/builders/freebsd-amd64- >> | head/322/10.html >> | > _______________________________________________ >> | > ghc-devs mailing list >> | > ghc-devs at haskell.org >> | > http://www.haskell.org/mailman/listinfo/ghc-devs >> | > >> | >> | ?-- *? is Haddock syntax for headers and because GHC doesn't expect a >> | Haddock comment there, it chucks a parse error. I think just adding a >> | leading space or turning it into {- -} style comment would work. >> | >> | In all honesty, GHC should just warn and treat as regular comments at >> | these cases rather than failing. >> >> I agree. 
I've been bitten by this several times. But don't you mean "Haddock should just warn..."? > > The corresponding code is in GHC (only used with Opt_Haddock). So we > need to change this in GHC. > > Cheers, > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Thu Jul 17 18:06:05 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 17 Jul 2014 19:06:05 +0100 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: <53C8108D.7030905@gmail.com> gcc has -march=native which uses the current CPU's architecture, but I think it would be a really bad idea to turn that on by default, because it would mean that we have to be really careful which machine we build the distributions on. On my Linux box, gcc -v says it was configured with --with-arch-32=i686, which means that -march=i686 is the default for 32-bit code. We shouldn't go any later than that IMO. Anyway, this is all beside the point, if we aren't able to run the code generated by gcc (in whatever mode) then there's a bug somewhere. Cheers, Simon On 17/07/2014 07:39, Johan Tibell wrote: > Alright, then which Make file do we need to fix to make sure GCC is > called correctly? Also, I remember reading that some time during the > 4.x GCC series GCC switched to auto-detecting the arch to be that of > the machine being used. Could someone try to just switch GCC to a > newer version and see if it automatically stops trying to use i386, > leading to Simon's problem? > > On Thu, Jul 17, 2014 at 8:37 AM, Niklas Larsson wrote: >> It certainly shouldn't be built with i386, because that is generating code >> for a processor that lacks all these fancy atomic instructions. The first of >> them appears on the 486. >> >> i686 should be safe, it goes all the way back to Pentium Pro. >> >> >> 2014-07-17 8:33 GMT+02:00 Johan Tibell : >> >>> A perhaps silly question, *should* ghc-prim be built with i386 or i686? >>> >>> On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson >>> wrote: >>>> I just found exactly the same thing! Well, I used i686 instead. >>>> >>>> Sounds like it's worthwhile to see if this is limited to ghc-prim or if >>>> there's more stuff that's built with i386. >>>> >>>> >>>> 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : >>>> >>>>> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >>>>>> 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >>>>>>> I hope they can just be done away with at the source, that is to >>>>>>> make >>>>>>> gcc >>>>>>> generate the assembly primitives. GHC should already be built with >>>>>>> i686, but >>>>>>> does that reach ghc-prim? >>>>>> >>>>>> This depends on GCC -- if no -march=XXX is explicitly set, I guess it >>>>>> will take its default, which may vary platform by platform. >>>>> >>>>> All right, I have finally got a Windows (x64) machine and installed >>>>> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >>>>> Niklas wrote earlier), where the default -march is i386. You should >>>>> see this line when trying to compile Johan's test program with the -v >>>>> flag set: >>>>> >>>>> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >>>>> >>>>> With the -march=i586 flag explicitly set in the command line, no >>>>> __sync_fetch_and_add_n() calls are generated. 
>>>>> >>>>> [1] >>>>> >>>>> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 >>>> >>>> >> >> From robstewart57 at gmail.com Thu Jul 17 18:18:42 2014 From: robstewart57 at gmail.com (Rob Stewart) Date: Thu, 17 Jul 2014 19:18:42 +0100 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? Message-ID: Hi, On Saturday 6th September is the Haskell Implementers Workshop. There has been plenty of discussion over the last 12 months about making contributions to GHC less formidable. Is this story going to be told at HIW? A talk about revised contribution guidelines and helpful tool support might engage those sat on, or peering over, the fence. This might include: * Phabricator code review demonstration. * Continuous integration infrastructure. * Trac demonstration, e.g. how to contribute to design discussions. * Wiki navigation, and important new pages born in recent months. * GHC coding guidelines, e.g. using notes and haddock documentation. * Git policies, e.g. use of submodules. * What GHC needs.. Windows testers? * Old contribution guidelines that no longer apply. Is HIW on 6th September a good place to give a "GHC contributions and infrastructure" talk? -- Rob From carter.schonwald at gmail.com Thu Jul 17 18:34:10 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 17 Jul 2014 14:34:10 -0400 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: References: Message-ID: wearing my "i'm on the HIW PC" hat, i'm going to speculatively say, "Yes!", though i'm ccing Jost into this dialogue :) On Thu, Jul 17, 2014 at 2:18 PM, Rob Stewart wrote: > Hi, > > On Saturday 6th September is the Haskell Implementers Workshop. There > has been plenty of discussion over the last 12 months about making > contributions to GHC less formidable. Is this story going to be told > at HIW? A talk about revised contribution guidelines and helpful tool > support might engage those sat on, or peering over, the fence. > > This might include: > > * Phabricator code review demonstration. > * Continuous integration infrastructure. > * Trac demonstration, e.g. how to contribute to design discussions. > * Wiki navigation, and important new pages born in recent months. > * GHC coding guidelines, e.g. using notes and haddock documentation. > * Git policies, e.g. use of submodules. > * What GHC needs.. Windows testers? > * Old contribution guidelines that no longer apply. > > Is HIW on 6th September a good place to give a "GHC contributions and > infrastructure" talk? > > -- > Rob > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Jul 17 20:05:32 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 17 Jul 2014 22:05:32 +0200 Subject: Windows breakage -- again In-Reply-To: <53C8108D.7030905@gmail.com> References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: Can someone see if adding Cc-options: -march=i686 To ghc-prim.cabal resolves the issue? On Jul 17, 2014 9:06 PM, "Simon Marlow" wrote: > gcc has -march=native which uses the current CPU's architecture, but I > think it would be a really bad idea to turn that on by default, because it > would mean that we have to be really careful which machine we build the > distributions on. 
> > On my Linux box, gcc -v says it was configured with --with-arch-32=i686, > which means that -march=i686 is the default for 32-bit code. We shouldn't > go any later than that IMO. > > Anyway, this is all beside the point, if we aren't able to run the code > generated by gcc (in whatever mode) then there's a bug somewhere. > > Cheers, > Simon > > On 17/07/2014 07:39, Johan Tibell wrote: > >> Alright, then which Make file do we need to fix to make sure GCC is >> called correctly? Also, I remember reading that some time during the >> 4.x GCC series GCC switched to auto-detecting the arch to be that of >> the machine being used. Could someone try to just switch GCC to a >> newer version and see if it automatically stops trying to use i386, >> leading to Simon's problem? >> >> On Thu, Jul 17, 2014 at 8:37 AM, Niklas Larsson >> wrote: >> >>> It certainly shouldn't be built with i386, because that is generating >>> code >>> for a processor that lacks all these fancy atomic instructions. The >>> first of >>> them appears on the 486. >>> >>> i686 should be safe, it goes all the way back to Pentium Pro. >>> >>> >>> 2014-07-17 8:33 GMT+02:00 Johan Tibell : >>> >>> A perhaps silly question, *should* ghc-prim be built with i386 or i686? >>>> >>>> On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson >>>> wrote: >>>> >>>>> I just found exactly the same thing! Well, I used i686 instead. >>>>> >>>>> Sounds like it's worthwhile to see if this is limited to ghc-prim or if >>>>> there's more stuff that's built with i386. >>>>> >>>>> >>>>> 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : >>>>> >>>>> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >>>>>> >>>>>>> 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >>>>>>> >>>>>>>> I hope they can just be done away with at the source, that is to >>>>>>>> make >>>>>>>> gcc >>>>>>>> generate the assembly primitives. GHC should already be built with >>>>>>>> i686, but >>>>>>>> does that reach ghc-prim? >>>>>>>> >>>>>>> >>>>>>> This depends on GCC -- if no -march=XXX is explicitly set, I guess it >>>>>>> will take its default, which may vary platform by platform. >>>>>>> >>>>>> >>>>>> All right, I have finally got a Windows (x64) machine and installed >>>>>> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >>>>>> Niklas wrote earlier), where the default -march is i386. You should >>>>>> see this line when trying to compile Johan's test program with the -v >>>>>> flag set: >>>>>> >>>>>> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >>>>>> >>>>>> With the -march=i586 flag explicitly set in the command line, no >>>>>> __sync_fetch_and_add_n() calls are generated. >>>>>> >>>>>> [1] >>>>>> >>>>>> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/ >>>>>> Windows/MSYS2 >>>>>> >>>>> >>>>> >>>>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Thu Jul 17 20:05:35 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 22:05:35 +0200 Subject: Windows breakage -- again In-Reply-To: <53C8108D.7030905@gmail.com> References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: | On my Linux box, gcc -v says it was configured with --with-arch-32=i686, which means that -march=i686 is the default for 32-bit code. We shouldn't go any later than that IMO. | | Anyway, this is all beside the point, if we aren't able to run the code generated by gcc (in whatever mode) then there's a bug somewhere. I've looked at it more now. 
The bug is that the mingw32 build is generating code for i386, which lacks these assembler primitives. The failure to link is just because gcc is sticking in these undefined symbols on the off-chance that you want to provide them yourself, they aren't included in gcc. Stick a -march=i686 in there and it works. I don't think it's reasonable to cater for processors that doesn't have support for CAS. So I'm testing a patch that just adds a -march=i686 flag for 32-bit mingw in aclocal.m4. Niklas 2014-07-17 20:06 GMT+02:00 Simon Marlow : > gcc has -march=native which uses the current CPU's architecture, but I > think it would be a really bad idea to turn that on by default, because it > would mean that we have to be really careful which machine we build the > distributions on. > > On my Linux box, gcc -v says it was configured with --with-arch-32=i686, > which means that -march=i686 is the default for 32-bit code. We shouldn't > go any later than that IMO. > > Anyway, this is all beside the point, if we aren't able to run the code > generated by gcc (in whatever mode) then there's a bug somewhere. > > Cheers, > Simon > > > On 17/07/2014 07:39, Johan Tibell wrote: > >> Alright, then which Make file do we need to fix to make sure GCC is >> called correctly? Also, I remember reading that some time during the >> 4.x GCC series GCC switched to auto-detecting the arch to be that of >> the machine being used. Could someone try to just switch GCC to a >> newer version and see if it automatically stops trying to use i386, >> leading to Simon's problem? >> >> On Thu, Jul 17, 2014 at 8:37 AM, Niklas Larsson >> wrote: >> >>> It certainly shouldn't be built with i386, because that is generating >>> code >>> for a processor that lacks all these fancy atomic instructions. The >>> first of >>> them appears on the 486. >>> >>> i686 should be safe, it goes all the way back to Pentium Pro. >>> >>> >>> 2014-07-17 8:33 GMT+02:00 Johan Tibell : >>> >>> A perhaps silly question, *should* ghc-prim be built with i386 or i686? >>>> >>>> On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson >>>> wrote: >>>> >>>>> I just found exactly the same thing! Well, I used i686 instead. >>>>> >>>>> Sounds like it's worthwhile to see if this is limited to ghc-prim or if >>>>> there's more stuff that's built with i386. >>>>> >>>>> >>>>> 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : >>>>> >>>>> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >>>>>> >>>>>>> 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >>>>>>> >>>>>>>> I hope they can just be done away with at the source, that is to >>>>>>>> make >>>>>>>> gcc >>>>>>>> generate the assembly primitives. GHC should already be built with >>>>>>>> i686, but >>>>>>>> does that reach ghc-prim? >>>>>>>> >>>>>>> >>>>>>> This depends on GCC -- if no -march=XXX is explicitly set, I guess it >>>>>>> will take its default, which may vary platform by platform. >>>>>>> >>>>>> >>>>>> All right, I have finally got a Windows (x64) machine and installed >>>>>> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >>>>>> Niklas wrote earlier), where the default -march is i386. You should >>>>>> see this line when trying to compile Johan's test program with the -v >>>>>> flag set: >>>>>> >>>>>> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >>>>>> >>>>>> With the -march=i586 flag explicitly set in the command line, no >>>>>> __sync_fetch_and_add_n() calls are generated. 
>>>>>> >>>>>> [1] >>>>>> >>>>>> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/ >>>>>> Windows/MSYS2 >>>>>> >>>>> >>>>> >>>>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Thu Jul 17 20:25:55 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 22:25:55 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: Now I have an extremely simple patch that I have confirmed is working. As it is in the autoconfig stuff, one needs to run: autoreconf make distclean ./boot and so on.... As apparently everything has been built with i386 before, I'm curious about if there's a performance increase when going i686. I'll have to measure that, but not tonight. Niklas 2014-07-17 22:05 GMT+02:00 Johan Tibell : > Can someone see if adding > > Cc-options: -march=i686 > > To ghc-prim.cabal resolves the issue? > On Jul 17, 2014 9:06 PM, "Simon Marlow" wrote: > >> gcc has -march=native which uses the current CPU's architecture, but I >> think it would be a really bad idea to turn that on by default, because it >> would mean that we have to be really careful which machine we build the >> distributions on. >> >> On my Linux box, gcc -v says it was configured with --with-arch-32=i686, >> which means that -march=i686 is the default for 32-bit code. We shouldn't >> go any later than that IMO. >> >> Anyway, this is all beside the point, if we aren't able to run the code >> generated by gcc (in whatever mode) then there's a bug somewhere. >> >> Cheers, >> Simon >> >> On 17/07/2014 07:39, Johan Tibell wrote: >> >>> Alright, then which Make file do we need to fix to make sure GCC is >>> called correctly? Also, I remember reading that some time during the >>> 4.x GCC series GCC switched to auto-detecting the arch to be that of >>> the machine being used. Could someone try to just switch GCC to a >>> newer version and see if it automatically stops trying to use i386, >>> leading to Simon's problem? >>> >>> On Thu, Jul 17, 2014 at 8:37 AM, Niklas Larsson >>> wrote: >>> >>>> It certainly shouldn't be built with i386, because that is generating >>>> code >>>> for a processor that lacks all these fancy atomic instructions. The >>>> first of >>>> them appears on the 486. >>>> >>>> i686 should be safe, it goes all the way back to Pentium Pro. >>>> >>>> >>>> 2014-07-17 8:33 GMT+02:00 Johan Tibell : >>>> >>>> A perhaps silly question, *should* ghc-prim be built with i386 or i686? >>>>> >>>>> On Thu, Jul 17, 2014 at 8:33 AM, Niklas Larsson >>>>> wrote: >>>>> >>>>>> I just found exactly the same thing! Well, I used i686 instead. >>>>>> >>>>>> Sounds like it's worthwhile to see if this is limited to ghc-prim or >>>>>> if >>>>>> there's more stuff that's built with i386. >>>>>> >>>>>> >>>>>> 2014-07-17 8:21 GMT+02:00 P?li G?bor J?nos : >>>>>> >>>>>> 2014-07-17 0:51 GMT+02:00 P?li G?bor J?nos : >>>>>>> >>>>>>>> 2014-07-17 0:47 GMT+02:00 Niklas Larsson : >>>>>>>> >>>>>>>>> I hope they can just be done away with at the source, that is to >>>>>>>>> make >>>>>>>>> gcc >>>>>>>>> generate the assembly primitives. GHC should already be built with >>>>>>>>> i686, but >>>>>>>>> does that reach ghc-prim? >>>>>>>>> >>>>>>>> >>>>>>>> This depends on GCC -- if no -march=XXX is explicitly set, I guess >>>>>>>> it >>>>>>>> will take its default, which may vary platform by platform. 
>>>>>>>> >>>>>>> >>>>>>> All right, I have finally got a Windows (x64) machine and installed >>>>>>> the msys2 environment by the GHC wiki [1]. This has GCC 4.5.2 (as >>>>>>> Niklas wrote earlier), where the default -march is i386. You should >>>>>>> see this line when trying to compile Johan's test program with the -v >>>>>>> flag set: >>>>>>> >>>>>>> COLLECT_GCC_OPTIONS= ... '-v' '-mtune=i386' '-march=i386' >>>>>>> >>>>>>> With the -march=i586 flag explicitly set in the command line, no >>>>>>> __sync_fetch_and_add_n() calls are generated. >>>>>>> >>>>>>> [1] >>>>>>> >>>>>>> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/ >>>>>>> Windows/MSYS2 >>>>>>> >>>>>> >>>>>> >>>>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Set-i686-as-the-minimum-architecture-on-32-bit-mingw.patch Type: application/octet-stream Size: 611 bytes Desc: not available URL: From pali.gabor at gmail.com Thu Jul 17 20:50:00 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 22:50:00 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: 2014-07-17 22:05 GMT+02:00 Niklas Larsson : > The bug is that the mingw32 build is generating > code for i386, which lacks these assembler primitives. I think this might be a feature instead. As I wrote earlier, on FreeBSD/i386, arch is set to i486 in GCC's configure script [1]. Hence, whatever version I install from the Ports Collection, it will target i486 by default. [1] https://github.com/mirrors/gcc/blob/master/gcc/config.gcc#L2899 From pali.gabor at gmail.com Thu Jul 17 20:51:52 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 17 Jul 2014 22:51:52 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: 2014-07-17 22:05 GMT+02:00 Johan Tibell : > Can someone see if adding > > Cc-options: -march=i686 > > To ghc-prim.cabal resolves the issue? Yes, I can confirm that it works. However, it shall be prefixed with a conditional such as (os(windows) && arch(i386)) as x86_64 does not support -march=i686. From metaniklas at gmail.com Thu Jul 17 20:55:56 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Thu, 17 Jul 2014 22:55:56 +0200 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> <53C8108D.7030905@gmail.com> Message-ID: Yes, ultimately the problem is that the mingw-gcc that is in the tarball git has a too low target. But that is a rather larger fix than setting the arch targetted in configure. 2014-07-17 22:50 GMT+02:00 P?li G?bor J?nos : > 2014-07-17 22:05 GMT+02:00 Niklas Larsson : > > The bug is that the mingw32 build is generating > > code for i386, which lacks these assembler primitives. > > I think this might be a feature instead. As I wrote earlier, on > FreeBSD/i386, arch is set to i486 in GCC's configure script [1]. > Hence, whatever version I install from the Ports Collection, it will > target i486 by default. > > [1] https://github.com/mirrors/gcc/blob/master/gcc/config.gcc#L2899 > -------------- next part -------------- An HTML attachment was scrubbed... 
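(Putting Johan's ghc-prim.cabal suggestion and the caveat above together, the change under discussion would presumably look something like the fragment below. The exact placement inside the library stanza is an assumption, not taken from the actual ghc-prim.cabal, and the fix that ultimately went in, later in this thread, takes the configure/aclocal.m4 route instead.)

    -- hypothetical ghc-prim.cabal fragment, sketch only:
    if os(windows) && arch(i386)
        cc-options: -march=i686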
URL: From simonpj at microsoft.com Fri Jul 18 07:25:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 18 Jul 2014 07:25:59 +0000 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> | On Saturday 6th September is the Haskell Implementers Workshop. There | has been plenty of discussion over the last 12 months about making | contributions to GHC less formidable. Is this story going to be told at | HIW? A talk about revised contribution guidelines and helpful tool | support might engage those sat on, or peering over, the fence. I think that's a great idea. Maybe Simon M, or Joachim, or Austin, or Herbert? Of some coalition thereof Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Rob | Stewart | Sent: 17 July 2014 19:19 | To: ghc-devs at haskell.org | Subject: GHC contribution guidelines and infrastructure talk on 6th | September at HIW? | | Hi, | | On Saturday 6th September is the Haskell Implementers Workshop. There | has been plenty of discussion over the last 12 months about making | contributions to GHC less formidable. Is this story going to be told at | HIW? A talk about revised contribution guidelines and helpful tool | support might engage those sat on, or peering over, the fence. | | This might include: | | * Phabricator code review demonstration. | * Continuous integration infrastructure. | * Trac demonstration, e.g. how to contribute to design discussions. | * Wiki navigation, and important new pages born in recent months. | * GHC coding guidelines, e.g. using notes and haddock documentation. | * Git policies, e.g. use of submodules. | * What GHC needs.. Windows testers? | * Old contribution guidelines that no longer apply. | | Is HIW on 6th September a good place to give a "GHC contributions and | infrastructure" talk? | | -- | Rob | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From marlowsd at gmail.com Fri Jul 18 07:38:56 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 18 Jul 2014 08:38:56 +0100 Subject: Building GHC under Wine? In-Reply-To: <1405464922.2694.29.camel@kirk> References: <1405464922.2694.29.camel@kirk> Message-ID: <53C8CF10.7000003@gmail.com> I might be misremembering, but I believe someone (Ross Paterson?) used to do this a while ago. I can't think of any good reasons it *shouldn't* work. Cheers, Simon On 15/07/2014 23:55, Joachim Breitner wrote: > Hi, > > I feel sorry for Simon always repeatedly stuck with an unbuildable tree, > and an idea crossed my mind: Can we build? GHC under Wine? If so, is it > likely to catch the kind of problems that Simon is getting? If so, maybe > it runs fast enough to be also tested by travis on every commit? > > (This mail is to find out if people have tried it before. If not, I?ll > give it a quick shot.) > > Greetings, > Joachim > > ? we surely can use it: http://www.haskell.org/haskellwiki/GHC_under_Wine > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From mail at joachim-breitner.de Fri Jul 18 07:59:52 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 18 Jul 2014 09:59:52 +0200 Subject: Building GHC under Wine? 
In-Reply-To: <53C8CF10.7000003@gmail.com> References: <1405464922.2694.29.camel@kirk> <53C8CF10.7000003@gmail.com> Message-ID: <1405670392.8383.4.camel@kirk> Hi, Am Freitag, den 18.07.2014, 08:38 +0100 schrieb Simon Marlow: > I might be misremembering, but I believe someone (Ross Paterson?) used > to do this a while ago. > > I can't think of any good reasons it *shouldn't* work. Then, next question: Is it likely to find windows building failures, are are the failures usually of the kind that would not occur in a not-quite-a-real-windows environment? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Fri Jul 18 08:01:51 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 18 Jul 2014 10:01:51 +0200 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1405670511.8383.6.camel@kirk> Hi, Am Freitag, den 18.07.2014, 07:25 +0000 schrieb Simon Peyton Jones: > | On Saturday 6th September is the Haskell Implementers Workshop. There > | has been plenty of discussion over the last 12 months about making > | contributions to GHC less formidable. Is this story going to be told at > | HIW? A talk about revised contribution guidelines and helpful tool > | support might engage those sat on, or peering over, the fence. > > I think that's a great idea. Maybe Simon M, or Joachim, or Austin, or Herbert? Of some coalition thereof I agree, and I?d be available for it, or for joining a coalition. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ezyang at mit.edu Fri Jul 18 11:44:07 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 18 Jul 2014 12:44:07 +0100 Subject: a little phrustrated In-Reply-To: <201407170935.19256.jan.stolarek@p.lodz.pl> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> <201407170935.19256.jan.stolarek@p.lodz.pl> Message-ID: <1405683749-sup-5849@sabre> Excerpts from Jan Stolarek's message of 2014-07-17 08:35:19 +0100: > 1. Complaining about any untracked or uncommited changes in the source tree. This is mostly > annoying. How can I tell arcanist to ignore such changes? Rant: I really don't like tools that > try to be smarter than me and prohibit from doing what I want them to do. OK, I finally gave in and took a look at the Phabricator source code. 
Short answer: It's hard-coded, you can't disable the check Medium answer: It's pretty easy to disable, just uncomment the two 'throw new ArcanistUsageException' lines in src/workflow/ArcanistBaseWorkflow.php which complain about staging/committing before proceeding Long answer: Arcanist lint will still run on your working tree, so you are going to get spurious lint results. Oof! Edward From simonpj at microsoft.com Fri Jul 18 13:54:26 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 18 Jul 2014 13:54:26 +0000 Subject: Windows breakage -- again In-Reply-To: References: <2362520305436346764@unknownmsgid> Message-ID: <618BE556AADD624C9C918AA5D5911BEF104311AF@DB3PRD3001MB020.064d.mgd.msft.net> Thank you all for pursuing this. I gather that you know what is going on, so no further info needed from me. Yell if it is otherwise. Meanwhile, is the fix imminent, or should we revert Johan?s patch? Simon From: Niklas Larsson [mailto:metaniklas at gmail.com] Sent: 16 July 2014 19:58 To: Johan Tibell; Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Windows breakage -- again I get the same failure when I try to build HEAD. Turns out the error occurs on the 32-bit Windows build, and my successful build was a 64-bit build. My 64-bit build still succeeds. Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. Niklas 2014-07-16 14:48 GMT+02:00 Niklas Larsson >: I have built ghc on windows after that was added with no issue. I can take a look this evening and see how HEAD works for me. The standard gcc in the tarballs is 4.6.3, which is getting long in the tooth, there is an issue on trac to upgrade it. -- Niklas ________________________________ Fr?n: Johan Tibell Skickat: ?2014-?07-?16 09:57 Till: Simon Peyton Jones Kopia: ghc-devs at haskell.org ?mne: Re: Windows breakage -- again You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you wish. I will try to re-add the primop again after I figure out what's wrong. On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell > wrote: I added some primops about a month ago (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] says: > " Not all operations are supported by all target processors. If a particular operation cannot be implemented on the target processor, a warning will be generated and a call an external function will be generated. The external function will carry the same name as the builtin, with an additional suffix `_n' where n is the size of the data type." I'm a bit surprised by this error for two reasons: * A call to that symbol should only be generated if the CPU doesn't support the atomic instructions. What CPU model does Windows report that you have? * gcc should define such a symbol. For me the following test program compiles: #include uint8_t test(uint8_t* ptr, uint8_t val) { return __sync_fetch_and_add_1(ptr, val); } int main(void) { uint8_t n; return test(&n, 1); } Does that compile for you? Which version of GCC do we end up using on Windows? The reported symbol (___sync_fetch_and_add_1) has three leading underscores, that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates a reference to that symbol? 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones > wrote: Aargh! The Windows build has broken ? again. 
I can?t build GHC on my laptop any more. A clean ?sh validate? finishes as below. What on earth is `___sync_fetch_and_add_1'? Can anyone help? Thanks! Simon "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H32m -O -Werror -Wall -H64m -O0 -package-name vector-0.10.9.1 -hide-all-packages -i -ilibraries/vector/. -ilibraries/vector/dist-install/build -ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/dist-install/build -Ilibraries/vector/dist-install/build/autogen -Ilibraries/vector/include -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-include -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package deepseq-1.3.0.2 -package ghc-prim-0.3.1.0 -package primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O2 -O -dcore-lint -fno-warn-deprecated-flags -no-user-package-db -rtsopts -Wwarn -odir libraries/vector/dist-install/build -hidir libraries/vector/dist-install/build -stubdir libraries/vector/dist-install/build -c libraries/vector/./Data/Vector/Fusion/Stream/Monadic.hs -o libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o Loading package ghc-prim ... linking ... ghc-stage2.exe: unable to load package `ghc-prim' ghc-stage2.exe: C:\code\HEAD\libraries\ghc-prim\dist-install\build\HSghc-prim-0.3.1.0.o: unknown symbol `___sync_fetch_and_add_1' libraries/vector/ghc.mk:5: recipe for target 'libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o' failed make[1]: *** [libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o] Error 1 I _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Fri Jul 18 14:21:18 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Fri, 18 Jul 2014 16:21:18 +0200 Subject: SV: Windows breakage -- again Message-ID: <53c92d82.0625980a.3ecc.2618@mx.google.com> I posted a working and tested patch last night. Please feel free to commit it, I haven't the rights to do it. Niklas ----- Ursprungligt meddelande ----- Fr?n: "Simon Peyton Jones" Skickat: ?2014-?07-?18 15:55 Till: "Niklas Larsson" ; "Johan Tibell" Kopia: "ghc-devs at haskell.org" ?mne: RE: Windows breakage -- again Thank you all for pursuing this. I gather that you know what is going on, so no further info needed from me. Yell if it is otherwise. Meanwhile, is the fix imminent, or should we revert Johan?s patch? Simon From: Niklas Larsson [mailto:metaniklas at gmail.com] Sent: 16 July 2014 19:58 To: Johan Tibell; Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Windows breakage -- again I get the same failure when I try to build HEAD. Turns out the error occurs on the 32-bit Windows build, and my successful build was a 64-bit build. My 64-bit build still succeeds. Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. Niklas 2014-07-16 14:48 GMT+02:00 Niklas Larsson : I have built ghc on windows after that was added with no issue. I can take a look this evening and see how HEAD works for me. The standard gcc in the tarballs is 4.6.3, which is getting long in the tooth, there is an issue on trac to upgrade it. 
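(A side note on the "three leading underscores" question above: on 32-bit Windows the toolchain prefixes every C-level symbol with an extra underscore at the object-file level, so a C reference to __sync_fetch_and_add_1 is recorded as ___sync_fetch_and_add_1, which matches the unknown symbol in the ghci load error. This is an inference from the platform's usual symbol decoration, not something stated in the thread; the file name and commands in the comments below are illustrative assumptions.)

#include <stdint.h>

/* Sketch: compiling this on 32-bit mingw with a pre-486 target, e.g.
 *     gcc -march=i386 -c underscore_demo.c
 * and then listing symbols with
 *     nm underscore_demo.o
 * should show an undefined reference named ___sync_fetch_and_add_1,
 * i.e. the C-level helper __sync_fetch_and_add_1 plus the platform's
 * leading underscore. */
uint8_t bump(uint8_t *p) {
    return __sync_fetch_and_add(p, 1);
}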
-- Niklas Fr?n: Johan Tibell Skickat: ?2014-?07-?16 09:57 Till: Simon Peyton Jones Kopia: ghc-devs at haskell.org ?mne: Re: Windows breakage -- again You can rollback the commit (git revert 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you wish. I will try to re-add the primop again after I figure out what's wrong. On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell wrote: I added some primops about a month ago (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] says: > " Not all operations are supported by all target processors. If a particular operation cannot be implemented on the target processor, a warning will be generated and a call an external function will be generated. The external function will carry the same name as the builtin, with an additional suffix `_n' where n is the size of the data type." I'm a bit surprised by this error for two reasons: * A call to that symbol should only be generated if the CPU doesn't support the atomic instructions. What CPU model does Windows report that you have? * gcc should define such a symbol. For me the following test program compiles: #include uint8_t test(uint8_t* ptr, uint8_t val) { return __sync_fetch_and_add_1(ptr, val); } int main(void) { uint8_t n; return test(&n, 1); } Does that compile for you? Which version of GCC do we end up using on Windows? The reported symbol (___sync_fetch_and_add_1) has three leading underscores, that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and see if it's indeed GCC that generates a reference to that symbol? 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones wrote: Aargh! The Windows build has broken ? again. I can?t build GHC on my laptop any more. [Hela det ursprungliga meddelandet tas inte med.] -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Jul 18 18:56:25 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 18 Jul 2014 20:56:25 +0200 Subject: Update Cabal submodule to HEAD (1.21) Message-ID: <1405709785.8129.1.camel@kirk> Hi Edward, your commit makes the builders fail: [28 of 79] Compiling Distribution.Simple.Utils ( libraries/Cabal/Cabal/Distribution/Simple/Utils.hs, bootstrapping/Distribution/Simple/Utils.o ) libraries/Cabal/Cabal/Distribution/Simple/Utils.hs:382:46: `Process.delegate_ctlc' is not a (visible) constructor field name libraries/Cabal/Cabal/Distribution/Simple/Utils.hs:408:46: `Process.delegate_ctlc' is not a (visible) constructor field name make[1]: *** [utils/ghc-cabal/dist/build/tmp/ghc-cabal] Error 1 make: *** [all] Error 2 https://s3.amazonaws.com/archive.travis-ci.org/jobs/30289010/log.txt Please fix! Thanks, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From austin at well-typed.com Sat Jul 19 00:04:08 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 18 Jul 2014 19:04:08 -0500 Subject: Windows breakage -- again In-Reply-To: <53c92d82.0625980a.3ecc.2618@mx.google.com> References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: Thanks Niklas, this is now committed. On Fri, Jul 18, 2014 at 9:21 AM, Niklas Larsson wrote: > I posted a working and tested patch last night. Please feel free to commit > it, I haven't the rights to do it. > > Niklas > ________________________________ > Fr?n: Simon Peyton Jones > Skickat: ?2014-?07-?18 15:55 > Till: Niklas Larsson; Johan Tibell > Kopia: ghc-devs at haskell.org > ?mne: RE: Windows breakage -- again > > Thank you all for pursuing this. I gather that you know what is going on, > so no further info needed from me. Yell if it is otherwise. > > > > Meanwhile, is the fix imminent, or should we revert Johan?s patch? > > > > Simon > > > > From: Niklas Larsson [mailto:metaniklas at gmail.com] > Sent: 16 July 2014 19:58 > To: Johan Tibell; Simon Peyton Jones > Cc: ghc-devs at haskell.org > Subject: Re: Windows breakage -- again > > > > I get the same failure when I try to build HEAD. Turns out the error occurs > on the 32-bit Windows build, and my successful build was a 64-bit build. My > 64-bit build still succeeds. > > Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. > > Niklas > > > > > > 2014-07-16 14:48 GMT+02:00 Niklas Larsson : > > I have built ghc on windows after that was added with no issue. > > I can take a look this evening and see how HEAD works for me. > > The standard gcc in the tarballs is 4.6.3, which is getting long in the > tooth, there is an issue on trac to upgrade it. > > -- Niklas > > ________________________________ > > Fr?n: Johan Tibell > Skickat: ?2014-?07-?16 09:57 > Till: Simon Peyton Jones > Kopia: ghc-devs at haskell.org > ?mne: Re: Windows breakage -- again > > You can rollback the commit (git revert > 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you > wish. I will try to re-add the primop again after I figure out what's wrong. > > > > On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell > wrote: > > I added some primops about a month ago > (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a > gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] > says: > > > >> " Not all operations are supported by all target processors. If a >> particular operation cannot be implemented on the target processor, a >> warning will be generated and a call an external function will be generated. >> The external function will carry the same name as the builtin, with an >> additional suffix `_n' where n is the size of the data type." > > > > I'm a bit surprised by this error for two reasons: > > > > * A call to that symbol should only be generated if the CPU doesn't support > the atomic instructions. What CPU model does Windows report that you have? > > > > * gcc should define such a symbol. For me the following test program > compiles: > > > > #include > > > > uint8_t test(uint8_t* ptr, uint8_t val) { > > return __sync_fetch_and_add_1(ptr, val); > > } > > > > int main(void) { > > uint8_t n; > > return test(&n, 1); > > } > > > > Does that compile for you? Which version of GCC do we end up using on > Windows? 
> > > > The reported symbol (___sync_fetch_and_add_1) has three leading underscores, > that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and > see if it's indeed GCC that generates a reference to that symbol? > > > > 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html > > > > On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones > wrote: > > Aargh! The Windows build has broken ? again. I can?t build GHC on my > laptop any more. > > > [Hela det ursprungliga meddelandet tas inte med.] > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ezyang at mit.edu Sat Jul 19 00:14:38 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 19 Jul 2014 01:14:38 +0100 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405709785.8129.1.camel@kirk> References: <1405709785.8129.1.camel@kirk> Message-ID: <1405727535-sup-1472@sabre> The reason for this is that the builders do not have a sufficiently recent version of process, which Cabal has upgraded to depend on. Probably 'cabal update && cabal install process' should bring it up to date and make it working. Unfortunately, the build system bypasses Cabal for the bootstrap build of Cabal, so the version dependency range is not checked. I suppose if you /really/ wanted to we could add a macro to disable ctl-c support and pass it on the zeroboot. BTW, I noticed the builders are still bootstrapping from 7.4. Since 7.8 was stabilized recently, we're going to be retiring support for bootstrapping from 7.4 soon. Upgrade! Edward From kvanberendonck at gmail.com Sat Jul 19 05:00:17 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Sat, 19 Jul 2014 15:00:17 +1000 Subject: Thread status constants Message-ID: Hi, I found these: https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f65692867825/includes/rts/Constants.h#L194 They go only 0-14, so there's some long chains of branches and stuff in hot paths that could be cleaned up into single &-masked branches by changing these into a set of flags. And then I saw these: https://github.com/ghc/ghc/blob/master/libraries/base/GHC/Conc/Sync.lhs#L483 Where does 16 and 17 come from -- I couldn't find them in the header files anywhere? Kyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sat Jul 19 14:48:44 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 19 Jul 2014 16:48:44 +0200 Subject: Update Cabal submodule to HEAD (1.21) Message-ID: <1405781324.18386.0.camel@joachim-breitner.de> Hi, Am Samstag, den 19.07.2014, 01:14 +0100 schrieb Edward Z.Yang: > The reason for this is that the builders do not have a sufficiently > recent version of process, which Cabal has upgraded to depend on. > Probably 'cabal update && cabal install process' should bring it up to > date and make it working. Unfortunately, the build system bypasses > Cabal for the bootstrap build of Cabal, so the version dependency range > is not checked. didn?t seem to help just like that. https://api.travis-ci.org/jobs/30346178/log.txt?deansi=true Probably because of "-no-user-package-db". Trying with "cabal install --global" now. > BTW, I noticed the builders are still bootstrapping from 7.4. Since > 7.8 was stabilized recently, we're going to be retiring support for > bootstrapping from 7.4 soon. Upgrade! 
7.4 is still the version in the latest Debian stable release, as well as the almost latest Ubuntu LTS release (14.4 is still kinda new). But I guess I can install ghc from a PPA on travis. It doesn?t help with the process problem, though, that also affects 7.6. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mark.lentczner at gmail.com Sat Jul 19 17:43:38 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sat, 19 Jul 2014 10:43:38 -0700 Subject: Planning a Haskell Platfrom Release Candidate this weekend Message-ID: Here's my remaining list before putting out a realse candidate. I'm aiming for this weekend: *-- General --* [ ] haddock linkage from HP packages to GHC core packages is broken [ ] decision on QuickCheck? *-- Source Tarball --* [ ] missing platform.sh scripts [ ] missing os-extras in hptool/src *-- Mac --* [ ] start.html needs a once over updating [ ] uninstall-hs should clean out /usr/local/bin symlinks (as it does with /usr/bin) *-- Windows --* [ ] GLUT issue in ghci *-- Website --* [ ] templatize info on pages [ ] add build mode to bptool Main - needs to take file of downloads & hashes ?Anyone have anything else? Otherwise... we're almost good to go!!!! - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From berthold at Mathematik.Uni-Marburg.de Sat Jul 19 18:43:02 2014 From: berthold at Mathematik.Uni-Marburg.de (Jost Berthold) Date: Sat, 19 Jul 2014 20:43:02 +0200 Subject: Thread status constants In-Reply-To: References: Message-ID: <53CABC36.8030509@mathematik.uni-marburg.de> > Message: 2 > Date: Sat, 19 Jul 2014 15:00:17 +1000 > From: Kyle Van Berendonck > To: "ghc-devs at haskell.org" > Subject: Thread status constants > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi, > > I found these: > https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f65692867825/includes/rts/Constants.h#L194 > > They go only 0-14, so there's some long chains of branches and stuff in hot > paths that could be cleaned up into single &-masked branches by changing > these into a set of flags. > > And then I saw these: > https://github.com/ghc/ghc/blob/master/libraries/base/GHC/Conc/Sync.lhs#L483 > > Where does 16 and 17 come from -- I couldn't find them in the header files > anywhere? > > Kyle Hi Kyle, I had a look at these constants just recently. About 16 and 17, see here: https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L853 About the others:The block reasons received two updates just recently. Turns out that the eventlog (GHC's post-mortem tracing for threadscope et al.) expects these states to have particular values (those in includes/rts/Constants.h, consider them a fixed ABI for the time being). See here for the story: https://ghc.haskell.org/trac/ghc/ticket/9003 NB: Darn... base still uses GHC-7.8.2 constants. Thanks for the pointer! About your proposal to use bit masks: Did you find some of those hot paths? I believe (believe, unchecked...) that there are not too many places where a complete switch over all possible reasons is done (apart from debug output maybe); the typical use is rather to compare to one single constant than to all. 
So the performance might not increase too much from optimisations. Again, just a hunch, not verified. If the RTS contains code specialised to the current constants, these places need to be kept symbolic, to avoid future breakage when new states are added. A fix which removes the tight implicit dependency of ghc-events from the constants in includes/rts/Constants.h is pending, btw. / Jost From carter.schonwald at gmail.com Sat Jul 19 19:07:36 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 19 Jul 2014 15:07:36 -0400 Subject: Thread status constants In-Reply-To: <53CABC36.8030509@mathematik.uni-marburg.de> References: <53CABC36.8030509@mathematik.uni-marburg.de> Message-ID: Wait, does that mean the corresponding constants in base for 7.8.3 are busted? On Saturday, July 19, 2014, Jost Berthold < berthold at mathematik.uni-marburg.de> wrote: > Message: 2 >> Date: Sat, 19 Jul 2014 15:00:17 +1000 >> From: Kyle Van Berendonck >> To: "ghc-devs at haskell.org" >> Subject: Thread status constants >> Message-ID: >> > EbYdSLKA at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> Hi, >> >> I found these: >> https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f6 >> 5692867825/includes/rts/Constants.h#L194 >> >> They go only 0-14, so there's some long chains of branches and stuff in >> hot >> paths that could be cleaned up into single &-masked branches by changing >> these into a set of flags. >> >> And then I saw these: >> https://github.com/ghc/ghc/blob/master/libraries/base/ >> GHC/Conc/Sync.lhs#L483 >> >> Where does 16 and 17 come from -- I couldn't find them in the header files >> anywhere? >> >> Kyle >> > > Hi Kyle, > > I had a look at these constants just recently. > > About 16 and 17, see here: > https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L853 > > About the others:The block reasons received two updates just recently. > Turns out that the eventlog (GHC's post-mortem tracing for threadscope et > al.) expects these states to have particular values (those in > includes/rts/Constants.h, consider them a fixed ABI for the time being). > See here for the story: > https://ghc.haskell.org/trac/ghc/ticket/9003 > > NB: Darn... base still uses GHC-7.8.2 constants. Thanks for the pointer! > > About your proposal to use bit masks: > > Did you find some of those hot paths? > I believe (believe, unchecked...) that there are not too many places where > a complete switch over all possible reasons is done (apart from debug > output maybe); the typical use is rather to compare to one single constant > than to all. So the performance might not increase too much from > optimisations. Again, just a hunch, not verified. > > If the RTS contains code specialised to the current constants, these > places need to be kept symbolic, to avoid future breakage when new states > are added. > > A fix which removes the tight implicit dependency of ghc-events from the > constants in includes/rts/Constants.h is pending, btw. > > / Jost > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
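(To make the bit-mask idea concrete: the proposal is to turn the why_blocked codes from consecutive small integers into distinct bits, so that a chain of equality tests collapses into one mask test. The sketch below uses illustrative values and a made-up helper function; the real constants live in includes/rts/Constants.h, are plain enumerated numbers, and, as noted above, their current values are relied on by the eventlog, so any such change would have to take that into account.)

/* Today's style (values illustrative, not necessarily the real ones):
 * consecutive codes force one comparison per reason. */
#define BlockedOnMVar      1
#define BlockedOnBlackHole 2
#define BlockedOnRead      3

int blocked_on_mvar_or_read_v1(unsigned why_blocked) {
    return why_blocked == BlockedOnMVar || why_blocked == BlockedOnRead;
}

/* Hypothetical flag encoding (not what the RTS uses today): each reason
 * gets a distinct bit, so several reasons are tested with a single mask. */
#define FLAG_BLOCKED_ON_MVAR      (1u << 0)
#define FLAG_BLOCKED_ON_BLACKHOLE (1u << 1)
#define FLAG_BLOCKED_ON_READ      (1u << 2)

int blocked_on_mvar_or_read_v2(unsigned why_blocked) {
    return (why_blocked & (FLAG_BLOCKED_ON_MVAR | FLAG_BLOCKED_ON_READ)) != 0;
}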
URL: From berthold at Mathematik.Uni-Marburg.de Sat Jul 19 19:11:32 2014 From: berthold at Mathematik.Uni-Marburg.de (Jost Berthold) Date: Sat, 19 Jul 2014 21:11:32 +0200 Subject: Thread status constants In-Reply-To: References: <53CABC36.8030509@mathematik.uni-marburg.de> Message-ID: <53CAC2E4.1030403@mathematik.uni-marburg.de> I'm afraid so - I am just creating the ticket. / jost On 07/19/2014 09:07 PM, Carter Schonwald wrote: > Wait, does that mean the corresponding constants in base for 7.8.3 are > busted? > > On Saturday, July 19, 2014, Jost Berthold > > wrote: > > Message: 2 > Date: Sat, 19 Jul 2014 15:00:17 +1000 > From: Kyle Van Berendonck > To: "ghc-devs at haskell.org" > Subject: Thread status constants > Message-ID: > > > Content-Type: text/plain; charset="utf-8" > > Hi, > > I found these: > https://github.com/ghc/ghc/__blob/__5f3c5384df59717ca8013c5df8d1f6__5692867825/includes/rts/__Constants.h#L194 > > > They go only 0-14, so there's some long chains of branches and > stuff in hot > paths that could be cleaned up into single &-masked branches by > changing > these into a set of flags. > > And then I saw these: > https://github.com/ghc/ghc/__blob/master/libraries/base/__GHC/Conc/Sync.lhs#L483 > > > Where does 16 and 17 come from -- I couldn't find them in the > header files > anywhere? > > Kyle > > > Hi Kyle, > > I had a look at these constants just recently. > > About 16 and 17, see here: > https://github.com/ghc/ghc/__blob/master/rts/PrimOps.cmm#__L853 > > > About the others:The block reasons received two updates just recently. > Turns out that the eventlog (GHC's post-mortem tracing for > threadscope et al.) expects these states to have particular values > (those in includes/rts/Constants.h, consider them a fixed ABI for > the time being). > See here for the story: > https://ghc.haskell.org/trac/__ghc/ticket/9003 > > > NB: Darn... base still uses GHC-7.8.2 constants. Thanks for the pointer! > > About your proposal to use bit masks: > > Did you find some of those hot paths? > I believe (believe, unchecked...) that there are not too many places > where a complete switch over all possible reasons is done (apart > from debug output maybe); the typical use is rather to compare to > one single constant than to all. So the performance might not > increase too much from optimisations. Again, just a hunch, not verified. > > If the RTS contains code specialised to the current constants, these > places need to be kept symbolic, to avoid future breakage when new > states are added. > > A fix which removes the tight implicit dependency of ghc-events from > the constants in includes/rts/Constants.h is pending, btw. > > / Jost > > _________________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/__mailman/listinfo/ghc-devs > > From kvanberendonck at gmail.com Sat Jul 19 22:55:51 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Sun, 20 Jul 2014 08:55:51 +1000 Subject: Thread status constants In-Reply-To: <53CABC36.8030509@mathematik.uni-marburg.de> References: <53CABC36.8030509@mathematik.uni-marburg.de> Message-ID: Yeah, it's not like a big deal or anything. 
Here's all the places which grep told me use chains of || (or long switch statements which can't be turned into tables) to compare these values: ghc/rts//win32/AsyncIO.c:280 ghc/rts//Threads.c:286 ghc/rts//Threads.c:257 ghc/rts//sm/Scav.c:73 ghc/rts//sm/Sanity.c:521 ghc/rts//sm/Compact.c:436 ghc/rts//Schedule.c:2617 ghc/rts//Schedule.c:2605 ghc/rts//Schedule.c:946 ghc/rts//RaiseAsync.h:54 ghc/rts//RaiseAsync.c:416 I didn't include all the ones in the profiler because they probably don't get hot. Kyle On Sun, Jul 20, 2014 at 4:43 AM, Jost Berthold < berthold at mathematik.uni-marburg.de> wrote: > Message: 2 >> Date: Sat, 19 Jul 2014 15:00:17 +1000 >> From: Kyle Van Berendonck >> To: "ghc-devs at haskell.org" >> Subject: Thread status constants >> Message-ID: >> > SLKA at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> Hi, >> >> I found these: >> https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f6 >> 5692867825/includes/rts/Constants.h#L194 >> >> They go only 0-14, so there's some long chains of branches and stuff in >> hot >> paths that could be cleaned up into single &-masked branches by changing >> these into a set of flags. >> >> And then I saw these: >> https://github.com/ghc/ghc/blob/master/libraries/base/ >> GHC/Conc/Sync.lhs#L483 >> >> Where does 16 and 17 come from -- I couldn't find them in the header files >> anywhere? >> >> Kyle >> > > Hi Kyle, > > I had a look at these constants just recently. > > About 16 and 17, see here: > https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L853 > > About the others:The block reasons received two updates just recently. > Turns out that the eventlog (GHC's post-mortem tracing for threadscope et > al.) expects these states to have particular values (those in > includes/rts/Constants.h, consider them a fixed ABI for the time being). > See here for the story: > https://ghc.haskell.org/trac/ghc/ticket/9003 > > NB: Darn... base still uses GHC-7.8.2 constants. Thanks for the pointer! > > About your proposal to use bit masks: > > Did you find some of those hot paths? > I believe (believe, unchecked...) that there are not too many places where > a complete switch over all possible reasons is done (apart from debug > output maybe); the typical use is rather to compare to one single constant > than to all. So the performance might not increase too much from > optimisations. Again, just a hunch, not verified. > > If the RTS contains code specialised to the current constants, these > places need to be kept symbolic, to avoid future breakage when new states > are added. > > A fix which removes the tight implicit dependency of ghc-events from the > constants in includes/rts/Constants.h is pending, btw. > > / Jost > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun Jul 20 14:58:33 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 20 Jul 2014 16:58:33 +0200 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405781324.18386.0.camel@joachim-breitner.de> References: <1405781324.18386.0.camel@joachim-breitner.de> Message-ID: <1405868313.13230.1.camel@joachim-breitner.de> Hi, Am Samstag, den 19.07.2014, 16:48 +0200 schrieb Joachim Breitner: > Am Samstag, den 19.07.2014, 01:14 +0100 schrieb Edward Z.Yang: > > The reason for this is that the builders do not have a sufficiently > > recent version of process, which Cabal has upgraded to depend on. 
> > Probably 'cabal update && cabal install process' should bring it up to > > date and make it working. Unfortunately, the build system bypasses > > Cabal for the bootstrap build of Cabal, so the version dependency range > > is not checked. > > didn?t seem to help just like that. > https://api.travis-ci.org/jobs/30346178/log.txt?deansi=true > > Probably because of "-no-user-package-db". Trying with "cabal install > --global" now. I did this on the builder machines, but now I noticed that I?d also like to build this on my own machine, and using cabal to do system wide installations is not really the recomended way. And people wanting to hack on GHC that do not have root access have to jump through even more hoops (such as making a user installation of GHC). It would also break building Distribution packages (and already does so on) http://deb.haskell.org/dailies/ Is there an easy way out here? > > I suppose if you /really/ wanted to we could add a > > macro to disable ctl-c support and pass it on the zeroboot. > I tried that (there is already a macro BOOTSTRAPPING), and then it does build inplace/bin/ghc-cabal. But the next thing it tries to do is to build Cabal, which then fails, expecting the newer process. So process needs to be added to PACKAGES_STAGE0. But this adds "--constriant process == 1.2.0.0" to the options when building hsc2hs, which then fails ? but only with make -j2. In a sequential build, process happens to be built before hsc2hs.... So I guess I need to tell make somehow to first configure and register process and then configure and register hsc2hs. But I?m lost in GHC?s makefiles... Can anyone point me into the right direction? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Sun Jul 20 15:55:20 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 20 Jul 2014 17:55:20 +0200 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405868313.13230.1.camel@joachim-breitner.de> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> Message-ID: <1405871720.13230.3.camel@joachim-breitner.de> Hi, Am Sonntag, den 20.07.2014, 16:58 +0200 schrieb Joachim Breitner: > > > I suppose if you /really/ wanted to we could add a > > > macro to disable ctl-c support and pass it on the zeroboot. > > > I tried that (there is already a macro BOOTSTRAPPING), and then it does > build inplace/bin/ghc-cabal. > > But the next thing it tries to do is to build Cabal, which then fails, > expecting the newer process. > > So process needs to be added to PACKAGES_STAGE0. But this adds > "--constriant process == 1.2.0.0" to the options when building hsc2hs, > which then fails ? but only with make -j2. In a sequential build, > process happens to be built before hsc2hs.... > > So I guess I need to tell make somehow to first configure and register > process and then configure and register hsc2hs. But I?m lost in GHC?s > makefiles... Can anyone point me into the right direction? So I got a working configuration. 
The following patch needs to be applied to Cabal: diff --git a/Cabal/Distribution/Simple/Utils.hs b/Cabal/Distribution/Simple/Utils.hs index 9096186..54df19d 100644 --- a/Cabal/Distribution/Simple/Utils.hs +++ b/Cabal/Distribution/Simple/Utils.hs @@ -379,7 +379,12 @@ rawSystemExitWithEnv verbosity path args env = do hFlush stdout (_,_,_,ph) <- createProcess $ (Process.proc path args) { Process.env = (Just env) - , Process.delegate_ctlc = True } +#ifndef BOOTSTRAPPING +-- delegate_ctlc has been added in process 1.2, and we still want to be able to build +-- bootstrap GHC on systems not having that version + , Process.delegate_ctlc = True +#endif + } exitcode <- waitForProcess ph unless (exitcode == ExitSuccess) $ do debug verbosity $ path ++ " returned " ++ show exitcode @@ -405,7 +410,12 @@ rawSystemIOWithEnv verbosity path args mcwd menv inp out err = do , Process.std_in = mbToStd inp , Process.std_out = mbToStd out , Process.std_err = mbToStd err - , Process.delegate_ctlc = True } +#ifndef BOOTSTRAPPING +-- delegate_ctlc has been added in process 1.2, and we still want to be able to build +-- bootstrap GHC on systems not having that version + , Process.delegate_ctlc = True +#endif + } exitcode <- waitForProcess ph unless (exitcode == ExitSuccess) $ do debug verbosity $ path ++ " returned " ++ show exitcode and this adjustment made to GHCs build system: diff --git a/ghc.mk b/ghc.mk index e9d7e83..cfe46ec 100644 --- a/ghc.mk +++ b/ghc.mk @@ -382,8 +382,10 @@ else # Packages that are built by stage0. These packages are dependencies of # programs such as GHC and ghc-pkg, that we do not assume the stage0 # compiler already has installed (or up-to-date enough). +# "process" can be removed once the version required by Cabal is not +# particularly new any more. -PACKAGES_STAGE0 = Cabal/Cabal hpc bin-package-db hoopl transformers +PACKAGES_STAGE0 = process Cabal/Cabal hpc bin-package-db hoopl transformers ifeq "$(Windows_Host)" "NO" ifneq "$(HostOS_CPP)" "ios" PACKAGES_STAGE0 += terminfo @@ -732,6 +734,11 @@ fixed_pkg_prev= $(foreach pkg,$(PACKAGES_STAGE0),$(eval $(call fixed_pkg_dep,$(pkg),dist-boot))) utils/ghc-pkg/dist/package-data.mk: $(fixed_pkg_prev) compiler/stage1/package-data.mk: $(fixed_pkg_prev) + +# we also need to configure hsc2hs after process has been configured, as +# BOOT_PKG_CONSTRAINTS will make hsc2hs want to use the in-tree process library. +utils/hsc2hs/dist/package-data.mk : libraries/process/dist-boot/package-data.mk + endif ifneq "$(BINDIST)" "YES" Now I?d like to apply this change, but I?m not sure how to proceed with such GHC-specific fixes to Cabal. I guess I could create a pull request, wait for the Cabal devs to apply it, wait for the next Cabal release, and then update the submodule. Is there anything quicker that gets the fix in until that has happened? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ezyang at mit.edu Sun Jul 20 16:08:42 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Sun, 20 Jul 2014 17:08:42 +0100 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405871720.13230.3.camel@joachim-breitner.de> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> <1405871720.13230.3.camel@joachim-breitner.de> Message-ID: <1405872495-sup-5825@sabre> As long as we get Cabal to do a release before we cut a release, it should be fine, so all we need is for Cabal to take the patch. Edward Excerpts from Joachim Breitner's message of 2014-07-20 16:55:20 +0100: > Hi, > > Am Sonntag, den 20.07.2014, 16:58 +0200 schrieb Joachim Breitner: > > > > I suppose if you /really/ wanted to we could add a > > > > macro to disable ctl-c support and pass it on the zeroboot. > > > > > I tried that (there is already a macro BOOTSTRAPPING), and then it does > > build inplace/bin/ghc-cabal. > > > > But the next thing it tries to do is to build Cabal, which then fails, > > expecting the newer process. > > > > So process needs to be added to PACKAGES_STAGE0. But this adds > > "--constriant process == 1.2.0.0" to the options when building hsc2hs, > > which then fails ? but only with make -j2. In a sequential build, > > process happens to be built before hsc2hs.... > > > > So I guess I need to tell make somehow to first configure and register > > process and then configure and register hsc2hs. But I?m lost in GHC?s > > makefiles... Can anyone point me into the right direction? > > So I got a working configuration. The following patch needs to be > applied to Cabal: > > diff --git a/Cabal/Distribution/Simple/Utils.hs b/Cabal/Distribution/Simple/Utils.hs > index 9096186..54df19d 100644 > --- a/Cabal/Distribution/Simple/Utils.hs > +++ b/Cabal/Distribution/Simple/Utils.hs > @@ -379,7 +379,12 @@ rawSystemExitWithEnv verbosity path args env = do > hFlush stdout > (_,_,_,ph) <- createProcess $ > (Process.proc path args) { Process.env = (Just env) > - , Process.delegate_ctlc = True } > +#ifndef BOOTSTRAPPING > +-- delegate_ctlc has been added in process 1.2, and we still want to be able to build > +-- bootstrap GHC on systems not having that version > + , Process.delegate_ctlc = True > +#endif > + } > exitcode <- waitForProcess ph > unless (exitcode == ExitSuccess) $ do > debug verbosity $ path ++ " returned " ++ show exitcode > @@ -405,7 +410,12 @@ rawSystemIOWithEnv verbosity path args mcwd menv inp out err = do > , Process.std_in = mbToStd inp > , Process.std_out = mbToStd out > , Process.std_err = mbToStd err > - , Process.delegate_ctlc = True } > +#ifndef BOOTSTRAPPING > +-- delegate_ctlc has been added in process 1.2, and we still want to be able to build > +-- bootstrap GHC on systems not having that version > + , Process.delegate_ctlc = True > +#endif > + } > exitcode <- waitForProcess ph > unless (exitcode == ExitSuccess) $ do > debug verbosity $ path ++ " returned " ++ show exitcode > > and this adjustment made to GHCs build system: > > diff --git a/ghc.mk b/ghc.mk > index e9d7e83..cfe46ec 100644 > --- a/ghc.mk > +++ b/ghc.mk > @@ -382,8 +382,10 @@ else > # Packages that are built by stage0. These packages are dependencies of > # programs such as GHC and ghc-pkg, that we do not assume the stage0 > # compiler already has installed (or up-to-date enough). > +# "process" can be removed once the version required by Cabal is not > +# particularly new any more. 
> > -PACKAGES_STAGE0 = Cabal/Cabal hpc bin-package-db hoopl transformers > +PACKAGES_STAGE0 = process Cabal/Cabal hpc bin-package-db hoopl transformers > ifeq "$(Windows_Host)" "NO" > ifneq "$(HostOS_CPP)" "ios" > PACKAGES_STAGE0 += terminfo > @@ -732,6 +734,11 @@ fixed_pkg_prev= > $(foreach pkg,$(PACKAGES_STAGE0),$(eval $(call fixed_pkg_dep,$(pkg),dist-boot))) > utils/ghc-pkg/dist/package-data.mk: $(fixed_pkg_prev) > compiler/stage1/package-data.mk: $(fixed_pkg_prev) > + > +# we also need to configure hsc2hs after process has been configured, as > +# BOOT_PKG_CONSTRAINTS will make hsc2hs want to use the in-tree process library. > +utils/hsc2hs/dist/package-data.mk : libraries/process/dist-boot/package-data.mk > + > endif > > ifneq "$(BINDIST)" "YES" > > > > Now I?d like to apply this change, but I?m not sure how to proceed with > such GHC-specific fixes to Cabal. I guess I could create a pull request, > wait for the Cabal devs to apply it, wait for the next Cabal release, > and then update the submodule. Is there anything quicker that gets the > fix in until that has happened? > > Greetings, > Joachim > From mail at joachim-breitner.de Sun Jul 20 16:18:56 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 20 Jul 2014 18:18:56 +0200 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405872495-sup-5825@sabre> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> <1405871720.13230.3.camel@joachim-breitner.de> <1405872495-sup-5825@sabre> Message-ID: <1405873136.13230.7.camel@joachim-breitner.de> Hi, Am Sonntag, den 20.07.2014, 17:08 +0100 schrieb Edward Z.Yang: > As long as we get Cabal to do a release before we cut a release, > it should be fine, so all we need is for Cabal to take the patch. Nevermind, I noticed you updated the submodule to Cabal HEAD, not Cabal some-release. But anyways... > Excerpts from Joachim Breitner's message of 2014-07-20 16:55:20 +0100: > > Am Sonntag, den 20.07.2014, 16:58 +0200 schrieb Joachim Breitner: > > So I got a working configuration. The following patch needs to be > > applied to Cabal: > > > > [..] > > > > and this adjustment made to GHCs build system: > > > > [..] doesn?t quite work. It finishes phase 0, but phase 1 says make[1]: *** No rule to make target 'libraries/process/dist-boot/build/System/Process.hi', needed by 'utils/hsc2hs/dist/build/Common.o'. Schluss. and I?m stuck. Can someone help me here? Thanks, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mark.lentczner at gmail.com Sun Jul 20 17:40:27 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 20 Jul 2014 10:40:27 -0700 Subject: The haddock / clang problem revisited ... root cause found... Message-ID: tl;dr: - cabal is the culprit - If you have the `Extensions: CPP` directive in your .cabal file, then when invoking haddock, cabal causes every file to be pre-processed twice in sequence. - pre-processing isn't idempotent - when done under clang, it outputs things that ghc. 
longer: Regarding the ellusive problem whereby haddock for some packages fails on ghc-7.8.3 / haddock-2.14.2 systems with clang (as in OS X 10.9): Bugs were filed for both haddock and cabal: https://github.com/haskell/haddock/issues/284 https://github.com/haskell/cabal/issues/1740 I've been trying to find a work-around before releasing the platform for OS X. In sleuthing this I found the root cause: cabal. Full details here: Root cause of haddock / clang failure The correct fix is to change cabal's behavior: Either don't pre-process the fils for Haddock, or don't pass --optghc=-XCPP There is no good work around: We could add -optP-P to all compilations, but that breaks line numbers for everything. We could wrap haddock to remove the --optghc=-XCPP, but that will break hand invocations of haddock. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Sun Jul 20 17:55:08 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 20 Jul 2014 19:55:08 +0200 Subject: Fatal git error on .git/modules/libffi-tarballs Message-ID: <53CC027C.7070402@centrum.cz> Hello, while working on HEAD and after a few iteraction of ./validate my git complains with: $ git status fatal: Not a git repository: /export/home/karel/vcs/ghc-src/ghc-git-test-2/.git/modules/libffi-tarballs karel at silence:~/vcs/ghc-src/ghc-solaris-validate-fix$ Is this a known error or is there any known workaround for this issue? Thanks! Karel From ezyang at mit.edu Sun Jul 20 18:31:08 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 20 Jul 2014 19:31:08 +0100 Subject: Fatal git error on .git/modules/libffi-tarballs In-Reply-To: <53CC027C.7070402@centrum.cz> References: <53CC027C.7070402@centrum.cz> Message-ID: <1405880590-sup-7215@sabre> Hello Karel, You should blow away your copy (well, preserve the patches), upgrade your version of Git and then check out again. You've run into this bug: http://comments.gmane.org/gmane.comp.version-control.git/193492 Cheers, Edward Excerpts from Karel Gardas's message of 2014-07-20 18:55:08 +0100: > > Hello, > > while working on HEAD and after a few iteraction of ./validate my git > complains with: > > $ git status > fatal: Not a git repository: > /export/home/karel/vcs/ghc-src/ghc-git-test-2/.git/modules/libffi-tarballs > karel at silence:~/vcs/ghc-src/ghc-solaris-validate-fix$ > > Is this a known error or is there any known workaround for this issue? > > Thanks! > Karel From ezyang at mit.edu Sun Jul 20 21:57:38 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 20 Jul 2014 22:57:38 +0100 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405873136.13230.7.camel@joachim-breitner.de> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> <1405871720.13230.3.camel@joachim-breitner.de> <1405872495-sup-5825@sabre> <1405873136.13230.7.camel@joachim-breitner.de> Message-ID: <1405893428-sup-5913@sabre> Since this patch causes GHC HEAD to not bootstrap out of the box from GHC 7.6, I've reverted it for now. We'll have to cross this bridge sometime though. Edward Excerpts from Joachim Breitner's message of 2014-07-20 17:18:56 +0100: > Hi, > > Am Sonntag, den 20.07.2014, 17:08 +0100 schrieb Edward Z.Yang: > > As long as we get Cabal to do a release before we cut a release, > > it should be fine, so all we need is for Cabal to take the patch. > > Nevermind, I noticed you updated the submodule to Cabal HEAD, not Cabal > some-release. > > > But anyways... 
> > > Excerpts from Joachim Breitner's message of 2014-07-20 16:55:20 +0100: > > > Am Sonntag, den 20.07.2014, 16:58 +0200 schrieb Joachim Breitner: > > > So I got a working configuration. The following patch needs to be > > > applied to Cabal: > > > > > > [..] > > > > > > and this adjustment made to GHCs build system: > > > > > > [..] doesn't quite work. It finishes phase 0, but phase 1 says > make[1]: *** No rule to make target 'libraries/process/dist-boot/build/System/Process.hi', needed by 'utils/hsc2hs/dist/build/Common.o'. Schluss. > and I'm stuck. Can someone help me here? > > Thanks, > Joachim > From mark.lentczner at gmail.com Mon Jul 21 04:06:15 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 20 Jul 2014 21:06:15 -0700 Subject: Planning a Haskell Platfrom Release Candidate this weekend In-Reply-To: References: Message-ID: Where I'm at: *-- General --* [x] haddock linkage from HP packages to GHC core packages is broken * -- think I have this fixed.... checking in all contexts* [x] decision on QuickCheck? * -- staying with 2.6, because 2.7 added a dep on a package not in the platform* *-- Source Tarball --* [x] missing platform.sh scripts [x] missing os-extras in hptool/src * -- both done* *-- Mac --* [x] start.html needs a once over updating * -- done* [-] uninstall-hs should clean out /usr/local/bin symlinks (as it does with /usr/bin) * -- turns out it already was... long post on why it is hard to clean out installations that are configured to install directly into common shared directories (like /usr/local/bin and /usr/local/share/man)* [?] activate-hs should add -P for clang pre-processing * - bug found in Cabal* * - adding -P for all preprocessing will suck* * - no other good fix* [-] ensuring that the MACOSX_DEPLOYMENT_TARGET=10.6 is set * -- decided to defer this to later - just need to remember to do this by hand before starting the build* *-- Windows --* [ ] GLUT issue in ghci * -- no updates* *-- Website --* [ ] templatize info on pages [ ] add build mode to bptool Main - needs to take file of downloads & hashes *-- this can all wait until after the RC* The issue with pre-processing is a big one - and one not easily solvable. It has already bit developers (bos posted about it recently), and it will bite again. I'm also feeling at a loss to actually test the Platform. Normally I do a big system build, like Pandoc, or my own Plush to prove that a wider variety of developer packages will compile with the platform. Alas, none of those will compile with 7.8.3 and this platform. [for the curious: texmath ends up in a compilation that never ends during the profiling lib build, and Plush relies on warp, which puts it into its own cabal hell...] - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon Jul 21 07:22:32 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 21 Jul 2014 09:22:32 +0200 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405893428-sup-5913@sabre> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> <1405871720.13230.3.camel@joachim-breitner.de> <1405872495-sup-5825@sabre> <1405873136.13230.7.camel@joachim-breitner.de> <1405893428-sup-5913@sabre> Message-ID: <1405927352.1650.3.camel@joachim-breitner.de> Hi, Am Sonntag, den 20.07.2014, 22:57 +0100 schrieb Edward Z.Yang: > Since this patch causes GHC HEAD to not bootstrap out of the box > from GHC 7.6, I've reverted it for now.
We'll have to cross this bridge sometime though. thanks. Cabal already has applied the patch to make the initial ghc-cabal binary build: https://github.com/haskell/cabal/commit/3ef560208721a050e91fd9e67a0066ce44b04ba2 Now all we need to do is to figure out how to tell the build system to build process before Cabal. I started in http://www.haskell.org/pipermail/ghc-devs/2014-July/005685.html but that didn't satisfy the build system completely. Probably simple, but I have been staring at the makefiles for too long already. Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From austin at well-typed.com Mon Jul 21 11:56:00 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 21 Jul 2014 06:56:00 -0500 Subject: GHC 7.10.1 tickets Message-ID: Hello *, We need to be sorting out tickets for 7.10.1. Last week, I spent some time remilestoning things to get them into roughly the correct places, but we all need some help to get an idea of what we should fix for the release. Here's a query to look at: https://ghc.haskell.org/trac/ghc/query?status=!closed&milestone=7.10.1&order=priority I have already categorized these tickets roughly, but we'll probably still address some of the priorities on existing tickets of course. These are all of the 7.10.1 tickets currently, but there are some important things to note: 1) There are a lot of them! 2) Most of the development team is probably not going to pay attention to many of them unless they are high or highest priority in Trac. (That's just the result of having so many tickets for 7.10.1 at the moment.) So I'm asking if people out there with the time can help chip in and give some input on the ticket! - If there's a ticket that interests you, please comment on it! - Say why you think it should have its priority elevated for 7.10.1. - It will certainly be easier if you offer to help. Then we can move it upwards and we'll look at it. Finally, if you are *really* dedicated or a newcomer - please go through the tickets and see if any of them can be closed, reassigned, marked as duplicates, etc. There are certainly some dead tickets and probably easy tickets for newcomers, so just spend a few minutes looking if you're interested! (Otherwise they just end up getting punted off in batch modifications) Thanks! ~~~~~~~ BTW, for developers, you can look at your assigned tickets for 7.10.1 here, it's also always on the sidebar: https://ghc.haskell.org/trac/ghc/query?owner=%24USER&status=infoneeded&status=merge&status=new&status=patch&milestone=7.10.1&order=priority -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jul 21 13:48:43 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 21 Jul 2014 08:48:43 -0500 Subject: Status updates Message-ID: Hello *, Here are some updates from this past week. - I sent an email earlier about 7.10.1 tickets - please vote on them! - Related to that - as I said last week, I finished remilestoning a lot of tickets, but more remain. Please excuse any mail spam, but do pay attention to your bugs if they change! - I cleaned up some tabs and whitespace in the compiler in low-activity areas.
You may have seen the commits go by, but otherwise it shouldn't be a problem. I'll probably keep eliminating errors in low-contention areas of the codebase as I can. Ditto with 'arc lint'. - I've cleaned up a bunch of wiki pages relating to the git repositories, and submodule management, but more stuff remains. See below for my reply to Simon. - I've been lazy about the Applicative-Monad work, since some of it fell out of date. But now other work is pending on it, so it's time to finish it Real Soon Now. - I'm working out some of the kinks in the Phabricator automation for builds etc, but it's not quite done yet - sorry! Here are some things I plan to do this week, roughly in order of priority: - I'm going to keep cleaning up these Git pages to make them more sensible, since some of the information is still wrong. Please see the comments below this and let me know what's wrong! - I will finalize these Phabricator things, which I should be able to get done with soon. This is all mostly annoying HTTP endpoint and PHP/Python stuff and is a bit of a chore to test/setup completely. - I am going to spend time next on landing AMP and get it back into shape. Yes, really landing it; I need to send something upstream to Cabal still. - The patch queue on Trac needs to be drained; it had slipped my mind. - Given all that I'll also probably end up continuing to fix lint errors, and hopefully fix bugs based on what people want in 7.10. :) Below is a quick recap of what me and Simon discussed last week, mostly stuff from the above: > - Complete re-milestoning. Then email ghc-devs and ghc-users [send a draft to Simon & Simon first] > > o Giving links to suitable query(s) > > o Saying that any ticket that isn't > > a) milestoned 7.8.4 or 7.10.1, and > > b) high or highest priority, > > isn't going to get attended to by us. > > o Inviting them to bid (on the ticket) for high/highest 7.10.1 status, explaining why it is important to them. And that it's more likely to be done if they help. I sent this mail earlier - please bid on 7.10.1 tickets! > - Trac/Phab integration Doing this in a complete manner is turning out to be a bit of work and I haven't gotten comments appearing on tickets working just yet. My goal is to have it so that when you say 'arc diff', you can put the words: Trac: #9832, #9123 into the review, and it will auto appear on Phabricator as a custom field, hyperlink it, and then a comment will appear on Trac. I have to write some PHP and Python on both ends to get this all working, though. Also the builder script is currently really crappy, so I'm fixing that a bit, so it will correctly deal with concurrent builds and multiple builders. > - Document how to update a submodule; e.g. haddock > > o We worked through an example > > o We need a canonical place on the wiki to say which branch GHC should be tracking on the submodule; and how to check that it is > > o Need to talk to haddock and cabal folk about whether GHC should track master or a branch in their repo. They may want a more stable branch to work on. > > > - Wiki pages > > o Be sure to update https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources, to include simple cloning story > > o Consolidate "how to use Git" information into WorkingConventions/Git, perhaps headed "Git workflows". Merging WorkingConventions/Git with the Git/Submodules page will make it intimidatingly long. > > * Think about how to split it up. > > * Tricks should be on a separate page. > > *
Focus on tasks, not technology > > o wiki:Repositories page says where everything lives, but not how to use them. > > * Needs updating to have the correct data. > > * Also integrate https://ghc.haskell.org/trac/ghc/wiki/Repositories/Upstream This is now done with a first draft. I think I simplified the pages a lot. See here: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git and: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules I plan on obsoleting the old Repositories pages since they're mostly redundant and out of date now. There may be more info stuffed into other places, but I imagine it will stay under this hierarchy. There are still some links and prose that need to be rewritten that I obviously missed however. > - Merge AMP stuff. Should be done now. (Without join in Monad for now, #9123.) https://phabricator.haskell.org/D13 Unfortunately this is still pending a few things, so I haven't gotten it done yet. :( > o Maybe make the bug tracker page easier to find. > > * Move https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions from "Joining in" side-bar to "Documentation" and list it as "Working conventions" > > * Move newcomer material from wiki:WorkingConventions to wiki:Newcomers, and put *that* under "Joining in" > > Now the bug tracker stuff is in Working Conventions, easier to find. This has all been done. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Mon Jul 21 17:31:57 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 21 Jul 2014 18:31:57 +0100 Subject: Building GHC under Wine? In-Reply-To: <1405670392.8383.4.camel@kirk> References: <1405464922.2694.29.camel@kirk> <53C8CF10.7000003@gmail.com> <1405670392.8383.4.camel@kirk> Message-ID: <53CD4E8D.7010407@gmail.com> On 18/07/2014 08:59, Joachim Breitner wrote: > Hi, > > Am Freitag, den 18.07.2014, 08:38 +0100 schrieb Simon Marlow: >> I might be misremembering, but I believe someone (Ross Paterson?) used >> to do this a while ago. >> >> I can't think of any good reasons it *shouldn't* work. > > Then, next question: Is it likely to find windows building failures, or > are the failures usually of the kind that would not occur in a > not-quite-a-real-windows environment? As far as I know, yes it should look like a real Windows environment. However, now that I think about it, I suspect what I'm remembering is that someone used GHC under Wine to develop something else, rather than building GHC itself. Building GHC might be rather more difficult. Cheers, Simon From ezyang at mit.edu Mon Jul 21 17:43:10 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 21 Jul 2014 18:43:10 +0100 Subject: Multi-instance packages status report Message-ID: <1405962902-sup-6315@sabre> Hello all, As you may have noticed, I've been knocking around GHC and Cabal the past few weeks. One of the tasks that has been on my list is essentially reimplementing Philipp Schuster's 2012 GSoC, with a few small but important architectural differences. Here is a status report of what is going on. In my copy of GHC and Cabal [1,2], you can now install multiple copies of a package with differing dependencies to the package database, i.e. q-1.0 compiled against p-1.0, and against p-2.0. The packages in the database are distinguished via a *package key*, which is an md5 hash of the package id (e.g. q-1.0) and the sorted list of the package IDs of the transitive closure of dependencies (e.g. p-1.0 or p-2.0).
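(A rough illustration of what goes into that hash; this is not the actual Cabal/GHC code, and the concrete encoding below is invented for the example. Only the idea of "package id plus sorted transitive dependency ids, then md5" is taken from the description above.)

import Data.List (intercalate, sort)

type PackageId = String   -- e.g. "q-1.0" or "p-2.0"

-- Sketch: the string whose md5 hash would play the role of the package key.
packageKeySource :: PackageId -> [PackageId] -> String
packageKeySource pid transitiveDeps =
  intercalate "\n" (pid : sort transitiveDeps)

main :: IO ()
main = putStrLn (packageKeySource "q-1.0" ["p-2.0", "base-4.7.0.1"])
-- prints "q-1.0", "base-4.7.0.1", "p-2.0" on separate lines; hashing that
-- string gives a different key for q-1.0 built against p-1.0 than for
-- q-1.0 built against p-2.0, even though the package id is the same.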
The package key is used to generate linker symbols for packages, so it's possible to link together both copies of q in the same program, as long as you rename the modules appropriately (of course, the redefined types are considered unequal). When registering a package, ghc-pkg now checks and removes duplicate package keys, as opposed to package IDs. I've also implemented many of the necessary niceties for making it pleasant to deal with duplicated package IDs: for example, GHC's output logic has been adjusted to only qualify a package ID with the package key when there are multiples of the same package ID exposed in the database. I've also adjusted GHC's build system to use package keys rather than package IDs to refer to packages when building. Like the original GSoC project, we still need to have decisions on some of the major design choices. I think the most pressing one is "simplistic dependency resolution" when you use, e.g. ghc -package foo-1.0 or ./Setup configure --with-constraint="foo==1.0". Right now, I pick dependencies in an unspecified manner, check if they are consistent, and bail out if they are not. As it turns out, both GHC and ./Setup configure will already compute the transitive closure of dependencies, so I suspect we might be able to do something clever here. Less pressing but eventually necessary is clueing in cabal-install. Comments would be especially appreciated. Cheers, Edward [1] https://phabricator.haskell.org/D80 [2] https://github.com/haskell/cabal/pull/2002 From marlowsd at gmail.com Mon Jul 21 19:30:14 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 21 Jul 2014 20:30:14 +0100 Subject: Beta Performance dashboard In-Reply-To: <1405497741.2301.34.camel@kirk> References: <1405497741.2301.34.camel@kirk> Message-ID: <53CD6A46.3030404@gmail.com> This is great! Can it do alerts, e.g. send a mail to the list when a metric moves by a certain amount? Cheers, Simon On 16/07/2014 09:02, Joachim Breitner wrote: > Hi, > > I guess it's time to talk about this, especially as Richard just brought > it up again... > > I felt that we were seriously lacking in our grip on performance issues. > We don't even know whether 6.8.3 was better or worse than 6.8.3 or 7.6.4 > in terms of nofib, not to speak of the effect of each single commit. > > I want to change that, so I set up a benchmark monitoring dashboard. You > can currently reach it at: > > http://ghcspeed-nomeata.rhcloud.com/ > > What does it do? > ~~~~~~~~~~~~~~~~ > > It monitors the repository (master branch only) and builds each commit, > complete with the test suite and nofib. The log is saved and analyzed, > and some numbers are extracted: > * The build time > * The test suite summary numbers > * Runtime (if >1s), allocations and binary sizes of the nofib > benchmarks > > These are uploaded to the website above, which is powered by codespeed, > a general performance dashboard, implemented in Python using Django. > > Under _Changes_, it provides a report for each commit (changes wrt. to > the previous version, and wrt. to 10 revisions earlier, the so-called > "trend"). A summary of these reports is visible on the front-page. > > The _Timeline_ is a graph for each individual performance number. If > there are bumps, you can hopefully find them there! You can also compare > to 7.8.3, which is available as a "baseline". > > _Comparison_ will be more useful if we have more tagged revisions, or if > we were benchmarking various options (e.g. -fllvm): Here you can do > bar-chart comparisons. > > Why codespeed?
> ~~~~~~~~~~~~~~ > > For a long time I searched for a suitable software product, and one > criterion is that it should be open source, rather simple to set up and > mostly decoupled from other tools, i.e. something that I throw numbers > at and which then displays them nicely. While I don?t think codespeed is > the best performance dashboard out there (I find > http://goperfd.appspot.com/perf a bit better; I wonder how well > codespeed scales to even larger numbers of benchmarks and I wish it were > more git-aware), it was the easiest to get started with. And thanks to > the loose coupling of (1) running the tests to acquire a log, (2) > parsing the log to get numbers and (3) putting them on a server, we can > hopefully replace it when we come along something better. I was hoping > for the Phabricator guys to have something in their tool suite, but > doesn?t look like it. > > How does it work (currently)? > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > My office PC is underused (I work on my laptop), so its currently > dedicated to it. I have a simple shell script that monitors the repo for > new versions. It builds the newest revision and works itself back to the > commit where everything was turned into submodules: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/watch.sh > > It calls a script that does the actual building: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/run-speed.sh > This produces a log file which should contain all the required numbers > somewhere. > > A second script extracts these numbers (with help of nofib-analyze) and > converts them into codespeed compatible JSON files: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/log2json.pl > > Finally, a simple invocation to curl uploads them to codespeed: > https://github.com/nomeata/codespeed/blob/ghc/tools/ghc/upload.sh > > So if you want additional benchmarks to be tracked, make sure they are > present in the logs and adjust log2json.pl. codespeed will automatically > pick up new benchmarks in these logs. Reimplementations in Haskell are > also welcome :-) > > The testsuite is run with VERBOSE=4, so the performance numbers are also > shown for failing test cases. So once a test case goes over the limit, > you can grep through previous logs try to find the real culprit. I > uploaded the logs (so far) to https://github.com/nomeata/ghc-speed-logs > (but this is not automated yet, ping me if you need an update on this). > > What next? > ~~~~~~~~~~ > > Clearly, the current setup is only good enough to evaluate the system. > Eventually, I might want to use my office PC again, and the free hosting > on openshift is not very powerful. > > So if we want to keep this setup and make it ?official?, we need find a > permanent solution.? This involves: > > * A dedicated machine to run the benchmarks. This probably shouldn?t be > a VM, if we want to keep the noise in the runtime down. > * A machine to run the codespeed server. Can be a VM, or even run on > any of the system that we have right now. Just needs a database > (postgresql preferably) and a webserver supporting WSGI (i.e. any > of them). > * Maybe a better place to store the logs for public consumption. > > Also, there are way to improve the system: > > * As I said, I don?t think codespeed is the best. If we find something > better, we can replace it. Since we have all the logs, we can easily > fill the new system with the data, or even run both at the same time. > * We might want to have more numbers. 
I am already putting > lines-of-code and disk space usage numbers into the logs, but do not > parse them yet. > * In particular, we might want to put in each performance test case as > a benchmark of its own, to easier find commits that degrade (or > improve!) performance. I?m not sure how well the web page will handle > that. > * We might want to replace my rather simple watch.sh-script by > something more serious. In particular, I imagine that our builder > setup could manages this, with a dedicated builder doing the > benchmark runs and the builder server scheduling a build for each > commit. > > > That?s it for now. Enjoy clicking around! > > Greetings, > Joachim > > ? I guess that could be considered beta-reduction :-) > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From johan.tibell at gmail.com Mon Jul 21 19:31:18 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 21 Jul 2014 21:31:18 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: Great. Thanks all for your help! On Sat, Jul 19, 2014 at 2:04 AM, Austin Seipp wrote: > Thanks Niklas, this is now committed. > > On Fri, Jul 18, 2014 at 9:21 AM, Niklas Larsson wrote: >> I posted a working and tested patch last night. Please feel free to commit >> it, I haven't the rights to do it. >> >> Niklas >> ________________________________ >> Fr?n: Simon Peyton Jones >> Skickat: ?2014-?07-?18 15:55 >> Till: Niklas Larsson; Johan Tibell >> Kopia: ghc-devs at haskell.org >> ?mne: RE: Windows breakage -- again >> >> Thank you all for pursuing this. I gather that you know what is going on, >> so no further info needed from me. Yell if it is otherwise. >> >> >> >> Meanwhile, is the fix imminent, or should we revert Johan?s patch? >> >> >> >> Simon >> >> >> >> From: Niklas Larsson [mailto:metaniklas at gmail.com] >> Sent: 16 July 2014 19:58 >> To: Johan Tibell; Simon Peyton Jones >> Cc: ghc-devs at haskell.org >> Subject: Re: Windows breakage -- again >> >> >> >> I get the same failure when I try to build HEAD. Turns out the error occurs >> on the 32-bit Windows build, and my successful build was a 64-bit build. My >> 64-bit build still succeeds. >> >> Also, gcc is 4.5.2 on 32-bit, not 4.6.3 as on 64-bit. >> >> Niklas >> >> >> >> >> >> 2014-07-16 14:48 GMT+02:00 Niklas Larsson : >> >> I have built ghc on windows after that was added with no issue. >> >> I can take a look this evening and see how HEAD works for me. >> >> The standard gcc in the tarballs is 4.6.3, which is getting long in the >> tooth, there is an issue on trac to upgrade it. >> >> -- Niklas >> >> ________________________________ >> >> Fr?n: Johan Tibell >> Skickat: ?2014-?07-?16 09:57 >> Till: Simon Peyton Jones >> Kopia: ghc-devs at haskell.org >> ?mne: Re: Windows breakage -- again >> >> You can rollback the commit (git revert >> 4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) and push that to the repo if you >> wish. I will try to re-add the primop again after I figure out what's wrong. >> >> >> >> On Wed, Jul 16, 2014 at 9:37 AM, Johan Tibell >> wrote: >> >> I added some primops about a month ago >> (4ee4ab01c1d97845aecb7707ad2f9a80933e7a49) that call __sync_fetch_and_add, a >> gcc/llvm builtin. I'm a bit surprised to see this error. The GCC manual [1] >> says: >> >> >> >>> " Not all operations are supported by all target processors. 
If a >>> particular operation cannot be implemented on the target processor, a >>> warning will be generated and a call an external function will be generated. >>> The external function will carry the same name as the builtin, with an >>> additional suffix `_n' where n is the size of the data type." >> >> >> >> I'm a bit surprised by this error for two reasons: >> >> >> >> * A call to that symbol should only be generated if the CPU doesn't support >> the atomic instructions. What CPU model does Windows report that you have? >> >> >> >> * gcc should define such a symbol. For me the following test program >> compiles: >> >> >> >> #include >> >> >> >> uint8_t test(uint8_t* ptr, uint8_t val) { >> >> return __sync_fetch_and_add_1(ptr, val); >> >> } >> >> >> >> int main(void) { >> >> uint8_t n; >> >> return test(&n, 1); >> >> } >> >> >> >> Does that compile for you? Which version of GCC do we end up using on >> Windows? >> >> >> >> The reported symbol (___sync_fetch_and_add_1) has three leading underscores, >> that looks weird. Can you compile just libraries/ghc-prim/cbits/atomic.c and >> see if it's indeed GCC that generates a reference to that symbol? >> >> >> >> 1. http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Atomic-Builtins.html >> >> >> >> On Wed, Jul 16, 2014 at 12:29 AM, Simon Peyton Jones >> wrote: >> >> Aargh! The Windows build has broken ? again. I can?t build GHC on my >> laptop any more. >> >> >> [Hela det ursprungliga meddelandet tas inte med.] >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Mon Jul 21 19:32:50 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 21 Jul 2014 20:32:50 +0100 Subject: Thread status constants In-Reply-To: References: Message-ID: <53CD6AE2.5070901@gmail.com> On 19/07/2014 06:00, Kyle Van Berendonck wrote: > I found these: > https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f65692867825/includes/rts/Constants.h#L194 > > They go only 0-14, so there's some long chains of branches and stuff in > hot paths that could be cleaned up into single &-masked branches by > changing these into a set of flags. GHC will generate a table jump for a large dense case, if that's what you're worried about. Cheers, Simon From mail at joachim-breitner.de Mon Jul 21 20:06:49 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 21 Jul 2014 22:06:49 +0200 Subject: Multi-instance packages status report In-Reply-To: <1405962902-sup-6315@sabre> References: <1405962902-sup-6315@sabre> Message-ID: <1405973209.3485.4.camel@joachim-breitner.de> Hi, Am Montag, den 21.07.2014, 18:43 +0100 schrieb Edward Z.Yang: > In my copy of GHC and Cabal [1,2], you can now install multiple copies of a > package with differing dependencies to the package database, i.e. q-1.0 > compiled against p-1.0, and against p-2.0. The packages in the database > are distinguished via a *package key*, which is an md5 hash of the > package id (e.g. q-1.0) and the sorted list of the package IDs of the > transitive closure of dependencies (e.g. p-1.0 or p-2.0). maybe a stupid question, but how does the package key relate to the hash that "ghc-pkg" shows for package? Thanks, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Mon Jul 21 20:07:55 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 21 Jul 2014 22:07:55 +0200 Subject: Beta Performance dashboard In-Reply-To: <53CD6A46.3030404@gmail.com> References: <1405497741.2301.34.camel@kirk> <53CD6A46.3030404@gmail.com> Message-ID: <1405973275.3485.5.camel@joachim-breitner.de> Hi, Am Montag, den 21.07.2014, 20:30 +0100 schrieb Simon Marlow: > This is great! Can it do alerts, e.g. send a mail to the list when a > metric moves by a certain amount? no (or not yet). Large deviations are listed in the summary on http://ghcspeed-nomeata.rhcloud.com/, and I try to keep an eye on them. But the whole thing needs to settle first anyways. For example, in order to get the "unexpected failures" to zero we need to find an agreement in #9315 (where input from others would be helpful as well) The last few reports may be a bit noisy as I am re-measuring them with changed build settings. Greetings, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From kvanberendonck at gmail.com Mon Jul 21 20:51:11 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Tue, 22 Jul 2014 06:51:11 +1000 Subject: Thread status constants In-Reply-To: <53CD6AE2.5070901@gmail.com> References: <53CD6AE2.5070901@gmail.com> Message-ID: Hi Simon, I have been inspecting what gcc generates and found that in all the functions I checked which used the thread status in a switch, none of them generated jump tables, rather, long chains of cmp's. I predict this is because there are overlapping cases (or even a fall-through in one I saw, which looked accidental...). On Tue, Jul 22, 2014 at 5:32 AM, Simon Marlow wrote: > On 19/07/2014 06:00, Kyle Van Berendonck wrote: > >> I found these: >> https://github.com/ghc/ghc/blob/5f3c5384df59717ca8013c5df8d1f6 >> 5692867825/includes/rts/Constants.h#L194 >> >> They go only 0-14, so there's some long chains of branches and stuff in >> hot paths that could be cleaned up into single &-masked branches by >> changing these into a set of flags. >> > > GHC will generate a table jump for a large dense case, if that's what > you're worried about. > > Cheers, > Simon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From robstewart57 at gmail.com Mon Jul 21 21:55:22 2014 From: robstewart57 at gmail.com (Rob Stewart) Date: Mon, 21 Jul 2014 22:55:22 +0100 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: <1405670511.8383.6.camel@kirk> References: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> <1405670511.8383.6.camel@kirk> Message-ID: On 18 July 2014 09:01, Joachim Breitner wrote: > Am Freitag, den 18.07.2014, 07:25 +0000 schrieb Simon Peyton Jones: >> | On Saturday 6th September is the Haskell Implementers Workshop. There >> | has been plenty of discussion over the last 12 months about making >> | contributions to GHC less formidable. 
Is this story going to be told at >> | HIW? A talk about revised contribution guidelines and helpful tool >> | support might engage those sat on, or peering over, the fence. >> >> I think that's a great idea. Maybe Simon M, or Joachim, or Austin, or Herbert? Of some coalition thereof > > I agree, and I?d be available for it, or for joining a coalition. I gentle nudge about the idea of a HIW talk on contributing to GHC development. I'm glad some people think that this is a good idea. However, given that the official deadline for talk proposals has already passed, at least an abstract would have to be submitted to the HIW committee very soon to be considered. The presentation content can of course be put together much closer to the time. I don't want to create work for anyone, of course. -- Rob From mark.lentczner at gmail.com Tue Jul 22 01:02:30 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Mon, 21 Jul 2014 18:02:30 -0700 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> <1405670511.8383.6.camel@kirk> Message-ID: On a related front... I don't have a talk to give (hence I didn't submit a proposal)... But I'd love it if some of us could have a group discussion about coordinating releases, and our approach to putting out "Haskell": In short, we see it as several related peices (GHC, Cabal, Haddock, core libs, platform, etc...) but my guess is that most developers considering using Haskell see it as one thing: "Can I haz the Haskellz on my machine? kthxbai?" Therefore, I think we could put some thought into how we manage these pieces into a cohesive whole whose release more or less "just works". Not sure if this should be a "session", a "workshop", a long hallway disucssion, a night of good food and beer, or what. I'm happy to put some effort into organizing, and setting the context for the discussion. - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Tue Jul 22 04:53:15 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 22 Jul 2014 06:53:15 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: 2014-07-21 21:31 GMT+02:00 Johan Tibell : > Great. Thanks all for your help! I am afraid we are not done with this yet. Yesterday I have also committed the fix for the FreeBSD platform, but today I noticed that the corresponding test case ("AtomicPrimops") is failing due to SIGILL, that is, illegal instruction. And it has been happening for all the 32-bit platforms, including Linux [1], SmartOS [2], and Solaris [3]. I do not know yet why it goes wrong. 
[1] http://haskell.inf.elte.hu/builders/validator1-linux-x86-head/34/10.html [2] http://haskell.inf.elte.hu/builders/smartos-x86-head/73/21.html [3] http://haskell.inf.elte.hu/builders/solaris-x86-head/116/21.html From mail at joachim-breitner.de Tue Jul 22 07:23:22 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 22 Jul 2014 09:23:22 +0200 Subject: Multi-instance packages status report In-Reply-To: <1405981078-sup-1728@sabre> References: <1405962902-sup-6315@sabre> <1405973209.3485.4.camel@joachim-breitner.de> <1405981078-sup-1728@sabre> Message-ID: <1406013802.2079.3.camel@joachim-breitner.de> [Replying to the list, in case it was sent to me in private by accident] Hi Edward, Am Montag, den 21.07.2014, 23:25 +0100 schrieb Edward Z.Yang: > Excerpts from Joachim Breitner's message of 2014-07-21 21:06:49 +0100: > > maybe a stupid question, but how does the package key relate to the hash > > that "ghc-pkg" shows for package? > > Fine question---this is definitely something that is different from the > GSoC project. The short answer is, the current hash shown in ghc-pkg is > the ABI hash associated with the InstalledPackageId, which is computed > after GHC is done compiling your code; whereas the package key is a > hash of the dependency graph, which can be done before compilation. > > The longer answer is we now have three ID-like things, in order of > increasing specificity: > > Package IDs: containers-0.9 > These are the "user visible" things that we expect users to talk > about in Cabal file > Package Keys: md5("containers-0.9" + transitive deps) > These are the identifiers the compiler cares about: they are used > for type equality, and contain a bit more detail than we expect > a user to normally need---however, a user might need to refer to > this to disambiguate in some situations. > Installed Package IDs: ABI hash of compiled code > This uniquely identifies an installed package in the database, up > to ABI. > > So, if two packages have the same IPID, their package keys are > guaranteed to be the same, but not vice versa. (And likewise for package > IDs.) thanks for the explanations, it makes it clear to me. Do the package key contain the flags used to compile dependencies? In the example where it could matter the flag would change that package?s key, so maybe it is redundant.... And just to confirm my understandn: If we had a completely reproducible environment, the same key would (conceptually, not practically) imply the same IPID, right? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Tue Jul 22 07:43:21 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 22 Jul 2014 09:43:21 +0200 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? 
In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> <1405670511.8383.6.camel@kirk> Message-ID: <1406015001.2079.5.camel@joachim-breitner.de> Hi Rob, Am Montag, den 21.07.2014, 22:55 +0100 schrieb Rob Stewart: > On 18 July 2014 09:01, Joachim Breitner wrote: > > Am Freitag, den 18.07.2014, 07:25 +0000 schrieb Simon Peyton Jones: > >> | On Saturday 6th September is the Haskell Implementers Workshop. There > >> | has been plenty of discussion over the last 12 months about making > >> | contributions to GHC less formidable. Is this story going to be told at > >> | HIW? A talk about revised contribution guidelines and helpful tool > >> | support might engage those sat on, or peering over, the fence. > >> > >> I think that's a great idea. Maybe Simon M, or Joachim, or Austin, > >> or Herbert? Of some coalition thereof > > > > I agree, and I?d be available for it, or for joining a coalition. > > I gentle nudge about the idea of a HIW talk on contributing to GHC > development. I'm glad some people think that this is a good idea. > However, given that the official deadline for talk proposals has > already passed, at least an abstract would have to be submitted to the > HIW committee very soon to be considered. The presentation content can > of course be put together much closer to the time. for some reason I assumed you were part of the committee (your mail sounded to me like ?I?m responsible for this event, and would like to such a talk?), so I wasn?t paying close attention to the deadline. But I see that?s not the case... The registration is closed on Easy Chair. So I?ll make a submission directly to Carter (who spoke in favor of this last Thursday ? past the deadline) and Jost (the chair). Maybe there is still a slot left. ====================================================================== Desperately late submission to HIW: Contributing to GHC ~~~~~~~~~~~~~~~~~~~ The core component of the Haskell ecosystem, the Glasgow Haskell Compiler (GHC) is not only open source, it is also a proper open source project relying on the work of volunteers. Despite its age and its apparent complexity, new contributors are not needed but actually useful to the project. Recently, the project has seen some changes that make it even easier for you to start hacking on it, more convenient to get your changes reviewed and harder to break anything: Our repositories have a less custom setup; a tool called Phabricator is used for efficient and meme-ridden code review; various quality assurances services detect breakage and performance regressions early. This extends our existing tools (trac, the mailing lists) and practices (notes, an extensive test suite) that keep working on GHC manageable. In this talk we give an overview of old and new practices and tools, especially aiming at interested newcomers, lowering the entry barrier to contributing to GHC. ====================================================================== (Side remark: I would have like to start the summary with ?GHC is attracting ever contributors?, but according to the graph at https://github.com/ghc/ghc/graphs/contributors this is not obvious. There have been higher spikes some years ago. But at least we seem to have a higher stable base. Although less than in 2011 and 2012.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From metaniklas at gmail.com Tue Jul 22 07:50:27 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Tue, 22 Jul 2014 09:50:27 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: AtomicPrimOps.hs flakes out for: fetchAndTest fetchNandTest fetchOrTest fetchXorTest casTest but not for fetchAddSubTest and readWriteTest. If I step through it, the segfault comes at line 166, it doesn't reach the .fetchXXXIntArray function that was called from the thread (at least ghci doesn't hit a breakpoint set at it). GDB says the bad instruction is: 4475: f0 8b 4c 24 40 lock mov 0x40(%esp),%ecx Niklas 2014-07-22 6:53 GMT+02:00 P?li G?bor J?nos : > 2014-07-21 21:31 GMT+02:00 Johan Tibell : > > Great. Thanks all for your help! > > I am afraid we are not done with this yet. Yesterday I have also > committed the fix for the FreeBSD platform, but today I noticed that > the corresponding test case ("AtomicPrimops") is failing due to > SIGILL, that is, illegal instruction. And it has been happening for > all the 32-bit platforms, including Linux [1], SmartOS [2], and > Solaris [3]. > > I do not know yet why it goes wrong. > > [1] > http://haskell.inf.elte.hu/builders/validator1-linux-x86-head/34/10.html > [2] http://haskell.inf.elte.hu/builders/smartos-x86-head/73/21.html > [3] http://haskell.inf.elte.hu/builders/solaris-x86-head/116/21.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Tue Jul 22 08:07:44 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Tue, 22 Jul 2014 10:07:44 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs Message-ID: <8738dtyfxr.fsf@gnu.org> Hello *, As some of you may have already noticed, there's an attempt[1] in the works to reimplement integer-gmp in such a way to avoid overriding GMP's internal memory allocator functions, and thus make it possible to link GHC/integer-gmp compiled programs with other components linked to libgmp which break if GMP's memory allocation goes via GHC's GC. I also hope this will facilitate to ship GHC bindists for Windows with a dynamically linked (& unpatched!) GMP library, to reduce LGPL licencing concerns for resulting GHC compiled programs. So far, I've only been able to test the code on Linux/i386 and Linux/amd64 where it works correctly. Now it'd be interesting to know if integer-gmp2 in its current form works also on non-Linux archs, and if not, what's needed to make it work. Fwiw, I mostly suspect linker-related issues. Therefore, is anyone here interested to help out with making sure GHC+integer-gmp2 builds on Windows, OSX and so on? If so, please get into contact with me! Cheers, hvr [1]: https://ghc.haskell.org/trac/ghc/ticket/9281 https://phabricator.haskell.org/D82 From ezyang at mit.edu Tue Jul 22 08:11:59 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Tue, 22 Jul 2014 09:11:59 +0100 Subject: Multi-instance packages status report In-Reply-To: <1406013802.2079.3.camel@joachim-breitner.de> References: <1405962902-sup-6315@sabre> <1405973209.3485.4.camel@joachim-breitner.de> <1405981078-sup-1728@sabre> <1406013802.2079.3.camel@joachim-breitner.de> Message-ID: <1406016183-sup-7529@sabre> Excerpts from Joachim Breitner's message of 2014-07-22 08:23:22 +0100: > [Replying to the list, in case it was sent to me in private by accident] Oops, thanks. > thanks for the explanations, it makes it clear to me. > > Do the package key contain the flags used to compile dependencies? In > the example where it could matter the flag would change that package?s > key, so maybe it is redundant.... That is a good question. At the moment, flags are not incorporated, but they could be. I think it probably makes more sense to include them, but it does require accommodation from the dependency solver which doesn't exist at the moment. > And just to confirm my understandn: If we had a completely reproducible > environment, the same key would (conceptually, not practically) imply > the same IPID, right? I don't even think that's necessary conceptually true. If I am working on a package in development and I modify the type of one file, the package key (as currently described) stays the same, but the ABI hash changes. I think the overwrite behavior can still be handy in development situations since it avoids the "lots of old packages" problem that the GSoC project had to deal with. Conversely, I don't think the package key should include something like the hash of the sources of the source tree, because it is totally possible for differing sources to be ABI compatible (and thus have the same IPID). But what this points to is the need to differentiate between ABI (not unique) and "true IPID" (which is absolutely, completely unique). Edward From simonpj at microsoft.com Tue Jul 22 08:38:22 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 22 Jul 2014 08:38:22 +0000 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF1043098D@DB3PRD3001MB020.064d.mgd.msft.net> <1405670511.8383.6.camel@kirk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10435304@DB3PRD3001MB020.064d.mgd.msft.net> I think such a discussion would be a Good Thing, and just what HIW is for. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Mark Lentczner Sent: 22 July 2014 02:03 To: ghc-devs at haskell.org Subject: Re: GHC contribution guidelines and infrastructure talk on 6th September at HIW? On a related front... I don't have a talk to give (hence I didn't submit a proposal)... But I'd love it if some of us could have a group discussion about coordinating releases, and our approach to putting out "Haskell": In short, we see it as several related peices (GHC, Cabal, Haddock, core libs, platform, etc...) but my guess is that most developers considering using Haskell see it as one thing: "Can I haz the Haskellz on my machine? kthxbai?" Therefore, I think we could put some thought into how we manage these pieces into a cohesive whole whose release more or less "just works". Not sure if this should be a "session", a "workshop", a long hallway disucssion, a night of good food and beer, or what. I'm happy to put some effort into organizing, and setting the context for the discussion. - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marlowsd at gmail.com Tue Jul 22 09:06:30 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 22 Jul 2014 10:06:30 +0100 Subject: RFC: unsafeShrinkMutableByteArray# In-Reply-To: <87tx6lflhd.fsf@gmail.com> References: <87sim7815x.fsf@gmail.com> <53C156D7.9080008@gmail.com> <87tx6lflhd.fsf@gmail.com> Message-ID: <53CE2996.3000200@gmail.com> On 13/07/14 14:15, Herbert Valerio Riedel wrote: > On 2014-07-12 at 17:40:07 +0200, Simon Marlow wrote: >> Yes, this will cause problems in some modes, namely -debug and -prof >> that need to be able to scan the heap linearly. > > ...and I assume we don't want to fallback to a non-zerocopy mode for > -debug & -prof in order avoid distorting the profiling measurements > either? I suppose that would be doable. Not ideal, but doable. In profiling mode you could arrange for the extra allocation to be assigned to CCS_OVERHEAD, so that it gets counted as profiling overhead. You'd still have the time overhead of the copy though. >> Usually we invoke the >> OVERWRITING_CLOSURE() macro which overwrites the original closure with >> zero words, but this won't work in your case because you want to keep >> the original contents. So you'll need a version of >> OVERWRITING_CLOSURE() that takes the size that you want to retain, and >> doesn't overwrite that part of the closure. This is probably a good >> idea anyway, because it might save some work in other places where we >> use OVERWRITING_CLOSURE(). > > I'm not sure I follow. What's the purpose of overwriting the original > closure payload with zeros while in debug/profile mode? (and on what > occasions that would be problematic for a MutableByteArray does it > happen?) Certain features of the RTS need to be able to scan the contents of the heap by linearly traversing the memory. When there are gaps between heap objects, there needs to be a way to find the start of the next heap object, so currently when we overwrite an object with a smaller one we clear the payload with zeroes. There are more efficient ways, such as overwriting with a special "gap" object, but since the times we need to do this are not performance critical, we haven't optimised it. Currently we need to do this * in debug mode, for heap sanity checking * in profiling mode, for biographical profiling The macro that does this, OVERWRITING_CLOSURE() currently overwrites the whole payload of the closure with zeroes, whereas you want to retain part of the closure, so you would need a different version of this macro. >> I am worried about sizeofMutableByteArray# though. It wouldn't be >> safe to call sizeofMutableByteArray# on the original array, just in >> case it was evaluated after the shrink. You could make things >> slightly safer by having unsafeShrinkMutableByteArray# return the new >> array, so that you have a safe way to call sizeofMutableByteArray# >> after the shrink. This still doesn't seem very satisfactory to me >> though. > > ...as a somewhat drastic obvious measure, one could change the type-sig > of sizeofMutableByteArray# to > > :: MutableByteArray# s a -> State# s -> (# State# s, Int# #) > > and fwiw, I could find only one use-site of sizeofMutableByteArray# > inside ghc.git, so I'm wondering if that primitive is used much anyway. I think that would definitely be better, if it is possible without too much breakage. Once we have operations that change the size of an array, the operation that reads the size should be stateful. 
> btw, is it currently safe to call/evaluate sizeofMutableByteArray# on > the original MBA after a unsafeFreezeByteArray# was performed? Probably safe, but better to avoid doing it if you can. > Otoh, if we are to thread a MutableByteArray# through the call anyway, > can't we just combine shrinking and freezing in one primop (as suggested > below)? I don't think this makes anything easier. You still need to overwrite the unused part of the array, and sizeofMutableByteArray# is still dangerous. Cheers, Simon > [...] > >>> PS: maybe unsafeShrinkMutableByteArray# could unsafe-freeze the >>> ByteArray# while at it (thus be called something like >>> unsafeShrinkAndFreezeMutableByteArray#), as once I know the final >>> smaller size I would freeze it anyway right after shrinking. From johan.tibell at gmail.com Tue Jul 22 09:49:16 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 22 Jul 2014 11:49:16 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: On Tue, Jul 22, 2014 at 9:50 AM, Niklas Larsson wrote: > AtomicPrimOps.hs flakes out for: > fetchAndTest > fetchNandTest > fetchOrTest > fetchXorTest > casTest > > but not for fetchAddSubTest and readWriteTest. > > If I step through it, the segfault comes at line 166, it doesn't reach the > .fetchXXXIntArray function that was called from the thread (at least ghci > doesn't hit a breakpoint set at it). > > GDB says the bad instruction is: > 4475: f0 8b 4c 24 40 lock mov 0x40(%esp),%ecx Is this on FreeBSD only or does it happen elsewhere? From berthold at Mathematik.Uni-Marburg.de Tue Jul 22 10:12:03 2014 From: berthold at Mathematik.Uni-Marburg.de (Jost Berthold) Date: Tue, 22 Jul 2014 12:12:03 +0200 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: References: Message-ID: <53CE38F3.9080201@mathematik.uni-marburg.de> (Sorry for joining this late... I figured we would be in dialogue off the list eventually) Joachim wrote and posted a proposal, and I think this proposal is indeed a good idea (and one of the purposes of HIW, definite yes). We shall make room for it in the programme, possibly in the last session, which can turn into the "Haskell release discussion evening". Best regards Jost On 07/22/2014 11:06 AM, ghc-devs-request at haskell.org wrote: > Date: Tue, 22 Jul 2014 08:38:22 +0000 > From: Simon Peyton Jones > To: Mark Lentczner , "ghc-devs at haskell.org" > > Subject: RE: GHC contribution guidelines and infrastructure talk on > 6th September at HIW? > Message-ID: > <618BE556AADD624C9C918AA5D5911BEF10435304 at DB3PRD3001MB020.064d.mgd.msft.net> > > Content-Type: text/plain; charset="utf-8" > > I think such a discussion would be a Good Thing, and just what HIW is for. > > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Mark Lentczner > Sent: 22 July 2014 02:03 > To: ghc-devs at haskell.org > Subject: Re: GHC contribution guidelines and infrastructure talk on 6th September at HIW? > > On a related front... I don't have a talk to give (hence I didn't submit a proposal)... But I'd love it if some of us could have a group discussion about coordinating releases, and our approach to putting out "Haskell": > > In short, we see it as several related peices (GHC, Cabal, Haddock, core libs, platform, etc...) but my guess is that most developers considering using Haskell see it as one thing: "Can I haz the Haskellz on my machine? kthxbai?" 
Therefore, I think we could put some thought into how we manage these pieces into a cohesive whole whose release more or less "just works". > > Not sure if this should be a "session", a "workshop", a long hallway disucssion, a night of good food and beer, or what. I'm happy to put some effort into organizing, and setting the context for the discussion. > > - Mark From robstewart57 at gmail.com Tue Jul 22 10:17:35 2014 From: robstewart57 at gmail.com (Rob Stewart) Date: Tue, 22 Jul 2014 11:17:35 +0100 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: <53CE38F3.9080201@mathematik.uni-marburg.de> References: <53CE38F3.9080201@mathematik.uni-marburg.de> Message-ID: On 22 July 2014 11:12, Jost Berthold wrote: > We shall make room for it in the programme, possibly in the last session, > which can turn into the "Haskell release discussion evening". Fantastic, thank you Jost. Are HIW talks to be recorded? For those budding GHC to-be contributors unable to attend Gothenburg, a recording online would be very helpful. Malcolm Wallace has been very helpful with this in the past. -- Rob From pali.gabor at gmail.com Tue Jul 22 10:22:18 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 22 Jul 2014 12:22:18 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: 2014-07-22 11:49 GMT+02:00 Johan Tibell : > Is this on FreeBSD only or does it happen elsewhere? I would say it happens everywhere (on 32 bits). I guess Niklas was debugging the mingw32 version. From simonpj at microsoft.com Tue Jul 22 10:28:05 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 22 Jul 2014 10:28:05 +0000 Subject: tcInferRho In-Reply-To: <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> References: <1F535F4F-02E6-440C-B0F8-023324A85417@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1043185E@DB3PRD3001MB020.064d.mgd.msft.net> <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> Richard You are right; there is something squishy here. The original idea was that a unification variable only stands for a *monotype* (with no for-alls). But our basic story for the type inference engine is tcExpr :: HsExpr -> TcType -> TcM HsExpr' which checks that the expression has the given expected type. To do inference we pass in a unification variable as the "expected type". BUT if the expression actually has a type like (forall a. a->a) -> Int, then the unification variable clearly isn't being unified with a monotype. There are a couple of places where we must "zonk" the expected type, after calling tcExpr, to expose the foralls. A major example is TcExpr.tcInferFun. I say this is squishy because *in principle* we could replace every unification with generating an equality constraint, for later solving. (This does often happen, see TcUnify.uType_defer.) BUT if we generate an equality constraint, the zonking won't work, and the foralls won't be exposed early enough. I wish that the story here was more solid. The original idea of tcInferRho was to have some special cases that did not rely on this squishy "unify with polytype" story. It had a number of special cases, perhaps not enough as you observe. But it does look as if the original goal (which I think was to deal with function applications) doesn't even use it -- it uses tcInferFun instead. 
So I think you may be right: tcInferRho may not be important. There is a perhaps-significant efficiency question though: it avoids allocating an unifying a fresh unification variable each time. Simon | -----Original Message----- | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | Sent: 18 July 2014 22:00 | To: Simon Peyton Jones | Subject: Re: tcInferRho | | I thought as much, but I can't seem to tickle the bug. For example: | | > {-# LANGUAGE RankNTypes #-} | > | > f :: Int -> Bool -> (forall a. a -> a) -> Int | > f = undefined | > | > x = (3 `f` True) | > | | | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as we | would hope. If we were somehow losing the higher-rank polymorphism | without tcInferRho, then I would expect something like `(3 `f` True) $ | not)` to succeed (or behave bizarrely), but we get a very sensible type | error | | Couldn't match type 'a' with 'Bool' | 'a' is a rigid type variable bound by | a type expected by the context: a -> a | at /Users/rae/temp/Bug.hs:6:5 | Expected type: a -> a | Actual type: Bool -> Bool | In the second argument of '($)', namely 'not' | In the expression: (3 `f` True) $ not | | So, instead of just adding more cases, I wonder if we can't *remove* | cases, as it seems that the gears turn fine without this function. This | continues to surprise me, but it's what the evidence indicates. Can you | make any sense of this? | | Thanks, | Richard | | | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones | wrote: | | > You're right, its' an omission. The reason for the special case is | described in the comment on tcInferRho. Adding OpApp would be a Good | Thing. A bit tiresome because we'd need to pass to tcInferApp the | function to use to reconstruct the result HsExpr (currently foldl | mkHsApp, in tcInferApp), so that in the OpApp case it'd reconstruct an | OpApp. | > | > Go ahead and do this if you like | > | > S | > | > | -----Original Message----- | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | Sent: 17 July 2014 18:48 | > | To: Simon Peyton Jones | > | Subject: tcInferRho | > | | > | Hi Simon, | > | | > | I'm in the process of rejiggering the functions in TcHsType to be | more | > | like those in TcExpr, in order to handle the richer type/kind | language | > | of my branch. | > | | > | I have a question about tcInferRho (TcExpr.lhs:115). It calls | > | tcInfExpr, which handles three special cases of HsExpr, before | > | deferring to tcExpr. The three cases are HsVar, HsPar, and HsApp. | > | What's odd about this is that there are other cases that seem to | belong | > | in this group, like OpApp. After all, (x + y) and ((+) x y) should | > | behave the same in all circumstances, right? I can't find a way to | > | tickle the omission here, so there may not be a bug, but it certainly | > | is a little strange. Can you shed any light? | > | | > | Thanks! | > | Richard | > From metaniklas at gmail.com Tue Jul 22 10:41:07 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Tue, 22 Jul 2014 12:41:07 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: That's true, I used mingw. I have created a ticket https://ghc.haskell.org/trac/ghc/ticket/9346#ticket. 2014-07-22 12:22 GMT+02:00 P?li G?bor J?nos : > 2014-07-22 11:49 GMT+02:00 Johan Tibell : > > Is this on FreeBSD only or does it happen elsewhere? > > I would say it happens everywhere (on 32 bits). I guess Niklas was > debugging the mingw32 version. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jul 22 11:17:54 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 22 Jul 2014 12:17:54 +0100 Subject: a little phrustrated In-Reply-To: <1405536989-sup-1770@sabre> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> <1405536989-sup-1770@sabre> Message-ID: <53CE4862.8000502@gmail.com> On 16/07/14 20:02, Edward Z. Yang wrote: > Hello Richard, > >> 1) I had some untracked files in a submodule repo. I couldn't find a way to get `arc diff` to ignore these, as they appeared to git to be a change in a tracked file (that is, a change to a submodule, which is considered tracked). `git stash` offered no help, so I had to delete the untracked files. This didn't cause real pain (the files were there in error), but it seems a weakness of the system if I can't make progress otherwise. > > Yes, this was fairly painful for me as well. One way to make the pain > go away and help others out is improve the .gitignore files so these > files are not considered tracked. Here is another thread discussing > this problem: > > http://comments.gmane.org/gmane.comp.version-control.git/238173 > > though I haven't read through it fully yet. If you go into your .git/config file in the GHC repo, and add "ignore = untracked", like this: [submodule "nofib"] url = /home/simon/ghc-mirror/nofib.git ignore = untracked Then git won't consider untracked files in that submodule as making that submodule dirty, and you'll be able to happily "arc diff". Cheers, Simon From simonpj at microsoft.com Tue Jul 22 11:22:34 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 22 Jul 2014 11:22:34 +0000 Subject: a little phrustrated In-Reply-To: <53CE4862.8000502@gmail.com> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> <1405536989-sup-1770@sabre> <53CE4862.8000502@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10435BE1@DB3PRD3001MB020.064d.mgd.msft.net> Maybe add this useful lore to Git guidance or Phabricator guidance? S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Marlow | Sent: 22 July 2014 12:18 | To: Edward Z. Yang; Richard Eisenberg | Cc: ghc-devs at haskell.org | Subject: Re: a little phrustrated | | On 16/07/14 20:02, Edward Z. Yang wrote: | > Hello Richard, | > | >> 1) I had some untracked files in a submodule repo. I couldn't find a | way to get `arc diff` to ignore these, as they appeared to git to be a | change in a tracked file (that is, a change to a submodule, which is | considered tracked). `git stash` offered no help, so I had to delete | the untracked files. This didn't cause real pain (the files were there | in error), but it seems a weakness of the system if I can't make | progress otherwise. | > | > Yes, this was fairly painful for me as well. One way to make the | pain | > go away and help others out is improve the .gitignore files so these | > files are not considered tracked. Here is another thread discussing | > this problem: | > | > http://comments.gmane.org/gmane.comp.version-control.git/238173 | > | > though I haven't read through it fully yet. | | If you go into your .git/config file in the GHC repo, and add "ignore = | untracked", like this: | | [submodule "nofib"] | url = /home/simon/ghc-mirror/nofib.git | ignore = untracked | | Then git won't consider untracked files in that submodule as making | that submodule dirty, and you'll be able to happily "arc diff". 
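The same setting can also be made from the command line rather than by editing .git/config by hand. A minimal equivalent, assuming the submodule is registered under the name "nofib" as in the snippet above; the second, broader variant (which ignores all dirty submodule state when diffing) is the one suggested elsewhere in this thread:

    $ git config submodule.nofib.ignore untracked
    $ git config --global diff.ignoreSubmodules dirty
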
| | Cheers, | Simon | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From marlowsd at gmail.com Tue Jul 22 11:27:46 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 22 Jul 2014 12:27:46 +0100 Subject: Multi-instance packages status report In-Reply-To: <1406013802.2079.3.camel@joachim-breitner.de> References: <1405962902-sup-6315@sabre> <1405973209.3485.4.camel@joachim-breitner.de> <1405981078-sup-1728@sabre> <1406013802.2079.3.camel@joachim-breitner.de> Message-ID: <53CE4AB2.8010909@gmail.com> On 22/07/14 08:23, Joachim Breitner wrote: > [Replying to the list, in case it was sent to me in private by accident] > > > Hi Edward, > > Am Montag, den 21.07.2014, 23:25 +0100 schrieb Edward Z.Yang: >> Excerpts from Joachim Breitner's message of 2014-07-21 21:06:49 +0100: >>> maybe a stupid question, but how does the package key relate to the hash >>> that "ghc-pkg" shows for package? >> >> Fine question---this is definitely something that is different from the >> GSoC project. The short answer is, the current hash shown in ghc-pkg is >> the ABI hash associated with the InstalledPackageId, which is computed >> after GHC is done compiling your code; whereas the package key is a >> hash of the dependency graph, which can be done before compilation. >> >> The longer answer is we now have three ID-like things, in order of >> increasing specificity: >> >> Package IDs: containers-0.9 >> These are the "user visible" things that we expect users to talk >> about in Cabal file >> Package Keys: md5("containers-0.9" + transitive deps) >> These are the identifiers the compiler cares about: they are used >> for type equality, and contain a bit more detail than we expect >> a user to normally need---however, a user might need to refer to >> this to disambiguate in some situations. >> Installed Package IDs: ABI hash of compiled code >> This uniquely identifies an installed package in the database, up >> to ABI. >> >> So, if two packages have the same IPID, their package keys are >> guaranteed to be the same, but not vice versa. (And likewise for package >> IDs.) (Replying to Edward) It's not clear to me why identical IPID would imply identical package key. Can't two instances of a package compiled against different dependencies still have identical ABIs? Reviewing your patches is next on my queue... Cheers, Simon From metaniklas at gmail.com Tue Jul 22 11:33:04 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Tue, 22 Jul 2014 13:33:04 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs In-Reply-To: <8738dtyfxr.fsf@gnu.org> References: <8738dtyfxr.fsf@gnu.org> Message-ID: I can test on Windows. Niklas 2014-07-22 10:07 GMT+02:00 Herbert Valerio Riedel : > Hello *, > > As some of you may have already noticed, there's an attempt[1] in the > works to reimplement integer-gmp in such a way to avoid overriding GMP's > internal memory allocator functions, and thus make it possible to link > GHC/integer-gmp compiled programs with other components linked to libgmp > which break if GMP's memory allocation goes via GHC's GC. I also hope > this will facilitate to ship GHC bindists for Windows with a dynamically > linked (& unpatched!) GMP library, to reduce LGPL licencing concerns for > resulting GHC compiled programs. > > So far, I've only been able to test the code on Linux/i386 and > Linux/amd64 where it works correctly. 
Now it'd be interesting to know if > integer-gmp2 in its current form works also on non-Linux archs, and if > not, what's needed to make it work. Fwiw, I mostly suspect > linker-related issues. > > Therefore, is anyone here interested to help out with making sure > GHC+integer-gmp2 builds on Windows, OSX and so on? If so, please get > into contact with me! > > Cheers, > hvr > > [1]: https://ghc.haskell.org/trac/ghc/ticket/9281 > https://phabricator.haskell.org/D82 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Jul 22 11:54:53 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 22 Jul 2014 13:54:53 +0200 Subject: Windows breakage -- again In-Reply-To: References: <53c92d82.0625980a.3ecc.2618@mx.google.com> Message-ID: I suggest we continue the discussion on the ticket: https://ghc.haskell.org/trac/ghc/ticket/9346 Summary so far is that LOCK is not a valid prefix to MOV, but the x86 code generator doesn't emit any LOCKs before MOVs so I'm not sure how that instruction got there. On Tue, Jul 22, 2014 at 12:41 PM, Niklas Larsson wrote: > That's true, I used mingw. > > I have created a ticket https://ghc.haskell.org/trac/ghc/ticket/9346#ticket. > > > 2014-07-22 12:22 GMT+02:00 P?li G?bor J?nos : > >> 2014-07-22 11:49 GMT+02:00 Johan Tibell : >> > Is this on FreeBSD only or does it happen elsewhere? >> >> I would say it happens everywhere (on 32 bits). I guess Niklas was >> debugging the mingw32 version. > > From hvr at gnu.org Tue Jul 22 12:02:32 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Tue, 22 Jul 2014 14:02:32 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs In-Reply-To: (Niklas Larsson's message of "Tue, 22 Jul 2014 13:33:04 +0200") References: <8738dtyfxr.fsf@gnu.org> Message-ID: <87ha29wqhz.fsf@gnu.org> On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote: > I can test on Windows. great! Are you using the 32bit or 64bit compiler? All you'd need to do is 'git checkout' the wip/T9281 branch, add the line INTEGER_LIBRARY=integer-gmp2 at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and try to build GHC with that. If you end up with a working stage2 compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the package 'integer-gmp2' then everything went better than expected :) Then running the testsuite via cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES should only fail with a few testcases due to the strings "integer-gmp2" vs. "integer-gmp" being different in the output. Thanks,, hvr From christiaan.baaij at gmail.com Tue Jul 22 12:11:21 2014 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Tue, 22 Jul 2014 14:11:21 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs In-Reply-To: <87ha29wqhz.fsf@gnu.org> References: <8738dtyfxr.fsf@gnu.org> <87ha29wqhz.fsf@gnu.org> Message-ID: Starting a build on my MAC: OS: 10.8.5 XCode: XCode 4 CLI-only (so _no_ full Xcode, that is, xcode-select fails) GCC: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) GHC: 7.8.3 On Jul 22, 2014, at 2:02 PM, Herbert Valerio Riedel wrote: > On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote: >> I can test on Windows. > > great! Are you using the 32bit or 64bit compiler? 
> > All you'd need to do is 'git checkout' the wip/T9281 branch, add the line > > INTEGER_LIBRARY=integer-gmp2 > > at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and > try to build GHC with that. If you end up with a working stage2 > compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the > package 'integer-gmp2' then everything went better than expected :) > > Then running the testsuite via > > cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES > > should only fail with a few testcases due to the strings "integer-gmp2" > vs. "integer-gmp" being different in the output. > > Thanks,, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From ezyang at mit.edu Tue Jul 22 12:17:04 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 22 Jul 2014 13:17:04 +0100 Subject: Multi-instance packages status report In-Reply-To: <53CE4AB2.8010909@gmail.com> References: <1405962902-sup-6315@sabre> <1405973209.3485.4.camel@joachim-breitner.de> <1405981078-sup-1728@sabre> <1406013802.2079.3.camel@joachim-breitner.de> <53CE4AB2.8010909@gmail.com> Message-ID: <1406031267-sup-9782@sabre> Excerpts from Simon Marlow's message of 2014-07-22 12:27:46 +0100: > (Replying to Edward) > > It's not clear to me why identical IPID would imply identical package > key. Can't two instances of a package compiled against different > dependencies still have identical ABIs? No, because the package key is baked into the linker symbols (and thus the ABI). I guess maybe if you had a completely empty package, the ABIs would be the same. Cheers, Edward From eir at cis.upenn.edu Tue Jul 22 12:21:51 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 22 Jul 2014 08:21:51 -0400 Subject: tcInferRho In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> References: <1F535F4F-02E6-440C-B0F8-023324A85417@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1043185E@DB3PRD3001MB020.064d.mgd.msft.net> <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <180103F2-F682-4F3C-9A2B-340FAD1A27BE@cis.upenn.edu> OK -- that all makes sense. But why does it actually work, I wonder? It seems that to get the behavior that you describe below, and the behavior that we see in practice, a unification variable *does* have to unify with a non-tau-type, like (forall a. a -> a) -> Int. But doesn't defer_me in TcUnify.checkTauTvUpdate prevent such a thing from happening? To learn more, I tried compiling this code: > f :: Bool -> Bool -> (forall a. a -> a) -> () > f = undefined > > g = (True `f` False) id I use infix application to avoid tcInferRho. With -ddump-tc-trace -dppr-debug, I see the following bit: > Scratch.hs:18:6: > u_tys > untch 0 > (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> () > ~ > t_aHO{tv} [tau[0]] > a type equality (forall a{tv apE} [sk]. > a{tv apE} [sk] -> a{tv apE} [sk]) > -> () > ~ > t_aHO{tv} [tau[0]] > Scratch.hs:18:6: > writeMetaTyVar > t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk]. > a{tv apE} [sk] -> a{tv apE} [sk]) > -> () > What's very strange to me here is that we see t_aHO, a **tau** type, being rewritten to a poly-type. I could clearly throw in more printing statements to see what is going on, but I wanted to check if this looks strange to you, too. 
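(For reference, the trace above is from a plain one-module compile with those flags, something like the command below; the exact flag set beyond the two dump flags is a guess.)

    $ ghc -c -fforce-recomp -ddump-tc-trace -dppr-debug Scratch.hs
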
Thanks, Richard On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones wrote: > Richard > > You are right; there is something squishy here. > > The original idea was that a unification variable only stands for a *monotype* (with no for-alls). But our basic story for the type inference engine is > tcExpr :: HsExpr -> TcType -> TcM HsExpr' > which checks that the expression has the given expected type. To do inference we pass in a unification variable as the "expected type". BUT if the expression actually has a type like (forall a. a->a) -> Int, then the unification variable clearly isn't being unified with a monotype. There are a couple of places where we must "zonk" the expected type, after calling tcExpr, to expose the foralls. A major example is TcExpr.tcInferFun. > > I say this is squishy because *in principle* we could replace every unification with generating an equality constraint, for later solving. (This does often happen, see TcUnify.uType_defer.) BUT if we generate an equality constraint, the zonking won't work, and the foralls won't be exposed early enough. I wish that the story here was more solid. > > The original idea of tcInferRho was to have some special cases that did not rely on this squishy "unify with polytype" story. It had a number of special cases, perhaps not enough as you observe. But it does look as if the original goal (which I think was to deal with function applications) doesn't even use it -- it uses tcInferFun instead. > > So I think you may be right: tcInferRho may not be important. There is a perhaps-significant efficiency question though: it avoids allocating an unifying a fresh unification variable each time. > > Simon > > | -----Original Message----- > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > | Sent: 18 July 2014 22:00 > | To: Simon Peyton Jones > | Subject: Re: tcInferRho > | > | I thought as much, but I can't seem to tickle the bug. For example: > | > | > {-# LANGUAGE RankNTypes #-} > | > > | > f :: Int -> Bool -> (forall a. a -> a) -> Int > | > f = undefined > | > > | > x = (3 `f` True) > | > > | > | > | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as we > | would hope. If we were somehow losing the higher-rank polymorphism > | without tcInferRho, then I would expect something like `(3 `f` True) $ > | not)` to succeed (or behave bizarrely), but we get a very sensible type > | error > | > | Couldn't match type 'a' with 'Bool' > | 'a' is a rigid type variable bound by > | a type expected by the context: a -> a > | at /Users/rae/temp/Bug.hs:6:5 > | Expected type: a -> a > | Actual type: Bool -> Bool > | In the second argument of '($)', namely 'not' > | In the expression: (3 `f` True) $ not > | > | So, instead of just adding more cases, I wonder if we can't *remove* > | cases, as it seems that the gears turn fine without this function. This > | continues to surprise me, but it's what the evidence indicates. Can you > | make any sense of this? > | > | Thanks, > | Richard > | > | > | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones > | wrote: > | > | > You're right, its' an omission. The reason for the special case is > | described in the comment on tcInferRho. Adding OpApp would be a Good > | Thing. A bit tiresome because we'd need to pass to tcInferApp the > | function to use to reconstruct the result HsExpr (currently foldl > | mkHsApp, in tcInferApp), so that in the OpApp case it'd reconstruct an > | OpApp. 
> | > > | > Go ahead and do this if you like > | > > | > S > | > > | > | -----Original Message----- > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > | > | Sent: 17 July 2014 18:48 > | > | To: Simon Peyton Jones > | > | Subject: tcInferRho > | > | > | > | Hi Simon, > | > | > | > | I'm in the process of rejiggering the functions in TcHsType to be > | more > | > | like those in TcExpr, in order to handle the richer type/kind > | language > | > | of my branch. > | > | > | > | I have a question about tcInferRho (TcExpr.lhs:115). It calls > | > | tcInfExpr, which handles three special cases of HsExpr, before > | > | deferring to tcExpr. The three cases are HsVar, HsPar, and HsApp. > | > | What's odd about this is that there are other cases that seem to > | belong > | > | in this group, like OpApp. After all, (x + y) and ((+) x y) should > | > | behave the same in all circumstances, right? I can't find a way to > | > | tickle the omission here, so there may not be a bug, but it certainly > | > | is a little strange. Can you shed any light? > | > | > | > | Thanks! > | > | Richard > | > > > From marlowsd at gmail.com Tue Jul 22 12:22:58 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 22 Jul 2014 13:22:58 +0100 Subject: Multi-instance packages status report In-Reply-To: <1406031267-sup-9782@sabre> References: <1405962902-sup-6315@sabre> <1405973209.3485.4.camel@joachim-breitner.de> <1405981078-sup-1728@sabre> <1406013802.2079.3.camel@joachim-breitner.de> <53CE4AB2.8010909@gmail.com> <1406031267-sup-9782@sabre> Message-ID: <53CE57A2.5060604@gmail.com> On 22/07/14 13:17, Edward Z. Yang wrote: > Excerpts from Simon Marlow's message of 2014-07-22 12:27:46 +0100: >> (Replying to Edward) >> >> It's not clear to me why identical IPID would imply identical package >> key. Can't two instances of a package compiled against different >> dependencies still have identical ABIs? > > No, because the package key is baked into the linker symbols > (and thus the ABI). I guess maybe if you had a completely empty > package, the ABIs would be the same. Aha, I see. Thanks! Simon From johan.tibell at gmail.com Tue Jul 22 12:38:03 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 22 Jul 2014 14:38:03 +0200 Subject: [QuickCheck] Status of Haskell Platform 2014.2.0.0 In-Reply-To: <20140720233035.GC6183@8325.org> References: <20140720233035.GC6183@8325.org> Message-ID: On Mon, Jul 21, 2014 at 1:30 AM, Nick Smallbone wrote: > 1. We make sure that tf-random becomes stable and hope it can be > included in the next version of the platform. > > 2. We add a simple TFGen-inspired generator directly to QuickCheck. > > 3. We fix StdGen by replacing it with a TFGen-inspired implementation. > > Number 3 would be best for everyone, but if it doesn't happen maybe > option 2 is the most pragmatic one. I agree that (2) looks like the most pragmatic one. From simonpj at microsoft.com Tue Jul 22 13:19:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 22 Jul 2014 13:19:50 +0000 Subject: tcInferRho In-Reply-To: <180103F2-F682-4F3C-9A2B-340FAD1A27BE@cis.upenn.edu> References: <1F535F4F-02E6-440C-B0F8-023324A85417@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1043185E@DB3PRD3001MB020.064d.mgd.msft.net> <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> <180103F2-F682-4F3C-9A2B-340FAD1A27BE@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10435EF7@DB3PRD3001MB020.064d.mgd.msft.net> Indeed. 
Unification variables *can* unify with polytypes, as you see. GHC does "on the fly" unification with in-place update, and only defers to the constraint solver if it can't readily unify on the fly. The squishiness is precisely that for this setting we *must* unify on the fly, so the "it's always ok to defer" rule doesn't hold. Simon | -----Original Message----- | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | Sent: 22 July 2014 13:22 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: tcInferRho | | OK -- that all makes sense. | | But why does it actually work, I wonder? It seems that to get the | behavior that you describe below, and the behavior that we see in | practice, a unification variable *does* have to unify with a non-tau- | type, like (forall a. a -> a) -> Int. But doesn't defer_me in | TcUnify.checkTauTvUpdate prevent such a thing from happening? | | To learn more, I tried compiling this code: | | > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined | > | > g = (True `f` False) id | | I use infix application to avoid tcInferRho. | | With -ddump-tc-trace -dppr-debug, I see the following bit: | | > Scratch.hs:18:6: | > u_tys | > untch 0 | > (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> () | > ~ | > t_aHO{tv} [tau[0]] | > a type equality (forall a{tv apE} [sk]. | > a{tv apE} [sk] -> a{tv apE} [sk]) | > -> () | > ~ | > t_aHO{tv} [tau[0]] | > Scratch.hs:18:6: | > writeMetaTyVar | > t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk]. | > a{tv apE} [sk] -> a{tv apE} [sk]) | > -> () | > | | What's very strange to me here is that we see t_aHO, a **tau** type, | being rewritten to a poly-type. I could clearly throw in more printing | statements to see what is going on, but I wanted to check if this looks | strange to you, too. | | Thanks, | Richard | | On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones | wrote: | | > Richard | > | > You are right; there is something squishy here. | > | > The original idea was that a unification variable only stands for a | *monotype* (with no for-alls). But our basic story for the type | inference engine is | > tcExpr :: HsExpr -> TcType -> TcM HsExpr' | > which checks that the expression has the given expected type. To do | inference we pass in a unification variable as the "expected type". | BUT if the expression actually has a type like (forall a. a->a) -> Int, | then the unification variable clearly isn't being unified with a | monotype. There are a couple of places where we must "zonk" the | expected type, after calling tcExpr, to expose the foralls. A major | example is TcExpr.tcInferFun. | > | > I say this is squishy because *in principle* we could replace every | unification with generating an equality constraint, for later solving. | (This does often happen, see TcUnify.uType_defer.) BUT if we generate | an equality constraint, the zonking won't work, and the foralls won't | be exposed early enough. I wish that the story here was more solid. | > | > The original idea of tcInferRho was to have some special cases that | did not rely on this squishy "unify with polytype" story. It had a | number of special cases, perhaps not enough as you observe. But it | does look as if the original goal (which I think was to deal with | function applications) doesn't even use it -- it uses tcInferFun | instead. | > | > So I think you may be right: tcInferRho may not be important. There | is a perhaps-significant efficiency question though: it avoids | allocating an unifying a fresh unification variable each time. 
| > | > Simon | > | > | -----Original Message----- | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | Sent: 18 July 2014 22:00 | > | To: Simon Peyton Jones | > | Subject: Re: tcInferRho | > | | > | I thought as much, but I can't seem to tickle the bug. For example: | > | | > | > {-# LANGUAGE RankNTypes #-} | > | > | > | > f :: Int -> Bool -> (forall a. a -> a) -> Int f = undefined | > | > | > | > x = (3 `f` True) | > | > | > | | > | | > | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as | > | we would hope. If we were somehow losing the higher-rank | > | polymorphism without tcInferRho, then I would expect something like | > | `(3 `f` True) $ not)` to succeed (or behave bizarrely), but we get | a | > | very sensible type error | > | | > | Couldn't match type 'a' with 'Bool' | > | 'a' is a rigid type variable bound by | > | a type expected by the context: a -> a | > | at /Users/rae/temp/Bug.hs:6:5 | > | Expected type: a -> a | > | Actual type: Bool -> Bool | > | In the second argument of '($)', namely 'not' | > | In the expression: (3 `f` True) $ not | > | | > | So, instead of just adding more cases, I wonder if we can't | *remove* | > | cases, as it seems that the gears turn fine without this function. | > | This continues to surprise me, but it's what the evidence | indicates. | > | Can you make any sense of this? | > | | > | Thanks, | > | Richard | > | | > | | > | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones | > | | > | wrote: | > | | > | > You're right, its' an omission. The reason for the special case | > | > is | > | described in the comment on tcInferRho. Adding OpApp would be a | > | Good Thing. A bit tiresome because we'd need to pass to tcInferApp | > | the function to use to reconstruct the result HsExpr (currently | > | foldl mkHsApp, in tcInferApp), so that in the OpApp case it'd | > | reconstruct an OpApp. | > | > | > | > Go ahead and do this if you like | > | > | > | > S | > | > | > | > | -----Original Message----- | > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | > | Sent: 17 July 2014 18:48 | > | > | To: Simon Peyton Jones | > | > | Subject: tcInferRho | > | > | | > | > | Hi Simon, | > | > | | > | > | I'm in the process of rejiggering the functions in TcHsType to | > | > | be | > | more | > | > | like those in TcExpr, in order to handle the richer type/kind | > | language | > | > | of my branch. | > | > | | > | > | I have a question about tcInferRho (TcExpr.lhs:115). It calls | > | > | tcInfExpr, which handles three special cases of HsExpr, before | > | > | deferring to tcExpr. The three cases are HsVar, HsPar, and | HsApp. | > | > | What's odd about this is that there are other cases that seem | to | > | belong | > | > | in this group, like OpApp. After all, (x + y) and ((+) x y) | > | > | should behave the same in all circumstances, right? I can't | find | > | > | a way to tickle the omission here, so there may not be a bug, | > | > | but it certainly is a little strange. Can you shed any light? | > | > | | > | > | Thanks! 
| > | > | Richard | > | > | > | > From eir at cis.upenn.edu Tue Jul 22 13:26:36 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 22 Jul 2014 09:26:36 -0400 Subject: tcInferRho In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10435EF7@DB3PRD3001MB020.064d.mgd.msft.net> References: <1F535F4F-02E6-440C-B0F8-023324A85417@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1043185E@DB3PRD3001MB020.064d.mgd.msft.net> <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> <180103F2-F682-4F3C-9A2B-340FAD1A27BE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435EF7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <5E399830-58E1-4F3C-85D0-131389268CB5@cis.upenn.edu> Ah -- it's all clear to me now. To summarize: a TauTv *can* become a poly-type, but the solver won't ever discover so. That would seem to contradict > = TauTv -- This MetaTv is an ordinary unification variable > -- A TauTv is always filled in with a tau-type, which > -- never contains any ForAlls > which appears in the declaration for MetaInfo in TcType. Is that an accurate summary? Thanks for helping to clear this up! Richard On Jul 22, 2014, at 9:19 AM, Simon Peyton Jones wrote: > Indeed. > > Unification variables *can* unify with polytypes, as you see. > > GHC does "on the fly" unification with in-place update, and only defers to the constraint solver if it can't readily unify on the fly. The squishiness is precisely that for this setting we *must* unify on the fly, so the "it's always ok to defer" rule doesn't hold. > > Simon > > | -----Original Message----- > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > | Sent: 22 July 2014 13:22 > | To: Simon Peyton Jones > | Cc: ghc-devs at haskell.org > | Subject: Re: tcInferRho > | > | OK -- that all makes sense. > | > | But why does it actually work, I wonder? It seems that to get the > | behavior that you describe below, and the behavior that we see in > | practice, a unification variable *does* have to unify with a non-tau- > | type, like (forall a. a -> a) -> Int. But doesn't defer_me in > | TcUnify.checkTauTvUpdate prevent such a thing from happening? > | > | To learn more, I tried compiling this code: > | > | > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined > | > > | > g = (True `f` False) id > | > | I use infix application to avoid tcInferRho. > | > | With -ddump-tc-trace -dppr-debug, I see the following bit: > | > | > Scratch.hs:18:6: > | > u_tys > | > untch 0 > | > (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) -> () > | > ~ > | > t_aHO{tv} [tau[0]] > | > a type equality (forall a{tv apE} [sk]. > | > a{tv apE} [sk] -> a{tv apE} [sk]) > | > -> () > | > ~ > | > t_aHO{tv} [tau[0]] > | > Scratch.hs:18:6: > | > writeMetaTyVar > | > t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk]. > | > a{tv apE} [sk] -> a{tv apE} [sk]) > | > -> () > | > > | > | What's very strange to me here is that we see t_aHO, a **tau** type, > | being rewritten to a poly-type. I could clearly throw in more printing > | statements to see what is going on, but I wanted to check if this looks > | strange to you, too. > | > | Thanks, > | Richard > | > | On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones > | wrote: > | > | > Richard > | > > | > You are right; there is something squishy here. > | > > | > The original idea was that a unification variable only stands for a > | *monotype* (with no for-alls). 
But our basic story for the type > | inference engine is > | > tcExpr :: HsExpr -> TcType -> TcM HsExpr' > | > which checks that the expression has the given expected type. To do > | inference we pass in a unification variable as the "expected type". > | BUT if the expression actually has a type like (forall a. a->a) -> Int, > | then the unification variable clearly isn't being unified with a > | monotype. There are a couple of places where we must "zonk" the > | expected type, after calling tcExpr, to expose the foralls. A major > | example is TcExpr.tcInferFun. > | > > | > I say this is squishy because *in principle* we could replace every > | unification with generating an equality constraint, for later solving. > | (This does often happen, see TcUnify.uType_defer.) BUT if we generate > | an equality constraint, the zonking won't work, and the foralls won't > | be exposed early enough. I wish that the story here was more solid. > | > > | > The original idea of tcInferRho was to have some special cases that > | did not rely on this squishy "unify with polytype" story. It had a > | number of special cases, perhaps not enough as you observe. But it > | does look as if the original goal (which I think was to deal with > | function applications) doesn't even use it -- it uses tcInferFun > | instead. > | > > | > So I think you may be right: tcInferRho may not be important. There > | is a perhaps-significant efficiency question though: it avoids > | allocating an unifying a fresh unification variable each time. > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > | > | Sent: 18 July 2014 22:00 > | > | To: Simon Peyton Jones > | > | Subject: Re: tcInferRho > | > | > | > | I thought as much, but I can't seem to tickle the bug. For example: > | > | > | > | > {-# LANGUAGE RankNTypes #-} > | > | > > | > | > f :: Int -> Bool -> (forall a. a -> a) -> Int f = undefined > | > | > > | > | > x = (3 `f` True) > | > | > > | > | > | > | > | > | GHCi tells me that x's type is `x :: (forall a. a -> a) -> Int`, as > | > | we would hope. If we were somehow losing the higher-rank > | > | polymorphism without tcInferRho, then I would expect something like > | > | `(3 `f` True) $ not)` to succeed (or behave bizarrely), but we get > | a > | > | very sensible type error > | > | > | > | Couldn't match type 'a' with 'Bool' > | > | 'a' is a rigid type variable bound by > | > | a type expected by the context: a -> a > | > | at /Users/rae/temp/Bug.hs:6:5 > | > | Expected type: a -> a > | > | Actual type: Bool -> Bool > | > | In the second argument of '($)', namely 'not' > | > | In the expression: (3 `f` True) $ not > | > | > | > | So, instead of just adding more cases, I wonder if we can't > | *remove* > | > | cases, as it seems that the gears turn fine without this function. > | > | This continues to surprise me, but it's what the evidence > | indicates. > | > | Can you make any sense of this? > | > | > | > | Thanks, > | > | Richard > | > | > | > | > | > | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones > | > | > | > | wrote: > | > | > | > | > You're right, its' an omission. The reason for the special case > | > | > is > | > | described in the comment on tcInferRho. Adding OpApp would be a > | > | Good Thing. A bit tiresome because we'd need to pass to tcInferApp > | > | the function to use to reconstruct the result HsExpr (currently > | > | foldl mkHsApp, in tcInferApp), so that in the OpApp case it'd > | > | reconstruct an OpApp. 
> | > | > > | > | > Go ahead and do this if you like > | > | > > | > | > S > | > | > > | > | > | -----Original Message----- > | > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > | > | > | Sent: 17 July 2014 18:48 > | > | > | To: Simon Peyton Jones > | > | > | Subject: tcInferRho > | > | > | > | > | > | Hi Simon, > | > | > | > | > | > | I'm in the process of rejiggering the functions in TcHsType to > | > | > | be > | > | more > | > | > | like those in TcExpr, in order to handle the richer type/kind > | > | language > | > | > | of my branch. > | > | > | > | > | > | I have a question about tcInferRho (TcExpr.lhs:115). It calls > | > | > | tcInfExpr, which handles three special cases of HsExpr, before > | > | > | deferring to tcExpr. The three cases are HsVar, HsPar, and > | HsApp. > | > | > | What's odd about this is that there are other cases that seem > | to > | > | belong > | > | > | in this group, like OpApp. After all, (x + y) and ((+) x y) > | > | > | should behave the same in all circumstances, right? I can't > | find > | > | > | a way to tickle the omission here, so there may not be a bug, > | > | > | but it certainly is a little strange. Can you shed any light? > | > | > | > | > | > | Thanks! > | > | > | Richard > | > | > > | > > | > > > From simonpj at microsoft.com Tue Jul 22 13:39:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 22 Jul 2014 13:39:59 +0000 Subject: tcInferRho In-Reply-To: <5E399830-58E1-4F3C-85D0-131389268CB5@cis.upenn.edu> References: <1F535F4F-02E6-440C-B0F8-023324A85417@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF1043185E@DB3PRD3001MB020.064d.mgd.msft.net> <5F753739-F045-42BC-8834-919B5FB8CFFE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435A82@DB3PRD3001MB020.064d.mgd.msft.net> <180103F2-F682-4F3C-9A2B-340FAD1A27BE@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF10435EF7@DB3PRD3001MB020.064d.mgd.msft.net> <5E399830-58E1-4F3C-85D0-131389268CB5@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10435FD2@DB3PRD3001MB020.064d.mgd.msft.net> Yes that comment is a lie! I would welcome a way to tighten this up. Unifying with foralls is just fine, provided they behave rigidly like type constructors. The unifier can even unify two foralls, and generate evidence. All good. BUT foralls are implicitly instantiated, and it is the implicitly-instantiated ones that must not be hidden. One possibility, pioneered by QML (http://research.microsoft.com/en-us/um/people/crusso/qml/) is to have two kinds of foralls, implicitly instantiated and explicitly instantiated. GHC has been moving in that direction but only fitfully. That's one reason that the entire ImpredicativeTypes extensions is currently in limbo. Simon | -----Original Message----- | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | Sent: 22 July 2014 14:27 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: tcInferRho | | Ah -- it's all clear to me now. | | To summarize: a TauTv *can* become a poly-type, but the solver won't | ever discover so. | | That would seem to contradict | | > = TauTv -- This MetaTv is an ordinary unification variable | > -- A TauTv is always filled in with a tau-type, | which | > -- never contains any ForAlls | > | | which appears in the declaration for MetaInfo in TcType. | | Is that an accurate summary? | | Thanks for helping to clear this up! | Richard | | | On Jul 22, 2014, at 9:19 AM, Simon Peyton Jones | wrote: | | > Indeed. | > | > Unification variables *can* unify with polytypes, as you see. 
| > | > GHC does "on the fly" unification with in-place update, and only | defers to the constraint solver if it can't readily unify on the fly. | The squishiness is precisely that for this setting we *must* unify on | the fly, so the "it's always ok to defer" rule doesn't hold. | > | > Simon | > | > | -----Original Message----- | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | Sent: 22 July 2014 13:22 | > | To: Simon Peyton Jones | > | Cc: ghc-devs at haskell.org | > | Subject: Re: tcInferRho | > | | > | OK -- that all makes sense. | > | | > | But why does it actually work, I wonder? It seems that to get the | > | behavior that you describe below, and the behavior that we see in | > | practice, a unification variable *does* have to unify with a | > | non-tau- type, like (forall a. a -> a) -> Int. But doesn't defer_me | > | in TcUnify.checkTauTvUpdate prevent such a thing from happening? | > | | > | To learn more, I tried compiling this code: | > | | > | > f :: Bool -> Bool -> (forall a. a -> a) -> () f = undefined | > | > | > | > g = (True `f` False) id | > | | > | I use infix application to avoid tcInferRho. | > | | > | With -ddump-tc-trace -dppr-debug, I see the following bit: | > | | > | > Scratch.hs:18:6: | > | > u_tys | > | > untch 0 | > | > (forall a{tv apE} [sk]. a{tv apE} [sk] -> a{tv apE} [sk]) - | > () | > | > ~ | > | > t_aHO{tv} [tau[0]] | > | > a type equality (forall a{tv apE} [sk]. | > | > a{tv apE} [sk] -> a{tv apE} [sk]) | > | > -> () | > | > ~ | > | > t_aHO{tv} [tau[0]] | > | > Scratch.hs:18:6: | > | > writeMetaTyVar | > | > t_aHO{tv} [tau[0]] := (forall a{tv apE} [sk]. | > | > a{tv apE} [sk] -> a{tv apE} [sk]) | > | > -> () | > | > | > | | > | What's very strange to me here is that we see t_aHO, a **tau** | type, | > | being rewritten to a poly-type. I could clearly throw in more | > | printing statements to see what is going on, but I wanted to check | > | if this looks strange to you, too. | > | | > | Thanks, | > | Richard | > | | > | On Jul 22, 2014, at 6:28 AM, Simon Peyton Jones | > | | > | wrote: | > | | > | > Richard | > | > | > | > You are right; there is something squishy here. | > | > | > | > The original idea was that a unification variable only stands for | > | > a | > | *monotype* (with no for-alls). But our basic story for the type | > | inference engine is | > | > tcExpr :: HsExpr -> TcType -> TcM HsExpr' | > | > which checks that the expression has the given expected type. To | > | > do | > | inference we pass in a unification variable as the "expected type". | > | BUT if the expression actually has a type like (forall a. a->a) -> | > | Int, then the unification variable clearly isn't being unified with | > | a monotype. There are a couple of places where we must "zonk" the | > | expected type, after calling tcExpr, to expose the foralls. A | major | > | example is TcExpr.tcInferFun. | > | > | > | > I say this is squishy because *in principle* we could replace | > | > every | > | unification with generating an equality constraint, for later | solving. | > | (This does often happen, see TcUnify.uType_defer.) BUT if we | > | generate an equality constraint, the zonking won't work, and the | > | foralls won't be exposed early enough. I wish that the story here | was more solid. | > | > | > | > The original idea of tcInferRho was to have some special cases | > | > that | > | did not rely on this squishy "unify with polytype" story. It had a | > | number of special cases, perhaps not enough as you observe. 
But it | > | does look as if the original goal (which I think was to deal with | > | function applications) doesn't even use it -- it uses tcInferFun | > | instead. | > | > | > | > So I think you may be right: tcInferRho may not be important. | > | > There | > | is a perhaps-significant efficiency question though: it avoids | > | allocating an unifying a fresh unification variable each time. | > | > | > | > Simon | > | > | > | > | -----Original Message----- | > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | > | Sent: 18 July 2014 22:00 | > | > | To: Simon Peyton Jones | > | > | Subject: Re: tcInferRho | > | > | | > | > | I thought as much, but I can't seem to tickle the bug. For | example: | > | > | | > | > | > {-# LANGUAGE RankNTypes #-} | > | > | > | > | > | > f :: Int -> Bool -> (forall a. a -> a) -> Int f = undefined | > | > | > | > | > | > x = (3 `f` True) | > | > | > | > | > | | > | > | | > | > | GHCi tells me that x's type is `x :: (forall a. a -> a) -> | Int`, | > | > | as we would hope. If we were somehow losing the higher-rank | > | > | polymorphism without tcInferRho, then I would expect something | > | > | like | > | > | `(3 `f` True) $ not)` to succeed (or behave bizarrely), but we | > | > | get | > | a | > | > | very sensible type error | > | > | | > | > | Couldn't match type 'a' with 'Bool' | > | > | 'a' is a rigid type variable bound by | > | > | a type expected by the context: a -> a | > | > | at /Users/rae/temp/Bug.hs:6:5 | > | > | Expected type: a -> a | > | > | Actual type: Bool -> Bool | > | > | In the second argument of '($)', namely 'not' | > | > | In the expression: (3 `f` True) $ not | > | > | | > | > | So, instead of just adding more cases, I wonder if we can't | > | *remove* | > | > | cases, as it seems that the gears turn fine without this | function. | > | > | This continues to surprise me, but it's what the evidence | > | indicates. | > | > | Can you make any sense of this? | > | > | | > | > | Thanks, | > | > | Richard | > | > | | > | > | | > | > | On Jul 18, 2014, at 12:49 PM, Simon Peyton Jones | > | > | | > | > | wrote: | > | > | | > | > | > You're right, its' an omission. The reason for the special | > | > | > case is | > | > | described in the comment on tcInferRho. Adding OpApp would be | a | > | > | Good Thing. A bit tiresome because we'd need to pass to | > | > | tcInferApp the function to use to reconstruct the result HsExpr | > | > | (currently foldl mkHsApp, in tcInferApp), so that in the OpApp | > | > | case it'd reconstruct an OpApp. | > | > | > | > | > | > Go ahead and do this if you like | > | > | > | > | > | > S | > | > | > | > | > | > | -----Original Message----- | > | > | > | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | > | > | > | Sent: 17 July 2014 18:48 | > | > | > | To: Simon Peyton Jones | > | > | > | Subject: tcInferRho | > | > | > | | > | > | > | Hi Simon, | > | > | > | | > | > | > | I'm in the process of rejiggering the functions in TcHsType | > | > | > | to be | > | > | more | > | > | > | like those in TcExpr, in order to handle the richer | > | > | > | type/kind | > | > | language | > | > | > | of my branch. | > | > | > | | > | > | > | I have a question about tcInferRho (TcExpr.lhs:115). It | > | > | > | calls tcInfExpr, which handles three special cases of | > | > | > | HsExpr, before deferring to tcExpr. The three cases are | > | > | > | HsVar, HsPar, and | > | HsApp. 
| > | > | > | What's odd about this is that there are other cases that | > | > | > | seem | > | to | > | > | belong | > | > | > | in this group, like OpApp. After all, (x + y) and ((+) x y) | > | > | > | should behave the same in all circumstances, right? I can't | > | find | > | > | > | a way to tickle the omission here, so there may not be a | > | > | > | bug, but it certainly is a little strange. Can you shed any | light? | > | > | > | | > | > | > | Thanks! | > | > | > | Richard | > | > | > | > | > | > | > | > | > From austin at well-typed.com Tue Jul 22 13:58:31 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 22 Jul 2014 08:58:31 -0500 Subject: a little phrustrated In-Reply-To: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: Hi Richard, Sorry for missing this email - it slid out of my queue... On Wed, Jul 16, 2014 at 8:54 AM, Richard Eisenberg wrote: > Hi all, > > I'm trying to use Phab for the first time this morning, and hitting a fair > number of obstacles. I'm writing up my experiences here in order to figure > out which of these are my fault, which can be fixed, and which are just > things to live with; and also to help others who may go down the same path. > If relevant, my diff is at https://phabricator.haskell.org/D73 > 1) I had some untracked files in a submodule repo. I couldn't find a way to > get `arc diff` to ignore these, as they appeared to git to be a change in a > tracked file (that is, a change to a submodule, which is considered > tracked). `git stash` offered no help, so I had to delete the untracked > files. This didn't cause real pain (the files were there in error), but it > seems a weakness of the system if I can't make progress otherwise. Yes, you can use: $ git config --global diff.ignoreSubmodules dirty to ignore this. If you don't pass --global, it will only take affect in the repository you perform it in. This should fix this problem. > 2) I develop and build in the same tree. This means that I often have a few > untracked files in the outer, ghc.git repo that someone hasn't yet added to > .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to > work. I will likely always need `--allow-untracked`, so I looked for a way > to get this to be configured automatically. I found > https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration > , but the details there are sparse. Any advice? No, it doesn't look like it I'm afraid. I asked upstream about it this morning (it was very easy to write a patch for), and unfortunately they do not want to allow this feature (it's very easy to add it as a config option, but I digress). In the mean time, you can use 'arc alias' to create a version of 'arc diff' like what you want: $ arc alias udiff diff -- --allow-untracked Then run: $ arc udiff instead. I think this is really a short-term solution; in the long run we should commit .gitignore entries for everything since the reason for this is that having untracked files is generally a liability that should be caught. > 3) The linter picks up and complains about tabs in any of my touched files. > I can then write an excuse for every `arc diff` I do, or de-tab the files. > In one case, I changed roughly one line in the file (MkCore.lhs) and didn't > think it right to de-tab the whole file. 
Even if I did de-tab the whole > file, then my eventual `arc land` would squash the whitespace commit in with > my substantive commits, which we expressly don't want. I can imagine a fair > amount of git fiddling which would push the whitespace commit to master and > then rebase my substantive work on top so that the final, landed, squashed > patch would avoid the whitespace changes, but this is painful. And advice on > this? Just ignore the lint errors and write silly excuses? Or, is there a > way Phab/arc can be smart enough to keep whitespace-only commits (perhaps > tagged with the words "whitespace only" in the commit message) separate from > other commits when squashing in `arc land`? I'm afraid right now I don't have some fancy stuff to help automate this or alleviate it. I personally suggest that we take the pain on these as an opportunity to remove things, per recent discussions. We can't remove it all in one swoop, but we should start being aggressive about enforcing style errors. In short, I'd suggest you - Add silly excuses for now - Land your changes - Commit fixes for the lint errors *after* that. - Commit lint fixes one file at a time. If we keep doing this, we'll begin making a lot of headway on this, I'm sure. (The nice thing is that now, you can be lazy and fix violations, then let Phabricator or Travis-CI do builds for you.) > 4) For better or worse, we don't currently require every file to be > tab-free, just some of them. Could this be reflected in Phab's lint settings > to avoid the problem in (3)? (Of course, a way to de-tab and keep the > history nice would be much better!) We could exclude all the files that have tabs, but it would be a lot still. See above though - I suggest we use this as an opportunity to remove this stuff. Just be aggressive about cleaning it up after it lands. The average lifespan of a review is fairly short in practice. I think it should be pretty easy to keep up. The lint rules do still need some tweaking probably though, so if you do see something bogus, please do so. > 5) In writing my revision description, I had to add reviewers. I assumed > these should be comma-separated. This worked and I have updated the Wiki. > Please advise if I am wrong. That's correct, but separated by spaces should work too - thanks! > 6) When I looked at my posted revision, it said that the revision was > "closed"... and that I had done it! slyfox on IRC informed me that this was > likely because I had pushed my commits to a wip/... branch. Is using wip > branches with Phab not recommended? Or, can Phab be configured not to close > revisions if the commit appears only in wip/... branches? Joachim ran into this today. In short, I fixed this by tweaking the repository settings. Phabricator will now autoclose commits ONLY if they occur on the master branch. This means you should feel free to push to wip/* branches as much as you want without fear now. Sorry! > 7) How can I "re-open" my revision? I'm afraid you can't. > 8) Some time after posting, phaskell tells me that my build failed. OK. This > is despite the fact that Travis was able to build the same commit > (https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it > failed, and am directed to build log F3870 > (https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/). > I can't view the file online, but instead have to download and then ungzip > it. Is it possible to view this file directly? Or not have it be compressed? 
This is a bug in my script because it's a piece of crap, both the failure and the build logging. I'm working on a Much Better Version? not written in Shell script but Haskell that should fix all this, hopefully I can deploy it soon. It will also include more features that may or may not actually work. :) I'd prefer to keep the log files compressed if that's OK. An uncompressed log from ./validate is over *ten* megabytes already, and it doesn't even correctly capture *all* of the logs! In comparison, the .gz version is a short 300kb. That's a crazy space savings, especially since this bot will hopefully report more soon. > 9) When I do view the build log, I get no answers. The end of the file comes > abruptly in the middle of some haddock output, and the closest thing that > looks like an error is about a missing link in a haddock tag > `$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the > missing link has been there for some time, so I'm dubious that this is the > real problem. Are these log files cut off? Again, bug. Sorry about that. The shell script is already falling on its face. > 10) More of a question than a phrustration: is there a way to link directly > to Trac tickets and/or wiki pages from Phab comments? I like the Phab:D73 > syntax from Trac to Phab, and thanks, Austin, for adding the field at the > top of Trac tickets to Phab revisions. There's unfortunately no syntax for this. HOWEVER, I have just this morning rolled out a change to phabricator.haskell.org that now allows you to set the Trac issue #s in a revision, and it will hyperlink it! For example: https://phabricator.haskell.org/D88 Look at the 'Trac Issues' field, which hyperlinks to the right issue - yay! You can even specify this field when using arcanist, if you add something like: 'Trac: #9303' to the commit message when you run `arc diff`. > > I did fully expect to hit a few bumps on my first use of this new tool, but > it got to the point where I thought I should seek some advice before > continuing to muddle through -- hence this email. I do hope my tone is not > overly negative: I'm *very* appreciative of the work that many of you do to > support GHC's infrastructure, and I look forward to being able to get and > provide source code feedback through Phab. We just need to work out some > kinks, I think! (Any number of these kinks may be solely my fault, of > course.) Actually, it looks like most of them are *my* fault, but I hope all of these answers help a lot! > Many thanks, > Richard > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From eir at cis.upenn.edu Tue Jul 22 14:17:15 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 22 Jul 2014 10:17:15 -0400 Subject: a little phrustrated In-Reply-To: References: <5BC8E2CB-7528-4537-92EA-E1C3BDDF1B1E@cis.upenn.edu> Message-ID: <495DA489-A399-494B-9F25-8429E7EAF11E@cis.upenn.edu> On Jul 22, 2014, at 9:58 AM, Austin Seipp wrote: > Hi Richard, > > Sorry for missing this email - it slid out of my queue... No worries on the delay. I wouldn't be surprised if there is a Best Practices document somewhere which advises waiting at least several days to respond to a work-related email with a human emotion in the subject line. :) I appreciate your thorough answers below. > >> 2) I develop and build in the same tree. 
This means that I often have a few >> untracked files in the outer, ghc.git repo that someone hasn't yet added to >> .gitignore. Thus, I need to say `--allow-untracked` to get `arc diff` to >> work. I will likely always need `--allow-untracked`, so I looked for a way >> to get this to be configured automatically. I found >> https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration >> , but the details there are sparse. Any advice? > > No, it doesn't look like it I'm afraid. I asked upstream about it this > morning (it was very easy to write a patch for), and unfortunately > they do not want to allow this feature (it's very easy to add it as a > config option, but I digress). > > In the mean time, you can use 'arc alias' to create a version of 'arc > diff' like what you want: > > $ arc alias udiff diff -- --allow-untracked > > Then run: > > $ arc udiff > > instead. > > I think this is really a short-term solution; in the long run we > should commit .gitignore entries for everything since the reason for > this is that having untracked files is generally a liability that > should be caught. Thanks for the `alias` tip. I think that having an always-updated .gitignore might be difficult from a practical standpoint, because each different architecture might produce different files. Of course, I could add entries myself, but I'm always quite scared of touching anything interacting with the build system. > [snip] > I personally suggest that we take the pain on these as an opportunity > to remove things, per recent discussions. We can't remove it all in > one swoop, but we should start being aggressive about enforcing style > errors. > > In short, I'd suggest you > > - Add silly excuses for now > - Land your changes > - Commit fixes for the lint errors *after* that. > - Commit lint fixes one file at a time. > > If we keep doing this, we'll begin making a lot of headway on this, > I'm sure. (The nice thing is that now, you can be lazy and fix > violations, then let Phabricator or Travis-CI do builds for you.) Not a bad plan. I've personally come around to the "let's just de-tab now and get on with it" camp, even though it will give me a painful merge. I think my (and others') painful merge is less painful than the status quo. > >> 6) When I looked at my posted revision, it said that the revision was >> "closed"... and that I had done it! slyfox on IRC informed me that this was >> likely because I had pushed my commits to a wip/... branch. Is using wip >> branches with Phab not recommended? Or, can Phab be configured not to close >> revisions if the commit appears only in wip/... branches? > > Joachim ran into this today. > > In short, I fixed this by tweaking the repository settings. > Phabricator will now autoclose commits ONLY if they occur on the > master branch. > > This means you should feel free to push to wip/* branches as much as > you want without fear now. Sorry! Great. Thanks! > >> 7) How can I "re-open" my revision? > > I'm afraid you can't. Is this worth pushing upstream as a feature request? Even absent technical glitches like the wip/* stuff, I could see wanting to do this. Say there is a subtle revision that accumulates a bit of commentary. It lands after general consensus that the revision is good. Then, someone discovers that it was wrong, after all. It would be nice to continue the original conversation instead of starting afresh, I would think. > >> 8) Some time after posting, phaskell tells me that my build failed. OK. 
This >> is despite the fact that Travis was able to build the same commit >> (https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it >> failed, and am directed to build log F3870 >> (https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/). >> I can't view the file online, but instead have to download and then ungzip >> it. Is it possible to view this file directly? Or not have it be compressed? > > This is a bug in my script because it's a piece of crap, both the > failure and the build logging. I'm working on a Much Better Version? > not written in Shell script but Haskell that should fix all this, > hopefully I can deploy it soon. It will also include more features > that may or may not actually work. :) > > I'd prefer to keep the log files compressed if that's OK. An > uncompressed log from ./validate is over *ten* megabytes already, and > it doesn't even correctly capture *all* of the logs! In comparison, > the .gz version is a short 300kb. That's a crazy space savings, > especially since this bot will hopefully report more soon. Is it possible then to use some web-server magic to call gunzip and then push the result down the wire? Not a big deal, and not worth much time investment to get this nicety, but the current multi-step process just to view the log is a little jarring. Perhaps as a middle road, the last 100 lines or so could be available uncompressed? Again, low priority but would-be-nice. > >> 9) When I do view the build log, I get no answers. The end of the file comes >> abruptly in the middle of some haddock output, and the closest thing that >> looks like an error is about a missing link in a haddock tag >> `$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the >> missing link has been there for some time, so I'm dubious that this is the >> real problem. Are these log files cut off? > > Again, bug. Sorry about that. The shell script is already falling on its face. > >> 10) More of a question than a phrustration: is there a way to link directly >> to Trac tickets and/or wiki pages from Phab comments? I like the Phab:D73 >> syntax from Trac to Phab, and thanks, Austin, for adding the field at the >> top of Trac tickets to Phab revisions. > > There's unfortunately no syntax for this. HOWEVER, I have just this > morning rolled out a change to phabricator.haskell.org that now allows > you to set the Trac issue #s in a revision, and it will hyperlink it! > For example: > > https://phabricator.haskell.org/D88 > > Look at the 'Trac Issues' field, which hyperlinks to the right issue - yay! > > You can even specify this field when using arcanist, if you add something like: > > 'Trac: #9303' > > to the commit message when you run `arc diff`. Cool! Thanks! Thanks again for your help! Richard From mail at joachim-breitner.de Tue Jul 22 15:12:54 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 22 Jul 2014 17:12:54 +0200 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: <53CE38F3.9080201@mathematik.uni-marburg.de> References: <53CE38F3.9080201@mathematik.uni-marburg.de> Message-ID: <1406041974.2079.17.camel@joachim-breitner.de> Hi, Am Dienstag, den 22.07.2014, 12:12 +0200 schrieb Jost Berthold: > (Sorry for joining this late... I figured we would be in dialogue off > the list eventually) > > Joachim wrote and posted a proposal, and I think this proposal is indeed > a good idea (and one of the purposes of HIW, definite yes). 
> > We shall make room for it in the programme, possibly in the last > session, which can turn into the "Haskell release discussion evening". Jost added my proposal to EasyChair, but it turns out that I scheduled by return flight a bit too early (taking off at 19:10) and might not be able to attend the last session. Who is able to fill in for me if the infrastructure talk is scheduled here? Maybe Simon M, or Austin, or Herbert? Of some coalition thereof Thanks, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From christiaan.baaij at gmail.com Tue Jul 22 15:13:05 2014 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Tue, 22 Jul 2014 17:13:05 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs In-Reply-To: References: <8738dtyfxr.fsf@gnu.org> <87ha29wqhz.fsf@gnu.org> Message-ID: <297FFF0C-EFE2-44FB-8ACA-D430F8A1754A@gmail.com> The testsuite results are here: http://paste.ubuntu.com/7836630/ On Jul 22, 2014, at 2:11 PM, Christiaan Baaij wrote: > Starting a build on my MAC: > > OS: 10.8.5 > XCode: XCode 4 CLI-only (so _no_ full Xcode, that is, xcode-select fails) > GCC: i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.11.00) > GHC: 7.8.3 > > On Jul 22, 2014, at 2:02 PM, Herbert Valerio Riedel wrote: > >> On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote: >>> I can test on Windows. >> >> great! Are you using the 32bit or 64bit compiler? >> >> All you'd need to do is 'git checkout' the wip/T9281 branch, add the line >> >> INTEGER_LIBRARY=integer-gmp2 >> >> at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and >> try to build GHC with that. If you end up with a working stage2 >> compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the >> package 'integer-gmp2' then everything went better than expected :) >> >> Then running the testsuite via >> >> cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES >> >> should only fail with a few testcases due to the strings "integer-gmp2" >> vs. "integer-gmp" being different in the output. >> >> Thanks,, >> hvr >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > From metaniklas at gmail.com Tue Jul 22 21:38:07 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Tue, 22 Jul 2014 23:38:07 +0200 Subject: Call for help on testing integer-gmp2 on non-Linux archs In-Reply-To: <87ha29wqhz.fsf@gnu.org> References: <8738dtyfxr.fsf@gnu.org> <87ha29wqhz.fsf@gnu.org> Message-ID: I can do both 32 and 64-bit builds.I started with 32 bits. I got "inplace/bin/ghc-stage2.exe" -hisuf hi -osuf o -hcsuf hc -static -H64m -O0 -fasm -package-name vector -0.10.9.1 -hide-all-packages -i -ilibraries/vector/. 
-ilibraries/vector/dist-install/build -ilibraries/vec tor/dist-install/build/autogen -Ilibraries/vector/dist-install/build -Ilibraries/vector/dist-install/build /autogen -Ilibraries/vector/include -Ilibraries/vector/internal -optP-DVECTOR_BOUNDS_CHECKS -optP-includ e -optPlibraries/vector/dist-install/build/autogen/cabal_macros.h -package base-4.7.1.0 -package deepseq-1 .3.0.2 -package ghc-prim-0.3.1.0 -package primitive-0.5.2.1 -O2 -XHaskell98 -XCPP -XDeriveDataTypeable -O -fasm -no-user-package-db -rtsopts -odir libraries/vector/dist-install/build -hidir libraries/vector /dist-install/build -stubdir libraries/vector/dist-install/build -c libraries/vector/./Data/Vector/Fusio n/Stream/Monadic.hs -o libraries/vector/dist-install/build/Data/Vector/Fusion/Stream/Monadic.o "/usr/bin/ar" q libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a @libraries/primitive/dist -install/build/libHSprimitive-0.5.2.1.a.contents /usr/bin/ar: creating libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a "rm" -f libraries/primitive/dist-install/build/libHSprimitive-0.5.2.1.a.contents Loading package ghc-prim ... linking ... done. Loading package integer-gmp2 ... linking ... ghc-stage2.exe: unable to load package `integer-gmp2' ghc-stage2.exe: D:\Niklas\scratch\ghc-build\msys\home\niklas\ghc\libraries\integer-gmp2\dist-install\build \HSinteger-gmp2-0.0.1.0.o: unknown symbol `_scalbn' I built it?with gmp-6.0.0. 2014-07-22 14:02 GMT+02:00 Herbert Valerio Riedel : > On 2014-07-22 at 13:33:04 +0200, Niklas Larsson wrote: > > I can test on Windows. > > great! Are you using the 32bit or 64bit compiler? > > All you'd need to do is 'git checkout' the wip/T9281 branch, add the line > > INTEGER_LIBRARY=integer-gmp2 > > at the end of mk/build.mk (and 'BuildFlavour=quick' should suffice) and > try to build GHC with that. If you end up with a working stage2 > compiler, and 'inplace/bin/ghc-stage2 --interactive' reports loading the > package 'integer-gmp2' then everything went better than expected :) > > Then running the testsuite via > > cd testsuite/ && make WAY=normal SKIP_PERF_TESTS=YES > > should only fail with a few testcases due to the strings "integer-gmp2" > vs. "integer-gmp" being different in the output. > > Thanks,, > hvr > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Wed Jul 23 11:21:54 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 23 Jul 2014 13:21:54 +0200 Subject: Looking for list comprehensions use cases Message-ID: <201407231321.54464.jan.stolarek@p.lodz.pl> Haskellers, recently I've been looking into the possibility of creating some new optimisations for GHC. These would be mostly aimed at list comprehensions. Here's where I need your help: 1. Do you have complex list comprehensions usage examples from real code? By complex I mean nested list comprehensions, reading from more than one list ([ ...| x <- xs, y <- ys ... ]) etc. 2. Do you have list comprehensions code that you had to optimize by hand because GHC was unable to make them fast enough? 
Janek

From hvriedel at gmail.com  Wed Jul 23 14:12:15 2014
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Wed, 23 Jul 2014 16:12:15 +0200
Subject: Call for help on testing integer-gmp2 on non-Linux archs
In-Reply-To: (Niklas Larsson's message of "Tue, 22 Jul 2014 23:38:07 +0200")
References: <8738dtyfxr.fsf@gnu.org> <87ha29wqhz.fsf@gnu.org>
Message-ID: <87mwc0w4e8.fsf@gnu.org>

Hello Niklas,

On 2014-07-22 at 23:38:07 +0200, Niklas Larsson wrote:
> I can do both 32 and 64-bit builds. I started with 32 bits.

Both would be interesting!

> I got
[...]
> Loading package ghc-prim ... linking ... done.
> Loading package integer-gmp2 ... linking ... ghc-stage2.exe: unable to load
> package `integer-gmp2'
> ghc-stage2.exe:
> D:\Niklas\scratch\ghc-build\msys\home\niklas\ghc\libraries\integer-gmp2\dist-install\build
> \HSinteger-gmp2-0.0.1.0.o: unknown symbol `_scalbn'
>
> I built it with gmp-6.0.0.

can you retry with

http://git.haskell.org/ghc.git/commitdiff/237462c166a8ae7ca733bc755334c2cf47c2fc49

applied? It seems like Windows requires each library which uses symbols
from libm to link directly against libm.

Thanks!

From mark.lentczner at gmail.com  Wed Jul 23 14:20:47 2014
From: mark.lentczner at gmail.com (Mark Lentczner)
Date: Wed, 23 Jul 2014 07:20:47 -0700
Subject: The haddock / clang problem revisited ... root cause found...
In-Reply-To: 
References: 
Message-ID: 

*Re.: Full details here: Root cause of haddock / clang failure*

Iain Nicol had submitted a set of commits back in May which addressed the
"double pre-process" issue with Haddock. No one had figured out the
connection to this set of issues with clang back then. I back-ported them
to 1.18 last night, verified that these patches do indeed fix the issue,
and Johan has committed them to the tree.

My plan is to ship haskell platform with Cabal-1.18.1.4 (forthcoming),
which is just these patches beyond the Cabal-1.18.1.3 that comes with GHC
7.8.3. This will just be an additional version of the Cabal package, in
the package db that ships with the platform. I won't re-build 7.8.3 with a
patched Cabal. I don't think there will be any major issues(*) with users
getting a package db with two Cabal versions (cabal-install will be built
against the later one).

- Mark

(*) If some project depends on package ghc and Cabal, then they will get
1.18.1.3 (as that is what ghc, in 7.8.3, is compiled against in the
official bindist). If that project then invokes the haddock operation ...
it will double pre-process input if EnableExtension CPP is in
allExtensions of the project they are building. I think we're safe...

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johan.tibell at gmail.com  Wed Jul 23 15:48:29 2014
From: johan.tibell at gmail.com (Johan Tibell)
Date: Wed, 23 Jul 2014 17:48:29 +0200
Subject: Early draft spec of Strict language pragma
Message-ID: 

Hi!

I started a draft spec for the Strict language pragma we chatted about
during our call a while ago. I made the big mistake of writing it much
after the actual discussion, so I forgot most of the details.

https://ghc.haskell.org/trac/ghc/wiki/StrictPragma

Perhaps if you could ask questions that you don't think are answered
in the wiki page I can start fleshing it out more?

P.S. I've included our chat log at the bottom of the wiki page.

-- Johan

From mark.lentczner at gmail.com  Wed Jul 23 17:28:05 2014
From: mark.lentczner at gmail.com (Mark Lentczner)
Date: Wed, 23 Jul 2014 10:28:05 -0700
Subject: The haddock / clang problem revisited ...
root cause found... In-Reply-To: References: Message-ID: After discussion on #ghc, my revised plan is that HP 2014.2.0.0 will include just Cabal-1.18.1.3. While the double-pre-processing is "wrong" on all oses, it only systems with clang as the compiler. For the Mac OS X build of HP 2014.2.0.0, I'll be basing it on a "patched" version of GHC 7.8.3 that bumps Cabal to Cabal-1.18.1.4 (and relaxing the hptool build to not balk at this change). I can make that GHC bindist available as well, for those that want GHC w/o HP. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Wed Jul 23 18:32:21 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 23 Jul 2014 14:32:21 -0400 Subject: globalRegMaybe and ARM Message-ID: <87d2cwhqoa.fsf@gmail.com> Hello Simon, b0534f7 [1] and the subsequent reversion f0fcc41d7 touched `includes/CodeGen.Platform.hs`, the former removing a panic in the case of `globalRegMaybe` being undefined for a platform and replacing it with `Nothing`. Recently I've found that my ARM builds (with -fllvm) crash at this panic whereas they did not as of the 7.8 release. Given that b0534f7 was reverted this is no doubt due to another change that I haven't identified yet. Do you have any idea what is happening here? I'm currently attempting to build with a workaround setting `globalRegMaybe _ = Nothing`, although this smells suspicously like what would happen in the unregisterized case. My other hypothesis is that MACHREGS_arm should be added to the #if MACHREGS_i386 || MACHREGS_x86_64 || MACHREGS_sparc || MACHREGS_powerpc which smells more like what a registerised architecture should do and it seems the requisite macros are defined for ARM in `stg/MachRegs.h`. Whatever happens for ARM should probably also happen for AArch64. How should `globalRegMaybe` and `freeReg` be defined for platforms that rely exclusively on the LLVM backend? Both ARM and AArch64 appear to be doing the wrong thing at present. Cheers, - Ben [1] https://github.com/ghc/ghc/commit/b0534f78a73f972e279eed4447a5687bd6a8308e#diff-4899eba6e173d5811d08d6c312da7752R741 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From johan.tibell at gmail.com Wed Jul 23 18:58:17 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 23 Jul 2014 20:58:17 +0200 Subject: The haddock / clang problem revisited ... root cause found... In-Reply-To: References: Message-ID: A patched bindist sounds like a good idea. I've just uploaded Cabal-1.18.1.4 so you should be good to go. 
From mark.lentczner at gmail.com Thu Jul 24 06:57:52 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Wed, 23 Jul 2014 23:57:52 -0700 Subject: At long last: 2014.2.0.0 Release Candidate 1 Message-ID: *Even I find it hard to believe that it's finally here!* *I'm happy to announce the RC1 release of Haskell Platform 2014.2.0.0.* You can find builds for Windows (32bit and 64bit), Mac (64bit), as well as a source tarball, and a tag in the repo: - source tarball: haskell-platform-2014.2.0.0-RC1.tar.gz - source repo: https://github.com/haskell/haskell-platform/tree/2014.2.0.0-RC1 - windows 32bit: HaskellPlatform-2014.2.0.0-i386-RC1-setup.exe - windows 64bit: HaskellPlatform-2014.2.0.0-x86_64-RC1-setup.exe - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC1.signed.pkg - travis-ci build: 20 *Notes:* General - Built with the new shake based build system - cgi package not included as it doesn't build under 7.8, and no word from the maintainer in quite some time - hscolour is included as a tool, mostly as it is used to build the platform itself on win and mac... but we should probably officially decide to include it - the haskell-platform package itself is not in these releases... not sure if we actually need it as it doesn't contain anything other than dependencies. Windows - The Haskell Platform now provides a native Windows 64-bit installation (haskell-platform issue #54 ) - All included packages built without --enable-split-objs (GHC 7.8 FAQ ) - All included packages built without --enable-shared (GHC ticket #8228 ) - All html document links are relative and everything links nicely together now, including the master index and cross-package links - If other Haskell Platform installations are detected during the installation of Haskell Platform 2014.2.0.0, a warning is displayed to the user that this is not recommended since problems will arise due to how the PATH is used in many cases to find ancillary build tools - Using ghci to build an executable that links against a DLL may result in numerous warnings about symbols (GHC ticket #9297 ) - On Windows 64-bit, some unneeded python files are included (GHC ticket #9014 ) - The Haskell Platform for Windows, both 32-bit and 64-bit, now includes an updated version of the OpenGL Utility Toolkit (GLUT) from the FreeGLUT project, utilizing the pre-built distribution from http://www.transmissionzero.co.uk/software/freeglut-devel/ with the MinGW build (freeglut-MinGW-2.8.1-1.mp.zip). (haskell-platform issue #81 ) Mac OS X - Distributed with a build of 7.8.3 that differs from the released bindist in two ways: a) it was built split-objs for smaller resulting executables, b) it includes Cabal-1.18.1.4 which fixes a particularly nasty problem with haddock, -XCPP, and clang based systems. This ghc-7.8.3 bindist is available as well: - ghc-7.8.3-x86_64-apple-darwin-r3.tar.bz2 - haddocks finally cross link between packages correctly - includes a new experimental activate-hs command that can switch between multiple installed versions of the platform - includes a slightly updated uninstall-hs command - the cabal command is wrapped to provide a smoother file layout on the Mac... the wrapping only updates the ~/.cabal/config file the first time you run it.. please pay attention to its output - if you have a custom config file, you'll want to update it, as Cabal's defaults have changed - works on 10.7 (tested), 10.8 (assumed!), 10.9 (tested) - both with gcc and clang based Xcode installs. (Next RC will support back to 10.6!) - 64bit only build this time... 
anyone still want the 32bit build? Source tarball - Quite some changes from last time - notably now includes the new build machinery, not the old. - The build machinery is well tested (see the travis-ci link above) from a repo... not tested much from a source repo - It isn't clear the source repo is of much interest... anyone need it? anyone need anything beyond the haskell-platform.cabal file in it? Timetable - These can "soak" amongst the intreped on these lists for 48 hours. - On Saturday, I'll release any needed updated RCs, and announce to a wider audience - End of next week (from my vacation, I'll point out), we'll declare success and ship. ? Mark P.S.: sha256 sums for the reasonably wary: 860cba2d65bc1790ec3b8a0a62c8aaf19f5b18e9070aa1ec3c468b7c6ca82697 Haskell Platform 2014.2.0.0 64bit RC1.signed.pkg 8ca8a994334bf846ac6a41ada381c51634b2fce0d68dc70b2dadf59cb8748397 haskell-platform-2014.2.0.0-RC1.tar.gz b034e2daa595e0d3827860b062afca05f62df64434b0923559acc99bd82ef1e9 HaskellPlatform-2014.2.0.0-i386-RC1-setup.exe e31fe5e0cb9dca330e6f0c62ac49bf70b79f555bfe11e8f04c47739fef224145 HaskellPlatform-2014.2.0.0-x86_64-RC1-setup.exe 8e479d9dd504b1c603cd51f3be0fa57ecdc996e842d655e39d97faafff3c2d31 ghc-7.8.3-x86_64-apple-darwin-r3.tar.bz2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jul 24 07:32:23 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 07:32:23 +0000 Subject: A couple of GHC-API questions In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> Eric I'd like to help but I don't understand the question. What do you mean by "resolve" all the types that are imported from module A? If you have a TyCon (for A.Foo) in your hand, it has a Name. That Name tells you where it comes from. Inside it you will find its DataCons which also come from A. And so on. Later you say "is there a simple way to ask GHC to resolve the name x in the context of module m". You could mean * Imagine that the unqualified name "x" appeared in module m. How do I look it up, in m's top-level lexical environment. but I don?t think that is what you mean. I'm confused. Could you be more concrete? Possibly this may help? https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/NameType I'm cc'ing ghc-devs. Simon | -----Original Message----- | From: Eric Seidel [mailto:eseidel at cs.ucsd.edu] | Sent: 23 July 2014 19:01 | To: Simon Peyton Jones | Subject: A couple of GHC-API questions | | Hi Simon, | | I have a couple of questions about how to use GHC?s API properly. | There?s nothing urgent here as we?re making due just fine in | LiquidHaskell, but I?m pretty sure we?re hacking around GHC when we | don?t need to :) | | As you probably know, we basically use GHC to get the Core binders of a | module, and to resolve a bunch of names. Translating to Core has never | been an issue for us as far as I know, but our name-resolution code is | very haphazard. The core of our problems with name-resolution comes | down to resolving the names in specifications that have been imported | from another module, e.g. | | ? ? module A where | ? ? data Foo | ? ? mkFoo :: {v:Int | v >= 0} -> Foo | ? ? ... | | ? ? module B where | ? ? import qualified A | ? ? ... | | When we verify module B we have to first resolve all of the types that | were imported from module A, *but* we can?t do that in the context of | module B due to the qualified import. 
So I?ve been using a slightly | modified version of GHC?s DynamicLoading.lookupRdrNameInModule to | handle the change of context, but this seems to require that we use the | HscInterpreted flag, which causes further problems for a few of our | benchmarks that make use of the C FFI.. This method also only works for | modules where we actually have the source code available, i.e. we have | to use a different method of looking up names to attach specifications | to functions from the base libraries. | | So I guess my question is this: is there a simple way to ask GHC to | resolve the name x in the context of module m, without invoking the | code generator or any other part of GHC?s functionality? I suppose a | more principled solution would be to somehow hook into the .hi files | that GHC generates, but I?m not sure if that?s possible. | | Again, the code we have right now works, it?s just very fragile and I?d | like to clean things up. | | Thanks! | | Eric From svenpanne at gmail.com Thu Jul 24 08:25:19 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 24 Jul 2014 10:25:19 +0200 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: The source tarball is missing a few files for hptool: hptool/src/HaddockMaster.hs hptool/src/OS/Win.hs hptool/src/Releases.hs hptool/src/Releases2012.hs hptool/src/Releases2013.hs hptool/src/Templates.hs hptool/src/Website.hs I guess these are missing from https://github.com/haskell/haskell-platform/blob/2014.2.0.0-RC1/hptool/hptool.cabal. I've just copied those missing files from GitHub, let's see how the installation from source continues on an x64 Linux... From alexander at plaimi.net Thu Jul 24 08:38:45 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 24 Jul 2014 10:38:45 +0200 Subject: GHCi: Behave nicely on `-e`, like `ghc` and other programs In-Reply-To: References: Message-ID: <53D0C615.50702@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 24/07/14 01:29, Andrew Pennebaker wrote: > Like many programming language environments, GHC offers a handy > `-e` option for evaluating an expression, then returning to the > shell. > > $ ghc -e '2 + 2' > > 4 > > One would expect the interpreter, GHCi, to offer a similar flag, > but it surprisingly rejects it. > > ghci -e '2 + 2' > > ghc: on the commandline: cannot use `--interactive' with `-e' > > Usage: For basic information, try the `--help' option. > > > I think this behavior is quite unintuitive--when I pass `-e ` > to ghci, or pass `--interactive -e ` to ghc, I expect the > expression to be evaluated as the leading expression in an > interactive interpreter session. > > > Could we please tweak ghc like this to make it slightly more > intuitive when these flags are used together? Good idea! I'll look into it. But did you mean to post this to the ghc-devs list? I've CC'd them with your entire email intact. Also, in case I forget all about this or fail, you should add a ticket to our Trac system[0]. That way, this won't get lost. Thanks! 
[0] - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlPQxhUACgkQRtClrXBQc7XylgEAqbDrspchvHllACPf9lwrFTbN YenIDCggXbj98NJR5/8BAJUxpkej6T6QV4G+jk9R61nbUOEASEJ+0GBcHLBXpAsw =3/+R -----END PGP SIGNATURE----- From roma at ro-che.info Thu Jul 24 09:10:19 2014 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 24 Jul 2014 12:10:19 +0300 Subject: Early draft spec of Strict language pragma In-Reply-To: References: Message-ID: <20140724091019.GA1664@sniper> * Johan Tibell [2014-07-23 17:48:29+0200] > Hi! > > I started a draft spec for the Strict language pragma we chatted about > during our call a while ago. I made the big mistake of writing it much > after the actual discussion, so I forgot most of the details. > > https://ghc.haskell.org/trac/ghc/wiki/StrictPragma > > Perhaps if you could ask questions that you don't think are answered > in the wiki page I can start fleshing it out more? > > P.S. I've included our chat log at the bottom of the wiki page. Will case with an irrefutable pattern force the scrutinee, too? I.e. will case x of { pat -> y } desugar to case x of { pat -> x `seq` y } ? That'd be consistent with the rules for newtypes (which I find a bit strange), so I want to clarify. Will tuples be considered to have strict fields? Since they are defined "externally" (by analogy with the Just example), they shouldn't, right? Roman -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: Digital signature URL: From ezyang at mit.edu Thu Jul 24 09:16:00 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 24 Jul 2014 10:16:00 +0100 Subject: Update Cabal submodule to HEAD (1.21) In-Reply-To: <1405927352.1650.3.camel@joachim-breitner.de> References: <1405781324.18386.0.camel@joachim-breitner.de> <1405868313.13230.1.camel@joachim-breitner.de> <1405871720.13230.3.camel@joachim-breitner.de> <1405872495-sup-5825@sabre> <1405873136.13230.7.camel@joachim-breitner.de> <1405893428-sup-5913@sabre> <1405927352.1650.3.camel@joachim-breitner.de> Message-ID: <1406193284-sup-3397@sabre> For the record, Cabal is now up-to-date, we're using a different patch which relaxes the version constraint on process so that 7.6 bootstraps. Cheers, Edward Excerpts from Joachim Breitner's message of 2014-07-21 08:22:32 +0100: > Hi, > > Am Sonntag, den 20.07.2014, 22:57 +0100 schrieb Edward Z.Yang: > > Since this patch causes GHC HEAD to not bootstrap out of the box > > from GHC 7.6, I've reverted it for now. We'll have to cross > > this bridge sometime though. > > thanks. > > Cabal already has applied the patch to make the initial ghc-cabal binary > build: > https://github.com/haskell/cabal/commit/3ef560208721a050e91fd9e67a0066ce44b04ba2 > > Now all we need to do is to figure out how to tell the build system to > build process before Cabal. I started in > http://www.haskell.org/pipermail/ghc-devs/2014-July/005685.html > but that didn?t satisfiy the build system completely. Probably simply, > but I have been staring at the makefiles for too long already. 
> > Greetings, > Joachim > From johan.tibell at gmail.com Thu Jul 24 09:30:55 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 24 Jul 2014 11:30:55 +0200 Subject: Early draft spec of Strict language pragma In-Reply-To: <20140724091019.GA1664@sniper> References: <20140724091019.GA1664@sniper> Message-ID: On Thu, Jul 24, 2014 at 11:10 AM, Roman Cheplyaka wrote: > Will case with an irrefutable pattern force the scrutinee, too? > I.e. will > > case x of { pat -> y } > > desugar to > > case x of { pat -> x `seq` y } > > ? Yes. The user has to write ~x if he/she doesn't want that. > Will tuples be considered to have strict fields? Since they are defined > "externally" (by analogy with the Just example), they shouldn't, right? They will have lazy fields. I think we consider them as already defined in some other module (even though they are slightly magical in practice). -- Johan From simonpj at microsoft.com Thu Jul 24 10:43:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 10:43:50 +0000 Subject: Repositories page Message-ID: <618BE556AADD624C9C918AA5D5911BEF10437A4E@DB3PRD3001MB020.064d.mgd.msft.net> Austin I know that you have been working on * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules But what about the main "Repositories" page * https://ghc.haskell.org/trac/ghc/wiki/Repositories which is linked directly from the LH Margin bar? Is this page fully upd to date, including the "Full repository breakdown" table? Presumably not... I'm pretty sure that you said that Repositories/Upstream is fully out of date and should be retired. Also the Git/Submodules page also has a similar table, but with fewer columns. But some of the columns in the Repositories page ("reqd to build", "installed") are still relevant. Shouldn't there be just one master table with all this info? Could you perhaps clear this up --- in the wiki, not by replying to me :)? Thanks There is also https://ghc.haskell.org/trac/ghc/wiki/Commentary/Libraries, which is useful, and I think is still up to date. Indeed Edward and I have just updated it. But it's not that easy to find. It would be good if the Master Table (see above) had a column for each of the four properties of the boot packages that this page describes. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Jul 24 11:22:28 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 24 Jul 2014 13:22:28 +0200 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: 2014-07-24 10:25 GMT+02:00 Sven Panne : > [...] let's see how the installation from source continues on an x64 Linux... After the missing hptool sources have been copied manually, platform.sh seems to have completed successfully. But I'm a bit clueless how to proceed from that point: What I actually want is a complete installation of the HP under a given prefix. The stuff below haskell-platform-2014.2.0.0/build/target looks almost like that, but with a prefix of /usr/local. Would it be OK to just copy this? How can I change the prefix? The README doesn't describe this AFAICT. From svenpanne at gmail.com Thu Jul 24 11:48:21 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 24 Jul 2014 13:48:21 +0200 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: 2014-07-24 13:22 GMT+02:00 Sven Panne : > How can I change the prefix? 
The README doesn't describe this AFAICT. Hmmm, it looks like the install paths are hardwired in hptool/src/OS/Internal.hs. Do I see this correctly? If yes, that's a bit unfortunate, an additional parameter to platform.sh or an environment variable would be nice. Furthermore, the executables are scattered around: build/target/usr/local/haskell-platform/2014.2.0.0/lib/alex-3.1.3/bin/alex build/target/usr/local/haskell-platform/2014.2.0.0/lib/cabal-install-1.18.0.5/bin/cabal build/target/usr/local/haskell-platform/2014.2.0.0/lib/happy-1.19.4/bin/happy build/target/usr/local/haskell-platform/2014.2.0.0/lib/hscolour-1.20.3/bin/HsColour Even if one copies this to /usr/local, extending the PATH would be tiresome. Is this intended or is a common bin directory below usr/local/haskell-platform/2014.2.0.0 missing? The whole layout below build/target is somehow un-Linux-esque (it's basically structured the Windows way): Normal packages put their binaries below /usr/bin, their libs below /usr/lib, their docs below /usr/share/doc etc. How are other packagers on this list handling this? From eir at cis.upenn.edu Thu Jul 24 11:57:02 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 24 Jul 2014 07:57:02 -0400 Subject: A couple of GHC-API questions In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Eric, This answer seems too easy, but I'll try anyway: are you looking for RnEnv.lookupOccRn :: RdrName -> RnM Name? According to the comments in the RdrName module, a RdrName is essentially a string with an optional module prefix. lookupOccRn will take a RdrName and convert it into a Name, which then can be lookup up perhaps a little more easily. You can use the RnM monad via functions in TcRnMonad. Is this what you want? Richard On Jul 24, 2014, at 3:32 AM, Simon Peyton Jones wrote: > Eric > > I'd like to help but I don't understand the question. What do you mean by "resolve" all the types that are imported from module A? > > If you have a TyCon (for A.Foo) in your hand, it has a Name. That Name tells you where it comes from. Inside it you will find its DataCons which also come from A. And so on. > > Later you say "is there a simple way to ask GHC to resolve the name x in the context of module m". You could mean > * Imagine that the unqualified name "x" appeared in module m. How do I look it up, in m's top-level lexical environment. > but I don?t think that is what you mean. > > I'm confused. Could you be more concrete? > Possibly this may help? https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/NameType > > I'm cc'ing ghc-devs. > > Simon > > | -----Original Message----- > | From: Eric Seidel [mailto:eseidel at cs.ucsd.edu] > | Sent: 23 July 2014 19:01 > | To: Simon Peyton Jones > | Subject: A couple of GHC-API questions > | > | Hi Simon, > | > | I have a couple of questions about how to use GHC?s API properly. > | There?s nothing urgent here as we?re making due just fine in > | LiquidHaskell, but I?m pretty sure we?re hacking around GHC when we > | don?t need to :) > | > | As you probably know, we basically use GHC to get the Core binders of a > | module, and to resolve a bunch of names. Translating to Core has never > | been an issue for us as far as I know, but our name-resolution code is > | very haphazard. 
The core of our problems with name-resolution comes > | down to resolving the names in specifications that have been imported > | from another module, e.g. > | > | module A where > | data Foo > | mkFoo :: {v:Int | v >= 0} -> Foo > | ... > | > | module B where > | import qualified A > | ... > | > | When we verify module B we have to first resolve all of the types that > | were imported from module A, *but* we can?t do that in the context of > | module B due to the qualified import. So I?ve been using a slightly > | modified version of GHC?s DynamicLoading.lookupRdrNameInModule to > | handle the change of context, but this seems to require that we use the > | HscInterpreted flag, which causes further problems for a few of our > | benchmarks that make use of the C FFI.. This method also only works for > | modules where we actually have the source code available, i.e. we have > | to use a different method of looking up names to attach specifications > | to functions from the base libraries. > | > | So I guess my question is this: is there a simple way to ask GHC to > | resolve the name x in the context of module m, without invoking the > | code generator or any other part of GHC?s functionality? I suppose a > | more principled solution would be to somehow hook into the .hi files > | that GHC generates, but I?m not sure if that?s possible. > | > | Again, the code we have right now works, it?s just very fragile and I?d > | like to clean things up. > | > | Thanks! > | > | Eric > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From george.colpitts at gmail.com Thu Jul 24 12:09:21 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 24 Jul 2014 09:09:21 -0300 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: Installed on the Mac, looks good, not sure if following is significant: ghc-pkg check Warning: library-dirs: /opt/local/lib/ doesn't exist or isn't a directory Warning: include-dirs: /opt/local/include/ doesn't exist or isn't a directory Warning: haddock-interfaces: /Users/gcolpitts/Library/Haskell/ghc-7.8.3/lib/Cabal-1.20.0.1/doc/html/Cabal.haddock doesn't exist or isn't a file ?did cabal install -j3 of threadscope, criterion, hlint, ghc-mod, quickcheck, smallcheck and smartcheck? On Thu, Jul 24, 2014 at 3:57 AM, Mark Lentczner wrote: > *Even I find it hard to believe that it's finally here!* > > *I'm happy to announce the RC1 release of Haskell Platform 2014.2.0.0.* > > You can find builds for Windows (32bit and 64bit), Mac (64bit), as well as > a source tarball, and a tag in the repo: > > > - source tarball: haskell-platform-2014.2.0.0-RC1.tar.gz > > - source repo: > https://github.com/haskell/haskell-platform/tree/2014.2.0.0-RC1 > - windows 32bit: HaskellPlatform-2014.2.0.0-i386-RC1-setup.exe > > - windows 64bit: HaskellPlatform-2014.2.0.0-x86_64-RC1-setup.exe > > - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC1.signed.pkg > > - travis-ci build: 20 > > > *Notes:* > > General > > - Built with the new shake based build system > - cgi package not included as it doesn't build under 7.8, and no word > from the maintainer in quite some time > - hscolour is included as a tool, mostly as it is used to build the > platform itself on win and mac... but we should probably officially decide > to include it > - the haskell-platform package itself is not in these releases... 
not > sure if we actually need it as it doesn't contain anything other than > dependencies. > > Windows > > - The Haskell Platform now provides a native Windows 64-bit > installation (haskell-platform issue #54 > ) > - All included packages built without --enable-split-objs (GHC 7.8 FAQ > ) > - All included packages built without --enable-shared (GHC ticket #8228 > ) > - All html document links are relative and everything links nicely > together now, including the master index and cross-package links > - If other Haskell Platform installations are detected during the > installation of Haskell Platform 2014.2.0.0, a warning is displayed to the > user that this is not recommended since problems will arise due to how the > PATH is used in many cases to find ancillary build tools > - Using ghci to build an executable that links against a DLL may > result in numerous warnings about symbols (GHC ticket #9297 > ) > - On Windows 64-bit, some unneeded python files are included (GHC > ticket #9014 ) > - The Haskell Platform for Windows, both 32-bit and 64-bit, now > includes an updated version of the OpenGL Utility Toolkit (GLUT) from the > FreeGLUT project, utilizing the pre-built distribution from > http://www.transmissionzero.co.uk/software/freeglut-devel/ with the > MinGW build (freeglut-MinGW-2.8.1-1.mp.zip). (haskell-platform issue > #81 ) > > Mac OS X > > - Distributed with a build of 7.8.3 that differs from the released > bindist in two ways: a) it was built split-objs for smaller resulting > executables, b) it includes Cabal-1.18.1.4 which fixes a particularly nasty > problem with haddock, -XCPP, and clang based systems. This ghc-7.8.3 > bindist is available as well: > - ghc-7.8.3-x86_64-apple-darwin-r3.tar.bz2 > > - haddocks finally cross link between packages correctly > - includes a new experimental activate-hs command that can switch > between multiple installed versions of the platform > - includes a slightly updated uninstall-hs command > - the cabal command is wrapped to provide a smoother file layout on > the Mac... the wrapping only updates the ~/.cabal/config file the first > time you run it.. please pay attention to its output - if you have a custom > config file, you'll want to update it, as Cabal's defaults have changed > - works on 10.7 (tested), 10.8 (assumed!), 10.9 (tested) - both with > gcc and clang based Xcode installs. (Next RC will support back to 10.6!) > - 64bit only build this time... anyone still want the 32bit build? > > Source tarball > > > - Quite some changes from last time - notably now includes the new > build machinery, not the old. > - The build machinery is well tested (see the travis-ci link above) > from a repo... not tested much from a source repo > - It isn't clear the source repo is of much interest... anyone need > it? anyone need anything beyond the haskell-platform.cabal file in it? > > Timetable > > - These can "soak" amongst the intreped on these lists for 48 hours. > - On Saturday, I'll release any needed updated RCs, and announce to a > wider audience > - End of next week (from my vacation, I'll point out), we'll declare > success and ship. > > ? 
Mark > > P.S.: sha256 sums for the reasonably wary: > 860cba2d65bc1790ec3b8a0a62c8aaf19f5b18e9070aa1ec3c468b7c6ca82697 Haskell > Platform 2014.2.0.0 64bit RC1.signed.pkg > 8ca8a994334bf846ac6a41ada381c51634b2fce0d68dc70b2dadf59cb8748397 > haskell-platform-2014.2.0.0-RC1.tar.gz > b034e2daa595e0d3827860b062afca05f62df64434b0923559acc99bd82ef1e9 > HaskellPlatform-2014.2.0.0-i386-RC1-setup.exe > e31fe5e0cb9dca330e6f0c62ac49bf70b79f555bfe11e8f04c47739fef224145 > HaskellPlatform-2014.2.0.0-x86_64-RC1-setup.exe > 8e479d9dd504b1c603cd51f3be0fa57ecdc996e842d655e39d97faafff3c2d31 > ghc-7.8.3-x86_64-apple-darwin-r3.tar.bz2 > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Jul 24 12:10:03 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 24 Jul 2014 14:10:03 +0200 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: Hmmm, even after copying the stuff below build/target to /usr/local and extending my PATH things don't work, because the packages are not registered with GHC. Somehow I get the impression that I misunderstand how things are supposed to work, so let's repeat my use case: * Download the right bindist from http://www.haskell.org/ghc/download_ghc_7_8_3#x86_64linux * Download and unpack the 2014.2.0.0 RC1 (and add the missing hptool files). * Do ??? to compile and install GHC 7.8.3 and the 2014.2.0.0 to a given prefix, e.g. ~/local. So my question is: What exactly is "???"? :-) From jan.stolarek at p.lodz.pl Thu Jul 24 12:54:56 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 24 Jul 2014 14:54:56 +0200 Subject: Understanding core2core optimisation pipeline Message-ID: <201407241454.56547.jan.stolarek@p.lodz.pl> Devs, I'm trying to understand how the core2core pipeline works. Sadly, we don't have a wiki page about this so the "only" source of information are the papers and the source code. Papers give pretty much detail about each transformation in separate but none of the papers gives a comprehensive and up-to-date overview of how the whole pipeline is structured. My questions are based on reading the user documentation and the following papers: [1] - "The Glasgow Haskell Compiler" from The Architecture of Open Source Application, vol. 2 [2] - "Compilation by Transformation in Non-Strict Functional Languages", PhD by Santos [3] - "Secrets of the Glasgow Haskell Compiler inliner" [4] - "A transformation-based optimiser for Haskell" [5] - "Modular, Higher-Order Cardinality Analysis in Theory and Practice" [6] - "Let-floatig: moving bindings to give faster programs" [7] - "Playing by the Rules: Rewriting as a practical optimisation technique in GHC" I know there are several papers missing from this list, eg. "Constructed Product Result Analysis for Haskell" or "Call-pattern specialisation for Haskell programs". The reason is that these optimisations are beyond the scope of what I'm doing at the moment (or so I believe). This mail basically asks just one question: what is the order of optimizations pefromed on Core? Since this question is pretty big and general I've separated it into smaller questions that arose from reading the above papers, documentation, and experimenting with GHC. Now the detailed questions: 1. What is the difference between a "simplifier iteration" and "simplifier phase"? 
Section 7.20.6.5 of the user guide mentions phases but I believe that
iterations are not explained anywhere. My best guess, expressed in
pseudo-code, is this (sorry about the imperative style):

    foreach (i in iterations) {
        // some optimisations here?
        foreach (p in phases) {
            // ...optimisations here
        }
        // some optimisations here?
    }

1a. What is the default maximum iterations count? User documentation does
not specify that.

2. How can I observe the effects of `-ddump-simpl-phases`? I tried
compiling several different programs and this flag seems to have no effect
(ie. nothing gets printed).

3. Cardinality analysis and inlining: cardinality analysis can determine
that a let binding is used exactly once. Can the inliner re-use this
information from the cardinality analysis or does it recompute it per [3],
section 3.1?

4. I've compiled a sample program using `-dverbose-core2core` and got the
following phases:

  - Desugar (after optimization)
  - Simplifier (Phase = InitialPhase [Gentle])
  - Specialise
  - Levels added
  - Float out
  - Float inwards
  - Simplifier (Phase = 2 [main])
  - Simplifier (Phase = 1 [main])
  - Simplifier (Phase = 0 [main])
  - Demand analysis
  - Worker Wrapper binds
  - Simplifier (Phase = 0 [post-worker-wrapper])
  - Levels added
  - Float out
  - Common sub-expression
  - Float inwards
  - Simplifier (Phase = 0 [final])
  - Tidy Core
  - CorePrep

This raises lots of questions:

4a. The first phase is "Desugar (after optimization)". What optimizations
are performed during desugaring?

4b. I'm not sure whether I'm looking at a single iteration of core2core
transformation or at multiple ones. Some passes are performed several
times (Float out, Float inwards), which suggests that there might be many
iterations here. On the other hand simplifier phases are decreasing
towards 0, which looks as if it was one core2core iteration. My assumption
here is that every time a new core2core iteration starts the simplifier
phases are counted anew from 2 towards 0. Is that correct?

4c. Why are there several 0 phases of the Simplifier? I find it confusing.

4d. I understand that some passes can be enabled or disabled using
command-line options. Can the decision to run some passes be made
dynamically by the compiler (eg. to run extra simplifier passes)?

4e. Are there more phases that could appear here, ie. they were omitted
with -O?

4f. "Levels added" pass before the "Float out" pass: my guess is that this
is preparation for the full laziness transform. So, is full laziness
performed as part of the "Float out" pass?

A general note is that I am confused by many Simplifier passes being
interleaved with other passes. I expected that simplifier phases would be
grouped into a single pass, as speculated in question 1.

5. What optimizations *exactly* are performed by the Simplifier? I assume
that most of what's described in chapter 3 of [2]: beta reduction, let
elimination, case elimination, case floating, constant folding and eta
expansion. I'm not sure about floating let outwards and inwards - [1],
pg. 7, says these are in a pass separate from the simplifier.
`-dverbose-core2core` seems to confirm that since it reveals separate
"Float out" and "Float inwards" passes.

6. [4], pg. 31, mentions the Deforestation optimisation. Is everything
described in that "Deforestation" section subsumed by cardinality analysis
([5], end of section 2.1 and section 7.1)? If not then when is
deforestation performed?

7. [5], section 6.1 says: "We run the analysis twice, once in the middle
of the optimisation pipeline and once near the end".
When exactly in the middle of the pipeline? Between which passes? This does not show up with `-dverbose-core2core` (or at least it is not explicitly named). 8. How does the rules rewriting fit into the picture? Section 7.20.6.5 of the User Guide and section 4.1 of [7] explain the interaction between rules and inlining and my guess is that both are performed by the Simplifier. Again the "simplifier phases/iterations" distinction puzzles me as to what exactly is happening when. Within a single phase is the inlining happening before rewriting or vice versa? I know that all of the above questions can be answered by looking at the source code for sufficiently long. This is actually what I'm planning to do next but if anyone could help me by answering some of these questions this would certainly save me some time. My plan is to gather up the answers on a wiki page. Janek From mail at joachim-breitner.de Thu Jul 24 13:53:04 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 24 Jul 2014 15:53:04 +0200 Subject: GHC contribution guidelines and infrastructure talk on 6th September at HIW? In-Reply-To: <1406041974.2079.17.camel@joachim-breitner.de> References: <53CE38F3.9080201@mathematik.uni-marburg.de> <1406041974.2079.17.camel@joachim-breitner.de> Message-ID: <1406209984.4078.5.camel@joachim-breitner.de> Hi, Am Dienstag, den 22.07.2014, 17:12 +0200 schrieb Joachim Breitner: > Am Dienstag, den 22.07.2014, 12:12 +0200 schrieb Jost Berthold: > > (Sorry for joining this late... I figured we would be in dialogue off > > the list eventually) > > > > Joachim wrote and posted a proposal, and I think this proposal is indeed > > a good idea (and one of the purposes of HIW, definite yes). > > > > We shall make room for it in the programme, possibly in the last > > session, which can turn into the "Haskell release discussion evening". > > Jost added my proposal to EasyChair, but it turns out that I scheduled > by return flight a bit too early (taking off at 19:10) and might not be > able to attend the last session. > > Who is able to fill in for me if the infrastructure talk is scheduled > here? Maybe Simon M, or Austin, or Herbert? Of some coalition thereof FYI: The talk got accepted. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- An embedded message was scrubbed... From: HIW 2014 Subject: HIW 2014 notification for paper 15 Date: Thu, 24 Jul 2014 15:18:41 +0200 Size: 3389 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ezyang at mit.edu Thu Jul 24 13:56:22 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 24 Jul 2014 14:56:22 +0100 Subject: Removing GHC's dependency on Cabal Message-ID: <1406208563-sup-6481@sabre> Hello all, I know Duncan and SPJ have been keen on removing GHC's dependency on Cabal for some time now. Simon and I were chatting about the subject today, and we wanted to propose an alternative way of doing the remodularization. Here are diagrams of the proposals: http://web.mit.edu/~ezyang/Public/ghc-cabal-refactor.pdf As I understand it, Duncan's proposal is to take the current constellation of libraries, and just remove the dependency to Cabal from GHC and bin-package-db, duplicating data structures as necessary. 
ghc-pkg is now responsible for converting between Cabal's format and GHC's format. Simon suggested that this 'ghc-pkg' functionality (which specifies the database format and how to handle it) should be placed in a library of its own. So in the third graph, we have a new package ghc-db which everyone depends on. Cabal no longers shells out to ghc-pkg to modify database, instead it directly converts to ghc-db's format and then invokes library functions in the library. We were wondering if there was any reason to prefer the former situation over the latter. One answer might be that Cabal is less keen to have a dependency on a very GHC specific library (although the ghc-pkg dependency is quite a fairly tightly coupled one.) Cheers, Edward From mark.lentczner at gmail.com Thu Jul 24 13:59:19 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Thu, 24 Jul 2014 06:59:19 -0700 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: On Thu, Jul 24, 2014 at 5:09 AM, George Colpitts wrote: > Installed on the Mac, looks good, ... > ?did cabal install -j3 of threadscope, criterion, hlint, ghc-mod, > quickcheck, smallcheck and smartcheck? > > That's all a good sign... not sure if following is significant: > > ghc-pkg check > Warning: library-dirs: /opt/local/lib/ doesn't exist or isn't a directory > Warning: include-dirs: /opt/local/include/ doesn't exist or isn't a > directory > Warning: haddock-interfaces: > /Users/gcolpitts/Library/Haskell/ghc-7.8.3/lib/Cabal-1.20.0.1/doc/html/Cabal.haddock > doesn't exist or isn't a file > > Something is amiss in how things ended up on your machine. I've just double checked the contents of the Mac distro and /opt doesn't appear in any of the package registrations. Furthermore, unless you built Cabal-1.20.0.1, it shouldn't be in your package db. My rough guess is that you've got a 7.8.3 compatible package db left over from a prior install somewhere. Can you e-mail me directly the output of: which cabal cabal --version which ghc ghc --version which ghc-pkg ghc-pkg --version ghc-pkg list ghc-pkg dump Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Thu Jul 24 14:02:12 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Thu, 24 Jul 2014 07:02:12 -0700 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: On Thu, Jul 24, 2014 at 5:30 AM, Brandon Allbery wrote: > Errrr. Whoever is making the Mac bindist should take care to avoid any > MacPorts (or for that matter Homebrew) components slipping in to it; this > will cause problems down the road. > That would be makin' that bindist! And your advice about other package systems is the same as my own. The bindist used in the release was built on a very clean machine that has only stock OS X and stock Xcode installed... never Homebrew or MacPorts. That machine doesn't even have a /opt directory! - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Thu Jul 24 14:03:13 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 24 Jul 2014 16:03:13 +0200 Subject: Removing GHC's dependency on Cabal In-Reply-To: <1406208563-sup-6481@sabre> References: <1406208563-sup-6481@sabre> Message-ID: Hi, On 24 July 2014 15:56, Edward Z. Yang wrote: > We were wondering if there was any reason to prefer the former > situation over the latter. 
One answer might be that Cabal is less keen > to have a dependency on a very GHC specific library (although the > ghc-pkg dependency is quite a fairly tightly coupled one.) If this new proposal will allow us not to ship Cabal with GHC, then I think it's preferable to the old one. For example, Haskell Platform is often forced to use an old version of Cabal because the Cabal version is determined by the GHC version. From mail at joachim-breitner.de Thu Jul 24 14:07:14 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 24 Jul 2014 16:07:14 +0200 Subject: Removing GHC's dependency on Cabal In-Reply-To: <1406208563-sup-6481@sabre> References: <1406208563-sup-6481@sabre> Message-ID: <1406210834.4078.8.camel@joachim-breitner.de> Hi, On Thursday, 24.07.2014, at 14:56 +0100, Edward Z. Yang wrote: > We were wondering if there was any reason to prefer the former > situation over the latter. One way to decide that is to ask "What is the more stable interface?" I.e. under what circumstances will upgrading Cabal require upgrading packages depended upon by ghc. So while Duncan's Proposal has no such dependency, in Simon's proposal there is one. Will ghc-db's interface be stable enough that the Cabal developers will be happy to build against a very old version of it? Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mark.lentczner at gmail.com Thu Jul 24 14:24:13 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Thu, 24 Jul 2014 07:24:13 -0700 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: On Thu, Jul 24, 2014 at 1:25 AM, Sven Panne wrote: > The source tarball is missing a few files for hptool: > I'll try to catch them the next round... or pull requests on github welcome! On Thu, Jul 24, 2014 at 4:22 AM, Sven Panne wrote: > ...But I'm a bit > clueless how to proceed from that point: What I actually want is a > complete installation of the HP under a given prefix. Your build is using the *genericOS* setup in src/OS/Internal.hs. In the past, linux/unix distribution packagers have worked from the src tarball, and when I queried in the past, none said they used the build scripts HP had, but used their own. And in turn, no one put in any effort over the last six months to add a build description using the new build system other than OS X and Windows. SO - what you, and we, need is a src/OS/GenericLinux.hs file so that it does what you/we think is the generic layout for where things should live. For your own, personal install, you could start by just hacking on the *genericOS* structure in src/OS/Internal.hs. That structure is currently set up to build something which is unpacked over /, which is probably what most people *don't want*, but is agnostic to the variety of layouts possible. On Thu, Jul 24, 2014 at 4:48 AM, Sven Panne wrote: > Hmmm, it looks like the install paths are hardwired in > hptool/src/OS/Internal.hs. Do I see this correctly? If yes, that's a > bit unfortunate, an additional parameter to platform.sh or an > environment variable would be nice. Yes - it would need to be added to Main.hs, and then plumbed to the right point in the code. Sounds great....
I can haz pull request plz? (you knew that was coming... :-) ) > Furthermore, the executables are > scattered around: > > > build/target/usr/local/haskell-platform/2014.2.0.0/lib/alex-3.1.3/bin/alex > > build/target/usr/local/haskell-platform/2014.2.0.0/lib/cabal-install-1.18.0.5/bin/cabal > > build/target/usr/local/haskell-platform/2014.2.0.0/lib/happy-1.19.4/bin/happy > > build/target/usr/local/haskell-platform/2014.2.0.0/lib/hscolour-1.20.3/bin/HsColour > ... Normal packages put their binaries below /usr/bin, their > libs below /usr/lib, their docs below /usr/share/doc etc. How are > other packagers on this list handling this? > My rationale is that if you install things under /usr/bin, /usr/lib, /usr/share/doc - then it becomes essentially impossible to remove easily: There are just far too many files in those trees intermingled with files from other things. Same is true if you do /usr/local/... My preferred approach is to put the whole installation under one prefix, say /usr/loca/ghc/ghc-7.8.3/, in there there are /bin, /lib, and /share. As you point out, this is a pain for PATH... so what I do is have an "activation" script that symlinks all the executables in that tree up and over into /usr/bin (or /usr/local/bin). Now clean up is a snap: Remove the ghc- tree you want gone, and then remove all the symlinks in /usr/bin and/or /usr/local/bin that are now dangling into it. Does this make sense for simple "untar, run a script" kind of distributions? For people packaging for OS distributions, using a host OS package manager, I suspect they prefer the "overlay it all" approach, because the package managers take care of the chore of remembering which files go with which thing and handle uninstallation. I could easily imagine two or more styles under src/OS for hptool: PosixLocalTarball.hs, PosixPackaged.hs, Ubuntu.hs, FreeBSD.hs, etc... - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Thu Jul 24 14:30:02 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 24 Jul 2014 16:30:02 +0200 Subject: Removing GHC's dependency on Cabal In-Reply-To: <1406210834.4078.8.camel@joachim-breitner.de> References: <1406208563-sup-6481@sabre> <1406210834.4078.8.camel@joachim-breitner.de> Message-ID: Hi, On 24 July 2014 16:07, Joachim Breitner wrote: > > So while Duncan?s Proposal has no such dependency, in Simon?s proposal > there is one. Will ghc-db?s interface be stable enough that the Cabal > developers will be happy to build against a very old version of it? Cabal's policy is to support versions of GHC that are up to 3 years old, so I think this could be solved with a combination of #ifdefs and automatic testing. From simonpj at microsoft.com Thu Jul 24 14:55:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 14:55:03 +0000 Subject: Removing GHC's dependency on Cabal In-Reply-To: <1406210834.4078.8.camel@joachim-breitner.de> References: <1406208563-sup-6481@sabre> <1406210834.4078.8.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF1043804B@DB3PRD3001MB020.064d.mgd.msft.net> The way I am thinking of it is this. In "Simon's proposal" in http://web.mit.edu/~ezyang/Public/ghc-cabal-refactor.pdf the ghc-db library is simply a Haskell library that gives a programmatic way to interact with GHC's installed package database(s). GHC needs that. ghc-pkg needs that. cabal needs that. There should be one thing that supplies it. 
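To make that concrete, here is a deliberately tiny sketch of the kind of programmatic interface such a ghc-db library could expose. Everything below is invented for illustration (the module name GhcDb, the PackageInfo record and the Read/Show on-disk format are stand-ins, not the real bin-package-db or ghc-pkg code); the real registration record carries far more information, but the shape of the API is the point:

    -- Hypothetical sketch only; names and on-disk format are made up.
    module GhcDb
      ( PackageInfo(..), PackageDb
      , readDb, writeDb, register, unregister, lookupPkg
      ) where

    import           Data.Map (Map)
    import qualified Data.Map as Map

    -- One registration record per installed package.
    data PackageInfo = PackageInfo
      { installedId :: String     -- e.g. "base-4.7.0.1-<hash>"
      , sourceName  :: String     -- e.g. "base"
      , depends     :: [String]   -- installed ids of dependencies
      , exposed     :: Bool
      } deriving (Show, Read, Eq)

    -- The database itself, keyed by installed package id.
    newtype PackageDb = PackageDb (Map String PackageInfo)

    readDb :: FilePath -> IO PackageDb
    readDb path = do
      txt <- readFile path
      let pkgs = read txt :: [PackageInfo]   -- toy format: Show/Read
      return (PackageDb (Map.fromList [ (installedId p, p) | p <- pkgs ]))

    writeDb :: FilePath -> PackageDb -> IO ()
    writeDb path (PackageDb db) = writeFile path (show (Map.elems db))

    register :: PackageInfo -> PackageDb -> PackageDb
    register p (PackageDb db) = PackageDb (Map.insert (installedId p) p db)

    unregister :: String -> PackageDb -> PackageDb
    unregister ipid (PackageDb db) = PackageDb (Map.delete ipid db)

    lookupPkg :: String -> PackageDb -> Maybe PackageInfo
    lookupPkg ipid (PackageDb db) = Map.lookup ipid db

GHC itself, ghc-pkg and Cabal's GHC backend would then all link against this one library and call register/unregister directly.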
Cabal currently shells out to ghc-pkg, which in turn mandates a special file format (the .conf file) that Cabal uses to communicate its wishes to ghc-pkg. This needs syntax, a parser, a pretty printer, and is, to all intents and purposes just as stable, or unstable, as a Haskell API to ghc-db would be. To me it seems simple and obvious! Why are we going round the houses to do something so simple? Simon | -----Original Message----- | From: cabal-devel [mailto:cabal-devel-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 24 July 2014 15:07 | To: ghc-devs at haskell.org | Cc: cabal-devel | Subject: Re: Removing GHC's dependency on Cabal | | Hi, | | | Am Donnerstag, den 24.07.2014, 14:56 +0100 schrieb Edward Z.Yang: | > We were wondering if there was any reason to prefer the former | > situation over the latter. | | One way to decide that is to ask ?What is the more stable interface?? | I.e. under what circumstances will upgrading Cabal require upgrading | packages depended upon by ghc. | | So while Duncan?s Proposal has no such dependency, in Simon?s proposal | there is one. Will ghc-db?s interface be stable enough that the Cabal | developers will be happy to build against a very old version of it? | | Greetings, | Joachim | | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From ezyang at mit.edu Thu Jul 24 14:57:05 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 24 Jul 2014 15:57:05 +0100 Subject: Changing the -package dependency resolution algorithm Message-ID: <1406210353-sup-3241@sabre> Right now, GHC has a very complex and hard to explain algorithm for picking packages from the package database when you give it a pile of -package/-package-id/-{hide,ignore,trust,distrust}-package flags. Roughly, it currently does something like this. 1. Concatenate all of the package databases into a giant list, with system packages first and then user packages later, removing duplicate entries with the same installed package ID (preferring later packages). Call this PACKAGES. 2. Take all the -package-id flags and compute their transitive closure (call this set P). 3. Calculate the set of shadowed packages--the set of installed packages for which there exists another package with the same package ID (foo-0.1) which is in P or is later in the package stack. 4. Calculate the set of ignored packages---the set of packages which match all -ignore-package flags 5. Filter out shadowed and ignored packages from the list of packages, calling the result ALL_PACKAGES. 6. Calculate the set of broken packages---the set of packages not contained in the least fixed point of the relation that takes a set of packages, and adds all packages whose dependencies are satisfied, from ALL_PACKAGES. 7. Process the package flags in order, operating on PACKAGES (not ALL_PACKAGES). For -package/-hide-package, take the package with the *latest* version that matches the flag and are not broken (since this includes shadowed packages, the result is unique) and toggle it to be exposed/unexposed, and hide all the other packages. For -trust-package/-distrust-package, toggle the trusted bit for all instances in the database. 8. If we have exposed multiple versions of the same package name, hide all the older versions What a mouthful! I have no idea, given a set of flags, how this works. 
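To give a flavour of how involved this is, here is a small, self-contained sketch of just one of these steps: the step-6 fixed point that decides which packages count as broken. The types and names (Pkg, InstalledId, okPackages, brokenPackages) are invented for illustration; this is only the idea, not GHC's actual code:

    import           Data.Set (Set)
    import qualified Data.Set as Set

    type InstalledId = String

    data Pkg = Pkg { pkgId :: InstalledId, pkgDeps :: [InstalledId] }

    -- Least fixed point: keep adding packages all of whose dependencies
    -- are already known to be OK; anything never added is "broken".
    okPackages :: [Pkg] -> Set InstalledId
    okPackages pkgs = go Set.empty
      where
        go ok | ok' == ok = ok
              | otherwise = go ok'
          where
            ok' = Set.fromList
                    [ pkgId p | p <- pkgs
                              , all (`Set.member` ok) (pkgDeps p) ]

    brokenPackages :: [Pkg] -> [Pkg]
    brokenPackages pkgs = [ p | p <- pkgs, not (pkgId p `Set.member` ok) ]
      where ok = okPackages pkgs

So a package is marked broken as soon as any of its dependencies is missing from ALL_PACKAGES (for example because it was shadowed or ignored in the earlier steps), or is itself broken.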
So here is an alternate proposal for an alternate way of handling these flags *when starting from an empty database* (e.g. -hide-all-packages) Suppose we maintain a set of selected packages, which starts off empty. Process each package flag in turn. For -package and -package-id, get the set of installed packages which match the flag. (If multiple package names apply, process each in turn as if it were a separate flag.) Compute the transitive closure of dependencies for all of them, and filter out all choices which have dependencies which are inconsistent with the current set of selected packages. Consistency without multi-instances is a mapping of a package name to an installed package. If there is still more than one choice, tiebreak by version, which database and time of install. (The latter tiebreak should not be necessary until we allow multiple instances of a package with the same package ID.) For -hide-package, get the set of packages which match and hide them all; for -ignore-package, hide the transitive closure of dependencies of it. For -trust,distrust-package, toggle for all matching packages as before. Here are some differences in behavior between this and the previous scheme: - It's no longer valid to indirectly depend on two different versions of the same package. Most of the time, users didn't want this anyway. Note that the current scheme prevents directly depending on two different versions by shadowing the old ones. - We can easily extend it to handle multi-instances by relaxing the consistency check. - It assumes *-hide-all-packages* at the beginning. This scheme probably works less well without that: now we need some consistent view of the database to start with. What do people think? Cheers, Edward From simonpj at microsoft.com Thu Jul 24 15:00:58 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 15:00:58 +0000 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <1406210353-sup-3241@sabre> References: <1406210353-sup-3241@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF104380E7@DB3PRD3001MB020.064d.mgd.msft.net> The background here is our (Edward + me) beliefs that * The current situation is complicated, and completely un-documented * Its functionality is not used. The dominant modes of use are - Cabal: hide-all-packages and then say exactly which ones to expose - Users: use -package (but not -package-id etc) in very simple-minded way So the cost/benefit ratio is extremely poor. Better to implement something extremely simple (for the common case), relying on Cabal's super solver for planning a good solution to more complex situations. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Edward Z.Yang | Sent: 24 July 2014 15:57 | To: ghc-devs | Subject: Changing the -package dependency resolution algorithm | | Right now, GHC has a very complex and hard to explain algorithm for | picking packages from the package database when you give it a pile of - | package/-package-id/-{hide,ignore,trust,distrust}-package flags. | Roughly, it currently does something like this. | | 1. Concatenate all of the package databases into a giant list, with | system packages first and then user packages later, removing duplicate | entries with the same installed package ID (preferring later packages). | Call this PACKAGES. | | 2. Take all the -package-id flags and compute their transitive closure | (call this set P). | | 3. 
Calculate the set of shadowed packages--the set of installed | packages for which there exists another package with the same package | ID | (foo-0.1) which is in P or is later in the package stack. | | 4. Calculate the set of ignored packages---the set of packages which | match all -ignore-package flags | | 5. Filter out shadowed and ignored packages from the list of packages, | calling the result ALL_PACKAGES. | | 6. Calculate the set of broken packages---the set of packages not | contained in the least fixed point of the relation that takes a set of | packages, and adds all packages whose dependencies are satisfied, from | ALL_PACKAGES. | | 7. Process the package flags in order, operating on PACKAGES (not | ALL_PACKAGES). For -package/-hide-package, take the package with the | *latest* version that matches the flag and are not broken (since this | includes shadowed packages, the result is unique) and toggle it to be | exposed/unexposed, and hide all the other packages. For -trust- | package/-distrust-package, toggle the trusted bit for all instances in | the database. | | 8. If we have exposed multiple versions of the same package name, hide | all the older versions | | What a mouthful! I have no idea, given a set of flags, how this works. | So here is an alternate proposal for an alternate way of handling these | flags *when starting from an empty database* (e.g. -hide-all-packages) | | Suppose we maintain a set of selected packages, which starts off empty. | Process each package flag in turn. | | For -package and -package-id, get the set of installed packages which | match the flag. (If multiple package names apply, process each in turn | as if it were a separate flag.) Compute the transitive closure of | dependencies for all of them, and filter out all choices which have | dependencies which are inconsistent with the current set of selected | packages. Consistency without multi-instances is a mapping of a | package name to an installed package. If there is still more than one | choice, tiebreak by version, which database and time of install. (The | latter tiebreak should not be necessary until we allow multiple | instances of a package with the same package ID.) | | For -hide-package, get the set of packages which match and hide them | all; for -ignore-package, hide the transitive closure of dependencies | of it. | For -trust,distrust-package, toggle for all matching packages as | before. | | Here are some differences in behavior between this and the previous | scheme: | | - It's no longer valid to indirectly depend on two different versions | of the same package. Most of the time, users didn't want this | anyway. | Note that the current scheme prevents directly depending on two | different versions by shadowing the old ones. | | - We can easily extend it to handle multi-instances by relaxing the | consistency check. | | - It assumes *-hide-all-packages* at the beginning. This scheme | probably works less well without that: now we need some consistent | view of the database to start with. | | What do people think? | | Cheers, | Edward | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From ezyang at mit.edu Thu Jul 24 15:12:36 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Thu, 24 Jul 2014 16:12:36 +0100 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <1406210353-sup-3241@sabre> References: <1406210353-sup-3241@sabre> Message-ID: <1406214588-sup-9538@sabre> Excerpts from Edward Z. Yang's message of 2014-07-24 15:57:05 +0100: > - It assumes *-hide-all-packages* at the beginning. This scheme > probably works less well without that: now we need some consistent > view of the database to start with. Actually, thinking about this, this dovetails nicely with the "package environments" work the IHG is sponsoring. The idea behind a package environment is you specify some set of installed package IDs, which serves as the visible slice of the package database which is used for compilation. Then, ghc called without any arguments is simply using the *default* package environment. Furthermore, when users install packages, they may or may not decide to add the package to their global environment, and they can be informed if the package is inconsistent with a package that already is in their environment (mismatched dependencies). A user can also request to upgrade a package in their environment, and Cabal could calculate how all the other packages in the environment would need to be upgraded in order to keep the environment consistent, and run this plan for the user. Cheers, Edward From ydewit at gmail.com Thu Jul 24 15:32:43 2014 From: ydewit at gmail.com (Yuri de Wit) Date: Thu, 24 Jul 2014 12:32:43 -0300 Subject: Removing GHC's dependency on Cabal In-Reply-To: References: <1406208563-sup-6481@sabre> <1406210834.4078.8.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF1043804B@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Jul 24, 2014 at 11:59 AM, Brandon Allbery wrote: > On Thu, Jul 24, 2014 at 10:55 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> To me it seems simple and obvious! Why are we going round the houses to >> do something so simple? > > > So cabal can maintain its conceit that it supports more than just ghc. > I don't understand this as an argument against the ghc-db library, which to me also seems the simple and obvious solution. Having a ghc-db library will mean one of two options: 1. The *ghc-db* is GHC-specific and will be used by the GHC specific wrappers in Cabal: doesn't cabal already deals with each compiler differently? 2. The ghc-db is really *hs-db* and a library contract that can be reused by all haskell compilers (i.e. part of The Haskell Cabal) In both cases, the assumption is that ghc-db/hs-db should have a stable API. Now, choosing option (1) doesn't eliminate option (2). When and if there is a broad agreement across all compilers, ghc-db could become hs-db and be incorporate into The Haskell Cabal. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > cabal-devel mailing list > cabal-devel at haskell.org > http://www.haskell.org/mailman/listinfo/cabal-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Thu Jul 24 15:42:19 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Thu, 24 Jul 2014 15:42:19 +0000 Subject: Broken Data.Data instances Message-ID: Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? 
Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB's "everything" being broken by these instances, not so much. Would a patch "fixing" these instances be acceptable? Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jul 24 16:22:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 16:22:03 +0000 Subject: Broken Data.Data instances In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> GHC's data structures are often mutually recursive. e.g. * The TyCon for Maybe contains the DataCon for Just * The DataCon for Just contains Just's type * Just's type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there's a lot of sharing. For example, every occurrence of 'map' is a Var, and inside that Var is map's type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that's it; I'm not certain since I did not write the Data instances for any of GHC's types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB's "everything" being broken by these instances, not so much. Would a patch "fixing" these instances be acceptable? Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From gridaphobe at gmail.com Thu Jul 24 17:14:17 2014 From: gridaphobe at gmail.com (Eric Seidel) Date: Thu, 24 Jul 2014 10:14:17 -0700 Subject: A couple of GHC-API questions In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Sorry for the double email Simon, I hadn't signed up for ghc-devs so my response was rejected... --- Thanks for getting back to me so quickly! On July 24, 2014 at 0:33:44, Simon Peyton Jones (simonpj at microsoft.com) wrote: > Eric > > I'd like to help but I don't understand the question. What do you mean by "resolve" all > the types that are imported from module A? > > If you have a TyCon (for A.Foo) in your hand, it has a Name. That Name tells you where it comes > from. Inside it you will find its DataCons which also come from A. And so on. Sorry for the confusion, I'm actually asking about how to get the right Name. When we first parse a specification, all of the TyCons etc. are plain Strings, i.e. we have a type data RefType tycon tyvar = ... type ParsedRefType = RefType String String But we don't want to deal with Strings, we want to use GHC's TyCons and TyVars type GHCRefType = RefType TyCon TyVar so we need a function that maps Strings to Names (and then finding the TyCon is easy). This is what I mean by "resolving" the types. For single-module programs this is trivial, we can do something like resolve x = do rn <- hscParseIdentifier x; hscTcRnLookupRdrName rn For multi-module programs it becomes trickier because we also have to resolve the types that we've imported from other modules. So, using my example from earlier, when we type-check module B we have to turn the String "Foo" into the Name A.Foo. This is problematic because module B imported module A qualified, so "Foo" is not in scope inside B but "A.Foo" is. I believe GHC might be avoiding this issue via the .hi files, so when you import a Type from another module, it is already using TyCons instead of Strings. > Later you say "is there a simple way to ask GHC to resolve the name x in the context of module > m". You could mean > * Imagine that the unqualified name "x" appeared in module m. How do I look it up, in m's > top-level lexical environment. > but I don't think that is what you mean. I think this is almost exactly what I mean, except that I want to be able to look up the unqualified "x" as well as the qualified "SomeModuleThatMayHaveBeenRenamed.x" inside m's top-level environment. This is more or less what the DynamicLoading.lookupRdrNameInModuleForPlugins function that I've copied and tweaked does, but it requires the presence of the original source code. I'm hoping there may be some other function out there that does the same thing without requiring the source code, so we can use it for the base libraries as well. > I'm confused. Could you be more concrete? > Possibly this may help? https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/NameType Thanks for the reference, I'm familiar with most of the info there already from browsing GHC's source, but this is laid out in a much more newcomer-friendly manner :) Hopefully I've made my question a bit clearer. Eric From p.k.f.holzenspies at utwente.nl Thu Jul 24 17:42:14 2014 From: p.k.f.holzenspies at utwente.nl ("Philip K.F. Hölzenspies") Date: Thu, 24 Jul 2014 19:42:14 +0200 Subject: Broken Data.Data instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53D14576.4060503@utwente.nl> Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures.
For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip > Simon Peyton Jones > 24 Jul 2014 18:22 > > GHC's data structures are often mutually recursive. e.g. > > ?The TyCon for Maybe contains the DataCon for Just > > ?The DataCon For just contains Just's type > > ?Just's type contains the TyCon for Maybe > > So any attempt to recursively walk over all these structures, as you > would a tree, will fail. > > Also there's a lot of sharing. For example, every occurrence of 'map' > is a Var, and inside that Var is map's type, its strictness, its > rewrite RULE, etc etc. In walking over a term you may not want to > walk over all that stuff at every occurrence of map. > > Maybe that's it; I'm not certain since I did not write the Data > instances for any of GHC's types > > Simon > > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > *p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > Dear GHC-ers, > > Is there a reason for explicitly broken Data.Data instances? Case in > point: > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > I understand (vaguely) arguments about abstract data types, but this > also excludes convenient queries that can, e.g. extract all types from > a CoreExpr. I had hoped to do stuff like this: > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB's "everything" > being broken by these instances, not so much. > > Would a patch "fixing" these instances be acceptable? > > Regards, > > Philip > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: postbox-contact.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From simonpj at microsoft.com Thu Jul 24 21:06:16 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 21:06:16 +0000 Subject: Broken Data.Data instances In-Reply-To: <53D14576.4060503@utwente.nl> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That's fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. 
H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFA78B.7D356DE0] Simon Peyton Jones 24 Jul 2014 18:22 GHC's data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just's type ? Just's type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there's a lot of sharing. For example, every occurrence of 'map' is a Var, and inside that Var is map's type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that's it; I'm not certain since I did not write the Data instances for any of GHC's types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB's "everything" being broken by these instances, not so much. Would a patch "fixing" these instances be acceptable? Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From alan.zimm at gmail.com Thu Jul 24 21:23:28 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 24 Jul 2014 23:23:28 +0200 Subject: Broken Data.Data instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs ) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones wrote: > So... does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures. For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would be > people be interested? > > Regards, > Philip > > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > > > Also there?s a lot of sharing. For example, every occurrence of ?map? is > a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, > etc etc. 
In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances > for any of GHC?s types > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB?s ?everything? being > broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? > > > > Regards, > > Philip > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From simonpj at microsoft.com Thu Jul 24 22:54:05 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 22:54:05 +0000 Subject: A couple of GHC-API questions In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> | This is what I mean by ?resolving? the types. For single-module programs | this is trivial, we can do something like | | ? ? resolve x = do rn <- hscParseIdentifier x | ? ? ? ? ? ? ? ? ? ?hscTcRnLookupRdrName rn | | For multi-module programs it becomes trickier because we also have to | resolve the types that we?ve imported from other modules. No, that is no harder **provided those types are in scope**. So suppose you have module M where import A( Foo ) f :: Int -> Int {-# LIQUID f :: { x | ..blah..Foo..blah } -> Int #-} Here I am supposing that the Liquid Haskell specification is in a pragma of M, and mentions an imported data type Foo. To resolve the string "Foo" to the Name A.Foo (or Bar.Foo, or whatever), hscTcRnLookupRdrName will work just fine. If you import Foo qualified, then of course you'll have to use a qualified name in the source module M where import qualified A( Foo ) f :: Int -> Int {-# LIQUID f :: { x | ..blah..A.Foo..blah } -> Int #-} If you *don't* import Foo at all, then it's utterly non-obvious where to look for it, so I don't suppose you are doing that. In short, why doesn?t hscTcRnLookupRdrName do the job? Incidentally, it doesn't make sense to ask if a Name is "in scope". Only RdrNames can be "in scope" or "not in scope". Simon | example from earlier, when we type-check module B we have to turn the | String ?Foo? into the Name A.Foo. 
This is problematic because module B | imported module A qualified, so "Foo" is not in scope inside B but | "A.Foo" is. | | I believe GHC might be avoiding this issue via the .hi files, so when you | import a Type from another module, it is already using TyCons instead of | Strings. | | > Later you say "is there a simple way to ask GHC to resolve the name x | in the context of module | > m". You could mean | > * Imagine that the unqualified name "x" appeared in module m. How do I | look it up, in m's | > top-level lexical environment. | > but I don't think that is what you mean. | | I think this is almost exactly what I mean, except that I want to be able | to look up the unqualified "x" as well as the qualified | "SomeModuleThatMayHaveBeenRenamed.x" inside m's top-level environment. | This is more or less what the | DynamicLoading.lookupRdrNameInModuleForPlugins function that I've copied | and tweaked does, but it requires the presence of the original source | code. I'm hoping there may be some other function out there that does the | same thing without requiring the source code, so we can use it for the | base libraries as well. | | > I'm confused. Could you be more concrete? | > Possibly this may help? | https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/NameType | | Thanks for the reference, I'm familiar with most of the info there | already from browsing GHC's source, but this is laid out in a much more | newcomer-friendly manner :) | | Hopefully I've made my question a bit clearer. | | Eric From simonpj at microsoft.com Thu Jul 24 23:26:41 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 24 Jul 2014 23:26:41 +0000 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201407241454.56547.jan.stolarek@p.lodz.pl> References: <201407241454.56547.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10438AB9@DB3PRD3001MB020.064d.mgd.msft.net> | My plan is to gather up | the answers on a wiki page. Excellent -- please do that! My replies are below Simon | This mail basically asks just one question: what is the order of | optimizations performed on Core? It's entirely defined by SimplCore.getCoreToDo :: DynFlags -> [CoreToDo] The code there should be reasonably self-explanatory. The type signature is very descriptive. | 1. What is the difference between a "simplifier iteration" and | "simplifier phase"? Roughly, a complete run of the simplifier means "run the simplifier repeatedly until nothing further happens". The iterations are the successive iterations of this loop. Currently there's a (rather arbitrary) limit of four such iterations before we give up and declare victory. | 1a. What is the default maximum iterations count? User documentation does | not specify that. Four | 2. How can I observe the effects of `-ddump-simpl-phases`? I tried | compiling several different | programs and this flag seems to have no effect (ie. nothing gets | printed). I had to read the source code. I think you say "-ddump-simpl-phases=A,B,C" to dump the output of phases called A,B,C. But no, it seems that it only affects output of simplifier statistics (see simplifyPgmIO). I have never used this flag. Maybe it can go. Looks strange to me. | 3. Cardinality analysis and inlining: cardinality analysis can determine | that a let binding is | used exactly once. Can the inliner re-use this information from the | cardinality analysis or does | it recompute it per [3], section 3.1? Cardinality analysis determines that something is *demanded* once.
The occurrence analyser determines when it *occurs* once. For example if .. then x else x+1 x occurs twice, but is demanded once. So they are different. The inliner uses occurrence information. | 4a. The first phase is "Desugar (after optimization)". What optimizations | are performed during | desugaring? Just a few basic ones; see CoreSubst.simpleOptPgm. It implements the "Very Simple Optimiser" which is only a page or two of code. Reading it and writing a Note that enumerates what optimisations it does would be a Good Thing. | 4b. I'm not sure whether I'm looking at a single iteration of core2core | transformation or at | multiple ones. Some passes are performed several times (Float out, Float | inwards), which suggests | that there might be many iterations here. On the other hand simplifier | phases are decreasing | towards 0, which looks as if it was one core2core iteration. My | assumption here is that every | time a new core2core iteration starts the simplifier phases are counted | anew from 2 towards 0. Is | that correct? No. getCoreToDo produces a list of CoreToDos. Each specifies a stage in the pipeline. One such stage is a run of the simplifier. Such a run has a "phase" number, which is set in getCoreToDo. This phase number is used (only) to control INLINE pragmas and RULES (see extensive documentation in the user manual e.g http://www.haskell.org/ghc/docs/latest/html/users_guide/pragmas.html#phase-control) | 4c. Why are there several 0 phases of the Simplifier? I find it | confusing. We need to run the simplifier several times to propagate the effects of (say) strictness analysis or let-floating. But by that stage the need for staging RULES and INLINE pragmas is over. | 4d. I understand that some passes can be enabled or disabled using | command-line options. Can the | decission to run some passes be made dynamically by the compiler (eg. to | run extra simplifier | passes)? Yes. Plug-ins do precisely that, by manipulating the [CoreToDo] pipeline list. | 4e. Are there more phases that could appear here, ie. they were ommited | with -O? Could appear where? | 4f. "Levels added" pass before the "Float out" pass: my guess is that | this is preparation for the | full laziness transform. So, is full laziness performed as part of "Float | out" pass? The full laziness pass is simply a combination of set-levels (an analysis) followed by float out (which transforms the program) | A general note is that I am confused by many Simplifier passes being | interleaved with other | passes. I expected that simplifier phases will grouped into a single | pass, as speculated in | question 1. Many passes produce output that can readily be simplified. Rather than require them to perform those simplifications, we delegate it to the simplifier. | 5. What optimizations *exactly* are performed by the Simplifier? I assume | that most of what's | described in chapter 3 of [2]: beta reduction, let elimination, case | elimination, case floating, | constant folding and eta expansion. I'm not sure about floating let | outwards and inwards - [1], | pg. 7, says these are in a pass separate from the simplifier. `-dverbose- | core2core` seems to | confirm that since it reveals separate "Float out" and "Float inwards" | passes. I don't have an exhaustive list, I'm afraid. The general rule is: if it can be done with local knowledge, the simplifier should do it. Making a full list would be another good exercise. If you start I could add more. 
Examples: - constant foldings - applying RULES - inlining (a really important one) - case of case - case of known constructor - eta expansion and eta reduction - combining adjacent casts - pushing a cast out of the way of an application e.g. (f |> k) a ==> f (a |> k1) |> k2 for suitable k1, k2 and there are probably a lot more. Look for SimplifierTicks; every time the simplifier does something significant it bumps a tick count. (or should do so.) | 6. [4], pg. 31, mentions the Deforestation optimisation. Is everything | described in | that "Deforestation" section subsumed by cardinality analysis ([5], end | of section 2.1 and | section 7.1)? If not then when is deforestation performed? Deforestation is simply the application of rewrite RULES; that's done by the simplifier. | 7. [5], section 6.1 says: "We run the analysis twice, once in the middle | of the optimisation | pipeline and once near the end". When exactly in the middle of the | pipeline? Between which | passes? This does not show up with `-dverbose-core2core` (or at least it | is not explicitly | named). I'm not actually certain that cardinality analysis *is* run twice in HEAD. Maybe it was only in the version for the paper. Ilya can tell us | 8. How does the rules rewriting fit into the picture? It's done by the simplifier From jwlato at gmail.com Thu Jul 24 23:52:12 2014 From: jwlato at gmail.com (John Lato) Date: Fri, 25 Jul 2014 07:52:12 +0800 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <1406214588-sup-9538@sabre> References: <1406210353-sup-3241@sabre> <1406214588-sup-9538@sabre> Message-ID: How would this work with ghci? If I'm understanding correctly, the proposal means users could no longer do: $ ghci SomeFile.hs and have it work without manually specifying all -package flags. Did I miss something? I think it would work in conjuction with the package environments stuff, provided that were available on all platforms ghc supports. John L. On Thu, Jul 24, 2014 at 11:12 PM, Edward Z. Yang wrote: > Excerpts from Edward Z. Yang's message of 2014-07-24 15:57:05 +0100: > > - It assumes *-hide-all-packages* at the beginning. This scheme > > probably works less well without that: now we need some consistent > > view of the database to start with. > > Actually, thinking about this, this dovetails nicely with the "package > environments" work the IHG is sponsoring. The idea behind a package > environment is you specify some set of installed package IDs, which > serves as the visible slice of the package database which is used for > compilation. Then, ghc called without any arguments is simply using > the *default* package environment. > > Furthermore, when users install packages, they may or may not decide > to add the package to their global environment, and they can be informed > if the package is inconsistent with a package that already is in their > environment (mismatched dependencies). A user can also request to > upgrade a package in their environment, and Cabal could calculate how > all the other packages in the environment would need to be upgraded > in order to keep the environment consistent, and run this plan for the > user. > > Cheers, > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Jul 25 00:09:08 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 25 Jul 2014 01:09:08 +0100 Subject: Changing the -package dependency resolution algorithm In-Reply-To: References: <1406210353-sup-3241@sabre> <1406214588-sup-9538@sabre> Message-ID: <1406246437-sup-6526@sabre> Good question. I think package environments are the right answer here: GHCi should come preloaded with some special global package environment. Edward Excerpts from John Lato's message of 2014-07-25 00:52:12 +0100: > How would this work with ghci? If I'm understanding correctly, the > proposal means users could no longer do: > > $ ghci SomeFile.hs > > and have it work without manually specifying all -package flags. Did I > miss something? > > I think it would work in conjuction with the package environments stuff, > provided that were available on all platforms ghc supports. > > John L. > > On Thu, Jul 24, 2014 at 11:12 PM, Edward Z. Yang wrote: > > > Excerpts from Edward Z. Yang's message of 2014-07-24 15:57:05 +0100: > > > - It assumes *-hide-all-packages* at the beginning. This scheme > > > probably works less well without that: now we need some consistent > > > view of the database to start with. > > > > Actually, thinking about this, this dovetails nicely with the "package > > environments" work the IHG is sponsoring. The idea behind a package > > environment is you specify some set of installed package IDs, which > > serves as the visible slice of the package database which is used for > > compilation. Then, ghc called without any arguments is simply using > > the *default* package environment. > > > > Furthermore, when users install packages, they may or may not decide > > to add the package to their global environment, and they can be informed > > if the package is inconsistent with a package that already is in their > > environment (mismatched dependencies). A user can also request to > > upgrade a package in their environment, and Cabal could calculate how > > all the other packages in the environment would need to be upgraded > > in order to keep the environment consistent, and run this plan for the > > user. > > > > Cheers, > > Edward > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > From gridaphobe at gmail.com Fri Jul 25 01:24:02 2014 From: gridaphobe at gmail.com (Eric Seidel) Date: Thu, 24 Jul 2014 18:24:02 -0700 Subject: A couple of GHC-API questions In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think I see where the confusion is now. In your example, hscTcRnLookupRdrName should work perfectly. I?m thinking of a different scenario, where we are trying to use the specification of a function that has been imported from another module. Suppose we have module List where data List = Nil | Cons Int List {-# LIQUID measure length :: List -> Int #-} replicate :: Int -> Int -> List {-# LIQUID replicate :: x:Int -> n:Int -> {v:List | length v = n } #-} head :: List -> Int {-# LIQUID head :: {v:List | length v > 0} -> Int #-} module Main where import qualified List as L main = print . L.head $ L.replicate 5 10 --- LiquidHaskell should be able to prove that the call to L.head in Main.main is safe. In order to do so we have to first figure out what types were given to L.head and L.replicate.? 
When we first parse in the specifications from List, we get types that refer to the String ? ? "List" which is parsed into the RdrName? ? ? Unqual (OccName tcName?"List") hscTcRnLookupRdrName will *rightly complain* that this RdrName is not in scope because we?re currently in the context of Main; we have to instead look for it in the top-level environment of the List module. I think part of the confusion is coming from the fact that we don?t process each module individually, resolve all of the types, and serialize them to disk somewhere. Rather, when we check a module, we parse in ALL of the specifications it could possibly refer to (using the transitive closure of the module imports) and try to resolve all of them before moving to the actual constraint generation and solving step. Does that make more sense? Eric On July 24, 2014 at 15:54:27, Simon Peyton Jones (simonpj at microsoft.com) wrote: > | This is what I mean by ?resolving? the types. For single-module programs > | this is trivial, we can do something like > | > | resolve x = do rn <- hscParseIdentifier x > | hscTcRnLookupRdrName rn > | > | For multi-module programs it becomes trickier because we also have to > | resolve the types that we?ve imported from other modules. > > No, that is no harder **provided those types are in scope**. So suppose you have > > module M where > import A( Foo ) > f :: Int -> Int > {-# LIQUID f :: { x | ..blah..Foo..blah } -> Int #-} > > Here I am supposing that the Liquid Haskell specification is in a pragma of M, and mentions > an imported data type Foo. > > To resolve the string "Foo" to the Name A.Foo (or Bar.Foo, or whatever), hscTcRnLookupRdrName > will work just fine. > > If you import Foo qualified, then of course you'll have to use a qualified name in the source > > module M where > import qualified A( Foo ) > f :: Int -> Int > {-# LIQUID f :: { x | ..blah..A.Foo..blah } -> Int #-} > > If you *don't* import Foo at all, then it's utterly non-obvious where to look for it, so > I don't suppose you are doing that. > > In short, why doesn?t hscTcRnLookupRdrName do the job? From simonpj at microsoft.com Fri Jul 25 06:48:49 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 25 Jul 2014 06:48:49 +0000 Subject: A couple of GHC-API questions In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10438DB8@DB3PRD3001MB020.064d.mgd.msft.net> | I think part of the confusion is coming from the fact that we don?t | process each module individually, resolve all of the types, and | serialize them to disk somewhere. Rather, when we check a module, we | parse in ALL of the specifications it could possibly refer to (using | the transitive closure of the module imports) and try to resolve all of | them before moving to the actual constraint generation and solving | step. Yes that makes more sense. But it's not a good fit with GHC's way of working. You want to take some random module M, parse it afresh to find its LIQUID bits, and then resolve what those things mean. But that *necessarily* means that you must reconstruct the top-level lexical environment, which is formed from the import and top-level declarations. Constructing this environment -- the GlobalRdrEnv -- is not a simple matter. Much of RnNames does precisely that. Alas, we do not serialise this environment into M's interface file M.hi. 
We could do so, and doing so would be somewhat useful for GHCi, but it's absolutely not needed in the usual way of things. So I suppose what you could do is to reconstruct it afresh using GHC's existing code to do so. If you call RnNames.rnImports on the import declarations of your module, you'll get back a GlobalRdrEnv in which you can look up the RdrNames that occur in the LIQUID declarations. Actually that'll deal only with the imported things. You also want the locally defined top-level things. To do that you could use things like getLocalNonValBinders (see the call in RnSource.rnSrcDecls), but it might be easier to take the TypeEnv computed from the module's interface file, and turn that back into a bunch of global binders. But it's not so easy. What about module M( f ) where {-# LIQUID f :: { x | ...g... } -> Int f = blah g = blah Here g is only mentioned form f's LIQUID spec. So g will have disappeared altogether from M's interface file, discarded as dead code. The natural alternative would be to serialise the LIQUID things into M.hi. That would entail parsing and renaming them in GHC, and extending the interface file format to handle it. I think it would be simpler. But it would be a bit more invasive of GHC. There is some support for serialising random stuff into interface files, called "annotations". You write {-# ANN f #-} and GHC will record in the interface file that f is associated with expression. See http://www.haskell.org/ghc/docs/latest/html/users_guide/extending-ghc.html#annotation-pragmas Maybe there's a neater way to do this, that neither involves changing GHC nor all the faff above. If so I'd be happy to change GHC to accommodate the neater way. I'd like GHC to accommodate extensions like LH much more easily. Simon | -----Original Message----- | From: Eric Seidel [mailto:gridaphobe at gmail.com] | Sent: 25 July 2014 02:24 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: RE: A couple of GHC-API questions | | I think I see where the confusion is now. In your example, | hscTcRnLookupRdrName should work perfectly. I?m thinking of a different | scenario, where we are trying to use the specification of a function | that has been imported from another module. Suppose we have | | module List where | | data List = Nil | Cons Int List | {-# LIQUID measure length :: List -> Int #-} | | replicate :: Int -> Int -> List | {-# LIQUID replicate :: x:Int -> n:Int -> {v:List | length v = n } #-} | | head :: List -> Int | {-# LIQUID head :: {v:List | length v > 0} -> Int #-} | | | module Main where | import qualified List as L | | main = print . L.head $ L.replicate 5 10 | | --- | | LiquidHaskell should be able to prove that the call to L.head in | Main.main is safe. In order to do so we have to first figure out what | types were given to L.head and L.replicate. | | When we first parse in the specifications from List, we get types that | refer to the String | | ? ? "List" | | which is parsed into the RdrName | | ? ? Unqual (OccName tcName?"List") | | hscTcRnLookupRdrName will *rightly complain* that this RdrName is not | in scope because we?re currently in the context of Main; we have to | instead look for it in the top-level environment of the List module. | | I think part of the confusion is coming from the fact that we don?t | process each module individually, resolve all of the types, and | serialize them to disk somewhere. 
Rather, when we check a module, we | parse in ALL of the specifications it could possibly refer to (using | the transitive closure of the module imports) and try to resolve all of | them before moving to the actual constraint generation and solving | step. | | Does that make more sense? | | Eric | | On July 24, 2014 at 15:54:27, Simon Peyton Jones | (simonpj at microsoft.com) wrote: | > | This is what I mean by ?resolving? the types. For single-module | > | programs this is trivial, we can do something like | > | | > | resolve x = do rn <- hscParseIdentifier x hscTcRnLookupRdrName rn | > | | > | For multi-module programs it becomes trickier because we also have | > | to resolve the types that we?ve imported from other modules. | > | > No, that is no harder **provided those types are in scope**. So | > suppose you have | > | > module M where | > import A( Foo ) | > f :: Int -> Int | > {-# LIQUID f :: { x | ..blah..Foo..blah } -> Int #-} | > | > Here I am supposing that the Liquid Haskell specification is in a | > pragma of M, and mentions an imported data type Foo. | > | > To resolve the string "Foo" to the Name A.Foo (or Bar.Foo, or | > whatever), hscTcRnLookupRdrName will work just fine. | > | > If you import Foo qualified, then of course you'll have to use a | > qualified name in the source | > | > module M where | > import qualified A( Foo ) | > f :: Int -> Int | > {-# LIQUID f :: { x | ..blah..A.Foo..blah } -> Int #-} | > | > If you *don't* import Foo at all, then it's utterly non-obvious where | > to look for it, so I don't suppose you are doing that. | > | > In short, why doesn?t hscTcRnLookupRdrName do the job? | From svenpanne at gmail.com Fri Jul 25 07:32:07 2014 From: svenpanne at gmail.com (Sven Panne) Date: Fri, 25 Jul 2014 09:32:07 +0200 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: 2014-07-24 16:24 GMT+02:00 Mark Lentczner : > On Thu, Jul 24, 2014 at 1:25 AM, Sven Panne wrote: >> The source tarball is missing a few files for hptool: > I'll try to catch them the next round... or pull requests on github welcome! The structure on GitHub is a bit confusing: It took me some time to figure out that "master" is probably irrelevant by now, and "new-build" contains the stuff for the upcoming HP. Is that correct? If yes, one should probably merge back things to master and base the HP releases directly on that. As it is, for new people things are quite puzzling... > On Thu, Jul 24, 2014 at 4:22 AM, Sven Panne wrote: >> ...But I'm a bit clueless how to proceed from that point: What I actually want is a >> complete installation of the HP under a given prefix. > Your build is using the genericOS set up in src/OS/Internal.hs. In the past, > linux/unix distribution packagers have worked from the src tarball, and when > I queired in the past, none said they used the build scripts HP had, but > used their own. And in turn, no one put in any effort over the last six > months to add a build decription using the new build system other than OS X > and Windows. Ah, OK, that was unclear to me, and it is a rather serious regression compared to the previous platform release: With 2013.2.0.0 one could easily specify a --prefix in the configure step, and everything worked as expected. I really think that this use case should be resurrected before the new HP is released: There are various reasons why pre-built Linux packages are not an option for many people (having to wait until a package is ready/fixed/etc. 
for one's distro, no root access, etc.), so the --prefix option was and still is important here. What about packages being unregistered? This seems to be a bug in hptool, and it effectively makes the stuff below build/target unusable. > SO - what you, and we, need is a src/OS/GenericLinux.hs file so that it does what > you/we think is the generic layout for where things should live. For your > own, personal install, you could start by just hacking on the genericOS > structure in src/OS/Internal.hs. [...] Guess what I did... ;-) >> [...] Furthermore, the executables are scattered around: >> >> build/target/usr/local/haskell-platform/2014.2.0.0/lib/alex-3.1.3/bin/alex >> build/target/usr/local/haskell-platform/2014.2.0.0/lib/cabal-install-1.18.0.5/bin/cabal >> build/target/usr/local/haskell-platform/2014.2.0.0/lib/happy-1.19.4/bin/happy >> build/target/usr/local/haskell-platform/2014.2.0.0/lib/hscolour-1.20.3/bin/HsColour > > My rationale is that if you install things under /usr/bin, /usr/lib, > /usr/share/doc - then it becomes essentially impossible to remove easily: > There are just far too many files in those trees intermingled with files > from other things. Same is true if you do /usr/local/... If you have either --prefix or use the tree below build/target to assemble a package for your distro, that's a non-issue. As mentioned above, we need --prefix anyway, so we should have a single {bin,lib,share,...} structure directly below it. Even keeping the distinction between the GHC parts and the rest of the platform is not useful from a user perspective and to large parts totally artificial: Why e.g. is haddock below the GHC part, but alex below the HP part? This is by pure technical accident, and it's unimportant when one installs the HP as a whole. > [...] Does this make sense for simple "untar, run a script" kind of distributions? Partly, yes. It doesn't matter in detail how things happen, but we need the ability of the simple and easy "./configure --prefix=foo && make && make install" from the 2013.2.0.0 HP back in the new HP source release. I hope that I don't sound too negative, you're doing great (and unpleasant) work, I just want to make sure that we don't regress on Linux platforms... From simonpj at microsoft.com Fri Jul 25 09:03:34 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 25 Jul 2014 09:03:34 +0000 Subject: Early draft spec of Strict language pragma In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF10438FA2@DB3PRD3001MB020.064d.mgd.msft.net> OK I have made extensive edits to https://ghc.haskell.org/trac/ghc/wiki/StrictPragma You might want to check it through. Simon From simonpj at microsoft.com Fri Jul 25 10:59:11 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 25 Jul 2014 10:59:11 +0000 Subject: GhcPlugin-writing and "finding things" In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF104391F8@DB3PRD3001MB020.064d.mgd.msft.net> Philip You are right: there are some missing pieces. * First you need to ask where your plugin's special library module "Foo" is in the file system. This is what findImportedModule is for, and it seems quite reasonable. However, it (or some variant) should be available to you in CoreM. * Next, suppose your special library module defines a special type "T". You need to get its Name. For this you need a CoreM variant of IfaceEnv.lookupOrig. The function CoreMonad.getOrigNameCache is far too low level and should be killed.
Instead, CoreMonad should expose lookupOrig :: Module -> OccName -> CoreM Name It should be an easy function to write, using IfaceEnv.lookupOrig; maybe a tiny bit of refactoring. * Next you want to get from T's Name to T's TyCon. Here CoreMonad is fine: it offers lookupThing :: Name -> CoreM TyThing This function calls TcEnv.tcLookupGlobal, which will automatically load Foo.hi if need be. So your code should look like foo_mod <- findImportedModule "Foo" t_name <- lookupOrig foo_mod (mkTcOcc "T") t_tycon <- lookupThing t_name corresponding to these three steps. I suspect that the error cases of findImported module should be dealt with via exceptions in CoreM, to de-clutter the code. Some of the above suggests a bit of cleaning up of the CoreM API. Would someone like to undertake that? I can advise, but I don't want to lead. Simon | -----Original Message----- | From: Glasgow-haskell-users [mailto:glasgow-haskell-users- | bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl | Sent: 23 July 2014 17:07 | To: glasgow-haskell-users at haskell.org | Subject: GhcPlugin-writing and "finding things" | | Dear GHC-ers, | | I'm working on a plugin for GHC that should help compile the library | with which this plugin is to ship. What this plugin does is traverse | the CoreProgram(s) to find things of types defined in my library and | optimizes them. I have worked out how to "find" things, but I was | wondering whether the API could be improved for plugin-writers. | | For the sake of argument, I have the following: | - module Foo: library for users to import, containing functions, ADTs | etc | - module Foo.Plugin: GhcPlugin that compiles out all uses of things in | Foo | | > module Foo where | > | > data Foo x = Foo x | > | > runFoo :: Foo x -> x | > runFoo (Foo x) = x | | | This example is trivial and I imagine GHC will have no trouble | eliminating most cases of this, but imagine more complex stuff. Now, if | I want to traverse the CoreProgram in my plugin, I need to find | occurrences of these, so somewhere there's stuff like: | | > pass tcFoo _ _ (NonRec b expr) | > | varType b `containsTyConAnywhere` tcFoo | > = {- clever stuff to compile out Foo -} | | My problem is "getting" tcFoo in this example. Below is how I do it | now. Maybe I'm being thick, or maybe there's just no simpler way. This | is my 'plugin' function in Foo.Plugin: | | > plugin = Plugin $ \opts todo -> do | > hsc <- getHscEnv | > dfs <- getDynFlags | > fr <- liftIO $ findImportedModule hsc (mkModuleName "Foo") Nothing | > mod <- case fr of | > Found ml m -> return m | > _ -> panic "Failed to (unambiguously) find 'Foo' (using | findImportedModule)" | > onc <- getOrigNameCache | > let nms = lookupWithDefaultModuleEnv nms (panic "No names defined | for module 'Foo'") mod | > find_ d occ fnd nm | > = maybe | > (fail $ "Failed to find " ++ d ++ " '" ++ nm ++ "'") | > fnd | > (lookupOccEnv nms $ occ nm) | > tcFind = find_ "TyCon" mkTcOcc lookupTyCon | > dcFind = find_ "DataCon" mkDataOcc lookupDataCon | > idFind = find_ "Id" mkVarOcc lookupId | > tcFoo <- tcFind "Foo" | > dcFoo <- dcFind "Foo" | > idRunFoo <- idFind "runFoo" | > return $ CoreDoPluginPass "Foo optimisation" (pass tcFoo dcFoo | > idRunFoo) : todo | | I have the following questions: | | 1) Is this a/the right way to "find" those things in the plugin? | 2) There seems to be a lot to gain with quasi-quoting a la Template | Haskell for people writing plugins to go with a library that they | wrote. Can such QQ be done? Has it been considered? 
| 3) Is findImportedModule the right function to find my starting point | to begin with? | 4) What is the 'Maybe FastString' argument in findImportedModule for? | I've been trying to put in the FSs of PackageIDs, but they make the | lookup fail. This (dumb) example really made me nervous: | | > fr <- liftIO $ findImportedModule hsc (mkModuleName "Foo") Nothing | > mod <- case fr of | > Found ml m -> do | > fr' <- liftIO $ findImportedModule hsc (moduleName m) | > (packageIdFS $ packageId m) | | Here, fr' should always be a "Found ml' m'" such that ml == ml' and m | == m', but... it consistently results in NotFound{} for me. Also, I | find this especially round-about. Shouldn't Paths_Foo.hs (the Cabal- | generated file) maybe contain variables for every module in the | package? In my case it would thus contain some "modFoo :: Module" | | Comments and suggestions more than welcome! | | Regards, | Philip | | | | | | | | _______________________________________________ | Glasgow-haskell-users mailing list | Glasgow-haskell-users at haskell.org | http://www.haskell.org/mailman/listinfo/glasgow-haskell-users From alan.zimm at gmail.com Fri Jul 25 11:44:06 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 25 Jul 2014 13:44:06 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman wrote: > While we are talking about fixing traversals, how about getting rid of the > phase specific panic initialisers for placeHolderType, placeHolderKind and > friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > ) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` > fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: > GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: > GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: > GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? > > Regards > Alan > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> So... does anyone object to me changing these "broken" instances with >> the ones given by DeriveDataTypeable? >> >> That?s fine with me provided (a) the default behaviour is not immediate >> divergence (which it might well be), and (b) the pitfalls are documented. >> >> >> >> Simon >> >> >> >> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >> *Sent:* 24 July 2014 18:42 >> *To:* Simon Peyton Jones >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Dear Simon, et al, >> >> These are very good points to make for people writing such traversals and >> queries. I would be more than happy to write a page on the pitfalls etc. 
on >> the wiki, but in my experience so far, exploring the innards of GHC is >> tremendously helped by trying small things out and showing (bits of) the >> intermediate structures. For me, personally, this has always been hindered >> by the absence of good instances of Data and/or Show (not having to bring >> DynFlags and not just visualising with the pretty printer are very helpful). >> >> So... does anyone object to me changing these "broken" instances with the >> ones given by DeriveDataTypeable? >> >> Also, many of these internal data structures could be provided with >> useful lenses to improve such traversals further. Anyone ever go at that? >> Would be people be interested? >> >> Regards, >> Philip >> >> >> *Simon Peyton Jones* >> >> 24 Jul 2014 18:22 >> >> GHC?s data structures are often mutually recursive. e.g. >> >> ? The TyCon for Maybe contains the DataCon for Just >> >> ? The DataCon For just contains Just?s type >> >> ? Just?s type contains the TyCon for Maybe >> >> >> >> So any attempt to recursively walk over all these structures, as you >> would a tree, will fail. >> >> >> >> Also there?s a lot of sharing. For example, every occurrence of ?map? is >> a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, >> etc etc. In walking over a term you may not want to walk over all that >> stuff at every occurrence of map. >> >> >> >> Maybe that?s it; I?m not certain since I did not write the Data instances >> for any of GHC?s types >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >> ] *On Behalf Of * >> p.k.f.holzenspies at utwente.nl >> *Sent:* 24 July 2014 16:42 >> *To:* ghc-devs at haskell.org >> *Subject:* Broken Data.Data instances >> >> >> >> Dear GHC-ers, >> >> >> >> Is there a reason for explicitly broken Data.Data instances? Case in >> point: >> >> >> >> > instance Data Var where >> >> > -- don't traverse? >> >> > toConstr _ = abstractConstr "Var" >> >> > gunfold _ _ = error "gunfold" >> >> > dataTypeOf _ = mkNoRepType "Var" >> >> >> >> I understand (vaguely) arguments about abstract data types, but this also >> excludes convenient queries that can, e.g. extract all types from a >> CoreExpr. I had hoped to do stuff like this: >> >> >> >> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >> >> > collect = everything mplus $ mkQ mzero return >> >> > >> >> > allTypes :: CoreExpr -> [Type] >> >> > allTypes = collect >> >> >> >> Especially when still exploring (parts of) the GHC API, being able to >> extract things in this fashion is very helpful. SYB?s ?everything? being >> broken by these instances, not so much. >> >> >> >> Would a patch ?fixing? these instances be acceptable? >> >> >> >> Regards, >> >> Philip >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From johan.tibell at gmail.com Fri Jul 25 11:49:14 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 25 Jul 2014 13:49:14 +0200 Subject: Early draft spec of Strict language pragma In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10438FA2@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438FA2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Thanks a lot Simon. Re top-level bindings: I agree. There's no good time to force CAFs so they'll need to be forced on first use. Re newtypes: I agree. Pattern matching on newtypes should be strict. Implementation: Where do I get started? I think we looked at some code together during the call, but I forgot what it was? Any good identifiers to grep for? On Fri, Jul 25, 2014 at 11:03 AM, Simon Peyton Jones wrote: > OK I have made extensive edits to > https://ghc.haskell.org/trac/ghc/wiki/StrictPragma > > You might want to check it through. > > Simon > From simonpj at microsoft.com Fri Jul 25 12:31:44 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 25 Jul 2014 12:31:44 +0000 Subject: Early draft spec of Strict language pragma In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438FA2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF10439356@DB3PRD3001MB020.064d.mgd.msft.net> I suggest starting with the proposed improvements to let !pat = rhs * Remove the stuff in TcBinds that forces such things to be non-recursive and monomorphic -- ie bangs no longer affect type checking. * In DsBinds, add the appropriate seqs. Take a look at DsExpr.ds_val_bind and see how you get on Simon | -----Original Message----- | From: Johan Tibell [mailto:johan.tibell at gmail.com] | Sent: 25 July 2014 12:49 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org; Duncan Coutts | Subject: Re: Early draft spec of Strict language pragma | | Thanks a lot Simon. | | Re top-level bindings: | | I agree. There's no good time to force CAFs so they'll need to be | forced on first use. | | Re newtypes: | | I agree. Pattern matching on newtypes should be strict. | | Implementation: | | Where do I get started? I think we looked at some code together during | the call, but I forgot what it was? Any good identifiers to grep for? | | On Fri, Jul 25, 2014 at 11:03 AM, Simon Peyton Jones | wrote: | > OK I have made extensive edits to | > https://ghc.haskell.org/trac/ghc/wiki/StrictPragma | > | > You might want to check it through. | > | > Simon | > From mark.lentczner at gmail.com Fri Jul 25 14:18:37 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Fri, 25 Jul 2014 07:18:37 -0700 Subject: At long last: 2014.2.0.0 Release Candidate 1 In-Reply-To: References: Message-ID: On Fri, Jul 25, 2014 at 12:32 AM, Sven Panne wrote: > The structure on GitHub is a bit confusing As with past releases, we generally work on a branch until release. > With 2013.2.0.0 one could > easily specify a --prefix in the configure step, and everything worked > as expected. I didn't realize anyone used it. But it should be easy enough to add. What about packages being unregistered? It is intentional. The package registrations are all prepared, but the final registration should happen after the tree is moved into place. Conventionally this was part of "make install". For the build, there should be an "activation" script added to the tree that the user runs after copying the tree to the final system. 
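In spirit, that activation step just registers the already-prepared package descriptions with the global package database once the tree is in its final place. The sketch below is only an illustration of that idea; the prefix, the location of the prepared .conf files and the script itself are assumptions for illustration, not the actual script shipped with the platform:

    #!/bin/sh
    # Hypothetical activation sketch, NOT the real HP script: register the
    # pre-generated package descriptions after the tree has been copied into place.
    PREFIX=/usr/local/haskell-platform/2014.2.0.0    # assumed install location
    export PATH="$PREFIX/bin:$PATH"

    # Register each prepared .conf with ghc-pkg's global database. A real
    # script would register in dependency order (or pass --force), and
    # finish with a recache.
    for conf in "$PREFIX"/registrations/*.conf; do   # assumed location of the .conf files
        ghc-pkg register --global "$conf"
    done
    ghc-pkg recache --global
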
> Even keeping the > distinction between the GHC parts and the rest of the platform is not > useful from a user perspective and to large parts totally artificial: > Why e.g. is haddock below GHC part, but alex below the HP part? Becuase the platform is built from the GHC bindist, and traditionally we've keep the GHC supplied portions "pristine". > Partly, yes. It doesn't matter in detail how things happen, but we > need the ability of the simple and easy "./configure --prefix=foo && > make && make install" from the 2013.2.0.0 HP back in the new HP source > release. > Or equivalent functionality. I don't think we'll be going back to autoconf & make. > I hope that I don't sound too negative, you're doing great (and > unpleasant) work, I just want to make sure that we don't regress on > Linux platforms... We need more people involved in the platform! - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From gridaphobe at gmail.com Fri Jul 25 18:30:34 2014 From: gridaphobe at gmail.com (Eric Seidel) Date: Fri, 25 Jul 2014 11:30:34 -0700 Subject: A couple of GHC-API questions In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10438DB8@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438DB8@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On July 24, 2014 at 23:49:10, Simon Peyton Jones (simonpj at microsoft.com) wrote: > | I think part of the confusion is coming from the fact that we don?t > | process each module individually, resolve all of the types, and > | serialize them to disk somewhere. Rather, when we check a module, we > | parse in ALL of the specifications it could possibly refer to (using > | the transitive closure of the module imports) and try to resolve all of > | them before moving to the actual constraint generation and solving > | step. > > Yes that makes more sense. > > But it's not a good fit with GHC's way of working. You want to take some random module M, > parse it afresh to find its LIQUID bits, and then resolve what those things mean. But that > *necessarily* means that you must reconstruct the top-level lexical environment, > which is formed from the import and top-level declarations. Constructing this environment > -- the GlobalRdrEnv -- is not a simple matter. Much of RnNames does precisely that. > > Alas, we do not serialise this environment into M's interface file M.hi. We could do so, > and doing so would be somewhat useful for GHCi, but it's absolutely not needed in the usual > way of things. This is basically what I expected, and I wouldn?t want to argue for adding more things to the .hi files unless there are other compelling use cases. > So I suppose what you could do is to reconstruct it afresh using GHC's existing code to > do so. If you call RnNames.rnImports on the import declarations of your module, you'll > get back a GlobalRdrEnv in which you can look up the RdrNames that occur in the LIQUID declarations. > > Actually that'll deal only with the imported things. You also want the locally defined > top-level things. To do that you could use things like getLocalNonValBinders (see the > call in RnSource.rnSrcDecls), but it might be easier to take the TypeEnv computed from > the module's interface file, and turn that back into a bunch of global binders. > > But it's not so easy. What about > > module M( f ) where > > {-# LIQUID f :: { x | ...g... 
} -> Int > f = blah > > g = blah > > Here g is only mentioned form f's LIQUID spec. So g will have disappeared altogether from > M's interface file, discarded as dead code. Thanks for these pointers! I'm not too concerned about the dead code issue since Liquid Types generally won't refer to top-level binders (they're either going to be functions, which can't appear in the types in the first place, or constants, which can be inlined). > The natural alternative would be to serialise the LIQUID things into M.hi. That would > entail parsing and renaming them in GHC, and extending the interface file format to handle > it. I think it would be simpler. But it would be a bit more invasive of GHC. > > > There is some support for serialising random stuff into interface files, called "annotations". > You write > {-# ANN f #-} > and GHC will record in the interface file that f is associated with expression. See http://www.haskell.org/ghc/docs/latest/html/users_guide/extending-ghc.html#annotation-pragmas Yes, I've looked at the annotations stuff in the past and have imagined converting LiquidHaskell to a GHC Plugin that pulls the types out of annotations. I *think* it could work, though there are two immediate wrinkles: 1. The docs say that annotations can only be applied to top-level binders/declarations. We've found it *very* helpful to annotate nested binders in some cases to reduce the inference burden (it's also extremely helpful for debugging the code/spec). Is this restriction in place because of the aforementioned issue of not serializing the GlobalRdrEnv? If so, could the restriction be lifted with the caveat that annotations on nested binders will *not* be exported? I think that would be sufficient for us as long as we can get a hold of the Core before any sort of inlining happens. 2. The docs say that you can only annotate binders that are declared in the same module. We've similarly found it useful to be able to *assume* stronger types for certain imported functions, though this is of course unsound. Could we also lift the same-module restriction, again with the caveat that the annotations will *not* be exported? These are just some things to think about, I doubt I would have time to attempt serious changes to LiquidHaskell along these lines until next year at the earliest. > Maybe there's a neater way to do this, that neither involves changing GHC nor all the faff > above. If so I'd be happy to change GHC to accommodate the neater way. I'd like GHC to accommodate > extensions like LH much more easily. I still think that a Template Haskell-based approach could be really nice without requiring any changes to GHC (beyond the profiling issue we've run into). The reason I like this idea is that it feels more lightweight to me, users would just express their specifications as Haskell values, and they (and we) wouldn't have to worry about making sure that the right flags are passed to GHC to find imported modules etc. Has anyone ever tried to blend Template Haskell with Annotations/Plugins? I'm imagining something like the following:

    t_head      = head ::: [liquid| {v:[a] | length v > 0} -> a|]

    head []     = error
    head (x:xs) = x

    {-# ANN head t_head #-}

Here the spec for head exists alongside the implementation as a Haskell value, so it's immediately available for other tools to build on top of, but it's also attached to head as an Annotation so that Plugins like a future version of the liquidhaskell verifier, or perhaps an optimization pass, can make use of it during compilation.
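Annotations attached with ANN in this way can be read back from inside a Core plugin. The following is a minimal sketch of that consumer side, modelled on the getAnnotations/deserializeWithData example in the GHC user guide; the module name, the LiquidType payload and the annotationsOn helper are illustrative assumptions, not part of any existing library:

    {-# LANGUAGE DeriveDataTypeable #-}
    module LiquidAnnPlugin (plugin) where   -- hypothetical module name

    import Control.Monad (forM_)
    import Data.Data (Data, Typeable)
    import GhcPlugins

    -- Illustrative payload; a real spec type would live in the verifier's library.
    data LiquidType = LiquidType String deriving (Data, Typeable)

    plugin :: Plugin
    plugin = defaultPlugin { installCoreToDos = install }

    install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
    install _ todos = return (CoreDoPluginPass "read LIQUID annotations" pass : todos)

    -- All annotations of the expected type attached to a particular binder.
    annotationsOn :: Data a => ModGuts -> CoreBndr -> CoreM [a]
    annotationsOn guts bndr = do
      anns <- getAnnotations deserializeWithData guts
      return (lookupWithDefaultUFM anns [] (varUnique bndr))

    -- Just print the specs found on top-level non-recursive binders.
    pass :: ModGuts -> CoreM ModGuts
    pass guts = do
      forM_ (mg_binds guts) $ \bind -> case bind of
        NonRec b _ -> do
          specs <- annotationsOn guts b
          forM_ specs $ \(LiquidType s) ->
            putMsg (text "LIQUID spec for" <+> ppr b <+> text ("= " ++ s))
        Rec _ -> return ()
      return guts

Whether the annotation payload is built by a quasiquoter, as in the snippet above, or written out directly is orthogonal; the plugin only needs a Data instance to deserialise it.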
The ANN pragma seems a bit redundant here, but I really like the idea of having these types readily available at multiple levels and I don?t quite see how to accomplish that with Template Haskell or Annotations alone. Thanks for all of the comments! Eric From karel.gardas at centrum.cz Fri Jul 25 21:48:21 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 25 Jul 2014 23:48:21 +0200 Subject: phabricator issue with git submodules. Message-ID: <53D2D0A5.9090705@centrum.cz> Hi, just fixing few warning issues on Solaris/x86. The changes spread over main ghc tree and libraries/primitive and libraries/unix. I already commited changes and pushed to my github.com's forks of libraries/primitive and libraries/unix. The git status looks then: $ git status On branch master Your branch is ahead of 'origin/master' by 2 commits. (use "git push" to publish your local commits) Changes not staged for commit: (use "git add ..." to update what will be committed) (use "git checkout -- ..." to discard changes in working directory) modified: libraries/primitive (new commits) modified: libraries/unix (new commits) no changes added to commit (use "git add" and/or "git commit -a") and yet phabricator still complains about it: $ arc diff You have unstaged changes in this working copy. Working copy: /export/home/karel/vcs/ghc-src/validate-fixes/ Unstaged changes in working copy: libraries/primitive libraries/unix Do you want to amend these files to the commit? [y/N] Usage Exception: Stage and commit (or revert) them before proceeding. I pressed enter in question above. Is that a known issue or am I doing something wrong here? Thanks! Karel From karel.gardas at centrum.cz Fri Jul 25 21:49:19 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 25 Jul 2014 23:49:19 +0200 Subject: Fatal git error on .git/modules/libffi-tarballs In-Reply-To: <1405880590-sup-7215@sabre> References: <53CC027C.7070402@centrum.cz> <1405880590-sup-7215@sabre> Message-ID: <53D2D0DF.5010904@centrum.cz> Indeed! Thanks for the fast help. Updating to 1.8.5 solved this issue. Karel On 07/20/14 08:31 PM, Edward Z. Yang wrote: > Hello Karel, > > You should blow away your copy (well, preserve the patches), upgrade > your version of Git and then check out again. You've run into > this bug: > > http://comments.gmane.org/gmane.comp.version-control.git/193492 > > Cheers, > Edward > > Excerpts from Karel Gardas's message of 2014-07-20 18:55:08 +0100: >> >> Hello, >> >> while working on HEAD and after a few iteraction of ./validate my git >> complains with: >> >> $ git status >> fatal: Not a git repository: >> /export/home/karel/vcs/ghc-src/ghc-git-test-2/.git/modules/libffi-tarballs >> karel at silence:~/vcs/ghc-src/ghc-solaris-validate-fix$ >> >> Is this a known error or is there any known workaround for this issue? >> >> Thanks! >> Karel > From ezyang at mit.edu Sat Jul 26 01:33:55 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 26 Jul 2014 02:33:55 +0100 Subject: Heads up: binary package db format changed (EOM) Message-ID: <1406338408-sup-4735@sabre> From ezyang at mit.edu Sat Jul 26 19:39:58 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 26 Jul 2014 20:39:58 +0100 Subject: phabricator issue with git submodules. In-Reply-To: <53D2D0A5.9090705@centrum.cz> References: <53D2D0A5.9090705@centrum.cz> Message-ID: <1406403523-sup-5584@sabre> Hello Karel, When your submodules get updated, you need to add them to your commit (since the parent repository maintains pointers to the submodules). 
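For the working copy above, that is something like the following (assuming the submodule bumps are meant to be part of the same commit; otherwise make a separate commit instead of amending):

    git add libraries/primitive libraries/unix
    git commit --amend --no-edit
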
Then they will no longer show up as dirty and you can submit the Phabricator patch. Edward Excerpts from Karel Gardas's message of 2014-07-25 22:48:21 +0100: > > Hi, > > just fixing few warning issues on Solaris/x86. The changes spread over > main ghc tree and libraries/primitive and libraries/unix. I already > commited changes and pushed to my github.com's forks of > libraries/primitive and libraries/unix. The git status looks then: > > $ git status > On branch master > Your branch is ahead of 'origin/master' by 2 commits. > (use "git push" to publish your local commits) > > Changes not staged for commit: > (use "git add ..." to update what will be committed) > (use "git checkout -- ..." to discard changes in working directory) > > modified: libraries/primitive (new commits) > modified: libraries/unix (new commits) > > no changes added to commit (use "git add" and/or "git commit -a") > > > and yet phabricator still complains about it: > > $ arc diff > You have unstaged changes in this working copy. > > Working copy: /export/home/karel/vcs/ghc-src/validate-fixes/ > > Unstaged changes in working copy: > libraries/primitive > libraries/unix > > > Do you want to amend these files to the commit? [y/N] > > Usage Exception: Stage and commit (or revert) them before proceeding. > > I pressed enter in question above. > > Is that a known issue or am I doing something wrong here? > > Thanks! > Karel From ezyang at mit.edu Sat Jul 26 19:49:42 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 26 Jul 2014 20:49:42 +0100 Subject: Extending ghc-pkg to handle installed package IDs Message-ID: <1406403632-sup-2276@sabre> Hello all, Since we're relaxing the constraint that a package ID correspond to exactly one installed package in the package database, it will now sometimes be necessary to query the package database using ghc-pkg for a specific installed package ID. At the moment, none of the commands support this, and I'd like to request comments on proposed ways of adding this functionality. Here are the proposals: 1. All command line modes stay the same, but anywhere a package ID was previously accepted, now we also accept an installed package ID. We distinguish the two by using the IPID parser from Cabal, and falling back the the package ID parser (assuming that version tags are not used, this is unambiguous). 2. Add a new flag --id which changes the meaning of a package ID field to be an installed package ID field. 3. Add new variants of all commands such as 'ghc-pkg unregister-id' which accept package IDs. 4. ... 5. Profit!! Cheers, Edward From p.k.f.holzenspies at utwente.nl Sun Jul 27 14:17:27 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Sun, 27 Jul 2014 14:17:27 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> , Message-ID: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. 
(EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFA78B.7D356DE0] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? 
Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. Would a patch ?fixing? these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From alan.zimm at gmail.com Sun Jul 27 14:28:50 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 27 Jul 2014 16:28:50 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: My intention would be to simply change the placeholders into something that would not blow up during a normal traversal, preferably something that still gives the required behaviour when invoked normally by GHC, to indicate a bug that needs fixing, but that can somehow be turned off at other tiimes. I am open to suggestions as to a mechanism that can achieve this, I thought of some kind of setting via Dynamic Flags. Alan On Sun, Jul 27, 2014 at 4:17 PM, wrote: > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > > ------------------------------ > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > >> While we are talking about fixing traversals, how about getting rid >> of the phase specific panic initialisers for placeHolderType, >> placeHolderKind and friends? 
>> >> In order to safely traverse with SYB, the following needs to be inserted >> into all the SYB schemes (see >> >> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >> ) >> >> -- Check the Typeable items >> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >> fixity `SYB.extQ` nameSet) x >> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >> :: GHC.NameSet -> Bool >> postTcType = const (stage < SYB.TypeChecker ) >> :: GHC.PostTcType -> Bool >> fixity = const (stage < SYB.Renamer ) >> :: GHC.Fixity -> Bool >> >> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >> 'undefined values. >> >> Perhaps use an initialiser that can have its panic turned off when >> called via the GHC API? >> >> Regards >> Alan >> >> >> >> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> So... does anyone object to me changing these "broken" instances >>> with the ones given by DeriveDataTypeable? >>> >>> That?s fine with me provided (a) the default behaviour is not >>> immediate divergence (which it might well be), and (b) the pitfalls are >>> documented. >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>> *Sent:* 24 July 2014 18:42 >>> *To:* Simon Peyton Jones >>> *Cc:* ghc-devs at haskell.org >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> Dear Simon, et al, >>> >>> These are very good points to make for people writing such traversals >>> and queries. I would be more than happy to write a page on the pitfalls >>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>> is tremendously helped by trying small things out and showing (bits of) the >>> intermediate structures. For me, personally, this has always been hindered >>> by the absence of good instances of Data and/or Show (not having to bring >>> DynFlags and not just visualising with the pretty printer are very helpful). >>> >>> So... does anyone object to me changing these "broken" instances with >>> the ones given by DeriveDataTypeable? >>> >>> Also, many of these internal data structures could be provided with >>> useful lenses to improve such traversals further. Anyone ever go at that? >>> Would be people be interested? >>> >>> Regards, >>> Philip >>> >>> >>> *Simon Peyton Jones* >>> >>> 24 Jul 2014 18:22 >>> >>> GHC?s data structures are often mutually recursive. e.g. >>> >>> ? The TyCon for Maybe contains the DataCon for Just >>> >>> ? The DataCon For just contains Just?s type >>> >>> ? Just?s type contains the TyCon for Maybe >>> >>> >>> >>> So any attempt to recursively walk over all these structures, as you >>> would a tree, will fail. >>> >>> >>> >>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>> RULE, etc etc. In walking over a term you may not want to walk over all >>> that stuff at every occurrence of map. 
>>> >>> >>> >>> Maybe that?s it; I?m not certain since I did not write the Data >>> instances for any of GHC?s types >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>> ] *On Behalf Of * >>> p.k.f.holzenspies at utwente.nl >>> *Sent:* 24 July 2014 16:42 >>> *To:* ghc-devs at haskell.org >>> *Subject:* Broken Data.Data instances >>> >>> >>> >>> Dear GHC-ers, >>> >>> >>> >>> Is there a reason for explicitly broken Data.Data instances? Case in >>> point: >>> >>> >>> >>> > instance Data Var where >>> >>> > -- don't traverse? >>> >>> > toConstr _ = abstractConstr "Var" >>> >>> > gunfold _ _ = error "gunfold" >>> >>> > dataTypeOf _ = mkNoRepType "Var" >>> >>> >>> >>> I understand (vaguely) arguments about abstract data types, but this >>> also excludes convenient queries that can, e.g. extract all types from a >>> CoreExpr. I had hoped to do stuff like this: >>> >>> >>> >>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>> >>> > collect = everything mplus $ mkQ mzero return >>> >>> > >>> >>> > allTypes :: CoreExpr -> [Type] >>> >>> > allTypes = collect >>> >>> >>> >>> Especially when still exploring (parts of) the GHC API, being able to >>> extract things in this fashion is very helpful. SYB?s ?everything? being >>> broken by these instances, not so much. >>> >>> >>> >>> Would a patch ?fixing? these instances be acceptable? >>> >>> >>> >>> Regards, >>> >>> Philip >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From karel.gardas at centrum.cz Sun Jul 27 17:02:24 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 27 Jul 2014 19:02:24 +0200 Subject: phabricator issue with git submodules. In-Reply-To: <1406403523-sup-5584@sabre> References: <53D2D0A5.9090705@centrum.cz> <1406403523-sup-5584@sabre> Message-ID: <53D530A0.6030601@centrum.cz> Hello Edward, I've done that, see https://phabricator.haskell.org/D96 -- but now I'm curious but since this is done in this way, basically speaking library/unix + libraries/primitive now points to commits done in my forks of those libs on github.com waiting for approval since I already pushed appropriate pull requests. Now this also means that D96 is probably not includable in GHC HEAD since it points to currently non-existing patches (in public libraries/unix + primitive). Am I right that this works in this way? Thanks, Karel On 07/26/14 09:39 PM, Edward Z. Yang wrote: > Hello Karel, > > When your submodules get updated, you need to add them to your commit > (since the parent repository maintains pointers to the submodules). > Then they will no longer show up as dirty and you can submit the > Phabricator patch. > > Edward > > Excerpts from Karel Gardas's message of 2014-07-25 22:48:21 +0100: >> >> Hi, >> >> just fixing few warning issues on Solaris/x86. The changes spread over >> main ghc tree and libraries/primitive and libraries/unix. I already >> commited changes and pushed to my github.com's forks of >> libraries/primitive and libraries/unix. The git status looks then: >> >> $ git status >> On branch master >> Your branch is ahead of 'origin/master' by 2 commits. 
>> (use "git push" to publish your local commits) >> >> Changes not staged for commit: >> (use "git add..." to update what will be committed) >> (use "git checkout --..." to discard changes in working directory) >> >> modified: libraries/primitive (new commits) >> modified: libraries/unix (new commits) >> >> no changes added to commit (use "git add" and/or "git commit -a") >> >> >> and yet phabricator still complains about it: >> >> $ arc diff >> You have unstaged changes in this working copy. >> >> Working copy: /export/home/karel/vcs/ghc-src/validate-fixes/ >> >> Unstaged changes in working copy: >> libraries/primitive >> libraries/unix >> >> >> Do you want to amend these files to the commit? [y/N] >> >> Usage Exception: Stage and commit (or revert) them before proceeding. >> >> I pressed enter in question above. >> >> Is that a known issue or am I doing something wrong here? >> >> Thanks! >> Karel > From ezyang at mit.edu Sun Jul 27 17:09:06 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 27 Jul 2014 18:09:06 +0100 Subject: phabricator issue with git submodules. In-Reply-To: <53D530A0.6030601@centrum.cz> References: <53D2D0A5.9090705@centrum.cz> <1406403523-sup-5584@sabre> <53D530A0.6030601@centrum.cz> Message-ID: <1406480883-sup-9936@sabre> That's right. I am actually not even sure how Harbormaster even manages to find your commits for the build... Edward Excerpts from Karel Gardas's message of 2014-07-27 18:02:24 +0100: > > Hello Edward, > > I've done that, see https://phabricator.haskell.org/D96 -- but now I'm > curious but since this is done in this way, basically speaking > library/unix + libraries/primitive now points to commits done in my > forks of those libs on github.com waiting for approval since I already > pushed appropriate pull requests. Now this also means that D96 is > probably not includable in GHC HEAD since it points to currently > non-existing patches (in public libraries/unix + primitive). Am I right > that this works in this way? > > Thanks, > Karel > > On 07/26/14 09:39 PM, Edward Z. Yang wrote: > > Hello Karel, > > > > When your submodules get updated, you need to add them to your commit > > (since the parent repository maintains pointers to the submodules). > > Then they will no longer show up as dirty and you can submit the > > Phabricator patch. > > > > Edward > > > > Excerpts from Karel Gardas's message of 2014-07-25 22:48:21 +0100: > >> > >> Hi, > >> > >> just fixing few warning issues on Solaris/x86. The changes spread over > >> main ghc tree and libraries/primitive and libraries/unix. I already > >> commited changes and pushed to my github.com's forks of > >> libraries/primitive and libraries/unix. The git status looks then: > >> > >> $ git status > >> On branch master > >> Your branch is ahead of 'origin/master' by 2 commits. > >> (use "git push" to publish your local commits) > >> > >> Changes not staged for commit: > >> (use "git add..." to update what will be committed) > >> (use "git checkout --..." to discard changes in working directory) > >> > >> modified: libraries/primitive (new commits) > >> modified: libraries/unix (new commits) > >> > >> no changes added to commit (use "git add" and/or "git commit -a") > >> > >> > >> and yet phabricator still complains about it: > >> > >> $ arc diff > >> You have unstaged changes in this working copy. 
> >> > >> Working copy: /export/home/karel/vcs/ghc-src/validate-fixes/ > >> > >> Unstaged changes in working copy: > >> libraries/primitive > >> libraries/unix > >> > >> > >> Do you want to amend these files to the commit? [y/N] > >> > >> Usage Exception: Stage and commit (or revert) them before proceeding. > >> > >> I pressed enter in question above. > >> > >> Is that a known issue or am I doing something wrong here? > >> > >> Thanks! > >> Karel > > From alan.zimm at gmail.com Sun Jul 27 17:13:49 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 27 Jul 2014 19:13:49 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: To my knowledge there is no trac ticket to make the AST safe. Is this correct? Can I make one? Alan On Sun, Jul 27, 2014 at 4:28 PM, Alan & Kim Zimmerman wrote: > My intention would be to simply change the placeholders into something > that would not blow up during a normal traversal, preferably something that > still gives the required behaviour when invoked normally by GHC, to > indicate a bug that needs fixing, but that can somehow be turned off at > other tiimes. > > I am open to suggestions as to a mechanism that can achieve this, I > thought of some kind of setting via Dynamic Flags. > > > Alan > > > On Sun, Jul 27, 2014 at 4:17 PM, wrote: > >> Alan, >> >> In that case, let's have a short feedback-loop between the two of us. It >> seems many of these files (Name.lhs, for example) are really stable through >> the repo-history. It would be nice to have one bigger refactoring all in >> one go (some of the code could use a polish, a lot of code seems removable). >> >> Regards, >> Philip >> >> ------------------------------ >> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >> *Verzonden:* vrijdag 25 juli 2014 13:44 >> *Aan:* Simon Peyton Jones >> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >> *Onderwerp:* Re: Broken Data.Data instances >> >> By the way, I would be happy to attempt this task, if the concept is >> viable. >> >> >> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >>> While we are talking about fixing traversals, how about getting rid >>> of the phase specific panic initialisers for placeHolderType, >>> placeHolderKind and friends? >>> >>> In order to safely traverse with SYB, the following needs to be >>> inserted into all the SYB schemes (see >>> >>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >>> ) >>> >>> -- Check the Typeable items >>> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >>> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >>> fixity `SYB.extQ` nameSet) x >>> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >>> :: GHC.NameSet -> Bool >>> postTcType = const (stage < SYB.TypeChecker ) >>> :: GHC.PostTcType -> Bool >>> fixity = const (stage < SYB.Renamer ) >>> :: GHC.Fixity -> Bool >>> >>> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >>> 'undefined values. >>> >>> Perhaps use an initialiser that can have its panic turned off when >>> called via the GHC API? >>> >>> Regards >>> Alan >>> >>> >>> >>> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >>> simonpj at microsoft.com> wrote: >>> >>>> So... 
does anyone object to me changing these "broken" instances >>>> with the ones given by DeriveDataTypeable? >>>> >>>> That?s fine with me provided (a) the default behaviour is not >>>> immediate divergence (which it might well be), and (b) the pitfalls are >>>> documented. >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>>> >>>> *Sent:* 24 July 2014 18:42 >>>> *To:* Simon Peyton Jones >>>> *Cc:* ghc-devs at haskell.org >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> Dear Simon, et al, >>>> >>>> These are very good points to make for people writing such traversals >>>> and queries. I would be more than happy to write a page on the pitfalls >>>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>>> is tremendously helped by trying small things out and showing (bits of) the >>>> intermediate structures. For me, personally, this has always been hindered >>>> by the absence of good instances of Data and/or Show (not having to bring >>>> DynFlags and not just visualising with the pretty printer are very helpful). >>>> >>>> So... does anyone object to me changing these "broken" instances with >>>> the ones given by DeriveDataTypeable? >>>> >>>> Also, many of these internal data structures could be provided with >>>> useful lenses to improve such traversals further. Anyone ever go at that? >>>> Would be people be interested? >>>> >>>> Regards, >>>> Philip >>>> >>>> >>>> *Simon Peyton Jones* >>>> >>>> 24 Jul 2014 18:22 >>>> >>>> GHC?s data structures are often mutually recursive. e.g. >>>> >>>> ? The TyCon for Maybe contains the DataCon for Just >>>> >>>> ? The DataCon For just contains Just?s type >>>> >>>> ? Just?s type contains the TyCon for Maybe >>>> >>>> >>>> >>>> So any attempt to recursively walk over all these structures, as you >>>> would a tree, will fail. >>>> >>>> >>>> >>>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>>> RULE, etc etc. In walking over a term you may not want to walk over all >>>> that stuff at every occurrence of map. >>>> >>>> >>>> >>>> Maybe that?s it; I?m not certain since I did not write the Data >>>> instances for any of GHC?s types >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>>> ] *On Behalf Of * >>>> p.k.f.holzenspies at utwente.nl >>>> *Sent:* 24 July 2014 16:42 >>>> *To:* ghc-devs at haskell.org >>>> *Subject:* Broken Data.Data instances >>>> >>>> >>>> >>>> Dear GHC-ers, >>>> >>>> >>>> >>>> Is there a reason for explicitly broken Data.Data instances? Case in >>>> point: >>>> >>>> >>>> >>>> > instance Data Var where >>>> >>>> > -- don't traverse? >>>> >>>> > toConstr _ = abstractConstr "Var" >>>> >>>> > gunfold _ _ = error "gunfold" >>>> >>>> > dataTypeOf _ = mkNoRepType "Var" >>>> >>>> >>>> >>>> I understand (vaguely) arguments about abstract data types, but this >>>> also excludes convenient queries that can, e.g. extract all types from a >>>> CoreExpr. I had hoped to do stuff like this: >>>> >>>> >>>> >>>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>>> >>>> > collect = everything mplus $ mkQ mzero return >>>> >>>> > >>>> >>>> > allTypes :: CoreExpr -> [Type] >>>> >>>> > allTypes = collect >>>> >>>> >>>> >>>> Especially when still exploring (parts of) the GHC API, being able to >>>> extract things in this fashion is very helpful. SYB?s ?everything? 
being >>>> broken by these instances, not so much. >>>> >>>> >>>> >>>> Would a patch ?fixing? these instances be acceptable? >>>> >>>> >>>> >>>> Regards, >>>> >>>> Philip >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From ekmett at gmail.com Sun Jul 27 17:27:09 2014 From: ekmett at gmail.com (Edward Kmett) Date: Sun, 27 Jul 2014 13:27:09 -0400 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, wrote: > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > > ------------------------------ > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > >> While we are talking about fixing traversals, how about getting rid >> of the phase specific panic initialisers for placeHolderType, >> placeHolderKind and friends? 
>> >> In order to safely traverse with SYB, the following needs to be inserted >> into all the SYB schemes (see >> >> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >> ) >> >> -- Check the Typeable items >> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >> fixity `SYB.extQ` nameSet) x >> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >> :: GHC.NameSet -> Bool >> postTcType = const (stage < SYB.TypeChecker ) >> :: GHC.PostTcType -> Bool >> fixity = const (stage < SYB.Renamer ) >> :: GHC.Fixity -> Bool >> >> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >> 'undefined values. >> >> Perhaps use an initialiser that can have its panic turned off when >> called via the GHC API? >> >> Regards >> Alan >> >> >> >> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> So... does anyone object to me changing these "broken" instances >>> with the ones given by DeriveDataTypeable? >>> >>> That?s fine with me provided (a) the default behaviour is not >>> immediate divergence (which it might well be), and (b) the pitfalls are >>> documented. >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>> *Sent:* 24 July 2014 18:42 >>> *To:* Simon Peyton Jones >>> *Cc:* ghc-devs at haskell.org >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> Dear Simon, et al, >>> >>> These are very good points to make for people writing such traversals >>> and queries. I would be more than happy to write a page on the pitfalls >>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>> is tremendously helped by trying small things out and showing (bits of) the >>> intermediate structures. For me, personally, this has always been hindered >>> by the absence of good instances of Data and/or Show (not having to bring >>> DynFlags and not just visualising with the pretty printer are very helpful). >>> >>> So... does anyone object to me changing these "broken" instances with >>> the ones given by DeriveDataTypeable? >>> >>> Also, many of these internal data structures could be provided with >>> useful lenses to improve such traversals further. Anyone ever go at that? >>> Would be people be interested? >>> >>> Regards, >>> Philip >>> >>> >>> *Simon Peyton Jones* >>> >>> 24 Jul 2014 18:22 >>> >>> GHC?s data structures are often mutually recursive. e.g. >>> >>> ? The TyCon for Maybe contains the DataCon for Just >>> >>> ? The DataCon For just contains Just?s type >>> >>> ? Just?s type contains the TyCon for Maybe >>> >>> >>> >>> So any attempt to recursively walk over all these structures, as you >>> would a tree, will fail. >>> >>> >>> >>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>> RULE, etc etc. In walking over a term you may not want to walk over all >>> that stuff at every occurrence of map. 
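To make the sharing point above concrete, here is a small self-contained sketch in the SYB style. The types Ty, Var and Expr below are invented stand-ins for GHC's Type, Var and CoreExpr (they are not GHC's types), and everythingBut comes from the syb package; the naive query re-walks the type stored inside every occurrence of a variable, while the pruned query records it once and refuses to descend further, which is roughly what a traversal over Core would want to do at each occurrence of map.

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics (Data, Typeable, everything, everythingBut, mkQ)

-- Toy stand-ins for GHC's Type/Var/CoreExpr, purely for illustration.
data Ty   = TyCon String [Ty]           deriving (Show, Data, Typeable)
data Var  = Var String Ty               deriving (Show, Data, Typeable)
data Expr = EVar Var | EApp Expr Expr   deriving (Show, Data, Typeable)

-- Naive SYB query: descends into every Var, so the type stored inside
-- each occurrence is visited again and again.
allTys :: Expr -> [Ty]
allTys = everything (++) ([] `mkQ` (\t -> [t :: Ty]))

-- Pruned query: record the Var's type once, then stop descending there.
allTysShallow :: Expr -> [Ty]
allTysShallow = everythingBut (++) (([], False) `mkQ` var)
  where var (Var _ t) = ([t], True)   -- True = do not recurse below a Var

main :: IO ()
main = do
  let m = Var "map" (TyCon "->" [TyCon "a" [], TyCon "b" []])
      e = EApp (EVar m) (EVar m)
  print (length (allTys e))         -- 6: every Ty node, once per occurrence
  print (length (allTysShallow e))  -- 2: one Ty per occurrence of the Var
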
>>> >>> >>> >>> Maybe that?s it; I?m not certain since I did not write the Data >>> instances for any of GHC?s types >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>> ] *On Behalf Of * >>> p.k.f.holzenspies at utwente.nl >>> *Sent:* 24 July 2014 16:42 >>> *To:* ghc-devs at haskell.org >>> *Subject:* Broken Data.Data instances >>> >>> >>> >>> Dear GHC-ers, >>> >>> >>> >>> Is there a reason for explicitly broken Data.Data instances? Case in >>> point: >>> >>> >>> >>> > instance Data Var where >>> >>> > -- don't traverse? >>> >>> > toConstr _ = abstractConstr "Var" >>> >>> > gunfold _ _ = error "gunfold" >>> >>> > dataTypeOf _ = mkNoRepType "Var" >>> >>> >>> >>> I understand (vaguely) arguments about abstract data types, but this >>> also excludes convenient queries that can, e.g. extract all types from a >>> CoreExpr. I had hoped to do stuff like this: >>> >>> >>> >>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>> >>> > collect = everything mplus $ mkQ mzero return >>> >>> > >>> >>> > allTypes :: CoreExpr -> [Type] >>> >>> > allTypes = collect >>> >>> >>> >>> Especially when still exploring (parts of) the GHC API, being able to >>> extract things in this fashion is very helpful. SYB?s ?everything? being >>> broken by these instances, not so much. >>> >>> >>> >>> Would a patch ?fixing? these instances be acceptable? >>> >>> >>> >>> Regards, >>> >>> Philip >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From alan.zimm at gmail.com Sun Jul 27 18:04:53 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 27 Jul 2014 20:04:53 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Philip How would you like to take this forward? From my side I would appreciate all guidance/help to get it resolved, it is a huge hindrance for HaRe. Alan On Sun, Jul 27, 2014 at 7:27 PM, Edward Kmett wrote: > Philip, Alan, > > If you need a hand, I'm happy to pitch in guidance. > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock. > > Simon, > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. 
You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > -Edward > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > >> Alan, >> >> In that case, let's have a short feedback-loop between the two of us. It >> seems many of these files (Name.lhs, for example) are really stable through >> the repo-history. It would be nice to have one bigger refactoring all in >> one go (some of the code could use a polish, a lot of code seems removable). >> >> Regards, >> Philip >> >> ------------------------------ >> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >> *Verzonden:* vrijdag 25 juli 2014 13:44 >> *Aan:* Simon Peyton Jones >> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >> *Onderwerp:* Re: Broken Data.Data instances >> >> By the way, I would be happy to attempt this task, if the concept is >> viable. >> >> >> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >>> While we are talking about fixing traversals, how about getting rid >>> of the phase specific panic initialisers for placeHolderType, >>> placeHolderKind and friends? >>> >>> In order to safely traverse with SYB, the following needs to be >>> inserted into all the SYB schemes (see >>> >>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >>> ) >>> >>> -- Check the Typeable items >>> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >>> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >>> fixity `SYB.extQ` nameSet) x >>> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >>> :: GHC.NameSet -> Bool >>> postTcType = const (stage < SYB.TypeChecker ) >>> :: GHC.PostTcType -> Bool >>> fixity = const (stage < SYB.Renamer ) >>> :: GHC.Fixity -> Bool >>> >>> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >>> 'undefined values. >>> >>> Perhaps use an initialiser that can have its panic turned off when >>> called via the GHC API? >>> >>> Regards >>> Alan >>> >>> >>> >>> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >>> simonpj at microsoft.com> wrote: >>> >>>> So... does anyone object to me changing these "broken" instances >>>> with the ones given by DeriveDataTypeable? >>>> >>>> That?s fine with me provided (a) the default behaviour is not >>>> immediate divergence (which it might well be), and (b) the pitfalls are >>>> documented. >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>>> >>>> *Sent:* 24 July 2014 18:42 >>>> *To:* Simon Peyton Jones >>>> *Cc:* ghc-devs at haskell.org >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> Dear Simon, et al, >>>> >>>> These are very good points to make for people writing such traversals >>>> and queries. I would be more than happy to write a page on the pitfalls >>>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>>> is tremendously helped by trying small things out and showing (bits of) the >>>> intermediate structures. 
For me, personally, this has always been hindered >>>> by the absence of good instances of Data and/or Show (not having to bring >>>> DynFlags and not just visualising with the pretty printer are very helpful). >>>> >>>> So... does anyone object to me changing these "broken" instances with >>>> the ones given by DeriveDataTypeable? >>>> >>>> Also, many of these internal data structures could be provided with >>>> useful lenses to improve such traversals further. Anyone ever go at that? >>>> Would be people be interested? >>>> >>>> Regards, >>>> Philip >>>> >>>> >>>> *Simon Peyton Jones* >>>> >>>> 24 Jul 2014 18:22 >>>> >>>> GHC?s data structures are often mutually recursive. e.g. >>>> >>>> ? The TyCon for Maybe contains the DataCon for Just >>>> >>>> ? The DataCon For just contains Just?s type >>>> >>>> ? Just?s type contains the TyCon for Maybe >>>> >>>> >>>> >>>> So any attempt to recursively walk over all these structures, as you >>>> would a tree, will fail. >>>> >>>> >>>> >>>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>>> RULE, etc etc. In walking over a term you may not want to walk over all >>>> that stuff at every occurrence of map. >>>> >>>> >>>> >>>> Maybe that?s it; I?m not certain since I did not write the Data >>>> instances for any of GHC?s types >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>>> ] *On Behalf Of * >>>> p.k.f.holzenspies at utwente.nl >>>> *Sent:* 24 July 2014 16:42 >>>> *To:* ghc-devs at haskell.org >>>> *Subject:* Broken Data.Data instances >>>> >>>> >>>> >>>> Dear GHC-ers, >>>> >>>> >>>> >>>> Is there a reason for explicitly broken Data.Data instances? Case in >>>> point: >>>> >>>> >>>> >>>> > instance Data Var where >>>> >>>> > -- don't traverse? >>>> >>>> > toConstr _ = abstractConstr "Var" >>>> >>>> > gunfold _ _ = error "gunfold" >>>> >>>> > dataTypeOf _ = mkNoRepType "Var" >>>> >>>> >>>> >>>> I understand (vaguely) arguments about abstract data types, but this >>>> also excludes convenient queries that can, e.g. extract all types from a >>>> CoreExpr. I had hoped to do stuff like this: >>>> >>>> >>>> >>>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>>> >>>> > collect = everything mplus $ mkQ mzero return >>>> >>>> > >>>> >>>> > allTypes :: CoreExpr -> [Type] >>>> >>>> > allTypes = collect >>>> >>>> >>>> >>>> Especially when still exploring (parts of) the GHC API, being able to >>>> extract things in this fashion is very helpful. SYB?s ?everything? being >>>> broken by these instances, not so much. >>>> >>>> >>>> >>>> Would a patch ?fixing? these instances be acceptable? >>>> >>>> >>>> >>>> Regards, >>>> >>>> Philip >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From eir at cis.upenn.edu Mon Jul 28 01:49:42 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 27 Jul 2014 21:49:42 -0400 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: What if there is a good reason for a missing/broken Data.Data instance? I'm specifically thinking of GADTs. There are few currently, but I, for one, have toyed with the idea of adding more. My recollection is that Data.Data doesn't work with GADTs. As a concrete, existent example, see CoAxiom.BranchList, which allows for type-level reification of singleton lists as distinct from other, not-necessarily-singleton lists. I would very much like to support API usage that would benefit from working Data.Data instances, but I also want to be sure we're not eliminating other possible futures without due discussion. Richard On Jul 27, 2014, at 2:04 PM, "Alan & Kim Zimmerman" wrote: > Philip > > How would you like to take this forward? From my side I would appreciate all guidance/help to get it resolved, it is a huge hindrance for HaRe. > > Alan > > > On Sun, Jul 27, 2014 at 7:27 PM, Edward Kmett wrote: > Philip, Alan, > > If you need a hand, I'm happy to pitch in guidance. > > I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. > > This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. > > Simon, > > It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. > > Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. > > -Edward > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > Alan, > > In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > > Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] > Verzonden: vrijdag 25 juli 2014 13:44 > Aan: Simon Peyton Jones > CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > Onderwerp: Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is viable. > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman wrote: > While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? 
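Richard's GADT point is easy to see on a toy version of BranchList. The sketch below is modelled loosely on CoAxiom.BranchList but is not GHC's actual definition; it only exists to show where a Data instance gets stuck.

{-# LANGUAGE GADTs #-}
-- Toy model of an index-carrying GADT, in the spirit of CoAxiom.BranchList.
data Branched   = Branched
data Unbranched = Unbranched

data BranchList a br where
  FirstBranch :: a -> BranchList a Unbranched
  NextBranch  :: a -> BranchList a br -> BranchList a Branched

-- A full Data instance would have to supply
--   gunfold :: (forall b r. Data b => c (b -> r) -> c r)
--           -> (forall r. r -> c r)
--           -> Constr -> c (BranchList a br)
-- i.e. rebuild a BranchList at whatever index 'br' the caller has fixed,
-- but FirstBranch only ever builds Unbranched and NextBranch only Branched.
-- 'deriving Data' is rejected for GADTs like this, which is why the
-- abstract toConstr / error "gunfold" style of instance shows up instead.

So whatever happens to the other instances, GADT-indexed types would still need either hand-written (and necessarily partial) instances or a different design.
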
> > In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called via the GHC API? > > Regards > Alan > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones wrote: > So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? > > > That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > Sent: 24 July 2014 18:42 > To: Simon Peyton Jones > Cc: ghc-devs at haskell.org > Subject: Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? > > Regards, > Philip > > > > > > Simon Peyton Jones > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would a tree, will fail. > > > > Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types > > > > Simon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl > Sent: 24 July 2014 16:42 > To: ghc-devs at haskell.org > Subject: Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? 
> > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? > > > > Regards, > > Philip > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Mon Jul 28 04:36:05 2014 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 28 Jul 2014 00:36:05 -0400 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Im mostly looking at the Data.Data stuff as "nice to have" at this point. Well, it really is need to have for some users, but they can typically get by by writing a few hundred lines of boilerplate when its not there. If you need to break something internally and it costs us a Data instance for something? Have at it. If we can still hack around the changes with Data then great. Otherwise the Data machinery has always been for expert users who already deal with a great deal of breakage anyways, so thrashing on that API seems fine to me. Not desirable, but not unexpected. -Edward On Sun, Jul 27, 2014 at 9:49 PM, Richard Eisenberg wrote: > What if there is a good reason for a missing/broken Data.Data instance? > I'm specifically thinking of GADTs. There are few currently, but I, for > one, have toyed with the idea of adding more. My recollection is that > Data.Data doesn't work with GADTs. As a concrete, existent example, see > CoAxiom.BranchList, which allows for type-level reification of singleton > lists as distinct from other, not-necessarily-singleton lists. > > I would very much like to support API usage that would benefit from > working Data.Data instances, but I also want to be sure we're not > eliminating other possible futures without due discussion. > > Richard > > On Jul 27, 2014, at 2:04 PM, "Alan & Kim Zimmerman" > wrote: > > Philip > > How would you like to take this forward? From my side I would appreciate > all guidance/help to get it resolved, it is a huge hindrance for HaRe. > > Alan > > > On Sun, Jul 27, 2014 at 7:27 PM, Edward Kmett wrote: > >> Philip, Alan, >> >> If you need a hand, I'm happy to pitch in guidance. 
>> >> I've had to mangle a bunch of hand-written Data instances and push out >> patches to a dozen packages that used to be built this way before I >> convinced the authors to switch to safer versions of Data. Using virtual >> smart constructors like we do now in containers and Text where needed can >> be used to preserve internal invariants, etc. >> >> This works far better for users of the API than just randomly throwing >> them a live hand grenade. As I recall, these little grenades in generic >> programming over the GHC API have been a constant source of pain for >> libraries like haddock. >> >> Simon, >> >> It seems to me that regarding circular data structures, nothing prevents >> you from walking a circular data structure with Data.Data. You can generate >> a new one productively that looks just like the old with the contents >> swapped out, it is indistinguishable to an observer if the fixed point is >> lost, and a clever observer can use observable sharing to get it back, >> supposing that they are allowed to try. >> >> Alternately, we could use the 'virtual constructor' trick there to break >> the cycle and reintroduce it, but I'm less enthusiastic about that idea, >> even if it is simpler in many ways. >> >> -Edward >> >> >> On Sun, Jul 27, 2014 at 10:17 AM, wrote: >> >>> Alan, >>> >>> In that case, let's have a short feedback-loop between the two of us. It >>> seems many of these files (Name.lhs, for example) are really stable through >>> the repo-history. It would be nice to have one bigger refactoring all in >>> one go (some of the code could use a polish, a lot of code seems removable). >>> >>> Regards, >>> Philip >>> >>> ------------------------------ >>> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >>> *Verzonden:* vrijdag 25 juli 2014 13:44 >>> *Aan:* Simon Peyton Jones >>> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >>> *Onderwerp:* Re: Broken Data.Data instances >>> >>> By the way, I would be happy to attempt this task, if the concept is >>> viable. >>> >>> >>> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >>> alan.zimm at gmail.com> wrote: >>> >>>> While we are talking about fixing traversals, how about getting rid >>>> of the phase specific panic initialisers for placeHolderType, >>>> placeHolderKind and friends? >>>> >>>> In order to safely traverse with SYB, the following needs to be >>>> inserted into all the SYB schemes (see >>>> >>>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >>>> ) >>>> >>>> -- Check the Typeable items >>>> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >>>> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >>>> fixity `SYB.extQ` nameSet) x >>>> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >>>> :: GHC.NameSet -> Bool >>>> postTcType = const (stage < SYB.TypeChecker ) >>>> :: GHC.PostTcType -> Bool >>>> fixity = const (stage < SYB.Renamer ) >>>> :: GHC.Fixity -> Bool >>>> >>>> And in addition HsCmdTop and ParStmtBlock are initialised with >>>> explicit 'undefined values. >>>> >>>> Perhaps use an initialiser that can have its panic turned off when >>>> called via the GHC API? >>>> >>>> Regards >>>> Alan >>>> >>>> >>>> >>>> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >>>> simonpj at microsoft.com> wrote: >>>> >>>>> So... does anyone object to me changing these "broken" instances >>>>> with the ones given by DeriveDataTypeable? 
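For reference, the "virtual smart constructor" trick Edward describes looks roughly like the sketch below. MySet, fromList and toList are invented here for illustration (the real pattern is the one used by the Data instances in containers and text): Data is told about a single pretend constructor, fromList, so generic rebuilds go through the invariant-preserving function rather than the raw representation, while SYB queries can still reach the elements.

import Data.Data
import Data.List (nub, sort)

newtype MySet a = MySet [a] deriving Show   -- invariant: sorted, no duplicates

fromList :: Ord a => [a] -> MySet a
fromList = MySet . nub . sort

toList :: MySet a -> [a]
toList (MySet xs) = xs

-- The abstract type exposes one virtual constructor, 'fromList'.
instance (Data a, Ord a) => Data (MySet a) where
  gfoldl f z s  = z fromList `f` toList s
  toConstr _    = fromListConstr
  gunfold k z c = case constrIndex c of
                    1 -> k (z fromList)
                    _ -> error "gunfold: MySet"
  dataTypeOf _  = mySetDataType

fromListConstr :: Constr
fromListConstr = mkConstr mySetDataType "fromList" [] Prefix

mySetDataType :: DataType
mySetDataType = mkDataType "MySet" [fromListConstr]

Unlike the abstractConstr / mkNoRepType instances quoted above, this one lets everything-style traversals see the contents, and any generic rewrite that rebuilds the value re-establishes the invariant by going through fromList.
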
>>>>> >>>>> That?s fine with me provided (a) the default behaviour is not >>>>> immediate divergence (which it might well be), and (b) the pitfalls are >>>>> documented. >>>>> >>>>> >>>>> >>>>> Simon >>>>> >>>>> >>>>> >>>>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>>>> >>>>> *Sent:* 24 July 2014 18:42 >>>>> *To:* Simon Peyton Jones >>>>> *Cc:* ghc-devs at haskell.org >>>>> *Subject:* Re: Broken Data.Data instances >>>>> >>>>> >>>>> >>>>> Dear Simon, et al, >>>>> >>>>> These are very good points to make for people writing such traversals >>>>> and queries. I would be more than happy to write a page on the pitfalls >>>>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>>>> is tremendously helped by trying small things out and showing (bits of) the >>>>> intermediate structures. For me, personally, this has always been hindered >>>>> by the absence of good instances of Data and/or Show (not having to bring >>>>> DynFlags and not just visualising with the pretty printer are very helpful). >>>>> >>>>> So... does anyone object to me changing these "broken" instances with >>>>> the ones given by DeriveDataTypeable? >>>>> >>>>> Also, many of these internal data structures could be provided with >>>>> useful lenses to improve such traversals further. Anyone ever go at that? >>>>> Would be people be interested? >>>>> >>>>> Regards, >>>>> Philip >>>>> >>>>> >>>>> >>>>> >>>>> *Simon Peyton Jones* >>>>> >>>>> 24 Jul 2014 18:22 >>>>> >>>>> GHC?s data structures are often mutually recursive. e.g. >>>>> >>>>> ? The TyCon for Maybe contains the DataCon for Just >>>>> >>>>> ? The DataCon For just contains Just?s type >>>>> >>>>> ? Just?s type contains the TyCon for Maybe >>>>> >>>>> >>>>> >>>>> So any attempt to recursively walk over all these structures, as you >>>>> would a tree, will fail. >>>>> >>>>> >>>>> >>>>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>>>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>>>> RULE, etc etc. In walking over a term you may not want to walk over all >>>>> that stuff at every occurrence of map. >>>>> >>>>> >>>>> >>>>> Maybe that?s it; I?m not certain since I did not write the Data >>>>> instances for any of GHC?s types >>>>> >>>>> >>>>> >>>>> Simon >>>>> >>>>> >>>>> >>>>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>>>> ] *On Behalf Of * >>>>> p.k.f.holzenspies at utwente.nl >>>>> *Sent:* 24 July 2014 16:42 >>>>> *To:* ghc-devs at haskell.org >>>>> *Subject:* Broken Data.Data instances >>>>> >>>>> >>>>> >>>>> Dear GHC-ers, >>>>> >>>>> >>>>> >>>>> Is there a reason for explicitly broken Data.Data instances? Case in >>>>> point: >>>>> >>>>> >>>>> >>>>> > instance Data Var where >>>>> >>>>> > -- don't traverse? >>>>> >>>>> > toConstr _ = abstractConstr "Var" >>>>> >>>>> > gunfold _ _ = error "gunfold" >>>>> >>>>> > dataTypeOf _ = mkNoRepType "Var" >>>>> >>>>> >>>>> >>>>> I understand (vaguely) arguments about abstract data types, but this >>>>> also excludes convenient queries that can, e.g. extract all types from a >>>>> CoreExpr. I had hoped to do stuff like this: >>>>> >>>>> >>>>> >>>>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>>>> >>>>> > collect = everything mplus $ mkQ mzero return >>>>> >>>>> > >>>>> >>>>> > allTypes :: CoreExpr -> [Type] >>>>> >>>>> > allTypes = collect >>>>> >>>>> >>>>> >>>>> Especially when still exploring (parts of) the GHC API, being able to >>>>> extract things in this fashion is very helpful. 
SYB?s ?everything? being >>>>> broken by these instances, not so much. >>>>> >>>>> >>>>> >>>>> Would a patch ?fixing? these instances be acceptable? >>>>> >>>>> >>>>> >>>>> Regards, >>>>> >>>>> Philip >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>>> >>>>> >>>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jul 28 06:45:23 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 28 Jul 2014 06:45:23 +0000 Subject: [commit: ghc] master: Module reexports, fixing #8407. (7f5c1086) In-Reply-To: <20140726010806.D1B342406D@ghc.haskell.org> References: <20140726010806.D1B342406D@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF20E3F35E@DB3PRD3001MB020.064d.mgd.msft.net> Edward Great stuff. Is this documented somewhere, notably in http://www.haskell.org/ghc/docs/latest/html/users_guide/packages.html for GHC, and somewhere in Cabal? And perhaps somewhere on the wiki https://ghc.haskell.org/trac/ghc/wiki/Commentary/Packages Thanks Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 26 July 2014 02:08 | To: ghc-commits at haskell.org | Subject: [commit: ghc] master: Module reexports, fixing #8407. | (7f5c1086) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : master | Link : | http://ghc.haskell.org/trac/ghc/changeset/7f5c10864e7c26b90c7ff4ed09d00 | c8a09aa4349/ghc | | >--------------------------------------------------------------- | | commit 7f5c10864e7c26b90c7ff4ed09d00c8a09aa4349 | Author: Edward Z. Yang | Date: Fri Jul 4 17:01:08 2014 +0100 | | Module reexports, fixing #8407. | | The general approach is to add a new field to the package database, | reexported-modules, which considered by the module finder as | possible | module declarations. Unlike declaring stub module files, multiple | reexports of the same physical package at the same name do not | result in an ambiguous import. | | Has submodule updates for Cabal and haddock. | | NB: When a reexport renames a module, that renaming is *not* | accessible | from inside the package. This is not so much a deliberate design | choice | as for implementation expediency (reexport resolution happens only | when | a package is in the package database.) | | TODO: Error handling when there are duplicate reexports/etc is not | very | well tested. | | Signed-off-by: Edward Z. 
Yang | | Conflicts: | compiler/main/HscTypes.lhs | testsuite/.gitignore | utils/haddock | | | >--------------------------------------------------------------- | | 7f5c10864e7c26b90c7ff4ed09d00c8a09aa4349 | compiler/main/DynFlags.hs | 1 + | compiler/main/Finder.lhs | 25 +++-- | compiler/main/GHC.hs | 12 ++- | compiler/main/HscTypes.lhs | 6 +- | compiler/main/PackageConfig.hs | 4 + | compiler/main/Packages.lhs | 109 | ++++++++++++++++----- | ghc/InteractiveUI.hs | 8 +- | libraries/Cabal | 2 +- | .../Distribution/InstalledPackageInfo/Binary.hs | 8 ++ | testsuite/.gitignore | 8 ++ | testsuite/tests/cabal/Makefile | 15 +++ | testsuite/tests/cabal/all.T | 6 ++ | testsuite/tests/cabal/cabal05/Makefile | 69 +++++++++++++ | .../{driver/T3007/A => cabal/cabal05}/Setup.hs | 0 | testsuite/tests/cabal/{cabal03 => cabal05}/all.T | 4 +- | .../tests/cabal/cabal05/p/LICENSE | 0 | testsuite/tests/cabal/cabal05/p/P.hs | 3 + | testsuite/tests/cabal/cabal05/p/P2.hs | 1 + | .../{driver/T3007/A => cabal/cabal05/p}/Setup.hs | 0 | testsuite/tests/cabal/cabal05/p/p.cabal | 11 +++ | .../tests/cabal/cabal05/q/LICENSE | 0 | testsuite/tests/cabal/cabal05/q/Q.hs | 4 + | .../{driver/T3007/A => cabal/cabal05/q}/Setup.hs | 0 | testsuite/tests/cabal/cabal05/q/q.cabal | 29 ++++++ | .../tests/cabal/cabal05/r/LICENSE | 0 | testsuite/tests/cabal/cabal05/r/R.hs | 11 +++ | .../{driver/T3007/A => cabal/cabal05/r}/Setup.hs | 0 | testsuite/tests/cabal/cabal05/r/r.cabal | 32 ++++++ | .../tests/cabal/cabal05/s/LICENSE | 0 | testsuite/tests/cabal/cabal05/s/S.hs | 18 ++++ | .../{driver/T3007/A => cabal/cabal05/s}/Setup.hs | 0 | testsuite/tests/cabal/cabal05/s/s.cabal | 11 +++ | testsuite/tests/cabal/ghcpkg07.stdout | 11 +++ | .../{test4.pkg => recache_reexport_db/a.conf} | 20 ++-- | testsuite/tests/cabal/{test4.pkg => test7a.pkg} | 20 ++-- | testsuite/tests/cabal/test7b.pkg | 17 ++++ | utils/ghc-cabal/ghc-cabal.cabal | 3 +- | utils/ghc-pkg/Main.hs | 55 ++++++++++- | utils/ghc-pkg/ghc-pkg.cabal | 4 +- | utils/ghctags/ghctags.cabal | 3 +- | utils/haddock | 2 +- | 41 files changed, 453 insertions(+), 79 deletions(-) | | Diff suppressed because of size. To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 7f5c10864e7c26b90c7ff4ed09d00c8a09aa4349 | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From simonpj at microsoft.com Mon Jul 28 07:14:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 28 Jul 2014 07:14:59 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. 
For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? 
In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFAA3C.05A2D1B0] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. 
I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. Would a patch ?fixing? these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From alan.zimm at gmail.com Mon Jul 28 07:50:23 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 28 Jul 2014 09:50:23 +0200 Subject: Broken Data.Data instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones wrote: > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > If the ?hand grenades? are the PostTcTypes, etc, then I can explain why > they are there. > > > > There simply is no sensible type you can put before the type checker > runs. For example one of the constructors in HsExpr is > > | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] > > After type checking we know what type the thing has, but before we have no > clue. > > > > We could get around this by saying > > type PostTcType = Maybe TcType > > but that would mean that every post-typechecking consumer would need a > redundant pattern-match on a Just that would always succeed. > > > > It?s nothing deeper than that. Adding Maybes everywhere would be > possible, just clunky. > > > > > > However we now have type functions, and HsExpr is parameterised by an ?id? > parameter, which changes from RdrName (after parsing) to Name (after > renaming) to Id (after typechecking). So we could do this: > > | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] > > and define PostTcType as a closed type family thus > > > > type family PostTcType a where > > PostTcType Id = TcType > > PostTcType other = () > > > > That would be better than filling it with bottoms. But it might not help > with generic programming, because there?d be a component whose type wasn?t > fixed. I have no idea how generics and type functions interact. 
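On that last point about generics and type functions, here is a quick toy experiment (hedged: Expr, the phase tags and PostTc below are made up, String stands in for TcType, and this is only a sketch rather than something exercised against GHC itself). It suggests the two can be made to get along: the Data instance has to be given by standalone deriving with a Data (PostTc p) constraint, and at any concrete phase the family reduces, so ordinary SYB queries run as usual.

{-# LANGUAGE TypeFamilies, StandaloneDeriving, DeriveDataTypeable,
             FlexibleContexts, UndecidableInstances #-}
import Data.Generics (Data, Typeable, everything, mkQ)

data Parsed      = Parsed
data Typechecked = Typechecked

-- Phase-indexed field: () before type checking, something real afterwards
-- (String plays the role of TcType in this toy).
type family PostTc p where
  PostTc Typechecked = String
  PostTc p           = ()

data Expr p = MultiIf (PostTc p) [Expr p] | Lit Int

-- Data can't be derived once and for all, because PostTc p is not fixed,
-- but a standalone-derived instance that demands it does work:
deriving instance (Typeable p, Data (PostTc p)) => Data (Expr p)

lits :: Data a => a -> [Int]
lits = everything (++) ([] `mkQ` (\n -> [n :: Int]))

main :: IO ()
main = print (lits (MultiIf "Bool" [Lit 1, Lit 2] :: Expr Typechecked))
-- prints [1,2]; the same query works at Expr Parsed, where PostTc reduces to ()

The price is that generic consumers would have to carry the Data (PostTc id) constraint (or a constraint synonym bundling it) in their own signatures, rather than getting Data for the expression type unconditionally.
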
> > > > Simon > > > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 July 2014 18:27 > *To:* p.k.f.holzenspies at utwente.nl > *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > > *Subject:* Re: Broken Data.Data instances > > > > Philip, Alan, > > > > If you need a hand, I'm happy to pitch in guidance. > > > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock. > > > > Simon, > > > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. > > > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > > > -Edward > > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > ------------------------------ > > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > While we are talking about fixing traversals, how about getting rid > of the phase specific panic initialisers for placeHolderType, > placeHolderKind and friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > ) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` > fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: > GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: > GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: > GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? > > Regards > > Alan > > > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > So... 
does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures. For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would be > people be interested? > > Regards, > Philip > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > > > Also there?s a lot of sharing. For example, every occurrence of ?map? is > a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, > etc etc. In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances > for any of GHC?s types > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB?s ?everything? being > broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? 
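The collect/allTypes query quoted above is easy to try against ordinary derived instances. Below is a self-contained version using the syb package; Lit and Expr are toy stand-ins for GHC's Type and CoreExpr, introduced only so the example runs on its own:

{-# LANGUAGE DeriveDataTypeable #-}
module CollectSketch where

import Control.Monad (MonadPlus, mplus, mzero)
import Data.Data (Data, Typeable)
import Data.Generics (everything, mkQ)

data Lit  = IntLit Int | StrLit String
  deriving (Show, Data, Typeable)

data Expr = Lit Lit | App Expr Expr | Lam String Expr
  deriving (Show, Data, Typeable)

-- Collect every value of type 'b' reachable inside 'a' (as in the quoted mail).
collect :: (Typeable b, Data a, MonadPlus m) => a -> m b
collect = everything mplus (mkQ mzero return)

-- The analogue of allTypes, but gathering literals from the toy Expr.
allLits :: Expr -> [Lit]
allLits = collect

main :: IO ()
main = print (allLits (App (Lit (IntLit 1)) (Lam "x" (Lit (StrLit "y")))))
-- prints [IntLit 1,StrLit "y"]

Running the same query over GHC's own CoreExpr is what the abstract Var instance quoted above gets in the way of, since its default gfoldl never descends into the Var's contents.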
> > > > Regards, > > Philip > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From alan.zimm at gmail.com Mon Jul 28 09:14:23 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 28 Jul 2014 11:14:23 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman wrote: > What about creating a specific type with a single constructor for the "not > relevant to this phase" type to be used instead of () above? That would > also clearly document what was going on. > > Alan > > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: > >> I've had to mangle a bunch of hand-written Data instances and push out >> patches to a dozen packages that used to be built this way before I >> convinced the authors to switch to safer versions of Data. Using virtual >> smart constructors like we do now in containers and Text where needed can >> be used to preserve internal invariants, etc. >> >> >> >> If the ?hand grenades? are the PostTcTypes, etc, then I can explain why >> they are there. >> >> >> >> There simply is no sensible type you can put before the type checker >> runs. For example one of the constructors in HsExpr is >> >> | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] >> >> After type checking we know what type the thing has, but before we have >> no clue. >> >> >> >> We could get around this by saying >> >> type PostTcType = Maybe TcType >> >> but that would mean that every post-typechecking consumer would need a >> redundant pattern-match on a Just that would always succeed. >> >> >> >> It?s nothing deeper than that. Adding Maybes everywhere would be >> possible, just clunky. >> >> >> >> >> >> However we now have type functions, and HsExpr is parameterised by an >> ?id? parameter, which changes from RdrName (after parsing) to Name (after >> renaming) to Id (after typechecking). So we could do this: >> >> | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] >> >> and define PostTcType as a closed type family thus >> >> >> >> type family PostTcType a where >> >> PostTcType Id = TcType >> >> PostTcType other = () >> >> >> >> That would be better than filling it with bottoms. But it might not help >> with generic programming, because there?d be a component whose type wasn?t >> fixed. I have no idea how generics and type functions interact. 
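Alan's variant (the lpaste link above) replaces () with a dedicated one-constructor type, so the placeholder is self-documenting and SYB still has an honest Data instance to look at. A minimal sketch of that idea, with illustrative names only, not the code behind the paste:

{-# LANGUAGE DeriveDataTypeable #-}
{-# LANGUAGE TypeFamilies #-}
module WrongPhaseSketch where

import Data.Data (Data, Typeable, toConstr)

-- An explicit "nothing lives here in this phase" marker instead of ().
data WrongPhase = WrongPhase
  deriving (Show, Eq, Data, Typeable)

-- Stand-ins for GHC's Id and TcType, as in the earlier sketch.
newtype Id     = Id String     deriving (Show, Data, Typeable)
newtype TcType = TcType String deriving (Show, Data, Typeable)

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = WrongPhase

main :: IO ()
main = print (toConstr WrongPhase)   -- SYB sees a real constructor, no panic

Compared with (), the gain is mostly readability: a WrongPhase in a dump or an error message says what it is.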
>> >> >> >> Simon >> >> >> >> *From:* Edward Kmett [mailto:ekmett at gmail.com] >> *Sent:* 27 July 2014 18:27 >> *To:* p.k.f.holzenspies at utwente.nl >> *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs >> >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Philip, Alan, >> >> >> >> If you need a hand, I'm happy to pitch in guidance. >> >> >> >> I've had to mangle a bunch of hand-written Data instances and push out >> patches to a dozen packages that used to be built this way before I >> convinced the authors to switch to safer versions of Data. Using virtual >> smart constructors like we do now in containers and Text where needed can >> be used to preserve internal invariants, etc. >> >> >> >> This works far better for users of the API than just randomly throwing >> them a live hand grenade. As I recall, these little grenades in generic >> programming over the GHC API have been a constant source of pain for >> libraries like haddock. >> >> >> >> Simon, >> >> >> >> It seems to me that regarding circular data structures, nothing prevents >> you from walking a circular data structure with Data.Data. You can generate >> a new one productively that looks just like the old with the contents >> swapped out, it is indistinguishable to an observer if the fixed point is >> lost, and a clever observer can use observable sharing to get it back, >> supposing that they are allowed to try. >> >> >> >> Alternately, we could use the 'virtual constructor' trick there to break >> the cycle and reintroduce it, but I'm less enthusiastic about that idea, >> even if it is simpler in many ways. >> >> >> >> -Edward >> >> >> >> On Sun, Jul 27, 2014 at 10:17 AM, wrote: >> >> Alan, >> >> In that case, let's have a short feedback-loop between the two of us. It >> seems many of these files (Name.lhs, for example) are really stable through >> the repo-history. It would be nice to have one bigger refactoring all in >> one go (some of the code could use a polish, a lot of code seems removable). >> >> Regards, >> Philip >> ------------------------------ >> >> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >> *Verzonden:* vrijdag 25 juli 2014 13:44 >> *Aan:* Simon Peyton Jones >> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >> *Onderwerp:* Re: Broken Data.Data instances >> >> By the way, I would be happy to attempt this task, if the concept is >> viable. >> >> >> >> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >> While we are talking about fixing traversals, how about getting rid >> of the phase specific panic initialisers for placeHolderType, >> placeHolderKind and friends? >> >> In order to safely traverse with SYB, the following needs to be inserted >> into all the SYB schemes (see >> >> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >> ) >> >> -- Check the Typeable items >> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >> fixity `SYB.extQ` nameSet) x >> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >> :: GHC.NameSet -> Bool >> postTcType = const (stage < SYB.TypeChecker ) >> :: GHC.PostTcType -> Bool >> fixity = const (stage < SYB.Renamer ) >> :: GHC.Fixity -> Bool >> >> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >> 'undefined values. >> >> Perhaps use an initialiser that can have its panic turned off when called >> via the GHC API? 
>> >> Regards >> >> Alan >> >> >> >> >> >> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >> So... does anyone object to me changing these "broken" instances with >> the ones given by DeriveDataTypeable? >> >> That?s fine with me provided (a) the default behaviour is not immediate >> divergence (which it might well be), and (b) the pitfalls are documented. >> >> >> >> Simon >> >> >> >> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >> *Sent:* 24 July 2014 18:42 >> *To:* Simon Peyton Jones >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Dear Simon, et al, >> >> These are very good points to make for people writing such traversals and >> queries. I would be more than happy to write a page on the pitfalls etc. on >> the wiki, but in my experience so far, exploring the innards of GHC is >> tremendously helped by trying small things out and showing (bits of) the >> intermediate structures. For me, personally, this has always been hindered >> by the absence of good instances of Data and/or Show (not having to bring >> DynFlags and not just visualising with the pretty printer are very helpful). >> >> So... does anyone object to me changing these "broken" instances with the >> ones given by DeriveDataTypeable? >> >> Also, many of these internal data structures could be provided with >> useful lenses to improve such traversals further. Anyone ever go at that? >> Would be people be interested? >> >> Regards, >> Philip >> >> *Simon Peyton Jones* >> >> 24 Jul 2014 18:22 >> >> GHC?s data structures are often mutually recursive. e.g. >> >> ? The TyCon for Maybe contains the DataCon for Just >> >> ? The DataCon For just contains Just?s type >> >> ? Just?s type contains the TyCon for Maybe >> >> >> >> So any attempt to recursively walk over all these structures, as you >> would a tree, will fail. >> >> >> >> Also there?s a lot of sharing. For example, every occurrence of ?map? is >> a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, >> etc etc. In walking over a term you may not want to walk over all that >> stuff at every occurrence of map. >> >> >> >> Maybe that?s it; I?m not certain since I did not write the Data instances >> for any of GHC?s types >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >> ] *On Behalf Of * >> p.k.f.holzenspies at utwente.nl >> *Sent:* 24 July 2014 16:42 >> *To:* ghc-devs at haskell.org >> *Subject:* Broken Data.Data instances >> >> >> >> Dear GHC-ers, >> >> >> >> Is there a reason for explicitly broken Data.Data instances? Case in >> point: >> >> >> >> > instance Data Var where >> >> > -- don't traverse? >> >> > toConstr _ = abstractConstr "Var" >> >> > gunfold _ _ = error "gunfold" >> >> > dataTypeOf _ = mkNoRepType "Var" >> >> >> >> I understand (vaguely) arguments about abstract data types, but this also >> excludes convenient queries that can, e.g. extract all types from a >> CoreExpr. I had hoped to do stuff like this: >> >> >> >> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >> >> > collect = everything mplus $ mkQ mzero return >> >> > >> >> > allTypes :: CoreExpr -> [Type] >> >> > allTypes = collect >> >> >> >> Especially when still exploring (parts of) the GHC API, being able to >> extract things in this fashion is very helpful. SYB?s ?everything? being >> broken by these instances, not so much. >> >> >> >> Would a patch ?fixing? these instances be acceptable? 
>> >> >> >> Regards, >> >> Philip >> >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From p.k.f.holzenspies at utwente.nl Mon Jul 28 10:20:00 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 28 Jul 2014 10:20:00 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 11:14 To: Simon Peyton Jones Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs Subject: Re: Broken Data.Data instances I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. 
However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? 
In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFAA5A.D2DDE610] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. 
I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. Would a patch ?fixing? these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From jan.stolarek at p.lodz.pl Mon Jul 28 10:27:17 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 28 Jul 2014 12:27:17 +0200 Subject: Understanding core2core optimisation pipeline In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF10438AB9@DB3PRD3001MB020.064d.mgd.msft.net> References: <201407241454.56547.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF10438AB9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201407281227.18059.jan.stolarek@p.lodz.pl> The wiki page just went live: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline It's not yet perfect but it should be a good start. > Roughtly, a complete run of the simplifier means "run the simplifier > repeatedly until nothing further happens". The iterations are the > successive iterations of this loop. Currently there's a (rather arbitrary) > limit of four such iterations before we give up and declare victory. A limit or a default value for that limit? To Ilya: > If you grep for the "late_dmd_anal" option variable in the compiler/simplCore/SimplCore.lhs > module, you'll see that it triggers a phase close to the endo of getCoreToDo's tasks, which > contains, in particular, the "CoreDoStrictness" pass. This is the "late" phase. The paper said that the late pass is run to detect single-entry thunks and the reason why it is run late in the pipeline is that if it were run earlier this information could be invalidated by the transformations. But in the source code I see that this late pass is followed by the simplifier, which can invalidate the information. Also, the documentation for -flate-dmd-anal says: "We found some opportunities for discovering strictness that were not visible earlier; and optimisations like -fspec-constr can create functions with unused arguments which are eliminated by late demand analysis". This says nothing about single-netry thunks. So, is the single-entry thunk optimisation performed by GHC? Janek From p.k.f.holzenspies at utwente.nl Mon Jul 28 10:30:30 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 28 Jul 2014 10:30:30 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Sorry about that? I?m having it out with my terminal server and the server seems to be winning. 
Here?s another go: I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. I was wondering whether it would be possible to make stateful lenses for this. Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward? Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 11:14 To: Simon Peyton Jones Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs Subject: Re: Broken Data.Data instances I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. 
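Philip's "remember what you visited" question can be made concrete without committing to lenses or to GHC's real types: carry a set of already-seen keys in a State monad and stop when a key repeats. Node and nodeKey below are invented for the illustration; in GHC one would presumably key on a Unique instead:

module VisitedSketch where

import qualified Data.IntSet as IntSet
import Control.Monad.State (State, evalState, gets, modify)

data Node = Node { nodeKey :: Int, nodeLabel :: String, nodeOut :: [Node] }

-- Collect the labels of all reachable nodes, visiting each key at most once,
-- so a cyclic structure still gives a finite traversal.
reachableLabels :: Node -> [String]
reachableLabels root = evalState (go root) IntSet.empty
  where
    go :: Node -> State IntSet.IntSet [String]
    go n = do
      seen <- gets (IntSet.member (nodeKey n))
      if seen
        then return []
        else do
          modify (IntSet.insert (nodeKey n))
          rests <- mapM go (nodeOut n)
          return (nodeLabel n : concat rests)

-- A tiny knot in the spirit of the Maybe TyCon / Just DataCon cycle:
maybeTyCon, justDataCon :: Node
maybeTyCon  = Node 1 "TyCon Maybe"  [justDataCon]
justDataCon = Node 2 "DataCon Just" [maybeTyCon]

main :: IO ()
main = mapM_ putStrLn (reachableLabels maybeTyCon)
-- prints the two labels once each, despite the cycle

A lens- or SYB-based version would hide the same bookkeeping under the traversal; it does not by itself answer the sharing concern, which still needs per-type cut-offs like the stage checks quoted earlier.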
Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. 
H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFAA5F.B8814B90] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. Would a patch ?fixing? these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From ezyang at mit.edu Mon Jul 28 10:38:43 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 28 Jul 2014 11:38:43 +0100 Subject: [commit: ghc] master: Module reexports, fixing #8407. 
(7f5c1086) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF20E3F35E@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140726010806.D1B342406D@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF20E3F35E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1406543821-sup-3697@sabre> Excerpts from Simon Peyton Jones's message of 2014-07-28 07:45:23 +0100: > Great stuff. Is this documented somewhere, notably in > http://www.haskell.org/ghc/docs/latest/html/users_guide/packages.html > for GHC, and somewhere in Cabal? You're right, I should add a line to the installed package specification. We're already documented as far as Cabal is concerned. > And perhaps somewhere on the wiki > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Packages It's not on that page, but it is here https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Packages#Reexportedmodules Edward From ezyang at mit.edu Mon Jul 28 11:49:58 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 28 Jul 2014 12:49:58 +0100 Subject: [commit: ghc] master: Module reexports, fixing #8407. (7f5c1086) In-Reply-To: <1406543821-sup-3697@sabre> References: <20140726010806.D1B342406D@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF20E3F35E@DB3PRD3001MB020.064d.mgd.msft.net> <1406543821-sup-3697@sabre> Message-ID: <1406548183-sup-2563@sabre> Excerpts from Edward Z. Yang's message of 2014-07-28 11:38:43 +0100: > You're right, I should add a line to the installed package > specification. We're already documented as far as Cabal is concerned. OK, this is done. From alan.zimm at gmail.com Mon Jul 28 12:10:13 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 28 Jul 2014 14:10:13 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Philip I think the main reason for the WrongPhase thing is to have something that explicitly has a Data and Typeable instance, to allow generic (SYB) traversal. If we can get by without this so much the better. On a related note, is there any way to constrain the 'a' in type family PostTcType a where PostTcType Id = TcType PostTcType other = WrongPhaseTyp to have an instance of Data? I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commentet out in the original). Alan On Mon, Jul 28, 2014 at 12:30 PM, wrote: > Sorry about that? I?m having it out with my terminal server and the > server seems to be winning. Here?s another go: > > > > I always read the () as ?there?s nothing meaningful to stick in here, but > I have to stick in something? so I don?t necessarily want the > WrongPhase-thing. There is very old commentary stating it would be lovely > if someone could expose the PostTcType as a parameter of the AST-types, but > that there are so many types and constructors, that it?s a boring chore to > do. Actually, I was hoping haRe would come up to speed to be able to do > this. That being said, I think Simon?s idea to turn PostTcType into a > type-family is a better way altogether; it also documents intent, i.e. () > may not say so much, but PostTcType RdrName says quite a lot. > > > > Simon commented that a lot of the internal structures aren?t trees, but > cyclic graphs, e.g. 
the TyCon for Maybe references the DataCons for Just > and Nothing, which again refer to the TyCon for Maybe. I was wondering > whether it would be possible to make stateful lenses for this. Of course, > for specific cases, we could do this, but I wonder if it is also possible > to have lenses remember the things they visited and not visit them twice. > Any ideas on this, Edward? > > > > Regards, > > Philip > > > > > > > > > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* maandag 28 juli 2014 11:14 > *To:* Simon Peyton Jones > *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs > > *Subject:* Re: Broken Data.Data instances > > > > I have made a conceptual example of this here http://lpaste.net/108262 > > Alan > > > > On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: > > What about creating a specific type with a single constructor for the "not > relevant to this phase" type to be used instead of () above? That would > also clearly document what was going on. > > Alan > > > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > If the ?hand grenades? are the PostTcTypes, etc, then I can explain why > they are there. > > > > There simply is no sensible type you can put before the type checker > runs. For example one of the constructors in HsExpr is > > | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] > > After type checking we know what type the thing has, but before we have no > clue. > > > > We could get around this by saying > > type PostTcType = Maybe TcType > > but that would mean that every post-typechecking consumer would need a > redundant pattern-match on a Just that would always succeed. > > > > It?s nothing deeper than that. Adding Maybes everywhere would be > possible, just clunky. > > > > > > However we now have type functions, and HsExpr is parameterised by an ?id? > parameter, which changes from RdrName (after parsing) to Name (after > renaming) to Id (after typechecking). So we could do this: > > | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] > > and define PostTcType as a closed type family thus > > > > type family PostTcType a where > > PostTcType Id = TcType > > PostTcType other = () > > > > That would be better than filling it with bottoms. But it might not help > with generic programming, because there?d be a component whose type wasn?t > fixed. I have no idea how generics and type functions interact. > > > > Simon > > > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 July 2014 18:27 > *To:* p.k.f.holzenspies at utwente.nl > *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > > > *Subject:* Re: Broken Data.Data instances > > > > Philip, Alan, > > > > If you need a hand, I'm happy to pitch in guidance. > > > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. 
> > > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock. > > > > Simon, > > > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. > > > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > > > -Edward > > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > ------------------------------ > > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > While we are talking about fixing traversals, how about getting rid of > the phase specific panic initialisers for placeHolderType, placeHolderKind > and friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > ) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` > fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: > GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: > GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: > GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? > > Regards > > Alan > > > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > So... does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. 
on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures. For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would be > people be interested? > > Regards, > Philip > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > > > Also there?s a lot of sharing. For example, every occurrence of ?map? is > a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, > etc etc. In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances > for any of GHC?s types > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB?s ?everything? being > broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? > > > > Regards, > > Philip > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From austin at well-typed.com Mon Jul 28 13:48:22 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 28 Jul 2014 08:48:22 -0500 Subject: Status updates Message-ID: Hello *, Here are some notes on what I've done in the past week, and what I plan on doing going forward: - First and foremost, I made a bunch of improvements to Phabricator/Trac. 
Now, most noticeably: 1) Phabricator has a field for GHC trac tickets. You can specify this when you run 'arc diff', and it will appear on the Differential Revision. By default if you run 'arc diff', this field appears at the bottom, below 'Subscribers:'. 2) Phabricator now supports Trac markup syntax - if you say '#7602' in a trac ticket, this turns into a hyperlink with the text "Trac #7602" and links to the ticket. 3) The IRC bot on #ghc can now look up Trac tickets for you (this is mostly only relevant to those of us there) - The final piece of this whole story is actually getting Phabricator to *comment* on Trac. This is almost done, but due to the fact I'm writing PHP (and thus have absolutely no idea what I'm doing) the code still seems slightly broken somewhere since it's one of the larger parts of the Phabricator integration. Hopefully I can finish this soon. I will say though - thank you Herbert for helping me figure out the Trac XML/RPC plugin, and getting it working for me! - Another Phabricator related thing: I've begun working on a better Harbormaster build backend - it will actually report failures, support concurrent builds, and give you stderr in log results! Yay! - I took some time to further clean up the Git wiki page following last week, but only made minor progress here. - The patch queue is, in fact, being drained as we speak! I have a whole bunch of incoming patches from contributors running under ./validate as I write this, including some cleanup commits too (to detab/whitespace things). Not all of them, but a good chunk I could fit in that were low-impact and correct. - I have not finished AMP. :( This is quickly going to be my #1 priority before fixing any bugs because at this point, it's holding up other `base` improvements (Edward K, I'm sure, is getting a bit fiddly about this getting done :) This week: - Before fixing *any* new bugs myself, I'm going to finish AMP, because it's a blocker for others including Edward, Simon, and future base improvements. I'm thinking I may just do this before any more Phabricator improvements. - I will hopefully finish the Trac and Phabricator setup soon. at the very least I want to get the comment updates working. I'll probably continue with the build system shenannigans sometime after. - The wiki pages and Git stuff still need more cleanup to streamline them. This will be happening when I get a chance (Herbert already improved some things I touched last week - Thanks!) -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Jul 28 14:02:42 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 28 Jul 2014 14:02:42 +0000 Subject: Trac Message-ID: <618BE556AADD624C9C918AA5D5911BEF20E44C91@DB3PRD3001MB020.064d.mgd.msft.net> Trac is taking minutes to display a page. Also spam is still coming Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Mon Jul 28 14:40:11 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Mon, 28 Jul 2014 07:40:11 -0700 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 Message-ID: The long anticipated Haskell Platform 2014.2 release, including GHC 7.8.3, and numerous updated packages, is almost here! We have created "Release Candidate 2" installers for OS X and Windows, and believe, barring show stopper issues, creepers exploding, or other bizarre phenomenon, these will likely be blessed as the final in a few days time. 
If you would like to be an early adopter, please try 'em out... - source tarball: haskell-platform-2014.2.0.0-RC2.tar.gz - source repo: haskell/haskell-platform at 2014.2.0.0-RC2 - windows 32bit: hskellPlatform-2014.2.0.0-i386-RC2-setup.exe - windows 64bit:hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC2.signed.pkg - travis-ci build: haskell/haskell-platform *Notes for RC2, since RC1:* *Windows* *Extra thanks to Randy Polen for burning the midnight-oil to get ths out* - removed unneeded python (et al) files from the GHC bindist for 64-bit Windows (referenced in GHC ticket #9014 ) - added HTML "view source" pages for the GHC packages that was missing from the GHC bindist for both 32- and 64-bit Windows. *Mac OS X* *If you installed RC1, you can remove first with the command *sudo uninstall-hs only 7.8.3 --remove *Run it without *--remove* to see what it will do before running, it if you like. In theory you can just install this one right on top of RC1.... but hasn't been tested.* - file layout on the Mac... improved slightly (again). In particular, executables are now installed directly in $prefix/bin dirs, rather than within the package dir - fix the bug with haddock master index not being updated correctly - works on 10.6! (and 10.7, 10.8, and 10.9) with gcc or clang based Xcodes - works on 10.10!!! (Yosemite, developer preview 4 release) *Source tarball* - missing sources in hptool now present - platform.sh improved somewhat (in particular, handles host cabal being pre-sandbox) *Timetable* - These can "soak" amongst the intreped on these lists for a few hours. - On Monday evening (PST) I'll announce to haskell-cafe and reddit - End of next week (from my vacation, I'll point out), we'll declare success and ship. ? Mark *SHA-256 sums:* 62f39246ad95dd2aed6ece5138f6297f945d2b450f215d074820294310e0c48a Haskell Platform 2014.2.0.0 64bit RC2.signed.pkg 7c7d3585e89e1407461efea29dcaa9628c3be3c47d93a13b5a4978046375e4fd haskell-platform-2014.2.0.0-RC2.tar.gz 6eedd76aafb266d9a09baff80cd2973498ab59195c771f7cd64425d40be29c49 hskellPlatform-2014.2.0.0-i386-RC2-setup.exe b22115ed84d1f7e747d7f0b47e32e1489e4a24613d69c91df4ae32052f88b130 hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Jul 28 14:55:45 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 28 Jul 2014 14:55:45 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Dear Alan, I would think you would want to constrain the result, i.e. type family (Data (PostTcType a)) => PostTcType a where ? The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType a?. Your point about SYB-recognition of WrongPhase is, of course, a good one ;) Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 14:10 To: Holzenspies, P.K.F. (EWI) Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Philip I think the main reason for the WrongPhase thing is to have something that explicitly has a Data and Typeable instance, to allow generic (SYB) traversal. 
If we can get by without this so much the better. On a related note, is there any way to constrain the 'a' in type family PostTcType a where PostTcType Id = TcType PostTcType other = WrongPhaseTyp to have an instance of Data? I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commentet out in the original). Alan On Mon, Jul 28, 2014 at 12:30 PM, > wrote: Sorry about that? I?m having it out with my terminal server and the server seems to be winning. Here?s another go: I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. I was wondering whether it would be possible to make stateful lenses for this. Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward? Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 11:14 To: Simon Peyton Jones Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs Subject: Re: Broken Data.Data instances I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). 
So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? 
Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFAA84.C621E9B0] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. Would a patch ?fixing? these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From austin at well-typed.com Mon Jul 28 15:10:06 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 28 Jul 2014 10:10:06 -0500 Subject: Trac In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF20E44C91@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF20E44C91@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: This seems to be alleviated now - page results are returning fast now it seems. Herbert - can you fill us in on what's happening here? I vaguely remember the details but I'll probably mess it all up if I try repeating it. :) Do we not have caching in place or anything for ghc.haskell.org? I think that was sort of the issue. RE: spam, I think we just need to move off our captchas into custom Haskell-based ones, which bots shouldn't be able to figure out. Dunno how much work that is though - Herbert will probably know that too. On Mon, Jul 28, 2014 at 9:02 AM, Simon Peyton Jones wrote: > Trac is taking minutes to display a page. > > Also spam is still coming > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From alan.zimm at gmail.com Mon Jul 28 15:41:45 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 28 Jul 2014 17:41:45 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I already tried that, the syntax does not seem to allow it. I suspect some higher form of sorcery will be required, as alluded to here http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family Alan On Mon, Jul 28, 2014 at 4:55 PM, wrote: > Dear Alan, > > > > I would think you would want to constrain the result, i.e. > > > > type family (Data (PostTcType a)) => PostTcType a where ? > > > > The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType > a?. > > > > Your point about SYB-recognition of WrongPhase is, of course, a good one ;) > > > > Regards, > > Philip > > > > > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* maandag 28 juli 2014 14:10 > *To:* Holzenspies, P.K.F. (EWI) > *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > > *Subject:* Re: Broken Data.Data instances > > > > Philip > > I think the main reason for the WrongPhase thing is to have something that > explicitly has a Data and Typeable instance, to allow generic (SYB) > traversal. If we can get by without this so much the better. > > On a related note, is there any way to constrain the 'a' in > > type family PostTcType a where > PostTcType Id = TcType > PostTcType other = WrongPhaseTyp > > to have an instance of Data? > > I am experimenting with traversals over my earlier paste, and got stuck > here (which is the reason the Show instances were commentet out in the > original). > > Alan > > > > > > On Mon, Jul 28, 2014 at 12:30 PM, wrote: > > Sorry about that? I?m having it out with my terminal server and the server > seems to be winning. 
Here?s another go: > > > > I always read the () as ?there?s nothing meaningful to stick in here, but > I have to stick in something? so I don?t necessarily want the > WrongPhase-thing. There is very old commentary stating it would be lovely > if someone could expose the PostTcType as a parameter of the AST-types, but > that there are so many types and constructors, that it?s a boring chore to > do. Actually, I was hoping haRe would come up to speed to be able to do > this. That being said, I think Simon?s idea to turn PostTcType into a > type-family is a better way altogether; it also documents intent, i.e. () > may not say so much, but PostTcType RdrName says quite a lot. > > > > Simon commented that a lot of the internal structures aren?t trees, but > cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just > and Nothing, which again refer to the TyCon for Maybe. I was wondering > whether it would be possible to make stateful lenses for this. Of course, > for specific cases, we could do this, but I wonder if it is also possible > to have lenses remember the things they visited and not visit them twice. > Any ideas on this, Edward? > > > > Regards, > > Philip > > > > > > > > > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > > *Sent:* maandag 28 juli 2014 11:14 > > *To:* Simon Peyton Jones > *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs > > > *Subject:* Re: Broken Data.Data instances > > > > I have made a conceptual example of this here http://lpaste.net/108262 > > Alan > > > > On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: > > What about creating a specific type with a single constructor for the "not > relevant to this phase" type to be used instead of () above? That would > also clearly document what was going on. > > Alan > > > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > If the ?hand grenades? are the PostTcTypes, etc, then I can explain why > they are there. > > > > There simply is no sensible type you can put before the type checker > runs. For example one of the constructors in HsExpr is > > | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] > > After type checking we know what type the thing has, but before we have no > clue. > > > > We could get around this by saying > > type PostTcType = Maybe TcType > > but that would mean that every post-typechecking consumer would need a > redundant pattern-match on a Just that would always succeed. > > > > It?s nothing deeper than that. Adding Maybes everywhere would be > possible, just clunky. > > > > > > However we now have type functions, and HsExpr is parameterised by an ?id? > parameter, which changes from RdrName (after parsing) to Name (after > renaming) to Id (after typechecking). So we could do this: > > | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] > > and define PostTcType as a closed type family thus > > > > type family PostTcType a where > > PostTcType Id = TcType > > PostTcType other = () > > > > That would be better than filling it with bottoms. But it might not help > with generic programming, because there?d be a component whose type wasn?t > fixed. 
I have no idea how generics and type functions interact. > > > > Simon > > > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 July 2014 18:27 > *To:* p.k.f.holzenspies at utwente.nl > *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > > > *Subject:* Re: Broken Data.Data instances > > > > Philip, Alan, > > > > If you need a hand, I'm happy to pitch in guidance. > > > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock. > > > > Simon, > > > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. > > > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > > > -Edward > > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > ------------------------------ > > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > While we are talking about fixing traversals, how about getting rid of > the phase specific panic initialisers for placeHolderType, placeHolderKind > and friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > ) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` > fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: > GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: > GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: > GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? 
> > Regards > > Alan > > > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > So... does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures. For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would be > people be interested? > > Regards, > Philip > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > > > Also there?s a lot of sharing. For example, every occurrence of ?map? is > a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, > etc etc. In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances > for any of GHC?s types > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB?s ?everything? being > broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? 
> > > > Regards, > > Philip > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From alan.zimm at gmail.com Mon Jul 28 15:45:48 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 28 Jul 2014 17:45:48 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: FYI I edited the paste at http://lpaste.net/108262 to show the problem On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman wrote: > I already tried that, the syntax does not seem to allow it. > > I suspect some higher form of sorcery will be required, as alluded to here > http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family > > Alan > > > On Mon, Jul 28, 2014 at 4:55 PM, wrote: > >> Dear Alan, >> >> >> >> I would think you would want to constrain the result, i.e. >> >> >> >> type family (Data (PostTcType a)) => PostTcType a where ? >> >> >> >> The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType >> a?. >> >> >> >> Your point about SYB-recognition of WrongPhase is, of course, a good one >> ;) >> >> >> >> Regards, >> >> Philip >> >> >> >> >> >> >> >> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >> *Sent:* maandag 28 juli 2014 14:10 >> *To:* Holzenspies, P.K.F. (EWI) >> *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org >> >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Philip >> >> I think the main reason for the WrongPhase thing is to have something >> that explicitly has a Data and Typeable instance, to allow generic (SYB) >> traversal. If we can get by without this so much the better. >> >> On a related note, is there any way to constrain the 'a' in >> >> type family PostTcType a where >> PostTcType Id = TcType >> PostTcType other = WrongPhaseTyp >> >> to have an instance of Data? >> >> I am experimenting with traversals over my earlier paste, and got stuck >> here (which is the reason the Show instances were commentet out in the >> original). >> >> Alan >> >> >> >> >> >> On Mon, Jul 28, 2014 at 12:30 PM, wrote: >> >> Sorry about that? I?m having it out with my terminal server and the >> server seems to be winning. Here?s another go: >> >> >> >> I always read the () as ?there?s nothing meaningful to stick in here, but >> I have to stick in something? so I don?t necessarily want the >> WrongPhase-thing. There is very old commentary stating it would be lovely >> if someone could expose the PostTcType as a parameter of the AST-types, but >> that there are so many types and constructors, that it?s a boring chore to >> do. Actually, I was hoping haRe would come up to speed to be able to do >> this. That being said, I think Simon?s idea to turn PostTcType into a >> type-family is a better way altogether; it also documents intent, i.e. 
() >> may not say so much, but PostTcType RdrName says quite a lot. >> >> >> >> Simon commented that a lot of the internal structures aren?t trees, but >> cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just >> and Nothing, which again refer to the TyCon for Maybe. I was wondering >> whether it would be possible to make stateful lenses for this. Of course, >> for specific cases, we could do this, but I wonder if it is also possible >> to have lenses remember the things they visited and not visit them twice. >> Any ideas on this, Edward? >> >> >> >> Regards, >> >> Philip >> >> >> >> >> >> >> >> >> >> >> >> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >> >> *Sent:* maandag 28 juli 2014 11:14 >> >> *To:* Simon Peyton Jones >> *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs >> >> >> *Subject:* Re: Broken Data.Data instances >> >> >> >> I have made a conceptual example of this here http://lpaste.net/108262 >> >> Alan >> >> >> >> On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >> What about creating a specific type with a single constructor for the >> "not relevant to this phase" type to be used instead of () above? That >> would also clearly document what was going on. >> >> Alan >> >> >> >> On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >> I've had to mangle a bunch of hand-written Data instances and push out >> patches to a dozen packages that used to be built this way before I >> convinced the authors to switch to safer versions of Data. Using virtual >> smart constructors like we do now in containers and Text where needed can >> be used to preserve internal invariants, etc. >> >> >> >> If the ?hand grenades? are the PostTcTypes, etc, then I can explain why >> they are there. >> >> >> >> There simply is no sensible type you can put before the type checker >> runs. For example one of the constructors in HsExpr is >> >> | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] >> >> After type checking we know what type the thing has, but before we have >> no clue. >> >> >> >> We could get around this by saying >> >> type PostTcType = Maybe TcType >> >> but that would mean that every post-typechecking consumer would need a >> redundant pattern-match on a Just that would always succeed. >> >> >> >> It?s nothing deeper than that. Adding Maybes everywhere would be >> possible, just clunky. >> >> >> >> >> >> However we now have type functions, and HsExpr is parameterised by an >> ?id? parameter, which changes from RdrName (after parsing) to Name (after >> renaming) to Id (after typechecking). So we could do this: >> >> | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] >> >> and define PostTcType as a closed type family thus >> >> >> >> type family PostTcType a where >> >> PostTcType Id = TcType >> >> PostTcType other = () >> >> >> >> That would be better than filling it with bottoms. But it might not help >> with generic programming, because there?d be a component whose type wasn?t >> fixed. I have no idea how generics and type functions interact. >> >> >> >> Simon >> >> >> >> *From:* Edward Kmett [mailto:ekmett at gmail.com] >> *Sent:* 27 July 2014 18:27 >> *To:* p.k.f.holzenspies at utwente.nl >> *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs >> >> >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Philip, Alan, >> >> >> >> If you need a hand, I'm happy to pitch in guidance. 
>> >> >> >> I've had to mangle a bunch of hand-written Data instances and push out >> patches to a dozen packages that used to be built this way before I >> convinced the authors to switch to safer versions of Data. Using virtual >> smart constructors like we do now in containers and Text where needed can >> be used to preserve internal invariants, etc. >> >> >> >> This works far better for users of the API than just randomly throwing >> them a live hand grenade. As I recall, these little grenades in generic >> programming over the GHC API have been a constant source of pain for >> libraries like haddock. >> >> >> >> Simon, >> >> >> >> It seems to me that regarding circular data structures, nothing prevents >> you from walking a circular data structure with Data.Data. You can generate >> a new one productively that looks just like the old with the contents >> swapped out, it is indistinguishable to an observer if the fixed point is >> lost, and a clever observer can use observable sharing to get it back, >> supposing that they are allowed to try. >> >> >> >> Alternately, we could use the 'virtual constructor' trick there to break >> the cycle and reintroduce it, but I'm less enthusiastic about that idea, >> even if it is simpler in many ways. >> >> >> >> -Edward >> >> >> >> On Sun, Jul 27, 2014 at 10:17 AM, wrote: >> >> Alan, >> >> In that case, let's have a short feedback-loop between the two of us. It >> seems many of these files (Name.lhs, for example) are really stable through >> the repo-history. It would be nice to have one bigger refactoring all in >> one go (some of the code could use a polish, a lot of code seems removable). >> >> Regards, >> Philip >> ------------------------------ >> >> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >> *Verzonden:* vrijdag 25 juli 2014 13:44 >> *Aan:* Simon Peyton Jones >> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >> *Onderwerp:* Re: Broken Data.Data instances >> >> By the way, I would be happy to attempt this task, if the concept is >> viable. >> >> >> >> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >> While we are talking about fixing traversals, how about getting rid >> of the phase specific panic initialisers for placeHolderType, >> placeHolderKind and friends? >> >> In order to safely traverse with SYB, the following needs to be inserted >> into all the SYB schemes (see >> >> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >> ) >> >> -- Check the Typeable items >> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >> fixity `SYB.extQ` nameSet) x >> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >> :: GHC.NameSet -> Bool >> postTcType = const (stage < SYB.TypeChecker ) >> :: GHC.PostTcType -> Bool >> fixity = const (stage < SYB.Renamer ) >> :: GHC.Fixity -> Bool >> >> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >> 'undefined values. >> >> Perhaps use an initialiser that can have its panic turned off when called >> via the GHC API? >> >> Regards >> >> Alan >> >> >> >> >> >> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >> So... does anyone object to me changing these "broken" instances with >> the ones given by DeriveDataTypeable? 
>> >> That?s fine with me provided (a) the default behaviour is not immediate >> divergence (which it might well be), and (b) the pitfalls are documented. >> >> >> >> Simon >> >> >> >> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >> *Sent:* 24 July 2014 18:42 >> *To:* Simon Peyton Jones >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Broken Data.Data instances >> >> >> >> Dear Simon, et al, >> >> These are very good points to make for people writing such traversals and >> queries. I would be more than happy to write a page on the pitfalls etc. on >> the wiki, but in my experience so far, exploring the innards of GHC is >> tremendously helped by trying small things out and showing (bits of) the >> intermediate structures. For me, personally, this has always been hindered >> by the absence of good instances of Data and/or Show (not having to bring >> DynFlags and not just visualising with the pretty printer are very helpful). >> >> So... does anyone object to me changing these "broken" instances with the >> ones given by DeriveDataTypeable? >> >> Also, many of these internal data structures could be provided with >> useful lenses to improve such traversals further. Anyone ever go at that? >> Would be people be interested? >> >> Regards, >> Philip >> >> *Simon Peyton Jones* >> >> 24 Jul 2014 18:22 >> >> GHC?s data structures are often mutually recursive. e.g. >> >> ? The TyCon for Maybe contains the DataCon for Just >> >> ? The DataCon For just contains Just?s type >> >> ? Just?s type contains the TyCon for Maybe >> >> >> >> So any attempt to recursively walk over all these structures, as you >> would a tree, will fail. >> >> >> >> Also there?s a lot of sharing. For example, every occurrence of ?map? is >> a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, >> etc etc. In walking over a term you may not want to walk over all that >> stuff at every occurrence of map. >> >> >> >> Maybe that?s it; I?m not certain since I did not write the Data instances >> for any of GHC?s types >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >> ] *On Behalf Of * >> p.k.f.holzenspies at utwente.nl >> *Sent:* 24 July 2014 16:42 >> *To:* ghc-devs at haskell.org >> *Subject:* Broken Data.Data instances >> >> >> >> Dear GHC-ers, >> >> >> >> Is there a reason for explicitly broken Data.Data instances? Case in >> point: >> >> >> >> > instance Data Var where >> >> > -- don't traverse? >> >> > toConstr _ = abstractConstr "Var" >> >> > gunfold _ _ = error "gunfold" >> >> > dataTypeOf _ = mkNoRepType "Var" >> >> >> >> I understand (vaguely) arguments about abstract data types, but this also >> excludes convenient queries that can, e.g. extract all types from a >> CoreExpr. I had hoped to do stuff like this: >> >> >> >> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >> >> > collect = everything mplus $ mkQ mzero return >> >> > >> >> > allTypes :: CoreExpr -> [Type] >> >> > allTypes = collect >> >> >> >> Especially when still exploring (parts of) the GHC API, being able to >> extract things in this fashion is very helpful. SYB?s ?everything? being >> broken by these instances, not so much. >> >> >> >> Would a patch ?fixing? these instances be acceptable? 
>> >> >> >> Regards, >> >> Philip >> >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From ker0sin at ya.ru Mon Jul 28 23:27:09 2014 From: ker0sin at ya.ru (Alexander Pakhomov) Date: Tue, 29 Jul 2014 03:27:09 +0400 Subject: Dashboard In-Reply-To: <1406238103.7074.1.camel@joachim-breitner.de> References: <561421406215321@web9j.yandex.ru> <1406238103.7074.1.camel@joachim-breitner.de> Message-ID: <852871406590029@web15m.yandex.ru> Hi Joachim, I think automatic regression notification at least to the author is a good idea. Probably I can do it in a nearest time. Unfortunately, right now I fail to get your code up. Also I believe it is a good style to check commits for regressions before pushing them. But maybe GHC community is less performance oriented. 25.07.2014, 01:41, "Joachim Breitner" : > ?Hi Alexander, > > ?Am Donnerstag, den 24.07.2014, 19:22 +0400 schrieb Alexander Pakhomov: >> ??I've been informed you created http://ghcspeed-nomeata.rhcloud.com. I >> ??believe it is necessary for any serious perfomance work. >> ??Do you need some help? What have you done and what problems are >> ??unsolved? > ?thanks for your interest. > > ?One way to help is to monitor the page for regressions, and notify > ?whoever caused it. I?m doing that from time to time, relying on the > ?Latest Results summary. > > ?Then I don?t find the codespeed software to be perfect. The thread at > ?https://groups.google.com/forum/#!topic/codespeed/yY5-kPrcG94 discusses > ?some of the things I don?t like. So if you feel like hacking Python, > ?just hack away on it and I?ll happily accept pull requests at > ?https://github.com/nomeata/codespeed (branch ghc), or directly upstream. > > ?Eventually, the whole setup should be moved to some dedicated and > ?official hardware, but I?m not in a hurry with this. It?s been running > ?for just a week or two, and I?d like to observe how its behaving. > > ?Greetings, > ?Joachim > > ?PS: I prefer to discuss these things on the appropriate mailing list and > ?in public, if only for the archive. If you agree, simply reply to the > ?ghc-dev list (quoting in full) and I?ll read your reply there. > > ?-- > ?Joachim ?nomeata? Breitner > ???mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > ???Jabber: nomeata at joachim-breitner.de ?? GPG-Key: 0xF0FBF51F > ???Debian Developer: nomeata at debian.org From jan.stolarek at p.lodz.pl Tue Jul 29 05:57:00 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 29 Jul 2014 07:57:00 +0200 Subject: Status updates In-Reply-To: References: Message-ID: <201407290757.00312.jan.stolarek@p.lodz.pl> > ? 3) The IRC bot on #ghc can now look up Trac tickets for you (this is > mostly only relevant to those of us there) Sounds great. But how can we do this? Janek From gergo at erdi.hu Tue Jul 29 06:25:15 2014 From: gergo at erdi.hu (Dr. 
ERDI Gergo)
Date: Tue, 29 Jul 2014 08:25:15 +0200 (CEST)
Subject: .gitignore of tests
Message-ID: 

Hi,

Is there a good reason we have a single monolithic .gitignore file in
testsuite/ instead of one per directory? It just feels so unnecessarily
burdensome to maintain it compared to just putting four lines in a new
.gitignore file...

Bye,
Gergo

From mail at joachim-breitner.de  Tue Jul 29 07:34:04 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Tue, 29 Jul 2014 09:34:04 +0200
Subject: Dashboard
In-Reply-To: <852871406590029@web15m.yandex.ru>
References: <561421406215321@web9j.yandex.ru> <1406238103.7074.1.camel@joachim-breitner.de> <852871406590029@web15m.yandex.ru>
Message-ID: <1406619244.10918.8.camel@joachim-breitner.de>

Dear Alexander,

Am Dienstag, den 29.07.2014, 03:27 +0400 schrieb Alexander Pakhomov:
> I think automatic regression notification at least to the author is a good idea.
> Probably I can do it in a nearest time. Unfortunately, right now I fail to get your code up.
> Also I believe it is a good style to check commits for regressions before pushing them.
> But maybe GHC community is less performance oriented.

we don't even enforce that the code compiles and the test suite passes
before pushing, but we are doing ok. Having things checked and reported
after the fact is already a good step forward.

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
  mail at joachim-breitner.de * http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de * GPG-Key: 0xF0FBF51F
  Debian Developer: nomeata at debian.org

From hvriedel at gmail.com  Tue Jul 29 07:34:40 2014
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Tue, 29 Jul 2014 09:34:40 +0200
Subject: .gitignore of tests
In-Reply-To: (ERDI Gergo's message of "Tue, 29 Jul 2014 08:25:15 +0200 (CEST)")
References: 
Message-ID: <874my0d3e7.fsf@gmail.com>

Hello!

On 2014-07-29 at 08:25:15 +0200, Dr. ERDI Gergo wrote:
> Is there a good reason we have a single monolithic .gitignore file in
> testsuite/ instead of one per directory? It just feels so
> unnecessarily burdensome to maintain it compared to just putting four
> lines in a new .gitignore file...

That's a good question I've been wondering about as well. I recently
merged a few single .gitignore files I found in testsuite/ sub-folders
into the main testsuite/.gitignore file, as it was rather inconsistent
to have most stuff listed in the monolithic testsuite/.gitignore file
and just a few deliberate .gitignores in a few sub-folders:

http://git.haskell.org/ghc.git/commitdiff/767b9ddf7d2ea2bb99d49372c79be129fc2058ce

The other issue is tooling. Some Git front-ends such as
http://magit.github.io/ don't even recognize non-top-level .gitignore
files (yet), and certainly don't offer to create one if they don't find
it in the current sub-folder (instead magit just adds it to the
top-level gitignore file).

Also, I'm not sure if having .gitignore files sprinkled all over the
source-tree instead of having just a few .gitignore files would be so
much better, as in order to compute the "effective .gitignore", you have
to consider the union of all existing .gitignore files up to the
top-level folder (while not crossing Git repo boundaries).
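To make that concrete, here is a small made-up layout (the paths and
patterns below are invented for illustration and are not taken from the
GHC tree). The patterns that apply to a file are simply the union of the
patterns in every .gitignore from the repository root down to that file's
directory, and `git check-ignore -v <path>` reports which file and which
pattern actually matched:

  .gitignore                        # repo-wide patterns, e.g.  *.o  *.hi
  testsuite/.gitignore              # patterns shared by the whole testsuite
  testsuite/tests/foo/.gitignore    # e.g. the four lines for one new test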
However, we *can* switch to a scheme where we use many little per-folder
.gitignore files, but everyone would have to agree to follow that new
scheme, or we'll end up with a mess of confusing possibly overlapping
.gitignore files.

Cheers,
  hvr

From hvriedel at gmail.com  Tue Jul 29 07:50:33 2014
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Tue, 29 Jul 2014 09:50:33 +0200
Subject: redundant rts/StgPrimFloat decodeDouble?
Message-ID: <87y4vcbo3a.fsf@gmail.com>

Hello *,

While working on integer-gmp2 I noticed that there seems to be redundant
code between rts/StgPrimFloat.c and integer-gmp. Specifically, there's a

  void __decodeDouble_2Int (I_ *man_sign, W_ *man_high, W_ *man_low,
                            I_ *exp, StgDouble dbl);

C implementation which does some low-level bit-fiddling on the IEEE
representation instead of just using the more portable standard frexp()
function, like I'm doing in integer-gmp2, e.g.:

  HsInt
  integer_gmp_decode_double (const HsDouble x, HsInt64 *const mantisse)
  {
    if (x) {
      int exp = 0;
      *mantisse = (HsInt64)scalbn(frexp(x, &exp), DBL_MANT_DIG);
      return exp-DBL_MANT_DIG;
    } else {
      *mantisse = 0;
      return 0;
    }
  }

A similar operation exists in integer-gmp/cbits/float.c

So here are my questions:

1) Is there any reason/value in doing low-level IEEE bit-fiddling instead
   of just delegating to the C99/POSIX compliant frexp(3) operations?

2) Specifically, `decodeDouble_2Int` seems to be dead-code. I'd like to
   instead move the

     HsInt integer_gmp_decode_double (const HsDouble x, HsInt64 *const mantisse);

   which I currently import as

     #if WORD_SIZE_IN_BITS == 32
     foreign import prim "integer_gmp_cmm_decode_double"
       cmm_decode_double :: Double# -> (# Int64#, Int# #)
     #elif WORD_SIZE_IN_BITS == 64
     foreign import prim "integer_gmp_cmm_decode_double"
       cmm_decode_double :: Double# -> (# Int#, Int# #)
     #endif

   based on the size of `Int#` into rts/PrimOps.cmm & rts/StgPrimFloat.c,
   as that may allow me to avoid any C-- in integer-gmp2 altogether (as I
   only need to go via C-- to return an unboxed pair)

Cheers,
  hvr

From simonpj at microsoft.com  Tue Jul 29 09:11:05 2014
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Tue, 29 Jul 2014 09:11:05 +0000
Subject: Overlapping and incoherent instances
Message-ID: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net>

Friends

One of GHC's more widely-used features is overlapping (and sometimes
incoherent) instances. The user-manual documentation is here. The use of
overlapping/incoherent instances is controlled by LANGUAGE pragmas:
OverlappingInstances and IncoherentInstances respectively.

However the overlap/incoherent-ness is a property of the *instance
declaration* itself, and has been for a long time. Using LANGUAGE
OverlappingInstances simply sets the "I am an overlapping instance" flag
for every instance declaration in that module.

This is a Big Hammer. It gives no clue about *which* particular instances
the programmer is expecting to be overlapped, nor which are doing the
overlapping. It brutally applies to every instance in the module.
Moreover, when looking at an instance declaration, there is no nearby clue
that it might be overlapped. The clue might be in the command line that
compiles that module!

Iavor has recently implemented per-instance-declaration pragmas, so you can say

  instance {-# OVERLAPPABLE #-} Show a => Show [a] where ...
  instance {-# OVERLAPPING #-} Show [Char] where ...
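To make the intended usage concrete, here is a small self-contained sketch;
the Describe class and its instances are invented purely for illustration
(they are not part of GHC or of this proposal), and only the pragma
placement shows the new feature:

  {-# LANGUAGE FlexibleInstances #-}
  module OverlapSketch where

  import Data.List (intercalate)

  class Describe a where
    describe :: a -> String

  instance Describe Int where
    describe n = "the number " ++ show n

  -- The general instance marks itself as safe to be overlapped ...
  instance {-# OVERLAPPABLE #-} Describe a => Describe [a] where
    describe xs = "a list of [" ++ intercalate ", " (map describe xs) ++ "]"

  -- ... and the more specific instance marks itself as overlapping it.
  instance {-# OVERLAPPING #-} Describe [Char] where
    describe s = "the string " ++ show s

  -- describe "abc"         ==> "the string \"abc\""   (the more specific,
  --                            OVERLAPPING instance wins)
  -- describe [1, 2 :: Int] ==> "a list of [the number 1, the number 2]"

Only those two instances carry the flags; every other instance in the
module is unaffected, which is exactly the point of the per-instance
pragmas.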
This is much more precise (it affects only those specific instances) and it
is much clearer (you see it when you see the instance declaration).

This new feature will be in GHC 7.10 and I'm sure you will be happy about
that. But I propose also to deprecate the LANGUAGE pragmas
OverlappingInstances and IncoherentInstances, as a way to encourage
everyone to use the new feature instead of the old big hammer. The old
LANGUAGE pragmas will continue to work, of course, for at least another
complete release cycle. We could make that two cycles if it was helpful.
However, if you want deprecation-free libraries, it will entail a wave of
library updates.

This email is just to warn you, and to let you yell if you think this is a
bad idea. It would actually not be difficult to retain the old LANGUAGE
pragmas indefinitely - it just seems wrong not to actively push authors in
the right direction.

These deprecations of course popped up in the test suite, so I've been
replacing them with per-instance pragmas there too. Interestingly in some
cases, when looking for which instances needed the pragmas, I found...none.
So OverlappingInstances was entirely unnecessary. Maybe library authors
will find that too!

Simon

From hvr at gnu.org  Tue Jul 29 09:50:08 2014
From: hvr at gnu.org (Herbert Valerio Riedel)
Date: Tue, 29 Jul 2014 11:50:08 +0200
Subject: Overlapping and incoherent instances
In-Reply-To: <53D76989.2070808@nh2.me> (Niklas Hambüchen's message of "Tue, 29 Jul 2014 11:29:45 +0200")
References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me>
Message-ID: <87lhrcbijz.fsf@gnu.org>

On 2014-07-29 at 11:29:45 +0200, Niklas Hambüchen wrote:
>> instance {-# OVERLAPPABLE #-} Show a => Show [a] where ...
>
> Is the syntax somewhat flexible in where the pragma can be placed?
> For example, some might prefer
>
> {-# OVERLAPPING #-}
> instance Show [Char] where ...

This variant may also be more convenient in cases where you need to
CPP-guard that pragma, as it's on a separate line.

From johan.tibell at gmail.com  Tue Jul 29 10:02:19 2014
From: johan.tibell at gmail.com (Johan Tibell)
Date: Tue, 29 Jul 2014 12:02:19 +0200
Subject: Overlapping and incoherent instances
In-Reply-To: <87lhrcbijz.fsf@gnu.org>
References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org>
Message-ID: 

On Tue, Jul 29, 2014 at 11:50 AM, Herbert Valerio Riedel wrote:
> On 2014-07-29 at 11:29:45 +0200, Niklas Hambüchen wrote:
>>> instance {-# OVERLAPPABLE #-} Show a => Show [a] where ...
>>
>> Is the syntax somewhat flexible in where the pragma can be placed?
>> For example, some might prefer
>>
>> {-# OVERLAPPING #-}
>> instance Show [Char] where ...
>
> This variant may also be more convenient in cases where you need to
> CPP-guard that pragma, as it's on a separate line.

Agreed, and if we remove the old pragma (even with a deprecation
cycle) you'll see quite a few of those as many library authors try to
have their libraries compile with the last 3 major GHC versions.

P.S. For e.g. INLINABLE we require that you mention the function name
next to the pragma (which means that you can e.g. put the pragma after
the declaration). What's the rationale to not require

  {-# OVERLAPPING Show [Char] #-}
From ezyang at mit.edu Tue Jul 29 10:09:14 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 29 Jul 2014 11:09:14 +0100 Subject: .gitignore of tests In-Reply-To: <874my0d3e7.fsf@gmail.com> References: <874my0d3e7.fsf@gmail.com> Message-ID: <1406628501-sup-4569@sabre> My 2 cents: I don't really care what we do, as long as (1) it's obvious where to put new gitignore entries, and (2) the gitignore list is accurate. Perhaps the testsuite should learn about Git and offer to add the files it creates to gitignore? Cheers, Edward Excerpts from Herbert Valerio Riedel's message of 2014-07-29 08:34:40 +0100: > Hello! > > On 2014-07-29 at 08:25:15 +0200, Dr. ERDI Gergo wrote: > > Is there a good reason we have a single monolithic .gitignore file in > > testuiste/ instead of one per directory? It just feels so > > unnecessarily burdensome to maintain it compared to just putting four > > lines in a new .gitignore file... > > That's a good question I've been wondering about as well. I recently > merged a few single .gitignore files I found in testsuite/ sub-folders > into the main testsuite/.gitignore file it was rather inconsistent to > have most stuff listed in the monolithic testsuite/.gitignore file and > just a few deliberate .gitignores in a few sub-folders: > > http://git.haskell.org/ghc.git/commitdiff/767b9ddf7d2ea2bb99d49372c79be129fc2058ce > > The other issue is tooling. Some Git front-ends such as > http://magit.github.io/ don't even recognize non-top-level .gitignore > files (yet), and certainly don't offer to create one if they don't find > it in the current sub-folder (instead magit just adds it to the > top-level gitignore file) > > Also, I'm not sure if having .gitignore files sprinkled all over the > source-tree instead of having just a few .gitignore files would be so > much better, as in order to compute the "effective .gitignore", you have > to consider the union of all existing .gitignore files up to the > top-level folder (while not crossing Git repo boundaries). > > However, we *can* switch to a scheme where we use many little per-folder > .gitignore files, but everyone would have to agree to follow that new > scheme, or we'll end up with a mess of confusing possibly overlapping > .gitignore files. > > Cheers, > hvr From daniel.trstenjak at gmail.com Tue Jul 29 10:37:14 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Tue, 29 Jul 2014 12:37:14 +0200 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> Message-ID: <20140729103714.GA15071@machine> On Tue, Jul 29, 2014 at 12:02:19PM +0200, Johan Tibell wrote: > What's the rationale to not require > > {-# OVERLAPPING Show [Char] #-} > > here? Perhaps it's too annoying to have to repeat the types? This one might be written at the top of the file, so it would be easier to overlook it, than having it directly at the instance declaration, which seems to be one of the major points for OVERLAPPING and OVERLAPPABLE. 
Greetings, Daniel From johan.tibell at gmail.com Tue Jul 29 10:48:08 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 29 Jul 2014 12:48:08 +0200 Subject: Overlapping and incoherent instances In-Reply-To: <20140729103714.GA15071@machine> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> <20140729103714.GA15071@machine> Message-ID: On Tue, Jul 29, 2014 at 12:37 PM, Daniel Trstenjak wrote: > > On Tue, Jul 29, 2014 at 12:02:19PM +0200, Johan Tibell wrote: >> What's the rationale to not require >> >> {-# OVERLAPPING Show [Char] #-} >> >> here? Perhaps it's too annoying to have to repeat the types? > > This one might be written at the top of the file, so it would be easier > to overlook it, than having it directly at the instance declaration, > which seems to be one of the major points for OVERLAPPING and OVERLAPPABLE. The same could be said for e.g. INLINE. The extra flexibility is nice to have (e.g. because you can opt to put the pragma after the declaration, to de-emphasize it.) From simonpj at microsoft.com Tue Jul 29 11:02:35 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 29 Jul 2014 11:02:35 +0000 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2207B596@DB3PRD3001MB020.064d.mgd.msft.net> The current implementation requires the pragma exactly where showed it. I'm not keen on allowing it to be separated. I suppose with some more parser jiggery pokery it could be allowed immediately before (or, better, after). But cpp would let you say instance #if blah {-# OVERLAPPABLE #-} #endif Show a => Show [a] where ... Simon | -----Original Message----- | From: Johan Tibell [mailto:johan.tibell at gmail.com] | Sent: 29 July 2014 11:02 | To: Herbert Valerio Riedel | Cc: Niklas Hamb?chen; Haskell Libraries (libraries at haskell.org); GHC | users; Simon Peyton Jones; ghc-devs | Subject: Re: Overlapping and incoherent instances | | On Tue, Jul 29, 2014 at 11:50 AM, Herbert Valerio Riedel | wrote: | > On 2014-07-29 at 11:29:45 +0200, Niklas Hamb?chen wrote: | >>> instance {-# OVERLAPPABLE #-} Show a => Show [a] where ? | >> | >> Is the syntax somewhat flexible in where the pragma can be placed? | >> For example, some might prefer | >> | >> {-# OVERLAPPING #-} | >> instance Show [Char] where ? | > | > This variant may also be more convenient in cases where you need to | > CPP-guard that pragma, as it's on a separate line. | | Agreed, and if we remove the old pragma (even with a deprecation | cycle) you'll see quite a few of those as many library authors try to | have their libraries compile with the last 3 major GHC versions. | | P.S. For e.g. INLINABLE we require that you mention the function name | next to the pragma (which means that you can e.g. put the pragma after | the declaration). What's the rationale to not require | | {-# OVERLAPPING Show [Char] #-} | | here? Perhaps it's too annoying to have to repeat the types? From george.colpitts at gmail.com Tue Jul 29 11:45:12 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 29 Jul 2014 08:45:12 -0300 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 In-Reply-To: References: Message-ID: Installation worked fine. 
However I encountered a problem that looks like a regression, although it may be a problem with new versions of the package I am trying to install: cabal install -j3 glib Resolving dependencies... Configuring glib-0.12.5.4... Building glib-0.12.5.4... Failed to install glib-0.12.5.4 Build log ( /Users/gcolpitts/.cabal/logs/glib-0.12.5.4.log ): [1 of 2] Compiling SetupWrapper ( /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/SetupWrapper.hs, /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/SetupWrapper.o ) [2 of 2] Compiling Main ( /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/setup.hs, /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/Main.o ) Linking /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/setup ... [1 of 2] Compiling Gtk2HsSetup ( Gtk2HsSetup.hs, dist/setup-wrapper/Gtk2HsSetup.o ) [2 of 2] Compiling Main ( SetupMain.hs, dist/setup-wrapper/Main.o ) Linking dist/setup-wrapper/setup ... Configuring glib-0.12.5.4... Building glib-0.12.5.4... Preprocessing library glib-0.12.5.4... gtk2hsC2hs: Error in C header file. /usr/include/dirent.h:147: (column 10) [FATAL] >>> Syntax error! The symbol `^' does not fit here. cabal: Error: some packages failed to install: glib-0.12.5.4 failed during the building phase. The exception was: ExitFailure 1 On Mon, Jul 28, 2014 at 11:40 AM, Mark Lentczner wrote: > The long anticipated Haskell Platform 2014.2 release, including GHC 7.8.3, > and numerous updated packages, is almost here! > > We have created "Release Candidate 2" installers for OS X and Windows, and > believe, barring show stopper issues, creepers exploding, or other bizarre > phenomenon, these will likely be blessed as the final in a few days time. > > If you would like to be an early adopter, please try 'em out... > > - source tarball: haskell-platform-2014.2.0.0-RC2.tar.gz > > - source repo: haskell/haskell-platform at 2014.2.0.0-RC2 > > - windows 32bit: hskellPlatform-2014.2.0.0-i386-RC2-setup.exe > > - windows 64bit:hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe > > - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC2.signed.pkg > > - travis-ci build: haskell/haskell-platform > > > > *Notes for RC2, since RC1:* > > *Windows* > > *Extra thanks to Randy Polen for burning the midnight-oil to get ths out* > > > - removed unneeded python (et al) files from the GHC bindist for > 64-bit Windows (referenced in GHC ticket #9014 > ) > - added HTML "view source" pages for the GHC packages that was missing > from the GHC bindist for both 32- and 64-bit Windows. > > *Mac OS X* > > *If you installed RC1, you can remove first with the command *sudo > uninstall-hs only 7.8.3 --remove > > *Run it without *--remove* to see what it will do before running, it if > you like. In theory you can just install this one right on top of RC1.... > but hasn't been tested.* > > > - file layout on the Mac... improved slightly (again). In particular, > executables are now installed directly in $prefix/bin dirs, rather than > within the package dir > - fix the bug with haddock master index not being updated correctly > - works on 10.6! (and 10.7, 10.8, and 10.9) with gcc or clang based > Xcodes > - works on 10.10!!! 
(Yosemite, developer preview 4 release) > > *Source tarball* > > - missing sources in hptool now present > - platform.sh improved somewhat (in particular, handles host cabal > being pre-sandbox) > > *Timetable* > > - These can "soak" amongst the intreped on these lists for a few hours. > - On Monday evening (PST) I'll announce to haskell-cafe and reddit > - End of next week (from my vacation, I'll point out), we'll declare > success and ship. > > ? Mark > > *SHA-256 sums:* > 62f39246ad95dd2aed6ece5138f6297f945d2b450f215d074820294310e0c48a Haskell > Platform 2014.2.0.0 64bit RC2.signed.pkg > 7c7d3585e89e1407461efea29dcaa9628c3be3c47d93a13b5a4978046375e4fd > haskell-platform-2014.2.0.0-RC2.tar.gz > 6eedd76aafb266d9a09baff80cd2973498ab59195c771f7cd64425d40be29c49 > hskellPlatform-2014.2.0.0-i386-RC2-setup.exe > b22115ed84d1f7e747d7f0b47e32e1489e4a24613d69c91df4ae32052f88b130 > hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Tue Jul 29 11:54:43 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 29 Jul 2014 08:54:43 -0300 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 In-Reply-To: References: Message-ID: apologies, for lack of detail, the following was on a Mac running 10.9.4 On Tue, Jul 29, 2014 at 8:45 AM, George Colpitts wrote: > Installation worked fine. However I encountered a problem that looks like > a regression, although it may be a problem with new versions of the package > I am trying to install: > > cabal install -j3 glib > Resolving dependencies... > Configuring glib-0.12.5.4... > Building glib-0.12.5.4... > Failed to install glib-0.12.5.4 > Build log ( /Users/gcolpitts/.cabal/logs/glib-0.12.5.4.log ): > [1 of 2] Compiling SetupWrapper ( > /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/SetupWrapper.hs, > /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/SetupWrapper.o > ) > [2 of 2] Compiling Main ( > /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/setup.hs, > /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/Main.o > ) > Linking > /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/glib-0.12.5.4-27489/glib-0.12.5.4/dist/setup/setup > ... > [1 of 2] Compiling Gtk2HsSetup ( Gtk2HsSetup.hs, > dist/setup-wrapper/Gtk2HsSetup.o ) > [2 of 2] Compiling Main ( SetupMain.hs, > dist/setup-wrapper/Main.o ) > Linking dist/setup-wrapper/setup ... > Configuring glib-0.12.5.4... > Building glib-0.12.5.4... > Preprocessing library glib-0.12.5.4... > gtk2hsC2hs: Error in C header file. > > /usr/include/dirent.h:147: (column 10) [FATAL] > >>> Syntax error! > The symbol `^' does not fit here. > > cabal: Error: some packages failed to install: > glib-0.12.5.4 failed during the building phase. The exception was: > ExitFailure 1 > > > On Mon, Jul 28, 2014 at 11:40 AM, Mark Lentczner > wrote: > >> The long anticipated Haskell Platform 2014.2 release, including GHC >> 7.8.3, and numerous updated packages, is almost here! 
>> >> We have created "Release Candidate 2" installers for OS X and Windows, >> and believe, barring show stopper issues, creepers exploding, or other >> bizarre phenomenon, these will likely be blessed as the final in a few days >> time. >> >> If you would like to be an early adopter, please try 'em out... >> >> - source tarball: haskell-platform-2014.2.0.0-RC2.tar.gz >> >> - source repo: haskell/haskell-platform at 2014.2.0.0-RC2 >> >> - windows 32bit: hskellPlatform-2014.2.0.0-i386-RC2-setup.exe >> >> - windows 64bit:hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe >> >> - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC2.signed.pkg >> >> - travis-ci build: haskell/haskell-platform >> >> >> >> *Notes for RC2, since RC1:* >> >> *Windows* >> >> *Extra thanks to Randy Polen for burning the midnight-oil to get ths >> out* >> >> >> - removed unneeded python (et al) files from the GHC bindist for >> 64-bit Windows (referenced in GHC ticket #9014 >> ) >> - added HTML "view source" pages for the GHC packages that was >> missing from the GHC bindist for both 32- and 64-bit Windows. >> >> *Mac OS X* >> >> *If you installed RC1, you can remove first with the command *sudo >> uninstall-hs only 7.8.3 --remove >> >> *Run it without *--remove* to see what it will do before running, it if >> you like. In theory you can just install this one right on top of RC1.... >> but hasn't been tested.* >> >> >> - file layout on the Mac... improved slightly (again). In particular, >> executables are now installed directly in $prefix/bin dirs, rather than >> within the package dir >> - fix the bug with haddock master index not being updated correctly >> - works on 10.6! (and 10.7, 10.8, and 10.9) with gcc or clang based >> Xcodes >> - works on 10.10!!! (Yosemite, developer preview 4 release) >> >> *Source tarball* >> >> - missing sources in hptool now present >> - platform.sh improved somewhat (in particular, handles host cabal >> being pre-sandbox) >> >> *Timetable* >> >> - These can "soak" amongst the intreped on these lists for a few >> hours. >> - On Monday evening (PST) I'll announce to haskell-cafe and reddit >> - End of next week (from my vacation, I'll point out), we'll declare >> success and ship. >> >> ? Mark >> >> *SHA-256 sums:* >> 62f39246ad95dd2aed6ece5138f6297f945d2b450f215d074820294310e0c48a Haskell >> Platform 2014.2.0.0 64bit RC2.signed.pkg >> 7c7d3585e89e1407461efea29dcaa9628c3be3c47d93a13b5a4978046375e4fd >> haskell-platform-2014.2.0.0-RC2.tar.gz >> 6eedd76aafb266d9a09baff80cd2973498ab59195c771f7cd64425d40be29c49 >> hskellPlatform-2014.2.0.0-i386-RC2-setup.exe >> b22115ed84d1f7e747d7f0b47e32e1489e4a24613d69c91df4ae32052f88b130 >> hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Tue Jul 29 12:11:52 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 29 Jul 2014 09:11:52 -0300 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 In-Reply-To: References: Message-ID: rc2 fixes the problem with ghc-pkg check I saw on rc1. Independently it is clear that the hmatrix package has a problem: ghc-pkg check bash-3.2$ cabal install -j3 hmatrix Resolving dependencies... Configuring storable-complex-0.2.1... Building storable-complex-0.2.1... Installed storable-complex-0.2.1 Configuring hmatrix-0.16.0.4... 
Building hmatrix-0.16.0.4... Installed hmatrix-0.16.0.4 Updating documentation index /Users/gcolpitts/Library/Haskell/share/doc/index.html bash-3.2$ ghc-pkg check Warning: library-dirs: /opt/local/lib/ doesn't exist or isn't a directory Warning: include-dirs: /opt/local/include/ doesn't exist or isn't a directory On Mon, Jul 28, 2014 at 11:40 AM, Mark Lentczner wrote: > The long anticipated Haskell Platform 2014.2 release, including GHC 7.8.3, > and numerous updated packages, is almost here! > > We have created "Release Candidate 2" installers for OS X and Windows, and > believe, barring show stopper issues, creepers exploding, or other bizarre > phenomenon, these will likely be blessed as the final in a few days time. > > If you would like to be an early adopter, please try 'em out... > > - source tarball: haskell-platform-2014.2.0.0-RC2.tar.gz > > - source repo: haskell/haskell-platform at 2014.2.0.0-RC2 > > - windows 32bit: hskellPlatform-2014.2.0.0-i386-RC2-setup.exe > > - windows 64bit:hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe > > - os x 64bit: Haskell Platform 2014.2.0.0 64bit RC2.signed.pkg > > - travis-ci build: haskell/haskell-platform > > > > *Notes for RC2, since RC1:* > > *Windows* > > *Extra thanks to Randy Polen for burning the midnight-oil to get ths out* > > > - removed unneeded python (et al) files from the GHC bindist for > 64-bit Windows (referenced in GHC ticket #9014 > ) > - added HTML "view source" pages for the GHC packages that was missing > from the GHC bindist for both 32- and 64-bit Windows. > > *Mac OS X* > > *If you installed RC1, you can remove first with the command *sudo > uninstall-hs only 7.8.3 --remove > > *Run it without *--remove* to see what it will do before running, it if > you like. In theory you can just install this one right on top of RC1.... > but hasn't been tested.* > > > - file layout on the Mac... improved slightly (again). In particular, > executables are now installed directly in $prefix/bin dirs, rather than > within the package dir > - fix the bug with haddock master index not being updated correctly > - works on 10.6! (and 10.7, 10.8, and 10.9) with gcc or clang based > Xcodes > - works on 10.10!!! (Yosemite, developer preview 4 release) > > *Source tarball* > > - missing sources in hptool now present > - platform.sh improved somewhat (in particular, handles host cabal > being pre-sandbox) > > *Timetable* > > - These can "soak" amongst the intreped on these lists for a few hours. > - On Monday evening (PST) I'll announce to haskell-cafe and reddit > - End of next week (from my vacation, I'll point out), we'll declare > success and ship. > > ? Mark > > *SHA-256 sums:* > 62f39246ad95dd2aed6ece5138f6297f945d2b450f215d074820294310e0c48a Haskell > Platform 2014.2.0.0 64bit RC2.signed.pkg > 7c7d3585e89e1407461efea29dcaa9628c3be3c47d93a13b5a4978046375e4fd > haskell-platform-2014.2.0.0-RC2.tar.gz > 6eedd76aafb266d9a09baff80cd2973498ab59195c771f7cd64425d40be29c49 > hskellPlatform-2014.2.0.0-i386-RC2-setup.exe > b22115ed84d1f7e747d7f0b47e32e1489e4a24613d69c91df4ae32052f88b130 > hskellPlatform-2014.2.0.0-x86_64-RC2-setup.exe > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eir at cis.upenn.edu Tue Jul 29 12:13:22 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 29 Jul 2014 08:13:22 -0400 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2207B596@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF2207B596@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think one nice thing about this proposal is that it doesn't seem (to me) to require CPP around the pragma: unrecognized pragmas are warned about but are otherwise harmless. Are folks very keen to have *warning-free* compilation on several GHC versions? Personally, I would aim for warning-free compilation on a most recent version, and otherwise successful compilation on older versions. The only place CPP would be needed is around the LANGUAGE pragma, in order to avoid the deprecation warning in 7.10. One other issue this brings up: how does this all interact with -XSafe? Right now, Safety can be inferred by looking at the set of LANGUAGE pragmas and the import list. (Right?) With the change as implemented, Safe inference would require looking at all instance declarations. Is this OK? Richard On Jul 29, 2014, at 7:02 AM, Simon Peyton Jones wrote: > The current implementation requires the pragma exactly where showed it. > > I'm not keen on allowing it to be separated. > > I suppose with some more parser jiggery pokery it could be allowed immediately before (or, better, after). > > But cpp would let you say > > instance > #if blah > {-# OVERLAPPABLE #-} > #endif > Show a => Show [a] where ... > > Simon > > | -----Original Message----- > | From: Johan Tibell [mailto:johan.tibell at gmail.com] > | Sent: 29 July 2014 11:02 > | To: Herbert Valerio Riedel > | Cc: Niklas Hamb?chen; Haskell Libraries (libraries at haskell.org); GHC > | users; Simon Peyton Jones; ghc-devs > | Subject: Re: Overlapping and incoherent instances > | > | On Tue, Jul 29, 2014 at 11:50 AM, Herbert Valerio Riedel > | wrote: > | > On 2014-07-29 at 11:29:45 +0200, Niklas Hamb?chen wrote: > | >>> instance {-# OVERLAPPABLE #-} Show a => Show [a] where ? > | >> > | >> Is the syntax somewhat flexible in where the pragma can be placed? > | >> For example, some might prefer > | >> > | >> {-# OVERLAPPING #-} > | >> instance Show [Char] where ? > | > > | > This variant may also be more convenient in cases where you need to > | > CPP-guard that pragma, as it's on a separate line. > | > | Agreed, and if we remove the old pragma (even with a deprecation > | cycle) you'll see quite a few of those as many library authors try to > | have their libraries compile with the last 3 major GHC versions. > | > | P.S. For e.g. INLINABLE we require that you mention the function name > | next to the pragma (which means that you can e.g. put the pragma after > | the declaration). What's the rationale to not require > | > | {-# OVERLAPPING Show [Char] #-} > | > | here? Perhaps it's too annoying to have to repeat the types? 
> _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > From hvriedel at gmail.com Tue Jul 29 13:16:24 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 29 Jul 2014 15:16:24 +0200 Subject: Overlapping and incoherent instances In-Reply-To: (Richard Eisenberg's message of "Tue, 29 Jul 2014 08:13:22 -0400") References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF2207B596@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87a97sb907.fsf@gmail.com> On 2014-07-29 at 14:13:22 +0200, Richard Eisenberg wrote: > Are folks very keen to have *warning-free* compilation on several GHC > versions? Personally, I would aim for warning-free compilation on a > most recent version, and otherwise successful compilation on older > versions. IMO: In typical build-bot configurations where you test against multiple GHC versions you often the warnings to be turned into errors via -Werror, as otherwise you wouldn't notice them. After all, warnings are hints about something that may need to be looked at, and if they turn out to be harmless, the specific warning can be turned off on a case-by-case basis (which alas, in GHC's case is a bit difficult, as for instance, you can't turn off a specific warning for a single declaration only, but only for a whole module) Cheers, hvr From george.colpitts at gmail.com Tue Jul 29 13:52:51 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 29 Jul 2014 10:52:51 -0300 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 In-Reply-To: References: Message-ID: Weird that I didn't have this problem with rc1 On Tuesday, July 29, 2014, Brandon Allbery wrote: > On Tue, Jul 29, 2014 at 7:45 AM, George Colpitts < > george.colpitts at gmail.com > > wrote: > >> Installation worked fine. However I encountered a problem that looks like >> a regression, although it may be a problem with new versions of the package >> I am trying to install: > > > It's not an H-P problem; Apple's started using their BLOCKS C extension in > system headers, and gtk2hsc2hs doesn't understand it. (And possibly is not > using cpp properly when processing headers.) You'll need to take this up > with the gtk2hs folks. > > #ifdef __BLOCKS__ > int scandir_b(const char *, struct dirent ***, > int (^)(const struct dirent *), int (^)(const struct dirent **, > const struct > dirent **)) __DARWIN_INODE64(scandir_b) > __OSX_AVAILABLE_STARTING(__MAC_10_6, __ > IPHONE_3_2); > #endif /* __BLOCKS__ */ > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 29 16:29:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 29 Jul 2014 16:29:45 +0000 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <53D79483.3000302@ifi.lmu.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2207FD01@DB3PRD3001MB020.064d.mgd.msft.net> CAN_OVERLAP and CAN_BE_OVERLAPPED? 
(instead of OVERLAPPING and OVERLAPPABLE) Or CAN-OVERLAP, CAN-BE-OVERLAPPED That?s ok with me if that?s what you all want! Simon From: Glasgow-haskell-users [mailto:glasgow-haskell-users-bounces at haskell.org] On Behalf Of Krzysztof Skrzetnicki Sent: 29 July 2014 16:56 To: Brandon Allbery Cc: Simon Peyton Jones; Andreas Abel; GHC users; Haskell Libraries (libraries at haskell.org); ghc-devs Subject: Re: Overlapping and incoherent instances How about CAN_OVERLAP? -- Krzysztof 29-07-2014 15:40, "Brandon Allbery" > napisa?(a): On Tue, Jul 29, 2014 at 8:33 AM, Andreas Abel > wrote: +1. I like Niklas' syntax better. Also OVERLAPPABLE is a horrible word, OVERLAPPING sound less formidable (even though it might be slightly less accurrate). We already get "overlap ok" in instance-related type errors, so OVERLAP_OK wouldn't be particularly alien even if it doesn't quite fit in with existing pragmas. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net _______________________________________________ Libraries mailing list Libraries at haskell.org http://www.haskell.org/mailman/listinfo/libraries -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 29 16:37:13 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 29 Jul 2014 16:37:13 +0000 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF2207B596@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2207FD98@DB3PRD3001MB020.064d.mgd.msft.net> | One other issue this brings up: how does this all interact with -XSafe? | Right now, Safety can be inferred by looking at the set of LANGUAGE | pragmas and the import list. (Right?) With the change as implemented, | Safe inference would require looking at all instance declarations. Is | this OK? I'm honestly not sure, but I do know that, in the implementation, each instance declaration keeps track of (a) whether it is OVERLAPPABLE/OVERLAPPING/INCOHERENT, and (b) the setting of -XSafe in the module where the instance declaration is given. This doesn't change. So I can't answer your question directly, but I think that the behaviour is unchanged from that at present. Simon | | Richard | | On Jul 29, 2014, at 7:02 AM, Simon Peyton Jones | wrote: | | > The current implementation requires the pragma exactly where showed | it. | > | > I'm not keen on allowing it to be separated. | > | > I suppose with some more parser jiggery pokery it could be allowed | immediately before (or, better, after). | > | > But cpp would let you say | > | > instance | > #if blah | > {-# OVERLAPPABLE #-} | > #endif | > Show a => Show [a] where ... | > | > Simon | > | > | -----Original Message----- | > | From: Johan Tibell [mailto:johan.tibell at gmail.com] | > | Sent: 29 July 2014 11:02 | > | To: Herbert Valerio Riedel | > | Cc: Niklas Hamb?chen; Haskell Libraries (libraries at haskell.org); | GHC | > | users; Simon Peyton Jones; ghc-devs | > | Subject: Re: Overlapping and incoherent instances | > | | > | On Tue, Jul 29, 2014 at 11:50 AM, Herbert Valerio Riedel | > | | > | wrote: | > | > On 2014-07-29 at 11:29:45 +0200, Niklas Hamb?chen wrote: | > | >>> instance {-# OVERLAPPABLE #-} Show a => Show [a] where . 
| > | >> | > | >> Is the syntax somewhat flexible in where the pragma can be | placed? | > | >> For example, some might prefer | > | >> | > | >> {-# OVERLAPPING #-} | > | >> instance Show [Char] where . | > | > | > | > This variant may also be more convenient in cases where you need | > | > to CPP-guard that pragma, as it's on a separate line. | > | | > | Agreed, and if we remove the old pragma (even with a deprecation | > | cycle) you'll see quite a few of those as many library authors try | > | to have their libraries compile with the last 3 major GHC versions. | > | | > | P.S. For e.g. INLINABLE we require that you mention the function | > | name next to the pragma (which means that you can e.g. put the | > | pragma after the declaration). What's the rationale to not require | > | | > | {-# OVERLAPPING Show [Char] #-} | > | | > | here? Perhaps it's too annoying to have to repeat the types? | > _______________________________________________ | > Glasgow-haskell-users mailing list | > Glasgow-haskell-users at haskell.org | > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users | > From iavor.diatchki at gmail.com Tue Jul 29 17:28:56 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 29 Jul 2014 10:28:56 -0700 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <53D79483.3000302@ifi.lmu.de> <618BE556AADD624C9C918AA5D5911BEF2207FD01@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hello, I have no strong feelings about what words we use, but I wanted to point out that while we are thinking of names, we may want to consider 3 (and not just 2). Currently we have: * OVERLAPPING: This instances may overlap existing instances * OVERLAPPABLE: This instance may be overlapped by existing instances * OVERLAPS: This instance is both OVERLAPPING and OVERLAPPABLE Of course, the 3rd one (OVERLAPS) could be replaced by a comma separated list of the first two, but I could not see how to make this work easily with GHC's pragmas. It would not be hard to simply allow 2 pragmas after the `instance` keyword, but both of those seem rather long. Either way, I'll keep an eye on the discussion, and would be happy to change the names if a consesus is reached. -Iavor On Tue, Jul 29, 2014 at 9:57 AM, David Thomas wrote: > Honestly, I think "OVERLAPS" and "OVERLAPPED" are perfectly clear. > > On Tue, Jul 29, 2014 at 9:52 AM, David Feuer > wrote: > > CAN-OVERLAP and CAN-BE-OVERLAPPED are nice and clear. A little long, > perhaps. > > > > On Tue, Jul 29, 2014 at 12:29 PM, Simon Peyton Jones > > wrote: > >> CAN_OVERLAP and CAN_BE_OVERLAPPED? > >> > >> > >> > >> (instead of OVERLAPPING and OVERLAPPABLE) > >> > >> > >> > >> Or CAN-OVERLAP, CAN-BE-OVERLAPPED > >> > >> > >> > >> That?s ok with me if that?s what you all want! > >> > >> > >> > >> Simon > >> > >> > >> > >> From: Glasgow-haskell-users > >> [mailto:glasgow-haskell-users-bounces at haskell.org] On Behalf Of > Krzysztof > >> Skrzetnicki > >> Sent: 29 July 2014 16:56 > >> To: Brandon Allbery > >> Cc: Simon Peyton Jones; Andreas Abel; GHC users; Haskell Libraries > >> (libraries at haskell.org); ghc-devs > >> > >> > >> Subject: Re: Overlapping and incoherent instances > >> > >> > >> > >> How about CAN_OVERLAP? > >> > >> -- > >> Krzysztof > >> > >> 29-07-2014 15:40, "Brandon Allbery" napisa?(a): > >> > >> On Tue, Jul 29, 2014 at 8:33 AM, Andreas Abel > >> wrote: > >> > >> +1. I like Niklas' syntax better. 
Also OVERLAPPABLE is a horrible word, > >> OVERLAPPING sound less formidable (even though it might be slightly less > >> accurrate). > >> > >> > >> > >> We already get "overlap ok" in instance-related type errors, so > OVERLAP_OK > >> wouldn't be particularly alien even if it doesn't quite fit in with > existing > >> pragmas. > >> > >> > >> > >> -- > >> > >> brandon s allbery kf8nh sine nomine > associates > >> > >> allbery.b at gmail.com > ballbery at sinenomine.net > >> > >> unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > >> > >> > >> _______________________________________________ > >> Libraries mailing list > >> Libraries at haskell.org > >> http://www.haskell.org/mailman/listinfo/libraries > >> > >> > >> _______________________________________________ > >> Libraries mailing list > >> Libraries at haskell.org > >> http://www.haskell.org/mailman/listinfo/libraries > >> > > _______________________________________________ > > Libraries mailing list > > Libraries at haskell.org > > http://www.haskell.org/mailman/listinfo/libraries > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://www.haskell.org/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jul 29 22:37:54 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 29 Jul 2014 22:37:54 +0000 Subject: Windows build broken -- again! Message-ID: <618BE556AADD624C9C918AA5D5911BEF220802C5@DB3PRD3001MB020.064d.mgd.msft.net> Aaargh! My windows build is, once again, broken. The error is below. Could whoever broke it please fix it? Something to do with "blocking_queue_hd" perhaps? Please. Thanks Simon rts\win32\AsyncIO.c: In function 'awaitRequests': rts\win32\AsyncIO.c:289:23: error: 'blocking_queue_hd' undeclared (first use in this function) rts\win32\AsyncIO.c:289:23: note: each undeclared identifier is reported only once for each function it appears in -------------- next part -------------- An HTML attachment was scrubbed... URL: From singpolyma at singpolyma.net Tue Jul 29 23:41:53 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Tue, 29 Jul 2014 18:41:53 -0500 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20140729234153.GE31802@singpolyma-liberty> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 >instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... > >instance {-# OVERLAPPING #-} Show [Char] where ... This, to me, is an admission that developers are not going to want to turn overlapping on globally in general, and so the current language extensions would not make sense to get adopted into the core language at any point. I agree with this idea, and so would second the proposal mentioned at that a language extension that adds actual keywords to tag instances that should be allowed to overlap be added, instead of resorting to pragmas. This seems like an approach that could be useful in general and one could imagine moving "past" an extension to the core language at some point, potentially. 
- -- Stephen Paul Weber, @singpolyma See for how I prefer to be contacted edition right joseph -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBCAAGBQJT2DFBAAoJENEcKRHOUZzemmIQAJEbSjPyx745UI6mkuhBVhKl LQWJlpu0/kzBHaFJl/mWcghKxoBFWwU+pCTh/Pr2oj0rp/KGskBLlplIqB4btZTA ov2MpPrsHm1M37MuGyMtiBhs57UJE+saKKuvcH3qzLZyBCHorE3lFcKAFbNupBrL e/vgNblQ70KGmDRAKqbAQHm9anoGeZUJPgQ9ylVEH4nBYLDo0YSNo/zTeB7fK2yv xBE+Ul3YGfhzf82cLJhYNQOpi5wJ3JEDBevKXcGRzr4Mhzn2Lke+26tu0tx6sOSN snPX2REoeQD1AfXvuNKSxV7BL+CQeyAOOmm2Isj3vW/oO3gkqpfRjCFc+ZSPEjlG XQ3S7L7cgNB34rd6sOFzTv83PXvsH0a0d5RqKUM2kN/qGEjSbAQ1FVyJEUcaEmzr jBnVnWq+abCOSOBg4joGfTxjq0zufdjxkzScEJVDVZ4pIXoxej7HJBi8UfIvo3Jo EDMGGsLSedt4tR2LzYf/5up7GPfuBsFQQzuIcdgMG8/zYca7zWPJgyunlXAPcbzr RvM/gf63SCBuVaQjrtv2Zhzp3PucWOL94NEmLYONU3uuEmo6bq1VO42fOUcl7X/1 UyhFbtoV7s/7PVClxdD4Ag9PumtSfl/CSvN0BA8AzDwTuHWNOPdThAHFqfAtRsD0 GyNzVRNOGuze9SqkLc4T =YeKu -----END PGP SIGNATURE----- From metaniklas at gmail.com Wed Jul 30 01:53:23 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Wed, 30 Jul 2014 03:53:23 +0200 Subject: Windows build broken -- again! In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF220802C5@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF220802C5@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi! Seems like it is the detabbing in commit 3021fb that has gone awry, blocked_queue_hd was renamed blocking_queue_hd in one place. I have attached a patch. I really need to set up a buildbot. Maybe if the weather gets worse. Niklas 2014-07-30 0:37 GMT+02:00 Simon Peyton Jones : > Aaargh! > > My windows build is, once again, broken. The error is below. Could > whoever broke it please fix it? Something to do with ?blocking_queue_hd? > perhaps? > > Please. > > Thanks > > Simon > > rts\win32\AsyncIO.c: In function 'awaitRequests': > > > > rts\win32\AsyncIO.c:289:23: > > error: 'blocking_queue_hd' undeclared (first use in this function) > > > > rts\win32\AsyncIO.c:289:23: > > note: each undeclared identifier is reported only once for each > function it appears in > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Fix-variable-name-typo-from-commit-3021fb.patch Type: application/octet-stream Size: 767 bytes Desc: not available URL: From singpolyma at singpolyma.net Wed Jul 30 01:56:08 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Wed, 30 Jul 2014 01:56:08 +0000 Subject: Overlapping and incoherent instances In-Reply-To: <53D84EA4.1040807@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <20140729234153.GE31802@singpolyma-liberty> <53D84EA4.1040807@gmail.com> Message-ID: <20140730015608.GA17277@singpolyma.net> Somebody signing messages as Felipe Lessa wrote: >OTOH, the pragma is mostly harmless for older GHC versions, while the >keyword approach needs a preprocessor. Only if *both* the old LANGUAGE pragma and the new pragma were employed, which will generate a deprecation warning for awhile and then eventually (likely) be rejected by newer GHCs, thus requiring a prepropcessor in either case. 
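Concretely, the compatibility pattern under discussion would look roughly like the sketch below: an invented Describe class, a CPP guard on the module-wide extension only, and the (so far unverified) assumption that pre-7.10 parsers tolerate, or merely warn about, the in-line pragmas rather than rejecting them.

{-# LANGUAGE CPP #-}
{-# LANGUAGE FlexibleInstances #-}
#if __GLASGOW_HASKELL__ < 710
{-# LANGUAGE OverlappingInstances #-}
#endif
module CompatDemo where

class Describe a where
  describe :: a -> String

instance Describe Char where
  describe c = "the character " ++ [c]

-- The per-instance pragmas are written unconditionally; only the module-wide
-- extension is CPP-guarded, so GHC >= 7.10 never sees the deprecated pragma.
instance {-# OVERLAPPABLE #-} Describe a => Describe [a] where
  describe _ = "some list"

instance {-# OVERLAPPING #-} Describe [Char] where
  describe s = "the string " ++ s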
-- Stephen Paul Weber, @singpolyma See for how I prefer to be contacted edition right joseph From austin at well-typed.com Wed Jul 30 05:13:53 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 30 Jul 2014 00:13:53 -0500 Subject: Windows build broken -- again! In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF220802C5@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Thanks Niklas - this was an utter failure on my part. I'm not even sure how this slipped in, but it was definitely my fault. Fixed in 6640635e6e2654f0acd8f10e0d02a8bd1c8296ff On Tue, Jul 29, 2014 at 8:53 PM, Niklas Larsson wrote: > Hi! > > Seems like it is the detabbing in commit 3021fb that has gone awry, > blocked_queue_hd was renamed blocking_queue_hd in one place. I have attached > a patch. > > I really need to set up a buildbot. Maybe if the weather gets worse. > > Niklas > > > > 2014-07-30 0:37 GMT+02:00 Simon Peyton Jones : >> >> Aaargh! >> >> My windows build is, once again, broken. The error is below. Could >> whoever broke it please fix it? Something to do with ?blocking_queue_hd? >> perhaps? >> >> Please. >> >> Thanks >> >> Simon >> >> rts\win32\AsyncIO.c: In function 'awaitRequests': >> >> >> >> rts\win32\AsyncIO.c:289:23: >> >> error: 'blocking_queue_hd' undeclared (first use in this function) >> >> >> >> rts\win32\AsyncIO.c:289:23: >> >> note: each undeclared identifier is reported only once for each >> function it appears in >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From johan.tibell at gmail.com Wed Jul 30 12:00:26 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 30 Jul 2014 14:00:26 +0200 Subject: [commit: ghc] wip/travis: Add new validate flag: --fastest (dba4217) In-Reply-To: <20140730104619.1A9D1240EA@ghc.haskell.org> References: <20140730104619.1A9D1240EA@ghc.haskell.org> Message-ID: On Wed, Jul 30, 2014 at 12:46 PM, wrote: > +ifneq "$(ValidateSpeed)" "FASTEST" > +HADDOCK_DOCS = NO > +else > HADDOCK_DOCS = YES > +endif This conditional looks like it has been reversed. From johan.tibell at gmail.com Wed Jul 30 12:55:47 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 30 Jul 2014 14:55:47 +0200 Subject: Overlapping and incoherent instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <53D79483.3000302@ifi.lmu.de> <618BE556AADD624C9C918AA5D5911BEF2207FD01@DB3PRD3001MB020.064d.mgd.msft.net> <53D8DFE5.1010609@ifi.lmu.de> Message-ID: On Wed, Jul 30, 2014 at 2:50 PM, Ivan Lazar Miljenovic wrote: > On 30 July 2014 22:07, Andreas Abel wrote: >> I am a bit surprised by the distinction you outline below. This is maybe >> because I am native German, not English. The German equivalent of >> "overlap", "?berschneiden/?berlappen", is used exclusively in a symmetrical >> fashion. It's like in English, if I say "our interests overlap", then it is >> pointless to ask whether my interest are overlapping yours or are overlapped >> by yours. I want to alert you to the fact that non-native English speaker >> might have little understanding for a distinction between "OVERLAPPING" and >> "OVERLAPPABLE". 
>> >> Let's try to guess what it meant: Given >> >> A) instance Bla Char >> B) instance Bla a => Bla [a] >> C) instance Bla String >> >> you will in context A,B write C as OVERLAPPING, >> and in context A,C write B as OVERLAPPABLE? > > IIUC, B will be OVERLAPPABLE ("you can overlap this") and C will be > OVERLAPPING ("I'm overlapping an existing one") whereas C will be > plain. Apologies if this question doesn't make sense. Can we really talk about overlapping, given that instances can be written in different modules, moved between modules, or removed? From andreas.voellmy at gmail.com Wed Jul 30 22:49:24 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Wed, 30 Jul 2014 18:49:24 -0400 Subject: Interrupt interruptible foreign calls on HS exit Message-ID: Hi GHCers, I've been looking into issue #9284, which boils down to getting certain foreign calls issued by HS threads to finish (i.e. return) in the exit sequence of forkProcess. There are several options for solving the particular problem in #9284; one option is to issue the particular foreign calls causing that issue as "interruptible" and then have the exit sequence interrupt interruptible foreign calls. The exit sequence, starting from hs_exit(), goes through hs_exit_(), exitScheduler(), scheduleDoGC(), deleteAllThreads(), and then deleteThread(), where deleteThread is this: static void deleteThread (Capability *cap STG_UNUSED, StgTSO *tso) { // NOTE: must only be called on a TSO that we have exclusive // access to, because we will call throwToSingleThreaded() below. // The TSO must be on the run queue of the Capability we own, or // we must own all Capabilities. if (tso->why_blocked != BlockedOnCCall && tso->why_blocked != BlockedOnCCall_Interruptible) { throwToSingleThreaded(tso->cap,tso,NULL); } } So it looks like interruptible foreign calls are not interrupted in the exit sequence. Is there a good reason why we have this behavior? Could we change it to interrupt TSO's with why_blocked == BlockedOnCCall_Interruptible in the exit sequence? Thanks, Andi P.S. It looks like this was introduced in commit 83d563cb9ede0ba792836e529b1e2929db926355. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Jul 30 22:57:28 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 30 Jul 2014 23:57:28 +0100 Subject: Interrupt interruptible foreign calls on HS exit In-Reply-To: References: Message-ID: <1406761014-sup-6348@sabre> Recalling when I implemented this functionality, I think not interrupting threads in the exit sequence was just an oversight, and I think we could implement it. Seems reasonable to me. Edward Excerpts from Andreas Voellmy's message of 2014-07-30 23:49:24 +0100: > Hi GHCers, > > I've been looking into issue #9284, which boils down to getting certain > foreign calls issued by HS threads to finish (i.e. return) in the exit > sequence of forkProcess. > > There are several options for solving the particular problem in #9284; one > option is to issue the particular foreign calls causing that issue as > "interruptible" and then have the exit sequence interrupt interruptible > foreign calls. 
> > The exit sequence, starting from hs_exit(), goes through hs_exit_(), > exitScheduler(), scheduleDoGC(), deleteAllThreads(), and then > deleteThread(), where deleteThread is this: > > static void > deleteThread (Capability *cap STG_UNUSED, StgTSO *tso) > { > // NOTE: must only be called on a TSO that we have exclusive > // access to, because we will call throwToSingleThreaded() below. > // The TSO must be on the run queue of the Capability we own, or > // we must own all Capabilities. > if (tso->why_blocked != BlockedOnCCall && > tso->why_blocked != BlockedOnCCall_Interruptible) { > throwToSingleThreaded(tso->cap,tso,NULL); > } > } > > So it looks like interruptible foreign calls are not interrupted in the > exit sequence. > > Is there a good reason why we have this behavior? Could we change it to > interrupt TSO's with why_blocked == BlockedOnCCall_Interruptible in the > exit sequence? > > Thanks, > Andi > > P.S. It looks like this was introduced in commit > 83d563cb9ede0ba792836e529b1e2929db926355. From simonpj at microsoft.com Thu Jul 31 07:20:31 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 31 Jul 2014 07:20:31 +0000 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> Friends, in sending my message below, I should also have sent a link to https://ghc.haskell.org/trac/ghc/ticket/9242#comment:25 Comment 25 describes the semantics of OVERLAPPING/OVERLAPPABLE etc, which I signally failed to do in my message below, leading to confusion in the follow up messages. My apologies for that. Some key points: * There is a useful distinction between overlapping and overlappable, but if you don't want to be bothered with it you can just say OVERLAPS (which means both). * Overlap between two candidate instances is allowed if either has the relevant property. This is a bit sloppy, but reduces the annotation burden. Actually, with this per-instance stuff I think it'd be perfectly defensible to require both to be annotated, but that's a different discussion. I hope that helps clarify. I'm really pretty certain that the basic proposal here is good: it implements the current semantics in a more fine-grained manner. My main motivation was to signal the proposed deprecation of the global per-module flag -XoverlappingInstances. Happily people generally seem fine with this. It is, after all, precisely what deprecations are for ("the old thing still works for now, but it won't do so for ever, and you should change as soon as is convenient"). Thanks Simon From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of Simon Peyton Jones Sent: 29 July 2014 10:11 To: ghc-devs; GHC users; Haskell Libraries (libraries at haskell.org) Subject: Overlapping and incoherent instances Friends One of GHC's more widely-used features is overlapping (and sometimes incoherent) instances. The user-manual documentation is here. The use of overlapping/incoherent instances is controlled by LANGUAGE pragmas: OverlappingInstances and IncoherentInstances respectively. However the overlap/incoherent-ness is a property of the *instance declaration* itself, and has been for a long time. Using LANGUAGE OverlappingInstances simply sets the "I am an overlapping instance" flag for every instance declaration in that module. This is a Big Hammer. 
It give no clue about *which* particular instances the programmer is expecting to be overlapped, nor which are doing the overlapping. It brutally applies to every instance in the module. Moreover, when looking at an instance declaration, there is no nearby clue that it might be overlapped. The clue might be in the command line that compiles that module! Iavor has recently implemented per-instance-declaration pragmas, so you can say instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... instance {-# OVERLAPPING #-} Show [Char] where ... This is much more precise (it affects only those specific instances) and it is much clearer (you see it when you see the instance declaration). This new feature will be in GHC 7.10 and I'm sure you will be happy about that. But I propose also to deprecate the LANGUAGE pragmas OverlappingInstances and IncoherentInstances, as way to encourage everyone to use the new feature instead of the old big hammer. The old LANGUAGE pragmas will continue to work, of course, for at least another complete release cycle. We could make that two cycles if it was helpful. However, if you want deprecation-free libraries, it will entail a wave of library updates. This email is just to warn you, and to let you yell if you think this is a bad idea. It would actually not be difficult to retain the old LANGUAGE pragmas indefinitely - it just seems wrong not to actively push authors in the right direction. These deprecations of course popped up in the test suite, so I've been replacing them with per-instance pragmas there too. Interestingly in some cases, when looking for which instances needed the pragmas, I found...none. So OverlappingInstances was entirely unnecessary. Maybe library authors will find that too! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jul 31 07:53:40 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 31 Jul 2014 08:53:40 +0100 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <1406210353-sup-3241@sabre> References: <1406210353-sup-3241@sabre> Message-ID: <53D9F604.1000707@gmail.com> Sorry for not replying to this earlier. I think after our meeting the other day we agreed to keep the existing algorithm but document it better. Here's a stab at describing the spec: 1. compute the set of valid packages, where a package is valid if - all its dependencies are valid - it is not named in an -ignore-package flag, - it is not shadowed by a package with the same PackageId in another Package DB, unless it is in the transitive closure of a package named in a -package-id flag 2. for each flag: -package P: expose P, and hide all other versions of P -package-id P: expose P, and hide all other versions of P -hide-package P: hide P 3. if there are multiple exposed packages with the same name, hide all but the most recent version. We need to rethink the shadowing behaviour. It is designed to handle the case where we have the same PackageId (name + version) in two different DBs (e.g. global and local). Shadowing takes the topmost one of these (e.g. local, or rightmost -package-db flag). We can relax this requirement so long as the InstalledPackageIds are different, but what if the InstalledPackageIds are the same? Right now that's OK, because identical InstalledPackageIds implies identical ABIs, but if we change that so that InstalledPackageId is derived from the source and not the ABI, we would not be able to assume that two identical InstalledPackageIds are compatible. 
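As a rough, executable reading of steps 2 and 3 above - step 1 (validity and shadowing) is left out, the record types are invented, and this is not GHC's actual package code. Where a -package flag matches several versions, the sketch simply prefers the newest, mirroring step 3.

module PackageFlagSketch where

import Data.List (maximumBy)
import Data.Ord  (comparing)

data Pkg = Pkg
  { pkgName    :: String   -- e.g. "bytestring"
  , pkgVersion :: [Int]    -- e.g. [0,10,4,0]
  , pkgExposed :: Bool
  } deriving Show

data Flag
  = PackageFlag String      -- -package P (by name; -package-id is analogous)
  | HidePackageFlag String  -- -hide-package P

-- Step 2: -package P exposes the chosen P and hides all other versions of P;
-- -hide-package P hides every version of P.
applyFlag :: [Pkg] -> Flag -> [Pkg]
applyFlag db (HidePackageFlag n) =
  [ if pkgName p == n then p { pkgExposed = False } else p | p <- db ]
applyFlag db (PackageFlag n) =
  case [ p | p <- db, pkgName p == n ] of
    []      -> db   -- a real implementation would report an unknown package here
    matches ->
      let chosen = pkgVersion (maximumBy (comparing pkgVersion) matches)
      in  [ if pkgName p == n
              then p { pkgExposed = pkgVersion p == chosen }
              else p
          | p <- db ]

-- Step 3: if several versions of one name are still exposed, keep only the
-- most recent one.
pruneOldVersions :: [Pkg] -> [Pkg]
pruneOldVersions db =
  [ p { pkgExposed = pkgExposed p && pkgVersion p == newest (pkgName p) }
  | p <- db ]
  where
    newest n = maximum [ pkgVersion q | q <- db, pkgName q == n, pkgExposed q ]

resolve :: [Pkg] -> [Flag] -> [Pkg]
resolve db flags = pruneOldVersions (foldl applyFlag db flags)

So resolve db flags yields the database with the flags applied left to right and any older duplicate versions hidden.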
Cheers, Simon On 24/07/2014 15:57, Edward Z. Yang wrote: > Right now, GHC has a very complex and hard to explain algorithm for > picking packages from the package database when you give it a pile of > -package/-package-id/-{hide,ignore,trust,distrust}-package flags. > Roughly, it currently does something like this. > > 1. Concatenate all of the package databases into a giant list, with > system packages first and then user packages later, removing duplicate > entries with the same installed package ID (preferring later packages). > Call this PACKAGES. > > 2. Take all the -package-id flags and compute their transitive closure > (call this set P). > > 3. Calculate the set of shadowed packages--the set of installed packages > for which there exists another package with the same package ID > (foo-0.1) which is in P or is later in the package stack. > > 4. Calculate the set of ignored packages---the set of packages which > match all -ignore-package flags > > 5. Filter out shadowed and ignored packages from the list of packages, > calling the result ALL_PACKAGES. > > 6. Calculate the set of broken packages---the set of packages not > contained in the least fixed point of the relation that takes > a set of packages, and adds all packages whose dependencies are > satisfied, from ALL_PACKAGES. > > 7. Process the package flags in order, operating on PACKAGES (not > ALL_PACKAGES). For -package/-hide-package, take the package with the > *latest* version that matches the flag and are not broken (since this > includes shadowed packages, the result is unique) and toggle it to be > exposed/unexposed, and hide all the other packages. For > -trust-package/-distrust-package, toggle the trusted bit for all > instances in the database. > > 8. If we have exposed multiple versions of the same package name, > hide all the older versions > > What a mouthful! I have no idea, given a set of flags, how this works. > So here is an alternate proposal for an alternate way of handling these > flags *when starting from an empty database* (e.g. -hide-all-packages) > > Suppose we maintain a set of selected packages, which starts off empty. > Process each package flag in turn. > > For -package and -package-id, get the set of installed packages which > match the flag. (If multiple package names apply, process each in turn > as if it were a separate flag.) Compute the transitive closure of > dependencies for all of them, and filter out all choices which have > dependencies which are inconsistent with the current set of selected > packages. Consistency without multi-instances is a mapping of a package > name to an installed package. If there is still more than one choice, > tiebreak by version, which database and time of install. (The latter > tiebreak should not be necessary until we allow multiple instances of a > package with the same package ID.) > > For -hide-package, get the set of packages which match and hide them > all; for -ignore-package, hide the transitive closure of dependencies of it. > For -trust,distrust-package, toggle for all matching packages as before. > > Here are some differences in behavior between this and the previous > scheme: > > - It's no longer valid to indirectly depend on two different versions > of the same package. Most of the time, users didn't want this anyway. > Note that the current scheme prevents directly depending on two > different versions by shadowing the old ones. > > - We can easily extend it to handle multi-instances by relaxing the > consistency check. 
> > - It assumes *-hide-all-packages* at the beginning. This scheme > probably works less well without that: now we need some consistent > view of the database to start with. > > What do people think? > > Cheers, > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Thu Jul 31 08:13:43 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 31 Jul 2014 08:13:43 +0000 Subject: Overlapping and incoherent instances In-Reply-To: <53D9F758.4080606@chalmers.se> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> <53D9F758.4080606@chalmers.se> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208409B@DB3PRD3001MB020.064d.mgd.msft.net> Andreas, remember that GHC 7.8 already implements (essentially) the same algorithm. The difference is that 7.8 offers only the brutal -XOverlappingInstances to control it. In your example of the decision you make when writing instance Bla a => Bla [a] vs instance {-# OVERLAPPABLE #-} Bla a => Bla [a] you are, with GHC 7.8, making precisely the same decision when you decide whether or not to add {-# LANGUAGE OverlappingInstances #-} to that module. Perhaps that wasn't clear in what I wrote; apologies. So your proposal seems to be this don't remove -XOverlappingInstances, because that will prevent programmers from "flipping on/off pragmas until their program goes through". It's hard to argue AGAINST providing the opportunity for more careful programmers to express their intentions more precisely, which is what the OVERLAP/OVERLAPPABLE pragmas do. Concerning deprecating OverlappingInstances, my gut feel is that it is positively a good thing to guide programmers towards a more robust programming style. But my reason for starting this thread was to see whether or not others' gut feel is similar. Simon | -----Original Message----- | From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of | Andreas Abel | Sent: 31 July 2014 08:59 | To: Simon Peyton Jones; ghc-devs; GHC users; Haskell Libraries | (libraries at haskell.org) | Subject: Re: Overlapping and incoherent instances | | On 31.07.2014 09:20, Simon Peyton Jones wrote: | > Friends, in sending my message below, I should also have sent a link | > to | > | > https://ghc.haskell.org/trac/ghc/ticket/9242#comment:25 | | Indeed. | | Quoting from the spec: | | * Eliminate any candidate IX for which both of the following hold: | * There is another candidate IY that is strictly more specific; | that is, IY is a substitution instance of IX but not vice versa. | | * Either IX is overlappable or IY is overlapping. | | Mathematically, this makes a lot of sense. But put on the hat of | library writers, and users, and users that don't rtfm. Looking out | from under this hat, the one may always wonder whether one should make | one's generic instances OVERLAPPABLE or not. | | If I create a library with type class Bla and | | instance Bla a => Bla [a] | | I could be a nice library writer and spare my users from declaring | their Bla String instances as OVERLAPPING, so I'd write | | instance {-# OVERLAPPABLE #-} Bla a => Bla [a] | | Or maybe that would be malicious? | | I think the current proposal is too sophisticated. There are no | convincing examples given in the discussion so far that demonstrate | where this sophistication pays off in practice. 
| | Keep in mind that 99% of the Haskell users will never study the | instance resolution algorithm or its specification, but just flip | on/off pragmas until their code goes through. [At least that was my | approach: whenever GHC asks for one more LANGUAGE pragma, just throw it | in.] | | Cheers, | Andreas | | | > Comment 25 describes the semantics of OVERLAPPING/OVERLAPPABLE etc, | > which I signally failed to do in my message below, leading to | > confusion in the follow up messages. My apologies for that. | > | > Some key points: | > | > *There is a useful distinction between /overlapping/ and | > /overlappable/, but if you don't want to be bothered with it you can | > just say OVERLAPS (which means both). | > | > *Overlap between two candidate instances is allowed if /either/ has | > the relevant property. This is a bit sloppy, but reduces the | > annotation burden. Actually, with this per-instance stuff I think | > it'd be perfectly defensible to require both to be annotated, but | > that's a different discussion. | > | > I hope that helps clarify. | > | > I'm really pretty certain that the basic proposal here is good: it | > implements the current semantics in a more fine-grained manner. My | > main motivation was to signal the proposed deprecation of the global | > per-module flag -XoverlappingInstances. Happily people generally | seem | > fine with this. It is, after all, precisely what deprecations are | for | > ("the old thing still works for now, but it won't do so for ever, and | > you should change as soon as is convenient"). | > | > Thanks | > | > Simon | > | > *From:*Libraries [mailto:libraries-bounces at haskell.org] *On Behalf Of | > *Simon Peyton Jones | > *Sent:* 29 July 2014 10:11 | > *To:* ghc-devs; GHC users; Haskell Libraries (libraries at haskell.org) | > *Subject:* Overlapping and incoherent instances | > | > Friends | > | > One of GHC's more widely-used features is overlapping (and sometimes | > incoherent) instances. The user-manual documentation is here | > . | > | > The use of overlapping/incoherent instances is controlled by LANGUAGE | > pragmas: OverlappingInstances and IncoherentInstances respectively. | > | > However the overlap/incoherent-ness is a property of the **instance | > declaration** itself, and has been for a long time. Using LANGUAGE | > OverlappingInstances simply sets the "I am an overlapping instance" | > flag for every instance declaration in that module. | > | > This is a Big Hammer. It give no clue about **which** particular | > instances the programmer is expecting to be overlapped, nor which are | > doing the overlapping. It brutally applies to every instance in | the | > module. Moreover, when looking at an instance declaration, there is | > no nearby clue that it might be overlapped. The clue might be in the | > command line that compiles that module! | > | > Iavor has recently implemented per-instance-declaration pragmas, so | > you can say | > | > instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... | > | > instance {-# OVERLAPPING #-} Show [Char] where ... | > | > This is much more precise (it affects only those specific instances) | > and it is much clearer (you see it when you see the instance | declaration). | > | > This new feature will be in GHC 7.10 and I'm sure you will be happy | > about that. *But I propose also to deprecate the LANGUAGE pragmas | > OverlappingInstances and IncoherentInstances*, as way to encourage | > everyone to use the new feature instead of the old big hammer. 
The | > old LANGUAGE pragmas will continue to work, of course, for at least | > another complete release cycle. We could make that two cycles if it | was helpful. | > | > However, if you want deprecation-free libraries, it will entail a | wave | > of library updates. | > | > This email is just to warn you, and to let you yell if you think this | is | > a bad idea. It would actually not be difficult to retain the old | > LANGUAGE pragmas indefinitely - it just seems wrong not to actively | > push authors in the right direction. | > | > These deprecations of course popped up in the test suite, so I've | been | > replacing them with per-instance pragmas there too. Interestingly in | > some cases, when looking for which instances needed the pragmas, I | > found...none. So OverlappingInstances was entirely unnecessary. Maybe | > library authors will find that too! | > | > Simon | > | > | > | > _______________________________________________ | > Libraries mailing list | > Libraries at haskell.org | > http://www.haskell.org/mailman/listinfo/libraries | > | | | -- | Andreas Abel <>< Du bist der geliebte Mensch. | | Department of Computer Science and Engineering Chalmers and Gothenburg | University, Sweden | | andreas.abel at gu.se | http://www2.tcs.ifi.lmu.de/~abel/ | _______________________________________________ | Libraries mailing list | Libraries at haskell.org | http://www.haskell.org/mailman/listinfo/libraries From ekmett at gmail.com Thu Jul 31 08:18:55 2014 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 31 Jul 2014 04:18:55 -0400 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2208409B@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> <53D9F758.4080606@chalmers.se> <618BE556AADD624C9C918AA5D5911BEF2208409B@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Now if only we could somehow find a way to do the same thing for AllowAmbiguousTypes. :) I have a 2500 line file that I'm forced to turn on AllowAmbiguousTypes in for 3 definitions, and checking that I didn't accidentally make something else ambiguous to GHC's eyes is a rather brutal affair. (I can't break up the file without inducing orphans) This is just a passing comment, while I'm thinking about it, not a serious attempt to derail the topic! -Edward On Thu, Jul 31, 2014 at 4:13 AM, Simon Peyton Jones wrote: > Andreas, remember that GHC 7.8 already implements (essentially) the same > algorithm. The difference is that 7.8 offers only the brutal > -XOverlappingInstances to control it. In your example of the decision you > make when writing > instance Bla a => Bla [a] > vs > instance {-# OVERLAPPABLE #-} Bla a => Bla [a] > you are, with GHC 7.8, making precisely the same decision when you decide > whether or not to add {-# LANGUAGE OverlappingInstances #-} to that module. > Perhaps that wasn't clear in what I wrote; apologies. > > So your proposal seems to be this > > don't remove -XOverlappingInstances, because that will prevent > programmers from "flipping on/off pragmas until their program > goes through". > > It's hard to argue AGAINST providing the opportunity for more careful > programmers to express their intentions more precisely, which is what the > OVERLAP/OVERLAPPABLE pragmas do. 
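For reference, a minimal self-contained sketch of the per-instance pragmas being discussed (the Describe class is invented purely for illustration, and the pragma syntax needs GHC 7.10 or later):

    {-# LANGUAGE FlexibleInstances #-}
    module OverlapDemo where

    class Describe a where
      describe :: a -> String

    -- Generic instance, marked as one that more specific instances may override.
    instance {-# OVERLAPPABLE #-} Describe [a] where
      describe _ = "some list"

    -- More specific instance, marked as the one allowed to do the overriding.
    instance {-# OVERLAPPING #-} Describe [Char] where
      describe s = "the string " ++ show s

With these two instances in scope, describe [True] resolves to the generic instance and describe "hello" to the [Char] one; {-# OVERLAPS #-} on an instance says both things at once, and no module-wide OverlappingInstances flag is involved.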
> > Concerning deprecating OverlappingInstances, my gut feel is that it is > positively a good thing to guide programmers towards a more robust > programming style. But my reason for starting this thread was to see > whether or not others' gut feel is similar. > > Simon > > | -----Original Message----- > | From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of > | Andreas Abel > | Sent: 31 July 2014 08:59 > | To: Simon Peyton Jones; ghc-devs; GHC users; Haskell Libraries > | (libraries at haskell.org) > | Subject: Re: Overlapping and incoherent instances > | > | On 31.07.2014 09:20, Simon Peyton Jones wrote: > | > Friends, in sending my message below, I should also have sent a link > | > to > | > > | > https://ghc.haskell.org/trac/ghc/ticket/9242#comment:25 > | > | Indeed. > | > | Quoting from the spec: > | > | * Eliminate any candidate IX for which both of the following hold: > | * There is another candidate IY that is strictly more specific; > | that is, IY is a substitution instance of IX but not vice versa. > | > | * Either IX is overlappable or IY is overlapping. > | > | Mathematically, this makes a lot of sense. But put on the hat of > | library writers, and users, and users that don't rtfm. Looking out > | from under this hat, the one may always wonder whether one should make > | one's generic instances OVERLAPPABLE or not. > | > | If I create a library with type class Bla and > | > | instance Bla a => Bla [a] > | > | I could be a nice library writer and spare my users from declaring > | their Bla String instances as OVERLAPPING, so I'd write > | > | instance {-# OVERLAPPABLE #-} Bla a => Bla [a] > | > | Or maybe that would be malicious? > | > | I think the current proposal is too sophisticated. There are no > | convincing examples given in the discussion so far that demonstrate > | where this sophistication pays off in practice. > | > | Keep in mind that 99% of the Haskell users will never study the > | instance resolution algorithm or its specification, but just flip > | on/off pragmas until their code goes through. [At least that was my > | approach: whenever GHC asks for one more LANGUAGE pragma, just throw it > | in.] > | > | Cheers, > | Andreas > | > | > | > Comment 25 describes the semantics of OVERLAPPING/OVERLAPPABLE etc, > | > which I signally failed to do in my message below, leading to > | > confusion in the follow up messages. My apologies for that. > | > > | > Some key points: > | > > | > *There is a useful distinction between /overlapping/ and > | > /overlappable/, but if you don't want to be bothered with it you can > | > just say OVERLAPS (which means both). > | > > | > *Overlap between two candidate instances is allowed if /either/ has > | > the relevant property. This is a bit sloppy, but reduces the > | > annotation burden. Actually, with this per-instance stuff I think > | > it'd be perfectly defensible to require both to be annotated, but > | > that's a different discussion. > | > > | > I hope that helps clarify. > | > > | > I'm really pretty certain that the basic proposal here is good: it > | > implements the current semantics in a more fine-grained manner. My > | > main motivation was to signal the proposed deprecation of the global > | > per-module flag -XoverlappingInstances. Happily people generally > | seem > | > fine with this. It is, after all, precisely what deprecations are > | for > | > ("the old thing still works for now, but it won't do so for ever, and > | > you should change as soon as is convenient"). 
> | > > | > Thanks > | > > | > Simon > | > > | > *From:*Libraries [mailto:libraries-bounces at haskell.org] *On Behalf Of > | > *Simon Peyton Jones > | > *Sent:* 29 July 2014 10:11 > | > *To:* ghc-devs; GHC users; Haskell Libraries (libraries at haskell.org) > | > *Subject:* Overlapping and incoherent instances > | > > | > Friends > | > > | > One of GHC's more widely-used features is overlapping (and sometimes > | > incoherent) instances. The user-manual documentation is here > | > | extensions.html#instance-overlap>. > | > > | > The use of overlapping/incoherent instances is controlled by LANGUAGE > | > pragmas: OverlappingInstances and IncoherentInstances respectively. > | > > | > However the overlap/incoherent-ness is a property of the **instance > | > declaration** itself, and has been for a long time. Using LANGUAGE > | > OverlappingInstances simply sets the "I am an overlapping instance" > | > flag for every instance declaration in that module. > | > > | > This is a Big Hammer. It give no clue about **which** particular > | > instances the programmer is expecting to be overlapped, nor which are > | > doing the overlapping. It brutally applies to every instance in > | the > | > module. Moreover, when looking at an instance declaration, there is > | > no nearby clue that it might be overlapped. The clue might be in the > | > command line that compiles that module! > | > > | > Iavor has recently implemented per-instance-declaration pragmas, so > | > you can say > | > > | > instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... > | > > | > instance {-# OVERLAPPING #-} Show [Char] where ... > | > > | > This is much more precise (it affects only those specific instances) > | > and it is much clearer (you see it when you see the instance > | declaration). > | > > | > This new feature will be in GHC 7.10 and I'm sure you will be happy > | > about that. *But I propose also to deprecate the LANGUAGE pragmas > | > OverlappingInstances and IncoherentInstances*, as way to encourage > | > everyone to use the new feature instead of the old big hammer. The > | > old LANGUAGE pragmas will continue to work, of course, for at least > | > another complete release cycle. We could make that two cycles if it > | was helpful. > | > > | > However, if you want deprecation-free libraries, it will entail a > | wave > | > of library updates. > | > > | > This email is just to warn you, and to let you yell if you think this > | is > | > a bad idea. It would actually not be difficult to retain the old > | > LANGUAGE pragmas indefinitely - it just seems wrong not to actively > | > push authors in the right direction. > | > > | > These deprecations of course popped up in the test suite, so I've > | been > | > replacing them with per-instance pragmas there too. Interestingly in > | > some cases, when looking for which instances needed the pragmas, I > | > found...none. So OverlappingInstances was entirely unnecessary. Maybe > | > library authors will find that too! > | > > | > Simon > | > > | > > | > > | > _______________________________________________ > | > Libraries mailing list > | > Libraries at haskell.org > | > http://www.haskell.org/mailman/listinfo/libraries > | > > | > | > | -- > | Andreas Abel <>< Du bist der geliebte Mensch. 
> | > | Department of Computer Science and Engineering Chalmers and Gothenburg > | University, Sweden > | > | andreas.abel at gu.se > | http://www2.tcs.ifi.lmu.de/~abel/ > | _______________________________________________ > | Libraries mailing list > | Libraries at haskell.org > | http://www.haskell.org/mailman/listinfo/libraries > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jul 31 08:53:20 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 31 Jul 2014 09:53:20 +0100 Subject: [commit: ghc] master: rts: add Emacs 'Local Variables' to every .c file (39b5c1c) In-Reply-To: <20140728143737.49FDC2406D@ghc.haskell.org> References: <20140728143737.49FDC2406D@ghc.haskell.org> Message-ID: <53DA0400.5030705@gmail.com> On 28/07/14 15:37, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/39b5c1cbd8950755de400933cecca7b8deb4ffcd/ghc > >> --------------------------------------------------------------- > > commit 39b5c1cbd8950755de400933cecca7b8deb4ffcd > Author: Austin Seipp > Date: Mon Jul 21 23:08:31 2014 -0500 > > rts: add Emacs 'Local Variables' to every .c file > > This will hopefully help ensure some basic consistency in the forward by > overriding buffer variables. In particular, it sets the wrap length, the > offset to 4, and turns off tabs. I'm not a huge fan of this. Is there a way to do this without adding the spam to the end of every source file? Also it only works for emacs, what if the vim folks want to add their own spam? 
Cheers, Simon > > Signed-off-by: Austin Seipp > > >> --------------------------------------------------------------- > > 39b5c1cbd8950755de400933cecca7b8deb4ffcd > rts/Adjustor.c | 8 ++++++++ > rts/Apply.h | 8 ++++++++ > rts/Arena.c | 8 ++++++++ > rts/Arena.h | 8 ++++++++ > rts/AutoApply.h | 8 ++++++++ > rts/AwaitEvent.h | 8 ++++++++ > rts/BeginPrivate.h | 8 ++++++++ > rts/Capability.c | 8 ++++++++ > rts/Capability.h | 8 ++++++++ > rts/CheckUnload.c | 8 ++++++++ > rts/CheckUnload.h | 8 ++++++++ > rts/ClosureFlags.c | 8 ++++++++ > rts/Disassembler.c | 8 ++++++++ > rts/Disassembler.h | 8 ++++++++ > rts/EndPrivate.h | 8 ++++++++ > rts/FileLock.c | 8 ++++++++ > rts/FileLock.h | 8 ++++++++ > rts/GetEnv.h | 8 ++++++++ > rts/GetTime.h | 8 ++++++++ > rts/Globals.c | 8 ++++++++ > rts/Globals.h | 8 ++++++++ > rts/Hash.c | 8 ++++++++ > rts/Hash.h | 8 ++++++++ > rts/Hpc.c | 8 ++++++++ > rts/HsFFI.c | 8 ++++++++ > rts/Inlines.c | 8 ++++++++ > rts/Interpreter.c | 8 ++++++++ > rts/Interpreter.h | 8 ++++++++ > rts/LdvProfile.c | 8 ++++++++ > rts/LdvProfile.h | 8 ++++++++ > rts/Linker.c | 8 ++++++++ > rts/LinkerInternals.h | 8 ++++++++ > rts/Messages.c | 8 ++++++++ > rts/Messages.h | 8 ++++++++ > rts/OldARMAtomic.c | 8 ++++++++ > rts/Papi.c | 8 ++++++++ > rts/Papi.h | 8 ++++++++ > rts/PosixSource.h | 8 ++++++++ > rts/Prelude.h | 8 ++++++++ > rts/Printer.c | 8 ++++++++ > rts/Printer.h | 8 ++++++++ > rts/ProfHeap.c | 8 ++++++++ > rts/ProfHeap.h | 8 ++++++++ > rts/Profiling.c | 8 ++++++++ > rts/Profiling.h | 8 ++++++++ > rts/Proftimer.c | 8 ++++++++ > rts/Proftimer.h | 8 ++++++++ > rts/RaiseAsync.c | 8 ++++++++ > rts/RaiseAsync.h | 8 ++++++++ > rts/RetainerProfile.c | 8 ++++++++ > rts/RetainerProfile.h | 8 ++++++++ > rts/RetainerSet.c | 8 ++++++++ > rts/RetainerSet.h | 8 ++++++++ > rts/RtsAPI.c | 8 ++++++++ > rts/RtsDllMain.c | 8 ++++++++ > rts/RtsDllMain.h | 8 ++++++++ > rts/RtsFlags.c | 8 ++++++++ > rts/RtsFlags.h | 8 ++++++++ > rts/RtsMain.c | 8 ++++++++ > rts/RtsMessages.c | 8 ++++++++ > rts/RtsSignals.h | 8 ++++++++ > rts/RtsStartup.c | 8 ++++++++ > rts/RtsUtils.c | 8 ++++++++ > rts/RtsUtils.h | 8 ++++++++ > rts/STM.c | 8 ++++++++ > rts/STM.h | 8 ++++++++ > rts/Schedule.c | 8 ++++++++ > rts/Schedule.h | 8 ++++++++ > rts/Sparks.c | 8 ++++++++ > rts/Sparks.h | 8 ++++++++ > rts/Stable.c | 8 ++++++++ > rts/Stable.h | 8 ++++++++ > rts/Stats.c | 8 ++++++++ > rts/Stats.h | 8 ++++++++ > rts/StgCRun.c | 8 ++++++++ > rts/StgPrimFloat.c | 8 ++++++++ > rts/StgPrimFloat.h | 8 ++++++++ > rts/StgRun.h | 8 ++++++++ > rts/Task.c | 8 ++++++++ > rts/Task.h | 8 ++++++++ > rts/ThreadLabels.c | 8 ++++++++ > rts/ThreadLabels.h | 8 ++++++++ > rts/ThreadPaused.c | 8 ++++++++ > rts/ThreadPaused.h | 8 ++++++++ > rts/Threads.c | 8 ++++++++ > rts/Threads.h | 8 ++++++++ > rts/Ticker.h | 8 ++++++++ > rts/Ticky.c | 8 ++++++++ > rts/Ticky.h | 8 ++++++++ > rts/Timer.c | 8 ++++++++ > rts/Timer.h | 8 ++++++++ > rts/Trace.c | 8 ++++++++ > rts/Trace.h | 8 ++++++++ > rts/Updates.h | 8 ++++++++ > rts/WSDeque.c | 8 ++++++++ > rts/WSDeque.h | 8 ++++++++ > rts/Weak.c | 8 ++++++++ > rts/Weak.h | 8 ++++++++ > rts/eventlog/EventLog.c | 8 ++++++++ > rts/eventlog/EventLog.h | 8 ++++++++ > rts/hooks/FlagDefaults.c | 8 ++++++++ > rts/hooks/MallocFail.c | 8 ++++++++ > rts/hooks/OnExit.c | 8 ++++++++ > rts/hooks/OutOfHeap.c | 8 ++++++++ > rts/hooks/StackOverflow.c | 8 ++++++++ > rts/posix/Clock.h | 8 ++++++++ > rts/posix/GetEnv.c | 8 ++++++++ > rts/posix/GetTime.c | 8 ++++++++ > rts/posix/Itimer.c | 8 ++++++++ > rts/posix/Itimer.h | 8 ++++++++ > 
rts/posix/OSMem.c | 8 ++++++++ > rts/posix/OSThreads.c | 8 ++++++++ > rts/posix/Select.c | 8 ++++++++ > rts/posix/Select.h | 8 ++++++++ > rts/posix/Signals.c | 8 ++++++++ > rts/posix/Signals.h | 8 ++++++++ > rts/posix/TTY.c | 8 ++++++++ > rts/posix/TTY.h | 8 ++++++++ > rts/sm/BlockAlloc.c | 8 ++++++++ > rts/sm/BlockAlloc.h | 8 ++++++++ > rts/sm/Compact.c | 8 ++++++++ > rts/sm/Compact.h | 8 ++++++++ > rts/sm/Evac.c | 8 ++++++++ > rts/sm/Evac.h | 8 ++++++++ > rts/sm/GC.c | 8 ++++++++ > rts/sm/GC.h | 8 ++++++++ > rts/sm/GCAux.c | 8 ++++++++ > rts/sm/GCTDecl.h | 8 ++++++++ > rts/sm/GCThread.h | 8 ++++++++ > rts/sm/GCUtils.c | 8 ++++++++ > rts/sm/GCUtils.h | 8 ++++++++ > rts/sm/MBlock.c | 8 ++++++++ > rts/sm/MarkStack.h | 8 ++++++++ > rts/sm/MarkWeak.c | 8 ++++++++ > rts/sm/MarkWeak.h | 8 ++++++++ > rts/sm/OSMem.h | 8 ++++++++ > rts/sm/Sanity.c | 8 ++++++++ > rts/sm/Sanity.h | 8 ++++++++ > rts/sm/Scav.c | 8 ++++++++ > rts/sm/Scav.h | 8 ++++++++ > rts/sm/Storage.c | 8 ++++++++ > rts/sm/Storage.h | 8 ++++++++ > rts/sm/Sweep.c | 8 ++++++++ > rts/sm/Sweep.h | 8 ++++++++ > rts/win32/AsyncIO.c | 8 ++++++++ > rts/win32/AsyncIO.h | 8 ++++++++ > rts/win32/AwaitEvent.c | 8 ++++++++ > rts/win32/ConsoleHandler.c | 8 ++++++++ > rts/win32/ConsoleHandler.h | 8 ++++++++ > rts/win32/GetEnv.c | 8 ++++++++ > rts/win32/GetTime.c | 8 ++++++++ > rts/win32/IOManager.c | 8 ++++++++ > rts/win32/IOManager.h | 8 ++++++++ > rts/win32/OSMem.c | 8 ++++++++ > rts/win32/OSThreads.c | 8 ++++++++ > rts/win32/ThrIOManager.c | 8 ++++++++ > rts/win32/Ticker.c | 8 ++++++++ > rts/win32/WorkQueue.c | 8 ++++++++ > rts/win32/WorkQueue.h | 8 ++++++++ > rts/win32/seh_excn.c | 8 ++++++++ > rts/win32/seh_excn.h | 8 ++++++++ > 161 files changed, 1288 insertions(+) > > Diff suppressed because of size. To see it, use: > > git diff-tree --root --patch-with-stat --no-color --find-copies-harder --ignore-space-at-eol --cc 39b5c1cbd8950755de400933cecca7b8deb4ffcd > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From hvriedel at gmail.com Thu Jul 31 09:37:34 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 31 Jul 2014 11:37:34 +0200 Subject: [commit: ghc] master: rts: add Emacs 'Local Variables' to every .c file (39b5c1c) In-Reply-To: <53DA0400.5030705@gmail.com> (Simon Marlow's message of "Thu, 31 Jul 2014 09:53:20 +0100") References: <20140728143737.49FDC2406D@ghc.haskell.org> <53DA0400.5030705@gmail.com> Message-ID: <874mxxhns1.fsf@gmail.com> Hello Simon, On 2014-07-31 at 10:53:20 +0200, Simon Marlow wrote: [...] >> rts: add Emacs 'Local Variables' to every .c file >> >> This will hopefully help ensure some basic consistency in the forward by >> overriding buffer variables. In particular, it sets the wrap length, the >> offset to 4, and turns off tabs. > > I'm not a huge fan of this. Is there a way to do this without adding > the spam to the end of every source file? Also it only works for > emacs, what if the vim folks want to add their own spam? I recently stumbled over http://cgit.freedesktop.org/mesa/mesa/commit/?id=8d0a1a6bc05af7edd25b15ce9159025036b636ff which lead me to read up on its effect which is explained in more detail in https://www.gnu.org/software/emacs/manual/html_node/emacs/Directory-Variables.html | The usual way to define directory-local variables is to put a file | named .dir-locals.el1 in a directory. 
Whenever Emacs visits any file | in that directory or any of its subdirectories, it will apply the | directory-local variables specified in .dir-locals.el, as though they | had been defined as file-local variables for that file (see File | Variables). So maybe that's a better option? Cheers, hvr From marlowsd at gmail.com Thu Jul 31 09:43:13 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 31 Jul 2014 10:43:13 +0100 Subject: [commit: ghc] master: rts: add Emacs 'Local Variables' to every .c file (39b5c1c) In-Reply-To: <874mxxhns1.fsf@gmail.com> References: <20140728143737.49FDC2406D@ghc.haskell.org> <53DA0400.5030705@gmail.com> <874mxxhns1.fsf@gmail.com> Message-ID: <53DA0FB1.6090300@gmail.com> On 31/07/14 10:37, Herbert Valerio Riedel wrote: > Hello Simon, > > On 2014-07-31 at 10:53:20 +0200, Simon Marlow wrote: > > [...] > >>> rts: add Emacs 'Local Variables' to every .c file >>> >>> This will hopefully help ensure some basic consistency in the forward by >>> overriding buffer variables. In particular, it sets the wrap length, the >>> offset to 4, and turns off tabs. >> >> I'm not a huge fan of this. Is there a way to do this without adding >> the spam to the end of every source file? Also it only works for >> emacs, what if the vim folks want to add their own spam? > > I recently stumbled over > > http://cgit.freedesktop.org/mesa/mesa/commit/?id=8d0a1a6bc05af7edd25b15ce9159025036b636ff > > which lead me to read up on its effect which is explained in more detail in > > https://www.gnu.org/software/emacs/manual/html_node/emacs/Directory-Variables.html > > | The usual way to define directory-local variables is to put a file > | named .dir-locals.el1 in a directory. Whenever Emacs visits any file > | in that directory or any of its subdirectories, it will apply the > | directory-local variables specified in .dir-locals.el, as though they > | had been defined as file-local variables for that file (see File > | Variables). > > So maybe that's a better option? Great. Austin, want to change over to using .dir-locals.el? Cheers, Simon From marlowsd at gmail.com Thu Jul 31 09:51:06 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 31 Jul 2014 10:51:06 +0100 Subject: Forcing apps to collect GC stats? In-Reply-To: References: Message-ID: <53DA118A.5000303@gmail.com> Hey Bryan, Sorry for the delay. On 15/07/14 01:57, Bryan O'Sullivan wrote: > I spent a bit of time over the weekend trying to figure out how to force > the RTS to collect GC statistics, but was unable to do so. > > I'm currently working on enriching criterion's ability to gather data, > among which I'd like to see GC statistics. If I try to obtain GC stats > using criterion when I'm not running the benchmark app with +RTS -T, I > get an exception. > > Is there a way to allow criterion to forcibly enable stats collection? > My efforts to do so have gotten me nowhere. It would be unfortunate if I > had to tell users of criterion that they should always run with +RTS -T > or add a -rtsopts clause, as they'll simply forget. > > And while I'm asking, why does GHC not simply collect GC stats by > default? Collecting them seems to have zero cost, from what I can see? So you can do this in the same way as GHC. 
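Roughly, the idea is a tiny C hook that upgrades the stats flag only when the user has not asked for any statistics themselves, plus a foreign import so the program can call it early. A sketch of how a program could use the same trick (the C fragment in the comment mirrors what GHC's hschooks.c does, but treat the exact names here as an illustration rather than a supported API):

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Main (main) where

    import GHC.Stats (getGCStats)

    -- C side, compiled and linked into the program:
    --
    --   #include "Rts.h"
    --   void initGCStatistics(void)
    --   {
    --       // Only upgrade the default; leave a user-supplied
    --       // +RTS -t/-s/-S setting alone.
    --       if (RtsFlags.GcFlags.giveStats == NO_GC_STATS) {
    --           RtsFlags.GcFlags.giveStats = COLLECT_GC_STATS;
    --       }
    --   }

    foreign import ccall safe "initGCStatistics"
      initGCStatistics :: IO ()

    main :: IO ()
    main = do
      initGCStatistics      -- call this before the work you want to measure
      -- ... run the benchmarks ...
      stats <- getGCStats   -- no longer throws, even without +RTS -T
      print stats

The guard matters: it only upgrades the NO_GC_STATS default, so an explicit +RTS -t, -s or -S from the user keeps its usual behaviour.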
See https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28 Which is imported into Haskell like this: https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848 I'm not sure why it's marked "safe", but it doesn't hurt. This API is kind-of public, in the sense that we deliberately expose it via the Rts.h header, and I'll try not to break it gratuitously. Cheers, Simon From alexander at plaimi.net Thu Jul 31 11:03:47 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 31 Jul 2014 13:03:47 +0200 Subject: [commit: ghc] master: rts: add Emacs 'Local Variables' to every .c file (39b5c1c) In-Reply-To: <874mxxhns1.fsf@gmail.com> References: <20140728143737.49FDC2406D@ghc.haskell.org> <53DA0400.5030705@gmail.com> <874mxxhns1.fsf@gmail.com> Message-ID: <53DA2293.5050001@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 31/07/14 11:37, Herbert Valerio Riedel wrote: > https://www.gnu.org/software/emacs/manual/html_node/emacs/Directory-Variables.html And > for those of us who use the improved editor of the beast: - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlPaIpMACgkQRtClrXBQc7UkSAEAmXTKImKH0KeOghqK3/UFHRD9 3AfCQRtwrsrwE9XkQWYA/0l/WEX+ebeWLCVh9IQ773guO0vEo11AdkVZUbPZdvUC =1on2 -----END PGP SIGNATURE----- From simonpj at microsoft.com Thu Jul 31 11:02:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 31 Jul 2014 11:02:59 +0000 Subject: Overlapping and incoherent instances In-Reply-To: <53DA060E.90206@chalmers.se> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> <53D9F758.4080606@chalmers.se> <618BE556AADD624C9C918AA5D5911BEF2208409B@DB3PRD3001MB020.064d.mgd.msft.net> <53DA060E.90206@chalmers.se> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22084361@DB3PRD3001MB020.064d.mgd.msft.net> | My proposal is to have just one pragma, e.g. OVERLAP, that allows | overlap in either direction. But if you have examples whether the | extra sophistication introduced by a separation into OVERLAPPABLE and | OVERLAPPING is needed, I am happy to go along... Great! As you'll see the proposal, "OVERLAPS" is precisely what you want. I don't care whether it is called "OVERLAP" or "OVERLAPS". So it sounds as if you are content. (I assume you don't want to *prevent* careful programmers from saying something more precise.) Simon | | On 31.07.2014 10:13, Simon Peyton Jones wrote: | > Andreas, remember that GHC 7.8 already implements (essentially) the | same algorithm. The difference is that 7.8 offers only the brutal - | XOverlappingInstances to control it. In your example of the decision | you make when writing | > instance Bla a => Bla [a] | > vs | > instance {-# OVERLAPPABLE #-} Bla a => Bla [a] you are, with GHC | > 7.8, making precisely the same decision when you decide whether or | not to add {-# LANGUAGE OverlappingInstances #-} to that module. | Perhaps that wasn't clear in what I wrote; apologies. | > | > So your proposal seems to be this | > | > don't remove -XOverlappingInstances, because that will prevent | > programmers from "flipping on/off pragmas until their program | > goes through". 
| > | > It's hard to argue AGAINST providing the opportunity for more careful | programmers to express their intentions more precisely, which is what | the OVERLAP/OVERLAPPABLE pragmas do. | > | > Concerning deprecating OverlappingInstances, my gut feel is that it | is positively a good thing to guide programmers towards a more robust | programming style. But my reason for starting this thread was to see | whether or not others' gut feel is similar. | > | > Simon | > | > | -----Original Message----- | > | From: Libraries [mailto:libraries-bounces at haskell.org] On Behalf Of | > | Andreas Abel | > | Sent: 31 July 2014 08:59 | > | To: Simon Peyton Jones; ghc-devs; GHC users; Haskell Libraries | > | (libraries at haskell.org) | > | Subject: Re: Overlapping and incoherent instances | > | | > | On 31.07.2014 09:20, Simon Peyton Jones wrote: | > | > Friends, in sending my message below, I should also have sent a | > | > link to | > | > | > | > https://ghc.haskell.org/trac/ghc/ticket/9242#comment:25 | > | | > | Indeed. | > | | > | Quoting from the spec: | > | | > | * Eliminate any candidate IX for which both of the following | hold: | > | * There is another candidate IY that is strictly more specific; | > | that is, IY is a substitution instance of IX but not vice | versa. | > | | > | * Either IX is overlappable or IY is overlapping. | > | | > | Mathematically, this makes a lot of sense. But put on the hat of | > | library writers, and users, and users that don't rtfm. Looking out | > | from under this hat, the one may always wonder whether one should | > | make one's generic instances OVERLAPPABLE or not. | > | | > | If I create a library with type class Bla and | > | | > | instance Bla a => Bla [a] | > | | > | I could be a nice library writer and spare my users from declaring | > | their Bla String instances as OVERLAPPING, so I'd write | > | | > | instance {-# OVERLAPPABLE #-} Bla a => Bla [a] | > | | > | Or maybe that would be malicious? | > | | > | I think the current proposal is too sophisticated. There are no | > | convincing examples given in the discussion so far that demonstrate | > | where this sophistication pays off in practice. | > | | > | Keep in mind that 99% of the Haskell users will never study the | > | instance resolution algorithm or its specification, but just flip | > | on/off pragmas until their code goes through. [At least that was | my | > | approach: whenever GHC asks for one more LANGUAGE pragma, just | throw | > | it in.] | > | | > | Cheers, | > | Andreas | > | | > | | > | > Comment 25 describes the semantics of OVERLAPPING/OVERLAPPABLE | > | > etc, which I signally failed to do in my message below, leading | to | > | > confusion in the follow up messages. My apologies for that. | > | > | > | > Some key points: | > | > | > | > *There is a useful distinction between /overlapping/ and | > | > /overlappable/, but if you don't want to be bothered with it you | > | > can just say OVERLAPS (which means both). | > | > | > | > *Overlap between two candidate instances is allowed if /either/ | > | > has the relevant property. This is a bit sloppy, but reduces the | > | > annotation burden. Actually, with this per-instance stuff I | think | > | > it'd be perfectly defensible to require both to be annotated, but | > | > that's a different discussion. | > | > | > | > I hope that helps clarify. | > | > | > | > I'm really pretty certain that the basic proposal here is good: | it | > | > implements the current semantics in a more fine-grained manner. 
| > | > My main motivation was to signal the proposed deprecation of the | > | > global per-module flag -XoverlappingInstances. Happily people | > | > generally | > | seem | > | > fine with this. It is, after all, precisely what deprecations | are | > | for | > | > ("the old thing still works for now, but it won't do so for ever, | > | > and you should change as soon as is convenient"). | > | > | > | > Thanks | > | > | > | > Simon | > | > | > | > *From:*Libraries [mailto:libraries-bounces at haskell.org] *On | Behalf | > | > Of *Simon Peyton Jones | > | > *Sent:* 29 July 2014 10:11 | > | > *To:* ghc-devs; GHC users; Haskell Libraries | > | > (libraries at haskell.org) | > | > *Subject:* Overlapping and incoherent instances | > | > | > | > Friends | > | > | > | > One of GHC's more widely-used features is overlapping (and | > | > sometimes | > | > incoherent) instances. The user-manual documentation is here | > | > | > s- | > | extensions.html#instance-overlap>. | > | > | > | > The use of overlapping/incoherent instances is controlled by | > | > LANGUAGE | > | > pragmas: OverlappingInstances and IncoherentInstances | respectively. | > | > | > | > However the overlap/incoherent-ness is a property of the | > | > **instance | > | > declaration** itself, and has been for a long time. Using | > | > LANGUAGE OverlappingInstances simply sets the "I am an | overlapping instance" | > | > flag for every instance declaration in that module. | > | > | > | > This is a Big Hammer. It give no clue about **which** particular | > | > instances the programmer is expecting to be overlapped, nor which | are | > | > doing the overlapping. It brutally applies to every instance | in | > | the | > | > module. Moreover, when looking at an instance declaration, there | > | > is no nearby clue that it might be overlapped. The clue might be | > | > in the command line that compiles that module! | > | > | > | > Iavor has recently implemented per-instance-declaration pragmas, | > | > so you can say | > | > | > | > instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... | > | > | > | > instance {-# OVERLAPPING #-} Show [Char] where ... | > | > | > | > This is much more precise (it affects only those specific | > | > instances) and it is much clearer (you see it when you see the | > | > instance | > | declaration). | > | > | > | > This new feature will be in GHC 7.10 and I'm sure you will be | > | > happy about that. *But I propose also to deprecate the LANGUAGE | > | > pragmas OverlappingInstances and IncoherentInstances*, as way to | > | > encourage everyone to use the new feature instead of the old big | > | > hammer. The old LANGUAGE pragmas will continue to work, of | > | > course, for at least another complete release cycle. We could | > | > make that two cycles if it | > | was helpful. | > | > | > | > However, if you want deprecation-free libraries, it will entail a | > | wave | > | > of library updates. | > | > | > | > This email is just to warn you, and to let you yell if you think | > | > this | > | is | > | > a bad idea. It would actually not be difficult to retain the | old | > | > LANGUAGE pragmas indefinitely - it just seems wrong not to | > | > actively push authors in the right direction. | > | > | > | > These deprecations of course popped up in the test suite, so I've | > | been | > | > replacing them with per-instance pragmas there too. | Interestingly | > | > in some cases, when looking for which instances needed the | > | > pragmas, I found...none. 
So OverlappingInstances was entirely | > | > unnecessary. Maybe library authors will find that too! | > | > | > | > Simon | > | > | > | > | > | > | > | > _______________________________________________ | > | > Libraries mailing list | > | > Libraries at haskell.org | > | > http://www.haskell.org/mailman/listinfo/libraries | > | > | > | | > | | > | -- | > | Andreas Abel <>< Du bist der geliebte Mensch. | > | | > | Department of Computer Science and Engineering Chalmers and | > | Gothenburg University, Sweden | > | | > | andreas.abel at gu.se | > | http://www2.tcs.ifi.lmu.de/~abel/ | > | _______________________________________________ | > | Libraries mailing list | > | Libraries at haskell.org | > | http://www.haskell.org/mailman/listinfo/libraries | > _______________________________________________ | > Libraries mailing list | > Libraries at haskell.org | > http://www.haskell.org/mailman/listinfo/libraries | > | | | -- | Andreas Abel <>< Du bist der geliebte Mensch. | | Department of Computer Science and Engineering Chalmers and Gothenburg | University, Sweden | | andreas.abel at gu.se | http://www2.tcs.ifi.lmu.de/~abel/ From daniel.trstenjak at gmail.com Thu Jul 31 11:24:47 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Thu, 31 Jul 2014 13:24:47 +0200 Subject: [commit: ghc] master: rts: add Emacs 'Local Variables' to every .c file (39b5c1c) In-Reply-To: <53DA2293.5050001@plaimi.net> References: <20140728143737.49FDC2406D@ghc.haskell.org> <53DA0400.5030705@gmail.com> <874mxxhns1.fsf@gmail.com> <53DA2293.5050001@plaimi.net> Message-ID: <20140731112447.GA17639@machine> On Thu, Jul 31, 2014 at 01:03:47PM +0200, Alexander Berntsen wrote: > for those of us who use the improved editor of the beast: > This only seems to work if you're starting a new vim instance inside of the directory, the vimrc in the directory isn't considered if you just open a file inside of the directory with an already running vim instance. But this plugin seems to do the desired thing: http://www.vim.org/scripts/script.php?script_id=3393 Greetings, Daniel From pali.gabor at gmail.com Thu Jul 31 12:29:39 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 31 Jul 2014 14:29:39 +0200 Subject: The build is broken: docs/users_guide/users_guide.xml Message-ID: Hi there, I have dblatex installed on my builders that makes them failing for the last few days, e.g. [1]. The cause of the failure is this: "/usr/local/bin/dblatex" -P 'filename.as.url=0' docs/users_guide/users_guide.xml --ps -o docs/users_guide/users_guide.ps [..] Build users_guide.ps latex failed users_guide.tex:6509: Lonely \item--perhaps a missing list environment. users_guide.tex:6509: leading text: ...-{}\-i\-p\-i\-d} \index{-{}-{}ipid} }] Unexpected error occured The version of dblatex is 0.3.2. [1] http://haskell.inf.elte.hu/builders/freebsd-amd64-head/336/10.html From simonpj at microsoft.com Thu Jul 31 14:03:39 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 31 Jul 2014 14:03:39 +0000 Subject: A couple of GHC-API questions In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF104374B9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438A09@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF10438DB8@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22084876@DB3PRD3001MB020.064d.mgd.msft.net> | 1. The docs say that annotations can only be applied to top-level | binders/declarations. 
We've found it *very* helpful to annotate nested | binders in some cases to reduce the inference burden (it's also | extremely helpful for debugging the code/spec). Is this restriction in | place because of the aforementioned issue of not serializing the | GlobalRdrEnv? If so, could the restriction be lifted with the caveat | that annotations on nested binders will *not* be exported? I think that | would be sufficient for us as long as we can get a hold of the Core | before any sort of inlining happens. I think the main awkwardness is that there's no very convenient way to attach the annotation to a nested binder. For built in things like strictness it's all kept in the IdInfo. You could imagine a kind of extensible IdInfo, with a field of [Dynamic], being the annotations. To look up, find the Dynamic with the expected type (e.g. LiquidHaskellSpec). Not very hard, I think, but still work. | 2. The docs say that you can only annotate binders that are declared in | the same module. We've similarly found it useful to be able to *assume* | stronger types for certain imported functions, though this is of course | unsound. Could we also lift the same-module restriction, again with the | caveat that the annotations will *not* be exported? I don't think that would be hard to lift. | I still think that a Template Haskell-based approach could be really | nice without requiring any changes to GHC (beyond the profiling issue | we've run into). That sounds good, but I can't really say more because I don't understand what "a Template-Haskell-based approach" really is. I'd be happy to have a Skype call about it, with Ranjit and Niki, if you wanted. Simon
From ezyang at mit.edu Thu Jul 31 14:31:04 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 31 Jul 2014 15:31:04 +0100 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <53D9F604.1000707@gmail.com> References: <1406210353-sup-3241@sabre> <53D9F604.1000707@gmail.com> Message-ID: <1406816361-sup-7168@sabre> > We need to rethink the shadowing behaviour. It is designed to handle > the case where we have the same PackageId (name + version) in two > different DBs (e.g. global and local). Shadowing takes the topmost one > of these (e.g. local, or rightmost -package-db flag). We can relax this > requirement so long as the InstalledPackageIds are different, but what > if the InstalledPackageIds are the same? Right now that's OK, because > identical InstalledPackageIds implies identical ABIs, but if we change > that so that InstalledPackageId is derived from the source and not the > ABI, we would not be able to assume that two identical > InstalledPackageIds are compatible. I talked to Duncan about this, and he's asserted that under a Nix-like model, equal InstalledPackageIds really would imply identical ABIs. I think this would be pretty hard to achieve reliably (and if we get it wrong, segfault); Simon, perhaps you would know more about this. Duncan (and shout if I understand wrong) is also keen on abolishing the shadowing algorithm completely when we have package environments and making a package environment mandatory (defaulting to a global package environment when nothing available.) The reason for this is that in a Nix model, we mostly abolish package database stacks and have a single global package database, which all packages are chucked into. In this case, the current shadowing really has no idea how to pick between two packages which shadow each other in the same database.
Edward From ekmett at gmail.com Thu Jul 31 16:15:48 2014 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 31 Jul 2014 12:15:48 -0400 Subject: Forcing apps to collect GC stats? In-Reply-To: <53DA118A.5000303@gmail.com> References: <53DA118A.5000303@gmail.com> Message-ID: Interesting. I suppose ekg could also (ab)use this. Johan? -Edward On Thu, Jul 31, 2014 at 5:51 AM, Simon Marlow wrote: > Hey Bryan, > > Sorry for the delay. > > > On 15/07/14 01:57, Bryan O'Sullivan wrote: > >> I spent a bit of time over the weekend trying to figure out how to force >> the RTS to collect GC statistics, but was unable to do so. >> >> I'm currently working on enriching criterion's ability to gather data, >> among which I'd like to see GC statistics. If I try to obtain GC stats >> using criterion when I'm not running the benchmark app with +RTS -T, I >> get an exception. >> >> Is there a way to allow criterion to forcibly enable stats collection? >> My efforts to do so have gotten me nowhere. It would be unfortunate if I >> had to tell users of criterion that they should always run with +RTS -T >> or add a -rtsopts clause, as they'll simply forget. >> >> And while I'm asking, why does GHC not simply collect GC stats by >> default? Collecting them seems to have zero cost, from what I can see? >> > > So you can do this in the same way as GHC. See > > https://phabricator.haskell.org/diffusion/GHC/browse/ > master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28 > > Which is imported into Haskell like this: > > https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs; > 6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848 > > I'm not sure why it's marked "safe", but it doesn't hurt. > > This API is kind-of public, in the sense that we deliberately expose it > via the Rts.h header, and I'll try not to break it gratuitously. > > Cheers, > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Jul 31 16:41:41 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 31 Jul 2014 18:41:41 +0200 Subject: Forcing apps to collect GC stats? In-Reply-To: References: <53DA118A.5000303@gmail.com> Message-ID: Indeed. I filed a bug earlier today to make use of this. On Thu, Jul 31, 2014 at 6:15 PM, Edward Kmett wrote: > Interesting. > > I suppose ekg could also (ab)use this. > > Johan? > > -Edward > > > On Thu, Jul 31, 2014 at 5:51 AM, Simon Marlow wrote: >> >> Hey Bryan, >> >> Sorry for the delay. >> >> >> On 15/07/14 01:57, Bryan O'Sullivan wrote: >>> >>> I spent a bit of time over the weekend trying to figure out how to force >>> the RTS to collect GC statistics, but was unable to do so. >>> >>> I'm currently working on enriching criterion's ability to gather data, >>> among which I'd like to see GC statistics. If I try to obtain GC stats >>> using criterion when I'm not running the benchmark app with +RTS -T, I >>> get an exception. >>> >>> Is there a way to allow criterion to forcibly enable stats collection? >>> My efforts to do so have gotten me nowhere. It would be unfortunate if I >>> had to tell users of criterion that they should always run with +RTS -T >>> or add a -rtsopts clause, as they'll simply forget. >>> >>> And while I'm asking, why does GHC not simply collect GC stats by >>> default? Collecting them seems to have zero cost, from what I can see? 
>> >> >> So you can do this in the same way as GHC. See >> >> >> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28 >> >> Which is imported into Haskell like this: >> >> >> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848 >> >> I'm not sure why it's marked "safe", but it doesn't hurt. >> >> This API is kind-of public, in the sense that we deliberately expose it >> via the Rts.h header, and I'll try not to break it gratuitously. >> >> Cheers, >> Simon >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > From austin at well-typed.com Thu Jul 31 16:55:18 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 31 Jul 2014 11:55:18 -0500 Subject: Forcing apps to collect GC stats? In-Reply-To: References: <53DA118A.5000303@gmail.com> Message-ID: The 'safe' mark was just an oversight from me - I don't really think it matters, considering it's only called once at startup anyway. Similarly I imagine the oversight is negligible in the Criterion or ekg case. And also, take note the conditional is very specific; you want COLLECT_GC_STATS only when NO_GC_STATS is the current setting - if you unconditionally force COLLECT_GC_STATS, then things like `+RTS -sstderr -RTS` will no longer work. I mention this because I messed it up originally. :) Given we already use this in GHC, and other users want it, it's probably reasonable to put such an interface in `base` somewhere, e.g. data GCStatisticsOption = NoStatistics | CollectStatistics | VerboseStatistics ... getGcStatistics :: IO GcStatisticsOption setGcStatistics :: GcStatisticsOption -> IO () modifyGcStatistics :: (GcStatisticsOption -> GcStatisticsOption) -> IO () or something like that. The special case inside GHC could then be handled with such an API I suppose. On Thu, Jul 31, 2014 at 11:41 AM, Johan Tibell wrote: > Indeed. I filed a bug earlier today to make use of this. > > On Thu, Jul 31, 2014 at 6:15 PM, Edward Kmett wrote: >> Interesting. >> >> I suppose ekg could also (ab)use this. >> >> Johan? >> >> -Edward >> >> >> On Thu, Jul 31, 2014 at 5:51 AM, Simon Marlow wrote: >>> >>> Hey Bryan, >>> >>> Sorry for the delay. >>> >>> >>> On 15/07/14 01:57, Bryan O'Sullivan wrote: >>>> >>>> I spent a bit of time over the weekend trying to figure out how to force >>>> the RTS to collect GC statistics, but was unable to do so. >>>> >>>> I'm currently working on enriching criterion's ability to gather data, >>>> among which I'd like to see GC statistics. If I try to obtain GC stats >>>> using criterion when I'm not running the benchmark app with +RTS -T, I >>>> get an exception. >>>> >>>> Is there a way to allow criterion to forcibly enable stats collection? >>>> My efforts to do so have gotten me nowhere. It would be unfortunate if I >>>> had to tell users of criterion that they should always run with +RTS -T >>>> or add a -rtsopts clause, as they'll simply forget. >>>> >>>> And while I'm asking, why does GHC not simply collect GC stats by >>>> default? Collecting them seems to have zero cost, from what I can see? >>> >>> >>> So you can do this in the same way as GHC. 
See >>> >>> >>> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28 >>> >>> Which is imported into Haskell like this: >>> >>> >>> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848 >>> >>> I'm not sure why it's marked "safe", but it doesn't hurt. >>> >>> This API is kind-of public, in the sense that we deliberately expose it >>> via the Rts.h header, and I'll try not to break it gratuitously. >>> >>> Cheers, >>> Simon >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From johan.tibell at gmail.com Thu Jul 31 17:00:32 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 31 Jul 2014 19:00:32 +0200 Subject: Forcing apps to collect GC stats? In-Reply-To: References: <53DA118A.5000303@gmail.com> Message-ID: We already have GHC.Stats. It would be nice if it was put there. On Thu, Jul 31, 2014 at 6:55 PM, Austin Seipp wrote: > The 'safe' mark was just an oversight from me - I don't really think > it matters, considering it's only called once at startup anyway. > Similarly I imagine the oversight is negligible in the Criterion or > ekg case. > > And also, take note the conditional is very specific; you want > COLLECT_GC_STATS only when NO_GC_STATS is the current setting - if you > unconditionally force COLLECT_GC_STATS, then things like `+RTS > -sstderr -RTS` will no longer work. I mention this because I messed it > up originally. :) > > Given we already use this in GHC, and other users want it, it's > probably reasonable to put such an interface in `base` somewhere, e.g. > > data GCStatisticsOption = NoStatistics | CollectStatistics | > VerboseStatistics ... > > getGcStatistics :: IO GcStatisticsOption > setGcStatistics :: GcStatisticsOption -> IO () > modifyGcStatistics :: (GcStatisticsOption -> GcStatisticsOption) -> IO () > > or something like that. > > The special case inside GHC could then be handled with such an API I suppose. > > > On Thu, Jul 31, 2014 at 11:41 AM, Johan Tibell wrote: >> Indeed. I filed a bug earlier today to make use of this. >> >> On Thu, Jul 31, 2014 at 6:15 PM, Edward Kmett wrote: >>> Interesting. >>> >>> I suppose ekg could also (ab)use this. >>> >>> Johan? >>> >>> -Edward >>> >>> >>> On Thu, Jul 31, 2014 at 5:51 AM, Simon Marlow wrote: >>>> >>>> Hey Bryan, >>>> >>>> Sorry for the delay. >>>> >>>> >>>> On 15/07/14 01:57, Bryan O'Sullivan wrote: >>>>> >>>>> I spent a bit of time over the weekend trying to figure out how to force >>>>> the RTS to collect GC statistics, but was unable to do so. >>>>> >>>>> I'm currently working on enriching criterion's ability to gather data, >>>>> among which I'd like to see GC statistics. If I try to obtain GC stats >>>>> using criterion when I'm not running the benchmark app with +RTS -T, I >>>>> get an exception. >>>>> >>>>> Is there a way to allow criterion to forcibly enable stats collection? >>>>> My efforts to do so have gotten me nowhere. It would be unfortunate if I >>>>> had to tell users of criterion that they should always run with +RTS -T >>>>> or add a -rtsopts clause, as they'll simply forget. 
>>>>> >>>>> And while I'm asking, why does GHC not simply collect GC stats by >>>>> default? Collecting them seems to have zero cost, from what I can see? >>>> >>>> >>>> So you can do this in the same way as GHC. See >>>> >>>> >>>> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/hschooks.c;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$18-28 >>>> >>>> Which is imported into Haskell like this: >>>> >>>> >>>> https://phabricator.haskell.org/diffusion/GHC/browse/master/ghc/Main.hs;6fa6caad0cb4ba99b2c0b444b0583190e743dd63$847-848 >>>> >>>> I'm not sure why it's marked "safe", but it doesn't hurt. >>>> >>>> This API is kind-of public, in the sense that we deliberately expose it >>>> via the Rts.h header, and I'll try not to break it gratuitously. >>>> >>>> Cheers, >>>> Simon >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ From johan.tibell at gmail.com Thu Jul 31 17:17:51 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 31 Jul 2014 19:17:51 +0200 Subject: Forcing apps to collect GC stats? In-Reply-To: References: <53DA118A.5000303@gmail.com> Message-ID: On Thu, Jul 31, 2014 at 6:55 PM, Austin Seipp wrote: > And also, take note the conditional is very specific; you want > COLLECT_GC_STATS only when NO_GC_STATS is the current setting - if you > unconditionally force COLLECT_GC_STATS, then things like `+RTS > -sstderr -RTS` will no longer work. I mention this because I messed it > up originally. :) Could you expand a bit on this. I want to programtically set whatever flags/settings would be set if the user passed +RTS -T on the command line. From austin at well-typed.com Thu Jul 31 17:49:31 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 31 Jul 2014 12:49:31 -0500 Subject: Forcing apps to collect GC stats? In-Reply-To: References: <53DA118A.5000303@gmail.com> Message-ID: Right, so if you look in ./includes/rts/Flags.h, you'll see a set of different options for the GC stats: #define NO_GC_STATS 0 #define COLLECT_GC_STATS 1 #define ONELINE_GC_STATS 2 #define SUMMARY_GC_STATS 3 #define VERBOSE_GC_STATS 4 By default the RTS sets the stats flag to NO_GC_STATS, which does nothing. Note that *every* flag here (except NO_GC_STATS) implies the collection of GC statistics; it's just a matter of what's displayed to the user (if anything). There are several RTS options, each of which correspond to one of the above (see RtsFlags.c): `+RTS -T` corresponds to COLLECT_GC_STATS `+RTS -S` corresponds to VERBOSE_GC_STATS `+RTS -s` corresponds to SUMMARY_GC_STATS `+RTS -t` corresponds to ONELINE_GC_STATS These flags are parsed at startup before any Haskell code is executed obviously. So here's the problem: let's say you unconditionally force the statistics flag to be COLLECT_GC_STATS from Haskell-land - if I started my application with `+RTS -s`, then you've changed the statistics reporting from SUMMARY_GC_STATS, which means that when the program terminates, no statistics about the program will be displayed! Which will break end-user expectations. 
From mark.lentczner at gmail.com Thu Jul 31 20:48:05 2014
From: mark.lentczner at gmail.com (Mark Lentczner)
Date: Thu, 31 Jul 2014 16:48:05 -0400
Subject: Haskell Platform 2014.2.0.0 Release Candidate 3
Message-ID:

Small update to the Haskell Platform 2014.2.0.0 release:

We have new Release Candidate 3 versions of the source tarball... and a new
generic-linux bindist of the platform!

- source tarball: haskell-platform-2014.2.0.0-srcdist-RC3.tar.gz
- generic linux: haskell-platform-2014.2.0.0-unknown-linux-x86_64-RC3.tar.gz

*Windows and OS X users: There are no RC3 versions - as the RC2 versions
seem to be holding up fine!*

*General*
- hptool (and hence the ./platform.sh script) takes a new --prefix parameter
  that is used for generic (non-OS X, non-Windows) builds: It sets the root
  under which haskell installations are located. Defaults to
  /usr/local/haskell. Everything will be placed in a directory named
  ghc-7.8.3- under this prefix.
- activate-hs script for default Posix-like builds
- small fixes to allow bootstrapping the platform with GHC 7.4
- README and LICENSE updated

*Source Tarball*
- all missing pieces now present
- build platform from source tarball verified

*Generic Linux Binary Dist*
- Built from GHC's generic Linux bindist for Deb 7 style systems
- Complete tarball: this is all you need to install Haskell on a compatible
  system
- Built on Ubuntu 12.04LTS
- Tested on Ubuntu 14 - my system needed a symlink from libgmp.so to
  libgmp.so.10 - this seems to be an issue with the GHC bindist built
  components

To install this:

cd /
sudo tar xvf ~/haskell-platform-2014.2.0.0-unknown-linux-x86_64-RC3.tar.gz
sudo /usr/local/haskell/ghc-7.8.3-x8_64/bin/activate-hs

This will finish the package registration, and install symlinks from
/usr/local/bin for all the standard haskell command line tools.

- Mark

*shasum -a 256:*
ab759ec50618f2604163eca7ad07e50c8292398a2d043fdc1012df161b2eb89a  haskell-platform-2014.2.0.0-srcdist-RC3.tar.gz
0da6879ae657481849e7ec4e5d3c4c035e090824167f97434b48af297ec17cf9  haskell-platform-2014.2.0.0-unknown-linux-x86_64-RC3.tar.gz

From simonpj at microsoft.com Thu Jul 31 20:56:05 2014
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Thu, 31 Jul 2014 20:56:05 +0000
Subject: pthread.h missing
Message-ID: <618BE556AADD624C9C918AA5D5911BEF22084FCB@DB3PRD3001MB020.064d.mgd.msft.net>

On Windows rts/T8124 is failing thus:

T8124_c.c:4:21: fatal error: pthread.h: No such file or directory

compilation terminated.
*** unexpected failure for T8124(threaded1)

Simon

From metaniklas at gmail.com Thu Jul 31 22:09:04 2014
From: metaniklas at gmail.com (Niklas Larsson)
Date: Fri, 1 Aug 2014 00:09:04 +0200
Subject: pthread.h missing
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22084FCB@DB3PRD3001MB020.064d.mgd.msft.net>
References: <618BE556AADD624C9C918AA5D5911BEF22084FCB@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID:

Not strange at all, as pthread isn't included in the mingw toolchain (it is
on Win64, but it doesn't know to link in pthread there).

There's no easy fix. Other than setting it to expected failure, of course.

Niklas

2014-07-31 22:56 GMT+02:00 Simon Peyton Jones :
> On Windows rts/T8124 is failing thus:
>
> T8124_c.c:4:21: fatal error: pthread.h: No such file or directory
>
> compilation terminated.
>
>
>
> *** unexpected failure for T8124(threaded1)
>
> Simon
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>

From varosi at gmail.com Thu Jul 31 22:24:42 2014
From: varosi at gmail.com (eng. Vassil Ognyanov Keremidchiev)
Date: Fri, 1 Aug 2014 01:24:42 +0300
Subject: Multiple imports on single line
Message-ID:

Hello, dear GHC developers!

Is it possible to add such constructions in future GHCs:

import Data.List, Data.Array, Data.Aeson, Network.HTTP.Conduit

on a single line. This will simplify source code and clear the space before
the actual code.

Best regards,
Vassil Keremidchiev

From simonpj at microsoft.com Thu Jul 31 22:45:03 2014
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Thu, 31 Jul 2014 22:45:03 +0000
Subject: Multiple imports on single line
In-Reply-To:
References:
Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208508F@DB3PRD3001MB020.064d.mgd.msft.net>

you can do it today:

import Foo; import Bar; import Baz

you do have to repeat the 'import' though

S

From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of eng. Vassil Ognyanov Keremidchiev
Sent: 31 July 2014 23:25
To: ghc-devs at haskell.org
Subject: Multiple imports on single line

Hello, dear GHC developers!

Is it possible to add such constructions in future GHCs:

import Data.List, Data.Array, Data.Aeson, Network.HTTP.Conduit

on a single line. This will simplify source code and clear the space before
the actual code.

Best regards,
Vassil Keremidchiev
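
[Editor's note: for completeness, a compilable sketch of the semicolon form
Simon shows above. The module names are arbitrary stand-ins, not a
suggestion from the thread:

    module Main (main) where

    -- Several imports on one source line, separated by explicit semicolons.
    import Data.Char (toUpper); import Data.List (sort); import Data.Maybe (fromMaybe)

    main :: IO ()
    main = putStrLn (map toUpper (sort (fromMaybe "hello" Nothing)))

The comma-separated form asked about (import A, B, C) is not valid Haskell
today; the semicolon form works because explicit semicolons are always
allowed where the layout rule would insert them between declarations.]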