From singpolyma at singpolyma.net Wed Oct 1 00:45:00 2014
From: singpolyma at singpolyma.net (Stephen Paul Weber)
Date: Tue, 30 Sep 2014 19:45:00 -0500
Subject: Is there a way to add a dynamic library to every link with GHC automatically?
Message-ID: <20141001004500.GA2003@singpolyma-liberty>

I'm working with my qnx-nto-arm cross-compiler again.  I still have the
issue where I get link errors about __aeabi_memcpy and similar.  I am able
to solve this problem by always adding -lcaps to ghc invocations.  I'm
still not sure if that library should be required, but it does fix the
problem.

So now I'm wondering if there's a way to tell my GHC build that when it is
built for this platform it should just always link against libcaps as well
as whatever else?

--
Stephen Paul Weber, @singpolyma
See for how I prefer to be contacted
edition right joseph

From omeragacan at gmail.com Wed Oct 1 05:54:52 2014
From: omeragacan at gmail.com (Ömer Sinan Ağacan)
Date: Wed, 1 Oct 2014 08:54:52 +0300
Subject: cabal directory structure under /libraries/ for a lib that uses Rts.h
Message-ID: 

Hi all,

I'm trying to implement https://ghc.haskell.org/trac/ghc/ticket/5364 ,
I did the coding part but I'm having trouble compiling it/adding it as
a part of GHC libraries.

My library is just one hsc file with a line `#include "Rts.h"` in it.
Any ideas what should I do to make it compiled with GHC?

Thanks..

---
Ömer Sinan Ağacan
http://osa1.net

From ezyang at mit.edu Wed Oct 1 07:20:10 2014
From: ezyang at mit.edu (Edward Z. Yang)
Date: Wed, 01 Oct 2014 00:20:10 -0700
Subject: cabal directory structure under /libraries/ for a lib that uses Rts.h
In-Reply-To: 
References: 
Message-ID: <1412147969-sup-5452@sabre>

Well, which library should it be part of?  Add it to the exposed-modules
list there and it should get compiled.

Edward

Excerpts from Ömer Sinan Ağacan's message of 2014-09-30 22:54:52 -0700:
> Hi all,
>
> I'm trying to implement https://ghc.haskell.org/trac/ghc/ticket/5364 ,
> I did the coding part but I'm having trouble compiling it/adding it as
> a part of GHC libraries.
>
> My library is just one hsc file with a line `#include "Rts.h"` in it.
> Any ideas what should I do to make it compiled with GHC?
>
> Thanks..
>
> ---
> Ömer Sinan Ağacan
> http://osa1.net

From alexander at plaimi.net Wed Oct 1 08:35:38 2014
From: alexander at plaimi.net (Alexander Berntsen)
Date: Wed, 01 Oct 2014 10:35:38 +0200
Subject: GHC Weekly news
In-Reply-To: 
References: 
Message-ID: <542BBCDA.3080504@plaimi.net>

On 30/09/14 22:10, Austin Seipp wrote:
> Included verbatim for those too lazy to open new tabs:
:-D  Please do continue to indulge us like this in the future!

--
Alexander
alexander at plaimi.net
https://secure.plaimi.net/~alexander

From simonpj at microsoft.com Wed Oct 1 08:57:19 2014
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Wed, 1 Oct 2014 08:57:19 +0000
Subject: Build time regressions
In-Reply-To: 
References: <1412110484-sup-5631@sabre>
Message-ID: <618BE556AADD624C9C918AA5D5911BEF2223E01E@DB3PRD3001MB020.064d.mgd.msft.net>

It sounds as if there are two issues here:

- Should GHC unpack a !'d constructor argument if the constructor's
  argument has a lot of fields?  It probably isn't profitable to unbox
  very large products, because it doesn't save much allocation, and might
  cause extra allocation at pattern-match sites.  So I think the answer
  is yes.  I'll open a ticket.

- Is some library (binary? blaze?) creating far too much code in some
  circumstances?  I have no idea about this, but it sounds fishy.  Simply
  creating the large worker function should not make things go bad.

Incidentally, John, using {-# NOUNPACK #-} !Bar would prevent the
unpacking while still allowing the field to be strict.  It's manually
controllable.

Simon

From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of John Lato
Sent: 01 October 2014 00:45
To: Edward Z. Yang
Cc: Joachim Breitner; ghc-devs at haskell.org
Subject: Re: Build time regressions

Hi Edward,

This is possibly unrelated, but the setup seems almost identical to a very
similar problem we had in some code, i.e. very long compile times (6+
minutes for 1 module) and excessive memory usage when compiling generic
serialization instances for some data structures.

In our case, I also thought that INLINE functions were the cause of the
problem, but it turns out they were not.  We had a nested data structure,
e.g.

> data Foo = Foo { fooBar :: !Bar, ... }

with Bar very large (~150 records).

Even when we explicitly NOINLINE'd the function that serialized Bar, GHC
still created a very large helper function of the form:

> serialize_foo :: Int# -> Int# -> ...

where the arguments were the unboxed fields of the Bar structure, along
with the other fields within Foo.

It appears that even though the serialization function was NOINLINE'd, it
simply created a Builder, and while combining the Builders GHC saw the full
structure.  Our serializer uses blaze, but perhaps Binary's builder is
similar enough that the same thing could happen.

Anyway, in our case the fix was to simply remove the bang pattern from the
'fooBar' record field.  Then the serialize_foo function takes a Bar as an
argument and serializes that.

I'm not entirely sure why compilation takes so much longer otherwise.  I've
tried dumping the output of each simplifier phase and it clearly gets stuck
at a certain point, but I didn't really debug in much detail so I don't
recall the details.
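To make the shapes concrete, here is a minimal sketch (the field names are
invented and Bar stands in for the large record; this is an illustration
only, not the actual code from the project), including the NOUNPACK form
Simon mentions above:

    -- Stand-in for the large (~150 field) record.
    data Bar = Bar
      { barA :: Int
      , barB :: Int
        -- ..., many more fields in the real thing
      }

    -- Strict Bar field: the shape described above, where GHC ended up
    -- building a huge worker whose arguments were Bar's fields.
    data Foo = Foo
      { fooBar  :: !Bar
      , fooRest :: Int
      }

    -- Simon's suggestion (GHC 7.8+): keep the field strict, but tell
    -- GHC not to unpack it.
    data Foo' = Foo'
      { fooBar'  :: {-# NOUNPACK #-} !Bar
      , fooRest' :: Int
      }

Dropping the bang from fooBar, the fix described above, is the other
alternative: the field is then lazy and serialize_foo just takes the Bar
itself.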
If you think this is related, I can investigate more thoroughly. Cheers, John L. On Wed, Oct 1, 2014 at 4:54 AM, Edward Z. Yang > wrote: Hello Joachim, This was halfway known, but it sounds like we haven't solved it completely. The beginning of the sordid tale was when Cabal HEAD switched to using derived binary instances: https://ghc.haskell.org/trac/ghc/ticket/9583 SPJ fixed the infinite loop bug in the simplifier, but apparently the deriving binary generates a lot of code, meaning a lot of memory. https://ghc.haskell.org/trac/ghc/ticket/9630 hvr's fix was specifically to solve this problem. But it sounds like it didn't eliminate the regression entirely? If there's an unrelated regression, we should suss it out. It would be helpful if someone could revert just the deriving changes, and see if this reverts the compilation time. Edward Excerpts from Joachim Breitner's message of 2014-09-30 13:36:27 -0700: > Hi, > > the attached graph shows a noticable increase in build time caused by > > Update Cabal submodule & ghc-pkg to use new module re-export types > author Edward Z. Yang > > https://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae > > and only halfway mitigated by > > Update `binary` submodule in an attempt to address #9630 > author Herbert Valerio Riedel > > https://git.haskell.org/ghc.git/commit/3ecca02516af5de803e4ff667c8c969c5bffb35f > > > I am not sure if the improvement is related to the regression, but in > any case: Edward, was such an increase expected by you? If not, can you > explain it? Can it be avoided? > > Or maybe Cabal just became much larger... +38% in allocations when > running haddock on it seems to confirm this. > > Greetings, > Joachim > _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 1 09:20:30 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 1 Oct 2014 09:20:30 +0000 Subject: mkCoreConApps vs. mkConApp In-Reply-To: <67946498-5089-465E-8063-FFC4ED5FBCFB@cis.upenn.edu> References: <67946498-5089-465E-8063-FFC4ED5FBCFB@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2223E0EF@DB3PRD3001MB020.064d.mgd.msft.net> mkConApp assumes the let/app invariant holds, and hence does not need to carry around, decompose, or test the function type. Hence slightly more efficient. My instinct is to make mkConApp have ASSERT checks for the let/app invariant (and indeed the function having enough (->) arrows), but under #ifdef DEBUG. And rename it "mkLazyConApp" to stress that there's an invariant involved. In contrast mkCoreConApp must carry round the types in order to generate a Case sometimes. Do you feel like doing that? (In HEAD.) Or shall I? Simon | -----Original Message----- | From: Richard Eisenberg [mailto:eir at cis.upenn.edu] | Sent: 29 September 2014 20:05 | To: Simon Peyton Jones | Subject: mkCoreConApps vs. mkConApp | | Hi Simon, | | I ran into a core-lint error on my branch which led me to wonder: when | should anyone use CoreSyn.mkConApp instead of MkCore.mkCoreConApps? | They appear to do roughly the same thing, but mkCoreConApps does more | checks (specifically, in my case, the let/app invariant check) and | claims to be more efficient. mkConApp even tells you to use | mkCoreConApps "if possible". When isn't this possible? | | To fix my error, I changed all uses of mkConApp in MkCore to | mkCoreConApps. 
Problem solved. But it's all a bit of a mystery why the | problem was there to begin with. Can you shed any light? | | Thanks! | Richard | | PS: The reason this happened on my branch is because of different | handling of making unboxed tuples, thanks to levity-polymorphism. From rwbarton at gmail.com Wed Oct 1 13:30:38 2014 From: rwbarton at gmail.com (Reid Barton) Date: Wed, 1 Oct 2014 09:30:38 -0400 Subject: Build time regressions In-Reply-To: References: <1412110484-sup-5631@sabre> Message-ID: On Tue, Sep 30, 2014 at 7:44 PM, John Lato wrote: > Hi Edward, > > This is possibly unrelated, but the setup seems almost identical to a very > similar problem we had in some code, i.e. very long compile times (6+ > minutes for 1 module) and excessive memory usage when compiling generic > serialization instances for some data structures. > > In our case, I also thought that INLINE functions were the cause of the > problem, but it turns out they were not. We had a nested data structure, > e.g. > > > data Foo { fooBar :: !Bar, ... } > > with Bar very large (~150 records). > > even when we explicitly NOINLINE'd the function that serialized Bar, GHC > still created a very large helper function of the form: > > > serialize_foo :: Int# -> Int# -> ... > > where the arguments were the unboxed fields of the Bar structure, along > with the other fields within Foo. > This sounds very much like the bug Richard fixed in https://ghc.haskell.org/trac/ghc/ticket/9233. (See "g/F.hs" from my "minimized.tar.gz".) If so then I think it is actually caused simply by creating the worker function, and doesn't have to do with unpacking, only the strictness of the Bar field. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From singpolyma at singpolyma.net Wed Oct 1 15:04:59 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Wed, 1 Oct 2014 10:04:59 -0500 Subject: Is there a way to add a dynamic library to every link with GHC automatically? In-Reply-To: <20141001004500.GA2003@singpolyma-liberty> References: <20141001004500.GA2003@singpolyma-liberty> Message-ID: <20141001150459.GB2046@singpolyma-liberty> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 >So now I'm wondering if there's a way to tell my GHC build that when it is >built for this platform it should just always link against libcaps as well >as whatever else? So, I think I've figured it out. 
Does this look like the best way to achieve what I need:

diff --git a/compiler/main/DriverPipeline.hs b/compiler/main/DriverPipeline.hs
index 0e17793..68a0c13 100644
--- a/compiler/main/DriverPipeline.hs
+++ b/compiler/main/DriverPipeline.hs
@@ -1886,6 +1886,10 @@ linkBinary' staticLink dflags o_files dep_packages = do
                       else ["-lpthread"]
       | otherwise               = []
 
+    let qnx_opts
+          | platformOS platform == OSQNXNTO = ["-lcaps"]
+          | otherwise                       = []
+
     rc_objs <- maybeCreateManifest dflags output_fn
 
     let link = if staticLink
@@ -1957,6 +1961,7 @@ linkBinary' staticLink dflags o_files dep_packages = do
                       ++ pkg_framework_opts
                       ++ debug_opts
                       ++ thread_opts
+                      ++ qnx_opts
                      ))

                       -- parallel only: move binary to another dir -- HWL

--
Stephen Paul Weber, @singpolyma
See for how I prefer to be contacted
edition right joseph

From alan.zimm at gmail.com Wed Oct 1 15:13:08 2014
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Wed, 1 Oct 2014 17:13:08 +0200
Subject: Feedback request for #9628 AST Annotations
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net>
References: <1412038564-sup-8892@sabre>
	<618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: 

I have put up a new diff at https://phabricator.haskell.org/D297

It is just a proof of concept at this point, to check if the approach is
acceptable.

This is much less intrusive, and only affects the lexer/parser, in what
should be a transparent way.

The new module ApiAnnotation was introduced because it needs to be imported
by Lexer.x, and I was worried about another circular import cycle.  It does
also allow the annotations to be defined in a self-contained way, which can
then easily be used by other external projects such as ghc-parser.

If there is consensus that this will not break anything else, I would like
to go ahead and add the rest of the annotations.

Regards
Alan

On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones wrote:
> I'm anxious about it being too big a change too.
>
> I'd be up for it if we had several "customers" all saying "yes, this is
> precisely what we need to make our usage of the GHC API far far easier".
> With enough detail so we can understand their use-case.
>
> Otherwise I worry that we might go to a lot of effort to solve the wrong
> problem; or to build a solution that does not, in the end, work for the
> actual use-case.
>
> Another way to tackle this would be to ensure that syntax tree nodes have
> a "node-key" (a bit like their source location) that clients could use in a
> finite map, to map node-key to values of their choice.
> > I have not reviewed your patch in detail, but it's uncomfortable that the > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > Ditto DynFlags/HscEnv, though I think here that you are right that the > "hooks" interface is very crucial. After all, the WHOLE POINT is too make > the client interface more flexible. I would consult Luite and Edsko, who > were instrumental in designing the new hooks interface > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > (I'm not sure if that page is up to date, but I hope so) > > A good way to proceed might be to identify some of the big users of the > GHC API (I'm sure I don't know them all), discuss with them what would help > them, and share the results on a wiki page. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 30 September 2014 03:04 > | To: Edward Z. Yang > | Cc: ghc-devs at haskell.org > | Subject: Re: Feedback request for #9628 AST Annotations > | > | I'm only speaking up because Alan is specifically requesting feedback: > | I'm really ambivalent about this. I agree with Edward that this is a > | big change and adds permanent noise in a lot of places. But, I also > | really respect the goal here -- better tool support. Is it worthwhile > | to do this using a dynamically typed bit (using Typeable and such), > | which would avoid the noise? Maybe. > | > | What do other languages do? Do we know what, say, Agda does to get > | such tight coupling with an editor? Does, say, Eclipse have such a > | chummy relationship with a Java compiler to do its refactoring, or is > | that separately implemented? Haskell/GHC is not the first project to > | have this problem, and there's plenty of solutions out there. And, > | unlike most other times, I don't think Haskell is exceptional in this > | regard (there's nothing very special about Haskell's AST, maybe beyond > | indentation-awareness), so we can probably adopt other solutions > | nicely. > | > | Richard > | > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > | > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > | -0700: > | >> 1. Is this change too big, should I scale it back to just update > | the > | >> HsSyn structures and then lock it down to Located SrcSpan for all > | >> the rest? > | > > | > I don't claim to speak for the rest of the GHC developers, but I > | think > | > this change is too big. I am almost tempted to say that we > | shouldn't > | > add the type parameter at all, and do something else (maybe Backpack > | > can let us extend SrcSpan in a modular way, or even use a > | dynamically > | > typed map for annotations.) > | > > | > Edward > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From singpolyma at singpolyma.net Wed Oct 1 15:50:23 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Wed, 1 Oct 2014 10:50:23 -0500 Subject: Using cabal-install with GHC HEAD? 
Message-ID: <20141001155023.GC2046@singpolyma-liberty>

I'm running cabal-install 1.20.0.3 -- when I try to install packages for my
cross-compiler built from GHC HEAD I get:

> ghc-stage1: ghc no longer supports single-file style package databases
> (dist/package.conf.inplace) use 'ghc-pkg init' to create
> the database with the correct format.

So, do I need an even newer version of cabal-install (HEAD maybe?)?  Or does
cabal-install just not support GHC HEAD at all yet?

--
Stephen Paul Weber, @singpolyma
See for how I prefer to be contacted
edition right joseph

From austin at well-typed.com Wed Oct 1 15:56:41 2014
From: austin at well-typed.com (Austin Seipp)
Date: Wed, 1 Oct 2014 10:56:41 -0500
Subject: Using cabal-install with GHC HEAD?
In-Reply-To: <20141001155023.GC2046@singpolyma-liberty>
References: <20141001155023.GC2046@singpolyma-liberty>
Message-ID: 

Yeah, you'll need to update your Cabal to HEAD for all the latest support
for package keys, etc. These changes happened in the past several weeks,
so you might not have noticed if you don't rebuild especially frequently.

On Wed, Oct 1, 2014 at 10:50 AM, Stephen Paul Weber wrote:
> I'm running cabal-install 1.20.0.3 -- when I try to install packages for my
> cross-compiler built from GHC HEAD I get:
>
>> ghc-stage1: ghc no longer supports single-file style package databases
>> (dist/package.conf.inplace) use 'ghc-pkg init' to create
>> the database with the correct format.
>
> So, do I need an even newer version of cabal-install (HEAD maybe?)? Or does
> cabal-install just not support GHC HEAD at all yet?
> > - -- > Stephen Paul Weber, @singpolyma > See for how I prefer to be contacted > edition right joseph > -----BEGIN PGP SIGNATURE----- > Version: GnuPG v1.4.11 (GNU/Linux) > > iQIcBAEBCAAGBQJULCK/AAoJENEcKRHOUZzeoigP/0ScPQUxO4lPp7zr2yNz/Vvu > fXAyAnEfKssFvCgnXNvaRhj4mGvIb+aRMDT5xs2bCMkx2ZJr1jWOVJrRX1wYmI5M > PM6pMQYxJxr4avhPNra388lkkYLf/A7OA/dXvv3Z93TpPWJ6wTwNf0kmnysWo5/j > ZLD8qKAMgFjrw6j55gM/gyb9eYKA5eMssVBvlK9YWrHR1InoVmP+Ua+ru50ClP2b > WTbH8GFSKcijli1EKPzYzGAvmwcEGTmua+DEo6BvBAciaqDrd1a0FbxYze3IkLq9 > frb4PuK71ehd+uQgGrRMqaS+Zty1Rj7lV1NBhHKiL/eT5WBNXuotfDgp9/GT+ch6 > ULg3JSTTJ3KEJJ87vmGwgt92RXcJhLHSzWl+3gY10wFvSCt0mHQwFuo+u7nQ+yuG > PRBKMN7osnPJR1DjSzIYcGBgSPhMFdBUouiVUiiacEoZF8ukdSO2xXoOB6z3OihB > XDVv5tqvp2dbDzHTyJQWl0O381JiDC6Lwogb0Q+G6+rA3CAkh2D+8ryda/zWTze2 > DCuu40+MQa6YjO5rXn+eQtXaXVXZUBY1P8VFE7oRfkgxjA78ahT46rpNgtzwlNp0 > bPEnBh6lVq3eGwuv/dqtonASOZp3/MEaySBdi6ZCokbZhKagLEY660eHFHY6aURu > 6gT+FZFLIOzzl+Xhka/T > =A0vd > -----END PGP SIGNATURE----- > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Wed Oct 1 16:06:19 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 1 Oct 2014 16:06:19 +0000 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Let me urge you, once more, to consult some actual heavy-duty users of these proposed facilities. I am very keen to avoid investing design and implementation effort in facilities that may not meet the need. If they end up acclaiming the node-key idea, then we should surely simply make the key an abstract type, simply an instance of Hashable, Ord, etc. Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 30 September 2014 19:48 To: Simon Peyton Jones Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org Subject: Re: Feedback request for #9628 AST Annotations On further reflection of the goals for the annotation, I would like to put forward the following proposal for comment Instead of physically placing a "node-key" in each AST Node, a virtual node key can be generated from any `GenLocated SrcSpan e' comprising a combination of the `SrcSpan` value and a unique identifier from the constructor for `e`, perhaps using its `TypeRep`, since the entire AST derives Typeable. To further reduce the intrusiveness, a base Annotation type can be defined that captures the location of noise tokens for each AST constructor. This can then be emitted from the parser, if the appropriate flag is set to enable it. So data ApiAnnKey = AK SrcSpan TypeRep mkApiAnnKey :: (Located e) -> ApiAnnKey mkApiAnnKey = ... data Ann = .... | AnnHsLet SrcSpan -- of the word "let" SrcSpan -- of the word "in" | AnnHsDo SrcSpan -- of the word "do" And then in the parser | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc $2) $4) } The helper is mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P (LHsExpr RdrName) mkAnnHsLet (L l_let _) (L l_in _) e = do addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) return e; The Parse Monad would have to accumulate the annotations to be returned at the end, if called with the appropriate flag. 
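As a rough, self-contained sketch of that accumulation step (SrcSpan and
the P type here are stand-ins, not GHC's actual parser monad; ApiAnnKey,
Ann and addAnnotation mirror the names sketched above):

    import qualified Data.Map as Map
    import Control.Monad.State (State, modify, runState)
    import Data.Typeable (TypeRep)

    type SrcSpan = (Int, Int, Int, Int)     -- placeholder for GHC's SrcSpan

    data Ann = AnnHsLet SrcSpan SrcSpan     -- spans of "let" and "in"
             | AnnHsDo  SrcSpan             -- span of "do"
             deriving Show

    data ApiAnnKey = AK SrcSpan TypeRep deriving (Eq, Ord, Show)

    -- Stand-in for the parser monad, threading the annotation map as state.
    type P = State (Map.Map ApiAnnKey Ann)

    addAnnotation :: ApiAnnKey -> Ann -> P ()
    addAnnotation key ann = modify (Map.insert key ann)

    -- Running a parse yields the result plus the accumulated annotations.
    runP :: P a -> (a, Map.Map ApiAnnKey Ann)
    runP p = runState p Map.empty

The idea is that GHC's real P monad would carry this state instead of a
State monad, but the shape of the accumulation is the same.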
There will be some boilerplate in getting the annotations and helper functions defined, but it will not pollute the rest. This technique can also potentially be backported to support older GHC versions via a modification to ghc-parser. https://hackage.haskell.org/package/ghc-parser Regards Alan On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman > wrote: I tend to agree that this change is much too intrusive for what it attempts to do. I think the concept of a node key could be workable, and ties in to the approach I am taking in ghc-exactprint [1], which uses a SrcSpan together with node type as the annotation key. [1] https://github.com/alanz/ghc-exactprint On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones > wrote: I'm anxious about it being too big a change too. I'd be up for it if we had several "customers" all saying "yes, this is precisely what we need to make our usage of the GHC API far far easier". With enough detail so we can understand their use-case. Otherwise I worry that we might go to a lot of effort to solve the wrong problem; or to build a solution that does not, in the end, work for the actual use-case. Another way to tackle this would be to ensure that syntax tree nodes have a "node-key" (a bit like their source location) that clients could use in a finite map, to map node-key to values of their choice. I have not reviewed your patch in detail, but it's uncomfortable that the 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. Ditto DynFlags/HscEnv, though I think here that you are right that the "hooks" interface is very crucial. After all, the WHOLE POINT is too make the client interface more flexible. I would consult Luite and Edsko, who were instrumental in designing the new hooks interface https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks (I'm not sure if that page is up to date, but I hope so) A good way to proceed might be to identify some of the big users of the GHC API (I'm sure I don't know them all), discuss with them what would help them, and share the results on a wiki page. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Richard Eisenberg | Sent: 30 September 2014 03:04 | To: Edward Z. Yang | Cc: ghc-devs at haskell.org | Subject: Re: Feedback request for #9628 AST Annotations | | I'm only speaking up because Alan is specifically requesting feedback: | I'm really ambivalent about this. I agree with Edward that this is a | big change and adds permanent noise in a lot of places. But, I also | really respect the goal here -- better tool support. Is it worthwhile | to do this using a dynamically typed bit (using Typeable and such), | which would avoid the noise? Maybe. | | What do other languages do? Do we know what, say, Agda does to get | such tight coupling with an editor? Does, say, Eclipse have such a | chummy relationship with a Java compiler to do its refactoring, or is | that separately implemented? Haskell/GHC is not the first project to | have this problem, and there's plenty of solutions out there. And, | unlike most other times, I don't think Haskell is exceptional in this | regard (there's nothing very special about Haskell's AST, maybe beyond | indentation-awareness), so we can probably adopt other solutions | nicely. | | Richard | | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" > wrote: | | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 | -0700: | >> 1. 
Is this change too big, should I scale it back to just update | the | >> HsSyn structures and then lock it down to Located SrcSpan for all | >> the rest? | > | > I don't claim to speak for the rest of the GHC developers, but I | think | > this change is too big. I am almost tempted to say that we | shouldn't | > add the type parameter at all, and do something else (maybe Backpack | > can let us extend SrcSpan in a modular way, or even use a | dynamically | > typed map for annotations.) | > | > Edward | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Wed Oct 1 16:10:41 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 1 Oct 2014 11:10:41 -0500 Subject: The future of the haskell2010/haskell98 packages - AKA Trac #9590 In-Reply-To: <5F743CA1-227C-4F5C-B38E-1DD860659168@me.com> References: <5F743CA1-227C-4F5C-B38E-1DD860659168@me.com> Message-ID: Hi Malcolm, Withdrawing the packages from GHC's distribution is certainly a possibility. We did briefly raise that point when we talked yesterday too, but it wasn't discussed much. Perhaps some others feel the same, but I imagine more people would be OK with #2 above as opposed to eliminating it, since we already lightly break some things anyway. Hopefully we'll really find out soon. On Tue, Sep 30, 2014 at 4:00 PM, Malcolm Wallace wrote: > How about doing the honest thing, and withdrawing both packages in ghc-7.10? Haskell'98 is now 15 years old, and the 2010 standard was never really popular anyway. > > Regards, > Malcolm > > On 30 Sep 2014, at 21:21, Austin Seipp wrote: > > Hello developers, users, friends, > > I'd like you all to weigh in on something - a GHC bug report, that has > happened as a result of making Applicative a superclass of Monad: > > https://ghc.haskell.org/trac/ghc/ticket/9590 > > The very condensed version is this: because haskell2010/haskell98 > packages try to be fairly strictly conforming, they do not have > modules like Control.Applicative. > > Unfortunately, due to the way these packages are structured, many > things are simply re-exported from base, like `Monad`. But > `Applicative` is not, and cannot be imported if you use -XHaskell2010 > and the haskell2010 package. > > The net result here is that haskell98/haskell2010 are hopelessly > broken in the current state: it's impossible to define an instance of > `Monad`, because you cannot define an instance of `Applicative`, > because you can't import it in the first place! > > This leaves us in quite a pickle. > > So I ask: Friends, what do you think we should do? I am particularly > interested in users/developers of current Haskell2010 packages - not > just code that may *be* standard Haskell - code that implies a > dependency on it. > > There was a short discussion between me and Simon Marlow about this in > the morning, and again on IRC this morning between me, Duncan, Edward > K, and Herbert. > > Basically, I only see one of two options: > > - We could make GHC support both: a version of `Monad` without > `Applicative`, and one with it. 
This creates some complication in the > desugarer, where GHC takes care of `do` syntax (and thus needs to be > aware of `Monad`'s definition and location). But it is, perhaps, quite > doable. > > - We change both packages to export `Applicative` and follow the API > changes in `base` accordingly. > > Note that #1 above is contingent on three things: > > 1) There is interest in this actually happening, and these separate > APIs being supported. If there is not significant interest in > maintaining this, it's unclear if we should go for it. > > 2) It's not overly monstrously complex (I don't think it necessarily > will be, but it might be.) > > 3) You can't like `haskell2010` packages and `base` packages together > in the general case, but, AFAIK, this wasn't the case before either. > > I'd really appreciate your thoughts. This must be sorted out for 7.10 > somehow; the current situation is hopelessly busted. > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ndmitchell at gmail.com Wed Oct 1 16:37:58 2014 From: ndmitchell at gmail.com (Neil Mitchell) Date: Wed, 1 Oct 2014 17:37:58 +0100 Subject: Feedback request for #9628 AST Annotations In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I was getting a bit lost between the idea and the implementation. Let me try rephrasing the idea in my own words. The goal: Capture inner source spans in AST syntax nodes. At the moment if ... then ... else ... captures the spans [if [...] then [...] else [...]]. We want to capture the spans for each keyword as well, so: [{if} [...] {then} [...] {else} [...]]. The proposal: Rather than add anything to the AST, have a separate mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan from the IfThenElse node, and some token for the IfThenElse constructor, and get back a list of IfThenElse for the particular keyword. I like the proposal because it adds nothing inside the AST, and requires no fresh invariants of the AST. I dislike it because the contents of that separate mapping are highly tied up with the AST, and easy to get out of sync. I think it's the right choice for three reasons, 1) it is easier to try out and doesn't break the AST, so we have more scope for changing our minds later; 2) the same technique is able to represent things other than SrcSpan without introducing a polymorphic src span; 3) the people who pay the complexity are the people who use it, which is relatively few people. That said, as a tweak to the API, rather than a single data type for all annotations, you could have: data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} data AnnDo = AnnDo {posDo :: SrcSpan} Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, with the invariant that the TypeRep in the key matches the Dynamic. 
Then you can have: getAnnotation :: Typeable a => Annotations -> SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery you are engaging in with mkAnnKey. Thanks, Neil On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones wrote: > Let me urge you, once more, to consult some actual heavy-duty users of these > proposed facilities. I am very keen to avoid investing design and > implementation effort in facilities that may not meet the need. > > > > If they end up acclaiming the node-key idea, then we should surely simply > make the key an abstract type, simply an instance of Hashable, Ord, etc. > > > > Simon > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > Sent: 30 September 2014 19:48 > To: Simon Peyton Jones > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > > > Subject: Re: Feedback request for #9628 AST Annotations > > > > On further reflection of the goals for the annotation, I would like to put > forward the following proposal for comment > > > Instead of physically placing a "node-key" in each AST Node, a virtual > node key can be generated from any `GenLocated SrcSpan e' comprising a > combination of the `SrcSpan` value and a unique identifier from the > constructor for `e`, perhaps using its `TypeRep`, since the entire AST > derives Typeable. > > To further reduce the intrusiveness, a base Annotation type can be > defined that captures the location of noise tokens for each AST > constructor. This can then be emitted from the parser, if the > appropriate flag is set to enable it. > > So > > data ApiAnnKey = AK SrcSpan TypeRep > > mkApiAnnKey :: (Located e) -> ApiAnnKey > mkApiAnnKey = ... > > data Ann = > .... > | AnnHsLet SrcSpan -- of the word "let" > SrcSpan -- of the word "in" > > | AnnHsDo SrcSpan -- of the word "do" > > And then in the parser > > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc $2) > $4) } > > The helper is > > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P (LHsExpr > RdrName) > mkAnnHsLet (L l_let _) (L l_in _) e = do > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) > return e; > > The Parse Monad would have to accumulate the annotations to be > returned at the end, if called with the appropriate flag. > > There will be some boilerplate in getting the annotations and helper > functions defined, but it will not pollute the rest. > > This technique can also potentially be backported to support older GHC > versions via a modification to ghc-parser. > > https://hackage.haskell.org/package/ghc-parser > > Regards > > Alan > > > > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman > wrote: > > I tend to agree that this change is much too intrusive for what it attempts > to do. > > I think the concept of a node key could be workable, and ties in to the > approach I am taking in ghc-exactprint [1], which uses a SrcSpan together > with node type as the annotation key. > > [1] https://github.com/alanz/ghc-exactprint > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones > wrote: > > I'm anxious about it being too big a change too. > > I'd be up for it if we had several "customers" all saying "yes, this is > precisely what we need to make our usage of the GHC API far far easier". > With enough detail so we can understand their use-case. > > Otherwise I worry that we might go to a lot of effort to solve the wrong > problem; or to build a solution that does not, in the end, work for the > actual use-case. 
> > Another way to tackle this would be to ensure that syntax tree nodes have a > "node-key" (a bit like their source location) that clients could use in a > finite map, to map node-key to values of their choice. > > I have not reviewed your patch in detail, but it's uncomfortable that the > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > Ditto DynFlags/HscEnv, though I think here that you are right that the > "hooks" interface is very crucial. After all, the WHOLE POINT is too make > the client interface more flexible. I would consult Luite and Edsko, who > were instrumental in designing the new hooks interface > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > (I'm not sure if that page is up to date, but I hope so) > > A good way to proceed might be to identify some of the big users of the GHC > API (I'm sure I don't know them all), discuss with them what would help > them, and share the results on a wiki page. > > Simon > > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 30 September 2014 03:04 > | To: Edward Z. Yang > | Cc: ghc-devs at haskell.org > | Subject: Re: Feedback request for #9628 AST Annotations > | > | I'm only speaking up because Alan is specifically requesting feedback: > | I'm really ambivalent about this. I agree with Edward that this is a > | big change and adds permanent noise in a lot of places. But, I also > | really respect the goal here -- better tool support. Is it worthwhile > | to do this using a dynamically typed bit (using Typeable and such), > | which would avoid the noise? Maybe. > | > | What do other languages do? Do we know what, say, Agda does to get > | such tight coupling with an editor? Does, say, Eclipse have such a > | chummy relationship with a Java compiler to do its refactoring, or is > | that separately implemented? Haskell/GHC is not the first project to > | have this problem, and there's plenty of solutions out there. And, > | unlike most other times, I don't think Haskell is exceptional in this > | regard (there's nothing very special about Haskell's AST, maybe beyond > | indentation-awareness), so we can probably adopt other solutions > | nicely. > | > | Richard > | > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > | > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > | -0700: > | >> 1. Is this change too big, should I scale it back to just update > | the > | >> HsSyn structures and then lock it down to Located SrcSpan for all > | >> the rest? > | > > | > I don't claim to speak for the rest of the GHC developers, but I > | think > | > this change is too big. I am almost tempted to say that we > | shouldn't > | > add the type parameter at all, and do something else (maybe Backpack > | > can let us extend SrcSpan in a modular way, or even use a > | dynamically > | > typed map for annotations.) 
> | > > | > Edward > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Wed Oct 1 17:44:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 1 Oct 2014 17:44:51 +0000 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22240B87@DB3PRD3001MB020.064d.mgd.msft.net> Let me urge you, once again, to consult users. I really do not want to implement a feature that (thus far) lacks a single enthusiastic user. Please. Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 01 October 2014 16:13 To: Simon Peyton Jones Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org Subject: Re: Feedback request for #9628 AST Annotations I have put up a new diff at https://phabricator.haskell.org/D297 It is just a proof of concept at this point, to check if the approach is acceptable. This is much less intrusive, and only affects the lexer/parser, in what should be a transparent way. The new module ApiAnnotation was introduced because it needs to be imported by Lexer.x, and I was worried about another circular import cycle. It does also allow the annotations to be defined in a self-contained way, which can then easily be used by other external projects such as ghc-parser. If there is consensus that this will not break anything else, I would like to go ahead and add the rest of the annotations. Regards Alan On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones > wrote: I'm anxious about it being too big a change too. I'd be up for it if we had several "customers" all saying "yes, this is precisely what we need to make our usage of the GHC API far far easier". With enough detail so we can understand their use-case. Otherwise I worry that we might go to a lot of effort to solve the wrong problem; or to build a solution that does not, in the end, work for the actual use-case. Another way to tackle this would be to ensure that syntax tree nodes have a "node-key" (a bit like their source location) that clients could use in a finite map, to map node-key to values of their choice. I have not reviewed your patch in detail, but it's uncomfortable that the 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. Ditto DynFlags/HscEnv, though I think here that you are right that the "hooks" interface is very crucial. After all, the WHOLE POINT is too make the client interface more flexible. I would consult Luite and Edsko, who were instrumental in designing the new hooks interface https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks (I'm not sure if that page is up to date, but I hope so) A good way to proceed might be to identify some of the big users of the GHC API (I'm sure I don't know them all), discuss with them what would help them, and share the results on a wiki page. 
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Richard Eisenberg | Sent: 30 September 2014 03:04 | To: Edward Z. Yang | Cc: ghc-devs at haskell.org | Subject: Re: Feedback request for #9628 AST Annotations | | I'm only speaking up because Alan is specifically requesting feedback: | I'm really ambivalent about this. I agree with Edward that this is a | big change and adds permanent noise in a lot of places. But, I also | really respect the goal here -- better tool support. Is it worthwhile | to do this using a dynamically typed bit (using Typeable and such), | which would avoid the noise? Maybe. | | What do other languages do? Do we know what, say, Agda does to get | such tight coupling with an editor? Does, say, Eclipse have such a | chummy relationship with a Java compiler to do its refactoring, or is | that separately implemented? Haskell/GHC is not the first project to | have this problem, and there's plenty of solutions out there. And, | unlike most other times, I don't think Haskell is exceptional in this | regard (there's nothing very special about Haskell's AST, maybe beyond | indentation-awareness), so we can probably adopt other solutions | nicely. | | Richard | | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" > wrote: | | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 | -0700: | >> 1. Is this change too big, should I scale it back to just update | the | >> HsSyn structures and then lock it down to Located SrcSpan for all | >> the rest? | > | > I don't claim to speak for the rest of the GHC developers, but I | think | > this change is too big. I am almost tempted to say that we | shouldn't | > add the type parameter at all, and do something else (maybe Backpack | > can let us extend SrcSpan in a modular way, or even use a | dynamically | > typed map for annotations.) | > | > Edward | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 1 18:34:54 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 1 Oct 2014 20:34:54 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22240B87@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF22240B87@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Ok, I have started a discussion on haskell-cafe, will cross reference to reddit too Alan On Wed, Oct 1, 2014 at 7:44 PM, Simon Peyton Jones wrote: > Let me urge you, once again, to consult users. I really do not want to > implement a feature that (thus far) lacks a single enthusiastic user. > Please. > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 01 October 2014 16:13 > *To:* Simon Peyton Jones > *Cc:* Richard Eisenberg; Edward Z. 
Yang; ghc-devs at haskell.org > > *Subject:* Re: Feedback request for #9628 AST Annotations > > > > I have put up a new diff at https://phabricator.haskell.org/D297 > > It is just a proof of concept at this point, to check if the approach is > acceptable. > > This is much less intrusive, and only affects the lexer/parser, in what > should be a transparent way. > > The new module ApiAnnotation was introduced because it needs to be > imported by Lexer.x, and I was worried about another circular import cycle. > It does also allow the annotations to be defined in a self-contained way, > which can then easily be used by other external projects such as ghc-parser. > > If there is consensus that this will not break anything else, I would like > to go ahead and add the rest of the annotations. > > Regards > > Alan > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > I'm anxious about it being too big a change too. > > I'd be up for it if we had several "customers" all saying "yes, this is > precisely what we need to make our usage of the GHC API far far easier". > With enough detail so we can understand their use-case. > > Otherwise I worry that we might go to a lot of effort to solve the wrong > problem; or to build a solution that does not, in the end, work for the > actual use-case. > > Another way to tackle this would be to ensure that syntax tree nodes have > a "node-key" (a bit like their source location) that clients could use in a > finite map, to map node-key to values of their choice. > > I have not reviewed your patch in detail, but it's uncomfortable that the > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > Ditto DynFlags/HscEnv, though I think here that you are right that the > "hooks" interface is very crucial. After all, the WHOLE POINT is too make > the client interface more flexible. I would consult Luite and Edsko, who > were instrumental in designing the new hooks interface > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > (I'm not sure if that page is up to date, but I hope so) > > A good way to proceed might be to identify some of the big users of the > GHC API (I'm sure I don't know them all), discuss with them what would help > them, and share the results on a wiki page. > > Simon > > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 30 September 2014 03:04 > | To: Edward Z. Yang > | Cc: ghc-devs at haskell.org > | Subject: Re: Feedback request for #9628 AST Annotations > | > | I'm only speaking up because Alan is specifically requesting feedback: > | I'm really ambivalent about this. I agree with Edward that this is a > | big change and adds permanent noise in a lot of places. But, I also > | really respect the goal here -- better tool support. Is it worthwhile > | to do this using a dynamically typed bit (using Typeable and such), > | which would avoid the noise? Maybe. > | > | What do other languages do? Do we know what, say, Agda does to get > | such tight coupling with an editor? Does, say, Eclipse have such a > | chummy relationship with a Java compiler to do its refactoring, or is > | that separately implemented? Haskell/GHC is not the first project to > | have this problem, and there's plenty of solutions out there. 
And, > | unlike most other times, I don't think Haskell is exceptional in this > | regard (there's nothing very special about Haskell's AST, maybe beyond > | indentation-awareness), so we can probably adopt other solutions > | nicely. > | > | Richard > | > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > | > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > | -0700: > | >> 1. Is this change too big, should I scale it back to just update > | the > | >> HsSyn structures and then lock it down to Located SrcSpan for all > | >> the rest? > | > > | > I don't claim to speak for the rest of the GHC developers, but I > | think > | > this change is too big. I am almost tempted to say that we > | shouldn't > | > add the type parameter at all, and do something else (maybe Backpack > | > can let us extend SrcSpan in a modular way, or even use a > | dynamically > | > typed map for annotations.) > | > > | > Edward > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 1 18:38:22 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 1 Oct 2014 20:38:22 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Thanks for the feedback and support. Alan On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell wrote: > I was getting a bit lost between the idea and the implementation. Let > me try rephrasing the idea in my own words. > > The goal: Capture inner source spans in AST syntax nodes. At the > moment if ... then ... else ... captures the spans [if [...] then > [...] else [...]]. We want to capture the spans for each keyword as > well, so: [{if} [...] {then} [...] {else} [...]]. > > The proposal: Rather than add anything to the AST, have a separate > mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan > from the IfThenElse node, and some token for the IfThenElse > constructor, and get back a list of IfThenElse for the particular > keyword. > > I like the proposal because it adds nothing inside the AST, and > requires no fresh invariants of the AST. I dislike it because the > contents of that separate mapping are highly tied up with the AST, and > easy to get out of sync. I think it's the right choice for three > reasons, 1) it is easier to try out and doesn't break the AST, so we > have more scope for changing our minds later; 2) the same technique is > able to represent things other than SrcSpan without introducing a > polymorphic src span; 3) the people who pay the complexity are the > people who use it, which is relatively few people. 
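For concreteness, the separate mapping restated above could look roughly
like this (SrcSpan is a placeholder and the helper name is invented; this
is a sketch, not a concrete design):

    import qualified Data.Map as Map

    type SrcSpan = (Int, Int, Int, Int)   -- placeholder for GHC's SrcSpan
    type AstCtor = String                 -- e.g. "HsIf", "HsLet", "HsDo"

    -- The separate mapping: (span of the node, its constructor) to the
    -- spans of the keywords inside that node.
    type KeywordSpans = Map.Map (SrcSpan, AstCtor) [SrcSpan]

    keywordSpans :: KeywordSpans -> SrcSpan -> AstCtor -> [SrcSpan]
    keywordSpans m sp ctor = Map.findWithDefault [] (sp, ctor) m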
> > That said, as a tweak to the API, rather than a single data type for > all annotations, you could have: > > data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} > data AnnDo = AnnDo {posDo :: SrcSpan} > > Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, > with the invariant that the TypeRep in the key matches the Dynamic. > Then you can have: getAnnotation :: Typeable a => Annotations -> > SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery > you are engaging in with mkAnnKey. > > Thanks, Neil > > On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones > wrote: > > Let me urge you, once more, to consult some actual heavy-duty users of > these > > proposed facilities. I am very keen to avoid investing design and > > implementation effort in facilities that may not meet the need. > > > > > > > > If they end up acclaiming the node-key idea, then we should surely simply > > make the key an abstract type, simply an instance of Hashable, Ord, etc. > > > > > > > > Simon > > > > > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > > Sent: 30 September 2014 19:48 > > To: Simon Peyton Jones > > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > > > > > > Subject: Re: Feedback request for #9628 AST Annotations > > > > > > > > On further reflection of the goals for the annotation, I would like to > put > > forward the following proposal for comment > > > > > > Instead of physically placing a "node-key" in each AST Node, a virtual > > node key can be generated from any `GenLocated SrcSpan e' comprising a > > combination of the `SrcSpan` value and a unique identifier from the > > constructor for `e`, perhaps using its `TypeRep`, since the entire AST > > derives Typeable. > > > > To further reduce the intrusiveness, a base Annotation type can be > > defined that captures the location of noise tokens for each AST > > constructor. This can then be emitted from the parser, if the > > appropriate flag is set to enable it. > > > > So > > > > data ApiAnnKey = AK SrcSpan TypeRep > > > > mkApiAnnKey :: (Located e) -> ApiAnnKey > > mkApiAnnKey = ... > > > > data Ann = > > .... > > | AnnHsLet SrcSpan -- of the word "let" > > SrcSpan -- of the word "in" > > > > | AnnHsDo SrcSpan -- of the word "do" > > > > And then in the parser > > > > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc > $2) > > $4) } > > > > The helper is > > > > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P (LHsExpr > > RdrName) > > mkAnnHsLet (L l_let _) (L l_in _) e = do > > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) > > return e; > > > > The Parse Monad would have to accumulate the annotations to be > > returned at the end, if called with the appropriate flag. > > > > There will be some boilerplate in getting the annotations and helper > > functions defined, but it will not pollute the rest. > > > > This technique can also potentially be backported to support older GHC > > versions via a modification to ghc-parser. > > > > https://hackage.haskell.org/package/ghc-parser > > > > Regards > > > > Alan > > > > > > > > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> > > wrote: > > > > I tend to agree that this change is much too intrusive for what it > attempts > > to do. > > > > I think the concept of a node key could be workable, and ties in to the > > approach I am taking in ghc-exactprint [1], which uses a SrcSpan together > > with node type as the annotation key. 
> > > > [1] https://github.com/alanz/ghc-exactprint > > > > > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> > > wrote: > > > > I'm anxious about it being too big a change too. > > > > I'd be up for it if we had several "customers" all saying "yes, this is > > precisely what we need to make our usage of the GHC API far far easier". > > With enough detail so we can understand their use-case. > > > > Otherwise I worry that we might go to a lot of effort to solve the wrong > > problem; or to build a solution that does not, in the end, work for the > > actual use-case. > > > > Another way to tackle this would be to ensure that syntax tree nodes > have a > > "node-key" (a bit like their source location) that clients could use in a > > finite map, to map node-key to values of their choice. > > > > I have not reviewed your patch in detail, but it's uncomfortable that the > > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > > > Ditto DynFlags/HscEnv, though I think here that you are right that the > > "hooks" interface is very crucial. After all, the WHOLE POINT is too > make > > the client interface more flexible. I would consult Luite and Edsko, who > > were instrumental in designing the new hooks interface > > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > > (I'm not sure if that page is up to date, but I hope so) > > > > A good way to proceed might be to identify some of the big users of the > GHC > > API (I'm sure I don't know them all), discuss with them what would help > > them, and share the results on a wiki page. > > > > Simon > > > > > > | -----Original Message----- > > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > > | Richard Eisenberg > > | Sent: 30 September 2014 03:04 > > | To: Edward Z. Yang > > | Cc: ghc-devs at haskell.org > > | Subject: Re: Feedback request for #9628 AST Annotations > > | > > | I'm only speaking up because Alan is specifically requesting feedback: > > | I'm really ambivalent about this. I agree with Edward that this is a > > | big change and adds permanent noise in a lot of places. But, I also > > | really respect the goal here -- better tool support. Is it worthwhile > > | to do this using a dynamically typed bit (using Typeable and such), > > | which would avoid the noise? Maybe. > > | > > | What do other languages do? Do we know what, say, Agda does to get > > | such tight coupling with an editor? Does, say, Eclipse have such a > > | chummy relationship with a Java compiler to do its refactoring, or is > > | that separately implemented? Haskell/GHC is not the first project to > > | have this problem, and there's plenty of solutions out there. And, > > | unlike most other times, I don't think Haskell is exceptional in this > > | regard (there's nothing very special about Haskell's AST, maybe beyond > > | indentation-awareness), so we can probably adopt other solutions > > | nicely. > > | > > | Richard > > | > > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > > | > > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > > | -0700: > > | >> 1. Is this change too big, should I scale it back to just update > > | the > > | >> HsSyn structures and then lock it down to Located SrcSpan for all > > | >> the rest? > > | > > > | > I don't claim to speak for the rest of the GHC developers, but I > > | think > > | > this change is too big. 
I am almost tempted to say that we > > | shouldn't > > | > add the type parameter at all, and do something else (maybe Backpack > > | > can let us extend SrcSpan in a modular way, or even use a > > | dynamically > > | > typed map for annotations.) > > | > > > | > Edward > > | > _______________________________________________ > > | > ghc-devs mailing list > > | > ghc-devs at haskell.org > > | > http://www.haskell.org/mailman/listinfo/ghc-devs > > | > > | _______________________________________________ > > | ghc-devs mailing list > > | ghc-devs at haskell.org > > | http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 1 19:55:09 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 1 Oct 2014 21:55:09 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Neil I looked into your proposed change in more detail, and I think it is flawed because it is trying to map the annotation back to itself. To start with we have a SrcSpan and the concrete AST value. We need to map the concrete constructor to the relevant annotation, which is of a different type. One straightforward way of doing it is the following data ApiAnnKey = AK SrcSpan String deriving (Eq,Ord,Show) getAnnotation :: Data a => Map.Map ApiAnnKey ApiAnn -> Located a -> Maybe ApiAnn getAnnotation anns a = Map.lookup (mkApiAnnKey a) anns mkApiAnnKey :: (Data e) => (Located e) -> ApiAnnKey mkApiAnnKey (L l e) = AK l (gconname e) gconname :: Data a => a -> String gconname = (\t -> showConstr . toConstr $ t) Note that showConstr is just an alias to the record selector for the Data.Data.Constr, so it is fast and returns a constant string. In this scenario I am not sure that there is a benefit to splitting the ApiAnn type into multiple separate ones. Also, it only relies on the AST being an instance of Data, which already holds. On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell wrote: > I was getting a bit lost between the idea and the implementation. Let > me try rephrasing the idea in my own words. > > The goal: Capture inner source spans in AST syntax nodes. At the > moment if ... then ... else ... captures the spans [if [...] then > [...] else [...]]. We want to capture the spans for each keyword as > well, so: [{if} [...] {then} [...] {else} [...]]. > > The proposal: Rather than add anything to the AST, have a separate > mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan > from the IfThenElse node, and some token for the IfThenElse > constructor, and get back a list of IfThenElse for the particular > keyword. > > I like the proposal because it adds nothing inside the AST, and > requires no fresh invariants of the AST. I dislike it because the > contents of that separate mapping are highly tied up with the AST, and > easy to get out of sync. 
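As a tiny standalone illustration of what the gconname helper above computes, here it is applied to a made-up toy expression type rather than GHC's real AST; Expr and conName are hypothetical names used only for this sketch.

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Data (Data, Typeable, toConstr, showConstr)

    -- A toy AST standing in for GHC's HsSyn in this example.
    data Expr = LetIn String Expr Expr
              | IfThenElse Expr Expr Expr
              | Var String
              deriving (Data, Typeable)

    -- The constructor-name part of the annotation key, as in gconname above.
    conName :: Data a => a -> String
    conName = showConstr . toConstr

    main :: IO ()
    main = do
      putStrLn (conName (Var "x"))                        -- prints "Var"
      putStrLn (conName (LetIn "x" (Var "e") (Var "b")))  -- prints "LetIn"

Because toConstr only inspects the outermost constructor, the key string is computed in constant time regardless of how large the node underneath it is.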
I think it's the right choice for three > reasons, 1) it is easier to try out and doesn't break the AST, so we > have more scope for changing our minds later; 2) the same technique is > able to represent things other than SrcSpan without introducing a > polymorphic src span; 3) the people who pay the complexity are the > people who use it, which is relatively few people. > > That said, as a tweak to the API, rather than a single data type for > all annotations, you could have: > > data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} > data AnnDo = AnnDo {posDo :: SrcSpan} > > Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, > with the invariant that the TypeRep in the key matches the Dynamic. > Then you can have: getAnnotation :: Typeable a => Annotations -> > SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery > you are engaging in with mkAnnKey. > > Thanks, Neil > > On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones > wrote: > > Let me urge you, once more, to consult some actual heavy-duty users of > these > > proposed facilities. I am very keen to avoid investing design and > > implementation effort in facilities that may not meet the need. > > > > > > > > If they end up acclaiming the node-key idea, then we should surely simply > > make the key an abstract type, simply an instance of Hashable, Ord, etc. > > > > > > > > Simon > > > > > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > > Sent: 30 September 2014 19:48 > > To: Simon Peyton Jones > > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > > > > > > Subject: Re: Feedback request for #9628 AST Annotations > > > > > > > > On further reflection of the goals for the annotation, I would like to > put > > forward the following proposal for comment > > > > > > Instead of physically placing a "node-key" in each AST Node, a virtual > > node key can be generated from any `GenLocated SrcSpan e' comprising a > > combination of the `SrcSpan` value and a unique identifier from the > > constructor for `e`, perhaps using its `TypeRep`, since the entire AST > > derives Typeable. > > > > To further reduce the intrusiveness, a base Annotation type can be > > defined that captures the location of noise tokens for each AST > > constructor. This can then be emitted from the parser, if the > > appropriate flag is set to enable it. > > > > So > > > > data ApiAnnKey = AK SrcSpan TypeRep > > > > mkApiAnnKey :: (Located e) -> ApiAnnKey > > mkApiAnnKey = ... > > > > data Ann = > > .... > > | AnnHsLet SrcSpan -- of the word "let" > > SrcSpan -- of the word "in" > > > > | AnnHsDo SrcSpan -- of the word "do" > > > > And then in the parser > > > > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc > $2) > > $4) } > > > > The helper is > > > > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P (LHsExpr > > RdrName) > > mkAnnHsLet (L l_let _) (L l_in _) e = do > > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) > > return e; > > > > The Parse Monad would have to accumulate the annotations to be > > returned at the end, if called with the appropriate flag. > > > > There will be some boilerplate in getting the annotations and helper > > functions defined, but it will not pollute the rest. > > > > This technique can also potentially be backported to support older GHC > > versions via a modification to ghc-parser. 
> > > > https://hackage.haskell.org/package/ghc-parser > > > > Regards > > > > Alan > > > > > > > > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> > > wrote: > > > > I tend to agree that this change is much too intrusive for what it > attempts > > to do. > > > > I think the concept of a node key could be workable, and ties in to the > > approach I am taking in ghc-exactprint [1], which uses a SrcSpan together > > with node type as the annotation key. > > > > [1] https://github.com/alanz/ghc-exactprint > > > > > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> > > wrote: > > > > I'm anxious about it being too big a change too. > > > > I'd be up for it if we had several "customers" all saying "yes, this is > > precisely what we need to make our usage of the GHC API far far easier". > > With enough detail so we can understand their use-case. > > > > Otherwise I worry that we might go to a lot of effort to solve the wrong > > problem; or to build a solution that does not, in the end, work for the > > actual use-case. > > > > Another way to tackle this would be to ensure that syntax tree nodes > have a > > "node-key" (a bit like their source location) that clients could use in a > > finite map, to map node-key to values of their choice. > > > > I have not reviewed your patch in detail, but it's uncomfortable that the > > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > > > Ditto DynFlags/HscEnv, though I think here that you are right that the > > "hooks" interface is very crucial. After all, the WHOLE POINT is too > make > > the client interface more flexible. I would consult Luite and Edsko, who > > were instrumental in designing the new hooks interface > > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > > (I'm not sure if that page is up to date, but I hope so) > > > > A good way to proceed might be to identify some of the big users of the > GHC > > API (I'm sure I don't know them all), discuss with them what would help > > them, and share the results on a wiki page. > > > > Simon > > > > > > | -----Original Message----- > > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > > | Richard Eisenberg > > | Sent: 30 September 2014 03:04 > > | To: Edward Z. Yang > > | Cc: ghc-devs at haskell.org > > | Subject: Re: Feedback request for #9628 AST Annotations > > | > > | I'm only speaking up because Alan is specifically requesting feedback: > > | I'm really ambivalent about this. I agree with Edward that this is a > > | big change and adds permanent noise in a lot of places. But, I also > > | really respect the goal here -- better tool support. Is it worthwhile > > | to do this using a dynamically typed bit (using Typeable and such), > > | which would avoid the noise? Maybe. > > | > > | What do other languages do? Do we know what, say, Agda does to get > > | such tight coupling with an editor? Does, say, Eclipse have such a > > | chummy relationship with a Java compiler to do its refactoring, or is > > | that separately implemented? Haskell/GHC is not the first project to > > | have this problem, and there's plenty of solutions out there. And, > > | unlike most other times, I don't think Haskell is exceptional in this > > | regard (there's nothing very special about Haskell's AST, maybe beyond > > | indentation-awareness), so we can probably adopt other solutions > > | nicely. > > | > > | Richard > > | > > | On Sep 29, 2014, at 8:58 PM, "Edward Z. 
Yang" wrote: > > | > > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > > | -0700: > > | >> 1. Is this change too big, should I scale it back to just update > > | the > > | >> HsSyn structures and then lock it down to Located SrcSpan for all > > | >> the rest? > > | > > > | > I don't claim to speak for the rest of the GHC developers, but I > > | think > > | > this change is too big. I am almost tempted to say that we > > | shouldn't > > | > add the type parameter at all, and do something else (maybe Backpack > > | > can let us extend SrcSpan in a modular way, or even use a > > | dynamically > > | > typed map for annotations.) > > | > > > | > Edward > > | > _______________________________________________ > > | > ghc-devs mailing list > > | > ghc-devs at haskell.org > > | > http://www.haskell.org/mailman/listinfo/ghc-devs > > | > > | _______________________________________________ > > | ghc-devs mailing list > > | ghc-devs at haskell.org > > | http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndmitchell at gmail.com Wed Oct 1 20:05:39 2014 From: ndmitchell at gmail.com (Neil Mitchell) Date: Wed, 1 Oct 2014 21:05:39 +0100 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: > I looked into your proposed change in more detail, and I think it is flawed > because it is trying to map the annotation back to itself. Flawed because it is no better, or flawed because it won't work? > In this scenario I am not sure that there is a benefit to splitting the > ApiAnn type into multiple separate ones. Imagine you are traversing the syntax tree and looking at each constructor. With your proposal you have a LetIn node in your hand. You now grab an annotation (which may return Nothing), then you have to pattern match on the annotation to check you have a AnnLetIn node. With my proposal you have the LetIn, then you try and grab an AnnLetIn, which either returns Nothing or Just, and if it returns Just you know you have the right thing. One less dynamic value test, so a bit more safety. That said, I'm willing to believe there is some level of generic-ness that is easier to leverage with the single annotation, so I'm not convinced my proposal is necessarily a good idea. > Also, it only relies on the AST being an instance of Data, which already > holds. Mine only relies on the annotation types being an instance of Typeable, which is far less burdensome (although somewhat irrelevant, since both criteria will be met). Thanks, Neil > On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell wrote: >> >> I was getting a bit lost between the idea and the implementation. Let >> me try rephrasing the idea in my own words. >> >> The goal: Capture inner source spans in AST syntax nodes. At the >> moment if ... then ... else ... captures the spans [if [...] then >> [...] else [...]]. We want to capture the spans for each keyword as >> well, so: [{if} [...] {then} [...] 
{else} [...]]. >> >> The proposal: Rather than add anything to the AST, have a separate >> mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan >> from the IfThenElse node, and some token for the IfThenElse >> constructor, and get back a list of IfThenElse for the particular >> keyword. >> >> I like the proposal because it adds nothing inside the AST, and >> requires no fresh invariants of the AST. I dislike it because the >> contents of that separate mapping are highly tied up with the AST, and >> easy to get out of sync. I think it's the right choice for three >> reasons, 1) it is easier to try out and doesn't break the AST, so we >> have more scope for changing our minds later; 2) the same technique is >> able to represent things other than SrcSpan without introducing a >> polymorphic src span; 3) the people who pay the complexity are the >> people who use it, which is relatively few people. >> >> That said, as a tweak to the API, rather than a single data type for >> all annotations, you could have: >> >> data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} >> data AnnDo = AnnDo {posDo :: SrcSpan} >> >> Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, >> with the invariant that the TypeRep in the key matches the Dynamic. >> Then you can have: getAnnotation :: Typeable a => Annotations -> >> SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery >> you are engaging in with mkAnnKey. >> >> Thanks, Neil >> >> On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones >> wrote: >> > Let me urge you, once more, to consult some actual heavy-duty users of >> > these >> > proposed facilities. I am very keen to avoid investing design and >> > implementation effort in facilities that may not meet the need. >> > >> > >> > >> > If they end up acclaiming the node-key idea, then we should surely >> > simply >> > make the key an abstract type, simply an instance of Hashable, Ord, etc. >> > >> > >> > >> > Simon >> > >> > >> > >> > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >> > Sent: 30 September 2014 19:48 >> > To: Simon Peyton Jones >> > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org >> > >> > >> > Subject: Re: Feedback request for #9628 AST Annotations >> > >> > >> > >> > On further reflection of the goals for the annotation, I would like to >> > put >> > forward the following proposal for comment >> > >> > >> > Instead of physically placing a "node-key" in each AST Node, a virtual >> > node key can be generated from any `GenLocated SrcSpan e' comprising a >> > combination of the `SrcSpan` value and a unique identifier from the >> > constructor for `e`, perhaps using its `TypeRep`, since the entire AST >> > derives Typeable. >> > >> > To further reduce the intrusiveness, a base Annotation type can be >> > defined that captures the location of noise tokens for each AST >> > constructor. This can then be emitted from the parser, if the >> > appropriate flag is set to enable it. >> > >> > So >> > >> > data ApiAnnKey = AK SrcSpan TypeRep >> > >> > mkApiAnnKey :: (Located e) -> ApiAnnKey >> > mkApiAnnKey = ... >> > >> > data Ann = >> > .... 
>> > | AnnHsLet SrcSpan -- of the word "let" >> > SrcSpan -- of the word "in" >> > >> > | AnnHsDo SrcSpan -- of the word "do" >> > >> > And then in the parser >> > >> > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc >> > $2) >> > $4) } >> > >> > The helper is >> > >> > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P >> > (LHsExpr >> > RdrName) >> > mkAnnHsLet (L l_let _) (L l_in _) e = do >> > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) >> > return e; >> > >> > The Parse Monad would have to accumulate the annotations to be >> > returned at the end, if called with the appropriate flag. >> > >> > There will be some boilerplate in getting the annotations and helper >> > functions defined, but it will not pollute the rest. >> > >> > This technique can also potentially be backported to support older GHC >> > versions via a modification to ghc-parser. >> > >> > https://hackage.haskell.org/package/ghc-parser >> > >> > Regards >> > >> > Alan >> > >> > >> > >> > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman >> > >> > wrote: >> > >> > I tend to agree that this change is much too intrusive for what it >> > attempts >> > to do. >> > >> > I think the concept of a node key could be workable, and ties in to the >> > approach I am taking in ghc-exactprint [1], which uses a SrcSpan >> > together >> > with node type as the annotation key. >> > >> > [1] https://github.com/alanz/ghc-exactprint >> > >> > >> > >> > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones >> > >> > wrote: >> > >> > I'm anxious about it being too big a change too. >> > >> > I'd be up for it if we had several "customers" all saying "yes, this is >> > precisely what we need to make our usage of the GHC API far far easier". >> > With enough detail so we can understand their use-case. >> > >> > Otherwise I worry that we might go to a lot of effort to solve the wrong >> > problem; or to build a solution that does not, in the end, work for the >> > actual use-case. >> > >> > Another way to tackle this would be to ensure that syntax tree nodes >> > have a >> > "node-key" (a bit like their source location) that clients could use in >> > a >> > finite map, to map node-key to values of their choice. >> > >> > I have not reviewed your patch in detail, but it's uncomfortable that >> > the >> > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. >> > >> > Ditto DynFlags/HscEnv, though I think here that you are right that the >> > "hooks" interface is very crucial. After all, the WHOLE POINT is too >> > make >> > the client interface more flexible. I would consult Luite and Edsko, who >> > were instrumental in designing the new hooks interface >> > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks >> > (I'm not sure if that page is up to date, but I hope so) >> > >> > A good way to proceed might be to identify some of the big users of the >> > GHC >> > API (I'm sure I don't know them all), discuss with them what would help >> > them, and share the results on a wiki page. >> > >> > Simon >> > >> > >> > | -----Original Message----- >> > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> > | Richard Eisenberg >> > | Sent: 30 September 2014 03:04 >> > | To: Edward Z. Yang >> > | Cc: ghc-devs at haskell.org >> > | Subject: Re: Feedback request for #9628 AST Annotations >> > | >> > | I'm only speaking up because Alan is specifically requesting >> > feedback: >> > | I'm really ambivalent about this. 
I agree with Edward that this is a >> > | big change and adds permanent noise in a lot of places. But, I also >> > | really respect the goal here -- better tool support. Is it worthwhile >> > | to do this using a dynamically typed bit (using Typeable and such), >> > | which would avoid the noise? Maybe. >> > | >> > | What do other languages do? Do we know what, say, Agda does to get >> > | such tight coupling with an editor? Does, say, Eclipse have such a >> > | chummy relationship with a Java compiler to do its refactoring, or is >> > | that separately implemented? Haskell/GHC is not the first project to >> > | have this problem, and there's plenty of solutions out there. And, >> > | unlike most other times, I don't think Haskell is exceptional in this >> > | regard (there's nothing very special about Haskell's AST, maybe >> > beyond >> > | indentation-awareness), so we can probably adopt other solutions >> > | nicely. >> > | >> > | Richard >> > | >> > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: >> > | >> > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 >> > | -0700: >> > | >> 1. Is this change too big, should I scale it back to just update >> > | the >> > | >> HsSyn structures and then lock it down to Located SrcSpan for >> > all >> > | >> the rest? >> > | > >> > | > I don't claim to speak for the rest of the GHC developers, but I >> > | think >> > | > this change is too big. I am almost tempted to say that we >> > | shouldn't >> > | > add the type parameter at all, and do something else (maybe >> > Backpack >> > | > can let us extend SrcSpan in a modular way, or even use a >> > | dynamically >> > | > typed map for annotations.) >> > | > >> > | > Edward >> > | > _______________________________________________ >> > | > ghc-devs mailing list >> > | > ghc-devs at haskell.org >> > | > http://www.haskell.org/mailman/listinfo/ghc-devs >> > | >> > | _______________________________________________ >> > | ghc-devs mailing list >> > | ghc-devs at haskell.org >> > | http://www.haskell.org/mailman/listinfo/ghc-devs >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> > >> > >> > >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > From alan.zimm at gmail.com Wed Oct 1 20:16:26 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 1 Oct 2014 22:16:26 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: > I looked into your proposed change in more detail, and I think it is flawed > > because it is trying to map the annotation back to itself. > > Flawed because it is no better, or flawed because it won't work? > > I am not sure that I understand your proposal correctly, but I interpret the requirement to map the Dynamic type to the TypeRep of the constructor meaning some kind of separate linkage between the Constructor and the specific annotation type. > > In this scenario I am not sure that there is a benefit to splitting the > > ApiAnn type into multiple separate ones. > > Imagine you are traversing the syntax tree and looking at each > constructor. 
With your proposal you have a LetIn node in your hand. > You now grab an annotation (which may return Nothing), then you have > to pattern match on the annotation to check you have a AnnLetIn node. > With my proposal you have the LetIn, then you try and grab an > AnnLetIn, which either returns Nothing or Just, and if it returns Just > you know you have the right thing. One less dynamic value test, so a > bit more safety. > > This is a very good reason to break it into separate types. And then the reason for the Dynamic becomes clear. > That said, I'm willing to believe there is some level of generic-ness > that is easier to leverage with the single annotation, so I'm not > convinced my proposal is necessarily a good idea. > > > Also, it only relies on the AST being an instance of Data, which already > > holds. > > Mine only relies on the annotation types being an instance of > Typeable, which is far less burdensome (although somewhat irrelevant, > since both criteria will be met). > > Thanks, Neil > > > > On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell > wrote: > >> > >> I was getting a bit lost between the idea and the implementation. Let > >> me try rephrasing the idea in my own words. > >> > >> The goal: Capture inner source spans in AST syntax nodes. At the > >> moment if ... then ... else ... captures the spans [if [...] then > >> [...] else [...]]. We want to capture the spans for each keyword as > >> well, so: [{if} [...] {then} [...] {else} [...]]. > >> > >> The proposal: Rather than add anything to the AST, have a separate > >> mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan > >> from the IfThenElse node, and some token for the IfThenElse > >> constructor, and get back a list of IfThenElse for the particular > >> keyword. > >> > >> I like the proposal because it adds nothing inside the AST, and > >> requires no fresh invariants of the AST. I dislike it because the > >> contents of that separate mapping are highly tied up with the AST, and > >> easy to get out of sync. I think it's the right choice for three > >> reasons, 1) it is easier to try out and doesn't break the AST, so we > >> have more scope for changing our minds later; 2) the same technique is > >> able to represent things other than SrcSpan without introducing a > >> polymorphic src span; 3) the people who pay the complexity are the > >> people who use it, which is relatively few people. > >> > >> That said, as a tweak to the API, rather than a single data type for > >> all annotations, you could have: > >> > >> data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} > >> data AnnDo = AnnDo {posDo :: SrcSpan} > >> > >> Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, > >> with the invariant that the TypeRep in the key matches the Dynamic. > >> Then you can have: getAnnotation :: Typeable a => Annotations -> > >> SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery > >> you are engaging in with mkAnnKey. > >> > >> Thanks, Neil > >> > >> On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones > >> wrote: > >> > Let me urge you, once more, to consult some actual heavy-duty users of > >> > these > >> > proposed facilities. I am very keen to avoid investing design and > >> > implementation effort in facilities that may not meet the need. > >> > > >> > > >> > > >> > If they end up acclaiming the node-key idea, then we should surely > >> > simply > >> > make the key an abstract type, simply an instance of Hashable, Ord, > etc. 
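For readers weighing the per-constructor-annotation variant discussed above, here is a minimal sketch of a Typeable/Dynamic-keyed map in the spirit of Neil's Map (SrcSpan, TypeRep) Dynamic suggestion; AnnIfThenElse and AnnDo are taken from his example, while FakeSpan, Annotations and the two helpers are invented for the sketch and are not GHC API.

    {-# LANGUAGE ScopedTypeVariables, DeriveDataTypeable #-}
    import Data.Dynamic (Dynamic, toDyn, fromDynamic)
    import Data.Typeable (Typeable, TypeRep, typeOf, typeRep)
    import Data.Proxy (Proxy(..))
    import qualified Data.Map as Map

    type FakeSpan = (Int, Int, Int, Int)   -- stand-in for GHC's SrcSpan

    data AnnIfThenElse = AnnIfThenElse { posIf, posThen, posElse :: FakeSpan }
      deriving (Show, Typeable)
    data AnnDo = AnnDo { posDo :: FakeSpan }
      deriving (Show, Typeable)

    -- Invariant: the TypeRep in the key matches the type inside the Dynamic.
    newtype Annotations = Annotations (Map.Map (FakeSpan, TypeRep) Dynamic)

    addAnnotation :: Typeable a => FakeSpan -> a -> Annotations -> Annotations
    addAnnotation sp ann (Annotations m) =
      Annotations (Map.insert (sp, typeOf ann) (toDyn ann) m)

    getAnnotation :: forall a. Typeable a => Annotations -> FakeSpan -> Maybe a
    getAnnotation (Annotations m) sp =
      Map.lookup (sp, typeRep (Proxy :: Proxy a)) m >>= fromDynamic

    main :: IO ()
    main = do
      let anns = addAnnotation (1,1,1,10) (AnnDo (1,1,1,2)) (Annotations Map.empty)
      print (getAnnotation anns (1,1,1,10) :: Maybe AnnDo)          -- Just (AnnDo ...)
      print (getAnnotation anns (1,1,1,10) :: Maybe AnnIfThenElse)  -- Nothing

The last two lines of main show the type-safety point made above: asking for the wrong annotation type at a span simply yields Nothing, with no extra pattern match on a shared Ann sum type.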
> >> > > >> > > >> > > >> > Simon > >> > > >> > > >> > > >> > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > >> > Sent: 30 September 2014 19:48 > >> > To: Simon Peyton Jones > >> > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > >> > > >> > > >> > Subject: Re: Feedback request for #9628 AST Annotations > >> > > >> > > >> > > >> > On further reflection of the goals for the annotation, I would like to > >> > put > >> > forward the following proposal for comment > >> > > >> > > >> > Instead of physically placing a "node-key" in each AST Node, a virtual > >> > node key can be generated from any `GenLocated SrcSpan e' comprising a > >> > combination of the `SrcSpan` value and a unique identifier from the > >> > constructor for `e`, perhaps using its `TypeRep`, since the entire AST > >> > derives Typeable. > >> > > >> > To further reduce the intrusiveness, a base Annotation type can be > >> > defined that captures the location of noise tokens for each AST > >> > constructor. This can then be emitted from the parser, if the > >> > appropriate flag is set to enable it. > >> > > >> > So > >> > > >> > data ApiAnnKey = AK SrcSpan TypeRep > >> > > >> > mkApiAnnKey :: (Located e) -> ApiAnnKey > >> > mkApiAnnKey = ... > >> > > >> > data Ann = > >> > .... > >> > | AnnHsLet SrcSpan -- of the word "let" > >> > SrcSpan -- of the word "in" > >> > > >> > | AnnHsDo SrcSpan -- of the word "do" > >> > > >> > And then in the parser > >> > > >> > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc > >> > $2) > >> > $4) } > >> > > >> > The helper is > >> > > >> > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P > >> > (LHsExpr > >> > RdrName) > >> > mkAnnHsLet (L l_let _) (L l_in _) e = do > >> > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) > >> > return e; > >> > > >> > The Parse Monad would have to accumulate the annotations to be > >> > returned at the end, if called with the appropriate flag. > >> > > >> > There will be some boilerplate in getting the annotations and helper > >> > functions defined, but it will not pollute the rest. > >> > > >> > This technique can also potentially be backported to support older GHC > >> > versions via a modification to ghc-parser. > >> > > >> > https://hackage.haskell.org/package/ghc-parser > >> > > >> > Regards > >> > > >> > Alan > >> > > >> > > >> > > >> > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman > >> > > >> > wrote: > >> > > >> > I tend to agree that this change is much too intrusive for what it > >> > attempts > >> > to do. > >> > > >> > I think the concept of a node key could be workable, and ties in to > the > >> > approach I am taking in ghc-exactprint [1], which uses a SrcSpan > >> > together > >> > with node type as the annotation key. > >> > > >> > [1] https://github.com/alanz/ghc-exactprint > >> > > >> > > >> > > >> > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones > >> > > >> > wrote: > >> > > >> > I'm anxious about it being too big a change too. > >> > > >> > I'd be up for it if we had several "customers" all saying "yes, this > is > >> > precisely what we need to make our usage of the GHC API far far > easier". > >> > With enough detail so we can understand their use-case. > >> > > >> > Otherwise I worry that we might go to a lot of effort to solve the > wrong > >> > problem; or to build a solution that does not, in the end, work for > the > >> > actual use-case. 
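To illustrate only the accumulate-in-the-parser-monad part of the proposal quoted above, here is a toy sketch that uses State (from the transformers package) in place of GHC's actual P monad; ToyP, Ann, FakeSpan and mkAnnHsLetLike are stand-ins for this sketch, not GHC definitions.

    import qualified Data.Map as Map
    import Control.Monad.Trans.State (State, runState, modify)

    type FakeSpan = (Int, Int, Int, Int)   -- stand-in for GHC's SrcSpan

    data Ann = AnnHsLet FakeSpan FakeSpan  -- spans of the "let" and "in" keywords
             | AnnHsDo FakeSpan            -- span of the "do" keyword
             deriving (Show)

    -- A toy stand-in for GHC's P monad that only accumulates annotations,
    -- keyed here by the span of the annotated node.
    type ToyP = State (Map.Map FakeSpan Ann)

    addAnnotation :: FakeSpan -> Ann -> ToyP ()
    addAnnotation key ann = modify (Map.insert key ann)

    -- Roughly what a helper like mkAnnHsLet would do with the keyword spans.
    mkAnnHsLetLike :: FakeSpan -> FakeSpan -> FakeSpan -> e -> ToyP e
    mkAnnHsLetLike lLet lIn nodeSpan e = do
      addAnnotation nodeSpan (AnnHsLet lLet lIn)
      return e

    -- Run the "parser" and hand back both the result and the annotation map.
    runToyP :: ToyP a -> (a, Map.Map FakeSpan Ann)
    runToyP p = runState p Map.empty

In the real proposal the accumulation would happen inside P and the map would only be returned when the appropriate flag is set; the sketch just shows where the keyword spans get recorded relative to the production's result.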
> >> > > >> > Another way to tackle this would be to ensure that syntax tree nodes > >> > have a > >> > "node-key" (a bit like their source location) that clients could use > in > >> > a > >> > finite map, to map node-key to values of their choice. > >> > > >> > I have not reviewed your patch in detail, but it's uncomfortable that > >> > the > >> > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > >> > > >> > Ditto DynFlags/HscEnv, though I think here that you are right that the > >> > "hooks" interface is very crucial. After all, the WHOLE POINT is too > >> > make > >> > the client interface more flexible. I would consult Luite and Edsko, > who > >> > were instrumental in designing the new hooks interface > >> > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > >> > (I'm not sure if that page is up to date, but I hope so) > >> > > >> > A good way to proceed might be to identify some of the big users of > the > >> > GHC > >> > API (I'm sure I don't know them all), discuss with them what would > help > >> > them, and share the results on a wiki page. > >> > > >> > Simon > >> > > >> > > >> > | -----Original Message----- > >> > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > >> > | Richard Eisenberg > >> > | Sent: 30 September 2014 03:04 > >> > | To: Edward Z. Yang > >> > | Cc: ghc-devs at haskell.org > >> > | Subject: Re: Feedback request for #9628 AST Annotations > >> > | > >> > | I'm only speaking up because Alan is specifically requesting > >> > feedback: > >> > | I'm really ambivalent about this. I agree with Edward that this is > a > >> > | big change and adds permanent noise in a lot of places. But, I also > >> > | really respect the goal here -- better tool support. Is it > worthwhile > >> > | to do this using a dynamically typed bit (using Typeable and such), > >> > | which would avoid the noise? Maybe. > >> > | > >> > | What do other languages do? Do we know what, say, Agda does to get > >> > | such tight coupling with an editor? Does, say, Eclipse have such a > >> > | chummy relationship with a Java compiler to do its refactoring, or > is > >> > | that separately implemented? Haskell/GHC is not the first project > to > >> > | have this problem, and there's plenty of solutions out there. And, > >> > | unlike most other times, I don't think Haskell is exceptional in > this > >> > | regard (there's nothing very special about Haskell's AST, maybe > >> > beyond > >> > | indentation-awareness), so we can probably adopt other solutions > >> > | nicely. > >> > | > >> > | Richard > >> > | > >> > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" > wrote: > >> > | > >> > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 > 13:38:45 > >> > | -0700: > >> > | >> 1. Is this change too big, should I scale it back to just update > >> > | the > >> > | >> HsSyn structures and then lock it down to Located SrcSpan for > >> > all > >> > | >> the rest? > >> > | > > >> > | > I don't claim to speak for the rest of the GHC developers, but I > >> > | think > >> > | > this change is too big. I am almost tempted to say that we > >> > | shouldn't > >> > | > add the type parameter at all, and do something else (maybe > >> > Backpack > >> > | > can let us extend SrcSpan in a modular way, or even use a > >> > | dynamically > >> > | > typed map for annotations.) 
> >> > | > > >> > | > Edward > >> > | > _______________________________________________ > >> > | > ghc-devs mailing list > >> > | > ghc-devs at haskell.org > >> > | > http://www.haskell.org/mailman/listinfo/ghc-devs > >> > | > >> > | _______________________________________________ > >> > | ghc-devs mailing list > >> > | ghc-devs at haskell.org > >> > | http://www.haskell.org/mailman/listinfo/ghc-devs > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > >> > > >> > > >> > > >> > > >> > > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndmitchell at gmail.com Wed Oct 1 20:23:55 2014 From: ndmitchell at gmail.com (Neil Mitchell) Date: Wed, 1 Oct 2014 21:23:55 +0100 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: > I am not sure that I understand your proposal correctly, but I interpret the > requirement > to map the Dynamic type to the TypeRep of the constructor meaning some kind > of separate > linkage between the Constructor and the specific annotation type. The key is that there is no linkage from the constructor. Instead of putting ((srcspan,"LetIn"), AnnLetIn ... :: Ann) into the map we put (srcspan, AnnLetIn ... :: AnnLetIn) in the map. The constructor is implicitly encoded by the type of the annotation. > This is a very good reason to break it into separate types. And then the > reason > for the Dynamic becomes clear. The whole TypeRep/Dynamic thing is just a nice way to encode Map with multiple value types that don't tread on each other, it's not a detail the user of the API would ever see. > > >> >> That said, I'm willing to believe there is some level of generic-ness >> that is easier to leverage with the single annotation, so I'm not >> convinced my proposal is necessarily a good idea. >> >> > Also, it only relies on the AST being an instance of Data, which already >> > holds. >> >> Mine only relies on the annotation types being an instance of >> Typeable, which is far less burdensome (although somewhat irrelevant, >> since both criteria will be met). >> >> Thanks, Neil >> >> >> > On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell >> > wrote: >> >> >> >> I was getting a bit lost between the idea and the implementation. Let >> >> me try rephrasing the idea in my own words. >> >> >> >> The goal: Capture inner source spans in AST syntax nodes. At the >> >> moment if ... then ... else ... captures the spans [if [...] then >> >> [...] else [...]]. We want to capture the spans for each keyword as >> >> well, so: [{if} [...] {then} [...] {else} [...]]. >> >> >> >> The proposal: Rather than add anything to the AST, have a separate >> >> mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan >> >> from the IfThenElse node, and some token for the IfThenElse >> >> constructor, and get back a list of IfThenElse for the particular >> >> keyword. >> >> >> >> I like the proposal because it adds nothing inside the AST, and >> >> requires no fresh invariants of the AST. 
I dislike it because the >> >> contents of that separate mapping are highly tied up with the AST, and >> >> easy to get out of sync. I think it's the right choice for three >> >> reasons, 1) it is easier to try out and doesn't break the AST, so we >> >> have more scope for changing our minds later; 2) the same technique is >> >> able to represent things other than SrcSpan without introducing a >> >> polymorphic src span; 3) the people who pay the complexity are the >> >> people who use it, which is relatively few people. >> >> >> >> That said, as a tweak to the API, rather than a single data type for >> >> all annotations, you could have: >> >> >> >> data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} >> >> data AnnDo = AnnDo {posDo :: SrcSpan} >> >> >> >> Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, >> >> with the invariant that the TypeRep in the key matches the Dynamic. >> >> Then you can have: getAnnotation :: Typeable a => Annotations -> >> >> SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery >> >> you are engaging in with mkAnnKey. >> >> >> >> Thanks, Neil >> >> >> >> On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones >> >> wrote: >> >> > Let me urge you, once more, to consult some actual heavy-duty users >> >> > of >> >> > these >> >> > proposed facilities. I am very keen to avoid investing design and >> >> > implementation effort in facilities that may not meet the need. >> >> > >> >> > >> >> > >> >> > If they end up acclaiming the node-key idea, then we should surely >> >> > simply >> >> > make the key an abstract type, simply an instance of Hashable, Ord, >> >> > etc. >> >> > >> >> > >> >> > >> >> > Simon >> >> > >> >> > >> >> > >> >> > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >> >> > Sent: 30 September 2014 19:48 >> >> > To: Simon Peyton Jones >> >> > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org >> >> > >> >> > >> >> > Subject: Re: Feedback request for #9628 AST Annotations >> >> > >> >> > >> >> > >> >> > On further reflection of the goals for the annotation, I would like >> >> > to >> >> > put >> >> > forward the following proposal for comment >> >> > >> >> > >> >> > Instead of physically placing a "node-key" in each AST Node, a >> >> > virtual >> >> > node key can be generated from any `GenLocated SrcSpan e' comprising >> >> > a >> >> > combination of the `SrcSpan` value and a unique identifier from the >> >> > constructor for `e`, perhaps using its `TypeRep`, since the entire >> >> > AST >> >> > derives Typeable. >> >> > >> >> > To further reduce the intrusiveness, a base Annotation type can be >> >> > defined that captures the location of noise tokens for each AST >> >> > constructor. This can then be emitted from the parser, if the >> >> > appropriate flag is set to enable it. >> >> > >> >> > So >> >> > >> >> > data ApiAnnKey = AK SrcSpan TypeRep >> >> > >> >> > mkApiAnnKey :: (Located e) -> ApiAnnKey >> >> > mkApiAnnKey = ... >> >> > >> >> > data Ann = >> >> > .... 
>> >> > | AnnHsLet SrcSpan -- of the word "let" >> >> > SrcSpan -- of the word "in" >> >> > >> >> > | AnnHsDo SrcSpan -- of the word "do" >> >> > >> >> > And then in the parser >> >> > >> >> > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet >> >> > (unLoc >> >> > $2) >> >> > $4) } >> >> > >> >> > The helper is >> >> > >> >> > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P >> >> > (LHsExpr >> >> > RdrName) >> >> > mkAnnHsLet (L l_let _) (L l_in _) e = do >> >> > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) >> >> > return e; >> >> > >> >> > The Parse Monad would have to accumulate the annotations to be >> >> > returned at the end, if called with the appropriate flag. >> >> > >> >> > There will be some boilerplate in getting the annotations and helper >> >> > functions defined, but it will not pollute the rest. >> >> > >> >> > This technique can also potentially be backported to support older >> >> > GHC >> >> > versions via a modification to ghc-parser. >> >> > >> >> > https://hackage.haskell.org/package/ghc-parser >> >> > >> >> > Regards >> >> > >> >> > Alan >> >> > >> >> > >> >> > >> >> > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman >> >> > >> >> > wrote: >> >> > >> >> > I tend to agree that this change is much too intrusive for what it >> >> > attempts >> >> > to do. >> >> > >> >> > I think the concept of a node key could be workable, and ties in to >> >> > the >> >> > approach I am taking in ghc-exactprint [1], which uses a SrcSpan >> >> > together >> >> > with node type as the annotation key. >> >> > >> >> > [1] https://github.com/alanz/ghc-exactprint >> >> > >> >> > >> >> > >> >> > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones >> >> > >> >> > wrote: >> >> > >> >> > I'm anxious about it being too big a change too. >> >> > >> >> > I'd be up for it if we had several "customers" all saying "yes, this >> >> > is >> >> > precisely what we need to make our usage of the GHC API far far >> >> > easier". >> >> > With enough detail so we can understand their use-case. >> >> > >> >> > Otherwise I worry that we might go to a lot of effort to solve the >> >> > wrong >> >> > problem; or to build a solution that does not, in the end, work for >> >> > the >> >> > actual use-case. >> >> > >> >> > Another way to tackle this would be to ensure that syntax tree nodes >> >> > have a >> >> > "node-key" (a bit like their source location) that clients could use >> >> > in >> >> > a >> >> > finite map, to map node-key to values of their choice. >> >> > >> >> > I have not reviewed your patch in detail, but it's uncomfortable that >> >> > the >> >> > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. >> >> > >> >> > Ditto DynFlags/HscEnv, though I think here that you are right that >> >> > the >> >> > "hooks" interface is very crucial. After all, the WHOLE POINT is too >> >> > make >> >> > the client interface more flexible. I would consult Luite and Edsko, >> >> > who >> >> > were instrumental in designing the new hooks interface >> >> > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks >> >> > (I'm not sure if that page is up to date, but I hope so) >> >> > >> >> > A good way to proceed might be to identify some of the big users of >> >> > the >> >> > GHC >> >> > API (I'm sure I don't know them all), discuss with them what would >> >> > help >> >> > them, and share the results on a wiki page. 
>> >> > >> >> > Simon >> >> > >> >> > >> >> > | -----Original Message----- >> >> > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> >> > | Richard Eisenberg >> >> > | Sent: 30 September 2014 03:04 >> >> > | To: Edward Z. Yang >> >> > | Cc: ghc-devs at haskell.org >> >> > | Subject: Re: Feedback request for #9628 AST Annotations >> >> > | >> >> > | I'm only speaking up because Alan is specifically requesting >> >> > feedback: >> >> > | I'm really ambivalent about this. I agree with Edward that this is >> >> > a >> >> > | big change and adds permanent noise in a lot of places. But, I >> >> > also >> >> > | really respect the goal here -- better tool support. Is it >> >> > worthwhile >> >> > | to do this using a dynamically typed bit (using Typeable and >> >> > such), >> >> > | which would avoid the noise? Maybe. >> >> > | >> >> > | What do other languages do? Do we know what, say, Agda does to get >> >> > | such tight coupling with an editor? Does, say, Eclipse have such a >> >> > | chummy relationship with a Java compiler to do its refactoring, or >> >> > is >> >> > | that separately implemented? Haskell/GHC is not the first project >> >> > to >> >> > | have this problem, and there's plenty of solutions out there. And, >> >> > | unlike most other times, I don't think Haskell is exceptional in >> >> > this >> >> > | regard (there's nothing very special about Haskell's AST, maybe >> >> > beyond >> >> > | indentation-awareness), so we can probably adopt other solutions >> >> > | nicely. >> >> > | >> >> > | Richard >> >> > | >> >> > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" >> >> > wrote: >> >> > | >> >> > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 >> >> > 13:38:45 >> >> > | -0700: >> >> > | >> 1. Is this change too big, should I scale it back to just >> >> > update >> >> > | the >> >> > | >> HsSyn structures and then lock it down to Located SrcSpan for >> >> > all >> >> > | >> the rest? >> >> > | > >> >> > | > I don't claim to speak for the rest of the GHC developers, but I >> >> > | think >> >> > | > this change is too big. I am almost tempted to say that we >> >> > | shouldn't >> >> > | > add the type parameter at all, and do something else (maybe >> >> > Backpack >> >> > | > can let us extend SrcSpan in a modular way, or even use a >> >> > | dynamically >> >> > | > typed map for annotations.) 
>> >> > | > >> >> > | > Edward >> >> > | > _______________________________________________ >> >> > | > ghc-devs mailing list >> >> > | > ghc-devs at haskell.org >> >> > | > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > | >> >> > | _______________________________________________ >> >> > | ghc-devs mailing list >> >> > | ghc-devs at haskell.org >> >> > | http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ >> >> > ghc-devs mailing list >> >> > ghc-devs at haskell.org >> >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > _______________________________________________ >> >> > ghc-devs mailing list >> >> > ghc-devs at haskell.org >> >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > >> > >> > > > From jwlato at gmail.com Thu Oct 2 01:07:55 2014 From: jwlato at gmail.com (John Lato) Date: Thu, 2 Oct 2014 09:07:55 +0800 Subject: Build time regressions In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223E01E@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412110484-sup-5631@sabre> <618BE556AADD624C9C918AA5D5911BEF2223E01E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Simon, Thanks for replying. Unfortunately the field in question wasn't being unpacked, so there's something else going on. But there's a decent chance Richard has already fixed the issue; I'll check and report back if the problem persists. Unfortunately it may take me a couple days before I have time to investigate fully. However, I agree with your suggestion that GHC should not unpack wide strict constructors without an explicit UNPACK pragma. John On Wed, Oct 1, 2014 at 4:57 PM, Simon Peyton Jones wrote: > It sounds as if there are two issues here: > > > > ? *Should GHC unpack a !?d constructor argument if the > constructor?s argument has a lot of fields? *It probably isn?t > profitable to unbox very large products, because it doesn?t save much > allocation, and might *cause* extra allocation at pattern-match sites. > So I think the answer is yes. I?ll open a ticket. > > > > ? *Is some library (binary? blaze?) creating far too much code in > some circumstances?* I have no idea about this, but it sounds fishy. > Simply creating the large worker function should not make things go bad. > > > > Incidentally, John, using {-# NOUNPACK #-} !Bar would prevent the > unpacking while still allowing the field to be strict. It?s manually > controllable. > > > > Simon > > > > > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *John > Lato > *Sent:* 01 October 2014 00:45 > *To:* Edward Z. Yang > *Cc:* Joachim Breitner; ghc-devs at haskell.org > *Subject:* Re: Build time regressions > > > > Hi Edward, > > > > This is possibly unrelated, but the setup seems almost identical to a very > similar problem we had in some code, i.e. very long compile times (6+ > minutes for 1 module) and excessive memory usage when compiling generic > serialization instances for some data structures. > > > > In our case, I also thought that INLINE functions were the cause of the > problem, but it turns out they were not. We had a nested data structure, > e.g. > > > > > data Foo { fooBar :: !Bar, ... } > > > > with Bar very large (~150 records). > > > > even when we explicitly NOINLINE'd the function that serialized Bar, GHC > still created a very large helper function of the form: > > > > > serialize_foo :: Int# -> Int# -> ... 
> > > > where the arguments were the unboxed fields of the Bar structure, along > with the other fields within Foo. It appears that even though the > serialization function was NOINLINE'd, it simply created a Builder, and > while combining the Builder's ghc saw the full structure. Our serializer > uses blaze, but perhaps Binary's builder is similar enough the same thing > could happen. > > > > Anyway, in our case the fix was to simply remove the bang pattern from the > 'fooBar' record field. Then the serialize_foo function takes a Bar as an > argument and serializes that. I'm not entirely sure why compilation takes > so much longer otherwise. I've tried dumping the output of each simplifier > phase and it clearly gets stuck at a certain point, but I didn't really > debug in much detail so I don't recall the details. > > > > If you think this is related, I can investigate more thoroughly. > > > > Cheers, > > John L. > > > > On Wed, Oct 1, 2014 at 4:54 AM, Edward Z. Yang wrote: > > Hello Joachim, > > This was halfway known, but it sounds like we haven't solved > it completely. > > The beginning of the sordid tale was when Cabal HEAD switched > to using derived binary instances: > https://ghc.haskell.org/trac/ghc/ticket/9583 > > SPJ fixed the infinite loop bug in the simplifier, but apparently > the deriving binary generates a lot of code, meaning a lot of > memory. https://ghc.haskell.org/trac/ghc/ticket/9630 > hvr's fix was specifically to solve this problem. > > But it sounds like it didn't eliminate the regression entirely? > If there's an unrelated regression, we should suss it out. It would > be helpful if someone could revert just the deriving changes, > and see if this reverts the compilation time. > > Edward > > Excerpts from Joachim Breitner's message of 2014-09-30 13:36:27 -0700: > > Hi, > > > > the attached graph shows a noticable increase in build time caused by > > > > Update Cabal submodule & ghc-pkg to use new module re-export types > > author Edward Z. Yang > > > https://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae > > > > and only halfway mitigated by > > > > Update `binary` submodule in an attempt to address #9630 > > author Herbert Valerio Riedel > > > https://git.haskell.org/ghc.git/commit/3ecca02516af5de803e4ff667c8c969c5bffb35f > > > > > > I am not sure if the improvement is related to the regression, but in > > any case: Edward, was such an increase expected by you? If not, can you > > explain it? Can it be avoided? > > > > Or maybe Cabal just became much larger... +38% in allocations when > > running haddock on it seems to confirm this. > > > > Greetings, > > Joachim > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 2 07:49:11 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 2 Oct 2014 07:49:11 +0000 Subject: sync-all Message-ID: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> Herbert says (in bold font) "do not ever use sync-all", in this post http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/. If that's really true, we should either nuke it altogether, or change it to do something correct. The idea that it might "set up your tree in subtly different ways" is alarming. 
Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Thu Oct 2 08:15:37 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Thu, 02 Oct 2014 09:15:37 +0100 Subject: sync-all In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <542D09A9.2050301@fuuzetsu.co.uk> On 10/02/2014 08:49 AM, Simon Peyton Jones wrote: > Herbert says (in bold font) "do not ever use sync-all", in this post http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/. > > If that's really true, we should either nuke it altogether, or change it to do something correct. The idea that it might "set up your tree in subtly different ways" is alarming. > > Simon > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > This is news to me, I always use sync-all get && sync-all pull when I have to? I'd appreciate if such things were announced on lists rather than a comment on reddit somewhere? -- Mateusz K. From hvriedel at gmail.com Thu Oct 2 08:20:38 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 02 Oct 2014 10:20:38 +0200 Subject: sync-all In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Thu, 2 Oct 2014 07:49:11 +0000") References: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87zjdevq5l.fsf@gmail.com> On 2014-10-02 at 09:49:11 +0200, Simon Peyton Jones wrote: > Herbert says (in bold font) "do not ever use sync-all", in this post http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/. > > If that's really true, we should either nuke it altogether, or change > it to do something correct. The idea that it might "set up your tree > in subtly different ways" is alarming. To clarify what I mean by that: For users just wanting to clone ghc.git now and just wanting to keep their tree updated, I'd recommend doing the following one-shot convenience configuration setup (see [1] for more details): git config --global alias.pullall '!f(){ git pull "$@" && git submodule update --init --recursive; }; f' And then clone simply via git clone --recursive http://ghc.haskell.org/ghc.git (or use https:// or git:// or whatever) and to keep the tree simply use git pullall --rebase (`pullall` just saves you from having to call `git pull`+`git submodule update --init` yourself) In contrast, `sync-all` is a multi-step process: 1.) you need to clone ghc.git, 2.) then you have a sync-all, which when called will `git submodule init` (which doesn't yet download the submodules) 3.) rewrites the fetch-urls in each initialised submodule 4.) 
finally calls `git submodule update` to fetch the submodule and checkout the commits registered in ghc.git The difference become apparent when wanting to use github instead; my recommended approach is to use the URL rewriting feature of GitHub which even allows you to easily switch between git.haskell.org and github.com with a single command, or in its simpler form (as described on [2]): Only needed once, and idempotent setup step: git config --global url."git://github.com/ghc/packages-".insteadOf git://github.com/ghc/packages/ And then just as before: git clone --recursive git://github.com/ghc/ghc So long story short; I've mostly got a bad gut feeling recommending to use a 1000-line Perl script http://git.haskell.org/ghc.git/blob/HEAD:/sync-all to accomplish what the 2 `git config` commands and the the day-to-day `git` commands I mentioned in this email can do in a more Git-idiomatic way. So I'm not saying `sync-all` is doing something wrong, but rather that's overly complex for the task at hand, and we've had support issues with `sync-all` due to subtle bugs in it (just check the git history for sync-all to see the tweaks we needed). Moreover, IMHO, if someone's already proficient with Git, telling them to use `sync-all` will rather confuse than help as it's not that transparent what it really does. [1]: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules#UsingaGitalias [2]: https://ghc.haskell.org/trac/ghc/wiki/Newcomers From alexander at plaimi.net Thu Oct 2 08:24:05 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 02 Oct 2014 10:24:05 +0200 Subject: sync-all In-Reply-To: <542D09A9.2050301@fuuzetsu.co.uk> References: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> <542D09A9.2050301@fuuzetsu.co.uk> Message-ID: <542D0BA5.6030401@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Herbert, you are still not making the case for why we are keeping it. In fact, you are doing quite the opposite! The wiki[0] appears to no longer tell you to use sync-all when obtaining sources, however, it seems sync-all is still to be used when making local clones -- why? The sync-all entry[1] on the wiki says that "you normally don't need it anymore". What does this mean? What abnormal situations are there where we need to use it? [0] [1] - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlQtC6UACgkQRtClrXBQc7WubAEAqX+yflipFfpYDBdTm8pO1+Oe 9i/yxmIB6vzFl0Hf1PsA/iaiNXyxoy0O0lCTCB6u500CywvPq4GA8Ieg0PucBizB =VU62 -----END PGP SIGNATURE----- From alexander at plaimi.net Thu Oct 2 08:25:21 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Thu, 02 Oct 2014 10:25:21 +0200 Subject: sync-all In-Reply-To: <542D0BA5.6030401@plaimi.net> References: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> <542D09A9.2050301@fuuzetsu.co.uk> <542D0BA5.6030401@plaimi.net> Message-ID: <542D0BF1.4000706@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 I just realised I might have answered my last question with the observation that you're supposed to use it for local clones. But further clarification would still be appreciated. Thanks. 
- -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlQtC/EACgkQRtClrXBQc7XnlQD/blKknLWHDHf0BWF2zcKrePnC i0dVPPDkAx5F43+6blYA/0C1xz7JyU6wAwMvo9OKyDVM4kV9GdX/egbunxuehKK3 =lS6d -----END PGP SIGNATURE----- From omeragacan at gmail.com Thu Oct 2 09:34:35 2014 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Thu, 2 Oct 2014 12:34:35 +0300 Subject: cabal directory structure under /libraries/ for a lib that uses Rts.h In-Reply-To: <1412147969-sup-5452@sabre> References: <1412147969-sup-5452@sabre> Message-ID: > Well, which library should it be part of? Add it to the exposed-modules > list there and it should get compiled. It's not only a "get it compiled" problem, even if I add it to base or some other lib and get it compiled, it's failing with a "undefined reference" linker error. I'm trying to use a function from `rts/RtsFlags.c`. I can define the function elsewhere but I still link it with `RtsFlags.c` because I'm using `RtsFlags` from that file. Any ideas? --- ?mer Sinan A?acan http://osa1.net From mle+hs at mega-nerd.com Thu Oct 2 10:18:43 2014 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Thu, 2 Oct 2014 20:18:43 +1000 Subject: sync-all In-Reply-To: <87zjdevq5l.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF222413D2@DB3PRD3001MB020.064d.mgd.msft.net> <87zjdevq5l.fsf@gmail.com> Message-ID: <20141002201843.6cc8220dc0beb99c3dd116d1@mega-nerd.com> Herbert Valerio Riedel wrote: > So I'm not saying `sync-all` is doing something wrong, but rather that's > overly complex for the task at hand, and we've had support issues with > `sync-all` due to subtle bugs in it (just check the git history for > sync-all to see the tweaks we needed). Moreover, IMHO, if someone's > already proficient with Git, telling them to use `sync-all` will rather > confuse than help as it's not that transparent what it really does. So why not turn the current 1000 line Perl script with two lines of shell script and some comments? Erik (who uses sync-all because he didn't know any better) -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From alan.zimm at gmail.com Thu Oct 2 15:41:01 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 2 Oct 2014 17:41:01 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2223FA16@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: After further discussion with Neil Mitchell arouund this, I have pushed an update through to https://phabricator.haskell.org/D297. It introduces one data structure per annotation, and allows the user to look up based on the SrcSpan of the annotated AST element and th expected annotation type. On Wed, Oct 1, 2014 at 6:37 PM, Neil Mitchell wrote: > I was getting a bit lost between the idea and the implementation. Let > me try rephrasing the idea in my own words. > > The goal: Capture inner source spans in AST syntax nodes. At the > moment if ... then ... else ... captures the spans [if [...] then > [...] else [...]]. We want to capture the spans for each keyword as > well, so: [{if} [...] {then} [...] {else} [...]]. 
> > The proposal: Rather than add anything to the AST, have a separate > mapping (SrcSpan,AstCtor) to [SrcSpan]. So you give in the SrcSpan > from the IfThenElse node, and some token for the IfThenElse > constructor, and get back a list of IfThenElse for the particular > keyword. > > I like the proposal because it adds nothing inside the AST, and > requires no fresh invariants of the AST. I dislike it because the > contents of that separate mapping are highly tied up with the AST, and > easy to get out of sync. I think it's the right choice for three > reasons, 1) it is easier to try out and doesn't break the AST, so we > have more scope for changing our minds later; 2) the same technique is > able to represent things other than SrcSpan without introducing a > polymorphic src span; 3) the people who pay the complexity are the > people who use it, which is relatively few people. > > That said, as a tweak to the API, rather than a single data type for > all annotations, you could have: > > data AnnIfThenElse = AnnIfThenElse {posIf, posThen, posElse :: SrcSpan} > data AnnDo = AnnDo {posDo :: SrcSpan} > > Then you could just have an opaque Map (SrcSpan, TypeRep) Dynamic, > with the invariant that the TypeRep in the key matches the Dynamic. > Then you can have: getAnnotation :: Typeable a => Annotations -> > SrcSpan -> Maybe a. I think it simplifies some of the TypeRep trickery > you are engaging in with mkAnnKey. > > Thanks, Neil > > On Wed, Oct 1, 2014 at 5:06 PM, Simon Peyton Jones > wrote: > > Let me urge you, once more, to consult some actual heavy-duty users of > these > > proposed facilities. I am very keen to avoid investing design and > > implementation effort in facilities that may not meet the need. > > > > > > > > If they end up acclaiming the node-key idea, then we should surely simply > > make the key an abstract type, simply an instance of Hashable, Ord, etc. > > > > > > > > Simon > > > > > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > > Sent: 30 September 2014 19:48 > > To: Simon Peyton Jones > > Cc: Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > > > > > > Subject: Re: Feedback request for #9628 AST Annotations > > > > > > > > On further reflection of the goals for the annotation, I would like to > put > > forward the following proposal for comment > > > > > > Instead of physically placing a "node-key" in each AST Node, a virtual > > node key can be generated from any `GenLocated SrcSpan e' comprising a > > combination of the `SrcSpan` value and a unique identifier from the > > constructor for `e`, perhaps using its `TypeRep`, since the entire AST > > derives Typeable. > > > > To further reduce the intrusiveness, a base Annotation type can be > > defined that captures the location of noise tokens for each AST > > constructor. This can then be emitted from the parser, if the > > appropriate flag is set to enable it. > > > > So > > > > data ApiAnnKey = AK SrcSpan TypeRep > > > > mkApiAnnKey :: (Located e) -> ApiAnnKey > > mkApiAnnKey = ... > > > > data Ann = > > .... 
> > | AnnHsLet SrcSpan -- of the word "let" > > SrcSpan -- of the word "in" > > > > | AnnHsDo SrcSpan -- of the word "do" > > > > And then in the parser > > > > | 'let' binds 'in' exp { mkAnnHsLet $1 $3 (LL $ HsLet (unLoc > $2) > > $4) } > > > > The helper is > > > > mkAnnHsLet :: Located a -> Located b -> LHsExpr RdrName -> P (LHsExpr > > RdrName) > > mkAnnHsLet (L l_let _) (L l_in _) e = do > > addAnnotation (mkAnnKey e) (AnnHsLet l_let l_in) > > return e; > > > > The Parse Monad would have to accumulate the annotations to be > > returned at the end, if called with the appropriate flag. > > > > There will be some boilerplate in getting the annotations and helper > > functions defined, but it will not pollute the rest. > > > > This technique can also potentially be backported to support older GHC > > versions via a modification to ghc-parser. > > > > https://hackage.haskell.org/package/ghc-parser > > > > Regards > > > > Alan > > > > > > > > On Tue, Sep 30, 2014 at 2:04 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> > > wrote: > > > > I tend to agree that this change is much too intrusive for what it > attempts > > to do. > > > > I think the concept of a node key could be workable, and ties in to the > > approach I am taking in ghc-exactprint [1], which uses a SrcSpan together > > with node type as the annotation key. > > > > [1] https://github.com/alanz/ghc-exactprint > > > > > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> > > wrote: > > > > I'm anxious about it being too big a change too. > > > > I'd be up for it if we had several "customers" all saying "yes, this is > > precisely what we need to make our usage of the GHC API far far easier". > > With enough detail so we can understand their use-case. > > > > Otherwise I worry that we might go to a lot of effort to solve the wrong > > problem; or to build a solution that does not, in the end, work for the > > actual use-case. > > > > Another way to tackle this would be to ensure that syntax tree nodes > have a > > "node-key" (a bit like their source location) that clients could use in a > > finite map, to map node-key to values of their choice. > > > > I have not reviewed your patch in detail, but it's uncomfortable that the > > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > > > Ditto DynFlags/HscEnv, though I think here that you are right that the > > "hooks" interface is very crucial. After all, the WHOLE POINT is too > make > > the client interface more flexible. I would consult Luite and Edsko, who > > were instrumental in designing the new hooks interface > > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > > (I'm not sure if that page is up to date, but I hope so) > > > > A good way to proceed might be to identify some of the big users of the > GHC > > API (I'm sure I don't know them all), discuss with them what would help > > them, and share the results on a wiki page. > > > > Simon > > > > > > | -----Original Message----- > > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > > | Richard Eisenberg > > | Sent: 30 September 2014 03:04 > > | To: Edward Z. Yang > > | Cc: ghc-devs at haskell.org > > | Subject: Re: Feedback request for #9628 AST Annotations > > | > > | I'm only speaking up because Alan is specifically requesting feedback: > > | I'm really ambivalent about this. I agree with Edward that this is a > > | big change and adds permanent noise in a lot of places. 
But, I also > > | really respect the goal here -- better tool support. Is it worthwhile > > | to do this using a dynamically typed bit (using Typeable and such), > > | which would avoid the noise? Maybe. > > | > > | What do other languages do? Do we know what, say, Agda does to get > > | such tight coupling with an editor? Does, say, Eclipse have such a > > | chummy relationship with a Java compiler to do its refactoring, or is > > | that separately implemented? Haskell/GHC is not the first project to > > | have this problem, and there's plenty of solutions out there. And, > > | unlike most other times, I don't think Haskell is exceptional in this > > | regard (there's nothing very special about Haskell's AST, maybe beyond > > | indentation-awareness), so we can probably adopt other solutions > > | nicely. > > | > > | Richard > > | > > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > > | > > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > > | -0700: > > | >> 1. Is this change too big, should I scale it back to just update > > | the > > | >> HsSyn structures and then lock it down to Located SrcSpan for all > > | >> the rest? > > | > > > | > I don't claim to speak for the rest of the GHC developers, but I > > | think > > | > this change is too big. I am almost tempted to say that we > > | shouldn't > > | > add the type parameter at all, and do something else (maybe Backpack > > | > can let us extend SrcSpan in a modular way, or even use a > > | dynamically > > | > typed map for annotations.) > > | > > > | > Edward > > | > _______________________________________________ > > | > ghc-devs mailing list > > | > ghc-devs at haskell.org > > | > http://www.haskell.org/mailman/listinfo/ghc-devs > > | > > | _______________________________________________ > > | ghc-devs mailing list > > | ghc-devs at haskell.org > > | http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Oct 2 18:59:00 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 2 Oct 2014 20:59:00 +0200 Subject: Feedback request for #9628 AST Annotations In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22240B87@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412038564-sup-8892@sabre> <618BE556AADD624C9C918AA5D5911BEF2223B4EC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF22240B87@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Ok, report back 24 hrs after a Haskell Cafe email [1] and reference to it on Reddit [2]. The Reddit post has 16 upvotes and no downvotes. The Haskell Cafe post generated two reponses, one from Andrew Gibiansky (IHaskell,ghc-parser), and one from Mateusz Kowalczyk (Haddock,yi,GHC), both in favour. Neil Mitchell (hlint) has also expressed support on this mailing list. And of course the most enthusiastic user is me, as it will simplify HaRe dramatically. 
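To make the opaque "Map (SrcSpan, TypeRep) Dynamic" idea quoted from Neil above concrete, here is a minimal, self-contained sketch with a typed lookup. The SrcSpan stand-in and the Annotations/putAnnotation/getAnnotation names are illustrative assumptions, not the actual GHC API and not the D297 code:

    {-# LANGUAGE DeriveDataTypeable, ScopedTypeVariables #-}
    module AnnSketch where

    import           Data.Dynamic  (Dynamic, fromDynamic, toDyn)
    import qualified Data.Map      as M
    import           Data.Typeable (Typeable, TypeRep, typeOf)

    -- Stand-in for GHC's real SrcSpan; only Ord is needed for the map key.
    type SrcSpan = (Int, Int, Int, Int)

    -- Opaque store: one entry per (location, annotation type).
    newtype Annotations = Annotations (M.Map (SrcSpan, TypeRep) Dynamic)

    emptyAnns :: Annotations
    emptyAnns = Annotations M.empty

    -- The TypeRep in the key is taken from the stored value itself, so the
    -- "key matches the Dynamic" invariant holds by construction.
    putAnnotation :: Typeable a => SrcSpan -> a -> Annotations -> Annotations
    putAnnotation sp a (Annotations m) =
      Annotations (M.insert (sp, typeOf a) (toDyn a) m)

    -- Look up the annotation of the expected type attached to a span.
    getAnnotation :: forall a. Typeable a => Annotations -> SrcSpan -> Maybe a
    getAnnotation (Annotations m) sp =
      M.lookup (sp, typeOf (undefined :: a)) m >>= fromDynamic

    -- One record per annotated construct, as suggested above:
    data AnnIfThenElse = AnnIfThenElse { posIf, posThen, posElse :: SrcSpan }
      deriving (Show, Typeable)

A consumer asks for exactly the annotation type it expects, e.g. getAnnotation anns sp :: Maybe AnnIfThenElse, which mirrors the "look up by SrcSpan plus expected annotation type" interface described for D297 above.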
[1] http://www.haskell.org/pipermail/haskell-cafe/2014-October/116267.html [2] http://www.reddit.com/r/haskell/comments/2i0jo8/haskellcafe_ghc_710_ghcapi_changes_and_proposed/ On Wed, Oct 1, 2014 at 7:44 PM, Simon Peyton Jones wrote: > Let me urge you, once again, to consult users. I really do not want to > implement a feature that (thus far) lacks a single enthusiastic user. > Please. > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 01 October 2014 16:13 > *To:* Simon Peyton Jones > *Cc:* Richard Eisenberg; Edward Z. Yang; ghc-devs at haskell.org > > *Subject:* Re: Feedback request for #9628 AST Annotations > > > > I have put up a new diff at https://phabricator.haskell.org/D297 > > It is just a proof of concept at this point, to check if the approach is > acceptable. > > This is much less intrusive, and only affects the lexer/parser, in what > should be a transparent way. > > The new module ApiAnnotation was introduced because it needs to be > imported by Lexer.x, and I was worried about another circular import cycle. > It does also allow the annotations to be defined in a self-contained way, > which can then easily be used by other external projects such as ghc-parser. > > If there is consensus that this will not break anything else, I would like > to go ahead and add the rest of the annotations. > > Regards > > Alan > > > > On Tue, Sep 30, 2014 at 11:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > I'm anxious about it being too big a change too. > > I'd be up for it if we had several "customers" all saying "yes, this is > precisely what we need to make our usage of the GHC API far far easier". > With enough detail so we can understand their use-case. > > Otherwise I worry that we might go to a lot of effort to solve the wrong > problem; or to build a solution that does not, in the end, work for the > actual use-case. > > Another way to tackle this would be to ensure that syntax tree nodes have > a "node-key" (a bit like their source location) that clients could use in a > finite map, to map node-key to values of their choice. > > I have not reviewed your patch in detail, but it's uncomfortable that the > 'l' parameter gets into IfGblEnv and DsM. That doesn't smell right. > > Ditto DynFlags/HscEnv, though I think here that you are right that the > "hooks" interface is very crucial. After all, the WHOLE POINT is too make > the client interface more flexible. I would consult Luite and Edsko, who > were instrumental in designing the new hooks interface > https://ghc.haskell.org/trac/ghc/wiki/Ghc/Hooks > (I'm not sure if that page is up to date, but I hope so) > > A good way to proceed might be to identify some of the big users of the > GHC API (I'm sure I don't know them all), discuss with them what would help > them, and share the results on a wiki page. > > Simon > > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 30 September 2014 03:04 > | To: Edward Z. Yang > | Cc: ghc-devs at haskell.org > | Subject: Re: Feedback request for #9628 AST Annotations > | > | I'm only speaking up because Alan is specifically requesting feedback: > | I'm really ambivalent about this. I agree with Edward that this is a > | big change and adds permanent noise in a lot of places. But, I also > | really respect the goal here -- better tool support. 
Is it worthwhile > | to do this using a dynamically typed bit (using Typeable and such), > | which would avoid the noise? Maybe. > | > | What do other languages do? Do we know what, say, Agda does to get > | such tight coupling with an editor? Does, say, Eclipse have such a > | chummy relationship with a Java compiler to do its refactoring, or is > | that separately implemented? Haskell/GHC is not the first project to > | have this problem, and there's plenty of solutions out there. And, > | unlike most other times, I don't think Haskell is exceptional in this > | regard (there's nothing very special about Haskell's AST, maybe beyond > | indentation-awareness), so we can probably adopt other solutions > | nicely. > | > | Richard > | > | On Sep 29, 2014, at 8:58 PM, "Edward Z. Yang" wrote: > | > | > Excerpts from Alan & Kim Zimmerman's message of 2014-09-29 13:38:45 > | -0700: > | >> 1. Is this change too big, should I scale it back to just update > | the > | >> HsSyn structures and then lock it down to Located SrcSpan for all > | >> the rest? > | > > | > I don't claim to speak for the rest of the GHC developers, but I > | think > | > this change is too big. I am almost tempted to say that we > | shouldn't > | > add the type parameter at all, and do something else (maybe Backpack > | > can let us extend SrcSpan in a modular way, or even use a > | dynamically > | > typed map for annotations.) > | > > | > Edward > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 2 19:48:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 2 Oct 2014 19:48:45 +0000 Subject: dropWhileEndLE breakage Message-ID: <618BE556AADD624C9C918AA5D5911BEF22243718@DB3PRD3001MB020.064d.mgd.msft.net> What's going on here? No other library module defines this function, except in Cabal! Simon libraries\base\GHC\Windows.hs:124:16: Not in scope: 'dropWhileEndLE' Perhaps you meant 'dropWhileEnd' (imported from Data.OldList) libraries/base/ghc.mk:4: recipe for target 'libraries/base/dist-install/build/GHC/Windows.o' failed -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Thu Oct 2 19:53:18 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 2 Oct 2014 14:53:18 -0500 Subject: dropWhileEndLE breakage In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22243718@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF22243718@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: (Replying to ghc-devs@ as well.) --------------------- Yes, you're quite right. I think a part of the original diff was actually missing. Please update/rebase your tree. I've reverted this in d6d5c127b86dc186b25add2843cb83fc12e72a85 On Thu, Oct 2, 2014 at 2:48 PM, Simon Peyton Jones wrote: > What?s going on here? No other library module defines this function, > except in Cabal! > > Simon > > > > libraries\base\GHC\Windows.hs:124:16: > > Not in scope: ?dropWhileEndLE? > > Perhaps you meant ?dropWhileEnd? 
(imported from Data.OldList)
>
> libraries/base/ghc.mk:4: recipe for target
> 'libraries/base/dist-install/build/GHC/Windows.o' failed
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>

--
Regards,

Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

From gintautas.miliauskas at gmail.com  Thu Oct  2 20:32:25 2014
From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas)
Date: Thu, 2 Oct 2014 22:32:25 +0200
Subject: Building ghc on Windows with msys2
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net>
References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: 

Hi,

> All we need is someone to act as convenor/coordinator and we are good to
> go. Would any of you be willing to play that role?

Indeed, the next thing I was going to ask was about expediting the decision
process. I would be happy to try and coordinate a push in Windows matters.
There is a caveat though: I don't have any skin in the GHC-on-Windows game,
so I will want to move on to other things afterwards.

> An advantage of having a working group is that you can *decide* things. At
> the moment people often wait for GHC HQ to make a decision, and end up
> waiting a long time. It would be better if a working group was responsible
> for the GHC-on-Windows build and then if (say) you want to mandate msys2,
> you can go ahead and mandate it. Well, obviously consult ghc-devs for
> advice, but you are in the lead. Does that make sense?

Sounds great. The question still remains about making changes to code: is
there a particular person with commit rights that we could lean on for code
reviews and committing changes to the main repository?

> I think an early task is to replace what Neil Mitchell encountered: FIVE
> different wiki pages describing how to build GHC on Windows. We want just
> one! (Others can perhaps be marked "out of date/archive" rather than
> deleted, but it should be clear which is the main choice.)

Indeed, it's a bit of a mess. I intended to shape up the msys2 page to serve
as the default, but wanted to see more testing done before dropping the
other pages.

> I agree with using msys2 as the main choice. (I'm using it myself.) It
> may be that Gintautas's page
> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2
> is already sufficient. Although I'd like to see it tested by others. For
> example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas
> Gintautas's page says nothing about that.

Are you sure that is a problem? The page specifically instructs to use the
msys64_shell.bat script (through a shortcut) that is included in msys2, and
that script takes care of setting MSYSTEM=MINGW64, among other important
things.

Other small thoughts:

> - We started including the ghc-tarball stuff because when we
> relied directly on the gcc that came with msys, we kept getting build
> failures because the gcc that some random person happened to be using did
> not work (e.g. they had a too-old or too-new version of msys). By using a
> single, fixed gcc, we avoided all this pain.

Makes sense. Just curious: why is this less of a problem on GNU/Linux
distros compared to msys2? Does msys2 see comparatively less testing, or is
it generally more bleeding edge?

> - I don't know what a "rubenvb" build is, but I think you can go
> ahead and say "use X and Y in this way".
The important thing is that it > should be reproducible, and not dependent on the particular Cygwin or gcc > or whatever the that user happens to have installed. > A "rubenvb" build is one of the available types of prebuilt binary packages of mingw for Windows. Let's figure out if there is something more mainstream and if we can migrate to that. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 2 20:39:42 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 2 Oct 2014 20:39:42 +0000 Subject: Windows build broken (again) Message-ID: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> Sigh. The testsuite fails utterly on Windows, with thousands of identical errors =====> tc012(normal) 3039 of 4088 [1, 2677, 88] cd .\typecheck\should_compile && 'C:/code/HEAD/inplace/bin/ghc-stage2.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns >tc012.comp.stderr 2>&1 sh: line 0: cd: .typecheckshould_compile: No such file or directory Compile failed (status 256) errors were: *** unexpected failure for tc012(normal) Presumably this is some kind of Windows escape-character problem. But it has worked fine for years, so what is going on? It's very tiresome dealing with Windows breakage so frequently. A few regression test failures, maybe, but outright breakage is very bad. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Thu Oct 2 20:41:53 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Thu, 2 Oct 2014 22:41:53 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> <87bnq0wwvd.fsf@gmail.com> Message-ID: I'm sure we could make git handle the tarballs, but it just seems like the wrong tool for the job. We'd have to use multiple advanced features of git where a simple wget/curl would do. Versioning is also a moot point, since we would embed versions in filenames. In fact, versioning would be easier and nicer when the filenames with versions are in a file on the main repository rather than in a submodule. I was thinking of performing the wget (if necessary) in the Makefile, to further bring down the number of steps that users have to execute for a working build. Any strong objections? Whom should I contact to get some static files deployed in a folder under haskell.org? On Mon, Sep 29, 2014 at 11:40 AM, Thomas Miedema wrote: > >> > 3. Why is ghc-tarballs a git repository? That does not seem very wise. >>> [...] >>> > Could we have a stable folder under haskell.org/ to put the files in, >>> to >>> > make sure that they never go away, and just wget/curl them from there? >>> >>> http://thread.gmane.org/gmane.comp.lang.haskell.ghc.devel/4883/focus=4887 >>> >> >> Hmm, that was a while ago. Whom should I contact to get the files >> deployed under haskell.org? 
>> > > Here's a different solution to the 'big binary blobs' problem: > > * Keep the ghc-tarballs git repository, and add it as a submodule > * Make sure it doesn't get cloned by default > git config -f .gitmodules submodule.ghc-tarballs.update none > * Windows developers run (after initial clone --recursive of the ghc > repository, one time): > git config submodule.ghc-tarballs.update checkout > git submodule update --depth=1 > * After that, windows developers run the normal: > git submodule update > > The advantages are: > * only the most recent ghc-tarballs commit gets cloned initially > * subsequent 'git submodule update' runs will make sure always the most > recent version of ghc-tarballs is available > * full history of ghc-tarballs is tracked, easier bisecting > * no extra scripts needed > > I don't know how much space overhead git adds. wget-ting just the files > themselves might still be faster. > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Oct 2 20:42:36 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 2 Oct 2014 22:42:36 +0200 Subject: Windows build broken (again) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: We need to get a windows build not up for phabricator that stops breaking changes from getting submitted. On Oct 2, 2014 10:40 PM, "Simon Peyton Jones" wrote: > Sigh. The testsuite fails utterly on Windows, with thousands of > identical errors > > =====> tc012(normal) 3039 of 4088 [1, 2677, 88] > > cd .\typecheck\should_compile && 'C:/code/HEAD/inplace/bin/ghc-stage2.exe' > -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db > -rtsopts -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns > >tc012.comp.stderr 2>&1 > > sh: line 0: cd: .typecheckshould_compile: No such file or directory > > Compile failed (status 256) errors were: > > *** unexpected failure for tc012(normal) > > Presumably this is some kind of Windows escape-character problem. But it > has worked fine for years, so what is going on? > > It?s very tiresome dealing with Windows breakage so frequently. A few > regression test failures, maybe, but outright breakage is very bad. > > Simon > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Thu Oct 2 21:00:26 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 2 Oct 2014 17:00:26 -0400 Subject: Windows build broken (again) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Oct 2, 2014 at 4:39 PM, Simon Peyton Jones wrote: > Presumably this is some kind of Windows escape-character problem. But it > has worked fine for years, so what is going on? At a guess, something that was using / is now using \ and getting eaten by the shell. Or quoting that was preventing the \s from being eaten has been lost somewhere. 
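The mangled directory name in the log above (".typecheckshould_compile") is exactly what falls out of that explanation: an unquoted backslash in sh escapes the following character and then disappears. A tiny stand-alone model of that behaviour (an illustration only, not the testsuite driver's code):

    -- Model of POSIX-shell treatment of unquoted backslashes: '\x' becomes 'x'.
    stripShellEscapes :: String -> String
    stripShellEscapes ('\\' : c : rest) = c : stripShellEscapes rest
    stripShellEscapes (c : rest)        = c : stripShellEscapes rest
    stripShellEscapes []                = []

    -- ghci> stripShellEscapes ".\\typecheck\\should_compile"
    -- ".typecheckshould_compile"    -- the path from the failing 'cd' above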
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Oct 2 21:05:01 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 02 Oct 2014 14:05:01 -0700 Subject: Windows build broken (again) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1412283889-sup-1430@sabre> Maybe it's the Python 3 patches. Excerpts from Brandon Allbery's message of 2014-10-02 14:00:26 -0700: > On Thu, Oct 2, 2014 at 4:39 PM, Simon Peyton Jones > wrote: > > > Presumably this is some kind of Windows escape-character problem. But it > > has worked fine for years, so what is going on? > > > At a guess, something that was using / is now using \ and getting eaten by > the shell. Or quoting that was preventing the \s from being eaten has been > lost somewhere. > From ezyang at mit.edu Thu Oct 2 23:24:23 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 02 Oct 2014 16:24:23 -0700 Subject: cabal directory structure under /libraries/ for a lib that uses Rts.h In-Reply-To: References: <1412147969-sup-5452@sabre> Message-ID: <1412292189-sup-8929@sabre> Oh, in this case, it's likely because we're not actually exporting the symbol. Check Linker.c, esp the calls to SymI_HasProto. Edward Excerpts from ?mer Sinan A?acan's message of 2014-10-02 02:34:35 -0700: > > Well, which library should it be part of? Add it to the exposed-modules > > list there and it should get compiled. > > It's not only a "get it compiled" problem, even if I add it to base or > some other lib and get it compiled, it's failing with a "undefined > reference" linker error. I'm trying to use a function from > `rts/RtsFlags.c`. I can define the function elsewhere but I still link > it with `RtsFlags.c` because I'm using `RtsFlags` from that file. > > Any ideas? > > --- > ?mer Sinan A?acan > http://osa1.net From david.feuer at gmail.com Fri Oct 3 01:05:00 2014 From: david.feuer at gmail.com (David Feuer) Date: Thu, 2 Oct 2014 21:05:00 -0400 Subject: dropWhileEndLE breakage Message-ID: Simon Peyton Jones asked > What's going on here? No other library module defines this function, except in Cabal! > Simon That was my fault; I'm very sorry. I had added that function (similar to Data.List.dropWhileEnd, but not the same) to compiler/utils/Util.lhs and to another module that used it, and then forgot it was not available in libraries/base/GHC/. Since neither Phab nor I run Windows, there was little hope of catching the mistake before it went out. I believe Joachim Breitner has fixed the problem now by using Data.List.dropWhileEnd to construct that error message?the difference in behavior doesn't matter there. David Feuer From simonpj at microsoft.com Fri Oct 3 15:29:31 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 3 Oct 2014 15:29:31 +0000 Subject: Windows build broken (again) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31457F@DB3PRD3001MB020.064d.mgd.msft.net> Perhaps, yes, it is Python 3. I don't know. Could someone revert to make it work again, please? 
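For reference, the difference David describes between the two functions is just the order of the two tests inside the fold. The definitions below are reproduced from memory, so treat the exact wording in the GHC source as an assumption; the Data.List one is the standard base definition:

    import Data.Char (isSpace)

    -- Data.List.dropWhileEnd, as in base:
    dropWhileEnd :: (a -> Bool) -> [a] -> [a]
    dropWhileEnd p = foldr (\x xs -> if p x && null xs then [] else x : xs) []

    -- The variant added to compiler/utils/Util.lhs: the tests are swapped, so
    -- the predicate is only applied to an element once everything after it has
    -- already been dropped.
    dropWhileEndLE :: (a -> Bool) -> [a] -> [a]
    dropWhileEndLE p = foldr (\x r -> if null r && p x then [] else x : r) []

    -- Both give the same answer on ordinary inputs:
    --   dropWhileEnd   isSpace "foo  "  ==  "foo"
    --   dropWhileEndLE isSpace "foo  "  ==  "foo"
    -- but they force the predicate at different times, which is why using
    -- plain dropWhileEnd in the error-message code is harmless there.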
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones Sent: 02 October 2014 21:40 To: ghc-devs at haskell.org Subject: Windows build broken (again) Sigh. The testsuite fails utterly on Windows, with thousands of identical errors =====> tc012(normal) 3039 of 4088 [1, 2677, 88] cd .\typecheck\should_compile && 'C:/code/HEAD/inplace/bin/ghc-stage2.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns >tc012.comp.stderr 2>&1 sh: line 0: cd: .typecheckshould_compile: No such file or directory Compile failed (status 256) errors were: *** unexpected failure for tc012(normal) Presumably this is some kind of Windows escape-character problem. But it has worked fine for years, so what is going on? It's very tiresome dealing with Windows breakage so frequently. A few regression test failures, maybe, but outright breakage is very bad. Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Fri Oct 3 15:51:23 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 03 Oct 2014 17:51:23 +0200 Subject: Windows build broken (again) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31457F@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Fri, 3 Oct 2014 15:29:31 +0000") References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F31457F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <8738b5w3r8.fsf@gmail.com> On 2014-10-03 at 17:29:31 +0200, Simon Peyton Jones wrote: > Perhaps, yes, it is Python 3. I don't know. Could someone revert to > make it work again, please? Fyi, I can't reproduce this specific problem on Cygwin at least (I don't have any working pure Msys2 environment yet (still working on it), as this may exactly be the kind of failure I'd expect Msys2 to be prone to while Cygwin to be unaffected by). What I tried in order to reproduce: $ git rev-parse HEAD 084d241b316bfa12e41fc34cae993ca276bf0730 # <-- this is the Py3/testsuite commit $ make TEST=tc012 WAY=normal ... =====> tc012(normal) 3039 of 4088 [0, 0, 0] cd ./typecheck/should_compile && 'C:/cygwin64/home/ghc/ghc-hvr/inplace/bin/ghc-stage2.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns >tc012.comp.stderr 2>&1 OVERALL SUMMARY for test run started at Fri Oct 3 15:42:04 2014 GMT 0:00:03 spent to go through 4088 total tests, which gave rise to 12360 test cases, of which 12359 were skipped 0 had missing libraries 1 expected passes 0 expected failures ... 
And btw, with the latest GHC HEAD commit (and I suspect the recent HEAP_ALLOCED-related commits to be responsible for that), I get a ton of testsuite failures due to such errors: T8639_api.exe: Unknown PEi386 section name `staticclosures' (while processing: C:\cygwin64\home\ghc\ghc-hvr\libraries\ghc-prim\dist-install\build\HSghcpr_BE58KUgBe9ELCsPXiJ1Q2r.o) From krz.gogolewski at gmail.com Fri Oct 3 18:04:32 2014 From: krz.gogolewski at gmail.com (Krzysztof Gogolewski) Date: Fri, 3 Oct 2014 20:04:32 +0200 Subject: Windows build broken (again) In-Reply-To: <8738b5w3r8.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F31457F@DB3PRD3001MB020.064d.mgd.msft.net> <8738b5w3r8.fsf@gmail.com> Message-ID: Python 3 is a likely culprit (though I couldn't confirm it), so I reverted it. Does it work now? On Fri, Oct 3, 2014 at 5:51 PM, Herbert Valerio Riedel wrote: > On 2014-10-03 at 17:29:31 +0200, Simon Peyton Jones wrote: > > Perhaps, yes, it is Python 3. I don't know. Could someone revert to > > make it work again, please? > > Fyi, I can't reproduce this specific problem on Cygwin at least (I don't > have any working pure Msys2 environment yet (still working on it), as > this may exactly be the kind of failure I'd expect Msys2 to be prone to > while Cygwin to be unaffected by). > > What I tried in order to reproduce: > > $ git rev-parse HEAD > 084d241b316bfa12e41fc34cae993ca276bf0730 # <-- this is the > Py3/testsuite commit > > $ make TEST=tc012 WAY=normal > ... > =====> tc012(normal) 3039 of 4088 [0, 0, 0] > cd ./typecheck/should_compile && > 'C:/cygwin64/home/ghc/ghc-hvr/inplace/bin/ghc-stage2.exe' -fforce-recomp > -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts > -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns > >tc012.comp.stderr 2>&1 > > OVERALL SUMMARY for test run started at Fri Oct 3 15:42:04 2014 GMT > 0:00:03 spent to go through > 4088 total tests, which gave rise to > 12360 test cases, of which > 12359 were skipped > > 0 had missing libraries > 1 expected passes > 0 expected failures > ... > > > And btw, with the latest GHC HEAD commit (and I suspect the recent > HEAP_ALLOCED-related commits to be responsible for that), I get a ton of > testsuite failures due to such errors: > > T8639_api.exe: Unknown PEi386 section name `staticclosures' (while > processing: > C:\cygwin64\home\ghc\ghc-hvr\libraries\ghc-prim\dist-install\build\HSghcpr_BE58KUgBe9ELCsPXiJ1Q2r.o) > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Fri Oct 3 21:35:28 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 3 Oct 2014 16:35:28 -0500 Subject: Tentative high-level plans for 7.10.1 Message-ID: Hi *, Today, Mikolaj and I discussed and plotted out a quick, high-level roadmap for the 7.10.1 release, based on our earlier plans. Consider this the 10,000 foot view (the last one was like a view from space). **The TL;DR** - We think the freeze and branching for major things will happen in _5 to 6 weeks from now_, in early/mid November. The release will happen in Feburary. Developers, please read some code (see below), and get your stuff in! The initial bug list will come next week for you to look at. 
---------------------------

The not-quite-TL;DR version:

We'll get about 4 months of fixes into STABLE. This stability period will
come at a typically low-point during the year, when many people will be
preoccupied with Holidays. So, we're trying to aim far ahead, and try to
give STABLE a long, official 'cooking period'.

Feature inclusions after the freeze _are_ negotiable, but not large
changes. Expect us to say "No" more often this time around. The STABLE
branch will only be touchable by me and Herbert.

We do not believe we will ship a 7.8.4 at all, contrary to what you may
have seen on Trac - we never decided definitively, but there is likely not
enough time. Over the next few days, I will remove the defunct 7.8.4
milestone, and re-triage the assigned tickets.

We don't have any expected time for an RC just yet.

I will spend the next few days culling the 7.10.1 milestone to contain
appropriate tickets. Expect a list of these tickets early next week, with
more info.

Developers: if you have time, **please** review some code, and get your
stuff in too! Petition people to review things of yours. Review things
other people wrote, and learn new stuff. High five each other afterwards.

Here are the major patches on Phabricator still needing review, that I
think we'd like to see for 7.10.1:

- D168: Partial type signatures
- D72: New rebindable syntax for arrows.
- D155: LLVM 3.5 compatibility
- D169: Source code note infrastructure
- D202: Injective type families
- Edward Yang's HEAP_ALLOCED saga, D270 through D293
- D130: Implementation of hsig (module signatures)

It is possible not all of these will make it. But I think they're solid
priorities to try and land soon. Please let me know if you disagree with
this.

After I publish the bug list, please also take a look, and use your
judgement to include or remove things you think are sensible for 7.10.1 (I
trust your judgement, and we can always talk it out later).

I think that's all. Here's a full roadmap below from our notes:

-----------------------------------

- Expected ETA for 7.10.1:
- Roughly February 2015.
- Expected code freeze ~5wks
- Entails making the stable branch
- Tentatively create branch on Nov. 7th.
- ~3-4mo's of freeze time for STABLE.
- Empirically a low-point in development due to Holidays, but gives us a lot of time.
- Cull and probably remove the 7.8.4 milestone.
- Simply not enough time to address almost any of the tickets in any reasonable timeframe before 7.10.1, while also shipping them.
- Only one, probably workaround-able, not game-changing bug (#9303) marked for 7.8.4.
- No particular pressure on any outstanding bugs to release immediately.
- ANY release would be extremely unlikely, but if so, only backed by the most critical of bugs.
- We will move everything in 7.8.4 milestone to 7.10.1 milestone.
- To accurately catalogue what was fixed.
- To eliminate confusion.
- Cull the 7.10.1 milestone
- Currently ~700 tickets.
- 31 high tickets.
- 1 highest priority ticket.
- Bulk of them will need to be moved out to 7.12.1.
- Ask developers/users to move things back in.
- Demote any old tickets out of highest according to bug tracker policy.
- Alert people these are our priorities.
- Go through ONLY high/highest priority tickets for 7.10.1.
- Email ghc-devs with plans.
- Major tentative patches for 7.10
- D168: Partial type signatures
- D72: New rebindable syntax for arrows.
- D155: LLVM 3.5 compatibility (not big, but important for users!)
- D169: Source code note infrastructure (partially reviewed, Austin to review) - D202: Injective type families - Edward Yang's HEAP_ALLOCED saga, D270 through D293 - D130: Implementation of hsig (module signatures) -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From bgamari.foss at gmail.com Sat Oct 4 03:51:44 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Fri, 03 Oct 2014 23:51:44 -0400 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: Message-ID: <87k34gsd9r.fsf@gmail.com> Austin Seipp writes: snip. > > We do not believe we will ship a 7.8.4 at all, contrary to what you > may have seen on Trac - we never decided definitively, but there is > likely not enough time. Over the next few days, I will remove the > defunct 7.8.4 milestone, and re-triage the assigned tickets. > The only potential issue here is that not a single 7.8 release will be able to bootstrap LLVM-only targets due to #9439. I'm not sure how much of an issue this will be in practice but there should probably be some discussion with packagers to ensure that 7.8 is skipped on affected platforms lest users be stuck with no functional stage 0 compiler. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From johan.tibell at gmail.com Sat Oct 4 06:54:21 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 4 Oct 2014 08:54:21 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: Message-ID: On Fri, Oct 3, 2014 at 11:35 PM, Austin Seipp wrote: > - Cull and probably remove the 7.8.4 milestone. > - Simply not enough time to address almost any of the tickets > in any reasonable timeframe before 7.10.1, while also shipping them. > - Only one, probably workarouadble, not game-changing > bug (#9303) marked for 7.8.4. > - No particular pressure on any outstanding bugs to release immediately. > - ANY release would be extremely unlikely, but if so, only > backed by the most critical of bugs. > - We will move everything in 7.8.4 milestone to 7.10.1 milestone. > - To accurately catalogue what was fixed. > - To eliminate confusion. > #8960 looks rather serious and potentially makes all of 7.8 a no-go for some users. I'm worried that we're (in general) pushing too many bug fixes towards future major versions. Since major versions tend to add new bugs, we risk getting into a situation where no major release is really solid. -------------- next part -------------- An HTML attachment was scrubbed... URL: From murray at sonology.net Sat Oct 4 21:27:13 2014 From: murray at sonology.net (Murray Campbell) Date: Sat, 4 Oct 2014 14:27:13 -0700 Subject: Errors building GHC on iOS with LLVM >= 3.4 Message-ID: Hi, I am trying to help solve #9125 in which an ARM build creates binaries that mangle Float values. After a great deal of help from rwbarton (detailed in the comments) it would appear that the problem is actually in LLVM 3.0. (v3.0 is virtually insisted upon in the iOS build instructions) Building GHC 7.8.3 for the iOS simulator with LLVM 3.4 produces a compiler that creates well behaved binaries. 
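Since #9125 is about binaries that mangle Float values, a trivial smoke test is enough to distinguish a good cross-compiler from an affected one. This is only an illustrative check; the ticket has the detailed discussion:

    -- Prints correctly on a working build; an affected ARM/LLVM build
    -- produces wrong output for simple Float literals and conversions.
    main :: IO ()
    main = do
      print (1.5 :: Float)
      print (3.14159 :: Float)
      print (fromIntegral (42 :: Int) :: Float)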
Unfortunately, building the device version fails with several warnings along the lines of: /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc5244_0/ghc5244_6-armv7s.s:2944:2: warning: deprecated since v7, use 'dmb' mcr p15, #0, r7, c7, c10, #5 and: libraries/ghc-prim/cbits/popcnt.c:76:38: warning: shift count >= width of type [-Wshift-count-overflow] popcount_tab[(unsigned char)(x >> 32)] + ^ ~~ before bailing with /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7.s:3916:2: error: out of range pc-relative fixup value vldr d8, LCPI70_0 ^ /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7s.s:3916:2: error: out of range pc-relative fixup value vldr d8, LCPI70_0 ^ Next I tried to build HEAD (plus phabricator D208) with LLVM 3.4 but got the same error. Various adventures with LLVM 3.5 ended with the same error. I am trying to educate myself in ARM assembly but in the meantime does this ring any bells for anyone? I don't see anything on the trac. Thanks, Murray Campbell From bgamari.foss at gmail.com Sun Oct 5 02:32:30 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 04 Oct 2014 22:32:30 -0400 Subject: Errors building GHC on iOS with LLVM >= 3.4 In-Reply-To: References: Message-ID: <87eguns0u9.fsf@gmail.com> Murray Campbell writes: > Hi, > > I am trying to help solve #9125 in which an ARM build creates binaries > that mangle Float values. > > After a great deal of help from rwbarton (detailed in the comments) it > would appear that the problem is actually in LLVM 3.0. (v3.0 is > virtually insisted upon in the iOS build instructions) > > Building GHC 7.8.3 for the iOS simulator with LLVM 3.4 produces a > compiler that creates well behaved binaries. > > Unfortunately, building the device version fails with several warnings > along the lines of: > > /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc5244_0/ghc5244_6-armv7s.s:2944:2: > warning: deprecated since v7, use 'dmb' > mcr p15, #0, r7, c7, c10, #5 > > and: > > libraries/ghc-prim/cbits/popcnt.c:76:38: > warning: shift count >= width of type [-Wshift-count-overflow] > popcount_tab[(unsigned char)(x >> 32)] + > ^ ~~ This is a case of an #ifdef being inappropriately x86 specific. The 32-bit popcnt implementation should be used on ARM yet the #ifdef looks at i386_HOST_ARCH. > before bailing with > > /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7.s:3916:2: > error: out of range pc-relative fixup value > vldr d8, LCPI70_0 > ^ > > /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7s.s:3916:2: > error: out of range pc-relative fixup value > vldr d8, LCPI70_0 > ^ > > Next I tried to build HEAD (plus phabricator D208) with LLVM 3.4 but > got the same error. > I've never seen an error of this form. What symbol definitions does this error occur in? > Various adventures with LLVM 3.5 ended with the same error. > Be aware that LLVM 3.5 will require D155 due to changes in where LLVM aliases are accepted. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From eir at cis.upenn.edu Sun Oct 5 02:51:41 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sat, 4 Oct 2014 22:51:41 -0400 Subject: GitHub pull requests Message-ID: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> I've just finished reading this: http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ For better or worse, I don't read reddit often enough to hold a conversation there, so I'll ask my question here: Is there a way we can turn GitHub pull requests into Phab code reviews? I'm thinking of something like this: - GitHub user ghc-newbie submits a pull request to github.com/ghc/ghc about feature X. - GitHub duly sends out notices to the watch list. - Some bot that is watching the repo receives the notification email that a pull request is submitted. - The bot then pulls down the pull request commit and creates a Phab code review from it. In a perfect world, the code review could be made to look like it comes from ghc-newbie, but I don't think this author-redirection is necessary, and ghc-newbie could be cited in an automatically-included comment. - The bot sends ghc-newbie an email with a link to the code review and something like this: Hi ghc-newbie, We've received your pull request on GitHub. Thanks! It's great that you've contributed. GHC, for a variety of well-considered reasons [1], uses a tool called Phabricator [2] -- not GitHub -- to process contributions. We have taken the liberty of posting your pull request to Phab for further review. You can see it here: <...> If you have modifications to make, please use our Phab workflow, documented in full here [3] and in brief here [4]. Thanks again, and we look forward to reviewing and hopefully merging your patch! GHC [1], [2], [3], [4]: - We GHC devs then carry on like we have been. If the patch is good, we can merge it without further input from ghc-newbie. If it's no good, and ghc-newbie doesn't get onto Phab, that's their loss. To naive me, this all seems possible -- and relatively easy -- to automate. (Naive = I've never done proper tool integration and am deeply grateful to anyone who actually makes things work.) Is this a good idea? Is this possible? Have I missed anything? Thanks for reading! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Sun Oct 5 05:03:49 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sun, 05 Oct 2014 01:03:49 -0400 Subject: GitHub pull requests In-Reply-To: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> Message-ID: <87a95brtu2.fsf@gmail.com> Richard Eisenberg writes: > I've just finished reading this: http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ > > For better or worse, I don't read reddit often enough to hold a > conversation there, so I'll ask my question here: Is there a way we > can turn GitHub pull requests into Phab code reviews? I'm thinking of > something like this: > > ... > I'm still quite unsure of how many people exist who, * find a bug they need to fix, * are willing to dig into the GHC codebase and fix it, * clean up their fix enough to submit upstream, and * take the initiative to send the fix upstream and yet aren't willing to take the five (twenty?) minutes to familiarize themselves with Phabricator and the arc toolchain. 
That being said, I'm all for lowering barriers. I started to write up a quick hack [1] to implement this sort of process. It's a bit late at the moment so I'll just put it up here for comment for the time being. It's a bit messy, the security implications are haven't been considered at all, and half of it is pseudo-code at best. That being said, it's a start and if someone picks it up and finishes it before I wake up tomorrow I won't be offended. Cheers, - Ben [1] https://gist.github.com/bgamari/72020a6186be205d0f33 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sun Oct 5 05:28:49 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 5 Oct 2014 01:28:49 -0400 Subject: GitHub pull requests In-Reply-To: <87a95brtu2.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> Message-ID: @ben cool! yeah... the phab/arc workflow is probably the easiest part of the compiler patch work flow, heck, unlike a lot of projects you get ci for free! :) but thigns that make it easier are always good On Sun, Oct 5, 2014 at 1:03 AM, Ben Gamari wrote: > Richard Eisenberg writes: > > > I've just finished reading this: > http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ > > > > For better or worse, I don't read reddit often enough to hold a > > conversation there, so I'll ask my question here: Is there a way we > > can turn GitHub pull requests into Phab code reviews? I'm thinking of > > something like this: > > > > ... > > > I'm still quite unsure of how many people exist who, > > * find a bug they need to fix, > * are willing to dig into the GHC codebase and fix it, > * clean up their fix enough to submit upstream, and > * take the initiative to send the fix upstream > > and yet aren't willing to take the five (twenty?) minutes to familiarize > themselves with Phabricator and the arc toolchain. > > That being said, I'm all for lowering barriers. I started to write up a > quick hack [1] to implement this sort of process. It's a bit late at the > moment so I'll just put it up here for comment for the time being. It's > a bit messy, the security implications are haven't been considered at > all, and half of it is pseudo-code at best. That being said, it's a > start and if someone picks it up and finishes it before I wake up > tomorrow I won't be offended. > > Cheers, > > - Ben > > > [1] https://gist.github.com/bgamari/72020a6186be205d0f33 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sun Oct 5 06:53:03 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 5 Oct 2014 08:53:03 +0200 Subject: GitHub pull requests In-Reply-To: <87a95brtu2.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> Message-ID: At least for cabal there was a large uptick in contributions once we moved to GitHub. 
On Sun, Oct 5, 2014 at 7:03 AM, Ben Gamari wrote: > Richard Eisenberg writes: > > > I've just finished reading this: > http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ > > > > For better or worse, I don't read reddit often enough to hold a > > conversation there, so I'll ask my question here: Is there a way we > > can turn GitHub pull requests into Phab code reviews? I'm thinking of > > something like this: > > > > ... > > > I'm still quite unsure of how many people exist who, > > * find a bug they need to fix, > * are willing to dig into the GHC codebase and fix it, > * clean up their fix enough to submit upstream, and > * take the initiative to send the fix upstream > > and yet aren't willing to take the five (twenty?) minutes to familiarize > themselves with Phabricator and the arc toolchain. > > That being said, I'm all for lowering barriers. I started to write up a > quick hack [1] to implement this sort of process. It's a bit late at the > moment so I'll just put it up here for comment for the time being. It's > a bit messy, the security implications are haven't been considered at > all, and half of it is pseudo-code at best. That being said, it's a > start and if someone picks it up and finishes it before I wake up > tomorrow I won't be offended. > > Cheers, > > - Ben > > > [1] https://gist.github.com/bgamari/72020a6186be205d0f33 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sun Oct 5 07:31:02 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 05 Oct 2014 09:31:02 +0200 Subject: GitHub pull requests In-Reply-To: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> (Richard Eisenberg's message of "Sat, 4 Oct 2014 22:51:41 -0400") References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> Message-ID: <87sij32csp.fsf@gmail.com> On 2014-10-05 at 04:51:41 +0200, Richard Eisenberg wrote: > I've just finished reading this: http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ > > For better or worse, I don't read reddit often enough to hold a > conversation there, so I'll ask my question here: Is there a way we > can turn GitHub pull requests into Phab code reviews? I'm thinking of > something like this: My greatest worry about allowing GitHub PRs to the github.com/ghc/ghc.git repo is that GitHub and Trac use the very same `#[0-9]+` syntax tokens for referring to tickets and PRs, and trigger actions as soon as they detect any commit using that token. In other words, there's a namespace collision (Luckily, Phabricator seems to have been designed to be used in concert with an external Ticket tracker, so it uses `[DT][0-9]+` to refer to code-revisions & tickets respectively). Moreover, I'm also worried this may become confusing to new contributors, since we already have the Trac vs Phabricator confusion about where to submit patches; if we also add GitHub PRs it'll just add another item to be confused about where things ought to be submitted.
And the more PRs are added on github.com/ghc/ghc, the more it may appear as if that is the encouraged way to submit them (even though Phabricator+Trac is our currently targetted workflow) What I'd suggest alternatively, since this what some of our contributors are already doing instead of uploading patches: Teach Phabricator to allow to submit a URL to a commit (or branch) in a forked github.com/ghc/ghc repo, and create a code-revision out of that. Cheers, hvr From abela at chalmers.se Sun Oct 5 08:21:15 2014 From: abela at chalmers.se (Andreas Abel) Date: Sun, 5 Oct 2014 10:21:15 +0200 Subject: GitHub pull requests In-Reply-To: <87a95brtu2.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> Message-ID: <5430FF7B.6060207@chalmers.se> On 05.10.2014 07:03, Ben Gamari wrote: > and yet aren't willing to take the five (twenty?) minutes to familiarize > themselves with Phabricator and the arc toolchain. Are you serious about this? I think your time estimate is a grand illusion. I attended Joachim Breitner's talk about Phabricator at the GHC developer meeting, that already (nearly?) used up the twenty minutes you allow. Yet I still have to * try it the first time, * make sure I get everything right, * learn to *trust* the tool * that is does the right thing, * does not do anything bad to my files * etc. pp. The brightest might be up to get on track in a couple of hours, but the majority is quite hesitant towards new tools... Human condition. Cheers, Andreas -- Andreas Abel <>< Du bist der geliebte Mensch. Department of Computer Science and Engineering Chalmers and Gothenburg University, Sweden andreas.abel at gu.se http://www2.tcs.ifi.lmu.de/~abel/ From michael at snoyman.com Sun Oct 5 08:30:11 2014 From: michael at snoyman.com (Michael Snoyman) Date: Sun, 5 Oct 2014 11:30:11 +0300 Subject: GitHub pull requests In-Reply-To: <5430FF7B.6060207@chalmers.se> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> Message-ID: On Sun, Oct 5, 2014 at 11:21 AM, Andreas Abel wrote: > On 05.10.2014 07:03, Ben Gamari wrote: > >> and yet aren't willing to take the five (twenty?) minutes to familiarize >> themselves with Phabricator and the arc toolchain. >> > > Are you serious about this? I think your time estimate is a grand > illusion. I attended Joachim Breitner's talk about Phabricator at the GHC > developer meeting, that already (nearly?) used up the twenty minutes you > allow. Yet I still have to > > * try it the first time, > * make sure I get everything right, > * learn to *trust* the tool > * that is does the right thing, > * does not do anything bad to my files > * etc. pp. > > The brightest might be up to get on track in a couple of hours, but the > majority is quite hesitant towards new tools... > > Human condition. > > Cheers, > Andreas > > -- > Andreas Abel <>< Du bist der geliebte Mensch. > > Department of Computer Science and Engineering > Chalmers and Gothenburg University, Sweden > > andreas.abel at gu.se > http://www2.tcs.ifi.lmu.de/~abel/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > I have to agree with Andreas here. I was in the boat of wanting to send a pull request for a simple documentation fix. If I could have just sent a Github PR, it would have taken no more than 5 minutes[1] to send a pull request. 
And since it was pure documentation, I could have even done it directly from the Github web interface if I'd wanted to. Phabricator took me quite a bit longer to set up (I don't remember the exact time, but certainly more than 20 minutes and certainly less than 3 hours). I also had trouble figuring out the right way to get started on this, and had to bug Herbert, who just sent me a link to the Phabricator site. At the very least, I think accepting Github PRs would allow people in my situation to send documentation fixes- which is something we should really be encouraging[2]. If we're still going to require Phabricator, there should be a canonical, step-by-step guide linked from multiple locations (including a README.md on GHC's Github repo) to make it as obvious as possible to willing contributors how to get started. Michael [1] Yes, I realize that's because I'm very familiar with the Github PR process already. I'm not interested in whether Github or Phabricator are easier for new users, the presumption is that many people- like me- are already very familiar with Github. [2] http://www.reddit.com/r/haskell/comments/2i1z9u/improving_haskellrelated_documentation/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun Oct 5 10:56:28 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 05 Oct 2014 12:56:28 +0200 Subject: GitHub pull requests In-Reply-To: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> Message-ID: <1412506588.4605.1.camel@joachim-breitner.de> Hi, we already have the problem with some people submitting diffs on trac, other submitting git patches on trac or linking to their private fork somewhere to pull from, and others using Phabricator. And, at least as far as I can tell from here, it doesn?t seem to be a big deal. So (warning: surprisingly simple solution ahead) we could consider the option of simply accepting Github pull requests! I think it could work well ok if we either * somehow communicate that people should open a trac ticket as well, if they want to make sure their contribution is handled in a timely manner, or (or and) * someone of us looks after PR and creates trac tickets as needed. The advantage is clear: Low entry barrier for new contributors and, as Michael says, for very small contributions (documentation typo fixes or so). The downside is that we have more tools to work with. This either means that we all get to know the GitHub frontend (which is quite intuitive, and at least reading diffs and commenting should be possible for all without a lot of learning), or that we simply let those developers who are familiar with GitHub handle it. I think the advantage could outweigh the downside and it?s worth a try. We don?t even have to advocate it aggressively, just remove the ?Do not submit PRs? notice on the repo and see what happens. (The problem with the ticket numbers remain, unfortunately.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From hvriedel at gmail.com Sun Oct 5 12:11:26 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 05 Oct 2014 14:11:26 +0200 Subject: GitHub pull requests In-Reply-To: <1412510621.4605.2.camel@joachim-breitner.de> (Joachim Breitner's message of "Sun, 05 Oct 2014 14:03:41 +0200") References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <8761fy3hal.fsf@gmail.com> <1412510621.4605.2.camel@joachim-breitner.de> Message-ID: <871tqm3edt.fsf@gmail.com> Hello, On 2014-10-05 at 14:03:41 +0200, Joachim Breitner wrote: > Am Sonntag, den 05.10.2014, 13:08 +0200 schrieb Herbert Valerio Riedel: >> On 2014-10-05 at 12:56:28 +0200, Joachim Breitner wrote: >> >> [...] >> >> > I think the advantage could outweigh the downside and it?s worth a try. >> > We don?t even have to advocate it aggressively, just remove the ?Do not >> > submit PRs? notice on the repo and see what happens. >> > >> > (The problem with the ticket numbers remain, unfortunately.) >> >> Take into account though, there's no easy going back once we open that >> Pandora box; once GitHub allocates a #-number for a repo, it can only be >> removed by involving the GitHub admins, and until then any overlapping >> #-reference will lead to confusing notifications and ticket/issue >> comments associated w/ the respective Trac-ticket and/or GitHub >> pull-request. > > that?s a valid point. > > Is there maybe a way to disable all #-number-parsing on GitHub? But I > haven?t seen it... Not that I know of; I'd be a bit suprised though if it was indeed possible, as it's a core feature of GitHub (and after all, you can't disable the PR-submission either) Cheers, hvr From bgamari.foss at gmail.com Sun Oct 5 14:32:30 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sun, 05 Oct 2014 10:32:30 -0400 Subject: GitHub pull requests In-Reply-To: <5430FF7B.6060207@chalmers.se> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> Message-ID: <877g0esi2p.fsf@gmail.com> Andreas Abel writes: > On 05.10.2014 07:03, Ben Gamari wrote: >> and yet aren't willing to take the five (twenty?) minutes to familiarize >> themselves with Phabricator and the arc toolchain. > > Are you serious about this? I think your time estimate is a grand > illusion. > Fair enough; this may well be an underestimate. To form the number I tried thinking back to my own experience starting off with Phabricator (back in August, IIRC) which went roughly as follows, 1. I asked `thoughtpolice` about this new-fangled Phabricator thing 2. He pointed me to the GHC wiki [1] 3. I ignored nearly everything on the page but `The CLI` section, installing PHP (this is where I'm thankful to be running Linux where package installation is quite straightforward) 4. I ran `arc diff`, 5.a. I reflected on the mild shock of seeing that `arc` had squashed my carefully crafted patch set into a single commit. This still bothers me to this day. 5.b. I moved on with life and had a coffee All-in-all this perhaps took half an hour from start to coffee. Admittedly, I had very little understanding of what was going on underneath the shiny veneer (and more or less still don't), but I did successfully submit a patch. This being said, I can see that there are several places where this can go awry. I hate to think of what this might look like on Windows. 
Moreover, I have absolutely confidence that git would preserve my work, regardless of what unholy things this new tool did to my repo. Without this confidence I would have tread far more carefully which inevitably would have cost time. > I attended Joachim Breitner's talk about Phabricator at the > GHC developer meeting, that already (nearly?) used up the twenty > minutes you allow. Yet I still have to > > * try it the first time, > * make sure I get everything right, > * learn to *trust* the tool > * that is does the right thing, > * does not do anything bad to my files > * etc. pp. > > The brightest might be up to get on track in a couple of hours, but the > majority is quite hesitant towards new tools... > Prior to my experience I had read in a variety of venues about all of the wonderful things that Phabricator would do for us. I understand that casual contributors without this background may find it harder to even motivate beginning the process of picking it up, regardless of how easy this may be. > Human condition. > Point taken. I agree that it can't hurt to expose a more familiar interface to the world. Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Phabricator -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From bgamari.foss at gmail.com Sun Oct 5 14:43:00 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sun, 05 Oct 2014 10:43:00 -0400 Subject: GitHub pull requests In-Reply-To: <87sij32csp.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87sij32csp.fsf@gmail.com> Message-ID: <871tqmshl7.fsf@gmail.com> Herbert Valerio Riedel writes: > On 2014-10-05 at 04:51:41 +0200, Richard Eisenberg wrote: >> I've just finished reading this: http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ >> >> For better or worse, I don't read reddit often enough to hold a >> conversation there, so I'll ask my question here: Is there a way we >> can turn GitHub pull requests into Phab code reviews? I'm thinking of >> something like this: > > My greatest worry about allowing GitHub PRs to the > github.com.ghc/ghc.git repo is that GitHub and Trac use the very same > `#[0-9]+` syntax tokens for referring to tickets and PRs, and trigger > actions as soon as they detect any commit using that token. In other > words, there's a namespace collisions (Luckily, Phabricator seems to > have been designed to be used in concert with an external Ticket > tracker, so it uses `[DT][0-9]+` to refer to code-revisions & tickets > respectively). > > Morever, I'm also worrying this may become confusing to new > contributors, since we already have the Trav vs Phabricator confusion > about where to submit patches; if we also add GitHub PRs it'll just add > another item to be confused about where things ought to be > submitted. And the more PRs are added on github.com/ghc/ghc, the more it > may appear as if that is the encouraged way to submit them (even though > Phabricator+Trac is our currently targetted workflow) > > > What I'd suggest alternatively, since this what some of our contributors > are already doing instead of uploading patches: > > Teach Phabricator to allow to submit a URL to a commit (or branch) in a > forked github.com/ghc/ghc repo, and create a code-revision out of that. > This is a nice idea and sounds simple to implement. 
It certainly doesn't reduce the contributor-side friction nearly as well as accepting pull requests but it may be good enough. It would be quite straightforward to adapt the code I hacked together last night into such an interface. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From tuncer.ayaz at gmail.com Sun Oct 5 17:13:33 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Sun, 5 Oct 2014 19:13:33 +0200 Subject: GitHub pull requests In-Reply-To: <877g0esi2p.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> Message-ID: On Sun, Oct 5, 2014 at 4:32 PM, Ben Gamari wrote: > To form the number I tried thinking back to my own experience > starting off with Phabricator (back in August, IIRC) which went > roughly as follows, [...] > 4. I ran `arc diff`, > 5.a. I reflected on the mild shock of seeing that `arc` had squashed my > carefully crafted patch set into a single commit. This still > bothers me to this day. I second 5.a, but does it have to be this way, or can arc be instructed to not squash commits? From tuncer.ayaz at gmail.com Sun Oct 5 17:20:32 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Sun, 5 Oct 2014 19:20:32 +0200 Subject: GitHub pull requests In-Reply-To: <1412506588.4605.1.camel@joachim-breitner.de> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> Message-ID: On Sun, Oct 5, 2014 at 12:56 PM, Joachim Breitner wrote: > Hi, > > we already have the problem with some people submitting diffs on > trac, other submitting git patches on trac or linking to their > private fork somewhere to pull from, and others using Phabricator. > And, at least as far as I can tell from here, it doesn?t seem to be > a big deal. > > So (warning: surprisingly simple solution ahead) we could consider > the option of simply accepting Github pull requests! > > I think it could work well ok if we either > * somehow communicate that people should open a trac ticket as > well, if they want to make sure their contribution is handled in > a timely manner, or (or and) > * someone of us looks after PR and creates trac tickets as needed. > > The advantage is clear: Low entry barrier for new contributors and, > as Michael says, for very small contributions (documentation typo > fixes or so). > > The downside is that we have more tools to work with. This either > means that we all get to know the GitHub frontend (which is quite > intuitive, and at least reading diffs and commenting should be > possible for all without a lot of learning), or that we simply let > those developers who are familiar with GitHub handle it. > > I think the advantage could outweigh the downside and it?s worth a > try. We don?t even have to advocate it aggressively, just remove the > ?Do not submit PRs? notice on the repo and see what happens. > > (The problem with the ticket numbers remain, unfortunately.) There's also the problem that Github's review system is not as powerful and most importantly does not preserve history like Gerrit or Phabricator do. Once used to it, maintainers probably won't be happy to lose productivity due to the simplistic review system. 
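As background to the bridging ideas in this thread (Richard's proposed bot, Ben's quick hack, and Herbert's suggestion of submitting a URL to a commit in a forked repo), here is a minimal, purely illustrative sketch of the discovery half of such a bridge. It is not Ben's gist and not an agreed workflow: the only facts it relies on are that GitHub advertises each pull request as a read-only ref named refs/pull/<N>/head and that git can fetch such a ref. The mirror URL, the pr/<N> branch naming, and the assumption that it runs inside an existing clone are all made up for the example; turning a fetched branch into a Differential revision would still be a separate arc diff step.

    -- Illustrative sketch only (not Ben's gist, not an agreed workflow):
    -- discover GitHub pull requests via the refs/pull/<N>/head refs that
    -- GitHub publishes, and fetch each one into a local pr/<N> branch so a
    -- reviewer can inspect it or run `arc diff` on it by hand.
    module Main (main) where

    import Data.List      (isPrefixOf, isSuffixOf)
    import System.Process (readProcess)

    -- Assumed mirror URL; any clone URL of the repository would do.
    mirrorUrl :: String
    mirrorUrl = "https://github.com/ghc/ghc.git"

    -- | (sha, ref) pairs such as ("ab12...", "refs/pull/123/head"),
    -- one per open pull request.
    pullRequestHeads :: IO [(String, String)]
    pullRequestHeads = do
        out <- readProcess "git" ["ls-remote", mirrorUrl] ""
        return [ (sha, ref)
               | l <- lines out
               , let (sha, rest) = break (== '\t') l
               , let ref         = drop 1 rest
               , "refs/pull/" `isPrefixOf` ref
               , "/head" `isSuffixOf` ref
               ]

    -- | Fetch one pull-request head into a pr/<N> branch of the current checkout.
    fetchPullRequest :: String -> IO ()
    fetchPullRequest ref = do
        _ <- readProcess "git" ["fetch", mirrorUrl, ref ++ ":" ++ branch] ""
        putStrLn ("fetched " ++ ref ++ " as " ++ branch)
      where
        branch = "pr/" ++ takeWhile (/= '/') (drop (length "refs/pull/") ref)

    main :: IO ()
    main = pullRequestHeads >>= mapM_ (fetchPullRequest . snd)

The mechanical part of such a bridge is therefore small; the open questions in this thread are about policy (the colliding #-numbers, where reviews should live), not plumbing.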
From mail at joachim-breitner.de Sun Oct 5 20:44:12 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 05 Oct 2014 22:44:12 +0200 Subject: GitHub pull requests In-Reply-To: References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> Message-ID: <1412541852.21551.4.camel@joachim-breitner.de> Hi, Am Sonntag, den 05.10.2014, 19:20 +0200 schrieb Tuncer Ayaz: > There's also the problem that Github's review system is not as > powerful and most importantly does not preserve history like Gerrit or > Phabricator do. Once used to it, maintainers probably won't be happy > to lose productivity due to the simplistic review system. I don?t think this is a reason to forbid them alltogether. We could say: ?We prefer submissions via Phabricator, especially for larger patches, but if you like, you can use GitHub as well ? your contributions is welcome in any case.? We are talking about small contributions and entry-barriers here, and a for a documentation patch or similarly small contributions, we don?t need the full power of Phabricator. When new contributors start to engage more deeply they will not mind learning Phabricator. But they will be much more motivated to do so when they have already successfully contributed something. Greetings, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gintautas at miliauskas.lt Sun Oct 5 21:20:34 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Sun, 5 Oct 2014 23:20:34 +0200 Subject: GitHub pull requests In-Reply-To: <1412541852.21551.4.camel@joachim-breitner.de> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <1412541852.21551.4.camel@joachim-breitner.de> Message-ID: Is there any particular reason why taking in GitHub pull requests would be more problematic than, say, applying patches attached to Trac bugs? Both have to be dealt with manually by someone with commit rights for the canonical repository anyway. If the issue is important enough that, say, tracking and reviews come into play, the contributor could always be asked to move the patch to Phabricator/Trac. Let's keep the barriers as low as possible... -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Sun Oct 5 23:10:11 2014 From: jwlato at gmail.com (John Lato) Date: Mon, 6 Oct 2014 07:10:11 +0800 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: Message-ID: Speaking as a user, I think Johan's concern is well-founded. For us, ghc-7.8.3 was the first of the 7.8 line that was really usable in production, due to #8960 and other bugs. Sure, that can be worked around in user code, but it takes some time for developers to locate the issues, track down the bug, and implement the workaround. And even 7.8.3 has some bugs that cause minor annoyances (either ugly workarounds or intermittent build failures that I haven't had the time to debug); it's definitely not solid. Similarly, 7.6.3 was the first 7.6 release that we were able to use in production. 
I'm particularly concerned about ghc-7.10 as the AMP means there will be significant lag in identifying new bugs (since it'll take time to update codebases for that major change). For the curious, within the past few days we've seen all the following, some multiple times, all so far intermittent: > ghc: panic! (the 'impossible' happened) > (GHC version 7.8.3.0 for x86_64-unknown-linux): > kindFunResult ghc-prim:GHC.Prim.*{(w) tc 34d} > ByteCodeLink.lookupCE > During interactive linking, GHCi couldn't find the following symbol: > some_mangled_name_closure > ghc: mmap 0 bytes at (nil): Invalid Argument > internal error: scavenge_one: strange object 2022017865 Some of these I've mapped to likely ghc issues, and some are fixed in HEAD, but so far I haven't had an opportunity to put together reproducible test cases. And that's just bugs that we haven't triaged yet, there are several more for which workarounds are in place. John L. On Sat, Oct 4, 2014 at 2:54 PM, Johan Tibell wrote: > On Fri, Oct 3, 2014 at 11:35 PM, Austin Seipp > wrote: > >> - Cull and probably remove the 7.8.4 milestone. >> - Simply not enough time to address almost any of the tickets >> in any reasonable timeframe before 7.10.1, while also shipping them. >> - Only one, probably workarouadble, not game-changing >> bug (#9303) marked for 7.8.4. >> - No particular pressure on any outstanding bugs to release >> immediately. >> - ANY release would be extremely unlikely, but if so, only >> backed by the most critical of bugs. >> - We will move everything in 7.8.4 milestone to 7.10.1 milestone. >> - To accurately catalogue what was fixed. >> - To eliminate confusion. >> > > #8960 looks rather serious and potentially makes all of 7.8 a no-go for > some users. I'm worried that we're (in general) pushing too many bug fixes > towards future major versions. Since major versions tend to add new bugs, > we risk getting into a situation where no major release is really solid. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Oct 6 09:03:19 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 6 Oct 2014 09:03:19 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: , Message-ID: I don't know whether this has ever been considered as an idea, but what about having a notion of Long Term Support version (similar to how a lot of processor and operating systems vendors go about this). The idea behind an LTS-GHC would be to continue bug-fixing on the LTS-version, even if newer major versions no longer get bug-fixing support. To some extent, there will be redundancies (bugs that have disappeared in newer versions because newer code does the same and more, still needing to be fixed on the LTS code base), but the upside would be a clear prioritisation between stability (LTS) and innovation (latest major release). The current policy for feature *use* in the GHC code-base is that they're supported in (at least) three earlier major release versions. Should we go the LTS-route, the logical choice would be to demand the latest LTS-version. The danger, of course, is that people aren't very enthusiastic about bug-fixing older versions of a compiler, but for language/compiler-uptake, this might actually be a Better Way. Thoughts? Ph. 
________________________________ From: John Lato Sent: 06 October 2014 01:10 To: Johan Tibell Cc: Simon Marlow; ghc-devs at haskell.org Subject: Re: Tentative high-level plans for 7.10.1 Speaking as a user, I think Johan's concern is well-founded. For us, ghc-7.8.3 was the first of the 7.8 line that was really usable in production, due to #8960 and other bugs. Sure, that can be worked around in user code, but it takes some time for developers to locate the issues, track down the bug, and implement the workaround. And even 7.8.3 has some bugs that cause minor annoyances (either ugly workarounds or intermittent build failures that I haven't had the time to debug); it's definitely not solid. Similarly, 7.6.3 was the first 7.6 release that we were able to use in production. I'm particularly concerned about ghc-7.10 as the AMP means there will be significant lag in identifying new bugs (since it'll take time to update codebases for that major change). For the curious, within the past few days we've seen all the following, some multiple times, all so far intermittent: > ghc: panic! (the 'impossible' happened) > (GHC version 7.8.3.0 for x86_64-unknown-linux): > kindFunResult ghc-prim:GHC.Prim.*{(w) tc 34d} > ByteCodeLink.lookupCE > During interactive linking, GHCi couldn't find the following symbol: > some_mangled_name_closure > ghc: mmap 0 bytes at (nil): Invalid Argument > internal error: scavenge_one: strange object 2022017865 Some of these I've mapped to likely ghc issues, and some are fixed in HEAD, but so far I haven't had an opportunity to put together reproducible test cases. And that's just bugs that we haven't triaged yet, there are several more for which workarounds are in place. John L. On Sat, Oct 4, 2014 at 2:54 PM, Johan Tibell > wrote: On Fri, Oct 3, 2014 at 11:35 PM, Austin Seipp > wrote: - Cull and probably remove the 7.8.4 milestone. - Simply not enough time to address almost any of the tickets in any reasonable timeframe before 7.10.1, while also shipping them. - Only one, probably workarouadble, not game-changing bug (#9303) marked for 7.8.4. - No particular pressure on any outstanding bugs to release immediately. - ANY release would be extremely unlikely, but if so, only backed by the most critical of bugs. - We will move everything in 7.8.4 milestone to 7.10.1 milestone. - To accurately catalogue what was fixed. - To eliminate confusion. #8960 looks rather serious and potentially makes all of 7.8 a no-go for some users. I'm worried that we're (in general) pushing too many bug fixes towards future major versions. Since major versions tend to add new bugs, we risk getting into a situation where no major release is really solid. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Mon Oct 6 09:15:50 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 6 Oct 2014 11:15:50 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: Message-ID: <201410061115.50980.jan.stolarek@p.lodz.pl> > Here are the major patches on Phabricator still needing review, that I > think we'd like to see for 7.10.1: > > - D72: New rebindable syntax for arrows. I don't think D72 will make it in. I started to work on this a couple of months ago but the work has stalled. 
I just don't understand arrows well enough :-/ Sophie Taylor (aka spacekitteh) expressed some interest in this and we chatted a bit about it on IRC. But I haven't heard anything from Sophie in the past 2 weeks so I don't know whether she intends to pick up my work or not. Janek From abela at chalmers.se Mon Oct 6 09:27:10 2014 From: abela at chalmers.se (Andreas Abel) Date: Mon, 6 Oct 2014 11:27:10 +0200 Subject: GitHub pull requests In-Reply-To: References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> Message-ID: <5432606E.70402@chalmers.se> This is also the thing that worries most about arc: Squashing commits. Splitting commits into * things that only do whitespace changes * things that only add comments * things that only refactor * things that actually introduce a semantic change *is* very valuable also for the efficency of the review process. On 05.10.2014 19:13, Tuncer Ayaz wrote: > On Sun, Oct 5, 2014 at 4:32 PM, Ben Gamari wrote: >> 5.a. I reflected on the mild shock of seeing that `arc` had squashed my >> carefully crafted patch set into a single commit. This still >> bothers me to this day. > > I second 5.a, but does it have to be this way, or can arc be > instructed to not squash commits? -- Andreas Abel <>< Du bist der geliebte Mensch. Department of Computer Science and Engineering Chalmers and Gothenburg University, Sweden andreas.abel at gu.se http://www2.tcs.ifi.lmu.de/~abel/ From hvriedel at gmail.com Mon Oct 6 09:28:41 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 06 Oct 2014 11:28:41 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: (p. k. f. holzenspies's message of "Mon, 6 Oct 2014 09:03:19 +0000") References: Message-ID: <87sij18s3a.fsf@gmail.com> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: [...] > The idea behind an LTS-GHC would be to continue bug-fixing on the > LTS-version, even if newer major versions no longer get bug-fixing > support. To some extent, there will be redundancies (bugs that have > disappeared in newer versions because newer code does the same and > more, still needing to be fixed on the LTS code base), but the upside > would be a clear prioritisation between stability (LTS) and innovation > (latest major release). As I'm not totally sure what you mean: Assuming we already had decided years ago to follow LTS-style, given GHC 7.0, 7.2, 7.4, 7.6, 7.8 and the future 7.10; which of those GHC versions would you have been considered a LTS version? [...] > The danger, of course, is that people aren't very enthusiastic about > bug-fixing older versions of a compiler, but for > language/compiler-uptake, this might actually be a Better Way. Maybe some of the commercial GHC users might be interested in donating the manpower to maintain older GHC versions. It's mostly a time-consuming QA & auditing process to maintain old GHCs. 
From johan.tibell at gmail.com Mon Oct 6 09:38:31 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 6 Oct 2014 11:38:31 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <87sij18s3a.fsf@gmail.com> References: <87sij18s3a.fsf@gmail.com> Message-ID: On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel wrote: > On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: > > The danger, of course, is that people aren't very enthusiastic about > > bug-fixing older versions of a compiler, but for > > language/compiler-uptake, this might actually be a Better Way. > > Maybe some of the commercial GHC users might be interested in donating > the manpower to maintain older GHC versions. It's mostly a > time-consuming QA & auditing process to maintain old GHCs. > What can we do to make that process cheaper? In particular, which are the manual steps in making a new GHC release today? In the long run back porting bugfixes is the route successful OSS projects take. Once people have written large enough Haskell programs they will stop jumping onto the newer version all the time and will demand backports of bug fixes. This is already happening to some extent in cabal (as cabal is tied to a ghc release which means we need to backport changes sometimes.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Mon Oct 6 09:50:03 2014 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Mon, 06 Oct 2014 10:50:03 +0100 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <87sij18s3a.fsf@gmail.com> References: <87sij18s3a.fsf@gmail.com> Message-ID: <0C95F3A3-F842-44B5-A7F3-1E86322C2F83@me.com> On 6 Oct 2014, at 10:28, Herbert Valerio Riedel wrote: > As I'm not totally sure what you mean: Assuming we already had decided > years ago to follow LTS-style, given GHC 7.0, 7.2, 7.4, 7.6, 7.8 and the > future 7.10; which of those GHC versions would you have been considered > a LTS version? We continue to use 7.2, at least partly because all newer versions of ghc have had significant bugs that affect us. In fact, 7.2.2 also has a show-stopping bug, but we patched it ourselves to create our very own custom ghc-7.2.3 distribution. Regards, Malcolm From nicolas at incubaid.com Mon Oct 6 09:51:25 2014 From: nicolas at incubaid.com (Nicolas Trangez) Date: Mon, 06 Oct 2014 11:51:25 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87sij18s3a.fsf@gmail.com> Message-ID: <1412589085.21124.13.camel@chi.nicolast.be> On Mon, 2014-10-06 at 11:38 +0200, Johan Tibell wrote: > On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel > wrote: > > > On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: > > > The danger, of course, is that people aren't very enthusiastic about > > > bug-fixing older versions of a compiler, but for > > > language/compiler-uptake, this might actually be a Better Way. > > > > Maybe some of the commercial GHC users might be interested in donating > > the manpower to maintain older GHC versions. It's mostly a > > time-consuming QA & auditing process to maintain old GHCs. > > > > What can we do to make that process cheaper? In particular, which are the > manual steps in making a new GHC release today? > > In the long run back porting bugfixes is the route successful OSS projects > take. Once people have written large enough Haskell programs they will stop > jumping onto the newer version all the time and will demand backports of > bug fixes. 
This is already happening to some extent in cabal (as cabal is > tied to a ghc release which means we need to backport changes sometimes.) Quick note/experience report: In my experience maintaining a project which has several older/supported 'production' versions, using backports is counter-productive, and we decided to manage things the other way around (which is fairly easy to do thanks to git and git-imerge): Instead of fixing things in the latest version of the product, we fix things in the *oldest* version of the product which should contain the fix (or feature, whatever). Then we regularly forward-merge version branches into the next version (especially when a release is made). So, if 1.4.x, 1.5.x, 1.6.x and 1.7.x are 'supported' versions, and some bug is found in 1.6.2, but turns out to be introduced in 1.5.1, we fix the bug in the 1.5 branch. Then, if the bugfix is important enough, we merge 1.4 in 1.5 (which can be a no-op), 1.5 in 1.6, and 1.6 into 1.7. As such, every version branch 'contains' all 'older' branches. Even though the codebase can diverge quite a bit between the 1.5 and 1.7 tree, these merges tend to be fairly easy, especially since the required changes are split between the 1.5-1.6 merge and the 1.6-1.7 merge. 'Small' commits with clear purpose, minimal changes & clear commit messages tend to help a lot in this process as well (especially when using git-imerge). The squashing performed by Phab as mentioned in some other messages on the list can be troublesome in this regard... (I'm not a big fan of squash merges myself, from a maintainer POV). Nicolas From p.k.f.holzenspies at utwente.nl Mon Oct 6 09:58:38 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 6 Oct 2014 09:58:38 +0000 Subject: Again: Uniques in GHC Message-ID: Dear all, I *finally* had some time again to look at how to properly redo Uniques in GHC. A few things stood out to me: - The export-list of Unique has some comments stating that function X is only exported for module Y, yet is used elsewhere. This may be because these comments do not show up in haddock etc. leading some people to think they're up for general use. In my refactoring, I'm sticking the restriction in the function name, so it's no longer mkUniqueGrimily, but rather mkUniqueOnlyForUniqSupply (making the name even longer should discourage their use more). If at all possible, these should be removed altogether asap. - The implementation is based on FastInts, which, on most machines nowadays, is a 64-bit thing. The serialisation in BinIface is explicitly based on Word32s. Aside from the obvious potential (albeit with a low probability) for errors, this led me to wonder about 32/64-bitness. Is there a reason for 64-bit versions of GHC to write Word32s, or is this a historic thing? Must the interface-files be bit-compatible between different versions (32/64-bits) of the compiler? Lastly, is the choice of whether "this" is a 32 or 64-bit version completely determined by WORD_SIZE_IN_BITS (MachDeps.h)? - There are a few libraries that are explicitly dependent on GHC, yet have their own versions of Unique. So far, I've seen this for Hoopl and Template Haskell. They have unboxed ints to do the job. I have not researched whether they manipulate them in any way, or just store things. If the former, I would like to call for reconsidering this, because it seems like a poor separation of concerns to me. If the latter, I think refactoring them to use Unique instead of Int# should be straightforward.
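On the Word32 question in the second bullet above: whichever width is chosen, the follow-up below argues for keeping the on-disk format independent of the host word size. A tiny, illustrative sketch of that idea, written against the binary package and a stand-in type; GHC's interface-file code uses its own Binary machinery, so none of the names here are GHC's:

    -- Illustrative only: a stand-in unique and a fixed-width, big-endian
    -- encoding that is the same regardless of whether the host is 32- or
    -- 64-bit.  (GHC's BinIface does not use Data.Binary; this just shows
    -- the "fixed width on disk" idea.)
    module DemoUniqueBinary where

    import Data.Binary.Get (Get, getWord64be)
    import Data.Binary.Put (Put, putWord64be)
    import Data.Word       (Word64)

    newtype DemoUnique = DemoUnique Int   -- backed by a machine-sized Int

    putDemoUnique :: DemoUnique -> Put
    putDemoUnique (DemoUnique u) = putWord64be (fromIntegral u :: Word64)

    getDemoUnique :: Get DemoUnique
    getDemoUnique = fmap (DemoUnique . fromIntegral) getWord64be

Writing the full 64-bit payload costs four extra bytes per unique compared to today's Word32s, but it reads back identically on every architecture; whether that trade-off is acceptable for interface-file size is exactly the question raised above.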
The point of refactoring Unique is to no longer have low-level optimisations of manually unpacking (and repacking using the MkUnique constructor), which should serve as a lovely test of how far the optimisations have come. Furthermore, it seemed that the use of characters to encode the domain was somewhat awkward, but I refer anyone interested to earlier posts on this list. Thoughts? Comments? Suggestions? Objections? Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Mon Oct 6 10:06:43 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 6 Oct 2014 12:06:43 +0200 Subject: Again: Uniques in GHC In-Reply-To: References: Message-ID: On Mon, Oct 6, 2014 at 11:58 AM, wrote: > - The export-list of Unique has some comments stating that function X is > only exported for module Y, yet is used elsewhere. This may be because > these comments do not show up in haddock etc. leading some people to think > they're up for general use. In my refactoring, I'm sticking the restriction > in the function name, so it's no longer mkUniqueGrimily, but rather > mkUniqueOnlyForUniqSupply (making the name even longer should discourage > their use more). If at all possible, these should be removed altogether > asap. > Since you're touching this code base it would be a terrific time to add some Haddocks! (We recently decided, on the ghc-devs@ list, that all new top-level entities, i.e. functions, data types, and classes, should have Haddocks.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Oct 6 10:10:21 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 6 Oct 2014 10:10:21 +0000 Subject: Again: Uniques in GHC In-Reply-To: References: , Message-ID: <069f322a8b4b4dc28d823b4fb4e83a28@EXMBX31.ad.utwente.nl> Very much part of my plan, Johan! I was a fervent "+1" on that recommendation. Ph. ? ________________________________ From: Johan Tibell Sent: 06 October 2014 12:06 To: Holzenspies, P.K.F. (EWI) Cc: ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC On Mon, Oct 6, 2014 at 11:58 AM, > wrote: - The export-list of Unique has some comments stating that function X is only exported for module Y, yet is used elsewhere. This may be because these comments do not show up in haddock etc. leading some people to think they're up for general use. In my refactoring, I'm sticking the restriction in the function name, so it's no longer mkUniqueGrimily, but rather mkUniqueOnlyForUniqSupply (making the name even longer should discourage their use more). If at all possible, these should be removed altogether asap. Since you're touching this code base it would be a terrific time to add some Haddocks! (We recently decided, on the ghc-devs@ list, that all new top-level entities, i.e. functions, data types, and classes, should have Haddocks.) -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon Oct 6 10:36:56 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 06 Oct 2014 12:36:56 +0200 Subject: Again: Uniques in GHC In-Reply-To: References: Message-ID: <1412591816.30103.10.camel@joachim-breitner.de> Hi, Am Montag, den 06.10.2014, 09:58 +0000 schrieb p.k.f.holzenspies at utwente.nl: > - The implementation is based on FastInts, which, on most machines > nowadays, is a 64-bit thing. The serialisation in BinIface is > explicitly based on Word32s. 
Aside from the obvious potential (albeit > with a low probability) for errors, this led me to wonder about > 32/64-bitness. Is there a reason for 64-bit versions of GHC to write > Word32s, or is this a historic thing? Must the interface-files be > bit-compatible between different versions (32/64-bits) of the > compiler? Lastly, is the choice of whether "this" is a 32 or 64-bit > version completely determined by WORD_SIZE_IN_BITS (MachDeps.h)? > A while ago we had problems with haddock in Debian when the serialization became bit-dependent. [1] I suggest keeping the specification of any on-disk format independent of architecture specifics. Greetings, Joachim [1] http://bugs.debian.org/586723#15 -- Joachim "nomeata" Breitner mail at joachim-breitner.de | http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de | GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From daniel.trstenjak at gmail.com Mon Oct 6 10:46:25 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Mon, 6 Oct 2014 12:46:25 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <1412589085.21124.13.camel@chi.nicolast.be> References: <87sij18s3a.fsf@gmail.com> <1412589085.21124.13.camel@chi.nicolast.be> Message-ID: <20141006104625.GA7665@machine> Hi Nicolas, > So, if 1.4.x, 1.5.x, 1.6.x and 1.7.x are 'supported' versions, and some > bug is found in 1.6.2, but turns out to be introduced in 1.5.1, we fix > the bug in the 1.5 branch. > > Then, if the bugfix is important enough, we merge 1.4 in 1.5 (which can > be a no-op), 1.5 in 1.6, and 1.6 into 1.7. As such, every version branch > 'contains' all 'older' branches. I don't like this practice, because you certainly don't want to always incorporate all commits of one release branch into another. Just think about a hackish bug fix that needed to be added in a former release, and in a newer release the problem has been solved in a completely different way, and now, if you have bad luck, the former release branch merges without conflicts into the new one, now getting the hackish fix into the new release, which might even be harmful. IMHO using cherry-picking in this case is a lot more manageable. Greetings, Daniel From hvriedel at gmail.com Mon Oct 6 11:29:30 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 06 Oct 2014 13:29:30 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <0C95F3A3-F842-44B5-A7F3-1E86322C2F83@me.com> (Malcolm Wallace's message of "Mon, 06 Oct 2014 10:50:03 +0100") References: <87sij18s3a.fsf@gmail.com> <0C95F3A3-F842-44B5-A7F3-1E86322C2F83@me.com> Message-ID: <878ukt8mhx.fsf@gmail.com> On 2014-10-06 at 11:50:03 +0200, Malcolm Wallace wrote: > On 6 Oct 2014, at 10:28, Herbert Valerio Riedel wrote: > >> As I'm not totally sure what you mean: Assuming we already had decided >> years ago to follow LTS-style, given GHC 7.0, 7.2, 7.4, 7.6, 7.8 and the >> future 7.10; which of those GHC versions would you have been considered >> a LTS version? > > > We continue to use 7.2, at least partly because all newer versions of > ghc have had significant bugs that affect us. In fact, 7.2.2 also has > a show-stopping bug, but we patched it ourselves to create our very > own custom ghc-7.2.3 distribution. 
I'd like to point out that's kinda ironic: of *all* the GHC releases, you had to stay on the one major release that was considered a tech-preview rather than a proper release... :-) Cheers, hvr From nicolas at incubaid.com Mon Oct 6 11:57:45 2014 From: nicolas at incubaid.com (Nicolas Trangez) Date: Mon, 06 Oct 2014 13:57:45 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <20141006104625.GA7665@machine> References: <87sij18s3a.fsf@gmail.com> <1412589085.21124.13.camel@chi.nicolast.be> <20141006104625.GA7665@machine> Message-ID: <1412596665.21124.22.camel@chi.nicolast.be> Hello Daniel, On Mon, 2014-10-06 at 12:46 +0200, Daniel Trstenjak wrote: > > So, if 1.4.x, 1.5.x, 1.6.x and 1.7.x are 'supported' versions, and some > > bug is found in 1.6.2, but turns out to be introduced in 1.5.1, we fix > > the bug in the 1.5 branch. > > > > Then, if the bugfix is important enough, we merge 1.4 in 1.5 (which can > > be a no-op), 1.5 in 1.6, and 1.6 into 1.7. As such, every version branch > > 'contains' all 'older' branches. > > I don't like this practice, because you certainly don't want to always > incorporate all commits of one release branch into another. > > Just think about a hackish bug fix that needed to be added in a former > release, and in a newer release the problem has been solved in a > completely different way, and now, if you have bad luck, the former > release branch merges without conflicts into the new one, now getting > the hackish fix into the new release, which might even be harmful. Agree, although I think this is less of an issue in practice because we enforce code reviews for all commits, including 'merge' commits, even if the merge was 100% automatic (hence we have PRs for '1.5-for-1.6' branches once in a while). These 'workarounds' are spotted easily during this process. Next to that, chances are fairly low that a 'hack' won't in any way conflict with a 'proper fix', since they tend to touch (a portion of) the same code most of the time (except in build systems maybe). Using 'git-imerge' helps here quite a bit as well, since the conflicts aren't buried between 100s of unrelated changes (like what git-merge does). > IMHO using cherry-picking in this case is a lot more manageable. Yet it has a (IMHO) major drawback: it requires a system next to the VCS (issue tracker or the like) to make sure all fixes are propagated to all applicable versions, which all too often results in (human) error. "Hey, we reported this against 1.6.3, and it was fixed in 1.6.4, but now we upgraded to 1.7.3 which was released after 1.6.4 and the bug is back" is no good PR. Anyway, it's not like I intend to push GHC development/maintenance in any specific direction at all, just wanted to provide an experience report :-) Regards, Nicolas From alan.zimm at gmail.com Mon Oct 6 12:59:03 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 6 Oct 2014 14:59:03 +0200 Subject: Show instance for SrcSpan Message-ID: Is there any reason I can't put in a diff request to replace the derived Show instance for SrcSpan with a handcrafted one that does not exhaustively list the constructors, making it more readable? Alan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuuzetsu at fuuzetsu.co.uk Mon Oct 6 13:11:15 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Mon, 06 Oct 2014 14:11:15 +0100 Subject: Show instance for SrcSpan In-Reply-To: References: Message-ID: <543294F3.2050007@fuuzetsu.co.uk> On 10/06/2014 01:59 PM, Alan & Kim Zimmerman wrote: > Is there any reason I can't put in a diff request to replace the derived > Show instance for SrcSpan with a handcrafted one that does not exhausively > list the constructors, making it more readable? > > Alan > Why? If you're looking for pretty output then you should be changing Outputable. -- Mateusz K. From johan.tibell at gmail.com Mon Oct 6 13:15:02 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 6 Oct 2014 15:15:02 +0200 Subject: Show instance for SrcSpan In-Reply-To: <543294F3.2050007@fuuzetsu.co.uk> References: <543294F3.2050007@fuuzetsu.co.uk> Message-ID: Aside: I really miss derived Show instances in GHC. The Outputable class is only really useful if you already know how things work. If you e.g. want to see the AST used to represent some piece of Haskell code, in all its glory, the Outputable instances aren't useful because they elide too much information. On Mon, Oct 6, 2014 at 3:11 PM, Mateusz Kowalczyk wrote: > On 10/06/2014 01:59 PM, Alan & Kim Zimmerman wrote: > > Is there any reason I can't put in a diff request to replace the derived > > Show instance for SrcSpan with a handcrafted one that does not > exhausively > > list the constructors, making it more readable? > > > > Alan > > > > Why? If you're looking for pretty output then you should be changing > Outputable. > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Mon Oct 6 13:15:56 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 6 Oct 2014 15:15:56 +0200 Subject: Show instance for SrcSpan In-Reply-To: <543294F3.2050007@fuuzetsu.co.uk> References: <543294F3.2050007@fuuzetsu.co.uk> Message-ID: True, but if you are using GHC generated stuff via the GHC API you sometimes do not want to have to implement Outputable for all your app types, when you can auto derive Show which mostly does what you need. On Mon, Oct 6, 2014 at 3:11 PM, Mateusz Kowalczyk wrote: > On 10/06/2014 01:59 PM, Alan & Kim Zimmerman wrote: > > Is there any reason I can't put in a diff request to replace the derived > > Show instance for SrcSpan with a handcrafted one that does not > exhausively > > list the constructors, making it more readable? > > > > Alan > > > > Why? If you're looking for pretty output then you should be changing > Outputable. > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mf at zerobuzz.net Mon Oct 6 13:16:46 2014 From: mf at zerobuzz.net (Matthias Fischmann) Date: Mon, 6 Oct 2014 15:16:46 +0200 Subject: Show instance for SrcSpan In-Reply-To: References: Message-ID: <20141006131646.GN1655@lig> On Mon, Oct 06, 2014 at 02:59:03PM +0200, Alan & Kim Zimmerman wrote: > Date: Mon, 6 Oct 2014 14:59:03 +0200 > From: Alan & Kim Zimmerman > To: "ghc-devs at haskell.org" > Subject: Show instance for SrcSpan > > Is there any reason I can't put in a diff request to replace the derived > Show instance for SrcSpan with a handcrafted one that does not exhausively > list the constructors, making it more readable? > > Alan I like the notion that `Read . Show` should always work. Dropping parts of a datatype would break that rule. matthias From alan.zimm at gmail.com Mon Oct 6 13:27:49 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 6 Oct 2014 15:27:49 +0200 Subject: Show instance for SrcSpan In-Reply-To: <20141006131646.GN1655@lig> References: <20141006131646.GN1655@lig> Message-ID: Not at all, just show as e.g. SrcSpan (RealSrcSpan (SrcSpanOneLine "./foo.hs" 4 1 6)) We just avoid showing the srcSpanFile / srcSpanLine / srcSpanSCol / srcSpanECol noise On Mon, Oct 6, 2014 at 3:16 PM, Matthias Fischmann wrote: > On Mon, Oct 06, 2014 at 02:59:03PM +0200, Alan & Kim Zimmerman wrote: > > Date: Mon, 6 Oct 2014 14:59:03 +0200 > > From: Alan & Kim Zimmerman > > To: "ghc-devs at haskell.org" > > Subject: Show instance for SrcSpan > > > > Is there any reason I can't put in a diff request to replace the derived > > Show instance for SrcSpan with a handcrafted one that does not > exhausively > > list the constructors, making it more readable? > > > > Alan > > > I like the notion that `Read . Show` should always work. Dropping > parts of a datatype would break that rule. > > matthias > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mf at zerobuzz.net Mon Oct 6 13:37:13 2014 From: mf at zerobuzz.net (Matthias Fischmann) Date: Mon, 6 Oct 2014 15:37:13 +0200 Subject: Show instance for SrcSpan In-Reply-To: References: <20141006131646.GN1655@lig> Message-ID: <20141006133713.GO1655@lig> Ah, you want to drop constructor field names. Then I withdraw my reservations. thanks for the clarification, m. On Mon, Oct 06, 2014 at 03:27:49PM +0200, Alan & Kim Zimmerman wrote: > Date: Mon, 6 Oct 2014 15:27:49 +0200 > From: Alan & Kim Zimmerman > To: Matthias Fischmann > Cc: "ghc-devs at haskell.org" > Subject: Re: Show instance for SrcSpan > > Not at all, just show as e.g. > > SrcSpan (RealSrcSpan (SrcSpanOneLine "./foo.hs" 4 1 6)) > > We just avoid showing the srcSpanFile / srcSpanLine / srcSpanSCol / > srcSpanECol noise > > > On Mon, Oct 6, 2014 at 3:16 PM, Matthias Fischmann wrote: > > > On Mon, Oct 06, 2014 at 02:59:03PM +0200, Alan & Kim Zimmerman wrote: > > > Date: Mon, 6 Oct 2014 14:59:03 +0200 > > > From: Alan & Kim Zimmerman > > > To: "ghc-devs at haskell.org" > > > Subject: Show instance for SrcSpan > > > > > > Is there any reason I can't put in a diff request to replace the derived > > > Show instance for SrcSpan with a handcrafted one that does not > > exhausively > > > list the constructors, making it more readable? > > > > > > Alan > > > > > > I like the notion that `Read . Show` should always work. Dropping > > parts of a datatype would break that rule. 
> > > > matthias > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > From alan.zimm at gmail.com Mon Oct 6 13:48:08 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 6 Oct 2014 15:48:08 +0200 Subject: Show instance for SrcSpan In-Reply-To: <20141006133713.GO1655@lig> References: <20141006131646.GN1655@lig> <20141006133713.GO1655@lig> Message-ID: To get even more heretical, a quick grep of the compiler source tree does not show SrcSpanOneLine / SrcSpanMultiLine / SrcSpanPoint being used anywhere but in SrcLoc.lhs. The use in SrcLoc is in the original constructor, the deconstructors pulling out lines and columns, and combineSrcSpans. Perhaprs our SrcSpan is more complicated than it needs to be? On Mon, Oct 6, 2014 at 3:37 PM, Matthias Fischmann wrote: > > Ah, you want to drop constructor field names. Then I withdraw my > reservations. > > thanks for the clarification, > m. > > > > On Mon, Oct 06, 2014 at 03:27:49PM +0200, Alan & Kim Zimmerman wrote: > > Date: Mon, 6 Oct 2014 15:27:49 +0200 > > From: Alan & Kim Zimmerman > > To: Matthias Fischmann > > Cc: "ghc-devs at haskell.org" > > Subject: Re: Show instance for SrcSpan > > > > Not at all, just show as e.g. > > > > SrcSpan (RealSrcSpan (SrcSpanOneLine "./foo.hs" 4 1 6)) > > > > We just avoid showing the srcSpanFile / srcSpanLine / srcSpanSCol / > > srcSpanECol noise > > > > > > On Mon, Oct 6, 2014 at 3:16 PM, Matthias Fischmann > wrote: > > > > > On Mon, Oct 06, 2014 at 02:59:03PM +0200, Alan & Kim Zimmerman wrote: > > > > Date: Mon, 6 Oct 2014 14:59:03 +0200 > > > > From: Alan & Kim Zimmerman > > > > To: "ghc-devs at haskell.org" > > > > Subject: Show instance for SrcSpan > > > > > > > > Is there any reason I can't put in a diff request to replace the > derived > > > > Show instance for SrcSpan with a handcrafted one that does not > > > exhausively > > > > list the constructors, making it more readable? > > > > > > > > Alan > > > > > > > > > I like the notion that `Read . Show` should always work. Dropping > > > parts of a datatype would break that rule. > > > > > > matthias > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Oct 6 14:28:23 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 6 Oct 2014 14:28:23 +0000 Subject: Show instance for SrcSpan In-Reply-To: References: <543294F3.2050007@fuuzetsu.co.uk>, Message-ID: <83aac4f930d840b3b8150d496efe7994@EXMBX31.ad.utwente.nl> The way I read Alan's earlier mail is precisely that; auto-generated Show does what he wants (show the entire AST), whereas Outputable hides too much information. I very much understand his frustration with having to manually figure out what constructors and datatypes go where in a compiled program. Alan's point was the *absence* of auto derived Show instances and, in the case of SrcSpan, too much verbosity (rather than wanting stuff to be incomplete). Allowing some bespoke stuff to reduce the noise of something like record field names for SrcSpan makes even more sense in this context. Similarly, this is why Alan & I want everything to have Data instances, so you can (amongst many other nice things) selectively print parts of the AST. Ph. 
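For what it's worth, the kind of selective printing that ubiquitous Data instances enable can be sketched in a few lines. The toy Expr type and the use of the syb package below are illustrative assumptions, not GHC's real AST or API:

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data (Data, Typeable)
import Data.Generics (listify)   -- from the syb package

-- A tiny, made-up expression type standing in for GHC's much larger AST.
data Expr = Var String | Lit Int | App Expr Expr
  deriving (Show, Data, Typeable)

-- With a Data instance we can pull out just the pieces we care about,
-- rather than Show-ing (or writing Outputable for) the whole tree.
literals :: Expr -> [Int]
literals = listify (const True)

-- literals (App (App (Var "plus") (Lit 1)) (Lit 2))  ==>  [1,2]

The same pattern (listify, everything, and friends from syb) is what turns "print just this part of the tree" into a one-liner once Data instances are available everywhere.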
________________________________ From: Alan & Kim Zimmerman Sent: 06 October 2014 15:15 To: Mateusz Kowalczyk Cc: ghc-devs at haskell.org Subject: Re: Show instance for SrcSpan True, but if you are using GHC generated stuff via the GHC API you sometimes do not want to have to implement Outputable for all your app types, when you can auto derive Show which mostly does what you need. On Mon, Oct 6, 2014 at 3:11 PM, Mateusz Kowalczyk > wrote: On 10/06/2014 01:59 PM, Alan & Kim Zimmerman wrote: > Is there any reason I can't put in a diff request to replace the derived > Show instance for SrcSpan with a handcrafted one that does not exhausively > list the constructors, making it more readable? > > Alan > Why? If you're looking for pretty output then you should be changing Outputable. -- Mateusz K. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From omefire at yahoo.fr Mon Oct 6 15:23:39 2014 From: omefire at yahoo.fr (Omar Mefire) Date: Mon, 6 Oct 2014 16:23:39 +0100 Subject: Stepping through ghc Message-ID: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> Hi, I'm new to ghc codebase and I'm interested in stepping through the code in order to gain a better idea of how it all works. - Is there a way to load ghc into ghci ? and debug through it ? - What are the ways experienced ghc devs step through the code ? - Any techniques you guys recommend ? - What would your advices be for a newbie to the source code ? Thanks, Omar Mefire, -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Mon Oct 6 15:41:38 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Mon, 6 Oct 2014 17:41:38 +0200 Subject: GitHub pull requests In-Reply-To: <5432606E.70402@chalmers.se> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> <5432606E.70402@chalmers.se> Message-ID: On Mon, Oct 6, 2014 at 11:27 AM, Andreas Abel wrote: > This is also the thing that worries most about arc: Squashing > commits. Splitting commits into > > * things that only do whitespace changes > * things that only add comments > * things that only refactor > * things that actually introduce a semantic change > > *is* very valuable also for the efficency of the review process. Having separate commits also increases the chances of finding the faulty diff with git-bisect. Also, reviewing is usually easier with split commits, and it's simpler to provide descriptive per-commit explanations within each commit message. That being said, temp/fixup/backup commits should be squashed prior to submitting a commit/patch series, and I suppose that's the rationale behind arc's behavior. > On 05.10.2014 19:13, Tuncer Ayaz wrote: > > > > On Sun, Oct 5, 2014 at 4:32 PM, Ben Gamari wrote: > > > > > > 5.a. I reflected on the mild shock of seeing that `arc` had > > > squashed my carefully crafted patch set into a single > > > commit. This still bothers me to this day. > > > > > > I second 5.a, but does it have to be this way, or can arc be > > instructed to not squash commits? 
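For readers less familiar with the split-commit / fixup workflow described above, the usual git incantations look roughly like this (the commit id, branch and tag names are made up for the example):

# keep logical changes as separate commits, but mark small touch-ups as fixups
git commit --fixup=abc1234

# before submitting, fold the fixup commits into the commits they amend
git rebase -i --autosquash origin/master

# the payoff of split commits when hunting a regression later
git bisect start
git bisect bad HEAD
git bisect good ghc-7.8.3-release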
From tuncer.ayaz at gmail.com Mon Oct 6 15:54:44 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Mon, 6 Oct 2014 17:54:44 +0200 Subject: GitHub pull requests In-Reply-To: <1412541852.21551.4.camel@joachim-breitner.de> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <1412541852.21551.4.camel@joachim-breitner.de> Message-ID: On Sun, Oct 5, 2014 at 10:44 PM, Joachim Breitner wrote: > Hi, > > > Am Sonntag, den 05.10.2014, 19:20 +0200 schrieb Tuncer Ayaz: > > There's also the problem that Github's review system is not as > > powerful and most importantly does not preserve history like > > Gerrit or Phabricator do. Once used to it, maintainers probably > > won't be happy to lose productivity due to the simplistic review > > system. > > I don't think this is a reason to forbid them alltogether. We could > say: "We prefer submissions via Phabricator, especially for larger > patches, but if you like, you can use GitHub as well - your > contributions is welcome in any case." > > We are talking about small contributions and entry-barriers here, > and a for a documentation patch or similarly small contributions, we > don't need the full power of Phabricator. When new contributors > start to engage more deeply they will not mind learning Phabricator. > But they will be much more motivated to do so when they have already > successfully contributed something. Sure, it should certainly be tried. By the way, while the Github team has no public ticket system, they are very responsive when you send them feature requests or, say, explain where the review system is incomplete/broken. They never promise anything and do not pre-announce a feature, so it is hard to track future changes. However, they're responsive and seem to value majority opinion. Here's the page: https://github.com/support From ezyang at mit.edu Mon Oct 6 16:01:06 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 06 Oct 2014 10:01:06 -0600 Subject: GitHub pull requests In-Reply-To: <5432606E.70402@chalmers.se> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> <5432606E.70402@chalmers.se> Message-ID: <1412611136-sup-6932@sabre> To be completely clear, arc does not FORCE you to squash commits. You can simply arc diff each commit in question seperately. Now, it is certainly true that arc does not make this easy to do. See: https://secure.phabricator.com/T5636 Edward Excerpts from Andreas Abel's message of 2014-10-06 03:27:10 -0600: > This is also the thing that worries most about arc: Squashing commits. > Splitting commits into > > * things that only do whitespace changes > * things that only add comments > * things that only refactor > * things that actually introduce a semantic change > > *is* very valuable also for the efficency of the review process. > > On 05.10.2014 19:13, Tuncer Ayaz wrote: > > On Sun, Oct 5, 2014 at 4:32 PM, Ben Gamari wrote: > >> 5.a. I reflected on the mild shock of seeing that `arc` had squashed my > >> carefully crafted patch set into a single commit. This still > >> bothers me to this day. > > > > I second 5.a, but does it have to be this way, or can arc be > > instructed to not squash commits? 
> From mail at joachim-breitner.de Mon Oct 6 17:32:44 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 06 Oct 2014 19:32:44 +0200 Subject: GitHub pull requests In-Reply-To: References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <1412541852.21551.4.camel@joachim-breitner.de> Message-ID: <1412616764.8286.0.camel@joachim-breitner.de> Hi, Am Montag, den 06.10.2014, 17:54 +0200 schrieb Tuncer Ayaz: > By the way, while the Github team has no public ticket system, they > are very responsive when you send them feature requests or, say, > explain where the review system is incomplete/broken. They never > promise anything and do not pre-announce a feature, so it is hard to > track future changes. However, they're responsive and seem to value > majority opinion. > > Here's the page: https://github.com/support good idea, I sent them a message which is basically http://stackoverflow.com/questions/26204811/prevent-github-from-interpreting-nnnn-in-commit-messages Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From eir at cis.upenn.edu Mon Oct 6 14:32:23 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 6 Oct 2014 10:32:23 -0400 Subject: GitHub pull requests In-Reply-To: <5432606E.70402@chalmers.se> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> <5432606E.70402@chalmers.se> Message-ID: I think the "arc" barrier is significant. Personally, while I feel quite comfortable hacking on Haskell code, system tools are always a bit of a mystery. Those of you who help manage the infrastructure may feel like the tools are easy enough to pick up, but I'm sure there are many competent Haskell programmers out there who dread learning new tooling. (Perhaps I'm a representative sample of this set. I still say `git help merge` before just about every merge, just to make sure that I'm remembering the concepts correctly.) I absolutely believe that we should use the best tools available and that committed GHC contributors should have to learn these tools as necessary. Though I've had my problems with Phab and `arc`, I'm confident that this tool was chosen after a deliberative process and am grateful that we have leaders in this area in our midst. All that said, I think that the suggestion just to accept GitHub pull requests will lead to confusion, if only for the namespace problem. If we start to accept pull requests, then we are de facto going to have to deal with both the GH issue tracker and Trac's (and Phab's), and that is a terrible place to be. Part of the automated response to pull request submissions could be a post on the GH pull request record pointing folks to the Phab review that was created in response. The pull request would then be closed. I agree with the comment that users will be more committed to learn Phab once they have contributed. That's why I wanted to point them to Phab in the automated response to the GH pull request. 
I think there's a psychological commitment made by a person once they click "submit pull request" and they will be happy enough to follow up on Phab, especially if commenting and such doesn't require the installation of a local tool. Richard From simonpj at microsoft.com Mon Oct 6 18:10:37 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 6 Oct 2014 18:10:37 +0000 Subject: Show instance for SrcSpan In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F319EA3@DB3PRD3001MB020.064d.mgd.msft.net> By all means do so S From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman Sent: 06 October 2014 13:59 To: ghc-devs at haskell.org Subject: Show instance for SrcSpan Is there any reason I can't put in a diff request to replace the derived Show instance for SrcSpan with a handcrafted one that does not exhausively list the constructors, making it more readable? Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Mon Oct 6 18:17:12 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 06 Oct 2014 14:17:12 -0400 Subject: GitHub pull requests In-Reply-To: References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87a95brtu2.fsf@gmail.com> <5430FF7B.6060207@chalmers.se> <877g0esi2p.fsf@gmail.com> <5432606E.70402@chalmers.se> Message-ID: <87sij1qd07.fsf@gmail.com> Richard Eisenberg writes: > I absolutely believe that we should use the best tools available and > that committed GHC contributors should have to learn these tools as > necessary. Though I've had my problems with Phab and `arc`, I'm > confident that this tool was chosen after a deliberative process and > am grateful that we have leaders in this area in our midst. > Agreed. Phab certainly has a learning curve and is not without its papercuts but on the whole seems to be an excellent tool. > All that said, I think that the suggestion just to accept GitHub pull > requests will lead to confusion, if only for the namespace problem. If > we start to accept pull requests, then we are de facto going to have > to deal with both the GH issue tracker and Trac's (and Phab's), and > that is a terrible place to be. Part of the automated response to pull > request submissions could be a post on the GH pull request record > pointing folks to the Phab review that was created in response. The > pull request would then be closed. > This is where I was going with the beginning of a script I posted on Saturday. To me this seems like an excellent compromise: using the familiarity of Github to attract contributions and (hopefully) siphon them into Phabricator. The numbering conflicts may still be problematic but I suspect that in practice people will learn that the Github numbers are meaningless fairly quickly. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From rarash at student.chalmers.se Mon Oct 6 18:50:25 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Mon, 6 Oct 2014 20:50:25 +0200 Subject: Stepping through ghc In-Reply-To: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> References: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> Message-ID: <5432E471.5070703@student.chalmers.se> Hi Omar, You might want to narrow your scope to one part of GHC. For example, I mostly focused on the Run Time System to be able to conduct my master's thesis. 
Also, you might want to start off with a tiny goal, like to fix bug XYZ. Oh, and most important of all is the Commentary, which I think of as a wiki for GHC developers. http://ghc.haskell.org/trac/ghc/wiki/Commentary Cheers, Arash On 2014-10-06 17:23, Omar Mefire wrote: > Hi, > > I'm new to ghc codebase and I'm interested in stepping through the > code in order to gain a better idea of how it all works. > > - Is there a way to load ghc into ghci ? and debug through it ? > - What are the ways experienced ghc devs step through the code ? > - Any techniques you guys recommend ? > - What would your advices be for a newbie to the source code ? > > Thanks, > Omar Mefire, > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Oct 6 19:55:17 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 6 Oct 2014 19:55:17 +0000 Subject: Again: Uniques in GHC In-Reply-To: <1412591816.30103.10.camel@joachim-breitner.de> References: , <1412591816.30103.10.camel@joachim-breitner.de> Message-ID: <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> Dear Joachim, Although I can't quite get what you're saying from the posts on that link, I'm not immediately sure what you're saying should extend to hi-files. These files are very much specific to the compiler version you're using, as in, new GHCs add stuff to them all the time and their binary format does not (seem to) provision for being able to skip unknown things (i.e. it doesn't say how large the next semantic block is in the hi-file). If we're going to keep the formats the same for any architecture, we're going to have to limit 64-bit machines to 32-bit (actually 30-bits, another thing I don't quite understand in BinIface) Uniques. There seem to be possibilities to alleviate the issues with parallel generation of fresh Uniques in a parallel version of GHC. The idea is that, since 64-bits is more than we'll ever assign anyway, to use a few for thread-ids, so we would guarantee non-conflicting Uniques generated by different threads. Anyway, maybe someone a tad more knowledgeable about Uniques could maybe tell me on what scale Uniques in the hi-files should be unique? Must they only be non-conflicting in a Module? In a Package? If I first compile a file with GHC and then, in a separate invocation of GHC, compile another, surely their hi-files will have some of the same Uniques for their own, different things? Where are these conflicts resolved when linking multiple independently compiled files? Are they ever? Regards, Philip ? ________________________________ From: Joachim Breitner Sent: 06 October 2014 12:36 Subject: Re: Again: Uniques in GHC A while ago we had problems with haddock in Debian when the serialization became bit-dependent.^1 I suggest to keep the specification of any on-disk format independent of architecture specifics. Greetings, Joachim ^1 http://bugs.debian.org/586723#15 -------------- next part -------------- An HTML attachment was scrubbed... 
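To make the two ideas above concrete (a fixed 64-bit on-disk encoding regardless of host word size, and reserving a few high bits for a thread id), here is a rough sketch using the binary package and a made-up wrapper type; GHC's real Unique and its BinIface encoding are of course different:

import Data.Binary.Put (Put, putWord64be)
import Data.Binary.Get (Get, getWord64be)
import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word64)

-- Made-up stand-in for a Unique: just a 64-bit payload.
newtype U = U Word64

-- Always write all 64 bits, so the serialised form is identical on 32-bit
-- and 64-bit hosts; a 32-bit compiler simply never produces the upper bits.
putU :: U -> Put
putU (U w) = putWord64be w

getU :: Get U
getU = fmap U getWord64be

-- One way to make parallel generation conflict-free: dedicate the top
-- 8 bits to a thread id and leave 56 bits for the per-thread counter.
mkThreadedU :: Word64 -> Word64 -> U    -- thread id, counter
mkThreadedU tid n = U ((tid `shiftL` 56) .|. (n .&. counterMask))
  where counterMask = (1 `shiftL` 56) - 1

threadOf :: U -> Word64
threadOf (U w) = w `shiftR` 56

Whether reserving bits like this is acceptable on 32-bit hosts is exactly the trade-off being discussed in this thread.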
URL: From mail at joachim-breitner.de Mon Oct 6 20:08:06 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 06 Oct 2014 22:08:06 +0200 Subject: Again: Uniques in GHC In-Reply-To: <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> References: , <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> Message-ID: <1412626086.9628.1.camel@joachim-breitner.de> Hi, Am Montag, den 06.10.2014, 19:55 +0000 schrieb p.k.f.holzenspies at utwente.nl: > Although I can't quite get what you're saying from the posts on that > link, I'm not immediately sure what you're saying should extend to > hi-files. These files are very much specific to the compiler version > you're using, as in, new GHCs add stuff to them all the time and their > binary format does not (seem to) provision for being able to skip > unknown things (i.e. it doesn't say how large the next semantic block > is in the hi-file). Some of this may not be true, but to my knowledge (part of) that interface reading code is (or was?) used by haddock when generating its .haddock file. > If we're going to keep the formats the same for any architecture, > we're going to have to limit 64-bit machines to 32-bit (actually > 30-bits, another thing I don't quite understand in BinIface) Uniques. Why? You can just serialize Uniques always as 64 bit numbers, even on 32-bit systems. This way, the data format is the same across architectures, with little cost. > There seem to be possibilities to alleviate the issues with parallel > generation of fresh Uniques in a parallel version of GHC. The idea is > that, since 64-bits is more than we'll ever assign anyway, to use a > few for thread-ids, so we would guarantee non-conflicting Uniques > generated by different threads. > But that would only work on 64 bit systems, right? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From carter.schonwald at gmail.com Mon Oct 6 22:01:09 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Oct 2014 18:01:09 -0400 Subject: Stepping through ghc In-Reply-To: <5432E471.5070703@student.chalmers.se> References: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> <5432E471.5070703@student.chalmers.se> Message-ID: Yeah , picking a single subsystem to get started is a very good and healthy idea. On Oct 6, 2014 2:50 PM, "Arash Rouhani" wrote: > Hi Omar, > > You might want to narrow your scope to one part of GHC. For example, I > mostly focused on the Run Time System to be able to conduct my master's > thesis. Also, you might want to start off with a tiny goal, like to fix bug > XYZ. > > Oh, and most important of all is the Commentary, which I think of as a > wiki for GHC developers. http://ghc.haskell.org/trac/ghc/wiki/Commentary > > Cheers, > Arash > > On 2014-10-06 17:23, Omar Mefire wrote: > > Hi, > > I'm new to ghc codebase and I'm interested in stepping through the code in > order to gain a better idea of how it all works. > > - Is there a way to load ghc into ghci ? and debug through it ? > - What are the ways experienced ghc devs step through the code ? > - Any techniques you guys recommend ? 
> - What would your advices be for a newbie to the source code ? > > Thanks, > Omar Mefire, > > > _______________________________________________ > ghc-devs mailing listghc-devs at haskell.orghttp://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Mon Oct 6 23:22:09 2014 From: jwlato at gmail.com (John Lato) Date: Tue, 7 Oct 2014 07:22:09 +0800 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87sij18s3a.fsf@gmail.com> Message-ID: On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell wrote: > On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel < > hvriedel at gmail.com> wrote: > >> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: >> > The danger, of course, is that people aren't very enthusiastic about >> > bug-fixing older versions of a compiler, but for >> > language/compiler-uptake, this might actually be a Better Way. >> >> Maybe some of the commercial GHC users might be interested in donating >> the manpower to maintain older GHC versions. It's mostly a >> time-consuming QA & auditing process to maintain old GHCs. >> > > What can we do to make that process cheaper? In particular, which are the > manual steps in making a new GHC release today? > I would very much like to know this as well. For ghc-7.8.3 there were a number of people volunteering manpower to finish up the release, but to the best of my knowledge those offers weren't taken up, which makes me think that the extra overhead for coordinating more people would outweigh any gains. From the outside, it appears that the process/workflow could use some improvement, perhaps in ways that would make it simpler to divide up the workload. John L. -------------- next part -------------- An HTML attachment was scrubbed... URL: From murray at sonology.net Mon Oct 6 23:59:55 2014 From: murray at sonology.net (Murray Campbell) Date: Mon, 6 Oct 2014 16:59:55 -0700 Subject: Errors building GHC on iOS with LLVM >= 3.4 In-Reply-To: <87eguns0u9.fsf@gmail.com> References: <87eguns0u9.fsf@gmail.com> Message-ID: On Sat, Oct 4, 2014 at 7:32 PM, Ben Gamari wrote: > Murray Campbell writes: [snip] >> before bailing with >> >> /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7.s:3916:2: >> error: out of range pc-relative fixup value >> vldr d8, LCPI70_0 >> ^ >> >> /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7s.s:3916:2: >> error: out of range pc-relative fixup value >> vldr d8, LCPI70_0 >> ^ >> >> Next I tried to build HEAD (plus phabricator D208) with LLVM 3.4 but >> got the same error. >> > I've never seen an error of this form. What symbol definitions does this > error occur in? I have attached a gzipped version of the *-armv7.s file. The one I attached is from a build with LLVM 3.5. I had to apply D208 & D155 to get it to compile. I also had to get the ghc-ios-scripts/arm-apple-darwin10-clang script to pick up the homebrew clang rather than the apple one to get around an 'unknown directive: .maosx_version_min' error. However, the vldr error is identical to that with LLVM 3.4 building 7.8.3. I can get a straight 7.8.3 with LLVM 3.4 version if that would help. The error in the attached file is at line 5988 just below '_c3pb_info$def: '. 
This is below '_integerzmsimple_GHCziIntegerziType_doubleFromPositive_info$def:' The last lines of build instructions before the error are: "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -H64m -O0 -this-package-key integ_FpVba29yPwl8vdmOmO0xMS -hide-all-packages -i -ilibraries/integer-simple/. -ilibraries/integer-simple/dist-install/build -ilibraries/integer-simple/dist-install/build/autogen -Ilibraries/integer-simple/dist-install/build -Ilibraries/integer-simple/dist-install/build/autogen -Ilibraries/integer-simple/. -optP-include -optPlibraries/integer-simple/dist-install/build/autogen/cabal_macros.h -package-key ghcpr_BE58KUgBe9ELCsPXiJ1Q2r -this-package-key integer-simple -Wall -XHaskell2010 -XCPP -XMagicHash -XBangPatterns -XUnboxedTuples -XUnliftedFFITypes -XNoImplicitPrelude -O -fllvm -no-user-package-db -rtsopts -odir libraries/integer-simple/dist-install/build -hidir libraries/integer-simple/dist-install/build -stubdir libraries/integer-simple/dist-install/build -c libraries/integer-simple/./GHC/Integer/Type.hs -o libraries/integer-simple/dist-install/build/GHC/Integer/Type.o You are using a new version of LLVM that hasn't been tested yet! We will try though... /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc80302_0/ghc80302_6-armv7.s:5988:2: error: out of range pc-relative fixup value vldr d8, LCPI102_0 ^ /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc80302_0/ghc80302_6-armv7s.s:5988:2: error: out of range pc-relative fixup value vldr d8, LCPI102_0 ^ -------------- next part -------------- A non-text attachment was scrubbed... Name: ghc80302_6-arm7.s.gz Type: application/x-gzip Size: 42068 bytes Desc: not available URL: From austin at well-typed.com Tue Oct 7 00:45:09 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 6 Oct 2014 19:45:09 -0500 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87sij18s3a.fsf@gmail.com> Message-ID: The steps for making a GHC release are here: https://ghc.haskell.org/trac/ghc/wiki/MakingReleases So, for the record, making a release is not *that* arduous, but it does take time. On average it will take me about 1 day or so to go from absolutely-nothing to release announcement: 1. Bump version, update configure.ac, tag. 2. Build source tarball (this requires 1 build, but can be done very quickly). 3. Make N binary builds for each platform (the most time consuming part, as this requires heavy optimizations in the builds). 4. Upload documentation for all libraries. 5. Update webpage and upload binaries. 6. Send announcement. 7. Upload binaries from other systems later. Herbert has graciously begun taking care of stewarding and uploading the libraries. So, there are a few steps we could introduce to alleviate this process technically in a few ways, but ultimately all of these have to happen, pretty much (regardless of the automation involved). But I don't think this is the real problem. The real problem is that GHC moves forward in terms of implementation extremely, extremely quickly. It is not clear how to reconcile this development pace with something like needing dozens of LTS releases for a stable version. At least, not without a lot of concentrated effort from almost every single developer. A lot of it can be alleviated through social process perhaps, but it's not strictly technical IMO. What do I mean by that? I mean that: - We may introduce a feature in GHC version X.Y - That might have a bug, or other problems. 
- We may fix it, and in the process, fix up a few other things and refactor HEAD, which will be GHC X.Y+2 eventually. - Repeat steps 2-3 a few times. - Now we want to backport the fixes for that feature in HEAD back to X.Y. - But GHC X.Y has *significantly* diverged from HEAD in that timeframe, because of step 3 being repeated! In other words: we are often so aggressive at refactoring code that the *act* of backporting in and of itself can be complicated, and it gets harder as time goes on - because often the GHC of a year ago is so much different than the GHC of today. As a concrete example of this, let's look at the changes between GHC 7.8.2 and GHC 7.8.3: https://github.com/ghc/ghc/compare/ghc-7.8.2-release...ghc-7.8.3-release There are about ~110 commits between 7.8.2 and 7.8.3. But as the 7.8 branch lived on, backporting fixes became significantly more complex. In fact, I estimate close to 30 of those commits were NOT direct 7.8 requirements - but they were brought in because _actual fixes_ were dependent on them, in non-trivial ways. Take for example f895f33 by Simon PJ, which fixes #9023. The problem with f895f33 is that by the time we fixed the bug in HEAD with that commit, the history had changed significantly from the branch. In order to get f895f33 to plant easily, I had to backport *at least* 12 to 15 other commits, which it was dependent upon, and commits those commits were dependent upon, etc etc. I did not see any non-trivial way to do this otherwise. I believe at one point Gergo backported some of his fixes to 7.8, which had since become 'non applicable' (and I thank him for that greatly), but inevitably we instead brought along the few extra changes anyway, since they were *still* needed for other fixes. And some of them had API changes. So the choice was to rewrite 4 patches for an old codebase completely (the work being done by two separate people) or backport a few extra patches. The above is obviously an extreme case. But it stands to reason this would _only happen again_ with 7.8.4, probably even worse since more months of development have gone by. An LTS release would mandate things like no-API-changes-at-all, but this significantly limits our ability to *actually* backport patches sometimes, like the above, due to dependent changes. The alternative, obviously, is to do what Gergo did and manually re-write such a fix for the older branch. But that means we would have had to do that for *every patch* in the same boat, including 2 or 3 other fixes we needed! Furthermore, while I am a release manager and do think I know a bit about GHC, it is hopeless to expect me to know it all. I will absolutely require coordinated effort to help develop 'retropatches' that don't break API compatibility, from active developers who are involved in their respective features. And they are almost all volunteers! Simon and I are the only ones who wouldn't qualify on that. So - at what point does it stop becoming 'backporting fixes to older versions' and instead become literally "working on the older version of the compiler AND the new one in tandem"? Given our rate of churn and change internally, this seems like it would be a significant burden in general to ask of developers. If we had an LTS release of GHC that lasted 3 years for example, that would mean developers are expected to work on the current code of their own, *and their old code for the next three years*. That is an absolutely, undeniably a _huge_ investment to ask of someone. 
It's not clear how many can actually hold it (and I don't blame them). This email is already a bit long (which is extremely unusual for my emails, I'm sure you all know), but I just wanted to give some insight on the process. I think the technical/automation aspects are the easy part. We could probably fully automate the GHC release process in days, if one or two people worked on it dilligently. The hard part is actually balancing the needs and time of users and developers, which is a complex relationship. On Mon, Oct 6, 2014 at 6:22 PM, John Lato wrote: > On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell wrote: >> >> On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel >> wrote: >>> >>> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: >>> > The danger, of course, is that people aren't very enthusiastic about >>> > bug-fixing older versions of a compiler, but for >>> > language/compiler-uptake, this might actually be a Better Way. >>> >>> Maybe some of the commercial GHC users might be interested in donating >>> the manpower to maintain older GHC versions. It's mostly a >>> time-consuming QA & auditing process to maintain old GHCs. >> >> >> What can we do to make that process cheaper? In particular, which are the >> manual steps in making a new GHC release today? > > > I would very much like to know this as well. For ghc-7.8.3 there were a > number of people volunteering manpower to finish up the release, but to the > best of my knowledge those offers weren't taken up, which makes me think > that the extra overhead for coordinating more people would outweigh any > gains. From the outside, it appears that the process/workflow could use > some improvement, perhaps in ways that would make it simpler to divide up > the workload. > > John L. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From p.k.f.holzenspies at utwente.nl Tue Oct 7 06:32:21 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Tue, 7 Oct 2014 06:32:21 +0000 Subject: Again: Uniques in GHC In-Reply-To: <1412626086.9628.1.camel@joachim-breitner.de> References: , <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl>, <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: Dear Joachim, > Some of this may not be true, but to my knowledge (part of) that > interface reading code is (or was?) used by haddock when generating > its .haddock file. Ah, well, I didn't know this. How does haddock use this code? If Haddock uses the GHC-API to do this; problem solved, because we're back at the specific compiler version that generated it. Otherwise... we may be in trouble. > Why? You can just serialize Uniques always as 64 bit numbers, even on > 32-bit systems. This way, the data format is the same across > architectures, with little cost. Ah, the cost is this; if we start relying on the 64-bitness for uniqueness (which may well happen; there are categories - currently characters - used only for four compile-time known Uniques, waisting 30 - 8 - 2 = 20 bits), this will break the 32-bit compilers. Arguably, their breakage should reject the change leading to the waisted Uniques. Seems a risk, though. Similarly to how currently Uniques are 64-bits, but serialised as 30. 
Alternatively, 32-bit GHC could use 64-bit Uniques, but that is going to give you quite the performance hit (speculating here). > But that would only work on 64 bit systems, right? Yes, this approach to a parallel GHC would only work on 64-bit machines. The idea is, I guess, that we're not going to see a massive demand for parallel GHC running on multi-core 32-bit systems. In other words; 32-bit systems wouldn't get a parallel GHC. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From shumovichy at gmail.com Tue Oct 7 06:43:06 2014 From: shumovichy at gmail.com (Yuras Shumovich) Date: Tue, 07 Oct 2014 09:43:06 +0300 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87sij18s3a.fsf@gmail.com> Message-ID: <1412664186.2619.1.camel@gmail.com> Hello, Note: you actually don't have to backport anything. Leave it for people how are interested in LTS release. As haskell enthusiast, I like all the features GHC comes with each release. But as working haskell programmer I'm tired. All my code I wrote at work will probably work with ghc-6.8, but I have to switch to newer ghc twice a year. (The last time it was because gcc/clang issue on mac os) LTS release means you MAY backport fixes. If you want or have time, if there are people interested in that, etc. Probably we'll have more chances that hackage libraries will support LTS releases longer then they support regular releases now. As a result it will be easer to introduce breaking changes like AMP or Traversable/Foldable proposal. Thanks, Yuras On Mon, 2014-10-06 at 19:45 -0500, Austin Seipp wrote: > The steps for making a GHC release are here: > https://ghc.haskell.org/trac/ghc/wiki/MakingReleases > > So, for the record, making a release is not *that* arduous, but it > does take time. On average it will take me about 1 day or so to go > from absolutely-nothing to release announcement: > > 1. Bump version, update configure.ac, tag. > 2. Build source tarball (this requires 1 build, but can be done very quickly). > 3. Make N binary builds for each platform (the most time consuming > part, as this requires heavy optimizations in the builds). > 4. Upload documentation for all libraries. > 5. Update webpage and upload binaries. > 6. Send announcement. > 7. Upload binaries from other systems later. > > Herbert has graciously begun taking care of stewarding and uploading > the libraries. So, there are a few steps we could introduce to > alleviate this process technically in a few ways, but ultimately all > of these have to happen, pretty much (regardless of the automation > involved). > > But I don't think this is the real problem. > > The real problem is that GHC moves forward in terms of implementation > extremely, extremely quickly. It is not clear how to reconcile this > development pace with something like needing dozens of LTS releases > for a stable version. At least, not without a lot of concentrated > effort from almost every single developer. A lot of it can be > alleviated through social process perhaps, but it's not strictly > technical IMO. > > What do I mean by that? I mean that: > > - We may introduce a feature in GHC version X.Y > - That might have a bug, or other problems. > - We may fix it, and in the process, fix up a few other things and > refactor HEAD, which will be GHC X.Y+2 eventually. > - Repeat steps 2-3 a few times. > - Now we want to backport the fixes for that feature in HEAD back to X.Y. 
> - But GHC X.Y has *significantly* diverged from HEAD in that > timeframe, because of step 3 being repeated! > > In other words: we are often so aggressive at refactoring code that > the *act* of backporting in and of itself can be complicated, and it > gets harder as time goes on - because often the GHC of a year ago is > so much different than the GHC of today. > > As a concrete example of this, let's look at the changes between GHC > 7.8.2 and GHC 7.8.3: > > https://github.com/ghc/ghc/compare/ghc-7.8.2-release...ghc-7.8.3-release > > There are about ~110 commits between 7.8.2 and 7.8.3. But as the 7.8 > branch lived on, backporting fixes became significantly more complex. > In fact, I estimate close to 30 of those commits were NOT direct 7.8 > requirements - but they were brought in because _actual fixes_ were > dependent on them, in non-trivial ways. > > Take for example f895f33 by Simon PJ, which fixes #9023. The problem > with f895f33 is that by the time we fixed the bug in HEAD with that > commit, the history had changed significantly from the branch. In > order to get f895f33 to plant easily, I had to backport *at least* 12 > to 15 other commits, which it was dependent upon, and commits those > commits were dependent upon, etc etc. I did not see any non-trivial > way to do this otherwise. > > I believe at one point Gergo backported some of his fixes to 7.8, > which had since become 'non applicable' (and I thank him for that > greatly), but inevitably we instead brought along the few extra > changes anyway, since they were *still* needed for other fixes. And > some of them had API changes. So the choice was to rewrite 4 patches > for an old codebase completely (the work being done by two separate > people) or backport a few extra patches. > > The above is obviously an extreme case. But it stands to reason this > would _only happen again_ with 7.8.4, probably even worse since more > months of development have gone by. > > An LTS release would mandate things like no-API-changes-at-all, but > this significantly limits our ability to *actually* backport patches > sometimes, like the above, due to dependent changes. The alternative, > obviously, is to do what Gergo did and manually re-write such a fix > for the older branch. But that means we would have had to do that for > *every patch* in the same boat, including 2 or 3 other fixes we > needed! > > Furthermore, while I am a release manager and do think I know a bit > about GHC, it is hopeless to expect me to know it all. I will > absolutely require coordinated effort to help develop 'retropatches' > that don't break API compatibility, from active developers who are > involved in their respective features. And they are almost all > volunteers! Simon and I are the only ones who wouldn't qualify on > that. > > So - at what point does it stop becoming 'backporting fixes to older > versions' and instead become literally "working on the older version > of the compiler AND the new one in tandem"? Given our rate of churn > and change internally, this seems like it would be a significant > burden in general to ask of developers. If we had an LTS release of > GHC that lasted 3 years for example, that would mean developers are > expected to work on the current code of their own, *and their old code > for the next three years*. That is an absolutely, undeniably a _huge_ > investment to ask of someone. It's not clear how many can actually > hold it (and I don't blame them). 
>
> This email is already a bit long (which is extremely unusual for my
> emails, I'm sure you all know), but I just wanted to give some insight
> on the process.
>
> I think the technical/automation aspects are the easy part. We could
> probably fully automate the GHC release process in days, if one or two
> people worked on it dilligently. The hard part is actually balancing
> the needs and time of users and developers, which is a complex
> relationship.
>
> On Mon, Oct 6, 2014 at 6:22 PM, John Lato wrote:
> > On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell wrote:
> >>
> >> On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel
> >> wrote:
> >>>
> >>> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote:
> >>> > The danger, of course, is that people aren't very enthusiastic about
> >>> > bug-fixing older versions of a compiler, but for
> >>> > language/compiler-uptake, this might actually be a Better Way.
> >>>
> >>> Maybe some of the commercial GHC users might be interested in donating
> >>> the manpower to maintain older GHC versions. It's mostly a
> >>> time-consuming QA & auditing process to maintain old GHCs.
> >>
> >>
> >> What can we do to make that process cheaper? In particular, which are the
> >> manual steps in making a new GHC release today?
> >
> >
> > I would very much like to know this as well. For ghc-7.8.3 there were a
> > number of people volunteering manpower to finish up the release, but to the
> > best of my knowledge those offers weren't taken up, which makes me think
> > that the extra overhead for coordinating more people would outweigh any
> > gains. From the outside, it appears that the process/workflow could use
> > some improvement, perhaps in ways that would make it simpler to divide up
> > the workload.
> >
> > John L.
> >
> >
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://www.haskell.org/mailman/listinfo/ghc-devs
> >
> >
>

From david.feuer at gmail.com Tue Oct 7 07:05:13 2014
From: david.feuer at gmail.com (David Feuer)
Date: Tue, 7 Oct 2014 03:05:13 -0400
Subject: oneShot (was Re: FoldrW/buildW issues)
Message-ID:

Just for the heck of it, I tried out an implementation of scanl using
Joachim Breitner's magical oneShot primitive. Using the test

scanlA :: (b -> a -> b) -> b -> [a] -> [b]
scanlA f a bs = build $ \c n ->
  a `c` foldr (\b g x -> let b' = f x b in (b' `c` g b')) (const n) bs a

scanlB :: (b -> a -> b) -> b -> [a] -> [b]
scanlB f a bs = build $ \c n ->
  a `c` foldr (\b g -> oneShot (\x -> let b' = f x b in (b' `c` g b'))) (const n) bs a

f :: Int -> Bool
f 0 = True
f 1 = False
{-# NOINLINE f #-}

barA = scanlA (+) 0 . filter f
barB = scanlB (+) 0 . filter f

with -O2 (NOT disabling Call Arity) the Core from barB is really, really
beautiful: it's small, there are no lets or local lambdas, and everything
is completely unboxed. This is much better than the result of barA, which
has a local let, and which doesn't seem to manage to unbox anything.

It looks to me like this could be a pretty good tool to have around. It
certainly has its limits -- it doesn't do anything nice with
reverse . reverse or reverse . scanl f b . reverse, but it doesn't need to
be perfect to be useful. More evaluation, of course, is necessary to make
sure it doesn't go wrong when used sanely.
David

From omefire at yahoo.fr Tue Oct 7 07:35:26 2014
From: omefire at yahoo.fr (Omar Mefire)
Date: Tue, 7 Oct 2014 08:35:26 +0100
Subject: Stepping through ghc
In-Reply-To: <1412665738.18855.YahooMailNeo@web173006.mail.ir2.yahoo.com>
References: <1412665738.18855.YahooMailNeo@web173006.mail.ir2.yahoo.com>
Message-ID: <1412667326.75263.YahooMailNeo@web173004.mail.ir2.yahoo.com>

Hi Arash,

Thanks for your reply. I'm interested in stepping through the compiler part (Lexer, Parser, etc.) of GHC for now. Thanks for the link to the Commentary; I'm going through it right now.

-------
"Hi Omar,

You might want to narrow your scope to one part of GHC. For example, I mostly focused on the Run Time System to be able to conduct my master's thesis. Also, you might want to start off with a tiny goal, like to fix bug XYZ.

Oh, and most important of all is the Commentary, which I think of as a wiki for GHC developers. http://ghc.haskell.org/trac/ghc/wiki/Commentary

Cheers,
Arash"

Omar Mefire,

On Tuesday, 7 October 2014 at 0:08, Omar Mefire wrote:

Hi Arash,

Thanks for your reply. I'm interested in stepping through the compiler part (Lexer, Parser, etc.) of GHC for a beginning. Thanks for the link to the Commentary; I'm going through it right now.

P.S.: I emailed you directly because I had some difficulties figuring out how to respond only to your message instead of the whole daily digest. Hopefully, I will figure it out shortly. :)

Omar Mefire,
I absolutely believe that we should use the best tools available and that committed GHC contributors should have to learn these tools as necessary. Though I've had my problems with Phab and `arc`, I'm confident that this tool was chosen after a deliberative process and am grateful that we have leaders in this area in our midst. All that said, I think that the suggestion just to accept GitHub pull requests will lead to confusion, if only for the namespace problem. If we start to accept pull requests, then we are de facto going to have to deal with both the GH issue tracker and Trac's (and Phab's), and that is a terrible place to be. Part of the automated response to pull request submissions could be a post on the GH pull request record pointing folks to the Phab review that was created in response. The pull request would then be closed. I agree with the comment that users will be more committed to learn Phab once they have contributed. That's why I wanted to point them to Phab in the automated response to the GH pull request. I think there's a psychological commitment made by a person once they click "submit pull request" and they will be happy enough to follow up on Phab, especially if commenting and such doesn't require the installation of a local tool. Richard ------------------------------ Message: 2 Date: Mon, 6 Oct 2014 18:10:37 +0000 From: Simon Peyton Jones To: Alan & Kim Zimmerman , "ghc-devs at haskell.org" Subject: RE: Show instance for SrcSpan Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F319EA3 at DB3PRD3001MB020.064d.mgd.msft.net> Content-Type: text/plain; charset="utf-8" By all means do so S From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman Sent: 06 October 2014 13:59 To: ghc-devs at haskell.org Subject: Show instance for SrcSpan Is there any reason I can't put in a diff request to replace the derived Show instance for SrcSpan with a handcrafted one that does not exhausively list the constructors, making it more readable? Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Mon, 06 Oct 2014 14:17:12 -0400 From: Ben Gamari To: Richard Eisenberg , andreas.abel at gu.se Cc: "ghc-devs at haskell.org Devs" Subject: Re: GitHub pull requests Message-ID: <87sij1qd07.fsf at gmail.com> Content-Type: text/plain; charset="us-ascii" Richard Eisenberg writes: > I absolutely believe that we should use the best tools available and > that committed GHC contributors should have to learn these tools as > necessary. Though I've had my problems with Phab and `arc`, I'm > confident that this tool was chosen after a deliberative process and > am grateful that we have leaders in this area in our midst. > Agreed. Phab certainly has a learning curve and is not without its papercuts but on the whole seems to be an excellent tool. > All that said, I think that the suggestion just to accept GitHub pull > requests will lead to confusion, if only for the namespace problem. If > we start to accept pull requests, then we are de facto going to have > to deal with both the GH issue tracker and Trac's (and Phab's), and > that is a terrible place to be. Part of the automated response to pull > request submissions could be a post on the GH pull request record > pointing folks to the Phab review that was created in response. The > pull request would then be closed. > This is where I was going with the beginning of a script I posted on Saturday. 
To me this seems like an excellent compromise: using the familiarity of Github to attract contributions and (hopefully) siphon them into Phabricator. The numbering conflicts may still be problematic but I suspect that in practice people will learn that the Github numbers are meaningless fairly quickly. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: ------------------------------ Message: 4 Date: Mon, 6 Oct 2014 20:50:25 +0200 From: Arash Rouhani To: Subject: Re: Stepping through ghc Message-ID: <5432E471.5070703 at student.chalmers.se> Content-Type: text/plain; charset="windows-1252"; Format="flowed" Hi Omar, You might want to narrow your scope to one part of GHC. For example, I mostly focused on the Run Time System to be able to conduct my master's thesis. Also, you might want to start off with a tiny goal, like to fix bug XYZ. Oh, and most important of all is the Commentary, which I think of as a wiki for GHC developers. http://ghc.haskell.org/trac/ghc/wiki/Commentary Cheers, Arash On 2014-10-06 17:23, Omar Mefire wrote: > Hi, > > I'm new to ghc codebase and I'm interested in stepping through the > code in order to gain a better idea of how it all works. > > - Is there a way to load ghc into ghci ? and debug through it ? > - What are the ways experienced ghc devs step through the code ? > - Any techniques you guys recommend ? > - What would your advices be for a newbie to the source code ? > > Thanks, > Omar Mefire, > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 5 Date: Mon, 6 Oct 2014 19:55:17 +0000 From: To: , Subject: RE: Again: Uniques in GHC Message-ID: <01db8f0b509747669bbf426fc0dd18c2 at EXMBX31.ad.utwente.nl> Content-Type: text/plain; charset="us-ascii" Dear Joachim, Although I can't quite get what you're saying from the posts on that link, I'm not immediately sure what you're saying should extend to hi-files. These files are very much specific to the compiler version you're using, as in, new GHCs add stuff to them all the time and their binary format does not (seem to) provision for being able to skip unknown things (i.e. it doesn't say how large the next semantic block is in the hi-file). If we're going to keep the formats the same for any architecture, we're going to have to limit 64-bit machines to 32-bit (actually 30-bits, another thing I don't quite understand in BinIface) Uniques. There seem to be possibilities to alleviate the issues with parallel generation of fresh Uniques in a parallel version of GHC. The idea is that, since 64-bits is more than we'll ever assign anyway, to use a few for thread-ids, so we would guarantee non-conflicting Uniques generated by different threads. Anyway, maybe someone a tad more knowledgeable about Uniques could maybe tell me on what scale Uniques in the hi-files should be unique? Must they only be non-conflicting in a Module? In a Package? If I first compile a file with GHC and then, in a separate invocation of GHC, compile another, surely their hi-files will have some of the same Uniques for their own, different things? Where are these conflicts resolved when linking multiple independently compiled files? Are they ever? Regards, Philip ? 
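To make the thread-id idea above concrete, here is a minimal sketch of what packing a thread id into the spare bits of a 64-bit unique could look like. It is illustrative only and not from the original mail: the 8/8/48 bit split, the names, and the plain Word64 representation are assumptions, not GHC's actual Unique layout.

import Data.Bits (shiftL, (.&.), (.|.))
import Data.Word (Word64)

-- Illustrative layout: 8 tag bits | 8 thread-id bits | 48 counter bits.
-- If each thread (or capability) of a parallel GHC is given its own 'tid',
-- the uniques it mints can never collide with those minted by another thread.
mkThreadedUnique :: Char -> Int -> Word64 -> Word64
mkThreadedUnique tag tid n =
      ((fromIntegral (fromEnum tag) .&. 0xff) `shiftL` 56)
  .|. ((fromIntegral tid .&. 0xff) `shiftL` 48)
  .|. (n .&. 0xffffffffffff)

-- e.g. mkThreadedUnique 't' 3 42 gives a unique that thread 3 alone can produce.

Whether 48 counter bits (and 8 thread bits) are the right budget is exactly the kind of question raised above; the sketch only shows the mechanism.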
________________________________ From: Joachim Breitner Sent: 06 October 2014 12:36 Subject: Re: Again: Uniques in GHC A while ago we had problems with haddock in Debian when the serialization became bit-dependent.^1 I suggest to keep the specification of any on-disk format independent of architecture specifics. Greetings, Joachim ^1 http://bugs.debian.org/586723#15 -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs ------------------------------ End of ghc-devs Digest, Vol 134, Issue 23 ***************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 7 08:12:13 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 08:12:13 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <87k34gsd9r.fsf@gmail.com> References: <87k34gsd9r.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Thanks for this debate. (And thank you Austin for provoking it by articulating a medium term plan.) Our intent has always been that that the latest version on each branch is solid. There have been one or two occasions when we have knowingly abandoned a dodgy release branch entirely, but not many. So I think the major trick we are missing is this: We don't know what the show-stopping bugs on a branch are For example, here are three responses to Austin's message: | The only potential issue here is that not a single 7.8 release will be | able to bootstrap LLVM-only targets due to #9439. I'm not sure how | 8960 looks rather serious and potentially makes all of 7.8 a no-go | for some users. | We continue to use 7.2, at least partly because all newer versions of | ghc have had significant bugs that affect us That's not good. Austin's message said about 7.8.4 "No particular pressure on any outstanding bugs to release immediately". There are several dozen tickets queued up on 7.8.4 (see here https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them are "nice to have". So clearly the message is not getting through. My conclusion * I think we (collectively!) should make a serious attempt to fix show-stopping bugs on a major release branch. (I agree that upgrading to the next major release often simply brings in a new wave of bugs because of GHC's rapid development culture.) * We can only possibly do this if a) we can distinguish "show-stopping" from "nice to have" b) we get some help (thank you John Lato for implicitly offering) I would define a "show-stopping" bug as one that simply prevents you from using the release altogether, or imposes a very large cost at the user end. For mechanism I suggest this. On the 7.8.4 status page (or in general, on the release branch page you want to influence), create a section "Show stoppers" with a list of the show-stopping bugs, including some English-language text saying who cares so much and why. (Yes I know that it might be there in the ticket, but the impact is much greater if there is an explicit list of two or three personal statements up front.) Concerning 7.8.4 itself, I think we could review the decision to abandon it, in the light of new information. 
We might, for example, fix show-stoppers, include fixes that are easy to apply, and not-include other fixes that are harder. Opinions? I'm not making a ruling here! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Gamari | Sent: 04 October 2014 04:52 | To: Austin Seipp; ghc-devs at haskell.org | Cc: Simon Marlow | Subject: Re: Tentative high-level plans for 7.10.1 | | Austin Seipp writes: | | snip. | | > | > We do not believe we will ship a 7.8.4 at all, contrary to what you | > may have seen on Trac - we never decided definitively, but there is | > likely not enough time. Over the next few days, I will remove the | > defunct 7.8.4 milestone, and re-triage the assigned tickets. | > | The only potential issue here is that not a single 7.8 release will be | able to bootstrap LLVM-only targets due to #9439. I'm not sure how | much of an issue this will be in practice but there should probably be | some discussion with packagers to ensure that 7.8 is skipped on | affected platforms lest users be stuck with no functional stage 0 | compiler. | | Cheers, | | - Ben From simonpj at microsoft.com Tue Oct 7 08:17:34 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 08:17:34 +0000 Subject: Framework failures Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31A8AD@DB3PRD3001MB020.064d.mgd.msft.net> I'm getting three "framework failures" in the testsuite, as below, on Linux. Does anyone have any idea why that might happen? Simon =====> haddock.compiler(normal) 1891 of 4114 [0, 0, 2] *** framework failure for haddock.compiler(normal) do_test exception Traceback (most recent call last): File "/home/simonpj/code/HEAD-3/testsuite/driver/testlib.py", line 764, in do_test result = apply(func, [name,way] + args) File "/home/simonpj/code/HEAD-3/testsuite/driver/testlib.py", line 1035, in stats return checkStats(name, way, stats_file, opts.stats_range_fields) File "/home/simonpj/code/HEAD-3/testsuite/driver/testlib.py", line 1045, in checkStats f = open(in_testdir(stats_file)) IOError: [Errno 2] No such file or directory: './perf/haddock/../../../../compiler/stage2/doc/html/ghc/ghc.haddock.t' -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Tue Oct 7 08:29:07 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Tue, 7 Oct 2014 08:29:07 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87sij18s3a.fsf@gmail.com> , Message-ID: <8c3be1a8db2b4400b3c07f55c453e697@EXMBX31.ad.utwente.nl> Mmm... yes, you seem to have some strong points against LTS. People committing to an LTS-version aside from the further development of HEAD seems somewhat unlikely also... I must say, though, significant API-changes with only minor version-bumps have bitten me also. Not sure we should want this. Ph. PS. Maybe long, but not too long, let alone TL;DR. Thanks for the clarity ________________________________________ From: mad.one at gmail.com on behalf of Austin Seipp Sent: 07 October 2014 02:45 To: John Lato Cc: Johan Tibell; Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org; Simon Marlow Subject: Re: Tentative high-level plans for 7.10.1 The steps for making a GHC release are here: https://ghc.haskell.org/trac/ghc/wiki/MakingReleases So, for the record, making a release is not *that* arduous, but it does take time. On average it will take me about 1 day or so to go from absolutely-nothing to release announcement: 1. 
Bump version, update configure.ac, tag. 2. Build source tarball (this requires 1 build, but can be done very quickly). 3. Make N binary builds for each platform (the most time consuming part, as this requires heavy optimizations in the builds). 4. Upload documentation for all libraries. 5. Update webpage and upload binaries. 6. Send announcement. 7. Upload binaries from other systems later. Herbert has graciously begun taking care of stewarding and uploading the libraries. So, there are a few steps we could introduce to alleviate this process technically in a few ways, but ultimately all of these have to happen, pretty much (regardless of the automation involved). But I don't think this is the real problem. The real problem is that GHC moves forward in terms of implementation extremely, extremely quickly. It is not clear how to reconcile this development pace with something like needing dozens of LTS releases for a stable version. At least, not without a lot of concentrated effort from almost every single developer. A lot of it can be alleviated through social process perhaps, but it's not strictly technical IMO. What do I mean by that? I mean that: - We may introduce a feature in GHC version X.Y - That might have a bug, or other problems. - We may fix it, and in the process, fix up a few other things and refactor HEAD, which will be GHC X.Y+2 eventually. - Repeat steps 2-3 a few times. - Now we want to backport the fixes for that feature in HEAD back to X.Y. - But GHC X.Y has *significantly* diverged from HEAD in that timeframe, because of step 3 being repeated! In other words: we are often so aggressive at refactoring code that the *act* of backporting in and of itself can be complicated, and it gets harder as time goes on - because often the GHC of a year ago is so much different than the GHC of today. As a concrete example of this, let's look at the changes between GHC 7.8.2 and GHC 7.8.3: https://github.com/ghc/ghc/compare/ghc-7.8.2-release...ghc-7.8.3-release There are about ~110 commits between 7.8.2 and 7.8.3. But as the 7.8 branch lived on, backporting fixes became significantly more complex. In fact, I estimate close to 30 of those commits were NOT direct 7.8 requirements - but they were brought in because _actual fixes_ were dependent on them, in non-trivial ways. Take for example f895f33 by Simon PJ, which fixes #9023. The problem with f895f33 is that by the time we fixed the bug in HEAD with that commit, the history had changed significantly from the branch. In order to get f895f33 to plant easily, I had to backport *at least* 12 to 15 other commits, which it was dependent upon, and commits those commits were dependent upon, etc etc. I did not see any non-trivial way to do this otherwise. I believe at one point Gergo backported some of his fixes to 7.8, which had since become 'non applicable' (and I thank him for that greatly), but inevitably we instead brought along the few extra changes anyway, since they were *still* needed for other fixes. And some of them had API changes. So the choice was to rewrite 4 patches for an old codebase completely (the work being done by two separate people) or backport a few extra patches. The above is obviously an extreme case. But it stands to reason this would _only happen again_ with 7.8.4, probably even worse since more months of development have gone by. 
An LTS release would mandate things like no-API-changes-at-all, but this significantly limits our ability to *actually* backport patches sometimes, like the above, due to dependent changes. The alternative, obviously, is to do what Gergo did and manually re-write such a fix for the older branch. But that means we would have had to do that for *every patch* in the same boat, including 2 or 3 other fixes we needed! Furthermore, while I am a release manager and do think I know a bit about GHC, it is hopeless to expect me to know it all. I will absolutely require coordinated effort to help develop 'retropatches' that don't break API compatibility, from active developers who are involved in their respective features. And they are almost all volunteers! Simon and I are the only ones who wouldn't qualify on that. So - at what point does it stop becoming 'backporting fixes to older versions' and instead become literally "working on the older version of the compiler AND the new one in tandem"? Given our rate of churn and change internally, this seems like it would be a significant burden in general to ask of developers. If we had an LTS release of GHC that lasted 3 years for example, that would mean developers are expected to work on the current code of their own, *and their old code for the next three years*. That is an absolutely, undeniably _huge_ investment to ask of someone. It's not clear how many can actually hold it (and I don't blame them). This email is already a bit long (which is extremely unusual for my emails, I'm sure you all know), but I just wanted to give some insight on the process. I think the technical/automation aspects are the easy part. We could probably fully automate the GHC release process in days, if one or two people worked on it diligently. The hard part is actually balancing the needs and time of users and developers, which is a complex relationship. On Mon, Oct 6, 2014 at 6:22 PM, John Lato wrote: > On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell wrote: >> >> On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel >> wrote: >>> >>> On 2014-10-06 at 11:03:19 +0200, p.k.f.holzenspies at utwente.nl wrote: >>> > The danger, of course, is that people aren't very enthusiastic about >>> > bug-fixing older versions of a compiler, but for >>> > language/compiler-uptake, this might actually be a Better Way. >>> >>> Maybe some of the commercial GHC users might be interested in donating >>> the manpower to maintain older GHC versions. It's mostly a >>> time-consuming QA & auditing process to maintain old GHCs. >> >> >> What can we do to make that process cheaper? In particular, which are the >> manual steps in making a new GHC release today? > > > I would very much like to know this as well. For ghc-7.8.3 there were a > number of people volunteering manpower to finish up the release, but to the > best of my knowledge those offers weren't taken up, which makes me think > that the extra overhead for coordinating more people would outweigh any > gains. From the outside, it appears that the process/workflow could use > some improvement, perhaps in ways that would make it simpler to divide up > the workload. > > John L. 
> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Tue Oct 7 08:57:00 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 08:57:00 +0000 Subject: Phabricator guidance Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> I suppose I will have to look at this. But I have no clue how to do so. D202 itself seems to be a very small patch (only ten lines or so), so presumably it applies on top of some other patch? But what? Someone said I could use arc patch D202 to apply the patch in my own tree, which is crucial for reproducing the error that Jan is stuck on. BUT the patch presumably applies to a particular commit, NOT the head of my current tree. But what is the base commit to which it applies? Does arc patch check out the base commit before applying? I can't find any documentation of 'arc patch'. Can anyone else? It would be good to document this workflow, since it's a very useful one. Thanks Simon | -----Original Message----- | From: noreply at phabricator.haskell.org | [mailto:noreply at phabricator.haskell.org] | Sent: 06 October 2014 10:12 | To: Simon Peyton Jones | Subject: [Differential] [Commented On] D202: Injective type families | | jstolarek added a comment. | | Bump. Austin put this on feature list for GHC 7.10 and I'd really like | to this feature to make it into the next stable version of GHC. | Currently I'm not making *any* progress because of the "Not in scope" | errors. I've spent time looking at the code, seeing how things are | done in other places but none of this got me closer to solving the | problem :-/ Help? | | REPOSITORY | rGHC Glasgow Haskell Compiler | | REVISION DETAIL | https://phabricator.haskell.org/D202 | | REPLY HANDLER ACTIONS | Reply to comment, or !reject, !abandon, !reclaim, !resign, !rethink, | !unsubscribe. | | To: jstolarek, austin, simonpj | Cc: thomie, goldfire, simonmar, ezyang, carter From hvriedel at gmail.com Tue Oct 7 09:04:33 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 07 Oct 2014 11:04:33 +0200 Subject: Phabricator guidance In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Tue, 7 Oct 2014 08:57:00 +0000") References: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87vbnwckta.fsf@gmail.com> On 2014-10-07 at 10:57:00 +0200, Simon Peyton Jones wrote: > I suppose I will have to look at this. But I have no clue how to do so. > > D202 itself seems to be a very small patch (only ten lines or so), so presumably it applies on top of some other patch? But what? > > Someone said I could use > arc patch D202 > to apply the patch in my own tree, which is crucial for reproducing > the error that Jan is stuck on. > BUT the patch presumably applies to a > particular commit, NOT the head of my current tree. But what is the > base commit to which it applies? Does arc patch check out the base > commit before applying? If you actually perform 'arc patch D202', this is the output you currently get: ,---- | Created and checked out branch arcpatch-D202. | | | This diff is against commit 3e17822f5f4e4d2f582dc0a053f532125f9777c7, but | the commit is nowhere in the working copy. Try to apply it against the | current working copy state? 
(3549c952b535803270872adaf87262f2df0295a4) | [Y/n] n `---- So yes, 'arc' tries apply the code-revision on top of the commit is was based on; and in this case, it is actually missing from ghc.git :-/ What's more, you can also declare that a code-revisions builds on top of another code-revision, in which case 'arc' will automatically try to (recursively) apply that other code-revision to your source-tree first, before applying the one you are actually requesting on top. I hope Austin or someone else may chime in to provide further assistance if this doesn't help... From johan.tibell at gmail.com Tue Oct 7 09:23:32 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 7 Oct 2014 11:23:32 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Oct 7, 2014 at 10:12 AM, Simon Peyton Jones wrote: > | 8960 looks rather serious and potentially makes all of 7.8 a no-go > | for some users. > I think this is the big issue. If you look at all the related bugs linked from #8960, lots of users are affected. I think this bug alone probably warrants a release. We should also move all those related bugs to the 7.8.4 milestone, so the impact of this issue is more clear. > My conclusion > > * I think we (collectively!) should make a serious attempt to fix > show-stopping > bugs on a major release branch. (I agree that upgrading to the next > major > release often simply brings in a new wave of bugs because of GHC's > rapid development culture.) > > * We can only possibly do this if > a) we can distinguish "show-stopping" from "nice to have" > b) we get some help (thank you John Lato for implicitly offering) > All sounds good to me. I can help with backporting bug fixes if needed. In return I would encourage people to not mix bug fixes with "I rewrote the compiler" commits. :) I would define a "show-stopping" bug as one that simply prevents you from > using the release altogether, or imposes a very large cost at the user end. > Agreed. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Oct 7 09:30:01 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 7 Oct 2014 11:30:01 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I re-targeted some of the bugs that were "obviously" the same SpecConstr issue to 7.8.4. There are a few others that should probably also be re-targeted, but I couldn't tell from a quick scan of the long comment threads. Looking at the 7.8.4 status page, it's now quite clear that the SpecConstr bug is a show stopper i.e. it affects lots of people/core libraries and doesn't really have a good workaround, as turning of SpecConstr will most likely make e.g. vector too slow. On Tue, Oct 7, 2014 at 11:23 AM, Johan Tibell wrote: > On Tue, Oct 7, 2014 at 10:12 AM, Simon Peyton Jones > wrote: > >> | 8960 looks rather serious and potentially makes all of 7.8 a no-go >> | for some users. >> > > I think this is the big issue. If you look at all the related bugs linked > from #8960, lots of users are affected. I think this bug alone probably > warrants a release. 
We should also move all those related bugs to the 7.8.4 > milestone, so the impact of this issue is more clear. > > >> My conclusion >> >> * I think we (collectively!) should make a serious attempt to fix >> show-stopping >> bugs on a major release branch. (I agree that upgrading to the next >> major >> release often simply brings in a new wave of bugs because of GHC's >> rapid development culture.) >> >> * We can only possibly do this if >> a) we can distinguish "show-stopping" from "nice to have" >> b) we get some help (thank you John Lato for implicitly offering) >> > > All sounds good to me. I can help with backporting bug fixes if needed. In > return I would encourage people to not mix bug fixes with "I rewrote the > compiler" commits. :) > > I would define a "show-stopping" bug as one that simply prevents you from >> using the release altogether, or imposes a very large cost at the user end. >> > > Agreed. > > -- Johan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 7 10:26:53 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 10:26:53 +0000 Subject: Phabricator guidance In-Reply-To: <87vbnwckta.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> <87vbnwckta.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31C986@DB3PRD3001MB020.064d.mgd.msft.net> Aha, that helps. And looking further at https://phabricator.haskell.org/D202, I can see under "Revision update history" that there are four diffs all stashed in this on Phab ticket. (That contradicts my previous model which was one patch per Phab ticket; people have been complaining about that.) So my new questions are: * How can I apply "Diff 1" or "Diff 2"? Using "arc patch" only applies "Diff 4" * How can I apply all of "Diff 1" ... "Diff 4" in one go? Simon | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 07 October 2014 10:05 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Phabricator guidance | | On 2014-10-07 at 10:57:00 +0200, Simon Peyton Jones wrote: | > I suppose I will have to look at this. But I have no clue how to do | so. | > | > D202 itself seems to be a very small patch (only ten lines or so), | so presumably it applies on top of some other patch? But what? | > | > Someone said I could use | > arc patch D202 | > to apply the patch in my own tree, which is crucial for reproducing | > the error that Jan is stuck on. | | > BUT the patch presumably applies to a | > particular commit, NOT the head of my current tree. But what is the | > base commit to which it applies? Does arc patch check out the base | > commit before applying? | | If you actually perform 'arc patch D202', this is the output you | currently get: | | | ,---- | | Created and checked out branch arcpatch-D202. | | | | | | This diff is against commit | 3e17822f5f4e4d2f582dc0a053f532125f9777c7, but | | the commit is nowhere in the working copy. Try to apply it | against the | | current working copy state? 
| (3549c952b535803270872adaf87262f2df0295a4) | | [Y/n] n | `---- | | So yes, 'arc' tries apply the code-revision on top of the commit is | was based on; and in this case, it is actually missing from ghc.git :- | / | | What's more, you can also declare that a code-revisions builds on top | of another code-revision, in which case 'arc' will automatically try | to | (recursively) apply that other code-revision to your source-tree | first, before applying the one you are actually requesting on top. | | | I hope Austin or someone else may chime in to provide further | assistance if this doesn't help... From mikolaj at well-typed.com Tue Oct 7 11:46:21 2014 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Tue, 7 Oct 2014 13:46:21 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: > Our intent has always been that that the latest version on each branch is solid. There have been one or two occasions when we have knowingly abandoned a dodgy release branch entirely, but not many. Perhaps we could do the opposite. Announce beforehand that a release branch X is going to be LTS (of Very Stable Release; roughly 1 in 4 branches?) and so very few major new features will be included in the release X+1 (there is just not enough time for both, as Austin explained). Then, on the GHC maintainers' side, put off accepting any "I rewrote the compiler" commits into HEAD for long time. On the community side, focus on bug fixes and non-disruptive, incremental improvements. Avoid API changes, whatever that means. Update release X many times, until very stable. A more radical proposal would be to do the above, but announce that X+! is going to be Very Stable Release and accept no major new features into HEAD at all and even revert any minor new features just before X+1 release, if non-trivial bugs in them are discovered. Then release X can be abandoned quickly, knowing that X+! will most probably resolve any problems in X, without introducing new ones. In either case, the main point is the announcement and so the focus of the community on bug-fixing and keeping HEAD close to the named releases, to make bug-fixing and back- and forward- porting easy. From sophie at traumapony.org Tue Oct 7 11:59:14 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Tue, 7 Oct 2014 21:59:14 +1000 Subject: oneShot (was Re: FoldrW/buildW issues) In-Reply-To: References: Message-ID: Wait, isn't call arity analysis meant to do this by itself now? On 7 October 2014 17:05, David Feuer wrote: > Just for the heck of it, I tried out an implementation of scanl using > Joachim Breitner's magical oneShot primitive. Using the test > > scanlA :: (b -> a -> b) -> b -> [a] -> [b] > scanlA f a bs = build $ \c n -> > a `c` > foldr (\b g x -> let b' = f x b in (b' `c` g b')) > (const n) > bs > a > > scanlB :: (b -> a -> b) -> b -> [a] -> [b] > scanlB f a bs = build $ \c n -> > a `c` > foldr (\b g -> oneShot (\x -> let b' = f x b in (b' `c` g b'))) > (const n) > bs > a > > f :: Int -> Bool > f 0 = True > f 1 = False > {-# NOINLINE f #-} > > barA = scanlA (+) 0 . filter f > barB = foldlB (+) 0 . filter f > > > with -O2 (NOT disabling Call Arity) the Core from barB is really, > really beautiful: it's small, there are no lets or local lambdas, and > everything is completely unboxed. 
This is much better than the result > of barA, which has a local let, and which doesn't seem to manage to > unbox anything. It looks to me like this could be a pretty good tool > to have around. It certainly has its limits?it doesn't do anything > nice with reverse . reverse or reverse . scanl f b . reverse, but it > doesn't need to be perfect to be useful. More evaluation, of course, > is necessary.to make sure it doesn't go wrong when used sanely. > > David > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Tue Oct 7 12:05:53 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 7 Oct 2014 14:05:53 +0200 Subject: Phabricator guidance In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31C986@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> <87vbnwckta.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31C986@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201410071405.53497.jan.stolarek@p.lodz.pl> Ugh. Arc is not easy to use :-/ Indeed 3e17822 does not seem to be in the revision on phab, although it exists in my local tree. I just pushed a fixed to Phab. Simon, does `arc patch D202` work now? Herbert, remember how I complained on IRC that `arc diff` does not automatically recognize that I'm updating a revision and I need to manually specify base commit? You told me that I need to add revision information to the commit message. I did that and `arc diff`indeed recognized the revision without me explicitly specifying the base commit. But now it turns out that it created an incomplete revision by pushing only the latest commit from my branch :-/ Janek Dnia wtorek, 7 pa?dziernika 2014, Simon Peyton Jones napisa?: > Aha, that helps. And looking further at > https://phabricator.haskell.org/D202, I can see under "Revision update > history" that there are four diffs all stashed in this on Phab ticket. > (That contradicts my previous model which was one patch per Phab ticket; > people have been complaining about that.) > > So my new questions are: > > * How can I apply "Diff 1" or "Diff 2"? Using "arc patch" only applies > "Diff 4" > > * How can I apply all of "Diff 1" ... "Diff 4" in one go? > > Simon > > | -----Original Message----- > | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] > | Sent: 07 October 2014 10:05 > | To: Simon Peyton Jones > | Cc: ghc-devs at haskell.org > | Subject: Re: Phabricator guidance > | > | On 2014-10-07 at 10:57:00 +0200, Simon Peyton Jones wrote: > | > I suppose I will have to look at this. But I have no clue how to do > | > | so. > | > | > D202 itself seems to be a very small patch (only ten lines or so), > | > | so presumably it applies on top of some other patch? But what? > | > | > Someone said I could use > | > arc patch D202 > | > to apply the patch in my own tree, which is crucial for reproducing > | > the error that Jan is stuck on. > | > > | > BUT the patch presumably applies to a > | > particular commit, NOT the head of my current tree. But what is the > | > base commit to which it applies? Does arc patch check out the base > | > commit before applying? > | > | If you actually perform 'arc patch D202', this is the output you > | currently get: > | > | > | ,---- > | > | | Created and checked out branch arcpatch-D202. 
> | | > | | > | | This diff is against commit > | > | 3e17822f5f4e4d2f582dc0a053f532125f9777c7, but > | > | | the commit is nowhere in the working copy. Try to apply it > | > | against the > | > | | current working copy state? > | > | (3549c952b535803270872adaf87262f2df0295a4) > | > | | [Y/n] n > | > | `---- > | > | So yes, 'arc' tries apply the code-revision on top of the commit is > | was based on; and in this case, it is actually missing from ghc.git :- > | / > | > | What's more, you can also declare that a code-revisions builds on top > | of another code-revision, in which case 'arc' will automatically try > | to > | (recursively) apply that other code-revision to your source-tree > | first, before applying the one you are actually requesting on top. > | > | > | I hope Austin or someone else may chime in to provide further > | assistance if this doesn't help... > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Tue Oct 7 12:20:20 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 07 Oct 2014 14:20:20 +0200 Subject: oneShot (was Re: FoldrW/buildW issues) In-Reply-To: References: Message-ID: <1412684420.2823.17.camel@joachim-breitner.de> Hi, Am Dienstag, den 07.10.2014, 03:05 -0400 schrieb David Feuer: > Just for the heck of it, I tried out an implementation of scanl using > Joachim Breitner's magical oneShot primitive. Using the test > > [..] > > with -O2 (NOT disabling Call Arity) the Core from barB is really, > really beautiful: it's small, there are no lets or local lambdas, and > everything is completely unboxed. This is much better than the result > of barA, which has a local let, and which doesn't seem to manage to > unbox anything. I cannot reproduce this here. In fact, I get identical core in both cases. Only when I do pass -fno-call-arity, A gets bad code, while B is still good. Maybe your example is too small? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From chengang31 at gmail.com Tue Oct 7 13:04:18 2014 From: chengang31 at gmail.com (cg) Date: Tue, 07 Oct 2014 21:04:18 +0800 Subject: Building ghc on Windows with msys2 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On 10/1/2014 6:25 AM, Simon Peyton Jones wrote: > > ?[...] The important thing is that it > should be reproducible, and not dependent on the particular Cygwin or > gcc or whatever the that user happens to have installed. > Exactly. So how about setting up a build server using msys2? I guess the current two build server are all Cygwin based, they are failing at the same permission issue at early building stage, it prevents checking out the real problem. It seems msys2 (or msys) seldom has such issue. 
Best Regards cg From austin at well-typed.com Tue Oct 7 13:43:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 08:43:13 -0500 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: First off, I just wanted to tell everyone - thank you for the feedback! I actually left these tickets in their place/milestones just in case something like this popped up, so I wouldn't have to undo it later. It seems like there's actually a fair amount of support for 7.8.4, where before we didn't get much of an indication as to user needs. As a result, I'll be leaving the 7.8.4 milestone tickets, but will still cull them down to what's acceptable, and we'll aim for those. #8960 seems to be the main one. As I said in the initial email, I'll follow up on this shortly after this, later today. On Tue, Oct 7, 2014 at 6:46 AM, Mikolaj Konarski wrote: >> Our intent has always been that that the latest version on each branch is solid. There have been one or two occasions when we have knowingly abandoned a dodgy release branch entirely, but not many. > > Perhaps we could do the opposite. Announce beforehand that > a release branch X is going to be LTS (of Very Stable Release; > roughly 1 in 4 branches?) and so very few major new features > will be included in the release X+1 (there is just not enough > time for both, as Austin explained). > Then, on the GHC maintainers' side, put off accepting any > "I rewrote the compiler" commits into HEAD for long time. > On the community side, focus on bug fixes and non-disruptive, > incremental improvements. Avoid API changes, whatever that means. > Update release X many times, until very stable. > > A more radical proposal would be to do the above, but announce > that X+! is going to be Very Stable Release and accept no major > new features into HEAD at all and even revert any minor new > features just before X+1 release, if non-trivial bugs in them are discovered. > Then release X can be abandoned quickly, knowing that X+! will most > probably resolve any problems in X, without introducing new ones. > > In either case, the main point is the announcement and so the > focus of the community on bug-fixing and keeping HEAD close > to the named releases, to make bug-fixing and back- and forward- > porting easy. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From eir at cis.upenn.edu Tue Oct 7 14:16:15 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 7 Oct 2014 10:16:15 -0400 Subject: Stepping through ghc In-Reply-To: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> References: <1412609019.13489.YahooMailNeo@web173002.mail.ir2.yahoo.com> Message-ID: <6EDA5072-8C53-4880-8D0D-F360201CF291@cis.upenn.edu> In direct answer to your question, there's not a great way to step through the code. As far as I know, there isn't a way to load GHC into GHCi. Personally, I think a lot about the type-checker and so use -ddump-tc-trace a lot. You can line up the output with the code to see what is going on. Also, if you have a DEBUG build (devel1 or devel2 in build.mk), you can use pprTrace to print out from pure code. But I surely second my colleagues in suggesting a small scope to start. 
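For anyone who has not used it, a pprTrace call wrapped around pure compiler code looks roughly like the sketch below. The module and function names are invented purely for illustration; only the shape of the call matters. pprTrace, ppr, text and (<+>) come from Outputable in the GHC tree, with pprTrace :: String -> SDoc -> a -> a.

module TraceSketch where  -- hypothetical module, for illustration only

import Outputable (pprTrace, ppr, text, (<+>))
import Type (Type)

-- 'traceMyPass' is made up; a real use would sit inside an existing
-- compiler function. pprTrace prints the SDoc and then returns its last
-- argument unchanged, so it can be wrapped around pure code.
traceMyPass :: Type -> Type
traceMyPass ty =
  pprTrace "traceMyPass" (text "input type:" <+> ppr ty) $
  ty  -- a real pass would transform ty here

The trace is emitted on stderr every time the function is evaluated, which is why keeping such calls to a DEBUG/devel build, as suggested above, is a good idea.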
My own start was in Template Haskell, which I found to be a great doorway in. Richard On Oct 6, 2014, at 11:23 AM, Omar Mefire wrote: > Hi, > > I'm new to ghc codebase and I'm interested in stepping through the code in order to gain a better idea of how it all works. > > - Is there a way to load ghc into ghci ? and debug through it ? > - What are the ways experienced ghc devs step through the code ? > - Any techniques you guys recommend ? > - What would your advices be for a newbie to the source code ? > > Thanks, > Omar Mefire, > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 7 14:52:26 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 14:52:26 +0000 Subject: Phabricator guidance In-Reply-To: <201410071405.53497.jan.stolarek@p.lodz.pl> References: <618BE556AADD624C9C918AA5D5911BEF3F31B677@DB3PRD3001MB020.064d.mgd.msft.net> <87vbnwckta.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31C986@DB3PRD3001MB020.064d.mgd.msft.net> <201410071405.53497.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31EA61@DB3PRD3001MB020.064d.mgd.msft.net> Much better thank you. Now I get as far as compiler/typecheck/TcTyClsDecls.lhs:421:26: Warning: Pattern match(es) are non-exhaustive In a case alternative: Patterns not matched: KindedTyVarSig (L _ _) No time to investigate right now. ============= New workflow question for Austin/Herbert. Suppose Jan updates D202 to fix the above. What's my workflow for getting his fix. If I say "arc patch D202" again, it probably won't work well because I'm already on local branch arcpatch-D202. I suppose I can git checkout master git branch --delete arcpatch-D202 arc patch D202 but is there a better way? S | -----Original Message----- | From: Jan Stolarek [mailto:jan.stolarek at p.lodz.pl] | Sent: 07 October 2014 13:06 | To: ghc-devs at haskell.org | Cc: Simon Peyton Jones; Herbert Valerio Riedel | Subject: Re: Phabricator guidance | | Ugh. Arc is not easy to use :-/ Indeed 3e17822 does not seem to be in | the revision on phab, although it exists in my local tree. I just | pushed a fixed to Phab. Simon, does `arc patch D202` work now? | | Herbert, remember how I complained on IRC that `arc diff` does not | automatically recognize that I'm updating a revision and I need to | manually specify base commit? You told me that I need to add revision | information to the commit message. I did that and `arc diff`indeed | recognized the revision without me explicitly specifying the base | commit. But now it turns out that it created an incomplete revision by | pushing only the latest commit from my branch :-/ | | Janek | | Dnia wtorek, 7 pa?dziernika 2014, Simon Peyton Jones napisa?: | > Aha, that helps. And looking further at | > https://phabricator.haskell.org/D202, I can see under "Revision | update | > history" that there are four diffs all stashed in this on Phab | ticket. | > (That contradicts my previous model which was one patch per Phab | > ticket; people have been complaining about that.) | > | > So my new questions are: | > | > * How can I apply "Diff 1" or "Diff 2"? Using "arc patch" only | > applies "Diff 4" | > | > * How can I apply all of "Diff 1" ... "Diff 4" in one go? 
| > | > Simon | > | > | -----Original Message----- | > | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | > | Sent: 07 October 2014 10:05 | > | To: Simon Peyton Jones | > | Cc: ghc-devs at haskell.org | > | Subject: Re: Phabricator guidance | > | | > | On 2014-10-07 at 10:57:00 +0200, Simon Peyton Jones wrote: | > | > I suppose I will have to look at this. But I have no clue how to | > | do | > | | > | so. | > | | > | > D202 itself seems to be a very small patch (only ten lines or | > | so), | > | | > | so presumably it applies on top of some other patch? But what? | > | | > | > Someone said I could use | > | > arc patch D202 | > | > to apply the patch in my own tree, which is crucial for | > | reproducing > the error that Jan is stuck on. | > | > | > | > BUT the patch presumably applies to a > particular commit, NOT | > | the head of my current tree. But what is the > base commit to | > | which it applies? Does arc patch check out the base > commit | > | before applying? | > | | > | If you actually perform 'arc patch D202', this is the output you | > | currently get: | > | | > | | > | ,---- | > | | > | | Created and checked out branch arcpatch-D202. | > | | | > | | | > | | This diff is against commit | > | | > | 3e17822f5f4e4d2f582dc0a053f532125f9777c7, but | > | | > | | the commit is nowhere in the working copy. Try to apply it | > | | > | against the | > | | > | | current working copy state? | > | | > | (3549c952b535803270872adaf87262f2df0295a4) | > | | > | | [Y/n] n | > | | > | `---- | > | | > | So yes, 'arc' tries apply the code-revision on top of the commit | is | > | was based on; and in this case, it is actually missing from ghc.git | > | :- / | > | | > | What's more, you can also declare that a code-revisions builds on | > | top of another code-revision, in which case 'arc' will | > | automatically try to | > | (recursively) apply that other code-revision to your source-tree | > | first, before applying the one you are actually requesting on top. | > | | > | | > | I hope Austin or someone else may chime in to provide further | > | assistance if this doesn't help... | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | From hvriedel at gmail.com Tue Oct 7 14:53:18 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 07 Oct 2014 16:53:18 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: (cg's message of "Tue, 07 Oct 2014 21:04:18 +0800") References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <8738b00w4h.fsf@gmail.com> On 2014-10-07 at 15:04:18 +0200, cg wrote: [...] > I guess the current two build server are all Cygwin based, they are > failing at the same permission issue at early building stage, it prevents > checking out the real problem. It seems msys2 (or msys) seldom has > such issue. Btw, while I finally managed to get a pure MSYS2 environment to work with a manually started sshd.exe, it'd be great if somebody could point me to instructions on how to setup e.g. cygrunsrv+sshd in order to have sshd.exe startup automatically on boot-up *and* have sshd.exe be able to log into more than just one single account (currently if I start sshd, I can only log into the very same account which started sshd) Having such a setup would be really useful to provide the GHC development improve its infrastructure as well to help reduce the regular windows-arch failures we're seeing so often. 
Cheers, hvr From pali.gabor at gmail.com Tue Oct 7 15:02:40 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 7 Oct 2014 17:02:40 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 2014-10-07 15:04 GMT+02:00 cg : > I guess the current two build server are all Cygwin based, they are > failing at the same permission issue at early building stage, it prevents > checking out the real problem. It seems msys2 (or msys) seldom has > such issue. For what it is worth, I have been witnessing those permission issues with msys2 on my Windows builders. They worked (more or the less) fine until September 24, but suddenly, something has changed (not on my side) and all I got those errors since. From hvriedel at gmail.com Tue Oct 7 15:20:43 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 07 Oct 2014 17:20:43 +0200 Subject: RFC: Source-markup language for GHC User's Guide Message-ID: <87y4srzz1w.fsf@gmail.com> Hello GHC Developers & GHC User's Guide writers, I assume it is common knowledge to everyone here, that the GHC User's Guide is written in Docbook XML markup. However, it's a bit tedious to write Docbook-XML by hand, and the XML markup is not as lightweight as modern state-of-the-art markup languages designed for being edited in a simple text-editor are. Therefore I'd like to hear your opinion on migrating away from the current Docbook XML markup to some other similarly expressive but yet more lightweight markup documentation system such as Asciidoc[1] or ReST/Sphinx[2]. There's obviously some cost involved upfront for a (semi-automatic) conversion[3]. So one important question is obviously whether the long-term benefits outweight the cost/investment that we'd incur for the initial conversion. All suggestions/comments/worries welcome; please commence brainstorming :) [1]: http://www.methods.co.nz/asciidoc/ [2]: http://sphinx-doc.org/ [3]: There's automatic conversion tools to aid (though manual cleanup is still needed) the initial conversion, such as https://github.com/oreillymedia/docbook2asciidoc As an example, here's the conversion of http://git.haskell.org/ghc.git/blob/HEAD:/docs/users_guide/extending_ghc.xml to Asciidoc: https://phabricator.haskell.org/P24 to give an idea how XML compares to Asciidoc From david.feuer at gmail.com Tue Oct 7 15:23:31 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 7 Oct 2014 11:23:31 -0400 Subject: oneShot (was Re: FoldrW/buildW issues) In-Reply-To: References: Message-ID: Yes, and it does a very good job in many cases. In other cases, it's not as good. On Tue, Oct 7, 2014 at 7:59 AM, Sophie Taylor wrote: > Wait, isn't call arity analysis meant to do this by itself now? > > On 7 October 2014 17:05, David Feuer wrote: >> >> Just for the heck of it, I tried out an implementation of scanl using >> Joachim Breitner's magical oneShot primitive. Using the test >> >> scanlA :: (b -> a -> b) -> b -> [a] -> [b] >> scanlA f a bs = build $ \c n -> >> a `c` >> foldr (\b g x -> let b' = f x b in (b' `c` g b')) >> (const n) >> bs >> a >> >> scanlB :: (b -> a -> b) -> b -> [a] -> [b] >> scanlB f a bs = build $ \c n -> >> a `c` >> foldr (\b g -> oneShot (\x -> let b' = f x b in (b' `c` g b'))) >> (const n) >> bs >> a >> >> f :: Int -> Bool >> f 0 = True >> f 1 = False >> {-# NOINLINE f #-} >> >> barA = scanlA (+) 0 . filter f >> barB = foldlB (+) 0 . 
filter f >> >> >> with -O2 (NOT disabling Call Arity) the Core from barB is really, >> really beautiful: it's small, there are no lets or local lambdas, and >> everything is completely unboxed. This is much better than the result >> of barA, which has a local let, and which doesn't seem to manage to >> unbox anything. It looks to me like this could be a pretty good tool >> to have around. It certainly has its limits?it doesn't do anything >> nice with reverse . reverse or reverse . scanl f b . reverse, but it >> doesn't need to be perfect to be useful. More evaluation, of course, >> is necessary.to make sure it doesn't go wrong when used sanely. >> >> David >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > From michael at snoyman.com Tue Oct 7 15:24:30 2014 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 7 Oct 2014 18:24:30 +0300 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <87y4srzz1w.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> Message-ID: On Tue, Oct 7, 2014 at 6:20 PM, Herbert Valerio Riedel wrote: > Hello GHC Developers & GHC User's Guide writers, > > I assume it is common knowledge to everyone here, that the GHC User's > Guide is written in Docbook XML markup. > > However, it's a bit tedious to write Docbook-XML by hand, and the XML > markup is not as lightweight as modern state-of-the-art markup languages > designed for being edited in a simple text-editor are. > > Therefore I'd like to hear your opinion on migrating away from the > current Docbook XML markup to some other similarly expressive but yet > more lightweight markup documentation system such as Asciidoc[1] or > ReST/Sphinx[2]. > > There's obviously some cost involved upfront for a (semi-automatic) > conversion[3]. So one important question is obviously whether the > long-term benefits outweight the cost/investment that we'd incur for the > initial conversion. > > All suggestions/comments/worries welcome; please commence brainstorming :) > > > > [1]: http://www.methods.co.nz/asciidoc/ > > [2]: http://sphinx-doc.org/ > > [3]: There's automatic conversion tools to aid (though manual cleanup > is still needed) the initial conversion, such as > > https://github.com/oreillymedia/docbook2asciidoc > > As an example, here's the conversion of > > > http://git.haskell.org/ghc.git/blob/HEAD:/docs/users_guide/extending_ghc.xml > > to Asciidoc: > > https://phabricator.haskell.org/P24 > > to give an idea how XML compares to Asciidoc > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > My $0.02: I originally wrote the Yesod book in XML[1], and through automated tools converted it to asciidoc. The conversion was mostly painless, and it's a *huge* improvement to be able to edit in asciidoc instead. One of the nice things is you should be able to do the transition incrementally, since you can generally mix asciidoc and DocBook. Michael [1] DITA which I converted into DocBook -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Oct 7 15:24:59 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 10:24:59 -0500 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <87y4srzz1w.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> Message-ID: Just for the record - I'm very much in favor of this. 
+1 from me. I think the one-time cost is very low for the most part, if the end result is a significantly more readable users guide to hack on. IMO, I don't particularly care whether we use Sphinx or AsciiDoc. The nice thing about AsciiDoc is it has a DocBook backend, so in theory we could make the end results seem pretty similar. Sphinx on the other hand generates its own documentation directly, I believe. The more annoying bit is it will incur an extra dependency for GHC documentation - which, remember, is part of ./validate - but that's life, perhaps. On Tue, Oct 7, 2014 at 10:20 AM, Herbert Valerio Riedel wrote: > Hello GHC Developers & GHC User's Guide writers, > > I assume it is common knowledge to everyone here, that the GHC User's > Guide is written in Docbook XML markup. > > However, it's a bit tedious to write Docbook-XML by hand, and the XML > markup is not as lightweight as modern state-of-the-art markup languages > designed for being edited in a simple text-editor are. > > Therefore I'd like to hear your opinion on migrating away from the > current Docbook XML markup to some other similarly expressive but yet > more lightweight markup documentation system such as Asciidoc[1] or > ReST/Sphinx[2]. > > There's obviously some cost involved upfront for a (semi-automatic) > conversion[3]. So one important question is obviously whether the > long-term benefits outweight the cost/investment that we'd incur for the > initial conversion. > > All suggestions/comments/worries welcome; please commence brainstorming :) > > > > [1]: http://www.methods.co.nz/asciidoc/ > > [2]: http://sphinx-doc.org/ > > [3]: There's automatic conversion tools to aid (though manual cleanup > is still needed) the initial conversion, such as > > https://github.com/oreillymedia/docbook2asciidoc > > As an example, here's the conversion of > > http://git.haskell.org/ghc.git/blob/HEAD:/docs/users_guide/extending_ghc.xml > > to Asciidoc: > > https://phabricator.haskell.org/P24 > > to give an idea how XML compares to Asciidoc > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ezyang at mit.edu Tue Oct 7 15:25:33 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 07 Oct 2014 09:25:33 -0600 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <87y4srzz1w.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> Message-ID: <1412695374-sup-9289@sabre> I personally don't have a problem writing Docbook, and one problem with moving to lightweight markup is it becomes a bit harder to keep your markup semantic. Edward Excerpts from Herbert Valerio Riedel's message of 2014-10-07 09:20:43 -0600: > Hello GHC Developers & GHC User's Guide writers, > > I assume it is common knowledge to everyone here, that the GHC User's > Guide is written in Docbook XML markup. > > However, it's a bit tedious to write Docbook-XML by hand, and the XML > markup is not as lightweight as modern state-of-the-art markup languages > designed for being edited in a simple text-editor are. > > Therefore I'd like to hear your opinion on migrating away from the > current Docbook XML markup to some other similarly expressive but yet > more lightweight markup documentation system such as Asciidoc[1] or > ReST/Sphinx[2]. > > There's obviously some cost involved upfront for a (semi-automatic) > conversion[3]. 
So one important question is obviously whether the > long-term benefits outweight the cost/investment that we'd incur for the > initial conversion. > > All suggestions/comments/worries welcome; please commence brainstorming :) > > > > [1]: http://www.methods.co.nz/asciidoc/ > > [2]: http://sphinx-doc.org/ > > [3]: There's automatic conversion tools to aid (though manual cleanup > is still needed) the initial conversion, such as > > https://github.com/oreillymedia/docbook2asciidoc > > As an example, here's the conversion of > > http://git.haskell.org/ghc.git/blob/HEAD:/docs/users_guide/extending_ghc.xml > > to Asciidoc: > > https://phabricator.haskell.org/P24 > > to give an idea how XML compares to Asciidoc From ml at isaac.cedarswampstudios.org Tue Oct 7 15:33:57 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Tue, 07 Oct 2014 11:33:57 -0400 Subject: Again: Uniques in GHC In-Reply-To: References: , <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl>, <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: <543407E5.70005@isaac.cedarswampstudios.org> On 10/07/2014 02:32 AM, p.k.f.holzenspies at utwente.nl wrote: >> But that would only work on 64 bit systems, right? > > Yes, this approach to a parallel GHC would only work on 64-bit machines. > The idea is, I guess, that we're not going to see a massive demand for > parallel GHC running on multi-core 32-bit systems. In other words; > 32-bit systems wouldn't get a parallel GHC. On ARM, 32-bit quad-cores are common. But maybe no one will care to run GHC on (32-bit) ARM. From michael at snoyman.com Tue Oct 7 15:36:01 2014 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 7 Oct 2014 18:36:01 +0300 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <1412695374-sup-9289@sabre> References: <87y4srzz1w.fsf@gmail.com> <1412695374-sup-9289@sabre> Message-ID: On Tue, Oct 7, 2014 at 6:25 PM, Edward Z. Yang wrote: > I personally don't have a problem writing Docbook, and one problem > with moving to lightweight markup is it becomes a bit harder to > keep your markup semantic. > > Edward > > Why would this be a problem with asciidoc? All asciidoc maps directly into DocBook markup, and for cases that the simple asciidoc markup is insufficient, you can always embed full-blown DocBook (though I've never done that in practice). Michael > Excerpts from Herbert Valerio Riedel's message of 2014-10-07 09:20:43 > -0600: > > Hello GHC Developers & GHC User's Guide writers, > > > > I assume it is common knowledge to everyone here, that the GHC User's > > Guide is written in Docbook XML markup. > > > > However, it's a bit tedious to write Docbook-XML by hand, and the XML > > markup is not as lightweight as modern state-of-the-art markup languages > > designed for being edited in a simple text-editor are. > > > > Therefore I'd like to hear your opinion on migrating away from the > > current Docbook XML markup to some other similarly expressive but yet > > more lightweight markup documentation system such as Asciidoc[1] or > > ReST/Sphinx[2]. > > > > There's obviously some cost involved upfront for a (semi-automatic) > > conversion[3]. So one important question is obviously whether the > > long-term benefits outweight the cost/investment that we'd incur for the > > initial conversion.
> > > > All suggestions/comments/worries welcome; please commence brainstorming > :) > > > > > > > > [1]: http://www.methods.co.nz/asciidoc/ > > > > [2]: http://sphinx-doc.org/ > > > > [3]: There's automatic conversion tools to aid (though manual cleanup > > is still needed) the initial conversion, such as > > > > https://github.com/oreillymedia/docbook2asciidoc > > > > As an example, here's the conversion of > > > > > http://git.haskell.org/ghc.git/blob/HEAD:/docs/users_guide/extending_ghc.xml > > > > to Asciidoc: > > > > https://phabricator.haskell.org/P24 > > > > to give an idea how XML compares to Asciidoc > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Oct 7 15:38:13 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 7 Oct 2014 11:38:13 -0400 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: References: <87y4srzz1w.fsf@gmail.com> Message-ID: On Tue, Oct 7, 2014 at 11:24 AM, Austin Seipp wrote: > The more annoying bit is it will incur an extra dependency for GHC > documentation - which, remember, is part of ./validate - but that's > life, perhaps. Docbook is a fairly large dependency in my experience? -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Oct 7 15:46:37 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 10:46:37 -0500 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: On Tue, Oct 7, 2014 at 1:32 AM, wrote: > Yes, this approach to a parallel GHC would only work on 64-bit machines. The > idea is, I guess, that we're not going to see a massive demand for parallel > GHC running on multi-core 32-bit systems. In other words; 32-bit systems > wouldn't get a parallel GHC. Let me make sure I'm understanding this correctly: in this particular proposed solution, the side effect would be that we no longer have a capable 32bit runtime which supports multicore parallelism? Sorry, but I'm afraid this approach is pretty much unacceptable IMO, for precisely the reason outlined in your last sentence. 32bit systems are surprisingly commen. I have several multicore 32bit ARMv7 machines on my desk right now, for example. And there are a lot more of those floating around than you might think. If that's the 'cure', I think I (and other users) would consider it far worse than the disease. 
> Regards, > Philip > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From bgamari.foss at gmail.com Tue Oct 7 15:53:59 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 07 Oct 2014 11:53:59 -0400 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87iojvri3s.fsf@gmail.com> Simon Peyton Jones writes: > My conclusion > > * I think we (collectively!) should make a serious attempt to fix show-stopping > bugs on a major release branch. (I agree that upgrading to the next major > release often simply brings in a new wave of bugs because of GHC's > rapid development culture.) > > * We can only possibly do this if > a) we can distinguish "show-stopping" from "nice to have" > b) we get some help (thank you John Lato for implicitly offering) > > I would define a "show-stopping" bug as one that simply prevents you > from using the release altogether, or imposes a very large cost at the > user end. > > For mechanism I suggest this. On the 7.8.4 status page (or in > general, on the release branch page you want to influence), create a > section "Show stoppers" with a list of the show-stopping bugs, > including some English-language text saying who cares so much and why. > (Yes I know that it might be there in the ticket, but the impact is > much greater if there is an explicit list of two or three personal > statements up front.) > Writing a bit more text sounds like a small price to pay to ensure that show-stoppers don't fall through the cracks. It's also nice to have a high-level view of what is being considered for release. I've added some text for #9439 [1]. > Concerning 7.8.4 itself, I think we could review the decision to > abandon it, in the light of new information. We might, for example, > fix show-stoppers, include fixes that are easy to apply, and > not-include other fixes that are harder. > This seems quite reasonable. Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4#a9439:LLVMmanglermanglestoovigorously -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From austin at well-typed.com Tue Oct 7 15:57:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 10:57:13 -0500 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: References: <87y4srzz1w.fsf@gmail.com> Message-ID: I don't really care too much about the size of the dependency (since 99.9% of time it's automated anyway via some package manager). My remark was more referring to the number of dependencies increases by 1 no matter what. :) But like I said, that's just life, and I frankly don't see this part as a big deal in either case. On Tue, Oct 7, 2014 at 10:38 AM, Brandon Allbery wrote: > On Tue, Oct 7, 2014 at 11:24 AM, Austin Seipp wrote: >> >> The more annoying bit is it will incur an extra dependency for GHC >> documentation - which, remember, is part of ./validate - but that's >> life, perhaps. > > > Docbook is a fairly large dependency in my experience? 
> > -- > brandon s allbery kf8nh sine nomine associates > allbery.b at gmail.com ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From Thomas.Winant at cs.kuleuven.be Tue Oct 7 16:07:15 2014 From: Thomas.Winant at cs.kuleuven.be (Thomas Winant) Date: Tue, 07 Oct 2014 18:07:15 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: Message-ID: <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> Hi, On 2014-10-03 23:35, Austin Seipp wrote: > .. > Here are the major patches on Phabricator still needing review, that I > think we'd like to see for 7.10.1: > > - D168: Partial type signatures > ..

As Austin said, our patch implementing Partial Type Signatures is still up for code review on Phabricator [1]. It is our goal too to get it in 7.10.1, and we will try to do as much as we can to help out with this process. We'd like it very much if people had a thorough look at it (thanks Richard for the feedback). We're glad to provide additional info (including extra comments in the code), rewrite confusing code, etc.

= Status =

The implementation is nearly complete:

* We've integrated support for Holes, i.e. by default, an underscore in a type signature will generate an error message mentioning the inferred type. By enabling -XPartialTypeSignatures, the inferred type is used and the underscore can remain in the type signature.
* SPJ's proposed simplifications (over Skype) have been implemented, except for the fact that we still use the annotated constraints for solving, see [2].
* Richard's comments on Phabricator [1] have been addressed in extra commits.
* I've rebased the patch against master on Monday.
* I've added docstrings for most of the new functions I've added.
* Some TODOs still remain, I'll summarise the most important ones here. See [3] for a detailed list with examples.
* When -XMonoLocalBinds is enabled (implied by -XGADTs and -XTypeFamilies), (some) local bindings without a type signature aren't generalised. Partial type signatures should follow this behaviour. This is currently not handled correctly. We have a stopgap solution involving generating an error in mind, but would prefer a real fix. We'd like some help with this.
* Partial type signatures are currently ignored for pattern bindings. This bug doesn't seem to be difficult to solve, but requires some debugging.
* The following code doesn't type check:

    {-# LANGUAGE MonomorphismRestriction, PartialTypeSignatures #-}
    charlie :: _ => a
    charlie = 3

  Type error: No instance for (Num a) arising from the literal '3'. We would like the (Num a) constraint to be inferred (because of the extra-constraint wildcard).
* Some smaller things, e.g. improving error messages.

We'll try to fix the remaining TODOs, but help is certainly appreciated and will speed up integrating this patch! Please have a look at the code and let us know what we can do to help.
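To give reviewers a quick feel for what the extension enables, here is a small, purely illustrative example (not from the patch's test suite; whether it compiles in exactly this form of course depends on the final state of the patch):

    {-# LANGUAGE PartialTypeSignatures #-}
    module Demo where

    -- A wildcard in a type: the compiler fills in the missing part.
    -- Without -XPartialTypeSignatures this is reported as an error
    -- mentioning the inferred type; with it, the inferred type is used.
    keep :: _ -> Bool
    keep xs = not (null xs)

    -- An extra-constraints wildcard: the (Ord a) constraint is
    -- inferred instead of being written out.
    clamp :: _ => a -> a -> a
    clamp lo x = if x < lo then lo else x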
Cheers, Thomas Winant

[1]: https://phabricator.haskell.org/D168
[2]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#extra-constraints-wildcard
[3]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#TODOs

Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

From chengang31 at gmail.com Tue Oct 7 16:25:26 2014 From: chengang31 at gmail.com (cg) Date: Wed, 08 Oct 2014 00:25:26 +0800 Subject: Building ghc on Windows with msys2 In-Reply-To: References: Message-ID: On 9/16/2014 4:57 AM, Gintautas Miliauskas wrote: > > msys2 seems to be in good shape and should probably be promoted to the > primary suggested method to build ghc on Windows. Let's look into that > once the new build instructions have been proofread and verified. >

I am trying to build the latest code but without success. First, I encounter two 'Ambiguous occurrence' issues:

    libraries\binary\src\Data\Binary\Builder\Base.hs:110:15:
    Ambiguous occurrence "empty"
    It could refer to either "Data.Binary.Builder.Base.empty",
    defined at libraries\binary\src\Data\Binary\Builder\Base.hs:124:1
    or "GHC.Base.empty",
    imported from "GHC.Base" at libraries\binary\src\Data\Binary\Builder\Base.hs:84:1-15

and the same with Prelude.foldr in this Base.hs. I hide 'empty' and 'foldr' at the import site and the code compiles. Has anyone seen the same issues?

Then, I get this 'out of memory' error at the 'building final phase' stage: "inplace/bin/ghc-stage1.exe" ... -c libraries/Cabal/Cabal/./Language/Haskell/Extension.hs ghc-stage1.exe: out of memory

I am building on Windows 8.1 with 16 GB of RAM, with 'BuildFlavour = quick'. And I see ghc-stage1 is using 16 GB+ of memory in Windows Task Manager. Why does ghc-stage1.exe use so much memory? -- cg

From p.k.f.holzenspies at utwente.nl Tue Oct 7 16:26:24 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Tue, 7 Oct 2014 16:26:24 +0000 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , Message-ID: Wait, wait, wait! I wasn't talking about a parallel *runtime*. Nothing changes there. All I'm talking about is something that is a very old issue that never got added / solved / resolved. Somewhere on the commentary, or the mailing list, I seem to recall that the generation of Uniques was the bottleneck for the parallelisation of GHC *itself*. It's about having a compiler using multiple threads and says nothing about programs coming out of it. I'm all with you on embedded processors and that kind of stuff, but I don't see a pressing need to compile *on* them. Isn't all ARM-stuff assuming cross-compilation? Ph. ________________________________________ From: mad.one at gmail.com on behalf of Austin Seipp Sent: 07 October 2014 17:46 To: Holzenspies, P.K.F. (EWI) Cc: ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC On Tue, Oct 7, 2014 at 1:32 AM, wrote: > Yes, this approach to a parallel GHC would only work on 64-bit machines. The > idea is, I guess, that we're not going to see a massive demand for parallel > GHC running on multi-core 32-bit systems. In other words; 32-bit systems > wouldn't get a parallel GHC. Let me make sure I'm understanding this correctly: in this particular proposed solution, the side effect would be that we no longer have a capable 32bit runtime which supports multicore parallelism?
Sorry, but I'm afraid this approach is pretty much unacceptable IMO, for precisely the reason outlined in your last sentence. 32bit systems are surprisingly commen. I have several multicore 32bit ARMv7 machines on my desk right now, for example. And there are a lot more of those floating around than you might think. If that's the 'cure', I think I (and other users) would consider it far worse than the disease. > Regards, > Philip > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Tue Oct 7 16:40:11 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 11:40:11 -0500 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: On Tue, Oct 7, 2014 at 11:26 AM, wrote: > Wait, wait, wait! I wasn't talking about a parallel *runtime*. Nothing changes there. All I'm talking about is something that is a very old issue that never got added / solved / resolved. Somewhere on the commentary, or the mailing list, I seem to recall that the generation of Uniques was the bottleneck for the parallelisation of GHC *Itself*. It's about having a compiler using multiple threads and says nothing about programs coming out of it. OK, cool. Just making sure. :) > I'm all with you on embedded processors and that kind of stuff, but I don't see a pressing need to compile *on* them. Isn't all ARM-stuff assuming cross-compilation? No, not all ARM builds assume cross compilation. In fact, if you want a fully working GHC, cross compilation is impossible: you cannot cross compile GHCi, meaning you can't use Template Haskell (as well as some of the linker features, I believe). However, I don't think this change impacts building GHC at all, since we get parallelism through 'make', not through GHC itself (and on low-end systems, parallelism in the build system is crucial and really necessary.) So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. > Ph. > > > ________________________________________ > From: mad.one at gmail.com on behalf of Austin Seipp > Sent: 07 October 2014 17:46 > To: Holzenspies, P.K.F. (EWI) > Cc: ghc-devs at haskell.org > Subject: Re: Again: Uniques in GHC > > On Tue, Oct 7, 2014 at 1:32 AM, wrote: >> Yes, this approach to a parallel GHC would only work on 64-bit machines. The >> idea is, I guess, that we're not going to see a massive demand for parallel >> GHC running on multi-core 32-bit systems. In other words; 32-bit systems >> wouldn't get a parallel GHC. > > Let me make sure I'm understanding this correctly: in this particular > proposed solution, the side effect would be that we no longer have a > capable 32bit runtime which supports multicore parallelism? > > Sorry, but I'm afraid this approach is pretty much unacceptable IMO, > for precisely the reason outlined in your last sentence. 32bit systems > are surprisingly commen. I have several multicore 32bit ARMv7 machines > on my desk right now, for example. And there are a lot more of those > floating around than you might think. 
> > If that's the 'cure', I think I (and other users) would consider it > far worse than the disease. > >> Regards, >> Philip >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Tue Oct 7 16:44:39 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 11:44:39 -0500 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: On Tue, Oct 7, 2014 at 11:40 AM, Austin Seipp wrote: > I still consider this a big limitation, one which is only due to an > implementation detail. To make this point more clear: I'm generally hesitant about basing the availability of architecture-independent features ('ghc supporting -j', which almost entirely was fixed in the frontend compiler pipeline) on platform details, or the underlying hardware or execution model GHC runs on. Unless the changes are very significant (like, -j gets significantly faster), I think we should be very hesitant about gating frontend features behind architectural/implementation details. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From p.k.f.holzenspies at utwente.nl Tue Oct 7 16:45:43 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Tue, 7 Oct 2014 16:45:43 +0000 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , Message-ID: ________________________________________ From: mad.one at gmail.com on behalf of Austin Seipp So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. From shumovichy at gmail.com Tue Oct 7 16:59:23 2014 From: shumovichy at gmail.com (Yuras Shumovich) Date: Tue, 07 Oct 2014 19:59:23 +0300 Subject: FFI: c/c++ struct on stack as an argument or return value In-Reply-To: <1402761470.4765.20.camel@shum-lt> References: <1394797839.4664.35.camel@shum-lt> <5F969E34-BCF1-40CB-86DF-20F84245BEDB@cse.unsw.edu.au> <1394887862.4722.31.camel@shum-lt> <532839B4.7040609@gmail.com> <1395163879.4601.35.camel@shum-lt> <5328BAE9.6080805@gmail.com> <1402761470.4765.20.camel@shum-lt> Message-ID: <1412701163.2646.1.camel@gmail.com> Simon, I finally managed to implement that for major NCG backends. 
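(Aside, for list readers who have not followed the earlier thread: the feature is about letting a ccall return a small C struct by value. Today the usual workaround on the Haskell side is to allocate a buffer and have the C side fill it in, roughly as in the sketch below. The C helper, the struct layout and the offsets are all hypothetical, purely to illustrate the boilerplate this work is meant to remove.)

    {-# LANGUAGE ForeignFunctionInterface #-}
    module StructReturn where

    import Foreign
    import Foreign.C.Types

    -- Hypothetical C side:  struct result { int32_t a; int8_t b; int8_t c; float d; };
    --                       void c_test_into(int i, struct result *out);
    data Result = Result CInt Int8 Int8 Float

    instance Storable Result where
      sizeOf _    = 12        -- assumed layout: 4 + 1 + 1 + 2 (padding) + 4
      alignment _ = 4
      peek p = do a <- peekByteOff p 0
                  b <- peekByteOff p 4
                  c <- peekByteOff p 5
                  d <- peekByteOff p 8
                  return (Result a b c d)
      poke p (Result a b c d) = do
        pokeByteOff p 0 a
        pokeByteOff p 4 b
        pokeByteOff p 5 c
        pokeByteOff p 8 d

    foreign import ccall unsafe "c_test_into"
      c_test_into :: CInt -> Ptr Result -> IO ()

    -- Allocate in Haskell, let C fill it, read it back.
    callCTest :: CInt -> IO Result
    callCTest i = alloca $ \p -> do c_test_into i p; peek p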
Phabricator revision is here: https://phabricator.haskell.org/D252 Here is a link to the review you did before: https://github.com/Yuras/ghc/commit/7295a4c600bc69129b6800be5b52c3842c9c4e5b I don't have implementation for mac os x86, ppc and sparc. Are they actively used today? I don't have access to hardware to test them. Do you think it has it's own value without exposing to haskell FFI? What is the minimal feature set I should implement to make it merged? Thanks, Yuras On Sat, 2014-06-14 at 18:57 +0300, Yuras Shumovich wrote: > Hello, > > I implemented support for returning C structures by value in cmm for > x86_64 (most likely it works only on linux). You can find it here: > https://github.com/Yuras/ghc/commits/cmm-cstruct > > It supports I8, I16, I32, I64, F_, D_ cmm types, and requires special > annotation. For example: > > #include "Cmm.h" > > #define MyStruct struct(CInt, I8, struct(I8, CInt)) > > cmm_test(W_ i) > { > CInt i1; > I8 i2, i3; > float32 i4; > (i1, i2, i3, i4) = ccall c_test(W_TO_INT(i)) MyStruct; > return (TO_W_(i1), TO_W_(i2), TO_W_(i3), i4); > } > > (See "test" directory for full examples.) > > > Do you think it is right approach? > Could anyone review the code please? > > And the last thing, I need mentor for this project. Is anyone interested? > > Thanks, > Yuras > > On Tue, 2014-03-18 at 21:30 +0000, Simon Marlow wrote: > > So the hard parts are: > > > > - the native code generators > > - native adjustor support (rts/Adjustor.c) > > > > Everything else is relatively striaghtforward: we use libffi for > > adjustors on some platforms and for GHCi, and the LLVM backend should be > > quite easy too. > > > > I would at least take a look at the hard bits and see whether you think > > it's going to be possible to extend these to handle struct args/returns. > > Because if not, then the idea is a dead end. Or maybe we will need to > > limit the scope to make things easier (e.g. only integer and pointer > > fields). > > > > Cheers, > > Simon > > > > On 18/03/2014 17:31, Yuras Shumovich wrote: > > > Hi, > > > > > > I thought I have lost the battle :) > > > Thank you for the support, Simon! > > > > > > I'm interested in full featured solution: arguments, return value, > > > foreign import, foreign export, etc. But it is too much for me to do it > > > all at once. So I started with dynamic wrapper. > > > > > > The plan is to support structs as arguments and return value for dynamic > > > wrapper using libffi; > > > then implement native adjustors at least for x86_64 linux; > > > then make final design decision (tuple or data? language pragma? union > > > support? etc); > > > and only then start working on foreign import. > > > > > > But I'm open for suggestions. Just let me know if you think it is better > > > to start with return value support for foreign import. > > > > > > Thanks, > > > Yuras > > > > > > On Tue, 2014-03-18 at 12:19 +0000, Simon Marlow wrote: > > >> I'm really keen to have support for returning structs in particular. > > >> Passing structs less so, because working around the lack of struct > > >> passing isn't nearly as onerous as working around the lack of struct > > >> returns. Returning multiple values from a C function is a real pain > > >> without struct returns: you have to either allocate some memory in > > >> Haskell or in C, and both methods are needlessly complex and slow. > > >> (though allocating in Haskell is usually better.) 
C++ code does this all > > >> the time, so if you're wrapping C++ code for calling from Haskell, the > > >> lack of multiple returns bites a lot. > > >> > > >> In fact implementing this is on my todo list, I'm really glad to see > > >> someone else is planning to do it :-) > > >> > > >> The vague plan I had in my head was to allow the return value of a > > >> foreign import to be a tuple containing marshallable types, which would > > >> map to the appropriate return convention for a struct on the current > > >> platform. Perhaps allowing it to be an arbitrary single-constructor > > >> type is better, because it allows us to use a type that has a Storable > > >> instance. > > >> > > >> Cheers, > > >> Simon > > >> > > > > > > > > From austin at well-typed.com Tue Oct 7 17:03:54 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Oct 2014 12:03:54 -0500 Subject: Building ghc on Windows with msys2 In-Reply-To: References: Message-ID: On Tue, Oct 7, 2014 at 11:25 AM, cg wrote: > On 9/16/2014 4:57 AM, Gintautas Miliauskas wrote: >> >> >> msys2 seems to be in good shape and should probably be promoted to the >> primary suggested method to build ghc on Windows. Let's look into that >> once the new build instructions have been proofread and verified. >> > > I am trying to build the latest code but without success. > > First, I encounter twe 'Ambiguous occurrence' issues: > > libraries\binary\src\Data\Binary\Builder\Base.hs:110:15: > Ambiguous occurrence "empty" > It could refer to either "Data.Binary.Builder.Base.empty", > defined at libraries\binary\src\Data\Binary\Builder\Base.hs:124:1 > or "GHC.Base.empty", > imported from "GHC.Base" at > libraries\binary\src\Data\Binary\Builder\Base.hs:84:1-15 > > and the same with Prelude.foldr in this Base.hs. > > I hide 'empty' and 'foldr' at importing point and the code compiles. > > Has anyone see the same issues? Ugh, this is some fallout I thought we had fixed, but apparently not. I'll fix it shortly, thanks. > Then, I get this 'out of memory' error at 'building final phase' > "inplace/bin/ghc-stage1.exe" ... -c > libraries/Cabal/Cabal/./Language/Haskell/Extension.hs > ghc-stage1.exe: out of memory > > I am building on Windows 8.1 with 16G ram, with 'BuildFlavour = quick'. And > I see > ghc-stage1 is using 16G+ memory in Windows Task Manager. > > Why does ghc-stage1.exe use so much memory? Wow, I thought we fixed this one too! Please see this bug: https://ghc.haskell.org/trac/ghc/ticket/9630 What GHC commit are you using? Are your submodules all up to date? In particular, if 'binary' is not up to date, even if the rest of your tree is, you'll see this problem. 
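In the meantime, the local workaround cg describes (hiding the clashing names at the point of import) looks roughly like the lines below. This is only a sketch of that workaround; the exact imports in binary's Base.hs may differ, and the real fix will land upstream in the submodule.

    -- in Data/Binary/Builder/Base.hs (sketch of the local workaround only)
    import GHC.Base hiding (empty)    -- this module defines its own 'empty'
    import Prelude  hiding (foldr)    -- avoid the clash with Prelude.foldr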
> -- > cg > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From mail at joachim-breitner.de Tue Oct 7 18:42:43 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 07 Oct 2014 20:42:43 +0200 Subject: GitHub pull requests In-Reply-To: <1412616764.8286.0.camel@joachim-breitner.de> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <1412541852.21551.4.camel@joachim-breitner.de> <1412616764.8286.0.camel@joachim-breitner.de> Message-ID: <1412707363.29506.0.camel@joachim-breitner.de> Hi, Am Montag, den 06.10.2014, 19:32 +0200 schrieb Joachim Breitner: > Am Montag, den 06.10.2014, 17:54 +0200 schrieb Tuncer Ayaz: > > By the way, while the Github team has no public ticket system, they > > are very responsive when you send them feature requests or, say, > > explain where the review system is incomplete/broken. They never > > promise anything and do not pre-announce a feature, so it is hard to > > track future changes. However, they're responsive and seem to value > > majority opinion. > > > > Here's the page: https://github.com/support > > good idea, I sent them a message which is basically > http://stackoverflow.com/questions/26204811/prevent-github-from-interpreting-nnnn-in-commit-messages doesn?t look like that will happen soon: > Hey Joachim, > > Disabling that linking is not possible currently, and I'm not sure if > that feature will be available in the near future. Still, I'll add > your request to our feature request wishlist and pass the feedback to > the team. > > Thanks for the question/suggestion and let us know if there's anything > else. > > Cheers, > Ivan > Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From dominique.devriese at cs.kuleuven.be Tue Oct 7 19:14:20 2014 From: dominique.devriese at cs.kuleuven.be (Dominique Devriese) Date: Tue, 7 Oct 2014 21:14:20 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> References: <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> Message-ID: To complement what Thomas said: Phabricator currently claims that the patch is not building, but if I understand Thomas correctly, this is the consequence of a limitation of the Phabricator builder which is not treating the haddock part of the patch correctly. So to reiterate: the partial type signatures patch *should currently build*, despite what Phabricator says. Regards Dominique 2014-10-07 18:07 GMT+02:00 Thomas Winant : > Hi, > > On 2014-10-03 23:35, Austin Seipp wrote: >> >> .. >> Here are the major patches on Phabricator still needing review, that I >> think we'd like to see for 7.10.1: >> >> - D168: Partial type signatures >> .. > > > As Austin said, our patch implementing Partial Type Signatures is still > up for code review on Phabricator [1]. It is our goal too to get it in > 7.10.1, and we will try to do as much as we can to help out with this > process. 
> > We'd like it very much if people had a thorough look at it (thanks > Richard for the feedback). We're glad to provide additional info > (including extra comments in the code), rewrite confusing code, etc. > > = Status = > > The implementation is nearly complete: > * We've integrated support for Holes, i.e. by default, an underscore in > a type signature will generate an error message mentioning the > inferred type. By enabling -XPartialTypeSignatures, the inferred type > is used and the underscore can remain in the type signature. > * SPJ's proposed simplifications (over Skype) have been implemented, > except for the fact that we still use the annotated constraints for > solving, see [2]. > * Richard's comments on Phabricator [1] have been addressed in extra > commits. > * I've rebased the patch against master on Monday. > * I've added docstring for most of the new functions I've added. > * Some TODOs still remain, I'll summarise the most important ones here. > See [3] for a detailed list with examples. > * When -XMonoLocalBinds is enabled (implied by -XGADTs and > -XTypeFamilies), (some) local bindings without type signature aren't > generalised. Partial type signatures should follow this behaviour. > This is currently not handled correctly. We have a stopgap solution > involving generating an error in mind, but would prefer a real fix. > We'd like some help with this. > * Partial type signatures are currently ignored for pattern bindings. > This bug doesn't seem to be difficult to solve, but requires some > debugging. > * The following code doesn't type check: > > {-# LANGUAGE MonomorphismRestriction, PartialTypeSignatures #-} > charlie :: _ => a > charlie = 3 > > Type error: No instance for (Num a) arising from the literal ?3?. We > would like the (Num a) constraint to be inferred (because of the > extra-constraint wildcard). > * Some smaller things, e.g. improving error messages. > > We'll try to fix the remaining TODOs, but help is certainly appreciated > and will speed up integrating this patch! > > Please have a look at the code and let us know what we can do to help. > > > Cheers, > Thomas Winant > > [1]: https://phabricator.haskell.org/D168 > [2]: > https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#extra-constraints-wildcard > [3]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#TODOs > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Tue Oct 7 19:30:37 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 7 Oct 2014 15:30:37 -0400 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> Message-ID: in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, wrote: > > ________________________________________ > From: mad.one at gmail.com on behalf of Austin Seipp < > austin at well-typed.com> > > So I assume your change would mean 'ghc -j' would not work for 32bit. > I still consider this a big limitation, one which is only due to an > implementation detail. 
But we need to confirm this will actually fix > any bottlenecks first though before getting to that point. > > > > > Yes, that's what I'm saying. > > Let me just add that what I'm proposing by no means prohibits or hinders > making 32-bit GHC-versions be parallel later on, it just doesn't solve the > problem. It depends to what extent the "fully deterministic behaviour" bug > is considered a priority (there was something about parts of the hi-files > being non-deterministic across different executions of GHC; don't recall > the details). > > Anyhow, the work I'm doing now exposes a few things about Uniques that > confuse me a little and that could have been bugs (that maybe never acted > up). Extended e-mail to follow later on. > > Ph. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Tue Oct 7 21:03:32 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Tue, 7 Oct 2014 21:03:32 +0000 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , Message-ID: Dear Carter, Simon, et al, (CC'd SPJ on this explicitly, because I *think* he'll be most knowledgeable on some of the constraints that need to be guaranteed for Uniques) I agree, but to that end, a few parameters need to become clear. To this end, I've created a Phabricator-thing that we can discuss things off of: https://phabricator.haskell.org/D323 Here are my open issues: - There were ad hoc domains of Uniques being created everywhere in the compiler (i.e. characters chosen to classify the generated Uniques). I have gathered them all up and given them names as constructors in Unique.UniqueDomain. Some of these names are arbitrary, because I don't know what they're for precisely. I generally went for the module name as a starting point. I did, however, make a point of having different invocations of mkSplitUniqSupply et al all have different constructors (e.g. HscMainA through HscMainC). This is to prevent the high potential for conflicts (see comments in uniqueDomainChar). If there are people that are more knowledgeable about the use of Uniques in these modules (e.g. HscMain, ByteCodeGen, etc.) can say that the uniques coming from these different invocations can never cause conflict, they maybe can reduce the number of UniqueDomains. ? - Some UniqueDomains only have a handful of instances and seem a bit wasteful. - Uniques were represented by a custom-boxed Int#, but serialised as Word32. Most modern machines see Int# as a 64-bit thing. Aren't we worried about the potential for undetected overlap/conflict there? - What is the scope in which a Unique must be Unique? I.e. what if independently compiled modules have overlapping Uniques (for different Ids) in their hi-files? Also, do TyCons and DataCons really need to have guaranteed different Uniques? Shouldn't the parser/renamer figure out what goes where and raise errors on domain violations? - There seem to be related-but-different Unique implementations in Template Haskell and Hoopl. Why is this? - How critical is it to let mkUnique (and mkSplitUniqSupply) be pure functions? 
If they can be IO, we could greatly simplify the management of (un)generated Uniques in each UniqueDomain and quite possibly make the move to a threaded GHC easier (for what that's worth). Also, this may help solve the non-determinism issues. - Missing haddocks, failing lints (lines too long) and a lot of cosmetics will be met when the above points have become a tad more clear. I'm more than happy to document a lot of the answers to the above stuff in Unique and/or commentary. Regards, Philip ________________________________ From: Carter Schonwald Sent: 07 October 2014 21:30 To: Holzenspies, P.K.F. (EWI) Cc: Austin Seipp; ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, > wrote: ________________________________________ From: mad.one at gmail.com > on behalf of Austin Seipp > So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 7 21:23:32 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 21:23:32 +0000 Subject: Again: Uniques in GHC In-Reply-To: References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , , Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F31FDA2@DB3PRD3001MB020.064d.mgd.msft.net> One of the things I'm finding difficult about this Phab stuff is that I get presented with lots of code without enough supporting text saying * What problem is this patch trying to solve? * What is the user-visible design (for language features)? * What are the main ideas in the implementation? The place we usually put such design documents is on the GHC Trac Wiki. Email is ok for discussion, but the wiki is FAR better for stating clearly the current state of play. Philip, might you make such a page for this unique stuff? To answer some of you specific questions (please include the answers in the wiki page in some form): * Uniques are never put in .hi files (as far as I know). They do not survive a single invocation of GHC. * However with ghc --make, or ghci, uniques do survive for the entire invocation of GHC. For example in ghc --make, uniques assigned when compiling module A should not clash with those for module B * Yes, TyCons and DataCons must have separate uniques. 
We often form sets of Names, which contain both TyCons and DataCons. Let's not mess with this. * Having unique-supply-splitting as a pure function is so deeply embedded in GHC that I could not hazard a guess as to how difficult it would be to IO-ify it. Moreover, I would regret doing so because it would force sequentiality where none is needed. * Template Haskell is a completely independent Haskell library. It does not import GHC. If uniques were in their own package, then TH and GHC could share them. Ditto Hoopl. * You say that Uniques are serialised as Word32. I'm not sure why they are serialised at all! * Enforcing determinacy everywhere is a heavy burden. Instead I suppose that you could run a pass at the end to give everything a more determinate name TidyPgm does this for the name strings, so it would probably be easy to do so for the uniques too. Simon ________________________________ From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 07 October 2014 22:03 To: carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Carter, Simon, et al, (CC'd SPJ on this explicitly, because I *think* he'll be most knowledgeable on some of the constraints that need to be guaranteed for Uniques) I agree, but to that end, a few parameters need to become clear. To this end, I've created a Phabricator-thing that we can discuss things off of: https://phabricator.haskell.org/D323 Here are my open issues: - There were ad hoc domains of Uniques being created everywhere in the compiler (i.e. characters chosen to classify the generated Uniques). I have gathered them all up and given them names as constructors in Unique.UniqueDomain. Some of these names are arbitrary, because I don't know what they're for precisely. I generally went for the module name as a starting point. I did, however, make a point of having different invocations of mkSplitUniqSupply et al all have different constructors (e.g. HscMainA through HscMainC). This is to prevent the high potential for conflicts (see comments in uniqueDomainChar). If there are people that are more knowledgeable about the use of Uniques in these modules (e.g. HscMain, ByteCodeGen, etc.) can say that the uniques coming from these different invocations can never cause conflict, they maybe can reduce the number of UniqueDomains. ? - Some UniqueDomains only have a handful of instances and seem a bit wasteful. - Uniques were represented by a custom-boxed Int#, but serialised as Word32. Most modern machines see Int# as a 64-bit thing. Aren't we worried about the potential for undetected overlap/conflict there? - What is the scope in which a Unique must be Unique? I.e. what if independently compiled modules have overlapping Uniques (for different Ids) in their hi-files? Also, do TyCons and DataCons really need to have guaranteed different Uniques? Shouldn't the parser/renamer figure out what goes where and raise errors on domain violations? - There seem to be related-but-different Unique implementations in Template Haskell and Hoopl. Why is this? - How critical is it to let mkUnique (and mkSplitUniqSupply) be pure functions? If they can be IO, we could greatly simplify the management of (un)generated Uniques in each UniqueDomain and quite possibly make the move to a threaded GHC easier (for what that's worth). Also, this may help solve the non-determinism issues. 
- Missing haddocks, failing lints (lines too long) and a lot of cosmetics will be met when the above points have become a tad more clear. I'm more than happy to document a lot of the answers to the above stuff in Unique and/or commentary. Regards, Philip ________________________________ From: Carter Schonwald Sent: 07 October 2014 21:30 To: Holzenspies, P.K.F. (EWI) Cc: Austin Seipp; ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, > wrote: ________________________________________ From: mad.one at gmail.com > on behalf of Austin Seipp > So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 7 22:21:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Oct 2014 22:21:45 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> References: , <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F32067C@DB3PRD3001MB020.064d.mgd.msft.net> Is the wiki page up to date? https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures I'd move the Constraint Wildcards bit out to an appendix or delete altogether -- it's a distraction since it's not part of the design. Named wildcard are described as a "fourth form" but actually it's "third" I suppose. And aren't they just a vairant of "type wildcards"? You can't name "extra-constraint wildcards" can you? There's a long section on "partial expressions and pattern signatures" but I think the conclusion is "we don't do this". Again, move to an appendix of not-implemented ideas. Try to focus on the actual design. Thanks! Simon ________________________________________ From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of Thomas Winant [Thomas.Winant at cs.kuleuven.be] Sent: 07 October 2014 17:07 To: ghc-devs at haskell.org Subject: Re: Tentative high-level plans for 7.10.1 Hi, On 2014-10-03 23:35, Austin Seipp wrote: > .. > Here are the major patches on Phabricator still needing review, that I > think we'd like to see for 7.10.1: > > - D168: Partial type signatures > .. As Austin said, our patch implementing Partial Type Signatures is still up for code review on Phabricator [1]. 
It is our goal too to get it in 7.10.1, and we will try to do as much as we can to help out with this process. We'd like it very much if people had a thorough look at it (thanks Richard for the feedback). We're glad to provide additional info (including extra comments in the code), rewrite confusing code, etc. = Status = The implementation is nearly complete: * We've integrated support for Holes, i.e. by default, an underscore in a type signature will generate an error message mentioning the inferred type. By enabling -XPartialTypeSignatures, the inferred type is used and the underscore can remain in the type signature. * SPJ's proposed simplifications (over Skype) have been implemented, except for the fact that we still use the annotated constraints for solving, see [2]. * Richard's comments on Phabricator [1] have been addressed in extra commits. * I've rebased the patch against master on Monday. * I've added docstring for most of the new functions I've added. * Some TODOs still remain, I'll summarise the most important ones here. See [3] for a detailed list with examples. * When -XMonoLocalBinds is enabled (implied by -XGADTs and -XTypeFamilies), (some) local bindings without type signature aren't generalised. Partial type signatures should follow this behaviour. This is currently not handled correctly. We have a stopgap solution involving generating an error in mind, but would prefer a real fix. We'd like some help with this. * Partial type signatures are currently ignored for pattern bindings. This bug doesn't seem to be difficult to solve, but requires some debugging. * The following code doesn't type check: {-# LANGUAGE MonomorphismRestriction, PartialTypeSignatures #-} charlie :: _ => a charlie = 3 Type error: No instance for (Num a) arising from the literal ?3?. We would like the (Num a) constraint to be inferred (because of the extra-constraint wildcard). * Some smaller things, e.g. improving error messages. We'll try to fix the remaining TODOs, but help is certainly appreciated and will speed up integrating this patch! Please have a look at the code and let us know what we can do to help. Cheers, Thomas Winant [1]: https://phabricator.haskell.org/D168 [2]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#extra-constraints-wildcard [3]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#TODOs Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs From gintautas.miliauskas at gmail.com Tue Oct 7 23:21:28 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Wed, 8 Oct 2014 01:21:28 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I've cleaned up the main Windows build page, moved MSYS2 instructions there, moved out legacy instructions and added backlinks / warnings / redirects. It would be great if someone could go through, verify the instructions and make sure there are no loose ends or misleading wikipages. On Wed, Oct 1, 2014 at 12:25 AM, Simon Peyton Jones wrote: > Gintautas, and other folk building GHC on Windows, > > > > There has been some activity on the ?GHC on Windows? front, which is > great. 
> > > Some time ago I wrote: > > I would love it if you guys formed a GHC-on-Windows Task Force, who tried > to make sure that the Windows experience was always good. At the moment we > have lots of Windows users but very few who are willing to help make it > work, the recipients of this email being honourable exceptions. > > > > but nothing really happened. Maybe this time it can! Possible members of > such a task force are: > > - Gintautas Miliauskas gintautas.miliauskas at gmail.com > > - kyra kyrab at mail.ru > > - Marek Wawrzos marek.28.93 at gmail.com > > - Tamar Christina > > - Roman Kuznetsov > > - Randy Polen > > > > All we need is someone to act as convenor/coordinator and we are good to > go. Would any of you be willing to play that role? > > > > An advantage of having a working group is that you can *decide* things. > At the moment people often wait for GHC HQ to make a decision, and end up > waiting a long time. It would be better if a working group was responsible > for the GHC-on-Windows build and then if (say) you want to mandate msys2, > you can go ahead and mandate it. Well, obviously consult ghc-devs for > advice, but you are in the lead. Does that make sense? > > > > > > I think an early task is to replace what Neil Mitchell encountered: FIVE > different wiki pages describing how to build GHC on Windows. We want just > one! (Others can perhaps be marked "out of date/archive" rather than > deleted, but it should be clear which is the main choice.) > > > > I agree with using msys2 as the main choice. (I'm using it myself.) It > may be that Gintautas's page > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > is already sufficient. Although I'd like to see it tested by others. For > example, I found that it was CRUCIAL to set MSYSTEM=MINGW whereas > Gintautas's page says nothing about that. > > > > Other small thoughts: > > - We started including the ghc-tarball stuff because when we > relied directly on the gcc that came with msys, we kept getting build > failures because the gcc that some random person happened to be using did > not work (e.g. they had a too-old or too-new version of msys). By using a > single, fixed gcc, we avoided all this pain. > > > > - I don't know what a "rubenvb" build is, but I think you can go > ahead and say "use X and Y in this way". The important thing is that it > should be reproducible, and not dependent on the particular Cygwin or gcc > or whatever that user happens to have installed. > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Gintautas > Miliauskas > *Sent:* 15 September 2014 21:58 > *To:* ghc-devs at haskell.org > *Subject:* Building ghc on Windows with msys2 > > > > Hello, > > > > I have been messing around a little bit with building GHC from source on > Windows, and found the msys2 wikipage > quite > useful, but somewhat outdated. Quite a few steps in those instructions are > no longer necessary and can be omitted. I am working on cleaning up that > wikipage right now and should be done in a day or two. > > > > I've found a recent email > in > the middle of updating the wikipage about other people planning to do the > same, so I thought I'd shoot an email to make sure that work is not being > duplicated. > > > > msys2 seems to be in good shape and should probably be promoted to the > primary suggested method to build ghc on Windows. Let's look into that once > the new build instructions have been proofread and verified.
> > > > Best regards, > > -- > Gintautas Miliauskas > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Tue Oct 7 23:56:13 2014 From: jwlato at gmail.com (John Lato) Date: Wed, 8 Oct 2014 07:56:13 +0800 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Ok, if the ghc devs decide to do a 7.8.4 release, I will explicitly commit to helping backport patches. However, I don't know how to do so. Therefore, I'm going to ask Austin (as he's probably the most knowledgeable) to update the 7.8.4 wiki page with the process people should use to contribute backports. I'm guessing it's probably something like this: checkout the 7.8.4 release branch (which branch is it? ghc-7.8?) git cherry-pick the desired commit(s) ? (make a phab request for ghc-hq to review?) update Trac with what you've done (or if this is already documented somewhere, please point me to it). Unfortunately this doesn't have any way of showing that I'm working on a specific backport/merge, so there's potential for duplicate work, which isn't great. I also agree with Nicolas that it's likely possible to make better use of git to help with this sort of work, but that's a decision for ghc hq so I won't say any more on that. Cheers, John On Tue, Oct 7, 2014 at 4:12 PM, Simon Peyton Jones wrote: > Thanks for this debate. (And thank you Austin for provoking it by > articulating a medium term plan.) > > Our intent has always been that that the latest version on each branch is > solid. There have been one or two occasions when we have knowingly > abandoned a dodgy release branch entirely, but not many. > > So I think the major trick we are missing is this: > > We don't know what the show-stopping bugs on a branch are > > For example, here are three responses to Austin's message: > > | The only potential issue here is that not a single 7.8 release will be > | able to bootstrap LLVM-only targets due to #9439. I'm not sure how > > | 8960 looks rather serious and potentially makes all of 7.8 a no-go > | for some users. > > | We continue to use 7.2, at least partly because all newer versions of > | ghc have had significant bugs that affect us > > That's not good. Austin's message said about 7.8.4 "No particular pressure > on any outstanding bugs to release immediately". There are several dozen > tickets queued up on 7.8.4 (see here > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them > are "nice to have". > > So clearly the message is not getting through. > > > My conclusion > > * I think we (collectively!) should make a serious attempt to fix > show-stopping > bugs on a major release branch. (I agree that upgrading to the next > major > release often simply brings in a new wave of bugs because of GHC's > rapid development culture.) > > * We can only possibly do this if > a) we can distinguish "show-stopping" from "nice to have" > b) we get some help (thank you John Lato for implicitly offering) > > I would define a "show-stopping" bug as one that simply prevents you from > using the release altogether, or imposes a very large cost at the user end. > > For mechanism I suggest this. 
On the 7.8.4 status page (or in general, on > the release branch page you want to influence), create a section "Show > stoppers" with a list of the show-stopping bugs, including some > English-language text saying who cares so much and why. (Yes I know that > it might be there in the ticket, but the impact is much greater if there is > an explicit list of two or three personal statements up front.) > > Concerning 7.8.4 itself, I think we could review the decision to abandon > it, in the light of new information. We might, for example, fix > show-stoppers, include fixes that are easy to apply, and not-include other > fixes that are harder. > > Opinions? I'm not making a ruling here! > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben > | Gamari > | Sent: 04 October 2014 04:52 > | To: Austin Seipp; ghc-devs at haskell.org > | Cc: Simon Marlow > | Subject: Re: Tentative high-level plans for 7.10.1 > | > | Austin Seipp writes: > | > | snip. > | > | > > | > We do not believe we will ship a 7.8.4 at all, contrary to what you > | > may have seen on Trac - we never decided definitively, but there is > | > likely not enough time. Over the next few days, I will remove the > | > defunct 7.8.4 milestone, and re-triage the assigned tickets. > | > > | The only potential issue here is that not a single 7.8 release will be > | able to bootstrap LLVM-only targets due to #9439. I'm not sure how > | much of an issue this will be in practice but there should probably be > | some discussion with packagers to ensure that 7.8 is skipped on > | affected platforms lest users be stuck with no functional stage 0 > | compiler. > | > | Cheers, > | > | - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Wed Oct 8 00:03:29 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Wed, 8 Oct 2014 02:03:29 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: By the way, I've noticed that ghc occasionally segfaults during Windows builds, like this: "inplace/bin/ghc-stage1.exe" -hisuf hi -osuf o -hcsuf hc -static -H64m -O0 -fasm -hide-all-packages -i -iutils/hsc2hs/. -iutils/hsc2hs/dist-install/build -iutils/hsc2hs/dist-install/build/autogen -Iutils/hsc2hs/dist-install/build -Iutils/hsc2hs/dist-install/build/autogen -optP-include -optPutils/hsc2hs/dist-install/build/autogen/cabal_macros.h -package-key base_ESD4aQEEWwsHtYJVc1BwtJ -package-key conta_ChF4XLXB9JmByIycPzerow -package-key direc_HU5aFxMIQNwGQFzisjuinu -package-key filep_34DFDFT9FVD9pRLLgh8IdQ -package-key proce_7ZlAbRkwiRO8qgXx3NNP0G -XHaskell98 -XCPP -XForeignFunctionInterface -no-user-package-db -rtsopts -odir utils/hsc2hs/dist-install/build -hidir utils/hsc2hs/dist-install/build -stubdir utils/hsc2hs/dist-install/build -c utils/hsc2hs/./HSCParser.hs -o utils/hsc2hs/dist-install/build/HSCParser.o utils/hsc2hs/ghc.mk:16: recipe for target 'utils/hsc2hs/dist-install/build/HSCParser.o' failed make[1]: *** [utils/hsc2hs/dist-install/build/HSCParser.o] Segmentation fault make[1]: *** Deleting file 'utils/hsc2hs/dist-install/build/HSCParser.o' The errors are not deterministic at all. Any idea what's happening? Any suggestions for debugging this? 
On Wed, Oct 8, 2014 at 1:21 AM, Gintautas Miliauskas < gintautas.miliauskas at gmail.com> wrote: > I've cleaned up the main Windows build > > page, moved MSYS2 instructions there, moved out legacy instructions and > added backlinks / warnings / redirects. It would be great if someone could > go through, verify the instructions and make sure there are no loose ends > or misleading wikipages. > > On Wed, Oct 1, 2014 at 12:25 AM, Simon Peyton Jones > wrote: > >> Gintautas, and other folk building GHC on Windows, >> >> >> >> There has been some activity on the ?GHC on Windows? front, which is >> great. >> >> >> >> Some time ago I wrote: >> >> I would love it if you guys formed a GHC-on-Windows Task Force, who tried >> to make sure that the Windows experience was always good. At the moment we >> have lots of Windows users but very few who are willing to help make it >> work, the recipients of this email being honourable exceptions. >> >> >> >> but nothing really happened. Maybe this time it can! Possible member of >> such a task force are: >> >> ? Gintautas Miliauskas gintautas.miliauskas at gmail.com >> >> ? kyra kyrab at mail.ru >> >> ? Marek Wawrzos marek.28.93 at gmail.com >> >> ? Tamar Christina >> >> ? Roman Kuznetsov >> >> ? Randy Polen >> >> >> >> All we need is someone to act as convenor/coordinator and we are good to >> go. Would any of you be willing to play that role? >> >> >> >> An advantage of having a working group is that you can *decide* things. >> At the moment people often wait for GHC HQ to make a decision, and end up >> waiting a long time. It would be better if a working group was responsible >> for the GHC-on-Windows build and then if (say) you want to mandate msys2, >> you can go ahead and mandate it. Well, obviously consult ghc-devs for >> advice, but you are in the lead. Does that make sense? >> >> >> >> >> >> I think an early task is to replace what Neil Mitchell encountered: FIVE >> different wiki pages describing how to build GHC on Windows. We want just >> one! (Others can perhaps be marked ?out of date/archive? rather than >> deleted, but it should be clear which is the main choice.) >> >> >> >> I agree with using msys2 as the main choice. (I?m using it myself.) It >> may be that Gintautas?s page >> https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 >> is already sufficient. Although I?d like to see it tested by others. For >> example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas >> Gintautas?s page says nothing about that. >> >> >> >> Other small thoughts: >> >> ? We started including the ghc-tarball stuff because when we >> relied directly on the gcc that came with msys, we kept getting build >> failures because the gcc that some random person happened to be using did >> not work (e..g. they had a too-old or too-new version of msys). By using a >> single, fixed gcc, we avoided all this pain. >> >> >> >> ? I don?t know what a ?rubenvb? build is, but I think you can go >> ahead and say ?use X and Y in this way?. The important thing is that it >> should be reproducible, and not dependent on the particular Cygwin or gcc >> or whatever the that user happens to have installed. 
>> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Gintautas >> Miliauskas >> *Sent:* 15 September 2014 21:58 >> *To:* ghc-devs at haskell.org >> *Subject:* Building ghc on Windows with msys2 >> >> >> >> Hello, >> >> >> >> I have been messing around a little bit with building GHC from source on >> Windows, and found the msys2 wikipage >> quite >> useful, but somewhat outdated. Quite a few steps in those instructions are >> no longer necessary and can be omitted. I am working on cleaning up that >> wikipage right now and should be done in a day or two. >> >> >> >> I've found a recent email >> in >> the middle of updating the wikipage about other people planning to do the >> same, so I thought I'd shoot an email to make sure that work is not being >> duplicated. >> >> >> >> msys2 seems to be in good shape and should probably be promoted to the >> primary suggested method to build ghc on Windows. Let's look into that once >> the new build instructions have been proofread and verified. >> >> >> >> Best regards, >> >> -- >> Gintautas Miliauskas >> > > > > -- > Gintautas Miliauskas > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Oct 8 00:13:01 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 7 Oct 2014 20:13:01 -0400 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: the checkout process for the 7.8 branch is a bit involved (and NB: you really want to use a different tree than one for working on head, the checkout process is different ) $ git clone -b ghc-7.8 git://git.haskell.org/ghc.git ghc-7.8TREE $ cd ghc-7.8TREE/ $ ./sync-all get -b ghc-7.8 (theres no need for a lot of this with HEAD) that will checkout a working tree of 7.8 head (unless i'm missing a step) I Believe arc/phab will work correctly on top of this though. (I certainly used phab to get a patch or so into 7.8.3 ! ) On Tue, Oct 7, 2014 at 7:56 PM, John Lato wrote: > Ok, if the ghc devs decide to do a 7.8.4 release, I will explicitly commit > to helping backport patches. > > However, I don't know how to do so. Therefore, I'm going to ask Austin > (as he's probably the most knowledgeable) to update the 7.8.4 wiki page > with the process people should use to contribute backports. I'm guessing > it's probably something like this: > > checkout the 7.8.4 release branch (which branch is it? ghc-7.8?) > git cherry-pick the desired commit(s) > ? (make a phab request for ghc-hq to review?) > update Trac with what you've done > > (or if this is already documented somewhere, please point me to it). > > Unfortunately this doesn't have any way of showing that I'm working on a > specific backport/merge, so there's potential for duplicate work, which > isn't great. I also agree with Nicolas that it's likely possible to make > better use of git to help with this sort of work, but that's a decision for > ghc hq so I won't say any more on that. > > Cheers, > John > > > On Tue, Oct 7, 2014 at 4:12 PM, Simon Peyton Jones > wrote: > >> Thanks for this debate. (And thank you Austin for provoking it by >> articulating a medium term plan.) >> >> Our intent has always been that that the latest version on each branch is >> solid. There have been one or two occasions when we have knowingly >> abandoned a dodgy release branch entirely, but not many. 
>> >> So I think the major trick we are missing is this: >> >> We don't know what the show-stopping bugs on a branch are >> >> For example, here are three responses to Austin's message: >> >> | The only potential issue here is that not a single 7.8 release will be >> | able to bootstrap LLVM-only targets due to #9439. I'm not sure how >> >> | 8960 looks rather serious and potentially makes all of 7.8 a no-go >> | for some users. >> >> | We continue to use 7.2, at least partly because all newer versions of >> | ghc have had significant bugs that affect us >> >> That's not good. Austin's message said about 7.8.4 "No particular >> pressure on any outstanding bugs to release immediately". There are several >> dozen tickets queued up on 7.8.4 (see here >> https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them >> are "nice to have". >> >> So clearly the message is not getting through. >> >> >> My conclusion >> >> * I think we (collectively!) should make a serious attempt to fix >> show-stopping >> bugs on a major release branch. (I agree that upgrading to the next >> major >> release often simply brings in a new wave of bugs because of GHC's >> rapid development culture.) >> >> * We can only possibly do this if >> a) we can distinguish "show-stopping" from "nice to have" >> b) we get some help (thank you John Lato for implicitly offering) >> >> I would define a "show-stopping" bug as one that simply prevents you from >> using the release altogether, or imposes a very large cost at the user end. >> >> For mechanism I suggest this. On the 7.8.4 status page (or in general, >> on the release branch page you want to influence), create a section "Show >> stoppers" with a list of the show-stopping bugs, including some >> English-language text saying who cares so much and why. (Yes I know that >> it might be there in the ticket, but the impact is much greater if there is >> an explicit list of two or three personal statements up front.) >> >> Concerning 7.8.4 itself, I think we could review the decision to abandon >> it, in the light of new information. We might, for example, fix >> show-stoppers, include fixes that are easy to apply, and not-include other >> fixes that are harder. >> >> Opinions? I'm not making a ruling here! >> >> Simon >> >> | -----Original Message----- >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben >> | Gamari >> | Sent: 04 October 2014 04:52 >> | To: Austin Seipp; ghc-devs at haskell.org >> | Cc: Simon Marlow >> | Subject: Re: Tentative high-level plans for 7.10.1 >> | >> | Austin Seipp writes: >> | >> | snip. >> | >> | > >> | > We do not believe we will ship a 7.8.4 at all, contrary to what you >> | > may have seen on Trac - we never decided definitively, but there is >> | > likely not enough time. Over the next few days, I will remove the >> | > defunct 7.8.4 milestone, and re-triage the assigned tickets. >> | > >> | The only potential issue here is that not a single 7.8 release will be >> | able to bootstrap LLVM-only targets due to #9439. I'm not sure how >> | much of an issue this will be in practice but there should probably be >> | some discussion with packagers to ensure that 7.8 is skipped on >> | affected platforms lest users be stuck with no functional stage 0 >> | compiler. 
>> | >> | Cheers, >> | >> | - Ben >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Wed Oct 8 00:34:50 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 7 Oct 2014 21:34:50 -0300 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I agree a section show stoppers is a good idea, in parallel would it make sense to use the priority "highest" for tickets that we consider showstoppers? Austin did a great of explaining the difficulties of backporting fixes, my reaction is that we have to have higher quality releases so that ideally we have 0 backports. Having a showstoppers section will help that but I think we need to work harder at getting volunteers to write tests. For most people that's not exciting but it is a good way to get started on helping and would be an immense help in producing higher quality releases. As Austin also pointed out things change rapidly, it's hard to keep up and it's getting harder for people to get to the point where they feel they are decent Haskell programmers. So in addition to testing it would be great if we could get more people to document, i.e. write tutorials etc. It is difficult to balance being a research language and being a viable language for industrial use. FWIW, I personally feel that we side too much on being a research language. On Tue, Oct 7, 2014 at 5:12 AM, Simon Peyton Jones wrote: > Thanks for this debate. (And thank you Austin for provoking it by > articulating a medium term plan.) > > Our intent has always been that that the latest version on each branch is > solid. There have been one or two occasions when we have knowingly > abandoned a dodgy release branch entirely, but not many. > > So I think the major trick we are missing is this: > > We don't know what the show-stopping bugs on a branch are > > For example, here are three responses to Austin's message: > > | The only potential issue here is that not a single 7.8 release will be > | able to bootstrap LLVM-only targets due to #9439. I'm not sure how > > | 8960 looks rather serious and potentially makes all of 7.8 a no-go > | for some users. > > | We continue to use 7.2, at least partly because all newer versions of > | ghc have had significant bugs that affect us > > That's not good. Austin's message said about 7.8.4 "No particular pressure > on any outstanding bugs to release immediately". There are several dozen > tickets queued up on 7.8.4 (see here > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them > are "nice to have". > > So clearly the message is not getting through. > > > My conclusion > > * I think we (collectively!) should make a serious attempt to fix > show-stopping > bugs on a major release branch. (I agree that upgrading to the next > major > release often simply brings in a new wave of bugs because of GHC's > rapid development culture.) 
> > * We can only possibly do this if > a) we can distinguish "show-stopping" from "nice to have" > b) we get some help (thank you John Lato for implicitly offering) > > I would define a "show-stopping" bug as one that simply prevents you from > using the release altogether, or imposes a very large cost at the user end. > > For mechanism I suggest this. On the 7.8.4 status page (or in general, on > the release branch page you want to influence), create a section "Show > stoppers" with a list of the show-stopping bugs, including some > English-language text saying who cares so much and why. (Yes I know that > it might be there in the ticket, but the impact is much greater if there is > an explicit list of two or three personal statements up front.) > > Concerning 7.8.4 itself, I think we could review the decision to abandon > it, in the light of new information. We might, for example, fix > show-stoppers, include fixes that are easy to apply, and not-include other > fixes that are harder. > > Opinions? I'm not making a ruling here! > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben > | Gamari > | Sent: 04 October 2014 04:52 > | To: Austin Seipp; ghc-devs at haskell.org > | Cc: Simon Marlow > | Subject: Re: Tentative high-level plans for 7.10.1 > | > | Austin Seipp writes: > | > | snip. > | > | > > | > We do not believe we will ship a 7.8.4 at all, contrary to what you > | > may have seen on Trac - we never decided definitively, but there is > | > likely not enough time. Over the next few days, I will remove the > | > defunct 7.8.4 milestone, and re-triage the assigned tickets. > | > > | The only potential issue here is that not a single 7.8 release will be > | able to bootstrap LLVM-only targets due to #9439. I'm not sure how > | much of an issue this will be in practice but there should probably be > | some discussion with packagers to ensure that 7.8 is skipped on > | affected platforms lest users be stuck with no functional stage 0 > | compiler. > | > | Cheers, > | > | - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chengang31 at gmail.com Wed Oct 8 02:28:50 2014 From: chengang31 at gmail.com (cg) Date: Wed, 08 Oct 2014 10:28:50 +0800 Subject: Building ghc on Windows with msys2 In-Reply-To: References: Message-ID: On 10/8/2014 1:03 AM, Austin Seipp wrote: >> >> I hide 'empty' and 'foldr' at importing point and the code compiles. >> >> Has anyone see the same issues? > > Ugh, this is some fallout I thought we had fixed, but apparently not. > I'll fix it shortly, thanks. > [...] >> >> Why does ghc-stage1.exe use so much memory? > > Wow, I thought we fixed this one too! Please see this bug: > > https://ghc.haskell.org/trac/ghc/ticket/9630 > > What GHC commit are you using? Are your submodules all up to date? In > particular, if 'binary' is not up to date, even if the rest of your > tree is, you'll see this problem. > Ah, I know what causes the building failure now... After cloning ghc repository, I switch every sub-module to Master (it is usually HEAD) branch. I am in the habit of thinking Master is always the latest. But it is not the case with ghc. For example, the fix mentioned in ticket 9630 was submitted to ghc-head branch[1], but the Master/HEAD is way old[2]. And it seems there is some other submodule is like this. 
Now after cloning ghc repository, if I don't switch to any branch -- 'git branch' will show all submodules are detached -- the build will succeed. So why the Master/HEAD branches don't have the latest code? Thanks, cg [1] http://git.haskell.org/packages/binary.git/log/refs/heads/ghc-head [2] http://git.haskell.org/packages/binary.git/log/refs/heads/master From hvriedel at gmail.com Wed Oct 8 06:45:21 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 08 Oct 2014 08:45:21 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: (cg's message of "Wed, 08 Oct 2014 10:28:50 +0800") References: Message-ID: <87fvezkqke.fsf@gmail.com> On 2014-10-08 at 04:28:50 +0200, cg wrote: [...] > After cloning ghc repository, I switch every sub-module to Master (it is > usually HEAD) branch. Why are you doing that? :-) [...] > Now after cloning ghc repository, if I don't switch to any branch -- > 'git branch' > will show all submodules are detached -- the build will succeed. Well, that's how you're supposed to work with submodules in ghc.git[1] > So why the Master/HEAD branches don't have the latest code? Generally, the "master" branch refers to the latest upstream code, which is not always supposed to work with GHC HEAD (yet). And if the package is not owned by GHC HQ, you are not allowed to push changes to "master" anyway (as it'd be automatically overwritten by the automatic Git mirror job) Take Cabal for example, we have an automatic mirror-job that keeps Cabal's "master" branch synced to the state of the github.com/ However, we only update the gitlink for Cabal in ghc.git every couple of weeks to Cabal's latest "master" tip commit, as it has the potential to affect performance numbers or simply be temporarily in a broken state wrt GHC HEAD. Then there's Haddock for which it was recently decided to let upstream development progress decoupled from GHC HEAD's API changes, and have GHC HEAD simply use its own branch 'ghc-head' to diverge from upstream until shortly before a GHC release is close (at which point Haddock will converge again). Fwiw, 'git submodule update --remote utils/haddock' will track the "ghc-head" branch in this case. Finally, "binary" is a case where we needed a patch merged into binary, but couldn't wait for the "binary" upstream to merge the pull-request, as it was blocking GHC HEAD development. So that's why we temporarily are on a "ghc-head" branch, which will be switched away from again as soon as "binary"'s upstream "master" branch can be used again with GHC HEAD. And then there's also the potential case when we need to temporarily rollback a submodule update; then we don't necessarily need to 'git revert' commits inside that submodule, but we simply just reset the pointed-to submodule commit to an older commit. I hope this sheds a bit of light on the situation. Then there's also [1] which may provide further pointers. 
[1]: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules Cheers, hvr From hvriedel at gmail.com Wed Oct 8 06:48:59 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 08 Oct 2014 08:48:59 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: (George Colpitts's message of "Tue, 7 Oct 2014 21:34:50 -0300") References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87bnpnkqec.fsf@gmail.com> Hello, On 2014-10-08 at 02:34:50 +0200, George Colpitts wrote: > I agree a section show stoppers is a good idea, in parallel would it > make sense to use the priority "highest" for tickets that we consider > showstoppers? I think, they are marked 'highest' already Btw, one could additionally add a dynamic ticket-query-table to that section to list all tickets currently marked priority="highest" and milestone="7.8.4" to make sure nothing is missed. From hvriedel at gmail.com Wed Oct 8 06:59:40 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 08 Oct 2014 08:59:40 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: (Carter Schonwald's message of "Tue, 7 Oct 2014 20:13:01 -0400") References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <877g0bkpwj.fsf@gmail.com> On 2014-10-08 at 02:13:01 +0200, Carter Schonwald wrote: > the checkout process for the 7.8 branch is a bit involved (and NB: you > really want to use a different tree than one for working on head, the > checkout process is different > ) > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc.git ghc-7.8TREE > $ cd ghc-7.8TREE/ > $ ./sync-all get -b ghc-7.8 > > (theres no need for a lot of this with HEAD) Just to clarify/remind why this is needed: The GHC 7.8 branch was not converted to a proper submodule-only scheme like GHC HEAD was. Unless we keep maintaining GHC 7.8 for longer than a 7.8.4 release, this irregularity will become less of a concern, as the stable GHC 7.10 branch will be switchable to/from later branches such as GHC 7.12/HEAD w/o requiring a separately cloned tree. However, should GHC 7.8.x turn out to become a LTS-ishly maintained branch, we may want to consider converting it to a similiar Git structure as GHC HEAD currently is, to avoid having to keep two different sets of instructions on the GHC Wiki for how to work on GHC 7.8 vs working on GHC HEAD/7.10 and later. Cheers, hvr From tuncer.ayaz at gmail.com Wed Oct 8 08:21:08 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 8 Oct 2014 10:21:08 +0200 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <87y4srzz1w.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> Message-ID: On Tue, Oct 7, 2014 at 5:20 PM, Herbert Valerio Riedel wrote: > Hello GHC Developers & GHC User's Guide writers, > > I assume it is common knowledge to everyone here, that the GHC > User's Guide is written in Docbook XML markup. > > However, it's a bit tedious to write Docbook-XML by hand, and the > XML markup is not as lightweight as modern state-of-the-art markup > languages designed for being edited in a simple text-editor are. > > Therefore I'd like to hear your opinion on migrating away from the > current Docbook XML markup to some other similarly expressive but > yet more lightweight markup documentation system such as Asciidoc[1] > or ReST/Sphinx[2]. > > There's obviously some cost involved upfront for a (semi-automatic) > conversion[3]. 
So one important question is obviously whether the > long-term benefits outweight the cost/investment that we'd incur for > the initial conversion. > > All suggestions/comments/worries welcome; please commence > brainstorming :) Given the choices (and existing Docbook files), I would select AsciiDoc. From tuncer.ayaz at gmail.com Wed Oct 8 08:37:44 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 8 Oct 2014 10:37:44 +0200 Subject: GitHub pull requests In-Reply-To: <1412707363.29506.0.camel@joachim-breitner.de> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <1412506588.4605.1.camel@joachim-breitner.de> <1412541852.21551.4.camel@joachim-breitner.de> <1412616764.8286.0.camel@joachim-breitner.de> <1412707363.29506.0.camel@joachim-breitner.de> Message-ID: On Tue, Oct 7, 2014 at 8:42 PM, Joachim Breitner wrote: > doesn't look like that will happen soon: > > > > Hey Joachim, > > > > Disabling that linking is not possible currently, and I'm not sure > > if that feature will be available in the near future. Still, I'll > > add your request to our feature request wishlist and pass the > > feedback to the team. > > > > Thanks for the question/suggestion and let us know if there's > > anything else. That's unsurprising, given how that linking scheme is used everywhere, but they responded quickly. I've sent them suggestions on improving the review system, and they have acknowledged working on that, but it's a behind closed doors development process without any pre-announcement or commitment. Another popular request is for projects like GHC or similar (that have their own or different infrastructure for discussion and contributions) to want to disable Pull-Requests (like Wiki or Issues). They say it's often requested but have no immediate plans to add a check box. It's unfortunate because you have to constantly close pull requests with a link to the contributing guide. From jan.stolarek at p.lodz.pl Wed Oct 8 08:49:33 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 8 Oct 2014 10:49:33 +0200 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <87y4srzz1w.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> Message-ID: <201410081049.33593.jan.stolarek@p.lodz.pl> > Therefore I'd like to hear your opinion on migrating away from the > current Docbook XML markup to some other similarly expressive but yet > more lightweight markup documentation system such as Asciidoc[1] or > ReST/Sphinx[2]. My opinion is that I don't really care. I only edit the User Guide once every couple of months or so. I don't have problems with Docbook but if others want something else I can adjust. Janek From johan.tibell at gmail.com Wed Oct 8 09:00:38 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 8 Oct 2014 11:00:38 +0200 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <201410081049.33593.jan.stolarek@p.lodz.pl> References: <87y4srzz1w.fsf@gmail.com> <201410081049.33593.jan.stolarek@p.lodz.pl> Message-ID: Same here. My interaction with the user guide is infrequent enough that it doesn't matter much to me. On Wed, Oct 8, 2014 at 10:49 AM, Jan Stolarek wrote: > > Therefore I'd like to hear your opinion on migrating away from the > > current Docbook XML markup to some other similarly expressive but yet > > more lightweight markup documentation system such as Asciidoc[1] or > > ReST/Sphinx[2]. > My opinion is that I don't really care. I only edit the User Guide once > every couple of months or > so. 
I don't have problems with Docbook but if others want something else I > can adjust. > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 8 10:18:35 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Oct 2014 10:18:35 +0000 Subject: Windows build broken (again) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2224383C@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F31457F@DB3PRD3001MB020.064d.mgd.msft.net> <8738b5w3r8.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3214C1@DB3PRD3001MB020.064d.mgd.msft.net> yes it seems fine now, thanks. Simon From: Krzysztof Gogolewski [mailto:krz.gogolewski at gmail.com] Sent: 03 October 2014 19:05 To: Herbert Valerio Riedel Cc: Simon Peyton Jones; ghc-devs at haskell.org Subject: Re: Windows build broken (again) Python 3 is a likely culprit (though I couldn't confirm it), so I reverted it. Does it work now? On Fri, Oct 3, 2014 at 5:51 PM, Herbert Valerio Riedel > wrote: On 2014-10-03 at 17:29:31 +0200, Simon Peyton Jones wrote: > Perhaps, yes, it is Python 3. I don't know. Could someone revert to > make it work again, please? Fyi, I can't reproduce this specific problem on Cygwin at least (I don't have any working pure Msys2 environment yet (still working on it), as this may exactly be the kind of failure I'd expect Msys2 to be prone to while Cygwin to be unaffected by). What I tried in order to reproduce: $ git rev-parse HEAD 084d241b316bfa12e41fc34cae993ca276bf0730 # <-- this is the Py3/testsuite commit $ make TEST=tc012 WAY=normal ... =====> tc012(normal) 3039 of 4088 [0, 0, 0] cd ./typecheck/should_compile && 'C:/cygwin64/home/ghc/ghc-hvr/inplace/bin/ghc-stage2.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c tc012.hs -fno-warn-incomplete-patterns >tc012.comp.stderr 2>&1 OVERALL SUMMARY for test run started at Fri Oct 3 15:42:04 2014 GMT 0:00:03 spent to go through 4088 total tests, which gave rise to 12360 test cases, of which 12359 were skipped 0 had missing libraries 1 expected passes 0 expected failures ... And btw, with the latest GHC HEAD commit (and I suspect the recent HEAP_ALLOCED-related commits to be responsible for that), I get a ton of testsuite failures due to such errors: T8639_api.exe: Unknown PEi386 section name `staticclosures' (while processing: C:\cygwin64\home\ghc\ghc-hvr\libraries\ghc-prim\dist-install\build\HSghcpr_BE58KUgBe9ELCsPXiJ1Q2r.o) _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 8 10:18:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Oct 2014 10:18:33 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F32149B@DB3PRD3001MB020.064d.mgd.msft.net> I think we need to work harder at getting volunteers to write tests it would be great if we could get more people to document, i.e. write tutorials Good ideas, thank you. 
It would be great if you felt able to contribute to one or the other (or both) yourself. Simon From: George Colpitts [mailto:george.colpitts at gmail.com] Sent: 08 October 2014 01:35 To: Simon Peyton Jones Cc: Ben Gamari; Austin Seipp; ghc-devs at haskell.org; Simon Marlow Subject: Re: Tentative high-level plans for 7.10.1 I agree a section show stoppers is a good idea, in parallel would it make sense to use the priority "highest" for tickets that we consider showstoppers? Austin did a great of explaining the difficulties of backporting fixes, my reaction is that we have to have higher quality releases so that ideally we have 0 backports. Having a showstoppers section will help that but I think we need to work harder at getting volunteers to write tests. For most people that's not exciting but it is a good way to get started on helping and would be an immense help in producing higher quality releases. As Austin also pointed out things change rapidly, it's hard to keep up and it's getting harder for people to get to the point where they feel they are decent Haskell programmers. So in addition to testing it would be great if we could get more people to document, i.e. write tutorials etc. It is difficult to balance being a research language and being a viable language for industrial use. FWIW, I personally feel that we side too much on being a research language. On Tue, Oct 7, 2014 at 5:12 AM, Simon Peyton Jones > wrote: Thanks for this debate. (And thank you Austin for provoking it by articulating a medium term plan.) Our intent has always been that that the latest version on each branch is solid. There have been one or two occasions when we have knowingly abandoned a dodgy release branch entirely, but not many. So I think the major trick we are missing is this: We don't know what the show-stopping bugs on a branch are For example, here are three responses to Austin's message: | The only potential issue here is that not a single 7.8 release will be | able to bootstrap LLVM-only targets due to #9439. I'm not sure how | 8960 looks rather serious and potentially makes all of 7.8 a no-go | for some users. | We continue to use 7.2, at least partly because all newer versions of | ghc have had significant bugs that affect us That's not good. Austin's message said about 7.8.4 "No particular pressure on any outstanding bugs to release immediately". There are several dozen tickets queued up on 7.8.4 (see here https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them are "nice to have". So clearly the message is not getting through. My conclusion * I think we (collectively!) should make a serious attempt to fix show-stopping bugs on a major release branch. (I agree that upgrading to the next major release often simply brings in a new wave of bugs because of GHC's rapid development culture.) * We can only possibly do this if a) we can distinguish "show-stopping" from "nice to have" b) we get some help (thank you John Lato for implicitly offering) I would define a "show-stopping" bug as one that simply prevents you from using the release altogether, or imposes a very large cost at the user end. For mechanism I suggest this. On the 7.8.4 status page (or in general, on the release branch page you want to influence), create a section "Show stoppers" with a list of the show-stopping bugs, including some English-language text saying who cares so much and why. 
(Yes I know that it might be there in the ticket, but the impact is much greater if there is an explicit list of two or three personal statements up front.) Concerning 7.8.4 itself, I think we could review the decision to abandon it, in the light of new information. We might, for example, fix show-stoppers, include fixes that are easy to apply, and not-include other fixes that are harder. Opinions? I'm not making a ruling here! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Gamari | Sent: 04 October 2014 04:52 | To: Austin Seipp; ghc-devs at haskell.org | Cc: Simon Marlow | Subject: Re: Tentative high-level plans for 7.10.1 | | Austin Seipp > writes: | | snip. | | > | > We do not believe we will ship a 7.8.4 at all, contrary to what you | > may have seen on Trac - we never decided definitively, but there is | > likely not enough time. Over the next few days, I will remove the | > defunct 7.8.4 milestone, and re-triage the assigned tickets. | > | The only potential issue here is that not a single 7.8 release will be | able to bootstrap LLVM-only targets due to #9439. I'm not sure how | much of an issue this will be in practice but there should probably be | some discussion with packagers to ensure that 7.8 is skipped on | affected platforms lest users be stuck with no functional stage 0 | compiler. | | Cheers, | | - Ben _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Wed Oct 8 10:24:32 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 08 Oct 2014 12:24:32 +0200 Subject: [commit: ghc] master: Make Data.List.takeWhile fuse: fix #9132 (d14d3f9) In-Reply-To: <20141008065329.AA7B43A300@ghc.haskell.org> References: <20141008065329.AA7B43A300@ghc.haskell.org> Message-ID: <1412763872.8496.7.camel@joachim-breitner.de> Hi, Am Mittwoch, den 08.10.2014, 06:53 +0000 schrieb git at git.haskell.org: > commit d14d3f92d55a352db7faf62939127060716c4694 > Author: Joachim Breitner > Date: Wed Oct 8 08:53:26 2014 +0200 > > Make Data.List.takeWhile fuse: fix #9132 > > Summary: > Rewrites takeWhile to a build/foldr form; fuses repeated > applications of takeWhile. > > Reviewers: nomeata, austin > > Reviewed By: nomeata > > Subscribers: thomie, carter, ezyang, simonmar > > Projects: #ghc > > Differential Revision: https://phabricator.haskell.org/D322 > > GHC Trac Issues: #9132 nofib?s fft2?s allocs -23%, nice! (otherwise no differences worth mentioning). Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
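For readers wondering what "rewrites takeWhile to a build/foldr form" amounts to, the sketch below shows the general shape of such a definition in the style of GHC's list-fusion framework: a recursive fallback, a worker abstracted over cons/nil so it can fuse, and a rule that merges repeated applications. It is an illustration of the technique, not a copy of the committed patch, so names and rule phases may differ from what actually landed for #9132:

```
module TakeWhileFusion (takeWhile') where

import GHC.Exts (build)

-- Recursive fallback, used if fusion does not fire.
takeWhile' :: (a -> Bool) -> [a] -> [a]
takeWhile' _ []     = []
takeWhile' p (x:xs)
  | p x             = x : takeWhile' p xs
  | otherwise       = []
{-# INLINE [1] takeWhile' #-}

-- Worker written against an arbitrary cons/nil pair so it can take part
-- in build/foldr fusion.
takeWhileFB :: (a -> Bool) -> (a -> b -> b) -> b -> a -> b -> b
takeWhileFB p c n = \x r -> if p x then x `c` r else n
{-# INLINE [0] takeWhileFB #-}

{-# RULES
"takeWhile'"     [~1] forall p xs. takeWhile' p xs =
                        build (\c n -> foldr (takeWhileFB p c n) n xs)
"takeWhile'List" [1]  forall p. foldr (takeWhileFB p (:) []) [] = takeWhile' p
"takeWhile'FB"   forall c n p q. takeWhileFB q (takeWhileFB p c n) n =
                        takeWhileFB (\x -> q x && p x) c n
  #-}
```

The last rule is what makes repeated applications fuse: takeWhile' p (takeWhile' q xs) collapses into a single pass whose predicate is the conjunction of the two predicates.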
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From hvriedel at gmail.com Wed Oct 8 11:14:33 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 08 Oct 2014 13:14:33 +0200 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <201410081049.33593.jan.stolarek@p.lodz.pl> (Jan Stolarek's message of "Wed, 8 Oct 2014 10:49:33 +0200") References: <87y4srzz1w.fsf@gmail.com> <201410081049.33593.jan.stolarek@p.lodz.pl> Message-ID: <8738aylso6.fsf@gmail.com> On 2014-10-08 at 10:49:33 +0200, Jan Stolarek wrote: >> Therefore I'd like to hear your opinion on migrating away from the >> current Docbook XML markup to some other similarly expressive but yet >> more lightweight markup documentation system such as Asciidoc[1] or >> ReST/Sphinx[2]. > My opinion is that I don't really care. I only edit the User Guide > once every couple of months or so. I don't have problems with Docbook > but if others want something else I can adjust. I'd argue, that casual contributions may benefit significantly from switching to a more human-friendly markup, as my theory is that it's much easier to pick-up a syntax that's much closer to plain-text rather than a fully-fledged Docbook XML. With a closer-to-plain-text syntax you can more easily focus on the content you want to write rather than being distracted by the incidental complexity of writing low-level XML markup. Or put differently, I believe or rather hope this may lower the barrier-to-entry for casual User's Guide contributions. Fwiw, I stumbled over the slide-deck (obviously dogfooded in Asciidoc) http://mojavelinux.github.io/decks/discover-zen-writing-asciidoc/cojugs201305/index.html which tries to make the point that Asciidoc helps you focus more on writing content rather than fighting with the markup, including a comparision of the conciseness of a chosen example of Asciidoc vs. the resulting Docbook XML it is converted into. Cheers, hvr From alan.zimm at gmail.com Wed Oct 8 12:00:05 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 8 Oct 2014 14:00:05 +0200 Subject: Warning when deriving Foldable/Traversable using -Wall Message-ID: I am not sure how to report bugs against the current development version of GHC. Should this go into Trac? The current HEAD gives a spurious unused declaration when deriving Typable/Traversable Details Compiling against current HEAD (0ed9a2779a2adf0347088134fdb9f60ae9f2735b) Adding test('T9069w', extra_clean(['T9069.o', 'T9069.hi']), multimod_compile, ['T9069', '-Wall']) to testsuite/tests/deriving/should_compile/all.T results in +[1 of 1] Compiling T9069 ( T9069.hs, T9069.o ) + +T9069.hs:5:1: Warning: + The import of ?Data.Foldable? is redundant + except perhaps to import instances from ?Data.Foldable? + To import instances alone, use: import Data.Foldable() + +T9069.hs:6:1: Warning: + The import of ?Data.Traversable? is redundant + except perhaps to import instances from ?Data.Traversable? + To import instances alone, use: import Data.Traversable() *** unexpected failure for T9069w(optasm) The file being compiled is -------------------------------------------- {-# LANGUAGE DeriveTraversable #-} module T9069 where import Data.Foldable import Data.Traversable data Trivial a = Trivial a deriving (Functor,Foldable,Traversable) --------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Wed Oct 8 12:05:45 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 08 Oct 2014 14:05:45 +0200 Subject: Warning when deriving Foldable/Traversable using -Wall In-Reply-To: (Alan & Kim Zimmerman's message of "Wed, 8 Oct 2014 14:00:05 +0200") References: Message-ID: <87y4sqkbqe.fsf@gmail.com> On 2014-10-08 at 14:00:05 +0200, Alan & Kim Zimmerman wrote: [...] > Should this go into Trac? Fwiw, there is a version "7.9" you can select when writing a Trac ticket for the very purpose to file bugs against GHC HEAD. [...] > The file being compiled is > > -------------------------------------------- > {-# LANGUAGE DeriveTraversable #-} > > module T9069 where > > import Data.Foldable > import Data.Traversable > > data Trivial a = Trivial a > deriving (Functor,Foldable,Traversable) > --------------------------------------------- There's two simple ways to workaround this; either a) add a 'import Prelude' after the two imports or b) remove the two imports The a) option has the benefit that it will still work with GHC 7.8.3 From nicolas at incubaid.com Wed Oct 8 12:05:55 2014 From: nicolas at incubaid.com (Nicolas Trangez) Date: Wed, 08 Oct 2014 14:05:55 +0200 Subject: Warning when deriving Foldable/Traversable using -Wall In-Reply-To: References: Message-ID: <1412769955.2876.9.camel@chi.nicolast.be> On Wed, 2014-10-08 at 14:00 +0200, Alan & Kim Zimmerman wrote: > The current HEAD gives a spurious unused declaration when deriving > Typable/Traversable Why would this be spurious, given `Foldable` and `Traversable` are now exported by `Prelude`, so those imports are in fact not necessary? Nicolas From simonpj at microsoft.com Wed Oct 8 12:07:14 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Oct 2014 12:07:14 +0000 Subject: Warning when deriving Foldable/Traversable using -Wall In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F32294F@DB3PRD3001MB020.064d.mgd.msft.net> Yes, please add as a Trac ticket! thank you Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman Sent: 08 October 2014 13:00 To: ghc-devs at haskell.org Subject: Warning when deriving Foldable/Traversable using -Wall I am not sure how to report bugs against the current development version of GHC. Should this go into Trac? The current HEAD gives a spurious unused declaration when deriving Typable/Traversable Details Compiling against current HEAD (0ed9a2779a2adf0347088134fdb9f60ae9f2735b) Adding test('T9069w', extra_clean(['T9069.o', 'T9069.hi']), multimod_compile, ['T9069', '-Wall']) to testsuite/tests/deriving/should_compile/all.T results in +[1 of 1] Compiling T9069 ( T9069.hs, T9069.o ) + +T9069.hs:5:1: Warning: + The import of ?Data.Foldable? is redundant + except perhaps to import instances from ?Data.Foldable? + To import instances alone, use: import Data.Foldable() + +T9069.hs:6:1: Warning: + The import of ?Data.Traversable? is redundant + except perhaps to import instances from ?Data.Traversable? + To import instances alone, use: import Data.Traversable() *** unexpected failure for T9069w(optasm) The file being compiled is -------------------------------------------- {-# LANGUAGE DeriveTraversable #-} module T9069 where import Data.Foldable import Data.Traversable data Trivial a = Trivial a deriving (Functor,Foldable,Traversable) --------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... 
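Putting the replies together: the imports are only redundant on compilers whose Prelude already re-exports Foldable and Traversable, so one way to keep such a module -Wall-clean on both GHC 7.8 and HEAD is to guard the imports with CPP. A sketch follows; the 709 cut-off is my guess at the right boundary and should be adjusted to whichever compiler version first re-exports the classes:

```
{-# LANGUAGE CPP #-}
{-# LANGUAGE DeriveTraversable #-}
module T9069 where

#if __GLASGOW_HASKELL__ < 709
-- Older Preludes do not re-export these classes, so the imports are
-- still needed there for the deriving clause below to be in scope.
import Data.Foldable    (Foldable)
import Data.Traversable (Traversable)
#endif

data Trivial a = Trivial a
  deriving (Functor, Foldable, Traversable)
```

Herbert's alternative of simply adding an extra 'import Prelude' after the two imports avoids CPP entirely and still works with GHC 7.8.3.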
URL: From alan.zimm at gmail.com Wed Oct 8 13:00:43 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 8 Oct 2014 15:00:43 +0200 Subject: Warning when deriving Foldable/Traversable using -Wall In-Reply-To: <1412769955.2876.9.camel@chi.nicolast.be> References: <1412769955.2876.9.camel@chi.nicolast.be> Message-ID: Ok, so stage2 is in fact behaving correctly, the stage1 code needs to have CPP directives around it. In other words this is not actually a bug. Thanks Alan On Wed, Oct 8, 2014 at 2:05 PM, Nicolas Trangez wrote: > On Wed, 2014-10-08 at 14:00 +0200, Alan & Kim Zimmerman wrote: > > The current HEAD gives a spurious unused declaration when deriving > > Typable/Traversable > > Why would this be spurious, given `Foldable` and `Traversable` are now > exported by `Prelude`, so those imports are in fact not necessary? > > Nicolas > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Oct 8 15:23:05 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 08 Oct 2014 09:23:05 -0600 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <877g0bkpwj.fsf@gmail.com> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> <877g0bkpwj.fsf@gmail.com> Message-ID: <1412781755-sup-4952@sabre> Excerpts from Herbert Valerio Riedel's message of 2014-10-08 00:59:40 -0600: > However, should GHC 7.8.x turn out to become a LTS-ishly maintained > branch, we may want to consider converting it to a similiar Git > structure as GHC HEAD currently is, to avoid having to keep two > different sets of instructions on the GHC Wiki for how to work on GHC > 7.8 vs working on GHC HEAD/7.10 and later. Emphatically yes. Lack of submodules on the 7.8 branch makes working with it /very/ unpleasant. Edward From jwlato at gmail.com Wed Oct 8 16:22:19 2014 From: jwlato at gmail.com (John Lato) Date: Wed, 8 Oct 2014 09:22:19 -0700 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <1412781755-sup-4952@sabre> References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> <877g0bkpwj.fsf@gmail.com> <1412781755-sup-4952@sabre> Message-ID: Speaking for myself, I don't think the question of doing a 7.8.4 release at all needs to be entangled with the LTS issue. On Wed, Oct 8, 2014 at 8:23 AM, Edward Z. Yang wrote: > Excerpts from Herbert Valerio Riedel's message of 2014-10-08 00:59:40 > -0600: > > However, should GHC 7.8.x turn out to become a LTS-ishly maintained > > branch, we may want to consider converting it to a similiar Git > > structure as GHC HEAD currently is, to avoid having to keep two > > different sets of instructions on the GHC Wiki for how to work on GHC > > 7.8 vs working on GHC HEAD/7.10 and later. > > Emphatically yes. Lack of submodules on the 7.8 branch makes working with > it /very/ unpleasant. > > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 8 16:32:47 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 8 Oct 2014 18:32:47 +0200 Subject: Capturing commas in Api Annotations (D297) Message-ID: I am currently working annotations into the parser, provided them as a separate structure at the end of the parse, indexed to the original by SrcSpan and AST element type. 
The question I have is how to capture commas and semicolons in lists of items. There are at least three ways of doing this 1. Make sure each of the items is Located, and add the possible comma location to the annotation structure for it. This has the drawback that all instances of the AST item annotation have the possible comma location in them, and it does not cope with multiple separators where these are allowed. 2. Introduce a new hsSyn structure to explicitly capture comma-separated lists. This is the current approach I am taking, modelled on the OrdList implementation, but with an extra constructor to capture the separator location. Thus ``` data HsCommaList a = Empty | Cons a (HsCommaList a) | ExtraComma SrcSpan (HsCommaList a) -- ^ We need a SrcSpan for the annotation | Snoc (HsCommaList a) a | Two (HsCommaList a) -- Invariant: non-empty (HsCommaList a) -- Invariant: non-empty ``` 3. Change the lists to be of type `[Either SrcSpan a]` to explicitly capture the comma locations in the list. 4. A fourth way is to add a list of SrcSpan to the annotation for the parent structure of the list, simply tracking the comma positions. This will make working with the annotations complicated though. I am currently proceeding with option 2, but would appreciate some comment on whether this is the best approach to take. Option 2 will allow the AST to capture the extra commas in record constructors, as suggested by SPJ in the debate on that feature. Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Oct 8 19:39:06 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Oct 2014 15:39:06 -0400 Subject: heres a 32bit OS X 7.8.3 build Message-ID: hey all, I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac OS 10.9, so here you are! http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2 $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5 ghc-7.8.3-i386-apple-darwin.tar.bz2 is the relevant SHA 256 digest NB: I believe I managed to build it with intree-gmp too! So it wont' need GMP installed in the system (but I could be wrong, in which case brew install gmp will suffice) cheers -Carter -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Oct 8 21:40:54 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Oct 2014 17:40:54 -0400 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: <8738aylso6.fsf@gmail.com> References: <87y4srzz1w.fsf@gmail.com> <201410081049.33593.jan.stolarek@p.lodz.pl> <8738aylso6.fsf@gmail.com> Message-ID: does asciidoc have a formal grammar/syntax or whatever? i'm trying to look up one, but can't seem to find it. On Wed, Oct 8, 2014 at 7:14 AM, Herbert Valerio Riedel wrote: > On 2014-10-08 at 10:49:33 +0200, Jan Stolarek wrote: > >> Therefore I'd like to hear your opinion on migrating away from the > >> current Docbook XML markup to some other similarly expressive but yet > >> more lightweight markup documentation system such as Asciidoc[1] or > >> ReST/Sphinx[2]. > > > My opinion is that I don't really care. I only edit the User Guide > > once every couple of months or so. I don't have problems with Docbook > > but if others want something else I can adjust. 
> > I'd argue that casual contributions may benefit significantly from > switching to a more human-friendly markup, as my theory is that it's > much easier to pick up a syntax that's much closer to plain-text rather > than a fully-fledged Docbook XML. With a closer-to-plain-text syntax you > can more easily focus on the content you want to write rather than being > distracted by the incidental complexity of writing low-level XML markup. > > Or put differently, I believe or rather hope this may lower the > barrier-to-entry for casual User's Guide contributions. > > > Fwiw, I stumbled over the slide-deck (obviously dogfooded in Asciidoc) > > > http://mojavelinux.github.io/decks/discover-zen-writing-asciidoc/cojugs201305/index.html > > which tries to make the point that Asciidoc helps you focus more on > writing content rather than fighting with the markup, including a > comparison of the conciseness of a chosen example of Asciidoc vs. the > resulting Docbook XML it is converted into. > > > Cheers, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Oct 9 05:03:48 2014 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Thu, 9 Oct 2014 05:03:48 +0000 Subject: =?utf-8?Q?Re:_Building_ghc_on_Windows_with_msys2?= In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net>, Message-ID: <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> Hi Gintautas, > Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. I think I'm fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat in the ring. Cheers, Tamar From: Gintautas Miliauskas Sent: Thursday, October 2, 2014 22:32 To: Simon Peyton Jones Cc: Randy Polen, kyra, Marek Wawrzos, Tamar Christina, Roman Kuznetsov, Neil Mitchell, ghc-devs at haskell.org Hi, > All we need is someone to act as convenor/coordinator and we are good to go. Would any of you be willing to play that role? Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. An advantage of having a working group is that you can decide things. At the moment people often wait for GHC HQ to make a decision, and end up waiting a long time. It would be better if a working group was responsible for the GHC-on-Windows build and then if (say) you want to mandate msys2, you can go ahead and mandate it. Well, obviously consult ghc-devs for advice, but you are in the lead. Does that make sense? Sounds great. The question still remains about making changes to code: is there a particular person with commit rights that we could lean on for code reviews and committing changes to the main repository? I think an early task is to replace what Neil Mitchell encountered: FIVE different wiki pages describing how to build GHC on Windows. We want just one!
(Others can perhaps be marked "out of date/archive" rather than deleted, but it should be clear which is the main choice.) Indeed, it's a bit of a mess. I intended to shape up the msys2 page to serve as the default, but wanted to see more testing done before dropping the other pages. I agree with using msys2 as the main choice. (I'm using it myself.) It may be that Gintautas's page https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 is already sufficient. Although I'd like to see it tested by others. For example, I found that it was CRUCIAL to set MSYSTEM=MINGW whereas Gintautas's page says nothing about that. Are you sure that is a problem? The page specifically instructs to use the msys64_shell.bat script (through a shortcut) that is included in msys2, and that script takes care of setting MSYSTEM=MINGW64, among other important things. Other small thoughts: • We started including the ghc-tarball stuff because when we relied directly on the gcc that came with msys, we kept getting build failures because the gcc that some random person happened to be using did not work (e.g. they had a too-old or too-new version of msys). By using a single, fixed gcc, we avoided all this pain. Makes sense. Just curious: why is this less of a problem on GNU/Linux distros compared to msys2? Does msys2 see comparatively less testing, or is it generally more bleeding edge? • I don't know what a "rubenvb" build is, but I think you can go ahead and say "use X and Y in this way". The important thing is that it should be reproducible, and not dependent on the particular Cygwin or gcc or whatever the user happens to have installed. A "rubenvb" build is one of the available types of prebuilt binary packages of mingw for Windows. Let's figure out if there is something more mainstream and if we can migrate to that. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Thu Oct 9 05:15:13 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 9 Oct 2014 07:15:13 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 2014-10-07 17:02 GMT+02:00 Páli Gábor János : > 2014-10-07 15:04 GMT+02:00 cg : >> I guess the current two build servers are all Cygwin based, they are >> failing at the same permission issue at early building stage, it prevents >> checking out the real problem. It seems msys2 (or msys) seldom has >> such issues. > > For what it is worth, I have been witnessing those permission issues > with msys2 on my Windows builders. They worked (more or less) > fine until September 24, but suddenly, something has changed (not on > my side) and I have got those errors ever since. Looks like the commit with the Cabal submodule update causes this [1]. The revision before that commit still builds fine on my system, while everything else after that commit dies early at build [2]. Is this only me, or has anybody else experienced the problem? Perhaps I am doing something wrong? I do not remember seeing any related "heads-up" message on the list, such as that I should update any of the build-time dependencies.
[1] http://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae [2] http://haskell.inf.elte.hu/builders/windows-x86-head/56/10.html From hvriedel at gmail.com Thu Oct 9 07:12:23 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 09 Oct 2014 09:12:23 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: (=?utf-8?Q?=22P=C3=A1li_G=C3=A1bor_J=C3=A1nos=22's?= message of "Thu, 9 Oct 2014 07:15:13 +0200") References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87bnplra20.fsf@gmail.com> On 2014-10-09 at 07:15:13 +0200, Páli Gábor János wrote: > 2014-10-07 17:02 GMT+02:00 Páli Gábor János : >> 2014-10-07 15:04 GMT+02:00 cg : >>> I guess the current two build servers are all Cygwin based, they are >>> failing at the same permission issue at early building stage, it prevents >>> checking out the real problem. It seems msys2 (or msys) seldom has >>> such issues. >> >> For what it is worth, I have been witnessing those permission issues >> with msys2 on my Windows builders. They worked (more or less) >> fine until September 24, but suddenly, something has changed (not on >> my side) and I have got those errors ever since. > > Looks like the commit with the Cabal submodule update causes this [1]. > The revision before that commit still builds fine on my system, while > everything else after that commit dies early at build [2]. Is this > only me, or has anybody else experienced the problem? Perhaps I am doing > something wrong? I do not remember seeing any related "heads-up" > message on the list, such as that I should update any of the build-time > dependencies. Fwiw, I didn't see this issue on a newly set up MSYS2 environment either. How old is your MSYS environment? (And what filesystem & Windows version are you running?) From simonpj at microsoft.com Thu Oct 9 07:51:40 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Oct 2014 07:51:40 +0000 Subject: Building ghc on Windows with msys2 In-Reply-To: <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net>, <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> I think I'm fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat in the ring. That sounds helpful, thank you. Are we at the point where we could form a GHC-on-Windows Task Force? With its own wiki page on the GHC Trac, and with named participants. (Of course you can drop off again.) But it would be really helpful to have an explicit group who feels a sense of ownership about making sure GHC works well on Windows. At the moment we are reduced to folk memory "I recall that Gintautas did something like that a few months ago". It sounds as if Tamar would be a willing member. Would anyone else be willing? I'd say that being a member indicates a positive willingness to help others, along with some level of expertise, NOT a promise to drop everything to attend to someone else's problem.
Simon From: lonetiger at gmail.com [mailto:lonetiger at gmail.com] Sent: 09 October 2014 06:04 To: Gintautas Miliauskas; Simon Peyton Jones Cc: Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 Hi Gintautas, > Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. I think I?m fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat In the ring. Cheers, Tamar From: Gintautas Miliauskas Sent: ?Thursday?, ?October? ?2?, ?2014 ?22?:?32 To: Simon Peyton Jones Cc: Randy Polen, kyra, Marek Wawrzos, Tamar Christina, Roman Kuznetsov, Neil Mitchell, ghc-devs at haskell.org Hi, > All we need is someone to act as convenor/coordinator and we are good to go. Would any of you be willing to play that role? Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. An advantage of having a working group is that you can decide things. At the moment people often wait for GHC HQ to make a decision, and end up waiting a long time. It would be better if a working group was responsible for the GHC-on-Windows build and then if (say) you want to mandate msys2, you can go ahead and mandate it. Well, obviously consult ghc-devs for advice, but you are in the lead. Does that make sense? Sounds great. The question still remains about making changes to code: is there a particular person with commit rights that we could lean on for code reviews and committing changes to the main repository? I think an early task is to replace what Neil Mitchell encountered: FIVE different wiki pages describing how to build GHC on Windows. We want just one! (Others can perhaps be marked ?out of date/archive? rather than deleted, but it should be clear which is the main choice.) Indeed, it's a bit of a mess. I intended to shape up the msys2 page to serve as the default, but wanted to see more testing done before before dropping the other pages. I agree with using msys2 as the main choice. (I?m using it myself.) It may be that Gintautas?s page https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 is already sufficient. Although I?d like to see it tested by others. For example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas Gintautas?s page says nothing about that. Are you sure that is a problem? The page specifically instructs to use the msys64_shell.bat script (through a shortcut) that is included in msys2, and that script takes care of setting MSYSTEM=MINGW64, among other important things. Other small thoughts: ? We started including the ghc-tarball stuff because when we relied directly on the gcc that came with msys, we kept getting build failures because the gcc that some random person happened to be using did not work (e..g. they had a too-old or too-new version of msys). By using a single, fixed gcc, we avoided all this pain. Makes sense. Just curious: why is this less of a problem on GNU/Linux distros compared to msys2? 
Does msys2 see comparatively less testing, or is it generally more bleeding edge? • I don't know what a "rubenvb" build is, but I think you can go ahead and say "use X and Y in this way". The important thing is that it should be reproducible, and not dependent on the particular Cygwin or gcc or whatever the user happens to have installed. A "rubenvb" build is one of the available types of prebuilt binary packages of mingw for Windows. Let's figure out if there is something more mainstream and if we can migrate to that. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From Thomas.Winant at cs.kuleuven.be Thu Oct 9 08:17:37 2014 From: Thomas.Winant at cs.kuleuven.be (Thomas Winant) Date: Thu, 09 Oct 2014 10:17:37 +0200 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F32067C@DB3PRD3001MB020.064d.mgd.msft.net> References: , <7dc2683c29e2c11a01acbb6833bece6f@cs.kuleuven.be> <618BE556AADD624C9C918AA5D5911BEF3F32067C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On 2014-10-08 00:21, Simon Peyton Jones wrote: > Is the wiki page up to date? > https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures Yes it is. > I'd move the Constraint Wildcards bit out to an appendix or delete > altogether -- it's a distraction since it's not part of the design. > > Named wildcards are described as a "fourth form" but actually it's > "third" I suppose. And aren't they just a variant of "type wildcards"? > You can't name "extra-constraint wildcards" can you? > > There's a long section on "partial expressions and pattern signatures" > but I think the conclusion is "we don't do this". Again, move to an > appendix of not-implemented ideas. > > Try to focus on the actual design. > > Thanks! Done. Cheers, Thomas > ________________________________________ > From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of Thomas > Winant [Thomas.Winant at cs.kuleuven.be] > Sent: 07 October 2014 17:07 > To: ghc-devs at haskell.org > Subject: Re: Tentative high-level plans for 7.10.1 > > Hi, > > On 2014-10-03 23:35, Austin Seipp wrote: >> .. >> Here are the major patches on Phabricator still needing review, that I >> think we'd like to see for 7.10.1: >> >> - D168: Partial type signatures >> .. > > As Austin said, our patch implementing Partial Type Signatures is still > up for code review on Phabricator [1]. It is our goal too to get it in > 7.10.1, and we will try to do as much as we can to help out with this > process. > > We'd like it very much if people had a thorough look at it (thanks > Richard for the feedback). We're glad to provide additional info > (including extra comments in the code), rewrite confusing code, etc. > > = Status = > > The implementation is nearly complete: > * We've integrated support for Holes, i.e. by default, an underscore in > a type signature will generate an error message mentioning the > inferred type. By enabling -XPartialTypeSignatures, the inferred > type > is used and the underscore can remain in the type signature. > * SPJ's proposed simplifications (over Skype) have been implemented, > except for the fact that we still use the annotated constraints for > solving, see [2]. > * Richard's comments on Phabricator [1] have been addressed in extra > commits. > * I've rebased the patch against master on Monday. > * I've added docstrings for most of the new functions I've added. > * Some TODOs still remain, I'll summarise the most important ones here.
> See [3] for a detailed list with examples. > * When -XMonoLocalBinds is enabled (implied by -XGADTs and > -XTypeFamilies), (some) local bindings without type signature > aren't > generalised. Partial type signatures should follow this behaviour. > This is currently not handled correctly. We have a stopgap > solution > involving generating an error in mind, but would prefer a real > fix. > We'd like some help with this. > * Partial type signatures are currently ignored for pattern > bindings. > This bug doesn't seem to be difficult to solve, but requires some > debugging. > * The following code doesn't type check: > > {-# LANGUAGE MonomorphismRestriction, PartialTypeSignatures #-} > charlie :: _ => a > charlie = 3 > > Type error: No instance for (Num a) arising from the literal ‘3’. > We > would like the (Num a) constraint to be inferred (because of the > extra-constraint wildcard). > * Some smaller things, e.g. improving error messages. > > We'll try to fix the remaining TODOs, but help is certainly appreciated > and will speed up integrating this patch! > > Please have a look at the code and let us know what we can do to help. > > > Cheers, > Thomas Winant > > [1]: https://phabricator.haskell.org/D168 > [2]: > https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#extra-constraints-wildcard > [3]: https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures#TODOs > > Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm From gintautas.miliauskas at gmail.com Thu Oct 9 09:29:25 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Thu, 9 Oct 2014 11:29:25 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I'll set up a wiki page this evening. Should we get a mailing list of our own too, or do you think it's best to continue on ghc-devs@? -- Gintautas Miliauskas On Oct 9, 2014 9:52 AM, "Simon Peyton Jones" wrote: > I think I'm fairly behind on the current build process of GHC, but as I > do use GHC mainly on Windows, at such a time as you would like to move on > to other things, I would certainly throw my hat in the ring. > > > > That sounds helpful, thank you. > > > Are we at the point where we could form a GHC-on-Windows Task Force? With > its own wiki page on the GHC Trac, and with named participants. (Of course > you can drop off again.) But it would be really helpful to have an > explicit group who feels a sense of ownership about making sure GHC works > well on Windows. At the moment we are reduced to folk memory "I recall > that Gintautas did something like that a few months ago". > > > It sounds as if Tamar would be a willing member. Would anyone else be > willing? I'd say that being a member indicates a positive willingness to > help others, along with some level of expertise, NOT a promise to drop > everything to attend to someone else's problem.
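Coming back to the partial type signatures status report quoted earlier in this digest: for readers who have not followed D168, here is a minimal illustrative sketch of what the extension is meant to allow (an invented module, not taken from the patch):

```
{-# LANGUAGE PartialTypeSignatures #-}
-- Illustrative module only, not taken from D168.
module PartialSigsSketch where

-- A type wildcard: GHC fills in the '_' (here with Bool) and, by default,
-- reports the inferred type as a warning rather than an error.
isLong :: String -> _
isLong s = length s > 10

-- An extra-constraints wildcard: '_ =>' asks GHC to infer the context;
-- here the inferred constraint is (Show a).
describe :: _ => a -> String
describe x = "value: " ++ show x
```

The charlie item in the TODO list above is the same shape of example, with the MonomorphismRestriction enabled; the point of that TODO is that the (Num a) constraint should be picked up by the extra-constraints wildcard there as well.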
> > > > Simon > > > > *From:* lonetiger at gmail.com [mailto:lonetiger at gmail.com] > *Sent:* 09 October 2014 06:04 > *To:* Gintautas Miliauskas; Simon Peyton Jones > *Cc:* Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; > ghc-devs at haskell.org > *Subject:* Re: Building ghc on Windows with msys2 > > > > Hi Gintautas, > > > > > Indeed, the next thing I was going to ask was about expediting the > decision process. I would be happy to try and coordinate a push in Windows > matters. There is a caveat though: I don't have any skin in the > GHC-on-Windows game, so I will want to move on to other things afterwards. > > > > I think I?m fairly behind on the current build process of GHC, but as I do > use GHC mainly on Windows, at such a time as you would like to move on to > other things, I would certainly throw my hat In the ring. > > > > Cheers, > > Tamar > > > > > > *From:* Gintautas Miliauskas > *Sent:* ?Thursday?, ?October? ?2?, ?2014 ?22?:?32 > *To:* Simon Peyton Jones > *Cc:* Randy Polen , kyra , Marek > Wawrzos , Tamar Christina , Roman > Kuznetsov , Neil Mitchell , > ghc-devs at haskell.org > > > > Hi, > > > > > All we need is someone to act as convenor/coordinator and we are good to > go. Would any of you be willing to play that role? > > > > Indeed, the next thing I was going to ask was about expediting the > decision process. I would be happy to try and coordinate a push in Windows > matters. There is a caveat though: I don't have any skin in the > GHC-on-Windows game, so I will want to move on to other things afterwards. > > > > An advantage of having a working group is that you can *decide* things. > At the moment people often wait for GHC HQ to make a decision, and end up > waiting a long time. It would be better if a working group was responsible > for the GHC-on-Windows build and then if (say) you want to mandate msys2, > you can go ahead and mandate it. Well, obviously consult ghc-devs for > advice, but you are in the lead. Does that make sense? > > > > Sounds great. The question still remains about making changes to code: is > there a particular person with commit rights that we could lean on for code > reviews and committing changes to the main repository? > > > > I think an early task is to replace what Neil Mitchell encountered: FIVE > different wiki pages describing how to build GHC on Windows. We want just > one! (Others can perhaps be marked ?out of date/archive? rather than > deleted, but it should be clear which is the main choice.) > > > > Indeed, it's a bit of a mess. I intended to shape up the msys2 page to > serve as the default, but wanted to see more testing done before before > dropping the other pages. > > > > I agree with using msys2 as the main choice. (I?m using it myself.) It > may be that Gintautas?s page > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > is already sufficient. Although I?d like to see it tested by others. For > example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas > Gintautas?s page says nothing about that. > > > > Are you sure that is a problem? The page specifically instructs to use the > msys64_shell.bat script (through a shortcut) that is included in msys2, and > that script takes care of setting MSYSTEM=MINGW64, among other important > things. > > > > Other small thoughts: > > ? 
We started including the ghc-tarball stuff because when we > relied directly on the gcc that came with msys, we kept getting build > failures because the gcc that some random person happened to be using did > not work (e..g. they had a too-old or too-new version of msys). By using a > single, fixed gcc, we avoided all this pain. > > > > Makes sense. Just curious: why is this less of a problem on GNU/Linux > distros compared to msys2? Does msys2 see comparatively less testing, or is > it generally more bleeding edge? > > > > ? I don?t know what a ?rubenvb? build is, but I think you can go > ahead and say ?use X and Y in this way?. The important thing is that it > should be reproducible, and not dependent on the particular Cygwin or gcc > or whatever the that user happens to have installed. > > A "rubenvb" build is one of the available types of prebuilt binary > packages of mingw for Windows. Let's figure out if there is something more > mainstream and if we can migrate to that. > > > > -- > Gintautas Miliauskas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 9 09:47:26 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Oct 2014 09:47:26 +0000 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3263FD@DB3PRD3001MB020.064d.mgd.msft.net> Better to continue on ghc-devs; that way others have visibility of what is going on. Simon From: Gintautas Miliauskas [mailto:gintautas.miliauskas at gmail.com] Sent: 09 October 2014 10:29 To: Simon Peyton Jones Cc: Neil Mitchell; Randy Polen; Roman Kuznetsov; lonetiger at gmail.com; ghc-devs at haskell.org; kyra; Marek Wawrzos Subject: RE: Building ghc on Windows with msys2 I'll set up a wikipage this evening. Should we get a mailing list of our own too, or do you think it's best to continue on ghc-devs@? -- Gintautas Miliauskas On Oct 9, 2014 9:52 AM, "Simon Peyton Jones" > wrote: I think I?m fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat In the ring. That sounds helpful, thank you. Are we at the point where we could form a GHC-on-Windows Task Force? With its own wiki page on the GHC Trac, and with named participants. (Of course you can drop off again.) But it would be really helpful to have an explicit group who feels a sense of ownership about making sure GHC works well on Windows. At the moment we are reduced to folk memory ?I recall that Gintautas did something like that a few months ago?. It sounds as if Tamar would be a willing member. Would anyone else be willing? I?d say that being a member indicates a positive willingness to help others, along with some level of expertise, NOT a promise to drop everything to attend to someone else?s problem. Simon From: lonetiger at gmail.com [mailto:lonetiger at gmail.com] Sent: 09 October 2014 06:04 To: Gintautas Miliauskas; Simon Peyton Jones Cc: Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 Hi Gintautas, > Indeed, the next thing I was going to ask was about expediting the decision process. 
I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. I think I?m fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat In the ring. Cheers, Tamar From: Gintautas Miliauskas Sent: ?Thursday?, ?October? ?2?, ?2014 ?22?:?32 To: Simon Peyton Jones Cc: Randy Polen, kyra, Marek Wawrzos, Tamar Christina, Roman Kuznetsov, Neil Mitchell, ghc-devs at haskell.org Hi, > All we need is someone to act as convenor/coordinator and we are good to go. Would any of you be willing to play that role? Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. An advantage of having a working group is that you can decide things. At the moment people often wait for GHC HQ to make a decision, and end up waiting a long time. It would be better if a working group was responsible for the GHC-on-Windows build and then if (say) you want to mandate msys2, you can go ahead and mandate it. Well, obviously consult ghc-devs for advice, but you are in the lead. Does that make sense? Sounds great. The question still remains about making changes to code: is there a particular person with commit rights that we could lean on for code reviews and committing changes to the main repository? I think an early task is to replace what Neil Mitchell encountered: FIVE different wiki pages describing how to build GHC on Windows. We want just one! (Others can perhaps be marked ?out of date/archive? rather than deleted, but it should be clear which is the main choice.) Indeed, it's a bit of a mess. I intended to shape up the msys2 page to serve as the default, but wanted to see more testing done before before dropping the other pages. I agree with using msys2 as the main choice. (I?m using it myself.) It may be that Gintautas?s page https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 is already sufficient. Although I?d like to see it tested by others. For example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas Gintautas?s page says nothing about that. Are you sure that is a problem? The page specifically instructs to use the msys64_shell.bat script (through a shortcut) that is included in msys2, and that script takes care of setting MSYSTEM=MINGW64, among other important things. Other small thoughts: ? We started including the ghc-tarball stuff because when we relied directly on the gcc that came with msys, we kept getting build failures because the gcc that some random person happened to be using did not work (e..g. they had a too-old or too-new version of msys). By using a single, fixed gcc, we avoided all this pain. Makes sense. Just curious: why is this less of a problem on GNU/Linux distros compared to msys2? Does msys2 see comparatively less testing, or is it generally more bleeding edge? ? I don?t know what a ?rubenvb? build is, but I think you can go ahead and say ?use X and Y in this way?. The important thing is that it should be reproducible, and not dependent on the particular Cygwin or gcc or whatever the that user happens to have installed. 
A "rubenvb" build is one of the available types of prebuilt binary packages of mingw for Windows. Let's figure out if there is something more mainstream and if we can migrate to that. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Oct 9 10:14:36 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 09 Oct 2014 11:14:36 +0100 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <5436600C.70808@gmail.com> On 27/09/2014 22:04, Gintautas Miliauskas wrote: > 8. A broader question: what general approach to ghc on Windows shall we > take? The prebuilt packages currently provided by ghc-tarballs are also > covered by msys2's package manager. Why not offload that pain to msys2 > then? The advantage here is less maintenance (and automatic upgrades of > the toolchain), the disadvantage is that the distribution becomes less > stable and msys2 updates could break ghc builds more easily. I think it > would make sense to be consistent with the Linux builds; we don't bundle > compilers with those. In that sense msys2 would be like another > distribution. Of course, we need to also consider if msys2 can be > trusted to stick around and stay up to date in the long run. It looks > like a relatively new project, so there's some risk. > > 9. If I understand correctly, one other thing to consider before > dropping ghc-tarballs is that Windows ghc still needs GCC utilities > (like cpp) to function properly, and so we need to have a prepackaged > bundle of binary GCC utilities (and maybe hardcoded paths? not sure) to > make that work. On the other hand, a custom-built ghc should work just > fine in the msys2 environment which does provide cpp et al., and the > additional GCC bundles would perhaps best be owned by, for example, the > Haskell Platform project rather than be part of core ghc? > > 10. Following the idea in (8), I tried to build ghc using the mingw gcc > provided by msys2 instead of the one in ghc-tarballs. It was a bit > weird. I had to hack configure.ac to disable use > of ghc-tarballs and try to use system tools. How about a configure > option to enable/disable use of ghc-tarballs? I also ran into some weird > issues, for example, the system ld and nm would not get detected by the > configure script correctly. They were found when I explicitly set LD=ld > and NM=nm. Weird. Will look into that later. Other than that, there were > no major problems, except... > > 11. A build with the host gcc failed. I think the cause is that it is > too new (4.9.1, significantly newer than 4.6.3 in ghc-tarballs). The > build of the currently checked in GMP (libraries/integer-gmp) fails > because a utility used in the build process segfaults. I tried upgrading > gmp from 5.0.3 to 6.0.0, and 6.0.0 builds fine by itself but the > ghc-specific patch used for 5.0.3 no longer applies (is it still > necessary?). Oh brother. One of the advantages of tracking msys2's gcc > would be that we would notice such breakage earlier. Shall I open an issue? We created ghc-tarballs for stability reasons. In the past, some versions of mingw were broken, so we wanted to ensure that everyone building GHC on Windows was using the same gcc, and that a given build of GHC will ship with a predictable gcc, rather than grabbing whatever is installed. I think it's pretty important that GHC can be installed independently of mingw. 
That dependency used to be huge source of pain when we had it. Windows is unlike Linux, in that on Linux it's easy to install a working gcc. Many distributions already ship it, and even when they don't, the package manager makes it easy to add gcc as a dependency of GHC so it gets installed automatically. Cheers, Simon From pali.gabor at gmail.com Thu Oct 9 10:15:43 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 9 Oct 2014 12:15:43 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <87bnplra20.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> <87bnplra20.fsf@gmail.com> Message-ID: 2014-10-09 9:12 GMT+02:00 Herbert Valerio Riedel : > I didn't see this issue on a newly setup MSYS2 environment > either. How old is your MSYS environment? (And what filesystem & windows > version are you running?) I use a 64-bit Windows 7 SP1 (6.1.7601) with both the 32-bit (i686) and 64-bit (x86_64) msys2 installed on a regular NTFS partition. The i686 instance is of July 4, 2014, and the x86_64 instance is of February 16, 2014. I have GHC 7.6.3 and GCC 4.5.2 (32-bit) and GCC 4.6.3 (64-bit) installed, respectively. From shumovichy at gmail.com Thu Oct 9 10:38:00 2014 From: shumovichy at gmail.com (Yuras Shumovich) Date: Thu, 9 Oct 2014 13:38:00 +0300 Subject: heres a 32bit OS X 7.8.3 build In-Reply-To: References: Message-ID: Hello carter, I tried to install it, but get the error (see bellow.) I did the usual thing I do on linux: ./configure --prefix=... && sudo make install The tail of the log: Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/terminfo-0.4.0.0 "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy libraries/haskeline dist-install "strip" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn' Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskeline-0.7.1.2 "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy compiler stage2 "strip" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn' Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/ghc-7.8.3 "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy libraries/old-time dist-install "strip" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn' Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/old-time-1.1.0.2 "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy libraries/haskell98 dist-install "strip" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn' Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskell98-2.0.0.3 "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" copy libraries/haskell2010 dist-install "strip" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn' Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskell2010-1.1.2.0 "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg" --force --global-package-db "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/package.conf.d" update rts/dist/package.conf.install Reading package info from "rts/dist/package.conf.install" ... done. 
"utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" register libraries/ghc-prim dist-install "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc" "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg" "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' NO Registering ghc-prim-0.3.1.0... "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" register libraries/integer-gmp dist-install "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc" "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg" "/opt/ghc-7.8.3_x86/lib/ghc-7.8.3" '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3' '/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' NO Registering integer-gmp-0.5.1.0... ghc-cabal: integer-gmp-0.5.1.0: library-dirs: yes is a relative path which makes no sense (as there is nothing for it to be relative to). You can make paths relative to the package database itself by using ${pkgroot}. (use --force to override) make[1]: *** [install_packages] Error 1 make: *** [install] Error 2 Thanks, Yuras 2014-10-08 22:39 GMT+03:00 Carter Schonwald : > hey all, > > I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac OS > 10.9, so here you are! > > > http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2 > > > $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 > 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5 > ghc-7.8.3-i386-apple-darwin.tar.bz2 > > is the relevant SHA 256 digest > > NB: I believe I managed to build it with intree-gmp too! So it wont' need > GMP installed in the system (but I could be wrong, in which case brew > install gmp will suffice) > > cheers > -Carter > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Thu Oct 9 11:39:10 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Thu, 9 Oct 2014 11:39:10 +0000 Subject: Again: Uniques in GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F31FDA2@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , , , <618BE556AADD624C9C918AA5D5911BEF3F31FDA2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <85db976026a441128257da13a0be9ad2@EXMBX31.ad.utwente.nl> Dear Simon, et al, I've created the wiki-page about the Unique-patch [1]. Should it be linked to from the KeyDataTypes [2]? Regards, Philip [1] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Unique [2] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/KeyDataTypes ________________________________ From: Simon Peyton Jones Sent: 07 October 2014 23:23 To: Holzenspies, P.K.F. (EWI); carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC One of the things I'm finding difficult about this Phab stuff is that I get presented with lots of code without enough supporting text saying * What problem is this patch trying to solve? * What is the user-visible design (for language features)? * What are the main ideas in the implementation? The place we usually put such design documents is on the GHC Trac Wiki. 
Email is ok for discussion, but the wiki is FAR better for stating clearly the current state of play. Philip, might you make such a page for this unique stuff? To answer some of you specific questions (please include the answers in the wiki page in some form): * Uniques are never put in .hi files (as far as I know). They do not survive a single invocation of GHC. * However with ghc --make, or ghci, uniques do survive for the entire invocation of GHC. For example in ghc --make, uniques assigned when compiling module A should not clash with those for module B * Yes, TyCons and DataCons must have separate uniques. We often form sets of Names, which contain both TyCons and DataCons. Let's not mess with this. * Having unique-supply-splitting as a pure function is so deeply embedded in GHC that I could not hazard a guess as to how difficult it would be to IO-ify it. Moreover, I would regret doing so because it would force sequentiality where none is needed. * Template Haskell is a completely independent Haskell library. It does not import GHC. If uniques were in their own package, then TH and GHC could share them. Ditto Hoopl. * You say that Uniques are serialised as Word32. I'm not sure why they are serialised at all! * Enforcing determinacy everywhere is a heavy burden. Instead I suppose that you could run a pass at the end to give everything a more determinate name TidyPgm does this for the name strings, so it would probably be easy to do so for the uniques too. Simon ________________________________ From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 07 October 2014 22:03 To: carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Carter, Simon, et al, (CC'd SPJ on this explicitly, because I *think* he'll be most knowledgeable on some of the constraints that need to be guaranteed for Uniques) I agree, but to that end, a few parameters need to become clear. To this end, I've created a Phabricator-thing that we can discuss things off of: https://phabricator.haskell.org/D323 Here are my open issues: - There were ad hoc domains of Uniques being created everywhere in the compiler (i.e. characters chosen to classify the generated Uniques). I have gathered them all up and given them names as constructors in Unique.UniqueDomain. Some of these names are arbitrary, because I don't know what they're for precisely. I generally went for the module name as a starting point. I did, however, make a point of having different invocations of mkSplitUniqSupply et al all have different constructors (e.g. HscMainA through HscMainC). This is to prevent the high potential for conflicts (see comments in uniqueDomainChar). If there are people that are more knowledgeable about the use of Uniques in these modules (e.g. HscMain, ByteCodeGen, etc.) can say that the uniques coming from these different invocations can never cause conflict, they maybe can reduce the number of UniqueDomains. ? - Some UniqueDomains only have a handful of instances and seem a bit wasteful. - Uniques were represented by a custom-boxed Int#, but serialised as Word32. Most modern machines see Int# as a 64-bit thing. Aren't we worried about the potential for undetected overlap/conflict there? - What is the scope in which a Unique must be Unique? I.e. what if independently compiled modules have overlapping Uniques (for different Ids) in their hi-files? 
Also, do TyCons and DataCons really need to have guaranteed different Uniques? Shouldn't the parser/renamer figure out what goes where and raise errors on domain violations? - There seem to be related-but-different Unique implementations in Template Haskell and Hoopl. Why is this? - How critical is it to let mkUnique (and mkSplitUniqSupply) be pure functions? If they can be IO, we could greatly simplify the management of (un)generated Uniques in each UniqueDomain and quite possibly make the move to a threaded GHC easier (for what that's worth). Also, this may help solve the non-determinism issues. - Missing haddocks, failing lints (lines too long) and a lot of cosmetics will be met when the above points have become a tad more clear. I'm more than happy to document a lot of the answers to the above stuff in Unique and/or commentary. Regards, Philip ________________________________ From: Carter Schonwald Sent: 07 October 2014 21:30 To: Holzenspies, P.K.F. (EWI) Cc: Austin Seipp; ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, > wrote: ________________________________________ From: mad.one at gmail.com > on behalf of Austin Seipp > So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Thu Oct 9 11:52:15 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Thu, 9 Oct 2014 11:52:15 +0000 Subject: Tentative high-level plans for 7.10.1 In-Reply-To: References: <87k34gsd9r.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F31A873@DB3PRD3001MB020.064d.mgd.msft.net> <877g0bkpwj.fsf@gmail.com> <1412781755-sup-4952@sabre> Message-ID: I?m with John wrt. the discussions on LTS and the 7.8.4 release being orthogonal. Especially if 7.8 does not have submodules and if this is a pain, there?s also no reason to backport our approach to LTS into 7.8. In other words, 7.10 could also be the first LTS version. Ph. From: John Lato [mailto:jwlato at gmail.com] Sent: woensdag 8 oktober 2014 18:22 To: Edward Z. Yang Cc: ghc-devs at haskell.org; Simon Marlow Subject: Re: Tentative high-level plans for 7.10.1 Speaking for myself, I don't think the question of doing a 7.8.4 release at all needs to be entangled with the LTS issue. On Wed, Oct 8, 2014 at 8:23 AM, Edward Z. 
Yang > wrote: Excerpts from Herbert Valerio Riedel's message of 2014-10-08 00:59:40 -0600: > However, should GHC 7.8.x turn out to become a LTS-ishly maintained > branch, we may want to consider converting it to a similiar Git > structure as GHC HEAD currently is, to avoid having to keep two > different sets of instructions on the GHC Wiki for how to work on GHC > 7.8 vs working on GHC HEAD/7.10 and later. Emphatically yes. Lack of submodules on the 7.8 branch makes working with it /very/ unpleasant. Edward _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Oct 9 15:12:35 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Oct 2014 11:12:35 -0400 Subject: heres a 32bit OS X 7.8.3 build In-Reply-To: References: Message-ID: woops, looks like the build is busted, I'll try again this afternoon. pardon the noise On Wed, Oct 8, 2014 at 3:39 PM, Carter Schonwald wrote: > hey all, > > I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac OS > 10.9, so here you are! > > > http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2 > > > $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 > 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5 > ghc-7.8.3-i386-apple-darwin.tar.bz2 > > is the relevant SHA 256 digest > > NB: I believe I managed to build it with intree-gmp too! So it wont' need > GMP installed in the system (but I could be wrong, in which case brew > install gmp will suffice) > > cheers > -Carter > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Oct 9 15:24:21 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 9 Oct 2014 17:24:21 +0200 Subject: [GHC] #9628: Add Annotations to the AST to simplify source to source conversions In-Reply-To: <059.635ab04c47008f1b6c91849d5a93e478@haskell.org> References: <044.285fa4db7fb10488df4811e6070f6acb@haskell.org> <059.635ab04c47008f1b6c91849d5a93e478@haskell.org> Message-ID: Yes, I was thinking last night I need to update the GhcAstAnnotations wiki page. Will do so and clean up. On Thu, Oct 9, 2014 at 5:20 PM, GHC wrote: > #9628: Add Annotations to the AST to simplify source to source conversions > -------------------------------------+------------------------------------- > Reporter: alanz | Owner: alanz > Type: feature | Status: new > request | Milestone: > Priority: normal | Version: 7.8.3 > Component: Compiler | Keywords: > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Unknown > Unknown/Multiple | Blocked By: > Type of failure: | Related Tickets: > None/Unknown | > Test Case: | > Blocking: | > Differential Revisions: D246 | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > I'm afraid I'm very confused by this thread. > > * There are two different Phab tickets: Phab:D246 is linked to this > ticket, but Phab:D297 (I believe) may supercede it. If so please let's > redirect the "Differential revision" field of this ticket, and explicit > mark the moribund one as moribund. > > * The wiki page GhcAstAnnotations does not appear to reflect any of the > discussion. 
Indeed it appears to describe only the first bullet from > comment:3 > > * comment:3 identifies two issues, which Alan (in comment:4) agreed were > separate. Yet [http://www.haskell.org/pipermail/ghc- > devs/2014-October/006487.html Neil certainly thinks] that the new > Phab:D297 is exclusively about issue 1. So maybe the new design > encompasses both issue 1 and issue 2? I have no idea. > > * There has been quite a lot of [http://www.haskell.org/pipermail/ghc- > devs/2014-October/006482.html traffic on ghc-devs] that is not captured > anywhere. That's fine: an email list is good for discussion. But my > input bandwidth is low and struggle to make sense of it all. And the > conclusions from the discussion may be useful. > > * Alan has posted a [http://www.haskell.org/pipermail/haskell- > cafe/2014-October/116267.html useful summary] to Haskell Cafe, which isn't > captured on a wiki anywhere. > > * Alan has done some work identifying users for the new features, and > written some email notes about that; again this would be useful to > capture. > > I am too slow to take a big patch and try to reverse-engineer the thought > process that went into it. Would be possible to update the wiki page > (presumably GhcAstAnnotations) to state > * The problem we are trying to solve > * The user-visible (or at least visible-to-client-of-GHC-API) design > * Other notes about the implementation. > > Covering the larger picture about the GHC API improvements you are making > (eg no landmines) would be helpful. Maybe you need more than one page. > > I'm delighted you are doing this. But I don't want to throw a lot of code > into GHC without a clear, shared consensus about what it is we are trying > do to, and how we are doing it. > > Thanks. > > Simon (drowning in review requests) PJ > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Oct 9 16:03:26 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Oct 2014 12:03:26 -0400 Subject: Capturing commas in Api Annotations (D297) In-Reply-To: References: Message-ID: One small question I have is this: why's it called a comma list? On Oct 8, 2014 12:33 PM, "Alan & Kim Zimmerman" wrote: > I am currently working annotations into the parser, provided them as a > separate structure at the end of the parse, indexed to the original by > SrcSpan and AST element type. > > The question I have is how to capture commas and semicolons in lists of > items. > > There are at least three ways of doing this > > 1. Make sure each of the items is Located, and add the possible comma > location to the annotation structure for it. > > This has the drawback that all instances of the AST item annotation have > the possible comma location in them, and it does not cope with multiple > separators where these are allowed. > > > 2. Introduce a new hsSyn structure to explicitly capture comma-separated > lists. > > This is the current approach I am taking, modelled on the OrdList > implementation, but with an extra constructor to capture the separator > location. > > Thus > > ``` > data HsCommaList a > = Empty > | Cons a (HsCommaList a) > | ExtraComma SrcSpan (HsCommaList a) > -- ^ We need a SrcSpan for the annotation > | Snoc (HsCommaList a) a > | Two (HsCommaList a) -- Invariant: non-empty > (HsCommaList a) -- Invariant: non-empty > ``` > > > 3. 
Change the lists to be of type `[Either SrcSpan a]` to explicitly > capture the comma locations in the list. > > > 4. A fourth way is to add a list of SrcSpan to the annotation for the > parent structure of the list, simply tracking the comma positions. This > will make working with the annotations complicated though. > > > I am currently proceeding with option 2, but would appreciate some comment > on whether this is the best approach to take. > > Option 2 will allow the AST to capture the extra commas in record > constructors, as suggested by SPJ in the debate on that feature. > > > Regards > Alan > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Oct 9 16:07:09 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 9 Oct 2014 18:07:09 +0200 Subject: Capturing commas in Api Annotations (D297) In-Reply-To: References: Message-ID: It is a structure proposed to capture extra commas in record declarations, which I have abused for use here. I have subsequently realised that I am using it to capture EVERY comma, and sometimes semicolons, so its naming is even worse for my use. Hence I am pretty sure it is something that should change, but I am not sure whether it should change in the hsSyn or be captured in annotations, or what the best mixture between those two is. On Thu, Oct 9, 2014 at 6:03 PM, Carter Schonwald wrote: > One small question I have is this: why's it called a comma list? > On Oct 8, 2014 12:33 PM, "Alan & Kim Zimmerman" > wrote: > >> I am currently working annotations into the parser, provided them as a >> separate structure at the end of the parse, indexed to the original by >> SrcSpan and AST element type. >> >> The question I have is how to capture commas and semicolons in lists of >> items. >> >> There are at least three ways of doing this >> >> 1. Make sure each of the items is Located, and add the possible comma >> location to the annotation structure for it. >> >> This has the drawback that all instances of the AST item annotation have >> the possible comma location in them, and it does not cope with multiple >> separators where these are allowed. >> >> >> 2. Introduce a new hsSyn structure to explicitly capture comma-separated >> lists. >> >> This is the current approach I am taking, modelled on the OrdList >> implementation, but with an extra constructor to capture the separator >> location. >> >> Thus >> >> ``` >> data HsCommaList a >> = Empty >> | Cons a (HsCommaList a) >> | ExtraComma SrcSpan (HsCommaList a) >> -- ^ We need a SrcSpan for the annotation >> | Snoc (HsCommaList a) a >> | Two (HsCommaList a) -- Invariant: non-empty >> (HsCommaList a) -- Invariant: non-empty >> ``` >> >> >> 3. Change the lists to be of type `[Either SrcSpan a]` to explicitly >> capture the comma locations in the list. >> >> >> 4. A fourth way is to add a list of SrcSpan to the annotation for the >> parent structure of the list, simply tracking the comma positions. This >> will make working with the annotations complicated though. >> >> >> I am currently proceeding with option 2, but would appreciate some >> comment on whether this is the best approach to take. >> >> Option 2 will allow the AST to capture the extra commas in record >> constructors, as suggested by SPJ in the debate on that feature. 
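For comparison with option 2 above, a minimal sketch of what option 3's `[Either SrcSpan a]` representation would give (helper names invented for illustration, not part of D297):

```
-- Option 3 sketch: separators live in the same list as the items.
-- Left entries record separator (comma/semicolon) locations, Right
-- entries are the items themselves. Helper names are invented here.
import Data.Either (lefts, rights)

-- Stand-in for GHC's real SrcSpan.
type SrcSpan = (Int, Int, Int, Int)

type SepList a = [Either SrcSpan a]

-- The payload, ignoring punctuation: what most of the compiler wants.
sepItems :: SepList a -> [a]
sepItems = rights

-- The recorded separator locations: what an exact-printer wants.
sepLocs :: SepList a -> [SrcSpan]
sepLocs = lefts
```

The cost, compared with option 2, is arguably that every consumer of these lists has to skip over the Left entries, whereas the HsCommaList constructors keep the separator locations out of the way of ordinary pattern matches.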
>> >> >> Regards >> Alan >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Oct 9 17:23:20 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Oct 2014 13:23:20 -0400 Subject: heres a 32bit OS X 7.8.3 build In-Reply-To: References: Message-ID: Ok, I've rebuilt it (and tested it!) the same link now points to that build $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 8f79b4d88db1da6b904b02f23e28d0197d7f101008d005b27def7d1396f74421 ghc-7.8.3-i386-apple-darwin.tar.bz2 is the sha256 checksum This build uses intree-gmp correctly, so it doesnt need any GMP installed on the target system. And definitely works on OS X 10.9 cheers! -Carter On Thu, Oct 9, 2014 at 11:12 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > woops, looks like the build is busted, I'll try again this afternoon. > pardon the noise > > On Wed, Oct 8, 2014 at 3:39 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> hey all, >> >> I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac >> OS 10.9, so here you are! >> >> >> http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2 >> >> >> $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 >> 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5 >> ghc-7.8.3-i386-apple-darwin.tar.bz2 >> >> is the relevant SHA 256 digest >> >> NB: I believe I managed to build it with intree-gmp too! So it wont' need >> GMP installed in the system (but I could be wrong, in which case brew >> install gmp will suffice) >> >> cheers >> -Carter >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 9 20:36:49 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Oct 2014 20:36:49 +0000 Subject: Again: Uniques in GHC In-Reply-To: <85db976026a441128257da13a0be9ad2@EXMBX31.ad.utwente.nl> References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , , , <618BE556AADD624C9C918AA5D5911BEF3F31FDA2@DB3PRD3001MB020.064d.mgd.msft.net>, <85db976026a441128257da13a0be9ad2@EXMBX31.ad.utwente.nl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F327F34@DB3PRD3001MB020.064d.mgd.msft.net> Thank you. A most helpful beginning. I have added some comments and queries, as well as clarifying some points. Simon ________________________________ From: p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 09 October 2014 12:39 To: Simon Peyton Jones; carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Simon, et al, I've created the wiki-page about the Unique-patch [1]. Should it be linked to from the KeyDataTypes [2]? Regards, Philip [1] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Unique [2] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/KeyDataTypes ________________________________ From: Simon Peyton Jones Sent: 07 October 2014 23:23 To: Holzenspies, P.K.F. 
(EWI); carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC One of the things I'm finding difficult about this Phab stuff is that I get presented with lots of code without enough supporting text saying * What problem is this patch trying to solve? * What is the user-visible design (for language features)? * What are the main ideas in the implementation? The place we usually put such design documents is on the GHC Trac Wiki. Email is ok for discussion, but the wiki is FAR better for stating clearly the current state of play. Philip, might you make such a page for this unique stuff? To answer some of you specific questions (please include the answers in the wiki page in some form): * Uniques are never put in .hi files (as far as I know). They do not survive a single invocation of GHC. * However with ghc --make, or ghci, uniques do survive for the entire invocation of GHC. For example in ghc --make, uniques assigned when compiling module A should not clash with those for module B * Yes, TyCons and DataCons must have separate uniques. We often form sets of Names, which contain both TyCons and DataCons. Let's not mess with this. * Having unique-supply-splitting as a pure function is so deeply embedded in GHC that I could not hazard a guess as to how difficult it would be to IO-ify it. Moreover, I would regret doing so because it would force sequentiality where none is needed. * Template Haskell is a completely independent Haskell library. It does not import GHC. If uniques were in their own package, then TH and GHC could share them. Ditto Hoopl. * You say that Uniques are serialised as Word32. I'm not sure why they are serialised at all! * Enforcing determinacy everywhere is a heavy burden. Instead I suppose that you could run a pass at the end to give everything a more determinate name TidyPgm does this for the name strings, so it would probably be easy to do so for the uniques too. Simon ________________________________ From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 07 October 2014 22:03 To: carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Carter, Simon, et al, (CC'd SPJ on this explicitly, because I *think* he'll be most knowledgeable on some of the constraints that need to be guaranteed for Uniques) I agree, but to that end, a few parameters need to become clear. To this end, I've created a Phabricator-thing that we can discuss things off of: https://phabricator.haskell.org/D323 Here are my open issues: - There were ad hoc domains of Uniques being created everywhere in the compiler (i.e. characters chosen to classify the generated Uniques). I have gathered them all up and given them names as constructors in Unique.UniqueDomain. Some of these names are arbitrary, because I don't know what they're for precisely. I generally went for the module name as a starting point. I did, however, make a point of having different invocations of mkSplitUniqSupply et al all have different constructors (e.g. HscMainA through HscMainC). This is to prevent the high potential for conflicts (see comments in uniqueDomainChar). If there are people that are more knowledgeable about the use of Uniques in these modules (e.g. HscMain, ByteCodeGen, etc.) can say that the uniques coming from these different invocations can never cause conflict, they maybe can reduce the number of UniqueDomains. ? 
- Some UniqueDomains only have a handful of instances and seem a bit wasteful. - Uniques were represented by a custom-boxed Int#, but serialised as Word32. Most modern machines see Int# as a 64-bit thing. Aren't we worried about the potential for undetected overlap/conflict there? - What is the scope in which a Unique must be Unique? I.e. what if independently compiled modules have overlapping Uniques (for different Ids) in their hi-files? Also, do TyCons and DataCons really need to have guaranteed different Uniques? Shouldn't the parser/renamer figure out what goes where and raise errors on domain violations? - There seem to be related-but-different Unique implementations in Template Haskell and Hoopl. Why is this? - How critical is it to let mkUnique (and mkSplitUniqSupply) be pure functions? If they can be IO, we could greatly simplify the management of (un)generated Uniques in each UniqueDomain and quite possibly make the move to a threaded GHC easier (for what that's worth). Also, this may help solve the non-determinism issues. - Missing haddocks, failing lints (lines too long) and a lot of cosmetics will be met when the above points have become a tad more clear. I'm more than happy to document a lot of the answers to the above stuff in Unique and/or commentary. Regards, Philip ________________________________ From: Carter Schonwald Sent: 07 October 2014 21:30 To: Holzenspies, P.K.F. (EWI) Cc: Austin Seipp; ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, > wrote: ________________________________________ From: mad.one at gmail.com > on behalf of Austin Seipp > So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Oct 9 22:06:20 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Oct 2014 18:06:20 -0400 Subject: heres a 32bit OS X 7.8.3 build In-Reply-To: References: Message-ID: Since this came up on IRC on #ghc, yes, modern OS X 10.9 and 10.10 can run 32bit executables. 
(for some reason certain folks seem to think they're only runnable on <= 10.6 ) (plus apparently if you wanna hot patch certain program such as dropbox, those are written in 32bit form so you need 32bit haskell libs if you wanna use haskell for that :)) On Thu, Oct 9, 2014 at 1:23 PM, Carter Schonwald wrote: > Ok, I've rebuilt it (and tested it!) the same link > now > points to that build > > $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 > 8f79b4d88db1da6b904b02f23e28d0197d7f101008d005b27def7d1396f74421 > ghc-7.8.3-i386-apple-darwin.tar.bz2 > > is the sha256 checksum > > This build uses intree-gmp correctly, so it doesnt need any GMP installed > on the target system. And definitely works on OS X 10.9 > > > cheers! > -Carter > > > On Thu, Oct 9, 2014 at 11:12 AM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> woops, looks like the build is busted, I'll try again this afternoon. >> pardon the noise >> >> On Wed, Oct 8, 2014 at 3:39 PM, Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> hey all, >>> >>> I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac >>> OS 10.9, so here you are! >>> >>> >>> http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2 >>> >>> >>> $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2 >>> 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5 >>> ghc-7.8.3-i386-apple-darwin.tar.bz2 >>> >>> is the relevant SHA 256 digest >>> >>> NB: I believe I managed to build it with intree-gmp too! So it wont' >>> need GMP installed in the system (but I could be wrong, in which case brew >>> install gmp will suffice) >>> >>> cheers >>> -Carter >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Fri Oct 10 07:19:31 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Fri, 10 Oct 2014 09:19:31 +0200 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris Message-ID: Hello there, Looks one of the recent commits broke the x86 builds on multiple platforms [1][2][3]. The common error message basically is as follows: rts/Linker.c: In function 'mkOc': rts/Linker.c:2372:6: error: 'ObjectCode' has no member named 'symbol_extras' Please fix it! Note that the x86_64 counterparts of the affected builders completed their builds fine -- except for Solaris, but that has been broken already for a long time (relatively). [1] http://haskell.inf.elte.hu/builders/smartos-x86-head/144/10.html [2] http://haskell.inf.elte.hu/builders/freebsd-i386-head/403/10.html [3] http://haskell.inf.elte.hu/builders/solaris-x86-head/191/10.html From hvriedel at gmail.com Fri Oct 10 07:27:41 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 10 Oct 2014 09:27:41 +0200 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: (=?utf-8?Q?=22P=C3=A1li_G=C3=A1bor_J=C3=A1nos=22's?= message of "Fri, 10 Oct 2014 09:19:31 +0200") References: Message-ID: <87d2a0xu36.fsf@gmail.com> On 2014-10-10 at 09:19:31 +0200, P?li G?bor J?nos wrote: > Looks one of the recent commits broke the x86 builds on multiple > platforms [1][2][3]. The common error message basically is as > follows: > > rts/Linker.c: In function 'mkOc': > rts/Linker.c:2372:6: > error: 'ObjectCode' has no member named 'symbol_extras' > > Please fix it! 
> > Note that the x86_64 counterparts of the affected builders completed > their builds fine -- except for Solaris, but that has been broken > already for a long time (relatively). > > [1] http://haskell.inf.elte.hu/builders/smartos-x86-head/144/10.html > [2] http://haskell.inf.elte.hu/builders/freebsd-i386-head/403/10.html > [3] http://haskell.inf.elte.hu/builders/solaris-x86-head/191/10.html Fyi, this was the commit, and I'm positive Simon is aware already: https://phabricator.haskell.org/rGHC5300099edf106c1f5938c0793bd6ca199a0eebf0 From karel.gardas at centrum.cz Fri Oct 10 09:51:14 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 10 Oct 2014 11:51:14 +0200 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: Message-ID: <5437AC12.2010909@centrum.cz> On 10/10/14 09:19 AM, P?li G?bor J?nos wrote: > Hello there, > > Looks one of the recent commits broke the x86 builds on multiple > platforms [1][2][3]. The common error message basically is as > follows: > > rts/Linker.c: In function 'mkOc': > rts/Linker.c:2372:6: > error: 'ObjectCode' has no member named 'symbol_extras' > It looks like this patch is the culprit: 5300099ed rts/Linker.c (Simon Marlow 2014-10-01 13:15:05 +0100 2372) oc->symbol_extras = NULL; > Note that the x86_64 counterparts of the affected builders completed > their builds fine -- except for Solaris, but that has been broken > already for a long time (relatively). Yeah, I'll need to fix that. So far I invested my energy in fixing Solaris/i386 testcase issues... Karel From chengang31 at gmail.com Fri Oct 10 11:30:09 2014 From: chengang31 at gmail.com (cg) Date: Fri, 10 Oct 2014 19:30:09 +0800 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: <5437AC12.2010909@centrum.cz> References: <5437AC12.2010909@centrum.cz> Message-ID: On 10/10/2014 5:51 PM, Karel Gardas wrote: > > It looks like this patch is the culprit: > > 5300099ed rts/Linker.c (Simon Marlow 2014-10-01 > 13:15:05 +0100 2372) oc->symbol_extras = NULL; > Yes, that is it and another line is x86_64 only (arch_name is): if (image == NULL) { +#if defined(x86_64_HOST_ARCH) errorBelch("%" PATH_FMT ": failed to allocate memory for image", arch_name); +#endif return NULL; >> Note that the x86_64 counterparts of the affected builders completed >> their builds fine -- except for Solaris, but that has been broken >> already for a long time (relatively). > I would like to take the chance to ask a question: How can I configure to build x86_64? When I build GHC (with msys2), it always builds i386 and I haven't spotted the option in ./configure to choose a x86_64 release. Thanks, cg From pali.gabor at gmail.com Fri Oct 10 13:40:52 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Fri, 10 Oct 2014 15:40:52 +0200 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: <5437AC12.2010909@centrum.cz> Message-ID: 2014-10-10 13:30 GMT+02:00 cg : > How can I configure to build x86_64? > > When I build GHC (with msys2), it always builds i386 and I haven't spotted > the option in ./configure to choose a x86_64 release. This is implicitly determined by the toolchain you use. So, probably you have the i686 msys2 installed, while you would need the x86_64 version. Given, that your operating system (and thus your hardware) is also x86_64. 
From p.k.f.holzenspies at utwente.nl Fri Oct 10 14:56:10 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Fri, 10 Oct 2014 14:56:10 +0000 Subject: Again: Uniques in GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F327F34@DB3PRD3001MB020.064d.mgd.msft.net> References: <1412591816.30103.10.camel@joachim-breitner.de> <01db8f0b509747669bbf426fc0dd18c2@EXMBX31.ad.utwente.nl> <1412626086.9628.1.camel@joachim-breitner.de> , , , <618BE556AADD624C9C918AA5D5911BEF3F31FDA2@DB3PRD3001MB020.064d.mgd.msft.net>, <85db976026a441128257da13a0be9ad2@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF3F327F34@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <14cbd6ae0c9d4ef1982b377e3234bfc1@EXMBX34.ad.utwente.nl> Some of those clarifying points helped a *great* deal. Thanks. I've addressed comments / questions and linked from KeyTypes. Ph. ? ________________________________ From: Simon Peyton Jones Sent: 09 October 2014 22:36 To: Holzenspies, P.K.F. (EWI); carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Thank you. A most helpful beginning. I have added some comments and queries, as well as clarifying some points. Simon ________________________________ From: p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 09 October 2014 12:39 To: Simon Peyton Jones; carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Simon, et al, I've created the wiki-page about the Unique-patch [1]. Should it be linked to from the KeyDataTypes [2]? Regards, Philip [1] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Unique [2] https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/KeyDataTypes ________________________________ From: Simon Peyton Jones Sent: 07 October 2014 23:23 To: Holzenspies, P.K.F. (EWI); carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC One of the things I'm finding difficult about this Phab stuff is that I get presented with lots of code without enough supporting text saying * What problem is this patch trying to solve? * What is the user-visible design (for language features)? * What are the main ideas in the implementation? The place we usually put such design documents is on the GHC Trac Wiki. Email is ok for discussion, but the wiki is FAR better for stating clearly the current state of play. Philip, might you make such a page for this unique stuff? To answer some of you specific questions (please include the answers in the wiki page in some form): * Uniques are never put in .hi files (as far as I know). They do not survive a single invocation of GHC. * However with ghc --make, or ghci, uniques do survive for the entire invocation of GHC. For example in ghc --make, uniques assigned when compiling module A should not clash with those for module B * Yes, TyCons and DataCons must have separate uniques. We often form sets of Names, which contain both TyCons and DataCons. Let's not mess with this. * Having unique-supply-splitting as a pure function is so deeply embedded in GHC that I could not hazard a guess as to how difficult it would be to IO-ify it. Moreover, I would regret doing so because it would force sequentiality where none is needed. * Template Haskell is a completely independent Haskell library. It does not import GHC. If uniques were in their own package, then TH and GHC could share them. Ditto Hoopl. * You say that Uniques are serialised as Word32. I'm not sure why they are serialised at all! 
* Enforcing determinacy everywhere is a heavy burden. Instead I suppose that you could run a pass at the end to give everything a more determinate name TidyPgm does this for the name strings, so it would probably be easy to do so for the uniques too. Simon ________________________________ From: ghc-devs [ghc-devs-bounces at haskell.org] on behalf of p.k.f.holzenspies at utwente.nl [p.k.f.holzenspies at utwente.nl] Sent: 07 October 2014 22:03 To: carter.schonwald at gmail.com Cc: ghc-devs at haskell.org Subject: RE: Again: Uniques in GHC Dear Carter, Simon, et al, (CC'd SPJ on this explicitly, because I *think* he'll be most knowledgeable on some of the constraints that need to be guaranteed for Uniques) I agree, but to that end, a few parameters need to become clear. To this end, I've created a Phabricator-thing that we can discuss things off of: https://phabricator.haskell.org/D323 Here are my open issues: - There were ad hoc domains of Uniques being created everywhere in the compiler (i.e. characters chosen to classify the generated Uniques). I have gathered them all up and given them names as constructors in Unique.UniqueDomain. Some of these names are arbitrary, because I don't know what they're for precisely. I generally went for the module name as a starting point. I did, however, make a point of having different invocations of mkSplitUniqSupply et al all have different constructors (e.g. HscMainA through HscMainC). This is to prevent the high potential for conflicts (see comments in uniqueDomainChar). If there are people that are more knowledgeable about the use of Uniques in these modules (e.g. HscMain, ByteCodeGen, etc.) can say that the uniques coming from these different invocations can never cause conflict, they maybe can reduce the number of UniqueDomains. ? - Some UniqueDomains only have a handful of instances and seem a bit wasteful. - Uniques were represented by a custom-boxed Int#, but serialised as Word32. Most modern machines see Int# as a 64-bit thing. Aren't we worried about the potential for undetected overlap/conflict there? - What is the scope in which a Unique must be Unique? I.e. what if independently compiled modules have overlapping Uniques (for different Ids) in their hi-files? Also, do TyCons and DataCons really need to have guaranteed different Uniques? Shouldn't the parser/renamer figure out what goes where and raise errors on domain violations? - There seem to be related-but-different Unique implementations in Template Haskell and Hoopl. Why is this? - How critical is it to let mkUnique (and mkSplitUniqSupply) be pure functions? If they can be IO, we could greatly simplify the management of (un)generated Uniques in each UniqueDomain and quite possibly make the move to a threaded GHC easier (for what that's worth). Also, this may help solve the non-determinism issues. - Missing haddocks, failing lints (lines too long) and a lot of cosmetics will be met when the above points have become a tad more clear. I'm more than happy to document a lot of the answers to the above stuff in Unique and/or commentary. Regards, Philip ________________________________ From: Carter Schonwald Sent: 07 October 2014 21:30 To: Holzenspies, P.K.F. 
(EWI) Cc: Austin Seipp; ghc-devs at haskell.org Subject: Re: Again: Uniques in GHC in some respects, having fully deterministic builds is a very important goal: a lot of tooling for eg, caching builds of libraries works much much better if you have that property :) On Tue, Oct 7, 2014 at 12:45 PM, > wrote: ________________________________________ From: mad.one at gmail.com > on behalf of Austin Seipp > So I assume your change would mean 'ghc -j' would not work for 32bit. I still consider this a big limitation, one which is only due to an implementation detail. But we need to confirm this will actually fix any bottlenecks first though before getting to that point. Yes, that's what I'm saying. Let me just add that what I'm proposing by no means prohibits or hinders making 32-bit GHC-versions be parallel later on, it just doesn't solve the problem. It depends to what extent the "fully deterministic behaviour" bug is considered a priority (there was something about parts of the hi-files being non-deterministic across different executions of GHC; don't recall the details). Anyhow, the work I'm doing now exposes a few things about Uniques that confuse me a little and that could have been bugs (that maybe never acted up). Extended e-mail to follow later on. Ph. _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Fri Oct 10 15:49:24 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 10 Oct 2014 17:49:24 +0200 Subject: Capturing commas in Api Annotations (D297) In-Reply-To: References: Message-ID: I have just thought of a much simpler way of doing this. The approach I am following [1] allows annotations indexed by SrcSpan and annotation type. There is nothing to stop a given SrcSpan from having multiple annotations, provided they are each fof a different type. So when a Located element is in a list, we add a AnnListSeparator annotation to the SrcSpan as well as the one specific to the type. This means the whole HsCommaList thing can go away, and the annotations are still easy to use. [1] https://ghc.haskell.org/trac/ghc/wiki/GhcAstAnnotations#design Alan On Thu, Oct 9, 2014 at 6:07 PM, Alan & Kim Zimmerman wrote: > It is a structure proposed to capture extra commas in record declarations, > which I have abused for use here. > > I have subsequently realised that I am using it to capture EVERY comma, > and sometimes semicolons, so its naming is even worse for my use. > > Hence I am pretty sure it is something that should change, but I am not > sure whether it should change in the hsSyn or be captured in annotations, > or what the best mixture between those two is. > > > On Thu, Oct 9, 2014 at 6:03 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> One small question I have is this: why's it called a comma list? >> On Oct 8, 2014 12:33 PM, "Alan & Kim Zimmerman" >> wrote: >> >>> I am currently working annotations into the parser, provided them as a >>> separate structure at the end of the parse, indexed to the original by >>> SrcSpan and AST element type. >>> >>> The question I have is how to capture commas and semicolons in lists of >>> items. >>> >>> There are at least three ways of doing this >>> >>> 1. Make sure each of the items is Located, and add the possible comma >>> location to the annotation structure for it. 
>>> >>> This has the drawback that all instances of the AST item annotation have >>> the possible comma location in them, and it does not cope with multiple >>> separators where these are allowed. >>> >>> >>> 2. Introduce a new hsSyn structure to explicitly capture comma-separated >>> lists. >>> >>> This is the current approach I am taking, modelled on the OrdList >>> implementation, but with an extra constructor to capture the separator >>> location. >>> >>> Thus >>> >>> ``` >>> data HsCommaList a >>> = Empty >>> | Cons a (HsCommaList a) >>> | ExtraComma SrcSpan (HsCommaList a) >>> -- ^ We need a SrcSpan for the annotation >>> | Snoc (HsCommaList a) a >>> | Two (HsCommaList a) -- Invariant: non-empty >>> (HsCommaList a) -- Invariant: non-empty >>> ``` >>> >>> >>> 3. Change the lists to be of type `[Either SrcSpan a]` to explicitly >>> capture the comma locations in the list. >>> >>> >>> 4. A fourth way is to add a list of SrcSpan to the annotation for the >>> parent structure of the list, simply tracking the comma positions. This >>> will make working with the annotations complicated though. >>> >>> >>> I am currently proceeding with option 2, but would appreciate some >>> comment on whether this is the best approach to take. >>> >>> Option 2 will allow the AST to capture the extra commas in record >>> constructors, as suggested by SPJ in the debate on that feature. >>> >>> >>> Regards >>> Alan >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Fri Oct 10 20:01:00 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Fri, 10 Oct 2014 22:01:00 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hey, I have created https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and added the two people from whom I heard a confirmation that they want to be on the list. Please edit the page and add yourself if you should be on that list. Feel free to hack the page up and add additional info as you see fit. On Thu, Oct 9, 2014 at 9:51 AM, Simon Peyton Jones wrote: > I think I?m fairly behind on the current build process of GHC, but as I > do use GHC mainly on Windows, at such a time as you would like to move on > to other things, I would certainly throw my hat In the ring. > > > > That sounds helpful, thank you. > > > Are we at the point where we could form a GHC-on-Windows Task Force? With > its own wiki page on the GHC Trac, and with named participants. (Of course > you can drop off again.) But it would be really helpful to have an > explicit group who feels a sense of ownership about making sure GHC works > well on Windows. At the moment we are reduced to folk memory ?I recall > that Gintautas did something like that a few months ago?. > > > It sounds as if Tamar would be a willing member. Would anyone else be > willing? I?d say that being a member indicates a positive willingness to > help others, along with some level of expertise, NOT a promise to drop > everything to attend to someone else?s problem. 
> > > > Simon > > > > *From:* lonetiger at gmail.com [mailto:lonetiger at gmail.com] > *Sent:* 09 October 2014 06:04 > *To:* Gintautas Miliauskas; Simon Peyton Jones > *Cc:* Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; > ghc-devs at haskell.org > *Subject:* Re: Building ghc on Windows with msys2 > > > > Hi Gintautas, > > > > > Indeed, the next thing I was going to ask was about expediting the > decision process. I would be happy to try and coordinate a push in Windows > matters. There is a caveat though: I don't have any skin in the > GHC-on-Windows game, so I will want to move on to other things afterwards. > > > > I think I?m fairly behind on the current build process of GHC, but as I do > use GHC mainly on Windows, at such a time as you would like to move on to > other things, I would certainly throw my hat In the ring. > > > > Cheers, > > Tamar > > > > > > *From:* Gintautas Miliauskas > *Sent:* ?Thursday?, ?October? ?2?, ?2014 ?22?:?32 > *To:* Simon Peyton Jones > *Cc:* Randy Polen , kyra , Marek > Wawrzos , Tamar Christina , Roman > Kuznetsov , Neil Mitchell , > ghc-devs at haskell.org > > > > Hi, > > > > > All we need is someone to act as convenor/coordinator and we are good to > go. Would any of you be willing to play that role? > > > > Indeed, the next thing I was going to ask was about expediting the > decision process. I would be happy to try and coordinate a push in Windows > matters. There is a caveat though: I don't have any skin in the > GHC-on-Windows game, so I will want to move on to other things afterwards. > > > > An advantage of having a working group is that you can *decide* things. > At the moment people often wait for GHC HQ to make a decision, and end up > waiting a long time. It would be better if a working group was responsible > for the GHC-on-Windows build and then if (say) you want to mandate msys2, > you can go ahead and mandate it. Well, obviously consult ghc-devs for > advice, but you are in the lead. Does that make sense? > > > > Sounds great. The question still remains about making changes to code: is > there a particular person with commit rights that we could lean on for code > reviews and committing changes to the main repository? > > > > I think an early task is to replace what Neil Mitchell encountered: FIVE > different wiki pages describing how to build GHC on Windows. We want just > one! (Others can perhaps be marked ?out of date/archive? rather than > deleted, but it should be clear which is the main choice.) > > > > Indeed, it's a bit of a mess. I intended to shape up the msys2 page to > serve as the default, but wanted to see more testing done before before > dropping the other pages. > > > > I agree with using msys2 as the main choice. (I?m using it myself.) It > may be that Gintautas?s page > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 > is already sufficient. Although I?d like to see it tested by others. For > example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas > Gintautas?s page says nothing about that. > > > > Are you sure that is a problem? The page specifically instructs to use the > msys64_shell.bat script (through a shortcut) that is included in msys2, and > that script takes care of setting MSYSTEM=MINGW64, among other important > things. > > > > Other small thoughts: > > ? 
We started including the ghc-tarball stuff because when we > relied directly on the gcc that came with msys, we kept getting build > failures because the gcc that some random person happened to be using did > not work (e..g. they had a too-old or too-new version of msys). By using a > single, fixed gcc, we avoided all this pain. > > > > Makes sense. Just curious: why is this less of a problem on GNU/Linux > distros compared to msys2? Does msys2 see comparatively less testing, or is > it generally more bleeding edge? > > > > ? I don?t know what a ?rubenvb? build is, but I think you can go > ahead and say ?use X and Y in this way?. The important thing is that it > should be reproducible, and not dependent on the particular Cygwin or gcc > or whatever the that user happens to have installed. > > A "rubenvb" build is one of the available types of prebuilt binary > packages of mingw for Windows. Let's figure out if there is something more > mainstream and if we can migrate to that. > > > > -- > Gintautas Miliauskas > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Fri Oct 10 20:09:16 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 10 Oct 2014 22:09:16 +0200 Subject: [GHC] #9628: Add Annotations to the AST to simplify source to source conversions In-Reply-To: <059.da38863d5ed7020da31598061f089ac2@haskell.org> References: <044.285fa4db7fb10488df4811e6070f6acb@haskell.org> <059.da38863d5ed7020da31598061f089ac2@haskell.org> Message-ID: Ok, then I think the middle ground is keyword-specific annotations, as proposed by Neil. What should happen is that the raw annotations are used by a tool layer such as ghc-exactprint or HaRe, and other more casual users will not have to worry about the internal detail. On Fri, Oct 10, 2014 at 9:13 PM, GHC wrote: > #9628: Add Annotations to the AST to simplify source to source conversions > -------------------------------------+------------------------------------- > Reporter: alanz | Owner: alanz > Type: feature | Status: new > request | Milestone: > Priority: normal | Version: 7.9 > Component: Compiler | Keywords: > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Unknown > Unknown/Multiple | Blocked By: > Type of failure: | Related Tickets: > None/Unknown | > Test Case: | > Blocking: | > Differential Revisions: D297 | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > I don't have data, but people are already complaining about the amount of > code generated by data type declarations #9669. Have you counted how many > data constructors there are in `HsSyn`? It's a LOT. > > It just feels like a sledgehammer to crack a nut. > > Simon > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Oct 10 20:47:58 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 10 Oct 2014 16:47:58 -0400 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: <5437AC12.2010909@centrum.cz> Message-ID: likewise, 32bit OS X seems to be broken on HEAD too http://lpaste.net/112412 is the relevant bit make[5]: Nothing to be done for `all'.depbase=`echo src/x86/win32.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\ /bin/sh ./libtool --mode=compile gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. 
-I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF $depbase.Tpo -c -o src/x86/win32.lo ../src/x86/win32.S &&\ mv -f $depbase.Tpo $depbase.Plolibtool: compile: gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. -I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF src/x86/.deps/win32.Tpo -c ../src/x86/win32.S -fno-common -DPIC -o src/x86/.libs/win32.o../src/x86/win32.S:1283:section difference relocatable subtraction expression, ".LFE5" minus ".LFB5" using a symbol at the end of section will not produce an assembly time constant../src/x86/win32.S:1283:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LFE5 - .LFB5../src/x86/win32.S:1275:section difference relocatable subtraction expression, ".LEFDE5" minus ".LASFDE5" using a symbol at the end of section will not produce an assembly time constant../src/x86/win32.S:1275:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LEFDE5 - .LASFDE5../src/x86/win32.S:unknown:missing indirect symbols for section (__IMPORT,__jump_table)make[5]: *** [src/x86/win32.lo] Error 1 On Fri, Oct 10, 2014 at 9:40 AM, P?li G?bor J?nos wrote: > 2014-10-10 13:30 GMT+02:00 cg : > > How can I configure to build x86_64? > > > > When I build GHC (with msys2), it always builds i386 and I haven't > spotted > > the option in ./configure to choose a x86_64 release. > > This is implicitly determined by the toolchain you use. So, probably > you have the i686 msys2 installed, while you would need the x86_64 > version. Given, that your operating system (and thus your hardware) > is also x86_64. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Fri Oct 10 20:55:12 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 10 Oct 2014 22:55:12 +0200 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: <5437AC12.2010909@centrum.cz> Message-ID: <543847B0.30208@centrum.cz> Indeed, but this looks like completely unrelated to the issue originally reported. Kind of libffi misdetection of target platform? i.e. why it compiles win32 related file on macosx? Just trying to categorize not to decrease importance of this issue! Karel On 10/10/14 10:47 PM, Carter Schonwald wrote: > likewise, 32bit OS X seems to be broken on HEAD too > > http://lpaste.net/112412 is the relevant bit > > make[5]: Nothing to be done for `all'. > depbase=`echo src/x86/win32.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\ > /bin/sh ./libtool --mode=compile gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. -I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF $depbase.Tpo -c -o src/x86/win32.lo ../src/x86/win32.S&&\ > mv -f $depbase.Tpo $depbase.Plo > libtool: compile: gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. 
-I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF src/x86/.deps/win32.Tpo -c ../src/x86/win32.S -fno-common -DPIC -o src/x86/.libs/win32.o > ../src/x86/win32.S:1283:section difference relocatable subtraction expression, ".LFE5" minus ".LFB5" using a symbol at the end of section will not produce an assembly time constant > ../src/x86/win32.S:1283:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LFE5 - .LFB5 > ../src/x86/win32.S:1275:section difference relocatable subtraction expression, ".LEFDE5" minus ".LASFDE5" using a symbol at the end of section will not produce an assembly time constant > ../src/x86/win32.S:1275:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LEFDE5 - .LASFDE5 > ../src/x86/win32.S:unknown:missing indirect symbols for section (__IMPORT,__jump_table) > make[5]: *** [src/x86/win32.lo] Error 1 > > > On Fri, Oct 10, 2014 at 9:40 AM, P?li G?bor J?nos > wrote: > > 2014-10-10 13:30 GMT+02:00 cg >: > > How can I configure to build x86_64? > > > > When I build GHC (with msys2), it always builds i386 and I haven't > spotted > > the option in ./configure to choose a x86_64 release. > > This is implicitly determined by the toolchain you use. So, probably > you have the i686 msys2 installed, while you would need the x86_64 > version. Given, that your operating system (and thus your hardware) > is also x86_64. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Sat Oct 11 02:28:06 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 10 Oct 2014 22:28:06 -0400 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: <543847B0.30208@centrum.cz> References: <5437AC12.2010909@centrum.cz> <543847B0.30208@centrum.cz> Message-ID: could be! i'm not equipped to do the right debuggin atm though (i may try to help out if can get some guidance though ) On Fri, Oct 10, 2014 at 4:55 PM, Karel Gardas wrote: > > Indeed, but this looks like completely unrelated to the issue originally > reported. Kind of libffi misdetection of target platform? i.e. why it > compiles win32 related file on macosx? > > Just trying to categorize not to decrease importance of this issue! > > Karel > > On 10/10/14 10:47 PM, Carter Schonwald wrote: > >> likewise, 32bit OS X seems to be broken on HEAD too >> >> http://lpaste.net/112412 is the relevant bit >> >> make[5]: Nothing to be done for `all'. >> depbase=`echo src/x86/win32.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\ >> /bin/sh ./libtool --mode=compile gcc-4.9 -DHAVE_CONFIG_H -I. >> -I.. -I. -I../include -Iinclude -I../src -I. -I../include -Iinclude >> -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD >> -MP -MF $depbase.Tpo -c -o src/x86/win32.lo ../src/x86/win32.S&&\ >> mv -f $depbase.Tpo $depbase.Plo >> libtool: compile: gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. >> -I../include -Iinclude -I../src -I. 
-I../include -Iinclude -I../src >> -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP >> -MF src/x86/.deps/win32.Tpo -c ../src/x86/win32.S -fno-common -DPIC >> -o src/x86/.libs/win32.o >> ../src/x86/win32.S:1283:section difference relocatable subtraction >> expression, ".LFE5" minus ".LFB5" using a symbol at the end of >> section will not produce an assembly time constant >> ../src/x86/win32.S:1283:use a symbol with a constant value >> created with an assignment instead of the expression, L_const_sym >> = .LFE5 - .LFB5 >> ../src/x86/win32.S:1275:section difference relocatable subtraction >> expression, ".LEFDE5" minus ".LASFDE5" using a symbol at the end >> of section will not produce an assembly time constant >> ../src/x86/win32.S:1275:use a symbol with a constant value >> created with an assignment instead of the expression, L_const_sym >> = .LEFDE5 - .LASFDE5 >> ../src/x86/win32.S:unknown:missing indirect symbols for section >> (__IMPORT,__jump_table) >> make[5]: *** [src/x86/win32.lo] Error 1 >> >> >> On Fri, Oct 10, 2014 at 9:40 AM, P?li G?bor J?nos > > wrote: >> >> 2014-10-10 13:30 GMT+02:00 cg > >: >> > How can I configure to build x86_64? >> > >> > When I build GHC (with msys2), it always builds i386 and I haven't >> spotted >> > the option in ./configure to choose a x86_64 release. >> >> This is implicitly determined by the toolchain you use. So, probably >> you have the i686 msys2 installed, while you would need the x86_64 >> version. Given, that your operating system (and thus your hardware) >> is also x86_64. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Sat Oct 11 04:17:12 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 11 Oct 2014 00:17:12 -0400 Subject: One-shot semantics in GHC event manager Message-ID: <878uknxmt3.fsf@gmail.com> In ba2555ef and a6f52b19 one-shot semantics were added to event manager in `base`. If my understanding of this code is correct, in this mode the event manager will use only notify the user of the first event on a registered fd after which point the fd will need to be re-registered to get another notification. It appears this was done to optimize the common case of file I/O where only a single event is needed This change lead to a regression[1] in Bas van Dijk's usb library under GHC 7.8. usb's use of the event manager requires that all events on an fd are reported until the fd is registered or else hardware events are lost. I'm a bit perplexed as to why the change was made in the way that it was. Making one-shot a event-manager-wide attribute seems to add a fair bit of complexity to the subsystem while breaking backwards compatibility with library code. Going forward library authors now need to worry about whether the system event manager is one-shot or not. Not only is this platform dependent but it seems that there is no way for a user to determine which semantics the system event handler uses. Is there a reason why one-shot wasn't exported as a per-fd attribute instead of per-manager? Might it be possible to back out this change and instead add a variant of `registerFd` which exposes one-shot semantics? 
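For what it is worth, one possible shape for such a per-registration variant is sketched below. This is purely illustrative: none of these names exist in base, and the sketch assumes a manager whose registerFd keeps the pre-7.8 persistent semantics (in base-4.7 the exported signature is registerFd :: EventManager -> IOCallback -> Fd -> Event -> IO FdKey).

```haskell
import GHC.Event (EventManager, IOCallback, FdKey, Event,
                  registerFd, unregisterFd_)
import System.Posix.Types (Fd)

-- Hypothetical: how long a single registration should stay active.
data Lifetime = OneShot    -- deliver one event, then drop the registration
              | MultiShot  -- keep delivering until explicitly unregistered

-- Hypothetical per-fd variant of registerFd.  Against a manager whose
-- registerFd is persistent, one-shot behaviour can be layered on top by
-- unregistering from inside the callback before running it.
registerFdWith :: EventManager -> Lifetime -> IOCallback -> Fd -> Event
               -> IO FdKey
registerFdWith mgr MultiShot cb fd evt = registerFd mgr cb fd evt
registerFdWith mgr OneShot   cb fd evt =
    registerFd mgr (\key e -> unregisterFd_ mgr key >> cb key e) fd evt
```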
Cheers, - Ben [1] https://github.com/basvandijk/usb/issues/7 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From andreas.voellmy at gmail.com Sat Oct 11 04:41:28 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 11 Oct 2014 00:41:28 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <878uknxmt3.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> Message-ID: On Sat, Oct 11, 2014 at 12:17 AM, Ben Gamari wrote: > > In ba2555ef and a6f52b19 one-shot semantics were added to event manager > in `base`. If my understanding of this code is correct, in this mode the > event manager will use only notify the user of the first event on a > registered fd after which point the fd will need to be re-registered to > get another notification. Yes. > It appears this was done to optimize the > common case of file I/O where only a single event is needed > Yes. > > This change lead to a regression[1] in Bas van Dijk's usb library under > GHC 7.8. usb's use of the event manager requires that all events on an > fd are reported until the fd is registered or else hardware events are > lost. > The change should only affect libraries using GHC.Event (or other modules underneath), which are exposed, but considered "internal". I searched hackage before making this change and usb was the only library that came up using GHC.Event directly. I'm not sure if I sent the usb maintainers an email now... I really should have done that to save you the effort of hunting down the problem in usb. > > I'm a bit perplexed as to why the change was made in the way that it > was. Making one-shot a event-manager-wide attribute seems to add a fair > bit of complexity to the subsystem while breaking backwards > compatibility with library code. It added some complexity to the IO manager, but that should not affect clients except those using the internal interface. > Going forward library authors now need > to worry about whether the system event manager is one-shot or not. Yes, but only library authors using the internal interface. > Not > only is this platform dependent but it seems that there is no way for a > user to determine which semantics the system event handler uses. > Is there a reason why one-shot wasn't exported as a per-fd attribute > instead of per-manager? Might it be possible to back out this change and > instead add a variant of `registerFd` which exposes one-shot semantics? > > The system event manager is configured by GHC.Thread using ONE_SHOT if the system supports it. You can always create your own EventManager using GHC.Event.Manager.new or GHC.Event.Manager.newWith functions. Those functions take a Bool argument that control whether ONE_SHOT is used by the Manager returned by that function (False means not to use ONE_SHOT). Would this work for usb? -Andi > Cheers, > > - Ben > > > [1] https://github.com/basvandijk/usb/issues/7 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Sat Oct 11 05:07:35 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 11 Oct 2014 01:07:35 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> Message-ID: <8761frxkh4.fsf@gmail.com> Thanks for your quick reply! 
Andreas Voellmy writes: > On Sat, Oct 11, 2014 at 12:17 AM, Ben Gamari wrote: >> >> I'm a bit perplexed as to why the change was made in the way that it >> was. Making one-shot a event-manager-wide attribute seems to add a fair >> bit of complexity to the subsystem while breaking backwards >> compatibility with library code. > > > It added some complexity to the IO manager, but that should not affect > clients except those using the internal interface. > What I'm wondering is what the extra complexity bought us. It seems like the same thing could have been achieved with less breakage by making this per-fd instead of per-manager. I may be missing something, however. > >> Going forward library authors now need >> to worry about whether the system event manager is one-shot or not. > > > Yes, but only library authors using the internal interface. > > >> Not >> only is this platform dependent but it seems that there is no way for a >> user to determine which semantics the system event handler uses. > > >> Is there a reason why one-shot wasn't exported as a per-fd attribute >> instead of per-manager? Might it be possible to back out this change and >> instead add a variant of `registerFd` which exposes one-shot semantics? >> >> > The system event manager is configured by GHC.Thread using ONE_SHOT if the > system supports it. > > You can always create your own EventManager using GHC.Event.Manager.new or > GHC.Event.Manager.newWith functions. Those functions take a Bool argument > that control whether ONE_SHOT is used by the Manager returned by that > function (False means not to use ONE_SHOT). Would this work for usb? > I had considered this but looked for other options for two reasons, * `loop` isn't exported by GHC.Event * there is already a perfectly usable event loop thread in existence I'm a bit curious to know what advantages ONE_SHOT being per-manager carries over per-fd. If the advantages are large enough then we can just export `loop` and be done with it but the design as it stands strikes me as a bit odd. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From hvriedel at gmail.com Sat Oct 11 13:24:02 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 11 Oct 2014 15:24:02 +0200 Subject: Curious Windows GHCi linker behaviour .o vs. .dll Message-ID: <87y4smvix9.fsf@gmail.com> Hello *, I assume this is a well known issue to MSYS2/Windows developers, so I hope somebody may be able to provide more insight for me to better understand the underlying problem of https://github.com/haskell/time/issues/2 So the prototype for tzset() is simply void tzset(void); and it's defined in `msvcrt.dll` as far as I can tell; Consider the following trivial program: module Main where foreign import ccall unsafe "time.h tzset" c_tzset :: IO () main :: IO() main = c_tzset When compiled with GHC 7.8.3, the resulting executable works and has the following tzset-symbols: $ nm tz.o | grep tzset U tzset $ nm tz.exe | grep tzset 000000000050e408 I __imp_tzset 00000000004afc40 T tzset However, when loaded into GHCi, the RTS linker fails to find `tzset`: $ ghci tz.hs WARNING: GHCi invoked via 'ghci.exe' in *nix-like shells (cygwin-bash, in particular) doesn't handle Ctrl-C well; use the 'ghcii.sh' shell wrapper instead GHCi, version 7.8.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... 
done. Loading package base ... linking ... done. [1 of 1] Compiling Main ( tz.hs, interpreted ) ByteCodeLink: can't find label During interactive linking, GHCi couldn't find the following symbol: tzset ... However, when I prefix a `_` to the symbol-name in the FFI import, i.e. foreign import ccall unsafe "time.h tzset" c_tzset :: IO () Now, GHCi happily loads the module and is apparently able to resolve the `tzset` symbol: $ ghci tz.hs WARNING: GHCi invoked via 'ghci.exe' in *nix-like shells (cygwin-bash, in particular) doesn't handle Ctrl-C well; use the 'ghcii.sh' shell wrapper instead GHCi, version 7.8.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. [1 of 1] Compiling Main ( tz.hs, interpreted ) Ok, modules loaded: Main. *Main> Moreover, compiling and running the program still works, and the additional underscore is visible in `nm` as well: $ nm tz.o | grep tzset U _tzset $ nm tz.exe | grep tzset 000000000050e558 I __imp__tzset 00000000004b8050 T _tzset What's going on here? Why does one need to add an artificial underscore to FFI imported symbols for GHCi to resolve symbols? Is this a bug? Cheers, hvr From andreas.voellmy at gmail.com Sat Oct 11 13:31:38 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 11 Oct 2014 09:31:38 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <8761frxkh4.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> Message-ID: On Sat, Oct 11, 2014 at 1:07 AM, Ben Gamari wrote: > Thanks for your quick reply! > > > Andreas Voellmy writes: > > > On Sat, Oct 11, 2014 at 12:17 AM, Ben Gamari > wrote: > >> > >> I'm a bit perplexed as to why the change was made in the way that it > >> was. Making one-shot a event-manager-wide attribute seems to add a fair > >> bit of complexity to the subsystem while breaking backwards > >> compatibility with library code. > > > > > > It added some complexity to the IO manager, but that should not affect > > clients except those using the internal interface. > > > What I'm wondering is what the extra complexity bought us. It seems like > the same thing could have been achieved with less breakage by making > this per-fd instead of per-manager. I may be missing something, however. > > Generally, ONE_SHOT helped improve performance. I agree with you that it may be possible to do this on a per-FD basis. I'll look into what it would take to do this. > > > >> Going forward library authors now need > >> to worry about whether the system event manager is one-shot or not. > > > > > > Yes, but only library authors using the internal interface. > > > > > >> Not > >> only is this platform dependent but it seems that there is no way for a > >> user to determine which semantics the system event handler uses. > > > > > >> Is there a reason why one-shot wasn't exported as a per-fd attribute > >> instead of per-manager? Might it be possible to back out this change and > >> instead add a variant of `registerFd` which exposes one-shot semantics? > >> > >> > > The system event manager is configured by GHC.Thread using ONE_SHOT if > the > > system supports it. > > > > You can always create your own EventManager using GHC.Event.Manager.new > or > > GHC.Event.Manager.newWith functions. Those functions take a Bool argument > > that control whether ONE_SHOT is used by the Manager returned by that > > function (False means not to use ONE_SHOT). 
Would this work for usb? > > > I had considered this but looked for other options for two reasons, > > * `loop` isn't exported by GHC.Event > Right - it wouldn't make sense to export the system EventManager's loop. However, the GHC.Event.Manager module does export its loop function, so if you create your own non-ONE_SHOT event manager, you can just invoke its loop function. > * there is already a perfectly usable event loop thread in existence > > I'm a bit curious to know what advantages ONE_SHOT being per-manager > carries over per-fd. If the advantages are large enough then we can just > export `loop` and be done with it but the design as it stands strikes me > as a bit odd. > I suspect that a per-FD design would perform just as well, but I need to look at the details to be sure. Cheers, > > - Ben > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Oct 11 13:32:01 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 11 Oct 2014 09:32:01 -0400 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: <87y4smvix9.fsf@gmail.com> References: <87y4smvix9.fsf@gmail.com> Message-ID: On Sat, Oct 11, 2014 at 9:24 AM, Herbert Valerio Riedel wrote: > Moreover, compiling and running the program still works, and the > additional underscore is visible in `nm` as well: > Sounds like ghci's linker doesn't resolve weak symbols? -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.voellmy at gmail.com Sat Oct 11 14:58:55 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 11 Oct 2014 10:58:55 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> Message-ID: Another way to fix usb would be to re-register the callback after a previously registered callback is fired. Of course it is cheaper not to have to re-register, but re-registration in the latest IO manager should be fairly cheap, so this may not be a performance problem for usb. Would this work for you? You could also use a CPP directive to only do this for GHC 7.8 and up. If we want to allow usb to work unchanged, then we will have to revert to the non-ONE_SHOT behavior of registerFd and add some things to the API to allow GHC.Thread to register with ONE_SHOT behavior. Reverting could break clients of this semi-public API who have adapted to the 7.8 behavior. There probably aren't of these clients other than GHC.Thread, so this may not be a big issue. To do per-FD setting of ONE_SHOT or not, we actually need to have per-subscription settings, since there can be multiple invocations to register callbacks for a single file descriptor (e.g. from two different threads) and they might want different settings. If all the clients want ONE_SHOT we use ONE_SHOT registration, if all want persistent registrations we don't use ONE_SHOT. If it is mixed, then the manager has to choose one or the other and simulate the required behavior for the other registrations (e.g. choose persistent and automatically unregister for ONE_SHOT registrations). We could either always make the same choice (e.g. if there is a mix, use persistent), or we could have per-FD setting that is configurable by clients. 
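To make the re-registration workaround concrete, here is a rough sketch against the public GHC.Event interface as it stood in base 4.7. The watchRead helper and the __GLASGOW_HASKELL__ >= 708 guard are illustrative assumptions, not code taken from the usb package, and whether re-arming from inside the callback interacts safely with the manager's internal locking would still need to be checked.

{-# LANGUAGE CPP #-}
import GHC.Event (EventManager, FdKey, Event, evtRead, registerFd)
import System.Posix.Types (Fd)

-- Keep read interest on an fd alive by re-arming the registration each
-- time the callback fires: a ONE_SHOT manager drops the registration
-- after delivering an event, older managers keep it, so only re-arm on
-- GHC >= 7.8.
watchRead :: EventManager -> Fd -> IO () -> IO ()
watchRead mgr fd action = arm
  where
    arm = do
      _ <- registerFd mgr callback fd evtRead
      return ()
    callback :: FdKey -> Event -> IO ()
    callback _key _evt = do
      action
#if __GLASGOW_HASKELL__ >= 708
      arm
#endif

The manager itself would come from GHC.Event.getSystemEventManager, and a real version would also hold on to the FdKey so the watch can be unregistered.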
Andi On Sat, Oct 11, 2014 at 9:31 AM, Andreas Voellmy wrote: > > > On Sat, Oct 11, 2014 at 1:07 AM, Ben Gamari > wrote: > >> Thanks for your quick reply! >> >> >> Andreas Voellmy writes: >> >> > On Sat, Oct 11, 2014 at 12:17 AM, Ben Gamari >> wrote: >> >> >> >> I'm a bit perplexed as to why the change was made in the way that it >> >> was. Making one-shot a event-manager-wide attribute seems to add a >> fair >> >> bit of complexity to the subsystem while breaking backwards >> >> compatibility with library code. >> > >> > >> > It added some complexity to the IO manager, but that should not affect >> > clients except those using the internal interface. >> > >> What I'm wondering is what the extra complexity bought us. It seems like >> the same thing could have been achieved with less breakage by making >> this per-fd instead of per-manager. I may be missing something, however. >> >> > Generally, ONE_SHOT helped improve performance. I agree with you that it > may be possible to do this on a per-FD basis. I'll look into what it would > take to do this. > > > >> > >> >> Going forward library authors now need >> >> to worry about whether the system event manager is one-shot or not. >> > >> > >> > Yes, but only library authors using the internal interface. >> > >> > >> >> Not >> >> only is this platform dependent but it seems that there is no way for a >> >> user to determine which semantics the system event handler uses. >> > >> > >> >> Is there a reason why one-shot wasn't exported as a per-fd attribute >> >> instead of per-manager? Might it be possible to back out this change >> and >> >> instead add a variant of `registerFd` which exposes one-shot semantics? >> >> >> >> >> > The system event manager is configured by GHC.Thread using ONE_SHOT if >> the >> > system supports it. >> > >> > You can always create your own EventManager using GHC.Event.Manager.new >> or >> > GHC.Event.Manager.newWith functions. Those functions take a Bool >> argument >> > that control whether ONE_SHOT is used by the Manager returned by that >> > function (False means not to use ONE_SHOT). Would this work for usb? >> > >> I had considered this but looked for other options for two reasons, >> >> * `loop` isn't exported by GHC.Event >> > > Right - it wouldn't make sense to export the system EventManager's loop. > However, the GHC.Event.Manager module does export its loop function, so if > you create your own non-ONE_SHOT event manager, you can just invoke its > loop function. > > >> * there is already a perfectly usable event loop thread in existence >> >> I'm a bit curious to know what advantages ONE_SHOT being per-manager >> carries over per-fd. If the advantages are large enough then we can just >> export `loop` and be done with it but the design as it stands strikes me >> as a bit odd. >> > > I suspect that a per-FD design would perform just as well, but I need to > look at the details to be sure. > > > Cheers, >> >> - Ben >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chengang31 at gmail.com Sat Oct 11 15:04:57 2014 From: chengang31 at gmail.com (cg) Date: Sat, 11 Oct 2014 23:04:57 +0800 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: <87y4smvix9.fsf@gmail.com> References: <87y4smvix9.fsf@gmail.com> Message-ID: On 10/11/2014 9:24 PM, Herbert Valerio Riedel wrote: > Consider the following trivial program: > > module Main where > > foreign import ccall unsafe "time.h tzset" c_tzset :: IO () > > main :: IO() > main = c_tzset > [...] 
> However, when loaded into GHCi, the RTS linker fails to find `tzset`: > > $ ghci tz.hs > [...] > ByteCodeLink: can't find label > During interactive linking, GHCi couldn't find the following symbol: > tzset > Strange, I tried it under HaskellPlatform-2014.2, it works, I didn't see the failure. And I tried it in both Windows cmd and msys2 shell. > However, when I prefix a `_` to the symbol-name in the FFI import, i.e. > > foreign import ccall unsafe "time.h tzset" c_tzset :: IO () > I guess it should read: foreign import ccall unsafe "time.h _tzset" c_tzset :: IO () It works too. Actually both _tzset and tzset exist in include/time.h, only tzset is old style name. They will be linked as the same function __imp__tzset. -- cg From hvriedel at gmail.com Sat Oct 11 15:44:12 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 11 Oct 2014 17:44:12 +0200 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: (cg's message of "Sat, 11 Oct 2014 23:04:57 +0800") References: <87y4smvix9.fsf@gmail.com> Message-ID: <87tx3avcfn.fsf@gmail.com> On 2014-10-11 at 17:04:57 +0200, cg wrote: [...] > [...] >> ByteCodeLink: can't find label >> During interactive linking, GHCi couldn't find the following symbol: >> tzset > > Strange, I tried it under HaskellPlatform-2014.2, it works, I didn't > see the > failure. And I tried it in both Windows cmd and msys2 shell. Well, I basically used a MSYS2 environment setup according to https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows >> However, when I prefix a `_` to the symbol-name in the FFI import, i.e. >> >> foreign import ccall unsafe "time.h tzset" c_tzset :: IO () >> > > I guess it should read: > foreign import ccall unsafe "time.h _tzset" c_tzset :: IO () > > It works too. Yes, sorry, I forgot to add that leading underscore :-/ > Actually both _tzset and tzset exist in include/time.h, only tzset is old > style name. They will be linked as the same function __imp__tzset. What do you mean by "old style"? And more importantly, what foreign-import line shall be used that works both on Windows and non-Windows platforms, compiled as well as interpreted in GHCi? Note also that I reduced the original problem to a much smaller repro-case here, the time-library actually has an additional redirection: The `tzset()` call is made inside a C function in `cbits/HsTime.c` which in turn is then foreign-imported. So in this case, the GHCi linker fails to resolve the correctly referenced `tzset()`. To me this sounds more and more like a serious bug in GHCi's linker. PS: If I run ./validate on GHC HEAD, several of the GHCi testcases such as ghci/prog001 prog001 [bad stderr] (ghci) ghci/prog002 prog002 [bad stderr] (ghci) ghci/prog003 prog003 [bad stderr] (ghci) ghci/prog012 prog012 [bad stderr] (ghci) ghci/prog013 prog013 [bad stderr] (ghci) fail for me due to not being able to load the `time` package (due to tzset). Cheers, hvr From bgamari.foss at gmail.com Sat Oct 11 15:52:48 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 11 Oct 2014 11:52:48 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> Message-ID: <871tqey567.fsf@gmail.com> Andreas Voellmy writes: > Another way to fix usb would be to re-register the callback after a > previously registered callback is fired. 
Of course it is cheaper not to > have to re-register, but re-registration in the latest IO manager should be > fairly cheap, so this may not be a performance problem for usb. Would this > work for you? > > You could also use a CPP directive to only do this for GHC 7.8 and up. > This is a possibility that I had considered. It would require that some code be reworked however. I'm leaning towards just using our non-event-manager-driven fallback path on GHC 7.8 for now until the event manager semantics can be worked out. > If we want to allow usb to work unchanged, then we will have to revert to > the non-ONE_SHOT behavior of registerFd and add some things to the API to > allow GHC.Thread to register with ONE_SHOT behavior. Reverting could break > clients of this semi-public API who have adapted to the 7.8 behavior. > There probably aren't of these clients other than GHC.Thread, so this may > not be a big issue. > I don't think we need to revert the changes just for usb as we have the fallback path that will work for now. I do think it might be good to explore other points in the design space, however. > To do per-FD setting of ONE_SHOT or not, we actually need to have > per-subscription settings, since there can be multiple invocations to > register callbacks for a single file descriptor (e.g. from two different > threads) and they might want different settings. > Agreed. > If all the clients want > ONE_SHOT we use ONE_SHOT registration, if all want persistent registrations > we don't use ONE_SHOT. If it is mixed, then the manager has to choose one > or the other and simulate the required behavior for the other registrations > (e.g. choose persistent and automatically unregister for ONE_SHOT > registrations). We could either always make the same choice (e.g. if there > is a mix, use persistent), or we could have per-FD setting that is > configurable by clients. > I would think we would actually want the desired one-shottedness to be a property of the registration. We would have, -- | Will this registration be valid until unregistration ('ManyShot') -- or only for a single event ('OneShot')? data Lifetime = OneShot | ManyShot registerFd :: EventManager -> Fd -> Event -> Lifetime -> IO FdKey The event manager would then have to choose either ONE_SHOT or not in the case of heterogenous registrations and emulate the other set, as you said. This seems like a nice interface as it allows the user to specify the semantics that they want (instead of working around whatever the manager happens to provide) and gives the the event manager enough knowledge and freedom to do what it can to efficiently implement what is needed. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From bgamari.foss at gmail.com Sat Oct 11 16:17:45 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 11 Oct 2014 12:17:45 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> Message-ID: <87y4smwpg6.fsf@gmail.com> Andreas Voellmy writes: > On Sat, Oct 11, 2014 at 1:07 AM, Ben Gamari wrote: > >> Thanks for your quick reply! >> >> >> What I'm wondering is what the extra complexity bought us. It seems like >> the same thing could have been achieved with less breakage by making >> this per-fd instead of per-manager. I may be missing something, however. >> >> > Generally, ONE_SHOT helped improve performance. 
> Sure, I would certainly believe this. > I agree with you that it may be possible to do this on a per-FD > basis. I'll look into what it would take to do this. > I've started playing around with the code to see what might be possible here. We'll see how far I get. >> I had considered this but looked for other options for two reasons, >> >> * `loop` isn't exported by GHC.Event >> > > Right - it wouldn't make sense to export the system EventManager's loop. > The system EventManager's loop is `GHC.Event.Manager.loop`, no? > However, the GHC.Event.Manager module does export its loop function, so if > you create your own non-ONE_SHOT event manager, you can just invoke its > loop function. > Right, but `GHC.Event.Manager` is not exported by `base`. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From andreas.voellmy at gmail.com Sat Oct 11 17:54:46 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 11 Oct 2014 13:54:46 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <87y4smwpg6.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> Message-ID: On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari wrote: > Andreas Voellmy writes: > > > On Sat, Oct 11, 2014 at 1:07 AM, Ben Gamari > wrote: > > > >> Thanks for your quick reply! > >> > >> > >> What I'm wondering is what the extra complexity bought us. It seems like > >> the same thing could have been achieved with less breakage by making > >> this per-fd instead of per-manager. I may be missing something, however. > >> > >> > > Generally, ONE_SHOT helped improve performance. > > > Sure, I would certainly believe this. > > > I agree with you that it may be possible to do this on a per-FD > > basis. I'll look into what it would take to do this. > > > I've started playing around with the code to see what might be possible > here. We'll see how far I get. > > >> I had considered this but looked for other options for two reasons, > >> > >> * `loop` isn't exported by GHC.Event > >> > > > > Right - it wouldn't make sense to export the system EventManager's loop. > > > The system EventManager's loop is `GHC.Event.Manager.loop`, no? > Yes, but it will be invoked by GHC.Thread and any other callers of it will simply block indefinitely waiting for the thread that is running loop to give it up - which will typically never happen. > > > However, the GHC.Event.Manager module does export its loop function, so > if > > you create your own non-ONE_SHOT event manager, you can just invoke its > > loop function. > > > Right, but `GHC.Event.Manager` is not exported by `base`. > Ah... so this is not useful to you. I guess we could add `loop` to GHC.Event's export list. On the other hand, I like your LifeTime proposal better and then no one needs `loop`, so let's try this first. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... 
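To see how the proposed per-registration interface would read from the client side, here is a purely hypothetical sketch. Lifetime, OneShot, ManyShot and the five-argument registration function are only the proposal from this thread (with the callback argument assumed to stay where it is today); none of this is exported by base 4.7, so registerFd' below is just a placeholder.

import GHC.Event (EventManager, IOCallback, FdKey, Event, evtRead)
import System.Posix.Types (Fd)

-- Proposed, not existing: the lifetime becomes a property of the
-- registration rather than of the manager.
data Lifetime = OneShot | ManyShot

-- Placeholder standing in for the proposed primitive.
registerFd' :: EventManager -> IOCallback -> Fd -> Event -> Lifetime -> IO FdKey
registerFd' = undefined

-- A usb-style client keeps its interest alive across events:
watchPersistent :: EventManager -> IOCallback -> Fd -> IO FdKey
watchPersistent mgr cb fd = registerFd' mgr cb fd evtRead ManyShot

-- threadWaitRead-style code wants exactly one wakeup:
waitOnce :: EventManager -> IOCallback -> Fd -> IO FdKey
waitOnce mgr cb fd = registerFd' mgr cb fd evtRead OneShot

With something of this shape the manager is free to use ONE_SHOT or persistent registration underneath and emulate whichever side the kernel interface does not provide directly.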
URL: From bgamari.foss at gmail.com Sat Oct 11 21:39:44 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Sat, 11 Oct 2014 17:39:44 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> Message-ID: <87k346wajj.fsf@gmail.com> Andreas Voellmy writes: > On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari wrote: > > Yes, but it will be invoked by GHC.Thread and any other callers of it will > simply block indefinitely waiting for the thread that is running loop to > give it up - which will typically never happen. > Right. >> > However, the GHC.Event.Manager module does export its loop function, so if >> > you create your own non-ONE_SHOT event manager, you can just invoke its >> > loop function. >> > >> Right, but `GHC.Event.Manager` is not exported by `base`. >> > > Ah... so this is not useful to you. I guess we could add `loop` to > GHC.Event's export list. On the other hand, I like your LifeTime proposal > better and then no one needs `loop`, so let's try this first. > I have a first cut of this here [1]. It compiles but would be I shocked if it ran. All of the pieces are there but I need to change EventLifetime to a more efficient encoding (there's no reason why it needs to be more than an Int). Sadly I have to run for the night and will be on a bike ride tomorrow but perhaps I can come back to it on Monday. Feel free to read it over and see if I missed something. Cheers, - Ben [1] https://github.com/bgamari/packages-base/compare/ghc:ghc-7.8...event-rework -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From chengang31 at gmail.com Sun Oct 12 02:30:13 2014 From: chengang31 at gmail.com (cg) Date: Sun, 12 Oct 2014 10:30:13 +0800 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: <87tx3avcfn.fsf@gmail.com> References: <87y4smvix9.fsf@gmail.com> <87tx3avcfn.fsf@gmail.com> Message-ID: On 10/11/2014 11:44 PM, Herbert Valerio Riedel wrote: > > Well, I basically used a MSYS2 environment setup according to > https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows > I reproduced the issue with ghc-7.8.3-x86_64. Are you using 64-bit ghc? If so, it looks the issue is 64-bit only. > > >> Actually both _tzset and tzset exist in include/time.h, only tzset is old >> style name. They will be linked as the same function __imp__tzset. > > What do you mean by "old style"? And more importantly, what > foreign-import line shall be used that works both on Windows and > non-Windows platforms, compiled as well as interpreted in GHCi? > I meant OLDNAME in MS's jargon, because they deprecate tzset [1], then call it 'old'. But it it still usable. [1] http://msdn.microsoft.com/en-us/library/ms235451.aspx -- cg From chengang31 at gmail.com Sun Oct 12 03:22:06 2014 From: chengang31 at gmail.com (cg) Date: Sun, 12 Oct 2014 11:22:06 +0800 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: <5437AC12.2010909@centrum.cz> Message-ID: On 10/10/2014 9:40 PM, P?li G?bor J?nos wrote: > 2014-10-10 13:30 GMT+02:00 cg : >> How can I configure to build x86_64? >> >> When I build GHC (with msys2), it always builds i386 and I haven't spotted >> the option in ./configure to choose a x86_64 release. > > This is implicitly determined by the toolchain you use. So, probably > you have the i686 msys2 installed, while you would need the x86_64 > version. 
Given, that your operating system (and thus your hardware) > is also x86_64. > It turns out if I have 64-bit prebuilt ghc installed and exported in PATH, the build system will detect it and build a 64-bit ghc from source code. -- cg From hvriedel at gmail.com Sun Oct 12 10:11:52 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 12 Oct 2014 12:11:52 +0200 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: (cg's message of "Sun, 12 Oct 2014 10:30:13 +0800") References: <87y4smvix9.fsf@gmail.com> <87tx3avcfn.fsf@gmail.com> Message-ID: <87y4sld2c7.fsf@gmail.com> Hello! On 2014-10-12 at 04:30:13 +0200, cg wrote: [...] > Are you using 64-bit ghc? If so, it looks the issue is 64-bit only. Indeed, I have only set up 64bit CygWin & MSYS2 environments so far. >>> Actually both _tzset and tzset exist in include/time.h, only tzset is old >>> style name. They will be linked as the same function __imp__tzset. >> >> What do you mean by "old style"? And more importantly, what >> foreign-import line shall be used that works both on Windows and >> non-Windows platforms, compiled as well as interpreted in GHCi? >> > > I meant OLDNAME in MS's jargon, because they deprecate tzset [1], > then call it 'old'. But it it still usable. > > [1] http://msdn.microsoft.com/en-us/library/ms235451.aspx Ok, thanks for clairification, so I see there are actually two entangled issues here: 1) When coding directly against the MSVCRT, one is supposed to use the underscore-prefixed POSIX symbols, like e.g `_tzset()`. However, when targetting CygWin, using the proper `tzset()` POSIX name is the recommended course of action. To this end, I've submitted https://github.com/haskell/time/pull/4 (I hope that works for 32bit MSYS2 environments as well) Personally, I think this was a very questionable decision on Microsoft's part, as this way you effectively destroy any chance to simply compile existing POSIX-compatible source code for no good reason... 2) The other issue seems to be that while linking a package using `tzset()` into a `.exe`, `tzset()` gets resolved just fine, however as soon as GHCi's linker is used to resolve `tzset()` contained in that package, it fails. At this point, I still consider this a bug. It was suggested by Brandon, that GHCi's linker fails to resolve weak symbols. From allbery.b at gmail.com Sun Oct 12 12:45:00 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 12 Oct 2014 08:45:00 -0400 Subject: Curious Windows GHCi linker behaviour .o vs. .dll In-Reply-To: <87y4sld2c7.fsf@gmail.com> References: <87y4smvix9.fsf@gmail.com> <87tx3avcfn.fsf@gmail.com> <87y4sld2c7.fsf@gmail.com> Message-ID: On Sun, Oct 12, 2014 at 6:11 AM, Herbert Valerio Riedel wrote: > Personally, I think this was a very questionable decision on > Microsoft's part, as this way you effectively destroy any chance to > simply compile existing POSIX-compatible source code for no good > reason... > POSIX doesn't specify asm or linker level symbols, only C API. Most Unix-like platforms have an underscore on the front of symbol names at link level, so that the API doesn't have to avoid random platform-specific register names or the assembler need to have magic prefixes on either symbols or register names. So in fact, by adding the prefix underscore they are *more* compatible with Unix linkage, and presumably the FFI for Windows needs to start adding it the way the one for Unix does. 
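Until something like that happens, a binding can select the spelling itself with CPP. A minimal sketch, assuming the usual mingw32_HOST_OS test is the right discriminator; whether the actual fix in haskell/time#4 takes exactly this shape is not shown in this thread:

{-# LANGUAGE CPP, ForeignFunctionInterface #-}
module TzSet (c_tzset) where

-- MSDN documents _tzset as the supported spelling for the Microsoft C
-- runtime, while POSIX toolchains export plain tzset, so pick the
-- symbol name at compile time.
#if defined(mingw32_HOST_OS)
foreign import ccall unsafe "time.h _tzset"
  c_tzset :: IO ()
#else
foreign import ccall unsafe "time.h tzset"
  c_tzset :: IO ()
#endif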
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Sun Oct 12 16:40:16 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sun, 12 Oct 2014 17:40:16 +0100 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: <5437AC12.2010909@centrum.cz> References: <5437AC12.2010909@centrum.cz> Message-ID: <543AAEF0.7070204@gmail.com> Sorry about this folks, I'm going to guess at the fix and validate it. Cheers, Simon On 10/10/2014 10:51, Karel Gardas wrote: > On 10/10/14 09:19 AM, P?li G?bor J?nos wrote: >> Hello there, >> >> Looks one of the recent commits broke the x86 builds on multiple >> platforms [1][2][3]. The common error message basically is as >> follows: >> >> rts/Linker.c: In function 'mkOc': >> rts/Linker.c:2372:6: >> error: 'ObjectCode' has no member named 'symbol_extras' >> > > It looks like this patch is the culprit: > > 5300099ed rts/Linker.c (Simon Marlow 2014-10-01 > 13:15:05 +0100 2372) oc->symbol_extras = NULL; > >> Note that the x86_64 counterparts of the affected builders completed >> their builds fine -- except for Solaris, but that has been broken >> already for a long time (relatively). > > Yeah, I'll need to fix that. So far I invested my energy in fixing > Solaris/i386 testcase issues... > > Karel From gintautas.miliauskas at gmail.com Sun Oct 12 21:22:40 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Sun, 12 Oct 2014 23:22:40 +0200 Subject: msys2 python2 for running tests Message-ID: Hi Herbert, I saw your comment on the Windows build page about msys2's python2. It used to be broken, a patch of mine recently went through to fix the problem. If you can verify that msys2's python2 works correctly for the entire test suite, feel free to update the instructions accordingly. Also, it looks like you added a section for setting up sshd on msys2. I think it's great, but I don't think it should be on the main ghc build page, it's just a separate concern (and it's also interesting not just to people building ghc). Would you mind moving that text off to a separate wikipage and adding a link to it instead? -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas at miliauskas.lt Sun Oct 12 22:06:23 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Mon, 13 Oct 2014 00:06:23 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <5427B3E8.6040802@mail.ru> References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> <5427B3E8.6040802@mail.ru> Message-ID: > However, overall (not GHC use cases) gcc 4.9.1 still looks more buggy on > Windows than 4.8.3. 'Mingw-builds' project (which is now a part of > mingw-w64 project and is considered to be an "official" mingw-w64 gcc > distribution and is maintained by a man close to Msys2 project) has very > nice and complete build of 4.8.3 (64-bit build, for example, is here: > http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting% > 20Win64/Personal%20Builds/mingw-builds/4.8.3/threads-posix/seh/). > I am looking into migrating ghc to the newer gcc package linked above. There is also the associated question of what to do with ghc-tarballs. Here's an idea: how about downloading the mingw package directly from sourceforge at configure time with curl/wget? 
That should work pretty well. I see two potential issues: 1. URL stability. The sourceforge repo is not in our control and files in it could go away any time. The repo seems stable though, with some files from 2011. If we're concerned about this, copying the file to a domain under our control should not be a problem. I did not get any responses about who to contact about that though... 2. Download failing due to internet connectivity issues or missing proxy configuration. This could easily be addressed by printing a message with a URL and a filesystem location to put the file in the case that the download fails. If there are no objections, I'll proceed with whipping up a patch. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Sun Oct 12 22:28:40 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Mon, 13 Oct 2014 00:28:40 +0200 Subject: Problems adding a custom section to a Windows binary Message-ID: This is slightly offtopic, but maybe some of the Windows folks know an answer to this question. I've been working on #9686 , but have been blocked by Windows rejecting all my attempts to add a custom section to a binary withthe error "bash: ./c.exe: cannot execute binary file: Exec format error". I'm not sure if this is a gcc/binutils bug or not (the exact same commands with the same tools work fine on Linux binaries). I have filed http://sourceforge.net/p/mingw/bugs/2239 and https://sourceware.org/bugzilla/show_bug.cgi?id=17466 to see what the developers say, but have not gotten a response yet. Perhaps someone here knows what's going on? -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Oct 12 22:33:20 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 12 Oct 2014 18:33:20 -0400 Subject: Problems adding a custom section to a Windows binary In-Reply-To: References: Message-ID: On Sun, Oct 12, 2014 at 6:28 PM, Gintautas Miliauskas < gintautas.miliauskas at gmail.com> wrote: > I'm not sure if this is a gcc/binutils bug or not (the exact same commands > with the same tools work fine on Linux binaries). There are huge differences between Linux ELF and Windows PE32/PE64; it would not be surprising if libbfd had bugs in the latter but not the former. Beyond that, I don't know. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Mon Oct 13 01:56:47 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 12 Oct 2014 18:56:47 -0700 Subject: Problems adding a custom section to a Windows binary In-Reply-To: References: Message-ID: <1413165385-sup-9737@sabre> My suggestion is to have GHC spit some assembler that it is already generating, and see if custom sections are used at any point. Edward Excerpts from Gintautas Miliauskas's message of 2014-10-12 15:28:40 -0700: > This is slightly offtopic, but maybe some of the Windows folks know an > answer to this question. I've been working on #9686 > , but have been blocked by > Windows rejecting all my attempts to add a custom section to a binary > withthe error "bash: ./c.exe: cannot execute binary file: Exec format > error". 
> > I'm not sure if this is a gcc/binutils bug or not (the exact same commands > with the same tools work fine on Linux binaries). I have filed > http://sourceforge.net/p/mingw/bugs/2239 and > https://sourceware.org/bugzilla/show_bug.cgi?id=17466 to see what the > developers say, but have not gotten a response yet. Perhaps someone here > knows what's going on? > From lonetiger at gmail.com Mon Oct 13 04:11:33 2014 From: lonetiger at gmail.com (Tamar Christina) Date: Sun, 12 Oct 2014 21:11:33 -0700 Subject: Building ghc on Windows with msys2 Message-ID: <-9191884240881505436@unknownmsgid> Hi Gintautas, This seems like a good idea to me. I was also wondering would it be a good idea to have configure also handle cabal/happy/alex in a similar way? It would be another step you don't manually have to do and would make it easier to insure that the right versions are installed. Regards, Tamar ------------------------------ From: Gintautas Miliauskas Sent: ?13/?10/?2014 00:06 To: kyra Cc: ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 > However, overall (not GHC use cases) gcc 4.9.1 still looks more buggy on > Windows than 4.8.3. 'Mingw-builds' project (which is now a part of > mingw-w64 project and is considered to be an "official" mingw-w64 gcc > distribution and is maintained by a man close to Msys2 project) has very > nice and complete build of 4.8.3 (64-bit build, for example, is here: > http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting% > 20Win64/Personal%20Builds/mingw-builds/4.8.3/threads-posix/seh/). > I am looking into migrating ghc to the newer gcc package linked above. There is also the associated question of what to do with ghc-tarballs. Here's an idea: how about downloading the mingw package directly from sourceforge at configure time with curl/wget? That should work pretty well. I see two potential issues: 1. URL stability. The sourceforge repo is not in our control and files in it could go away any time. The repo seems stable though, with some files from 2011. If we're concerned about this, copying the file to a domain under our control should not be a problem. I did not get any responses about who to contact about that though... 2. Download failing due to internet connectivity issues or missing proxy configuration. This could easily be addressed by printing a message with a URL and a filesystem location to put the file in the case that the download fails. If there are no objections, I'll proceed with whipping up a patch. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 13 08:57:10 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Oct 2014 08:57:10 +0000 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> <5427B3E8.6040802@mail.ru> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F32DD78@DB3PRD3001MB020.064d.mgd.msft.net> I think the potential difficulty is (1). Maybe they take it down (e.g. they move on to version X so they take down old version Y). An alternative would be to stash a copy somewhere on GHC?s main web server, and wget that. I?d be more comfortable doing that; less dependence on others. but I am a babe in these particular woods, and defer to others wisdom. 
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Gintautas Miliauskas Sent: 12 October 2014 23:06 To: kyra Cc: ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 However, overall (not GHC use cases) gcc 4.9.1 still looks more buggy on Windows than 4.8.3. 'Mingw-builds' project (which is now a part of mingw-w64 project and is considered to be an "official" mingw-w64 gcc distribution and is maintained by a man close to Msys2 project) has very nice and complete build of 4.8.3 (64-bit build, for example, is here: http://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/4.8.3/threads-posix/seh/). I am looking into migrating ghc to the newer gcc package linked above. There is also the associated question of what to do with ghc-tarballs. Here's an idea: how about downloading the mingw package directly from sourceforge at configure time with curl/wget? That should work pretty well. I see two potential issues: 1. URL stability. The sourceforge repo is not in our control and files in it could go away any time. The repo seems stable though, with some files from 2011. If we're concerned about this, copying the file to a domain under our control should not be a problem. I did not get any responses about who to contact about that though... 2. Download failing due to internet connectivity issues or missing proxy configuration. This could easily be addressed by printing a message with a URL and a filesystem location to put the file in the case that the download fails. If there are no objections, I'll proceed with whipping up a patch. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Oct 13 09:12:01 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 13 Oct 2014 11:12:01 +0200 Subject: GHC HEAD breakage by pthread_setname_np usage. Message-ID: <543B9761.8020508@centrum.cz> Hello Simon, I'm sorry to disturb you, but your recent patch: commit 674c631ea111233daa929ef63500d75ba0db8858 Author: Simon Marlow Date: Fri Oct 10 14:26:19 2014 +0100 Name worker threads using pthread_setname_np This helps identify threads in gdb particularly in processes with a lot of threads. breaks build on FreeBSD and Solaris at least. The problem is that pthread_setname_np is GNU extension and so far I've seen it just in linux using glibc >=2.12, modern NetBSD and modern QNX. Builds on Solaris and FreeBSD result in unresolved symbol failure. Two examples showing this are here: http://haskell.inf.elte.hu/builders/smartos-x86-head/147/10.html http://haskell.inf.elte.hu/builders/freebsd-amd64-head/411/10.html The problem is that I cannot simply #ifdef usage of this since it's using `name' parameter of the createOSThread function and if I #ifdef this, then the compiler emits obvious warning: rts/posix/OSThreads.c: In function ?createOSThread?: rts/posix/OSThreads.c:132:40: warning: unused parameter ?name? which is probably going to break validate build which builds with -Werror. Perhaps if pthread_setname_np is not available on the target platform we can define it ourself (as empty macro or inline function doing nothing?) and use that, but in this case we would probably need proper configure check for the presence of this function. As rts is your domain, I'm just writing this in a hope that you will either revert the patch for now or solve it in a way you like. Thanks! 
Karel

From marlowsd at gmail.com Mon Oct 13 09:15:10 2014
From: marlowsd at gmail.com (Simon Marlow)
Date: Mon, 13 Oct 2014 10:15:10 +0100
Subject: GHC HEAD breakage by pthread_setname_np usage.
In-Reply-To: <543B9761.8020508@centrum.cz>
References: <543B9761.8020508@centrum.cz>
Message-ID: <543B981E.8000705@gmail.com>

Thanks for letting me know, I'll add a configure test to check for this.

Cheers, Simon

On 13/10/2014 10:12, Karel Gardas wrote:
>
> Hello Simon,
>
> I'm sorry to disturb you, but your recent patch:
>
> commit 674c631ea111233daa929ef63500d75ba0db8858
> Author: Simon Marlow
> Date: Fri Oct 10 14:26:19 2014 +0100
>
> Name worker threads using pthread_setname_np
>
> This helps identify threads in gdb particularly in processes with a
> lot of threads.
>
> breaks build on FreeBSD and Solaris at least. The problem is that
> pthread_setname_np is GNU extension and so far I've seen it just in
> linux using glibc >=2.12, modern NetBSD and modern QNX. Builds on
> Solaris and FreeBSD result in unresolved symbol failure. Two examples
> showing this are here:
>
> http://haskell.inf.elte.hu/builders/smartos-x86-head/147/10.html
> http://haskell.inf.elte.hu/builders/freebsd-amd64-head/411/10.html
>
> The problem is that I cannot simply #ifdef usage of this since it's
> using `name' parameter of the createOSThread function and if I #ifdef
> this, then the compiler emits obvious warning:
>
> rts/posix/OSThreads.c: In function ‘createOSThread’:
>
> rts/posix/OSThreads.c:132:40: warning: unused parameter ‘name’
>
> which is probably going to break validate build which builds with -Werror.
>
> Perhaps if pthread_setname_np is not available on the target platform we
> can define it ourself (as empty macro or inline function doing nothing?)
> and use that, but in this case we would probably need proper configure
> check for the presence of this function.
>
> As rts is your domain, I'm just writing this in a hope that you will
> either revert the patch for now or solve it in a way you like.
>
> Thanks!
> Karel

From hvriedel at gmail.com Mon Oct 13 09:34:30 2014
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Mon, 13 Oct 2014 11:34:30 +0200
Subject: Building ghc on Windows with msys2
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F32DD78@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Mon, 13 Oct 2014 08:57:10 +0000")
References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> <5427B3E8.6040802@mail.ru> <618BE556AADD624C9C918AA5D5911BEF3F32DD78@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <87k344uxcp.fsf@gmail.com>

On 2014-10-13 at 10:57:10 +0200, Simon Peyton Jones wrote:
> I think the potential difficulty is (1). Maybe they take it down (e.g. they move on to version X so they take down old version Y).
>
> An alternative would be to stash a copy somewhere on GHC's main web
> server, and wget that. I'd be more comfortable doing that; less
> dependence on others.

I guess storing a copy somewhere on https://ghc.haskell.org/ should be ok (I'm hoping Austin may weigh in wrt CDN-related considerations). I'd suggest using it as a fallback location though. I.e. try downloading from the official upstream location, and if that fails (either due to I/O errors and/or unexpected checksum), fallback to using our locally mirrored copy.
However, we may need to take into account license issues, such as hosting the source-code as well, if we host binary distributions depending on the licenses involved (I'm not sure if this was ever considered for ghc-tarballs.git to begin with) Cheers, hvr From alan.zimm at gmail.com Mon Oct 13 09:47:28 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 13 Oct 2014 11:47:28 +0200 Subject: [GHC] #9628: Add Annotations to the AST to simplify source to source conversions In-Reply-To: <059.a7793e166a4ed0fc5d81fcf92c3ab795@haskell.org> References: <044.285fa4db7fb10488df4811e6070f6acb@haskell.org> <059.a7793e166a4ed0fc5d81fcf92c3ab795@haskell.org> Message-ID: Ok, will do. An integer can potentially have any number of leading zeros, and I will have to check what escaping exists in the others. On Mon, Oct 13, 2014 at 11:33 AM, GHC wrote: > #9628: Add Annotations to the AST to simplify source to source conversions > -------------------------------------+------------------------------------- > Reporter: alanz | Owner: alanz > Type: feature | Status: new > request | Milestone: > Priority: normal | Version: 7.9 > Component: Compiler | Keywords: > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Unknown > Unknown/Multiple | Blocked By: > Type of failure: | Related Tickets: > None/Unknown | > Test Case: | > Blocking: | > Differential Revisions: D297 | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > I suggest doing so only if the two can differ. In the case of `String` > there can be string gaps, thus > {{{ > foo :: String > foo = "blah blah\ > \more blah blah\ > \and more" > }}} > and I guess you want to have all that layout reproduced. Fine. But for > integers like `3234242329423`, I don't see how the displayed form could > differ. > > For `Words` perhaps there is binary/hex forms? > > Regardless, I'm not against this, but very keen that the reasons for > keeping the two are documented on a per-literal basis, as I have begun to > do above. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 13 11:50:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Oct 2014 11:50:03 +0000 Subject: Building ghc on Windows with msys2 In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2223DA32@DB3PRD3001MB020.064d.mgd.msft.net> <543617ea.aa67b40a.48c1.ffff83a1@mx.google.com> <618BE556AADD624C9C918AA5D5911BEF3F325DDA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F330E46@DB3PRD3001MB020.064d.mgd.msft.net> Thank you, that is brilliant. Simon From: Gintautas Miliauskas [mailto:gintautas.miliauskas at gmail.com] Sent: 10 October 2014 21:01 To: Simon Peyton Jones Cc: lonetiger at gmail.com; Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 Hey, I have created https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and added the two people from whom I heard a confirmation that they want to be on the list. Please edit the page and add yourself if you should be on that list. Feel free to hack the page up and add additional info as you see fit. 
On Thu, Oct 9, 2014 at 9:51 AM, Simon Peyton Jones > wrote: I think I?m fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat In the ring. That sounds helpful, thank you. Are we at the point where we could form a GHC-on-Windows Task Force? With its own wiki page on the GHC Trac, and with named participants. (Of course you can drop off again.) But it would be really helpful to have an explicit group who feels a sense of ownership about making sure GHC works well on Windows. At the moment we are reduced to folk memory ?I recall that Gintautas did something like that a few months ago?. It sounds as if Tamar would be a willing member. Would anyone else be willing? I?d say that being a member indicates a positive willingness to help others, along with some level of expertise, NOT a promise to drop everything to attend to someone else?s problem. Simon From: lonetiger at gmail.com [mailto:lonetiger at gmail.com] Sent: 09 October 2014 06:04 To: Gintautas Miliauskas; Simon Peyton Jones Cc: Randy Polen; kyra; Marek Wawrzos; Roman Kuznetsov; Neil Mitchell; ghc-devs at haskell.org Subject: Re: Building ghc on Windows with msys2 Hi Gintautas, > Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. I think I?m fairly behind on the current build process of GHC, but as I do use GHC mainly on Windows, at such a time as you would like to move on to other things, I would certainly throw my hat In the ring. Cheers, Tamar From: Gintautas Miliauskas Sent: ?Thursday?, ?October? ?2?, ?2014 ?22?:?32 To: Simon Peyton Jones Cc: Randy Polen, kyra, Marek Wawrzos, Tamar Christina, Roman Kuznetsov, Neil Mitchell, ghc-devs at haskell.org Hi, > All we need is someone to act as convenor/coordinator and we are good to go. Would any of you be willing to play that role? Indeed, the next thing I was going to ask was about expediting the decision process. I would be happy to try and coordinate a push in Windows matters. There is a caveat though: I don't have any skin in the GHC-on-Windows game, so I will want to move on to other things afterwards. An advantage of having a working group is that you can decide things. At the moment people often wait for GHC HQ to make a decision, and end up waiting a long time. It would be better if a working group was responsible for the GHC-on-Windows build and then if (say) you want to mandate msys2, you can go ahead and mandate it. Well, obviously consult ghc-devs for advice, but you are in the lead. Does that make sense? Sounds great. The question still remains about making changes to code: is there a particular person with commit rights that we could lean on for code reviews and committing changes to the main repository? I think an early task is to replace what Neil Mitchell encountered: FIVE different wiki pages describing how to build GHC on Windows. We want just one! (Others can perhaps be marked ?out of date/archive? rather than deleted, but it should be clear which is the main choice.) Indeed, it's a bit of a mess. I intended to shape up the msys2 page to serve as the default, but wanted to see more testing done before before dropping the other pages. I agree with using msys2 as the main choice. (I?m using it myself.) 
It may be that Gintautas's page https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2 is already sufficient. Although I'd like to see it tested by others. For example, I found that it was CRUCIAL to set MSYSYSTEM=MINGW whereas Gintautas's page says nothing about that.

Are you sure that is a problem? The page specifically instructs to use the msys64_shell.bat script (through a shortcut) that is included in msys2, and that script takes care of setting MSYSTEM=MINGW64, among other important things.

Other small thoughts:

- We started including the ghc-tarball stuff because when we relied directly on the gcc that came with msys, we kept getting build failures because the gcc that some random person happened to be using did not work (e.g. they had a too-old or too-new version of msys). By using a single, fixed gcc, we avoided all this pain.

Makes sense. Just curious: why is this less of a problem on GNU/Linux distros compared to msys2? Does msys2 see comparatively less testing, or is it generally more bleeding edge?

- I don't know what a "rubenvb" build is, but I think you can go ahead and say "use X and Y in this way". The important thing is that it should be reproducible, and not dependent on the particular Cygwin or gcc or whatever that user happens to have installed.

A "rubenvb" build is one of the available types of prebuilt binary packages of mingw for Windows. Let's figure out if there is something more mainstream and if we can migrate to that.

-- Gintautas Miliauskas

-- Gintautas Miliauskas

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From alexander at plaimi.net Mon Oct 13 13:56:55 2014
From: alexander at plaimi.net (Alexander Berntsen)
Date: Mon, 13 Oct 2014 15:56:55 +0200
Subject: ExtraCommas
In-Reply-To:
References: <542139EA.1010407@plaimi.net> <618BE556AADD624C9C918AA5D5911BEF22229CDC@DB3PRD3001MB020.064d.mgd.msft.net> <54213FDB.4030204@plaimi.net> <618BE556AADD624C9C918AA5D5911BEF22233E9F@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <543BDA27.8010606@plaimi.net>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Just to let everyone know ExtraCommas is still alive: Alan and I just had a pleasant chat as alluded to in the forwarded message below. Despite being busy with other engagements lately, I am now picking up the ExtraCommas work again. I will be operating on the same level as Alan is in his work, and we'll be colliding a bit, no doubt. As such I will make efforts to collaborate with Alan in making the merge process nice before I submit any patches to phab.

My plan now is simply:
-hack around and get confident with the hsSyn code (I am quite confident in the Happy code already),
-make a Wiki article with my idea and the details,
-hack hack hack,
-work with Alan to make sure merging is smooth,
-& finally, submit a patch (or several).
- -Alexander

- -------- Original Message --------
Subject: Re: Capturing commas in the GHC parser
Date: Mon, 13 Oct 2014 13:37:28 +0200
From: Alan & Kim Zimmerman
To: Alexander Berntsen

On Mon, Oct 13, 2014 at 12:30 PM, Alexander Berntsen wrote:
> Hallo Alan,
>
> I've been away for a week on a conference. In my absence I see you
> have made some posts about capturing commas in API annotations.
> Prior to leaving I was working on an ExtraCommas language pragma.
>
> I had accomplished trailing & leading commas for signature
> variables, fixity declarations, list expressions, import & export
> declarations, record updates, record declaration & maybe more that
> I don't remember.
The way I had accomplished this was by manually > editing the happy code. > > SPJ showed me that I should have been messing about a level up, > like you are doing. > > Consequently, I have a few questions. How much does your work > overlap with what I am doing? How far have you come? Do you have > some advice for what I want to do? > > If we could schedule a little meeting for us to have a chat via > instant messaging, that would be great. I am on irc.freenode.net > as , on XMPP via , and you > can call me via SIP if you want to do audio/video chat using > . Hi I am alanz on freenode, and alan.zimm at gmail.com which may still come through on XMPP via hangouts. I actually did a large part of working a HsCommaList through the AST, including all the subsequent phases (renamer,typechecker etc), but have since backed it out as I was misusing the structure firstly, and I had come up with another means of doing it. My last commit with this in is https://github.com/alanz/ghc/tree/b970d42e1e6bc46077a60a602413d990615e3896 As you can see it is quite a far-reaching change, and mingled in with my annotations stuff too. I will be happy to talk to you about this, because it makes sense for our stuff to work together, otherwise it will be much harder to merge at a later date. I am hoping to get my annotation changes to the AST and parser firmed up this week, so that I can then just try to work with it to see that I have everything captured. Is there a place I can see what you have done? Regards Alan -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlQ72icACgkQRtClrXBQc7XtzwEAt2zTN8AVkY6Jox75TcyFG9Ks l26QndI+gvszGao89xEA/RfYpGMEarMB4tM2mtIZDI3dE53LC64zOnHfzE7N3G7f =FViZ -----END PGP SIGNATURE----- From austin at well-typed.com Mon Oct 13 16:37:25 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 13 Oct 2014 11:37:25 -0500 Subject: GHC 7.8.4: call for tickets, show stoppers, and timelines - oh my! Message-ID: Hi *, After some discussion with Simon & Mikolaj today, I'd like to direct you all at this: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4 This status page is the basic overview of what we plan on doing for 7.8.4. There are two basic components to this page: - Show stopping bugs. - Everything else, which is "nice to have". Show stoppers are listed at the top of the page, in the first paragraph. Right now, this includes: - #9439 - LLVM mangling too vigorously. - #8819 - Arithmetic failures for unregistered systems - #8690 - SpecConstr blow-up And that's all. But what's all the other stuff? That's "everything else". Aside from these tickets listed here - and any future amendments to it - all other tickets will only be considered nice-to-have. What does that mean? - It's low risk to include. - It clearly fixes the problem - It doesn't take Austin significant amounts of time to merge. For example, "Tickets marked merge with no milestone" are all nice-to-have. Similarly, all the *closed tickets* on this page may be re-opened and merged again[1], since most didn't make it to 7.8.4. Ditto with the remaining categories. OK, so that's the gist. Now I ask of you the following: - If you have a show-stopping bug with GHC 7.8.3, **you really, _positively_ need to file a bug, and get in contact with me ASAP**. Otherwise you'll be waiting for 7.10 most likely. - Again: if you have a show stopper, contact me. Very soon. - If there are bugs you *think* are showstoppers, but we didn't categorize them properly, let me know. 
Anything we accept as a show-stopper will delay the release of 7.8.4. Anything else can (and possibly will) be left behind. Luckily, almost all of the show stoppers have patches. Only #8819 does not, but I have asked Sergei to look into it for me if he has time today. Finally, I would please ask that users/developers do not include their own personal pet tickets under "show stoppers" without consulting me first, at least. :) If it's just nice to have, you can still pester me, of course, and I'll try to make it happen. I would like to have 7.8.4 out and done with by mid November, before we freeze the new STABLE branch for 7.10.1. That's not a hard deadline; just a timeframe I'd like to hit. Let me know if you have any questions or comments; thanks! [1] A lot of the closed tickets on this page had an improper milestone set, which is why they show up. You can mostly ignore them, I apologize. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From murray at sonology.net Mon Oct 13 17:08:14 2014 From: murray at sonology.net (Murray Campbell) Date: Mon, 13 Oct 2014 10:08:14 -0700 Subject: Errors building GHC on iOS with LLVM >= 3.4 In-Reply-To: References: <87eguns0u9.fsf@gmail.com> Message-ID: If this is indeed unfamiliar, should I file a new ticket? Murray. On Mon, Oct 6, 2014 at 4:59 PM, Murray Campbell wrote: > On Sat, Oct 4, 2014 at 7:32 PM, Ben Gamari wrote: >> Murray Campbell writes: > > [snip] > >>> before bailing with >>> >>> /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7.s:3916:2: >>> error: out of range pc-relative fixup value >>> vldr d8, LCPI70_0 >>> ^ >>> >>> /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc6860_0/ghc6860_6-armv7s.s:3916:2: >>> error: out of range pc-relative fixup value >>> vldr d8, LCPI70_0 >>> ^ >>> >>> Next I tried to build HEAD (plus phabricator D208) with LLVM 3.4 but >>> got the same error. >>> >> I've never seen an error of this form. What symbol definitions does this >> error occur in? > > I have attached a gzipped version of the *-armv7.s file. The one I > attached is from a build with LLVM 3.5. I had to apply D208 & D155 to > get it to compile. I also had to get the > ghc-ios-scripts/arm-apple-darwin10-clang script to pick up the > homebrew clang rather than the apple one to get around an 'unknown > directive: .maosx_version_min' error. However, the vldr error is > identical to that with LLVM 3.4 building 7.8.3. > > I can get a straight 7.8.3 with LLVM 3.4 version if that would help. > > The error in the attached file is at line 5988 just below > '_c3pb_info$def: '. This is below > '_integerzmsimple_GHCziIntegerziType_doubleFromPositive_info$def:' > > The last lines of build instructions before the error are: > > "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -H64m > -O0 -this-package-key integ_FpVba29yPwl8vdmOmO0xMS > -hide-all-packages -i -ilibraries/integer-simple/. > -ilibraries/integer-simple/dist-install/build > -ilibraries/integer-simple/dist-install/build/autogen > -Ilibraries/integer-simple/dist-install/build > -Ilibraries/integer-simple/dist-install/build/autogen > -Ilibraries/integer-simple/. 
-optP-include > -optPlibraries/integer-simple/dist-install/build/autogen/cabal_macros.h > -package-key ghcpr_BE58KUgBe9ELCsPXiJ1Q2r -this-package-key > integer-simple -Wall -XHaskell2010 -XCPP -XMagicHash -XBangPatterns > -XUnboxedTuples -XUnliftedFFITypes -XNoImplicitPrelude -O -fllvm > -no-user-package-db -rtsopts -odir > libraries/integer-simple/dist-install/build -hidir > libraries/integer-simple/dist-install/build -stubdir > libraries/integer-simple/dist-install/build -c > libraries/integer-simple/./GHC/Integer/Type.hs -o > libraries/integer-simple/dist-install/build/GHC/Integer/Type.o > You are using a new version of LLVM that hasn't been tested yet! > We will try though... > > /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc80302_0/ghc80302_6-armv7.s:5988:2: > error: out of range pc-relative fixup value > vldr d8, LCPI102_0 > ^ > > /var/folders/02/0mv6cz6505x2xhzlr279k2340000gp/T/ghc80302_0/ghc80302_6-armv7s.s:5988:2: > error: out of range pc-relative fixup value > vldr d8, LCPI102_0 > ^ From bgamari.foss at gmail.com Mon Oct 13 19:05:42 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 13 Oct 2014 15:05:42 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <87k346wajj.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> Message-ID: <8761fnx01l.fsf@gmail.com> Ben Gamari writes: > Andreas Voellmy writes: > >> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari wrote: >> >> Ah... so this is not useful to you. I guess we could add `loop` to >> GHC.Event's export list. On the other hand, I like your LifeTime proposal >> better and then no one needs `loop`, so let's try this first. >> > I have a first cut of this here [1]. It compiles but would be I shocked > if it ran. All of the pieces are there but I need to change > EventLifetime to a more efficient encoding (there's no reason why it > needs to be more than an Int). > As it turns out the patch seems to get through the testsuite after a few minor fixes. What other tests can I subject this to? I'm afraid I don't have the access to any machine even close to the size of those that the original event manager was tested on so checking for performance regressions will be difficult. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From gintautas at miliauskas.lt Mon Oct 13 20:40:17 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Mon, 13 Oct 2014 22:40:17 +0200 Subject: Building ghc on Windows with msys2 In-Reply-To: <87k344uxcp.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF22217EF2@DB3PRD3001MB020.064d.mgd.msft.net> <5427B3E8.6040802@mail.ru> <618BE556AADD624C9C918AA5D5911BEF3F32DD78@DB3PRD3001MB020.064d.mgd.msft.net> <87k344uxcp.fsf@gmail.com> Message-ID: I've updated the configure script to download the mingw distribution on the fly (D339 , #9218 ). I could use some help with a few things: 1. Validating the update to gcc 4.8.3. I tried to run the tests and got some failures, but I am not sure if they indicate problems with gcc or it's just noise. 2. Some general testing of the updated configure script. 3. Testing the build process on a 32-bit platform. 4. Setup of a local mirror of the distribution files on haskell.org. 
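To pin down point 4 and the fallback scheme Herbert suggests in the quoted reply below (try the official upstream location first, check the tarball against a known checksum, and only then fall back to the ghc.haskell.org mirror), the intended control flow is small. The sketch below is purely illustrative: the URLs and the checksum are placeholders, the real logic would live in the configure machinery rather than in a Haskell program, and it simply shells out to curl and sha256sum.

import System.Exit (ExitCode(..))
import System.Process (readProcessWithExitCode)

-- Placeholder locations: official upstream first, ghc.haskell.org mirror second.
downloadLocations :: [String]
downloadLocations =
  [ "http://upstream.example.org/mingw-w64/mingw-w64.tar.xz"
  , "https://ghc.haskell.org/mingw/mingw-w64.tar.xz"
  ]

-- Placeholder checksum of the tarball we expect to get.
expectedSha256 :: String
expectedSha256 = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"

-- Fetch one URL and accept the result only if the checksum matches.
fetchAndVerify :: FilePath -> String -> IO Bool
fetchAndVerify dest url = do
  (rc, _, _) <- readProcessWithExitCode "curl" ["-fsSL", "-o", dest, url] ""
  if rc /= ExitSuccess
    then return False
    else do
      (rc', out, _) <- readProcessWithExitCode "sha256sum" [dest] ""
      return (rc' == ExitSuccess && takeWhile (/= ' ') out == expectedSha256)

-- Try each location in order; give up only when all of them fail.
fetchWithFallback :: FilePath -> IO Bool
fetchWithFallback dest = go downloadLocations
  where
    go []       = return False
    go (u : us) = do ok <- fetchAndVerify dest u
                     if ok then return True else go us

main :: IO ()
main = do
  ok <- fetchWithFallback "mingw-w64.tar.xz"
  putStrLn (if ok then "fetched and verified" else "all download locations failed")

Verifying the checksum before accepting a download is what makes the automatic fallback safe: a truncated or unexpected upstream copy fails over to the mirror instead of silently poisoning the build.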
On Mon, Oct 13, 2014 at 11:34 AM, Herbert Valerio Riedel wrote: > On 2014-10-13 at 10:57:10 +0200, Simon Peyton Jones wrote: > > I think the potential difficulty is (1). Maybe they take it down (e.g. > they move on to version X so they take down old version Y). > > > > An alternative would be to stash a copy somewhere on GHC?s main web > > server, and wget that. I?d be more comfortable doing that; less > > dependence on others. > > I guess storing a copy somewhere on https://ghc.haskell.org/ should be > ok (I'm hoping Austin may weigh in wrt CDN-related considerations). I'd > suggest using it as a fallback location though. I.e. try downloading > from the official upstream location, and if that fails (either due to > I/O errors and/or unexpected checksum), fallback to using our locally > mirrored copy. However, we may need to take into account license issues, > such as hosting the source-code as well, if we host binary distributions > depending on the licenses involved (I'm not sure if this was ever > considered for ghc-tarballs.git to begin with) > > Cheers, > hvr > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Mon Oct 13 21:33:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 13 Oct 2014 16:33:13 -0500 Subject: One-shot semantics in GHC event manager In-Reply-To: <8761fnx01l.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: For the record, I talked to Ben earlier on IRC, and I can provide him with a machine to do intense benchmarks of the new I/O manager. Also, if any other developers (like Andreas, Johan, Bryan, etc) in this space want a big machine to test it on, I can probably equip you with one (or several). Since Rackspace is so gracious to us, we can immediately allocate high-powered, physical (i.e. not Xen, but real machines) machines to do high-scale testing on.[1] In any case, it's not hard to do and only takes a few minutes, so Ben: let me know. (I've thought it would be neat to implement a leasing system somehow, where a developer could lease a few machines for a short period of time, at which point they expire and a background job cleans them up.) [1] You can find the hardware specs here; GHC benchmarking is probably best suited for the "OnMetal I/O" type, which has 40 cores, 2x PCIe flash and 128GB of RAM - http://www.rackspace.com/cloud/servers/onmetal/ On Mon, Oct 13, 2014 at 2:05 PM, Ben Gamari wrote: > Ben Gamari writes: > >> Andreas Voellmy writes: >> >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari wrote: >>> >>> Ah... so this is not useful to you. I guess we could add `loop` to >>> GHC.Event's export list. On the other hand, I like your LifeTime proposal >>> better and then no one needs `loop`, so let's try this first. >>> >> I have a first cut of this here [1]. It compiles but would be I shocked >> if it ran. All of the pieces are there but I need to change >> EventLifetime to a more efficient encoding (there's no reason why it >> needs to be more than an Int). >> > As it turns out the patch seems to get through the testsuite after a few > minor fixes. > > What other tests can I subject this to? I'm afraid I don't have the > access to any machine even close to the size of those that the original > event manager was tested on so checking for performance regressions will > be difficult. 
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From hvriedel at gmail.com Tue Oct 14 10:18:36 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 14 Oct 2014 12:18:36 +0200 Subject: One-shot semantics in GHC event manager In-Reply-To: (Austin Seipp's message of "Mon, 13 Oct 2014 16:33:13 -0500") References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: <87r3yb2bur.fsf@gmail.com> On 2014-10-13 at 23:33:13 +0200, Austin Seipp wrote: [...] > Also, if any other developers (like Andreas, Johan, Bryan, etc) in > this space want a big machine to test it on, I can probably equip you > with one (or several). Since Rackspace is so gracious to us, we can > immediately allocate high-powered, physical (i.e. not Xen, but real > machines) machines to do high-scale testing on.[1] > > In any case, it's not hard to do and only takes a few minutes, so Ben: > let me know. (I've thought it would be neat to implement a leasing > system somehow, where a developer could lease a few machines for a > short period of time, at which point they expire and a background job > cleans them up.) I'd like to add to this; If there's demand to provide SSH accounts to MSYS2-environments for developing/testing GHC patches or generally debugging/fixing GHC issues occuring on Windows, we may be able to provide such (ephemeral) accounts. Cheers, hvr From eir at cis.upenn.edu Tue Oct 14 13:33:55 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 14 Oct 2014 09:33:55 -0400 Subject: Request: Phab Differentials should include road maps Message-ID: Hi devs, I have what I hope is a simple request: that patch submissions contain a "road map" describing the patch. I'll illustrate via example: I just took a quick look at D323, about updating the design of Uniques. Although this patch was fairly straightforward, I would have been helped by a comment somewhere saying "All the important changes are in Unique.lhs. The rest of the changes are simply propagating the new UniqueDomain type." Then, I would just look at the one file and skim the rest very briefly. The reason I'm requesting this comment from the patch author is that my assumption above -- that all the action is in Unique.lhs -- might be quite wrong. Maybe there's a really important (perhaps one-line) change elsewhere that deserves attention. Or, maybe there's a function/type in Unique.lhs that the patch author is very uncertain about and wants extra scrutiny. In any case, a few sentences at the top of the patch would help focus reviewers' time where the author thinks it is most needed. What do we think? Is this a behavior we wish to adopt? Thanks! Richard From mail at joachim-breitner.de Tue Oct 14 13:55:58 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 14 Oct 2014 15:55:58 +0200 Subject: Request: Phab Differentials should include road maps In-Reply-To: References: Message-ID: <1413294958.2548.19.camel@joachim-breitner.de> Hi, Am Dienstag, den 14.10.2014, 09:33 -0400 schrieb Richard Eisenberg: > What do we think? Is this a behavior we wish to adopt? sounds sensible. I only worry that if people put such comments in the DR description, they will end up in the commit pushed to the tree, if done using "arc land". 
Maybe the submitter should add such things as comments to the DR? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Tue Oct 14 14:09:06 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 14 Oct 2014 14:09:06 +0000 Subject: Request: Phab Differentials should include road maps In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3363ED@DB3PRD3001MB020.064d.mgd.msft.net> I frequently find myself asking for a different kind of road map: a wiki page saying - what is the problem we are trying to solve - what is the general approach for solving it - what is the specification for what a GHC user (or maybe a GHC API client, depending) would see? - what is a road map for how the implementation is structured. We often have these wiki pages but not always. Simply reviewing a big blob of source-code diffs and trying to reconstruct the above four points is not much fun! Moreover the act of writing them can be fantastically illuminating. The StaticPtr stuff is a case in point. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Richard Eisenberg | Sent: 14 October 2014 14:34 | To: ghc-devs at haskell.org Devs | Subject: Request: Phab Differentials should include road maps | | Hi devs, | | I have what I hope is a simple request: that patch submissions contain | a "road map" describing the patch. I'll illustrate via example: I just | took a quick look at D323, about updating the design of Uniques. | Although this patch was fairly straightforward, I would have been | helped by a comment somewhere saying "All the important changes are in | Unique.lhs. The rest of the changes are simply propagating the new | UniqueDomain type." Then, I would just look at the one file and skim | the rest very briefly. The reason I'm requesting this comment from the | patch author is that my assumption above -- that all the action is in | Unique.lhs -- might be quite wrong. Maybe there's a really important | (perhaps one-line) change elsewhere that deserves attention. Or, maybe | there's a function/type in Unique.lhs that the patch author is very | uncertain about and wants extra scrutiny. In any case, a few sentences | at the top of the patch would help focus reviewers' time where the | author thinks it is most neede d. | | What do we think? Is this a behavior we wish to adopt? | | Thanks! | Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From andreas.voellmy at gmail.com Tue Oct 14 15:57:35 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Tue, 14 Oct 2014 11:57:35 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: This is awesome. I'd like to try to recreate some of the evaluations for the multicore IO manager paper on that 40 core system at backspace. How can I get access to this? I'll jump on IRC - maybe it is easier to chat in realtime. 
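For readers following the patch itself rather than the benchmarking logistics: Ben's remark, quoted again further down, that EventLifetime needs to be no more than an Int comes down to bit-packing the registered event mask together with a one-shot/multi-shot flag. The toy encoding below is only a sketch of that idea; the names are invented here and it does not claim to match the real GHC.Event internals or the actual patch.

import Data.Bits ((.|.), shiftL, shiftR, testBit)

-- Invented stand-ins for the real types; the real module keeps these abstract.
data Lifetime = OneShot | MultiShot deriving (Eq, Show)

newtype Event = Event Int deriving (Eq)   -- bitmask of interesting events

evtRead, evtWrite :: Event
evtRead  = Event 1
evtWrite = Event 2

evtCombine :: Event -> Event -> Event
evtCombine (Event a) (Event b) = Event (a .|. b)

-- Pack an event mask and a lifetime into one Int: the low bit says whether
-- the registration is multi-shot, the remaining bits carry the event mask.
newtype EventLifetime = EL Int deriving (Eq, Show)

eventLifetime :: Event -> Lifetime -> EventLifetime
eventLifetime (Event e) lt = EL (e `shiftL` 1 .|. ltBit)
  where
    ltBit = case lt of OneShot -> 0; MultiShot -> 1

elEvent :: EventLifetime -> Event
elEvent (EL n) = Event (n `shiftR` 1)

elLifetime :: EventLifetime -> Lifetime
elLifetime (EL n) = if testBit n 0 then MultiShot else OneShot

main :: IO ()
main = do
  let el = eventLifetime (evtRead `evtCombine` evtWrite) OneShot
  print (elLifetime el)          -- OneShot
  print (elEvent el == Event 3)  -- True

A single-Int representation keeps merging and inspecting registrations down to a couple of bit operations, which is presumably why the encoding is worth tightening before running the large-scale benchmarks discussed in this thread.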
On Mon, Oct 13, 2014 at 5:33 PM, Austin Seipp wrote: > For the record, I talked to Ben earlier on IRC, and I can provide him > with a machine to do intense benchmarks of the new I/O manager. > > Also, if any other developers (like Andreas, Johan, Bryan, etc) in > this space want a big machine to test it on, I can probably equip you > with one (or several). Since Rackspace is so gracious to us, we can > immediately allocate high-powered, physical (i.e. not Xen, but real > machines) machines to do high-scale testing on.[1] > > In any case, it's not hard to do and only takes a few minutes, so Ben: > let me know. (I've thought it would be neat to implement a leasing > system somehow, where a developer could lease a few machines for a > short period of time, at which point they expire and a background job > cleans them up.) > > [1] You can find the hardware specs here; GHC benchmarking is probably > best suited for the "OnMetal I/O" type, which has 40 cores, 2x PCIe > flash and 128GB of RAM - > http://www.rackspace.com/cloud/servers/onmetal/ > > On Mon, Oct 13, 2014 at 2:05 PM, Ben Gamari > wrote: > > Ben Gamari writes: > > > >> Andreas Voellmy writes: > >> > >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari > wrote: > >>> > >>> Ah... so this is not useful to you. I guess we could add `loop` to > >>> GHC.Event's export list. On the other hand, I like your LifeTime > proposal > >>> better and then no one needs `loop`, so let's try this first. > >>> > >> I have a first cut of this here [1]. It compiles but would be I shocked > >> if it ran. All of the pieces are there but I need to change > >> EventLifetime to a more efficient encoding (there's no reason why it > >> needs to be more than an Int). > >> > > As it turns out the patch seems to get through the testsuite after a few > > minor fixes. > > > > What other tests can I subject this to? I'm afraid I don't have the > > access to any machine even close to the size of those that the original > > event manager was tested on so checking for performance regressions will > > be difficult. > > > > Cheers, > > > > - Ben > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Oct 14 17:22:42 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Oct 2014 12:22:42 -0500 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: Hey Andreas, The basic rundown is that if we equip you with an account, you can just do it yourself. Although we'd like to restrict access a bit more; I'll figure something out. Yeah, if you hop on IRC, we can chat quickly about it and work something out in the mean time. On Tue, Oct 14, 2014 at 10:57 AM, Andreas Voellmy wrote: > This is awesome. I'd like to try to recreate some of the evaluations for the > multicore IO manager paper on that 40 core system at backspace. How can I > get access to this? I'll jump on IRC - maybe it is easier to chat in > realtime. 
> > On Mon, Oct 13, 2014 at 5:33 PM, Austin Seipp wrote: >> >> For the record, I talked to Ben earlier on IRC, and I can provide him >> with a machine to do intense benchmarks of the new I/O manager. >> >> Also, if any other developers (like Andreas, Johan, Bryan, etc) in >> this space want a big machine to test it on, I can probably equip you >> with one (or several). Since Rackspace is so gracious to us, we can >> immediately allocate high-powered, physical (i.e. not Xen, but real >> machines) machines to do high-scale testing on.[1] >> >> In any case, it's not hard to do and only takes a few minutes, so Ben: >> let me know. (I've thought it would be neat to implement a leasing >> system somehow, where a developer could lease a few machines for a >> short period of time, at which point they expire and a background job >> cleans them up.) >> >> [1] You can find the hardware specs here; GHC benchmarking is probably >> best suited for the "OnMetal I/O" type, which has 40 cores, 2x PCIe >> flash and 128GB of RAM - >> http://www.rackspace.com/cloud/servers/onmetal/ >> >> On Mon, Oct 13, 2014 at 2:05 PM, Ben Gamari >> wrote: >> > Ben Gamari writes: >> > >> >> Andreas Voellmy writes: >> >> >> >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari >> >>> wrote: >> >>> >> >>> Ah... so this is not useful to you. I guess we could add `loop` to >> >>> GHC.Event's export list. On the other hand, I like your LifeTime >> >>> proposal >> >>> better and then no one needs `loop`, so let's try this first. >> >>> >> >> I have a first cut of this here [1]. It compiles but would be I shocked >> >> if it ran. All of the pieces are there but I need to change >> >> EventLifetime to a more efficient encoding (there's no reason why it >> >> needs to be more than an Int). >> >> >> > As it turns out the patch seems to get through the testsuite after a few >> > minor fixes. >> > >> > What other tests can I subject this to? I'm afraid I don't have the >> > access to any machine even close to the size of those that the original >> > event manager was tested on so checking for performance regressions will >> > be difficult. >> > >> > Cheers, >> > >> > - Ben >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From bgamari.foss at gmail.com Tue Oct 14 17:23:58 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 14 Oct 2014 13:23:58 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: <87ppduva35.fsf@gmail.com> Andreas Voellmy writes: > This is awesome. I'd like to try to recreate some of the evaluations for > the multicore IO manager paper on that 40 core system at backspace. How can > I get access to this? I'll jump on IRC - maybe it is easier to chat in > realtime. > Do you suppose you could document the process a bit as you do this? I've been having a bit of trouble reproducing your numbers with GHC 7.8 on the hardware that I have access to. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From andreas.voellmy at gmail.com Tue Oct 14 17:29:57 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Tue, 14 Oct 2014 13:29:57 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <87ppduva35.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> <87ppduva35.fsf@gmail.com> Message-ID: Yes, I'll try to describe it and script it so that others can understand the benchmarks and run it easily as well. On Tue, Oct 14, 2014 at 1:23 PM, Ben Gamari wrote: > Andreas Voellmy writes: > > > This is awesome. I'd like to try to recreate some of the evaluations for > > the multicore IO manager paper on that 40 core system at backspace. How > can > > I get access to this? I'll jump on IRC - maybe it is easier to chat in > > realtime. > > > Do you suppose you could document the process a bit as you do this? I've > been having a bit of trouble reproducing your numbers with GHC 7.8 on > the hardware that I have access to. > > Cheers, > > - Ben > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Oct 14 19:11:12 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 14 Oct 2014 12:11:12 -0700 Subject: Broken build: 32-bit FreeBSD, SmartOS, Solaris In-Reply-To: References: <5437AC12.2010909@centrum.cz> Message-ID: <543D7550.8070804@gmail.com> I believe I've just pushed a fix for this, let me know if you still have problems. Cheers, Simon On 10/10/2014 13:47, Carter Schonwald wrote: > likewise, 32bit OS X seems to be broken on HEAD too > > http://lpaste.net/112412 is the relevant bit > > make[5]: Nothing to be done for `all'. > depbase=`echo src/x86/win32.lo | sed 's|[^/]*$|.deps/&|;s|\.lo$||'`;\ > /bin/sh ./libtool --mode=compile gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. -I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF $depbase.Tpo -c -o src/x86/win32.lo ../src/x86/win32.S &&\ > mv -f $depbase.Tpo $depbase.Plo > libtool: compile: gcc-4.9 -DHAVE_CONFIG_H -I. -I.. -I. -I../include -Iinclude -I../src -I. -I../include -Iinclude -I../src -U__i686 -m32 -fno-stack-protector -w -MT src/x86/win32.lo -MMD -MP -MF src/x86/.deps/win32.Tpo -c ../src/x86/win32.S -fno-common -DPIC -o src/x86/.libs/win32.o > ../src/x86/win32.S:1283:section difference relocatable subtraction expression, ".LFE5" minus ".LFB5" using a symbol at the end of section will not produce an assembly time constant > ../src/x86/win32.S:1283:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LFE5 - .LFB5 > ../src/x86/win32.S:1275:section difference relocatable subtraction expression, ".LEFDE5" minus ".LASFDE5" using a symbol at the end of section will not produce an assembly time constant > ../src/x86/win32.S:1275:use a symbol with a constant value created with an assignment instead of the expression, L_const_sym = .LEFDE5 - .LASFDE5 > ../src/x86/win32.S:unknown:missing indirect symbols for section (__IMPORT,__jump_table) > make[5]: *** [src/x86/win32.lo] Error 1 > > > On Fri, Oct 10, 2014 at 9:40 AM, P?li G?bor J?nos > wrote: > > 2014-10-10 13:30 GMT+02:00 cg >: > > How can I configure to build x86_64? 
> > > > When I build GHC (with msys2), it always builds i386 and I haven't spotted > > the option in ./configure to choose a x86_64 release. > > This is implicitly determined by the toolchain you use. So, probably > you have the i686 msys2 installed, while you would need the x86_64 > version. Given, that your operating system (and thus your hardware) > is also x86_64. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Tue Oct 14 20:16:30 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 14 Oct 2014 20:16:30 +0000 Subject: FW: optimizing StgPtr allocate (Capability *cap, W_ n) In-Reply-To: <659910670.20141014210859@gmail.com> References: <659910670.20141014210859@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F336A67@DB3PRD3001MB020.064d.mgd.msft.net> Simon, did you see this? Simon -----Original Message----- From: Glasgow-haskell-users [mailto:glasgow-haskell-users-bounces at haskell.org] On Behalf Of Bulat Ziganshin Sent: 14 October 2014 18:09 To: glasgow-haskell-users at haskell.org Subject: optimizing StgPtr allocate (Capability *cap, W_ n) Hello Glasgow-haskell-users, i'm looking a the https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680 if i correctly understand, it's speed-critical routine? i think that it may be improved in this way: StgPtr allocate (Capability *cap, W_ n) { bdescr *bd; StgPtr p; TICK_ALLOC_HEAP_NOCTR(WDS(n)); CCS_ALLOC(cap->r.rCCCS,n); /// here starts new improved code: bd = cap->r.rCurrentAlloc; if (bd == NULL || bd->free + n > bd->end) { if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) { .... } if (bd->free + n <= bd->start + BLOCK_SIZE_W) bd->end = min (bd->start + BLOCK_SIZE_W, bd->free + LARGE_OBJECT_THRESHOLD) goto usual_alloc; } .... } /// and here it stops usual_alloc: p = bd->free; bd->free += n; IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); return p; } i think it's obvious - we consolidate two if's on the crirical path into the single one plus avoid one ADD by keeping highly-useful bd->end pointer further improvements may include removing bd==NULL check by initializing bd->free=bd->end=NULL and moving entire "if" body into separate slow_allocate() procedure marked "noinline" with allocate() probably marked as forceinline: StgPtr allocate (Capability *cap, W_ n) { bdescr *bd; StgPtr p; TICK_ALLOC_HEAP_NOCTR(WDS(n)); CCS_ALLOC(cap->r.rCCCS,n); bd = cap->r.rCurrentAlloc; if (bd->free + n > bd->end) return slow_allocate(cap,n); p = bd->free; bd->free += n; IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); return p; } this change will greatly simplify optimizer's work. 
according to my experience current C++ compilers are weak on optimizing large functions with complex execution paths and such transformations really improve the generated code -- Best regards, Bulat mailto:Bulat.Ziganshin at gmail.com _______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users at haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users From bulat.ziganshin at gmail.com Tue Oct 14 22:01:23 2014 From: bulat.ziganshin at gmail.com (Bulat Ziganshin) Date: Wed, 15 Oct 2014 02:01:23 +0400 Subject: optimizing StgPtr allocate (Capability *cap, W_ n) Message-ID: <1477967281.20141015020123@gmail.com> Hello, i'm looking a the https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680 if i correctly understand, it's speed-critical routine? i think that it may be improved in this way: StgPtr allocate (Capability *cap, W_ n) { bdescr *bd; StgPtr p; TICK_ALLOC_HEAP_NOCTR(WDS(n)); CCS_ALLOC(cap->r.rCCCS,n); /// here starts new improved code: bd = cap->r.rCurrentAlloc; if (bd == NULL || bd->free + n > bd->end) { if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) { .... } if (bd->free + n <= bd->start + BLOCK_SIZE_W) bd->end = min (bd->start + BLOCK_SIZE_W, bd->free + LARGE_OBJECT_THRESHOLD) goto usual_alloc; } .... } /// and here it stops usual_alloc: p = bd->free; bd->free += n; IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); return p; } i think it's obvious - we consolidate two if's on the crirical path into the single one plus avoid one ADD by keeping highly-useful bd->end pointer further improvements may include removing bd==NULL check by initializing bd->free=bd->end=NULL and moving entire "if" body into separate slow_allocate() procedure marked "noinline" with allocate() probably marked as forceinline: StgPtr allocate (Capability *cap, W_ n) { bdescr *bd; StgPtr p; TICK_ALLOC_HEAP_NOCTR(WDS(n)); CCS_ALLOC(cap->r.rCCCS,n); bd = cap->r.rCurrentAlloc; if (bd->free + n > bd->end) return slow_allocate(cap,n); p = bd->free; bd->free += n; IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); return p; } this change will greatly simplify optimizer's work. according to my experience current C++ compilers are weak on optimizing large functions with complex execution paths and such transformations really improve the generated code -- Best regards, Bulat mailto:Bulat.Ziganshin at gmail.com From mikolaj at well-typed.com Tue Oct 14 22:07:35 2014 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Wed, 15 Oct 2014 00:07:35 +0200 Subject: optimizing StgPtr allocate (Capability *cap, W_ n) In-Reply-To: <1477967281.20141015020123@gmail.com> References: <1477967281.20141015020123@gmail.com> Message-ID: Hi Bulat! Is this exactly the same email you posted to glasgow-haskell-users@ previously (I'm asking to know if I need to reread)? It was already forwarded here by none other but SPJ. :) Welcome, Mikolaj On Wed, Oct 15, 2014 at 12:01 AM, Bulat Ziganshin wrote: > Hello, > > i'm looking a the https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680 > > if i correctly understand, it's speed-critical routine? > > i think that it may be improved in this way: > > StgPtr allocate (Capability *cap, W_ n) > { > bdescr *bd; > StgPtr p; > > TICK_ALLOC_HEAP_NOCTR(WDS(n)); > CCS_ALLOC(cap->r.rCCCS,n); > > /// here starts new improved code: > > bd = cap->r.rCurrentAlloc; > if (bd == NULL || bd->free + n > bd->end) { > if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) { > .... 
> } > if (bd->free + n <= bd->start + BLOCK_SIZE_W) > bd->end = min (bd->start + BLOCK_SIZE_W, bd->free + LARGE_OBJECT_THRESHOLD) > goto usual_alloc; > } > .... > } > > /// and here it stops > > usual_alloc: > p = bd->free; > bd->free += n; > > IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); > return p; > } > > > i think it's obvious - we consolidate two if's on the crirical path > into the single one plus avoid one ADD by keeping highly-useful bd->end pointer > > further improvements may include removing bd==NULL check by > initializing bd->free=bd->end=NULL and moving entire "if" body > into separate slow_allocate() procedure marked "noinline" with > allocate() probably marked as forceinline: > > StgPtr allocate (Capability *cap, W_ n) > { > bdescr *bd; > StgPtr p; > > TICK_ALLOC_HEAP_NOCTR(WDS(n)); > CCS_ALLOC(cap->r.rCCCS,n); > > bd = cap->r.rCurrentAlloc; > if (bd->free + n > bd->end) > return slow_allocate(cap,n); > > p = bd->free; > bd->free += n; > > IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); > return p; > } > > this change will greatly simplify optimizer's work. according to my > experience current C++ compilers are weak on optimizing large > functions with complex execution paths and such transformations really > improve the generated code > > -- > Best regards, > Bulat mailto:Bulat.Ziganshin at gmail.com > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From shumovichy at gmail.com Tue Oct 14 22:12:52 2014 From: shumovichy at gmail.com (Yuras Shumovich) Date: Wed, 15 Oct 2014 01:12:52 +0300 Subject: Wiki: special namespace for proposals? Message-ID: <1413324772.2582.3.camel@gmail.com> Hello, Would it be better to organize proposals under one namespace? Right now they belongs to root namespace, so title index ( https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use. I was going to start new page describing language extension, but I don't want do increase entropy even more. What about creating special namespace, e.g. "Proposals"? Probably makes sense to divide if farther? Thanks, Yuras From bulat.ziganshin at gmail.com Tue Oct 14 22:22:37 2014 From: bulat.ziganshin at gmail.com (Bulat Ziganshin) Date: Wed, 15 Oct 2014 02:22:37 +0400 Subject: optimizing StgPtr allocate (Capability *cap, W_ n) In-Reply-To: References: <1477967281.20141015020123@gmail.com> Message-ID: <1692428457.20141015022237@gmail.com> Hello Mikolaj, Wednesday, October 15, 2014, 2:07:35 AM, you wrote: i'm sorry, it's the same > Hi Bulat! > Is this exactly the same email you posted to > glasgow-haskell-users@ previously (I'm asking to know > if I need to reread)? It was already forwarded here by none > other but SPJ. :) > Welcome, > Mikolaj > On Wed, Oct 15, 2014 at 12:01 AM, Bulat Ziganshin > wrote: >> Hello, >> >> i'm looking a the https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680 >> >> if i correctly understand, it's speed-critical routine? >> >> i think that it may be improved in this way: >> >> StgPtr allocate (Capability *cap, W_ n) >> { >> bdescr *bd; >> StgPtr p; >> >> TICK_ALLOC_HEAP_NOCTR(WDS(n)); >> CCS_ALLOC(cap->r.rCCCS,n); >> >> /// here starts new improved code: >> >> bd = cap->r.rCurrentAlloc; >> if (bd == NULL || bd->free + n > bd->end) { >> if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) { >> .... 
>> } >> if (bd->free + n <= bd->start + BLOCK_SIZE_W) >> bd->end = min (bd->start + BLOCK_SIZE_W, bd->free + LARGE_OBJECT_THRESHOLD) >> goto usual_alloc; >> } >> .... >> } >> >> /// and here it stops >> >> usual_alloc: >> p = bd->free; >> bd->free += n; >> >> IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); >> return p; >> } >> >> >> i think it's obvious - we consolidate two if's on the crirical path >> into the single one plus avoid one ADD by keeping highly-useful bd->end pointer >> >> further improvements may include removing bd==NULL check by >> initializing bd->free=bd->end=NULL and moving entire "if" body >> into separate slow_allocate() procedure marked "noinline" with >> allocate() probably marked as forceinline: >> >> StgPtr allocate (Capability *cap, W_ n) >> { >> bdescr *bd; >> StgPtr p; >> >> TICK_ALLOC_HEAP_NOCTR(WDS(n)); >> CCS_ALLOC(cap->r.rCCCS,n); >> >> bd = cap->r.rCurrentAlloc; >> if (bd->free + n > bd->end) >> return slow_allocate(cap,n); >> >> p = bd->free; >> bd->free += n; >> >> IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa)); >> return p; >> } >> >> this change will greatly simplify optimizer's work. according to my >> experience current C++ compilers are weak on optimizing large >> functions with complex execution paths and such transformations really >> improve the generated code >> >> -- >> Best regards, >> Bulat mailto:Bulat.Ziganshin at gmail.com >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> -- Best regards, Bulat mailto:Bulat.Ziganshin at gmail.com From mail at joachim-breitner.de Tue Oct 14 22:24:16 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 15 Oct 2014 00:24:16 +0200 Subject: Wiki: special namespace for proposals? In-Reply-To: <1413324772.2582.3.camel@gmail.com> References: <1413324772.2582.3.camel@gmail.com> Message-ID: <1413325456.8147.3.camel@joachim-breitner.de> Hi, Am Mittwoch, den 15.10.2014, 01:12 +0300 schrieb Yuras Shumovich: > Would it be better to organize proposals under one namespace? Right now > they belongs to root namespace, so title index > ( https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use. > > I was going to start new page describing language extension, but I don't > want do increase entropy even more. What about creating special > namespace, e.g. "Proposals"? Probably makes sense to divide if farther? Great idea! Please do that, and feel entitled to suggest it to anyone else starting a proposal page until everyone is used to it. I suggest to use https://ghc.haskell.org/trac/ghc/wiki/Proposal/Foo (i.e. singular), as the individual URL looks nicer this way. But it?s all up to you ? you start it, you get to paint the shed. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Wed Oct 15 08:37:54 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Oct 2014 08:37:54 +0000 Subject: Wiki: special namespace for proposals? 
In-Reply-To: <1413324772.2582.3.camel@gmail.com>
References: <1413324772.2582.3.camel@gmail.com>
Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F337066@DB3PRD3001MB020.064d.mgd.msft.net>

I think that would be a fine idea, but it's always hard
- pages change their status (a proposal becomes part of GHC)
- a page may "belong" in multiple places
- people keep URLs in bookmarks, and they are linked from other pages in Trac
so moving pages is painful.

The last is significant. Do we leave "this page has moved" stub pages (i.e. still cluttering the TitleIndex)? Or do we move them, and live with dead links? I'm very un-keen on dead links in Trac itself. Maybe there is some way to rewrite all of those, at least?

I don't have a strong opinion here

Simon

| -----Original Message-----
| From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of
| Yuras Shumovich
| Sent: 14 October 2014 23:13
| To: ghc-devs at haskell.org Devs
| Subject: Wiki: special namespace for proposals?
|
|
| Hello,
|
| Would it be better to organize proposals under one namespace? Right
| now they belongs to root namespace, so title index (
| https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use.
|
| I was going to start new page describing language extension, but I
| don't want do increase entropy even more. What about creating special
| namespace, e.g. "Proposals"? Probably makes sense to divide if
| farther?
|
| Thanks,
| Yuras
|
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://www.haskell.org/mailman/listinfo/ghc-devs

From jan.stolarek at p.lodz.pl Wed Oct 15 09:06:09 2014
From: jan.stolarek at p.lodz.pl (Jan Stolarek)
Date: Wed, 15 Oct 2014 11:06:09 +0200
Subject: Wiki: special namespace for proposals?
In-Reply-To: <1413324772.2582.3.camel@gmail.com>
References: <1413324772.2582.3.camel@gmail.com>
Message-ID: <201410151106.09750.jan.stolarek@p.lodz.pl>

I'm all for improving organization of the wiki but I'm not sure about this idea. What happens when a proposal gets implemented? You can't just move the page to a new address. You can create a new wiki page describing the final design that was implemented and replace the content of the proposal page with a redirection. But that means more mess in the wiki namespace.

Janek

On Wednesday, 15 October 2014, Yuras Shumovich wrote:
> Hello,
>
> Would it be better to organize proposals under one namespace? Right now
> they belongs to root namespace, so title index
> ( https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use.
>
> I was going to start new page describing language extension, but I don't
> want do increase entropy even more. What about creating special
> namespace, e.g. "Proposals"? Probably makes sense to divide if farther?
>
> Thanks,
> Yuras
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs

From carette at mcmaster.ca Wed Oct 15 12:07:47 2014
From: carette at mcmaster.ca (Jacques Carette)
Date: Wed, 15 Oct 2014 08:07:47 -0400
Subject: Wiki: special namespace for proposals?
In-Reply-To: <201410151106.09750.jan.stolarek@p.lodz.pl>
References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl>
Message-ID: <543E6393.1060002@mcmaster.ca>

Suggestion: Why not use the namespace 'Design' rather than 'Proposal'?

Rationale: a Proposal is a proposed design, and a final implementation is, well, an implementation of that design.
So what the thing "is" does not change, but its status (proposal vs implemented) does. And, in fact, there can even be a third status: obsolete. In other words "that used to be the design". Those should be documented too! What is needed is a clear signpost, right at the top, which differentiates what status a Design is in. Jacques On 2014-10-15 5:06 AM, Jan Stolarek wrote: > I'm all for improving organization of the wiki but I'm not sure about this idea. What happens when > a proposal gets implemented? You can't just move the page to a new address. You can create a new > wiki page describing the final dsign that was implemented and replace the content of the proposal > page with a redirection. But that menas more mess in the wiki namespace. > > Janek > > Dnia ?roda, 15 pa?dziernika 2014, Yuras Shumovich napisa?: >> Hello, >> >> Would it be better to organize proposals under one namespace? Right now >> they belongs to root namespace, so title index >> ( https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use. >> >> I was going to start new page describing language extension, but I don't >> want do increase entropy even more. What about creating special >> namespace, e.g. "Proposals"? Probably makes sense to divide if farther? >> >> Thanks, >> Yuras >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Wed Oct 15 15:14:04 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 15 Oct 2014 17:14:04 +0200 Subject: Wiki: special namespace for proposals? In-Reply-To: <201410151106.09750.jan.stolarek@p.lodz.pl> References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> Message-ID: <1413386044.19242.1.camel@joachim-breitner.de> Hi, Am Mittwoch, den 15.10.2014, 11:06 +0200 schrieb Jan Stolarek: > I'm all for improving organization of the wiki but I'm not sure about this idea. What happens when > a proposal gets implemented? You can't just move the page to a new address. You can create a new > wiki page describing the final dsign that was implemented and replace the content of the proposal > page with a redirection. But that menas more mess in the wiki namespace. I think proposal and design pages are (or at least, could be) different things. In a Proposal, there are alternatives, there are little details, there are notes about dead end, possibly benchmarks or such justifying a choice. Once something is implemented, most of that is not immediately interesting to someone trying to understand the final design (i.e. to fix a bug). So a good design page would have a structure anyway. And we already have a namespace for that: Commentary! So when a Proposal gets implemented, this should be clearly noted at the top of the Proposal page, linking to the relevant Comentary page (or paper, if there is one, or Note in the code, if the final design is so simple that it fits that format). The discussion about the Proposal would still be there for those who need to do some historical digging, i.e. when someone suggest a new implementation and we need to check if that variant was considered in the original implementation. 
Greetings, Joachm -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mainland at apeiron.net Wed Oct 15 15:43:10 2014 From: mainland at apeiron.net (Geoffrey Mainland) Date: Wed, 15 Oct 2014 11:43:10 -0400 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: <5400B888.4000508@apeiron.net> References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> <53DF8F72.7080105@apeiron.net> <5400B888.4000508@apeiron.net> Message-ID: <543E960E.7010609@apeiron.net> Hi Austin, I haven't heard back from you about this, and D302 has been waiting for review in phab for two weeks. This seems to be a minor change, so if I don't hear otherwise by the end of the week, I'm going to go ahead and merge. Thanks, Geoff On 08/29/2014 01:29 PM, Geoffrey Mainland wrote: > Hi Austin, > > I've pushed wip/dph-fix branches to the dph and ghc repos. dph is not in > Phabricator, so I didn't submit anything for review. I think this is > small enough that we can probably just merge it directly, but it would > be nice to have DPH in Phabricator eventually. > > I only validated on Linux x64. Is there an easy way for me to validate > on other platforms? > > Thanks, > Geoff > > On 08/04/2014 10:07 AM, Austin Seipp wrote: >> On Mon, Aug 4, 2014 at 8:49 AM, Geoffrey Mainland wrote: >>> I have patches for DPH that let it work with vector 0.11 as of a few >>> months ago. I would be happy to submit them via phabricator if that is >>> agreeable (we have to coordinate with the import of vector 0.11 >>> though...I can instead leave them in a wip branch for Austin to merge as >>> he sees fit). I am also willing to commit some time to keep DPH at least >>> working in its current state. >> That would be quite nice if you could submit patches to get it to >> work! Thanks so much. >> >> As we've moved to submodules, having our own forks is becoming less >> palatable; we'd like to start tracking upstream closely, and having >> people submit changes there first and foremost. This creates a bit of >> a lag time between changes, but I think this is acceptable (and most >> of our maintainers are quite responsive to GHC needs!) >> >> It's also great you're willing to help maintain DPH a bit - but based >> on what Ben said, it seems like a significant rewrite will happen >> eventually. >> >> Geoff, here's my proposal: >> >> 1) I'll disable DPH for right now, so it won't pop up during >> ./validate. This will probably happen today. >> 2) We can coordinate the update of vector to 0.11, making it track >> the official master. (Perhaps an email thread or even Skype would >> work) >> 3) We can fix DPH at the same time. >> 4) Afterwords, we can re-enable it for ./validate >> >> If you submit Phabricator patches, that would be fantastic - we can >> add the DPH repository to Phabricator with little issue. >> >> In the long run, I think we should sync up with Ben and perhaps Simon >> & Co to see what will happen long-term for the DPH libraries. >> >>> Geoff >>> >>> On 8/4/14 8:18 AM, Ben Lippmeier wrote: >>>> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: >>>> >>>>> Why? Because I'm afraid I just don't have any more patience for DPH, >>>>> I'm tired of fixing it, and it takes up a lot of extra time to build, >>>>> and time to maintain. 
>>>> I'm not going to argue against cutting it lose. >>>> >>>> >>>>> So - why are we still building it, exactly? >>>> It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. >>>> >>>> >>>>> [1] And by 'speak up', I mean I'd like to see someone actively step >>>>> forward address my concerns above in a decisive manner. With patches. >>>> I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. >>>> >>>> The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. >>>> >>>> The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If it the existing code has become a maintenance burden then it's fine to switch it off. >>>> >>>> Sorry for the trouble. >>>> Ben. >>>> >> From afarmer at ittc.ku.edu Wed Oct 15 16:09:32 2014 From: afarmer at ittc.ku.edu (Andrew Farmer) Date: Wed, 15 Oct 2014 17:09:32 +0100 Subject: Wiki: special namespace for proposals? In-Reply-To: <1413386044.19242.1.camel@joachim-breitner.de> References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> <1413386044.19242.1.camel@joachim-breitner.de> Message-ID: Instead of encoding the status in the URL, since we don't want URLs to change with the status of the proposal/feature changes, it sounds like we really just want something better than TitleIndex for browsing the wiki. (I've never seen a Trac wiki where TitleIndex is that useful anyway, other than to ctrl-f stuff.) I'm not really familiar with what Trac Wiki is capable of. Is it possible to add tags/categories to a page and then make an auto-generated list of links to all pages with a given tag/category? Then local edits to a given design (changing the category from 'proposed' to 'implemented') would automatically move it around as appropriate. I think it would be helpful to have a single place from which we could browse current/past proposals. If for no other reason than to get an idea how to write one myself. On Wed, Oct 15, 2014 at 4:14 PM, Joachim Breitner wrote: > Hi, > > > Am Mittwoch, den 15.10.2014, 11:06 +0200 schrieb Jan Stolarek: >> I'm all for improving organization of the wiki but I'm not sure about this idea. What happens when >> a proposal gets implemented? You can't just move the page to a new address. You can create a new >> wiki page describing the final dsign that was implemented and replace the content of the proposal >> page with a redirection. But that menas more mess in the wiki namespace. > > I think proposal and design pages are (or at least, could be) different > things. In a Proposal, there are alternatives, there are little details, > there are notes about dead end, possibly benchmarks or such justifying a > choice. > > Once something is implemented, most of that is not immediately > interesting to someone trying to understand the final design (i.e. to > fix a bug). So a good design page would have a structure anyway. 
And we > already have a namespace for that: Commentary! > > So when a Proposal gets implemented, this should be clearly noted at the > top of the Proposal page, linking to the relevant Comentary page (or > paper, if there is one, or Note in the code, if the final design is so > simple that it fits that format). The discussion about the Proposal > would still be there for those who need to do some historical digging, > i.e. when someone suggest a new implementation and we need to check if > that variant was considered in the original implementation. > > Greetings, > Joachm > > -- > Joachim Breitner > e-Mail: mail at joachim-breitner.de > Homepage: http://www.joachim-breitner.de > Jabber-ID: nomeata at joachim-breitner.de > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Wed Oct 15 16:36:21 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 15 Oct 2014 18:36:21 +0200 Subject: Wiki: special namespace for proposals? In-Reply-To: (Andrew Farmer's message of "Wed, 15 Oct 2014 17:09:32 +0100") References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> <1413386044.19242.1.camel@joachim-breitner.de> Message-ID: <87siipmgsa.fsf@gmail.com> On 2014-10-15 at 18:09:32 +0200, Andrew Farmer wrote: [...] > I'm not really familiar with what Trac Wiki is capable of. Is it > possible to add tags/categories to a page and then make an > auto-generated list of links to all pages with a given tag/category? Fyi, Trac by default has no tags, and the TitleIndex macro can be used to insert a list of wiki pages matching certain criterias: http://trac.edgewall.org/wiki/WikiMacros#TitleIndex-macro However, there's a plugin to teach 'tags' to Trac: http://trac-hacks.org/wiki/TagsPlugin Cheers, hvr From jan.stolarek at p.lodz.pl Wed Oct 15 16:48:31 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 15 Oct 2014 18:48:31 +0200 Subject: Wiki: special namespace for proposals? In-Reply-To: <1413386044.19242.1.camel@joachim-breitner.de> References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> <1413386044.19242.1.camel@joachim-breitner.de> Message-ID: <201410151848.31389.jan.stolarek@p.lodz.pl> Joachim >> yes, you are right that proposals and designs are different things. > And we already have a namespace for that: Commentary! Good point. > So when a Proposal gets implemented, this should be clearly noted at the > top of the Proposal page, linking to the relevant Comentary page > (...) > The discussion about the Proposal would still be there for those who need to do some historical > digging I disagree about these statements. Wiki pages typically don't contain discussions between people - trac tickets do. Unless you meant theoretical discussion of possible approaches to implementing a proposal. In that case, from my experience, once a proposal is implemented most of the discussion about alternatives becomes irrelevant. Of course a good commentary page will contain justification for the design choices that were made so that exploring dead ends can be avoided in the future. Andrew makes some good points. We don't want to move pages around - we just want to tag their status. That would be a Good Thing to have. 
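To make Herbert's pointer above concrete: even without the TagsPlugin, the stock TitleIndex macro already accepts a page-name prefix, so a single index of proposal pages needs only one line of wiki text, assuming the pages were created under a common prefix such as Proposal/ (the prefix here is only an illustration):

[[TitleIndex(Proposal/)]]

Status tagging of the kind Andrew and Jan are asking for would still need the TagsPlugin or a hand-maintained table, since the prefix trick groups pages by name, not by their current state.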
Janek From mail at joachim-breitner.de Wed Oct 15 16:58:05 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 15 Oct 2014 18:58:05 +0200 Subject: Wiki: special namespace for proposals? In-Reply-To: <201410151848.31389.jan.stolarek@p.lodz.pl> References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> <1413386044.19242.1.camel@joachim-breitner.de> <201410151848.31389.jan.stolarek@p.lodz.pl> Message-ID: <1413392285.22928.1.camel@joachim-breitner.de> Hi, Am Mittwoch, den 15.10.2014, 18:48 +0200 schrieb Jan Stolarek: > Joachim >> yes, you are right that proposals and designs are different things. > > > And we already have a namespace for that: Commentary! > Good point. > > > So when a Proposal gets implemented, this should be clearly noted at the > > top of the Proposal page, linking to the relevant Comentary page > > (...) > > The discussion about the Proposal would still be there for those who need to do some historical > > digging > I disagree about these statements. Wiki pages typically don't contain discussions between people - > trac tickets do. Unless you meant theoretical discussion of possible approaches to implementing a > proposal. That?s what I meant. The kind of ?discussion? found in papers, not the one found on this list :-) > In that case, from my experience, once a proposal is implemented most of the discussion > about alternatives becomes irrelevant. I wouldn?t be too sure about this (but I also don?t have examples to back that up right now). Another difference: A proposal needs to convince that something is useful and worth doing. Once we have a design page that?s no longer needed, as we have to live with it (or replace it) :-) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From carter.schonwald at gmail.com Wed Oct 15 17:07:18 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 15 Oct 2014 13:07:18 -0400 Subject: Wiki: special namespace for proposals? In-Reply-To: <1413392285.22928.1.camel@joachim-breitner.de> References: <1413324772.2582.3.camel@gmail.com> <201410151106.09750.jan.stolarek@p.lodz.pl> <1413386044.19242.1.camel@joachim-breitner.de> <201410151848.31389.jan.stolarek@p.lodz.pl> <1413392285.22928.1.camel@joachim-breitner.de> Message-ID: yeah, agreed currently on the wiki its sometimes hard to determine which pages are "this is how we implemented it" vs "this is a bunch of different ideas and approaches we're trying to layout" On Wed, Oct 15, 2014 at 12:58 PM, Joachim Breitner wrote: > Hi, > > > Am Mittwoch, den 15.10.2014, 18:48 +0200 schrieb Jan Stolarek: > > Joachim >> yes, you are right that proposals and designs are different > things. > > > > > And we already have a namespace for that: Commentary! > > Good point. > > > > > So when a Proposal gets implemented, this should be clearly noted at > the > > > top of the Proposal page, linking to the relevant Comentary page > > > (...) > > > The discussion about the Proposal would still be there for those who > need to do some historical > > > digging > > I disagree about these statements. Wiki pages typically don't contain > discussions between people - > > trac tickets do. 
Unless you meant theoretical discussion of possible > approaches to implementing a > > proposal. > > That?s what I meant. The kind of ?discussion? found in papers, not the > one found on this list :-) > > > In that case, from my experience, once a proposal is implemented most > of the discussion > > about alternatives becomes irrelevant. > > I wouldn?t be too sure about this (but I also don?t have examples to > back that up right now). > > > Another difference: A proposal needs to convince that something is > useful and worth doing. Once we have a design page that?s no longer > needed, as we have to live with it (or replace it) :-) > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Wed Oct 15 22:21:21 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 15 Oct 2014 17:21:21 -0500 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: <543E960E.7010609@apeiron.net> References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> <53DF8F72.7080105@apeiron.net> <5400B888.4000508@apeiron.net> <543E960E.7010609@apeiron.net> Message-ID: Hi Geoff, Sorry about that - this one slipped by my radar amongst a lot of other things. I'm spending today in the review queue, so I'll get to it. On Wed, Oct 15, 2014 at 10:43 AM, Geoffrey Mainland wrote: > Hi Austin, > > I haven't heard back from you about this, and D302 has been waiting for > review in phab for two weeks. This seems to be a minor change, so if I > don't hear otherwise by the end of the week, I'm going to go ahead and > merge. > > Thanks, > Geoff > > On 08/29/2014 01:29 PM, Geoffrey Mainland wrote: >> Hi Austin, >> >> I've pushed wip/dph-fix branches to the dph and ghc repos. dph is not in >> Phabricator, so I didn't submit anything for review. I think this is >> small enough that we can probably just merge it directly, but it would >> be nice to have DPH in Phabricator eventually. >> >> I only validated on Linux x64. Is there an easy way for me to validate >> on other platforms? >> >> Thanks, >> Geoff >> >> On 08/04/2014 10:07 AM, Austin Seipp wrote: >>> On Mon, Aug 4, 2014 at 8:49 AM, Geoffrey Mainland wrote: >>>> I have patches for DPH that let it work with vector 0.11 as of a few >>>> months ago. I would be happy to submit them via phabricator if that is >>>> agreeable (we have to coordinate with the import of vector 0.11 >>>> though...I can instead leave them in a wip branch for Austin to merge as >>>> he sees fit). I am also willing to commit some time to keep DPH at least >>>> working in its current state. >>> That would be quite nice if you could submit patches to get it to >>> work! Thanks so much. >>> >>> As we've moved to submodules, having our own forks is becoming less >>> palatable; we'd like to start tracking upstream closely, and having >>> people submit changes there first and foremost. This creates a bit of >>> a lag time between changes, but I think this is acceptable (and most >>> of our maintainers are quite responsive to GHC needs!) 
>>> >>> It's also great you're willing to help maintain DPH a bit - but based >>> on what Ben said, it seems like a significant rewrite will happen >>> eventually. >>> >>> Geoff, here's my proposal: >>> >>> 1) I'll disable DPH for right now, so it won't pop up during >>> ./validate. This will probably happen today. >>> 2) We can coordinate the update of vector to 0.11, making it track >>> the official master. (Perhaps an email thread or even Skype would >>> work) >>> 3) We can fix DPH at the same time. >>> 4) Afterwords, we can re-enable it for ./validate >>> >>> If you submit Phabricator patches, that would be fantastic - we can >>> add the DPH repository to Phabricator with little issue. >>> >>> In the long run, I think we should sync up with Ben and perhaps Simon >>> & Co to see what will happen long-term for the DPH libraries. >>> >>>> Geoff >>>> >>>> On 8/4/14 8:18 AM, Ben Lippmeier wrote: >>>>> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: >>>>> >>>>>> Why? Because I'm afraid I just don't have any more patience for DPH, >>>>>> I'm tired of fixing it, and it takes up a lot of extra time to build, >>>>>> and time to maintain. >>>>> I'm not going to argue against cutting it lose. >>>>> >>>>> >>>>>> So - why are we still building it, exactly? >>>>> It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. >>>>> >>>>> >>>>>> [1] And by 'speak up', I mean I'd like to see someone actively step >>>>>> forward address my concerns above in a decisive manner. With patches. >>>>> I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. >>>>> >>>>> The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. >>>>> >>>>> The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If it the existing code has become a maintenance burden then it's fine to switch it off. >>>>> >>>>> Sorry for the trouble. >>>>> Ben. >>>>> >>> > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Wed Oct 15 23:27:21 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 15 Oct 2014 18:27:21 -0500 Subject: RFC: Source-markup language for GHC User's Guide In-Reply-To: References: <87y4srzz1w.fsf@gmail.com> <201410081049.33593.jan.stolarek@p.lodz.pl> <8738aylso6.fsf@gmail.com> Message-ID: OK, to kick this thread around: It seems like several people who touch the users guide are in favor of this due to: - Simpler markup - DocBook compatibility - Hopefully attracting more users if it's easier to manage. Cons: - +1 Dependency (minor) - No formal grammar (I don't think it has one either, re: Carter), but in ambiguous cases we can embed DocBook. And most other people seem completely neutral. Therefore: I'd say it's probably worth doing, since most people doing the editing are in favor, while most other people seem neutral. Does anyone else have strong opposition or other views? 
If not, I'd say we could get this over with before I create the STABLE branch. On Wed, Oct 8, 2014 at 4:40 PM, Carter Schonwald wrote: > does asciidoc have a formal grammar/syntax or whatever? i'm trying to look > up one, but can't seem to find it. > > > On Wed, Oct 8, 2014 at 7:14 AM, Herbert Valerio Riedel > wrote: >> >> On 2014-10-08 at 10:49:33 +0200, Jan Stolarek wrote: >> >> Therefore I'd like to hear your opinion on migrating away from the >> >> current Docbook XML markup to some other similarly expressive but yet >> >> more lightweight markup documentation system such as Asciidoc[1] or >> >> ReST/Sphinx[2]. >> >> > My opinion is that I don't really care. I only edit the User Guide >> > once every couple of months or so. I don't have problems with Docbook >> > but if others want something else I can adjust. >> >> I'd argue, that casual contributions may benefit significantly from >> switching to a more human-friendly markup, as my theory is that it's >> much easier to pick-up a syntax that's much closer to plain-text rather >> than a fully-fledged Docbook XML. With a closer-to-plain-text syntax you >> can more easily focus on the content you want to write rather than being >> distracted by the incidental complexity of writing low-level XML markup. >> >> Or put differently, I believe or rather hope this may lower the >> barrier-to-entry for casual User's Guide contributions. >> >> >> Fwiw, I stumbled over the slide-deck (obviously dogfooded in Asciidoc) >> >> >> http://mojavelinux.github.io/decks/discover-zen-writing-asciidoc/cojugs201305/index.html >> >> which tries to make the point that Asciidoc helps you focus more on >> writing content rather than fighting with the markup, including a >> comparision of the conciseness of a chosen example of Asciidoc vs. the >> resulting Docbook XML it is converted into. >> >> >> Cheers, >> hvr >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Thu Oct 16 08:08:15 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 16 Oct 2014 08:08:15 +0000 Subject: Merging a branch Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> Friends I have a branch, wip/new-flatten-skolems-Aug14, which has a long succession of work-in progress patches. Plus a couple of merges from HEAD. I'd like to completely re-organise the patches before committing to HEAD. How do I do that? Some kind of rebase? Clearly I want to start from current HEAD, rather than having weird merge patches involved. I was thinking of starting a new branch and doing a manual diff/patch, but that seems crude. I think that one other person (Iavor) has pulled from this branch, but he has not modified it. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Oct 16 08:15:08 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Thu, 16 Oct 2014 01:15:08 -0700 Subject: Merging a branch In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1413447196-sup-6418@sabre> You might try and run 'git rebase' (you can run 'git rebase --abort' if things get too hairy), which will remove the merge patches and put your patchset on HEAD. Unfortunately, if you've done nontrivial work resolving merge conflicts, rebase doesn't really know how to take advantage of that, so you'll have to redo it. Edward Excerpts from Simon Peyton Jones's message of 2014-10-16 01:08:15 -0700: > Friends > > I have a branch, wip/new-flatten-skolems-Aug14, which has a long succession of work-in progress patches. Plus a couple of merges from HEAD. > > I'd like to completely re-organise the patches before committing to HEAD. How do I do that? Some kind of rebase? Clearly I want to start from current HEAD, rather than having weird merge patches involved. > > I was thinking of starting a new branch and doing a manual diff/patch, but that seems crude. > > I think that one other person (Iavor) has pulled from this branch, but he has not modified it. > > Thanks > > Simon From jwlato at gmail.com Thu Oct 16 08:32:47 2014 From: jwlato at gmail.com (John Lato) Date: Thu, 16 Oct 2014 01:32:47 -0700 Subject: Merging a branch In-Reply-To: <1413447196-sup-6418@sabre> References: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> <1413447196-sup-6418@sabre> Message-ID: I think it's often useful to use git rebase -i for an interactive rebase, sometimes I even run it multiple times in succession. I usually start by squashing any adjacent commits that should be logically grouped together (one pass), then re-ordering and squashing again. This isolates any complicated edits that I had to perform in order to re-order patches. As Edward points out you'll have to redo edits of merge conflicts, unless you've enabled rerere: http://git-scm.com/blog/2010/03/08/rerere.html. You probably want to enable it. John L. On Thu, Oct 16, 2014 at 1:15 AM, Edward Z. Yang wrote: > You might try and run 'git rebase' (you can run 'git rebase --abort' if > things get too hairy), which will remove the merge patches > and put your patchset on HEAD. Unfortunately, if you've done nontrivial > work > resolving merge conflicts, rebase doesn't really know how to take > advantage of that, so you'll have to redo it. > > Edward > > Excerpts from Simon Peyton Jones's message of 2014-10-16 01:08:15 -0700: > > Friends > > > > I have a branch, wip/new-flatten-skolems-Aug14, which has a long > succession of work-in progress patches. Plus a couple of merges from HEAD. > > > > I'd like to completely re-organise the patches before committing to > HEAD. How do I do that? Some kind of rebase? Clearly I want to start > from current HEAD, rather than having weird merge patches involved. > > > > I was thinking of starting a new branch and doing a manual diff/patch, > but that seems crude. > > > > I think that one other person (Iavor) has pulled from this branch, but > he has not modified it. > > > > Thanks > > > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From mail at joachim-breitner.de  Thu Oct 16 09:24:10 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Thu, 16 Oct 2014 11:24:10 +0200
Subject: Merging a branch
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net>
References: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <1413451450.8047.1.camel@joachim-breitner.de>

Hi,

Am Donnerstag, den 16.10.2014, 08:08 +0000 schrieb Simon Peyton Jones:
> I'd like to completely re-organise the patches before committing to
> HEAD. How do I do that? Some kind of rebase? Clearly I want to
> start from current HEAD, rather than having weird merge patches
> involved.
>
> I was thinking of starting a new branch and doing a manual diff/patch,
> but that seems crude.

Not too crude, if your new set of (logically interesting) patches is
going to be completely different from your original set of
(historically grown) patches.

Here is one workflow for that:
(aren't you glad that you get so many different suggestions :-)

$ git checkout master

# We first create a branch that contains the final state of the files
# that you would like to push
$ git checkout -b tmp-merge-branch
$ git merge wip/new-flatten-skolems-Aug14
# resolve your conflicts here, once

# now you have a branch with your desired final state, but a messy
# history. We now create multiple nice patches from that state that,
# together, yield the same result:
$ git checkout master
$ git checkout --patch tmp-merge-branch
# now you can interactively select portions from your patch. Select
# those that you want in your first polished commit
$ emacs .... # do any additional cleanup of this commit, if required
$ git commit -a -m 'First patch'
$ git checkout --patch tmp-merge-branch
# Select parts for the second commit
$ emacs .... # do any additional cleanup of this commit, if required
$ git commit -a -m 'Second patch'
... repeat ...
# (in the final "git checkout --patch", you should have selected all
# changes)

# now master is identical to tmp-merge-branch, check this using
$ git diff master..tmp-merge-branch
$ git branch -D tmp-merge-branch
$ git push origin master


Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
  mail at joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de  • GPG-Key: 0xF0FBF51F
  Debian Developer: nomeata at debian.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From jan.stolarek at p.lodz.pl Thu Oct 16 10:47:25 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 16 Oct 2014 12:47:25 +0200 Subject: RTS validate failures on Linux Message-ID: <201410161247.25891.jan.stolarek@p.lodz.pl> Simon, I'm getting this error when validating HEAD on 64 bit Linux: =================== "inplace/bin/ghc-stage1" -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissi ng-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnest ed-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist -ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-O2 -optc-fom it-frame-pointer -optc-DRtsWay=\"rts_thr\" -static -optc-DTHREADED_RTS -H32m -O -Werror -Wall -H64m -O0 -Iincludes -Iincludes/dist -Iin cludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/posix/OSThreads.c -o rts/dist/build/posix/OSThreads.thr_o cc1: warnings being treated as errors rts/posix/OSThreads.c: In function ?createOSThread?: rts/posix/OSThreads.c:132:40: error: unused parameter ?name? gmake[1]: *** [rts/dist/build/posix/OSThreads.thr_o] Error 1 =================== I believe this might be a result of your commit 674c631ea111233daa929ef63500d75ba0db8858 from Friday. Janek From iavor.diatchki at gmail.com Thu Oct 16 20:28:44 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Thu, 16 Oct 2014 13:28:44 -0700 Subject: Merging a branch In-Reply-To: <1413451450.8047.1.camel@joachim-breitner.de> References: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> <1413451450.8047.1.camel@joachim-breitner.de> Message-ID: Hello, Simon, I wouldn't worry about my branch---my changes are fairly orthogonal, so it shouldn't be too hard for me to the same sort of operation on top of `master`, once your changes are in there. -Iavor On Thu, Oct 16, 2014 at 2:24 AM, Joachim Breitner wrote: > Hi, > > > Am Donnerstag, den 16.10.2014, 08:08 +0000 schrieb Simon Peyton Jones: > > I?d like to completely re-organise the patches before committing to > > HEAD. How do I do that? Some kind of rebase? Clearly I want to > > start from current HEAD, rather than having weird merge patches > > involved. > > > > I was thinking of starting a new branch and doing a manual diff/patch, > > but that seems crude. > > Not too crude, if your new set of (logically interesting) patches is > going to be completely different from your original set of (historically > grown) patches. > > Here is one workflow for that: > (aren?t you glad that you get so many different suggestions :-) > > $ git checkout master > > # We first create a branch that contains the final state of the files > # that you will like to push > $ git checkout -b tmp-merge-branch > $ git merge wip/new-flatten-skolems-Aug14 > # resolve your conflicts here, once > > # now you have a branch with your desired final state, but a messy > # history. 
We now create multiple nice patches from that state that, > # together, yield the same result: > $ git checkout master > $ git checkout --patch tmp-merge-branch > # now you can interactively select portions from your patch. Select > # those that you want in your first polished commit > $ emacs .... # do any additional cleanup of this commit, if required > $ git commit -a -m 'First patch' > $ git checkout --checkout tmp-merge-branch > # Select parts for the second commit > $ emacs .... # do any additional cleanup of this commit, if required > $ git commit -a -m 'Second patch' > ... repeat ... > # (in the final "git checkout --patch", you should have selected all > # changes) > > # now master is identical to tmp-merge-branch, check this using > $ git diff master..tmp-merge-branch > $ git branch -D tmp-merge-branch > $ git push origin master > > > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 16 20:43:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 16 Oct 2014 20:43:07 +0000 Subject: Windows build broken in Linker.c Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> Simon Aargh! I think the Windows build is broken again. I think this is your commit 5300099ed Admittedly this is on a branch I'm working on, but it's up to date with HEAD. And I have no touched Linker.c! Any ideas? Simon rts\Linker.c: In function 'allocateImageAndTrampolines': rts\Linker.c:3708:19: error: 'arch_name' undeclared (first use in this function) rts\Linker.c:3708:19: note: each undeclared identifier is reported only once for each function it appears in rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed make[1]: *** [rts/dist/build/Linker.o] Error 1 make[1]: *** Waiting for unfinished jobs.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Thu Oct 16 21:02:50 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 16 Oct 2014 16:02:50 -0500 Subject: RTS validate failures on Linux In-Reply-To: <201410161247.25891.jan.stolarek@p.lodz.pl> References: <201410161247.25891.jan.stolarek@p.lodz.pl> Message-ID: Hi Jan, This fix should work, although I can't reproduce it; I imagine you must not have pthread_setname_ap available. It's glibc 2.12+ only, while Debian oldstable only has glibc 2.11. https://phabricator.haskell.org/D344 Let me know if it works for you. 
On Thu, Oct 16, 2014 at 5:47 AM, Jan Stolarek wrote: > Simon, > > I'm getting this error when validating HEAD on 64 bit Linux: > > =================== > > "inplace/bin/ghc-stage1" -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissi > ng-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnest > ed-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist > -ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-O2 -optc-fom > it-frame-pointer -optc-DRtsWay=\"rts_thr\" -static -optc-DTHREADED_RTS -H32m -O -Werror -Wall -H64m -O0 -Iincludes -Iincludes/dist -Iin > cludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key > rts -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c > rts/posix/OSThreads.c -o rts/dist/build/posix/OSThreads.thr_o > > cc1: warnings being treated as errors > rts/posix/OSThreads.c: In function ?createOSThread?: > > rts/posix/OSThreads.c:132:40: error: unused parameter ?name? > gmake[1]: *** [rts/dist/build/posix/OSThreads.thr_o] Error 1 > > =================== > > I believe this might be a result of your commit 674c631ea111233daa929ef63500d75ba0db8858 from > Friday. > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From sprw121 at gmail.com Thu Oct 16 21:21:53 2014 From: sprw121 at gmail.com (Steven Wright) Date: Thu, 16 Oct 2014 14:21:53 -0700 Subject: Merging a branch In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F354C91@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I remember having a similiar issue once that I think I used git rebase -p. Unfortunately -p is incompatible with -i, but I think there's a way around this that I don't remember right now. On Thu, Oct 16, 2014 at 1:08 AM, Simon Peyton Jones wrote: > Friends > > > > I have a branch, wip/new-flatten-skolems-Aug14, which has a long > succession of work-in progress patches. Plus a couple of merges from HEAD. > > > > I?d like to completely re-organise the patches before committing to HEAD. > How do I do that? Some kind of rebase? Clearly I want to start from > current HEAD, rather than having weird merge patches involved. > > > > I was thinking of starting a new branch and doing a manual diff/patch, but > that seems crude. > > > > I think that one other person (Iavor) has pulled from this branch, but he > has not modified it. > > > > Thanks > > > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin at well-typed.com Thu Oct 16 21:32:15 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 16 Oct 2014 16:32:15 -0500 Subject: Windows build broken in Linker.c In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I see what's going on and am fixing it... The code broke 32-bit due to #ifdefery, but I think it can be removed, perhaps (which would be preferable). On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones wrote: > Simon > > Aargh! I think the Windows build is broken again. > > I think this is your commit 5300099ed > > Admittedly this is on a branch I?m working on, but it?s up to date with > HEAD. And I have no touched Linker.c! > > Any ideas? > > Simon > > > > rts\Linker.c: In function 'allocateImageAndTrampolines': > > > > rts\Linker.c:3708:19: > > error: 'arch_name' undeclared (first use in this function) > > > > rts\Linker.c:3708:19: > > note: each undeclared identifier is reported only once for each > function it appears in > > rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed > > make[1]: *** [rts/dist/build/Linker.o] Error 1 > > make[1]: *** Waiting for unfinished jobs.... > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Fri Oct 17 00:00:22 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 16 Oct 2014 17:00:22 -0700 Subject: Windows build broken in Linker.c In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54405C16.8000901@gmail.com> I was working on a fix yesterday but ran out of time. Frankly this code is a nightmare, every time I touch it it breaks on some platform - this time I validated on 64 bit Windows but not 32. Aargh indeed. On 16 Oct 2014 14:32, "Austin Seipp" > wrote: I see what's going on and am fixing it... The code broke 32-bit due to #ifdefery, but I think it can be removed, perhaps (which would be preferable). On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones > wrote: > Simon > > Aargh! I think the Windows build is broken again. > > I think this is your commit 5300099ed > > Admittedly this is on a branch I?m working on, but it?s up to date with > HEAD. And I have no touched Linker.c! > > Any ideas? > > Simon > > > > rts\Linker.c: In function 'allocateImageAndTrampolines': > > > > rts\Linker.c:3708:19: > > error: 'arch_name' undeclared (first use in this function) > > > > rts\Linker.c:3708:19: > > note: each undeclared identifier is reported only once for each > function it appears in > > rts/ghc.mk:236 : recipe for target 'rts/dist/build/Linker.o' failed > > make[1]: *** [rts/dist/build/Linker.o] Error 1 > > make[1]: *** Waiting for unfinished jobs.... > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From david.feuer at gmail.com  Fri Oct 17 03:15:33 2014
From: david.feuer at gmail.com (David Feuer)
Date: Thu, 16 Oct 2014 23:15:33 -0400
Subject: T3064 failures
Message-ID: 

I don't know what's going on, but T3064 is giving some substantial
performance trouble, making all the validations fail:

max_bytes_used value is too high:
    Expected    T3064(normal) max_bytes_used: 13251728 +/-20%
    Lower bound T3064(normal) max_bytes_used: 10601382
    Upper bound T3064(normal) max_bytes_used: 15902074
    Actual      T3064(normal) max_bytes_used: 17472560
    Deviation   T3064(normal) max_bytes_used: 31.9 %
bytes allocated value is too high:
    Expected    T3064(normal) bytes allocated: 385145080 +/-5%
    Lower bound T3064(normal) bytes allocated: 365887826
    Upper bound T3064(normal) bytes allocated: 404402334
    Actual      T3064(normal) bytes allocated: 1372924224
    Deviation   T3064(normal) bytes allocated: 256.5 %

I don't know what's allocating an extra gigabyte, but unless there's a
good reason, someone probably needs to tighten something up.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jan.stolarek at p.lodz.pl  Fri Oct 17 07:24:22 2014
From: jan.stolarek at p.lodz.pl (Jan Stolarek)
Date: Fri, 17 Oct 2014 09:24:22 +0200
Subject: T3064 failures
In-Reply-To: 
References: 
Message-ID: <201410170924.22533.jan.stolarek@p.lodz.pl>

This test also fails for me, although slightly differently:

max_bytes_used value is too high:
    Expected    T3064(normal) max_bytes_used: 13251728 +/-20%
    Lower bound T3064(normal) max_bytes_used: 10601382
    Upper bound T3064(normal) max_bytes_used: 15902074
    Actual      T3064(normal) max_bytes_used: 16313776
    Deviation   T3064(normal) max_bytes_used: 23.1 %
*** unexpected failure for T3064(normal)

I'm not getting the "bytes allocated value is too high" part.
Janek From mail at joachim-breitner.de Fri Oct 17 07:38:19 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 17 Oct 2014 09:38:19 +0200 Subject: T3064 failures In-Reply-To: References: Message-ID: <1413531499.1298.4.camel@joachim-breitner.de> Hi, Am Donnerstag, den 16.10.2014, 23:15 -0400 schrieb David Feuer: > I don't know what's going on, but T3064 is giving some substantial > performance trouble, making all the validations fail: > > max_bytes_used value is too high: > Expected T3064(normal) max_bytes_used: 13251728 +/-20% > Lower bound T3064(normal) max_bytes_used: 10601382 > Upper bound T3064(normal) max_bytes_used: 15902074 > Actual T3064(normal) max_bytes_used: 17472560 > Deviation T3064(normal) max_bytes_used: 31.9 % > bytes allocated value is too high: > Expected T3064(normal) bytes allocated: 385145080 +/-5% > Lower bound T3064(normal) bytes allocated: 365887826 > Upper bound T3064(normal) bytes allocated: 404402334 > Actual T3064(normal) bytes allocated: 1372924224 > Deviation T3064(normal) bytes allocated: 256.5 % > I can?t observe the bytes allocated too high on ghc-speed: Expected T3064(normal) max_bytes_used: 18744992 +/-20% Lower bound T3064(normal) max_bytes_used: 14995993 Upper bound T3064(normal) max_bytes_used: 22493991 Actual T3064(normal) max_bytes_used: 19011008 Deviation T3064(normal) max_bytes_used: 1.4 % Expected T3064(normal) peak_megabytes_allocated: 42 +/-20% Lower bound T3064(normal) peak_megabytes_allocated: 33 Upper bound T3064(normal) peak_megabytes_allocated: 51 Actual T3064(normal) peak_megabytes_allocated: 42 Expected T3064(normal) bytes allocated: 385145080 +/-5% Lower bound T3064(normal) bytes allocated: 365887826 Upper bound T3064(normal) bytes allocated: 404402334 Actual T3064(normal) bytes allocated: 383504280 Deviation T3064(normal) bytes allocated: -0.4 % http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2%2B68&ben=tests%2Falloc%2FT3064&env=1&revs=50&equid=on Are you sure you build with the exact same settings as validate would do? Also interesting that according to https://phabricator.haskell.org/harbormaster/ only builds of DRs are failing, the builds of recent commits to master succeed. Are these built with slightly different settings, or is that just by accident? Greetings, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From david.macek.0 at gmail.com Fri Oct 17 10:36:11 2014 From: david.macek.0 at gmail.com (David Macek) Date: Fri, 17 Oct 2014 12:36:11 +0200 Subject: Build on Windows fails Message-ID: <5440F11B.6070100@gmail.com> Hi all. My build fails, log: https://gist.github.com/elieux/2b2a074e03005f7754d6 I'm working on Windows x64, using msys2 with mingw64-x86_64 4.9.1. What I did: download the official 7.6.3 x64 build without included toolchain download x64 msys2, install x64 toolchain and set ghc to use it build Cabal, install Alex and Happy set paths to relevant binaries download http://www.haskell.org/ghc/dist/7.8.3/ghc-7.8.3-src.tar.xz extract cp mk/build.mk.sample mk/build.mk ./boot patch configure to remove the in-tree toolchain ./configure with some parameters make I have a PKGBUILD, but since I'm bootstrapping with the manually configured ghc, the environment is not 100% reproducible at the moment. 
Can you point me at what should I try changing to build successfully? -- David Macek From v.dijk.bas at gmail.com Fri Oct 17 12:34:28 2014 From: v.dijk.bas at gmail.com (Bas van Dijk) Date: Fri, 17 Oct 2014 14:34:28 +0200 Subject: One-shot semantics in GHC event manager In-Reply-To: <8761fnx01l.fsf@gmail.com> References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: Hi Ben, Austin, Is there any chance of Ben's event manager patch landing in GHC-7.8.4? Bas On 13 October 2014 21:05, Ben Gamari wrote: > Ben Gamari writes: > >> Andreas Voellmy writes: >> >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari wrote: >>> >>> Ah... so this is not useful to you. I guess we could add `loop` to >>> GHC.Event's export list. On the other hand, I like your LifeTime proposal >>> better and then no one needs `loop`, so let's try this first. >>> >> I have a first cut of this here [1]. It compiles but would be I shocked >> if it ran. All of the pieces are there but I need to change >> EventLifetime to a more efficient encoding (there's no reason why it >> needs to be more than an Int). >> > As it turns out the patch seems to get through the testsuite after a few > minor fixes. > > What other tests can I subject this to? I'm afraid I don't have the > access to any machine even close to the size of those that the original > event manager was tested on so checking for performance regressions will > be difficult. > > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From andreas.voellmy at gmail.com Fri Oct 17 12:59:54 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Fri, 17 Oct 2014 08:59:54 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: I haven't had a chance to dig into Ben's patch yet, but I expect it will accepted soon - I don't think the change will affect performance. Austin, would it be possible to get a relatively minor change to the event manager into 7.8.4? It may change a semi-public API under GHC.Event, but will not change any public API. What is our time window? Andi On Fri, Oct 17, 2014 at 8:34 AM, Bas van Dijk wrote: > Hi Ben, Austin, > > Is there any chance of Ben's event manager patch landing in GHC-7.8.4? > > Bas > > On 13 October 2014 21:05, Ben Gamari wrote: > > Ben Gamari writes: > > > >> Andreas Voellmy writes: > >> > >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari > wrote: > >>> > >>> Ah... so this is not useful to you. I guess we could add `loop` to > >>> GHC.Event's export list. On the other hand, I like your LifeTime > proposal > >>> better and then no one needs `loop`, so let's try this first. > >>> > >> I have a first cut of this here [1]. It compiles but would be I shocked > >> if it ran. All of the pieces are there but I need to change > >> EventLifetime to a more efficient encoding (there's no reason why it > >> needs to be more than an Int). > >> > > As it turns out the patch seems to get through the testsuite after a few > > minor fixes. > > > > What other tests can I subject this to? 
I'm afraid I don't have the > > access to any machine even close to the size of those that the original > > event manager was tested on so checking for performance regressions will > > be difficult. > > > > Cheers, > > > > - Ben > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Fri Oct 17 14:27:36 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 17 Oct 2014 09:27:36 -0500 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: The catch with such a change is that there is no macro to determine whether we're using 7.8.3 or 7.8.4, so it's harder for users to figure things out (they have to use `MIN_VERSION_base` from Cabal). But maybe that doesn'tm atter too much. So, yes, I think it's doable, but that's a sticky bit. Andreas - want me to go ahead and get you some hardware to test Ben's patch in the mean time? This way we'll at least not leave it hanging until the last moment... On Fri, Oct 17, 2014 at 7:59 AM, Andreas Voellmy wrote: > I haven't had a chance to dig into Ben's patch yet, but I expect it will > accepted soon - I don't think the change will affect performance. > > Austin, would it be possible to get a relatively minor change to the event > manager into 7.8.4? It may change a semi-public API under GHC.Event, but > will not change any public API. What is our time window? > > Andi > > On Fri, Oct 17, 2014 at 8:34 AM, Bas van Dijk wrote: >> >> Hi Ben, Austin, >> >> Is there any chance of Ben's event manager patch landing in GHC-7.8.4? >> >> Bas >> >> On 13 October 2014 21:05, Ben Gamari wrote: >> > Ben Gamari writes: >> > >> >> Andreas Voellmy writes: >> >> >> >>> On Sat, Oct 11, 2014 at 12:17 PM, Ben Gamari >> >>> wrote: >> >>> >> >>> Ah... so this is not useful to you. I guess we could add `loop` to >> >>> GHC.Event's export list. On the other hand, I like your LifeTime >> >>> proposal >> >>> better and then no one needs `loop`, so let's try this first. >> >>> >> >> I have a first cut of this here [1]. It compiles but would be I shocked >> >> if it ran. All of the pieces are there but I need to change >> >> EventLifetime to a more efficient encoding (there's no reason why it >> >> needs to be more than an Int). >> >> >> > As it turns out the patch seems to get through the testsuite after a few >> > minor fixes. >> > >> > What other tests can I subject this to? I'm afraid I don't have the >> > access to any machine even close to the size of those that the original >> > event manager was tested on so checking for performance regressions will >> > be difficult. 
>> > >> > Cheers, >> > >> > - Ben >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From kazu at iij.ad.jp Fri Oct 17 14:51:17 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Fri, 17 Oct 2014 23:51:17 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: References: Message-ID: <20141017.235117.27092370466503596.kazu@iij.ad.jp> Austin, > Andreas - want me to go ahead and get you some hardware to test Ben's > patch in the mean time? This way we'll at least not leave it hanging > until the last moment... I will also try this with two 20-core machines connected 10G on Monday. --Kazu From bgamari.foss at gmail.com Fri Oct 17 15:14:56 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Fri, 17 Oct 2014 11:14:56 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: <87egu6683z.fsf@gmail.com> Austin Seipp writes: > The catch with such a change is that there is no macro to determine > whether we're using 7.8.3 or 7.8.4, so it's harder for users to figure > things out (they have to use `MIN_VERSION_base` from Cabal). But maybe > that doesn'tm atter too much. So, yes, I think it's doable, but that's > a sticky bit. > Hmm, that is slightly sticky. I'm not sure what Bas thinks but IMHO it's not the end of the world if usb needs to disable event manager support in the 7.8 series. Whatever happens I want to make sure this is very well tested before it is merged. I'm still recovering from the shock of this change being so painless. The reason here may be that I've only tested against Linux. It would be good if someone with a Mac could run a validation. Same with BSD. On that note, are there plans to bring up a BSD test box for harbormaster? > Andreas - want me to go ahead and get you some hardware to test Ben's > patch in the mean time? This way we'll at least not leave it hanging > until the last moment... > Just so every is on the same page, I've taken and rebased the patch set on master and opened D347 to track it. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From the.dead.shall.rise at gmail.com Fri Oct 17 15:42:48 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Fri, 17 Oct 2014 17:42:48 +0200 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: Hi, On 17 October 2014 16:27, Austin Seipp wrote: > The catch with such a change is that there is no macro to determine > whether we're using 7.8.3 or 7.8.4, so it's harder for users to figure > things out (they have to use `MIN_VERSION_base` from Cabal). But maybe > that doesn'tm atter too much. So, yes, I think it's doable, but that's > a sticky bit. Recent versions of Cabal (1.20+) define a MIN_TOOL_VERSION macro similar to MIN_VERSION_. So you can use '#if MIN_TOOL_VERSION_ghc(7,8,4)' to detect GHC 7.8.4. 
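
To make that concrete, here is a minimal sketch of how library code could
gate on the bugfix release. It assumes the package is built with Cabal 1.20
or later (so that the macro is actually generated); the module and function
names are invented purely for illustration:

{-# LANGUAGE CPP #-}
-- Hypothetical compatibility helper. Only the macro name comes from the
-- discussion above; everything else is illustrative.
module EventCompat (hasLifetimeApi) where

-- | True when compiled by GHC 7.8.4 or later, i.e. when the patched
-- GHC.Event interface can be assumed to be available.
hasLifetimeApi :: Bool
#if MIN_TOOL_VERSION_ghc(7,8,4)
hasLifetimeApi = True
#else
hasLifetimeApi = False
#endif

Code that must also build with pre-1.20 Cabal would additionally need an
outer `#ifdef MIN_TOOL_VERSION_ghc` guard, since the macro simply does not
exist there.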
From austin at well-typed.com Fri Oct 17 16:32:34 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 17 Oct 2014 11:32:34 -0500 Subject: Request: Phab Differentials should include road maps In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F3363ED@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F3363ED@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Yes, I think what Richard wants is point #4 out of your list Simon: how you might follow the patch implementing it. I think this is a good idea, actually, and would help digest some patches more quickly. Richard, I just had a talk with Phabricator upstream about this, and I think this is certainly doable and can be added to our list of extensions. Here's how I would imagine what it would look like: Below the 'Summary' field, there is the 'Test Plan' field (as in D344). We can add another field, 'Patch Roadmap', that appears the same way (i.e. a bulk textedit form) and appears in the same area as well. How does that sound? On Tue, Oct 14, 2014 at 9:09 AM, Simon Peyton Jones wrote: > I frequently find myself asking for a different kind of road map: a wiki page saying > - what is the problem we are trying to solve > - what is the general approach for solving it > - what is the specification for what a GHC user (or maybe a GHC API client, > depending) would see? > - what is a road map for how the implementation is structured. > > We often have these wiki pages but not always. Simply reviewing a big blob of source-code diffs and trying to reconstruct the above four points is not much fun! Moreover the act of writing them can be fantastically illuminating. The StaticPtr stuff is a case in point. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Richard Eisenberg > | Sent: 14 October 2014 14:34 > | To: ghc-devs at haskell.org Devs > | Subject: Request: Phab Differentials should include road maps > | > | Hi devs, > | > | I have what I hope is a simple request: that patch submissions contain > | a "road map" describing the patch. I'll illustrate via example: I just > | took a quick look at D323, about updating the design of Uniques. > | Although this patch was fairly straightforward, I would have been > | helped by a comment somewhere saying "All the important changes are in > | Unique.lhs. The rest of the changes are simply propagating the new > | UniqueDomain type." Then, I would just look at the one file and skim > | the rest very briefly. The reason I'm requesting this comment from the > | patch author is that my assumption above -- that all the action is in > | Unique.lhs -- might be quite wrong. Maybe there's a really important > | (perhaps one-line) change elsewhere that deserves attention. Or, maybe > | there's a function/type in Unique.lhs that the patch author is very > | uncertain about and wants extra scrutiny. In any case, a few sentences > | at the top of the patch would help focus reviewers' time where the > | author thinks it is most neede d. > | > | What do we think? Is this a behavior we wish to adopt? > | > | Thanks! 
> | Richard > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From eir at cis.upenn.edu Fri Oct 17 16:34:29 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 17 Oct 2014 12:34:29 -0400 Subject: Request: Phab Differentials should include road maps In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F3363ED@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <78856067-C1E1-476F-B53E-B3739701F274@cis.upenn.edu> On Oct 17, 2014, at 12:32 PM, Austin Seipp wrote: > > Here's how I would imagine what it would look like: Below the > 'Summary' field, there is the 'Test Plan' field (as in D344). We can > add another field, 'Patch Roadmap', that appears the same way (i.e. a > bulk textedit form) and appears in the same area as well. How does > that sound? > Sounds perfect to me. I was actually thinking of your proposed approach, but didn't want to push for it as it requires more work than social enforcement would. Thanks, Richard From bgamari.foss at gmail.com Fri Oct 17 19:41:50 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Fri, 17 Oct 2014 15:41:50 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: References: <878uknxmt3.fsf@gmail.com> <8761frxkh4.fsf@gmail.com> <87y4smwpg6.fsf@gmail.com> <87k346wajj.fsf@gmail.com> <8761fnx01l.fsf@gmail.com> Message-ID: <8738am5vr5.fsf@gmail.com> Austin Seipp writes: > The catch with such a change is that there is no macro to determine > whether we're using 7.8.3 or 7.8.4, so it's harder for users to figure > things out (they have to use `MIN_VERSION_base` from Cabal). But maybe > that doesn'tm atter too much. So, yes, I think it's doable, but that's > a sticky bit. > Also, I should mention that as written the patch changes no exported interfaces. Instead of changing `registerFd` it adds an additional variant `registerFd'` which allows the user to specify a lifetime. That being said, I'm personally not terribly fond of adding these sorts of backwards compatibility variants unless really necessary. Given that this is such a low-visibility interface we may want to consider just modifying `registerFd` and avoid further polluting the namespace (this would be the third exported variant of `registerFd`). Thoughts? - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From austin at well-typed.com Sat Oct 18 00:25:51 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 17 Oct 2014 19:25:51 -0500 Subject: Warning on tabs by default (#9230) for GHC 7.10 Message-ID: Hi all, Please see here: https://phabricator.haskell.org/D255 and https://ghc.haskell.org/trac/ghc/ticket/9230 Making tabs warn by default has been requested many times before, and now that the compiler is completely detabbed, this should become possible to enable easily, and we can gradually remove warnings from everything else. Unless someone has huge complaints or this becomes a gigantic bikeshed/review (bike-review), please let me know - I would like this to go in for 7.10. 
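
As a purely illustrative aside (not part of the proposal itself): -fwarn-tabs
and its negation already exist today, so a module that deliberately keeps
literal tab characters could opt back out per file even if the default flips,
for example:

{-# OPTIONS_GHC -fno-warn-tabs #-}
-- Hypothetical module whose layout intentionally uses tab characters;
-- the pragma switches the (proposed) default-on warning back off for
-- this file only.
module TabbyExample (answer) where

answer :: Int
answer = 42

So the change should only cost offenders a one-line pragma, or a
-fno-warn-tabs in their build flags.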
-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From chengang31 at gmail.com Sat Oct 18 02:45:49 2014 From: chengang31 at gmail.com (cg) Date: Sat, 18 Oct 2014 10:45:49 +0800 Subject: Build on Windows fails In-Reply-To: <5440F11B.6070100@gmail.com> References: <5440F11B.6070100@gmail.com> Message-ID: On 10/17/2014 6:36 PM, David Macek wrote: > Hi all. My build fails, log: https://gist.github.com/elieux/2b2a074e03005f7754d6 > > I'm working on Windows x64, using msys2 with mingw64-x86_64 4.9.1. What I did: > [...] > > download http://www.haskell.org/ghc/dist/7.8.3/ghc-7.8.3-src.tar.xz > I had trouble building 7.8.3 under msys2 myself before, but it is okay to build ghc trunk. If you don't have to stick to a specific version, I would suggest you try trunk -- it is 7.9.2 at the moment. -- cg From chrisdone at gmail.com Sat Oct 18 15:48:48 2014 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 18 Oct 2014 17:48:48 +0200 Subject: Making GHCi awesomer? Message-ID: Good evening, So I?ve been working on Haskell user-facing tooling in general for some years. By that I mean the level of Emacs talking with Haskell tools. I wrote the interactive-haskell-mode (most functionality exists in this file ). which launches a GHCi process in a pipe and tries very earnestly to handle input/output with the process reasonably. For Emacs fanciers: Written in Elisp, there?s a nice command queue that you put commands onto, they will all be run on a FIFO one-by-one order, and eventually you?ll get a result back. Initially it was just me using it, but with the help of Herbert Riedel it?s now a mode on equal footing with the venerable inferior-haskell-mode all ye Emacs users know and love. It?s part of haskell-mode and can be enabled by enabling the interactive-haskell-mode minor mode. For years I?ve been using GHCi as a base and it?s been very reliable for almost every project I?ve done (the only exceptions are things like SDL and OpenGL, which are well known to be difficult to load in GHCi, at least on Linux). I think we?ve built up a good set of functionality purely based on asking GHCi things and getting it to do things. I literally use GHCi for everything. For type-checking, type info, I even send ?:!cabal build? to it. Everything goes through it. I love my GHCi. Now, I?m sort of at the end of the line of where I can take GHCi. Here are the problems as I see them today: 1. There is no programmatic means of communicating with the process. I can?t send a command and get a result cleanly, I have to regex match on the prompt, and that is only so reliable. At the moment we solve this by using \4 (aka ?END OF TRANSMISSION?). Also messages (warnings, errors, etc.) need to be parsed which is also icky, especially in the REPL when e.g. a defaulted Integer warning will mix with the output. Don?t get me started on handling multi-line prompts! Hehe. 2. GHCi, as a REPL, does not distinguish between stdout, stderr and the result of your evaluation. This can be problematic for making a smooth REPL UI, your results can often (with threading) be interspersed in unkind ways. I cannot mitigate this with any kind of GHCi trickery. 3. It forgets information when you reload. (I know this is intentional.) 4. Not enough information is exposed to the user. (Is there ever? ;) 5. There is a time-to-market overhead of contributing to GHCi ? if I want a cool feature, I can write it on a locally compiled version of GHC. 
But for the work projects I have, I?m restricted to given GHC versions, as are other people. They have to wait to get the good features. 6. This is just a personal point ? I?ve like to talk to GHCi over a socket, so that I can run it on a remote machine. Those familiar with Common Lisp will be reminded of SLIME and Swank. Examples for point 4 are: - Type of sub-expressions. - Go to definition of thing at point (includes local scope). - Local-scope completion. - A hoogle-like query (as seen in Idris recently). - Documentation lookup. - Suggest imports for symbols. - Show core for the current module. - Show CMM for the current module, ASM, etc. SLIME can do this. - Expand the template-haskell at point. - The :i command is amazingly useful, but programmatic access would be even better.? - Case split anyone? - Etc. ?I?ve integrated with it in Emacs so that I can C-c C-i any identifier and it?ll popup a buffer with the :i result and then within that buffer I can drill down further with C-c C-i again. It makes for very natural exploration of a type. You?ve seen some of these features in GHC Mod, in hdevtools, in the FP Haskell Center, maybe some are in Yi, possibly also in Leksah (?). So in light of point (5), I thought: I?ve used the GHC API before, it can do interactive evaluation, why not write a project like ?ghc-server? which encodes all these above ideas as a ?drop-in? replacement for GHCi? After all I could work on my own without anybody getting my way over architecture decisions, etc. And that?s what I did. It?s here . Surprisingly, it kind of works. You run it in your directoy like you would do ?cabal repl? and it sets up all the extensions and package dependencies and starts accepting connections. It will compile across three major GHC versions. Hurray! Rub our hands together and call it done, right? Sadly not, the trouble is twofold: 1. The first problem with this is that every three projects will segfault or panic when trying to load in a project that GHCi will load in happily. The reasons are mysterious to me and I?ve already lugged over the GHC API to get to this point, so that kind of thing happening means that I have to fall back to my old GHCi-based setup, and is disappointing. People have similar complaints of GHC Mod & co. ?Getting it to work? is a deterrant. 2. While this would be super beneficial for me, and has been a good learning experience for ?what works and what doesn?t?, we end up with yet another alternative tool, that only a few people are using. 3. There are just certain behaviours and fixes here and there that GHCi does that take time to reproduce. So let?s go back to the GHCi question: is there still a development overhead for adding features to GHCi? Yes, new ideas need acceptance and people have to wait (potentially a year) for a new feature that they could be using right now. An alternative method is to do what Herbert did which is to release a ?ghci-ng? which sports new shiny features that people (with the right GHC version) will be able to compile and use as a drop-in for GHCi. It?s the same codebase, but with more stuff! An example is the ?:complete? command, this lets IDE implementers do completion at least at the REPL level. Remember the list of features earlier? Why are they not in GHCi? So, of course, this got me thinking that I could instead make ghc-server be based off of GHCi?s actual codebase. I could rebase upon the latest GHC release and maintain 2-3 GHC versions backwards. That?s certainly doable, it would essentially give me ?GHCi++?. 
Good for me, I just piggy back on the GHCi goodness and then use the GHC API for additional things as I?m doing now. But is there a way I can get any of this into the official repo? For example, could I hack on this (perhaps with Herbert) as ?ghci-ng?, provide an alternative JSON communication layer (e.g. via some ?use-json flag) and and socket listener (?listen-on ), a way to distinguish stdout/stderr (possibly by forking a process, unsure at this stage), and then any of the above features (point 4) listed. I make sure that I?m rebasing upon HEAD, as if to say ghci-ng is a kind of submodule, and then when release time comes we merge back in any new stuff since the last release. Early adopters can use ghci-ng, and everyone benefits from official GHC releases. The only snag there is that, personally speaking, it would be better if ghci-ng would compile on older GHC versions. So if GHC 7.10 is the latest release, it would still be nice (and it *seems* pretty feasible) that GHC 7.8 users could still cabal install it without issue. People shouldn?t have to wait if they don?t have to. Well, that?s everything. Thoughts? Ciao! ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Sat Oct 18 15:56:07 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 18 Oct 2014 17:56:07 +0200 Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: I think there is currently a more general interest in this, and the ghc-mod guys are thinking on similar lines, see https://github.com/kazu-yamamoto/ghc-mod/issues/349 Alan On Sat, Oct 18, 2014 at 5:48 PM, Christopher Done wrote: > Good evening, > > So I?ve been working on Haskell user-facing tooling in general for > some years. By that I mean the level of Emacs talking with Haskell > tools. > > I wrote the interactive-haskell-mode (most functionality exists > in this file > > ). > which launches a GHCi process in a pipe and tries very earnestly to > handle input/output with the process reasonably. > > For Emacs fanciers: Written in Elisp, there?s a nice command queue > that you put commands onto, they will all be run on a FIFO one-by-one > order, and eventually you?ll get a result back. Initially it was just > me using it, but with the help of Herbert Riedel it?s now a mode on > equal footing with the venerable inferior-haskell-mode all ye Emacs > users know and love. It?s part of haskell-mode and can be enabled by > enabling the interactive-haskell-mode minor mode. > > For years I?ve been using GHCi as a base and it?s been very reliable > for almost every project I?ve done (the only exceptions are things > like SDL and OpenGL, which are well known to be difficult to load in > GHCi, at least on Linux). I think we?ve built up > a good set of functionality > > purely based on asking GHCi things and getting it to do things. > > I literally use GHCi for everything. For type-checking, type info, I > even send ?:!cabal build? to it. Everything goes through it. I love my > GHCi. > > Now, I?m sort of at the end of the line of where I can take GHCi. Here > are the problems as I see them today: > > 1. There is no programmatic means of communicating with the > process. I can?t send a command and get a result cleanly, I have to > regex match on the prompt, and that is only so reliable. At the > moment we solve this by using \4 (aka ?END OF TRANSMISSION?). Also > messages (warnings, errors, etc.) need to be parsed which is also > icky, especially in the REPL when e.g. 
a defaulted Integer warning > will mix with the output. Don?t get me started on handling > multi-line prompts! Hehe. > 2. GHCi, as a REPL, does not distinguish between stdout, stderr and > the result of your evaluation. This can be problematic for making a > smooth REPL UI, your results can often (with threading) be > interspersed in unkind ways. I cannot mitigate this with any kind > of GHCi trickery. > 3. It forgets information when you reload. (I know this is > intentional.) > 4. Not enough information is exposed to the user. (Is there ever? ;) > 5. There is a time-to-market overhead of contributing to GHCi ? if I > want a cool feature, I can write it on a locally compiled version > of GHC. But for the work projects I have, I?m restricted to given > GHC versions, as are other people. They have to wait to get the > good features. > 6. This is just a personal point ? I?ve like to talk to GHCi over a > socket, so that I can run it on a remote machine. Those familiar > with Common Lisp will be reminded of SLIME and Swank. > > Examples for point 4 are: > > - Type of sub-expressions. > - Go to definition of thing at point (includes local scope). > - Local-scope completion. > - A hoogle-like query (as seen in Idris recently). > - Documentation lookup. > - Suggest imports for symbols. > - Show core for the current module. > - Show CMM for the current module, ASM, etc. SLIME can do this. > - Expand the template-haskell at point. > - The :i command is amazingly useful, but programmatic access would be > even better.? > - Case split anyone? > - Etc. > > ?I?ve integrated with it in Emacs so that I can C-c C-i any identifier > and it?ll popup a buffer with the :i result and then within that > buffer I can drill down further with C-c C-i again. It makes for > very natural exploration of a type. > > You?ve seen some of these features in GHC Mod, in hdevtools, in the FP > Haskell > Center, maybe some are in Yi, possibly also in Leksah (?). > > So in light of point (5), I thought: I?ve used the GHC API before, it > can do interactive evaluation, why not write a project like > ?ghc-server? which encodes all these above ideas as a ?drop-in? > replacement for GHCi? After all I could work on my own without anybody > getting my way over architecture decisions, etc. > > And that?s what I did. It?s > here . Surprisingly, it kind of > works. You run it in your directoy like you would do ?cabal repl? > and it sets up all the extensions and package dependencies and starts > accepting connections. It will compile across three major GHC > versions. Hurray! Rub our hands together and call it done, right? > Sadly not, the trouble is twofold: > > 1. The first problem with this is that every three projects will > segfault or panic when trying to load in a project that GHCi will > load in happily. The reasons are mysterious to me and I?ve already > lugged over the GHC API to get to this point, so that kind of thing > happening means that I have to fall back to my old GHCi-based > setup, and is disappointing. People have similar complaints of GHC > Mod & co. ?Getting it to work? is a deterrant. > 2. While this would be super beneficial for me, and has been a good > learning experience for ?what works and what doesn?t?, we end up > with yet another alternative tool, that only a few people are > using. > 3. There are just certain behaviours and fixes here and there that > GHCi does that take time to reproduce. > > So let?s go back to the GHCi question: is there still a development > overhead for adding features to GHCi? 
Yes, new ideas need acceptance > and people have to wait (potentially a year) for a new feature that > they could be using right now. > > An alternative method is to do what Herbert did which is to release a > ?ghci-ng? which sports > new shiny features that people (with the right GHC version) will be > able to compile and use as a drop-in for GHCi. It?s the same codebase, > but with more stuff! An example is the ?:complete? command, this lets > IDE implementers do completion at least at the REPL level. Remember > the list of features earlier? Why are they not in GHCi? > > So, of course, this got me thinking that I could instead make > ghc-server be based off of GHCi?s actual codebase. I could rebase upon > the latest GHC release and maintain 2-3 GHC versions backwards. That?s > certainly doable, it would essentially give me ?GHCi++?. Good for me, > I just piggy back on the GHCi goodness and then use the GHC API for > additional things as I?m doing now. > > But is there a way I can get any of this into the official repo? For > example, could I hack on this (perhaps with Herbert) as ?ghci-ng?, > provide an alternative JSON communication layer (e.g. via some > ?use-json flag) and and socket listener (?listen-on ), a way > to distinguish stdout/stderr (possibly by forking a process, unsure at > this stage), and then any of the above features (point 4) listed. I > make sure that I?m rebasing upon HEAD, as if to say ghci-ng is a kind > of submodule, and then when release time comes we merge back in any > new stuff since the last release. Early adopters can use > ghci-ng, and everyone benefits from official GHC releases. > > The only snag there is that, personally speaking, it would be better > if ghci-ng would compile on older GHC versions. So if GHC 7.10 is the > latest release, it would still be nice (and it *seems* pretty > feasible) that GHC 7.8 users could still cabal install it without > issue. People shouldn?t have to wait if they don?t have to. > > Well, that?s everything. Thoughts? > > Ciao! > ? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Sat Oct 18 17:05:49 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sat, 18 Oct 2014 18:05:49 +0100 Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: <54429DED.1030805@fuuzetsu.co.uk> On 10/18/2014 04:48 PM, Christopher Done wrote: > Good evening, > > So I?ve been working on Haskell user-facing tooling in general for > some years. By that I mean the level of Emacs talking with Haskell > tools. > > [snip] > > You?ve seen some of these features in GHC Mod, in hdevtools, in the FP > Haskell > Center, maybe some are in Yi, possibly also in Leksah (?). Currently any Yi support for such things is either poor or not present. We currently also just talk to the REPL and parse stuff out. The upside is that it is possible for us to talk to any Haskell stuff natively (including GHC API/ghc-mod) so we don't need to depend as much on GHCi as emacs or other editors, at least in theory. > [snip] > > Well, that?s everything. Thoughts? > > Ciao! Sounds interesting. 
My only request/comment is that I hope whatever conclusion you come to, the library part of it will be usable just as much (or even more) as the executable: if we can talk to the library natively then that's much easier than talking to some remote socket and parsing out data. -- Mateusz K. From dxld at darkboxed.org Sat Oct 18 17:28:22 2014 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Sat, 18 Oct 2014 19:28:22 +0200 (CEST) Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: <20141018.192822.2031996827257086812.dxld@darkboxed.org> From: Christopher Done Subject: Making GHCi awesomer? Date: Sat, 18 Oct 2014 17:48:48 +0200 > 1. The first problem with this is that every three projects will > segfault or panic when trying to load in a project that GHCi will > load in happily. [...] People have similar complaints of GHC Mod > & co. ?Getting it to work? is a deterrant. Do you have any examples of such projects, I've never seen any complaints about ghc-mod doing this. > So, of course, this got me thinking that I could instead make > ghc-server be based off of GHCi?s actual codebase. I could rebase upon > the latest GHC release and maintain 2-3 GHC versions backwards. That?s > certainly doable, it would essentially give me ?GHCi++?. Good for me, > I just piggy back on the GHCi goodness and then use the GHC API for > additional things as I?m doing now. I had that idea too for ghc-mod unfortunately it's not so easy as ghci's internal API mostly consists of functions that only have side effects (i.e. don't return anything you can process further) :/ > But is there a way I can get any of this into the official repo? For > example, could I hack on this (perhaps with Herbert) as ?ghci-ng?, > provide an alternative JSON communication layer (e.g. via some > ?use-json flag) and and socket listener (?listen-on ), a way > to distinguish stdout/stderr (possibly by forking a process, unsure at > this stage), and then any of the above features (point 4) listed. I > make sure that I?m rebasing upon HEAD, as if to say ghci-ng is a kind > of submodule, and then when release time comes we merge back in any > new stuff since the last release. Early adopters can use > ghci-ng, and everyone benefits from official GHC releases. > > The only snag there is that, personally speaking, it would be better > if ghci-ng would compile on older GHC versions. So if GHC 7.10 is the > latest release, it would still be nice (and it *seems* pretty > feasible) that GHC 7.8 users could still cabal install it without > issue. People shouldn?t have to wait if they don?t have to. Sounds awesome I'd love to get in on this :) --Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From dxld at darkboxed.org Sat Oct 18 17:30:46 2014 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Sat, 18 Oct 2014 19:30:46 +0200 (CEST) Subject: Making GHCi awesomer? In-Reply-To: <54429DED.1030805@fuuzetsu.co.uk> References: <54429DED.1030805@fuuzetsu.co.uk> Message-ID: <20141018.193046.2267836332693018368.dxld@darkboxed.org> From: Mateusz Kowalczyk Subject: Re: Making GHCi awesomer? Date: Sat, 18 Oct 2014 18:05:49 +0100 > Sounds interesting. 
My only request/comment is that I hope whatever > conclusion you come to, the library part of it will be usable just as > much (or even more) as the executable: if we can talk to the library > natively then that's much easier than talking to some remote socket and > parsing out data. I agree! We should factor out useful bits in ghci into a library that can be used by other tools too since there's quite a lot of logic and workarounds in ghci that tools have to copy otherwise. --Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Oct 18 17:43:59 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 18 Oct 2014 13:43:59 -0400 Subject: GHC 7.8.4: call for tickets, show stoppers, and timelines - oh my! In-Reply-To: References: Message-ID: would https://ghc.haskell.org/trac/ghc/ticket/9284 be good candidate for 7.8.4 ? It looks like its the only forkProcess related bug fix that wasnt merged into 7.8.3, and impacts OS X On Mon, Oct 13, 2014 at 12:37 PM, Austin Seipp wrote: > Hi *, > > After some discussion with Simon & Mikolaj today, I'd like to direct > you all at this: > > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4 > > This status page is the basic overview of what we plan on doing for > 7.8.4. There are two basic components to this page: > > - Show stopping bugs. > - Everything else, which is "nice to have". > > Show stoppers are listed at the top of the page, in the first > paragraph. Right now, this includes: > > - #9439 - LLVM mangling too vigorously. > - #8819 - Arithmetic failures for unregistered systems > - #8690 - SpecConstr blow-up > > And that's all. But what's all the other stuff? That's "everything else". > > Aside from these tickets listed here - and any future amendments to it > - all other tickets will only be considered nice-to-have. What does > that mean? > > - It's low risk to include. > - It clearly fixes the problem > - It doesn't take Austin significant amounts of time to merge. > > For example, "Tickets marked merge with no milestone" are all > nice-to-have. Similarly, all the *closed tickets* on this page may be > re-opened and merged again[1], since most didn't make it to 7.8.4. > > Ditto with the remaining categories. > > OK, so that's the gist. Now I ask of you the following: > > - If you have a show-stopping bug with GHC 7.8.3, **you really, > _positively_ need to file a bug, and get in contact with me ASAP**. > Otherwise you'll be waiting for 7.10 most likely. > - Again: if you have a show stopper, contact me. Very soon. > - If there are bugs you *think* are showstoppers, but we didn't > categorize them properly, let me know. > > Anything we accept as a show-stopper will delay the release of 7.8.4. > Anything else can (and possibly will) be left behind. Luckily, almost > all of the show stoppers have patches. Only #8819 does not, but I have > asked Sergei to look into it for me if he has time today. > > Finally, I would please ask that users/developers do not include their > own personal pet tickets under "show stoppers" without consulting me > first, at least. :) If it's just nice to have, you can still pester > me, of course, and I'll try to make it happen. > > I would like to have 7.8.4 out and done with by mid November, before > we freeze the new STABLE branch for 7.10.1. That's not a hard > deadline; just a timeframe I'd like to hit. 
> > Let me know if you have any questions or comments; thanks! > > [1] A lot of the closed tickets on this page had an improper milestone > set, which is why they show up. You can mostly ignore them, I > apologize. > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Sat Oct 18 17:52:48 2014 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 18 Oct 2014 19:52:48 +0200 Subject: Making GHCi awesomer? In-Reply-To: <54429DED.1030805@fuuzetsu.co.uk> References: <54429DED.1030805@fuuzetsu.co.uk> Message-ID: On 18 October 2014 19:05, Mateusz Kowalczyk wrote: > Sounds interesting. My only request/comment is that I hope whatever > conclusion you come to, the library part of it will be usable just as > much (or even more) as the executable: if we can talk to the library > natively then that's much easier than talking to some remote socket and > parsing out data. > I'm not sure what GHC HQ would think of GHCi exposing an API, but certainly I'm for it. Yi would have no reason to talk via JSON API if it could just call the functions in a type-safe way directly, or at least have an API that does the talking to the remote socket so that you don't have to. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Sat Oct 18 17:59:24 2014 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 18 Oct 2014 19:59:24 +0200 Subject: Making GHCi awesomer? In-Reply-To: <20141018.192822.2031996827257086812.dxld@darkboxed.org> References: <20141018.192822.2031996827257086812.dxld@darkboxed.org> Message-ID: On 18 October 2014 19:28, Daniel Gr?ber wrote: > Do you have any examples of such projects, I've never seen any > complaints about ghc-mod doing this. > I haven't used ghc-mod enough to have a crash happen to me. I couldn't get it to work the times I'd tried it and others make this complaint. Whereas GHCi works for everyone! > Sounds awesome I'd love to get in on this :) > Herbert doesn't have time to hack on it, but was encouraging about continuing with ghci-ng. I'm thinking to try forward-porting ghci-ng to GHC 7.8, or otherwise extracting GHC 7.8's GHCi again and then backporting it to 7.6. (Under the assumption that current + past is a reasonable number of GHCs to support.) I'm going to experiment with the JSON interface and I'll report back with results. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dxld at darkboxed.org Sat Oct 18 18:10:35 2014 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Sat, 18 Oct 2014 20:10:35 +0200 (CEST) Subject: Making GHCi awesomer? In-Reply-To: References: <20141018.192822.2031996827257086812.dxld@darkboxed.org> Message-ID: <20141018.201035.817184817704903582.dxld@darkboxed.org> From: Christopher Done Subject: Re: Making GHCi awesomer? Date: Sat, 18 Oct 2014 19:59:24 +0200 > I haven't used ghc-mod enough to have a crash happen to me. I couldn't get > it to work the times I'd tried it and others make this complaint. Whereas > GHCi works for everyone! I didn't mean ghc-mod specifically. I was wondering which projects caused this problem for you (or others) when using ghc-server (or something else) as I'd like to try if it happens with ghc-mod too. 
> Herbert doesn't have time to hack on it, but was encouraging about > continuing with ghci-ng. Yeah I saw that conversation on IRC. > I'm thinking to try forward-porting ghci-ng to GHC 7.8, or otherwise > extracting GHC 7.8's GHCi again and then backporting it to > 7.6. (Under the assumption that current + past is a reasonable > number of GHCs to support.) I'm going to experiment with the JSON > interface and I'll report back with results. Cool :) --Daniel -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From hvriedel at gmail.com Sat Oct 18 20:36:20 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 18 Oct 2014 22:36:20 +0200 Subject: Making GHCi awesomer? In-Reply-To: (Christopher Done's message of "Sat, 18 Oct 2014 19:59:24 +0200") References: <20141018.192822.2031996827257086812.dxld@darkboxed.org> Message-ID: <87ppdpcdyz.fsf@gmail.com> On 2014-10-18 at 19:59:24 +0200, Christopher Done wrote: [...] > Herbert doesn't have time to hack on it, but was encouraging about > continuing with ghci-ng. Yeah, it's quite convenient to hack on GHCi that way as it's just an ordinary Cabal package (so it doesn't require setting up a GHC source-tree and wrangling with the GHC build-system), if you're lucky enough (which is most of the time) that the parts you want to tweak don't require changing the GHC API. > I'm thinking to try forward-porting ghci-ng to GHC 7.8, Iirc all of the deltas in ghci-ng-7.6 relative to GHC 7.6.3 landed in GHC 7.8.1, so extracting the latest GHCi frontend code would probably be better. > or otherwise extracting GHC 7.8's GHCi again > and then backporting it > to 7.6. Fwiw, I set up the ghci-ng .cabal files in such a way that if you 'cabal install ghci-ng' with a GHC 7.4.x, you'd get a ghci-ng-7.4.2.1, while when on GHC 7.6.x, ghci-ng-7.6.3.5 would be selected. Supporting multiple major versions of the GHC API simultaneously in the same code-base could prove to be rather tedious (and make it more difficult to extract clean patches to merge back into GHC HEAD). But this is only speculation on my part, so your mileage may vary.... > (Under the assumption that current + past is a reasonable number of GHCs to support.) I'm going to experiment with the JSON interface and I'll report back with results. You may want to be careful with the build-deps though; e.g. if you use JSON and want this to be merged back into GHC HEAD at some point, we may need something lighter than the usual go-to JSON implementation `aeson` in terms of build-deps... PS: I've added you to http://hackage.haskell.org/package/ghci-ng/maintainers/, just in case...
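PPS: to give a feel for how small the wire-format side can be kept, a String-only JSON emitter needs nothing beyond base; something along these lines (an untested sketch with made-up names, not a proposal for the real module) would be enough for shipping tagged result messages:

  module GhciJson (Value (..), encode) where   -- hypothetical module name

  import Data.Char (isControl, ord)
  import Data.List (intercalate)
  import Numeric (showHex)

  -- Just enough JSON to ship messages like {"tag":"result","text":"..."}.
  data Value
    = JNull
    | JBool Bool
    | JNumber Double
    | JString String
    | JArray [Value]
    | JObject [(String, Value)]

  encode :: Value -> String
  encode JNull         = "null"
  encode (JBool True)  = "true"
  encode (JBool False) = "false"
  encode (JNumber n)   = show n
  encode (JString s)   = quote s
  encode (JArray xs)   = "[" ++ intercalate "," (map encode xs) ++ "]"
  encode (JObject kvs) = "{" ++ intercalate "," (map pair kvs) ++ "}"
    where pair (k, v)  = quote k ++ ":" ++ encode v

  -- Escape the characters JSON requires us to escape.
  quote :: String -> String
  quote s = "\"" ++ concatMap esc s ++ "\""
    where
      esc '"'  = "\\\""
      esc '\\' = "\\\\"
      esc '\n' = "\\n"
      esc '\r' = "\\r"
      esc '\t' = "\\t"
      esc c
        | isControl c = "\\u" ++ pad (showHex (ord c) "")
        | otherwise   = [c]
      pad h = replicate (4 - length h) '0' ++ h

Parsing what the editor sends back is a little more work, of course, but still well within one small module.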
From carter.schonwald at gmail.com Sat Oct 18 20:52:53 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 18 Oct 2014 16:52:53 -0400 Subject: panic when building ghc head Message-ID: hey all, when doing a devel1 build i got the following panic "inplace/bin/genapply" >rts/dist/build/AutoApply.cmm"inplace/bin/ghc-stage1" -static -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.o"inplace/bin/ghc-stage1" -fPIC -dynamic -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.dyn_o"inplace/bin/ghc-stage1" -static -eventlog -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.l_o"inplace/bin/ghc-stage1" -static -optc-DDEBUG -ticky -DTICKY_TICKY -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -O0 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.debug_o*** Core Lint warnings : in result of Desugar (after optimization) ***{-# LINE 101 "libraries/ghc-prim/GHC/Classes.hs #-}: Warning: [RHS of $c/=_a1qs :: GHC.Types.Float -> GHC.Types.Float -> GHC.Types.Bool] INLINE binder is (non-rule) loop breaker: $c/=_a1qs{-# LINE 104 "libraries/ghc-prim/GHC/Classes.hs #-}: Warning: [RHS of $c/=_a1ql :: GHC.Types.Double -> GHC.Types.Double -> GHC.Types.Bool] INLINE binder is (non-rule) loop breaker: $c/=_a1ql{-# LINE 85 "libraries/ghc-prim/GHC/Classes.hs #-}: Warning: [RHS of $c/=_a1qZ :: forall a_a1X. GHC.Classes.Eq a_a1X => [a_a1X] -> [a_a1X] -> GHC.Types.Bool] INLINE binder is (non-rule) loop breaker: $c/=_a1qZ ghc-stage1: panic! (the 'impossible' happened) (GHC version 7.9.20141018 for x86_64-apple-darwin): tyConAppTyCon a_12 Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug any ideas of how I could debug this? Or what it might be? -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Sat Oct 18 21:01:27 2014 From: chrisdone at gmail.com (Christopher Done) Date: Sat, 18 Oct 2014 23:01:27 +0200 Subject: Making GHCi awesomer? 
In-Reply-To: <87ppdpcdyz.fsf@gmail.com> References: <20141018.192822.2031996827257086812.dxld@darkboxed.org> <87ppdpcdyz.fsf@gmail.com> Message-ID: On 18 October 2014 22:36, Herbert Valerio Riedel wrote: Yeah, it's quite convenient to hack on GHCi that way as it's just an > ordinary Cabal package (so it doesn't require to setup a GHC source-tree > and wrangle with the GHC build-system), if you're lucky enough (which is > most of the time) that the parts you want to tweak don't require > changing the GHC API. > Right, so far my work on ghc-server has all been doable as far back as GHC 7.2. Iirc all of the deltas in ghci-ng-7.6 relative to GHC 7.6.3 landed in > GHC 7.8.1, so extracting the latest GHCi frontend code would be probably > better. > Okies! Supporting multiple major-versions of the GHC API simultanously in the > same code-base could prove to be rather tedious (and make it more > difficult to extract clean patches to merge back into GHC HEAD). But > this is only speculation on my part, so your mileage may vary.... > It hasn?t been too tedious to support old versions at least on ghc-server ? I went back as far as 7.2, but GHC 7.6 for example is very similar to 7.8 so kind of comes ?for free?. Makes sense, really. One major version bump to another is rather passable, it?s when going a few versions back that it becomes tedious. At least in my experience. I?ll see anyway. You may want to be careful with the build-deps though; e.g. if you use > JSON and want this to be merged back into GHC HEAD at some point, we may > need something lighter than the usual go-to JSON implementation `aeson` > in terms of build-deps... > Indeed, I was considering extracting and embedding a simple parser/printer from the old json package (remember that?). Served me well for years before aeson usurped it. :-) I think it can be reduced down to one module that operators on Strings. PS: I've added you to > http://hackage.haskell.org/package/ghci-ng/maintainers/, just in > case... > Thanks! ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Sat Oct 18 22:30:45 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sat, 18 Oct 2014 23:30:45 +0100 Subject: Warning on tabs by default (#9230) for GHC 7.10 In-Reply-To: References: Message-ID: <5442EA15.3000408@fuuzetsu.co.uk> On 10/18/2014 01:25 AM, Austin Seipp wrote: > Hi all, > > Please see here: > > https://phabricator.haskell.org/D255 and > https://ghc.haskell.org/trac/ghc/ticket/9230 > > Making tabs warn by default has been requested many times before, and > now that the compiler is completely detabbed, this should become > possible to enable easily, and we can gradually remove warnings from > everything else. > > Unless someone has huge complaints or this becomes a gigantic > bikeshed/review (bike-review), please let me know - I would like this > to go in for 7.10. > On Phabricator I see a diff which adds a suppression for the warning to GHC. Is this necessary considering you say GHC is now fully detabbed? -- Mateusz K. From austin at well-typed.com Sat Oct 18 22:48:26 2014 From: austin at well-typed.com (Austin Seipp) Date: Sat, 18 Oct 2014 17:48:26 -0500 Subject: Warning on tabs by default (#9230) for GHC 7.10 In-Reply-To: <5442EA15.3000408@fuuzetsu.co.uk> References: <5442EA15.3000408@fuuzetsu.co.uk> Message-ID: The boot libraries have not been detabbed, and that's something we can't immediately fix. 
However, the warnings being on by default means people should feel the burn to fix it quickly, I hope, and we can just update all of our submodules accordingly. I did notice however in the diff that I missed the fact `hsc2hs` has also not been detabbed. That can be fixed immediately, however. On Sat, Oct 18, 2014 at 5:30 PM, Mateusz Kowalczyk wrote: > On 10/18/2014 01:25 AM, Austin Seipp wrote: >> Hi all, >> >> Please see here: >> >> https://phabricator.haskell.org/D255 and >> https://ghc.haskell.org/trac/ghc/ticket/9230 >> >> Making tabs warn by default has been requested many times before, and >> now that the compiler is completely detabbed, this should become >> possible to enable easily, and we can gradually remove warnings from >> everything else. >> >> Unless someone has huge complaints or this becomes a gigantic >> bikeshed/review (bike-review), please let me know - I would like this >> to go in for 7.10. >> > > On Phabricator I see a diff which adds a suppression for the warning to > GHC. Is this necessary considering you say GHC is now fully detabbed? > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From carter.schonwald at gmail.com Sat Oct 18 22:49:37 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 18 Oct 2014 18:49:37 -0400 Subject: panic when building ghc head In-Reply-To: References: Message-ID: nevermind, pardon the noise On Sat, Oct 18, 2014 at 4:52 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > hey all, > when doing a devel1 build i got the following panic > > > "inplace/bin/genapply" >rts/dist/build/AutoApply.cmm"inplace/bin/ghc-stage1" -static -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.o"inplace/bin/ghc-stage1" -fPIC -dynamic -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.dyn_o"inplace/bin/ghc-stage1" -static -eventlog -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.l_o"inplace/bin/ghc-stage1" -static -optc-DDEBUG -ticky -DTICKY_TICKY -H64m -O -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -O0 -c rts/dist/build/AutoApply.cmm -o rts/dist/build/AutoApply.debug_o*** Core Lint warnings : in result of Desugar (after optimization) ***{-# LINE 101 "libraries/ghc-prim/GHC/Classes.hs 
#-}: Warning: > [RHS of $c/=_a1qs :: GHC.Types.Float > -> GHC.Types.Float -> GHC.Types.Bool] > INLINE binder is (non-rule) loop breaker: $c/=_a1qs{-# LINE 104 "libraries/ghc-prim/GHC/Classes.hs #-}: Warning: > [RHS of $c/=_a1ql :: GHC.Types.Double > -> GHC.Types.Double -> GHC.Types.Bool] > INLINE binder is (non-rule) loop breaker: $c/=_a1ql{-# LINE 85 "libraries/ghc-prim/GHC/Classes.hs #-}: Warning: > [RHS of $c/=_a1qZ :: forall a_a1X. > GHC.Classes.Eq a_a1X => > [a_a1X] -> [a_a1X] -> GHC.Types.Bool] > INLINE binder is (non-rule) loop breaker: $c/=_a1qZ > ghc-stage1: panic! (the 'impossible' happened) > (GHC version 7.9.20141018 for x86_64-apple-darwin): > tyConAppTyCon a_12 > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > > any ideas of how I could debug this? Or what it might be? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Oct 18 23:29:09 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 18 Oct 2014 19:29:09 -0400 Subject: Fwd: ASCIIDoc Grammar Message-ID: Levi, thanks for sharing this! @herbert and austin, how much should we care about the doc format being easy to reparse? ---------- Forwarded message ---------- From: Levi Pearson Date: Sat, Oct 18, 2014 at 6:50 PM Subject: ASCIIDoc Grammar To: carter.schonwald at gmail.com I saw your question in the ghc-devs archive about asciidoc, and I figured I'd reply out-of-band since I don't subscribe to ghc-devs. Feel free to forward it there if you think it'd be useful. I have looked pretty deeply into the implementation of the canonical asciidoc as well as asciidoctor, which is a re-implementation in Ruby and which serves as the translator for github. There's no formal grammar, and it would be difficult to construct one, as it's not actually a fixed format. The asciidoc engine is actually a macro-processing engine that's designed to translate both in-line patterns and block-style patterns based on delimiters and attributes. It's driven to a large extent by a set of configuration files that define a lot of macros in a fairly general way, with just a couple of special-purpose parsing mechanisms. The macros can all be parameterized by "attributes" that can have default values, and this allows you to inject semantic tags or create new inline or block patterns that essentially expand to lower-level inlines/blocks with specific attributes. The back-ends can also use attributes to select how spans and blocks are rendered. It's a very flexible system, and lines up pretty well with the semantics of DocBook and XML in general. Its main advantage over something like Markdown (and it can actually be reconfigured to be very Markdown-like simply by changing regular expressions in the config files) is that it allows you to add semantic markup and higher-level document structure that's specific to your documentation project without having to touch the main engine code. That's an especially good thing as it's a bit messy and hard-to-follow. -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sun Oct 19 17:02:35 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 19 Oct 2014 13:02:35 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: Orphan instances are bad. 
The standard approach to avoiding the orphan hazard is to always put an instance declaration in the module that declares the type or the one that declares the class. Unfortunately, this forces packages like lens to have an ungodly number of dependencies. Yesterday, I had a simple germ of an idea for solving this (fairly narrow) problem, at least in some cases: allow a programmer to declare where an instance declaration must be. I have no sense of sane syntax, but the rough idea is: {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} This pragma would appear in a module declaring a class or type. The named module would not have to be available, either now or ever, but attempting to declare such an instance in any module *other* than the named one would be an error by default, with a flag -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional context allows multiple such pragmas to appear in the type/class-declaring modules, to allow overlapping instances (all of them declared in advance). -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Oct 19 17:05:37 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 19 Oct 2014 13:05:37 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: On Sun, Oct 19, 2014 at 1:02 PM, David Feuer wrote: > with a flag -XAllowForbiddenInstancesAndInviteNasalDemons > One could argue this is spelled -XIncoherentInstances.... -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Sun Oct 19 17:13:45 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 19 Oct 2014 13:13:45 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: Although they have the same nasal-demon-inducing effects, IncoherentInstances and AllowForbiddenInstances would turn off errors that result from distinct situations. It's possible that one might want to play with forbidden instances in development, keeping the standard coherence checks in place, and then modify an imported module later. On Oct 19, 2014 1:05 PM, "Brandon Allbery" wrote: > On Sun, Oct 19, 2014 at 1:02 PM, David Feuer > wrote: > >> with a flag -XAllowForbiddenInstancesAndInviteNasalDemons >> > > One could argue this is spelled -XIncoherentInstances.... > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Sun Oct 19 23:17:13 2014 From: jwlato at gmail.com (John Lato) Date: Mon, 20 Oct 2014 07:17:13 +0800 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: Thinking about this, I came to a slightly different scheme. What if we instead add a pragma: {-# OrphanModule ClassName ModuleName #-} and furthermore require that, if OrphanModule is specified, all instances can *only* appear in the module where the class is defined, the involved types are defined, or the given OrphanModule? 
We would also need to add support for the compiler to understand that multiple modules may appear under the same name, which might be a bit tricky to implement, but I think it's feasible (perhaps in a restricted manner). I think I'd prefer this when implementing orphan instances, and probably when writing the pragmas as well. On Mon, Oct 20, 2014 at 1:02 AM, David Feuer wrote: > Orphan instances are bad. The standard approach to avoiding the orphan > hazard is to always put an instance declaration in the module that declares > the type or the one that declares the class. Unfortunately, this forces > packages like lens to have an ungodly number of dependencies. Yesterday, I > had a simple germ of an idea for solving this (fairly narrow) problem, at > least in some cases: allow a programmer to declare where an instance > declaration must be. I have no sense of sane syntax, but the rough idea is: > > {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} > > This pragma would appear in a module declaring a class or type. The named > module would not have to be available, either now or ever, but attempting > to declare such an instance in any module *other* than the named one would > be an error by default, with a flag > -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional > context allows multiple such pragmas to appear in the type/class-declaring > modules, to allow overlapping instances (all of them declared in advance). > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Oct 20 01:29:36 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 19 Oct 2014 21:29:36 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: I don't think your approach is flexible enough to accomplish the purpose. For example, it does almost nothing to help lens. Even my approach should, arguably, be extended transitively, allowing the named module to delegate that authority, but such an extension could easily be put off till later. On Oct 19, 2014 7:17 PM, "John Lato" wrote: > Thinking about this, I came to a slightly different scheme. What if we > instead add a pragma: > > {-# OrphanModule ClassName ModuleName #-} > > and furthermore require that, if OrphanModule is specified, all instances > can *only* appear in the module where the class is defined, the involved > types are defined, or the given OrphanModule? We would also need to add > support for the compiler to understand that multiple modules may appear > under the same name, which might be a bit tricky to implement, but I think > it's feasible (perhaps in a restricted manner). > > I think I'd prefer this when implementing orphan instances, and probably > when writing the pragmas as well. > > On Mon, Oct 20, 2014 at 1:02 AM, David Feuer > wrote: > >> Orphan instances are bad. The standard approach to avoiding the orphan >> hazard is to always put an instance declaration in the module that declares >> the type or the one that declares the class. Unfortunately, this forces >> packages like lens to have an ungodly number of dependencies. Yesterday, I >> had a simple germ of an idea for solving this (fairly narrow) problem, at >> least in some cases: allow a programmer to declare where an instance >> declaration must be. 
I have no sense of sane syntax, but the rough idea is: >> >> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} >> >> This pragma would appear in a module declaring a class or type. The named >> module would not have to be available, either now or ever, but attempting >> to declare such an instance in any module *other* than the named one would >> be an error by default, with a flag >> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional >> context allows multiple such pragmas to appear in the type/class-declaring >> modules, to allow overlapping instances (all of them declared in advance). >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Mon Oct 20 01:30:56 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 19 Oct 2014 18:30:56 -0700 Subject: A simpler remove HEAP_ALLOCED check Message-ID: <1413768368-sup-6009@sabre> Hey Simon, I was chatting with Sergio Benitez about GHC's HEAP_ALLOCED woes and he suggested an alternate fix which I'm not sure we have considered. The idea is simple: pre-assign some portion of the virtual address space for the dynamic heap, and then have HEAP_ALLOCED check if it's inside this space. Now, *obviously* this doesn't work for 32-bit (and I assume this is why we didn't go this route), but that's fine: the bitmap we use for 32-bit works pretty great and isn't a bottleneck. For 64-bit, we have a lot more address space to play with. Certainly we have to make sure the system linker never puts segments inside our pre-assigned space, but this seems far more manageable. I'm not particularly wedded to the indirections patchset, so if we can make this work for 64-bit, it seems good enough to me. Edward From david.feuer at gmail.com Mon Oct 20 01:39:08 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 19 Oct 2014 21:39:08 -0400 Subject: Help understanding Specialise.lhs Message-ID: I'm trying to figure out how to address #9701, but I'm having an awfully hard time figuring out what's going on in Specialise.lhs. I think I get the vague general idea of what it's supposed to do, based on the notes, but the actual code is a mystery to me. Is there anyone who might be able to help me get enough of a sense of it to let me do what I need? Many thanks in advance. David Feuer -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Mon Oct 20 01:43:15 2014 From: jwlato at gmail.com (John Lato) Date: Mon, 20 Oct 2014 09:43:15 +0800 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: I fail to see how this doesn't help lens, unless we're assuming no buy-in from class declarations. Also, your approach would require c*n pragmas to be declared, whereas mine only requires c. Also your method seems to require having both the class and type in scope, in which case one could simply declare the instance in that module anyway. On Mon, Oct 20, 2014 at 9:29 AM, David Feuer wrote: > I don't think your approach is flexible enough to accomplish the purpose. > For example, it does almost nothing to help lens. Even my approach should, > arguably, be extended transitively, allowing the named module to delegate > that authority, but such an extension could easily be put off till later. 
> On Oct 19, 2014 7:17 PM, "John Lato" wrote: > >> Thinking about this, I came to a slightly different scheme. What if we >> instead add a pragma: >> >> {-# OrphanModule ClassName ModuleName #-} >> >> and furthermore require that, if OrphanModule is specified, all instances >> can *only* appear in the module where the class is defined, the involved >> types are defined, or the given OrphanModule? We would also need to add >> support for the compiler to understand that multiple modules may appear >> under the same name, which might be a bit tricky to implement, but I think >> it's feasible (perhaps in a restricted manner). >> >> I think I'd prefer this when implementing orphan instances, and probably >> when writing the pragmas as well. >> >> On Mon, Oct 20, 2014 at 1:02 AM, David Feuer >> wrote: >> >>> Orphan instances are bad. The standard approach to avoiding the orphan >>> hazard is to always put an instance declaration in the module that declares >>> the type or the one that declares the class. Unfortunately, this forces >>> packages like lens to have an ungodly number of dependencies. Yesterday, I >>> had a simple germ of an idea for solving this (fairly narrow) problem, at >>> least in some cases: allow a programmer to declare where an instance >>> declaration must be. I have no sense of sane syntax, but the rough idea is: >>> >>> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} >>> >>> This pragma would appear in a module declaring a class or type. The >>> named module would not have to be available, either now or ever, but >>> attempting to declare such an instance in any module *other* than the named >>> one would be an error by default, with a flag >>> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional >>> context allows multiple such pragmas to appear in the type/class-declaring >>> modules, to allow overlapping instances (all of them declared in advance). >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Oct 20 02:08:27 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 19 Oct 2014 22:08:27 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: OK, so first off, I don't have anything against your pragma; I just think that something akin to mine would be good to have too. Mine was not intended to require both class and type to be in scope; if one of them is not, then it should be given its full name: {-# InstanceIn Module Foo.Class Type #-} {-# InstanceIn Module Class Bar.Type #-} As Edward Kmett explained to me, there are reasons for module authors not to want to include instances for lens stuff?in particular, they apparently tend to use a lot of non-portable code, but even aside from that, they may just not want to have to deal with maintaining that particular code. This leads to a slew of instances being dumped into lens modules, forcing the lens package to depend on a bunch of others. What I'm suggesting is that sticking {-# InstanceIn Data.Text.Lens Strict Data.Text.Lazy.Text Data.Text.Text #-} into Control.Lens.Iso (and so on) would allow Data.Text.Lens to be broken off into a separate package, removing the text dependency from lens. 
Note also: I described a way to (try to) support overlapping instances for mine, but I think it would be valuable to offer mine even without that feature (dropping the context stuff), if it's just too complex. On Sun, Oct 19, 2014 at 9:43 PM, John Lato wrote: > I fail to see how this doesn't help lens, unless we're assuming no buy-in > from class declarations. Also, your approach would require c*n pragmas to > be declared, whereas mine only requires c. Also your method seems to > require having both the class and type in scope, in which case one could > simply declare the instance in that module anyway. > > On Mon, Oct 20, 2014 at 9:29 AM, David Feuer > wrote: > >> I don't think your approach is flexible enough to accomplish the purpose. >> For example, it does almost nothing to help lens. Even my approach should, >> arguably, be extended transitively, allowing the named module to delegate >> that authority, but such an extension could easily be put off till later. >> On Oct 19, 2014 7:17 PM, "John Lato" wrote: >> >>> Thinking about this, I came to a slightly different scheme. What if we >>> instead add a pragma: >>> >>> {-# OrphanModule ClassName ModuleName #-} >>> >>> and furthermore require that, if OrphanModule is specified, all >>> instances can *only* appear in the module where the class is defined, the >>> involved types are defined, or the given OrphanModule? We would also need >>> to add support for the compiler to understand that multiple modules may >>> appear under the same name, which might be a bit tricky to implement, but I >>> think it's feasible (perhaps in a restricted manner). >>> >>> I think I'd prefer this when implementing orphan instances, and probably >>> when writing the pragmas as well. >>> >>> On Mon, Oct 20, 2014 at 1:02 AM, David Feuer >>> wrote: >>> >>>> Orphan instances are bad. The standard approach to avoiding the orphan >>>> hazard is to always put an instance declaration in the module that declares >>>> the type or the one that declares the class. Unfortunately, this forces >>>> packages like lens to have an ungodly number of dependencies. Yesterday, I >>>> had a simple germ of an idea for solving this (fairly narrow) problem, at >>>> least in some cases: allow a programmer to declare where an instance >>>> declaration must be. I have no sense of sane syntax, but the rough idea is: >>>> >>>> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} >>>> >>>> This pragma would appear in a module declaring a class or type. The >>>> named module would not have to be available, either now or ever, but >>>> attempting to declare such an instance in any module *other* than the named >>>> one would be an error by default, with a flag >>>> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional >>>> context allows multiple such pragmas to appear in the type/class-declaring >>>> modules, to allow overlapping instances (all of them declared in advance). >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jan.stolarek at p.lodz.pl Mon Oct 20 07:51:42 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 20 Oct 2014 09:51:42 +0200 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: <201410200951.42109.jan.stolarek@p.lodz.pl> In the past I've spent some time thinking about the orphan instances problem. I concluded that the Right Thing to do is to turn instances into first-class citizens and allow them to be explicitly imported and exported. I think devising pragmas is a workaround, not a solution. Janek Dnia poniedzia?ek, 20 pa?dziernika 2014, David Feuer napisa?: > OK, so first off, I don't have anything against your pragma; I just think > that something akin to mine would be good to have too. Mine was not > intended to require both class and type to be in scope; if one of them is > not, then it should be given its full name: > > {-# InstanceIn Module Foo.Class Type #-} > {-# InstanceIn Module Class Bar.Type #-} > > As Edward Kmett explained to me, there are reasons for module authors not > to want to include instances for lens stuff?in particular, they apparently > tend to use a lot of non-portable code, but even aside from that, they may > just not want to have to deal with maintaining that particular code. This > leads to a slew of instances being dumped into lens modules, forcing the > lens package to depend on a bunch of others. What I'm suggesting is that > sticking {-# InstanceIn Data.Text.Lens Strict Data.Text.Lazy.Text > Data.Text.Text #-} into Control.Lens.Iso (and so on) would allow > Data.Text.Lens to be broken off into a separate package, removing the text > dependency from lens. > > Note also: I described a way to (try to) support overlapping instances for > mine, but I think it would be valuable to offer mine even without that > feature (dropping the context stuff), if it's just too complex. > > On Sun, Oct 19, 2014 at 9:43 PM, John Lato wrote: > > I fail to see how this doesn't help lens, unless we're assuming no buy-in > > from class declarations. Also, your approach would require c*n pragmas > > to be declared, whereas mine only requires c. Also your method seems to > > require having both the class and type in scope, in which case one could > > simply declare the instance in that module anyway. > > > > On Mon, Oct 20, 2014 at 9:29 AM, David Feuer > > > > wrote: > >> I don't think your approach is flexible enough to accomplish the > >> purpose. For example, it does almost nothing to help lens. Even my > >> approach should, arguably, be extended transitively, allowing the named > >> module to delegate that authority, but such an extension could easily be > >> put off till later. > >> > >> On Oct 19, 2014 7:17 PM, "John Lato" wrote: > >>> Thinking about this, I came to a slightly different scheme. What if we > >>> instead add a pragma: > >>> > >>> {-# OrphanModule ClassName ModuleName #-} > >>> > >>> and furthermore require that, if OrphanModule is specified, all > >>> instances can *only* appear in the module where the class is defined, > >>> the involved types are defined, or the given OrphanModule? We would > >>> also need to add support for the compiler to understand that multiple > >>> modules may appear under the same name, which might be a bit tricky to > >>> implement, but I think it's feasible (perhaps in a restricted manner). > >>> > >>> I think I'd prefer this when implementing orphan instances, and > >>> probably when writing the pragmas as well. 
> >>> > >>> On Mon, Oct 20, 2014 at 1:02 AM, David Feuer > >>> > >>> wrote: > >>>> Orphan instances are bad. The standard approach to avoiding the orphan > >>>> hazard is to always put an instance declaration in the module that > >>>> declares the type or the one that declares the class. Unfortunately, > >>>> this forces packages like lens to have an ungodly number of > >>>> dependencies. Yesterday, I had a simple germ of an idea for solving > >>>> this (fairly narrow) problem, at least in some cases: allow a > >>>> programmer to declare where an instance declaration must be. I have no > >>>> sense of sane syntax, but the rough idea is: > >>>> > >>>> {-# InstanceIn NamedModule [Context =>] C1 T1 [T2 ...] #-} > >>>> > >>>> This pragma would appear in a module declaring a class or type. The > >>>> named module would not have to be available, either now or ever, but > >>>> attempting to declare such an instance in any module *other* than the > >>>> named one would be an error by default, with a flag > >>>> -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The > >>>> optional context allows multiple such pragmas to appear in the > >>>> type/class-declaring modules, to allow overlapping instances (all of > >>>> them declared in advance). > >>>> > >>>> _______________________________________________ > >>>> ghc-devs mailing list > >>>> ghc-devs at haskell.org > >>>> http://www.haskell.org/mailman/listinfo/ghc-devs From alan.zimm at gmail.com Mon Oct 20 08:13:41 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 20 Oct 2014 10:13:41 +0200 Subject: [GHC] #9628: Add Annotations to the AST to simplify source to source conversions In-Reply-To: <059.05009cd19e7505ad9748c0e77ac33c99@haskell.org> References: <044.285fa4db7fb10488df4811e6070f6acb@haskell.org> <059.05009cd19e7505ad9748c0e77ac33c99@haskell.org> Message-ID: For the review process I updated the phabricator summary to capture the current implementation, I will move it on to the wiki too. The ExtraCommas have been removed as unworkable, they are an abandoned point in the design space. The lessons learned are being transferred to Alexander Berntsen for his record syntax extension proposal. I did consider bringing comments in directly, as being a natural part of things, but did not want to confuse the issue too much in one go. It should be possible to capture them in a similar process, and it will definitely help with round tripping. At the moment tooling has to access them via `getRichTokenStream` and then work them in to the correct place. In terms of haddock usage, I would have to discuss with them what they do/need, to find out if this will serve as a replacement. On Mon, Oct 20, 2014 at 9:55 AM, GHC wrote: > #9628: Add Annotations to the AST to simplify source to source conversions > -------------------------------------+------------------------------------- > Reporter: alanz | Owner: alanz > Type: feature | Status: new > request | Milestone: > Priority: normal | Version: 7.9 > Component: Compiler | Keywords: > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Unknown > Unknown/Multiple | Blocked By: > Type of failure: | Related Tickets: > None/Unknown | > Test Case: | > Blocking: | > Differential Revisions: D297 | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > Well done for making progress. Some thoughts > > * If the patch is ready for review, is [wiki:GhcAstAnnotations] also > fully up to date? 
Could you move any discussion of alternatives to the > end, under "Other possible design alternatives" so that what remains is > actually a description of the feature you propose, and a sketch of its > implementation? I'm unsure about which bits of the wiki page are rejected > ideas and which are the ones you adopted. > > * Floating around is also `ExtraCommas`. I think the two are somewhat > orthogonal, right? > > * Does your design say where comments are? That is, can you really > round-trip source code? > > In particular, an excellent criterion could be: can you do Haddock this > way? Currently Haddock has a lot of Haddock-specific fields in HsSyn. > Could they all be replaced with annotations in your style? If not, what > would take to make that possible? It would be highly cool; after all, > Haddock may be privileged, but the more we can make it possible for others > to do Haddock-like things without changing GHC itself, the better. > > * You outlined a number of "customers" in an earlier post. Would it be > worth adding them to the wiki page? > > Simon > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 20 09:04:36 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 20 Oct 2014 09:04:36 +0000 Subject: Help understanding Specialise.lhs In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> David, I?m unclear what you are trying to achieve with #9701. I urge you to write a clear specification that we all agree about before burning cycles hacking code. There are a lot of comments at the top of Specialise.lhs. But it is, I?m afraid, a tricky pass. I could skype. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David Feuer Sent: 20 October 2014 02:39 To: ghc-devs Subject: Help understanding Specialise.lhs I'm trying to figure out how to address #9701, but I'm having an awfully hard time figuring out what's going on in Specialise.lhs. I think I get the vague general idea of what it's supposed to do, based on the notes, but the actual code is a mystery to me. Is there anyone who might be able to help me get enough of a sense of it to let me do what I need? Many thanks in advance. David Feuer -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Oct 20 11:53:11 2014 From: david.feuer at gmail.com (David Feuer) Date: Mon, 20 Oct 2014 07:53:11 -0400 Subject: Help understanding Specialise.lhs In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" wrote: > I?m unclear what you are trying to achieve with #9701. I urge you to write a clear specification that we all agree about before burning cycles hacking code. What I'm trying to achieve is to make specialization work in a situation where it currently does not. It appears that when the type checker determines that a GADT carries a certain dictionary, the specializer happily uses it *even once the concrete type is completely known*. What we would want to do in that case is to replace the use of the GADT-carried dictionary with a use of the known dictionary for that type. > There are a lot of comments at the top of Specialise.lhs. But it is, I?m afraid, a tricky pass. I could skype. 
I would appreciate that. What day/time are you available? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From david.feuer at gmail.com Mon Oct 20 12:57:47 2014 From: david.feuer at gmail.com (David Feuer) Date: Mon, 20 Oct 2014 08:57:47 -0400 Subject: Help understanding Specialise.lhs In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: To be super-clear about at least one aspect: I don't want Tidy Core to ever contain something that looks like this:

GADTTest.potato
  :: GHC.Types.Int -> GADTTest.Silly GHC.Types.Int -> GHC.Types.Int
GADTTest.potato =
  \ (x_asZ :: GHC.Types.Int)
    (ds_dPR :: GADTTest.Silly GHC.Types.Int) ->
    case ds_dPR of _ { GADTTest.Silly $dNum_aLV ds1_dPS ->
      GHC.Num.+ @ GHC.Types.Int $dNum_aLV x_asZ x_asZ
    }

Here we see GHC.Num.+ applied to GHC.Types.Int and $dNum_aLV. We therefore know that $dNum_aLV must be GHC.Num.$fNumInt, so GHC.Num.+ can eat these arguments and produce GHC.Num.$fNumInt_$c+. But for some reason, GHC fails to recognize and exploit this fact! I would like help understanding why that is, and what I can do to fix it. On Mon, Oct 20, 2014 at 7:53 AM, David Feuer wrote: > On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" > wrote: > > I'm unclear what you are trying to achieve with #9701. I urge you to > write a clear specification that we all agree about before burning cycles > hacking code. > > What I'm trying to achieve is to make specialization work in a situation > where it currently does not. It appears that when the type checker > determines that a GADT carries a certain dictionary, the specializer > happily uses it *even once the concrete type is completely known*. What we > would want to do in that case is to replace the use of the GADT-carried > dictionary with a use of the known dictionary for that type. > > > There are a lot of comments at the top of Specialise.lhs. But it is, > I'm afraid, a tricky pass. I could skype. I would appreciate that. What day/time are you available? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From simonpj at microsoft.com Mon Oct 20 13:07:11 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 20 Oct 2014 13:07:11 +0000 Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> Christopher You are doing very cool things. Thank you. What I'm puzzled about is this: the GHC API *is* a programmatic interface to GHC. Why not just use it? I can think of some reasons:
- It's not very clear just what's in the GHC API and what isn't, since you have access to all of GHC's internals if you use -package ghc. And the API isn't very well designed. (Answer: could you help make it better?)
- You want some functionality that is currently in GHCi, rather than in the "ghc" package. (Answer: maybe we should move that functionality into the "ghc" package and make it part of the GHC API?)
- You have to be writing in Haskell to use the GHC API, whereas you want a separate process you connect to via a socket. (Answer: Excellent: write a server wrapper around the GHC API that offers a JSON interface, or whatever the right vocabulary is. Sounds as if you have more or less done this; a rough sketch follows below.)
- Moreover, the API changes pretty regularly, and you want multi-compiler support. (No answer: I don't know how to simultaneously give access to new stuff without risking breaking old stuff.)
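As a rough illustration of that third answer - not the actual ghc-server or ghci-ng code - a minimal JSON wrapper over stdin/stdout might be shaped like the sketch below. The two-command protocol and the stubbed handlers are invented for the example; a real server would answer them by calling into the ghc package (or a persistent interactive session), and only base and aeson are assumed here.

{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Control.Monad (unless)
import Data.Aeson
import qualified Data.ByteString.Lazy.Char8 as L
import System.IO (isEOF)

-- A deliberately tiny request language; a real tool would cover the
-- whole feature list discussed in this thread (:type, :complete, ...).
data Request
  = TypeOf String      -- ask for the type of an expression
  | Complete String    -- ask for completions of a prefix
  deriving Show

instance FromJSON Request where
  parseJSON = withObject "Request" $ \o -> do
    cmd <- o .: "cmd"
    arg <- o .: "arg"
    case (cmd :: String) of
      "type"     -> return (TypeOf arg)
      "complete" -> return (Complete arg)
      _          -> fail "unknown command"

data Response = Response { respOk :: Bool, respResult :: String }

instance ToJSON Response where
  toJSON r = object ["ok" .= respOk r, "result" .= respResult r]

-- Stub dispatcher: this is where a real server would invoke the GHC API
-- or talk to a long-running interactive session.
handle :: Request -> IO Response
handle (TypeOf expr)     = return (Response True ("would report the type of " ++ expr))
handle (Complete prefix) = return (Response True ("would list completions for " ++ prefix))

-- One JSON request per line on stdin, one JSON response per line on stdout.
main :: IO ()
main = loop
  where
    loop = do
      done <- isEOF
      unless done $ do
        line <- getLine
        resp <- case eitherDecode (L.pack line) of
                  Right req -> handle req
                  Left err  -> return (Response False ("bad request: " ++ err))
        L.putStrLn (encode resp)
        loop

A client (an Emacs mode, say) would write one request per line, e.g. {"cmd": "type", "arg": "fmap"}, and read back exactly one JSON object per line, which avoids the prompt-scraping and output-interleaving problems described elsewhere in this thread.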
My meta-point is this: GHC is wide open to people like you building a consensus about how GHC's basic functionality should be wrapped up and exposed to clients. (Luite is another person who has led in this space, via GHCJS.) So please do go ahead and lay out the way it *should* be done, think about migration paths, build a consensus, etc. Much better that than doing fragile screen-scraping on GHCi's textual output. Thanks for what you are doing here. Simon
From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Christopher Done Sent: 18 October 2014 16:49 To: ghc-devs at haskell.org Subject: Making GHCi awesomer? Good evening, So I've been working on Haskell user-facing tooling in general for some years. By that I mean the level of Emacs talking with Haskell tools. I wrote the interactive-haskell-mode (most functionality exists in this file), which launches a GHCi process in a pipe and tries very earnestly to handle input/output with the process reasonably. For Emacs fanciers: written in Elisp, there's a nice command queue that you put commands onto; they will all be run in FIFO order, one by one, and eventually you'll get a result back. Initially it was just me using it, but with the help of Herbert Riedel it's now a mode on equal footing with the venerable inferior-haskell-mode all ye Emacs users know and love. It's part of haskell-mode and can be enabled by enabling the interactive-haskell-mode minor mode. For years I've been using GHCi as a base and it's been very reliable for almost every project I've done (the only exceptions are things like SDL and OpenGL, which are well known to be difficult to load in GHCi, at least on Linux). I think we've built up a good set of functionality purely based on asking GHCi things and getting it to do things. I literally use GHCi for everything. For type-checking, type info, I even send ":!cabal build" to it. Everything goes through it. I love my GHCi. Now, I'm sort of at the end of the line of where I can take GHCi. Here are the problems as I see them today:
1. There is no programmatic means of communicating with the process. I can't send a command and get a result cleanly, I have to regex match on the prompt, and that is only so reliable. At the moment we solve this by using \4 (aka "END OF TRANSMISSION"). Also messages (warnings, errors, etc.) need to be parsed, which is also icky, especially in the REPL when e.g. a defaulted Integer warning will mix with the output. Don't get me started on handling multi-line prompts! Hehe.
2. GHCi, as a REPL, does not distinguish between stdout, stderr and the result of your evaluation. This can be problematic for making a smooth REPL UI; your results can often (with threading) be interspersed in unkind ways. I cannot mitigate this with any kind of GHCi trickery.
3. It forgets information when you reload. (I know this is intentional.)
4. Not enough information is exposed to the user. (Is there ever? ;)
5. There is a time-to-market overhead of contributing to GHCi - if I want a cool feature, I can write it on a locally compiled version of GHC. But for the work projects I have, I'm restricted to given GHC versions, as are other people. They have to wait to get the good features.
6. This is just a personal point - I'd like to talk to GHCi over a socket, so that I can run it on a remote machine. Those familiar with Common Lisp will be reminded of SLIME and Swank.
Examples for point 4 are:
- Type of sub-expressions.
- Go to definition of thing at point (includes local scope).
- Local-scope completion.
- A hoogle-like query (as seen in Idris recently).
- Documentation lookup.
- Suggest imports for symbols.
- Show core for the current module.
- Show CMM for the current module, ASM, etc. SLIME can do this.
- Expand the template-haskell at point.
- The :i command is amazingly useful, but programmatic access would be even better. [*]
- Case split anyone?
- Etc.
[*] I've integrated with it in Emacs so that I can C-c C-i any identifier and it'll pop up a buffer with the :i result and then within that buffer I can drill down further with C-c C-i again. It makes for very natural exploration of a type.
You've seen some of these features in GHC Mod, in hdevtools, in the FP Haskell Center, maybe some are in Yi, possibly also in Leksah (?). So in light of point (5), I thought: I've used the GHC API before, it can do interactive evaluation, why not write a project like "ghc-server" which encodes all these above ideas as a "drop-in" replacement for GHCi? After all I could work on my own without anybody getting in my way over architecture decisions, etc. And that's what I did. It's here. Surprisingly, it kind of works. You run it in your directory like you would do "cabal repl" and it sets up all the extensions and package dependencies and starts accepting connections. It will compile across three major GHC versions. Hurray! Rub our hands together and call it done, right? Sadly not, the trouble is threefold:
1. The first problem with this is that roughly every third project will segfault or panic when trying to load a project that GHCi will load happily. The reasons are mysterious to me and I've already lugged over the GHC API to get to this point, so that kind of thing happening means that I have to fall back to my old GHCi-based setup, and is disappointing. People have similar complaints of GHC Mod & co. "Getting it to work" is a deterrent.
2. While this would be super beneficial for me, and has been a good learning experience for "what works and what doesn't", we end up with yet another alternative tool, that only a few people are using.
3. There are just certain behaviours and fixes here and there that GHCi does that take time to reproduce.
So let's go back to the GHCi question: is there still a development overhead for adding features to GHCi? Yes, new ideas need acceptance and people have to wait (potentially a year) for a new feature that they could be using right now. An alternative method is to do what Herbert did, which is to release a "ghci-ng" which sports new shiny features that people (with the right GHC version) will be able to compile and use as a drop-in for GHCi. It's the same codebase, but with more stuff! An example is the ":complete" command; this lets IDE implementers do completion at least at the REPL level. Remember the list of features earlier? Why are they not in GHCi? So, of course, this got me thinking that I could instead make ghc-server be based off of GHCi's actual codebase. I could rebase upon the latest GHC release and maintain 2-3 GHC versions backwards. That's certainly doable, it would essentially give me "GHCi++". Good for me, I just piggy back on the GHCi goodness and then use the GHC API for additional things as I'm doing now. But is there a way I can get any of this into the official repo? For example, could I hack on this (perhaps with Herbert) as "ghci-ng", provide an alternative JSON communication layer (e.g. via some --use-json flag) and a socket listener (--listen-on ), a way to distinguish stdout/stderr (possibly by forking a process, unsure at this stage), and then any of the above features (point 4) listed. I make sure that I'm rebasing upon HEAD, as if to say ghci-ng is a kind of submodule, and then when release time comes we merge back in any new stuff since the last release. Early adopters can use ghci-ng, and everyone benefits from official GHC releases. The only snag there is that, personally speaking, it would be better if ghci-ng would compile on older GHC versions. So if GHC 7.10 is the latest release, it would still be nice (and it seems pretty feasible) that GHC 7.8 users could still cabal install it without issue. People shouldn't have to wait if they don't have to. Well, that's everything. Thoughts? Ciao! -------------- next part -------------- An HTML attachment was scrubbed... URL:
From austin at well-typed.com Mon Oct 20 14:22:07 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 20 Oct 2014 09:22:07 -0500 Subject: GHC Weekly News - 2014/10/20 Message-ID: Hi *, Here's a weekly news update covering the past several weeks and some of the discussions we've had - please let me know if I missed out on something important so I can update the post. https://ghc.haskell.org/trac/ghc/blog/edit/weekly20141020 -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
From allbery.b at gmail.com Mon Oct 20 14:24:58 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 20 Oct 2014 10:24:58 -0400 Subject: GHC Weekly News - 2014/10/20 In-Reply-To: References: Message-ID: On Mon, Oct 20, 2014 at 10:22 AM, Austin Seipp wrote: > https://ghc.haskell.org/trac/ghc/blog/edit/weekly20141020 You might want to provide the ordinary mortals link instead of the edit link. :) -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL:
From alan.zimm at gmail.com Mon Oct 20 14:26:12 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 20 Oct 2014 16:26:12 +0200 Subject: GHC Weekly News - 2014/10/20 In-Reply-To: References: Message-ID: All you have to do is edit out /edit/ in the URL... On Mon, Oct 20, 2014 at 4:24 PM, Brandon Allbery wrote: > On Mon, Oct 20, 2014 at 10:22 AM, Austin Seipp > wrote: > >> https://ghc.haskell.org/trac/ghc/blog/edit/weekly20141020 > > > You might want to provide the ordinary mortals link instead of the edit > link. :) > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From austin at well-typed.com Mon Oct 20 14:28:02 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 20 Oct 2014 09:28:02 -0500 Subject: GHC Weekly News - 2014/10/20 In-Reply-To: References: Message-ID: Good catch.
That's what I get for copy-pasting without double checking :) https://ghc.haskell.org/trac/ghc/blog/weekly20141020 On Mon, Oct 20, 2014 at 9:24 AM, Brandon Allbery wrote: > On Mon, Oct 20, 2014 at 10:22 AM, Austin Seipp > wrote: >> >> https://ghc.haskell.org/trac/ghc/blog/edit/weekly20141020 > > > You might want to provide the ordinary mortals link instead of the edit > link. :) > > -- > brandon s allbery kf8nh sine nomine associates > allbery.b at gmail.com ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From allbery.b at gmail.com Mon Oct 20 14:28:12 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 20 Oct 2014 10:28:12 -0400 Subject: GHC Weekly News - 2014/10/20 In-Reply-To: References: Message-ID: On Mon, Oct 20, 2014 at 10:26 AM, Alan & Kim Zimmerman wrote: > All you have to do is edit out /edit/ in the URL Yes, I did that. It's still better to not require people to do that.... -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlen at mlen.pl Mon Oct 20 15:35:13 2014 From: mlen at mlen.pl (Mateusz Lenik) Date: Mon, 20 Oct 2014 17:35:13 +0200 Subject: Warning on tabs by default (#9230) for GHC 7.10 In-Reply-To: References: <5442EA15.3000408@fuuzetsu.co.uk> Message-ID: <20141020153513.GA3289@polaris.local> I guess template-haskell should be also easy to detab immediately. It is the only thing in ./libraries that is not a git submodule. Best, Mateusz Lenik On Sat, Oct 18, 2014 at 05:48:26PM -0500, Austin Seipp wrote: > The boot libraries have not been detabbed, and that's something we > can't immediately fix. However, the warnings being on by default means > people should feel the burn to fix it quickly, I hope, and we can just > update all of our submodules accordingly. > > I did notice however in the diff that I missed the fact `hsc2hs` has > also not been detabbed. That can be fixed immediately, however. > > On Sat, Oct 18, 2014 at 5:30 PM, Mateusz Kowalczyk > wrote: > > On 10/18/2014 01:25 AM, Austin Seipp wrote: > >> Hi all, > >> > >> Please see here: > >> > >> https://phabricator.haskell.org/D255 and > >> https://ghc.haskell.org/trac/ghc/ticket/9230 > >> > >> Making tabs warn by default has been requested many times before, and > >> now that the compiler is completely detabbed, this should become > >> possible to enable easily, and we can gradually remove warnings from > >> everything else. > >> > >> Unless someone has huge complaints or this becomes a gigantic > >> bikeshed/review (bike-review), please let me know - I would like this > >> to go in for 7.10. > >> > > > > On Phabricator I see a diff which adds a suppression for the warning to > > GHC. Is this necessary considering you say GHC is now fully detabbed? > > > > -- > > Mateusz K. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From simonpj at microsoft.com Mon Oct 20 16:11:49 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 20 Oct 2014 16:11:49 +0000 Subject: Help understanding Specialise.lhs In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F36708C@DB3PRD3001MB020.064d.mgd.msft.net> David If you want to suggest a couple of possible alternative 20-min slots in work time (London time zone), not Mon-Weds this week, then maybe we can find a mutually convenient time. Do you have reason to suppose that the pattern you describe below is common? That is, if implemented, would it make a big difference to programs we care about? Simon From: David Feuer [mailto:david.feuer at gmail.com] Sent: 20 October 2014 13:58 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: Help understanding Specialise.lhs To be super-clear about at least one aspect: I don't want Tidy Core to ever contain something that looks like this: GADTTest.potato :: GHC.Types.Int -> GADTTest.Silly GHC.Types.Int -> GHC.Types.Int GADTTest.potato = \ (x_asZ :: GHC.Types.Int) (ds_dPR :: GADTTest.Silly GHC.Types.Int) -> case ds_dPR of _ { GADTTest.Silly $dNum_aLV ds1_dPS -> GHC.Num.+ @ GHC.Types.Int $dNum_aLV x_asZ x_asZ } Here we see GHC.Num.+ applied to GHC.Types.Int and $dNum_aLV. We therefore know that $dNum_aLV must be GHC.Num.$fNumInt, so GHC.Num.+ can eat these arguments and produce GHC.Num.$fNumInt_$c+. But for some reason, GHC fails to recognize and exploit this fact! I would like help understanding why that is, and what I can do to fix it. On Mon, Oct 20, 2014 at 7:53 AM, David Feuer > wrote: On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" > wrote: > I?m unclear what you are trying to achieve with #9701. I urge you to write a clear specification that we all agree about before burning cycles hacking code. What I'm trying to achieve is to make specialization work in a situation where it currently does not. It appears that when the type checker determines that a GADT carries a certain dictionary, the specializer happily uses it *even once the concrete type is completely known*. What we would want to do in that case is to replace the use of the GADT-carried dictionary with a use of the known dictionary for that type. > There are a lot of comments at the top of Specialise.lhs. But it is, I?m afraid, a tricky pass. I could skype. I would appreciate that. What day/time are you available? -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Mon Oct 20 16:14:49 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 20 Oct 2014 12:14:49 -0400 Subject: Making GHCi awesomer? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87bnp6zpja.fsf@gmail.com> Simon Peyton Jones writes: > Christopher > > You are doing very cool things. Thank you. > > What I?m puzzled about is this: the GHC API *is* a programmatic > interface to GHC. Why not just use it? One issue that sometimes bites me when trying to compile against GHC is that of dependencies. When compiling against GHC you are bound to use whatever dependency versions GHC was compiled with. In some cases these can be a bit dated which can lead to Cabal hell. 
I'm not really sure what can be done about this short of making Cabal/GHC more robust in the face of multiple dependency versions within the same build. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From eric at seidel.io Mon Oct 20 16:32:41 2014 From: eric at seidel.io (Eric Seidel) Date: Mon, 20 Oct 2014 09:32:41 -0700 Subject: Making GHCi awesomer? In-Reply-To: <87bnp6zpja.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> Message-ID: <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> > On Oct 20, 2014, at 09:14, Ben Gamari wrote: > > Simon Peyton Jones writes: > >> Christopher >> >> You are doing very cool things. Thank you. >> >> What I?m puzzled about is this: the GHC API *is* a programmatic >> interface to GHC. Why not just use it? > > One issue that sometimes bites me when trying to compile against GHC is > that of dependencies. When compiling against GHC you are bound to use > whatever dependency versions GHC was compiled with. In some cases these > can be a bit dated which can lead to Cabal hell. I'm not really sure > what can be done about this short of making Cabal/GHC more robust > in the face of multiple dependency versions within the same build. I read recently that Rust has some sort of symbol-mangling in place to allow multiple versions of the same library to co-exist within a single build. How feasible would it be to add this feature to GHC? At a first glance it seems like it would help substantially. From allbery.b at gmail.com Mon Oct 20 16:59:38 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 20 Oct 2014 12:59:38 -0400 Subject: Making GHCi awesomer? In-Reply-To: <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> Message-ID: On Mon, Oct 20, 2014 at 12:32 PM, Eric Seidel wrote: > How feasible would it be to add this feature to GHC? At a first glance it > seems like it would help substantially Only until you need to hand off data between them, sadly. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at seidel.io Mon Oct 20 17:08:58 2014 From: eric at seidel.io (Eric Seidel) Date: Mon, 20 Oct 2014 10:08:58 -0700 Subject: Making GHCi awesomer? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> Message-ID: <02478112-6B7C-4BE6-8219-8EF0AA01D05F@seidel.io> Sure, but how often does the API deal with types that aren't defined by `ghc` or `base`? ByteString is one case I can think of, if you want to muck about with FastStrings without the overhead of Strings. > On Oct 20, 2014, at 09:59, Brandon Allbery wrote: > > On Mon, Oct 20, 2014 at 12:32 PM, Eric Seidel wrote: > How feasible would it be to add this feature to GHC? At a first glance it seems like it would help substantially > > Only until you need to hand off data between them, sadly. 
> > -- > brandon s allbery kf8nh sine nomine associates > allbery.b at gmail.com ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net From ezyang at mit.edu Mon Oct 20 18:22:42 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 20 Oct 2014 11:22:42 -0700 Subject: Making GHCi awesomer? In-Reply-To: <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> Message-ID: <1413829305-sup-8775@sabre> Excerpts from Eric Seidel's message of 2014-10-20 09:32:41 -0700: > I read recently that Rust has some sort of symbol-mangling in place to allow multiple versions of the same library to co-exist within a single build. > > How feasible would it be to add this feature to GHC? At a first glance it seems like it would help substantially. GHC already has this feature (and in 7.10, it will be upgraded to allow multiple instances of the same version of a library, but with different dependencies). The problem here is that Cabal doesn't understand how to put dependencies together like this. Edward From david.feuer at gmail.com Mon Oct 20 18:26:12 2014 From: david.feuer at gmail.com (David Feuer) Date: Mon, 20 Oct 2014 14:26:12 -0400 Subject: Help understanding Specialise.lhs In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F36708C@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F364C0D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36708C@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I'll let you know as soon as I can what times I'm available Thursday/Friday. I don't know that the pattern I describe is common (now), but it's a straightforward application of constraints on GADT constructors. Whether people *like* such constraints is another question?there seem to be good reasons to use them and good reasons not to use them. At the moment, the lack of specialization is a good reason not to. You'll see the same thing if you look at the Core for the code down below the line. By the way, I tried experimentally adding {-# SPECIALIZE eval :: Expr Int -> Int #-} and got a warning about the pragma being used on a non-overloaded function. 
In theory, the function is not overloaded, but in practice it effectively is; I would hope to be able to do that and get a specialized version like this:

evalInt :: Expr Int -> Int
evalInt (N n) = n
-- No B case, because Int is not Bool
evalInt (Add a b) = evalNum a `+.Int` evalNum b -- Specialized addition
evalInt (Mul a b) = evalNum a `*.Int` evalNum b -- Specialized multiplication
-- No EqNum case, because Int is not Bool

-- ----------------------------------------------
{-# LANGUAGE GADTs #-}
module Calc (checkInt, eval) where

data Expr a where
  N :: Num n => n -> Expr n
  B :: Bool -> Expr Bool
  Add :: Num n => Expr n -> Expr n -> Expr n
  Mul :: Num n => Expr n -> Expr n -> Expr n
  EqNum :: (Num e, Eq e) => Expr e -> Expr e -> Expr Bool

infixl 6 `Add`
infixl 7 `Mul`
infix 4 `EqNum`

eval :: Expr a -> a
eval (N n) = n
eval (B b) = b
eval (Add a b) = evalNum a + evalNum b
eval (Mul a b) = evalNum a * evalNum b
eval (EqNum a b) = evalNum a == evalNum b

{-# SPECIALIZE evalNum :: Expr Int -> Int #-}
evalNum :: Num a => Expr a -> a
evalNum (N n) = n
evalNum (Add a b) = evalNum a + evalNum b
evalNum (Mul a b) = evalNum a * evalNum b

{-# SPECIALIZE check :: Int -> Int -> Int -> Bool #-}
check :: (Eq n, Num n) => n -> n -> n -> Bool
check x y z = eval $ N x `Add` N y `Mul` N z `EqNum` N z `Mul` N y `Add` N x

checkInt :: Int -> Int -> Int -> Bool
checkInt x y z = check x y z

On Mon, Oct 20, 2014 at 12:11 PM, Simon Peyton Jones wrote: > David > > > > If you want to suggest a couple of possible alternative 20-min slots in > work time (London time zone), not Mon-Weds this week, then maybe we can > find a mutually convenient time. > > > > Do you have reason to suppose that the pattern you describe below is > common? That is, if implemented, would it make a big difference to > programs we care about? > > > > Simon > > > > *From:* David Feuer [mailto:david.feuer at gmail.com] > *Sent:* 20 October 2014 13:58 > *To:* Simon Peyton Jones > *Cc:* ghc-devs > *Subject:* Re: Help understanding Specialise.lhs > > > > To be super-clear about at least one aspect: I don't want Tidy Core to > ever contain something that looks like this:
>
> GADTTest.potato
>   :: GHC.Types.Int -> GADTTest.Silly GHC.Types.Int -> GHC.Types.Int
> GADTTest.potato =
>   \ (x_asZ :: GHC.Types.Int)
>     (ds_dPR :: GADTTest.Silly GHC.Types.Int) ->
>     case ds_dPR of _ { GADTTest.Silly $dNum_aLV ds1_dPS ->
>       GHC.Num.+ @ GHC.Types.Int $dNum_aLV x_asZ x_asZ
>     }
>
> Here we see GHC.Num.+ applied to GHC.Types.Int and $dNum_aLV. We > therefore know that $dNum_aLV must be GHC.Num.$fNumInt, so GHC.Num.+ can > eat these arguments and produce GHC.Num.$fNumInt_$c+. But for some reason, > GHC fails to recognize and exploit this fact! I would like help > understanding why that is, and what I can do to fix it. > > > > On Mon, Oct 20, 2014 at 7:53 AM, David Feuer > wrote: > > On Oct 20, 2014 5:05 AM, "Simon Peyton Jones" > wrote: > > I'm unclear what you are trying to achieve with #9701. I urge you to > write a clear specification that we all agree about before burning cycles > hacking code. > > What I'm trying to achieve is to make specialization work in a situation > where it currently does not. It appears that when the type checker > determines that a GADT carries a certain dictionary, the specializer > happily uses it *even once the concrete type is completely known*. What we > would want to do in that case is to replace the use of the GADT-carried > dictionary with a use of the known dictionary for that type. > > > There are a lot of comments at the top of Specialise.lhs.
But it is, > I?m afraid, a tricky pass. I could skype. > > I would appreciate that. What day/time are you available? > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at bitemyapp.com Mon Oct 20 18:27:25 2014 From: cma at bitemyapp.com (Christopher Allen) Date: Mon, 20 Oct 2014 13:27:25 -0500 Subject: Making GHCi awesomer? In-Reply-To: <1413829305-sup-8775@sabre> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> <1413829305-sup-8775@sabre> Message-ID: Sorry to bother everybody, but where is this documented? What happens if incompatible versions pass data between each other? On Mon, Oct 20, 2014 at 1:22 PM, Edward Z. Yang wrote: > Excerpts from Eric Seidel's message of 2014-10-20 09:32:41 -0700: > > I read recently that Rust has some sort of symbol-mangling in place to > allow multiple versions of the same library to co-exist within a single > build. > > > > How feasible would it be to add this feature to GHC? At a first glance > it seems like it would help substantially. > > GHC already has this feature (and in 7.10, it will be upgraded to allow > multiple instances of the same version of a library, but with different > dependencies). The problem here is that Cabal doesn't understand how > to put dependencies together like this. > > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Mon Oct 20 18:35:04 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 20 Oct 2014 14:35:04 -0400 Subject: GADTs in implementation of Template Haskell Message-ID: <16077A03-605D-4BE3-85B6-57DC51C05CB7@cis.upenn.edu> I'm doing a bunch of bug-fixes / improvements to Template Haskell. Two of these are to fix GHC bugs #8100 (add standalone-deriving support) and #9064 (add `default` method type signature support), both of which introduce new constructors for `Dec`. This got me thinking about `Dec` and the fact that different declaration forms are allowable in different contexts. (For example, datatype declarations are allowable only at the top level, and fixity declarations are allowable anywhere except in instance declarations.) How to encode these restrictions? With types, of course! Thus, I redesigned `Dec` to be a GADT. Having done so, I'm not 100% convinced that this is the right thing to do. I would love feedback on my full, concrete proposal available at https://ghc.haskell.org/trac/ghc/wiki/Design/TemplateHaskellGADTs Is this a change for the better or worse? Feel free either to comment on the wiki page or to this email. Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Mon Oct 20 18:41:39 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 20 Oct 2014 14:41:39 -0400 Subject: Making GHCi awesomer? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> <1413829305-sup-8775@sabre> Message-ID: <8738aiziqj.fsf@gmail.com> Christopher Allen writes: > Sorry to bother everybody, but where is this documented? What happens if > incompatible versions pass data between each other? > I would hope this would manifest as a type error. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL:
From bgamari.foss at gmail.com Mon Oct 20 19:13:32 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Mon, 20 Oct 2014 15:13:32 -0400 Subject: GitHub pull requests In-Reply-To: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> Message-ID: <87y4say2oz.fsf@gmail.com> Richard Eisenberg writes: > I've just finished reading this: > http://www.reddit.com/r/haskell/comments/2hes8m/the_ghc_source_code_contains_1088_todos_please/ > > For better or worse, I don't read reddit often enough to hold a > conversation there, so I'll ask my question here: Is there a way we > can turn GitHub pull requests into Phab code reviews? > Since things have died down here a bit this might be a good time to review the points made and distill some conclusions:
1. There is a large number of people who maintain that arc poses a significant barrier to new contributions.
2. Even if it weren't a significant barrier, given the small (but growing!) size of our contributor pool we should be reducing friction wherever possible.
3. Github's pull request mechanism has a great deal of mindshare, may cause confusion, and can't be disabled.
4. There are varying degrees of concern that using the Github PR process in addition to Phab will result in confusion. This comes in a few flavors:
a) Confusion between Github issue numbers, Trac bug numbers, and Phabricator identifiers
b) Accepting pull requests directly may result in some users falling into the habit of submitting pull requests instead of Phab differentials
c) The revision and review features of the pull request mechanism are inferior to those of Phab and may cost reviewers time.
Future steps
=============
There are a few ways forward:
1. Do nothing, ignore pull requests as we do now
2. Monitor Github for new pull requests and close with a message requesting that the user open a differential instead
3. Teach Phabricator to allow submitting a URL to a commit (or branch) in a forked github.com/ghc/ghc repo, and create a code-revision out of that. (suggested by hvr)
4. Monitor Github for new pull requests and use the facility in (3) to open a differential and close the pull request with a message pointing to it.
5. Start accepting pull requests in addition to differentials (suggested by Joachim)
What do we think about these options? I'd lean towards (4) and would be willing to try implementing it assuming there is agreement that it's a reasonable way forward. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL:
From carter.schonwald at gmail.com Mon Oct 20 19:39:49 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 20 Oct 2014 15:39:49 -0400 Subject: Making GHCi awesomer? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> <87bnp6zpja.fsf@gmail.com> <8D4A7EC5-29EC-4302-8563-23FA81207458@seidel.io> <1413829305-sup-8775@sabre> Message-ID: different versions will be considered to have *different* types (albeit with the same name) On Mon, Oct 20, 2014 at 2:27 PM, Christopher Allen wrote: > Sorry to bother everybody, but where is this documented? What happens if > incompatible versions pass data between each other?
> > On Mon, Oct 20, 2014 at 1:22 PM, Edward Z. Yang wrote: >> Excerpts from Eric Seidel's message of 2014-10-20 09:32:41 -0700: >> > I read recently that Rust has some sort of symbol-mangling in place to >> allow multiple versions of the same library to co-exist within a single >> build. >> > >> > How feasible would it be to add this feature to GHC? At a first glance >> it seems like it would help substantially. >> >> GHC already has this feature (and in 7.10, it will be upgraded to allow >> multiple instances of the same version of a library, but with different >> dependencies). The problem here is that Cabal doesn't understand how >> to put dependencies together like this. >> >> Edward >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From merijn at inconsistent.nl Tue Oct 21 03:44:43 2014 From: merijn at inconsistent.nl (Merijn Verstraaten) Date: Mon, 20 Oct 2014 20:44:43 -0700 Subject: GitHub pull requests In-Reply-To: <4F006ECE-DFA8-48C8-B798-76312A8BF569@inconsistent.nl> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87y4say2oz.fsf@gmail.com> <4F006ECE-DFA8-48C8-B798-76312A8BF569@inconsistent.nl> Message-ID: Whoops, accidentally only addressed Ben instead of the list: On 20 Oct 2014, at 12:13 , Ben Gamari wrote: > a) Confusion between Github issue numbers, Trac bug numbers, and > Phabricator identifiers It is possible to disable GitHub issues on a repository; would this not at least solve the issue number confusion? I only figured this out today when trying to *enable* issues on my personal fork of another project. Cheers, Merijn -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL:
From kazu at iij.ad.jp Tue Oct 21 04:15:46 2014 From: kazu at iij.ad.jp (Kazu Yamamoto) Date: Tue, 21 Oct 2014 13:15:46 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <20141017.235117.27092370466503596.kazu@iij.ad.jp> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> Message-ID: <20141021.131546.1502962042806916557.kazu@iij.ad.jp> Hi, >> Andreas - want me to go ahead and get you some hardware to test Ben's >> patch in the mean time? This way we'll at least not leave it hanging >> until the last moment... > > I will also try this with two 20-core machines connected 10G on > Monday. I measured the performance of GHC head, 7.8.3 and 7.8.3 + Ben's patch set. Server: witty 8080 -r -a -s +RTS -N *1 Measurement tool: weighttp -n 100000 -c 1000 -k -t 19 http://192.168.0.1:8080/ Measurement env: two 20 core (w/o HT) machines directly connected 10G Here is the result (req/s):

-N          1        2        4        8       16
---------------------------------------------------------
head       92,855  155,957  306,813  498,613  527,034
7.8.3      86,494  160,321  310,675  494,020  510,751
7.8.3+ben  37,608   69,376  131,686  237,783  333,946

head and 7.8.3 have almost the same performance. But I saw significant performance regression in Ben's patch set. *1 https://github.com/kazu-yamamoto/witty/blob/master/README.md P.S. - Scalability is not linear as you can see.
- prefork (witty -n ) got much better result than Mio (witty +RTS ) (677,837 req/s for witty 8080 -r -a -s -n 16) --Kazu From alexander at plaimi.net Tue Oct 21 07:18:03 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Tue, 21 Oct 2014 09:18:03 +0200 Subject: GitHub pull requests In-Reply-To: <87y4say2oz.fsf@gmail.com> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87y4say2oz.fsf@gmail.com> Message-ID: <544608AB.6070406@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 20/10/14 21:13, Ben Gamari wrote: > > 1. Do nothing, ignore pull requests as we do now > > 2. Monitor Github for new pull requests and close with a message > requesting that the user opens a differential instead This is stupid. Instead, disable pull requests on GitHub altogether. > 3. Teach Phabricator to allow to submit a URL to a commit (or branch) > in a forked github.com/ghc/ghc repo, and create a code-revision out > of that. (suggested by hvr) > > 4. Monitor Github for new pull requests and use facility in (3) to > open a differrential and close the pull request with a message > pointing to it. If someone can be bothered implementing these, they could work. > 5. Start accepting pull requests in addition to differentials > (suggested by Joachim) - -1. - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlRGCKsACgkQRtClrXBQc7U5jgD+NFGH9wNB8t54K8v2xjOQ3U1a T4W6hPkKbmUNrdPjf2YA/0eDUG2XJgEvxiuSnVtmimLgrGFc2weD9f1656/S2tm8 =3ELn -----END PGP SIGNATURE----- From alexander at plaimi.net Tue Oct 21 08:36:29 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Tue, 21 Oct 2014 10:36:29 +0200 Subject: GitHub pull requests In-Reply-To: <544608AB.6070406@plaimi.net> References: <6D1821C0-52BE-4506-8503-307D2CBD83DF@cis.upenn.edu> <87y4say2oz.fsf@gmail.com> <544608AB.6070406@plaimi.net> Message-ID: <54461B0D.4080007@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 21/10/14 09:18, Alexander Berntsen wrote: > On 20/10/14 21:13, Ben Gamari wrote: >> > >> > 1. Do nothing, ignore pull requests as we do now >> > >> > 2. Monitor Github for new pull requests and close with a message >> > requesting that the user opens a differential instead > This is stupid. Instead, disable pull requests on GitHub altogether. Apparently PRs are no longer a feature which may be disabled. (Thanks Merijn, for pointing that out to me.) So I guess I vote for 1 bar anyone taking the time to do 3 and optionally also 4. - -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlRGGw0ACgkQRtClrXBQc7UQ7AD+IFkEomzkhbXIQKynvYgcrTzI OamTaaXKeBBT54rG2cIBAIl2fESCyTGTbeYETRV58QiNAuGZ/4Tf+hfohfiHFDPE =ydbf -----END PGP SIGNATURE----- From eir at cis.upenn.edu Tue Oct 21 13:34:31 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 21 Oct 2014 09:34:31 -0400 Subject: `arc` changes my commit messages Message-ID: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> Hi all, Is there a way to put `arc` into a read-only mode? Frequently while working on a patch, I make several commits, preferring to separate out testing commits from productive work commits and non-productive (whitespace, comments) commits. Sometimes each of these categories are themselves broken into several commits. 
These commits are *not* my internal workflow. They are intentionally curated by rebasing as I'm ready to publish the patch, as I think the patches are easy to read this way. (Tell me if I'm wrong, here!) I've resolved myself not to use `arc land`, but instead to apply the patch using git. Yet, when I call `arc diff`, even if I haven't amended my patch during the `arc diff`ing process, the commit message of the tip of my branch is changed, and without telling me. I recently pushed my (tiny, uninteresting) fix to #9692. Luckily, my last commit happened to be the meat, so the amended commit message is still wholly relevant. But, that won't always be the case, and I was surprised to see a Phab-ified commit message appear in the Trac ticket after pushing. I know I could use more git-ery to restore my old commit message. But is there a way to stop `arc` from doing the message change in the first place? Thanks! Richard From shumovichy at gmail.com Tue Oct 21 13:45:59 2014 From: shumovichy at gmail.com (Yuras Shumovich) Date: Tue, 21 Oct 2014 16:45:59 +0300 Subject: `arc` changes my commit messages In-Reply-To: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> References: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> Message-ID: <1413899159.2698.1.camel@gmail.com> On Tue, 2014-10-21 at 09:34 -0400, Richard Eisenberg wrote: > Hi all, > > Is there a way to put `arc` into a read-only mode? Not sure it is relevant, please ignore me if it is not. Does "arc diff --preview" work for you? It will create a diff without creating revision and changing anything locally. Then you can attach the diff to an existent revision or create new one. > > Frequently while working on a patch, I make several commits, preferring to separate out testing commits from productive work commits and non-productive (whitespace, comments) commits. Sometimes each of these categories are themselves broken into several commits. These commits are *not* my internal workflow. They are intentionally curated by rebasing as I'm ready to publish the patch, as I think the patches are easy to read this way. (Tell me if I'm wrong, here!) I've resolved myself not to use `arc land`, but instead to apply the patch using git. > > Yet, when I call `arc diff`, even if I haven't amended my patch during the `arc diff`ing process, the commit message of the tip of my branch is changed, and without telling me. I recently pushed my (tiny, uninteresting) fix to #9692. Luckily, my last commit happened to be the meat, so the amended commit message is still wholly relevant. But, that won't always be the case, and I was surprised to see a Phab-ified commit message appear in the Trac ticket after pushing. > > I know I could use more git-ery to restore my old commit message. But is there a way to stop `arc` from doing the message change in the first place? > > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Tue Oct 21 13:46:16 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 21 Oct 2014 11:16:16 -0230 Subject: `arc` changes my commit messages In-Reply-To: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> References: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> Message-ID: This is probably the biggest shortcoming of Phab. If you don't want this merging behavior you need to make a separate Phab review *per commit*. 
When I use arc I usually use git to rewrite the message after the review to something less messy. On Tue, Oct 21, 2014 at 11:04 AM, Richard Eisenberg wrote: > Hi all, > > Is there a way to put `arc` into a read-only mode? > > Frequently while working on a patch, I make several commits, preferring to > separate out testing commits from productive work commits and > non-productive (whitespace, comments) commits. Sometimes each of these > categories are themselves broken into several commits. These commits are > *not* my internal workflow. They are intentionally curated by rebasing as > I'm ready to publish the patch, as I think the patches are easy to read > this way. (Tell me if I'm wrong, here!) I've resolved myself not to use > `arc land`, but instead to apply the patch using git. > > Yet, when I call `arc diff`, even if I haven't amended my patch during the > `arc diff`ing process, the commit message of the tip of my branch is > changed, and without telling me. I recently pushed my (tiny, uninteresting) > fix to #9692. Luckily, my last commit happened to be the meat, so the > amended commit message is still wholly relevant. But, that won't always be > the case, and I was surprised to see a Phab-ified commit message appear in > the Trac ticket after pushing. > > I know I could use more git-ery to restore my old commit message. But is > there a way to stop `arc` from doing the message change in the first place? > > Thanks! > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Tue Oct 21 15:31:07 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 21 Oct 2014 11:31:07 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141021.131546.1502962042806916557.kazu@iij.ad.jp> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> Message-ID: <87r3y1xww4.fsf@gmail.com> Kazu Yamamoto writes: > Hi, > >>> Andreas - want me to go ahead and get you some hardware to test Ben's >>> patch in the mean time? This way we'll at least not leave it hanging >>> until the last moment... >> >> I will also try this with two 20-core machines connected 10G on >> Monday. > > I measured the performace of GHC head, 7.8.3 and 7.8.3 + Ben's patch > set. > > Server: witty 8080 -r -a -s +RTS -N *1 > Measurement tool: weighttp -n 100000 -c 1000 -k -t 19 http://192.168.0.1:8080/ > Measurement env: two 20 core (w/o HT) machines directly connected 10G > > Here is result (req/s): > > -N 1 2 4 8 16 > --------------------------------------------------------- > head 92,855 155,957 306,813 498,613 527,034 > 7.8.3 86,494 160,321 310,675 494,020 510,751 > 7.8.3+ben 37,608 69,376 131,686 237,783 333,946 > > head and 7.8.3 has almost the same performance. But I saw significant > performance regression in Ben's patch set. > Hmm, uh oh. Thanks for testing this. I'll try to reproduce this on my end. It looks like it shouldn't be so hard as even the single-threaded performance regresses drastically. Just to confirm, you are using the latest revision of D347? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From gershomb at gmail.com Tue Oct 21 15:46:49 2014 From: gershomb at gmail.com (Gershom B) Date: Tue, 21 Oct 2014 11:46:49 -0400 Subject: GADTs in implementation of Template Haskell In-Reply-To: <16077A03-605D-4BE3-85B6-57DC51C05CB7@cis.upenn.edu> References: <16077A03-605D-4BE3-85B6-57DC51C05CB7@cis.upenn.edu> Message-ID: On October 20, 2014 at 2:35:27 PM, Richard Eisenberg (eir at cis.upenn.edu) wrote: > Having done so, I'm not 100% convinced that this is the right thing to do. I would love feedback > on my full, concrete proposal available at https://ghc.haskell.org/trac/ghc/wiki/Design/TemplateHaskellGADTs > > Is this a change for the better or worse? Feel free either to comment on the wiki page or > to this email. > > Thanks! > Richard As I understand it, the big downsides are that we don?t get `gunfold` for Dec and Pragma, and we may not get `Generic` instances for them at all. At first this felt pretty bad, but then I reviewed how generics and TH tend to get used together, and now I?m not _quite_ as anxious. In my mind the main use case for them is in things like this: http://www.well-typed.com/blog/2014/10/quasi-quoting-dsls/ ? skimming the code involved, and the way `dataToExpQ` and friends tend to work, the key bit is having the generic instances on what we?re inducting on, not what we?re building? By the time we hit concrete TH syntax, it feels a bit late to be doing further generic transformations on it, so I have a suspicion this won?t hit anyone. I certainly don?t think it?ll affect _my_ uses of TH at least :-) Cheers, Gershom From singpolyma at singpolyma.net Tue Oct 21 15:52:27 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Tue, 21 Oct 2014 10:52:27 -0500 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: <20141021155227.GB1824@singpolyma-liberty> Somebody claiming to be John Lato wrote: >Thinking about this, I came to a slightly different scheme. What if we >instead add a pragma: > >{-# OrphanModule ClassName ModuleName #-} I really like this. It solve all the real orphan instance cases I've had in my libraries. -- Stephen Paul Weber, @singpolyma See for how I prefer to be contacted edition right joseph -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From singpolyma at singpolyma.net Tue Oct 21 15:54:31 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Tue, 21 Oct 2014 10:54:31 -0500 Subject: Warning on tabs by default (#9230) for GHC 7.10 In-Reply-To: References: Message-ID: <20141021155431.GC1824@singpolyma-liberty> >Making tabs warn by default has been requested many times before, and >now that the compiler is completely detabbed, this should become >possible to enable easily, and we can gradually remove warnings from >everything else. I hate this, but I assume there will be an easy flag to turn this warning off that I can put in all of my builds. -- Stephen Paul Weber, @singpolyma See for how I prefer to be contacted edition right joseph -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From david.feuer at gmail.com Tue Oct 21 16:11:13 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 21 Oct 2014 12:11:13 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: <20141021155227.GB1824@singpolyma-liberty> References: <20141021155227.GB1824@singpolyma-liberty> Message-ID: As I said before, it still doesn't solve the problem I'm trying to solve. Look at a package like criterion, for example. criterion depends on aeson. Why? Because statistics depends on it. Why? Because statistics wants a couple types it defines to be instances of classes defined in aeson. John Lato's proposal would require the pragma to appear in the relevant aeson module, and would prevent *anyone* else from defining instances of those classes. With my proposal, statistics would be able to declare {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass StatisticsType #-} Then it would split the Statistics.AesonInstances module off into a statistics-aeson package and accomplish its objective without stepping on anyone else. We'd get a lot more (mostly tiny) packages, but in exchange the dependencies would get much thinner. On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" wrote: > Somebody claiming to be John Lato wrote: > >> Thinking about this, I came to a slightly different scheme. What if we >> instead add a pragma: >> >> {-# OrphanModule ClassName ModuleName #-} >> > > I really like this. It solve all the real orphan instance cases I've had > in my libraries. > > -- > Stephen Paul Weber, @singpolyma > See for how I prefer to be contacted > edition right joseph > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Tue Oct 21 17:22:36 2014 From: jwlato at gmail.com (John Lato) Date: Tue, 21 Oct 2014 10:22:36 -0700 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: <20141021155227.GB1824@singpolyma-liberty> Message-ID: Perhaps you misunderstood my proposal if you think it would prevent anyone else from defining instances of those classes? Part of the proposal was also adding support to the compiler to allow for a multiple files to use a single module name. That may be a larger technical challenge, but I think it's achievable. I think one key difference is that my proposal puts the onus on class implementors, and David's puts the onus on datatype implementors, so they certainly are complementary and could co-exist. On Tue, Oct 21, 2014 at 9:11 AM, David Feuer wrote: > As I said before, it still doesn't solve the problem I'm trying to solve. > Look at a package like criterion, for example. criterion depends on aeson. > Why? Because statistics depends on it. Why? Because statistics wants a > couple types it defines to be instances of classes defined in aeson. John > Lato's proposal would require the pragma to appear in the relevant aeson > module, and would prevent *anyone* else from defining instances of those > classes. With my proposal, statistics would be able to declare > > {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > StatisticsType #-} > > Then it would split the Statistics.AesonInstances module off into a > statistics-aeson package and accomplish its objective without stepping on > anyone else. We'd get a lot more (mostly tiny) packages, but in exchange > the dependencies would get much thinner. 
> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > wrote: > >> Somebody claiming to be John Lato wrote: >> >>> Thinking about this, I came to a slightly different scheme. What if we >>> instead add a pragma: >>> >>> {-# OrphanModule ClassName ModuleName #-} >>> >> >> I really like this. It solve all the real orphan instance cases I've had >> in my libraries. >> >> -- >> Stephen Paul Weber, @singpolyma >> See for how I prefer to be contacted >> edition right joseph >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kuznero at gmail.com Tue Oct 21 17:58:37 2014 From: kuznero at gmail.com (Roman Kuznetsov) Date: Tue, 21 Oct 2014 19:58:37 +0200 Subject: Automating GHC build for Windows Message-ID: Hello *, As I am still new in this, I will ask. Were there any attempt to automate GHC build process on Windows with some kind of CI engine, like Hudson, Jenkins, etc.? It seems to be rather helpful to publish build results if not after each commit, but at least once every day. ?? WDYT? -- Sincerely yours, Roman Kuznetsov -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlen at mlen.pl Tue Oct 21 18:08:23 2014 From: mlen at mlen.pl (Mateusz Lenik) Date: Tue, 21 Oct 2014 20:08:23 +0200 Subject: Warning on tabs by default (#9230) for GHC 7.10 In-Reply-To: <20141021155431.GC1824@singpolyma-liberty> References: <20141021155431.GC1824@singpolyma-liberty> Message-ID: <20141021180823.GA74953@polaris.local> Yes, all you need to do is to add -fno-warn-tabs to ghc-options. On Tue, Oct 21, 2014 at 10:54:31AM -0500, Stephen Paul Weber wrote: > >Making tabs warn by default has been requested many times before, and > >now that the compiler is completely detabbed, this should become > >possible to enable easily, and we can gradually remove warnings from > >everything else. > > I hate this, but I assume there will be an easy flag to turn this warning > off that I can put in all of my builds. > > -- > Stephen Paul Weber, @singpolyma > See for how I prefer to be contacted > edition right joseph > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Mateusz Lenik -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From david.feuer at gmail.com Tue Oct 21 18:22:20 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 21 Oct 2014 14:22:20 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: <20141021155227.GB1824@singpolyma-liberty> Message-ID: On Oct 21, 2014 1:22 PM, "John Lato" wrote: > > Perhaps you misunderstood my proposal if you think it would prevent anyone else from defining instances of those classes? Part of the proposal was also adding support to the compiler to allow for a multiple files to use a single module name. That may be a larger technical challenge, but I think it's achievable. You are right; I definitely did not realize this. What happens when files using the same module name both define instances for the same class and type(s)? I don't know nearly enough about how these things work to know if there's a nice way to catch this. Could you explain a bit more about how it would work? Also, what exactly would be in scope in each of these? Would adding a file to the module necessitate recompilation of everything depending on it? 
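To make the shape of that proposal concrete -- purely hypothetical, since neither the pragma nor modules split across several files exist today, and the layout below is invented for illustration rather than taken from anyone's patch:

    -- In aeson (hypothetical use of the proposed pragma):
    {-# OrphanModule ToJSON Data.Aeson.Orphans #-}

    -- In statistics: one of possibly several files that all claim the
    -- module name Data.Aeson.Orphans, each adding instances for its own types.
    module Data.Aeson.Orphans where

    import Data.Aeson (ToJSON (..), Value (Null))
    import Statistics.Distribution.Normal (NormalDistribution)

    instance ToJSON NormalDistribution where
      toJSON _ = Null   -- body irrelevant here; the point is where the instance lives

The unresolved part, as the questions above suggest, is what should happen when two such files define an instance for the same class and type.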
> I think one key difference is that my proposal puts the onus on class implementors, and David's puts the onus on datatype implementors, so they certainly are complementary and could co-exist. Mine puts the onus on either, actually, to support both the pattern of a maintainer maintaining a class with instances and of one maintaining a type with instances. To a certain extent these could even be mixed. For example, a module in base could delegate a number of instances of a certain class, but we wouldn't want pragmas relating to Hackagy types in there. One nice thing about my approach is that any program that's correct *with* the pragma is also correct *without* it?it's entirely negative. In particular, if someone should come up with a broader/better/ultimate solution to the orphan instance problem, the pragma could just go away without breaking anything. Something using multiple files to define one module inherently requires more support from the future. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at sandbergericsson.se Tue Oct 21 18:38:35 2014 From: adam at sandbergericsson.se (Adam Sandberg Eriksson) Date: Tue, 21 Oct 2014 20:38:35 +0200 Subject: Automating GHC build for Windows In-Reply-To: References: Message-ID: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Hello, There is a lot of building infrastructure and I believe most of it is listed on [1]. For example the nightly builders [2] where there are indeed 2 windows buildmachines working (but they seem to fail to build currently). I believe there is ongoing work on adding windows machines to Harbormaster [3] for validating each commit as well as patches submitted to Phabricator. Regards, Adam Sandberg Eriksson [1]: https://ghc.haskell.org/trac/ghc/wiki/Infrastructure [2]: http://haskell.inf.elte.hu/builders/ [3]: https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Harbormaster On Tue, Oct 21, 2014, at 07:58 PM, Roman Kuznetsov wrote: > Hello *, > > As I am still new in this, I will ask. > > Were there any attempt to automate GHC build process on Windows with some > kind of CI engine, like Hudson, Jenkins, etc.? > > It seems to be rather helpful to publish build results if not after each > commit, but at least once every day. > ?? > > WDYT? > > -- > Sincerely yours, > Roman Kuznetsov > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From dev at rodlogic.net Tue Oct 21 18:45:39 2014 From: dev at rodlogic.net (RodLogic) Date: Tue, 21 Oct 2014 14:45:39 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: <20141021155227.GB1824@singpolyma-liberty> Message-ID: One other benefit of multiple files to use a single module name is that it would be easy to separate testing code from real code even when testing internal/non-exported functions. On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > Perhaps you misunderstood my proposal if you think it would prevent anyone > else from defining instances of those classes? Part of the proposal was > also adding support to the compiler to allow for a multiple files to use a > single module name. That may be a larger technical challenge, but I think > it's achievable. > > I think one key difference is that my proposal puts the onus on class > implementors, and David's puts the onus on datatype implementors, so they > certainly are complementary and could co-exist. 
> > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer > wrote: > >> As I said before, it still doesn't solve the problem I'm trying to solve. >> Look at a package like criterion, for example. criterion depends on aeson. >> Why? Because statistics depends on it. Why? Because statistics wants a >> couple types it defines to be instances of classes defined in aeson. John >> Lato's proposal would require the pragma to appear in the relevant aeson >> module, and would prevent *anyone* else from defining instances of those >> classes. With my proposal, statistics would be able to declare >> >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass >> StatisticsType #-} >> >> Then it would split the Statistics.AesonInstances module off into a >> statistics-aeson package and accomplish its objective without stepping on >> anyone else. We'd get a lot more (mostly tiny) packages, but in exchange >> the dependencies would get much thinner. >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" >> wrote: >> >>> Somebody claiming to be John Lato wrote: >>> >>>> Thinking about this, I came to a slightly different scheme. What if we >>>> instead add a pragma: >>>> >>>> {-# OrphanModule ClassName ModuleName #-} >>>> >>> >>> I really like this. It solve all the real orphan instance cases I've >>> had in my libraries. >>> >>> -- >>> Stephen Paul Weber, @singpolyma >>> See for how I prefer to be contacted >>> edition right joseph >>> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Tue Oct 21 18:58:41 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 21 Oct 2014 15:58:41 -0300 Subject: Making GHCi awesomer? Message-ID: Along the lines of making GHCi better it would be very nice if we could use SDL, OpenGL etc in GHCi. Is there a characterization of the libraries that we can't use in ghci? As you know this has been a problem for years. Is there an open bug about it? I know there was some hope that dynamic linking in 7.8.3 would fix this. What has been your experience? For some reason I get mail to ghc-devs but can't send mail to the mailing list. Thanks George On Sat, Oct 18, 2014 at 12:48 PM, Christopher Done wrote: > Good evening, > > ?... > > For years I?ve been using GHCi as a base and it?s been very reliable > for almost every project I?ve done (the only exceptions are things > like SDL and OpenGL, which are well known to be difficult to load in > GHCi, at least on Linux). I think we?ve built up > a good set of functionality > > purely based on asking GHCi things and getting it to do things. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Tue Oct 21 19:00:09 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 21 Oct 2014 12:00:09 -0700 Subject: `arc` changes my commit messages In-Reply-To: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> References: <3BD98820-35C9-42C1-9E2A-4666F394B68C@cis.upenn.edu> Message-ID: <1413917934-sup-2906@sabre> For a while, I tried working around this using a "branch summary patch", which is just an empty commit I kept on top of the patchset which then Phabricator would hit. It was really annoying and Git kept swallowing up. So I eventually gave up and just arc diff'd each patch in the set individually. 
Edward Excerpts from Richard Eisenberg's message of 2014-10-21 06:34:31 -0700: > Hi all, > > Is there a way to put `arc` into a read-only mode? > > Frequently while working on a patch, I make several commits, preferring to separate out testing commits from productive work commits and non-productive (whitespace, comments) commits. Sometimes each of these categories are themselves broken into several commits. These commits are *not* my internal workflow. They are intentionally curated by rebasing as I'm ready to publish the patch, as I think the patches are easy to read this way. (Tell me if I'm wrong, here!) I've resolved myself not to use `arc land`, but instead to apply the patch using git. > > Yet, when I call `arc diff`, even if I haven't amended my patch during the `arc diff`ing process, the commit message of the tip of my branch is changed, and without telling me. I recently pushed my (tiny, uninteresting) fix to #9692. Luckily, my last commit happened to be the meat, so the amended commit message is still wholly relevant. But, that won't always be the case, and I was surprised to see a Phab-ified commit message appear in the Trac ticket after pushing. > > I know I could use more git-ery to restore my old commit message. But is there a way to stop `arc` from doing the message change in the first place? > > Thanks! > Richard From bgamari.foss at gmail.com Tue Oct 21 22:21:19 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 21 Oct 2014 18:21:19 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141021.131546.1502962042806916557.kazu@iij.ad.jp> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> Message-ID: <87h9yxawtc.fsf@gmail.com> Kazu Yamamoto writes: > Hi, > > I measured the performace of GHC head, 7.8.3 and 7.8.3 + Ben's patch > set. > > Server: witty 8080 -r -a -s +RTS -N *1 > Measurement tool: weighttp -n 100000 -c 1000 -k -t 19 http://192.168.0.1:8080/ > Measurement env: two 20 core (w/o HT) machines directly connected 10G > > Here is result (req/s): > > -N 1 2 4 8 16 > --------------------------------------------------------- > head 92,855 155,957 306,813 498,613 527,034 > 7.8.3 86,494 160,321 310,675 494,020 510,751 > 7.8.3+ben 37,608 69,376 131,686 237,783 333,946 > > head and 7.8.3 has almost the same performance. But I saw significant > performance regression in Ben's patch set. > This may be due to lacking INLINEs on definitions added in GHC.Event.Internal [1]. I'm currently in the middle of reproducing these results on an EC2 instance to confirm this. So far the results look much more consistent than my previous attempts at benchmarking on my own hardware. Cheers, - Ben [1] https://github.com/bgamari/ghc/tree/event-rework-7.10 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From bgamari.foss at gmail.com Tue Oct 21 23:22:18 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 21 Oct 2014 19:22:18 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141021.131546.1502962042806916557.kazu@iij.ad.jp> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> Message-ID: <87egu1atzp.fsf@gmail.com> Kazu Yamamoto writes: > Hi, > Hi Kazu, >>> Andreas - want me to go ahead and get you some hardware to test Ben's >>> patch in the mean time? 
This way we'll at least not leave it hanging >>> until the last moment... >> >> I will also try this with two 20-core machines connected 10G on >> Monday. > > I measured the performace of GHC head, 7.8.3 and 7.8.3 + Ben's patch > set. > > Server: witty 8080 -r -a -s +RTS -N *1 > Have you noticed that witty will sometimes terminate (with exit code 0) spontaneously during a run? This seems to happen more often with higher core counts. I've seen this both with and without my patch (based on master as of earlier today). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Wed Oct 22 01:20:57 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 21 Oct 2014 21:20:57 -0400 Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: i'm pretty sure they're usable in ghci... i think theres just certain flags that need to be invoked for one reason or another, but I could be wrong (and i've not tried in a while) On Tue, Oct 21, 2014 at 2:58 PM, George Colpitts wrote: > Along the lines of making GHCi better it would be very nice if we could > use SDL, OpenGL etc in GHCi. > > Is there a characterization of the libraries that we can't use in ghci? > > As you know this has been a problem for years. Is there an open bug about > it? > > I know there was some hope that dynamic linking in 7.8.3 would fix this. > What has been your experience? > > For some reason I get mail to ghc-devs but can't send mail to the mailing > list. > > Thanks > George > > On Sat, Oct 18, 2014 at 12:48 PM, Christopher Done > wrote: > >> Good evening, >> >> ?... >> >> For years I?ve been using GHCi as a base and it?s been very reliable >> for almost every project I?ve done (the only exceptions are things >> like SDL and OpenGL, which are well known to be difficult to load in >> GHCi, at least on Linux). I think we?ve built up >> a good set of functionality >> >> purely based on asking GHCi things and getting it to do things. >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazu at iij.ad.jp Wed Oct 22 02:52:20 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 11:52:20 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87r3y1xww4.fsf@gmail.com> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> <87r3y1xww4.fsf@gmail.com> Message-ID: <20141022.115220.1118300854973687667.kazu@iij.ad.jp> Ben, > Hmm, uh oh. Thanks for testing this. I'll try to reproduce this on my > end. It looks like it shouldn't be so hard as even the single-threaded > performance regresses drastically. Just to confirm, you are using the > latest revision of D347? I used the following as you suggested: https://github.com/bgamari/packages-base/compare/ghc:ghc-7.8...event-rework I cannot tell whether or not this is idential to D347. 
--Kazu From kazu at iij.ad.jp Wed Oct 22 03:16:51 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 12:16:51 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87h9yxawtc.fsf@gmail.com> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> <87h9yxawtc.fsf@gmail.com> Message-ID: <20141022.121651.1952776272893680057.kazu@iij.ad.jp> Ben, > This may be due to lacking INLINEs on definitions added in > GHC.Event.Internal [1]. I'm currently in the middle of reproducing these > results on an EC2 instance to confirm this. So far the results look much > more consistent than my previous attempts at benchmarking on my own > hardware. If using https://github.com/bgamari/packages-base/commits/event-rework is a right way, please push the INLINE commit to this repo? I will try it gain. --Kazu From bgamari.foss at gmail.com Wed Oct 22 03:26:04 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 21 Oct 2014 23:26:04 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.121651.1952776272893680057.kazu@iij.ad.jp> References: <20141017.235117.27092370466503596.kazu@iij.ad.jp> <20141021.131546.1502962042806916557.kazu@iij.ad.jp> <87h9yxawtc.fsf@gmail.com> <20141022.121651.1952776272893680057.kazu@iij.ad.jp> Message-ID: <87y4s8aipf.fsf@gmail.com> Kazu Yamamoto writes: > Ben, > >> This may be due to lacking INLINEs on definitions added in >> GHC.Event.Internal [1]. I'm currently in the middle of reproducing these >> results on an EC2 instance to confirm this. So far the results look much >> more consistent than my previous attempts at benchmarking on my own >> hardware. > > If using > https://github.com/bgamari/packages-base/commits/event-rework > is a right way, please push the INLINE commit to this repo? > I will try it gain. > I already pushed it. The commit in question is 5dce47eb8415eb31e1c6759b6f6a2ef5bfe32470. Thanks for the benchmarking! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Oct 22 03:37:05 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 12:37:05 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87y4s8aipf.fsf@gmail.com> References: <87h9yxawtc.fsf@gmail.com> <20141022.121651.1952776272893680057.kazu@iij.ad.jp> <87y4s8aipf.fsf@gmail.com> Message-ID: <20141022.123705.2239580722964809205.kazu@iij.ad.jp> > I already pushed it. The commit in question is > 5dce47eb8415eb31e1c6759b6f6a2ef5bfe32470. Thanks for the benchmarking! I believe this is in bgamari/ghc (for GHC 7.10?). I'm using bgamari/packages-base for GHC 7.8 and asking to push the same commit to this repo. Actually I compared the latest Internal.hs in bgamari/ghc and one in bgamari/packages-base and I saw *additional* differences. So, I hesitated to apply the patch by myself. 
--Kazu From bgamari.foss at gmail.com Wed Oct 22 03:51:12 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 21 Oct 2014 23:51:12 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.123705.2239580722964809205.kazu@iij.ad.jp> References: <87h9yxawtc.fsf@gmail.com> <20141022.121651.1952776272893680057.kazu@iij.ad.jp> <87y4s8aipf.fsf@gmail.com> <20141022.123705.2239580722964809205.kazu@iij.ad.jp> Message-ID: <87vbncahjj.fsf@gmail.com> Kazu Yamamoto writes: >> I already pushed it. The commit in question is >> 5dce47eb8415eb31e1c6759b6f6a2ef5bfe32470. Thanks for the benchmarking! > > I believe this is in bgamari/ghc (for GHC 7.10?). > I'm using bgamari/packages-base for GHC 7.8 and asking to push the > same commit to this repo. > Ahh, yes. Sorry, I forgot you were on 7.8. Just pushed a new patch to the event-rework-squashed branch [1]. > Actually I compared the latest Internal.hs in bgamari/ghc and > one in bgamari/packages-base and I saw *additional* differences. > So, I hesitated to apply the patch by myself. > There were a few changes necessary due to AMP fallout. They are mostly harmless. Cheers, - Ben [1] https://github.com/bgamari/packages-base/commit/01ac6692f04378052ff7ad8444092ea2d0cc95ef#diff-e5f7b0b727d777e8d0c77f827c3fcc2fR95 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Oct 22 03:55:13 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 12:55:13 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87vbncahjj.fsf@gmail.com> References: <87y4s8aipf.fsf@gmail.com> <20141022.123705.2239580722964809205.kazu@iij.ad.jp> <87vbncahjj.fsf@gmail.com> Message-ID: <20141022.125513.429212147810598499.kazu@iij.ad.jp> > Ahh, yes. Sorry, I forgot you were on 7.8. Just pushed a new patch to > the event-rework-squashed branch [1]. I believe that you are trying to merge your patches to GHC 7.8.4? If not, I will work on the GHC head branch. --Kazu From bgamari.foss at gmail.com Wed Oct 22 04:05:27 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 22 Oct 2014 00:05:27 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.125513.429212147810598499.kazu@iij.ad.jp> References: <87y4s8aipf.fsf@gmail.com> <20141022.123705.2239580722964809205.kazu@iij.ad.jp> <87vbncahjj.fsf@gmail.com> <20141022.125513.429212147810598499.kazu@iij.ad.jp> Message-ID: <87siigagvs.fsf@gmail.com> Kazu Yamamoto writes: >> Ahh, yes. Sorry, I forgot you were on 7.8. Just pushed a new patch to >> the event-rework-squashed branch [1]. > > I believe that you are trying to merge your patches to GHC 7.8.4? > If not, I will work on the GHC head branch. > Well, Bas was wondering whether this would be possible. At this point I'm a bit on the fence; on one hand it's not a crucial fix (we have a workaround in usb) and it may involve changes to exported interfaces (although not very high visibility). On the other hand, it's a pretty easy change to make and it cleans up the semantics of the event manager nicely. Frankly I doubt that the performance characteristics of the patch will change much between HEAD and ghc-7.8 (up to the difference that you've already reported in your last set of benchmarks). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Oct 22 04:52:30 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 13:52:30 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87siigagvs.fsf@gmail.com> References: <87vbncahjj.fsf@gmail.com> <20141022.125513.429212147810598499.kazu@iij.ad.jp> <87siigagvs.fsf@gmail.com> Message-ID: <20141022.135230.250613027303556291.kazu@iij.ad.jp> > Well, Bas was wondering whether this would be possible. At this point > I'm a bit on the fence; on one hand it's not a crucial fix (we have a > workaround in usb) and it may involve changes to exported interfaces > (although not very high visibility). On the other hand, it's a pretty > easy change to make and it cleans up the semantics of the event manager > nicely. I benchmarked your patches on GHC 7.8 branch: -N 1 2 4 8 16 --------------------------------------------------------- head 92,855 155,957 306,813 498,613 527,034 7.8.3 86,494 160,321 310,675 494,020 510,751 7.8.3+ben 84,472 140,978 291,550 488,834 523,837 The inline patch works very nice. :-) # And I was disappointed a bit because GHC does not automatically do # this inline. --Kazu From bgamari.foss at gmail.com Wed Oct 22 05:02:43 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 22 Oct 2014 01:02:43 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.135230.250613027303556291.kazu@iij.ad.jp> References: <87vbncahjj.fsf@gmail.com> <20141022.125513.429212147810598499.kazu@iij.ad.jp> <87siigagvs.fsf@gmail.com> <20141022.135230.250613027303556291.kazu@iij.ad.jp> Message-ID: <87ppdkae8c.fsf@gmail.com> Kazu Yamamoto writes: >> Well, Bas was wondering whether this would be possible. At this point >> I'm a bit on the fence; on one hand it's not a crucial fix (we have a >> workaround in usb) and it may involve changes to exported interfaces >> (although not very high visibility). On the other hand, it's a pretty >> easy change to make and it cleans up the semantics of the event manager >> nicely. > > I benchmarked your patches on GHC 7.8 branch: > > -N 1 2 4 8 16 > --------------------------------------------------------- > head 92,855 155,957 306,813 498,613 527,034 > 7.8.3 86,494 160,321 310,675 494,020 510,751 > 7.8.3+ben 84,472 140,978 291,550 488,834 523,837 > > The inline patch works very nice. :-) > Awesome! Out of curiosity are these numbers from single runs or do you average? What are the uncertainties on these numbers? Even on the Rackspace machines I was finding very large variances in my benchmarks, largely due to far outliers. I didn't investigate too far but it seems that a non-trivial fraction of connections were failing. At some point it would be nice to chat about how we might replicate your benchmarking configuration on the Rackspace boxen. > # And I was disappointed a bit because GHC does not automatically do > # this inline. > Yeah, this isn't the first time I've been caught assuming that GHC will inline. Thanks again for the benchmarking! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Oct 22 05:10:46 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 14:10:46 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87ppdkae8c.fsf@gmail.com> References: <87siigagvs.fsf@gmail.com> <20141022.135230.250613027303556291.kazu@iij.ad.jp> <87ppdkae8c.fsf@gmail.com> Message-ID: <20141022.141046.1951189416086493481.kazu@iij.ad.jp> > Out of curiosity are these numbers from single runs or do you average? Run three times and took the middle in this time. > What are the uncertainties on these numbers? Even on the Rackspace > machines I was finding very large variances in my benchmarks, largely > due to far outliers. I didn't investigate too far but it seems that > a non-trivial fraction of connections were failing. If cores are in sleep mode, the results are poor. You need to warm cores up somehow. I forget how to disable the deep sleep mode by a command on Linux. (Open a special file and write something?) I believe that Andi knows that. To my experience, 1G network is NOT good enough. >> # And I was disappointed a bit because GHC does not automatically do >> # this inline. >> > Yeah, this isn't the first time I've been caught assuming that GHC will > inline. I read your code and you export these functions. That's why GHC does not inline them automatically. --Kazu From kazu at iij.ad.jp Wed Oct 22 05:15:10 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 14:15:10 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87ppdkae8c.fsf@gmail.com> References: <87siigagvs.fsf@gmail.com> <20141022.135230.250613027303556291.kazu@iij.ad.jp> <87ppdkae8c.fsf@gmail.com> Message-ID: <20141022.141510.1453808307821454188.kazu@iij.ad.jp> Ben, I have some comments and questions about your code: C: registerFd' is exported. So, it should have document. Q: Since registerFd uses OneShot and threadWait uses registerFd, basic IO functions use OneShot by default. No changes from GHC 7.8.3. Do I understand correctly? Q: dbus library will use registerFd' to specify MultiShot, right? --Kazu From austin at well-typed.com Wed Oct 22 05:21:53 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 22 Oct 2014 00:21:53 -0500 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.141046.1951189416086493481.kazu@iij.ad.jp> References: <87siigagvs.fsf@gmail.com> <20141022.135230.250613027303556291.kazu@iij.ad.jp> <87ppdkae8c.fsf@gmail.com> <20141022.141046.1951189416086493481.kazu@iij.ad.jp> Message-ID: On Wed, Oct 22, 2014 at 12:10 AM, Kazu Yamamoto wrote: >> Out of curiosity are these numbers from single runs or do you average? > > Run three times and took the middle in this time. > >> What are the uncertainties on these numbers? Even on the Rackspace >> machines I was finding very large variances in my benchmarks, largely >> due to far outliers. I didn't investigate too far but it seems that >> a non-trivial fraction of connections were failing. > > If cores are in sleep mode, the results are poor. You need to warm > cores up somehow. > > I forget how to disable the deep sleep mode by a command on Linux. > (Open a special file and write something?) I believe that Andi knows > that. You need to set the CPU into C0 using /dev/cpu_dma_latency. Here's a short paper with a program to show the way to do it[1]. 
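The gist, for anyone who doesn't want to dig through the paper: write a latency target of 0 to /dev/cpu_dma_latency and keep the file open for as long as the setting should hold; the kernel reverts to the default when the file is closed. A minimal Haskell sketch of the same trick, assuming the usual PM QoS behaviour and root privileges:

    import qualified Data.ByteString as B
    import System.IO

    -- Keep cores out of deep C-states for the duration of 'act' by
    -- requesting a 0us wakeup latency; the default is restored when the
    -- handle is closed.
    withNoDeepSleep :: IO a -> IO a
    withNoDeepSleep act =
      withBinaryFile "/dev/cpu_dma_latency" WriteMode $ \h -> do
        B.hPut h (B.pack [0, 0, 0, 0])   -- 32-bit latency value of 0
        hFlush h
        act

Wrapping the benchmark driver (or just an idle loop in another terminal) in withNoDeepSleep should have the same effect as the C program the paper describes.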
The Mio paper mentions this, and the results are pretty dramatic: "We disable power-saving by specifying the maximum transition latency for the CPU, which forces the CPU cores to stay in C0 state. Figure 12 shows the results, with the curves labelled ?Default? and ?NoSleep? showing the performance in the default configuration and the default configuration with power-saving disabled, respectively. Without limiting the CPU sleep states (curve ?Default?), SimpleServerC cannot benefit from using more CPU cores and the throughput is less than 218,000 requests per second. In contrast, after preventing CPU cores entering deep sleep states (curve ?NoSleep?), SimpleServerC scales up to 20 cores and can process 1.2 million requests per second, approximately 6 times faster than with the default configuration."[2] > To my experience, 1G network is NOT good enough. The Rackspace machines come with bonded 10GigE, so hopefully over the internal DC network they can handle that. :) [1] http://en.community.dell.com/cfs-file/__key/telligent-evolution-components-attachments/13-4491-00-00-20-22-77-64/Controlling_5F00_Processor_5F00_C_2D00_State_5F00_Usage_5F00_in_5F00_Linux_5F00_v1.1_5F00_Nov2013.pdf [2] Section 5.1, http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask035-voellmy.pdf -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From kazu at iij.ad.jp Wed Oct 22 05:38:02 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 14:38:02 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: References: <87ppdkae8c.fsf@gmail.com> <20141022.141046.1951189416086493481.kazu@iij.ad.jp> Message-ID: <20141022.143802.2015259521689627953.kazu@iij.ad.jp> Hi Austin, > You need to set the CPU into C0 using /dev/cpu_dma_latency. Here's a > short paper with a program to show the way to do it[1]. This paper is what I'm looking for. Thanks! --Kazu From bgamari.foss at gmail.com Wed Oct 22 05:45:14 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 22 Oct 2014 01:45:14 -0400 Subject: One-shot semantics in GHC event manager In-Reply-To: <20141022.141510.1453808307821454188.kazu@iij.ad.jp> References: <87siigagvs.fsf@gmail.com> <20141022.135230.250613027303556291.kazu@iij.ad.jp> <87ppdkae8c.fsf@gmail.com> <20141022.141510.1453808307821454188.kazu@iij.ad.jp> Message-ID: <87mw8oac9h.fsf@gmail.com> Kazu Yamamoto writes: > Ben, > > I have some comments and questions about your code: > > C: registerFd' is exported. So, it should have document. > It is documented [1] in the 7.10 branch. I didn't bother to bring this patch over to 7.8 (although I will do so when it becomes clear that this is going in to 7.8.4). > Q: Since registerFd uses OneShot and threadWait uses registerFd, basic > IO functions use OneShot by default. No changes from GHC 7.8.3. Do > I understand correctly? > That is the idea. That being said adding another variant of registerFd (which as far as I know has three users) for backwards compatibility seems a bit silly. If we decided to punt on this patch until 7.10 I'd say we should just change the interface of registerFd. If we are going to put it in 7.8.4, however, then this isn't as clear. > Q: dbus library will use registerFd' to specify MultiShot, right? > I'm not sure about dbus but this is how usb will use it, yes. Does dbus also use the event manager directly? 
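For anyone following along who hasn't used it: "using the event manager directly" means going through GHC.Event rather than threadWaitRead/threadWaitWrite. A minimal sketch against the 7.8-era API -- from memory, so treat the exact signatures as approximate; under the proposed patch, registerFd' would additionally take the one-shot/multi-shot argument discussed above:

    import GHC.Event (getSystemEventManager, registerFd, unregisterFd, evtRead)
    import System.Posix.Types (Fd)

    -- Ask the system event manager to invoke 'action' when 'fd' becomes
    -- readable; returns an action that removes the registration.  Whether
    -- the callback keeps firing after the first event is exactly the
    -- one-shot vs. multi-shot question being discussed in this thread.
    watchReadable :: Fd -> IO () -> IO (IO ())
    watchReadable fd action = do
      mmgr <- getSystemEventManager
      case mmgr of
        Nothing  -> error "no system event manager (non-threaded RTS?)"
        Just mgr -> do
          key <- registerFd mgr (\_key _evt -> action) fd evtRead
          return (unregisterFd mgr key)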
Cheers, - Ben [1] https://github.com/bgamari/ghc/commit/fb948ef1cdb92419b88fb621edee19d644a26027 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From kazu at iij.ad.jp Wed Oct 22 06:47:19 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 22 Oct 2014 15:47:19 +0900 (JST) Subject: One-shot semantics in GHC event manager In-Reply-To: <87mw8oac9h.fsf@gmail.com> References: <87ppdkae8c.fsf@gmail.com> <20141022.141510.1453808307821454188.kazu@iij.ad.jp> <87mw8oac9h.fsf@gmail.com> Message-ID: <20141022.154719.1556396303785639789.kazu@iij.ad.jp> >> Q: Since registerFd uses OneShot and threadWait uses registerFd, basic >> IO functions use OneShot by default. No changes from GHC 7.8.3. Do >> I understand correctly? >> > That is the idea. That being said adding another variant of registerFd > (which as far as I know has three users) for backwards compatibility > seems a bit silly. If we decided to punt on this patch until 7.10 I'd > say we should just change the interface of registerFd. If we are going > to put it in 7.8.4, however, then this isn't as clear. Understood. Thanks. >> Q: dbus library will use registerFd' to specify MultiShot, right? >> > I'm not sure about dbus but this is how usb will use it, yes. Does dbus > also use the event manager directly? Never mind. I meant usb, not dbus. --Kazu From mail at joachim-breitner.de Wed Oct 22 08:50:23 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 22 Oct 2014 10:50:23 +0200 Subject: [commit: ghc] master: Reify data family instances correctly. (e319d6d) In-Reply-To: <20141021132130.5945A3A300@ghc.haskell.org> References: <20141021132130.5945A3A300@ghc.haskell.org> Message-ID: <1413967823.1376.3.camel@joachim-breitner.de> Hi Richard, Travis is complaining about your commit: Actual stderr output differs from expected: --- ./th/T9692.stderr 2014-10-21 13:22:53.783212762 +0000 +++ ./th/T9692.comp.stderr 2014-10-21 13:52:36.314083513 +0000 @@ -1,2 +0,0 @@ -data family T9692.F (a_0 :: k_1) (b_2 :: k_3) :: * -data instance T9692.F GHC.Types.Int x_4 = T9692.FInt x_4 *** unexpected failure for T9692(normal) https://s3.amazonaws.com/archive.travis-ci.org/jobs/38604151/log.txt It maybe be due to a missing flushing of stdout; I added that line and will let you know if that was not it. Greetings, Joachim Greetings, Joachim Am Dienstag, den 21.10.2014, 13:21 +0000 schrieb git at git.haskell.org: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/e319d6d2704edc2696f47409f85f4d4ce58a6cc4/ghc > > >--------------------------------------------------------------- > > commit e319d6d2704edc2696f47409f85f4d4ce58a6cc4 > Author: Richard Eisenberg > Date: Mon Oct 20 15:36:57 2014 -0400 > > Reify data family instances correctly. > > Summary: > Fix #9692. > > The reifier didn't account for the possibility that data/newtype > instances are sometimes eta-reduced. It now eta-expands as necessary. 
> > Test Plan: th/T9692 > > Reviewers: simonpj, austin > > Subscribers: thomie, carter, ezyang, simonmar > > Differential Revision: https://phabricator.haskell.org/D355 > > > >--------------------------------------------------------------- > > e319d6d2704edc2696f47409f85f4d4ce58a6cc4 > compiler/typecheck/TcSplice.lhs | 10 +++++++++- > 1 file changed, 9 insertions(+), 1 deletion(-) > > diff --git a/compiler/typecheck/TcSplice.lhs b/compiler/typecheck/TcSplice.lhs > index bb6af8c..e952a27 100644 > --- a/compiler/typecheck/TcSplice.lhs > +++ b/compiler/typecheck/TcSplice.lhs > @@ -1338,8 +1338,16 @@ reifyFamilyInstance (FamInst { fi_flavor = flavor > DataFamilyInst rep_tc -> > do { let tvs = tyConTyVars rep_tc > fam' = reifyName fam > + > + -- eta-expand lhs types, because sometimes data/newtype > + -- instances are eta-reduced; See Trac #9692 > + -- See Note [Eta reduction for data family axioms] > + -- in TcInstDcls > + (_rep_tc, rep_tc_args) = splitTyConApp rhs > + etad_tyvars = dropList rep_tc_args tvs > + eta_expanded_lhs = lhs `chkAppend` mkTyVarTys etad_tyvars > ; cons <- mapM (reifyDataCon (mkTyVarTys tvs)) (tyConDataCons rep_tc) > - ; th_tys <- reifyTypes lhs > + ; th_tys <- reifyTypes (filter (not . isKind) eta_expanded_lhs) > ; return (if isNewTyCon rep_tc > then TH.NewtypeInstD [] fam' th_tys (head cons) [] > else TH.DataInstD [] fam' th_tys cons []) } > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From sophie at traumapony.org Wed Oct 22 09:26:27 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Wed, 22 Oct 2014 19:26:27 +1000 Subject: Current description of Core? Message-ID: Hi, Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at well-typed.com Wed Oct 22 09:34:46 2014 From: adam at well-typed.com (Adam Gundry) Date: Wed, 22 Oct 2014 10:34:46 +0100 Subject: Current description of Core? In-Reply-To: References: Message-ID: <54477A36.9090700@well-typed.com> Hi, On 22/10/14 10:26, Sophie Taylor wrote: > Is the current description of Core still System FC_2 (described in > https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? There have been a few extensions since then, described in these papers: http://research.microsoft.com/en-us/um/people/simonpj/papers/ext-f/ Also, if you're interested in the gory details of what GHC *really* implements, as opposed to the sanitized academic version, Richard Eisenberg put together a nice description of Core that you can find in the GHC repository: https://github.com/ghc/ghc/blob/master/docs/core-spec/core-spec.pdf?raw=true Hope this helps, Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Wed Oct 22 09:35:25 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 22 Oct 2014 09:35:25 +0000 Subject: Current description of Core? 
In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? We never implemented that particular version (too complicated!). This is the full current story (thanks to Richard for keeping it up to date), in the GHC source tree : https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sophie Taylor Sent: 22 October 2014 10:26 To: ghc-devs at haskell.org Subject: Current description of Core? Hi, Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sophie at traumapony.org Wed Oct 22 09:53:11 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Wed, 22 Oct 2014 19:53:11 +1000 Subject: Current description of Core? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Ah, thanks HEAPS. I've been banging my head against a wall for the last few days trying to see exactly what is going on :) I'm trying to find a way to minimise/eliminate the changes required to Core for the arrow notation rewrite - specifically, introducing kappa abstraction and application - semantically different to lambda abstraction/application but close enough that I can probably get away with either adding a simple flag to the Abstraction/Application constructors or doing it higher up in the HsExpr land, but the latter method leaves a sour taste in my mouth. On 22 October 2014 19:35, Simon Peyton Jones wrote: > Is the current description of Core still System FC_2 (described in > https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > We never implemented that particular version (too complicated!). > > > > This is the full current story (thanks to Richard for keeping it up to > date), in the GHC source tree > > : > > https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Sophie > Taylor > *Sent:* 22 October 2014 10:26 > *To:* ghc-devs at haskell.org > *Subject:* Current description of Core? > > > > Hi, > > > > Is the current description of Core still System FC_2 (described in > https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 22 09:59:09 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 22 Oct 2014 09:59:09 +0000 Subject: Current description of Core? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> Interesting. There is a pretty high bar for changes to Core itself. Currently arrow notation desugars into Core with no changes. If you want to change Core, then arrow ?notation? is actually much more than syntactic sugar. Go for it ? but it would be a much more foundational change than previously, and hence would require more motivation. 
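To make "desugars into Core with no changes" concrete for other readers: arrow notation is eliminated in terms of the Arrow class methods before Core is involved, so Core only ever sees ordinary applications of arr, (>>>), first and friends. A rough sketch using the standard addA example (GHC's actual output has extra tuple plumbing, but the idea is the same):

    {-# LANGUAGE Arrows #-}
    import Control.Arrow

    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = proc x -> do
      y <- f -< x
      z <- g -< x
      returnA -< y + z

    -- Roughly what the notation elaborates to: plain Arrow combinators,
    -- i.e. ordinary Haskell terms, so Core itself needs no extension.
    addA' :: Arrow a => a b Int -> a b Int -> a b Int
    addA' f g = (f &&& g) >>> arr (\(y, z) -> y + z)

A kappa-calculus-aware desugaring would presumably target a different, smaller set of combinators, which is where the question of changing Core -- or not -- comes in.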
S From: Sophie Taylor [mailto:sophie at traumapony.org] Sent: 22 October 2014 10:53 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Current description of Core? Ah, thanks HEAPS. I've been banging my head against a wall for the last few days trying to see exactly what is going on :) I'm trying to find a way to minimise/eliminate the changes required to Core for the arrow notation rewrite - specifically, introducing kappa abstraction and application - semantically different to lambda abstraction/application but close enough that I can probably get away with either adding a simple flag to the Abstraction/Application constructors or doing it higher up in the HsExpr land, but the latter method leaves a sour taste in my mouth. On 22 October 2014 19:35, Simon Peyton Jones > wrote: Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? We never implemented that particular version (too complicated!). This is the full current story (thanks to Richard for keeping it up to date), in the GHC source tree : https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sophie Taylor Sent: 22 October 2014 10:26 To: ghc-devs at haskell.org Subject: Current description of Core? Hi, Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? -------------- next part -------------- An HTML attachment was scrubbed... URL: From sophie at traumapony.org Wed Oct 22 10:18:52 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Wed, 22 Oct 2014 20:18:52 +1000 Subject: Current description of Core? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Yeah, definitely. Part of the reason why arrow notation is so frustrating at the moment is because it forces everything into lambda calculus; that is, it requires every category to be Cartesian Closed. When your arrow category isn't Cartesian Closed, it raises two issues. 1) When it's not Cartesian, you have to lie and say it supports products instead of tensors (that is, you are able to get back the arguments of a product unchanged, i.e. simple tuples), but this isn't the relevant part for Core. 2) When it's not closed, you have to lie and say it supports higher order functions (i.e., lambda abstractions applied to lambda abstractions) and implement arr. Now, you can lie at the syntax level and typecheck it as kappa calculus (i.e. first order functions only unless you are explicitly a Closed category) but then say it is lambda calculus at the core level; this would work because lambda calculus subsumes kappa calculus. This would allow the optimiser/RULES etc to work unchanged. However, you would lose a lot of the internal consistency checking usefulness of Core, and could miss out on kappa-calculus-specific optimisations (although come to think of it, call arity analysis might solve a lot of this issue). On 22 October 2014 19:59, Simon Peyton Jones wrote: > Interesting. There is a pretty high bar for changes to Core itself. > Currently arrow notation desugars into Core with no changes. If you want > to change Core, then arrow ?notation? is actually much more than syntactic > sugar. Go for it ? 
but it would be a much more foundational change than > previously, and hence would require more motivation. > > > > S > > > > *From:* Sophie Taylor [mailto:sophie at traumapony.org] > *Sent:* 22 October 2014 10:53 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Current description of Core? > > > > Ah, thanks HEAPS. I've been banging my head against a wall for the last > few days trying to see exactly what is going on :) I'm trying to find a way > to minimise/eliminate the changes required to Core for the arrow notation > rewrite - specifically, introducing kappa abstraction and application - > semantically different to lambda abstraction/application but close enough > that I can probably get away with either adding a simple flag to the > Abstraction/Application constructors or doing it higher up in the HsExpr > land, but the latter method leaves a sour taste in my mouth. > > > > On 22 October 2014 19:35, Simon Peyton Jones > wrote: > > Is the current description of Core still System FC_2 (described in > https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > We never implemented that particular version (too complicated!). > > > > This is the full current story (thanks to Richard for keeping it up to > date), in the GHC source tree > > : > > https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Sophie > Taylor > *Sent:* 22 October 2014 10:26 > *To:* ghc-devs at haskell.org > *Subject:* Current description of Core? > > > > Hi, > > > > Is the current description of Core still System FC_2 (described in > https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Wed Oct 22 10:54:38 2014 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 22 Oct 2014 12:54:38 +0200 Subject: Making GHCi awesomer? In-Reply-To: References: Message-ID: 2014-10-22 3:20 GMT+02:00 Carter Schonwald : > i'm pretty sure they're usable in ghci... i think theres just certain flags > that need to be invoked for one reason or another, but I could be wrong (and > i've not tried in a while) I just gave a few OpenGL/GLUT examples a try with the 2014.2.0.0 platform on Ubuntu 14.04.1 (x64), and things work nicely using plain "ghci" without any flags. I remember there were some threading issues (OpenGL uses TLS), but obviously that's not the case anymore. Hmmm, has something changed? Anyway, I would be interested in any concrete problems, too. From svenpanne at gmail.com Wed Oct 22 13:16:24 2014 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 22 Oct 2014 15:16:24 +0200 Subject: cabal sdist trouble with GHC from head Message-ID: Does anybody have a clue what's going wrong at the sdist step here? https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707011#L104 This only happens with a GHC from head, a build with GHC 7.8.3 is fine: https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707010 Any help highly appreciated... Cheers, S. From eir at cis.upenn.edu Wed Oct 22 14:16:32 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 22 Oct 2014 10:16:32 -0400 Subject: [commit: ghc] master: Reify data family instances correctly. 
(e319d6d) In-Reply-To: <1413967823.1376.3.camel@joachim-breitner.de> References: <20141021132130.5945A3A300@ghc.haskell.org> <1413967823.1376.3.camel@joachim-breitner.de> Message-ID: <13B9267E-36EA-45E2-81CA-0C893F960C6D@cis.upenn.edu> Bah. I think it is the flushing, because I seem to remember that problem happening before. Will try to be more careful next time. Sorry for the bother, and thanks. Richard On Oct 22, 2014, at 4:50 AM, Joachim Breitner wrote: > Hi Richard, > > Travis is complaining about your commit: > > Actual stderr output differs from expected: > --- ./th/T9692.stderr 2014-10-21 13:22:53.783212762 +0000 > +++ ./th/T9692.comp.stderr 2014-10-21 13:52:36.314083513 +0000 > @@ -1,2 +0,0 @@ > -data family T9692.F (a_0 :: k_1) (b_2 :: k_3) :: * > -data instance T9692.F GHC.Types.Int x_4 = T9692.FInt x_4 > *** unexpected failure for T9692(normal) > https://s3.amazonaws.com/archive.travis-ci.org/jobs/38604151/log.txt > > It maybe be due to a missing flushing of stdout; I added that line and > will let you know if that was not it. > > Greetings, > Joachim > > Greetings, > Joachim > > Am Dienstag, den 21.10.2014, 13:21 +0000 schrieb git at git.haskell.org: >> Repository : ssh://git at git.haskell.org/ghc >> >> On branch : master >> Link : http://ghc.haskell.org/trac/ghc/changeset/e319d6d2704edc2696f47409f85f4d4ce58a6cc4/ghc >> >>> --------------------------------------------------------------- >> >> commit e319d6d2704edc2696f47409f85f4d4ce58a6cc4 >> Author: Richard Eisenberg >> Date: Mon Oct 20 15:36:57 2014 -0400 >> >> Reify data family instances correctly. >> >> Summary: >> Fix #9692. >> >> The reifier didn't account for the possibility that data/newtype >> instances are sometimes eta-reduced. It now eta-expands as necessary. >> >> Test Plan: th/T9692 >> >> Reviewers: simonpj, austin >> >> Subscribers: thomie, carter, ezyang, simonmar >> >> Differential Revision: https://phabricator.haskell.org/D355 >> >> >>> --------------------------------------------------------------- >> >> e319d6d2704edc2696f47409f85f4d4ce58a6cc4 >> compiler/typecheck/TcSplice.lhs | 10 +++++++++- >> 1 file changed, 9 insertions(+), 1 deletion(-) >> >> diff --git a/compiler/typecheck/TcSplice.lhs b/compiler/typecheck/TcSplice.lhs >> index bb6af8c..e952a27 100644 >> --- a/compiler/typecheck/TcSplice.lhs >> +++ b/compiler/typecheck/TcSplice.lhs >> @@ -1338,8 +1338,16 @@ reifyFamilyInstance (FamInst { fi_flavor = flavor >> DataFamilyInst rep_tc -> >> do { let tvs = tyConTyVars rep_tc >> fam' = reifyName fam >> + >> + -- eta-expand lhs types, because sometimes data/newtype >> + -- instances are eta-reduced; See Trac #9692 >> + -- See Note [Eta reduction for data family axioms] >> + -- in TcInstDcls >> + (_rep_tc, rep_tc_args) = splitTyConApp rhs >> + etad_tyvars = dropList rep_tc_args tvs >> + eta_expanded_lhs = lhs `chkAppend` mkTyVarTys etad_tyvars >> ; cons <- mapM (reifyDataCon (mkTyVarTys tvs)) (tyConDataCons rep_tc) >> - ; th_tys <- reifyTypes lhs >> + ; th_tys <- reifyTypes (filter (not . isKind) eta_expanded_lhs) >> ; return (if isNewTyCon rep_tc >> then TH.NewtypeInstD [] fam' th_tys (head cons) [] >> else TH.DataInstD [] fam' th_tys cons []) } >> >> _______________________________________________ >> ghc-commits mailing list >> ghc-commits at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-commits >> > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Wed Oct 22 14:28:55 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 22 Oct 2014 10:28:55 -0400 Subject: Current description of Core? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <2DB129B1-A495-4CBF-9144-439767452A08@cis.upenn.edu> Hi Sophie, I agree with Simon in that I'm skeptical that arrows should *require* a change in Core, but I'm more willing to believe that a change in Core could permit better optimizations over arrow-intensive code. Though, I would say we should spend some time looking for ways to achieve this without changing Core. All that said, I'm happy to help you understand Core better, and can explain some of that core-spec document you've been referred to. It's terse, and intended to be a somewhat minimal explanation. Let me know if I can be of help. Richard On Oct 22, 2014, at 6:18 AM, Sophie Taylor wrote: > Yeah, definitely. Part of the reason why arrow notation is so frustrating at the moment is because it forces everything into lambda calculus; that is, it requires every category to be Cartesian Closed. When your arrow category isn't Cartesian Closed, it raises two issues. 1) When it's not Cartesian, you have to lie and say it supports products instead of tensors (that is, you are able to get back the arguments of a product unchanged, i.e. simple tuples), but this isn't the relevant part for Core. 2) When it's not closed, you have to lie and say it supports higher order functions (i.e., lambda abstractions applied to lambda abstractions) and implement arr. Now, you can lie at the syntax level and typecheck it as kappa calculus (i.e. first order functions only unless you are explicitly a Closed category) but then say it is lambda calculus at the core level; this would work because lambda calculus subsumes kappa calculus. This would allow the optimiser/RULES etc to work unchanged. However, you would lose a lot of the internal consistency checking usefulness of Core, and could miss out on kappa-calculus-specific optimisations (although come to think of it, call arity analysis might solve a lot of this issue). > > On 22 October 2014 19:59, Simon Peyton Jones wrote: > Interesting. There is a pretty high bar for changes to Core itself. Currently arrow notation desugars into Core with no changes. If you want to change Core, then arrow ?notation? is actually much more than syntactic sugar. Go for it ? but it would be a much more foundational change than previously, and hence would require more motivation. > > > > S > > > > From: Sophie Taylor [mailto:sophie at traumapony.org] > Sent: 22 October 2014 10:53 > To: Simon Peyton Jones > Cc: ghc-devs at haskell.org > Subject: Re: Current description of Core? > > > > Ah, thanks HEAPS. 
I've been banging my head against a wall for the last few days trying to see exactly what is going on :) I'm trying to find a way to minimise/eliminate the changes required to Core for the arrow notation rewrite - specifically, introducing kappa abstraction and application - semantically different to lambda abstraction/application but close enough that I can probably get away with either adding a simple flag to the Abstraction/Application constructors or doing it higher up in the HsExpr land, but the latter method leaves a sour taste in my mouth. > > > > On 22 October 2014 19:35, Simon Peyton Jones wrote: > > Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > We never implemented that particular version (too complicated!). > > > > This is the full current story (thanks to Richard for keeping it up to date), in the GHC source tree > > : > > https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf > > > > Simon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sophie Taylor > Sent: 22 October 2014 10:26 > To: ghc-devs at haskell.org > Subject: Current description of Core? > > > > Hi, > > > > Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? > > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From christiaan.baaij at gmail.com Wed Oct 22 16:54:25 2014 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Wed, 22 Oct 2014 18:54:25 +0200 Subject: Current description of Core? In-Reply-To: <2DB129B1-A495-4CBF-9144-439767452A08@cis.upenn.edu> References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> <2DB129B1-A495-4CBF-9144-439767452A08@cis.upenn.edu> Message-ID: <6E90B745-336D-43AF-9918-5BB78741C16D@gmail.com> Perhaps slightly off-topic. I have looked at the core-spec document, and had a question regarding the operational semantics part. Given the Core expressions: > case (let r = 1 : r in r) of (x:xs) -> x An interpreter following the semantics would loop on the above expression, as the S_LetRecReturn rule, the one that throws away let-expressions, never applies. > case (let r = 1 : r in r) of (x:xs) -> x => S_Case + S_LetRec + S_Var > case (let r = 1 : r in 1:r) of (x:xs) -> x => S_Case + S_LetRec + S_Var > case (let r = 1 : r in 1:1:r) of (x:xs) -> x etc. Adding a step reduction rule: [S_CaseLet] case (let us in e) of ps --> let us in (case e of ps) We would however get: > case (let r = 1 : r in r) of (x:xs) -> x => S_CaseLet > let r = 1 : r in (case r of (x:xs) -> x) => S_LetRec + S_Var > let r = 1 : r in (case 1:r of (x:xs) -> x) => S_LetRec + S_MatchData > let r = 1 : r in 1 => S_LetRecReturn > 1 Would it make sense to add such a step reduction rule? or am I incorrect in assuming that an interpreter following the current rules would loop? -- Christiaan On Oct 22, 2014, at 4:28 PM, Richard Eisenberg wrote: > Hi Sophie, > > I agree with Simon in that I'm skeptical that arrows should *require* a change in Core, but I'm more willing to believe that a change in Core could permit better optimizations over arrow-intensive code. 
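A small sketch in ordinary Haskell (rather than Core) may make the point about arr concrete. The class names ArrowLike and FirstOrderArrow are invented for illustration: the first simply mirrors Control.Arrow, whose arr method is precisely the demand that every Haskell function embeds into the category, while the second asks only for the first-order, tensor-style plumbing that a kappa-calculus-flavoured notation would need.

import Control.Category (Category)

-- Mirrors Control.Arrow (simplified): 'arr' is the "closed category" demand.
class Category k => ArrowLike k where
  arr    :: (b -> c) -> k b c          -- embed an arbitrary Haskell function
  firstA :: k b c -> k (b, d) (c, d)

-- A hypothetical weaker interface with no 'arr': first-order plumbing only,
-- which many categories that are not closed can still provide.
class Category k => FirstOrderArrow k where
  firstK :: k b c -> k (b, d) (c, d)
  copyK  :: k b (b, b)
  dropK  :: k b ()

This is only meant to illustrate why, as Sophie puts it, a category that is not closed has to "lie" to satisfy today's Arrow interface; whether the desugarer could target the weaker interface without a Core change is the open question in this thread.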
Though, I would say we should spend some time looking for ways to achieve this without changing Core. > > All that said, I'm happy to help you understand Core better, and can explain some of that core-spec document you've been referred to. It's terse, and intended to be a somewhat minimal explanation. > > Let me know if I can be of help. > Richard > > On Oct 22, 2014, at 6:18 AM, Sophie Taylor wrote: > >> Yeah, definitely. Part of the reason why arrow notation is so frustrating at the moment is because it forces everything into lambda calculus; that is, it requires every category to be Cartesian Closed. When your arrow category isn't Cartesian Closed, it raises two issues. 1) When it's not Cartesian, you have to lie and say it supports products instead of tensors (that is, you are able to get back the arguments of a product unchanged, i.e. simple tuples), but this isn't the relevant part for Core. 2) When it's not closed, you have to lie and say it supports higher order functions (i.e., lambda abstractions applied to lambda abstractions) and implement arr. Now, you can lie at the syntax level and typecheck it as kappa calculus (i.e. first order functions only unless you are explicitly a Closed category) but then say it is lambda calculus at the core level; this would work because lambda calculus subsumes kappa calculus. This would allow the optimiser/RULES etc to work unchanged. However, you would lose a lot of the internal consistency checking usefulness of Core, and could miss out on kappa-calculus-specific optimisations (although come to think of it, call arity analysis might solve a lot of this issue). >> >> On 22 October 2014 19:59, Simon Peyton Jones wrote: >> Interesting. There is a pretty high bar for changes to Core itself. Currently arrow notation desugars into Core with no changes. If you want to change Core, then arrow ?notation? is actually much more than syntactic sugar. Go for it ? but it would be a much more foundational change than previously, and hence would require more motivation. >> >> >> >> S >> >> >> >> From: Sophie Taylor [mailto:sophie at traumapony.org] >> Sent: 22 October 2014 10:53 >> To: Simon Peyton Jones >> Cc: ghc-devs at haskell.org >> Subject: Re: Current description of Core? >> >> >> >> Ah, thanks HEAPS. I've been banging my head against a wall for the last few days trying to see exactly what is going on :) I'm trying to find a way to minimise/eliminate the changes required to Core for the arrow notation rewrite - specifically, introducing kappa abstraction and application - semantically different to lambda abstraction/application but close enough that I can probably get away with either adding a simple flag to the Abstraction/Application constructors or doing it higher up in the HsExpr land, but the latter method leaves a sour taste in my mouth. >> >> >> >> On 22 October 2014 19:35, Simon Peyton Jones wrote: >> >> Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? >> >> >> >> We never implemented that particular version (too complicated!). >> >> >> >> This is the full current story (thanks to Richard for keeping it up to date), in the GHC source tree >> >> : >> >> https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf >> >> >> >> Simon >> >> >> >> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sophie Taylor >> Sent: 22 October 2014 10:26 >> To: ghc-devs at haskell.org >> Subject: Current description of Core? 
>> >> >> >> Hi, >> >> >> >> Is the current description of Core still System FC_2 (described in https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? >> >> >> >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Wed Oct 22 16:56:17 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 22 Oct 2014 18:56:17 +0200 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: Message-ID: <201410221856.17637.jan.stolarek@p.lodz.pl> It seems that my previous mail went unnoticed. Perhaps because I didn't provide enough justification for my solution. I'll try to make up for that now. First of all let's remind ourselves why orphan instances are a problem. Let's say package A defines some data types and package B defines some type classes. Now, package C might make data types from A instances of type classes from B. Someone who imports C will have these instances in scope. But since C defines neither the data types nor the type classes it might be surprising for the user of C that C makes A data types instances of B type classes. So we issue a warning that this is potentially dangerous. Of course person implementing C might suppress these warnings so the user of C can end up with unexpected instances without knowing anything. I feel that devising some sort of pragmas to define which orphan instances are allowed does not address the heart of the problem. And the heart of the problem is that we can't control importing and exporting of instances. Pragmas are just a workaround, not a real solution. It would be much better if we could just write this (warning, half-baked idea ahead): module BazModule ( instance Bar Foo ) where import FooModule (Foo (...)) -- import Foo data type from FooModule import BarModule (class Bar) -- import class Bar from BazModule instance Bar Foo ... And then someone importing BazModule can decide to import the instance: module User where import FooModule (Foo(..)) import BarModule (class Bar) import BazModule (instance Bar Foo) Of course requiring that classes and instances are exported and imported just like everything else would be a backawrds incompatible change and would therefore require effort similar to AMP proposal, ie. first release GHC version that warns about upcoming change and only enforce the change some time later. Janek Dnia wtorek, 21 pa?dziernika 2014, RodLogic napisa?: > One other benefit of multiple files to use a single module name is that it > would be easy to separate testing code from real code even when testing > internal/non-exported functions. > > On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > > Perhaps you misunderstood my proposal if you think it would prevent > > anyone else from defining instances of those classes? Part of the > > proposal was also adding support to the compiler to allow for a multiple > > files to use a single module name. That may be a larger technical > > challenge, but I think it's achievable. > > > > I think one key difference is that my proposal puts the onus on class > > implementors, and David's puts the onus on datatype implementors, so they > > certainly are complementary and could co-exist. 
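For contrast with the proposed syntax above, this is how the FooModule/BarModule/BazModule situation has to be written with today's GHC (the method name bar is invented for illustration). The empty export list changes nothing: instances are always exported, so every importer of BazModule, direct or indirect, picks the instance up, and the best GHC can do is warn with -fwarn-orphans.

module BazModule () where            -- empty export list, yet the instance still escapes

import FooModule (Foo)
import BarModule (Bar (..))

-- An orphan: BazModule defines neither Bar nor Foo, and there is no way to
-- keep the instance local to this module or to its deliberate importers.
instance Bar Foo where
  bar = error "illustration only"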
> > > > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer > > > > wrote: > >> As I said before, it still doesn't solve the problem I'm trying to > >> solve. Look at a package like criterion, for example. criterion depends > >> on aeson. Why? Because statistics depends on it. Why? Because statistics > >> wants a couple types it defines to be instances of classes defined in > >> aeson. John Lato's proposal would require the pragma to appear in the > >> relevant aeson module, and would prevent *anyone* else from defining > >> instances of those classes. With my proposal, statistics would be able > >> to declare > >> > >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > >> StatisticsType #-} > >> > >> Then it would split the Statistics.AesonInstances module off into a > >> statistics-aeson package and accomplish its objective without stepping > >> on anyone else. We'd get a lot more (mostly tiny) packages, but in > >> exchange the dependencies would get much thinner. > >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > >> > >> > >> wrote: > >>> Somebody claiming to be John Lato wrote: > >>>> Thinking about this, I came to a slightly different scheme. What if > >>>> we instead add a pragma: > >>>> > >>>> {-# OrphanModule ClassName ModuleName #-} > >>> > >>> I really like this. It solve all the real orphan instance cases I've > >>> had in my libraries. > >>> > >>> -- > >>> Stephen Paul Weber, @singpolyma > >>> See for how I prefer to be contacted > >>> edition right joseph > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs From christiaan.baaij at gmail.com Wed Oct 22 17:14:00 2014 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Wed, 22 Oct 2014 19:14:00 +0200 Subject: Current description of Core? In-Reply-To: <6E90B745-336D-43AF-9918-5BB78741C16D@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> <2DB129B1-A495-4CBF-9144-439767452A08@cis.upenn.edu> <6E90B745-336D-43AF-9918-5BB78741C16D@gmail.com> Message-ID: > Perhaps slightly off-topic. > I have looked at the core-spec document, and had a question regarding the operational semantics part. > > Given the Core expressions: >> case (let r = 1 : r in r) of (x:xs) -> x > > An interpreter following the semantics would loop on the above expression, as the S_LetRecReturn rule, the one that throws away let-expressions, never applies. > >> case (let r = 1 : r in r) of (x:xs) -> x > => S_Case + S_LetRec + S_Var >> case (let r = 1 : r in 1:r) of (x:xs) -> x > => S_Case + S_LetRec + S_Var >> case (let r = 1 : r in 1:1:r) of (x:xs) -> x > etc. Actually, I don't think it would loop, it would just get stuck: > case (let r = 1 : r in r) of (x:xs) -> x => S_Case + S_LetRec + S_Var > case (let r = 1 : r in 1:r) of (x:xs) -> x => no more rules step apply The body of the let-expression can not be reduced any further since it is a constructor application. I assume it should not get stuck, right? So we would need extra step-reduction rules. 
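The proposed rule is easy to state over a toy syntax, which may help when comparing it against the core-spec rules. The Expr type and stepCaseLet below are made up purely for illustration; they are not GHC's Core AST or the core-spec formalism itself.

-- A toy expression type, just big enough for the example in this thread.
data Expr
  = Var String
  | Lit Integer
  | Cons Expr Expr                     -- a saturated (:) application, already a value
  | Let [(String, Expr)] Expr          -- recursive let
  | Case Expr (String, String, Expr)   -- single alternative: (x:xs) -> e

-- [S_CaseLet]   case (let us in e) of ps   -->   let us in (case e of ps)
stepCaseLet :: Expr -> Maybe Expr
stepCaseLet (Case (Let binds body) alt) = Just (Let binds (Case body alt))
stepCaseLet _                           = Nothing

Without a rule of this shape the machine stops once the scrutinee has been rewritten to let r = 1:r in 1:r, because the body of the let is already a constructor application but the let itself cannot be dropped while r is still mentioned; with the rule, the trace above reaches 1 as shown.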
-- Christiaan From simonpj at microsoft.com Wed Oct 22 18:25:56 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 22 Oct 2014 18:25:56 +0000 Subject: D202: Injective type families In-Reply-To: <20141022131834.122981.94937@phabricator.haskell.org> References: , <20141022131834.122981.94937@phabricator.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F370538@DB3PRD3001MB020.064d.mgd.msft.net> Devs I now know how to use 'arc patch' to get a Phab ticket onto my disk. But if I edit the code, can I make a git commit and upload it back to D202? That would be akin to sharing a branch with (in this case Jan) the author. How do I do that? It is often more direct than making comments. Simon ________________________________________ From: noreply at phabricator.haskell.org [noreply at phabricator.haskell.org] Sent: 22 October 2014 14:18 To: Simon Peyton Jones Subject: [Differential] [Updated, 486 lines] D202: Injective type families jstolarek updated this revision to Diff 961. jstolarek added a comment. The implementation of parser and renamer is finished (subject to revisions of course). The parser recognizes injectivity declaration for closed and open type families and for associated types. Renamer checks whether the injectivity condition is well-formed. If it's not then it prints out informative (hopefully) error message. I've made the decission to require that type variables on the RHS of injectivity condition should be placed in exactly the same order as they were declared in type family head. Thanks to this the implementation of renaming is linear in the numer of type variables. Also, SynTyCon stores a list of Bools and so the functions are allowed to be injective only in some arguments. What remains to be done is: 1. type checking of injectivity (ie. whether the function declared as injective really is injective). 2. extending constraint solver with knowledge about injectivity REPOSITORY rGHC Glasgow Haskell Compiler CHANGES SINCE LAST UPDATE https://phabricator.haskell.org/D202?vs=960&id=961 BRANCH T6018-injective-type-families REVISION DETAIL https://phabricator.haskell.org/D202 AFFECTED FILES compiler/deSugar/DsMeta.hs compiler/hsSyn/Convert.lhs compiler/hsSyn/HsDecls.lhs compiler/iface/BuildTyCl.lhs compiler/iface/IfaceSyn.lhs compiler/iface/MkIface.lhs compiler/iface/TcIface.lhs compiler/parser/Parser.y.pp compiler/parser/RdrHsSyn.lhs compiler/prelude/TysPrim.lhs compiler/rename/RnSource.lhs compiler/rename/RnTypes.lhs compiler/typecheck/TcCanonical.lhs compiler/typecheck/TcInstDcls.lhs compiler/typecheck/TcInteract.lhs compiler/typecheck/TcTyClsDecls.lhs compiler/typecheck/TcTypeNats.hs compiler/types/CoAxiom.lhs compiler/types/FamInstEnv.lhs compiler/types/TyCon.lhs compiler/vectorise/Vectorise/Type/Env.hs REPLY HANDLER ACTIONS Reply to comment, or !reject, !abandon, !reclaim, !resign, !rethink, !unsubscribe. To: jstolarek, simonpj, goldfire, austin Cc: thomie, goldfire, simonmar, ezyang, carter, Mikolaj From david.feuer at gmail.com Wed Oct 22 19:33:08 2014 From: david.feuer at gmail.com (David Feuer) Date: Wed, 22 Oct 2014 15:33:08 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: <201410221856.17637.jan.stolarek@p.lodz.pl> References: <201410221856.17637.jan.stolarek@p.lodz.pl> Message-ID: You're not the first one to come up with this idea (and I don't know who is). Unfortunately, there are some complications. I'm pretty sure there are simpler examples than this, but this is what I could think of. 
Suppose we have module PotatoModule (Root (..), T (..)) where -- Does not export instance Root T class Root t where cook :: t -> String data T = T data Weird :: * -> * where Weird :: Root t => t -> Weird t instance Root T where cook T = "Boil, then eat straight out of the pot." potato :: Weird T potato = Weird T -- -------------- module ParsnipModule where import PotatoModule instance Root T where cook T = "Slice into wedges or rounds and put in the soup." parsnip :: Weird T parsnip = Weird T mash :: Weird t -> Weird t -> String mash (Weird x) (Weird y) = cook x ++ cook y mush :: String mush = mash potato parsnip -- -------------- OK, so what happens when we compile mash? Well, we have a bit of a problem! When we mash the potato and the parsnip, the mash function gets access to two different dictionaries for Root T, and two values of type T. There is absolutely nothing to indicate whether we should use the dictionary that's "in the air" because Root T has an instance in ParsnipModule, the dictionary that we pull out of parsnip (which is the same), or the dictionary we pull out of potato (which is different). I think inlining and specialization will make things even stranger and less predictable. In particular, the story of what goes on with inlining gets much harder to understand at the Haskell level: if mash and mush are put into a third module, and potato and parsnip are inlined there, that becomes a type error, because there's no visible Root T instance there! On Wed, Oct 22, 2014 at 12:56 PM, Jan Stolarek wrote: > It seems that my previous mail went unnoticed. Perhaps because I didn't > provide enough > justification for my solution. I'll try to make up for that now. > > First of all let's remind ourselves why orphan instances are a problem. > Let's say package A > defines some data types and package B defines some type classes. Now, > package C might make data > types from A instances of type classes from B. Someone who imports C will > have these instances in > scope. But since C defines neither the data types nor the type classes it > might be surprising for > the user of C that C makes A data types instances of B type classes. So we > issue a warning that > this is potentially dangerous. Of course person implementing C might > suppress these warnings so > the user of C can end up with unexpected instances without knowing > anything. > > I feel that devising some sort of pragmas to define which orphan instances > are allowed does not > address the heart of the problem. And the heart of the problem is that we > can't control importing > and exporting of instances. Pragmas are just a workaround, not a real > solution. It would be much > better if we could just write this (warning, half-baked idea ahead): > > module BazModule ( instance Bar Foo ) where > > import FooModule (Foo (...)) -- import Foo data type from FooModule > import BarModule (class Bar) -- import class Bar from BazModule > > instance Bar Foo ... > > And then someone importing BazModule can decide to import the instance: > > module User where > import FooModule (Foo(..)) > import BarModule (class Bar) > import BazModule (instance Bar Foo) > > Of course requiring that classes and instances are exported and imported > just like everything else > would be a backawrds incompatible change and would therefore require > effort similar to AMP > proposal, ie. first release GHC version that warns about upcoming change > and only enforce the > change some time later. 
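One way to see the "two different dictionaries" point in the Weird/mash example above is to write out, by hand, roughly what the constructor becomes after dictionary-passing desugaring. This is a hand-written sketch, not GHC's actual output, and RootDict, WeirdD and mashD are invented names.

-- The Root constraint on the constructor becomes an ordinary field:
data RootDict t = RootDict { cookMethod :: t -> String }

data WeirdD t = WeirdD (RootDict t) t

-- mash then uses whichever dictionaries were captured when the values were
-- *built*; with scoped instances, potato and parsnip could carry two different
-- Root T dictionaries, and mash itself has no way to choose between them.
mashD :: WeirdD t -> WeirdD t -> String
mashD (WeirdD dx x) (WeirdD dy y) = cookMethod dx x ++ cookMethod dy y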
> > Janek > > Dnia wtorek, 21 pa?dziernika 2014, RodLogic napisa?: > > One other benefit of multiple files to use a single module name is that > it > > would be easy to separate testing code from real code even when testing > > internal/non-exported functions. > > > > On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > > > Perhaps you misunderstood my proposal if you think it would prevent > > > anyone else from defining instances of those classes? Part of the > > > proposal was also adding support to the compiler to allow for a > multiple > > > files to use a single module name. That may be a larger technical > > > challenge, but I think it's achievable. > > > > > > I think one key difference is that my proposal puts the onus on class > > > implementors, and David's puts the onus on datatype implementors, so > they > > > certainly are complementary and could co-exist. > > > > > > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer > > > > > > wrote: > > >> As I said before, it still doesn't solve the problem I'm trying to > > >> solve. Look at a package like criterion, for example. criterion > depends > > >> on aeson. Why? Because statistics depends on it. Why? Because > statistics > > >> wants a couple types it defines to be instances of classes defined in > > >> aeson. John Lato's proposal would require the pragma to appear in the > > >> relevant aeson module, and would prevent *anyone* else from defining > > >> instances of those classes. With my proposal, statistics would be able > > >> to declare > > >> > > >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > > >> StatisticsType #-} > > >> > > >> Then it would split the Statistics.AesonInstances module off into a > > >> statistics-aeson package and accomplish its objective without stepping > > >> on anyone else. We'd get a lot more (mostly tiny) packages, but in > > >> exchange the dependencies would get much thinner. > > >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > > >> > > >> > > >> wrote: > > >>> Somebody claiming to be John Lato wrote: > > >>>> Thinking about this, I came to a slightly different scheme. What if > > >>>> we instead add a pragma: > > >>>> > > >>>> {-# OrphanModule ClassName ModuleName #-} > > >>> > > >>> I really like this. It solve all the real orphan instance cases I've > > >>> had in my libraries. > > >>> > > >>> -- > > >>> Stephen Paul Weber, @singpolyma > > >>> See for how I prefer to be contacted > > >>> edition right joseph > > > > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Wed Oct 22 19:59:38 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 22 Oct 2014 21:59:38 +0200 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: References: <201410221856.17637.jan.stolarek@p.lodz.pl> Message-ID: <201410222159.38892.jan.stolarek@p.lodz.pl> These are certainly good points and I'm far from claiming that I have solved all the potential problems that may arise (if I had I would probably be implementing this right now). But I still believe that pragmas are not a good solution, while control of imports and exports is. Unless the problems turn out to be impossible to overcome. Janek Dnia ?roda, 22 pa?dziernika 2014, David Feuer napisa?: > You're not the first one to come up with this idea (and I don't know who > is). 
Unfortunately, there are some complications. I'm pretty sure there are > simpler examples than this, but this is what I could think of. Suppose we > have > > module PotatoModule (Root (..), T (..)) where -- Does not export instance > Root T > class Root t where > cook :: t -> String > > data T = T > data Weird :: * -> * where > Weird :: Root t => t -> Weird t > > instance Root T where > cook T = "Boil, then eat straight out of the pot." > > potato :: Weird T > potato = Weird T > > -- -------------- > > module ParsnipModule where > import PotatoModule > > instance Root T where > cook T = "Slice into wedges or rounds and put in the soup." > > parsnip :: Weird T > parsnip = Weird T > > mash :: Weird t -> Weird t -> String > mash (Weird x) (Weird y) = cook x ++ cook y > > mush :: String > mush = mash potato parsnip > > -- -------------- > > OK, so what happens when we compile mash? Well, we have a bit of a > problem! When we mash the potato and the parsnip, the mash function gets > access to two different dictionaries for Root T, and two values of type T. > There is absolutely nothing to indicate whether we should use the > dictionary that's "in the air" because Root T has an instance in > ParsnipModule, the dictionary that we pull out of parsnip (which is the > same), or the dictionary we pull out of potato (which is different). I > think inlining and specialization will make things even stranger and less > predictable. In particular, the story of what goes on with inlining gets > much harder to understand at the Haskell level: if mash and mush are put > into a third module, and potato and parsnip are inlined there, that becomes > a type error, because there's no visible Root T instance there! > > On Wed, Oct 22, 2014 at 12:56 PM, Jan Stolarek > > wrote: > > It seems that my previous mail went unnoticed. Perhaps because I didn't > > provide enough > > justification for my solution. I'll try to make up for that now. > > > > First of all let's remind ourselves why orphan instances are a problem. > > Let's say package A > > defines some data types and package B defines some type classes. Now, > > package C might make data > > types from A instances of type classes from B. Someone who imports C will > > have these instances in > > scope. But since C defines neither the data types nor the type classes it > > might be surprising for > > the user of C that C makes A data types instances of B type classes. So > > we issue a warning that > > this is potentially dangerous. Of course person implementing C might > > suppress these warnings so > > the user of C can end up with unexpected instances without knowing > > anything. > > > > I feel that devising some sort of pragmas to define which orphan > > instances are allowed does not > > address the heart of the problem. And the heart of the problem is that we > > can't control importing > > and exporting of instances. Pragmas are just a workaround, not a real > > solution. It would be much > > better if we could just write this (warning, half-baked idea ahead): > > > > module BazModule ( instance Bar Foo ) where > > > > import FooModule (Foo (...)) -- import Foo data type from FooModule > > import BarModule (class Bar) -- import class Bar from BazModule > > > > instance Bar Foo ... 
> > > > And then someone importing BazModule can decide to import the instance: > > > > module User where > > import FooModule (Foo(..)) > > import BarModule (class Bar) > > import BazModule (instance Bar Foo) > > > > Of course requiring that classes and instances are exported and imported > > just like everything else > > would be a backawrds incompatible change and would therefore require > > effort similar to AMP > > proposal, ie. first release GHC version that warns about upcoming change > > and only enforce the > > change some time later. > > > > Janek > > > > Dnia wtorek, 21 pa?dziernika 2014, RodLogic napisa?: > > > One other benefit of multiple files to use a single module name is that > > > > it > > > > > would be easy to separate testing code from real code even when testing > > > internal/non-exported functions. > > > > > > On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > > > > Perhaps you misunderstood my proposal if you think it would prevent > > > > anyone else from defining instances of those classes? Part of the > > > > proposal was also adding support to the compiler to allow for a > > > > multiple > > > > > > files to use a single module name. That may be a larger technical > > > > challenge, but I think it's achievable. > > > > > > > > I think one key difference is that my proposal puts the onus on class > > > > implementors, and David's puts the onus on datatype implementors, so > > > > they > > > > > > certainly are complementary and could co-exist. > > > > > > > > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer > > > > > > > > wrote: > > > >> As I said before, it still doesn't solve the problem I'm trying to > > > >> solve. Look at a package like criterion, for example. criterion > > > > depends > > > > > >> on aeson. Why? Because statistics depends on it. Why? Because > > > > statistics > > > > > >> wants a couple types it defines to be instances of classes defined > > > >> in aeson. John Lato's proposal would require the pragma to appear in > > > >> the relevant aeson module, and would prevent *anyone* else from > > > >> defining instances of those classes. With my proposal, statistics > > > >> would be able to declare > > > >> > > > >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > > > >> StatisticsType #-} > > > >> > > > >> Then it would split the Statistics.AesonInstances module off into a > > > >> statistics-aeson package and accomplish its objective without > > > >> stepping on anyone else. We'd get a lot more (mostly tiny) packages, > > > >> but in exchange the dependencies would get much thinner. > > > >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > > > >> > > > >> > > > >> wrote: > > > >>> Somebody claiming to be John Lato wrote: > > > >>>> Thinking about this, I came to a slightly different scheme. What > > > >>>> if we instead add a pragma: > > > >>>> > > > >>>> {-# OrphanModule ClassName ModuleName #-} > > > >>> > > > >>> I really like this. It solve all the real orphan instance cases > > > >>> I've had in my libraries. 
> > > >>> > > > >>> -- > > > >>> Stephen Paul Weber, @singpolyma > > > >>> See for how I prefer to be contacted > > > >>> edition right joseph > > > > > > > > _______________________________________________ > > > > ghc-devs mailing list > > > > ghc-devs at haskell.org > > > > http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Wed Oct 22 20:01:00 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 22 Oct 2014 22:01:00 +0200 Subject: D202: Injective type families In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F370538@DB3PRD3001MB020.064d.mgd.msft.net> References: <20141022131834.122981.94937@phabricator.haskell.org> <618BE556AADD624C9C918AA5D5911BEF3F370538@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201410222201.00981.jan.stolarek@p.lodz.pl> > I now know how to use 'arc patch' to get a Phab ticket onto my disk. But > if I edit the code, can I make a git commit and upload it back to D202? > That would be akin to sharing a branch with (in this case Jan) the author. > How do I do that? It is often more direct than making comments. Not exactly on topic, but if this turns out to be impossible I can add you as a colaborator to my github fork of GHC. Janek From carlos.camarao at gmail.com Thu Oct 23 01:40:12 2014 From: carlos.camarao at gmail.com (Carlos Camarao) Date: Wed, 22 Oct 2014 23:40:12 -0200 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: <201410221856.17637.jan.stolarek@p.lodz.pl> References: <201410221856.17637.jan.stolarek@p.lodz.pl> Message-ID: +1. I have followed the road of trying to enable instances to be imported and exported, without success: a paper that discusses the subject and argues in favour of this support is available at: http://www.dcc.ufmg.br/~camarao/controlling-the-scope-of-instances-in-Haskell-sblp2011.pdf A previous version was rejected by the 2011 Haskell Symposium program committee. Referee reports are attached, since perhaps they can be useful to the discussion. Carlos ---------- Forwarded message ---------- From: Jan Stolarek Date: Wed, Oct 22, 2014 at 2:56 PM Subject: Re: Avoiding the hazards of orphan instances without dependency problems To: ghc-devs at haskell.org Cc: RodLogic , David Feuer It seems that my previous mail went unnoticed. Perhaps because I didn't provide enough justification for my solution. I'll try to make up for that now. First of all let's remind ourselves why orphan instances are a problem. Let's say package A defines some data types and package B defines some type classes. Now, package C might make data types from A instances of type classes from B. Someone who imports C will have these instances in scope. But since C defines neither the data types nor the type classes it might be surprising for the user of C that C makes A data types instances of B type classes. So we issue a warning that this is potentially dangerous. Of course person implementing C might suppress these warnings so the user of C can end up with unexpected instances without knowing anything. I feel that devising some sort of pragmas to define which orphan instances are allowed does not address the heart of the problem. And the heart of the problem is that we can't control importing and exporting of instances. Pragmas are just a workaround, not a real solution. 
It would be much better if we could just write this (warning, half-baked idea ahead): module BazModule ( instance Bar Foo ) where import FooModule (Foo (...)) -- import Foo data type from FooModule import BarModule (class Bar) -- import class Bar from BazModule instance Bar Foo ... And then someone importing BazModule can decide to import the instance: module User where import FooModule (Foo(..)) import BarModule (class Bar) import BazModule (instance Bar Foo) Of course requiring that classes and instances are exported and imported just like everything else would be a backawrds incompatible change and would therefore require effort similar to AMP proposal, ie. first release GHC version that warns about upcoming change and only enforce the change some time later. Janek Dnia wtorek, 21 pa?dziernika 2014, RodLogic napisa?: > One other benefit of multiple files to use a single module name is that it > would be easy to separate testing code from real code even when testing > internal/non-exported functions. > > On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > > Perhaps you misunderstood my proposal if you think it would prevent > > anyone else from defining instances of those classes? Part of the > > proposal was also adding support to the compiler to allow for a multiple > > files to use a single module name. That may be a larger technical > > challenge, but I think it's achievable. > > > > I think one key difference is that my proposal puts the onus on class > > implementors, and David's puts the onus on datatype implementors, so they > > certainly are complementary and could co-exist. > > > > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer > > > > wrote: > >> As I said before, it still doesn't solve the problem I'm trying to > >> solve. Look at a package like criterion, for example. criterion depends > >> on aeson. Why? Because statistics depends on it. Why? Because statistics > >> wants a couple types it defines to be instances of classes defined in > >> aeson. John Lato's proposal would require the pragma to appear in the > >> relevant aeson module, and would prevent *anyone* else from defining > >> instances of those classes. With my proposal, statistics would be able > >> to declare > >> > >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > >> StatisticsType #-} > >> > >> Then it would split the Statistics.AesonInstances module off into a > >> statistics-aeson package and accomplish its objective without stepping > >> on anyone else. We'd get a lot more (mostly tiny) packages, but in > >> exchange the dependencies would get much thinner. > >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > >> > >> > >> wrote: > >>> Somebody claiming to be John Lato wrote: > >>>> Thinking about this, I came to a slightly different scheme. What if > >>>> we instead add a pragma: > >>>> > >>>> {-# OrphanModule ClassName ModuleName #-} > >>> > >>> I really like this. It solve all the real orphan instance cases I've > >>> had in my libraries. 
> >>> > >>> -- > >>> Stephen Paul Weber, @singpolyma > >>> See for how I prefer to be contacted > >>> edition right joseph > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- ----------------------- REVIEW 1 --------------------- PAPER: 27 TITLE: Controlling the scope of instances in Haskell AUTHORS: Marco Silva and Carlos Camar?o OVERALL RATING: -1 (weak reject) REVIEWER'S CONFIDENCE: 2 (medium) Summary: This paper presents a design proposal that would allow Haskell programs to have more than one instance of a type class for a given type within a whole program. As usual, there can be only a single instance in scope at any particular point, but which instance is available can vary from module to module. The paper proposes to extend the module import/export syntax to allow instances to be selectively imported or exported; it also proposes that typeclass instances can be named (to permit shorter interface definitions). Review: This paper does a good job of explaining the limitations of the current restrictions on Haskell's type classes. However, as the paper itself notes, there are potential pitfalls with the proposed solution. In particular: - This approach doesn't provide good encapsulation: overloading resolution is affected by the scope of instances, so the meaning of invariants of encapsulated data structures can be affected. (e.g. the Ord constraints on a Set might be used inconsistently in different parts of the program). - Type annotations change the behavior of the program by picking particular instances at the point of the annotation. (But only between modules, not within a module.) This paper points out these problems, but dismisses them as not too important relative to the extra expressiveness obtained by allowing multiple instances. - There are other approaches to type classes that address the problems above in a more convincing way. For example, the paper "Modular Type Classes" by Dryer, Harper, and Chakravarty. This paper doesn't mention this related work at all. - This paper is rather narrowly aimed at Haskell implementors, though the proposed design would affect users of Haskell too. I didn't find the discussion of how the proposed language changes could be incrementally deployed particularly useful; that kind of planning doesn't seem like it belongs in a conference paper. Overall I give this paper a 'weak reject'. ----------------------- REVIEW 2 --------------------- PAPER: 27 TITLE: Controlling the scope of instances in Haskell AUTHORS: Marco Silva and Carlos Camar?o OVERALL RATING: -2 (reject) REVIEWER'S CONFIDENCE: 3 (high) Technical summary: ------------------ The paper argues for a modification of Haskell module import/export mechanism which allows the programmer to control the scope of instances and hence (i) give more flexibility to the programmer (such as the ability to redefine class instances locally) and (ii) avoid some spurious type checker errors (related to orphan instances). Concrete examples are presented as well as an intermediate solution which does not break backwards compatibility with Haskell 2010. There are no technical results in this paper, but suggestions for modifications to the Haskell standards. 
Opinion: -------- This paper is an interesting, natural, and convincingly presented story for controlling the scope of Haskell type class instances. I believe that a solution along the lines of this proposal is a plausible one. But, I am pretty sure that more things need to be investigated so I don't know if this paper is the end of the story. (For instance the problem outlined in the first paragraph in page 5 is one that is hard to tackle.) There are also quite a few issues related to importing/exporting instances that are not discussed: * How exactly should you name instances? Just with their head? Their head and context? What about through type synonyms in the head? * One reason that Haskell made everything visible has been to avoid incoherence and situations like the impredictability described in page 5, but the coherence problem is not mentioned at all in the paper. * In fact if one wants to talk about exporting/hiding also type family instances later on, one has to think not only about coherence but also about type soundness. In addition there is no heavy technical content in the paper. In all, I think this paper is an extremely useful starting point for a discussion but I can't quite see it in a published proceedings. Other points: ------------- - pg1, col1, first para: use spaces before placing a citation - As another instance of things to think about, maybe it would be a good idea to think of the use of wildcards as well to export all or none classes from a module - Section 4.2, first line: "to specificate" <- to specify ----------------------- REVIEW 3 --------------------- PAPER: 27 TITLE: Controlling the scope of instances in Haskell AUTHORS: Marco Silva and Carlos Camar?o OVERALL RATING: -2 (reject) REVIEWER'S CONFIDENCE: 3 (high) The initial version of Haskell required an instance declarations to be in the same module as either the class or the datatype declaration it related. Haskell 1.3 relaxed the so-called C-T rule and allowed instance declarations to be placed in arbitrary modules. The designers wrote: "The visibility of instance declarations presents a problem. Unlike classes or types, instances cannot be mentioned explicitly in export lists. Instead of changing the syntax of the export list, we have adopted a simple rule: all instances are exported regardless of the export list." The paper proposes to change the simple rule and to add explicit import and export declarations for instances. This change would allow the user to write several C-T instances and to use them selectively. I am afraid that this creates more problems than it solves. In the first place, I am not sure how pressing the problem of not being able to define different instances for the same type actually is. (The authors do provide some evidence that the current restriction is limiting, but that says little about the overall impact.) As the authors point out in Section 3.4, having different instances for the same type also creates problems, when these instances are used inconsistently across modules (module A builds a search tree using ordering X, module B uses the search tree using ordering Y). If one badly needs to instantiate a method with different bodies, then using a class is perhaps not a good idea at all, and one should simply use higher-order functions instead. So thumbs down for the proposal. 
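The search-tree objection above, together with the closing suggestion to "simply use higher-order functions instead", fits in a few lines. The names are invented for illustration; the explicit comparison function plays exactly the role a scoped Ord instance would play, and mixing two of them silently gives a wrong answer.

import Data.List (sortBy)

ascending, descending :: Int -> Int -> Ordering
ascending  = compare
descending = flip compare

-- A membership test that relies on the list being sorted by cmp, the way
-- Data.Set relies on there being a single Ord instance per type:
memberSortedBy :: (a -> a -> Ordering) -> a -> [a] -> Bool
memberSortedBy _   _ []     = False
memberSortedBy cmp x (y:ys) = case cmp x y of
  LT -> False            -- x would already have appeared in a cmp-sorted list
  EQ -> True
  GT -> memberSortedBy cmp x ys

demo :: (Bool, Bool)
demo =
  let xs = sortBy ascending [2, 3, 1]      -- [1,2,3]
  in ( memberSortedBy ascending  3 xs      -- True:  build and query agree
     , memberSortedBy descending 3 xs )    -- False: build and query disagree

With a class, the compiler guarantees that the build site and the query site use the same ordering; the reviews are asking what preserves that guarantee once instances become scoped.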
----------------------- REVIEW 4 --------------------- PAPER: 27 TITLE: Controlling the scope of instances in Haskell AUTHORS: Marco Silva and Carlos Camar?o OVERALL RATING: -3 (strong reject) REVIEWER'S CONFIDENCE: 3 (high) This is a discussion paper which proposes making the import and export of instances explicit. The paper does a reasonable of job of motivating why it may be important to do so. It makes the obvious suggestion of how to do so. And it discusses some of the issues that may arise. I didn't feel the paper made any new contribution to the understanding of the issues to do with making the scope of instances explicit and controllable. In particular, the issue of long exports was only addressed a comment (suggesting naming the instances -- which, if done only to handle export lists, feels like a hack), nor were issues of instance confusion really addressed -- these could be especially tough in the modern setting of expressive type-level programming with multi-parameter type classes and functional dependencies as promulgated by kiselyov et al. At a higher level, the paper did not address what could be described as the "philosophy" behind type classes, which were built with the intent of being used when there was only a single interpretation of an operator on a type. Specifics. P2. In fact, type classes were introduced expressly to remove the need of having sortBy-like functions where there was a uniform approach to the behavior on a certain type. P3. I don't buy your "backwards compatible" argument. If there is a language pragma, then there is no need to maintain strict backwards compatitbility. -------------- next part -------------- A non-text attachment was scrubbed... Name: instances-communities-report.tex Type: application/x-tex Size: 927 bytes Desc: not available URL: From david.feuer at gmail.com Thu Oct 23 03:06:41 2014 From: david.feuer at gmail.com (David Feuer) Date: Wed, 22 Oct 2014 23:06:41 -0400 Subject: Avoiding the hazards of orphan instances without dependency problems In-Reply-To: <201410222159.38892.jan.stolarek@p.lodz.pl> References: <201410221856.17637.jan.stolarek@p.lodz.pl> <201410222159.38892.jan.stolarek@p.lodz.pl> Message-ID: As far as I can tell, all the ideas for "really" solving the problem are either half-baked ideas, ideas requiring a complete re-conception of Haskell (offering both ups and downs), or long term lines of research that will probably get somewhere good some day, but not today. Yes, it would be great to get a beautiful modular instance system into Haskell, but unless I'm missing some development, that's not too likely to happen in a year or three. That's why I think it would be nice to create a system that will ease some of the pain without limiting further developments. On Wed, Oct 22, 2014 at 3:59 PM, Jan Stolarek wrote: > These are certainly good points and I'm far from claiming that I have > solved all the potential > problems that may arise (if I had I would probably be implementing this > right now). But I still > believe that pragmas are not a good solution, while control of imports and > exports is. Unless the > problems turn out to be impossible to overcome. > > Janek > > Dnia ?roda, 22 pa?dziernika 2014, David Feuer napisa?: > > You're not the first one to come up with this idea (and I don't know who > > is). Unfortunately, there are some complications. I'm pretty sure there > are > > simpler examples than this, but this is what I could think of. 
Suppose we > > have > > > > module PotatoModule (Root (..), T (..)) where -- Does not export > instance > > Root T > > class Root t where > > cook :: t -> String > > > > data T = T > > data Weird :: * -> * where > > Weird :: Root t => t -> Weird t > > > > instance Root T where > > cook T = "Boil, then eat straight out of the pot." > > > > potato :: Weird T > > potato = Weird T > > > > -- -------------- > > > > module ParsnipModule where > > import PotatoModule > > > > instance Root T where > > cook T = "Slice into wedges or rounds and put in the soup." > > > > parsnip :: Weird T > > parsnip = Weird T > > > > mash :: Weird t -> Weird t -> String > > mash (Weird x) (Weird y) = cook x ++ cook y > > > > mush :: String > > mush = mash potato parsnip > > > > -- -------------- > > > > OK, so what happens when we compile mash? Well, we have a bit of a > > problem! When we mash the potato and the parsnip, the mash function gets > > access to two different dictionaries for Root T, and two values of type > T. > > There is absolutely nothing to indicate whether we should use the > > dictionary that's "in the air" because Root T has an instance in > > ParsnipModule, the dictionary that we pull out of parsnip (which is the > > same), or the dictionary we pull out of potato (which is different). I > > think inlining and specialization will make things even stranger and less > > predictable. In particular, the story of what goes on with inlining gets > > much harder to understand at the Haskell level: if mash and mush are put > > into a third module, and potato and parsnip are inlined there, that > becomes > > a type error, because there's no visible Root T instance there! > > > > On Wed, Oct 22, 2014 at 12:56 PM, Jan Stolarek > > > > wrote: > > > It seems that my previous mail went unnoticed. Perhaps because I didn't > > > provide enough > > > justification for my solution. I'll try to make up for that now. > > > > > > First of all let's remind ourselves why orphan instances are a problem. > > > Let's say package A > > > defines some data types and package B defines some type classes. Now, > > > package C might make data > > > types from A instances of type classes from B. Someone who imports C > will > > > have these instances in > > > scope. But since C defines neither the data types nor the type classes > it > > > might be surprising for > > > the user of C that C makes A data types instances of B type classes. So > > > we issue a warning that > > > this is potentially dangerous. Of course person implementing C might > > > suppress these warnings so > > > the user of C can end up with unexpected instances without knowing > > > anything. > > > > > > I feel that devising some sort of pragmas to define which orphan > > > instances are allowed does not > > > address the heart of the problem. And the heart of the problem is that > we > > > can't control importing > > > and exporting of instances. Pragmas are just a workaround, not a real > > > solution. It would be much > > > better if we could just write this (warning, half-baked idea ahead): > > > > > > module BazModule ( instance Bar Foo ) where > > > > > > import FooModule (Foo (...)) -- import Foo data type from FooModule > > > import BarModule (class Bar) -- import class Bar from BazModule > > > > > > instance Bar Foo ... 
> > > > > > And then someone importing BazModule can decide to import the instance: > > > > > > module User where > > > import FooModule (Foo(..)) > > > import BarModule (class Bar) > > > import BazModule (instance Bar Foo) > > > > > > Of course requiring that classes and instances are exported and > imported > > > just like everything else > > > would be a backawrds incompatible change and would therefore require > > > effort similar to AMP > > > proposal, ie. first release GHC version that warns about upcoming > change > > > and only enforce the > > > change some time later. > > > > > > Janek > > > > > > Dnia wtorek, 21 pa?dziernika 2014, RodLogic napisa?: > > > > One other benefit of multiple files to use a single module name is > that > > > > > > it > > > > > > > would be easy to separate testing code from real code even when > testing > > > > internal/non-exported functions. > > > > > > > > On Tue, Oct 21, 2014 at 1:22 PM, John Lato wrote: > > > > > Perhaps you misunderstood my proposal if you think it would prevent > > > > > anyone else from defining instances of those classes? Part of the > > > > > proposal was also adding support to the compiler to allow for a > > > > > > multiple > > > > > > > > files to use a single module name. That may be a larger technical > > > > > challenge, but I think it's achievable. > > > > > > > > > > I think one key difference is that my proposal puts the onus on > class > > > > > implementors, and David's puts the onus on datatype implementors, > so > > > > > > they > > > > > > > > certainly are complementary and could co-exist. > > > > > > > > > > On Tue, Oct 21, 2014 at 9:11 AM, David Feuer < > david.feuer at gmail.com> > > > > > > > > > > wrote: > > > > >> As I said before, it still doesn't solve the problem I'm trying to > > > > >> solve. Look at a package like criterion, for example. criterion > > > > > > depends > > > > > > > >> on aeson. Why? Because statistics depends on it. Why? Because > > > > > > statistics > > > > > > > >> wants a couple types it defines to be instances of classes defined > > > > >> in aeson. John Lato's proposal would require the pragma to appear > in > > > > >> the relevant aeson module, and would prevent *anyone* else from > > > > >> defining instances of those classes. With my proposal, statistics > > > > >> would be able to declare > > > > >> > > > > >> {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass > > > > >> StatisticsType #-} > > > > >> > > > > >> Then it would split the Statistics.AesonInstances module off into > a > > > > >> statistics-aeson package and accomplish its objective without > > > > >> stepping on anyone else. We'd get a lot more (mostly tiny) > packages, > > > > >> but in exchange the dependencies would get much thinner. > > > > >> On Oct 21, 2014 11:52 AM, "Stephen Paul Weber" > > > > >> > > > > >> > > > > >> wrote: > > > > >>> Somebody claiming to be John Lato wrote: > > > > >>>> Thinking about this, I came to a slightly different scheme. > What > > > > >>>> if we instead add a pragma: > > > > >>>> > > > > >>>> {-# OrphanModule ClassName ModuleName #-} > > > > >>> > > > > >>> I really like this. It solve all the real orphan instance cases > > > > >>> I've had in my libraries. 
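For concreteness, the package-A / package-B / package-C situation Jan describes above can be written as three tiny modules; the module, class and type names (A, B, C, Foo, Bar) are made up for illustration. Today the only handle on the instance in C is the -fwarn-orphans warning, which is what the pragma and import/export ideas in this thread try to replace with real control:

    -- A.hs: the "package A" side, which owns the data type
    module A where

    data Foo = Foo

    -- B.hs: the "package B" side, which owns the class
    module B where

    class Bar a where
      bar :: a -> String

    -- C.hs: the "package C" side; the instance below is an orphan,
    -- because C defines neither Bar nor Foo, and -fwarn-orphans reports it
    {-# OPTIONS_GHC -fwarn-orphans #-}
    module C where

    import A (Foo (..))
    import B (Bar (..))

    instance Bar Foo where
      bar Foo = "foo"

Under an explicit instance export scheme like the one sketched above, C could choose whether or not to export `instance Bar Foo`, and importers of C could choose whether to bring it into scope.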
> > > > >>> > > > > >>> -- > > > > >>> Stephen Paul Weber, @singpolyma > > > > >>> See for how I prefer to be contacted > > > > >>> edition right joseph > > > > > > > > > > _______________________________________________ > > > > > ghc-devs mailing list > > > > > ghc-devs at haskell.org > > > > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Thu Oct 23 07:50:11 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 23 Oct 2014 09:50:11 +0200 Subject: Reading type families from interface files Message-ID: <201410230950.11064.jan.stolarek@p.lodz.pl> Devs, Is there a plumbing for returning declarations (not instances) of type families from an interface file? Janek From carter.schonwald at gmail.com Thu Oct 23 07:55:28 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 23 Oct 2014 03:55:28 -0400 Subject: Current description of Core? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F36BDFF@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F36BE66@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Another tactic, that might be more effective/simpler, is to work out what "primops" you want to use that aren't currently in core. It looks like all the "kappa calclus" operations are expressible in core, so perhaps what you really want are better "markers" that an expression in core might be "kappa calculus like" in some fashion? I'm happy to help you suss that out on IRC or the like! cheers -Carter On Wed, Oct 22, 2014 at 6:18 AM, Sophie Taylor wrote: > Yeah, definitely. Part of the reason why arrow notation is so frustrating > at the moment is because it forces everything into lambda calculus; that > is, it requires every category to be Cartesian Closed. When your arrow > category isn't Cartesian Closed, it raises two issues. 1) When it's not > Cartesian, you have to lie and say it supports products instead of tensors > (that is, you are able to get back the arguments of a product unchanged, > i.e. simple tuples), but this isn't the relevant part for Core. 2) When > it's not closed, you have to lie and say it supports higher order functions > (i.e., lambda abstractions applied to lambda abstractions) and implement > arr. Now, you can lie at the syntax level and typecheck it as kappa > calculus (i.e. first order functions only unless you are explicitly a > Closed category) but then say it is lambda calculus at the core level; this > would work because lambda calculus subsumes kappa calculus. This would > allow the optimiser/RULES etc to work unchanged. However, you would lose a > lot of the internal consistency checking usefulness of Core, and could miss > out on kappa-calculus-specific optimisations (although come to think of it, > call arity analysis might solve a lot of this issue). > > On 22 October 2014 19:59, Simon Peyton Jones > wrote: > >> Interesting. There is a pretty high bar for changes to Core itself. >> Currently arrow notation desugars into Core with no changes. If you want >> to change Core, then arrow ?notation? is actually much more than syntactic >> sugar. Go for it ? but it would be a much more foundational change than >> previously, and hence would require more motivation. >> >> >> >> S >> >> >> >> *From:* Sophie Taylor [mailto:sophie at traumapony.org] >> *Sent:* 22 October 2014 10:53 >> *To:* Simon Peyton Jones >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Current description of Core? 
>> >> >> >> Ah, thanks HEAPS. I've been banging my head against a wall for the last >> few days trying to see exactly what is going on :) I'm trying to find a way >> to minimise/eliminate the changes required to Core for the arrow notation >> rewrite - specifically, introducing kappa abstraction and application - >> semantically different to lambda abstraction/application but close enough >> that I can probably get away with either adding a simple flag to the >> Abstraction/Application constructors or doing it higher up in the HsExpr >> land, but the latter method leaves a sour taste in my mouth. >> >> >> >> On 22 October 2014 19:35, Simon Peyton Jones >> wrote: >> >> Is the current description of Core still System FC_2 (described in >> https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? >> >> >> >> We never implemented that particular version (too complicated!). >> >> >> >> This is the full current story (thanks to Richard for keeping it up to >> date), in the GHC source tree >> >> : >> >> https://ghc.haskell.org/trac/ghc/browser/ghc/docs/core-spec/core-spec.pdf >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Sophie >> Taylor >> *Sent:* 22 October 2014 10:26 >> *To:* ghc-devs at haskell.org >> *Subject:* Current description of Core? >> >> >> >> Hi, >> >> >> >> Is the current description of Core still System FC_2 (described in >> https://www.seas.upenn.edu/~sweirich/papers/popl163af-weirich.pdf)? >> >> >> >> >> >> >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 23 10:19:38 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 23 Oct 2014 10:19:38 +0000 Subject: Reading type families from interface files In-Reply-To: <201410230950.11064.jan.stolarek@p.lodz.pl> References: <201410230950.11064.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F372B10@DB3PRD3001MB020.064d.mgd.msft.net> I don't know what you mean. Can you be more explicit. The mi_fam_insts field of a ModIface sounds like what you want | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan | Stolarek | Sent: 23 October 2014 08:50 | To: ghc-devs at haskell.org | Subject: Reading type families from interface files | | Devs, | | Is there a plumbing for returning declarations (not instances) of type | families from an interface file? | | Janek | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Thu Oct 23 10:31:00 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 23 Oct 2014 12:31:00 +0200 Subject: Reading type families from interface files In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F372B10@DB3PRD3001MB020.064d.mgd.msft.net> References: <201410230950.11064.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF3F372B10@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201410231231.00595.jan.stolarek@p.lodz.pl> Say I have: module Foo where type family F a type instance F Int = Char mi_fam_insts stores type family instances, in that case "F Int = Char". What I would like to load is type family declaration: "F a". 
If that does not exist already I wonder whether information about type family declarations should be cached in ModDetails in the same way md_fam_insts caches information about instances? Rationale: I want to load information about tyfam declarations in FamInst.checkFamInstConsistency and pass these definitions of type families to FamInst.checkForConflicts. My plan is to verify whether an open type family is injective at the same time when looking for conflicts. Janek Dnia czwartek, 23 pa?dziernika 2014, napisa?e?: > I don't know what you mean. Can you be more explicit. The mi_fam_insts > field of a ModIface sounds like what you want > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan > | Stolarek > | Sent: 23 October 2014 08:50 > | To: ghc-devs at haskell.org > | Subject: Reading type families from interface files > | > | Devs, > | > | Is there a plumbing for returning declarations (not instances) of type > | families from an interface file? > | > | Janek > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Oct 23 10:51:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 23 Oct 2014 10:51:33 +0000 Subject: Reading type families from interface files In-Reply-To: <201410231231.00595.jan.stolarek@p.lodz.pl> References: <201410230950.11064.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF3F372B10@DB3PRD3001MB020.064d.mgd.msft.net> <201410231231.00595.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F372BC1@DB3PRD3001MB020.064d.mgd.msft.net> It's in mi_decls, along with data type declarations, class declarations, and the like. If you are talking about a ModDetails, then look in the range of md_types; the TyThings there include all the data types, classes, and type families declared in this module S | -----Original Message----- | From: Jan Stolarek [mailto:jan.stolarek at p.lodz.pl] | Sent: 23 October 2014 11:31 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Reading type families from interface files | | Say I have: | | module Foo where | type family F a | type instance F Int = Char | | mi_fam_insts stores type family instances, in that case "F Int = | Char". What I would like to load is type family declaration: "F a". If | that does not exist already I wonder whether information about type | family declarations should be cached in ModDetails in the same way | md_fam_insts caches information about instances? | | Rationale: I want to load information about tyfam declarations in | FamInst.checkFamInstConsistency and pass these definitions of type | families to FamInst.checkForConflicts. My plan is to verify whether an | open type family is injective at the same time when looking for | conflicts. | | Janek | | Dnia czwartek, 23 pa?dziernika 2014, napisa?e?: | > I don't know what you mean. Can you be more explicit. The | > mi_fam_insts field of a ModIface sounds like what you want | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | > | Jan Stolarek | > | Sent: 23 October 2014 08:50 | > | To: ghc-devs at haskell.org | > | Subject: Reading type families from interface files | > | | > | Devs, | > | | > | Is there a plumbing for returning declarations (not instances) of | > | type families from an interface file? 
| > | | > | Janek | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs | From jan.stolarek at p.lodz.pl Thu Oct 23 11:02:16 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 23 Oct 2014 13:02:16 +0200 Subject: Reading type families from interface files In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F372BC1@DB3PRD3001MB020.064d.mgd.msft.net> References: <201410230950.11064.jan.stolarek@p.lodz.pl> <201410231231.00595.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF3F372BC1@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201410231302.16502.jan.stolarek@p.lodz.pl> > If you are talking about a ModDetails, then look in the range of md_types; > the TyThings there include all the data types, classes, and type families > declared in this module Ah, typeEnvTyCons looks like the thing I was looking for. Thanks. Janek From svenpanne at gmail.com Thu Oct 23 12:40:16 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 23 Oct 2014 14:40:16 +0200 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: 2014-10-22 15:16 GMT+02:00 Sven Panne : > Does anybody have a clue what's going wrong at the sdist step here? > > https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707011#L104 > > This only happens with a GHC from head, a build with GHC 7.8.3 is fine: > > https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707010 > > Any help highly appreciated... I would really need some help here, even adding a few more diagnostic things to the Travis CI configuration didn't give me a clue what's going wrong: https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38813449#L110 I totally fail to understand why Cabal's sdist step works with every released compiler, but not with a GHC from head. I don't even know if this is a Cabal issue or a GHC issue. The relevant part from the Travis CI log is: ... cabal-1.18 sdist --verbose=3 creating dist/src creating dist/src/sdist.-3586/OpenGLRaw-1.5.0.0 Using internal setup method with build-type Simple and args: ["sdist","--verbose=3","--builddir=dist","--output-directory=dist/src/sdist.-3586/OpenGLRaw-1.5.0.0"] cabal-1.18: dist/setup-config: invalid argument The command "cabal-1.18 sdist --verbose=3" exited with 1. ... As can be seen from the log, dist/setup-config is there and can be accessed. Confused, S. From alan.zimm at gmail.com Thu Oct 23 13:01:40 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 23 Oct 2014 15:01:40 +0200 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: cabal has changed for HEAD, you need to install 1.21.1.0 On Thu, Oct 23, 2014 at 2:40 PM, Sven Panne wrote: > 2014-10-22 15:16 GMT+02:00 Sven Panne : > > Does anybody have a clue what's going wrong at the sdist step here? > > > > https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707011#L104 > > > > This only happens with a GHC from head, a build with GHC 7.8.3 is fine: > > > > https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38707010 > > > > Any help highly appreciated... > > I would really need some help here, even adding a few more diagnostic > things to the Travis CI configuration didn't give me a clue what's > going wrong: > > https://travis-ci.org/haskell-opengl/OpenGLRaw/jobs/38813449#L110 > > I totally fail to understand why Cabal's sdist step works with every > released compiler, but not with a GHC from head. I don't even know if > this is a Cabal issue or a GHC issue. 
The relevant part from the > Travis CI log is: > > ... > cabal-1.18 sdist --verbose=3 > creating dist/src > creating dist/src/sdist.-3586/OpenGLRaw-1.5.0.0 > Using internal setup method with build-type Simple and args: > > ["sdist","--verbose=3","--builddir=dist","--output-directory=dist/src/sdist.-3586/OpenGLRaw-1.5.0.0"] > cabal-1.18: dist/setup-config: invalid argument > The command "cabal-1.18 sdist --verbose=3" exited with 1. > ... > > As can be seen from the log, dist/setup-config is there and can be > accessed. > > Confused, > S. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Oct 23 13:47:05 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 23 Oct 2014 15:47:05 +0200 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: 2014-10-23 15:01 GMT+02:00 Alan & Kim Zimmerman : > cabal has changed for HEAD, you need to install 1.21.1.0 Hmmm, so we *force* people to update? o_O Perhaps I've missed an announcement, and I really have a hard time deducing this from the output on Travis CI. Is 1.21.1.0 backwards-compatible to previous GHCs? Or do I have to set up something more or less complicated depending on the GHC version (which would be unfortunate)? From stegeman at gmail.com Thu Oct 23 20:49:23 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Thu, 23 Oct 2014 22:49:23 +0200 Subject: Making GHCi awesomer? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Sorry I'm a bit late to the party, I'm a bit snowed under with some GHCJS refactoring work and the things I really need to do before the 7.10 merge window closes. I think that exposing GHC's front end functionality through the library would be a good idea. Unfortunately it adds the haskeline dependency, so adding it to the `ghc` package wouldn't be ideal. On the other hand, if we exposed the GHC/GHCi modules as a library in a `ghc-bin` package, then we'd avoid this, and also address ghc-mod's problem of the terminfo dependency. Unfortunately this part of GHC has never been written with use as a library in mind, so users would likely run into limitations at some point. For example, GHCJS has a complete copy - with some modifications - of the `ghc/Main.hs` module containing the command line parser and session setup code. Even if this module was exposed through the library, I wouldn't be able to use much of it, because of slight differences in command line options. My approach/plan so far has been to first copy code from GHC to the GHCJS tree to make it work, and then make changes in the next major GHC version that'd let me remove most of the lower level (and most likely to be version specific) code from my copy. It would probably take a few iterations to satisfy all the needs of ghc-mod/ghc-server/GHCJS and others, but the result, a library, would be more flexible than a JSON API for GHCi (which would still be useful by itself). If stability/segfaults are a major factor in choosing to communicate with the GHCi program, rather than using GHC as a library, then this really should be addressed directly. Has anyone done investigation of the situations that make ghc-mod/ghc-server, but not GHCi, crash? 
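As a point of reference for this thread, driving the compiler purely through the GHC API - with none of the extra work GHCi's :load performs - looks roughly like the minimal sketch below. It assumes the ghc-paths package for libdir and a hypothetical Test.hs to load, and it has to be built against the ghc package (e.g. ghc -package ghc Main.hs):

    module Main (main) where

    import Control.Monad.IO.Class (liftIO)
    import GHC
    import GHC.Paths (libdir)              -- libdir comes from the ghc-paths package

    -- Start a GHC session, load one target with GHC.load, report the result.
    main :: IO ()
    main = runGhc (Just libdir) $ do
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags       -- ignore the returned package list
      t <- guessTarget "Test.hs" Nothing   -- "Test.hs" is a stand-in module name
      setTargets [t]
      ok <- load LoadAllTargets
      liftIO $ putStrLn (if succeeded ok then "loaded" else "failed")

Everything that GHCi's :load does on top of this bare GHC.load call is currently private to the GHCi executable, which is part of what makes the library awkward for tools.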
On Mon, Oct 20, 2014 at 3:07 PM, Simon Peyton Jones wrote: > Christopher > > > > You are doing very cool things. Thank you. > > > > What I?m puzzled about is this: the GHC API **is** a programmatic > interface to GHC. Why not just use it? > > I can think of some reasons: > > ? It?s not very clear just what?s in the GHC API and what isn?t, > since you have access to all of GHC?s internals if you use ?package ghc. > And the API isn?t very well designed. (Answer: could you help make it > better?) > > ? You want some functionality that is currently in GHCi, rather > than in the ?ghc? package. (Answer: maybe we should move that > functionality into the ?ghc? package and make it part of the GHC API?) > > ? You have to be writing in Haskell to use the GHC API, whereas > you want a separate process you connect to via a socket. (Answer: > Excellent: write a server wrapper around the GHC API that offers a JSON > interface, or whatever the right vocabulary is. Sounds as if you have > more or less done this.) > > ? Moreover, the API changes pretty regularly, and you want > multi-compiler support. (No answer: I don?t know how to simultaneously > give access to new stuff without risking breaking old stuff.) > > My meta-point is this: GHC is wide open to people like you building a > consensus about how GHC?s basic functionality should be wrapped up and > exposed to clients. (Luite is another person who has led in this space, > via GHCJS.) So please do go ahead and lay out the way it **should** be > done, think about migration paths, build a consensus etc. Much better that > than do fragile screen-scraping on GHCi?s textual output. > > Thanks for what you are doing here. > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Christopher > Done > *Sent:* 18 October 2014 16:49 > *To:* ghc-devs at haskell.org > *Subject:* Making GHCi awesomer? > > > > Good evening, > > So I?ve been working on Haskell user-facing tooling in general for > some years. By that I mean the level of Emacs talking with Haskell > tools. > > I wrote the interactive-haskell-mode (most functionality exists > in this file > > ). > which launches a GHCi process in a pipe and tries very earnestly to > handle input/output with the process reasonably. > > For Emacs fanciers: Written in Elisp, there?s a nice command queue > that you put commands onto, they will all be run on a FIFO one-by-one > order, and eventually you?ll get a result back. Initially it was just > me using it, but with the help of Herbert Riedel it?s now a mode on > equal footing with the venerable inferior-haskell-mode all ye Emacs > users know and love. It?s part of haskell-mode and can be enabled by > enabling the interactive-haskell-mode minor mode. > > For years I?ve been using GHCi as a base and it?s been very reliable > for almost every project I?ve done (the only exceptions are things > like SDL and OpenGL, which are well known to be difficult to load in > GHCi, at least on Linux). I think we?ve built up > a good set of functionality > > purely based on asking GHCi things and getting it to do things. > > I literally use GHCi for everything. For type-checking, type info, I > even send ?:!cabal build? to it. Everything goes through it. I love my > GHCi. > > Now, I?m sort of at the end of the line of where I can take GHCi. Here > are the problems as I see them today: > > 1. There is no programmatic means of communicating with the > process. 
I can?t send a command and get a result cleanly, I have to > regex match on the prompt, and that is only so reliable. At the > moment we solve this by using \4 (aka ?END OF TRANSMISSION?). Also > messages (warnings, errors, etc.) need to be parsed which is also > icky, especially in the REPL when e.g. a defaulted Integer warning > will mix with the output. Don?t get me started on handling > multi-line prompts! Hehe. > > 2. GHCi, as a REPL, does not distinguish between stdout, stderr and > the result of your evaluation. This can be problematic for making a > smooth REPL UI, your results can often (with threading) be > interspersed in unkind ways. I cannot mitigate this with any kind > of GHCi trickery. > > 3. It forgets information when you reload. (I know this is > intentional.) > > 4. Not enough information is exposed to the user. (Is there ever? ;) > > 5. There is a time-to-market overhead of contributing to GHCi ? if I > want a cool feature, I can write it on a locally compiled version > of GHC. But for the work projects I have, I?m restricted to given > GHC versions, as are other people. They have to wait to get the > good features. > > 6. This is just a personal point ? I?ve like to talk to GHCi over a > > socket, so that I can run it on a remote machine. Those familiar > with Common Lisp will be reminded of SLIME and Swank. > > Examples for point 4 are: > > ? Type of sub-expressions. > > ? Go to definition of thing at point (includes local scope). > > ? Local-scope completion. > > ? A hoogle-like query (as seen in Idris recently). > > ? Documentation lookup. > > ? Suggest imports for symbols. > > ? Show core for the current module. > > ? Show CMM for the current module, ASM, etc. SLIME can do this. > > ? Expand the template-haskell at point. > > ? The :i command is amazingly useful, but programmatic access > would be > even better.? > > ? Case split anyone? > > ? Etc. > > ?I?ve integrated with it in Emacs so that I can C-c C-i any identifier > and it?ll popup a buffer with the :i result and then within that > buffer I can drill down further with C-c C-i again. It makes for > very natural exploration of a type. > > You?ve seen some of these features in GHC Mod, in hdevtools, in the FP > Haskell > Center, maybe some are in Yi, possibly also in Leksah (?). > > So in light of point (5), I thought: I?ve used the GHC API before, it > can do interactive evaluation, why not write a project like > ?ghc-server? which encodes all these above ideas as a ?drop-in? > replacement for GHCi? After all I could work on my own without anybody > getting my way over architecture decisions, etc. > > And that?s what I did. It?s > here . Surprisingly, it kind of > works. You run it in your directoy like you would do ?cabal repl? > and it sets up all the extensions and package dependencies and starts > accepting connections. It will compile across three major GHC > versions. Hurray! Rub our hands together and call it done, right? > Sadly not, the trouble is twofold: > > 1. The first problem with this is that every three projects will > segfault or panic when trying to load in a project that GHCi will > load in happily. The reasons are mysterious to me and I?ve already > lugged over the GHC API to get to this point, so that kind of thing > happening means that I have to fall back to my old GHCi-based > setup, and is disappointing. People have similar complaints of GHC > Mod & co. ?Getting it to work? is a deterrant. > > 2. 
While this would be super beneficial for me, and has been a good > learning experience for ?what works and what doesn?t?, we end up > with yet another alternative tool, that only a few people are > using. > > 3. There are just certain behaviours and fixes here and there that > > GHCi does that take time to reproduce. > > So let?s go back to the GHCi question: is there still a development > overhead for adding features to GHCi? Yes, new ideas need acceptance > and people have to wait (potentially a year) for a new feature that > they could be using right now. > > An alternative method is to do what Herbert did which is to release a > ?ghci-ng? which sports > new shiny features that people (with the right GHC version) will be > able to compile and use as a drop-in for GHCi. It?s the same codebase, > but with more stuff! An example is the ?:complete? command, this lets > IDE implementers do completion at least at the REPL level. Remember > the list of features earlier? Why are they not in GHCi? > > So, of course, this got me thinking that I could instead make > ghc-server be based off of GHCi?s actual codebase. I could rebase upon > the latest GHC release and maintain 2-3 GHC versions backwards. That?s > certainly doable, it would essentially give me ?GHCi++?. Good for me, > I just piggy back on the GHCi goodness and then use the GHC API for > additional things as I?m doing now. > > But is there a way I can get any of this into the official repo? For > example, could I hack on this (perhaps with Herbert) as ?ghci-ng?, > provide an alternative JSON communication layer (e.g. via some > ?use-json flag) and and socket listener (?listen-on ), a way > to distinguish stdout/stderr (possibly by forking a process, unsure at > this stage), and then any of the above features (point 4) listed. I > make sure that I?m rebasing upon HEAD, as if to say ghci-ng is a kind > of submodule, and then when release time comes we merge back in any > new stuff since the last release. Early adopters can use > ghci-ng, and everyone benefits from official GHC releases. > > The only snag there is that, personally speaking, it would be better > if ghci-ng would compile on older GHC versions. So if GHC 7.10 is the > latest release, it would still be nice (and it *seems* pretty > feasible) that GHC 7.8 users could still cabal install it without > issue. People shouldn?t have to wait if they don?t have to. > > Well, that?s everything. Thoughts? > > Ciao! > > ? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Oct 24 08:08:00 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 24 Oct 2014 08:08:00 +0000 Subject: Does D351 Unwiring-Integer patch now look like you envisioned? In-Reply-To: <87zjcl29js.fsf@gmail.com> References: <87zjcl29js.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F375187@DB3PRD3001MB020.064d.mgd.msft.net> I replied, but failed to press "Submit" (which is several screenfuls away). Sorry The Maybe DataCon idea looks right to me. FWIW I *hate* the way that TidyPgm is forced to predict what CorePrep will do. I've created https://ghc.haskell.org/trac/ghc/ticket/9718 to explain. 
Simon | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 24 October 2014 08:48 | To: Simon Peyton Jones | Subject: Does D351 Unwiring-Integer patch now look like you | envisioned? | | Hello Simon, | | I was wondering if | | https://phabricator.haskell.org/D351 | | looks the way you expected, and more specifically I'd like some | feedback on the `DataCon` vs. `Maybe Id` comment at | | https://phabricator.haskell.org/D351#8682 | | Thanks! | hvr From hvriedel at gmail.com Fri Oct 24 08:45:53 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 24 Oct 2014 10:45:53 +0200 Subject: Does D351 Unwiring-Integer patch now look like you envisioned? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F375187@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Fri, 24 Oct 2014 08:08:00 +0000") References: <87zjcl29js.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F375187@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87vbn926v2.fsf@gmail.com> Hello! On 2014-10-24 at 10:08:00 +0200, Simon Peyton Jones wrote: > I replied, but failed to press "Submit" (which is several screenfuls > away). Sorry Thanks! (if you press the 'z'-key outside of any input-form on code-revision pages, you get the screen horizontally tiled with the submit-form in the bottom half - this is sometimes generally useful to have IMHO) > The Maybe DataCon idea looks right to me. > > > FWIW I *hate* the way that TidyPgm is forced to predict what CorePrep > will do. I've created https://ghc.haskell.org/trac/ghc/ticket/9718 to > explain. Btw, this reminds me I have a low-priority plan to improve the `mkInteger` call interface, as the current one is terrible: it requires splitting a large integer literal into 31-bit words (even when machine wordsize is 64bit!), and then wrapping those into an ordinary [Int]-list. I'd rather like mkInteger to take an packed array of machine-size words, similar to how [Char] literals are handled via `unpackCString#`, which would allow for a more compact representation in object files as well as possibly direct conversion of integer literals in GHC's Integer backends... I've created a ticket to keep track of that idea: https://ghc.haskell.org/trac/ghc/ticket/9719 Cheers, hvr From benno.fuenfstueck+ghc at gmail.com Fri Oct 24 10:18:18 2014 From: benno.fuenfstueck+ghc at gmail.com (=?UTF-8?B?QmVubm8gRsO8bmZzdMO8Y2s=?=) Date: Fri, 24 Oct 2014 12:18:18 +0200 Subject: Making GHCi awesomer? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F366B60@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 2014-10-23 22:49 GMT+02:00 Luite Stegeman : > If stability/segfaults are a major factor in choosing to communicate with > the GHCi program, rather than using GHC as a library, then this really > should be addressed directly. Has anyone done investigation of the > situations that make ghc-mod/ghc-server, but not GHCi, crash? > The crashes tend to be related to linker problems (like duplicate symbols, often also in combination with TH) IME. I think the problem that projects like ghc-mod, hdevtools etc encounter is that the GHC API is quite lowlevel. For example, just look at the code for `:load`: https://github.com/ghc/ghc/blob/5bb73d79a83bca57dc431421ca1e022f34b8dec9/ghc/InteractiveUI.hs#L1343 . - First, I have no idea what abandonAll is doing here (and hdevtools isn't using it). I guess it is related to GHCi debugger, so that might not be a problem. - We then unload the active program and call doLoad. 
doLoad just resets some debugging related things (note that it calls discardActiveBreakPoints again, even though loadModule' already did that) and then calls GHC.load. After that, it calls `afterLoad`, which uses a foreign function calling into the RTS (!) to reset CAFs. As this function is only foreign import'ed in GHCi itself, I'm sure neither ghc-mod nor hdevtools call it. Needing to call a RTS function just to safely load a new module, replacing the old program, doesn't feel right to me. The problem here is that this `loadModule` function is only inside GHCi, and not exported through the GHC API. This was just one example, I'm sure there are more. There is just no highlevel GHC API function for most of GHCi's commands. Another reason for the crashes might be that ghc-mod and hdevtools tend to do many, many more reloads than GHCi, because they reload on every file save (or even after 0.5s idle time). Crashes that only appear very infrequent are thus much more likely to occur in ghc-mod. -- Benno -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas at miliauskas.lt Fri Oct 24 14:08:23 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Fri, 24 Oct 2014 16:08:23 +0200 Subject: Windows build broken in Linker.c In-Reply-To: <54405C16.8000901@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> Message-ID: This is still not fixed, right? I've been working on the mingw gcc upgrade and testing on 32 bit, and this failure got me running in circles until I discovered that baseline was broken too... On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow wrote: > I was working on a fix yesterday but ran out of time. Frankly this code > is a nightmare, every time I touch it it breaks on some platform - this > time I validated on 64 bit Windows but not 32. Aargh indeed. > On 16 Oct 2014 14:32, "Austin Seipp" wrote: > >> I see what's going on and am fixing it... The code broke 32-bit due to >> #ifdefery, but I think it can be removed, perhaps (which would be >> preferable). >> >> On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones >> wrote: >> > Simon >> > >> > Aargh! I think the Windows build is broken again. >> > >> > I think this is your commit 5300099ed >> > >> > Admittedly this is on a branch I?m working on, but it?s up to date with >> > HEAD. And I have no touched Linker.c! >> > >> > Any ideas? >> > >> > Simon >> > >> > >> > >> > rts\Linker.c: In function 'allocateImageAndTrampolines': >> > >> > >> > >> > rts\Linker.c:3708:19: >> > >> > error: 'arch_name' undeclared (first use in this function) >> > >> > >> > >> > rts\Linker.c:3708:19: >> > >> > note: each undeclared identifier is reported only once for each >> > function it appears in >> > >> > rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed >> > >> > make[1]: *** [rts/dist/build/Linker.o] Error 1 >> > >> > make[1]: *** Waiting for unfinished jobs.... >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marlowsd at gmail.com Fri Oct 24 15:33:53 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 24 Oct 2014 16:33:53 +0100 Subject: Windows build broken in Linker.c In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> Message-ID: <544A7161.1050705@gmail.com> I sent a patch to Austin to validate+commit earlier this week. On 24/10/2014 15:08, Gintautas Miliauskas wrote: > This is still not fixed, right? I've been working on the mingw gcc > upgrade and testing on 32 bit, and this failure got me running in > circles until I discovered that baseline was broken too... > > On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow > wrote: > > I was working on a fix yesterday but ran out of time. Frankly this > code is a nightmare, every time I touch it it breaks on some > platform - this time I validated on 64 bit Windows but not 32. Aargh > indeed. > > On 16 Oct 2014 14:32, "Austin Seipp" > wrote: > > I see what's going on and am fixing it... The code broke 32-bit > due to > #ifdefery, but I think it can be removed, perhaps (which would be > preferable). > > On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones > > wrote: > > Simon > > > > Aargh! I think the Windows build is broken again. > > > > I think this is your commit 5300099ed > > > > Admittedly this is on a branch I?m working on, but it?s up to > date with > > HEAD. And I have no touched Linker.c! > > > > Any ideas? > > > > Simon > > > > > > > > rts\Linker.c: In function 'allocateImageAndTrampolines': > > > > > > > > rts\Linker.c:3708:19: > > > > error: 'arch_name' undeclared (first use in this function) > > > > > > > > rts\Linker.c:3708:19: > > > > note: each undeclared identifier is reported only once > for each > > function it appears in > > > > rts/ghc.mk:236 : recipe for target > 'rts/dist/build/Linker.o' failed > > > > make[1]: *** [rts/dist/build/Linker.o] Error 1 > > > > make[1]: *** Waiting for unfinished jobs.... > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > -- > Gintautas Miliauskas From austin at well-typed.com Fri Oct 24 15:37:09 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 24 Oct 2014 10:37:09 -0500 Subject: Windows build broken in Linker.c In-Reply-To: <544A7161.1050705@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> <544A7161.1050705@gmail.com> Message-ID: Gah, this slipped my mind. On it. On Fri, Oct 24, 2014 at 10:33 AM, Simon Marlow wrote: > I sent a patch to Austin to validate+commit earlier this week. > > On 24/10/2014 15:08, Gintautas Miliauskas wrote: >> >> This is still not fixed, right? I've been working on the mingw gcc >> upgrade and testing on 32 bit, and this failure got me running in >> circles until I discovered that baseline was broken too... >> >> On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow > > wrote: >> >> I was working on a fix yesterday but ran out of time. Frankly this >> code is a nightmare, every time I touch it it breaks on some >> platform - this time I validated on 64 bit Windows but not 32. Aargh >> indeed. 
>> >> On 16 Oct 2014 14:32, "Austin Seipp" > > wrote: >> >> I see what's going on and am fixing it... The code broke 32-bit >> due to >> #ifdefery, but I think it can be removed, perhaps (which would be >> preferable). >> >> On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones >> > wrote: >> > Simon >> > >> > Aargh! I think the Windows build is broken again. >> > >> > I think this is your commit 5300099ed >> > >> > Admittedly this is on a branch I?m working on, but it?s up to >> date with >> > HEAD. And I have no touched Linker.c! >> > >> > Any ideas? >> > >> > Simon >> > >> > >> > >> > rts\Linker.c: In function 'allocateImageAndTrampolines': >> > >> > >> > >> > rts\Linker.c:3708:19: >> > >> > error: 'arch_name' undeclared (first use in this function) >> > >> > >> > >> > rts\Linker.c:3708:19: >> > >> > note: each undeclared identifier is reported only once >> for each >> > function it appears in >> > >> > rts/ghc.mk:236 : recipe for target >> 'rts/dist/build/Linker.o' failed >> > >> > make[1]: *** [rts/dist/build/Linker.o] Error 1 >> > >> > make[1]: *** Waiting for unfinished jobs.... >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> -- >> Gintautas Miliauskas > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Fri Oct 24 23:52:53 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 24 Oct 2014 18:52:53 -0500 Subject: Proposal: Improving the LLVM backend by packaging it Message-ID: Hi *, A few days ago a discussion on IRC occurred about the LLVM backend, its current status, and what we could do to make it a rock solid part of GHC for all our users. Needless to say, the situation right now isn't so hot: we have no commitment to version support, two major versions are busted, others are seriously buggy, and yet there are lots of things we could improve on. So I give you a proposal, from a few of us to you all, about improving it: https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend I won't repeat what's on the wiki page too much, but the TL;DR version is: we should start packaging a version of LLVM, and shipping it with e.g. binary distributions of GHC. It's just a lot better for everyone. I know we're normally fairly hesitant about things like this (shipping external dependencies), but I think it's the only sane thing to do here, and the situation is fairly unique in that it's not actually very complicated to implement or support, I think. We'd like to do this for 7.12. I've also wrangled some people to help. Those people know who they are (because they're CC'd), and I will now badger them into submission until it is fixed for 7.12. Please let me know what you think. PS. Joachim, I would be particularly interested in upstream needs for Debian, as I know of their standard packaging policy to not duplicate things. 
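The proposal above is about packaging and distribution, but for readers wondering how a bundled LLVM would actually be picked up: GHC already exposes the choice of LLVM tools through the -pgmlo/-pgmlc flags, or, when driving GHC as a library, through the pgm_lo/pgm_lc fields of DynFlags. The helper below is only a sketch of that existing knob; the bindir argument and the function itself are hypothetical and not part of the proposal:

    module BundledLlvm (useBundledLlvm) where

    import DynFlags (DynFlags (..))
    import System.FilePath ((</>))

    -- Point a DynFlags at a bundled LLVM toolchain instead of whatever
    -- 'opt' and 'llc' happen to be on the PATH, keeping any extra options.
    useBundledLlvm :: FilePath -> DynFlags -> DynFlags
    useBundledLlvm bindir dflags = dflags
      { pgm_lo = (bindir </> "opt", snd (pgm_lo dflags))
      , pgm_lc = (bindir </> "llc", snd (pgm_lc dflags))
      }

The same effect is available from the command line as ghc -fllvm -pgmlo <opt> -pgmlc <llc>.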
-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Sat Oct 25 00:00:50 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 24 Oct 2014 19:00:50 -0500 Subject: GHC Weekly News - 10/24/2014 Message-ID: Note: this post is available (with full hyperlinks) at https://ghc.haskell.org/trac/ghc/blog/weekly20141024 --------- Hi *, Welcome to the weekly GHC news. This one will be short this week, as the preceding one occurred only on Monday - but we'll be going with Fridays from now on, so next week we'll hopefully see a longer list. - GHC 7.8.4 tickets have been in waiting, and the RC will be soon after Austin finishes some final merges and tests on his branch. **We have not committed a time for the release after the RC**, yet we would like people to **please seriously test** and immediately report any major showstoppers - or alert us of ones we missed. - For the GHC 7.10 release, one of the major features we planned to try and merge was DWARF debugging information. This is actually a small component of larger ongoing work, including adding stack traces to Haskell executables. While, unfortunately, not all the work can be merged, we talked with Peter, and made a plan: our hope is to get Phab:D169 merged, which lays all the groundwork, followed by DWARF debugging information in the code generators. This will allow tools like `gdb` or other extensible debuggers to analyze C-- IR accurately for compiled executables. Peter has written up a wiki page, available at SourceNotes, describing the design. We hope to land all the core infrastructure in Phab:D169 soon, followed by DWARF information for the Native Code Generator, all for 7.10.1 - This past week, a discussion sort of organically started on the `#ghc` IRC channel about the future of the LLVM backend. GHC's backend is buggy, has no control over LLVM versions, and breaks frequently with new versions. This all significantly impacts users, and relegates the backend to a second class citizen. After some discussion, Austin wrote up a proposal for a improved backend, and wrangled several other people to help. The current plan is to try an execute this by GHC 7.12, with the goal of making the LLVM backend Tier 1 for major supported platforms. - You may notice https://ghc.haskell.org is now responds slightly faster in some cases - we've activated a caching layer (CloudFlare) on the site, so hopefully things should be a little more smooth. Closed tickets this week: #9684, #9692, #9038, #9679, #9537, #1473. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Sat Oct 25 00:03:08 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 24 Oct 2014 19:03:08 -0500 Subject: Automating GHC build for Windows In-Reply-To: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: Hi Roman, Yes, what Adam said is accurate: we're working on integrating Windows into our build pipeline on Phabricator. As well as that, Gabor's bots (at http://haskell.inf.elte.hu/builders) also build on Windows. You can see the build results on the ghc-builds at haskell.org mailing list. But we can always use more help! GHC on Windows has a small amount of hackers, so any amount of help will make a small difference. 
On Tue, Oct 21, 2014 at 1:38 PM, Adam Sandberg Eriksson wrote: > Hello, > > There is a lot of building infrastructure and I believe most of it is > listed on [1]. For example the nightly builders [2] where there are > indeed 2 windows buildmachines working (but they seem to fail to build > currently). > > I believe there is ongoing work on adding windows machines to > Harbormaster [3] for validating each commit as well as patches submitted > to Phabricator. > > Regards, > Adam Sandberg Eriksson > > [1]: https://ghc.haskell.org/trac/ghc/wiki/Infrastructure > [2]: http://haskell.inf.elte.hu/builders/ > [3]: https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Harbormaster > > On Tue, Oct 21, 2014, at 07:58 PM, Roman Kuznetsov wrote: >> Hello *, >> >> As I am still new in this, I will ask. >> >> Were there any attempt to automate GHC build process on Windows with some >> kind of CI engine, like Hudson, Jenkins, etc.? >> >> It seems to be rather helpful to publish build results if not after each >> commit, but at least once every day. >> >> >> WDYT? >> >> -- >> Sincerely yours, >> Roman Kuznetsov >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Sat Oct 25 00:24:23 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 24 Oct 2014 19:24:23 -0500 Subject: GHC on Windows (extended/broad discussion) Message-ID: Gintautas, Tamar, Roman, (CC'ing those on https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and Kyrill, who has helped us out much in the past) Thank you all for all your help with Windows recently. I apologize for not responding to some of your concerns sooner in the recent threads about tarballs, etc. First off, all your contributions are extremely welcome - GHC has had many talented Windows hackers in days long past, but these days this number has dwindled! Anyone who has an interest in GHC on Windows is in a place to make a big impact and help us. All the work Gintautas has done for example, will dramatically improve the ghc-tarballs scenario. On that note: Gintautas, I will get D339 merged in ASAP, as soon as I test it and make a download mirror for you. Haskell.org has an awesome new CDN setup, and once I implement https://downloads.haskell.org, it will be easy to update tarballs and serve them to mass amounts of users. However, beyond that, we still need more done. First off, if you can help, we can help you! We can make lots of Windows build bots for people on demand, so if you're in desperate need of disk space or your computers are a bit slow, we can help accommodate. Right now, we have nightly builds with Gabor's[1] build system, and soon, we're working on a Phabricator integration, which should be great - and hopefully reduce the amount of breakage substantially. I also notice there is a ticket list of Windows issues[2], and that's fantastic. After a quick glance, a lot of these tickets are old, duplicates, or could possibly be closed or fixed easily. A good first task for any new contributor would be to go through this list, and try to replicate some of them! And you can always ask me - I can certainly help you navigate GHC a bit to get somewhere. But there are still other things. 
The Win32 package for example, is dreadfully lacking in maintainership. While we merge patches, it would be great to see a Windows developer spearhead and clean it up - we could even make some improvements in GHC itself based on this. This would be an excellent opportunity to make a good impact in the broader ecosystem! Finally, we desperately need someone to consult with when we're up a creek. Are certain patches OK for Windows? What's the best way to fix certain bugs, or implement certain features? I feel like often we try to think about this, but it's a bit lonely when nobody else is there to help! I'm not sure how to fix this, other than encouraging things like doing active code reviews and helping grind out some patches. But at the very minimum, I'd just like to talk with you about things perhaps! So in summary - the work so far is grand, and we want to help you do more! And I'm sure everyone can help - there's always so much to do and so little time, we need to encourage it all we can. As Simon says: Upward and Onward! [1] http://haskell.inf.elte.hu/builders/ [2] https://ghc.haskell.org/trac/ghc/query?status=!closed&os=Windows&desc=1&order=id -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From david.feuer at gmail.com Sat Oct 25 00:34:41 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 24 Oct 2014 20:34:41 -0400 Subject: Improving specialization, redux Message-ID: I spoke with Simon today, and I think I have a bit of a better idea now of what's going on with specialization, and why it sometimes fails to specialize things as much as it could. Apparently, the replacement of (sel @ type dict) by sel.type is accomplished by the use of a rewrite rule generated by the specializer when it specializes an overloaded function using that (or something similar to that, anyway). The trouble is that as things are currently done, there's no way in general to get the right replacement to stick on the RHS of the rule when such forms arise in other contexts. The type checker constructs dictionaries in type checking, but afterwards we can't produce new ones. The most likely way to solve this is probably to (somehow) decouple dictionary creation and selection from type checking so as to expose those facilities to later phases. The concept would be something vaguely like this: instead of creating rewrite rules, the specializer would instead check whether the appropriate dictionary already existed, and create it if not. The simplifier would then check for a dictionary every time it encountered a class method whose instance is determined. No, I have no idea how much work such a change would involve. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sat Oct 25 09:14:53 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 25 Oct 2014 11:14:53 +0200 Subject: [commit: ghc] master: Implementation of hsig (module signatures), per #9252 (aa47995) In-Reply-To: <20141024234754.001D93A300@ghc.haskell.org> References: <20141024234754.001D93A300@ghc.haskell.org> Message-ID: <1414228493.1665.3.camel@joachim-breitner.de> Hi Edwardd, Am Freitag, den 24.10.2014, 23:47 +0000 schrieb git at git.haskell.org: > >--------------------------------------------------------------- > > commit aa4799534225e3fc6bbde0d5e5eeab8868cc3111 > Author: Edward Z. 
Yang > Date: Thu Aug 7 18:32:12 2014 +0100 > > Implementation of hsig (module signatures), per #9252 this breaks a few test cases: Actual stderr output differs from expected: --- ./ghci/scripts/T5979.stderr 2014-10-24 23:49:33.395524791 +0000 +++ ./ghci/scripts/T5979.run.stderr 2014-10-25 00:24:08.934279006 +0000 @@ -2,6 +2,6 @@ : Could not find module ???Control.Monad.Trans.State??? Perhaps you meant - Control.Monad.Trans.State (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) - Control.Monad.Trans.Class (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) - Control.Monad.Trans.Cont (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) + Control.Monad.Trans.State (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) + Control.Monad.Trans.Class (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) + Control.Monad.Trans.Cont (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) *** unexpected failure for T5979(ghci) Actual stdout output differs from expected: --- ./safeHaskell/check/pkg01/safePkg01.stdout 2014-10-24 23:49:33.705509654 +0000 +++ ./safeHaskell/check/pkg01/safePkg01.run.stdout 2014-10-25 00:19:17.451490530 +0000 @@ -29,17 +29,17 @@ require own pkg trusted: True M_SafePkg6 -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz trusted: trustworthy require own pkg trusted: False M_SafePkg7 -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz trusted: safe require own pkg trusted: False M_SafePkg8 -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz trusted: trustworthy require own pkg trusted: False *** unexpected failure for safePkg01(normal) https://s3.amazonaws.com/archive.travis-ci.org/jobs/38981598/log.txt It seems you need to adjust the testsuite to remove these hashes before comparing the output. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From pali.gabor at gmail.com Sat Oct 25 11:26:26 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Sat, 25 Oct 2014 13:26:26 +0200 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: 2014-10-25 2:03 GMT+02:00 Austin Seipp : > As well as that, Gabor's bots > (at http://haskell.inf.elte.hu/builders) also build on Windows. Unfortunately, my builders are still suffering from the sudden breakage induced by a Cabal library update on September 24 [1]. That is, they are unable to complete the build due to some ghc-cabal failure right after bootstrapping [2]. As a result, I basically suspended the builders until I could do something with this. Yes, I have just checked it, the situation is still the same. Curiously, I do not know about others who are experiencing the same problem, however, the revisions before the referenced commit (mostly) build just fine. As of yet, I have not had the time and chance to even attempt to fix this. 
I have already told Herbert which version of the build environment I use (Windows 7 SP1, MinGW from July 4 (32 bit) and February 16 (64 bit), with GHC 7.6.3) -- I do not even know whether that is considered old or problematic. I am also not sure if the developers involved in the aforementioned change are aware of this issue and what their opinion on it is. I admit that I have not submitted a ticket on this; perhaps I shall. In the meantime I shall probably also experiment with moving to a newer version of the toolchain per the recently revamped Windows build instructions. [1] http://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae [2] http://haskell.inf.elte.hu/builders/windows-x86-head/56/10.html From mail at joachim-breitner.de Sat Oct 25 16:24:34 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 25 Oct 2014 18:24:34 +0200 Subject: Call Arity, oneShot or both Message-ID: <1414254274.1665.16.camel@joachim-breitner.de> Hi, some months ago I tried to make foldl a good consumer in the common case. The starting point is always to write

    foldl k a xs = foldr (\v f a -> f (v `k` a)) id xs a

and then somehow make GHC produce good code with this. I came up with two solutions: a more sophisticated compiler analysis (Call Arity), or an explicit annotation in the form of

    foldlB k a xs = foldr (\v f -> oneShot (\a -> f (v `k` a))) id xs a

where oneShot :: (a -> b) -> (a -> b) is a built-in function, semantically the identity, but telling the compiler that it can assume that the (\a -> ...) is called at most once. Back then, we decided to use Call Arity, on the grounds that it might improve other code as well, despite not having a lot of evidence that this may happen. Then recently David Feuer built on my work by making more functions fuse, including functions like scanl that are not built on foldl but benefit from the same analysis. This supports the usefulness of Call Arity. But he also found cases where Call Arity is not powerful enough and GHC would produce bad code. So I wanted to properly compare Call Arity with the oneShot approach. Based on today's master (0855b249), I disabled Call Arity, changed the definitions of foldl, foldl', scanl and scanl' to use oneShot, and ran nofib. The results are mixed. With the current code, the oneShot machinery does not always work as expected:

            Program     Size    Allocs   Runtime   Elapsed  TotalMem
                Min    -0.1%     -1.5%     -2.8%     -2.8%     -5.8%
                Max    +0.4%     +4.7%     +5.8%     +5.6%     +5.4%
     Geometric Mean    -0.0%     +0.1%     +0.3%     +0.3%     +0.1%

The biggest loser is calendar, which uses scanl. I am not fully sure what went wrong here: either the one-shot annotation on the lambda's variable got lost somewhere in the pipeline, or, despite it being there, the normal arity analysis did not use it. But there is also a winner, fft2, with -1.5% allocations. Here Call Arity was not good enough, but oneShot did the job. There is also the option of combining both. Then we do not get the regression, but still the improvement for fft2:

                Min    -0.1%     -1.5%     -3.9%     -3.8%     -5.8%
                Max    +0.2%     +0.1%     +6.4%     +6.3%    +13.1%
     Geometric Mean    -0.0%     -0.0%     +0.0%     +0.0%     +0.1%

The oneShot code is on the branch wip/oneShot. The changes are clearly not ready to be merged. In particular, there is the question of how best to keep the oneShot annotation in the unfoldings: the isOneShotLambda flag is currently not stored in the interface. I work around this by making sure that the oneShot function is never inlined in unfoldings, but maybe it would be better to serialize the isOneShotLambda flag in interfaces, which might have other good effects?
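For reference, here is a self-contained sketch of what the annotated definition looks like at the source level. This is only an illustration: the oneShot below is a plain identity stand-in so that the snippet compiles on its own, while the real oneShot is a wired-in Id whose argument lambda gets its binder marked as one-shot; the semantics are identical, only the analysis result changes.

    {-# LANGUAGE ScopedTypeVariables #-}
    module OneShotSketch where

    -- Stand-in for the built-in: semantically the identity. The wired-in
    -- version additionally sets the one-shot flag on the binder of the
    -- argument lambda.
    oneShot :: (a -> b) -> (a -> b)
    oneShot f = f
    {-# INLINE oneShot #-}

    -- foldl as a good consumer, with the accumulator lambda marked one-shot,
    -- mirroring the definition discussed above.
    foldlB :: forall a b. (b -> a -> b) -> b -> [a] -> b
    foldlB k z0 xs =
        foldr (\(v :: a) (fn :: b -> b) -> oneShot (\(z :: b) -> fn (k z v)))
              (id :: b -> b) xs z0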
If we want as much performance as possible, we should simply include both approaches. But there might be other things to consider... so not sure what the best thing to do is. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From johan.tibell at gmail.com Sat Oct 25 16:57:00 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 25 Oct 2014 09:57:00 -0700 Subject: Call Arity, oneShot or both In-Reply-To: <1414254274.1665.16.camel@joachim-breitner.de> References: <1414254274.1665.16.camel@joachim-breitner.de> Message-ID: Thanks for the interesting analysis, Joachim. I wouldn't trust nofib's runtime numbers (though the allocation ones should be good): the tests don't run long enough, and we don't handle typical noise issues (e.g. clock resolution, benchmark length, etc.) well enough. I'd love to see some numbers using Criterion instead. It might also be worth checking for which programs the Core changed. On Sat, Oct 25, 2014 at 9:24 AM, Joachim Breitner wrote:
> Hi,
>
> some months ago I tried to make foldl a good consumer in the common
> case. The starting point is always to write
>
>     foldl k a xs = foldr (\v f a -> f (v `k` a)) id xs a
>
> and then somehow make GHC produce good code with this. I came up with
> two solutions: a more sophisticated compiler analysis (Call Arity), or
> an explicit annotation in the form of
>
>     foldlB k a xs = foldr (\v f -> oneShot (\a -> f (v `k` a))) id xs a
>
> where oneShot :: (a -> b) -> (a -> b) is a built-in function,
> semantically the identity, but telling the compiler that it can assume
> that the (\a -> ...) is called at most once.
>
> Back then, we decided to use Call Arity, on the grounds that it might
> improve other code as well, despite not having a lot of evidence that
> this may happen.
>
> Then recently David Feuer built on my work by making more functions
> fuse, including functions like scanl that are not built on foldl, but
> benefit from the same analysis. This supports the usefulness of Call
> Arity.
>
> But he also found cases where Call Arity is not powerful enough and GHC
> would produce bad code. So I wanted to properly compare Call Arity with
> the oneShot approach.
>
> Based on today's master (0855b249), I disabled Call Arity and changed the
> definitions of foldl, foldl', scanl and scanl' to use oneShot, and ran
> nofib.
>
> The results are mixed. With the current code, the oneShot machinery does
> not always work as expected:
>
>             Program     Size    Allocs   Runtime   Elapsed  TotalMem
>                 Min    -0.1%     -1.5%     -2.8%     -2.8%     -5.8%
>                 Max    +0.4%     +4.7%     +5.8%     +5.6%     +5.4%
>      Geometric Mean    -0.0%     +0.1%     +0.3%     +0.3%     +0.1%
>
> The biggest loser is calendar, which uses scanl. I am not fully sure
> what went wrong here: either the one-shot annotation on the lambda's
> variable got lost somewhere in the pipeline, or, despite it being there,
> the normal arity analysis did not use it.
>
> But there is also a winner, fft2, with -1.5% allocations.
Here Call > Arity was not good enough, but oneShot did the jobs. > > There is also the option of combining both. Then we do not get the > regression, but still the improvement for fft2: > > Min -0.1% -1.5% -3.9% -3.8% -5.8% > Max +0.2% +0.1% +6.4% +6.3% +13.1% > Geometric Mean -0.0% -0.0% +0.0% +0.0% +0.1% > > > > The oneShot code is on the branch wip/oneShot. The changes are clearly > not ready to be merged. In particular, there is the question of how to > best keep the oneShot annotation in the unfoldings: The isOneShotLambda > flag is currently not stored in the interface. I work around this by > making sure that the oneShot function is never inlined in unfoldings, > but maybe it would be better to serialize the isOneShotLambda flag in > interfaces, which might have other good effects? > > > > If we want as much performance as possible, we should simply include > both approaches. But there might be other things to consider... so not > sure what the best thing to do is. > > Greetings, > Joachim > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Sun Oct 26 05:00:09 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 26 Oct 2014 05:00:09 +0000 Subject: Proposal: Improving the LLVM backend by packaging it In-Reply-To: References: Message-ID: <544C7FD9.5020607@fuuzetsu.co.uk> On 10/25/2014 12:52 AM, Austin Seipp wrote: > Hi *, > > A few days ago a discussion on IRC occurred about the LLVM backend, > its current status, and what we could do to make it a rock solid part > of GHC for all our users. > > Needless to say, the situation right now isn't so hot: we have no > commitment to version support, two major versions are busted, others > are seriously buggy, and yet there are lots of things we could improve > on. > > So I give you a proposal, from a few of us to you all, about improving it: > > https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend > > I won't repeat what's on the wiki page too much, but the TL;DR version > is: we should start packaging a version of LLVM, and shipping it with > e.g. binary distributions of GHC. It's just a lot better for everyone. > > I know we're normally fairly hesitant about things like this (shipping > external dependencies), but I think it's the only sane thing to do > here, and the situation is fairly unique in that it's not actually > very complicated to implement or support, I think. > > We'd like to do this for 7.12. I've also wrangled some people to help. > Those people know who they are (because they're CC'd), and I will now > badger them into submission until it is fixed for 7.12. > > Please let me know what you think. > > PS. Joachim, I would be particularly interested in upstream needs for > Debian, as I know of their standard packaging policy to not duplicate > things. > I don't think any distro wants to duplicate things. Even if GHC does end up shipping with LLVM, it should be easy for distro packagers to ignore that and use their own. -- Mateusz K. 
From hvr at gnu.org Sun Oct 26 10:45:32 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Sun, 26 Oct 2014 11:45:32 +0100 Subject: old-time.git/tests/time004.hs is DST-sensitive :-/ In-Reply-To: <87txg3ghdu.fsf@gnu.org> (Herbert Valerio Riedel's message of "Sun, 27 Oct 2013 11:03:41 +0100") References: <87txg3ghdu.fsf@gnu.org> Message-ID: <87a94jnm7n.fsf@gnu.org> Hello *, It's that time of the year again when time004 fails... and there's also an associated ticket: https://ghc.haskell.org/trac/ghc/ticket/4440 On 2013-10-27 at 11:03:41 +0100, Herbert Valerio Riedel wrote: > if anyone wonders, why TEST=time004 suddenly fails: it's sensitive to > DST (and depends on your system's TZ-config): > > http://git.haskell.org/packages/old-time.git/blob/HEAD:/tests/time004.hs > > For me, the comparison this unit-test checks now suddenly fails, because > the DST-switch occurs on different days in 2012 and 2013 in my TZ, i.e. > > length "Sun Oct 27 10:56:42 CET 2013" /= length "Sat Oct 27 11:56:42 CEST 2012" > > Does anyone happen to know what the rationale behind this 'time004' unit > test was? > > Cheers, > hvr From david.feuer at gmail.com Sun Oct 26 14:56:01 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 26 Oct 2014 10:56:01 -0400 Subject: Call Arity, oneShot, or both Message-ID: > There is also the option of combining both. Then we do not get the > regression, but still the improvement for fft2: I *definitely* think we should leave Call Arity in place by default unless and until something strictly better comes along. One very nice feature is that it works for a lot of user-written code of various kinds without the user having to do *anything* special. oneShot seems more limited in applicability, for use primarily in library code. So I would personally think that it should be added, with an option, of course, to turn it off. I would also go for documenting it as experimental and provisional. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun Oct 26 16:06:44 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 26 Oct 2014 17:06:44 +0100 Subject: Call Arity, oneShot, or both In-Reply-To: References: Message-ID: <1414339604.1378.9.camel@joachim-breitner.de> Hi, Am Sonntag, den 26.10.2014, 10:56 -0400 schrieb David Feuer: > > There is also the option of combining both. Then we do not get the > > regression, but still the improvement for fft2: > > I *definitely* think we should leave Call Arity in place by default > unless and until something strictly better comes along. One very nice > feature is that it works for a lot of user-written code of various > kinds without the user having to do *anything* special. That would be great! But do we have evidence of this user-written code that benefits? So far I have only seen relevant improvement due to list-fusion a left-foldish function. > oneShot seems more limited in applicability, for use primarily in > library code. So I would personally think that it should be added, > with an option, of course, to turn it off. I would also go for > documenting it as experimental and provisional. well, either we put in oneShot and use it for foldl etc. (so it wouldn?t be optional) or we leave it out completely. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gintautas at miliauskas.lt Sun Oct 26 17:17:32 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Sun, 26 Oct 2014 18:17:32 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: Hi P?li, that error is reminiscent of issues when trying to move a file that is open (and implicitly locked) by another process. I have seen some sporadic issues like that in the build unfortunately (never tracked them down to the root cause though), Maybe it's something silly like one process not having enough time to clean up and close the file before the next command tries to move it? I'd add a sleep before the mv and see if that helps. Does the build proceed if you try "make" without cleaning the repository, or does it hang again at the same spot? On Sat, Oct 25, 2014 at 1:26 PM, P?li G?bor J?nos wrote: > 2014-10-25 2:03 GMT+02:00 Austin Seipp : > > As well as that, Gabor's bots > > (at http://haskell.inf.elte.hu/builders) also build on Windows. > > Unfortunately, my builders are still suffering from the sudden > breakage induced by a Cabal library update on September 24 [1]. That > is, they are unable to complete the build due to some ghc-cabal > failure right after bootstrapping [2]. As a result, I basically > suspended the builders until I could do something with this. Yes, I > have just checked it, the situation is still the same. > > Curiously, I do not know about others who are experiencing the same > problem, however, the revisions before the referenced commit (mostly) > build just fine. > > As of yet, I have not had the time and chance to even attempt to fix > this. I have already answered Herbert the version of the build > environment (Windows 7 SP1, MinGW from July 4 (32 bit) and February 16 > (64 bit), with GHC 7.6.3) -- I do not even know if that was considered > old or problematic. I am not also sure if the developers involved in > the aforementioned change are aware of this issue and what their > opinion on this is. I admit that I have not submitted a ticket on > this, perhaps I shall. > > Probably I shall also experiment with moving to a newer version of the > toolchain per the recently revamped Windows build instructions in the > meantime. > > [1] > http://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae > [2] http://haskell.inf.elte.hu/builders/windows-x86-head/56/10.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Oct 26 21:02:41 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 26 Oct 2014 21:02:41 +0000 Subject: [commit: ghc] wip/oneShot: Add GHC.Prim.oneShot (955d9f5) In-Reply-To: <20141025102715.6E1A53A300@ghc.haskell.org> References: <20141025102715.6E1A53A300@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37A0E0@DB3PRD3001MB020.064d.mgd.msft.net> Is all this one-shot stuff carefully documented somewhere? I think it's basically a good plan, but I see no Notes, alas. 
Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 25 October 2014 11:27 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/oneShot: Add GHC.Prim.oneShot (955d9f5) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/oneShot | Link : | http://ghc.haskell.org/trac/ghc/changeset/955d9f53b6c934585a90423dfc95d86 | d8a129908/ghc | | >--------------------------------------------------------------- | | commit 955d9f53b6c934585a90423dfc95d86d8a129908 | Author: Joachim Breitner | Date: Sun Jan 26 11:36:23 2014 +0000 | | Add GHC.Prim.oneShot | | Conflicts: | compiler/basicTypes/MkId.lhs | | | >--------------------------------------------------------------- | | 955d9f53b6c934585a90423dfc95d86d8a129908 | compiler/basicTypes/MkId.lhs | 17 +++++++++++++++-- | compiler/prelude/PrelNames.lhs | 3 ++- | 2 files changed, 17 insertions(+), 3 deletions(-) | | diff --git a/compiler/basicTypes/MkId.lhs b/compiler/basicTypes/MkId.lhs | index bf1c199..05dcdd5 100644 | --- a/compiler/basicTypes/MkId.lhs | +++ b/compiler/basicTypes/MkId.lhs | @@ -135,7 +135,8 @@ ghcPrimIds | seqId, | magicDictId, | coerceId, | - proxyHashId | + proxyHashId, | + oneShotId | ] | \end{code} | | @@ -1016,7 +1017,7 @@ another gun with which to shoot yourself in the | foot. | \begin{code} | lazyIdName, unsafeCoerceName, nullAddrName, seqName, | realWorldName, voidPrimIdName, coercionTokenName, | - magicDictName, coerceName, proxyName, dollarName :: Name | + magicDictName, coerceName, proxyName, dollarName, oneShotName :: Name | unsafeCoerceName = mkWiredInIdName gHC_PRIM (fsLit "unsafeCoerce#") | unsafeCoerceIdKey unsafeCoerceId | nullAddrName = mkWiredInIdName gHC_PRIM (fsLit "nullAddr#") | nullAddrIdKey nullAddrId | seqName = mkWiredInIdName gHC_PRIM (fsLit "seq") | seqIdKey seqId | @@ -1028,6 +1029,7 @@ magicDictName = mkWiredInIdName gHC_PRIM | (fsLit "magicDict") magicDict | coerceName = mkWiredInIdName gHC_PRIM (fsLit "coerce") | coerceKey coerceId | proxyName = mkWiredInIdName gHC_PRIM (fsLit "proxy#") | proxyHashKey proxyHashId | dollarName = mkWiredInIdName gHC_BASE (fsLit "$") | dollarIdKey dollarId | +oneShotName = mkWiredInIdName gHC_PRIM (fsLit "oneShot") | oneShotKey oneShotId | \end{code} | | \begin{code} | @@ -1119,6 +1121,17 @@ lazyId = pcMiscPrelId lazyIdName ty info | info = noCafIdInfo | ty = mkForAllTys [alphaTyVar] (mkFunTy alphaTy alphaTy) | | +oneShotId :: Id | +oneShotId = pcMiscPrelId oneShotName ty info | + where | + info = noCafIdInfo `setInlinePragInfo` alwaysInlinePragma | + `setUnfoldingInfo` mkCompulsoryUnfolding rhs | + ty = mkForAllTys [alphaTyVar, betaTyVar] (mkFunTy fun_ty fun_ty) | + fun_ty = mkFunTy alphaTy betaTy | + [body, x] = mkTemplateLocals [fun_ty, alphaTy] | + x' = setOneShotLambda x | + rhs = mkLams [alphaTyVar, betaTyVar, body, x'] $ Var body `App` Var | x | + | | ------------------------------------------------------------------------ | -------- | magicDictId :: Id -- See Note [magicDictId magic] | diff --git a/compiler/prelude/PrelNames.lhs | b/compiler/prelude/PrelNames.lhs | index e053b11..e2ade33 100644 | --- a/compiler/prelude/PrelNames.lhs | +++ b/compiler/prelude/PrelNames.lhs | @@ -1682,10 +1682,11 @@ rootMainKey, runMainKey :: Unique | rootMainKey = mkPreludeMiscIdUnique 101 | runMainKey = mkPreludeMiscIdUnique 102 | | -thenIOIdKey, lazyIdKey, assertErrorIdKey :: Unique | +thenIOIdKey, lazyIdKey, assertErrorIdKey, oneShotKey :: Unique | thenIOIdKey = 
mkPreludeMiscIdUnique 103 | lazyIdKey = mkPreludeMiscIdUnique 104 | assertErrorIdKey = mkPreludeMiscIdUnique 105 | +oneShotKey = mkPreludeMiscIdUnique 106 | | breakpointIdKey, breakpointCondIdKey, breakpointAutoIdKey, | breakpointJumpIdKey, breakpointCondJumpIdKey, | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From simonpj at microsoft.com Sun Oct 26 21:03:10 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 26 Oct 2014 21:03:10 +0000 Subject: [commit: ghc] wip/oneShot: Use oneShot in the definition of foldl etc. (6f101d2) In-Reply-To: <20141025102718.0481C3A300@ghc.haskell.org> References: <20141025102718.0481C3A300@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37A0E6@DB3PRD3001MB020.064d.mgd.msft.net> No Notes! Surely, surely it deserves one! Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 25 October 2014 11:27 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/oneShot: Use oneShot in the definition of | foldl etc. (6f101d2) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/oneShot | Link : | http://ghc.haskell.org/trac/ghc/changeset/6f101d20805fb52de0423bc8beab373 | b94bd4a7d/ghc | | >--------------------------------------------------------------- | | commit 6f101d20805fb52de0423bc8beab373b94bd4a7d | Author: Joachim Breitner | Date: Sat Oct 25 12:27:06 2014 +0200 | | Use oneShot in the definition of foldl etc. | | | >--------------------------------------------------------------- | | 6f101d20805fb52de0423bc8beab373b94bd4a7d | libraries/base/Data/OldList.hs | 5 +++-- | libraries/base/GHC/List.lhs | 6 +++--- | 2 files changed, 6 insertions(+), 5 deletions(-) | | diff --git a/libraries/base/Data/OldList.hs | b/libraries/base/Data/OldList.hs | index 0e6709e..75fba35 100644 | --- a/libraries/base/Data/OldList.hs | +++ b/libraries/base/Data/OldList.hs | @@ -499,7 +499,8 @@ pairWithNil x = (x, []) | | mapAccumLF :: (acc -> x -> (acc, y)) -> x -> (acc -> (acc, [y])) -> acc | -> (acc, [y]) | {-# INLINE [0] mapAccumLF #-} | -mapAccumLF f = \x r s -> let (s', y) = f s x | +mapAccumLF f = \x r -> oneShot $ \s -> | + let (s', y) = f s x | (s'', ys) = r s' | in (s'', y:ys) | | @@ -1058,7 +1059,7 @@ unfoldr f b0 = build (\c n -> | | -- | A strict version of 'foldl'. | foldl' :: forall a b . (b -> a -> b) -> b -> [a] -> b | -foldl' k z0 xs = foldr (\(v::a) (fn::b->b) (z::b) -> z `seq` fn (k z v)) | (id :: b -> b) xs z0 | +foldl' k z0 xs = foldr (\(v::a) (fn::b->b) -> oneShot (\(z::b) -> z | `seq` fn (k z v))) (id :: b -> b) xs z0 | -- Implementing foldl' via foldr is only a good idea if the compiler can | optimize | -- the resulting code (eta-expand the recursive "go"), so this needs - | fcall-arity! | -- Also see #7994 | diff --git a/libraries/base/GHC/List.lhs b/libraries/base/GHC/List.lhs | index 2d01678..c7a0cb3 100644 | --- a/libraries/base/GHC/List.lhs | +++ b/libraries/base/GHC/List.lhs | @@ -186,7 +186,7 @@ filterFB c p x r | p x = x `c` r | | foldl :: forall a b. 
(b -> a -> b) -> b -> [a] -> b | {-# INLINE foldl #-} | -foldl k z0 xs = foldr (\(v::a) (fn::b->b) (z::b) -> fn (k z v)) (id :: b | -> b) xs z0 | +foldl k z0 xs = foldr (\(v::a) (fn::b->b) -> oneShot (\(z::b) -> fn (k z | v))) (id :: b -> b) xs z0 | -- Implementing foldl via foldr is only a good idea if the compiler can | optimize | -- the resulting code (eta-expand the recursive "go"), so this needs - | fcall-arity! | -- Also see #7994 | @@ -221,7 +221,7 @@ scanl = scanlGo | | {-# INLINE [0] scanlFB #-} | scanlFB :: (b -> a -> b) -> (b -> c -> c) -> a -> (b -> c) -> b -> c | -scanlFB f c = \b g x -> let b' = f x b in b' `c` g b' | +scanlFB f c = \b g -> oneShot (\x -> let b' = f x b in b' `c` g b') | | {-# INLINE [0] constScanl #-} | constScanl :: a -> b -> a | @@ -258,7 +258,7 @@ scanl' = scanlGo' | | {-# INLINE [0] scanlFB' #-} | scanlFB' :: (b -> a -> b) -> (b -> c -> c) -> a -> (b -> c) -> b -> c | -scanlFB' f c = \b g x -> let b' = f x b in b' `seq` b' `c` g b' | +scanlFB' f c = \b g -> oneShot (\x -> let b' = f x b in b' `seq` b' `c` | g b') | | {-# INLINE [0] flipSeqScanl' #-} | flipSeqScanl' :: a -> b -> a | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From simonpj at microsoft.com Sun Oct 26 21:04:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 26 Oct 2014 21:04:50 +0000 Subject: [commit: ghc] wip/oneShot: Avoid inlining oneShot in unfoldings (16066a1) In-Reply-To: <20141025125830.DBD2F3A300@ghc.haskell.org> References: <20141025125830.DBD2F3A300@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37A124@DB3PRD3001MB020.064d.mgd.msft.net> | Avoid inlining oneShot in unfoldings I'm sure there's a reason for this but, again, please say what is, lest I accidentally reverse it in 3 years time. 
Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 25 October 2014 13:59 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/oneShot: Avoid inlining oneShot in unfoldings | (16066a1) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/oneShot | Link : | http://ghc.haskell.org/trac/ghc/changeset/16066a165dd7245be58d0fbee265e13 | fb21e2aed/ghc | | >--------------------------------------------------------------- | | commit 16066a165dd7245be58d0fbee265e13fb21e2aed | Author: Joachim Breitner | Date: Sat Oct 25 14:46:27 2014 +0200 | | Avoid inlining oneShot in unfoldings | | | >--------------------------------------------------------------- | | 16066a165dd7245be58d0fbee265e13fb21e2aed | compiler/basicTypes/MkId.lhs | 4 ++-- | 1 file changed, 2 insertions(+), 2 deletions(-) | | diff --git a/compiler/basicTypes/MkId.lhs b/compiler/basicTypes/MkId.lhs | index 05dcdd5..31cd426 100644 | --- a/compiler/basicTypes/MkId.lhs | +++ b/compiler/basicTypes/MkId.lhs | @@ -1124,8 +1124,8 @@ lazyId = pcMiscPrelId lazyIdName ty info | oneShotId :: Id | oneShotId = pcMiscPrelId oneShotName ty info | where | - info = noCafIdInfo `setInlinePragInfo` alwaysInlinePragma | - `setUnfoldingInfo` mkCompulsoryUnfolding rhs | + info = noCafIdInfo -- `setInlinePragInfo` alwaysInlinePragma | + `setUnfoldingInfo` mkWwInlineRule rhs 1 | ty = mkForAllTys [alphaTyVar, betaTyVar] (mkFunTy fun_ty fun_ty) | fun_ty = mkFunTy alphaTy betaTy | [body, x] = mkTemplateLocals [fun_ty, alphaTy] | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From gintautas.miliauskas at gmail.com Sun Oct 26 21:54:19 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Sun, 26 Oct 2014 22:54:19 +0100 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows Message-ID: The patch to migrate to mingw-w64 / gcc 4.8.3 is finally starting to look reasonable (Phab:D339 ). Whew! The part about downloading the tarball was relatively straightforward, but the mingw migration has been really tricky, in part due to the gcc version bump, but mostly, I think, due to use of mingw-w64 on i686 as well (which is a supported configuration). In particular, "#define _MSVCRT_ 1" took ages to figure out after digging through heaps of headers and weird errors. I am not sure I would have signed on if I had an idea of how much effort this needed... Jeez. On the bright side, now that we have standartised on mingw-w64, further gcc version upgrades should be relatively straightforward. Could people with some experience in Windows matters take a look at the patch? In particular, rts/Linker.c has been a problem; my fixes there seem to work but I have no idea if the approach is right (the whole file does seem like a grabbag of ad-hocness though). The patch also could use some more testing (both for x86 and for x86-64). There are still some validate.sh failures (although I am actually seeing fewer than with the legacy setup), some of which could probably be fixed easily. Some additional eyes and fingers on keyboards would be very welcome there. If you want to experiment with the patch, make sure to also patch in the related changes listed in the last comment on the Phab page. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Sun Oct 26 23:51:58 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 27 Oct 2014 00:51:58 +0100 Subject: [commit: ghc] wip/oneShot: Avoid inlining oneShot in unfoldings (16066a1) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37A124@DB3PRD3001MB020.064d.mgd.msft.net> References: <20141025125830.DBD2F3A300@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF3F37A124@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1414367518.17900.2.camel@joachim-breitner.de> Hi, on Sunday, 26.10.2014, 21:04 +0000, Simon Peyton Jones wrote: > | Avoid inlining oneShot in unfoldings > > I'm sure there's a reason for this but, again, please say what is, lest I accidentally reverse it in 3 years time. Don't worry, this is a very experimental branch, barely enough to run benchmarks. If we are going to add this (which I'm not sure of), there will be proper commits with Notes and everything. Do we need a convention to signal "I know that this commit is horrible in every respect, don't waste time reviewing it"? I, for one, don't expect others to review wip/ branches unless explicitly invited to. But I guess I could also have made this clearer in the commit message. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From kyrab at mail.ru Mon Oct 27 07:00:37 2014 From: kyrab at mail.ru (kyra) Date: Mon, 27 Oct 2014 10:00:37 +0300 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows In-Reply-To: References: Message-ID: <544DED95.8090606@mail.ru> Ah, Gintautas! Since I don't follow Phabricator, I didn't even suspect you were doing all this work. I have had a patch to migrate to mingw-w64 gcc 4.8.x (and even 4.9.x, which requires extra runtime linker support), both for 64-bit and 32-bit, for more than half a year already. That patch is not as thorough as yours (I didn't bother with downloading the tarball and all that stuff), but regarding Linker.c I think it is perhaps more systematic than yours. I didn't expose it to a broader audience because I believed there was not much interest, and also because at some point I stopped maintaining several patches per issue and combined them into one big patch (including some nonportable things), and I didn't want to spend the time breaking it apart into per-issue parts. During a couple of days I'll extract my patch to Linker.c and put it to Phabricator, and I'll also comment on Linker.c issues. Btw, I wonder what problem "#define _MSVCRT_ 1" solves? I didn't need it at all (I never run validate.sh, though). Cheers, Kyra On 10/27/2014 12:54 AM, Gintautas Miliauskas wrote:
> The patch to migrate to mingw-w64 / gcc 4.8.3 is finally starting to
> look reasonable (Phab:D339 ). Whew!
>
> The part about downloading the tarball was relatively straightforward,
> but the mingw migration has been really tricky, in part due to the gcc
> version bump, but mostly, I think, due to use of mingw-w64 on i686 as
> well (which is a supported configuration). In particular, "#define
> _MSVCRT_ 1" took ages to figure out after digging through heaps of
> headers and weird errors. I am not sure I would have signed on if I
> had an idea of how much effort this needed...
Jeez. > > On the bright side, now that we have standartised on mingw-w64, > further gcc version upgrades should be relatively straightforward. > > Could people with some experience in Windows matters take a look at > the patch? In particular, rts/Linker.c has been a problem; my fixes > there seem to work but I have no idea if the approach is right (the > whole file does seem like a grabbag of ad-hocness though). > > The patch also could use some more testing (both for x86 and for > x86-64). There are still some validate.sh failures (although I am > actually seeing fewer than with the legacy setup), some of which could > probably be fixed easily. Some additional eyes and fingers on > keyboards would be very welcome there. > > If you want to experiment with the patch, make sure to also patch in > the related changes listed in the last comment on the Phab page. > > -- > Gintautas Miliauskas > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From david.feuer at gmail.com Mon Oct 27 07:39:26 2014 From: david.feuer at gmail.com (David Feuer) Date: Mon, 27 Oct 2014 03:39:26 -0400 Subject: Call Arity, oneShot, or both Message-ID: Joachim Breitner ??? That would be great! But do we have evidence of this user-written code > that benefits? So far I have only seen relevant improvement due to > list-fusion a left-foldish function. > I was under the impression that the transformation was much more general than that, improving various recursive forms. Was I wrong? But aside from that, I would be astonished if the library authors were the only ones writing left-accumulating folds! David -------------- next part -------------- An HTML attachment was scrubbed... URL: From slyich at gmail.com Mon Oct 27 09:25:11 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Mon, 27 Oct 2014 09:25:11 +0000 Subject: Proposal: Improving the LLVM backend by packaging it In-Reply-To: References: Message-ID: <20141027092511.149b841e@sf> On Fri, 24 Oct 2014 18:52:53 -0500 Austin Seipp wrote: > I won't repeat what's on the wiki page too much, but the TL;DR version > is: we should start packaging a version of LLVM, and shipping it with > e.g. binary distributions of GHC. It's just a lot better for everyone. > > I know we're normally fairly hesitant about things like this (shipping > external dependencies), but I think it's the only sane thing to do > here, and the situation is fairly unique in that it's not actually > very complicated to implement or support, I think. That makes a lot of sense! Gentoo allows user upgrade llvm and ghc independently, which makes syncing harder. Thus Gentoo does not care much about llvm support in ghc. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From mail at joachim-breitner.de Mon Oct 27 10:16:40 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 27 Oct 2014 11:16:40 +0100 Subject: Call Arity, oneShot, or both In-Reply-To: References: Message-ID: <1414405000.2854.10.camel@joachim-breitner.de> Hi, Am Montag, den 27.10.2014, 03:39 -0400 schrieb David Feuer: > Joachim Breitner ??? > > That would be great! But do we have evidence of this > user-written code > that benefits? So far I have only seen relevant improvement > due to > list-fusion a left-foldish function. 
> > > I was under the impression that the transformation was much more > general than that, improving various recursive forms. Was I wrong? It is more general, but that doesn?t mean that the general form occurs in practice... > But aside from that, I would be astonished if the library authors > were the only ones writing left-accumulating folds! I?d expect most others to write their folds already in eta-expanded form; it is rather unnatural to write it in non-expanded form. Maybe if someone uses difference lists there is a possibility for Call Arity to do something. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gintautas at miliauskas.lt Mon Oct 27 10:37:01 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Mon, 27 Oct 2014 11:37:01 +0100 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows In-Reply-To: <544DED95.8090606@mail.ru> References: <544DED95.8090606@mail.ru> Message-ID: > > Since I don't follow Phabricator I didn't even suspect you are making all > this work. > Yes, looks like I undercommunicated, sorry about that. I started off with the patch to download the tarballs and just went ahead for standartising of mingw-w64 without quite realising how involved that would be (I expected only a few compilation errors...). > I had a patch to migrate to mingw-w64 gcc 4.8.x (and even 4.9.x, which > requires extra runtime linker support) both for 64-bit and 32-bit for more > than half of the year already. That patch is not such a thorough as yours > is (I didn't bother with downloading the tarball and all that stuff), but > regarding Linker.c I think it is perhaps more systematic than yours. > Yes, your patch looks much better. I've integrated it and running tests on 64-bit and 32-bit as we speak. Will keep you posted. > Why I didn't exposed it to a broad audience was that I believed there was > not much interest and also because starting at some moment I didn't bother > to maintain several patches per issue and combined them in one big patch > (including some nonportable things) and I didn't want to spend a time to > break it apart to per-issue parts. > Most of the prerequisite auxiliary fixes should be in now; the non-backwards compatible ones are part of D339. > During a couple of days I'll extract my patch to Linker.c and put it to > Phabricator then, and also I'll comment on Linker.c issues. > Sounds good. By the way, you attached a patch to #9218 ; should I consider that an up-to-date version of your proposed change, or should I wait a few days more? Btw, I wonder what the problem does "#define _MSVCRT_ 1" solve? I didn't > need it at all (I never run validate.sh though). > Without it I think I got a bunch of conflicting export definitions from the C library, and removing the ones in Linker.c resulted in runtime crashes. Not quite sure what was going on there. Probably issues due to my patch being rather ad-hoc. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Mon Oct 27 10:57:25 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 27 Oct 2014 11:57:25 +0100 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows In-Reply-To: (Gintautas Miliauskas's message of "Mon, 27 Oct 2014 11:37:01 +0100") References: <544DED95.8090606@mail.ru> Message-ID: <87ppddu6ei.fsf@gmail.com> Hello *, On 2014-10-27 at 11:37:01 +0100, Gintautas Miliauskas wrote: >> Since I don't follow Phabricator I didn't even suspect you are making all >> this work. > Yes, looks like I undercommunicated, sorry about that. In the hope of aiding communication, I've just created a "GHC Windows Task Force" Team/Project in Phab: https://phabricator.haskell.org/project/view/11/ Assuming you make use of this, you can reference the group of people that are part of that "project" simply by using one of its associated hash-tags, e.g. `#ghc_windows_task_force` (or, a bit easier, its abbreviation `#ghc-wtf`; we can add additional ones as well), in place of a user-name. So, if everyone interested in keeping the Windows platform working would join the "GHC Windows Task Force" Phab group and help with code review, he/she could get CC'ed every time there's a suspicion that a patch may affect compilation on Windows and may benefit from additional eyes... HTH, hvr From kyrab at mail.ru Mon Oct 27 11:01:40 2014 From: kyrab at mail.ru (kyra) Date: Mon, 27 Oct 2014 14:01:40 +0300 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows In-Reply-To: References: <544DED95.8090606@mail.ru> Message-ID: <544E2614.80705@mail.ru> On 10/27/2014 1:37 PM, Gintautas Miliauskas wrote: > > During a couple of days I'll extract my patch to Linker.c and put > it to Phabricator then, and also I'll comment on Linker.c issues. > > > Sounds good. By the way, you attached a patch to #9218 > ; should I > consider that an up-to-date version of your proposed change, or should > I wait a few days more? The last GHC built with this patch applied was x86_64 ghc-7.9.20140917. I had no problems with that build. I've made only a minor modification today (20141027) to let it apply cleanly to today's HEAD. The last 32-bit build was made 4-6 months ago, perhaps. I do not have much time to test this patch against today's HEAD, so I would greatly appreciate it if you consider this the final version (and report the problems, if any). Cheers, Kyra From gintautas at miliauskas.lt Mon Oct 27 12:34:18 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Mon, 27 Oct 2014 13:34:18 +0100 Subject: Windows build broken in Linker.c In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> <544A7161.1050705@gmail.com> Message-ID: FYI, after the fix ghc builds again on Windows 32-bit, but validate.sh fails:

    rts\Linker.c: In function 'allocateImageAndTrampolines':

    rts\Linker.c:3657:31:
         error: unused parameter 'member_name' [-Werror=unused-parameter]
         pathchar* arch_name, char* member_name,
                                    ^
    cc1.exe: all warnings being treated as errors
    rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed

What's the usual way to deal with this type of warning in ghc land? On Fri, Oct 24, 2014 at 5:37 PM, Austin Seipp wrote: > Gah, this slipped my mind. On it. > > On Fri, Oct 24, 2014 at 10:33 AM, Simon Marlow wrote: > > I sent a patch to Austin to validate+commit earlier this week. > > > > On 24/10/2014 15:08, Gintautas Miliauskas wrote: > >>
I've been working on the mingw gcc > >> upgrade and testing on 32 bit, and this failure got me running in > >> circles until I discovered that baseline was broken too... > >> > >> On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow >> > wrote: > >> > >> I was working on a fix yesterday but ran out of time. Frankly this > >> code is a nightmare, every time I touch it it breaks on some > >> platform - this time I validated on 64 bit Windows but not 32. Aargh > >> indeed. > >> > >> On 16 Oct 2014 14:32, "Austin Seipp" >> > wrote: > >> > >> I see what's going on and am fixing it... The code broke 32-bit > >> due to > >> #ifdefery, but I think it can be removed, perhaps (which would > be > >> preferable). > >> > >> On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones > >> > wrote: > >> > Simon > >> > > >> > Aargh! I think the Windows build is broken again. > >> > > >> > I think this is your commit 5300099ed > >> > > >> > Admittedly this is on a branch I?m working on, but it?s up to > >> date with > >> > HEAD. And I have no touched Linker.c! > >> > > >> > Any ideas? > >> > > >> > Simon > >> > > >> > > >> > > >> > rts\Linker.c: In function 'allocateImageAndTrampolines': > >> > > >> > > >> > > >> > rts\Linker.c:3708:19: > >> > > >> > error: 'arch_name' undeclared (first use in this > function) > >> > > >> > > >> > > >> > rts\Linker.c:3708:19: > >> > > >> > note: each undeclared identifier is reported only once > >> for each > >> > function it appears in > >> > > >> > rts/ghc.mk:236 : recipe for target > >> 'rts/dist/build/Linker.o' failed > >> > > >> > make[1]: *** [rts/dist/build/Linker.o] Error 1 > >> > > >> > make[1]: *** Waiting for unfinished jobs.... > >> > > >> > > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > >> > >> > >> > >> -- > >> Regards, > >> > >> Austin Seipp, Haskell Consultant > >> Well-Typed LLP, http://www.well-typed.com/ > >> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > >> > >> > >> > >> -- > >> Gintautas Miliauskas > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas at miliauskas.lt Mon Oct 27 15:02:40 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Mon, 27 Oct 2014 16:02:40 +0100 Subject: GHC on mingw-w64 / gcc 4.8.3 on Windows In-Reply-To: <544E2614.80705@mail.ru> References: <544DED95.8090606@mail.ru> <544E2614.80705@mail.ru> Message-ID: The patch looks good both on i686 and x86_64. Cool stuff! On Mon, Oct 27, 2014 at 12:01 PM, kyra wrote: > On 10/27/2014 1:37 PM, Gintautas Miliauskas wrote: > >> >> During a couple of days I'll extract my patch to Linker.c and put >> it to Phabricator then, and also I'll comment on Linker.c issues. >> >> >> Sounds good. By the way, you attached a patch to #9218 < >> https://ghc.haskell.org/trac/ghc/ticket/9218#comment:14>; should I >> consider that an up-to-date version of your proposed change, or should I >> wait a few days more? >> > The last GHC built with this patch applied was x86_64 ghc-7.9.20140917. I > had no problems with that build. 
> > I've made only the minor modification today (20141027) to let it apply to > today's HEAD cleanly. > > The last 32-bit build was made 4-6 month ago perhaps. > > I have no much time to test this patch agaings today's HEAD, so I would > greatly appreciate if you consider this as a final version (and report the > problems if any). > > Cheers, > Kyra > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Oct 27 22:36:13 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 27 Oct 2014 22:36:13 +0000 Subject: Call Arity, oneShot or both In-Reply-To: <1414254274.1665.16.camel@joachim-breitner.de> References: <1414254274.1665.16.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> | The biggest loser is calendar, which uses scanl. I am not fully sure | what went wrong here: Either the one-shot annotation on the lambda?s | variable got lost somewhere in the pipeline, or despite it being there, | the normal arity analysis did not use it. | | But there is also a winner, fft2, with -1.5% allocations. Here Call | Arity was not good enough, but oneShot did the jobs. It's always useful to discover what happens. Why didn't Call Arity do the job? How did the one-shot-ness get lost? I know it takes work to find out these things; but sometimes they show up something that is easy to fix, and which gives improvements across the board. | best keep the oneShot annotation in the unfoldings: The isOneShotLambda | flag is currently not stored in the interface. I work around this by | making sure that the oneShot function is never inlined in unfoldings, | but maybe it would be better to serialize the isOneShotLambda flag in | interfaces, which might have other good effects? Serialising the one-shot lambda info sounds like a good plan to me. | If we want as much performance as possible, we should simply include | both approaches. But there might be other things to consider... so not | sure what the best thing to do is. I'd be inclined to do both. Call-arity hits programs where the programmer had no idea that there was something to do; or where the lambda isn't *statically* one-shot... it only becomes so when inlined into a particular context. One-shot-lambdas may help in situations where the one-shot-ness is manifestly too hard to spot. Good work! Keep a wiki page to describe the choices, point to the tickets, etc Simon From simonpj at microsoft.com Mon Oct 27 22:36:11 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 27 Oct 2014 22:36:11 +0000 Subject: [commit: ghc] wip/oneShot: Avoid inlining oneShot in unfoldings (16066a1) In-Reply-To: <1414367518.17900.2.camel@joachim-breitner.de> References: <20141025125830.DBD2F3A300@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF3F37A124@DB3PRD3001MB020.064d.mgd.msft.net> <1414367518.17900.2.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37C6B5@DB3PRD3001MB020.064d.mgd.msft.net> Sorry -- I missed the fact that it wasn't on HEAD! 
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 26 October 2014 23:52 | To: ghc-devs at haskell.org | Subject: Re: [commit: ghc] wip/oneShot: Avoid inlining oneShot in | unfoldings (16066a1) | | Hi, | | | Am Sonntag, den 26.10.2014, 21:04 +0000 schrieb Simon Peyton Jones: | > | Avoid inlining oneShot in unfoldings | > | > I'm sure there's a reason for this but, again, please say what is, lest | I accidentally reverse it in 3 years time. | | don?t worry, this is a very experimental branch, barely enough to run | benchmarks. If we are going to add this (which I?m not sure of) there | will be proper commits with Notes and everything. | | Do we need a convention to signal ?I know that this commit is horrible | by all aspects, don?t waste time reviewing it?? I, for one, don?t expect | others to review wip/ branches unless explicitly invited to. But I guess | I could also have made this clearer in the commit message. | | Greetings, | Joachim | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From ezyang at mit.edu Tue Oct 28 00:54:39 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 27 Oct 2014 17:54:39 -0700 Subject: [commit: ghc] master: Implementation of hsig (module signatures), per #9252 (aa47995) In-Reply-To: <1414228493.1665.3.camel@joachim-breitner.de> References: <20141024234754.001D93A300@ghc.haskell.org> <1414228493.1665.3.camel@joachim-breitner.de> Message-ID: <1414457660-sup-9052@sabre> Thanks for the note; I'm validating a patch that fixes this. Landing soon! Edward Excerpts from Joachim Breitner's message of 2014-10-25 02:14:53 -0700: > Hi Edwardd, > > Am Freitag, den 24.10.2014, 23:47 +0000 schrieb git at git.haskell.org: > > >--------------------------------------------------------------- > > > > commit aa4799534225e3fc6bbde0d5e5eeab8868cc3111 > > Author: Edward Z. Yang > > Date: Thu Aug 7 18:32:12 2014 +0100 > > > > Implementation of hsig (module signatures), per #9252 > > > this breaks a few test cases: > Actual stderr output differs from expected: > --- ./ghci/scripts/T5979.stderr 2014-10-24 23:49:33.395524791 +0000 > +++ ./ghci/scripts/T5979.run.stderr 2014-10-25 00:24:08.934279006 +0000 > @@ -2,6 +2,6 @@ > : > Could not find module ???Control.Monad.Trans.State??? 
> Perhaps you meant > - Control.Monad.Trans.State (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) > - Control.Monad.Trans.Class (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) > - Control.Monad.Trans.Cont (from transformers-0.4.1.0 at trans_GjLVjHaAO8fEGf8lChbngr) > + Control.Monad.Trans.State (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) > + Control.Monad.Trans.Class (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) > + Control.Monad.Trans.Cont (from transformers-0.4.1.0 at trans_5jw4w9yTgmZ89ByuixDAKP) > *** unexpected failure for T5979(ghci) > > Actual stdout output differs from expected: > --- ./safeHaskell/check/pkg01/safePkg01.stdout 2014-10-24 23:49:33.705509654 +0000 > +++ ./safeHaskell/check/pkg01/safePkg01.run.stdout 2014-10-25 00:19:17.451490530 +0000 > @@ -29,17 +29,17 @@ > require own pkg trusted: True > > M_SafePkg6 > -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu > +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz > trusted: trustworthy > require own pkg trusted: False > > M_SafePkg7 > -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu > +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz > trusted: safe > require own pkg trusted: False > > M_SafePkg8 > -package dependencies: array-0.5.0.1 at array_5q713e1nmXtAgNRa542ahu > +package dependencies: array-0.5.0.1 at array_GX4NwjS8xZkC2ZPtjgwhnz > trusted: trustworthy > require own pkg trusted: False > > *** unexpected failure for safePkg01(normal) > > https://s3.amazonaws.com/archive.travis-ci.org/jobs/38981598/log.txt > > It seems you need to adjust the testsuite to remove these hashes before > comparing the output. > > > Greetings, > Joachim > From mail at joachim-breitner.de Tue Oct 28 10:08:52 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 28 Oct 2014 11:08:52 +0100 Subject: Call Arity, oneShot or both In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1414490932.1418.4.camel@joachim-breitner.de> Hi, Am Montag, den 27.10.2014, 22:36 +0000 schrieb Simon Peyton Jones: > | The biggest loser is calendar, which uses scanl. I am not fully sure > | what went wrong here: Either the one-shot annotation on the lambda?s > | variable got lost somewhere in the pipeline, or despite it being there, > | the normal arity analysis did not use it. > | > | But there is also a winner, fft2, with -1.5% allocations. Here Call > | Arity was not good enough, but oneShot did the jobs. > > It's always useful to discover what happens. Why didn't Call Arity do > the job? I?ll see what I can do. But we already know that Call Arity is not complete, chance are high that it is among the known unsolvable cases (e.g. a recursive call in an argument position). > Good work! Keep a wiki page to describe the choices, point to the tickets, etc I added a proposal page https://ghc.haskell.org/trac/ghc/wiki/OneShot here. I can do the implementation, including the interface changes, but I am not sure that I?ll be able to investigate each core2core transformation for where oneShot flags might be lost. But the good thing is that we can still deploy the whole thing without regressions and make then iteratively make the compiler better in preserving this bit. 
> | best keep the oneShot annotation in the unfoldings: The isOneShotLambda > | flag is currently not stored in the interface. I work around this by > | making sure that the oneShot function is never inlined in unfoldings, > | but maybe it would be better to serialize the isOneShotLambda flag in > | interfaces, which might have other good effects? > > Serialising the one-shot lambda info sounds like a good plan to me. Ok, thanks for guidance. Is https://ghc.haskell.org/trac/ghc/wiki/OneShot#PreservationofsetOneShotLambdaacrossmoduleboundaries a sensible design? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From eir at cis.upenn.edu Tue Oct 28 12:39:11 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 28 Oct 2014 08:39:11 -0400 Subject: small `arc` victory -- commit message not changed Message-ID: <217BB309-A9E2-494E-BBEC-D36407DB8A1C@cis.upenn.edu> I just revised a Phab revision using `arc`, and it all worked swimmingly, doing exactly what I wanted, even though this was non-trivial. I describe the process below, and am happy to add to the Phab wiki page, but wanted to check here first to make sure I wasn't making an invisible dreadful mistake. Here's my story: - I had done a few edits in TcSplice and posted the (boring) revision D359. - Austin and Harbormaster both had good suggestions for me. - I spent a week completing other tasks. - I rebased my WIP branch against master. This worried me about what Phab might think. - I incorporated the suggestions. As is my usual workflow, I then rebased to integrate these changes into my original commits. As previously discussed, I think breaking a patch into separable commits is a Good Thing, and I curate these commits to be reader-friendly as I work. In the process, I de-Phabified my previous commit message, worried I was inviting demons, but determined to proceed. - I then updated my Phab revision with this: arc diff --allow-untracked --head HEAD --update D359 Explanation: --allow-untracked is because I have testsuite garbage floating in my working directory, and I'm never confident enough to modify .gitignore to ignore this garbage. `--head HEAD`, I think, is the magic bit. It specifies the *end* of the commit range to be included in the Phab diff. (The beginning was inferred to be origin/master, but can be specified without flags on the command line.) According to `arc help`, "This [flag] disables many Arcanist/Phabricator features which depend on having access to the working copy." Indeed, it was this warning which made me think `--head` was my friend. Of course, specifying `--head HEAD` on the command line seems redundant, but it still effectively stopped `arc` from touching my commits, thinking that this would break my git-ness. So, `arc` did the best job it could without touching my git information, which is exactly what I wanted. Results: My new code is now viewable and reviewable at D359. Despite all of my rebasing, the diffs are clean. You can even ask for the differences between my two revisions, and Phab does the right thing -- even though there's a week's worth of other commits that were rebased in. I'm sure Harbormaster is hard at work right now checking my changes. 
Fellow devs can offer nice feedback. And, I have retained control over my git structure. Hooray! Question: Have I done anything wrong here? By "wrong", I mean both in a technical sense (e.g., is Harbormaster now deeply confused?) and in a project-management sense (e.g., would this be a bad pattern for others to follow?). Should I put this workflow on the wiki? If no one tells me otherwise, I plan on using `--head` every time. Thanks, Richard From simonpj at microsoft.com Tue Oct 28 14:42:38 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 28 Oct 2014 14:42:38 +0000 Subject: Call Arity, oneShot or both In-Reply-To: <1414490932.1418.4.camel@joachim-breitner.de> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> <1414490932.1418.4.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> | > Serialising the one-shot lambda info sounds like a good plan to me. | | Ok, thanks for guidance. Is | https://ghc.haskell.org/trac/ghc/wiki/OneShot#PreservationofsetOneShotLam | bdaacrossmoduleboundaries | a sensible design? Generally yes, but I'd define a new data IfaceLamBndr, rather like IfaceLetBndr, rather than clutter up IfaceLam itself. Simon From mail at joachim-breitner.de Tue Oct 28 14:45:41 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 28 Oct 2014 15:45:41 +0100 Subject: Call Arity, oneShot or both In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> <1414490932.1418.4.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1414507541.1418.8.camel@joachim-breitner.de> HI, Am Dienstag, den 28.10.2014, 14:42 +0000 schrieb Simon Peyton Jones: > | > Serialising the one-shot lambda info sounds like a good plan to me. > | > | Ok, thanks for guidance. Is > | https://ghc.haskell.org/trac/ghc/wiki/OneShot#PreservationofsetOneShotLam > | bdaacrossmoduleboundaries > | a sensible design? > > Generally yes, but I'd define a new data IfaceLamBndr, rather like > IfaceLetBndr, rather than clutter up IfaceLam itself. heh, that?s what I ended up doing :-) It seems to work quite well, I?m heating my room right now with a few nofib runs of various combinations. Also your suggestion to investigate fft2 was good: Turns out iterateFB would never inline, which defeats the purpose. With that fixed (just pushed to master) I expect Call Arity to handle the case in fft2 as well; well?ll see when the benchmarks finish. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
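To make the scheme concrete, here is a rough sketch of the two pieces under discussion -- the user-facing annotation and the interface-file representation. This is only an illustration of the idea, not the code that actually landed; the real changes are in the wip/oneShot work mentioned later in the thread.

  -- GHC.Magic.oneShot: semantically the identity, but it marks the
  -- lambda it wraps as one-shot (entered at most once), so the
  -- simplifier may eta-expand through it without losing sharing.
  oneShot :: (a -> b) -> (a -> b)
  oneShot f = f      -- the real work is the OneShotLambda flag GHC attaches

  -- Typical use site: a left fold written via foldr, in the
  -- foldr/build style discussed in this thread.
  foldl :: (b -> a -> b) -> b -> [a] -> b
  foldl k z0 xs = foldr (\v fn -> oneShot (\z -> fn (k z v))) id xs z0

  -- Preserving the flag across module boundaries, in the shape Simon
  -- suggests: pair each lambda binder in an interface file with a
  -- one-shot marker instead of adding a field to IfaceLam itself.
  -- (IfaceBndr is GHC's existing interface binder type.)
  data IfaceOneShot = IfaceNoOneShot | IfaceOneShot
  type IfaceLamBndr = (IfaceBndr, IfaceOneShot)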
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Tue Oct 28 16:01:31 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 28 Oct 2014 17:01:31 +0100 Subject: Call Arity, oneShot or both In-Reply-To: <1414507541.1418.8.camel@joachim-breitner.de> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> <1414490932.1418.4.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> <1414507541.1418.8.camel@joachim-breitner.de> Message-ID: <1414512091.1418.13.camel@joachim-breitner.de> Hi, Am Dienstag, den 28.10.2014, 15:45 +0100 schrieb Joachim Breitner: > Also your suggestion to investigate fft2 was good: Turns out iterateFB > would never inline, which defeats the purpose. With that fixed (just > pushed to master) I expect Call Arity to handle the case in fft2 as > well; well?ll see when the benchmarks finish. indeed. So nofib gives no hard evidence that oneShot does any good (nor does it do any harm).? But since it is plausible that there are cases out there where it might help, even if just a little, we could go forward ?unless the implementation becomes ugly. It also seems that the OS=Once flag survives most transformations just fine; I had to add it to the list of IdInfo flags that make it through TidyCore, though. Serializing and reading it from the interface was also quite smooth. I think I can prepare a differential revision soon. After writing some Notes :-) Greetings, Joachim ? We need more benchmarks. -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From gintautas at miliauskas.lt Tue Oct 28 16:07:46 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Tue, 28 Oct 2014 17:07:46 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: Message-ID: Hey Austin, thanks for pushing this forward. It sure looks like Windows deserves more attention than it is getting. We definitely need a broader action plan. My thoughts were close to yours: 1. Push through the gcc compiler upgrade (D339) 2. Fix the Windows continuous builds. This is necessary to prevent regressions. 3. Make sure validate.sh results are clean on Windows. Tests that are known to be failing are not providing new information, they should be disabled and issues filed. 4. Triage the Windows bug list. I already made a few passes, but most of the bugs are far from trivial. I think we'll need to prioritize very aggressively to focus the limited resources. Are there any broader ideas for architecture-level changes related to GHC on Windows? The people problem is tricky. At work, this would be the right time to do a video chat and at least see the faces of the other people involved. Would folks be interested in a Skype/Hangout sometime? It would be interesting to hear what interests / skills / resources / constraints we have between us. 
On Sat, Oct 25, 2014 at 2:24 AM, Austin Seipp wrote: > Gintautas, Tamar, Roman, > > (CC'ing those on > https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and Kyrill, > who has helped us out much in the past) > > Thank you all for all your help with Windows recently. I apologize for > not responding to some of your concerns sooner in the recent threads > about tarballs, etc. > > First off, all your contributions are extremely welcome - GHC has had > many talented Windows hackers in days long past, but these days this > number has dwindled! Anyone who has an interest in GHC on Windows is > in a place to make a big impact and help us. All the work Gintautas > has done for example, will dramatically improve the ghc-tarballs > scenario. > > On that note: Gintautas, I will get D339 merged in ASAP, as soon as I > test it and make a download mirror for you. Haskell.org has an awesome > new CDN setup, and once I implement https://downloads.haskell.org, it > will be easy to update tarballs and serve them to mass amounts of > users. > > However, beyond that, we still need more done. First off, if you can > help, we can help you! We can make lots of Windows build bots for > people on demand, so if you're in desperate need of disk space or your > computers are a bit slow, we can help accommodate. > > Right now, we have nightly builds with Gabor's[1] build system, and > soon, we're working on a Phabricator integration, which should be > great - and hopefully reduce the amount of breakage substantially. > > I also notice there is a ticket list of Windows issues[2], and that's > fantastic. After a quick glance, a lot of these tickets are old, > duplicates, or could possibly be closed or fixed easily. A good first > task for any new contributor would be to go through this list, and try > to replicate some of them! And you can always ask me - I can certainly > help you navigate GHC a bit to get somewhere. > > But there are still other things. The Win32 package for example, is > dreadfully lacking in maintainership. While we merge patches, it would > be great to see a Windows developer spearhead and clean it up - we > could even make some improvements in GHC itself based on this. This > would be an excellent opportunity to make a good impact in the broader > ecosystem! > > Finally, we desperately need someone to consult with when we're up a > creek. Are certain patches OK for Windows? What's the best way to fix > certain bugs, or implement certain features? I feel like often we try > to think about this, but it's a bit lonely when nobody else is there > to help! I'm not sure how to fix this, other than encouraging things > like doing active code reviews and helping grind out some patches. But > at the very minimum, I'd just like to talk with you about things > perhaps! > > So in summary - the work so far is grand, and we want to help you do > more! And I'm sure everyone can help - there's always so much to do > and so little time, we need to encourage it all we can. > > As Simon says: Upward and Onward! > > [1] http://haskell.inf.elte.hu/builders/ > [2] > https://ghc.haskell.org/trac/ghc/query?status=!closed&os=Windows&desc=1&order=id > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gintautas.miliauskas at gmail.com Tue Oct 28 16:18:48 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Tue, 28 Oct 2014 17:18:48 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: Any luck with getting the Windows continuous builds back up and running? I'd be willing to try to help out if remote access is available, although the build environment should be updated first in case that's the source of the trouble. On Sat, Oct 25, 2014 at 1:26 PM, P?li G?bor J?nos wrote: > 2014-10-25 2:03 GMT+02:00 Austin Seipp : > > As well as that, Gabor's bots > > (at http://haskell.inf.elte.hu/builders) also build on Windows. > > Unfortunately, my builders are still suffering from the sudden > breakage induced by a Cabal library update on September 24 [1]. That > is, they are unable to complete the build due to some ghc-cabal > failure right after bootstrapping [2]. As a result, I basically > suspended the builders until I could do something with this. Yes, I > have just checked it, the situation is still the same. > > Curiously, I do not know about others who are experiencing the same > problem, however, the revisions before the referenced commit (mostly) > build just fine. > > As of yet, I have not had the time and chance to even attempt to fix > this. I have already answered Herbert the version of the build > environment (Windows 7 SP1, MinGW from July 4 (32 bit) and February 16 > (64 bit), with GHC 7.6.3) -- I do not even know if that was considered > old or problematic. I am not also sure if the developers involved in > the aforementioned change are aware of this issue and what their > opinion on this is. I admit that I have not submitted a ticket on > this, perhaps I shall. > > Probably I shall also experiment with moving to a newer version of the > toolchain per the recently revamped Windows build instructions in the > meantime. > > [1] > http://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae > [2] http://haskell.inf.elte.hu/builders/windows-x86-head/56/10.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -- > Gintautas Miliauskas > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Tue Oct 28 16:50:38 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 28 Oct 2014 12:50:38 -0400 Subject: Call Arity, oneShot or both Message-ID: Simon Peyton Jones wrote: > > But since it is plausible that there are cases out there where it might > help, even if just a little, we could go forward ?unless the > implementation becomes ugly. > Based on our experience with Call Arity, it's much more likely that it will help a lot in a few cases than that it will help a little in a lot of cases. David -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Tue Oct 28 17:37:54 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Tue, 28 Oct 2014 18:37:54 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: 2014-10-26 18:17 GMT+01:00 Gintautas Miliauskas : > Maybe it's something silly like one process not having enough > time to clean up and close the file before the next command tries to move > it? 
I'd add a sleep before the mv and see if that helps. I am far from understanding all the details of the problem, but I believe the move operation originates from the ghc-cabal binary. For what it is worth, I could not even find the place in the sources where the referenced MoveFileEx is called: "inplace/bin/ghc-cabal.exe" configure libraries/binary dist-boot "" --with-ghc="/ghc-7.6.3/bin/ghc.exe" --with-ghc-pkg="/ghc-7.6.3/bin/ghc-pkg" --package-db=C:/msys32/home/ghc-builder/work/builder/tempbuild/build/libraries/bootstrapping.conf --disable-library-for-ghci --enable-library-vanilla --enable-library-for-ghci --disable-library-profiling --disable-shared --with-hscolour="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/HsColour" --configure-option=CFLAGS=" -U__i686 -march=i686 -fno-stack-protector " --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " --gcc-options=" -U__i686 -march=i686 -fno-stack-protector " --constraint "binary == 0.7.1.0" --constraint "Cabal == 1.21.1.0" --constraint "hpc == 0.6.0.2" --constraint "bin-package-db == 0.0.0.0" --constraint "hoopl == 3.10.0.2" --constraint "transformers == 0.4.1.0" --with-gcc="C:/msys32/ghc-7.6.3/lib/../mingw/bin/gcc.exe" --configure-option=--with-cc="C:/msys32/ghc-7.6.3/lib/../mingw/bin/gcc.exe" --with-ar="C:/msys32/ghc-7.6.3/lib/../mingw/bin/ar.exe" --with-alex="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/alex" --with-happy="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/happy" Configuring binary-0.7.1.0... ghc-cabal.exe: dist-boot\setup-config3736.tmp: MoveFileEx "dist-boot\\setup-config3736.tmp" "dist-boot\\setup-config": permission denied (Access is denied.) libraries/binary/ghc.mk:3: recipe for target 'libraries/binary/dist-boot/package-data.mk' failed > Does the build > proceed if you try "make" without cleaning the repository, or does it hang > again at the same spot? It is fully reproducible by the subsequent make(1) invocations as well, that is, yes, it hangs at the same spot. 2014-10-28 17:18 GMT+01:00 Gintautas Miliauskas : > Any luck with getting the Windows continuous builds back up and running? No, not yet. But I will keep posted on the results. > I'd be willing to try to help out if remote access is available I think this could be arranged. > although the > build environment should be updated first in case that's the source of the > trouble. Right, I will contact you off-list about this once I moved to toolchain versions to the ones recommended at the Windows build page. From gintautas at miliauskas.lt Tue Oct 28 20:49:58 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Tue, 28 Oct 2014 21:49:58 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: Can you try running the offending command with -v to see which step breaks? I tried running it locally under strace but did not see any file renames either. On Tue, Oct 28, 2014 at 6:37 PM, P?li G?bor J?nos wrote: > 2014-10-26 18:17 GMT+01:00 Gintautas Miliauskas : > > Maybe it's something silly like one process not having enough > > time to clean up and close the file before the next command tries to move > > it? I'd add a sleep before the mv and see if that helps. > > I am far from understanding all the details of the problem, but I > believe the move operation originates from the ghc-cabal binary. 
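A guess at the mechanism, not a statement about the actual ghc-cabal code: the setup-config3736.tmp / setup-config pair in that error looks like the usual atomic-update pattern -- write the new contents under a temporary name, then rename it over the old file. On Windows such a rename typically goes through MoveFileEx, which fails with "Access is denied" if anything (an antivirus or indexing service, or an earlier process that has not yet released its handle) still has either file open. A minimal sketch of the pattern, with a hypothetical helper name:

  import System.Directory (renameFile)
  import System.IO

  -- Hypothetical sketch, not the ghc-cabal implementation: write the new
  -- contents to a temporary file next to the target, then rename it into
  -- place so readers never observe a half-written file.
  writeConfigAtomically :: FilePath -> String -> IO ()
  writeConfigAtomically target contents = do
    let tmp = target ++ ".tmp"
    withFile tmp WriteMode (\h -> hPutStr h contents)
    renameFile tmp target   -- on Windows this is typically where MoveFileEx is hit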
For > what it is worth, I could not even find the place in the sources where > the referenced MoveFileEx is called: > > "inplace/bin/ghc-cabal.exe" configure libraries/binary dist-boot "" > --with-ghc="/ghc-7.6.3/bin/ghc.exe" > --with-ghc-pkg="/ghc-7.6.3/bin/ghc-pkg" > > --package-db=C:/msys32/home/ghc-builder/work/builder/tempbuild/build/libraries/bootstrapping.conf > --disable-library-for-ghci --enable-library-vanilla > --enable-library-for-ghci --disable-library-profiling --disable-shared > --with-hscolour="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/HsColour" > --configure-option=CFLAGS=" -U__i686 -march=i686 -fno-stack-protector > " --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " > --gcc-options=" -U__i686 -march=i686 -fno-stack-protector " > --constraint "binary == 0.7.1.0" --constraint "Cabal == 1.21.1.0" > --constraint "hpc == 0.6.0.2" --constraint "bin-package-db == > 0.0.0.0" --constraint "hoopl == 3.10.0.2" --constraint > "transformers == 0.4.1.0" > --with-gcc="C:/msys32/ghc-7.6.3/lib/../mingw/bin/gcc.exe" > --configure-option=--with-cc="C:/msys32/ghc-7.6.3/lib/../mingw/bin/gcc.exe" > --with-ar="C:/msys32/ghc-7.6.3/lib/../mingw/bin/ar.exe" > --with-alex="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/alex" > --with-happy="/c/Users/ghc-builder/AppData/Roaming/cabal/bin/happy" > Configuring binary-0.7.1.0... > ghc-cabal.exe: dist-boot\setup-config3736.tmp: MoveFileEx > "dist-boot\\setup-config3736.tmp" "dist-boot\\setup-config": > permission denied (Access is denied.) > libraries/binary/ghc.mk:3: recipe for target > 'libraries/binary/dist-boot/package-data.mk' failed > > > Does the build > > proceed if you try "make" without cleaning the repository, or does it > hang > > again at the same spot? > > It is fully reproducible by the subsequent make(1) invocations as > well, that is, yes, it hangs at the same spot. > > 2014-10-28 17:18 GMT+01:00 Gintautas Miliauskas > : > > Any luck with getting the Windows continuous builds back up and running? > > No, not yet. But I will keep posted on the results. > > > I'd be willing to try to help out if remote access is available > > I think this could be arranged. > > > although the > > build environment should be updated first in case that's the source of > the > > trouble. > > Right, I will contact you off-list about this once I moved to > toolchain versions to the ones recommended at the Windows build page. > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From stegeman at gmail.com Tue Oct 28 21:14:10 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Tue, 28 Oct 2014 22:14:10 +0100 Subject: GHC Weekly News - 10/24/2014 In-Reply-To: References: Message-ID: > > > - For the GHC 7.10 release, one of the major features we planned to > try and merge was DWARF debugging information. This is actually a > small component of larger ongoing work, including adding stack traces > to Haskell executables. While, unfortunately, not all the work can be > merged, we talked with Peter, and made a plan: our hope is to get > Phab:D169 merged, which lays all the groundwork, followed by DWARF > debugging information in the code generators. This will allow tools > like `gdb` or other extensible debuggers to analyze C-- IR accurately > for compiled executables. > Peter has written up a wiki page, available at SourceNotes, > describing the design. 
We hope to land all the core infrastructure in > Phab:D169 soon, followed by DWARF information for the Native Code > Generator, all for 7.10.1 > I'm currently working on some internal changes in GHCJS to prepare for GHC 7.10, support source maps [1] and make the optimizer more effective and reliable (smaller/faster code!). So this change, retaining more source code location information, would still be very useful, even if you don't manage to land the whole NCG/Cmm part (GHCJS only uses STG). luite [1] https://developer.chrome.com/devtools/docs/javascript-debugging#source-maps -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Oct 28 22:02:25 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 28 Oct 2014 22:02:25 +0000 Subject: Call Arity, oneShot or both In-Reply-To: <1414512091.1418.13.camel@joachim-breitner.de> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> <1414490932.1418.4.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> <1414507541.1418.8.camel@joachim-breitner.de> <1414512091.1418.13.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37F0BA@DB3PRD3001MB020.064d.mgd.msft.net> | But since it is plausible that there are cases out there where it might | help, even if just a little, we could go forward ?unless the | implementation becomes ugly. Yes, I'm content with that, provided it is well documented. In general, I think that when a programmer wants to be sure something is going to happen, it's better to allow them provide a pragma or annotation to say "this is what I want" rather than to rely on a complex and (who knows?) perhaps fragile analysis. So I'm all for keeping 'oneShot' in the definition of foldl or whatever, even if the analysis spots it. But with a Note please! Simon From simonpj at microsoft.com Tue Oct 28 22:02:26 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 28 Oct 2014 22:02:26 +0000 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> The people problem is tricky. At work, this would be the right time to do a video chat and at least see the faces of the other people involved. Would folks be interested in a Skype/Hangout sometime? It would be interesting to hear what interests / skills / resources / constraints we have between us. I think that?s a great idea, thanks. It?s easier to work with people with whom you have formed a personal relationship, and a video conf is a good way to do that. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Gintautas Miliauskas Sent: 28 October 2014 16:08 To: Austin Seipp Cc: kyra; ghc-devs at haskell.org Subject: Re: GHC on Windows (extended/broad discussion) Hey Austin, thanks for pushing this forward. It sure looks like Windows deserves more attention than it is getting. We definitely need a broader action plan. My thoughts were close to yours: 1. Push through the gcc compiler upgrade (D339) 2. Fix the Windows continuous builds. This is necessary to prevent regressions. 3. Make sure validate.sh results are clean on Windows. Tests that are known to be failing are not providing new information, they should be disabled and issues filed. 4. Triage the Windows bug list. 
I already made a few passes, but most of the bugs are far from trivial. I think we'll need to prioritize very aggressively to focus the limited resources. Are there any broader ideas for architecture-level changes related to GHC on Windows? The people problem is tricky. At work, this would be the right time to do a video chat and at least see the faces of the other people involved. Would folks be interested in a Skype/Hangout sometime? It would be interesting to hear what interests / skills / resources / constraints we have between us. On Sat, Oct 25, 2014 at 2:24 AM, Austin Seipp > wrote: Gintautas, Tamar, Roman, (CC'ing those on https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and Kyrill, who has helped us out much in the past) Thank you all for all your help with Windows recently. I apologize for not responding to some of your concerns sooner in the recent threads about tarballs, etc. First off, all your contributions are extremely welcome - GHC has had many talented Windows hackers in days long past, but these days this number has dwindled! Anyone who has an interest in GHC on Windows is in a place to make a big impact and help us. All the work Gintautas has done for example, will dramatically improve the ghc-tarballs scenario. On that note: Gintautas, I will get D339 merged in ASAP, as soon as I test it and make a download mirror for you. Haskell.org has an awesome new CDN setup, and once I implement https://downloads.haskell.org, it will be easy to update tarballs and serve them to mass amounts of users. However, beyond that, we still need more done. First off, if you can help, we can help you! We can make lots of Windows build bots for people on demand, so if you're in desperate need of disk space or your computers are a bit slow, we can help accommodate. Right now, we have nightly builds with Gabor's[1] build system, and soon, we're working on a Phabricator integration, which should be great - and hopefully reduce the amount of breakage substantially. I also notice there is a ticket list of Windows issues[2], and that's fantastic. After a quick glance, a lot of these tickets are old, duplicates, or could possibly be closed or fixed easily. A good first task for any new contributor would be to go through this list, and try to replicate some of them! And you can always ask me - I can certainly help you navigate GHC a bit to get somewhere. But there are still other things. The Win32 package for example, is dreadfully lacking in maintainership. While we merge patches, it would be great to see a Windows developer spearhead and clean it up - we could even make some improvements in GHC itself based on this. This would be an excellent opportunity to make a good impact in the broader ecosystem! Finally, we desperately need someone to consult with when we're up a creek. Are certain patches OK for Windows? What's the best way to fix certain bugs, or implement certain features? I feel like often we try to think about this, but it's a bit lonely when nobody else is there to help! I'm not sure how to fix this, other than encouraging things like doing active code reviews and helping grind out some patches. But at the very minimum, I'd just like to talk with you about things perhaps! So in summary - the work so far is grand, and we want to help you do more! And I'm sure everyone can help - there's always so much to do and so little time, we need to encourage it all we can. As Simon says: Upward and Onward! 
[1] http://haskell.inf.elte.hu/builders/ [2] https://ghc.haskell.org/trac/ghc/query?status=!closed&os=Windows&desc=1&order=id -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Oct 28 22:59:22 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 28 Oct 2014 23:59:22 +0100 Subject: Call Arity, oneShot or both In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37F0BA@DB3PRD3001MB020.064d.mgd.msft.net> References: <1414254274.1665.16.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37C733@DB3PRD3001MB020.064d.mgd.msft.net> <1414490932.1418.4.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37E4EA@DB3PRD3001MB020.064d.mgd.msft.net> <1414507541.1418.8.camel@joachim-breitner.de> <1414512091.1418.13.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F37F0BA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1414537162.1269.7.camel@joachim-breitner.de> Hi, Am Dienstag, den 28.10.2014, 22:02 +0000 schrieb Simon Peyton Jones: > So I'm all for keeping 'oneShot' in the definition of foldl or > whatever, even if the analysis spots it. But with a Note please! Three Notes actually! :-) I created three mouth-sized commits, each of which can be reviewed independently: https://phabricator.haskell.org/D391 https://phabricator.haskell.org/D392 https://phabricator.haskell.org/D393 You can also review the wip/oneShot branch if you prefer that or want to play around with it. The Wiki page will see some updates (e.g. moving ?Challenges? to ?Implementation?) when I get to merge these. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From david.feuer at gmail.com Wed Oct 29 00:24:19 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 28 Oct 2014 20:24:19 -0400 Subject: Is USE_REPORT_PRELUDE still useful? Message-ID: A lot of code in GHC.List and perhaps elsewhere compiles differently depending on whether USE_REPORT_PRELUDE is defined. Not all code differing from the Prelude implementation. Furthermore, I don't know to what extent, if any, such code actually works these days. Some of it certainly was not usable for *years* because GHC.List did not import GHC.Num. Should we 1. Convert all those code blocks to comments? 2. Go through everything, check it to make sure it's written as in the Prelude or has an alternative block, and then actually set up all the infrastructure so that works? 3. Leave it alone? My general inclination is to go to 1. I don't *really* like option 3 for four reasons: a. It leaves untouched code to rot b. It forces us to run CPP on files that otherwise have no need for it. c. It interrupts the flow of the code with stuff that *looks* like real code (and is highlighted as such) but is actually inactive. d. It's not hard to accidentally move code into or out of the #ifdef blocks. -------------- next part -------------- An HTML attachment was scrubbed... 
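To illustrate what these blocks look like, here is the shape of one of them, approximately as it appears in GHC.List (quoted from memory, so the details may differ slightly from the current source):

  reverse :: [a] -> [a]
  #ifdef USE_REPORT_PRELUDE
  reverse = foldl (flip (:)) []
  #else
  reverse l = rev l []
    where
      rev []     a = a
      rev (x:xs) a = rev xs (x:a)
  #endif

The Report definition in the first branch is essentially documentation: it is not what normally gets compiled, which is exactly how it can bit-rot unnoticed, as described above.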
URL: From mail at joachim-breitner.de Wed Oct 29 07:21:09 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 29 Oct 2014 08:21:09 +0100 Subject: arc land changes author Message-ID: <1414567269.1346.1.camel@joachim-breitner.de> Hi, I was just about to apply a DR by David. First I ran $ arc patch D390 which put me on a feature branch. Then I reworded the commit with $ git commit --amend and then I tried to land it with $ arc land Luckily I also passed --hold... It even asked me if I want to land the commit although I am not the author: This branch has revision 'D390: Reorder GHC.List; fix performance regressions' but you are not the author. Land this revision by dfeuer? [y/N] y But it still put me in the author field: $ git show HEAD commit a387c359b99667ce55016de3816b5d9873a155db Author: Joachim Breitner Date: Wed Oct 29 08:17:05 2014 +0100 Reorder GHC.List; fix performance regressions Summary: Rearrange some oddly placed code. (and undid my commit message changes....) Did I do something wrong? Anyways, it said something along the lines of Cleaning up feature branch... (Use `git checkout -b 'arcpatch-D390' 'a3ceb0e7f44d53f728fb8bd0dfb4c97297818029'` if you want it back.) so with $ git commit --amend -C a3ceb0e7f44d53f728fb8bd0dfb4c97297818029 I was able to get the edited commit message with the right author back. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Wed Oct 29 08:45:43 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 29 Oct 2014 08:45:43 +0000 Subject: Is USE_REPORT_PRELUDE still useful? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F37F88A@DB3PRD3001MB020.064d.mgd.msft.net> Adding core-libraries, whose bailiwick this is. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David Feuer Sent: 29 October 2014 00:24 To: ghc-devs Subject: Is USE_REPORT_PRELUDE still useful? A lot of code in GHC.List and perhaps elsewhere compiles differently depending on whether USE_REPORT_PRELUDE is defined. Not all code differing from the Prelude implementation. Furthermore, I don't know to what extent, if any, such code actually works these days. Some of it certainly was not usable for *years* because GHC.List did not import GHC.Num. Should we 1. Convert all those code blocks to comments? 2. Go through everything, check it to make sure it's written as in the Prelude or has an alternative block, and then actually set up all the infrastructure so that works? 3. Leave it alone? My general inclination is to go to 1. I don't *really* like option 3 for four reasons: a. It leaves untouched code to rot b. It forces us to run CPP on files that otherwise have no need for it. c. It interrupts the flow of the code with stuff that *looks* like real code (and is highlighted as such) but is actually inactive. d. It's not hard to accidentally move code into or out of the #ifdef blocks. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From malcolm.wallace at me.com Wed Oct 29 09:18:29 2014 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Wed, 29 Oct 2014 09:18:29 +0000 Subject: Is USE_REPORT_PRELUDE still useful? In-Reply-To: References: Message-ID: On 29 Oct 2014, at 00:24, David Feuer wrote: > A lot of code in GHC.List and perhaps elsewhere compiles differently depending on whether USE_REPORT_PRELUDE is defined. Not all code differing from the Prelude implementation. Furthermore, I don't know to what extent, if any, such code actually works these days. Some of it certainly was not usable for *years* because GHC.List did not import GHC.Num. I'm not completely certain, but I have a vague feeling that the Haskell Report appendices that define the standard libraries might be auto-generated (LaTeX/HTML/etc) from the base library sources, and might use these #ifdefs to get the right version of the code. Regards, Malcolm From lonetiger at gmail.com Wed Oct 29 09:59:18 2014 From: lonetiger at gmail.com (Phyx) Date: Wed, 29 Oct 2014 10:59:18 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi All, Sorry for the late reply, I need to adjust my mail filtering rules to let these mails go to my inbox as well. I have also taken a few passes through the windows trac and there are a lot of issues with someone assigned to them but with no activity so a while on them. I was also wondering the state of these. > The Win32 package for example, is dreadfully lacking in maintainership. While we merge patches, it would be great to see a Windows developer spearhead and clean it up A while back I was looking at adding some functionality to this package, but could never figure out which one was actually being used. I think there are multiple repositories out there. > Would folks be interested in a Skype/Hangout sometime I would be interested in this, though timezones may prove to be an issue, or not. Regards, Tamar On Tue, Oct 28, 2014 at 11:02 PM, Simon Peyton Jones wrote: > The people problem is tricky. At work, this would be the right time to > do a video chat and at least see the faces of the other people involved. > Would folks be interested in a Skype/Hangout sometime? It would be > interesting to hear what interests / skills / resources / constraints we > have between us. > > > > I think that?s a great idea, thanks. It?s easier to work with people with > whom you have formed a personal relationship, and a video conf is a good > way to do that. > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Gintautas > Miliauskas > *Sent:* 28 October 2014 16:08 > *To:* Austin Seipp > *Cc:* kyra; ghc-devs at haskell.org > *Subject:* Re: GHC on Windows (extended/broad discussion) > > > > Hey Austin, > > > > thanks for pushing this forward. It sure looks like Windows deserves more > attention than it is getting. > > > > We definitely need a broader action plan. My thoughts were close to yours: > > > > 1. Push through the gcc compiler upgrade (D339) > > 2. Fix the Windows continuous builds. This is necessary to prevent > regressions. > > 3. Make sure validate.sh results are clean on Windows. Tests that are > known to be failing are not providing new information, they should be > disabled and issues filed. > > 4. Triage the Windows bug list. 
I already made a few passes, but most of > the bugs are far from trivial. I think we'll need to prioritize very > aggressively to focus the limited resources. > > > > Are there any broader ideas for architecture-level changes related to GHC > on Windows? > > > > The people problem is tricky. At work, this would be the right time to do > a video chat and at least see the faces of the other people involved. Would > folks be interested in a Skype/Hangout sometime? It would be interesting to > hear what interests / skills / resources / constraints we have between us. > > > > > > On Sat, Oct 25, 2014 at 2:24 AM, Austin Seipp > wrote: > > Gintautas, Tamar, Roman, > > (CC'ing those on > https://ghc.haskell.org/trac/ghc/wiki/WindowsTaskForce, and Kyrill, > who has helped us out much in the past) > > Thank you all for all your help with Windows recently. I apologize for > not responding to some of your concerns sooner in the recent threads > about tarballs, etc. > > First off, all your contributions are extremely welcome - GHC has had > many talented Windows hackers in days long past, but these days this > number has dwindled! Anyone who has an interest in GHC on Windows is > in a place to make a big impact and help us. All the work Gintautas > has done for example, will dramatically improve the ghc-tarballs > scenario. > > On that note: Gintautas, I will get D339 merged in ASAP, as soon as I > test it and make a download mirror for you. Haskell.org has an awesome > new CDN setup, and once I implement https://downloads.haskell.org, it > will be easy to update tarballs and serve them to mass amounts of > users. > > However, beyond that, we still need more done. First off, if you can > help, we can help you! We can make lots of Windows build bots for > people on demand, so if you're in desperate need of disk space or your > computers are a bit slow, we can help accommodate. > > Right now, we have nightly builds with Gabor's[1] build system, and > soon, we're working on a Phabricator integration, which should be > great - and hopefully reduce the amount of breakage substantially. > > I also notice there is a ticket list of Windows issues[2], and that's > fantastic. After a quick glance, a lot of these tickets are old, > duplicates, or could possibly be closed or fixed easily. A good first > task for any new contributor would be to go through this list, and try > to replicate some of them! And you can always ask me - I can certainly > help you navigate GHC a bit to get somewhere. > > But there are still other things. The Win32 package for example, is > dreadfully lacking in maintainership. While we merge patches, it would > be great to see a Windows developer spearhead and clean it up - we > could even make some improvements in GHC itself based on this. This > would be an excellent opportunity to make a good impact in the broader > ecosystem! > > Finally, we desperately need someone to consult with when we're up a > creek. Are certain patches OK for Windows? What's the best way to fix > certain bugs, or implement certain features? I feel like often we try > to think about this, but it's a bit lonely when nobody else is there > to help! I'm not sure how to fix this, other than encouraging things > like doing active code reviews and helping grind out some patches. But > at the very minimum, I'd just like to talk with you about things > perhaps! > > So in summary - the work so far is grand, and we want to help you do > more! 
And I'm sure everyone can help - there's always so much to do > and so little time, we need to encourage it all we can. > > As Simon says: Upward and Onward! > > [1] http://haskell.inf.elte.hu/builders/ > [2] > https://ghc.haskell.org/trac/ghc/query?status=!closed&os=Windows&desc=1&order=id > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > > > > > -- > Gintautas Miliauskas > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Wed Oct 29 10:04:14 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 29 Oct 2014 11:04:14 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: (Phyx's message of "Wed, 29 Oct 2014 10:59:18 +0100") References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <871tpr42g1.fsf@gmail.com> On 2014-10-29 at 10:59:18 +0100, Phyx wrote: [...] >> The Win32 package for example, is dreadfully lacking in >> maintainership. While we merge patches, it would be great to see a >> Windows developer spearhead and clean it up > > A while back I was looking at adding some functionality to this > package, but could never figure out which one was actually being > used. I think there are multiple repositories out there. I'm not sure which multiple repositories you have seen, but http://hackage.haskell.org/package/Win32 points quite clearly to https://github.com/haskell/win32 and that's the official upstream repository GHC tracks (via a locally mirrored repo at git.haskell.org) Cheers, hvr From gintautas at miliauskas.lt Wed Oct 29 11:36:25 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Wed, 29 Oct 2014 12:36:25 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: <871tpr42g1.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <871tpr42g1.fsf@gmail.com> Message-ID: By the way, regarding that repository, could someone merge my pull request ? In general, it's a bit frustrating how a lot of the patches in the Phabricator queue seem to take a while to get noticed. Don't take it personally, I'm just sharing my impressions, but I do feel it's taking away some momentum - not good for me & other contributors, and not good for the project. I know reviewers are understaffed, maybe consider spreading commit rights a bit more widely until the situation improves? On Wed, Oct 29, 2014 at 11:04 AM, Herbert Valerio Riedel wrote: > On 2014-10-29 at 10:59:18 +0100, Phyx wrote: > > [...] > > >> The Win32 package for example, is dreadfully lacking in > >> maintainership. While we merge patches, it would be great to see a > >> Windows developer spearhead and clean it up > > > > A while back I was looking at adding some functionality to this > > package, but could never figure out which one was actually being > > used. I think there are multiple repositories out there. > > I'm not sure which multiple repositories you have seen, but > > http://hackage.haskell.org/package/Win32 > > points quite clearly to > > https://github.com/haskell/win32 > > and that's the official upstream repository GHC tracks (via a locally > mirrored repo at git.haskell.org) > > Cheers, > hvr > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Wed Oct 29 12:47:13 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 29 Oct 2014 13:47:13 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <871tpr42g1.fsf@gmail.com> Message-ID: On Wed, Oct 29, 2014 at 12:36 PM, Gintautas Miliauskas < gintautas at miliauskas.lt> wrote: > By the way, regarding that repository, could someone merge my pull request > ? > > ?The problem here is that the official maintainer according to? http://www.haskell.org/haskellwiki/Library_submissions#The_Core_Libraries ?is Bryan, so he's the one supposed to pull the trigger on pull-requests (unless he's ok with GHC HQ pushing commits straight to `master` or granting the GHC Windows Task Force officially co-maintership of the Win32 package) -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Oct 29 13:13:10 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 29 Oct 2014 13:13:10 +0000 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <871tpr42g1.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F380ABD@DB3PRD3001MB020.064d.mgd.msft.net> I bet Bryan would willingly cede maintainership of Win32. I?m copying him. Bryan? From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] Sent: 29 October 2014 12:47 To: Gintautas Miliauskas Cc: Phyx; Simon Peyton Jones; kyra; ghc-devs at haskell.org Subject: Re: GHC on Windows (extended/broad discussion) On Wed, Oct 29, 2014 at 12:36 PM, Gintautas Miliauskas > wrote: By the way, regarding that repository, could someone merge my pull request? ?The problem here is that the official maintainer according to? http://www.haskell.org/haskellwiki/Library_submissions#The_Core_Libraries ?is Bryan, so he's the one supposed to pull the trigger on pull-requests (unless he's ok with GHC HQ pushing commits straight to `master` or granting the GHC Windows Task Force officially co-maintership of the Win32 package) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Wed Oct 29 14:25:46 2014 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 29 Oct 2014 10:25:46 -0400 Subject: [core libraries] RE: Is USE_REPORT_PRELUDE still useful? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37F88A@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F37F88A@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I could definitely see moving the code to comments. Sent from my iPad On Oct 29, 2014, at 4:45 AM, Simon Peyton Jones wrote: > Adding core-libraries, whose bailiwick this is. > > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David Feuer > Sent: 29 October 2014 00:24 > To: ghc-devs > Subject: Is USE_REPORT_PRELUDE still useful? > > A lot of code in GHC.List and perhaps elsewhere compiles differently depending on whether USE_REPORT_PRELUDE is defined. Not all code differing from the Prelude implementation. Furthermore, I don't know to what extent, if any, such code actually works these days. Some of it certainly was not usable for *years* because GHC.List did not import GHC.Num. Should we > > 1. Convert all those code blocks to comments? > > 2. 
Go through everything, check it to make sure it's written as in the Prelude or has an alternative block, and then actually set up all the infrastructure so that works? > > 3. Leave it alone? > > My general inclination is to go to 1. > > > > I don't *really* like option 3 for four reasons: > > a. It leaves untouched code to rot > > b. It forces us to run CPP on files that otherwise have no need for it. > > c. It interrupts the flow of the code with stuff that *looks* like real code (and is highlighted as such) but is actually inactive. > > d. It's not hard to accidentally move code into or out of the #ifdef blocks. > > -- > You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group. > To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Wed Oct 29 14:31:41 2014 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 29 Oct 2014 10:31:41 -0400 Subject: Is USE_REPORT_PRELUDE still useful? In-Reply-To: References: Message-ID: Ack! That -is- a somewhat scary invisible backdoor dependency. :/ We ripped out a lot of unused and untestable ifdefs for other compilers from base a couple of years back, I'd be curious if this was already affected. Any idea where the code for the report generation lies? -Edward On Oct 29, 2014, at 5:18 AM, Malcolm Wallace wrote: > > On 29 Oct 2014, at 00:24, David Feuer wrote: > >> A lot of code in GHC.List and perhaps elsewhere compiles differently depending on whether USE_REPORT_PRELUDE is defined. Not all code differing from the Prelude implementation. Furthermore, I don't know to what extent, if any, such code actually works these days. Some of it certainly was not usable for *years* because GHC.List did not import GHC.Num. > > I'm not completely certain, but I have a vague feeling that the Haskell Report appendices that define the standard libraries might be auto-generated (LaTeX/HTML/etc) from the base library sources, and might use these #ifdefs to get the right version of the code. > > Regards, > Malcolm > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Wed Oct 29 15:52:01 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 29 Oct 2014 11:52:01 -0400 Subject: emailing to Trac? Message-ID: Hi devs, There's a feature I've wanted for some time, and I don't see a good reason not to ask: Is it possible to email comments into Trac? My commute involves a ~40 minute train ride, which I generally can use productively despite no connection. One of the biggest annoyances, though, is that I can't respond to Trac comments while on the train. It would be great if there were an email address to write to that would post a comment. Of course, it's easy enough for me to write and locally cache a response, but then I have to remember to flush the cache and it's all a bit annoying. Does Trac have this feature? Is it perhaps already enabled but not advertised? Thanks! Richard From ezyang at mit.edu Wed Oct 29 18:50:36 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 29 Oct 2014 11:50:36 -0700 Subject: emailing to Trac? 
In-Reply-To: References: Message-ID: <1414608611-sup-2244@sabre> I guess maybe we could install this plugin: https://oss.trac.surfsara.nl/email2trac Edward Excerpts from Richard Eisenberg's message of 2014-10-29 08:52:01 -0700: > Hi devs, > > There's a feature I've wanted for some time, and I don't see a good reason not to ask: Is it possible to email comments into Trac? My commute involves a ~40 minute train ride, which I generally can use productively despite no connection. One of the biggest annoyances, though, is that I can't respond to Trac comments while on the train. It would be great if there were an email address to write to that would post a comment. > > Of course, it's easy enough for me to write and locally cache a response, but then I have to remember to flush the cache and it's all a bit annoying. Does Trac have this feature? Is it perhaps already enabled but not advertised? > > Thanks! > Richard From alan.zimm at gmail.com Wed Oct 29 19:04:44 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 29 Oct 2014 21:04:44 +0200 Subject: File Header Pragmas in Lexer Message-ID: As part of my ongoing efforts to round-trip source code, I have bumped into an issue around file header pragmas, e.g. {-# LANGUAGE PatternSynonyms #-} {-# Language DeriveFoldable #-} {-# options_ghc -w #-} In normal mode, when not called from headerInfo, the file header pragmas are lexed enough to generate a warning about an invalid pragma if enabled, and then lexed to completion and returned as an `ITblockComment` if `Opt_KeepRawTokenStream` is enabled. The relevant Alex rule is <0> { -- In the "0" mode we ignore these pragmas "{-#" $whitechar* $pragmachar+ / { known_pragma fileHeaderPrags } { nested_comment lexToken } } The problem is that the tokens returned are ITblockComment " PatternSynonyms #" ITblockComment " DeriveFoldable #" ITblockComment " -w #" It is not possible to reproduce the original comment from these. It looks like nested comment ignores what has been lexed so far nested_comment :: P (RealLocated Token) -> Action nested_comment cont span _str _len = do ... So my question is, is there any way to make the returned comment include the prefix part? Perhaps be a specific variation of nested_comment that uses str and len. Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Wed Oct 29 19:54:14 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 29 Oct 2014 21:54:14 +0200 Subject: File Header Pragmas in Lexer In-Reply-To: References: Message-ID: Ok, to answer my own question, I changed nested_comment to nested_comment :: P (RealLocated Token) -> Action nested_comment cont span buf len = do input <- getInput go (reverse $ lexemeToString buf len) (1::Int) input It now starts off with the already lexed part. On Wed, Oct 29, 2014 at 9:04 PM, Alan & Kim Zimmerman wrote: > As part of my ongoing efforts to round-trip source code, I have bumped > into an issue around file header pragmas, e.g. > > {-# LANGUAGE PatternSynonyms #-} > {-# Language DeriveFoldable #-} > {-# options_ghc -w #-} > > > In normal mode, when not called from headerInfo, the file header pragmas > are lexed enough to generate a warning about an invalid pragma if enabled, > and then lexed to completion and returned as an `ITblockComment` if > `Opt_KeepRawTokenStream` is enabled. 
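Restating the changed function with comments, to spell out why this fixes the problem -- the code is as posted above, the comments are an interpretation:

  nested_comment :: P (RealLocated Token) -> Action
  nested_comment cont span buf len = do
    input <- getInput
    -- Seed the comment accumulator with the text the rule has already
    -- matched ("{-# LANGUAGE", "{-# options_ghc", ...), reversed because
    -- the accumulator is built back to front; 'go' then consumes up to
    -- the closing "-}", so the whole pragma ends up in the returned
    -- ITblockComment rather than just its tail.
    go (reverse $ lexemeToString buf len) (1::Int) input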
> > The relevant Alex rule is > > <0> { > -- In the "0" mode we ignore these pragmas > "{-#" $whitechar* $pragmachar+ / { known_pragma fileHeaderPrags } > { nested_comment lexToken } > } > > The problem is that the tokens returned are > > ITblockComment " PatternSynonyms #" > ITblockComment " DeriveFoldable #" > ITblockComment " -w #" > > It is not possible to reproduce the original comment from these. > > > It looks like nested comment ignores what has been lexed so far > > nested_comment :: P (RealLocated Token) -> Action > nested_comment cont span _str _len = do > ... > > So my question is, is there any way to make the returned comment include > the prefix part? Perhaps be a specific variation of nested_comment that > uses str and len. > > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas at miliauskas.lt Wed Oct 29 21:13:30 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Wed, 29 Oct 2014 22:13:30 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F380ABD@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <871tpr42g1.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF3F380ABD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: No need to cede maintainership, but I found it to be good practice to have additional people with commit rights to facilitate prompt submission of cleanups and minor changes that do not have architectural impact. On Wed, Oct 29, 2014 at 2:13 PM, Simon Peyton Jones wrote: > I bet Bryan would willingly cede maintainership of Win32. I?m copying > him. Bryan? > > > > *From:* Herbert Valerio Riedel [mailto:hvriedel at gmail.com] > *Sent:* 29 October 2014 12:47 > *To:* Gintautas Miliauskas > *Cc:* Phyx; Simon Peyton Jones; kyra; ghc-devs at haskell.org > *Subject:* Re: GHC on Windows (extended/broad discussion) > > > > > > On Wed, Oct 29, 2014 at 12:36 PM, Gintautas Miliauskas < > gintautas at miliauskas.lt> wrote: > > By the way, regarding that repository, could someone merge my pull > request ? > > > > > > ?The problem here is that the official maintainer according to? > > > > http://www.haskell.org/haskellwiki/Library_submissions#The_Core_Libraries > > > > ?is Bryan, so he's the one supposed to pull the trigger on pull-requests > (unless he's ok with GHC HQ pushing commits straight to `master` or > granting the GHC Windows Task Force officially co-maintership of the Win32 > package) > > > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gintautas at miliauskas.lt Wed Oct 29 22:36:48 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Wed, 29 Oct 2014 23:36:48 +0100 Subject: Windows build broken in Linker.c In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> <544A7161.1050705@gmail.com> Message-ID: https://phabricator.haskell.org/D400 On Mon, Oct 27, 2014 at 1:34 PM, Gintautas Miliauskas < gintautas at miliauskas.lt> wrote: > FYI, after the fix ghc builds again on Windows 32-bit, but validate.sh > fails: > > rts\Linker.c: In function 'allocateImageAndTrampolines': > > rts\Linker.c:3657:31: > error: unused parameter 'member_name' [-Werror=unused-parameter] > pathchar* arch_name, char* member_name, > ^ > cc1.exe: all warnings being treated as errors > rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed > > What's the usual way to deal with this type of warning in ghc land? > > > On Fri, Oct 24, 2014 at 5:37 PM, Austin Seipp > wrote: > >> Gah, this slipped my mind. On it. >> >> On Fri, Oct 24, 2014 at 10:33 AM, Simon Marlow >> wrote: >> > I sent a patch to Austin to validate+commit earlier this week. >> > >> > On 24/10/2014 15:08, Gintautas Miliauskas wrote: >> >> >> >> This is still not fixed, right? I've been working on the mingw gcc >> >> upgrade and testing on 32 bit, and this failure got me running in >> >> circles until I discovered that baseline was broken too... >> >> >> >> On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow > >> > wrote: >> >> >> >> I was working on a fix yesterday but ran out of time. Frankly this >> >> code is a nightmare, every time I touch it it breaks on some >> >> platform - this time I validated on 64 bit Windows but not 32. >> Aargh >> >> indeed. >> >> >> >> On 16 Oct 2014 14:32, "Austin Seipp" > >> > wrote: >> >> >> >> I see what's going on and am fixing it... The code broke 32-bit >> >> due to >> >> #ifdefery, but I think it can be removed, perhaps (which would >> be >> >> preferable). >> >> >> >> On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones >> >> > wrote: >> >> > Simon >> >> > >> >> > Aargh! I think the Windows build is broken again. >> >> > >> >> > I think this is your commit 5300099ed >> >> > >> >> > Admittedly this is on a branch I?m working on, but it?s up >> to >> >> date with >> >> > HEAD. And I have no touched Linker.c! >> >> > >> >> > Any ideas? >> >> > >> >> > Simon >> >> > >> >> > >> >> > >> >> > rts\Linker.c: In function 'allocateImageAndTrampolines': >> >> > >> >> > >> >> > >> >> > rts\Linker.c:3708:19: >> >> > >> >> > error: 'arch_name' undeclared (first use in this >> function) >> >> > >> >> > >> >> > >> >> > rts\Linker.c:3708:19: >> >> > >> >> > note: each undeclared identifier is reported only once >> >> for each >> >> > function it appears in >> >> > >> >> > rts/ghc.mk:236 : recipe for target >> >> 'rts/dist/build/Linker.o' failed >> >> > >> >> > make[1]: *** [rts/dist/build/Linker.o] Error 1 >> >> > >> >> > make[1]: *** Waiting for unfinished jobs.... 
>> >> > >> >> > >> >> > _______________________________________________ >> >> > ghc-devs mailing list >> >> > ghc-devs at haskell.org >> >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > >> >> >> >> >> >> >> >> -- >> >> Regards, >> >> >> >> Austin Seipp, Haskell Consultant >> >> Well-Typed LLP, http://www.well-typed.com/ >> >> >> >> >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> >> >> >> >> >> -- >> >> Gintautas Miliauskas >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> > > > > -- > Gintautas Miliauskas > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Wed Oct 29 23:17:05 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 29 Oct 2014 18:17:05 -0500 Subject: Windows build broken in Linker.c In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F35E9E8@DB3PRD3001MB020.064d.mgd.msft.net> <54405C16.8000901@gmail.com> <544A7161.1050705@gmail.com> Message-ID: Sorry about this; merged - 208a0c207c1001da0fe63e9640e2a7e0e11c4aff (I actually only built the RTS to fix that build failure... I didn't ./validate with -Wall enabled. I'm to blame here!) On Wed, Oct 29, 2014 at 5:36 PM, Gintautas Miliauskas wrote: > https://phabricator.haskell.org/D400 > > On Mon, Oct 27, 2014 at 1:34 PM, Gintautas Miliauskas > wrote: >> >> FYI, after the fix ghc builds again on Windows 32-bit, but validate.sh >> fails: >> >> rts\Linker.c: In function 'allocateImageAndTrampolines': >> >> rts\Linker.c:3657:31: >> error: unused parameter 'member_name' [-Werror=unused-parameter] >> pathchar* arch_name, char* member_name, >> ^ >> cc1.exe: all warnings being treated as errors >> rts/ghc.mk:236: recipe for target 'rts/dist/build/Linker.o' failed >> >> What's the usual way to deal with this type of warning in ghc land? >> >> >> On Fri, Oct 24, 2014 at 5:37 PM, Austin Seipp >> wrote: >>> >>> Gah, this slipped my mind. On it. >>> >>> On Fri, Oct 24, 2014 at 10:33 AM, Simon Marlow >>> wrote: >>> > I sent a patch to Austin to validate+commit earlier this week. >>> > >>> > On 24/10/2014 15:08, Gintautas Miliauskas wrote: >>> >> >>> >> This is still not fixed, right? I've been working on the mingw gcc >>> >> upgrade and testing on 32 bit, and this failure got me running in >>> >> circles until I discovered that baseline was broken too... >>> >> >>> >> On Fri, Oct 17, 2014 at 2:00 AM, Simon Marlow >> >> > wrote: >>> >> >>> >> I was working on a fix yesterday but ran out of time. Frankly this >>> >> code is a nightmare, every time I touch it it breaks on some >>> >> platform - this time I validated on 64 bit Windows but not 32. >>> >> Aargh >>> >> indeed. >>> >> >>> >> On 16 Oct 2014 14:32, "Austin Seipp" >> >> > wrote: >>> >> >>> >> I see what's going on and am fixing it... The code broke >>> >> 32-bit >>> >> due to >>> >> #ifdefery, but I think it can be removed, perhaps (which would >>> >> be >>> >> preferable). >>> >> >>> >> On Thu, Oct 16, 2014 at 3:43 PM, Simon Peyton Jones >>> >> > wrote: >>> >> > Simon >>> >> > >>> >> > Aargh! I think the Windows build is broken again. 
>>> >> > >>> >> > I think this is your commit 5300099ed >>> >> > >>> >> > Admittedly this is on a branch I?m working on, but it?s up >>> >> to >>> >> date with >>> >> > HEAD. And I have no touched Linker.c! >>> >> > >>> >> > Any ideas? >>> >> > >>> >> > Simon >>> >> > >>> >> > >>> >> > >>> >> > rts\Linker.c: In function 'allocateImageAndTrampolines': >>> >> > >>> >> > >>> >> > >>> >> > rts\Linker.c:3708:19: >>> >> > >>> >> > error: 'arch_name' undeclared (first use in this >>> >> function) >>> >> > >>> >> > >>> >> > >>> >> > rts\Linker.c:3708:19: >>> >> > >>> >> > note: each undeclared identifier is reported only once >>> >> for each >>> >> > function it appears in >>> >> > >>> >> > rts/ghc.mk:236 : recipe for target >>> >> 'rts/dist/build/Linker.o' failed >>> >> > >>> >> > make[1]: *** [rts/dist/build/Linker.o] Error 1 >>> >> > >>> >> > make[1]: *** Waiting for unfinished jobs.... >>> >> > >>> >> > >>> >> > _______________________________________________ >>> >> > ghc-devs mailing list >>> >> > ghc-devs at haskell.org >>> >> > http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> > >>> >> >>> >> >>> >> >>> >> -- >>> >> Regards, >>> >> >>> >> Austin Seipp, Haskell Consultant >>> >> Well-Typed LLP, http://www.well-typed.com/ >>> >> >>> >> >>> >> _______________________________________________ >>> >> ghc-devs mailing list >>> >> ghc-devs at haskell.org >>> >> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >>> >> >>> >> >>> >> >>> >> -- >>> >> Gintautas Miliauskas >>> > >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >>> >>> -- >>> Regards, >>> >>> Austin Seipp, Haskell Consultant >>> Well-Typed LLP, http://www.well-typed.com/ >> >> >> >> >> -- >> Gintautas Miliauskas > > > > > -- > Gintautas Miliauskas -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From pali.gabor at gmail.com Wed Oct 29 23:18:51 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 30 Oct 2014 00:18:51 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: 2014-10-28 21:49 GMT+01:00 Gintautas Miliauskas : > Can you try running the offending command with -v to see which step > breaks? I have tried it, even together with building the GHC sources with a recent toolchain, but I did not get much forward. > I tried running it locally under strace but did not see any file renames > either. Although, I think I managed to find the place where some renaming happens. That is `writeFileAtomic` in Cabal's Distribution.Simple.Utils module [1]. The bin-package-db library has a patched version of this function [2] that has a workaround for Windows. After incorporating this change in Cabal, I was able to pass the previously problematic point in the build. Unfortunately, this was not enough for the complete build, as a similar error (with DeleteFile that time) was raised. [1] https://github.com/haskell/cabal/blob/master/Cabal/Distribution/Simple/Utils.hs#L1032 [2] https://github.com/ghc/ghc/blob/master/libraries/bin-package-db/GHC/PackageDb.hs#L252 From jarl.flaten at gmail.com Thu Oct 30 07:11:58 2014 From: jarl.flaten at gmail.com (Jarl Gunnar Flaten) Date: Thu, 30 Oct 2014 08:11:58 +0100 Subject: Problems compiling with llvm-3.5.0-2 on ARM Message-ID: Hello, (cf. 
reddit thread) I am trying to compile a simple "hello world" program (test.hs). When compiling I am notified: > [1 of 1] Compiling Main ( test.hs, test.o ) > You are using a new version of LLVM that hasn't been tested yet! > We will try though... > Linking test ... But it seems to finish compiling fine, without further errors. However, running the program (./test): > test: schedule: re-entered unsafely. > Perhaps a 'foreign import unsafe' should be 'safe'? It isn't working. I'm not familiar enough with either LLVM or Haskell to troubleshoot from either of these; I don't even know if the two messages are related. Any ideas for solutions? Any commands or tests I can perform to give you more information? Thanks, Jarl Gunnar T. Flaten -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Thu Oct 30 08:13:05 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 30 Oct 2014 09:13:05 +0100 Subject: RFC: Properly stated origin of code contributions Message-ID: <87egtq6kmm.fsf@gmail.com> Hi, GHC's Git history has (mostly) a good track record of having properly attributed authorship information in the recent past; Some time ago I've even augmented the .mailmap file to fix-up some of the pre-Git meta-data which had mangled author/committer meta-data (try 'git shortlog -sn' if you're curious) However, I just noticed that http://git.haskell.org/ghc.git/commitdiff/322810e32cb18d7749e255937437ff2ef99dca3f landed recently, which did change a significant amount of code, but at the same time the author looks like a pseudonym to me (and apologies if I'm wrong). Other important projects such as Linux or Samba, just to name two examples, reject contributions w/o a clearly stated origin, and explicitly reject anonymous/pseudonym contributions (as part of their "Developer's Certificate of Origin" policy[1] which involves a bit more than merely stating the real name) I believe the GHC project should consider setting some reasonable ground-rules for contributions to be on the safe side in order to avoid potential copyright (or similiar) issues in the future, as well as giving confidence to commercial users that precautions are taken to avoid such issues. Comments? Cheers, hvr [1]: See http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches From simonpj at microsoft.com Thu Oct 30 08:43:55 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 30 Oct 2014 08:43:55 +0000 Subject: Properly stated origin of code contributions In-Reply-To: <87egtq6kmm.fsf@gmail.com> References: <87egtq6kmm.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F383005@DB3PRD3001MB020.064d.mgd.msft.net> | I believe the GHC project should consider setting some reasonable | ground-rules for contributions to be on the safe side in order to | avoid potential copyright (or similiar) issues in the future, as well | as giving confidence to commercial users that precautions are taken to | avoid such issues. I agree with that. We could list the policy on https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions, or a page linked from there. One possibility would be to add a "Contributors" section to the GHC Team page https://ghc.haskell.org/trac/ghc/wiki/TeamGHC, and ask anyone submitting a patch to add an entry describing themselves (including their real name) to that page. By "contributor" I mean someone who is submitting a patch but is not yet a committer. 
We could have separate sub-pages for committers and contributors. That would give a way to celebrate contributors, as well as a way to identify them. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 30 October 2014 08:13 | To: ghc-devs | Subject: RFC: Properly stated origin of code contributions | | Hi, | | GHC's Git history has (mostly) a good track record of having properly | attributed authorship information in the recent past; Some time ago | I've even augmented the .mailmap file to fix-up some of the pre-Git | meta-data which had mangled author/committer meta-data (try 'git | shortlog -sn' if you're curious) | | However, I just noticed that | | | http://git.haskell.org/ghc.git/commitdiff/322810e32cb18d7749e255937437 | ff2ef99dca3f | | landed recently, which did change a significant amount of code, but at | the same time the author looks like a pseudonym to me (and apologies | if I'm wrong). | | Other important projects such as Linux or Samba, just to name two | examples, reject contributions w/o a clearly stated origin, and | explicitly reject anonymous/pseudonym contributions (as part of their | "Developer's Certificate of Origin" policy[1] which involves a bit | more than merely stating the real name) | | I believe the GHC project should consider setting some reasonable | ground-rules for contributions to be on the safe side in order to | avoid potential copyright (or similiar) issues in the future, as well | as giving confidence to commercial users that precautions are taken to | avoid such issues. | | Comments? | | Cheers, | hvr | | [1]: See | http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Do | cumentation/SubmittingPatches | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Thu Oct 30 09:00:49 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 30 Oct 2014 11:00:49 +0200 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <87egtq6kmm.fsf@gmail.com> References: <87egtq6kmm.fsf@gmail.com> Message-ID: <201410301000.49331.jan.stolarek@p.lodz.pl> > Comments? +1 However: > I believe the GHC project should consider setting some reasonable > ground-rules for contributions to be on the safe side in order to avoid > potential copyright (or similiar) issues in the future, as well as > giving confidence to commercial users that precautions are taken to > avoid such issues. Projects like Scala and Clojure require filling in a "Contributor [License] Agreement". I have not bothered to investigate the exact purpose. My guess is that it is supposed to prevent situations like "un-authorized" commiting code into the project. (Meaning: employee of company M commits code into the project but then the company says that person was not allowed to do that, beacuse the code is patented or sth and requests that the code is withdrawn or sues the project.) Somehow I feel that introducing such contributor licenses into GHC would scare away some contributors. But then again doing that could prevent some potential problems. 
Janek From svenpanne at gmail.com Thu Oct 30 09:44:14 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 30 Oct 2014 10:44:14 +0100 Subject: cabal sdist trouble with GHC from head In-Reply-To: <87wq7i32qv.fsf@gmail.com> References: <87wq7i32qv.fsf@gmail.com> Message-ID: 2014-10-29 23:55 GMT+01:00 Herbert Valerio Riedel : > Fyi, there's a `cabal-install-head` package now[1] (may take a few > minutes till its properly published in the PPA though); please test it > and lemme know if it works as expected... > > [1]: https://launchpad.net/~hvr/+archive/ubuntu/ghc/+sourcepub/4539223/+listing-archive-extra Thanks for uploading this. Nevertheless, GHC from head and cabal from head still don't like each other: https://travis-ci.org/haskell-opengl/StateVar/jobs/39470537#L103 I get "cabal: Prelude.chr: bad argument: 3031027"... o_O From juhpetersen at gmail.com Thu Oct 30 10:24:59 2014 From: juhpetersen at gmail.com (Jens Petersen) Date: Thu, 30 Oct 2014 19:24:59 +0900 Subject: GHC Weekly News - 10/24/2014 In-Reply-To: References: Message-ID: Thanks for the Weekly News that is really useful info. On 25 October 2014 09:00, Austin Seipp wrote: > Note: this post is available (with full hyperlinks) at > https://ghc.haskell.org/trac/ghc/blog/weekly20141024 > > - This past week, a discussion sort of organically started on the > `#ghc` IRC channel about the future of the LLVM backend. GHC's backend > is buggy, has no control over LLVM versions, and breaks frequently > with new versions. This all significantly impacts users, and relegates > the backend to a second class citizen. After some discussion, Austin > wrote up a proposal for a improved backend > , and wrangled > several other > people to help. The current plan is to try an execute this by GHC > 7.12, with the goal of making the LLVM backend Tier 1 for major > supported platforms. > Is this effort orthogonal to NCG for armv7 and armv8? I am glad people are thinking about how to address this but "we ship and fix a version of LLVM for GHC" sounds a bit scary to me. :) Jens -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Thu Oct 30 11:48:32 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Thu, 30 Oct 2014 12:48:32 +0100 Subject: Tests with compilation errors Message-ID: Going through some validate.sh results, I found compilation errors due to missing libraries, like this one: =====> stm052(normal) 4088 of 4108 [0, 21, 0] cd ../../libraries/stm/tests && 'C:/msys64/home/Gintas/ghc/bindisttest/install dir/bin/ghc.exe' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopt s -fno-warn-tabs -fno-ghci-history -o stm052 stm052.hs -package stm >stm052.comp.stderr 2>&1 Compile failed (status 256) errors were: stm052.hs:10:8: Could not find module ?System.Random? Use -v to see a list of the files searched for. I was surprised to see that these are not listed in the test summary at the end of the test run, but only counted towards the "X had missing libraries" row. That setup makes it really easy to miss them, and I can't think of a good reason to sweep such tests under the rug; a broken test is a failing test. How about at least listing such failed tests in the list of failed tests of the end? At least in this case the error does not seem to be due to some missing external dependencies (which probably would not be a great idea anyway...). The test does pass if I remove the "-no-user-package-db" argument. What was the intention here? 
Does packaging work somehow differently on Linux? (I'm currently testing on Windows.) On a related note, how about separating test failures from failing performance tests ("stat too good" / "stat not good enough")? The latter are important, but they seem to be much more prone to fail without good reason. Perhaps do some color coding of the test runner output? That would also help. -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 30 12:57:49 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 30 Oct 2014 12:57:49 +0000 Subject: URL issue Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F384137@DB3PRD3001MB020.064d.mgd.msft.net> Herbert, Austin I was in a fairly old repo, but one I have been using daily, and pushing to the central repo. The pushurl was pushurl = ssh://git at ghc.haskell.org/ghc.git That worked two days ago, but silently hangs now. But when I finally realised that it should be "git.haskell.org" it works fine. So something must have changed. I'm certain I was pushing to ghc.haskell.org until a couple of days ago. Strange, and may be useful knowledge for others. Anyway, no problem now. But the "Repositories" pages says nothing about what URL to use for pushing - and it really should! That would be worth fixing. Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Thu Oct 30 13:04:26 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 30 Oct 2014 09:04:26 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <201410301000.49331.jan.stolarek@p.lodz.pl> References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> Message-ID: On Thu, Oct 30, 2014 at 5:00 AM, Jan Stolarek wrote: > Projects like Scala and Clojure require filling in a "Contributor > [License] Agreement". I have not > bothered to investigate the exact purpose. > In the absence of a license agreement, the contribution is usually owned by the submitter and not the project (copyright, see Berne convention). This doesn't scale very well. A signed CLA allows the project to demonstrate that the submitter has agreed to transfer ownership of the contribution to the project('s administrators). -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Oct 30 13:50:58 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 30 Oct 2014 14:50:58 +0100 Subject: RFC: Properly stated origin of code contributions In-Reply-To: References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> Message-ID: <1414677058.1396.6.camel@joachim-breitner.de> Hi, Am Donnerstag, den 30.10.2014, 09:04 -0400 schrieb Brandon Allbery: > On Thu, Oct 30, 2014 at 5:00 AM, Jan Stolarek > wrote: > Projects like Scala and Clojure require filling in a > "Contributor [License] Agreement". I have not > bothered to investigate the exact purpose. > > In the absence of a license agreement, the contribution is usually > owned by the submitter and not the project (copyright, see Berne > convention). This doesn't scale very well. A signed CLA allows the > project to demonstrate that the submitter has agreed to transfer > ownership of the contribution to the project('s administrators). 
Given that the Linux kernel doesn?t require (paper-signed) CLAs, I do think it scales very well, and does not seem to scare off commercial users. > In the absence of a license agreement, the contribution is usually > owned by the submitter and not the project (copyright, see Berne > convention). This doesn't scale very well. A signed CLA allows the > project to demonstrate that the submitter has agreed to transfer > ownership of the contribution to the project('s administrators). As long we can properly assume that contributors license the code to us under the terms of the GHC license (which we seem to do), we got what we need. No need to hold the copyright in a single place. It?s too late for that anyways. Please avoid introducing unnecessary bureaucracy into the contributing process, especially not due to legal fear, cased from FUD and smattering. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From jan.stolarek at p.lodz.pl Thu Oct 30 14:15:52 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 30 Oct 2014 16:15:52 +0200 Subject: Understanding core2core optimisation pipeline Message-ID: <201410301515.52137.jan.stolarek@p.lodz.pl> I'm resurrecting this 3-month old thread as I have some more questions about cardinality analysis. 1. I'm still a bit confused about terminology. Demand analysis, strictness analysis, cardinality analysis - do these three terms mean exactly the same thing? If no, then what are the differences? 2. First pass of full laziness is followed by floating in. At that stage we have not yet run the demand analysis and yet the code that does the floating-in checks whether a binder is one-shot (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This suggests that cardinality analysis is done earlier (but when?) and that demand analysis is not the same thing as cardinality analysis. 3. Does demand analyser perform any transformations? Or does it only annotate Core with demand information that can be used by subsequent passes? 4. BasicTypes module defines: data OneShotInfo = NoOneShotInfo -- ^ No information | ProbOneShot -- ^ The lambda is probably applied at most once | OneShotLam -- ^ The lambda is applied at most once. Do I understand correctly that `NoOneShotInfo` really means no information, ie. a binding annotated with this might in fact be one shot? If so, then do we have means of saying that a binding is certainly not a one-shot binding? 5. What is the purpose of SetLevels.lvlMFE function? Janek > The wiki page just went live: > > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline > > It's not yet perfect but it should be a good start. > > > Roughtly, a complete run of the simplifier means "run the simplifier > > repeatedly until nothing further happens". The iterations are the > > successive iterations of this loop. Currently there's a (rather arbitrary) > > limit of four such iterations before we give up and declare victory. > > A limit or a default value for that limit? 
> > To Ilya: > > If you grep for the "late_dmd_anal" option variable in the compiler/simplCore/SimplCore.lhs > > module, you'll see that it triggers a phase close to the endo of getCoreToDo's tasks, which > > contains, in particular, the "CoreDoStrictness" pass. This is the "late" phase. > > The paper said that the late pass is run to detect single-entry thunks and the reason why it is > run late in the pipeline is that if it were run earlier this information could be invalidated > by the transformations. But in the source code I see that this late pass is followed by the > simplifier, which can invalidate the information. Also, the documentation for -flate-dmd-anal > says: "We found some opportunities for discovering strictness that were not visible earlier; > and optimisations like -fspec-constr can create functions with unused arguments which are > eliminated by late demand analysis". This says nothing about single-netry thunks. So, is the > single-entry thunk optimisation performed by GHC? > > Janek From gintautas at miliauskas.lt Thu Oct 30 15:24:42 2014 From: gintautas at miliauskas.lt (Gintautas Miliauskas) Date: Thu, 30 Oct 2014 16:24:42 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: Thanks for pushing this forward. I wonder what's going on with DeleteFile. What is the step that's failing? Can you post the log? I also wonder why this issue is not arising on other Windows machines... On Thu, Oct 30, 2014 at 12:18 AM, P?li G?bor J?nos wrote: > 2014-10-28 21:49 GMT+01:00 Gintautas Miliauskas : > > Can you try running the offending command with -v to see which step > > breaks? > > I have tried it, even together with building the GHC sources with a > recent toolchain, but I did not get much forward. > > > I tried running it locally under strace but did not see any file renames > > either. > > Although, I think I managed to find the place where some renaming > happens. That is `writeFileAtomic` in Cabal's > Distribution.Simple.Utils module [1]. The bin-package-db library has > a patched version of this function [2] that has a workaround for > Windows. After incorporating this change in Cabal, I was able to pass > the previously problematic point in the build. Unfortunately, this > was not enough for the complete build, as a similar error (with > DeleteFile that time) was raised. > > [1] > https://github.com/haskell/cabal/blob/master/Cabal/Distribution/Simple/Utils.hs#L1032 > [2] > https://github.com/ghc/ghc/blob/master/libraries/bin-package-db/GHC/PackageDb.hs#L252 > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Oct 30 15:25:41 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 30 Oct 2014 11:25:41 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <1414677058.1396.6.camel@joachim-breitner.de> References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> <1414677058.1396.6.camel@joachim-breitner.de> Message-ID: Indeed. A cla is overkill for ghc. More over, a good CLA merely documents that I'm granting license under BSD compatible terms, ownership transfer is inappropriate and abusive. At MOST, "this work is my own and I grant license for its use In ghc using the BSD license" is plenty. And even that might be overkill. 
I'm happy to ask the IP lawyers in my family for some opinions on this but I think what we are doing now is fine. On Oct 30, 2014 9:51 AM, "Joachim Breitner" wrote: > Hi, > > > Am Donnerstag, den 30.10.2014, 09:04 -0400 schrieb Brandon Allbery: > > On Thu, Oct 30, 2014 at 5:00 AM, Jan Stolarek > > wrote: > > Projects like Scala and Clojure require filling in a > > "Contributor [License] Agreement". I have not > > bothered to investigate the exact purpose. > > > > In the absence of a license agreement, the contribution is usually > > owned by the submitter and not the project (copyright, see Berne > > convention). This doesn't scale very well. A signed CLA allows the > > project to demonstrate that the submitter has agreed to transfer > > ownership of the contribution to the project('s administrators). > > Given that the Linux kernel doesn?t require (paper-signed) CLAs, I do > think it scales very well, and does not seem to scare off commercial > users. > > > > In the absence of a license agreement, the contribution is usually > > owned by the submitter and not the project (copyright, see Berne > > convention). This doesn't scale very well. A signed CLA allows the > > project to demonstrate that the submitter has agreed to transfer > > ownership of the contribution to the project('s administrators). > > As long we can properly assume that contributors license the code to us > under the terms of the GHC license (which we seem to do), we got what we > need. No need to hold the copyright in a single place. It?s too late for > that anyways. > > > Please avoid introducing unnecessary bureaucracy into the contributing > process, especially not due to legal fear, cased from FUD and > smattering. > > Greetings, > Joachim > > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 30 15:26:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 30 Oct 2014 15:26:45 +0000 Subject: package hashes Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F384849@DB3PRD3001MB020.064d.mgd.msft.net> Edward On branch wip/new-flatten-skolems-Oct14, I'm getting this test failure repeatably on test safePkgO1. I'm pretty sure I have done nothing to mess with package hashes! Any ideas? 
Simon =====> safePkg01(normal) 108 of 120 [0, 0, 0] cd ./check/pkg01 && $MAKE -s --no-print-directory safePkg01 VANILLA=--enable-library-vanilla PROF=--disable-library-profiling DYN=--enable-shared safePkg01.run.stdout 2>safePkg01.run.stderr Actual stdout output differs from expected: --- ./check/pkg01/safePkg01.stdout 2014-10-29 15:09:16.000000000 +0000 +++ ./check/pkg01/safePkg01.run.stdout 2014-10-30 15:25:17.799094762 +0000 @@ -29,17 +29,17 @@ require own pkg trusted: True M_SafePkg6 -package dependencies: array-0.5.0.1 at array_ +package dependencies: array-0.5.0.1 base-4.8.0.0* trusted: trustworthy require own pkg trusted: False M_SafePkg7 -package dependencies: array-0.5.0.1 at array_ +package dependencies: array-0.5.0.1 base-4.8.0.0* trusted: safe require own pkg trusted: False M_SafePkg8 -package dependencies: array-0.5.0.1 at array_ +package dependencies: array-0.5.0.1 base-4.8.0.0 trusted: trustworthy require own pkg trusted: False -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Thu Oct 30 15:34:27 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Thu, 30 Oct 2014 16:34:27 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Oct 28, 2014 at 11:02 PM, Simon Peyton Jones wrote: > The people problem is tricky. At work, this would be the right time to > do a video chat and at least see the faces of the other people involved. > Would folks be interested in a Skype/Hangout sometime? It would be > interesting to hear what interests / skills / resources / constraints we > have between us. > > > > I think that?s a great idea, thanks. It?s easier to work with people with > whom you have formed a personal relationship, and a video conf is a good > way to do that. > Let's try that. Shall we try to find a good timeslot? Sign up at http://doodle.com/34e598zc7m8sbaqm -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Thu Oct 30 15:45:17 2014 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 30 Oct 2014 11:45:17 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> <1414677058.1396.6.camel@joachim-breitner.de> Message-ID: On Thu, Oct 30, 2014 at 11:25 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > I'm happy to ask the IP lawyers in my family for some opinions on this but > I think what we are doing now is fine. As Joachim already noted, it's a bit late to switch course for GHC; you'd have to track down every past contributor. (I've been involved with projects that needed to do that; if at all possible, avoid it.) The reality, as I understand it (note that I Am Not A Lawyer(tm) but have experience with projects that have had to face the question), is that there's complex interactions between copyright law and contract law (not to mention questions of how contract law affects contributions to an open project). 
And both have a certain "valid until proven otherwise" aspect, which often makes it wisest to not change what's already working well enough --- especially since even asking a lawyer "on the clock" can potentially have legal implications on the whole project (but only if someone actually challenges in court and brings it up). As a result, the FUD's kinda built into the legal structure. :/ (My earlier response is not incompatible with this; the question I was answering was why a project might go with a CLA. In reality, whether the answer is *relevant* to a project is certainly open to question. One difference between the situation with GHC and the situation with Scala or Perl 6 is that the latter are also defining a language specification, which may have implications if there is a plan to submit it to an official standards body at some point. For ghc, that rests on the language committee, not the GHC developers.) If it really bothers you, probably best to ask someone like the EFF. Almost certainly do *not* formally ask a lawyer (informal is fine) --- they are going to concentrate on the worst case, mainly because even asking for a formal evaluation suggests that there is a need to worry about the worst case. Otherwise, leave well enough alone. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Oct 30 15:45:32 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 30 Oct 2014 15:45:32 +0000 Subject: [commit: ghc] wip/T9732: Generate two versions of pattern synonym matcher: * one where the continuation is lifted, * one where the continuation is unlifted. (423e9b2) In-Reply-To: <20141030154010.B2F503A300@ghc.haskell.org> References: <20141030154010.B2F503A300@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F38498E@DB3PRD3001MB020.064d.mgd.msft.net> No no! Let's not do that. It's grotesque to generate identical code twice. We must find a better way. Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf | Of git at git.haskell.org | Sent: 30 October 2014 15:40 | To: ghc-commits at haskell.org | Subject: [commit: ghc] wip/T9732: Generate two versions of pattern | synonym matcher: * one where the continuation is lifted, * one where | the continuation is unlifted. (423e9b2) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : wip/T9732 | Link : | http://ghc.haskell.org/trac/ghc/changeset/423e9b28930fb9e6dfcd4a20dd60 | 4ec488a2bb1d/ghc | | >--------------------------------------------------------------- | | commit 423e9b28930fb9e6dfcd4a20dd604ec488a2bb1d | Author: Dr. ERDI Gergo | Date: Thu Oct 30 23:07:50 2014 +0800 | | Generate two versions of pattern synonym matcher: | * one where the continuation is lifted, | * one where the continuation is unlifted. 
| | | >--------------------------------------------------------------- | | 423e9b28930fb9e6dfcd4a20dd604ec488a2bb1d | compiler/basicTypes/OccName.lhs | 5 +++-- | compiler/basicTypes/PatSyn.lhs | 21 ++++++++++++--------- | compiler/deSugar/DsUtils.lhs | 2 +- | compiler/iface/BuildTyCl.lhs | 7 ++++--- | compiler/iface/IfaceSyn.lhs | 8 ++++++-- | compiler/iface/MkIface.lhs | 4 +++- | compiler/iface/TcIface.lhs | 4 +++- | compiler/typecheck/TcBinds.lhs | 3 ++- | compiler/typecheck/TcPatSyn.lhs | 28 +++++++++++++++++----------- | 9 files changed, 51 insertions(+), 31 deletions(-) | | Diff suppressed because of size. To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 423e9b28930fb9e6dfcd4a20dd604ec488a2bb1d | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From nicolas.frisby at gmail.com Thu Oct 30 15:56:36 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 30 Oct 2014 08:56:36 -0700 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201410301515.52137.jan.stolarek@p.lodz.pl> References: <201410301515.52137.jan.stolarek@p.lodz.pl> Message-ID: I implemented -flate-dmd-anal last year Here's some outdated notes about my initial implementation. I share it in order to indicate what thoughts were in my mind at the time (eg re motivation). https://ghc.haskell.org/trac/ghc/wiki/Frisby2013Q1#LateStrictnessWW Aha! More up-to-date info here, including links to some of the older, motivating tickets. https://ghc.haskell.org/trac/ghc/ticket/7782 https://ghc.haskell.org/trac/ghc/wiki/LateDmd Also, I now suspect this pass is risky: I think it may enter unused arguments. I realize I didn't understand that stuff well enough at the time. This definitely deserves some attention if people are using the flag. To answer your question directly: I do not recall explicitly considering single-entry thunks when implementing -flate-dmd-anal. HTH. On Thu, Oct 30, 2014 at 7:15 AM, Jan Stolarek wrote: > I'm resurrecting this 3-month old thread as I have some more questions > about cardinality analysis. > > 1. I'm still a bit confused about terminology. Demand analysis, strictness > analysis, cardinality > analysis - do these three terms mean exactly the same thing? If no, then > what are the > differences? > > 2. First pass of full laziness is followed by floating in. At that stage > we have not yet run the > demand analysis and yet the code that does the floating-in checks whether > a binder is one-shot > (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This > suggests that cardinality > analysis is done earlier (but when?) and that demand analysis is not the > same thing as > cardinality analysis. > > 3. Does demand analyser perform any transformations? Or does it only > annotate Core with demand > information that can be used by subsequent passes? > > 4. BasicTypes module defines: > > data OneShotInfo = NoOneShotInfo -- ^ No information > | ProbOneShot -- ^ The lambda is probably applied at > most once > | OneShotLam -- ^ The lambda is applied at most once. > > Do I understand correctly that `NoOneShotInfo` really means no information, > ie. a binding annotated with this might in fact be one shot? If so, then > do we > have means of saying that a binding is certainly not a one-shot binding? > > 5. What is the purpose of SetLevels.lvlMFE function? 
> > Janek > > > The wiki page just went live: > > > > > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline > > > > It's not yet perfect but it should be a good start. > > > > > Roughtly, a complete run of the simplifier means "run the simplifier > > > repeatedly until nothing further happens". The iterations are the > > > successive iterations of this loop. Currently there's a (rather > arbitrary) > > > limit of four such iterations before we give up and declare victory. > > > > A limit or a default value for that limit? > > > > To Ilya: > > > If you grep for the "late_dmd_anal" option variable in the > compiler/simplCore/SimplCore.lhs > > > module, you'll see that it triggers a phase close to the endo of > getCoreToDo's tasks, which > > > contains, in particular, the "CoreDoStrictness" pass. This is the > "late" phase. > > > > The paper said that the late pass is run to detect single-entry thunks > and the reason why it is > > run late in the pipeline is that if it were run earlier this information > could be invalidated > > by the transformations. But in the source code I see that this late pass > is followed by the > > simplifier, which can invalidate the information. Also, the > documentation for -flate-dmd-anal > > says: "We found some opportunities for discovering strictness that were > not visible earlier; > > and optimisations like -fspec-constr can create functions with unused > arguments which are > > eliminated by late demand analysis". This says nothing about > single-netry thunks. So, is the > > single-entry thunk optimisation performed by GHC? > > > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Thu Oct 30 16:00:57 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Fri, 31 Oct 2014 00:00:57 +0800 (SGT) Subject: [commit: ghc] wip/T9732: Generate two versions of pattern synonym matcher: * one where the continuation is lifted, * one where the continuation is unlifted. (423e9b2) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F38498E@DB3PRD3001MB020.064d.mgd.msft.net> References: <20141030154010.B2F503A300@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF3F38498E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, 30 Oct 2014, Simon Peyton Jones wrote: > No no! Let's not do that. It's grotesque to generate identical code > twice. We must find a better way. So the type of an open-kinded matcher function, for a pattern of type pattern type P :: [T a] would need to be something like $m?P :: forall (r :: ?) a. [T a] -> R(r) -> R(r) -> r where R(r) = Void# -> r if r :: # , r otherwise Is there a way to do that? I couldn't think of anything better than to generate two versions: $mP :: forall r a. [T a] -> r -> r -> r $m#P :: forall (r :: #) a. [T a] -> (Void# -> r) -> (Void# -> r) -> r Now, to cut down on the amount of code generated, I guess we could have $m?P :: forall (r :: ?) a. [T a] -> (Void# -> r) -> (Void# -> r) -> r and always compile pattern synonym match continuations into lambdas over this dummy Void#, but I thought we also wanted to avoid that... Note that if P were to have arguments, the same problem would still be present with the fail continuation (but not the success one). 
From austin at well-typed.com Thu Oct 30 16:16:48 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 11:16:48 -0500 Subject: GHC Weekly News - 10/24/2014 In-Reply-To: References: Message-ID: Yes, it is orthogonal to any effort for a NCG for ARMv7 or ARMv8. Note that ARM is a platform that would particularly benefit from this: right now, building GHC on ARM is a bit... er, troublesome. There are a lot of LLVM versions that don't work, or are patched by maintainers (making it impossible to know what might work or not), or have simply never been tested depending on the distro. I've had quite my share of problems trying to get GHC/ARM to build on ARM/Linux, due to all kinds of incompatibilities or bugs in the code generator for whatever verson I was using. I agree that for upstreams (Debian, Fedora, etc) the proposition is a bit scary perhaps. But I think the significant number of advantages outweigh the disadvantages quite handily, and as we go forward, I still think it will be the only truly maintainable solution, at least in the forseeable future. I am also of course more than willing to help upstreams adapt to this (for example, should Debian or Fedora wish to package their own LLVM, I'd be more than willing to help identify a version that works and we could give to users). FWIW, I'm aiming for not requiring any extra patches to LLVM. I'd prefer using a stable version, and any bugs we find should go upstream.( GHC requiring extra LLVM patches would be a possibility, but one of the last things we'd want possibly due to extra complication.) On Thu, Oct 30, 2014 at 5:24 AM, Jens Petersen wrote: > Thanks for the Weekly News that is really useful info. > > On 25 October 2014 09:00, Austin Seipp wrote: >> >> Note: this post is available (with full hyperlinks) at >> https://ghc.haskell.org/trac/ghc/blog/weekly20141024 > > >> >> - This past week, a discussion sort of organically started on the >> `#ghc` IRC channel about the future of the LLVM backend. GHC's backend >> is buggy, has no control over LLVM versions, and breaks frequently >> with new versions. This all significantly impacts users, and relegates >> the backend to a second class citizen. After some discussion, Austin >> wrote up a proposal for a improved backend, and wrangled several other >> people to help. The current plan is to try an execute this by GHC >> 7.12, with the goal of making the LLVM backend Tier 1 for major >> supported platforms. > > > Is this effort orthogonal to NCG for armv7 and armv8? > > I am glad people are thinking about how to address this but > "we ship and fix a version of LLVM for GHC" sounds a bit scary to me. :) > > Jens -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Thu Oct 30 16:16:22 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 30 Oct 2014 16:16:22 +0000 Subject: [commit: ghc] wip/T9732: Generate two versions of pattern synonym matcher: * one where the continuation is lifted, * one where the continuation is unlifted. (423e9b2) In-Reply-To: References: <20141030154010.B2F503A300@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF3F38498E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F384B72@DB3PRD3001MB020.064d.mgd.msft.net> | $m?P :: forall (r :: ?) a. [T a] -> R(r) -> R(r) -> r | | where R(r) = Void# -> r if r :: # | , r otherwise | | Is there a way to do that? No indeed. 
| Now, to cut down on the amount of code generated, I guess we could | have | | $m?P :: forall (r :: ?) a. [T a] -> (Void# -> r) -> (Void# -> r) -> r | | and always compile pattern synonym match continuations into lambdas | over this dummy Void#, but I thought we also wanted to avoid that... I think that's fine. These matchers will usually be inlined and all the clutter will go away. | Note that if P were to have arguments, the same problem would still be | present with the fail continuation (but not the success one). Yes, let's take advantage of that S From austin at well-typed.com Thu Oct 30 16:20:07 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 11:20:07 -0500 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: On Thu, Oct 23, 2014 at 8:47 AM, Sven Panne wrote: > 2014-10-23 15:01 GMT+02:00 Alan & Kim Zimmerman : >> cabal has changed for HEAD, you need to install 1.21.1.0 > > Hmmm, so we *force* people to update? o_O Perhaps I've missed an > announcement, and I really have a hard time deducing this from the > output on Travis CI. Is 1.21.1.0 backwards-compatible to previous > GHCs? Or do I have to set up something more or less complicated > depending on the GHC version (which would be unfortunate)? > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > Just to be clear, Cabal will always support every major GHC version going back several years. I think it even supports GHC 6.12 still and Duncan tests with it. But, sometimes a GHC change may require you to use a newer version of Cabal, for things to work. So this just means that Cabal isn't necessarily *future compatible* with future GHCs - they may change the package format, etc. But it is backwards compatible with existing ones. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Oct 30 16:33:43 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 11:33:43 -0500 Subject: URL issue In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F384137@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F384137@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: (Resending to hit ghc-devs list.) Yep, good point. For those at home: this will affect you too, if you're pushing to 'ghc.haskell.org' instead of 'git.haskell.org'. Why? Because ghc.haskell.org now uses a CDN (Content Delivery Network) to cache and make the server a bit more responsive. But one of the limitations of a CDN is they will not proxy your SSH access, unfortunately. (We implemented this back on Sunday, I believe, so you may only be seeing it now). However, git.haskell.org remains off a CDN and will continue to do so, meaning it will always be safe to push to via SSH. I'll update the Git pages to reflect this. On Thu, Oct 30, 2014 at 7:57 AM, Simon Peyton Jones wrote: > Herbert, Austin > > > > I was in a fairly old repo, but one I have been using daily, and pushing to > the central repo. The pushurl was > > pushurl = ssh://git at ghc.haskell.org/ghc.git > > That worked two days ago, but silently hangs now. > > > > But when I finally realised that it should be ?git.haskell.org? it works > fine. > > > So something must have changed. I?m certain I was pushing to > ghc.haskell.org until a couple of days ago. Strange, and may be useful > knowledge for others. > > > > Anyway, no problem now. But the ?Repositories? 
pages says nothing about what > URL to use for pushing ? and it really should! That would be worth fixing. > > > Thanks > > > > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From carter.schonwald at gmail.com Thu Oct 30 16:35:38 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 30 Oct 2014 12:35:38 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> <1414677058.1396.6.camel@joachim-breitner.de> Message-ID: tl;dr I think we're fine :) long version; asking people the first time they contribute to confirm that their work is their own, and they can and do grant bsd license is about all thats needed. i'm not sure how language standards relate to this topic, but i'll ask you about that out of band I guess ;) On Thu, Oct 30, 2014 at 11:45 AM, Brandon Allbery wrote: > On Thu, Oct 30, 2014 at 11:25 AM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> I'm happy to ask the IP lawyers in my family for some opinions on this >> but I think what we are doing now is fine. > > > As Joachim already noted, it's a bit late to switch course for GHC; you'd > have to track down every past contributor. (I've been involved with > projects that needed to do that; if at all possible, avoid it.) > > The reality, as I understand it (note that I Am Not A Lawyer(tm) but have > experience with projects that have had to face the question), is that > there's complex interactions between copyright law and contract law (not to > mention questions of how contract law affects contributions to an open > project). And both have a certain "valid until proven otherwise" aspect, > which often makes it wisest to not change what's already working well > enough --- especially since even asking a lawyer "on the clock" can > potentially have legal implications on the whole project (but only if > someone actually challenges in court and brings it up). As a result, the > FUD's kinda built into the legal structure. :/ > > (My earlier response is not incompatible with this; the question I was > answering was why a project might go with a CLA. In reality, whether the > answer is *relevant* to a project is certainly open to question. One > difference between the situation with GHC and the situation with Scala or > Perl 6 is that the latter are also defining a language specification, which > may have implications if there is a plan to submit it to an official > standards body at some point. For ghc, that rests on the language > committee, not the GHC developers.) > > If it really bothers you, probably best to ask someone like the EFF. > Almost certainly do *not* formally ask a lawyer (informal is fine) --- they > are going to concentrate on the worst case, mainly because even asking for > a formal evaluation suggests that there is a need to worry about the worst > case. Otherwise, leave well enough alone. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Thu Oct 30 16:55:18 2014 From: david.feuer at gmail.com (David Feuer) Date: Thu, 30 Oct 2014 12:55:18 -0400 Subject: Understanding core2core optimisation pipeline Message-ID: On Thu, Oct 30, 2014 Jan Stolarek wrote: > > 2. First pass of full laziness is followed by floating in. At that stage > we have not yet run the > demand analysis and yet the code that does the floating-in checks whether > a binder is one-shot > (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This > suggests that cardinality > analysis is done earlier (but when?) and that demand analysis is not the > same thing as > cardinality analysis. > If you're looking at super-recent code, that could be Joachim Breitner's work. He's exposed the one-shot stuff at the Haskell level with the experimental magic oneShot function, intended primarily for use in the libraries to make foldl-as-foldr and related things be analyzed more reliably. The old GHC arity analysis combined with his Call Arity get almost everything right, but there are occasional corner cases where things go wrong, and when they do the results tend to be extremely bad. -------------- next part -------------- An HTML attachment was scrubbed... URL: From singpolyma at singpolyma.net Thu Oct 30 17:13:57 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Thu, 30 Oct 2014 12:13:57 -0500 Subject: RFC: Properly stated origin of code contributions In-Reply-To: References: <87egtq6kmm.fsf@gmail.com> <201410301000.49331.jan.stolarek@p.lodz.pl> Message-ID: <20141030171357.GC2465@singpolyma-liberty> >In the absence of a license agreement, the contribution is usually owned by >the submitter and not the project (copyright, see Berne convention). This >doesn't scale very well. A signed CLA allows the project to demonstrate >that the submitter has agreed to transfer ownership of the contribution to >the project('s administrators). I wouldn't want a copyright-assignment system (since that allows the project to re-license when it wants, for example) but an inbound=outbound agreement (that is, an explicit agreement from contributors to have their contributions released under the license of the project) is not an unreasonable thing to do. -- Stephen Paul Weber, @singpolyma See for how I prefer to be contacted edition right joseph -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From carter.schonwald at gmail.com Thu Oct 30 17:16:49 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 30 Oct 2014 10:16:49 -0700 (PDT) Subject: RFC: Properly stated origin of code contributions In-Reply-To: <20141030171357.GC2465@singpolyma-liberty> References: <20141030171357.GC2465@singpolyma-liberty> Message-ID: <1414689409195.942a3b77@Nodemailer> yup, agreed -Carter On Thu, Oct 30, 2014 at 1:14 PM, Stephen Paul Weber wrote: >>In the absence of a license agreement, the contribution is usually owned by >>the submitter and not the project (copyright, see Berne convention). This >>doesn't scale very well. A signed CLA allows the project to demonstrate >>that the submitter has agreed to transfer ownership of the contribution to >>the project('s administrators). 
> I wouldn't want a copyright-assignment system (since that allows the project > to re-license when it wants, for example) but an inbound=outbound agreement > (that is, an explicit agreement from contributors to have their > contributions released under the license of the project) is not an > unreasonable thing to do. > -- > Stephen Paul Weber, @singpolyma > See for how I prefer to be contacted > edition right joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Thu Oct 30 17:37:39 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 30 Oct 2014 10:37:39 -0700 Subject: Understanding core2core optimisation pipeline In-Reply-To: References: Message-ID: On Thu, Oct 30, 2014 at 9:55 AM, David Feuer wrote: > On Thu, Oct 30, 2014 Jan Stolarek wrote: > >> >> 2. First pass of full laziness is followed by floating in. At that stage >> we have not yet run the >> demand analysis and yet the code that does the floating-in checks whether >> a binder is one-shot >> (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This >> suggests that cardinality >> analysis is done earlier (but when?) and that demand analysis is not the >> same thing as >> cardinality analysis. >> > > If you're looking at super-recent code, that could be Joachim Breitner's > work. He's exposed the one-shot stuff at the Haskell level with the > experimental magic oneShot function, intended primarily for use in the > libraries to make foldl-as-foldr and related things be analyzed more > reliably. The old GHC arity analysis combined with his Call Arity get > almost everything right, but there are occasional corner cases where things > go wrong, and when they do the results tend to be extremely bad. > > Joachim's work looks like neat stuff. I've only been scanning those emails, but I recall mention of interface files. Jan, would your question #2 be addressed by that information being imported from interface files? With separate compilation, phase ordering because more nuanced. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Thu Oct 30 17:40:02 2014 From: lonetiger at gmail.com (lonetiger at gmail.com) Date: Thu, 30 Oct 2014 17:40:02 +0000 Subject: =?utf-8?Q?Re:_GHC_on_Windows_(extended/broad_discussion)?= In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net>, Message-ID: <54527870.66bbb40a.6c56.ffffb18b@mx.google.com> Hi Gintautas, Is it possible for you to add the rest of next week to the schedule times? I?m unavailable on the given dates. Kind Regards, Tamar From: Gintautas Miliauskas Sent: ?Thursday?, ?October? ?30?, ?2014 ?16?:?34 To: Simon Peyton Jones Cc: kyra, ghc-devs at haskell.org On Tue, Oct 28, 2014 at 11:02 PM, Simon Peyton Jones wrote: The people problem is tricky. At work, this would be the right time to do a video chat and at least see the faces of the other people involved. Would folks be interested in a Skype/Hangout sometime? It would be interesting to hear what interests / skills / resources / constraints we have between us. I think that?s a great idea, thanks. It?s easier to work with people with whom you have formed a personal relationship, and a video conf is a good way to do that. Let's try that. Shall we try to find a good timeslot? 
Sign up at http://doodle.com/34e598zc7m8sbaqm -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Thu Oct 30 18:13:10 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Thu, 30 Oct 2014 19:13:10 +0100 Subject: Automating GHC build for Windows In-Reply-To: References: <1413916715.326367.181663173.2FBCEE30@webmail.messagingengine.com> Message-ID: 2014-10-30 16:24 GMT+01:00 Gintautas Miliauskas : > I wonder what's going on with DeleteFile. What is the step that's failing? Basically it happens at the same point, that is, at the "configure" phase but at the ghc-prim package. Note that the previously mentioned workaround has a "removeFile" action [1], I guess the failure of that triggers the DeleteFile exception. > Can you post the log? "inplace/bin/ghc-cabal.exe" configure libraries/ghc-prim dist-install "" --with-ghc="C:/msys64/home/ghc-builder/ghc/inplace/bin/ghc-stage1.exe" --with-ghc-pkg="C:/msys64/home/ghc-builder/ghc/inplace/bin/ghc-pkg.exe" --flag=include-ghc-prim --disable-library-for-ghci --enable-library-vanilla --enable-library-for-ghci --enable-library-profiling --disable-shared --configure-option=CFLAGS=" -U__i686 -march=i686 -fno-stack-protector " --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " --gcc-options=" -U__i686 -march=i686 -fno-stack-protector " --with-gcc="C:/msys64/home/ghc-builder/ghc/inplace/mingw/bin/gcc.exe" --with-ld="C:/msys64/home/ghc-builder/ghc/inplace/mingw/bin/ld.exe" --configure-option=--with-cc="C:/msys64/home/ghc-builder/ghc/inplace/mingw/bin/gcc.exe" --with-ar="/usr/bin/ar" --with-alex="/usr/local/bin/alex" --with-happy="/usr/local/bin/happy" Configuring ghc-prim-0.3.1.0... ghc-cabal.exe: DeleteFile "dist-install\\setup-config": permission denied (The process cannot access the file because it is being used by another process.) libraries/ghc-prim/ghc.mk:4: recipe for target 'libraries/ghc-prim/dist-install/package-data.mk' failed make[1]: *** [libraries/ghc-prim/dist-install/package-data.mk] Error 1 Makefile:71: recipe for target 'all' failed > I also wonder why this issue is not arising on other Windows machines... As the comment in the workaround goes, it has a "Big fat hairy race condition". Therefore I am inclined to believe that it may be a problem for other systems as well, but I am the most unfortunate one who hits this error with 100% probability :-) [1] https://github.com/ghc/ghc/blob/master/libraries/bin-package-db/GHC/PackageDb.hs#L267 From gintautas.miliauskas at gmail.com Thu Oct 30 20:03:35 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Thu, 30 Oct 2014 21:03:35 +0100 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: <54527870.66bbb40a.6c56.ffffb18b@mx.google.com> References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <54527870.66bbb40a.6c56.ffffb18b@mx.google.com> Message-ID: Updated. Note that I'm on vacation starting Friday (Nov 7) and will be back only on Nov 24. On Thu, Oct 30, 2014 at 6:40 PM, wrote: > Hi Gintautas, > > Is it possible for you to add the rest of next week to the schedule times? > I?m unavailable on the given dates. > > Kind Regards, > Tamar > > *From:* Gintautas Miliauskas > *Sent:* ?Thursday?, ?October? ?30?, ?2014 ?16?:?34 > *To:* Simon Peyton Jones > *Cc:* kyra , ghc-devs at haskell.org > > > > On Tue, Oct 28, 2014 at 11:02 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> The people problem is tricky. 
At work, this would be the right time to >> do a video chat and at least see the faces of the other people involved. >> Would folks be interested in a Skype/Hangout sometime? It would be >> interesting to hear what interests / skills / resources / constraints we >> have between us. >> >> >> >> I think that?s a great idea, thanks. It?s easier to work with people >> with whom you have formed a personal relationship, and a video conf is a >> good way to do that. >> > > Let's try that. Shall we try to find a good timeslot? Sign up at > http://doodle.com/34e598zc7m8sbaqm > > -- > Gintautas Miliauskas > > -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Oct 30 20:30:18 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 30 Oct 2014 21:30:18 +0100 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: 2014-10-30 17:20 GMT+01:00 Austin Seipp : > [...] So this just means that Cabal isn't necessarily *future compatible* > with future GHCs - they may change the package format, etc. But it is > backwards compatible with existing ones. OK, that's good to know. To be sure, I've just tested Cabal head + GHC 7.8.3, and it works. But as I've mentioned already, there seems to be *no* Cabal version which works with GHC head: https://travis-ci.org/haskell-opengl/StateVar/builds/39533448 Is this known to the Cabal people? From austin at well-typed.com Thu Oct 30 20:38:47 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 15:38:47 -0500 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> <871tpr42g1.fsf@gmail.com> Message-ID: Hey Gintautas, Yes, I apologize about that (and I missed this request in my quick read over this email yesterday). To be clear, I apologize if my review/merge latencies are too long. :) What normally happens it that I review and merge patches in bulk, about once or twice a week. I'll review about, say, a dozen patches one day, and wait a few days for more to come in, then sweep up everything in that time at once. So there are two things: a review latency, *and* a merge latency. However, two things are also clear: 1) This is annoying for people who submit 'rapid improvements', e.g. in the process of working on GHC, you may fix 4 or 5 things, and then not having those in the mainline is a bit of a drain! 2) Phabricator building patches actually means the merge latency can be *shorter*, because in the past, we'd always have to double check if a patch worked in the first place (so it took *even longer* before!) Another thing is that I'm the primary person who lands things off Phabricator, although occasionally other people do too. This is somewhat suboptimal in some cases, since really, providing something has the OK (from me or someone else), anyone should be able to merge it. So I think this can be improved too. Finally, it's also worth mentioning that Phabricator reviews are special (and unlike GitHub) in that people who are *not* reviewers *do not* see the patch by default! That means if I am the *only* person on the review, it is pretty high guarantee that the review will only be done by me, and it will only be merged by me, unless I poke someone else. Others can see your review using a slightly different search criterion, however, but that's not the default. Note this is not a mistake - it is intentional in the design. Why? 
Because realistically, I'd say for about 85% of the patches that come in, they are irrelevant to 90% of all GHC developers, and historically, 90% of developers will never bother committing it either. It is often pointless to spam them with emails, and enlarging their review queue beyond what's necessary makes things even *worse* for them, since they can't tell what may really deserve their attention. I do want more people reviewing code actively - but to do that, there must be a tradeoff - we should try and keep contributor burden low. Most developers are just our friends after all, including you - not paid GHC hackers! I don't want to overburden you; we need you! I am one of the exceptions to this: I realistically care and want to see about 95% of all patches that go into the tree, at least to keep up to date with what's happening, and also to ensure things get proper oversight - by, say, adding someone else to a review who I want to look at it. This is why I'm the common denominator, and a reviewer of almost every patch (and I think I'm fairly keen on who might care about what). However it's clear that if this is slowing you down we should try to fix it - we want you to help after all! We already have nearly 40 people with commit rights to GHC, and you've clearly dedicated yourself to helping. That's fantastic. Perhaps it's time for you to enter the fray as well so I can get out of your way. :) But I do still want you to submit code reviews, as everyone else does - it really does help everyone, and increases a sense of shared ownership, IMO. In light of this though, I do think I need to ramp up my merge frequency. So how does a plan of just trying to merge all outstanding patches every day sound? This is normally very trivial amounts of time these days, considering Phabricator tends to catch the most obvious failures. BTW: I merged your pull request on the Win32 repository, so we can update MinGW - I didn't realize that it was open at all, and in fact I completely forgot I had permissions to merge things on that repository! Most of the external library management is normally dealt with by Herbert or individual maintainers. On Wed, Oct 29, 2014 at 6:36 AM, Gintautas Miliauskas wrote: > By the way, regarding that repository, could someone merge my pull request? > > In general, it's a bit frustrating how a lot of the patches in the > Phabricator queue seem to take a while to get noticed. Don't take it > personally, I'm just sharing my impressions, but I do feel it's taking away > some momentum - not good for me & other contributors, and not good for the > project. I know reviewers are understaffed, maybe consider spreading commit > rights a bit more widely until the situation improves? > > On Wed, Oct 29, 2014 at 11:04 AM, Herbert Valerio Riedel > wrote: >> >> On 2014-10-29 at 10:59:18 +0100, Phyx wrote: >> >> [...] >> >> >> The Win32 package for example, is dreadfully lacking in >> >> maintainership. While we merge patches, it would be great to see a >> >> Windows developer spearhead and clean it up >> > >> > A while back I was looking at adding some functionality to this >> > package, but could never figure out which one was actually being >> > used. I think there are multiple repositories out there. 
>> >> I'm not sure which multiple repositories you have seen, but >> >> http://hackage.haskell.org/package/Win32 >> >> points quite clearly to >> >> https://github.com/haskell/win32 >> >> and that's the official upstream repository GHC tracks (via a locally >> mirrored repo at git.haskell.org) >> >> Cheers, >> hvr > > > > > -- > Gintautas Miliauskas > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Oct 30 20:41:56 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 15:41:56 -0500 Subject: GHC on Windows (extended/broad discussion) In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F37F0D2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: For the record, I also think this is a great idea. I'll find a time that works for me in the next few days (I've never used doodle but I imagine I can manage). On Thu, Oct 30, 2014 at 10:34 AM, Gintautas Miliauskas wrote: > > > On Tue, Oct 28, 2014 at 11:02 PM, Simon Peyton Jones > wrote: >> >> The people problem is tricky. At work, this would be the right time to do >> a video chat and at least see the faces of the other people involved. Would >> folks be interested in a Skype/Hangout sometime? It would be interesting to >> hear what interests / skills / resources / constraints we have between us. >> >> >> >> I think that?s a great idea, thanks. It?s easier to work with people with >> whom you have formed a personal relationship, and a video conf is a good way >> to do that. > > > Let's try that. Shall we try to find a good timeslot? Sign up at > http://doodle.com/34e598zc7m8sbaqm > > -- > Gintautas Miliauskas -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Oct 30 20:48:33 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 15:48:33 -0500 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: I would imagine they are well aware, what with all the changes that have gone in the past few weeks (for backpack support, package db overhauls, etc). However, I think crux of it (and the real question) isn't are they aware - but "When will there be a Cabal release that supports GHC HEAD, which will become 7.10"? This is a question I'm afraid I cannot answer - Johan does the typical Cabal releases, AFAIK. I've CC'd Duncan and Johan - do either of you have plans for this? Considering we hope the stable freeze will happen soon for 7.10, I imagine Cabal won't be very far behind in this regard, but I'm not sure if there's a plan set down anywhere as to when the next release will happen. On Thu, Oct 30, 2014 at 3:30 PM, Sven Panne wrote: > 2014-10-30 17:20 GMT+01:00 Austin Seipp : >> [...] So this just means that Cabal isn't necessarily *future compatible* >> with future GHCs - they may change the package format, etc. But it is >> backwards compatible with existing ones. > > OK, that's good to know. To be sure, I've just tested Cabal head + GHC > 7.8.3, and it works. But as I've mentioned already, there seems to be > *no* Cabal version which works with GHC head: > https://travis-ci.org/haskell-opengl/StateVar/builds/39533448 Is this > known to the Cabal people? 
> -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Oct 30 20:55:14 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 15:55:14 -0500 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <1414689409195.942a3b77@Nodemailer> References: <20141030171357.GC2465@singpolyma-liberty> <1414689409195.942a3b77@Nodemailer> Message-ID: I hate to spam this with possibly-tangential requests, but I also have a peeve: Can we get a standardized copyright/comment header across all our files? It seems as if every single file in the compiler (and RTS) has different header text, mentioning different people or groups, some from 10 years ago or more and others just recently added. This is somewhat related to this RFC but also somewhat not, I feel. Ideally it would be nice if we could have, say, an AUTHORS.txt file containing the names of all those people who have committed to GHC (essentially like 'git shortlog -sn' shows), which we would ask users to add their name into, and then if we could standardize all the headers to follow a known convention, and give boilerplate for people to copy. On Thu, Oct 30, 2014 at 12:16 PM, Carter Schonwald wrote: > yup, agreed > > -Carter > > > On Thu, Oct 30, 2014 at 1:14 PM, Stephen Paul Weber > wrote: >> >> > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Thu Oct 30 21:19:36 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Oct 2014 16:19:36 -0500 Subject: Tests with compilation errors In-Reply-To: References: Message-ID: On Thu, Oct 30, 2014 at 6:48 AM, Gintautas Miliauskas wrote: > Going through some validate.sh results, I found compilation errors due to > missing libraries, like this one: > > =====> stm052(normal) 4088 of 4108 [0, 21, 0] > cd ../../libraries/stm/tests && > 'C:/msys64/home/Gintas/ghc/bindisttest/install dir/bin/ghc.exe' > -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db > -rtsopt > s -fno-warn-tabs -fno-ghci-history -o stm052 stm052.hs -package stm >>stm052.comp.stderr 2>&1 > Compile failed (status 256) errors were: > > stm052.hs:10:8: > Could not find module ?System.Random? > Use -v to see a list of the files searched for. > > I was surprised to see that these are not listed in the test summary at the > end of the test run, but only counted towards the "X had missing libraries" > row. That setup makes it really easy to miss them, and I can't think of a > good reason to sweep such tests under the rug; a broken test is a failing > test. Actually, these tests aren't broken in the way you think :) It's a bit long-winded to explain... Basically, GHC can, if you let it, build extra dependencies in its build process, one of which is the 'random' library. 'random' was not ever a true requirement to build GHC (aka a 'bootlib' as we call them). So why is this test here? Because 'random' was actually a dependency of the Data Parallel Haskell package, and until not too long ago (earlier this year), `./validate` built and compiled DPH - with all its dependencies; random, vector, primitive - by default. This actually adds a pretty noticeable time to the build (you are compiling 5-8 more libraries after all), and at the time, DPH was also not ready for the Applicative-Monad patch. 
So we turned it off, as well as the dependencies. Additionally, GHC does have some 'extra' libraries which you can optionally build during the build process, but which are turned off by default. Originally this was because the weirdo './sync-all' script used to not need everything, and 'stm' was a library that wasn't cloned by default. Now that we've submoduleified everything though, these tests and the extra libraries could be built by default. Which we could certainly do. > How about at least listing such failed tests in the list of failed tests of > the end? I'd probably be OK with this. > At least in this case the error does not seem to be due to some missing > external dependencies (which probably would not be a great idea anyway...). > The test does pass if I remove the "-no-user-package-db" argument. What was > the intention here? Does packaging work somehow differently on Linux? (I'm > currently testing on Windows.) I'm just guessing but, I imagine you really don't want to remove '-no-user-package-db' at all, for any platform, otherwise Weird Things Might Happen, I'd assume. The TL;DR here is that when you build a copy of GHC and all the libraries, it actually *does* register the built packages for the compiler... this always happens, *even if you do not install it*. The primary 'global' package DB just sits in tree instead, under ./inplace. When you run ./validate, what happens is that after the build, we actually create a binary distribution and then test *that* compiler instead, as you can see (obviously for a good reason - broken bindists would be bad). The binary distribution obviously has its own set of binary packages it came with; those are the packages you built into it after all. The reason we tell GHC to ignore the user package db here is precisely because we *do not* want to pick it up! We only want to test the binary distribution with the packages *it* has. Now you might say, well, Austin, the version numbers are different! How would it pick that up? Not always... What if I built a copy of GHC HEAD today, then built something with it using Cabal? Then that will install into my user package database. Now I go back to my GHC tree and hack away _on the same day_ and run './validate'... the version number hasn't changed *at all* because it's date based, meaning the binary distribution could certainly pick up the previously installed libraries, which I installed via the older compiler. But I don't want that! I only want to run those tests with the compiler I'm validating *now*. I imagine the reason you see this test pass if you remove this argument is precisely for this reason: it doesn't fail because it's picking up a package database in your existing environment. But that's really, really not what you want (I'd be surprised if it worked and didn't result in some horrible error or crash). > On a related note, how about separating test failures from failing > performance tests ("stat too good" / "stat not good enough")? The latter are > important, but they seem to be much more prone to fail without good reason. > Perhaps do some color coding of the test runner output? That would also > help. I also think this is a good idea. 
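Coming back to the -no-user-package-db point for a moment: if you want to see for yourself which databases get consulted, a quick check along these lines should do it (the path is illustrative, adapt it to wherever your bindist compiler lives; this is just a way to poke at it by hand, not something the test driver needs):

  $ "bindisttest/install dir/bin/ghc" -v -no-user-package-db stm052.hs -package stm

Near the top of the -v output GHC lists the package databases it reads. With -no-user-package-db you should only see the bindist's own global DB; drop the flag and your per-user DB (the one cabal-install writes to) shows up as well, which is exactly how a stray System.Random from another build can sneak into the test run.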
> -- > Gintautas Miliauskas > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ml at isaac.cedarswampstudios.org Thu Oct 30 21:59:45 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Thu, 30 Oct 2014 17:59:45 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <87egtq6kmm.fsf@gmail.com> References: <87egtq6kmm.fsf@gmail.com> Message-ID: <5452B4D1.4080507@isaac.cedarswampstudios.org> There are good reasons not to require people's "real" name to participate: http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F Simon PJ often advocates to know people's name as part of creating a friendly community. There are good things about this. It also helps exclude people with less privilege, whom we have few enough of already, if it is a policy. I like most things about "Developer's Certificate of Origin", though. -Isaac On 10/30/2014 04:13 AM, Herbert Valerio Riedel wrote: > Hi, > > GHC's Git history has (mostly) a good track record of having properly > attributed authorship information in the recent past; Some time ago I've > even augmented the .mailmap file to fix-up some of the pre-Git meta-data > which had mangled author/committer meta-data (try 'git shortlog -sn' if > you're curious) > > However, I just noticed that > > http://git.haskell.org/ghc.git/commitdiff/322810e32cb18d7749e255937437ff2ef99dca3f > > landed recently, which did change a significant amount of code, but at > the same time the author looks like a pseudonym to me (and apologies if > I'm wrong). > > Other important projects such as Linux or Samba, just to name two > examples, reject contributions w/o a clearly stated origin, and > explicitly reject anonymous/pseudonym contributions (as part of their > "Developer's Certificate of Origin" policy[1] which involves a bit more > than merely stating the real name) > > I believe the GHC project should consider setting some reasonable > ground-rules for contributions to be on the safe side in order to avoid > potential copyright (or similiar) issues in the future, as well as > giving confidence to commercial users that precautions are taken to > avoid such issues. > > Comments? > > Cheers, > hvr > > [1]: See http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From nicolas.frisby at gmail.com Thu Oct 30 22:42:17 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 30 Oct 2014 15:42:17 -0700 Subject: Tests with compilation errors In-Reply-To: References: Message-ID: This reply is very informative! Could you put it on the wiki for me to digest at a later date? (Or maybe there's already a consolidated place to find it all on there?) Thanks very much for sharing all of this. 
On Thu, Oct 30, 2014 at 2:19 PM, Austin Seipp wrote: > On Thu, Oct 30, 2014 at 6:48 AM, Gintautas Miliauskas > wrote: > > Going through some validate.sh results, I found compilation errors due to > > missing libraries, like this one: > > > > =====> stm052(normal) 4088 of 4108 [0, 21, 0] > > cd ../../libraries/stm/tests && > > 'C:/msys64/home/Gintas/ghc/bindisttest/install dir/bin/ghc.exe' > > -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output > -no-user-package-db > > -rtsopt > > s -fno-warn-tabs -fno-ghci-history -o stm052 stm052.hs -package stm > >>stm052.comp.stderr 2>&1 > > Compile failed (status 256) errors were: > > > > stm052.hs:10:8: > > Could not find module ?System.Random? > > Use -v to see a list of the files searched for. > > > > I was surprised to see that these are not listed in the test summary at > the > > end of the test run, but only counted towards the "X had missing > libraries" > > row. That setup makes it really easy to miss them, and I can't think of a > > good reason to sweep such tests under the rug; a broken test is a failing > > test. > > Actually, these tests aren't broken in the way you think :) It's a bit > long-winded to explain... > > Basically, GHC can, if you let it, build extra dependencies in its > build process, one of which is the 'random' library. 'random' was not > ever a true requirement to build GHC (aka a 'bootlib' as we call > them). So why is this test here? > > Because 'random' was actually a dependency of the Data Parallel > Haskell package, and until not too long ago (earlier this year), > `./validate` built and compiled DPH - with all its dependencies; > random, vector, primitive - by default. This actually adds a pretty > noticeable time to the build (you are compiling 5-8 more libraries > after all), and at the time, DPH was also not ready for the > Applicative-Monad patch. So we turned it off, as well as the > dependencies. > > Additionally, GHC does have some 'extra' libraries which you can > optionally build during the build process, but which are turned off by > default. Originally this was because the weirdo './sync-all' script > used to not need everything, and 'stm' was a library that wasn't > cloned by default. > > Now that we've submoduleified everything though, these tests and the > extra libraries could be built by default. Which we could certainly > do. > > > How about at least listing such failed tests in the list of failed tests > of > > the end? > > I'd probably be OK with this. > > > At least in this case the error does not seem to be due to some missing > > external dependencies (which probably would not be a great idea > anyway...). > > The test does pass if I remove the "-no-user-package-db" argument. What > was > > the intention here? Does packaging work somehow differently on Linux? > (I'm > > currently testing on Windows.) > > I'm just guessing but, I imagine you really don't want to remove > '-no-user-package-db' at all, for any platform, otherwise Weird Things > Might Happen, I'd assume. > > The TL;DR here is that when you build a copy of GHC and all the > libraries, it actually *does* register the built packages for the > compiler... this always happens, *even if you do not install it*. The > primary 'global' package DB just sits in tree instead, under > ./inplace. > > When you run ./validate, what happens is that after the build, we > actually create a binary distribution and then test *that* compiler > instead, as you can see (obviously for a good reason - broken bindists > would be bad). 
The binary distribution obviously has its own set of > binary packages it came with; those are the packages you built into it > after all. The reason we tell GHC to ignore the user package db here > is precisely because we *do not* want to pick it up! We only want to > test the binary distribution with the packages *it* has. > > Now you might say, well, Austin, the version numbers are different! > How would it pick that up? Not always... What if I built a copy of GHC > HEAD today, then built something with it using Cabal? Then that will > install into my user package database. Now I go back to my GHC tree > and hack away _on the same day_ and run './validate'... the version > number hasn't changed *at all* because it's date based, meaning the > binary distribution could certainly pick up the previously installed > libraries, which I installed via the older compiler. But I don't want > that! I only want to run those tests with the compiler I'm validating > *now*. > > I imagine the reason you see this test pass if you remove this > argument is precisely for this reason: it doesn't fail because it's > picking up a package database in your existing environment. But that's > really, really not what you want (I'd be surprised if it worked and > didn't result in some horrible error or crash). > > > On a related note, how about separating test failures from failing > > performance tests ("stat too good" / "stat not good enough")? The latter > are > > important, but they seem to be much more prone to fail without good > reason. > > Perhaps do some color coding of the test runner output? That would also > > help. > > I also think this is a good idea. > > > -- > > Gintautas Miliauskas > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gintautas.miliauskas at gmail.com Thu Oct 30 23:06:50 2014 From: gintautas.miliauskas at gmail.com (Gintautas Miliauskas) Date: Fri, 31 Oct 2014 00:06:50 +0100 Subject: Tests with compilation errors In-Reply-To: References: Message-ID: > > On a related note, how about separating test failures from failing > > performance tests ("stat too good" / "stat not good enough")? The latter > are > > important, but they seem to be much more prone to fail without good > reason. > > I also think this is a good idea. > https://phabricator.haskell.org/D406 -- Gintautas Miliauskas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Thu Oct 30 23:34:00 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 31 Oct 2014 00:34:00 +0100 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <5452B4D1.4080507@isaac.cedarswampstudios.org> (Isaac Dupree's message of "Thu, 30 Oct 2014 17:59:45 -0400") References: <87egtq6kmm.fsf@gmail.com> <5452B4D1.4080507@isaac.cedarswampstudios.org> Message-ID: <8761f16sk7.fsf@gmail.com> On 2014-10-30 at 22:59:45 +0100, Isaac Dupree wrote: > There are good reasons not to require people's "real" name to participate: > > http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F > > Simon PJ often advocates to know people's name as part of creating a > friendly community. There are good things about this. It also helps > exclude people with less privilege, whom we have few enough of already, > if it is a policy. > > I like most things about "Developer's Certificate of Origin", though. However, if we want to adopt the DCO[1] (as used by Linux Kernel development) as a good-faith (and yet light-weight) attempt to track the origin/accountability of contributions it relies on real names to know who is actually making that assertion. Having the DCO signed off by an obvious pseudonym would defeat the whole point of the DCO imho. Cheers, hvr [1]: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n358 From carter.schonwald at gmail.com Fri Oct 31 02:34:21 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 30 Oct 2014 22:34:21 -0400 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <8761f16sk7.fsf@gmail.com> References: <87egtq6kmm.fsf@gmail.com> <5452B4D1.4080507@isaac.cedarswampstudios.org> <8761f16sk7.fsf@gmail.com> Message-ID: I agree with herbert, and one solution would be to ask those people who which to remain pseudonymous to have a named person who's agreed to be their proxy co-sign the patch or whatever. That i think accomplishes that same goal :) On Thu, Oct 30, 2014 at 7:34 PM, Herbert Valerio Riedel wrote: > On 2014-10-30 at 22:59:45 +0100, Isaac Dupree wrote: > > There are good reasons not to require people's "real" name to > participate: > > > > > http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F > > > > Simon PJ often advocates to know people's name as part of creating a > > friendly community. There are good things about this. It also helps > > exclude people with less privilege, whom we have few enough of already, > > if it is a policy. > > > > I like most things about "Developer's Certificate of Origin", though. > > However, if we want to adopt the DCO[1] (as used by Linux Kernel > development) as a good-faith (and yet light-weight) attempt to track the > origin/accountability of contributions it relies on real names to know > who is actually making that assertion. Having the DCO signed off by an > obvious pseudonym would defeat the whole point of the DCO imho. > > Cheers, > hvr > > [1]: > http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/SubmittingPatches#n358 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From the.dead.shall.rise at gmail.com Fri Oct 31 03:44:05 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Fri, 31 Oct 2014 04:44:05 +0100 Subject: cabal sdist trouble with GHC from head In-Reply-To: References: Message-ID: Hi, On 30 October 2014 21:30, Sven Panne wrote: > 2014-10-30 17:20 GMT+01:00 Austin Seipp : >> [...] So this just means that Cabal isn't necessarily *future compatible* >> with future GHCs - they may change the package format, etc. But it is >> backwards compatible with existing ones. > > OK, that's good to know. To be sure, I've just tested Cabal head + GHC > 7.8.3, and it works. But as I've mentioned already, there seems to be > *no* Cabal version which works with GHC head: > https://travis-ci.org/haskell-opengl/StateVar/builds/39533448 Is this > known to the Cabal people? Yes, I know that the Cabal test suite wasn't passing on GHC HEAD for some time. Re: the next release - I think it'll be out at the same time with GHC 7.10.1. From sophie at traumapony.org Fri Oct 31 05:18:08 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Fri, 31 Oct 2014 15:18:08 +1000 Subject: Proposal: Improving the LLVM backend by packaging it In-Reply-To: <20141027092511.149b841e@sf> References: <20141027092511.149b841e@sf> Message-ID: If this does happen, it'd probably make sense to use this as a chance to refactor out the LLVM bits and use the llvm-general package. llvm-general seems to only depend on base libraries (apart from parsec, which seems to only be used for parsing data layout formats; it could probably be disabled with a compiler flag if we construct the data layout structures directly; see https://github.com/bscarlet/llvm-general/blob/5f266db5ad8015f7d79374684b083ffdeed3c245/llvm-general-pure/src/LLVM/General/DataLayout.hs). It seems a more principled way than what is currently implemented, and work done to improve that library via ghc would also help every other user of the library, and visa versa. On 27 October 2014 19:25, Sergei Trofimovich wrote: > On Fri, 24 Oct 2014 18:52:53 -0500 > Austin Seipp wrote: > > > I won't repeat what's on the wiki page too much, but the TL;DR version > > is: we should start packaging a version of LLVM, and shipping it with > > e.g. binary distributions of GHC. It's just a lot better for everyone. > > > > I know we're normally fairly hesitant about things like this (shipping > > external dependencies), but I think it's the only sane thing to do > > here, and the situation is fairly unique in that it's not actually > > very complicated to implement or support, I think. > > That makes a lot of sense! Gentoo allows user > upgrade llvm and ghc independently, which makes > syncing harder. Thus Gentoo does not care much > about llvm support in ghc. > > -- > > Sergei > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sophie at traumapony.org Fri Oct 31 06:33:36 2014 From: sophie at traumapony.org (Sophie Taylor) Date: Fri, 31 Oct 2014 16:33:36 +1000 Subject: Proposal: Improving the LLVM backend by packaging it In-Reply-To: References: <20141027092511.149b841e@sf> Message-ID: Also, it could be a chance to make it easier to experiment with things like polly: http://polly.llvm.org/ On 31 October 2014 15:18, Sophie Taylor wrote: > If this does happen, it'd probably make sense to use this as a chance to > refactor out the LLVM bits and use the llvm-general package. llvm-general > seems to only depend on base libraries (apart from parsec, which seems to > only be used for parsing data layout formats; it could probably be disabled > with a compiler flag if we construct the data layout structures directly; > see > https://github.com/bscarlet/llvm-general/blob/5f266db5ad8015f7d79374684b083ffdeed3c245/llvm-general-pure/src/LLVM/General/DataLayout.hs). > It seems a more principled way than what is currently implemented, and work > done to improve that library via ghc would also help every other user of > the library, and visa versa. > > On 27 October 2014 19:25, Sergei Trofimovich wrote: > >> On Fri, 24 Oct 2014 18:52:53 -0500 >> Austin Seipp wrote: >> >> > I won't repeat what's on the wiki page too much, but the TL;DR version >> > is: we should start packaging a version of LLVM, and shipping it with >> > e.g. binary distributions of GHC. It's just a lot better for everyone. >> > >> > I know we're normally fairly hesitant about things like this (shipping >> > external dependencies), but I think it's the only sane thing to do >> > here, and the situation is fairly unique in that it's not actually >> > very complicated to implement or support, I think. >> >> That makes a lot of sense! Gentoo allows user >> upgrade llvm and ghc independently, which makes >> syncing harder. Thus Gentoo does not care much >> about llvm support in ghc. >> >> -- >> >> Sergei >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Oct 31 08:10:01 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 08:10:01 +0000 Subject: package hashes In-Reply-To: <1414698533-sup-323@sabre> References: <618BE556AADD624C9C918AA5D5911BEF3F384849@DB3PRD3001MB020.064d.mgd.msft.net> <1414698533-sup-323@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3854CF@DB3PRD3001MB020.064d.mgd.msft.net> No, I did not. You can try it yourself! | -----Original Message----- | From: Edward Z. Yang [mailto:ezyang at cs.stanford.edu] | Sent: 30 October 2014 19:50 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: package hashes | | Hmm. I'm not sure, but the right thing might just be to accept this | new output. But I guess I'm a bit confused, since you probably didn't | change this test-case at all, did you? | | Edward | | Excerpts from Simon Peyton Jones's message of 2014-10-30 08:26:45 - | 0700: | > Edward | > | > On branch wip/new-flatten-skolems-Oct14, I'm getting this test | failure repeatably on test safePkgO1. I'm pretty sure I have done | nothing to mess with package hashes! | > | > Any ideas? 
| > | > Simon | > | > =====> safePkg01(normal) 108 of 120 [0, 0, 0] | > cd ./check/pkg01 && $MAKE -s --no-print-directory safePkg01 | VANILLA=--enable-library-vanilla PROF=--disable-library-profiling | DYN=--enable-shared safePkg01.run.stdout | 2>safePkg01.run.stderr | > Actual stdout output differs from expected: | > --- ./check/pkg01/safePkg01.stdout 2014-10-29 15:09:16.000000000 | +0000 | > +++ ./check/pkg01/safePkg01.run.stdout 2014-10-30 | 15:25:17.799094762 +0000 | > @@ -29,17 +29,17 @@ | > require own pkg trusted: True | > M_SafePkg6 | > -package dependencies: array-0.5.0.1 at array_ | > +package dependencies: array-0.5.0.1 base-4.8.0.0* | > trusted: trustworthy | > require own pkg trusted: False | > M_SafePkg7 | > -package dependencies: array-0.5.0.1 at array_ | > +package dependencies: array-0.5.0.1 base-4.8.0.0* | > trusted: safe | > require own pkg trusted: False | > M_SafePkg8 | > -package dependencies: array-0.5.0.1 at array_ | > +package dependencies: array-0.5.0.1 base-4.8.0.0 | > trusted: trustworthy | > require own pkg trusted: False From simonpj at microsoft.com Fri Oct 31 08:25:42 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 08:25:42 +0000 Subject: Tests with compilation errors In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F38550D@DB3PRD3001MB020.064d.mgd.msft.net> Nick, Where in the wiki would you have looked for it? This isn?t at trick question. It?s quite hard to know where to record info! S From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Nicolas Frisby Sent: 30 October 2014 22:42 To: Austin Seipp Cc: ghc-devs at haskell.org Subject: Re: Tests with compilation errors This reply is very informative! Could you put it on the wiki for me to digest at a later date? (Or maybe there's already a consolidated place to find it all on there?) Thanks very much for sharing all of this. On Thu, Oct 30, 2014 at 2:19 PM, Austin Seipp > wrote: On Thu, Oct 30, 2014 at 6:48 AM, Gintautas Miliauskas > wrote: > Going through some validate.sh results, I found compilation errors due to > missing libraries, like this one: > > =====> stm052(normal) 4088 of 4108 [0, 21, 0] > cd ../../libraries/stm/tests && > 'C:/msys64/home/Gintas/ghc/bindisttest/install dir/bin/ghc.exe' > -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db > -rtsopt > s -fno-warn-tabs -fno-ghci-history -o stm052 stm052.hs -package stm >>stm052.comp.stderr 2>&1 > Compile failed (status 256) errors were: > > stm052.hs:10:8: > Could not find module ?System.Random? > Use -v to see a list of the files searched for. > > I was surprised to see that these are not listed in the test summary at the > end of the test run, but only counted towards the "X had missing libraries" > row. That setup makes it really easy to miss them, and I can't think of a > good reason to sweep such tests under the rug; a broken test is a failing > test. Actually, these tests aren't broken in the way you think :) It's a bit long-winded to explain... Basically, GHC can, if you let it, build extra dependencies in its build process, one of which is the 'random' library. 'random' was not ever a true requirement to build GHC (aka a 'bootlib' as we call them). So why is this test here? Because 'random' was actually a dependency of the Data Parallel Haskell package, and until not too long ago (earlier this year), `./validate` built and compiled DPH - with all its dependencies; random, vector, primitive - by default. 
This actually adds a pretty noticeable time to the build (you are compiling 5-8 more libraries after all), and at the time, DPH was also not ready for the Applicative-Monad patch. So we turned it off, as well as the dependencies. Additionally, GHC does have some 'extra' libraries which you can optionally build during the build process, but which are turned off by default. Originally this was because the weirdo './sync-all' script used to not need everything, and 'stm' was a library that wasn't cloned by default. Now that we've submoduleified everything though, these tests and the extra libraries could be built by default. Which we could certainly do. > How about at least listing such failed tests in the list of failed tests of > the end? I'd probably be OK with this. > At least in this case the error does not seem to be due to some missing > external dependencies (which probably would not be a great idea anyway...). > The test does pass if I remove the "-no-user-package-db" argument. What was > the intention here? Does packaging work somehow differently on Linux? (I'm > currently testing on Windows.) I'm just guessing but, I imagine you really don't want to remove '-no-user-package-db' at all, for any platform, otherwise Weird Things Might Happen, I'd assume. The TL;DR here is that when you build a copy of GHC and all the libraries, it actually *does* register the built packages for the compiler... this always happens, *even if you do not install it*. The primary 'global' package DB just sits in tree instead, under ./inplace. When you run ./validate, what happens is that after the build, we actually create a binary distribution and then test *that* compiler instead, as you can see (obviously for a good reason - broken bindists would be bad). The binary distribution obviously has its own set of binary packages it came with; those are the packages you built into it after all. The reason we tell GHC to ignore the user package db here is precisely because we *do not* want to pick it up! We only want to test the binary distribution with the packages *it* has. Now you might say, well, Austin, the version numbers are different! How would it pick that up? Not always... What if I built a copy of GHC HEAD today, then built something with it using Cabal? Then that will install into my user package database. Now I go back to my GHC tree and hack away _on the same day_ and run './validate'... the version number hasn't changed *at all* because it's date based, meaning the binary distribution could certainly pick up the previously installed libraries, which I installed via the older compiler. But I don't want that! I only want to run those tests with the compiler I'm validating *now*. I imagine the reason you see this test pass if you remove this argument is precisely for this reason: it doesn't fail because it's picking up a package database in your existing environment. But that's really, really not what you want (I'd be surprised if it worked and didn't result in some horrible error or crash). > On a related note, how about separating test failures from failing > performance tests ("stat too good" / "stat not good enough")? The latter are > important, but they seem to be much more prone to fail without good reason. > Perhaps do some color coding of the test runner output? That would also > help. I also think this is a good idea. 
> -- > Gintautas Miliauskas > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Oct 31 08:48:23 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 31 Oct 2014 10:48:23 +0200 Subject: Understanding core2core optimisation pipeline In-Reply-To: References: Message-ID: <201410310948.23737.jan.stolarek@p.lodz.pl> Thanks guys. I've read the wiki page about demand analysis and things start to make a bit more sense now. > If you're looking at super-recent code, that could be Joachim Breitner's > work. He's exposed the one-shot stuff at the Haskell level with the > experimental magic oneShot function I'm looking at recent HEAD. But I'm not sure if we're thinking about the same thing. If you mean Joachim's work on call arity then I believe this is only realted to callArityInfo field of IdInfo. I'm interested in the oneShotInfo field of IdInfo. >I recall mention of interface files. Jan, would your question #2 be > addressed by that information being imported from interface files? No, I don't think that's the answer. I mean I imagine that when we import things from interface files we get this information. But I'm interested in how this information is generated when we compile things in the current module. One more question about reading the demand analysis results: Str=DmdType Here the argument is demanded once. But what if I have: Str=DmdType Does the lack of `1*` imply that the argument is used many times? Janek From mail at joachim-breitner.de Fri Oct 31 09:11:18 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 31 Oct 2014 10:11:18 +0100 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201410310948.23737.jan.stolarek@p.lodz.pl> References: <201410310948.23737.jan.stolarek@p.lodz.pl> Message-ID: <1414746678.1384.2.camel@joachim-breitner.de> Hi, Am Freitag, den 31.10.2014, 10:48 +0200 schrieb Jan Stolarek: > One more question about reading the demand analysis results: > > Str=DmdType > > Here the argument is demanded once. But what if I have: > > Str=DmdType > > Does the lack of `1*` imply that the argument is used many times? no; these things tend to be always an approximation in one direction. So you either know that it is used at most once, or both is possible. Nothing goes wrong when treating something that is used once as if it is used multiple times. What would be the value of knowing that it is definitely used multiple times? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Fri Oct 31 09:19:17 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 09:19:17 +0000 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201410310948.23737.jan.stolarek@p.lodz.pl> References: <201410310948.23737.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3855DE@DB3PRD3001MB020.064d.mgd.msft.net> | One more question about reading the demand analysis results: | | Str=DmdType | | Here the argument is demanded once. But what if I have: | | Str=DmdType | | Does the lack of `1*` imply that the argument is used many times? Well, *may* be used many times. S From jan.stolarek at p.lodz.pl Fri Oct 31 10:02:52 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 31 Oct 2014 12:02:52 +0200 Subject: Understanding core2core optimisation pipeline In-Reply-To: <1414746678.1384.2.camel@joachim-breitner.de> References: <201410310948.23737.jan.stolarek@p.lodz.pl> <1414746678.1384.2.camel@joachim-breitner.de> Message-ID: <201410311102.52440.jan.stolarek@p.lodz.pl> > What would be the value of knowing that it is definitely used multiple > times? I'm working on core-to-core optimisation that improves code when something is used multiple times. I fear that in practice it might not be beneficial to transform thing that are used just once. Janek From alexander at plaimi.net Fri Oct 31 10:31:23 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Fri, 31 Oct 2014 11:31:23 +0100 Subject: RFC: Properly stated origin of code contributions In-Reply-To: References: <20141030171357.GC2465@singpolyma-liberty> <1414689409195.942a3b77@Nodemailer> Message-ID: <545364FB.10602@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 30/10/14 21:55, Austin Seipp wrote: > Can we get a standardized copyright/comment header across all our > files? It seems as if every single file in the compiler (and RTS) > has different header text, mentioning different people or groups, > some from 10 years ago or more and others just recently added. This > is somewhat related to this RFC but also somewhat not, I feel. > > Ideally it would be nice if we could have, say, an AUTHORS.txt > file containing the names of all those people who have committed to > GHC (essentially like 'git shortlog -sn' shows), which we would ask > users to add their name into, and then if we could standardize all > the headers to follow a known convention, and give boilerplate for > people to copy. I posted a thread about this earlier, and the only response I got was that it doesn't matter... For what it's worth I agree with you. 
- -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlRTZPsACgkQRtClrXBQc7U5oAD/SORgSMrSBML4ULjyG5HdnKEx Qd5vubaiRPVTitRaG50A/AwOm7SHquSydUIVcLtm3qlvbHBO2Z0FnecT7rAQKTQY =MoV8 -----END PGP SIGNATURE----- From simonpj at microsoft.com Fri Oct 31 10:42:34 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 10:42:34 +0000 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201410301515.52137.jan.stolarek@p.lodz.pl> References: <201410301515.52137.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3856A4@DB3PRD3001MB020.064d.mgd.msft.net> Jan As people respond on this thread, would you be willing to capture what you learn in a wiki page in the Commentary? That way, your successor would have at least those questions answered right away. Somewhere under here https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler would be good. | 1. I'm still a bit confused about terminology. Demand analysis, | strictness analysis, cardinality analysis - do these three terms mean | exactly the same thing? If no, then what are the differences? Strictness and demand analysis are the same. Cardinality analysis, strictness/demand analysis, and CPR analysis are all different analyses, but they are all carried out by the same piece of code, namely DmdAnal. | 2. First pass of full laziness is followed by floating in. At that | stage we have not yet run the demand analysis and yet the code that | does the floating-in checks whether a binder is one-shot | (FloatIn.okToFloatInside called by FloatIn.fiExpr AnnLam case). This | suggests that cardinality analysis is done earlier (but when?) and | that demand analysis is not the same thing as cardinality analysis. Imported functions (eg foldr or build) have strictness and cardinality analysis info in their interface file signatures. That can in turn drive the attachment of one-shot info to binders. See one_shots = argsOneShots (idStrictness fun) n_val_args -- See Note [Use one-shot info] line 1345 of OccurAnal. | 3. Does demand analyser perform any transformations? Or does it only | annotate Core with demand information that can be used by subsequent | passes? The demand analyser simply annotates Core. It is immediately followed by the worker/wrapper transformation, which uses the strictness annotations to transform Core using the w/w idea. | 4. BasicTypes module defines: | | data OneShotInfo = NoOneShotInfo -- ^ No information | | ProbOneShot -- ^ The lambda is probably applied | at most once | | OneShotLam -- ^ The lambda is applied at most | once. | | Do I understand correctly that `NoOneShotInfo` really means no | information, ie. a binding annotated with this might in fact be one | shot? Correct | If so, then do we have means of saying that a binding is | certainly not a one-shot binding? We do not. | 5. What is the purpose of SetLevels.lvlMFE function? It decides what level to float an expression out to.
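To make the worker/wrapper step mentioned in the answer to question 3 a little more concrete, here is a hand-written, source-level sketch. All names below are invented, and the real transformation happens on Core (using unboxed types and the recorded demand signatures), not on Haskell source; this is only meant to illustrate the idea.

module WwDemo where

data T = MkT !Int !Int

-- Original function: strict in its argument and in both fields.
g :: T -> Int
g (MkT a b) = a + b

-- Roughly what worker/wrapper produces for it: a small, inlinable
-- wrapper that takes the constructor apart, plus a worker that does the
-- real work on the bare fields.  (GHC would go further and pass unboxed
-- Int# values to the worker.)
gWrapper :: T -> Int
gWrapper (MkT a b) = gWorker a b
{-# INLINE gWrapper #-}

gWorker :: Int -> Int -> Int
gWorker a b = a + b

The wrapper is what gets inlined at call sites, which is how the strictness information recorded by the demand analyser eventually turns into actual unboxing.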
Simon | From hvriedel at gmail.com Fri Oct 31 10:45:28 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 31 Oct 2014 11:45:28 +0100 Subject: RFC: Properly stated origin of code contributions In-Reply-To: (Austin Seipp's message of "Thu, 30 Oct 2014 15:55:14 -0500") References: <20141030171357.GC2465@singpolyma-liberty> <1414689409195.942a3b77@Nodemailer> Message-ID: <87k33g4iwn.fsf@gmail.com> Hi Austin, On 2014-10-30 at 21:55:14 +0100, Austin Seipp wrote: [...] > Can we get a standardized copyright/comment header across all our > files? It seems as if every single file in the compiler (and RTS) has > different header text, mentioning different people or groups, some > from 10 years ago or more and others just recently added. This is > somewhat related to this RFC but also somewhat not, I feel. [...] Could you draft up a standard header as a suggestion somewhere, maybe on the Wiki? The current situation is suboptimal, as it's unclear where the threshold for adding yourself as an author to a module header is (whitespace/indentation cleanups, fixing/writing docs, removing lines, adding a 5-line function in a 500 line module, ...?), and it's a bit unfair to those that have contributed far more to a module but haven't bothered to add themselves to the module header. So I'd welcome a standard approach. Would there be a single AUTHORS file for the code in ghc.git, or multiple ones (one for the compiler proper, one for base, ghc-prim, template-haskell, integer-*, ...?) Cheers, hvr From simonpj at microsoft.com Fri Oct 31 11:11:25 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 11:11:25 +0000 Subject: git question Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> Friends, I am working on wip/new-flatten-skolems-Oct14. I have pushed lots of patches up to the main repo. Now I want to rebase to clean up. Can I just do a local rebase and then git push? Nothing special about the push? I know that will confuse anyone who is pulling from that branch, but I've warned them! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.trstenjak at gmail.com Fri Oct 31 11:22:54 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Fri, 31 Oct 2014 12:22:54 +0100 Subject: git question In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20141031112254.GA7862@machine> Hi Simon, > Now I want to rebase to clean up. Can I just do a local rebase and then git > push? Nothing special about the push? You will most likely need to add a '--force' to your push, to overwrite the branch in the remote repo. Greetings, Daniel
> I know that will confuse anyone who is pulling from that branch, but I've > warned them! If they do 'git fetch', they can compare the old and the new version of the branch and mix and match at will. From simonpj at microsoft.com Fri Oct 31 11:28:10 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 11:28:10 +0000 Subject: git question In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F385740@DB3PRD3001MB020.064d.mgd.msft.net> | You need 'push -f' to force overwriting the old version of your | branch. Please make sure you are not forcefully pushing while on | master branch, though. ;) (I seem to remember 'push -f' can be blocked Right! I'm on branch wip/new-flatten-skolems-Oct14, so git push --force should push just that branch right? If I want to be super-safe, and say "push only this branch" would I say git push --force HEAD or git push --force wip/new-flatten-skolems-Oct14 or something like that? S | -----Original Message----- | From: Mikolaj Konarski [mailto:mikolaj at well-typed.com] | Sent: 31 October 2014 11:24 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: git question | | > Now I want to rebase to clean up. Can I just do a local rebase and | > then git push? Nothing special about the push? | | You need 'push -f' to force overwriting the old version of your | branch. Please make sure you are not forcefully pushing while on | master branch, though. ;) (I seem to remember 'push -f' can be blocked | on master, but I don't know if it is in our repo.) | | > I know that will confuse anyone who is pulling from that branch, but | > I've warned them! | | If they do 'git fetch', they can compare the old and the new version | of the branch and mix and match at will. From daniel.trstenjak at gmail.com Fri Oct 31 11:37:14 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Fri, 31 Oct 2014 12:37:14 +0100 Subject: git question In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F385740@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F385740@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20141031113714.GA8183@machine> > Right! I'm on branch wip/new-flatten-skolems-Oct14, so > git push --force > should push just that branch right? > > If I want to be super-safe, and say "push only this branch" would I say > > git push --force HEAD > or > git push --force wip/new-flatten-skolems-Oct14 > or something like that? To ensure that you're only operating on your current branch you can add to your '~/.gitconfig': [push] default = simple Newer versions of git have this now as their default behaviour, but I'm not quite sure which git version was the first one. Greetings, Daniel
> If I want to be super-safe, and say "push only this branch" would I say > > git push --force HEAD > or > git push --force wip/new-flatten-skolems-Oct14 > or something like that? The latter seems safest. Probably git push --force origin wip/new-flatten-skolems-Oct14 would work, depending on how your remotes are named. Again, --dry-run. :) From daniel.trstenjak at gmail.com Fri Oct 31 11:38:57 2014 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Fri, 31 Oct 2014 12:38:57 +0100 Subject: git question In-Reply-To: <20141031113714.GA8183@machine> References: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F385740@DB3PRD3001MB020.064d.mgd.msft.net> <20141031113714.GA8183@machine> Message-ID: <20141031113857.GB8183@machine> > Newer versions of git have this now as their default behaviour, but > I'm not quite sure which git version was the first one. Ok, it's starting with git 2.0. Greetings, Daniel From mikolaj at well-typed.com Fri Oct 31 11:54:11 2014 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Fri, 31 Oct 2014 12:54:11 +0100 Subject: RFC: Properly stated origin of code contributions In-Reply-To: <87k33g4iwn.fsf@gmail.com> References: <20141030171357.GC2465@singpolyma-liberty> <1414689409195.942a3b77@Nodemailer> <87k33g4iwn.fsf@gmail.com> Message-ID: > The current situation is suboptimal, as it's unclear where the threshold > for adding yourself as an author to a module header is > (whitespace/indentation cleanups, fixing/writing docs, removing lines, > adding a 5-line function in a 500 line module, ...?), and it's a bit > unfair to those that have contributed far more to a module but haven't > bothered to add themselves to the module header. For these reasons, and given the power of git and Phab, I think author annotations in module headers no longer fulfill their original purpose particularly well. Neither the purpose of stating who is responsible for/interested in a code fragment, nor the purpose of giving people credit. Could we get rid of them? The annotations already in the code would stay in git history and any future authors are recorded in git and Phab messages. We just need to make sure to mention the original authors in git commit messages if, for whatever reason, the original committer metadata in git is lost (or to manually restore the lost info via git options). BTW, this relates to the pseudonymous contributions discussion. The git/Phab history only helps to the extent that people identify themselves. One less place with one less version of contributor names/pseudonyms should actually make it easier to account for all contributors on the wiki, in .cabal, etc., for the purpose of giving public credit, for legal reasons, etc. Best, Mikolaj From jan.stolarek at p.lodz.pl Fri Oct 31 12:05:20 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 31 Oct 2014 14:05:20 +0200 Subject: Understanding core2core optimisation pipeline In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF3F3856A4@DB3PRD3001MB020.064d.mgd.msft.net> References: <201410301515.52137.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF3F3856A4@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201410311305.20719.jan.stolarek@p.lodz.pl> Thank you for the answers. > As people respond on this thread, would you be willing to capture what you > learn in a wiki page in the Commentary?
I already created such a wiki page some time ago: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Core2CorePipeline But, since there are things I don't yet understand, this page is still incomplete. > Imported functions (eg foldr or build) have strictness and cardinality > analysis info in their interface file signatures. That can in turn drive > the attachment of one-shot info to binders. See one_shots = argsOneShots > (idStrictness fun) n_val_args > -- See Note [Use one-shot info] > line 1345 of OccurAnal. And if we don't import anything then we're assuming NoOneShotInfo, which means we don't float in past lambdas? Janek From simonpj at microsoft.com Fri Oct 31 12:19:30 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 31 Oct 2014 12:19:30 +0000 Subject: Understanding core2core optimisation pipeline In-Reply-To: <201410311305.20719.jan.stolarek@p.lodz.pl> References: <201410301515.52137.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF3F3856A4@DB3PRD3001MB020.064d.mgd.msft.net> <201410311305.20719.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF3F385819@DB3PRD3001MB020.064d.mgd.msft.net> | And if we don't import anything then we're assuming NoOneShotInfo, | which means we don't float in past lambdas? correct From mikolaj at well-typed.com Fri Oct 31 14:22:32 2014 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Fri, 31 Oct 2014 15:22:32 +0100 Subject: git question In-Reply-To: <20141031113714.GA8183@machine> References: <618BE556AADD624C9C918AA5D5911BEF3F3856FC@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF3F385740@DB3PRD3001MB020.064d.mgd.msft.net> <20141031113714.GA8183@machine> Message-ID: On Fri, Oct 31, 2014 at 12:37 PM, Daniel Trstenjak wrote: > To ensure that you're only operating on your current branch you can > add to your '~/.gitconfig': > > [push] > default = simple Oh, useful. > > (I seem to remember 'push -f' can be blocked on master, > > but I don't know if it is in our repo.) Herbert tells me that actually 'push -f' (aka "non-fast-forwards") is forbidden in the GHC repo everywhere except wip/, as part of a configuration that also restricts touching named GHC version branches, handles mirroring, etc. So it's harder to mess up a repo than in ye olde days, though I'm sure one can still produce lots of colourful git art (to be watched in the 'gitk' art viewer) by repeatedly merging forward and backward two branches, without any rebasing. From george.colpitts at gmail.com Fri Oct 31 19:45:36 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 31 Oct 2014 16:45:36 -0300 Subject: cabal error: http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc "Error HTTP code: 502" Message-ID: When I type cabal update I get http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc "Error HTTP code: 502" Anybody else seeing this? Thanks George -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikolaj at well-typed.com Fri Oct 31 19:52:35 2014 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Fri, 31 Oct 2014 20:52:35 +0100 Subject: cabal error: http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc "Error HTTP code: 502" In-Reply-To: References: Message-ID: Yes, unfortunately, hackage was down. See https://status.haskell.org/ I think it's being brought up right now by our fearless volunteer infrastructure rapid response team (they are drafting!). Please try again.
On Fri, Oct 31, 2014 at 8:45 PM, George Colpitts wrote: > When I type > > cabal update > > I get > > http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc > "Error > HTTP code: 502" > > > Anybody else seeing this? > > > Thanks > George > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From george.colpitts at gmail.com Fri Oct 31 19:59:09 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 31 Oct 2014 16:59:09 -0300 Subject: cabal error: http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc "Error HTTP code: 502" In-Reply-To: References: Message-ID: Thanks for the quick response, yes it's up. Next time I'll check the status page, which I was unaware of. Thanks again On Fri, Oct 31, 2014 at 4:52 PM, Mikolaj Konarski wrote: > Yes, unfortunately, hackage was down. See > > https://status.haskell.org/ > > I think it's being brought up right now by our fearless > volunteer infrastructure rapid response team (they are drafting!). > Please try again. > > On Fri, Oct 31, 2014 at 8:45 PM, George Colpitts > wrote: > > When I type > > > > cabal update > > > > I get > > > > http://hackage.haskell.org/packages/archive/00-index.tar.gz : ErrorMisc > > "Error > > HTTP code: 502" > > > > > > Anybody else seeing this? > > > > > > Thanks > > George > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Fri Oct 31 23:58:00 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 31 Oct 2014 18:58:00 -0500 Subject: GHC Weekly News - 2014/10/31 (Halloween Edition) Message-ID: Hello *, Welcome to the GHC Weekly News. And it's just in time before you go out and eat lots of candy and food. * David Feuer and Joachim Breitner spent some time this past week talking about more optimizations for fusing Haskell code and for creating better consumers and producers. This work includes optimizations of "one-shot lambdas" (lambdas used at most once) and Call Arity, which was implemented by Joachim at Microsoft Research. The thread is here - https://www.haskell.org/pipermail/ghc-devs/2014-October/006901.html The current situation is that Call Arity and one-shot analysis tend to have good combined results for exploiting more fusion opportunities, but sometimes these backfire. As a result, Joachim and David have worked on improving the situation - particularly by letting programmers help with a new `oneShot` primitive (in Phab:D392 & Phab:D393; a short usage sketch appears at the end of this issue). * Herbert Valerio Riedel opened up a discussion about the origins of code contributions. In short, we'd like to have some stronger ground to stand on in the face of code contributions - the origin of a change and its legal status are a bit nebulous. The thread is here: https://www.haskell.org/pipermail/ghc-devs/2014-October/006959.html Overall, there have been a lot of separate points made, including CLAs (unlikely), "Developer Certificates of Origin" a la the Linux Kernel, and reworking how we mark up header files, and keep track of GHC's long history of authors. If you work on a big project where some of these concerns are real, we'd like to hear what you have to say!
* Gintautas Miliauskas has done some fantastic work for GHC on Windows lately, including fixing tests, improving the build, and making things a lot more reasonable to use. With his work, we hope GHC 7.10 will finally ship an updated MinGW compiler (a long-requested feature), and have a lot of good fixes for Windows. Thank you, Gintautas! * And on that note, the call for Windows developers rages on - it looks like Gintautas, Austin, Simon, and others will be meeting to discuss the best way to tackle our current woes. Are you a Windows user? Please give us input - having input is a crucial part of the decision-making process, so let us know. * Jan Stolarek had a question about core2core - a lot of questions, in fact. What's the difference between demand, strictness, and cardinality analysis? Does the demand analyzer change things? And what's going on in some of the implementation? A good read if you're interested in deep GHC optimization magic: https://www.haskell.org/pipermail/ghc-devs/2014-October/006968.html * Peter Wortmann has put up the new DWARF generation patches for review, in Phab:D396. This is one of the major components we still plan on landing in 7.10, and with a few weeks to spare, it looks like we can make sure it's merged for the STABLE freeze! * There have been a lot of good changes in the tree this past week: Thanks to Michael Orlitzky, we plan on adding doctest examples to more modules in 7.10, and to increase that coverage further. This is *really* important work, but very low-hanging fruit - thanks a ton, Michael! `Data.Bifunctor` is now inside base! (Phab:D336) `atomicModifyIORef'` has been optimized with excellent speedups (as much as 1.7x to 1.4x, depending on the RTS used), thanks to some older work by Patrick Palka (Phab:D315). GHC's internals have been reworked to unwire `Integer` from GHC, leading not only to a code cleanup, but also laying the foundation for further GMP (and non-GMP!) related `Integer` improvements (Phab:D351). David Feuer and Joachim have been relentless in improving fusion opportunities, including the performance of `take`, `isSuffixOf`, and more prelude improvements, spread over nearly half a dozen patches. And this doesn't even include the work on improving `oneShot` or Call Arity! In a slight change to `base` semantics, David Feuer also finally fixed #9236. This is a change that can expose latent bugs in your program (as it did for Haddock), so be sure to test thoroughly with 7.10 (Phab:D327). GHC now has support for a new `__GLASGOW_HASKELL_TH__` macro, primarily useful for testing bootstrap compilers, or compilers which don't support GHCi. And there have been many closed tickets: #9549, #9593, #9720, #9031, #8345, #9439, #9435, #8825, #9006, #9417, #9727, #2104, #9676, #2628, #9510, #9740, #9734, #9367, #9726, #7984, #9230, #9681, #9747, and #9236. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
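For readers curious about the `oneShot` primitive mentioned in the first news item above, here is a minimal usage sketch. It assumes the primitive is exported from GHC.Magic with type oneShot :: (a -> b) -> a -> b, as in the patches under review (Phab:D392/D393); the module and function names in the example itself are otherwise invented.

module OneShotDemo where

import GHC.Magic (oneShot)  -- assumed export location; see Phab:D392/D393

-- Semantically oneShot is just the identity on functions; it only tells
-- the optimiser that the wrapped lambda is applied at most once, and it
-- is the programmer's obligation that this claim is true.  Here it is
-- used the way GHC's own fold/build combinators use it: the accumulator
-- lambda in a foldl-via-foldr definition is promised to be one-shot, so
-- work can be floated inside it without fear of duplication.
sumFrom :: Int -> [Int] -> Int
sumFrom z xs = foldr (\x k -> oneShot (\acc -> k (acc + x))) id xs z

-- e.g. sumFrom 0 [1..10] == 55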